merge #19


Closed
wants to merge 10,000 commits into from

Conversation

omyshell

No description provided.

marcalff and others added 30 commits August 19, 2014 11:20
… 5.0.87

Pre-requisite patch: Get rid of the Statement class.

Move old Statement members to THD and/or Prepared_statement.
Move old Statement backup/restore logic to sql_prepare.cc.
Rewrite Statement_map - only Prepared_statement is stored there.
Remove Query_arena::type(), not needed.

This makes it easier to reason about the copy of query strings
between Prepared_statement and THD, which is the cause of the bug.
WL#7777 Integrate PFS memory instrumentation with InnoDB

Account all memory allocations in InnoDB via PFS using the interface
provided by "WL#3249 PERFORMANCE SCHEMA, Instrument memory usage".

Approved by:	Yasufumi, Annamalai (rb:5845)
replication P_S tables

Added more fields to replication P_S tables.
The following variables, which were part of
SHOW STATUS, have now been added:

SHOW STATUS LIKE 'Slave_last_heartbeat';
SHOW STATUS LIKE 'Slave_received_heartbeats';
SHOW STATUS LIKE 'Slave_heartbeat_period';
SHOW STATUS LIKE 'Slave_retried_transactions';
Post merge fix, reduce memory consumption by default.
…TRUE

We should pass sync=false so that we use the asynchronous aio.

Approved by Sunny over IM.
…TRUE

We should pass sync=false so that we use the asynchronous aio.

Approved by Sunny over IM.
1. Fix the test case, DEADLOCK is a full rollback. The COMMIT following the
   UPDATE is superfluous.

2. Remove the blocking from the trx_t::hit_list if it is rolled back during
   the record lock enqueue phase

3. When the trx_t::state == TRX_STATE_FORCED_ROLLBACK, return DB_FORCED_ABORT
   on COMMIT/ROLLBACK requests.
This is a bug introduced by WL#6711 in the unique_constraint implementation.

When unique_constraint is used, the hash value is used to decide whether
two tuples are equal. However, a hash function cannot guarantee
uniqueness for different tuples, so we need to compare the content when
hash values are equal. While comparing the content, there was an error
in the length comparison in the function cmp_field_value.
Update a forgotten .result file (it needs --big-test to run)
…. It is

incremented when the transaction is started. Before killing a blocking
transaction in the trx_t::hit_list we check if it hasn't been reused to
start a new transaction. We kill the transaction only if the version numbers
match.
  Follow-up checkin. Increasing the timeout so that the test cases pass
  on all machines (including slow ones)
… MYSQLBINLOG

Background:
Some combinations of options for mysqlbinlog are invalid.
Those combinations give various error messages.
Problem:
Some of these error messages were ungrammatical, unnecessarily complex,
or partially wrong.
Fix:
Corrected the messages.
The function has never made an InnoDB redo log checkpoint,
which would involve flushing data pages from the buffer pool
to the file system. It has only ever flushed the redo log buffer
to the redo log files.

The actual InnoDB function call has been changed a few times
since the introduction of InnoDB to MySQL in 2000, but the semantics
never changed.

Approved by Vasil Dimov
After the refactoring of mysql_upgrade, some option descriptions are out of date. The --help option can print default values as well as special options.

Added sorting of options by long name, with --help first. Updated option descriptions. --help now prints special options and default values.
After the refactoring of mysql_upgrade, some option descriptions are out of date. The --help option can print default values as well as special options.

Fixed compilation issue.
GlebShchepa and others added 24 commits August 29, 2014 16:24
If a SELECT query is cached and a session state exists later, then
running the same SELECT query will not send an OK packet with the
session state, because the result set is picked from the cache. For
now, disable deprecate_eof.test when query_cache is ON.
  
Follow-up patch: Fix compile failure in unit tests when compiling
without performance schema. The fix is to remove the PSI_mutex_key
object from being included in the unit tests. This is no
longer needed since the unit test no longer needs to link in
ha_resolve_by_name().
             ESTIMATE ON 32 BIT PLATFORMS
    
The innodb_stats_fetch test failed when running
on 32-bit platforms due to an "off by one" cardinality number
in the result from a query that read from the statistics table
in information schema. This test failure is caused by WL#7339.

The cardinality numbers that are retrieved from information schema
are roughly calculated this way when the table is stored in InnoDB:

1. InnoDB has the number of rows and the cardinality for the 
   index stored in the persistent statistics. In the failing
   case, InnoDB had 1000 as the number of rows and 3 as the 
   cardinality.
2. InnoDB calculates the records per key value and stores this
   in the KEY object. This is calculated as 1000/3 and is thus
   333.333333.... when using the code from WL#7339 (before
   this worklog, the rec_per_key value was 166).
3. When filling data into information schema, we re-calculate
   the cardinality number by using the records per key
   information (in sql_show.cc):

      double records= (show_table->file->stats.records /
                       key->records_per_key(j));
      table->field[9]->store((longlong) records, TRUE);

   in this case we first compute the cardinality to be
 
      records= 1000 / 333.3333... = 3.0000... or 2.9999999

   and then use the cast to get an integer value. On 64-bit
   platforms, the result of this was 3.00000, which was
   cast to 3. On 32-bit platforms, the result was 2.999999,
   which was cast to 2 before inserting it into the
   information schema table. (In the pre-WL#7339 version, the
   calculated cardinality number was 6 for this case.)

This issue is caused by the conversion to using a float value
for the records-per-key estimate. When re-calculating the
cardinality number in step 3 above, we can easily get a
result that is just below the actual correct cardinality number,
and because of the cast, the number is always truncated.

The suggested fix for this problem is to round the calculated
cardinality number to the nearest integer value before
inserting it into the statistics table. This both avoids the
issue of different results on different platforms and
produces a more correct cardinality estimate.

This change has caused a few other test result files to be
re-recorded. The updated cardinality numbers are more correct
than the previous ones.
------------------------------------------------------------
revno: 8734
committer: bin.x.su@oracle.com
branch nick: mysql-trunk
timestamp: Fri 2014-08-29 10:15:45 +0800
message:
  Commit the missing test case result file for WL#6835.

------------------------------------------------------------
revno: 8732 [merge]
committer: Sunny Bains <Sunny.Bains@Oracle.Com>
branch nick: trunk
timestamp: Fri 2014-08-29 10:24:18 +1000
message:
  WL#6835 - InnoDB: GCS Replication: Deterministic Deadlock Handling (High Prio Transactions in InnoDB)
  
  Introduce transaction priority. Transactions with a higher priority cannot
  be rolled back by transactions with a lower priority. A higher priority
  transaction will jump the lock wait queue and grab the record lock instead
  of waiting.
  
  This code is not currently visible to the users. However, there are debug
  tests that can exercise the code. It will probably require some additional
  work once it is used by GCS.
  
  rb#6036 Approved by Jimmy Yang.
…PACKAGES

  Fixed by adding --rpm to mysql_install_db command
  Also some corrections enterprise -> commercial
  Merge of cset 8778 from trunk
…T OF SOLARIS PKG INSTALL

  Remove --insecure from postinstall-solaris, it was a temporary fix
  This is a dummy empty commit, as fix has already been applied
  Merged cset 8875 from trunk
@mysql-oca-bot

Hi, thank you for submitting this pull request. In order to consider your code we need you to sign the Oracle Contribution Agreement (OCA). Please review the details and follow the instructions at http://www.oracle.com/technetwork/community/oca-486395.html
Please make sure to include your MySQL bug system user (email) in the returned form.
Thanks

@mysql-admin

Closing pull request as it appears to have been submitted in error (not intended to be a contribution)
==Omer

@mysql-admin mysql-admin closed this Jul 2, 2015
akopytov pushed a commit to akopytov/mysql-server that referenced this pull request Aug 25, 2017
Patch mysql#19: Fix -Wunused-parameter warnings in release build.
bjornmu pushed a commit that referenced this pull request Jul 1, 2024
… for connection xxx'.

The new iterator based explains are not impacted.

The issue here is a race condition. More than one thread is using the
query term iterator at the same time (which is neither thread safe nor
reentrant), and part of its state is in the query terms being visited,
which leads to interference/race conditions.

a) the explain thread

uses an iterator here:

   Sql_cmd_explain_other_thread::execute

is inspecting the Query_expression of the running query
calling master_query_expression()->find_blocks_query_term which uses
an iterator over the query terms in the query expression:

   for (auto qt : query_terms<>()) {
       if (qt->query_block() == qb) {
           return qt;
       }
   }

the above search fails to find qb due to the interference of
thread b), see below, and then tries to access a null pointer:

    * thread #36, name = ‘connection’, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
  frame #0: 0x000000010bb3cf0d mysqld`Query_block::type(this=0x00007f8f82719088) const at sql_lex.cc:4441:11
  frame #1: 0x000000010b83763e mysqld`(anonymous namespace)::Explain::explain_select_type(this=0x00007000020611b8) at opt_explain.cc:792:50
  frame #2: 0x000000010b83cc4d mysqld`(anonymous namespace)::Explain_join::explain_select_type(this=0x00007000020611b8) at opt_explain.cc:1487:21
  frame #3: 0x000000010b837c34 mysqld`(anonymous namespace)::Explain::prepare_columns(this=0x00007000020611b8) at opt_explain.cc:744:26
  frame #4: 0x000000010b83ea0e mysqld`(anonymous namespace)::Explain_join::explain_qep_tab(this=0x00007000020611b8, tabnum=0) at opt_explain.cc:1415:32
  frame #5: 0x000000010b83ca0a mysqld`(anonymous namespace)::Explain_join::shallow_explain(this=0x00007000020611b8) at opt_explain.cc:1364:9
  frame #6: 0x000000010b83379b mysqld`(anonymous namespace)::Explain::send(this=0x00007000020611b8) at opt_explain.cc:770:14
  frame #7: 0x000000010b834147 mysqld`explain_query_specification(explain_thd=0x00007f8fbb111e00, query_thd=0x00007f8fbb919c00, query_term=0x00007f8f82719088, ctx=CTX_JOIN) at opt_explain.cc:2088:20
  frame #8: 0x000000010bd36b91 mysqld`Query_expression::explain_query_term(this=0x00007f8f7a090360, explain_thd=0x00007f8fbb111e00, query_thd=0x00007f8fbb919c00, qt=0x00007f8f82719088) at sql_union.cc:1519:11
  frame #9: 0x000000010bd36c68 mysqld`Query_expression::explain_query_term(this=0x00007f8f7a090360, explain_thd=0x00007f8fbb111e00, query_thd=0x00007f8fbb919c00, qt=0x00007f8f8271d748) at sql_union.cc:1526:13
  frame #10: 0x000000010bd373f7 mysqld`Query_expression::explain(this=0x00007f8f7a090360, explain_thd=0x00007f8fbb111e00, query_thd=0x00007f8fbb919c00) at sql_union.cc:1591:7
  frame #11: 0x000000010b835820 mysqld`mysql_explain_query_expression(explain_thd=0x00007f8fbb111e00, query_thd=0x00007f8fbb919c00, unit=0x00007f8f7a090360) at opt_explain.cc:2392:17
  frame #12: 0x000000010b835400 mysqld`explain_query(explain_thd=0x00007f8fbb111e00, query_thd=0x00007f8fbb919c00, unit=0x00007f8f7a090360) at opt_explain.cc:2353:13
 * frame #13: 0x000000010b8363e4 mysqld`Sql_cmd_explain_other_thread::execute(this=0x00007f8fba585b68, thd=0x00007f8fbb111e00) at opt_explain.cc:2531:11
  frame #14: 0x000000010bba7d8b mysqld`mysql_execute_command(thd=0x00007f8fbb111e00, first_level=true) at sql_parse.cc:4648:29
  frame #15: 0x000000010bb9e230 mysqld`dispatch_sql_command(thd=0x00007f8fbb111e00, parser_state=0x0000700002065de8) at sql_parse.cc:5303:19
  frame #16: 0x000000010bb9a4cb mysqld`dispatch_command(thd=0x00007f8fbb111e00, com_data=0x0000700002066e38, command=COM_QUERY) at sql_parse.cc:2135:7
  frame #17: 0x000000010bb9c846 mysqld`do_command(thd=0x00007f8fbb111e00) at sql_parse.cc:1464:18
  frame #18: 0x000000010b2f2574 mysqld`handle_connection(arg=0x0000600000e34200) at connection_handler_per_thread.cc:304:13
  frame #19: 0x000000010e072fc4 mysqld`pfs_spawn_thread(arg=0x00007f8fba8160b0) at pfs.cc:3051:3
  frame #20: 0x00007ff806c2b202 libsystem_pthread.dylib`_pthread_start + 99
  frame #21: 0x00007ff806c26bab libsystem_pthread.dylib`thread_start + 15

b) the query thread being explained is itself performing LEX::cleanup
and as part of that iterates over the query terms, but still allows
EXPLAIN of the query plan since

   thd->query_plan.set_query_plan(SQLCOM_END, ...)

hasn't been called yet.

     20:frame: Query_terms<(Visit_order)1, (Visit_leaves)0>::Query_term_iterator::operator++() (in mysqld) (query_term.h:613)
     21:frame: Query_expression::cleanup(bool) (in mysqld) (sql_union.cc:1861)
     22:frame: LEX::cleanup(bool) (in mysqld) (sql_lex.h:4286)
     30:frame: Sql_cmd_dml::execute(THD*) (in mysqld) (sql_select.cc:799)
     31:frame: mysql_execute_command(THD*, bool) (in mysqld) (sql_parse.cc:4648)
     32:frame: dispatch_sql_command(THD*, Parser_state*) (in mysqld) (sql_parse.cc:5303)
     33:frame: dispatch_command(THD*, COM_DATA const*, enum_server_command) (in mysqld) (sql_parse.cc:2135)
     34:frame: do_command(THD*) (in mysqld) (sql_parse.cc:1464)
     57:frame: handle_connection(void*) (in mysqld) (connection_handler_per_thread.cc:304)
     58:frame: pfs_spawn_thread(void*) (in mysqld) (pfs.cc:3053)
     65:frame: _pthread_start (in libsystem_pthread.dylib) + 99
     66:frame: thread_start (in libsystem_pthread.dylib) + 15

Solution:

This patch solves the issue by removing iterator state from
Query_term, making the query_term iterators thread safe. This solution
labels every child query_term with its index in its parent's
m_children vector.  The iterator can therefore easily compute the next
child to visit based on Query_term::m_sibling_idx.

A unit test case is added to check reentrancy.

One can also manually verify that we have no remaining race condition
by running two client connections files (with \. <file>) with a big
number of copies of the repro query in one connection and a big number
of EXPLAIN format=json FOR <connection>, e.g.

    EXPLAIN FORMAT=json FOR CONNECTION 8\G

in the other. The actual connection number would need to be verified
in connection one, of course.

Change-Id: Ie7d56610914738ccbbecf399ccc4f465f7d26ea7
dbussink added a commit to planetscale/mysql-server that referenced this pull request Nov 21, 2024
In case `with_ndb_home` is set, `buf` is allocated with `PATH_MAX` and
the home is already written into the buffer.

The additional path is written using `snprintf` and it starts off at
`len`. It can still write up to `PATH_MAX` bytes, though, which is wrong:
since we already have the home written into the buffer, only
`PATH_MAX - len` bytes are available after it.

On Ubuntu 24.04 with debug builds this is caught and it crashes:

```
*** buffer overflow detected ***: terminated
Signal 6 thrown, attempting backtrace.
stack_bottom = 0 thread_stack 0x0
 #0 0x604895341cb6 <unknown>
 mysql#1 0x7ff22524531f <unknown> at sysdeps/unix/sysv/linux/x86_64/libc_sigaction.c:0
 mysql#2 0x7ff22529eb1c __pthread_kill_implementation at ./nptl/pthread_kill.c:44
 mysql#3 0x7ff22529eb1c __pthread_kill_internal at ./nptl/pthread_kill.c:78
 mysql#4 0x7ff22529eb1c __GI___pthread_kill at ./nptl/pthread_kill.c:89
 mysql#5 0x7ff22524526d __GI_raise at sysdeps/posix/raise.c:26
 mysql#6 0x7ff2252288fe __GI_abort at ./stdlib/abort.c:79
 mysql#7 0x7ff2252297b5 __libc_message_impl at sysdeps/posix/libc_fatal.c:132
 mysql#8 0x7ff225336c18 __GI___fortify_fail at ./debug/fortify_fail.c:24
 mysql#9 0x7ff2253365d3 __GI___chk_fail at ./debug/chk_fail.c:28
 mysql#10 0x7ff225337db4 ___snprintf_chk at ./debug/snprintf_chk.c:29
 mysql#11 0x6048953593ba <unknown>
 mysql#12 0x604895331a3d <unknown>
 mysql#13 0x6048953206e7 <unknown>
 mysql#14 0x60489531f4b1 <unknown>
 mysql#15 0x60489531e8e6 <unknown>
 mysql#16 0x7ff22522a1c9 __libc_start_call_main at sysdeps/nptl/libc_start_call_main.h:58
 mysql#17 0x7ff22522a28a __libc_start_main_impl at csu/libc-start.c:360
 mysql#18 0x60489531ed54 <unknown>
 mysql#19 0xffffffffffffffff <unknown>
```

In practice this buffer overflow only would happen with very long paths.

Signed-off-by: Dirkjan Bussink <d.bussink@gmail.com>