8 commits
Author | SHA1 | Message
---|---|---
jnemeth | c49d8591ba | Update to MySQL Cluster 7.4.12
----- 7.4.12

Changes in MySQL Cluster NDB 7.4.12 (5.6.31-ndb-7.4.12) (2016-07-18)

MySQL Cluster NDB 7.4.12 is a new release of MySQL Cluster 7.4, based on MySQL Server 5.6 and including features in version 7.4 of the NDB storage engine, as well as fixing recently discovered bugs in previous MySQL Cluster releases. This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.31 (see Changes in MySQL 5.6.31 (2016-06-02)).

Functionality Added or Changed

- ClusterJ: To make it easier for ClusterJ to handle fatal errors that require the SessionFactory to be closed, a new public method in the SessionFactory interface, getConnectionPoolSessionCounts(), has been created. When it returns zeros for all pooled connections, it means all sessions have been closed, at which point the SessionFactory can be closed and reopened. See Reconnecting to a MySQL Cluster for more detail. (Bug #22353594)

Bugs Fixed

- Incompatible Change: When the data nodes are only partially connected to the API nodes, a node used for a pushdown join may get its request from a transaction coordinator on a different node, without (yet) being connected to the API node itself. In such cases, the NodeInfo object for the requesting API node contained no valid info about the software version of the API node, which caused the DBSPJ block to assume (incorrectly), when aborting, that the API node used NDB version 7.2.4 or earlier, requiring a backward compatibility mode during query abort which sent a node failure error instead of the real error causing the abort. Now, whenever this situation occurs, it is assumed that, if the NDB software version is not yet available, the API node version is greater than 7.2.4. (Bug #23049170)
- Although arguments to the DUMP command are 32-bit integers, ndb_mgmd used a buffer of only 10 bytes when processing them. (Bug #23708039)
- During shutdown, the mysqld process could sometimes hang after logging NDB Util: Stop ... NDB Util: Wakeup. (Bug #23343739) References: See also: Bug #21098142.
- During an online upgrade from a MySQL Cluster NDB 7.3 release to an NDB 7.4 (or later) release, the failures of several data nodes running the lower version during local checkpoints (LCPs), and just prior to upgrading these nodes, led to additional node failures following the upgrade. This was due to lingering elements of the EMPTY_LCP protocol initiated by the older nodes as part of an LCP-plus-restart sequence, and which is no longer used in NDB 7.4 and later due to LCP optimizations implemented in those versions. (Bug #23129433)
- Reserved send buffer for the loopback transporter, introduced in MySQL Cluster NDB 7.4.8 and used by API and management nodes for administrative signals, was calculated incorrectly. (Bug #23093656, Bug #22016081) References: This issue is a regression of: Bug #21664515.
- During a node restart, re-creation of internal triggers used for verifying the referential integrity of foreign keys was not reliable, because it was possible that not all distributed TC and LDM instances agreed on all trigger identities. To fix this problem, an extra step is added to the node restart sequence, during which the trigger identities are determined by querying the current master node. (Bug #23068914) References: See also: Bug #23221573.
- Following the forced shutdown of one of the 2 data nodes in a cluster where NoOfReplicas=2, the other data node shut down as well, due to arbitration failure. (Bug #23006431)
- The ndbinfo.tc_time_track_stats table uses histogram buckets to give a sense of the distribution of latencies. The sizes of these buckets were also reported as HISTOGRAM BOUNDARY INFO messages during data node startup; this printout was redundant and so has been removed. (Bug #22819868)
- A failure occurred in DBTUP in debug builds when variable-sized pages for a fragment totalled more than 4 GB. (Bug #21313546)
- mysqld did not shut down cleanly when executing ndb_index_stat. (Bug #21098142) References: See also: Bug #23343739.
- DBDICT and GETTABINFOREQ queue debugging were enhanced as follows:
  - Monitoring by a data node of the progress of GETTABINFOREQ signals can be enabled by setting DictTrace >= 2.
  - Added the ApiVerbose configuration parameter, which enables NDB API debug logging for an API node where it is set greater than or equal to 2.
  - Added DUMP code 1229, which shows the current state of the GETTABINFOREQ queue. (See DUMP 1229; a client-side sketch of issuing DUMP commands through the MGM API appears at the end of this entry.)
  See also The DBDICT Block. (Bug #20368450) References: See also: Bug #20368354.
- Cluster API: Deletion of Ndb objects used a disproportionately high amount of CPU. (Bug #22986823)

----- 7.4.11

Changes in MySQL Cluster NDB 7.4.11 (5.6.29-ndb-7.4.11) (2016-04-20)

MySQL Cluster NDB 7.4.11 is a new release of MySQL Cluster 7.4, based on MySQL Server 5.6 and including features in version 7.4 of the NDB storage engine, as well as fixing recently discovered bugs in previous MySQL Cluster releases. This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.29 (see Changes in MySQL 5.6.29 (2016-02-05)).

Functionality Added or Changed

- Cluster API: Added the Ndb::setEventBufferQueueEmptyEpoch() method, which makes it possible to enable queuing of empty events (event type TE_EMPTY). (Bug #22157845)

Bugs Fixed

- Important Change: The minimum value for the BackupDataBufferSize data node configuration parameter has been lowered from 2 MB to 512 KB. The default and maximum values for this parameter remain unchanged. (Bug #22749509)
- Microsoft Windows: Performing ANALYZE TABLE on a table having one or more indexes caused ndbmtd to fail with an InvalidAttrInfo error due to signal corruption. This issue occurred consistently on Windows, but could also be encountered on other platforms. (Bug #77716, Bug #21441297)
- During node failure handling, the request structure used to drive the cleanup operation was not maintained correctly when the request was executed. This led to inconsistencies that were harmless during normal operation, but these could lead to assertion failures during node failure handling, with subsequent failure of additional nodes. (Bug #22643129)
- The previous fix for a lack of mutex protection for the internal TransporterFacade::deliver_signal() function was found to be incomplete in some cases. (Bug #22615274)
- Compilation of MySQL with Visual Studio 2015 failed in ConfigInfo.cpp, due to a change in Visual Studio's handling of spaces and concatenation. (Bug #22558836, Bug #80024)
- When setup of the binary log as an atomic operation on one SQL node failed, this could trigger a state in other SQL nodes in which they appeared to detect the SQL node participating in schema change distribution, whereas it had not yet completed binary log setup.
This could in turn cause a deadlock on the global metadata lock when the SQL node still retrying binary log setup needed this lock, while another mysqld had taken the lock for itself as part of a schema change operation. In such cases, the second SQL node waited for the first one to act on its schema distribution changes, which it was not yet able to do. (Bug #22494024)
- Duplicate key errors could occur when ndb_restore was run on a backup containing a unique index. This was due to the fact that, during restoration of data, the database can pass through one or more inconsistent states prior to completion, such an inconsistent state possibly having duplicate values for a column which has a unique index. (If the restoration of data is preceded by a run with --disable-indexes and followed by one with --rebuild-indexes, these errors are avoided.) Added a check for unique indexes in the backup which is performed only when restoring data, and which does not process tables that have explicitly been excluded. For each unique index found, a warning is now printed. (Bug #22329365)
- Restoration of metadata with ndb_restore -m occasionally failed with the error message Failed to create index... when creating a unique index. While diagnosing this problem, it was found that the internal error PREPARE_SEIZE_ERROR (a temporary error) was reported as an unknown error. Now in such cases, ndb_restore retries the creation of the unique index, and PREPARE_SEIZE_ERROR is reported as NDB Error 748 Busy during read of event table. (Bug #21178339) References: See also: Bug #22989944.
- When setting up event logging for ndb_mgmd on Windows, MySQL Cluster tries to add a registry key to HKEY_LOCAL_MACHINE, which fails if the user does not have access to the registry. In such cases ndb_mgmd logged the error Could neither create or open key, which is not accurate and which can cause confusion for users who may not realize that file logging is available and being used. Now in such cases, ndb_mgmd logs a warning Could not create or access the registry key needed for the application to log to the Windows EventLog. Run the application with sufficient privileges once to create the key, or add the key manually, or turn off logging for that application. An error (as opposed to a warning) is now reported in such cases only if there is no available output at all for ndb_mgmd event logging. (Bug #20960839)
- NdbDictionary metadata operations had a hard-coded 7-day timeout, which proved to be excessive for short-lived operations such as retrieval of table definitions. This could lead to unnecessary hangs in user applications which were difficult to detect and handle correctly. To help address this issue, timeout behaviour is modified so that read-only or short-duration dictionary interactions have a 2-minute timeout, while schema transactions of potentially long duration retain the existing 7-day timeout. Such timeouts are intended as a safety net: In the event of problems, these return control to users, who can then take corrective action. Any reproducible issue with NdbDictionary timeouts should be reported as a bug. (Bug #20368354)
- Optimization of signal sending, by buffering signals and sending them periodically or when the buffer became full, could cause SUB_GCP_COMPLETE_ACK signals to be excessively delayed. Such signals are sent for each node and epoch, with a minimum interval of TimeBetweenEpochs; if they are not received in time, the SUMA buffers can overflow as a result.
The overflow caused API nodes to be disconnected, leading to current transactions being aborted due to node failure. This condition made it difficult for long transactions (such as altering a very large table) to be completed. Now in such cases, the ACK signal is sent without being delayed. (Bug #18753341)
- An internal function used to validate connections failed to update the connection count when creating a new Ndb object. This had the potential to create a new Ndb object for every operation validating the connection, which could have an impact on performance, particularly when performing schema operations. (Bug #80750, Bug #22932982)
- When an SQL node was started, and joined the schema distribution protocol, another SQL node, already waiting for a schema change to be distributed, timed out during that wait. This was because the code incorrectly assumed that the new SQL node would also acknowledge the schema distribution even though the new node joined too late to be a participant in it. As part of this fix, printouts of schema distribution progress now always print the more significant part of a bitmask before the less significant; formatting of bitmasks in such printouts has also been improved. (Bug #80554, Bug #22842538)
- Settings for the SchedulerResponsiveness data node configuration parameter (introduced in MySQL Cluster NDB 7.4.9) were ignored. (Bug #80341, Bug #22712481)
- MySQL Cluster did not compile correctly with Microsoft Visual Studio 2015, due to a change from previous versions in the VS implementation of the _vsnprintf() function. (Bug #80276, Bug #22670525)
- When setting CPU spin time, the value was needlessly cast to a boolean internally, so that setting it to any nonzero value yielded an effective value of 1. This issue, as well as the fix for it, apply both to setting the SchedulerSpinTimer parameter and to setting spintime as part of a ThreadConfig parameter value. (Bug #80237, Bug #22647476)
- Processing of local checkpoints was not handled correctly on Mac OS X, due to an uninitialized variable. (Bug #80236, Bug #22647462)
- A logic error in an if statement in storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp rendered useless a check for determining whether ZREAD_ERROR should be returned when comparing operations. This was detected when compiling with gcc using -Werror=logical-op. (Bug #80155, Bug #22601798) References: This issue is a regression of: Bug #21285604.
- The ndb_print_file utility failed consistently on Solaris 9 for SPARC. (Bug #80096, Bug #22579581)
- Builds with the -Werror and -Wextra flags (as for release builds) failed on SLES 11. (Bug #79950, Bug #22539531)
- When using CREATE INDEX to add an index on either of two NDB tables sharing circular foreign keys, the query succeeded but a temporary table was left on disk, breaking the foreign key constraints. This issue was also observed when attempting to create an index on a table in the middle of a chain of foreign keys; that is, a table having both parent and child keys, but on different tables. The problem did not occur when using ALTER TABLE to perform the same index creation operation; and subsequent analysis revealed unintended differences in the way such operations were performed by CREATE INDEX. To fix this problem, we now make sure that operations performed by a CREATE INDEX statement are always handled internally in the same way and at the same time that the same operations are handled when performed by ALTER TABLE or DROP INDEX.
(Bug #79156, Bug #22173891)
- NDB failed to ignore index prefixes on primary and unique keys, causing CREATE TABLE and ALTER TABLE statements using them to be rejected. (Bug #78441, Bug #21839248)
- Cluster API: Executing a transaction with an NdbIndexOperation based on an obsolete unique index caused the data node process to fail. Now the index is checked in such cases, and if it cannot be used the transaction fails with an appropriate error. (Bug #79494, Bug #22299443)
- Integer overflow could occur during client handshake processing, leading to a server exit. (Bug #22722946)
- For busy servers, client connection or communication failure could occur if an I/O-related system call was interrupted. The mysql_options() C API function now has a MYSQL_OPT_RETRY_COUNT option to control the number of retries for interrupted system calls (sketched below). (Bug #22336527) References: See also: Bug #22389653.
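The MYSQL_OPT_RETRY_COUNT item above is the only client-visible API change in that fix. A minimal sketch of a client using it follows; the host, credentials, and retry count are illustrative placeholders, not values from the release notes.

```cpp
// Sketch: limiting retries for interrupted I/O system calls using the
// MYSQL_OPT_RETRY_COUNT option described above. Connection details and
// the retry count are placeholders.
#include <mysql.h>
#include <cstdio>

int main() {
    MYSQL *conn = mysql_init(nullptr);
    if (conn == nullptr) return 1;

    unsigned int retries = 3;  // retries for interrupted system calls
    mysql_options(conn, MYSQL_OPT_RETRY_COUNT, &retries);

    if (mysql_real_connect(conn, "127.0.0.1", "user", "password",
                           "test", 3306, nullptr, 0) == nullptr) {
        std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }
    // ... issue queries as usual ...
    mysql_close(conn);
    return 0;
}
```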
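Two of the fixes in this entry touch DUMP command handling (argument parsing in ndb_mgmd, and the new DUMP code 1229). For context, here is a hedged sketch of issuing a DUMP from a C++ management client through the MGM API; the connect string and node ID are placeholders, and 1229 is used purely as an example argument.

```cpp
// Sketch: issuing a DUMP command to a data node through the MGM API.
// DUMP arguments are 32-bit integers, per the buffer-size fix above.
#include <mgmapi.h>
#include <cstdio>

int main() {
    NdbMgmHandle h = ndb_mgm_create_handle();
    ndb_mgm_set_connectstring(h, "mgmhost:1186");  // placeholder
    if (ndb_mgm_connect(h, 3 /*retries*/, 5 /*delay, s*/, 1 /*verbose*/) != 0) {
        std::fprintf(stderr, "connect failed: %s\n",
                     ndb_mgm_get_latest_error_msg(h));
        ndb_mgm_destroy_handle(&h);
        return 1;
    }
    int args[] = { 1229 };             // example DUMP code from the notes above
    struct ndb_mgm_reply reply;
    ndb_mgm_dump_state(h, 1 /*node ID, placeholder*/, args, 1, &reply);
    ndb_mgm_disconnect(h);
    ndb_mgm_destroy_handle(&h);
    return 0;
}
```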
fhajny | 73cb9327ed | Update databases/mysql-cluster to 7.4.10.
7.4.10

- A serious regression was inadvertently introduced in MySQL Cluster NDB 7.4.8 whereby local checkpoints and thus restarts often took much longer than expected.

See more at: http://dev.mysql.com/doc/relnotes/mysql-cluster/7.4/en/mysql-cluster-news-7-4-10.html

7.4.9

- Important Change: Previously, the NDB scheduler always optimized for speed against throughput in a predetermined manner (this was hard coded); this balance can now be set using the SchedulerResponsiveness data node configuration parameter.
- Added the tc_time_track_stats table to the ndbinfo information database (see the query sketch after this entry).
- Cluster Replication: Normally, RESET SLAVE causes all entries to be deleted from the mysql.ndb_apply_status table. This release adds the ndb_clear_apply_status system variable, which makes it possible to override this behavior.

See more at: http://dev.mysql.com/doc/relnotes/mysql-cluster/7.4/en/mysql-cluster-news-7-4-9.html

7.4.8

- Changes have been made in the minimum values for a number of parameters applying to data buffers for backups and LCPs. These parameters, listed here, can no longer be set so as to make the system impossible to run:
  - BackupDataBufferSize: minimum increased from 0 to 2M.
  - BackupLogBufferSize: minimum increased from 0 to 2M.
  - BackupWriteSize: minimum increased from 2K to 32K.
  - BackupMaxWriteSize: minimum increased from 2K to 256K.
  In addition, the BackupMemory data node parameter is now deprecated and subject to removal in a future version of MySQL Cluster. Use BackupDataBufferSize and BackupLogBufferSize instead.
- When a backup was unsuccessful due to insufficient resources, a subsequent retry worked only for those parts of the backup that worked in the same thread, since delayed signals are only supported in the same thread. Delayed signals are no longer sent to other threads in such cases.
- An instance of an internal list object used in searching for queued scans was not actually destroyed before calls to functions that could manipulate the base object used to create it.
- ACC scans were queued in the category of range scans, which could lead to starting an ACC scan when DBACC had no free slots for scans. We fix this by implementing a separate queue for ACC scans.
- Cluster Replication: Added the create_old_temporals server system variable to complement the system variables avoid_temporal_upgrade and show_old_temporals introduced in MySQL 5.6.24 and available in MySQL Cluster beginning with NDB 7.3.9 and NDB 7.4.6.
- When the --database option has not been specified for ndb_show_tables, and no tables are found in the TEST_DB database, an appropriate warning message is now issued.
- Bug fixes.

See more at http://dev.mysql.com/doc/relnotes/mysql-cluster/7.4/en/mysql-cluster-news-7-4-8.html
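The tc_time_track_stats table mentioned under 7.4.9 is an ordinary ndbinfo table, so it can be read from any SQL node. A minimal C++ sketch using the MySQL C API follows; connection details are placeholders, and the sketch prints whole rows rather than assuming particular column names.

```cpp
// Sketch: reading ndbinfo.tc_time_track_stats (added in 7.4.9) from an
// SQL node via the MySQL C API. Connection details are placeholders.
#include <mysql.h>
#include <cstdio>

int main() {
    MYSQL *conn = mysql_init(nullptr);
    if (!mysql_real_connect(conn, "127.0.0.1", "user", "password",
                            "ndbinfo", 3306, nullptr, 0)) {
        std::fprintf(stderr, "connect: %s\n", mysql_error(conn));
        return 1;
    }
    if (mysql_query(conn, "SELECT * FROM ndbinfo.tc_time_track_stats") != 0) {
        std::fprintf(stderr, "query: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }
    MYSQL_RES *res = mysql_store_result(conn);
    if (res != nullptr) {
        unsigned int ncols = mysql_num_fields(res);
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)) != nullptr) {
            for (unsigned int i = 0; i < ncols; ++i)
                std::printf("%s%c", row[i] ? row[i] : "NULL",
                            i + 1 < ncols ? '\t' : '\n');
        }
        mysql_free_result(res);
    }
    mysql_close(conn);
    return 0;
}
```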
jnemeth | 5d7dace2a0 | Update to MySQL Cluster 7.4.7: this is mainly a bug fix release.
pkgsrc change: delete one patch that has been upstreamed

Changes in MySQL Cluster NDB 7.4.7 (5.6.25-ndb-7.4.7) (2015-07-13)

MySQL Cluster NDB 7.4.7 is a new release of MySQL Cluster 7.4, based on MySQL Server 5.6 and including features in version 7.4 of the NDB storage engine, as well as fixing recently discovered bugs in previous MySQL Cluster releases. This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.25 (see Changes in MySQL 5.6.25 (2015-05-29)).

Functionality Added or Changed

- Deprecated MySQL Cluster node configuration parameters are now indicated as such by ndb_config --configinfo --xml. For each parameter currently deprecated, the corresponding <param/> tag in the XML output now includes the attribute deprecated="true". (Bug #21127135)

Bugs Fixed

- Important Change; Cluster API: The Ndb::getHighestQueuedEpoch() method returned the greatest epoch in the event queue instead of the greatest epoch found after calling pollEvents2(). (Bug #20700220)
- Important Change; Cluster API: Ndb::pollEvents() is now compatible with the TE_EMPTY, TE_INCONSISTENT, and TE_OUT_OF_MEMORY event types introduced in MySQL Cluster NDB 7.4.3. For detailed information about this change, see the description of this method in the MySQL Cluster API Developer Guide. (Bug #20646496)
- Important Change; Cluster API: Added the method Ndb::isExpectingHigherQueuedEpochs() to the NDB API to detect when additional, newer event epochs were detected by pollEvents2(). The behavior of Ndb::pollEvents() has also been modified such that it now returns NDB_FAILURE_GCI (equal to ~(Uint64)0) when a cluster failure has been detected. (Bug #18753887)
- After restoring the database metadata (but not any data) by running ndb_restore --restore_meta (or -m), SQL nodes would hang while trying to SELECT from a table in the database to which the metadata was restored. In such cases the attempt to query the table now fails as expected, since the table does not actually exist until ndb_restore is executed with --restore_data (-r). (Bug #21184102) References: See also Bug #16890703.
- When a great many threads opened and closed blocks in the NDB API in rapid succession, the internal close_clnt() function synchronizing the closing of the blocks waited an insufficiently long time for a self-signal indicating potential additional signals needing to be processed. This led to excessive CPU usage by ndb_mgmd, and prevented other threads from opening or closing other blocks. This issue is fixed by changing the function polling call to wait on a specific condition to be woken up (that is, when a signal has in fact been executed). (Bug #21141495)
- Previously, multiple send threads could be invoked for handling sends to the same node; these threads then competed for the same send lock. While the send lock blocked the additional send threads, work threads could be passed to other nodes. This issue is fixed by ensuring that new send threads are not activated while there is already an active send thread assigned to the same node. In addition, a node already having an active send thread assigned to it is no longer visible to other, already active, send threads; that is, such a node is no longer added to the node list when a send thread is currently assigned to it.
(Bug #20954804, Bug #76821)
- Queueing of pending operations when the redo log was overloaded (DefaultOperationRedoProblemAction API node configuration parameter) could lead to timeouts when data nodes ran out of redo log space (P_TAIL_PROBLEM errors). Now when the redo log is full, the node aborts requests instead of queuing them. (Bug #20782580) References: See also Bug #20481140.
- An NDB event buffer can be used with an Ndb object to subscribe to table-level row change event streams. Users subscribe to an existing event; this causes the data nodes to start sending event data signals (SUB_TABLE_DATA) and epoch completion signals (SUB_GCP_COMPLETE) to the Ndb object. SUB_GCP_COMPLETE_REP signals can arrive for execution in a concurrent receiver thread before completion of the internal method call used to start a subscription. Execution of SUB_GCP_COMPLETE_REP signals depends on the total number of SUMA buckets (sub data streams), but this may not yet have been set, leading to the present issue, when the counter used for tracking the SUB_GCP_COMPLETE_REP signals (TOTAL_BUCKETS_INIT) was found to be set to erroneous values. Now TOTAL_BUCKETS_INIT is tested to be sure it has been set correctly before it is used. (Bug #20575424) References: See also Bug #20561446, Bug #21616263.
- NDB statistics queries could be delayed by the error delay set for ndb_index_stat_option (default 60 seconds) when the index that was queried had been marked with an internal error. The same underlying issue could also cause ANALYZE TABLE to hang when executed against an NDB table having multiple indexes where an internal error occurred on one or more but not all indexes. Now in such cases, any existing statistics are returned immediately, without waiting for any additional statistics to be discovered. (Bug #20553313, Bug #20707694, Bug #76325)
- The multi-threaded scheduler sends to remote nodes either directly from each worker thread or from dedicated send threads, depending on the cluster's configuration. This send might transmit all, part, or none of the available data from the send buffers. While there remained pending send data, the worker or send threads continued trying to send in a loop. The actual size of the data sent in the most recent attempt to perform a send is now tracked, and used to detect lack of send progress by the send or worker threads. When no progress has been made, and there is no other work outstanding, the scheduler takes a 1 millisecond pause to free up the CPU for use by other threads. (Bug #18390321) References: See also Bug #20929176, Bug #20954804.
- In some cases, attempting to restore a table that was previously backed up failed with a File Not Found error due to a missing table fragment file. This occurred as a result of the NDB kernel BACKUP block receiving a Busy error while trying to obtain the table description, due to other traffic from external clients, and not retrying the operation. The fix for this issue creates two separate queues for such requests: one for internal clients such as the BACKUP block or ndb_restore, and one for external clients such as API nodes, with the internal queue prioritized. Note that it has always been the case that external client applications using the NDB API (including MySQL applications running against an SQL node) are expected to handle Busy errors by retrying transactions at a later time; this expectation is not changed by the fix for this issue (a retry sketch appears at the end of this entry). (Bug #17878183) References: See also Bug #17916243.
- On startup, API nodes (including mysqld processes running as SQL nodes) waited to connect with data nodes that had not yet joined the cluster. Now they wait only for data nodes that have actually already joined the cluster. In the case of a new data node joining an existing cluster, API nodes still try to connect with the new data node within HeartbeatIntervalDbApi milliseconds. (Bug #17312761)
- In some cases, the DBDICT block failed to handle repeated GET_TABINFOREQ signals after the first one, leading to possible node failures and restarts. This could be observed after setting a sufficiently high value for MaxNoOfExecutionThreads and low value for LcpScanProgressTimeout. (Bug #77433, Bug #21297221)
- Client lookup for delivery of API signals to the correct client by the internal TransporterFacade::deliver_signal() function had no mutex protection, which could cause issues such as timeouts encountered during testing, when other clients connected to the same TransporterFacade. (Bug #77225, Bug #21185585)
- It was possible to end up with a lock on the send buffer mutex when send buffers became a limiting resource, due either to insufficient send buffer resource configuration, problems with slow or failing communications such that all send buffers became exhausted, or slow receivers failing to consume what was sent. In this situation worker threads failed to allocate send buffer memory for signals, and attempted to force a send in order to free up space, while at the same time the send thread was busy trying to send to the same node or nodes. All of these threads competed for taking the send buffer mutex, which resulted in the lock already described, reported by the watchdog as Stuck in Send. This fix is made in two parts, listed here:
  1. The send thread no longer holds the global send thread mutex while getting the send buffer mutex; it now releases the global mutex prior to locking the send buffer mutex. This keeps worker threads from getting stuck in send in such cases.
  2. Locking of the send buffer mutex done by the send threads now uses a try-lock. If the try-lock fails, the node to make the send to is reinserted at the end of the list of send nodes in order to be retried later. This removes the Stuck in Send condition for the send threads.
  (Bug #77081, Bug #21109605)
- Cluster API: The pollEvents2() method now waits indefinitely for events when a negative value is used for the time argument (see the event-loop sketch at the end of this entry). (Bug #20762291)
- Cluster API: NdbEventOperation::isErrorEpoch() incorrectly returned false for the TE_INCONSISTENT table event type (see The Event::TableEvent Type). This caused a subsequent call to getEventType() to fail. (Bug #20729091)
- Cluster API: Creation and destruction of Ndb_cluster_connection objects by multiple threads could make use of the same application lock, which in some cases led to failures in the global dictionary cache. To alleviate this problem, the creation and destruction of several internal NDB API objects have been serialized. (Bug #20636124)
- Cluster API: A number of timeouts were not handled correctly in the NDB API. (Bug #20617891)
- Cluster API: When an Ndb object created prior to a failure of the cluster was reused, the event queue of this object could still contain data node events originating from before the failure. These events could reference old epochs (from before the failure occurred), which in turn could violate the assumption made by the nextEvent() method that epoch numbers always increase.
This issue is addressed by explicitly clearing the event queue in such cases. (Bug #18411034) References: See also Bug #20888668.
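Several of the notes above (for example, the BACKUP block fix) restate the expectation that NDB API clients handle Busy and other temporary errors by retrying the transaction later. A minimal sketch of such a retry loop, assuming an already initialized Ndb object; the backoff values and retry limit are arbitrary.

```cpp
// Sketch: retrying a transaction on temporary errors (such as Busy), as
// the notes above say NDB API clients are expected to do. Operations on
// the transaction are elided.
#include <NdbApi.hpp>
#include <unistd.h>

bool runWithRetry(Ndb *ndb, int maxRetries) {
    for (int attempt = 0; attempt <= maxRetries; ++attempt) {
        NdbTransaction *trans = ndb->startTransaction();
        if (trans == nullptr) {
            if (ndb->getNdbError().status != NdbError::TemporaryError)
                return false;                    // permanent error: give up
            usleep(50 * 1000 * (attempt + 1));   // back off, then retry
            continue;
        }
        // ... define read/write operations on trans here ...
        int rc = trans->execute(NdbTransaction::Commit);
        bool temporary =
            (trans->getNdbError().status == NdbError::TemporaryError);
        ndb->closeTransaction(trans);
        if (rc == 0) return true;                // committed
        if (!temporary) return false;            // permanent error: give up
        usleep(50 * 1000 * (attempt + 1));       // back off, then retry
    }
    return false;                                // retries exhausted
}
```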
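The pollEvents2() items above belong to the epoch-driven event API introduced in 7.4.3 (described in a later entry). A hedged sketch of an event-consumption loop using the calls named in these notes; the event name is a placeholder, and a real application would bind result columns before executing the operation.

```cpp
// Sketch: epoch-driven event consumption with pollEvents2()/nextEvent2().
// "ndb" is an initialized Ndb object; "myEvent" is a placeholder for an
// event created beforehand through the Dictionary.
#include <NdbApi.hpp>
#include <cstdio>

void consumeEvents(Ndb *ndb) {
    NdbEventOperation *op = ndb->createEventOperation("myEvent");
    if (op == nullptr) return;
    // A real application binds result columns with op->getValue() /
    // op->getPreValue() here, before execute().
    if (op->execute() != 0) return;

    for (;;) {
        // Per the 7.4.7 note above, a negative timeout waits indefinitely.
        if (ndb->pollEvents2(-1) < 0) break;     // unrecoverable failure
        NdbEventOperation *data;
        while ((data = ndb->nextEvent2()) != nullptr) {
            NdbDictionary::Event::TableEvent type;
            if (data->isErrorEpoch(&type)) {
                // TE_INCONSISTENT / TE_OUT_OF_MEMORY epochs need special care.
                std::fprintf(stderr, "error epoch, type %d\n", (int)type);
                continue;
            }
            if (data->isEmptyEpoch()) continue;  // TE_EMPTY
            // ... handle row change events here ...
        }
    }
    ndb->dropEventOperation(op);
}
```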
jnemeth | cc6147e399 | Update to MySQL Cluster 7.4.6:
-----

Changes in MySQL Cluster NDB 7.4.6 (5.6.24-ndb-7.4.6)

Bugs Fixed

- During backup, loading data from one SQL node followed by repeated DELETE statements on the tables just loaded from a different SQL node could lead to data node failures. (Bug #18949230)
- When an instance of NdbEventBuffer was destroyed, any references to GCI operations that remained in the event buffer data list were not freed. Now these are freed, and items from the event buffer data list are returned to the free list when purging GCI containers. (Bug #76165, Bug #20651661)
- When a bulk delete operation was committed early to avoid an additional round trip, while also returning the number of affected rows, but failed with a timeout error, an SQL node performed no verification that the transaction was in the Committed state. (Bug #74494, Bug #20092754) References: See also Bug #19873609.

-----

Changes in MySQL Cluster NDB 7.4.5 (5.6.23-ndb-7.4.5)

Bugs Fixed

- In the event of a node failure during an initial node restart followed by another node start, the restart of the affected node could hang with a START_INFOREQ that occurred while invalidation of local checkpoints was still ongoing. (Bug #20546157, Bug #75916) References: See also Bug #34702.
- It was found during testing that problems could arise when the node registered as the arbitrator disconnected or failed during the arbitration process. In this situation, the node requesting arbitration could never receive a positive acknowledgement from the registered arbitrator; this node also lacked a stable set of members and could not initiate selection of a new arbitrator. Now in such cases, when the arbitrator fails or loses contact during arbitration, the requesting node immediately fails rather than waiting to time out. (Bug #20538179)
- DROP DATABASE failed to remove the database when the database directory contained a .ndb file which had no corresponding table in NDB. Now, when executing DROP DATABASE, NDB performs a check specifically for leftover .ndb files, and deletes any that it finds. (Bug #20480035) References: See also Bug #44529.
- The maximum failure time calculation used to ensure that normal node failure handling mechanisms are given time to handle survivable cluster failures (before global checkpoint watchdog mechanisms start to kill nodes due to GCP delays) was excessively conservative, and neglected to consider that there can be at most number_of_data_nodes / NoOfReplicas node failures before the cluster can no longer survive. Now the value of NoOfReplicas is properly taken into account when performing this calculation. (Bug #20069617, Bug #20069624) References: See also Bug #19858151, Bug #20128256, Bug #20135976.
- When performing a restart, it was sometimes possible to find a log end marker which had been written by a previous restart, and that should have been invalidated. Now when searching for the last page to invalidate, the same search algorithm is used as when searching for the last page of the log to read. (Bug #76207, Bug #20665205)
- During a node restart, if there was no global checkpoint completed between the START_LCP_REQ for a local checkpoint and its LCP_COMPLETE_REP, it was possible for a comparison of the LCP ID sent in the LCP_COMPLETE_REP signal with the internal value SYSFILE->latestLCP_ID to fail. (Bug #76113, Bug #20631645)
- When sending LCP_FRAG_ORD signals as part of master takeover, it is possible that the master is not synchronized with complete accuracy in real time, so that some signals must be dropped.
During this time, the master can send a LCP_FRAG_ORD signal with its lastFragmentFlag set even after the local checkpoint has been completed. This enhancement causes this flag to persist until the start of the next local checkpoint, which causes these signals to be dropped as well. This change affects ndbd only; the issue described did not occur with ndbmtd. (Bug #75964, Bug #20567730)
- When reading and copying transporter short signal data, it was possible for the data to be copied back to the same signal with overlapping memory. (Bug #75930, Bug #20553247)
- NDB node takeover code made the assumption that there would be only one takeover record when starting a takeover, based on the further assumption that the master node could never perform copying of fragments. However, this is not the case in a system restart, where a master node can have stale data and so need to perform such copying to bring itself up to date. (Bug #75919, Bug #20546899)
- Cluster API: A scan operation, whether it is a single table scan or a query scan used by a pushed join, stores the result set in a buffer. The maximum size of this buffer is calculated and preallocated before the scan operation is started. This buffer may consume a considerable amount of memory; in some cases we observed a 2 GB buffer footprint in tests that executed 100 parallel scans with 2 single-threaded (ndbd) data nodes. This memory consumption was found to scale linearly with additional fragments. A number of root causes, listed here, were discovered that led to this problem:
  - Result rows were unpacked to full NdbRecord format before they were stored in the buffer.
  - If only some but not all columns of a table were selected, the buffer contained empty space (essentially wasted).
  - Due to the buffer format being unpacked, VARCHAR and VARBINARY columns always had to be allocated for the maximum size defined for such columns.
  - BatchByteSize and MaxScanBatchSize values were not taken into consideration as a limiting factor when calculating the maximum buffer size.
  These issues became more evident in NDB 7.2 and later MySQL Cluster release series. This was due to the fact that buffer size is scaled by BatchSize, and that the default value for this parameter was increased fourfold (from 64 to 256) beginning with MySQL Cluster NDB 7.2.1. This fix causes result rows to be buffered using the packed format instead of the unpacked format; a buffered scan result row is now not unpacked until it becomes the current row. In addition, BatchByteSize and MaxScanBatchSize are now used as limiting factors when calculating the required buffer size. Also as part of this fix, refactoring has been done to separate handling of buffered (packed) from handling of unbuffered result sets, and to remove code that had been unused since NDB 7.0 or earlier. The NdbRecord class declaration has also been cleaned up by removing a number of unused or redundant member variables. (Bug #73781, Bug #75599, Bug #19631350, Bug #20408733)

-----

Changes in MySQL Cluster NDB 7.4.4 (5.6.23-ndb-7.4.4)

Bugs Fixed

- When upgrading a MySQL Cluster from NDB 7.3 to NDB 7.4, the first data node started with the NDB 7.4 data node binary caused the master node (still running NDB 7.3) to fail with Error 2301, then itself failed during Start Phase 5. (Bug #20608889)
- A memory leak in NDB event buffer allocation caused an event to be leaked for each epoch. (Due to the fact that an SQL node uses 3 event buffers, each SQL node leaked 3 events per epoch.)
This meant that a MySQL Cluster mysqld leaked an amount of memory that was inversely proportional to the size of TimeBetweenEpochs; that is, the smaller the value for this parameter, the greater the amount of memory leaked per unit of time. (Bug #20539452)
- The values of the Ndb_last_commit_epoch_server and Ndb_last_commit_epoch_session status variables were incorrectly reported on some platforms. To correct this problem, these values are now stored internally as long long, rather than long. (Bug #20372169)
- When restoring a MySQL Cluster from backup, nodes that failed and were restarted during restoration of another node became unresponsive, which subsequently caused ndb_restore to fail and exit. (Bug #20069066)
- When a data node fails or is being restarted, the remaining nodes in the same nodegroup resend to subscribers any data which they determine has not already been sent by the failed node. Normally, when a data node (actually, the SUMA kernel block) has sent all data belonging to an epoch for which it is responsible, it sends a SUB_GCP_COMPLETE_REP signal, together with a count, to all subscribers, each of which responds with a SUB_GCP_COMPLETE_ACK. When SUMA receives this acknowledgment from all subscribers, it reports this to the other nodes in the same nodegroup so that they know that there is no need to resend this data in case of a subsequent node failure. If a node failed after all subscribers sent this acknowledgement but before all the other nodes in the same nodegroup received it from the failing node, data for some epochs could be sent (and reported as complete) twice, which could lead to an unplanned shutdown. The fix for this issue adds to the count reported by SUB_GCP_COMPLETE_ACK a list of identifiers which the receiver can use to keep track of which buckets are completed and to ignore any duplicate reported for an already completed bucket. (Bug #17579998)
- The output format of SHOW CREATE TABLE for an NDB table containing foreign key constraints did not match that for the equivalent InnoDB table, which could lead to issues with some third-party applications. (Bug #75515, Bug #20364309)
- An ALTER TABLE statement containing comments and a partitioning option against an NDB table caused the SQL node on which it was executed to fail. (Bug #74022, Bug #19667566)
- Cluster API: When a transaction is started from a cluster connection, Table and Index schema objects may be passed to this transaction for use. If these schema objects have been acquired from a different connection (Ndb_cluster_connection object), they can be deleted at any point by the deletion or disconnection of the owning connection. This can leave a connection with invalid schema objects, which causes an NDB API application to fail when these are dereferenced. To avoid this problem, if your application uses multiple connections, you can now set a check to detect sharing of schema objects between connections when passing a schema object to a transaction, using the NdbTransaction::setSchemaObjectOwnerChecks() method added in this release. When this check is enabled, the schema objects having the same names are acquired from the connection and compared to the schema objects passed to the transaction. Failure to match causes the application to fail with an error. (Bug #19785977)
- Cluster API: The increase in the default number of hashmap buckets (DefaultHashMapSize API node configuration parameter) from 240 to 3480 in MySQL Cluster NDB 7.2.11 increased the size of the internal DictHashMapInfo::HashMap type considerably.
This type was allocated on the stack in some getTable() calls which could lead to stack overflow issues for NDB API users. To avoid this problem, the hashmap is now dynamically allocated from the heap. (Bug #19306793)

-----

Changes in MySQL Cluster NDB 7.4.3 (5.6.22-ndb-7.4.3)

Functionality Added or Changed

- Important Change; Cluster API: This release introduces an epoch-driven Event API for the NDB API that supersedes the earlier GCI-based model. The new version of this API also simplifies error detection and handling, and monitoring of event buffer memory usage has been improved. New event handling methods for Ndb and NdbEventOperation added by this change include NdbEventOperation::getEventType2(), pollEvents2(), nextEvent2(), getHighestQueuedEpoch(), getNextEventOpInEpoch2(), getEpoch(), isEmptyEpoch(), and isErrorEpoch(). The pollEvents(), nextEvent(), getLatestGCI(), getGCIEventOperations(), isConsistent(), isConsistentGCI(), getEventType(), getGCI(), isOverrun(), hasError(), and clearError() methods are deprecated beginning with the same release. Some (but not all) of the new methods act as replacements for deprecated methods; not all of the deprecated methods map to new ones. The Event Class provides information as to which old methods correspond to new ones. Error handling using the new API is no longer done using the dedicated hasError() and clearError() methods, which are now deprecated as previously noted. To support this change, TableEvent now supports the values TE_EMPTY (empty epoch), TE_INCONSISTENT (inconsistent epoch), and TE_OUT_OF_MEMORY (insufficient event buffer memory). Event buffer memory management has also been improved with the introduction of the get_eventbuffer_free_percent(), set_eventbuffer_free_percent(), and get_eventbuffer_memory_usage() methods, as well as a new NDB API error Free percent out of range (error code 4123). Memory buffer usage can now be represented in applications using the EventBufferMemoryUsage data structure, and checked from MySQL client applications by reading the ndb_eventbuffer_free_percent system variable (a monitoring sketch appears at the end of this entry). For more information, see the detailed descriptions for the Ndb and NdbEventOperation methods listed. See also The Event::TableEvent Type, as well as The EventBufferMemoryUsage Structure.
- Additional logging is now performed of internal states occurring during system restarts such as waiting for node ID allocation and master takeover of global and local checkpoints. (Bug #74316, Bug #19795029)
- Added the MaxParallelCopyInstances data node configuration parameter. In cases where the parallelism used during restart copy phase (normally the number of LDMs up to a maximum of 16) is excessive and leads to system overload, this parameter can be used to override the default behavior by reducing the degree of parallelism employed.
- Added the operations_per_fragment table to the ndbinfo information database. Using this table, you can now obtain counts of operations performed on a given fragment (or fragment replica). Such operations include reads, writes, updates, and deletes, scan and index operations performed while executing them, and operations refused, as well as information relating to rows scanned on and returned from a given fragment replica. This table also provides information about interpreted programs used as attribute values, and values returned by them.
- Cluster API: Two new example programs, demonstrating reads and writes of CHAR, VARCHAR, and VARBINARY column values, have been added to storage/ndb/ndbapi-examples in the MySQL Cluster source tree. For more information about these programs, including source code listings, see NDB API Simple Array Example, and NDB API Simple Array Example Using Adapter.

Bugs Fixed

- The global checkpoint commit and save protocols can be delayed by various causes, including slow disk I/O. The DIH master node monitors the progress of both of these protocols, and can enforce a maximum lag time during which the protocols are stalled by killing the node responsible for the lag when it reaches this maximum. This DIH master GCP monitor mechanism did not perform its task more than once per master node; that is, it failed to continue monitoring after detecting and handling a GCP stop. (Bug #20128256) References: See also Bug #19858151, Bug #20069617, Bug #20062754.
- When running mysql_upgrade on a MySQL Cluster SQL node, the expected drop of the performance_schema database on this node was instead performed on all SQL nodes connected to the cluster. (Bug #20032861)
- The warning shown when an ALTER TABLE ALGORITHM=INPLACE ... ADD COLUMN statement automatically changes a column's COLUMN_FORMAT from FIXED to DYNAMIC now includes the name of the column whose format was changed. (Bug #20009152, Bug #74795)
- The local checkpoint scan fragment watchdog and the global checkpoint monitor can each exclude a node when it is too slow when participating in their respective protocols. This exclusion was implemented by simply asking the failing node to shut down, which in case this was delayed (for whatever reason) could prolong the duration of the GCP or LCP stall for other, unaffected nodes. To minimize this time, an isolation mechanism has been added to both protocols whereby any other live nodes forcibly disconnect the failing node after a predetermined amount of time. This allows the failing node the opportunity to shut down gracefully (after logging debugging and other information) if possible, but limits the time that other nodes must wait for this to occur. Now, once the remaining live nodes have processed the disconnection of any failing nodes, they can commence failure handling and restart the related protocol or protocols, even if the failed node takes an excessively long time to shut down. (Bug #19858151) References: See also Bug #20128256, Bug #20069617, Bug #20062754.
- The matrix of values used for thread configuration when applying the setting of the MaxNoOfExecutionThreads configuration parameter has been improved to align with support for greater numbers of LDM threads. See Multi-Threading Configuration Parameters (ndbmtd), for more information about the changes. (Bug #75220, Bug #20215689)
- When a new node failed after connecting to the president but not to any other live node, then reconnected and started again, a live node that did not see the original connection retained old state information. This caused the live node to send redundant signals to the president, causing it to fail. (Bug #75218, Bug #20215395)
- In the NDB kernel, it was possible for a TransporterFacade object to reset a buffer while the data contained by the buffer was being sent, which could lead to a race condition. (Bug #75041, Bug #20112981)
- mysql_upgrade failed to drop and recreate the ndbinfo database and its tables as expected.
(Bug #74863, Bug #20031425)
- Due to a lack of memory barriers, MySQL Cluster programs such as ndbmtd did not compile on POWER platforms. (Bug #74782, Bug #20007248)
- In spite of the presence of a number of protection mechanisms against overloading signal buffers, it was still in some cases possible to do so. This fix adds block-level support in the NDB kernel (in SimulatedBlock) to make signal buffer overload protection more reliable than when implementing such protection on a case-by-case basis. (Bug #74639, Bug #19928269)
- Copying of metadata during local checkpoints caused node restart times to be highly variable, which could make it difficult to diagnose problems with restarts. The fix for this issue introduces signals (including PAUSE_LCP_IDLE, PAUSE_LCP_REQUESTED, and PAUSE_NOT_IN_LCP_COPY_META_DATA) to pause LCP execution and flush LCP reports, making it possible to block LCP reporting at times when LCPs during restarts become stalled in this fashion. (Bug #74594, Bug #19898269)
- When a data node was restarted from its angel process (that is, following a node failure), it could be allocated a new node ID before failure handling was actually completed for the failed node. (Bug #74564, Bug #19891507)
- In NDB version 7.4, node failure handling can require completing checkpoints on up to 64 fragments. (This checkpointing is performed by the DBLQH kernel block.) The requirement for master takeover to wait for completion of all such checkpoints led in such cases to excessive length of time for completion. To address these issues, the DBLQH kernel block can now report that it is ready for master takeover before it has completed any ongoing fragment checkpoints, and can continue processing these while the system completes the master takeover. (Bug #74320, Bug #19795217)
- Local checkpoints were sometimes started earlier than necessary during node restarts, while the node was still waiting for copying of the data distribution and data dictionary to complete. (Bug #74319, Bug #19795152)
- The check to determine when a node was restarting and so know when to accelerate local checkpoints sometimes reported a false positive. (Bug #74318, Bug #19795108)
- Values in different columns of the ndbinfo tables disk_write_speed_aggregate and disk_write_speed_aggregate_node were reported using differing multiples of bytes. Now all of these columns display values in bytes. In addition, this fix corrects an error made when calculating the standard deviations used in the std_dev_backup_lcp_speed_last_10sec, std_dev_redo_speed_last_10sec, std_dev_backup_lcp_speed_last_60sec, and std_dev_redo_speed_last_60sec columns of the ndbinfo.disk_write_speed_aggregate table. (Bug #74317, Bug #19795072)
- Recursion in the internal method Dblqh::finishScanrec() led to an attempt to create two list iterators with the same head. This regression was introduced during work done to optimize scans for version 7.4 of the NDB storage engine. (Bug #73667, Bug #19480197)
- Transporter send buffers were not updated properly following a failed send. (Bug #45043, Bug #20113145)
- Disk Data: An update on many rows of a large Disk Data table could in some rare cases lead to node failure. In the event that such problems are observed with very large transactions on Disk Data tables, you can now increase the number of page entries allocated for disk page buffer memory by raising the value of the DiskPageBufferEntries data node configuration parameter added in this release.
(Bug #19958804)
- Disk Data: In some cases, during DICT master takeover, the new master could crash while attempting to roll forward an ongoing schema transaction. (Bug #19875663, Bug #74510)
- Cluster API: It was possible to delete an Ndb_cluster_connection object while there remained instances of Ndb using references to it. Now the Ndb_cluster_connection destructor waits for all related Ndb objects to be released before completing. (Bug #19999242) References: See also Bug #19846392.

-----

Changes in MySQL Cluster NDB 7.4.2 (5.6.21-ndb-7.4.2)

Functionality Added or Changed

- Added the restart_info table to the ndbinfo information database to provide current status and timing information relating to node and system restarts. By querying this table, you can observe the progress of restarts in real time. (Bug #19795152)
- After adding new data nodes to the configuration file of a MySQL Cluster having many API nodes, but prior to starting any of the data node processes, API nodes tried to connect to these missing data nodes several times per second, placing extra loads on management nodes and the network. To reduce unnecessary traffic caused in this way, it is now possible to control the amount of time that an API node waits between attempts to connect to data nodes which fail to respond; this is implemented in two new API node configuration parameters, StartConnectBackoffMaxTime and ConnectBackoffMaxTime. Time elapsed during node connection attempts is not taken into account when applying these parameters, both of which are given in milliseconds with approximately 100 ms resolution. As long as the API node is not connected to any data nodes as described previously, the value of the StartConnectBackoffMaxTime parameter is applied; otherwise, ConnectBackoffMaxTime is used. In a MySQL Cluster with many unstarted data nodes, the values of these parameters can be raised to circumvent connection attempts to data nodes which have not yet begun to function in the cluster, as well as moderate high traffic to management nodes. For more information about the behavior of these parameters, see Defining SQL and Other API Nodes in a MySQL Cluster. (Bug #17257842)

Bugs Fixed

- When performing a batched update, where one or more successful write operations from the start of the batch were followed by write operations which failed without being aborted (due to the AbortOption being set to AO_IgnoreError), the failure handling for these by the transaction coordinator leaked CommitAckMarker resources (a sketch of batched execution with AO_IgnoreError appears at the end of this entry). (Bug #19875710) References: This bug was introduced by Bug #19451060, Bug #73339.
- Online downgrades to MySQL Cluster NDB 7.3 failed when a MySQL Cluster NDB 7.4 master attempted to request a local checkpoint with 32 fragments from a data node already running NDB 7.3, which supports only 2 fragments for LCPs. Now in such cases, the NDB 7.4 master determines how many fragments the data node can handle before making the request. (Bug #19600834)
- The fix for a previous issue with the handling of multiple node failures required determining the number of TC instances the failed node was running, then taking them over. The mechanism to determine this number sometimes provided an invalid result which caused the number of TC instances in the failed node to be set to an excessively high value. This in turn caused redundant takeover attempts, which wasted time and had a negative impact on the processing of other node failures and of global checkpoints. (Bug #19193927) References: This bug was introduced by Bug #18069334.
- The server side of an NDB transporter disconnected an incoming client connection very quickly during the handshake phase if the node at the server end was not yet ready to receive connections from the other node. This led to problems when the client immediately attempted once again to connect to the server socket, only to be disconnected again, and so on in a repeating loop, until it succeeded. Since each client connection attempt left behind a socket in TIME_WAIT, the number of sockets in TIME_WAIT increased rapidly, leading in turn to problems with the node on the server side of the transporter. Further analysis of the problem and code showed that the root of the problem lay in the handshake portion of the transporter connection protocol. To keep the issue described previously from occurring, the node at the server end now sends back a WAIT message instead of disconnecting the socket when the node is not yet ready to accept a handshake. This means that the client end should no longer need to create a new socket for the next retry, but can instead begin immediately with a new handshake hello message. (Bug #17257842)
- Corrupted messages to data nodes sometimes went undetected, causing a bad signal to be delivered to a block which aborted the data node. This failure in combination with disconnecting nodes could in turn cause the entire cluster to shut down. To keep this from happening, additional checks are now made when unpacking signals received over TCP, including checks for byte order, compression flag (which must not be used), and the length of the next message in the receive buffer (if there is one). Whenever two consecutive unpacked messages fail the checks just described, the current message is assumed to be corrupted. In this case, the transporter is marked as having bad data and no more unpacking of messages occurs until the transporter is reconnected. In addition, an entry is written to the cluster log containing the error as well as a hex dump of the corrupted message. (Bug #73843, Bug #19582925)
- During restore operations, an attribute's maximum length was used when reading variable-length attributes from the receive buffer instead of the attribute's actual length. (Bug #73312, Bug #19236945)

-----

Changes in MySQL Cluster NDB 7.4.1 (5.6.20-ndb-7.4.1)

Node Restart Performance and Reporting Enhancements

Performance: A number of performance and other improvements have been made with regard to node starts and restarts. The following list contains a brief description of each of these changes:

- Before memory allocated on startup can be used, it must be touched, causing the operating system to allocate the actual physical memory needed. The process of touching each page of memory that was allocated has now been multithreaded, with touch times on the order of 3 times shorter than with a single thread when performed by 16 threads.
- When performing a node or system restart, it is necessary to restore local checkpoints for the fragments. This process previously used delayed signals at a point which was found to be critical to performance; these have now been replaced with normal (undelayed) signals, which should shorten significantly the time required to back up a MySQL Cluster or to restore it from backup.
- Previously, there could be at most 2 LDM instances active with local checkpoints at any given time.
Now, up to 16 LDMs can be used for performing this task, which increases utilization of available CPU power, and can speed up LCPs by a factor of 10, which in turn can greatly improve restart times.
- Better reporting of disk writes and increased control over these also make up a large part of this work. New ndbinfo tables disk_write_speed_base, disk_write_speed_aggregate, and disk_write_speed_aggregate_node provide information about the speed of disk writes for each LDM thread that is in use. The DiskCheckpointSpeed and DiskCheckpointSpeedInRestart configuration parameters have been deprecated, and are subject to removal in a future MySQL Cluster release. This release adds the data node configuration parameters MinDiskWriteSpeed, MaxDiskWriteSpeed, MaxDiskWriteSpeedOtherNodeRestart, and MaxDiskWriteSpeedOwnRestart to control write speeds for LCPs and backups when the present node, another node, or no node is currently restarting. For more information, see the descriptions of the ndbinfo tables and MySQL Cluster configuration parameters named previously.
- Reporting of MySQL Cluster start phases has been improved, with more frequent printouts. New and better information about the start phases and their implementation has also been provided in the sources and documentation. See Summary of MySQL Cluster Start Phases.

Improved Scan and SQL Processing

Performance: Several internal methods relating to the NDB receive thread have been optimized to make mysqld more efficient in processing SQL applications with the NDB storage engine. In particular, this work improves the performance of the NdbReceiver::execTRANSID_AI() method, which is commonly used to receive a record from the data nodes as part of a scan operation. (Since the receiver thread sometimes has to process millions of received records per second, it is critical that this method does not perform unnecessary work, or tie up resources that are not strictly needed.) The associated internal functions receive_ndb_packed_record() and handleReceivedSignal() have also been improved, and made more efficient.

Per-Fragment Memory Reporting

Information about memory usage by individual fragments can now be obtained from the memory_per_fragment view added in this release to the ndbinfo information database. This information includes pages having fixed and variable element size, rows, fixed element free slots, variable element free bytes, and hash index memory usage. For information, see The ndbinfo memory_per_fragment Table.

Bugs Fixed

- In some cases, transporter receive buffers were reset by one thread while being read by another. This happened when a race condition occurred between a thread receiving data and another thread initiating disconnect of the transporter (disconnection clears this buffer). Concurrency logic has now been implemented to keep this race from taking place. (Bug #19552283, Bug #73790)
- When a new data node started, API nodes were allowed to attempt to register themselves with the data node for executing transactions before the data node was ready. This forced the API node to wait an extra heartbeat interval before trying again. To address this issue, a number of HA_ERR_NO_CONNECTION errors (Error 4009) that could be issued during this time have been changed to Cluster temporarily unavailable errors (Error 4035), which should allow API nodes to use new data nodes more quickly than before.
Executing ALTER TABLE ... REORGANIZE PARTITION after increasing the number of data nodes in the cluster from 4 to 16 led to a crash of the data nodes. This issue was shown to be a regression caused by a previous fix which added a new dump handler using a dump code that was already in use (7019), which caused the command to execute two different handlers with different semantics. The new handler was assigned a new DUMP code (7024). (Bug #18550318) References: This bug is a regression of Bug #14220269.

When certain queries generated signals having more than 18 data words prior to a node failure, such signals were not written correctly in the trace file. (Bug #18419554)

Failure of multiple nodes while using ndbmtd with multiple TC threads was not handled gracefully under a moderate amount of traffic, which could in some cases lead to an unplanned shutdown of the cluster. (Bug #18069334)

For multithreaded data nodes, some threads do not communicate often, with the result that very old signals can remain at the top of the signal buffers. When performing a thread trace, the signal dumper calculated the latest signal ID from what it found in the signal buffers, which meant that these old signals could be erroneously counted as the newest ones. Now the signal ID counter is kept as part of the thread state, and it is this value that is used when dumping signals for trace files. (Bug #73842, Bug #19582807)

Cluster API: When an NDB API client application received a signal with an invalid block or signal number, NDB provided only a very brief error message that did not accurately convey the nature of the problem. Now in such cases, appropriate printouts are provided when a bad signal or message is detected. In addition, the message length is now checked to make certain that it matches the size of the embedded signal. (Bug #18426180)

----- The following improvements to MySQL Cluster have been made in MySQL Cluster NDB 7.4:

Conflict detection and resolution enhancements. A reserved column name namespace NDB$ is now employed for exceptions table metacolumns, allowing an arbitrary subset of main table columns to be recorded, even if they are not part of the original table's primary key. Recording the complete original primary key is no longer required, since matching against exceptions table columns is now done by name and type only. It is now also possible for you to record values of columns which are not part of the main table's primary key in the exceptions table. Read conflict detection is now possible. All rows read by the conflicting transaction are flagged, and logged in the exceptions table. Rows inserted in the same transaction are not included among the rows read or logged. This read tracking depends on the slave taking an exclusive read lock, which requires setting ndb_log_exclusive_reads in advance. See Read conflict detection and resolution for more information and examples. Existing exceptions tables remain supported. For more information, see Section 18.6.11, "MySQL Cluster Replication Conflict Resolution".

Circular ("active-active") replication improvements.
When using a circular or "active-active" MySQL Cluster Replication topology, you can assign one of the roles of primary or secondary to a given MySQL Cluster using the ndb_slave_conflict_role server system variable, which can be employed when failing over from a MySQL Cluster acting as primary, or when using conflict detection and resolution with NDB$EPOCH2() and NDB$EPOCH2_TRANS() (MySQL Cluster NDB 7.4.2 and later), which support delete-delete conflict handling. See the description of the ndb_slave_conflict_role variable, as well as NDB$EPOCH2(), for more information. See also Section 18.6.11, "MySQL Cluster Replication Conflict Resolution".

Per-fragment memory usage reporting. You can now obtain data about memory usage by individual MySQL Cluster fragments from the memory_per_fragment view, added in MySQL Cluster NDB 7.4.1 to the ndbinfo information database. For more information, see Section 18.5.10.17, "The ndbinfo memory_per_fragment Table".

Node restart improvements. MySQL Cluster NDB 7.4 includes a number of improvements which decrease the time needed for data nodes to be restarted. These are described in the following list:

Memory that is allocated on node startup cannot be used until it has been touched, which causes the operating system to set aside the actual physical memory required. In previous versions of MySQL Cluster, the process of touching each page of memory that was allocated was single-threaded, which made it relatively time-consuming. This process has now been reimplemented with multithreading. In tests with 16 threads, touch times on the order of 3 times shorter than with a single thread were observed.

Increased parallelization of local checkpoints; in MySQL Cluster NDB 7.4, LCPs now support 32 fragments rather than 2 as before. This greatly increases utilization of CPU power that would otherwise go unused, and can make LCPs faster by up to a factor of 10; this speedup in turn can greatly improve node restart times. The degree of parallelization used for the node copy phase during node and system restarts can be controlled in MySQL Cluster NDB 7.4.3 and later by setting the MaxParallelCopyInstances data node configuration parameter to a nonzero value.

Reporting on disk writes is provided by new ndbinfo tables disk_write_speed_base, disk_write_speed_aggregate, and disk_write_speed_aggregate_node, which provide information about the speed of disk writes for each LDM thread that is in use. This release also adds the data node configuration parameters MinDiskWriteSpeed, MaxDiskWriteSpeed, MaxDiskWriteSpeedOtherNodeRestart, and MaxDiskWriteSpeedOwnRestart to control write speeds for LCPs and backups when the present node, another node, or no node is currently restarting. These changes are intended to supersede configuration of disk writes using the DiskCheckpointSpeed and DiskCheckpointSpeedInRestart configuration parameters, both of which have now been deprecated and are subject to removal in a future MySQL Cluster release.

Faster times for restoring a MySQL Cluster from backup have been obtained by replacing delayed signals found at a point which was found to be critical to performance with normal (undelayed) signals. The elimination or replacement of these unnecessary delayed signals should noticeably reduce the amount of time required to back up a MySQL Cluster, or to restore a MySQL Cluster from backup.

Several internal methods relating to the NDB receive thread have been optimized, to increase the efficiency of SQL processing by NDB.
The receiver thread at times may have to process several million received records per second, so it is critical that it not perform unnecessary work or waste resources when retrieving records from MySQL Cluster data nodes.

Improved reporting of MySQL Cluster restarts and start phases. The restart_info table (included in the ndbinfo information database beginning with MySQL Cluster NDB 7.4.2) provides current status and timing information about node and system restarts. Reporting and logging of MySQL Cluster start phases also provides more frequent and specific printouts during startup than previously. See Section 18.5.1, Summary of MySQL Cluster Start Phases, for more information.

NDB API: new Event API. MySQL Cluster NDB 7.4.3 introduces an epoch-driven Event API that supersedes the earlier GCI-based model. The new version of the API also simplifies error detection and handling. These changes are realized in the NDB API by implementing a number of new methods for Ndb and NdbEventOperation, deprecating several other methods of both classes, and adding new type values to Event::TableEvent. The event handling methods added to Ndb in MySQL Cluster NDB 7.4.3 are pollEvents2(), nextEvent2(), getHighestQueuedEpoch(), and getNextEventOpInEpoch2(). The Ndb methods pollEvents(), nextEvent(), getLatestGCI(), getGCIEventOperations(), isConsistent(), and isConsistentGCI() are deprecated beginning with the same release. MySQL Cluster NDB 7.4.3 adds the NdbEventOperation event handling methods getEventType2(), getEpoch(), isEmptyEpoch(), and isErrorEpoch(); it obsoletes getEventType(), getGCI(), getLatestGCI(), isOverrun(), hasError(), and clearError(). While some (but not all) of the new methods are direct replacements for deprecated methods, not all of the deprecated methods map to new ones. The Event Class provides information as to which old methods correspond to new ones. Error handling with the new API is no longer performed using the dedicated hasError() and clearError() methods, which are now deprecated (and thus subject to removal in a future release of MySQL Cluster). To support this change, the list of TableEvent types now includes the values TE_EMPTY (empty epoch), TE_INCONSISTENT (inconsistent epoch), and TE_OUT_OF_MEMORY (inconsistent data). Improvements in event buffer management have also been made by implementing new get_eventbuffer_free_percent(), set_eventbuffer_free_percent(), and get_eventbuffer_memory_usage() methods. Memory buffer usage can now be represented in application code using EventBufferMemoryUsage. The ndb_eventbuffer_free_percent system variable, also implemented in MySQL Cluster NDB 7.4, makes it possible for event buffer memory usage to be checked from MySQL client applications. For more information, see the detailed descriptions for the Ndb and NdbEventOperation methods listed. See also The Event::TableEvent Type, as well as The EventBufferMemoryUsage Structure.
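A minimal sketch of the epoch-driven event loop these methods enable follows. Here "my_event" is a placeholder for an event already created through NdbDictionary, `ndb` is a connected Ndb object, and error handling is abbreviated:

    #include <NdbApi.hpp>

    // Sketch of the NDB 7.4.3 epoch-driven event loop.
    void pollLoop(Ndb* ndb) {
        NdbEventOperation* op = ndb->createEventOperation("my_event");
        if (op == NULL || op->execute() != 0)
            return;  // setup failed; real code would check ndb->getNdbError()

        for (;;) {
            if (ndb->pollEvents2(1000) <= 0)   // wait up to 1s for epochs
                continue;
            NdbEventOperation* ev;
            while ((ev = ndb->nextEvent2()) != NULL) {
                switch (ev->getEventType2()) {
                case NdbDictionary::Event::TE_EMPTY:
                    break;  // empty epoch: nothing to apply
                case NdbDictionary::Event::TE_INCONSISTENT:
                case NdbDictionary::Event::TE_OUT_OF_MEMORY:
                    // exceptional epochs arrive as event data, replacing the
                    // old hasError()/clearError() style of error handling
                    break;
                default:
                    // a regular row event; ev->getEpoch() gives its epoch
                    break;
                }
            }
        }
    }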
Per-fragment operations information. In MySQL Cluster NDB 7.4.3 and later, counts of various types of operations on a given fragment or fragment replica can be obtained easily using the operations_per_fragment table in the ndbinfo information database. This includes read, write, update, and delete operations, as well as scan and index operations performed by these. Information about operations refused, and about rows scanned and returned from a given fragment replica, is also shown in operations_per_fragment. This table also provides information about interpreted programs used as attribute values, and values returned by them.

MySQL Cluster NDB 7.4 is also supported by MySQL Cluster Manager, which provides an advanced command-line interface that can simplify many complex MySQL Cluster management tasks. See MySQL Cluster Manager 1.3.5 User Manual, for more information.

----- Changes in MySQL Cluster NDB 7.3.9 (5.6.24-ndb-7.3.9)

Bugs Fixed

It was found during testing that problems could arise when the node registered as the arbitrator disconnected or failed during the arbitration process. In this situation, the node requesting arbitration could never receive a positive acknowledgement from the registered arbitrator; this node also lacked a stable set of members and could not initiate selection of a new arbitrator. Now in such cases, when the arbitrator fails or loses contact during arbitration, the requesting node immediately fails rather than waiting to time out. (Bug #20538179)

The values of the Ndb_last_commit_epoch_server and Ndb_last_commit_epoch_session status variables were incorrectly reported on some platforms. To correct this problem, these values are now stored internally as long long, rather than long. (Bug #20372169)

The maximum failure time calculation used to ensure that normal node failure handling mechanisms are given time to handle survivable cluster failures (before global checkpoint watchdog mechanisms start to kill nodes due to GCP delays) was excessively conservative, and neglected to consider that there can be at most number_of_data_nodes / NoOfReplicas node failures before the cluster can no longer survive. Now the value of NoOfReplicas is properly taken into account when performing this calculation. (Bug #20069617, Bug #20069624) References: See also Bug #19858151, Bug #20128256, Bug #20135976.

When a data node fails or is being restarted, the remaining nodes in the same nodegroup resend to subscribers any data which they determine has not already been sent by the failed node. Normally, when a data node (actually, the SUMA kernel block) has sent all data belonging to an epoch for which it is responsible, it sends a SUB_GCP_COMPLETE_REP signal, together with a count, to all subscribers, each of which responds with a SUB_GCP_COMPLETE_ACK. When SUMA receives this acknowledgment from all subscribers, it reports this to the other nodes in the same nodegroup so that they know that there is no need to resend this data in case of a subsequent node failure. If a node failed after all subscribers sent this acknowledgement but before all the other nodes in the same nodegroup received it from the failing node, data for some epochs could be sent (and reported as complete) twice, which could lead to an unplanned shutdown. The fix for this issue adds to the count reported by SUB_GCP_COMPLETE_ACK a list of identifiers which the receiver can use to keep track of which buckets are completed and to ignore any duplicate reported for an already completed bucket. (Bug #17579998)

When performing a restart, it was sometimes possible to find a log end marker which had been written by a previous restart, and that should have been invalidated. Now when searching for the last page to invalidate, the same search algorithm is used as when searching for the last page of the log to read. (Bug #76207, Bug #20665205)

When reading and copying transporter short signal data, it was possible for the data to be copied back to the same signal with overlapping memory. (Bug #75930, Bug #20553247)
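The underlying hazard in that last fix is the classic overlapping-copy problem. As a conceptual C++ illustration (not the NDB source), an in-buffer copy must use memmove(), since memcpy() has undefined behavior when source and destination overlap:

    #include <cstring>

    // Shift `len` bytes from buf+offset down to the start of buf. The two
    // ranges may overlap, so memmove() is required; memcpy() is not safe here.
    void shiftSignalData(char* buf, size_t offset, size_t len) {
        memmove(buf, buf + offset, len);
    }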
When a bulk delete operation was committed early to avoid an additional round trip, while also returning the number of affected rows, but failed with a timeout error, an SQL node performed no verification that the transaction was in the Committed state. (Bug #74494, Bug #20092754) References: See also Bug #19873609.

An ALTER TABLE statement containing comments and a partitioning option against an NDB table caused the SQL node on which it was executed to fail. (Bug #74022, Bug #19667566)

Cluster API: When a transaction is started from a cluster connection, Table and Index schema objects may be passed to this transaction for use. If these schema objects have been acquired from a different connection (Ndb_cluster_connection object), they can be deleted at any point by the deletion or disconnection of the owning connection. This can leave a connection with invalid schema objects, which causes an NDB API application to fail when these are dereferenced. To avoid this problem, if your application uses multiple connections, you can now set a check to detect sharing of schema objects between connections when passing a schema object to a transaction, using the NdbTransaction::setSchemaObjectOwnerChecks() method added in this release (see the sketch following this entry). When this check is enabled, schema objects having the same names are acquired from the connection and compared to the schema objects passed to the transaction. Failure to match causes the application to fail with an error. (Bug #19785977)
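A short sketch of enabling this check follows. These notes name the method but not its exact signature, so the boolean parameter shown here is an assumption; `ndb` is a connected Ndb object and `tab` a Table assumed to have been acquired from a different connection's dictionary:

    #include <NdbApi.hpp>

    void guardedOperation(Ndb* ndb, const NdbDictionary::Table* tab) {
        NdbTransaction* trans = ndb->startTransaction();
        if (trans == NULL) return;
        trans->setSchemaObjectOwnerChecks(true);  // assumed boolean toggle
        // With the check enabled, passing `tab` to an operation fails with
        // an error if it was acquired from another Ndb_cluster_connection.
        NdbOperation* op = trans->getNdbOperation(tab);
        if (op == NULL) {
            // trans->getNdbError() reports the ownership mismatch
        }
        ndb->closeTransaction(trans);
    }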
Cluster API: The increase in the default number of hashmap buckets (DefaultHashMapSize API node configuration parameter) from 240 to 3480 in MySQL Cluster NDB 7.2.11 increased the size of the internal DictHashMapInfo::HashMap type considerably. This type was allocated on the stack in some getTable() calls, which could lead to stack overflow issues for NDB API users. To avoid this problem, the hashmap is now dynamically allocated from the heap. (Bug #19306793)

Cluster API: A scan operation, whether it is a single table scan or a query scan used by a pushed join, stores the result set in a buffer. The maximum size of this buffer is calculated and preallocated before the scan operation is started. This buffer may consume a considerable amount of memory; in some cases we observed a 2 GB buffer footprint in tests that executed 100 parallel scans with 2 single-threaded (ndbd) data nodes. This memory consumption was found to scale linearly with additional fragments. A number of root causes, listed here, were discovered that led to this problem: Result rows were unpacked to full NdbRecord format before they were stored in the buffer. If only some but not all columns of a table were selected, the buffer contained empty space (essentially wasted). Due to the buffer format being unpacked, VARCHAR and VARBINARY columns always had to be allocated for the maximum size defined for such columns. BatchByteSize and MaxScanBatchSize values were not taken into consideration as a limiting factor when calculating the maximum buffer size. These issues became more evident in NDB 7.2 and later MySQL Cluster release series. This was due to the fact that buffer size is scaled by BatchSize, and that the default value for this parameter was increased fourfold (from 64 to 256) beginning with MySQL Cluster NDB 7.2.1. This fix causes result rows to be buffered using the packed format instead of the unpacked format; a buffered scan result row is now not unpacked until it becomes the current row. In addition, BatchByteSize and MaxScanBatchSize are now used as limiting factors when calculating the required buffer size. Also as part of this fix, refactoring has been done to separate the handling of buffered (packed) result sets from that of unbuffered result sets, and to remove code that had been unused since NDB 7.0 or earlier. The NdbRecord class declaration has also been cleaned up by removing a number of unused or redundant member variables. (Bug #73781, Bug #75599, Bug #19631350, Bug #20408733)
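One practical consequence of BatchByteSize and MaxScanBatchSize now bounding the preallocated buffer is that applications can also bound it from their side. A hedged sketch, using the batch argument of NdbScanOperation::readTuples() to cap rows fetched per batch (the value 64 is arbitrary):

    #include <NdbApi.hpp>

    // Define a scan with a small per-batch row count so the API-side
    // receive buffer stays small; assumes an open transaction `trans`.
    NdbScanOperation* smallBatchScan(NdbTransaction* trans,
                                     const NdbDictionary::Table* tab) {
        NdbScanOperation* scan = trans->getNdbScanOperation(tab);
        if (scan == NULL) return NULL;
        // parallel = 0 lets NDB choose; batch = 64 caps rows per batch
        if (scan->readTuples(NdbOperation::LM_CommittedRead, 0, 0, 64) != 0)
            return NULL;
        return scan;
    }
|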
||
jperkin
|
aad20bde6b | Fix PLIST on non-x86_64 platforms. | ||
jnemeth
|
0a853b56c3 |
Update to MySQL Cluster 7.3.8:
Changes in MySQL Cluster NDB 7.3.8 (5.6.22-ndb-7.3.8) (2015-01-21)

MySQL Cluster NDB 7.3.8 is a new release of MySQL Cluster, based on MySQL Server 5.6 and including features from version 7.3 of the NDB storage engine, as well as fixing a number of recently discovered bugs in previous MySQL Cluster releases. This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.22 (see Changes in MySQL 5.6.22 (2014-12-01)).

Functionality Added or Changed

* Performance: Recent improvements made to the multithreaded scheduler were intended to optimize the cache behavior of its internal data structures, with members of these structures placed such that those local to a given thread do not overflow into a cache line which can be accessed by another thread. Where required, extra padding bytes are inserted to isolate cache lines owned (or shared) by other threads, thus avoiding invalidation of the entire cache line if another thread writes into a cache line not entirely owned by itself. This optimization improved MT Scheduler performance by several percent. It has since been found that the optimization just described depends on the global instance of struct thr_repository starting at a cache line aligned base address, as well as on the compiler not rearranging or adding extra padding to the scheduler struct; it was also found that these prerequisites were not guaranteed (or even checked). Thus this cache line optimization previously worked only when g_thr_repository (that is, the global instance) happened to be cache line aligned by accident. In addition, on 64-bit platforms, the compiler added extra padding words in struct thr_safe_pool such that attempts to pad it to a cache line aligned size failed. The current fix ensures that g_thr_repository is constructed on a cache line aligned address, and the constructors have been modified so as to verify cache line aligned addresses where these are assumed by design. (A conceptual sketch of this kind of alignment appears after the entries below.) Results from internal testing show improvements in MT Scheduler read performance of up to 10% in some cases, following these changes. (Bug #18352514)

* Cluster API: Two new example programs, demonstrating reads and writes of CHAR, VARCHAR, and VARBINARY column values, have been added to storage/ndb/ndbapi-examples in the MySQL Cluster source tree. For more information about these programs, including source code listings, see NDB API Simple Array Example, and NDB API Simple Array Example Using Adapter.

Bugs Fixed

* The global checkpoint commit and save protocols can be delayed by various causes, including slow disk I/O. The DIH master node monitors the progress of both of these protocols, and can enforce a maximum lag time during which the protocols are stalled by killing the node responsible for the lag when it reaches this maximum. This DIH master GCP monitor mechanism did not perform its task more than once per master node; that is, it failed to continue monitoring after detecting and handling a GCP stop. (Bug #20128256) References: See also Bug #19858151.

* When running mysql_upgrade on a MySQL Cluster SQL node, the expected drop of the performance_schema database on this node was instead performed on all SQL nodes connected to the cluster. (Bug #20032861)
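The alignment guarantee described in the cache line fix above can be illustrated with standard C++11 facilities. This is a conceptual sketch only, with assumed names and an assumed 64-byte line size, not the actual NDB scheduler code:

    #include <cstddef>

    static const size_t CACHE_LINE = 64;  // assumed cache line size

    // Pad per-thread members to whole cache lines so a writer cannot
    // invalidate a line another thread is reading (false sharing).
    struct alignas(CACHE_LINE) thr_repository_like {
        char thread_local_state[CACHE_LINE];
        char shared_state[CACHE_LINE];
    };

    // With alignas, the global instance is line aligned by construction
    // rather than by accident of the linker's layout.
    static thr_repository_like g_repo;

    // A compile-time check in the spirit of the verification the fix adds:
    static_assert(sizeof(thr_repository_like) % CACHE_LINE == 0,
                  "struct must occupy a whole number of cache lines");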
* A number of problems relating to the fired triggers pool have been fixed, including the following issues: + When the fired triggers pool was exhausted, NDB returned Error 218 (Out of LongMessageBuffer). A new error code 221 is added to cover this case. + An additional, separate case in which Error 218 was wrongly reported now returns the correct error. + Setting low values for MaxNoOfFiredTriggers led to an error when no memory was allocated if there was only one hash bucket. + An aborted transaction now releases any fired trigger records it held. Previously, these records were held until its ApiConnectRecord was reused by another transaction. + In addition, for the Fired Triggers pool in the internal ndbinfo.ndb$pools table, the high value always equalled the total, due to the fact that all records were momentarily seized when initializing them. Now the high value shows the maximum following completion of initialization. (Bug #19976428)

* Online reorganization when using ndbmtd data nodes and with binary logging by mysqld enabled could sometimes lead to failures in the TRIX and DBLQH kernel blocks, or to silent data corruption. (Bug #19903481) References: See also Bug #19912988.

* The local checkpoint scan fragment watchdog and the global checkpoint monitor can each exclude a node when it is too slow when participating in their respective protocols. This exclusion was implemented by simply asking the failing node to shut down, which, if this shutdown was delayed for any reason, could prolong the duration of the GCP or LCP stall for other, unaffected nodes. To minimize this time, an isolation mechanism has been added to both protocols whereby any other live nodes forcibly disconnect the failing node after a predetermined amount of time. This allows the failing node the opportunity to shut down gracefully (after logging debugging and other information) if possible, but limits the time that other nodes must wait for this to occur. Now, once the remaining live nodes have processed the disconnection of any failing nodes, they can commence failure handling and restart the related protocol or protocols, even if the failed node takes an excessively long time to shut down. (Bug #19858151) References: See also Bug #20128256.

* A watchdog failure resulted from a hang while freeing a disk page in TUP_COMMITREQ, due to use of an uninitialized block variable. (Bug #19815044, Bug #74380)

* Multiple threads crashing led to multiple sets of trace files being printed and possibly to deadlocks. (Bug #19724313)

* When a client retried against a new master a schema transaction that had failed previously against the previous master while the latter was restarting, the lock obtained by this transaction on the new master prevented the previous master from progressing past start phase 3 until the client was terminated and the resources held by it were cleaned up. (Bug #19712569, Bug #74154)

* When using the NDB storage engine, the maximum possible length of a database or table name is 63 characters, but this limit was not always strictly enforced. This meant that a statement using a name having 64 characters, such as CREATE DATABASE, DROP DATABASE, or ALTER TABLE RENAME, could cause the SQL node on which it was executed to fail. Now such statements fail with an appropriate error message. (Bug #19550973)

* When a new data node started, API nodes were allowed to attempt to register themselves with the data node for executing transactions before the data node was ready. This forced the API node to wait an extra heartbeat interval before trying again.
To address this issue, a number of HA_ERR_NO_CONNECTION errors (Error 4009) that could be issued during this time have been changed to Cluster temporarily unavailable errors (Error 4035), which should allow API nodes to use new data nodes more quickly than before. As part of this fix, some errors which were incorrectly categorized have been moved into the correct categories, and some errors which are no longer used have been removed. (Bug #19524096, Bug #73758)

* When executing very large pushdown joins involving one or more indexes each defined over several columns, it was possible in some cases for the DBSPJ block (see The DBSPJ Block) in the NDB kernel to generate SCAN_FRAGREQ signals that were excessively large. This caused data nodes to fail when these could not be handled correctly, due to a hard limit in the kernel on the size of such signals (32K). This fix bypasses that limitation by breaking up SCAN_FRAGREQ data that is too large for one such signal, and sending the SCAN_FRAGREQ as a chunked or fragmented signal instead. (Bug #19390895)

* ndb_index_stat sometimes failed when used against a table containing unique indexes. (Bug #18715165)

* Queries against tables containing a CHAR(0) column failed with ERROR 1296 (HY000): Got error 4547 'RecordSpecification has overlapping offsets' from NDBCLUSTER. (Bug #14798022)

* In the NDB kernel, it was possible for a TransporterFacade object to reset a buffer while the data contained by the buffer was being sent, which could lead to a race condition. (Bug #75041, Bug #20112981)

* mysql_upgrade failed to drop and recreate the ndbinfo database and its tables as expected. (Bug #74863, Bug #20031425)

* Due to a lack of memory barriers, MySQL Cluster programs such as ndbmtd did not compile on POWER platforms. (Bug #74782, Bug #20007248)

* In some cases, when run against a table having an AFTER DELETE trigger, a DELETE statement that matched no rows still caused the trigger to execute. (Bug #74751, Bug #19992856)

* A basic requirement of the NDB storage engine's design is that the transporter registry not attempt to receive data (TransporterRegistry::performReceive()) from and update the connection status (TransporterRegistry::update_connections()) of the same set of transporters concurrently, due to the fact that the updates perform final cleanup and reinitialization of buffers used when receiving data. Changing the contents of these buffers while reading or writing to them could lead to "garbage" or inconsistent signals being read or written. During the course of work done previously to improve the implementation of the transporter facade, a mutex intended to protect against the concurrent use of the performReceive() and update_connections() methods on the same transporter was inadvertently removed. This fix adds a watchdog check for concurrent usage. In addition, update_connections() and performReceive() calls are now serialized together while polling the transporters, as illustrated conceptually in the sketch following this entry. (Bug #74011, Bug #19661543)
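Conceptually, the serialization added by this fix can be pictured as both operations taking the same lock, since update_connections() reinitializes the very buffers performReceive() reads. A simplified standard C++ illustration (not the NDB implementation):

    #include <mutex>

    class TransporterPoller {
        std::mutex poll_mutex;  // guards receive processing and state updates
    public:
        void performReceive() {
            std::lock_guard<std::mutex> guard(poll_mutex);
            // ... read incoming signal data into the receive buffers ...
        }
        void update_connections() {
            std::lock_guard<std::mutex> guard(poll_mutex);
            // ... final cleanup and reinitialization of those same buffers ...
        }
    };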
* ndb_restore failed while restoring a table which contained both a built-in conversion on the primary key and a staging conversion on a TEXT column. During staging, a BLOB table is created with a primary key column of the target type. However, a conversion function was not provided to convert the primary key values before loading them into the staging BLOB table, which resulted in corrupted primary key values in the staging BLOB table. While moving data from the staging table to the target table, the BLOB read failed because it could not find the primary key in the BLOB table. Now all BLOB tables are checked to see whether there are conversions on primary keys of their main tables. This check is done after all the main tables are processed, so that conversion functions and parameters have already been set for the main tables. Any conversion functions and parameters used for the primary key in the main table are now duplicated in the BLOB table. (Bug #73966, Bug #19642978)

* Corrupted messages to data nodes sometimes went undetected, causing a bad signal to be delivered to a block which aborted the data node. This failure in combination with disconnecting nodes could in turn cause the entire cluster to shut down. To keep this from happening, additional checks are now made when unpacking signals received over TCP, including checks for byte order, compression flag (which must not be used), and the length of the next message in the receive buffer (if there is one). Whenever two consecutive unpacked messages fail the checks just described, the current message is assumed to be corrupted. In this case, the transporter is marked as having bad data and no more unpacking of messages occurs until the transporter is reconnected. In addition, an entry is written to the cluster log containing the error as well as a hex dump of the corrupted message. (Bug #73843, Bug #19582925)

* Transporter send buffers were not updated properly following a failed send. (Bug #45043, Bug #20113145)

* ndb_restore --print_data truncated TEXT and BLOB column values to 240 bytes rather than 256 bytes.

* Disk Data: An update on many rows of a large Disk Data table could in some rare cases lead to node failure. In the event that such problems are observed with very large transactions on Disk Data tables, you can now increase the number of page entries allocated for disk page buffer memory by raising the value of the DiskPageBufferEntries data node configuration parameter added in this release. (Bug #19958804)

* Disk Data: When a node acting as a DICT master fails, the arbitrator selects another node to take over in place of the failed node. During the takeover procedure, which includes cleaning up any schema transactions which are still open when the master failed, the disposition of the uncommitted schema transaction is decided. Normally this transaction would be rolled back, but if it has completed a sufficient portion of a commit request, the new master finishes processing the commit. Until the fate of the transaction has been decided, no new TRANS_END_REQ messages from clients can be processed. In addition, since multiple concurrent schema transactions are not supported, takeover cleanup must be completed before any new transactions can be started. A similar restriction applies to any schema operations which are performed in the scope of an open schema transaction. The counter used to coordinate schema operations across all nodes is employed both during takeover processing and when executing any non-local schema operations. This means that starting a schema operation while its schema transaction is in the takeover phase causes this counter to be overwritten by concurrent uses, with unpredictable results. The scenarios just described were handled previously using a pseudo-random delay when recovering from a node failure.
Now a check is made for whether the new master has rolled forward or back any schema transactions remaining after the failure of the previous master, and the starting of new schema transactions, or the performing of operations using old ones, is avoided until takeover processing has cleaned up after the abandoned transaction. (Bug #19874809, Bug #74503)

* Disk Data: When a node acting as DICT master fails, it is still possible to request that any open schema transaction be either committed or aborted by sending this request to the new DICT master. In this event, the new master takes over the schema transaction and reports back on whether the commit or abort request succeeded. In certain cases, it was possible for the new master to be misidentified; that is, the request was sent to the wrong node, which responded with an error that was interpreted by the client application as an aborted schema transaction, even in cases where the transaction could have been successfully committed, had the correct node been contacted. (Bug #74521, Bug #19880747)

* Cluster Replication: When an NDB client thread made a request to flush the binary log using statements such as FLUSH BINARY LOGS or SHOW BINLOG EVENTS, this caused not only the most recent changes made by this client to be flushed, but all recent changes made by all other clients to be flushed as well, even though this was not needed. This behavior caused unnecessary waiting for the statement to execute, which could lead to timeouts and other issues with replication. Now such statements flush the most recent database changes made by the requesting thread only. As part of this fix, the status variables Ndb_last_commit_epoch_server, Ndb_last_commit_epoch_session, and Ndb_slave_max_replicated_epoch, originally implemented in MySQL Cluster NDB 7.4, are also now available in MySQL Cluster NDB 7.3. For descriptions of these variables, see MySQL Cluster Status Variables; for further information, see MySQL Cluster Replication Conflict Resolution. (Bug #19793475)

* Cluster Replication: It was possible using wildcards to set up conflict resolution for an exceptions table (that is, a table named using the suffix $EX), which should not be allowed. Now when a replication conflict function is defined using wildcard expressions, these are checked for possible matches so that, in the event that the function would cover an exceptions table, it is not set up for this table. (Bug #19267720)

* Cluster API: It was possible to delete an Ndb_cluster_connection object while there remained instances of Ndb using references to it. Now the Ndb_cluster_connection destructor waits for all related Ndb objects to be released before completing. (Bug #19999242) References: See also Bug #19846392.

* Cluster API: The buffer allocated by an NdbScanOperation for receiving scanned rows was not released until the NdbTransaction owning the scan operation was closed. This could lead to excessive memory usage in an application where multiple scans were created within the same transaction, even if these scans were closed at the end of their lifecycle, unless NdbScanOperation::close() was invoked with the releaseOp argument equal to true. Now the buffer is released whenever the cursor navigating the result set is closed with NdbScanOperation::close(), regardless of the value of this argument. (Bug #75128, Bug #20166585)
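The behavior around this fix can be seen in the following sketch, which drains a scan and then closes it; on releases without the fix, passing releaseOp = true was needed to free the buffer before closeTransaction():

    #include <NdbApi.hpp>

    // Consume a scan's result set, then release its receive buffer
    // immediately rather than holding it until the transaction closes.
    void consumeScan(NdbScanOperation* scan) {
        while (scan->nextResult(true) == 0) {
            // ... process the current row via the scan's getValue() results ...
        }
        scan->close(false /* forceSend */, true /* releaseOp */);
        // The owning NdbTransaction can remain open for further work
        // without the scan's buffer lingering.
    }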
* ClusterJ: The following errors were logged at the SEVERE level; they are now logged at the NORMAL level, as they should be: + Duplicate primary key + Duplicate unique key + Foreign key constraint error: key does not exist + Foreign key constraint error: key exists (Bug #20045455)

* ClusterJ: The com.mysql.clusterj.tie class logged a message at the INFO level for every single query, which was unnecessary and was affecting the performance of applications that used ClusterJ. (Bug #20017292)

* ClusterJ: ClusterJ reported a segmentation violation when an application closed a session factory while some sessions were still active. This was because MySQL Cluster allowed an Ndb_cluster_connection object to be deleted while some Ndb instances were still active, which might result in the usage of null pointers by ClusterJ. This fix prevents ClusterJ from closing a session factory while any of its sessions are still active. (Bug #19846392) References: See also Bug #19999242. |
||
fhajny
|
c112971377 | Fix PLIST for SunOS. | ||
jnemeth
|
7697633caf |
MySQL Cluster is a highly scalable, real-time, ACID-compliant
transactional database, combining 99.999% availability with the low TCO of open source. Designed around a distributed, multi-master architecture with no single point of failure, MySQL Cluster scales horizontally on commodity hardware to serve read and write intensive workloads, accessed via SQL and NoSQL interfaces. |