Changes in 1.4.4
================
- Introduce TimedPrioritizedRunnable base class to all commands
that go into InternalClusterService.updateTasksExecutor
- Ensure that we don't pass negative timeInQueue to writeVLong
- Aggregations: Prevent negative intervals in date_histogram
- Packaging: Add antlr and asm dependencies
Changes in 1.4.3
================
- Disable dynamic Groovy scripting by marking Groovy as not sandboxed.
Aggs:
- Add standard error bounds to extended_stats
- nested agg needs to reset root doc between segments.
- Fix handling of multiple buckets being emitted for the same parent doc
  id in nested aggregation
- In reverse nested aggregation, fix handling of the same child doc id
being processed multiple times.
- The parent filter of the nested aggregator isn't resolved correctly all
the time
- post collection the children agg should also invoke that phase on its
wrapped child aggs.
- Validate the aggregation order on unmapped terms in terms agg.
Allocation:
- Weight deltas must be absolute deltas
Core:
- don't throttle recovery indexing operations
- Fix Store.checkIntegrity() for lucene 3.x index files.
- Don't verify adler32 for lucene 3.x terms dict/terms index.
- Mapping update task back references already closed index shard
- Disable auto gen id optimization
- Verify the index state of concrete indices after alias resolution
- ignore_unavailable shouldn't ignore closed indices
- Terms filter lookup caching should cache values, not filters.
Discovery:
- publishing timeout to log at WARN and indicate pending nodes
- check index uuid when merging incoming cluster state into the local one
Engine:
- back port fix to a potential dead lock when failing engine during
COMMIT_TRANSLOG flush
Geo:
- Update GeoPolygonFilter to handle polygons crossing the dateline
- GeoPolygonFilter not properly handling dateline and pole crossing
- Removing unnecessary orientation enumerators
- Add optional left/right parameter to GeoJSON
- Feature/Fix for OGC compliant polygons failing with ambiguity
- Correct bounding box logic for GeometryCollection type
- Throw helpful exception for Polygons with holes outside of shell
- GIS envelope validation
Indices API:
- Fix to make GET Index API consistent with docs
- Fix wrong search stats groups
Internal:
- ClusterInfoService should wipe local cache upon unknown exceptions
- Log when upgrade starts and stops
- promptly cleanup updateTask timeout handler
- Avoid unnecessary utf8 conversion when creating ScriptDocValues for a
string field.
Logging:
- improve logging messages added in
- Better timeout logging on stalled recovery and exception
- add logging around gateway shard allocation
Mapping:
- Throw StrictDynamicMappingException exception
- Include currentFieldName into ObjectMapper errors
- Explicit _timestamp default null is set to now
- Using default=null for _timestamp field creates an index loss on restart
- Reencode transformed result with same xcontent
- serialize doc values settings for _timestamp
- Mapping With a null Default Timestamp Causes NullPointerException on Merge
Nodes Stats:
- Fix open file descriptors count on Windows
Parent/child:
- Fix concurrency issues of the _parent field data.
Percolator:
- Support encoded body as query string param consistently
- Fixed bug when using multi percolate api with routing
Plugins:
- Installation failed when directories are on different file systems
- NPE when plugins dir is inaccessible
Query cache:
- Remove query-cache serialization optimization.
- Queries are never cached when date math expressions are used (including
exact dates)
Query DSL:
- Expose max_determinized_states in regexp query, filter
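As an illustrative sketch of the newly exposed parameter (the field name and limit value here are assumptions, not from the changelog), `max_determinized_states` caps how many automaton states a regexp may expand to, guarding against pathologically expensive patterns:

```python
import json

# Hypothetical regexp query body; "name" is an example field and 10000 an
# example limit. The parameter bounds the determinized automaton size.
regexp_query = {
    "query": {
        "regexp": {
            "name": {
                "value": "es-[0-9]+",
                "max_determinized_states": 10000,
            }
        }
    }
}

body = json.dumps(regexp_query)
```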
Recovery:
- add a timeout to local mapping change check
- flush immediately after a remote recovery finishes (unless there are
ongoing ones)
REST:
- Add fielddata_fields to the REST spec
Scripting:
- Make script.groovy.sandbox.method_blacklist_patch truly append-only
- Make groovy sandbox method blacklist dynamically additive
- Add explainable script again
- Disallow method pointer expressions in Groovy scripting
- Make _score in groovy scripts comparable
Search:
- Make sure that named filters/queries defined in a wrapped
query/filters aren't lost
- Fix paging on strings sorted in ascending order.
- Function score and optional weight: avg score is wrong
Settings:
- Reset TieredMP settings only if the value actually changed
- cluster.routing.allocation.disk.threshold_enabled accepts wrong values
Snapshot status api:
- make sure headers are handed over to inner nodes request
Stats:
- Relax restrictions on filesystem size reporting in DiskUsage
Tribe node:
- remove closed indices from cluster state
Upgrade:
- Change wait_for_completion to default to true
- Fix version check in bytes to upgrade that spans major versions
Windows:
- makes elasticsearch.bat more friendly to automated processes
Version 3.5.1 Released February 17, 2015 (git commit 6c3457ee20c19ae492d29c490af6800e7e6a0774)
- Prevent core dump if the second argument to the quote() method
is anything but a hashref
[Greg Sabino Mullane]
(CPAN bug #101980)
- Better "support" for SQL_ASCII servers in the tests.
Allow env var DBDPG_TEST_ALWAYS_ENV to force use of DBI_DSN and DBI_USER in tests.
[Greg Sabino Mullane]
- Fix client_encoding detection on pre-9.1 servers
[Dagfinn Ilmari Mannsåker]
- Fix operator existence check in tests on pre-8.3 servers
[Dagfinn Ilmari Mannsåker]
- Documentation fix
[Stuart A Johnston]
- Fix pg_switch_prepared database handle documentation
[Dagfinn Ilmari Mannsåker]
- bug #4728 Incorrect headings in routine editor
- bug #4730 Notice while browsing tables when phpmyadmin pma database
exists, but not all the tables
- bug #4729 Display original field when using "Relational display column"
option and display column is empty
- bug #4734 Default values for binary fields do not support binary values
- bug #4736 Changing display options breaks query highlighting
- bug Undefined index submit_type
- bug #4738 Header lose align when scrolling in Firefox
- bug #4741 in ./libraries/Advisor.class.php#184 vsprintf(): Too few arguments
- bug #4743 Unable to move cursor with keyboard in filter rows box
- bug Incorrect link in doc
- bug #4745 Tracking does not handle views properly
- bug #4706 Schema export doesn't handle dots in db/table name
- bug #3935 Table Header not displayed correct (Safari 5.0.5 Mac)
- bug #4750 Disable renaming referenced columns
- bug #4748 Column name center-aligned instead of left-aligned in Relations
- bug Undefined constant PMA_DRIZZLE
- bug #4712 Wrongly positioned date-picker while Grid-Editing
- bug #4714 Forced ORDER BY for own sql statements
- bug #4721 Undefined property: stdClass::$version
- bug #4719 'only_db' not working
- bug #4700 Error text: Internal Server Error
- bug #4722 Incorrect width table summary when favorite tables is disabled
- bug #4710 Nav tree error after filtering the tables
- bug #4716 Collapse all in navigation panel is sometimes broken
- bug #4724 Cannot navigate in filtered table list
- bug #4717 Database navigation menu broken when resolution/screen is changing
- bug #4727 Collation column missing in database list when DisableIS is true
- bug Undefined index central_columnswork
- bug Undefined index favorite_tables
- bug #4694 js error on marking table as favorite in Safari (in private mode)
- bug #4695 Changing $cfg['DefaultTabTable'] doesn't update link and title
- bug Undefined index menuswork
- bug Undefined index navwork
- bug Undefined index central_columnswork
- bug #4697 Server Status refresh not behaving as expected
- bug Null argument in array_multisort()
- bug #4699 Navigation panel should not hide icons based on
'TableNavigationLinksMode'
- bug #4703 Unsaved schema page exported as pdf.pdf
- bug #4707 Call to undefined method PMA_Schema_PDF::dieSchema()
- bug #4702 URL is non RFC-2396 compatible in get_scripts.js.php
Changes are not available. From the commit log:
* Fix msec is not passed when calling db_timezone by rb_funcall().
* match callbacks_run inside event loop.
Other changes are Windows and cross build improvements.
What's new in psycopg 2.6
-------------------------
New features:
- Added support for large objects larger than 2GB. Many thanks to Blake Rouse
and the MAAS Team for the feature development.
- Python `time` objects with a tzinfo specified and PostgreSQL :sql:`timetz`
  data are converted into each other (:ticket:`#272`).
Bug fixes:
- Json adapter's `!str()` returns the adapted content instead of the `!repr()`
  (:ticket:`#191`).
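As a stdlib-only sketch of the tz-aware `time` values involved in the :sql:`timetz` conversion above (no database connection here; the offset shown is an arbitrary example), psycopg would adapt a value like this to and from :sql:`timetz`:

```python
from datetime import time, timezone, timedelta

# A tz-aware Python time; with the fix, psycopg converts values like this
# to and from PostgreSQL timetz. This sketch only builds the value itself.
cet = timezone(timedelta(hours=1))
t = time(13, 30, 15, tzinfo=cet)

# The UTC offset that a timetz round-trip would need to preserve.
offset = t.utcoffset()
```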
Security Fixes
* CVE-2015-0241 Buffer overruns in "to_char" functions.
* CVE-2015-0242 Buffer overrun in replacement printf family of functions.
* CVE-2015-0243 Memory errors in functions in the pgcrypto extension.
* CVE-2015-0244 An error in extended protocol message reading.
* CVE-2014-8161 Constraint violation errors can cause display of values in columns which the user would not normally have rights to see.
JSON and JSONB Unicode Escapes
Other Fixes and Improvements
* Cope with the non-ASCII Norwegian Windows locale name.
* Avoid data corruption when databases are moved to new tablespaces and back again.
* Ensure that UNLOGGED tables are correctly copied during ALTER DATABASE operations.
* Avoid deadlocks when locking recently modified rows.
* Fix two SELECT FOR UPDATE query issues.
* Prevent false negative for shortest-first regular expression matches.
* Fix false positives and negatives in tsquery contains operator.
* Fix namespace handling in xpath().
* Prevent row-producing functions from creating empty column names.
* Make autovacuum use per-table cost_limit and cost_delay settings.
* When autovacuum=off, limit autovacuum work to wraparound prevention only.
* Multiple fixes for logical decoding in 9.4.
* Fix transient errors on hot standby queries due to page replacement.
* Prevent duplicate WAL file archiving at end of recovery or standby promotion.
* Prevent deadlock in parallel restore of schema-only dump.
Changes in MySQL Cluster NDB 7.3.8 (5.6.22-ndb-7.3.8) (2015-01-21)
MySQL Cluster NDB 7.3.8 is a new release of MySQL Cluster, based on
MySQL Server 5.6 and including features from version 7.3 of the NDB
storage engine, as well as fixing a number of recently discovered bugs
in previous MySQL Cluster releases.
This release also incorporates all bugfixes and changes made in
previous MySQL Cluster releases, as well as all bugfixes and feature
changes which were added in mainline MySQL 5.6 through MySQL 5.6.22
(see Changes in MySQL 5.6.22 (2014-12-01)).
Functionality Added or Changed
* Performance: Recent improvements made to the multithreaded
scheduler were intended to optimize the cache behavior of its
internal data structures, with members of these structures placed
such that those local to a given thread do not overflow into a
cache line which can be accessed by another thread. Where required,
extra padding bytes are inserted to isolate cache lines owned (or
shared) by other threads, thus avoiding invalidation of the entire
cache line if another thread writes into a cache line not entirely
owned by itself. This optimization improved MT Scheduler
performance by several percent.
It has since been found that the optimization just described
depends on the global instance of struct thr_repository starting at
a cache line aligned base address as well as the compiler not
rearranging or adding extra padding to the scheduler struct; it was
also found that these prerequisites were not guaranteed (or even
checked). Thus this cache line optimization has previously worked
only when g_thr_repository (that is, the global instance) ended up
being cache line aligned only by accident. In addition, on 64-bit
platforms, the compiler added extra padding words in struct
thr_safe_pool such that attempts to pad it to a cache line aligned
size failed.
The current fix ensures that g_thr_repository is constructed on a
cache line aligned address, and the constructors were modified to
verify cache line aligned addresses where these are assumed by
design.
Results from internal testing show improvements in MT Scheduler
read performance of up to 10% in some cases, following these
changes. (Bug #18352514)
* Cluster API: Two new example programs, demonstrating reads and
writes of CHAR, VARCHAR, and VARBINARY column values, have been
added to storage/ndb/ndbapi-examples in the MySQL Cluster source
tree. For more information about these programs, including source
code listings, see NDB API Simple Array Example, and NDB API Simple
Array Example Using Adapter.
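The cache-line padding described in the scheduler item above can be sketched numerically (the 64-byte line size is an assumption for illustration; real sizes vary by platform):

```python
CACHE_LINE = 64  # assumed cache line size in bytes

def padded_size(struct_size: int, line: int = CACHE_LINE) -> int:
    # Round the structure size up to a whole number of cache lines so
    # that no two thread-local instances share a line, avoiding the
    # cross-thread cache line invalidation (false sharing) the fix
    # guards against.
    return -(-struct_size // line) * line
```

A structure of 1 byte would thus occupy one full line, and one of 65 bytes two lines, so a neighboring thread's writes never touch a partially shared line.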
Bugs Fixed
* The global checkpoint commit and save protocols can be delayed by
various causes, including slow disk I/O. The DIH master node
monitors the progress of both of these protocols, and can enforce a
maximum lag time during which the protocols are stalled by killing
the node responsible for the lag when it reaches this maximum. This
DIH master GCP monitor mechanism did not perform its task more than
once per master node; that is, it failed to continue monitoring
after detecting and handling a GCP stop. (Bug #20128256)
References: See also Bug #19858151.
* When running mysql_upgrade on a MySQL Cluster SQL node, the
expected drop of the performance_schema database on this node was
instead performed on all SQL nodes connected to the cluster. (Bug
#20032861)
* A number of problems relating to the fired triggers pool have been
fixed, including the following issues:
+ When the fired triggers pool was exhausted, NDB returned Error
218 (Out of LongMessageBuffer). A new error code 221 is added
to cover this case.
+ An additional, separate case in which Error 218 was wrongly
reported now returns the correct error.
+ Setting low values for MaxNoOfFiredTriggers led to an error
when no memory was allocated if there was only one hash
bucket.
+ An aborted transaction now releases any fired trigger records
it held. Previously, these records were held until its
ApiConnectRecord was reused by another transaction.
+ In addition, for the Fired Triggers pool in the internal
ndbinfo.ndb$pools table, the high value always equalled the
total, due to the fact that all records were momentarily
seized when initializing them. Now the high value shows the
maximum following completion of initialization.
(Bug #19976428)
* Online reorganization when using ndbmtd data nodes and with binary
logging by mysqld enabled could sometimes lead to failures in the
TRIX and DBLQH kernel blocks, or in silent data corruption. (Bug
#19903481)
References: See also Bug #19912988.
* The local checkpoint scan fragment watchdog and the global
checkpoint monitor can each exclude a node when it is too slow when
participating in their respective protocols. This exclusion was
implemented by simply asking the failing node to shut down, which
in case this was delayed (for whatever reason) could prolong the
duration of the GCP or LCP stall for other, unaffected nodes.
To minimize this time, an isolation mechanism has been added to
both protocols whereby any other live nodes forcibly disconnect the
failing node after a predetermined amount of time. This allows the
failing node the opportunity to shut down gracefully (after logging
debugging and other information) if possible, but limits the time
that other nodes must wait for this to occur. Now, once the
remaining live nodes have processed the disconnection of any
failing nodes, they can commence failure handling and restart the
related protocol or protocols, even if the failed node takes an
excessively long time to shut down. (Bug #19858151)
References: See also Bug #20128256.
* A watchdog failure resulted from a hang while freeing a disk page
in TUP_COMMITREQ, due to use of an uninitialized block variable.
(Bug #19815044, Bug #74380)
* Multiple threads crashing led to multiple sets of trace files being
printed and possibly to deadlocks. (Bug #19724313)
* When a client retried against a new master a schema transaction
that failed previously against the previous master while the latter
was restarting, the lock obtained by this transaction on the new
master prevented the previous master from progressing past start
phase 3 until the client was terminated, and resources held by it
were cleaned up. (Bug #19712569, Bug #74154)
* When using the NDB storage engine, the maximum possible length of a
database or table name is 63 characters, but this limit was not
always strictly enforced. This meant that a statement using a name
having 64 characters, such as CREATE DATABASE, DROP DATABASE, or ALTER
TABLE RENAME could cause the SQL node on which it was executed to
fail. Now such statements fail with an appropriate error message.
(Bug #19550973)
* When a new data node started, API nodes were allowed to attempt to
register themselves with the data node for executing transactions
before the data node was ready. This forced the API node to wait an
extra heartbeat interval before trying again.
To address this issue, a number of HA_ERR_NO_CONNECTION errors
(Error 4009) that could be issued during this time have been
changed to Cluster temporarily unavailable errors (Error 4035),
which should allow API nodes to use new data nodes more quickly
than before. As part of this fix, some errors which were
incorrectly categorised have been moved into the correct
categories, and some errors which are no longer used have been
removed. (Bug #19524096, Bug #73758)
* When executing very large pushdown joins involving one or more
indexes each defined over several columns, it was possible in some
cases for the DBSPJ block (see The DBSPJ Block) in the NDB kernel
to generate SCAN_FRAGREQ signals that were excessively large. This
caused data nodes to fail when these could not be handled
correctly, due to a hard limit in the kernel on the size of such
signals (32K). This fix bypasses that limitation by breaking up
SCAN_FRAGREQ data that is too large for one such signal, and
sending the SCAN_FRAGREQ as a chunked or fragmented signal instead.
(Bug #19390895)
* ndb_index_stat sometimes failed when used against a table
containing unique indexes. (Bug #18715165)
* Queries against tables containing a CHAR(0) column failed with
ERROR 1296 (HY000): Got error 4547 'RecordSpecification has
overlapping offsets' from NDBCLUSTER. (Bug #14798022)
* In the NDB kernel, it was possible for a TransporterFacade object
to reset a buffer while the data contained by the buffer was being
sent, which could lead to a race condition. (Bug #75041, Bug
#20112981)
* mysql_upgrade failed to drop and recreate the ndbinfo database and
its tables as expected. (Bug #74863, Bug #20031425)
* Due to a lack of memory barriers, MySQL Cluster programs such as
ndbmtd did not compile on POWER platforms. (Bug #74782, Bug
#20007248)
* In some cases, when run against a table having an AFTER DELETE
trigger, a DELETE statement that matched no rows still caused the
trigger to execute. (Bug #74751, Bug #19992856)
* A basic requirement of the NDB storage engine's design is that the
transporter registry not attempt to receive data
(TransporterRegistry::performReceive()) from and update the
connection status (TransporterRegistry::update_connections()) of
the same set of transporters concurrently, due to the fact that the
updates perform final cleanup and reinitialization of buffers used
when receiving data. Changing the contents of these buffers while
reading or writing to them could lead to "garbage" or inconsistent
signals being read or written.
During the course of work done previously to improve the
implementation of the transporter facade, a mutex intended to
protect against the concurrent use of the performReceive() and
update_connections() methods on the same transporter was
inadvertently removed. This fix adds a watchdog check for
concurrent usage. In addition, update_connections() and
performReceive() calls are now serialized together while polling
the transporters. (Bug #74011, Bug #19661543)
* ndb_restore failed while restoring a table which contained both a
built-in conversion on the primary key and a staging conversion on
a TEXT column.
During staging, a BLOB table is created with a primary key column
of the target type. However, a conversion function was not provided
to convert the primary key values before loading them into the
staging blob table, which resulted in corrupted primary key values
in the staging BLOB table. While moving data from the staging table
to the target table, the BLOB read failed because it could not find
the primary key in the BLOB table.
Now all BLOB tables are checked to see whether there are
conversions on primary keys of their main tables. This check is
done after all the main tables are processed, so that conversion
functions and parameters have already been set for the main tables.
Any conversion functions and parameters used for the primary key in
the main table are now duplicated in the BLOB table. (Bug #73966,
Bug #19642978)
* Corrupted messages to data nodes sometimes went undetected, causing
a bad signal to be delivered to a block which aborted the data
node. This failure in combination with disconnecting nodes could in
turn cause the entire cluster to shut down.
To keep this from happening, additional checks are now made when
unpacking signals received over TCP, including checks for byte
order, compression flag (which must not be used), and the length of
the next message in the receive buffer (if there is one).
Whenever two consecutive unpacked messages fail the checks just
described, the current message is assumed to be corrupted. In this
case, the transporter is marked as having bad data and no more
unpacking of messages occurs until the transporter is reconnected.
In addition, an entry is written to the cluster log containing the
error as well as a hex dump of the corrupted message. (Bug #73843,
Bug #19582925)
* Transporter send buffers were not updated properly following a
failed send. (Bug #45043, Bug #20113145)
* ndb_restore --print_data truncated TEXT and BLOB column values to
240 bytes rather than 256 bytes.
* Disk Data: An update on many rows of a large Disk Data table could
in some rare cases lead to node failure. In the event that such
problems are observed with very large transactions on Disk Data
tables you can now increase the number of page entries allocated
for disk page buffer memory by raising the value of the
DiskPageBufferEntries data node configuration parameter added in
this release. (Bug #19958804)
* Disk Data: When a node acting as a DICT master fails, the
arbitrator selects another node to take over in place of the failed
node. During the takeover procedure, which includes cleaning up any
schema transactions which are still open when the master failed,
the disposition of the uncommitted schema transaction is decided.
Normally this transaction would be rolled back, but if it has completed a
sufficient portion of a commit request, the new master finishes
processing the commit. Until the fate of the transaction has been
decided, no new TRANS_END_REQ messages from clients can be
processed. In addition, since multiple concurrent schema
transactions are not supported, takeover cleanup must be completed
before any new transactions can be started.
A similar restriction applies to any schema operations which are
performed in the scope of an open schema transaction. The counter
used to coordinate schema operations across all nodes is employed
both during takeover processing and when executing any non-local
schema operations. This means that starting a schema operation
while its schema transaction is in the takeover phase causes this
counter to be overwritten by concurrent uses, with unpredictable
results.
The scenarios just described were handled previously using a
pseudo-random delay when recovering from a node failure. Now we
check whether the new master has rolled forward or back any
schema transactions remaining after the failure of the previous
master and avoid starting new schema transactions or performing
operations using old transactions until takeover processing has
cleaned up after the abandoned transaction. (Bug #19874809, Bug
#74503)
* Disk Data: When a node acting as DICT master fails, it is still
possible to request that any open schema transaction be either
committed or aborted by sending this request to the new DICT
master. In this event, the new master takes over the schema
transaction and reports back on whether the commit or abort request
succeeded. In certain cases, it was possible for the new master to
be misidentified--that is, the request was sent to the wrong node,
which responded with an error that was interpreted by the client
application as an aborted schema transaction, even in cases where
the transaction could have been successfully committed, had the
correct node been contacted. (Bug #74521, Bug #19880747)
* Cluster Replication: When an NDB client thread made a request to
flush the binary log using statements such as FLUSH BINARY LOGS or
SHOW BINLOG EVENTS, this caused not only the most recent changes
made by this client to be flushed, but all recent changes made by
all other clients to be flushed as well, even though this was not
needed. This behavior caused unnecessary waiting for the statement
to execute, which could lead to timeouts and other issues with
replication. Now such statements flush the most recent database
changes made by the requesting thread only.
As part of this fix, the status variables
Ndb_last_commit_epoch_server, Ndb_last_commit_epoch_session, and
Ndb_slave_max_replicated_epoch, originally implemented in MySQL
Cluster NDB 7.4, are also now available in MySQL Cluster NDB 7.3.
For descriptions of these variables, see MySQL Cluster Status
Variables; for further information, see MySQL Cluster Replication
Conflict Resolution. (Bug #19793475)
* Cluster Replication: It was possible using wildcards to set up
conflict resolution for an exceptions table (that is, a table named
using the suffix $EX), which should not be allowed. Now when a
replication conflict function is defined using wildcard
expressions, these are checked for possible matches so that, in the
event that the function would cover an exceptions table, it is not
set up for this table. (Bug #19267720)
* Cluster API: It was possible to delete an Ndb_cluster_connection
object while there remained instances of Ndb using references to
it. Now the Ndb_cluster_connection destructor waits for all related
Ndb objects to be released before completing. (Bug #19999242)
References: See also Bug #19846392.
* Cluster API: The buffer allocated by an NdbScanOperation for
receiving scanned rows was not released until the NdbTransaction
owning the scan operation was closed. This could lead to excessive
memory usage in an application where multiple scans were created
within the same transaction, even if these scans were closed at the
end of their lifecycle, unless NdbScanOperation::close() was
invoked with the releaseOp argument equal to true. Now the buffer
is released whenever the cursor navigating the result set is closed
with NdbScanOperation::close(), regardless of the value of this
argument. (Bug #75128, Bug #20166585)
* ClusterJ: The following errors were logged at the SEVERE level;
they are now logged at the NORMAL level, as they should be:
+ Duplicate primary key
+ Duplicate unique key
+ Foreign key constraint error: key does not exist
+ Foreign key constraint error: key exists
(Bug #20045455)
* ClusterJ: The com.mysql.clusterj.tie class logged a message at the
INFO level for every single query, which was unnecessary and affected
the performance of applications that used ClusterJ. (Bug #20017292)
* ClusterJ: ClusterJ reported a segmentation violation when an
application closed a session factory while some sessions were still
active. This was because MySQL Cluster allowed an
Ndb_cluster_connection object to be deleted while some Ndb
instances were still active, which might result in the usage of
null pointers by ClusterJ. This fix stops that happening by
preventing ClusterJ from closing a session factory when any of its
sessions are still active. (Bug #19846392)
References: See also Bug #19999242.
shared-mime-info 1.4 (2015-02-05)
* Add glob for low-resolution videos from GoPro
* Add mime-type for partially downloaded files
* Use IANA registered mime-type for Debian packages
* Add another magic for OTF fonts
* Add support for Adobe PageMaker
* Remove the Apple iOS PNG variant
* Add *.dbk glob for DocBook
* Use IANA registered mime-type for Vivo
* Remove obsolete application/x-gmc-link mime-type
* Make application/x-wais-source a subclass of text/plain
* Flip application/smil+xml and application/smil type/alias
* Add Nintendo 64 ROM magic
* Add qpress archive support
* Add image/x-tiff-multipage mime-type
* Rename "Microsoft icon" to "Windows icon"
* Add magic for ODB files
* Use IANA registered text/markdown for Markdown
* New mimetype for SCons scripts as subclass of x-python
* Make application/pgp-encrypted a subclass of text/plain
* Associate *.qmltypes and *.qmlproject files with the text/x-qml mime type
* Add text/x-genie mime type for Genie source code
* Disable fdatasync() usage if PKGSYSTEM_ENABLE_FSYNC is set
* Skip mime database update if packages are older than cache
* Add "-n" option to update-mime-database to only update if "newer"
* The linked OpenSSL library for the MySQL Commercial Server has been updated from version 1.0.1j to version 1.0.1k.
* Support for the SSL 2.0 and SSL 3.0 protocols has been disabled because they provide weak encryption.
* yaSSL was upgraded to version 2.3.7.
* The valid date range of the SSL certificates in mysql-test/std_data has been extended to the year 2029.
* Bugs Fixed
=== 4.19.0 (2015-02-01)
* Make jdbc/sqlanywhere correctly set :auto_increment entry in schema hashes (jeremyevans)
* Add Model#cancel_action for canceling actions in before hooks, instead of having the hooks return false (jeremyevans)
* Support not setting @@wait_timeout on MySQL via :timeout=>nil Database option (jeremyevans)
* Add accessed_columns plugin, recording which columns have been accessed for a model instance (jeremyevans)
* Use correct migration version when using IntegerMigrator with :allow_missing_migration_files (blerins) (#938)
* Make Dataset#union, #intersect, and #except automatically handle datasets with raw SQL (jeremyevans) (#934)
* Add column_conflicts plugin to automatically handle columns that conflict with method names (jeremyevans) (#929)
* Add Model#get_column_value and #set_column_value to get/set column values (jeremyevans) (#929)
== v0.18.1 [2015-01-05] Michael Granger <ged@FaerieMUD.org>
Correct the minimum compatible Ruby version to 1.9.3. #199
== v0.18.0 [2015-01-01] Michael Granger <ged@FaerieMUD.org>
Bugfixes:
- Fix OID to Integer mapping (it is unsigned now). #187
- Fix possible segfault in conjunction with notice receiver. #185
Enhancements:
- Add an extensible type cast system.
- A lot of performance improvements.
- Return frozen String objects for result field names.
- Add PG::Result#stream_each and #stream_each_row as fast helpers for
the single row mode.
- Add Enumerator variant to PG::Result#each and #each_row.
- Add PG::Connection#conninfo and #hostaddr.
- Add PG.init_openssl and PG.init_ssl methods.
- Force zero termination for all text strings that are given to libpq.
It raises an ArgumentError if the string contains a null byte.
- Update Windows cross build to PostgreSQL 9.3.
From commit log:
* Substantial speed increase in lookups
* Extract method to list all hierarchy files
* Add Trusty cow to stable branch
* Remove current directory from Ruby load path.
* Fix a bug in the sorting logic, present since version 3.8.4, that can cause output to appear in the wrong order on queries that contain an ORDER BY clause, a LIMIT clause, and approximately 60 or more columns in the result set.
http://mail-index.netbsd.org/pkgsrc-users/2015/01/15/msg020923.html
Import libdbh2-5.0.16 as databases/libdbh2.
Disk-based hashes are a method of creating multidimensional binary trees on
disk. This library permits extending the database concept to a plethora of
electronic data, such as graphic information. With the multidimensional binary
tree it is possible to mathematically prove that access time to any
particular record is minimized (using the concept of critical points from
calculus), which provides the means to construct optimized databases for
particular applications.
(From: doc/html/DBH.html)
Disk Based Hashtables (DBH) 64-bit: a library to create and manage hash
tables residing on disk. Associations are made between keys and values
so that for a given key the value can be found and loaded into memory
quickly. Being disk based allows for large and persistent hashes. 64 bit
support allows for hashtables with sizes over 4 Gigabytes on 32 bit
systems. Quantified key generation allows for minimum access time on
balanced multidimensional trees.
New Features:
* Added the PRAGMA data_version command that can be used to determine if a database file has been modified by another process.
* Added the SQLITE_CHECKPOINT_TRUNCATE option to the sqlite3_wal_checkpoint_v2() interface, with corresponding enhancements to PRAGMA wal_checkpoint.
* Added the sqlite3_stmt_scanstatus() interface, available only when compiled with SQLITE_ENABLE_STMT_SCANSTATUS.
* The sqlite3_table_column_metadata() interface is enhanced to work correctly on WITHOUT ROWID tables and to check for the existence of a table if the column name parameter is NULL. The interface is now also included in the build by default, without requiring the SQLITE_ENABLE_COLUMN_METADATA compile-time option.
* Added the SQLITE_ENABLE_API_ARMOR compile-time option.
* Added the SQLITE_REVERSE_UNORDERED_SELECTS compile-time option.
* Added the SQLITE_SORTER_PMASZ compile-time option and SQLITE_CONFIG_PMASZ start-time option.
* Added the SQLITE_CONFIG_PCACHE_HDRSZ option to sqlite3_config() which makes it easier for applications to determine the appropriate amount of memory for use with SQLITE_CONFIG_PAGECACHE.
* The number of rows in a VALUES clause is no longer limited by SQLITE_LIMIT_COMPOUND_SELECT.
* Added the eval.c loadable extension that implements an eval() SQL function that will recursively evaluate SQL.
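As a sketch of the new PRAGMA data_version command listed above (assuming SQLite 3.8.8 or later, which ships with modern Python; not code from the SQLite release itself), a second connection can poll the pragma to detect commits made by other connections:

```python
import os
import sqlite3
import tempfile

# The data_version value seen by one connection changes only when a
# *different* connection commits a change to the same database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
watcher = sqlite3.connect(path)

writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

v1 = watcher.execute("PRAGMA data_version").fetchone()[0]
writer.execute("INSERT INTO t VALUES (1)")
writer.commit()
v2 = watcher.execute("PRAGMA data_version").fetchone()[0]
print(v1 != v2)  # True: the watcher sees the writer's commit
```

Note that a connection's own commits do not change the value it observes, which is what makes the pragma useful for cheap external-modification checks.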
Performance Enhancements:
* Reduce the number of memcpy() operations involved in balancing a b-tree, for a 3.2% overall performance boost.
* Improvements to cost estimates for the skip-scan optimization.
* The automatic indexing optimization is now capable of generating a partial index if that is appropriate.
Bug fixes:
* Ensure durability following a power loss with "PRAGMA journal_mode=TRUNCATE" by calling fsync() right after truncating the journal file.
* The query planner now recognizes that any column in the right-hand table of a LEFT JOIN can be NULL, even if that column has a NOT NULL constraint. Avoid trying to optimize out NULL tests in those cases.
* Make sure ORDER BY puts rows in ascending order even if the DISTINCT operator is implemented using a descending index.
* Fix data races that might occur under stress when running with many threads in shared cache mode where some of the threads are opening and closing connections.
* Fix obscure crash bugs found by american fuzzy lop.
* Work around a GCC optimizer bug (for gcc 4.2.1 on MacOS 10.7) that caused the R-Tree extension to compute incorrect results when compiled with -O3.
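A minimal illustration of the LEFT JOIN fix noted above (a stdlib sqlite3 sketch, not code from the release): a column from the right-hand table can be NULL in the join result even though it carries a NOT NULL constraint, so NULL tests on it must not be optimized away.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER, v TEXT NOT NULL);
    INSERT INTO a VALUES (1);
""")
# No matching row in b, so b.v comes back NULL despite NOT NULL.
row = conn.execute(
    "SELECT a.id, b.v FROM a LEFT JOIN b ON a.id = b.id"
).fetchone()
print(row)  # (1, None)
```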
Other changes:
* Disable the use of the strchrnul() C-library routine unless it is specifically enabled using the -DHAVE_STRCHRNULL compile-time option.
* Improvements to the effectiveness and accuracy of the likelihood(), likely(), and unlikely() SQL hint functions.
Python client driver for Apache Cassandra. This driver works exclusively
with the Cassandra Query Language v3 (CQL3) and Cassandra's native protocol.
Cassandra versions 1.2 through 2.1 are supported.
This release adds many new features which enhance PostgreSQL's flexibility, scalability and performance for many different types of database users, including improvements to JSON support, replication and index performance.
Changes in DBI 1.633 - 11th Jan 2015
Fixed selectrow_*ref to return undef on error in list context
instead of an empty list.
Changed t/42prof_data.t to be more informative
Changed $sth->{TYPE} to be NUMERIC in DBD::File drivers as per the
DBI docs. Note TYPE_NAME is now also available. [H.Merijn Brand]
Fixed compilation error on bleadperl due to DEFSV no longer being an lvalue
[Dagfinn Ilmari Mannsåker]
Added docs for escaping placeholders using a backslash.
Added docs for get_info(9000) indicating ability to escape placeholders.
Added multi_ prefix for DBD::Multi (Dan Wright) and ad2_ prefix for
DBD::AnyData2
* Fixed missing ReconnectLDAPObject._reconnect_lock when pickling
* Added ldap.controls.pagedresults which is pure Python implementation of
Simple Paged Results Control (see RFC 2696) and delivers the correct
result size
- bug Undefined index notices while configuring recent and favorite
tables
- bug #4687 Designer breaks without configuration storage
- bug #4686 Select elements flicker and selects something else
- bug #4689 Setup tool creates "pma__favorites" incorrectly
- bug #4685 Call to a member function isUserType() on a non-object
- bug #4691 Do not include console when no server is selected
- bug #4688 File permissions in archive
- bug #4692 Dynamic javascripts gives 500 when db selected
- bug Auto-configuration: tables were not created automatically
- bug #4677 Advanced feature checker does not check for favorite tables
feature
- bug #4678 Some of the data stored in configuration storage are not deleted
upon db or table delete
- bug #4679 Setup does not allow providing a name for favorites table
- bug #4680 Number of favorite table are not configurable in setup
- bug #4681 'Central columns table' field in setup does not have a
description
- bug #4318 Default connection collation and sorting
- bug #4683 Relational data is not properly updated on table rename
- bug #4655 Undefined index: collation_connection (second patch)
- bug #4682 4.3.3 & 4.3.4 Import sql created by mysqldump fails on foreign
keys
- bug #4676 Auto-configuration issues
- bug #4416 New lines are removed when grid editing (part two: TEXT)
- bug #4653 Always connection error was shown, on /setup at tab
"configuration storage"
- bug #4661 Drag and drop file import always fails
- bug #4651 don't open console with esc
- bug #4664 select min() displays 1 row, but reports the table amount of
rows returned
- bug #4666 Undefined indexes in table structure print view of a view
- bug #4663 Export missing back ticks for order table name
- bug #4668 Remove from central columns error
- bug #4670 CSV import reads both commas and values into first column after
first row
- bug #4642 phpmyadmin often fails to load due to specific load order
- bug #4671 Unable to move all columns
- bug #4645 Import of export created with mysqldump
- bug #4672 "Distinct values" does not page
- bug #4667 Consistency in borders
- bug #4658 Illegal string offset (Data_length, Index_length)
- bug #4655 Undefined index: collation_connection
- bug #4673 Delimiter causing page lock
- bug The "Recently used tables" setting should be with Nav panel
- bug #4647 Can't disable Favorites
- bug #4646 Version Check Broken
- bug #4630 AJAX request infinite loop
- bug #4649 Attributes field size smaller than others
- bug #4622 Cannot remove table ordering on a Mac
- bug Fix initial replication configuration
- bug Undefined index central_columnswork
- bug #4657 Don't have default blowfish_secret
- bug #4656 Some error popups fade away too quickly
- bug #4648 Consistency in borders
- bug $cfg['Error_Handler']['display'] no longer necessary
- bug #4659 Leading and trailing whitespace in column name
Version 3.5.0 Released January 6, 2015
- Allow "placeholder escaping" by the use of a backslash directly before it, e.g.
"SELECT 1 FROM jsontable WHERE foo \\? ?"
will contain a single placeholder, and the first question mark will be sent directly
to the backend to be parsed as an operator.
[Greg Sabino Mullane, Tim Bunce]
(CPAN bug #101030)
- Improve the workings of the ping() method, so it always tests for
a valid database backend and returns the correct true/false.
[Greg Sabino Mullane, with help from Andrew Gierth and Tim Bunce]
(CPAN bug #100648)
- Add get_info(9000) => 1 to indicate driver can escape placeholders.
[Tim Bunce]
- In tests, force the client_encoding to UTF8, skip tests that involve
characters not supported by the server_encoding
[Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>]
- Fix memory leak when selecting from arrays
[Dagfinn Ilmari Mannsåker, reported by Krystian Samp]
- Make get_info much more efficient and slightly simpler.
[Tim Bunce]
Berkeley DB is an embeddable database system that supports keyed access to
data. The software is distributed in source code form, and developers can
compile and link the source code into a single library for inclusion
directly in their applications.
Developers may choose to store data in any of several different storage
structures to satisfy the requirements of a particular application. In
database terminology, these storage structures and the code that operates on
them are called access methods. The library includes support for the
following access methods:
* B+tree: Stores keys in sorted order, using either a programmer-supplied
ordering function or a default function that does lexicographical
ordering of keys. Applications may perform equality or range searches.
* Hashing: Stores records in a hash table for fast searches based on
strict equality. Extended Linear Hashing modifies the hash function
used by the table as new records are inserted, in order to keep buckets
underfull in the steady state.
* Fixed and Variable-Length Records: Stores fixed- or variable-length
records in sequential order. Record numbers may be immutable or
mutable, i.e., permitting new records to be inserted between existing
records or requiring that new records be added only at the end of the
database.
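The keyed, disk-based access style described above can be sketched with Python's stdlib `dbm` module (this is not Berkeley DB itself; `dbm.dumb` is the always-available portable fallback, and which backend is used varies by platform):

```python
import dbm
import os
import tempfile

# Persistent keyed access: values stored under keys on disk survive
# closing and reopening the database.
path = os.path.join(tempfile.mkdtemp(), "store")
with dbm.open(path, "c") as db:  # 'c' creates the file if missing
    db[b"alpha"] = b"1"
    db[b"beta"] = b"2"

with dbm.open(path, "r") as db:  # reopen read-only; data persisted
    val = db[b"alpha"]
print(val)  # b'1'
```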
This package provides Berkeley DB 6, released under the GNU AGPL3.
packaged for wip by nros.
SQLite 3 driver for the Qore language DBI system. Using this module,
Qore programs can access SQLite 3 file and in-memory databases through
the Datasource and DatasourcePool classes.
packaged for wip by nros.
PostgreSQL driver for Qore. It makes it possible for Qore programs to
access PostgreSQL databases through the Datasource, DatasourcePool and
SQLStatement classes.
packaged for wip by nros.
MySQL driver for Qore's DBI system that allows Qore programs to access
MySQL databases through the Datasource, DatasourcePool and SQLStatement
classes.
packaged for wip by nros.
The FreeTDS DB driver for Qore allows Qore programs to communicate
with databases that use the TDS (Tabular Data Stream) protocol, such
as MS SQL Server and Sybase.
2.4.4
=====
* Backwards-incompatible changes
- The argument signature for the SqliteExtDatabase.aggregate() decorator
changed so that the aggregate name is the first parameter, and
the number of parameters is the second parameter. If no values are
specified, peewee will choose the name of the class and an un-specified
number of arguments (-1).
- The logic for saving a model with a composite key changed slightly.
Previously, if a model had a composite primary key and you called save(),
only the dirty fields would be saved.
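For context on the aggregate() change above, here is an analogous stdlib sketch (not peewee's API): SQLite custom aggregates are registered as a class with step()/finalize(), which is the machinery SqliteExtDatabase.aggregate() wraps.

```python
import sqlite3

class Avg:
    """A simple average aggregate."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def step(self, value):   # called once per input row
        self.total += value
        self.count += 1

    def finalize(self):      # returns the aggregate result
        return self.total / self.count if self.count else None

conn = sqlite3.connect(":memory:")
conn.create_aggregate("my_avg", 1, Avg)  # name, argument count, class
conn.execute("CREATE TABLE t (x REAL)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
avg = conn.execute("SELECT my_avg(x) FROM t").fetchone()[0]
print(avg)  # 2.0
```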
* Bugs fixed
- #462
- #465, add hook for disabling backref validation.
- #466, fix case-sensitive table names with migration module.
- #469, save only dirty fields.
* New features
- Lots of enhancements and cleanup to the playhouse.apsw_ext module.
- The playhouse.reflection module now supports introspecting indexes.
- Added a model option for disabling backref validation.
- Added support for the SQLite closure table extension.
- Added support for virtual fields, which act on dynamically-created
virtual table fields.
- Added a new example: a virtual table implementation that exposes Redis
as a relational database table.
- Added a module playhouse.sqlite_aggregates that contains a handful of
aggregates you may find useful when developing with SQLite.
- Small documentation updates here and there.
2.4.3
=====
* Bugs fixed
- #466, table names are case sensitive in the SQLite migrations module.
- #465, added option to disable backref validation.
- #462, use the schema name consistently with postgres reflection.
* New features
- New model Meta option to disable backref validation. See
validate_backrefs.
- Added documentation on ordering by calculated values.
- Added basic PyPy compatibility.
- Added logic to close cursors after they have been exhausted.
- Structured and consolidated database metadata introspection, including
improvements for introspecting indexes.
- Added support to prefetch for traversing up the query tree.
- Added introspection option to skip invalid models while introspecting.
- Added option to limit the tables introspected.
- Added closed connection detection to the MySQL connection pool.
- Enhancements to passing options to creating virtual tables with SQLite.
- Added factory method for generating Closure tables for use with the
transitive_closure SQLite extension.
- Added support for loading SQLite extensions.
- Numerous test-suite enhancements and new test-cases.
2.4.2
=====
* Bugs fixed
- #449, typo in the db_url extension, thanks to @malea for the fix.
- #457 and #458, fixed documentation deficiencies.
* New features
- Added support for importing data when using the DataSet extension.
- Added an encrypted diary app to the examples.
- Better index reconstruction when altering columns on SQLite databases
with the migrate module.
- Support for multi-column primary keys in the reflection module.
- Close cursors more aggressively when executing SELECT queries.
- A variety of fixes for read preference based node selection
- Avoided inclusion of getLastError in 2.6 writeConcern
- Correct handling of pass through params for collection_aggregate
- Improved error reporting in socket connect
- Public MONGOC_DEFAULT_CONNECTTIMEOUTMS
- update test app scripts with latest catalyst.pl
- tweak .sql to make autoincrement PKs work for sqlite3
- change Plugin::Static::Simple to Plugin::Static::Simple::ByClass
for test app (now matches SYNOPSIS)
- switch to File::Slurp::Tiny
0.17 2013-10-11
- fix SYNOPSIS to add note about default_view
0.16 2012-10-31
- more pod patches from Adam Mackler
- improve docs
0.813 (11.07.2014) - John Siracusa <siracusa@gmail.com>
* Added prepare_options parameter to get_objects_iterator_from_sql() and
get_objects_from_sql() Manager methods (RT 98014)
0.812 (11.07.2014) - John Siracusa <siracusa@gmail.com>
* Second attempt to address precision and scale mix-ups in auto-loaded
numeric column metadata.
* Fixed get_objects_count() with custom select lists and distinct
(Reported by Alexander Karelas)
* Fixed precision and scale references in the tutorial (RT 96079)
* Fixed an incorrect method name in the Rose::DB::Object::Helpers
documentation (RT 97602)
* Fixed a bug where save() parameters were not passed to map record
save() calls (RT 98730)
* Corrected a typo in the documentation (RT 94100)
* Updated tests to work with DBD::Pg versions greater than 2.x.x.
0.811 (03.21.2014) - John Siracusa <siracusa@gmail.com>
* Fixed a bug that prevented many-to-many map records from being saved
to the database (RT 93531)