Commit graph

6729 commits

Author SHA1 Message Date
richard
ff7c4d4cf3 update to pgadmin3 1.20.0
primary changes include:
Date       Dev Ver     Change details
---------- --- ------  --------------
2014-11-28 AV  1.20.0  Resolve inconsistent preservation of modified column
                       information when adding/editing a table from the table
                       dialog [Reported by Jasmine Liu, patch from Sanket
                       Mehta, reviewed by Akshay Joshi and me]
2014-11-22 GL  1.20.0  Add missing nodes to graphical explain plan [J.F. Oster]
2014-11-03 AV  1.20.0  Fix the Resource Groups dialog by separating the
                       statements that cannot be run in a single transaction
                       [Akshay Joshi]
2014-10-28 AV  1.20.0  Support for Slony-I 2.2+ with PostgreSQL 9.3+
                       [Neel Patel, Akshay Joshi, Sanket Mehta, Ashesh Vashi]
2014-10-28 AV  1.20.0  Support smallserial columns in Edit Data Window
                       [J.F. Oster]
2014-10-28 AV  1.20.0  Restore the user-specified size of the result grid in
                       Query windows, fixing an issue where columns were too
                       narrow on subsequent Explain executions [J.F. Oster]
2014-10-28 AV  1.20.0  Empty the Undo/Redo history every time a new file is
                       opened [J.F. Oster]
2014-10-28 AV  1.20.0  Added accelerator F8 for 'Execute to file' in the Query
                       window [Mads Jensen]
2014-10-28 AV  1.20.0  Beautify the query shown in the SQL pane for functions
                       with multiple arguments by adding new lines, with small
                       modifications by me [J.F. Oster]
2014-10-13 AV  1.20.0  Ensure the column-list check for the UPDATE OF trigger
                       syntax also works with PPAS 9.5+ [Sanket Mehta]
2014-09-27 AV  1.20.0  Proper saving of column widths in the server status
                       window [Dmitriy Olshevskiy]
2014-09-19 DP  1.20.0  Fix support for triggers with inline code on PPAS
                       [Sanket Mehta]
2014-08-09 AV  1.20.0  CHECK OPTION is applicable only to regular views, not
                       to materialized views
2014-08-09 GL  1.20.0  Prevent a crash during the update of the macro or
                       favourite list [Dmitriy Olshevskiy]
2014-07-20 AV  1.20.0  Allow quick injection of favourites by name
                       [J.F. Oster]
2014-07-20 AV  1.20.0  Fix handling of saving macros after pasting a query in
                       the box [Dmitriy Olshevskiy]
2014-07-14 GL  1.18.2  Fix SQL for GRANT on a sequence. Report from
                       liuyuanyuan.
2014-07-03 DP  1.20.0  Fix handling of char()[] columns in the Edit Grid
                       [Dmitriy Olshevskiy]
2014-07-02 GL  1.20.0  Support the new check_option parameter of views
2014-07-02 GL  1.20.0  Handle the 9.4 MOVE clause of ALTER TABLESPACE
2014-07-02 GL  1.20.0  Add a new action menu to refresh a materialized view
2014-07-02 GL  1.20.0  Support the new 9.4 columns in pg_stat_activity
2014-06-27 DP  1.20.0  Save search options for next time [Dmitriy Olshevskiy]
2014-06-25 DP  1.20.0  Ensure Favourite queries are only saved when the OK
                       button is pressed [Dmitriy Olshevskiy]
2014-06-18 DP  1.20.0  Fix an issue refreshing functions in PPAS packages
                       [Akshay Joshi]
2014-06-18 DP  1.20.0  Improve handling of lost connections [Ashesh Vashi]
2014-06-12 DP  1.20.0  Allow searching for database objects by comment and
                       definition content [J.F. Oster]
2014-06-10 DP  1.20.0  Add support for Resource Groups in PPAS 9.4 [Akshay
                       Joshi]
2014-06-07 DP  1.20.0  Add Save and Exit shortcut keys to the edit grid
                       [Fredrik de Vibe]
2014-05-28 DP  1.18.2  Fix escape handling in pgpass files [Akshay Joshi]
2014-05-04 GL  1.18.2  GQB forgot the materialized views. Report from Eduard
                       Szöcs.
2014-02-13 AV  1.20.0  Enable backward search in SQL Box using Shift + F3
                       [J.F. Oster]
2014-03-07 AV  1.18.2  Don't set log_min_messages from the debugger
2014-03-07 AV  1.18.2  Fix a crash on OSX when running Explain through a
                       shortcut in the query editor while the mouse is over an
                       existing shape and the popup showing the step
                       information is open. Report from Attila Soki
2014-02-26 AV  1.18.2  Using the GetOid() function instead of GetLong() for
                       Oid columns [Ian Barwick]
2014-02-13 DP  1.20.0  Allow more flexible selection/deselection of rows and
                       columns in Edit Grid [J.F. Oster]
2014-02-12 GL  1.20.0  Allow more options with the plain backup.
2014-02-11 DP  1.18.2  Fix SQL comments for inherited columns [J.F. Oster].
2014-01-27 AV  1.18.2  Flush the changes in the settings as soon as new server
                       is added [Kaarel Moppel]
2014-01-27 DP  1.18.2  Don't include obsolete config settings from <= 8.3 in the
                       config editor unless running on an appropriate server.
                       [Neel Patel].
2013-12-16 DP  1.20.0  Use a much smarter auto-sizing algorithm for the columns
                       in the Query Tool output grid [J.F. Oster].
2013-12-06 DP  1.20.0  Remember the last panel used on the Options dialogue,
                       and display the default panel if a group node is
                       selected on the tree [J.F. Oster]
2013-12-06 DP  1.18.2  Ensure the Type dialogue detects name changes [Timon].
2013-12-06 DP  1.18.2  Ensure report titles don't overflow the page width
                       [Dhiraj Chawla]
2013-11-25 DP  1.18.2  Ensure reports overflow the page width properly [Akshay
                       Joshi]
2013-11-20 DP  1.18.2  Add missing "port" option for SSH tunnels [Akshay Joshi].
2013-11-18 AV  1.18.2  Set the 32-bit PostgreSQL/EnterpriseDB PATH for 64-bit
                       pgAdmin III when the respective 64-bit applications are
                       not found but 32-bit ones are available [Dinesh Kumar].
2013-10-22 AV  1.18.2  Improved the debugger to work better with PPAS <= 9.2
2013-10-22 DP  1.18.2  Fix the handling of the "Valid Until" date/time on the
                       role dialogue on Mac [Dinesh Kumar].
2013-10-13 GL  1.20.0  Allow parallel dump with -j starting with 9.3
2015-08-05 15:29:00 +00:00
manu
f58ec1def7 Restore SSL functionality with OpenSSL 1.0.1p
With the OpenSSL 1.0.1p upgrade, DH parameters below 1024 bits are now
refused. MariaDB 5.5.43 hardcodes 512-bit DH parameters and will
therefore fail to run SSL connections with OpenSSL 1.0.1p

Port fix from mysql:
866b988a76
2015-08-03 14:51:29 +00:00
fhajny
18ff3771be Update databases/elasticsearch to 1.7.1.
elasticsearch 1.7.1
===================

Deprecations
  Geo:
    Deprecate validate_* and normalize_*

Enhancements
  Logging:
    Add -XX:+PrintGCDateStamps when using GC Logs

Bug fixes
  Aggregations:
    Fix cidr mask conversion issue for 0.0.0.0/0 and add tests
  Core:
    ThreadPools: schedule a timeout check after adding command to queue
  Internal:
    IndicesStore shouldn't try to delete index after deleting a shard
  Plugins:
    Plugin script: Fix ES_HOME with spaces
  Query DSL:
    Fix malformed query generation
    QueryString ignores maxDeterminizedStates when creating a WildcardQuery
    Fix RegexpQueryBuilder#maxDeterminizedStates
  Scripting:
    Consistently name Groovy scripts with the same content
  Search:
    _only_nodes preference parsed incorrectly
    Copy headers from the MLT request before calling the multi-termvectors API
  Settings:
    Add explicit check that we have reached the end of the settings stream
      when parsing settings
    Copy the classloader from the original settings when checking for prompts

elasticsearch 1.7.0
===================

Breaking changes
  Allocation:
    Default delayed allocation timeout to 1m from 0

New features
  Allocation:
    Optional Delayed Allocation on Node leave
  Recovery:
    Add basic recovery prioritization to GatewayAllocator
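The delayed-allocation feature above is driven by a per-index setting. As a hedged sketch of what enabling it might look like through the index-settings API (the index name and timeout value here are purely illustrative):

```
PUT /my_index/_settings
{
  "index.unassigned.node_left.delayed_timeout": "5m"
}
```

A timeout of "0" preserves the old immediate-reallocation behaviour; note the 1.7 breaking change above moved the default to "1m".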

Enhancements
  Allocation:
    Simplify ShardRouting and centralize move to unassigned
  Cluster:
    Remove scheduled routing
    Reset registeredNextDelaySetting on reroute
    Add Unassigned meta data
  Exceptions:
    Reduce the size of the XContent parsing exception
  Internal:
    Remove reroute with no reassign
    Mark store as corrupted instead of deleting state file on engine failure
  REST:
    Create Snapshot: remove _create from POST path to match PUT
    Add rewrite query parameter to the indices.validate_query API spec
  Search:
    Search preference based on node specification
  Snapshot/Restore:
    Backport to 1.7 - Snapshot info should contain version of elasticsearch
      that created the snapshot
    Add validation of snapshot FileInfo during parsing
  Term Vectors:
    Only load term statistics if required
  Upgrade:
    Upgrade groovy from 2.4.0 to 2.4.4

Bug fixes
  Allocation:
    Shard Started messages should be matched using an exact match
    Reroute after node join is processed
  Core:
    Throw LockObtainFailedException exception when we can't lock
      index directory
    Only clear open search ctx if the index is delete or closed via API
    Workaround deadlock on Codec initialisation
  Discovery:
    ZenDiscovery: #11960 failed to remove eager reroute from node join
  Highlighting:
    Fix exception for plain highlighter and huge terms for Lucene 4.x
  Index APIs:
    Use IndexWriter.hasPendingChanges() to detect if a flush is needed.
  Internal:
    Fix FieldDataTermsFilter.equals.
    Add a null-check for XContentBuilder#field for BigDecimals
  More Like This:
    Fix potentially unpositioned enum
  Packaging:
    Fix endless looping if starting fails
    Create PID_DIR in init.d script
  Percolator:
    Support filtering percolator queries by date using now
    Fail nicely if nested query with inner_hits is used in a percolator query
  Query DSL:
    CommonTermsQuery fix for ignored coordination factor
  Scroll:
    Append the shard top docs in such a way to prevent AOOBE
  Search:
    Free all pending search contexts if index is closed or removed
  Settings:
    Do not prompt for node name twice
  Shadow Replicas:
    Fail engine without marking it as corrupt when recovering on SharedFS
  Snapshot/Restore:
    Add url repository whitelist - backport of #11687 to 1.6 and 1.7
    Improve repository verification failure message
    Aborting snapshot might not abort snapshot of shards in very early
      stages in the snapshot process
    Improve logging of repository verification exceptions.
  Stats:
    Fix wrong reused file bytes in Recovery API reports
    Fix RecoveryState timestamps

Regression
  More Like This:
    Support for deprecated percent_terms_to_match REST parameter

elasticsearch 1.6.1
===================

Breaking changes
  Snapshot/Restore:
    Url repository should respect repo.path for file urls

Enhancements
  Exceptions:
    Reduce the size of the XContent parsing exception
  REST:
    Create Snapshot: remove _create from POST path to match PUT
    Add rewrite query parameter to the indices.validate_query API spec
  Snapshot/Restore:
    Add validation of snapshot FileInfo during parsing
    Add snapshot name validation logic to all snapshot operations
  Term Vectors:
    Only load term statistics if required
  Upgrade:
    Upgrade groovy from 2.4.0 to 2.4.4

Bug fixes
  Core:
    Throw LockObtainFailedException exception when we can't lock
      index directory
    Only clear open search ctx if the index is delete or closed via API
    Workaround deadlock on Codec initialisation
    Consistently add one more maxMerge in ConcurrentMergeSchedulerProvider
  Highlighting:
    Fix exception for plain highlighter and huge terms for Lucene 4.x
  Index APIs:
    Use IndexWriter.hasPendingChanges() to detect if a flush is needed.
  Internal:
    Fix FieldDataTermsFilter.equals.
    Add a null-check for XContentBuilder#field for BigDecimals
    AsyncShardFetch can hang if there are new nodes in cluster state
  Logging:
    Use task's class name if not a TimedPrioritizeRunnable
  More Like This:
    Fix potentially unpositioned enum
  Packaging:
    Fix endless looping if starting fails
    Postrm script should not fail
    Create PID_DIR in init.d script
  Percolator:
    Support filtering percolator queries by date using now
    Fail nicely if nested query with inner_hits is used in a percolator query
  Query DSL:
    CommonTermsQuery fix for ignored coordination factor
    Fix support for _name in some queries
  Scroll:
    Append the shard top docs in such a way to prevent AOOBE
  Search:
    Free all pending search contexts if index is closed or removed
  Settings:
    Do not prompt for node name twice
  Shadow Replicas:
    Return empty CommitID from ShadowEngine#flush
  Snapshot/Restore:
    Add url repository whitelist - backport of #11687 to 1.6 and 1.7
    Improve repository verification failure message
    Aborting snapshot might not abort snapshot of shards in very early
      stages in the snapshot process
    Improve logging of repository verification exceptions.
  Stats:
    Fix wrong reused file bytes in Recovery API reports
    Fix RecoveryState timestamps

Regression
  More Like This:
    Support for deprecated percent_terms_to_match REST parameter
    Add back support for deprecated percent_terms_to_match REST parameter
2015-08-03 11:57:43 +00:00
taca
1ae15f47da Update ruby-sequel to 4.25.0.
=== 4.25.0 (2015-08-01)

* Add Dataset#insert_conflict on PostgreSQL 9.5+, for upsert/insert ignore support using INSERT ON CONFLICT (jeremyevans)

* Support Dataset#group_rollup and #group_cube on PostgreSQL 9.5+ (jeremyevans)

* Automatically REORG tables when altering when using jdbc/db2 (karlhe) (#1054)

* Recognize constraint violation exceptions on swift/sqlite (jeremyevans)

* Recognize another check constraint violation exception message on SQLite (jeremyevans)

* Allow =~ and !~ to be used on ComplexExpressions (janko-m) (#1050)

* Support case sensitive SQL Server 2012 in MSSQL metadata queries (knut2) (#1049)

* Add Dataset#group_append, for appending to the existing GROUP BY clause (YorickPeterse) (#1047)

* Add inverted_subsets plugin, for creating an inverted subset method for each subset (celsworth) (#1042)

* Make Dataset#for_update not use the :read_only database when the dataset is executed (jeremyevans) (#1041)

* Add singular_table_names plugin, for changing Sequel to not pluralize table names by default (jeremyevans)

* PreparedStatement#prepare now raises an Error (jeremyevans)

* Clear delayed association pks when refreshing an object (jeremyevans)

* Add empty_array_consider_nulls extension to make Sequel consider NULL values when using IN/NOT IN with an empty array (jeremyevans)

* Make Sequel default to ignoring NULL values when using IN/NOT IN with an empty array (jeremyevans)

* Remove the deprecated firebird and informix adapters (jeremyevans)

* Make :collate option when creating columns literalize non-String values on PostgreSQL (jeremyevans) (#1040)

* Make dirty plugin notice when serialized column is changed (celsworth) (#1039)

* Allow prepared statements to use RETURNING (jeremyevans) (#1036)
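The Dataset#insert_conflict entry above wraps the SQL-level INSERT ... ON CONFLICT upsert. A minimal sketch of that SQL shape, shown here through Python's bundled sqlite3 rather than Sequel/PostgreSQL (SQLite has accepted compatible ON CONFLICT syntax since 3.24; the counters table is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

# The first insert creates the row; the second conflicts on the primary
# key and falls through to the DO UPDATE branch instead of raising.
sql = """INSERT INTO counters (name, hits) VALUES (?, 1)
         ON CONFLICT (name) DO UPDATE SET hits = hits + 1"""
conn.execute(sql, ("home",))
conn.execute(sql, ("home",))

print(conn.execute("SELECT hits FROM counters WHERE name = ?",
                   ("home",)).fetchone()[0])  # → 2
```

Replacing DO UPDATE with DO NOTHING gives the "insert ignore" behaviour the entry also mentions.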

=== 4.24.0 (2015-07-01)

* Allow class_table_inheritance plugin to support subclasses that don't add additional columns (QuinnHarris, jeremyevans) (#1030)

* Add :columns option to update_refresh plugin, specifying the columns to include in the RETURNING clause (celsworth) (#1029)

* Use column symbol key for auto validation unique errors if the unique index is on a single column (jeremyevans)

* Allow :timeout option to Database#listen in the postgres adapter to be a callable object (celsworth) (#1028)

* Add pg_inet_ops extension, for DSL support for PostgreSQL inet/cidr operators and functions (celsworth, jeremyevans) (#1024)

* Support :*_opts options in auto_validations plugin, for setting options for the underlying validation methods (celsworth, jeremyevans) (#1026)

* Support :delay_pks association option in association_pks to delay setting of associated_pks until after saving (jeremyevans)

* Make jdbc subadapters work if they issue queries while the subadapter is being loaded (jeremyevans) (#1022)

* Handle 64-bit auto incrementing primary keys in jdbc subadapters (DougEverly) (#1018, #1019)

* Remove the deprecated db2 and dbi adapters (jeremyevans)

* Make auto_validation plugin use :from=>:values option to setup validations on the underlying columns (jeremyevans)

* Add :from=>:values option to validation_helpers methods, for getting values from the values hash instead of a method call (jeremyevans)
2015-08-02 15:51:20 +00:00
taca
13d5a4d19c Update ruby-arel to 6.0.2.
=== 6.0.2 / 2014-07-11

* Bug fixes

  * Fix file permission problem on the gem package

=== 6.0.1 / 2014-07-10

* Bug fixes

  * Stop quoting LIMIT values.
2015-08-02 15:50:06 +00:00
adam
761b1a6ecd Changes 1.1.6:
* Security Fix: Connector/C++ 1.1.6 Commercial upgrades the linked OpenSSL library to version 1.0.1m which has been publicly reported as not vulnerable to CVE-2015-0286.
* The std::auto_ptr class template is deprecated in C++11, and its usage has been replaced with boost::scoped_ptr/shared_ptr.
* Connector/C++ now provides macros to indicate the versions of libraries against which it was built: MYCPPCONN_STATIC_MYSQL_VERSION and MYCPPCONN_STATIC_MYSQL_VERSION_ID (MySQL client library version, string and numeric), and MYCPPCONN_BOOST_VERSION (Boost library version, numeric).
* With defaultStatementResultType=FORWARD_ONLY and a row position after the last row, using getter methods such as getInt() or getString() resulted in a segmentation fault.
* For prepared statements, calling wasNull() before fetching data resulted in an assertion failure.
* Result sets from prepared statements were not freed.
* Connector/C++ failed to build against Boost-devel-1.41.0-25 on OEL6.
* Configuration failed if the MYSQL_CONFIG_EXECUTABLE option was specified and the MySQL installation path contained the characters -m. Installation failed if the build directory was not in the top source directory.
* For prepared statements, getString() did not return the fractional seconds part from temporal columns that had a fractional seconds part.
* For queries of the form SELECT MAX(bit_col) FROM table_with_bit_col, getString() returned an incorrect result.
* For Connector/C++ builds from source, make install failed if only the static library had been built without the dynamic library.
2015-08-01 09:35:52 +00:00
adam
da3fea3874 Release 1.0.8 comes almost immediately after 1.0.7, as a new issue involving the connection pool has been identified and fixed which impacts any application that relies upon consistent behavior of the .info dictionary on a connection that is undergoing reconnect attempts. Applications and libraries which make use of connection pool event handlers may benefit from this release, as it repairs the behavior of the .info dictionary and reduces the likelihood of stale connections being passed to the "checkout" handler. 2015-08-01 09:30:52 +00:00
wen
b74dfddd9b Update to 1.001032
Upstream changes:
1.001032  2015-06-04 15:03:38+00:00 UTC

- releasing as stable

1.001_031 2015-05-27 14:54:24+00:00 UTC (TRIAL RELEASE)

- Fix for an issue where, when inserting data into a database, the tables
    were sorted alphabetically rather than in dependency order. ( TBSliver++ )

1.001_030 2015-05-27 14:43:34+00:00 UTC (TRIAL RELEASE)

- use Test::TempDir::Tiny for better test parallelization and cleanup ( RsrchBoy++ )

1.001_029 2015-01-14 15:17:28+00:00 Europe/London

- fix for bugtracker pointing to gh, should be rt, added test to identify windows issues
2015-08-01 02:09:23 +00:00
wen
1d8e8d9a74 Update to 0.07043
Remove unneeded DEPENDS

Upstream changes:
0.07043  2015-05-13
        - Fix many_to_many bridges with overlapping foreign keys
        - Add option to allow extra columns in many_to_many link tables
        - Document how to add perltidy markers via filter_generated_code
        - Fix DB2 foreign-key introspection
        - Remove dependency on List::MoreUtils and Sub::Name
        - Ensure schema files are generated as binary files on Windows
        - Fix overwrite_modifications not overwriting if the table hasn't changed
        - Filter out disabled constraints and triggers for Oracle (GH#5)
2015-08-01 00:24:56 +00:00
wen
80456553fa Update to 0.001002
Upstream changes:
0.001002 May 27, 2015
    - Fix typos in comments and POD (RT#87140)
2015-07-31 23:54:50 +00:00
wen
058c0c00a3 Update to 0.004001
Upstream changes:
0.004001  2015-07-01 15:13:21-07:00 America/Los_Angeles
  - Set C3 on ResultClasses in addition to ResultSets

0.004000  2015-06-04 23:05:14-07:00 America/Los_Angeles
  - Add `-experimental` import
2015-07-31 23:44:59 +00:00
wen
eadf19784d Update to 2.031000
Upstream changes:
2.031000  2015-07-25 01:20:40-07:00 America/Los_Angeles
 - Add ::ResultSet::Bare (Closes GH#53)

2.030002  2015-07-14 13:43:47-07:00 America/Los_Angeles
 - Clarify docs for ::ResultSet::OneRow (Thanks for the tips Aran Deltac!)
   (Closes GH#48)
 - Add abstract to ::Row::JoinTable (Thanks Gregor Herrmann!)
   (Closes GH#49)
2015-07-31 23:39:32 +00:00
adam
7ab308b74d Changes 5.6.26:
* Security Fix: Due to the LogJam issue (https://weakdh.org/), OpenSSL has changed the Diffie-Hellman key length parameters for openssl-1.0.1n and up.
* Replication: When using a multi-threaded slave, each worker thread has its own queue of transactions to process. In previous MySQL versions, STOP SLAVE waited for all workers to process their entire queue. This logic has been changed so that STOP SLAVE first finds the newest transaction that was committed by any worker thread. Then, it waits for all workers to complete transactions older than that. Newer transactions are not processed. The new logic allows STOP SLAVE to complete faster in case some worker queues contain multiple transactions.
* Previously, the max_digest_length system variable controlled the maximum digest length for all server functions that computed statement digests. However, whereas the Performance Schema may need to maintain many digest values, other server functions such as MySQL Enterprise Firewall need only one digest per session. Increasing the max_digest_length value has little impact on total memory requirements for those functions, but can increase Performance Schema memory requirements significantly. To enable configuring digest length separately for the Performance Schema, its digest length is now controlled by the new performance_schema_max_digest_length system variable.
* Previously, changes to the validate_password plugin dictionary file (named by the validate_password_dictionary_file system variable) while the server was running required a restart for the server to recognize the changes. Now validate_password_dictionary_file can be set at runtime and assigning a value causes the named file to be read without a restart.

In addition, two new status variables are available. validate_password_dictionary_file_last_parsed indicates when the dictionary file was last read, and validate_password_dictionary_file_words_count indicates how many words it contains.
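In practice this means the dictionary can be swapped without a restart. A hedged sketch (the file path is hypothetical, and these variables require the validate_password plugin to be installed):

```sql
-- Point the plugin at a new dictionary; the file is re-read immediately.
SET GLOBAL validate_password_dictionary_file = '/etc/mysql/password-dictionary.txt';

-- The two new status variables report when the file was last parsed
-- and how many words it contains.
SHOW GLOBAL STATUS LIKE 'validate_password_dictionary_file_last_parsed';
SHOW GLOBAL STATUS LIKE 'validate_password_dictionary_file_words_count';
```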
* Bugs fixed
2015-07-30 14:39:18 +00:00
adam
b5efe57837 Changes 5.5.45:
* Security Fix: Due to the LogJam issue (https://weakdh.org/), OpenSSL has changed the Diffie-Hellman key length parameters for openssl-1.0.1n and up. OpenSSL has provided a detailed explanation at http://openssl.org/news/secadv_20150611.txt. To adopt this change in MySQL, the key length used in vio/viosslfactories.c for creating Diffie-Hellman keys has been increased from 512 to 2,048 bits.
* InnoDB: On Unix-like platforms, os_file_create_simple_no_error_handling_func and os_file_create_func opened files in different modes when innodb_flush_method was set to O_DIRECT.
* InnoDB: An assertion was raised when InnoDB attempted to dereference a NULL foreign key object.
* InnoDB: An index record was not found on rollback due to inconsistencies in the purge_node_t structure.
* The Spencer regex library used for the REGEXP operator could be subject to heap overflow in some circumstances.
* A buffer-overflow error could occur for mysqlslap during option parsing.
* GROUP BY or ORDER BY on a CHAR(0) NOT NULL column could lead to a server exit.
* mysql-systemd-start failed if datadir was set in /etc/my.cnf.
* On OS X 10.10 (Yosemite), mysqld failed to start automatically. The startup item has been replaced with a launchd job, which enables the preference pane checkbox for automatic startup to work again.
2015-07-30 14:36:34 +00:00
adam
d5894b09a9 Changes 3.8.11.1:
Restore an undocumented side-effect of PRAGMA cache_size: force the database schema to be parsed if the database has not been previously accessed.
Fix a long-standing problem in sqlite3_changes() for WITHOUT ROWID tables that was reported a few hours after the 3.8.11 release.
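For reference, the pragma in question is easy to poke at from Python's bundled sqlite3 (a negative value sets the cache size in KiB rather than in pages; the value here is arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Per the 3.8.11.1 note above, issuing the pragma also forces the schema
# to be parsed if the database has not been accessed yet.
conn.execute("PRAGMA cache_size = -2000")  # cap the page cache at ~2000 KiB

print(conn.execute("PRAGMA cache_size").fetchone()[0])  # → -2000
```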
2015-07-30 12:37:04 +00:00
adam
b9eb805f49 Changes 3.8.11:
Added the experimental RBU extension. Note that this extension is experimental and subject to change in incompatible ways.
Added the experimental FTS5 extension. Note that this extension is experimental and subject to change in incompatible ways.
Added the sqlite3_value_dup() and sqlite3_value_free() interfaces.
Enhance the spellfix1 extension to support ON CONFLICT clauses.
The IS operator is now able to drive indexes.
Enhance the query planner to permit automatic indexing on FROM-clause subqueries that are implemented by co-routine.
Disallow the use of "rowid" in common table expressions.
Added the PRAGMA cell_size_check command for better and earlier detection of database file corruption.
Added the matchinfo 'b' flag to the matchinfo() function in FTS3.
Improved fuzz-testing of database files, with fixes for problems found.
Add the fuzzcheck test program and automatically run this program using both SQL and database test cases on "make test".
Added the SQLITE_MUTEX_STATIC_VFS1 static mutex and use it in the Windows VFS.
The sqlite3_profile() callback is invoked (by sqlite3_reset() or sqlite3_finalize()) for statements that did not run to completion.
Enhance the page cache so that it can preallocate a block of memory to use for the initial set page cache lines. Set the default preallocation to 100 pages. Yields about a 5% performance increase on common workloads.
Miscellaneous micro-optimizations result in 22.3% more work for the same number of CPU cycles relative to the previous release. SQLite now runs twice as fast as version 3.8.0 and three times as fast as version 3.3.9. (Measured using cachegrind on the speedtest1.c workload on Ubuntu 14.04 x64 with gcc 4.8.2 and -Os. Your performance may vary.)
Added the sqlite3_result_zeroblob64() and sqlite3_bind_zeroblob64() interfaces.
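The "IS operator is now able to drive indexes" item above shows up directly in query plans. A small sketch through Python's bundled sqlite3 (current Pythons ship an SQLite well past 3.8.11; the table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER)")
conn.execute("CREATE INDEX t_a ON t (a)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

# Since 3.8.11, a clause like "a IS NULL" can be satisfied from the
# index on a instead of a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT a FROM t WHERE a IS NULL"
).fetchall()
print(plan)  # expect the plan detail to mention the t_a index, not a scan

print(conn.execute("SELECT a FROM t WHERE a IS NULL").fetchall())  # → [(None,)]
```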
2015-07-28 07:18:22 +00:00
bsiegert
335320f0f5 Add package for Parse::Dia::SQL. From David Gutteridge in PR pkg/50055.
Parse::Dia::SQL converts Dia class diagrams into SQL. Various SQL
dialects are supported.
2015-07-25 14:04:16 +00:00
jnemeth
4f799aa8cf sort 2015-07-23 06:59:02 +00:00
asau
9171187b59 + sql-workbench 2015-07-20 20:21:11 +00:00
asau
9515f50c3c Import SQL Workbench/J build 117 as databases/sql-workbench.
SQL Workbench/J is a free, DBMS-independent, cross-platform SQL
query tool. It is written in Java and should run on any
operating system that provides a Java Runtime Environment.

Its main focus is on running SQL scripts (either interactively
or as a batch) and export/import features. Graphical query
building or more advanced DBA tasks are not the focus and are
not planned.

Features:

 * Edit, insert and delete data directly in the query result.
 * Powerful export command to write text files (aka "CSV"),
   XML, HTML or SQL (including BLOB data). All user tables
   can be exported into a directory with a single command.
   Export files can be compressed "on-the-fly".
 * Powerful text, XML and spreadsheet import. A set of files
   (including compressed files) can be imported from a directory
   with a single command. Foreign key constraints are detected
   to insert the data in the correct order.
 * Compare two database schemas for differences. The XML output
   can be transformed into the appropriate SQL ALTER statements
   using XSLT.
 * Compare the data of two databases and generate the necessary
   SQL statements to migrate one to the other.
 * Supports running SQL scripts in batch mode.
 * Supports running in console mode.
 * Search text in procedure, view and other sources using a SQL
   command or a GUI.
 * Search for data across all columns in all tables using a SQL
   command or a GUI.
 * Reformatting (Pretty-Print) of SQL Statements.
 * Select rows from related tables according to their foreign
   key definitions.
 * Tooltips for INSERT statements to show the corresponding
   value or column.
 * Copy data directly between two database servers using a SQL
   command or a GUI.
 * Macros (aka aliases) for frequently used SQL statements.
 * Variable substitution in SQL statements including smart
   prompting for values (can be combined with macros).
 * Auto completion for tables and columns in SQL statements.
 * Display database objects and their definitions.
 * Display table source.
 * Display view, procedure and trigger source code.
 * Display foreign key constraints between tables.
 * Full support for BLOB data in query results, SQL statements,
   export and import.
 * SQLWorkbench/J is free for almost everyone (published under
   a modified Apache 2.0 license).
2015-07-20 20:19:16 +00:00
asau
dad95ce0ec Make the launch script executable, remove extra file. 2015-07-20 18:27:09 +00:00
asau
3de4ae79a7 + squirrelsql 2015-07-19 20:30:28 +00:00
asau
1108c239df Import SQuirreL SQL Client 3.6 as databases/squirrelsql.
SQuirreL SQL Client is a graphical SQL client written in Java
that will allow you to view the structure of a JDBC compliant
database, browse the data in tables, issue SQL commands etc.
2015-07-19 20:26:18 +00:00
asau
588031669d + jdbc-postgresql94 2015-07-19 13:16:27 +00:00
asau
698d4e0db1 Import jdbc-postgresql94-1201 as databases/jdbc-postgresql94.
Cloned from databases/jdbc-postgresql93.

This is a pure Java (Type IV) JDBC driver for the PostgreSQL
database.  It allows Java programs to connect to a PostgreSQL
database using standard, database independent Java code.

The driver provides a reasonably complete implementation of the
JDBC 4 specification in addition to some PostgreSQL specific
extensions.
2015-07-19 13:14:58 +00:00
asau
58342f8445 + jdbc-postgresql93 2015-07-19 11:03:16 +00:00
asau
f40b962622 Import jdbc-postgresql93-1103 as databases/jdbc-postgresql93.
Cloned from databases/jdbc-postgresql92.

This is a pure Java (Type IV) JDBC driver for the PostgreSQL
database.  It allows Java programs to connect to a PostgreSQL
database using standard, database independent Java code.

The driver provides a reasonably complete implementation of the
JDBC 4 specification in addition to some PostgreSQL specific
extensions.
2015-07-19 11:01:41 +00:00
fhajny
80d6be7e24 Backport support for the upcoming Erlang/OTP 18.0. 2015-07-18 07:36:35 +00:00
adam
3f03bcd668 OpenLDAP 2.4.41 Release (2015/06/21)
Fixed ldapsearch to explicitly flush its buffer (ITS-8118)
	Fixed libldap async connections (ITS-8090)
	Fixed libldap double free of request during abandon (ITS-7967)
	Fixed libldap error string for LDAP_X_CONNECTING (ITS-8093)
	Fixed libldap segfault in ldap_sync_initialize (ITS-8001)
	Fixed libldap ldif-wrap off by one error (ITS-8003)
	Fixed libldap handling of TLS in async mode (ITS-8022)
	Fixed libldap null pointer dereference (ITS-8028)
	Fixed libldap mutex handling with LDAP_OPT_SESSION_REFCNT (ITS-8050)
	Fixed slapd slapadd config db import of minimal frontend entry (ITS-8150)
	Fixed slapd slapadd onetime leak with -w (ITS-8014)
	Fixed slapd sasl auxprop crash with invalid config (ITS-8092)
	Fixed slapd syncrepl delta-mmr issue with overlays and slapd.conf (ITS-7976)
	Fixed slapd syncrepl mutex for cookie state (ITS-7968)
	Fixed slapd syncrepl memory leaks (ITS-8035)
	Fixed slapd syncrepl to free presentlist at end of refresh mode (ITS-8038)
	Fixed slapd syncrepl to streamline presentlist (ITS-8042)
	Fixed slapd syncrepl concurrency when CHECK_CSN is enabled (ITS-8120)
	Fixed slapd rootdn checks for hidden backends (ITS-8108)
	Fixed slapd segfault when using matched values control (ITS-8046)
	Fixed slapd-ldap reconnection behavior on remote failure (ITS-8142)
	Fixed slapd-mdb minor case typo (ITS-8049)
	Fixed slapd-mdb one-level search (ITS-7975)
	Fixed slapd-mdb heap corruption (ITS-7965)
	Fixed slapd-mdb crash after deleting in-use schema (ITS-7995)
	Fixed slapd-mdb minor code cleanup (ITS-8011)
	Fixed slapd-mdb to return errors when using incorrect env flags (ITS-8016)
	Fixed slapd-mdb to correctly update search candidates (ITS-8036, ITS-7904)
	Fixed slapd-mdb when there were more than 65535 aliases in scope (ITS-8103)
	Fixed slapd-mdb alias deref when objectClass is not indexed (ITS-8146)
	Fixed slapd-meta TLS initialization with ldaps URIs (ITS-8022)
	Fixed slapd-meta to have better error logging (ITS-8131)
	Fixed slapd-perl conversion to cn=config (ITS-8105)
	Fixed slapd-sql autocommit config variable (ITS-8129,ITS-6613)
	Fixed slapo-collect segfault (ITS-7797)
	Fixed slapo-constraint with 0 count constraint (ITS-7780,ITS-7781)
	Fixed slapo-deref with empty attribute list (ITS-8027)
	Fixed slapo-memberof to correctly reject invalid members (ITS-8107)
	Fixed slapo-sock result parser for CONTINUE (ITS-8048)
	Fixed slapo-syncprov synprov_matchops usage of test_filter (ITS-8013)
	Fixed slapo-syncprov segfault on disconnect/abandon (ITS-5452,ITS-8012)
	Fixed slapo-syncprov memory leak (ITS-8039)
	Fixed slapo-syncprov segfault on disconnect/abandon (ITS-8043)
	Fixed slapo-syncprov deadlock when autogroup is in use (ITS-8063)
	Fixed slapo-syncprov potential loss of changes when under load (ITS-8081)
	Fixed slapo-unique enforcement of uniqueness with manageDSAit control (ITS-8057)
	Build Environment
		Fixed libdb detection with gcc 5.x (ITS-8056)
		Fixed ftello reference for Win32 (ITS-8127)
		Enhanced contrib modules build paths (ITS-7782)
		Fixed contrib/autogroup internal operation identity (ITS-8006)
		Fixed contrib/autogroup to skip internal ops with accesslog (ITS-8065)
		Fixed contrib/passwd/sha2 compiler warning (ITS-8000)
		Fixed contrib/noopsrch compiler warning (ITS-7998)
		Fixed contrib/dupent compiler warnings (ITS-7997)
		Test suite: Added vrFilter test (ITS-8046)
	Contrib
		Added pbkdf2 sha256 and sha512 schemes (ITS-7977)
		Fixed autogroup modification callback responses (ITS-6970)
		Fixed nssov compare with usergroup (ITS-8079)
		Fixed nssov password change behavior (ITS-8080)
		Fixed nssov updated to 0.9.4 (ITS-8097)
	Documentation
		Added ldap_get_option(3) LDAP_FEATURE_INFO_VERSION information (ITS-8032)
		Added ldap_get_option(3) LDAP_OPT_API_INFO_VERSION information (ITS-8032)
		Fixed slapd-config(5), slapd.conf(5) tls_cipher_suite option (ITS-8099)
		Fixed slapd-meta(5), slapd-ldap(5) tls_cipher_suite option (ITS-8099)
		Fixed slapd-meta(5) fix minor typo (ITS-7769)
2015-07-17 14:49:05 +00:00
manu
573c685dca Upstream fix for ignored TLSDHParamFile option
From 6f120920d359d3b880c5c56bde4c1b91c3bedb01 Mon Sep 17 00:00:00 2001
From: Ben Jencks <ben@bjencks.net>
Date: Sun, 27 Jan 2013 18:27:03 -0500
Subject: [PATCH] ITS#7506 tls_o.c: Fix Diffie-Hellman parameter usage.

If a DHParamFile or olcDHParamFile is specified, then it will be used,
otherwise a hardcoded 1024 bit parameter will be used. This allows the use of
larger parameters; previously only 512 or 1024 bit parameters would ever be
used.

From cfeb28412c28ce9feeea6e6c055286f201bd0a34 Mon Sep 17 00:00:00 2001
From: Howard Chu <hyc@openldap.org>
Date: Sat, 7 Sep 2013 06:39:53 -0700
Subject: [PATCH] ITS#7506 fix prev commit

The patch unconditionally enabled DHparams, which is a significant
change of behavior. Reverting to previous behavior, which only enables
DH use if a DHparam file was configured.
2015-07-15 16:33:57 +00:00
fhajny
98d3ba897b Change position where GCC_REQD is defined to help with potential race
condition against bsd.prefs.mk.
2015-07-14 17:53:55 +00:00
manu
dfa1e43f54 Restore SSL functionality with OpenSSL 1.0.1p (revision bump)
This change just bumps PKGREVISION after patches were added
in mysql56-client/patches which impact mysql56-server.

For the record, the commit log of those patches:
> With the OpenSSL 1.0.1p upgrade, DH parameters below 1024 bits are now
> refused. MySQL hardcodes 512-bit DH parameters and will therefore
> fail to run SSL connections with OpenSSL 1.0.1p
>
> Apply fix from upstream:
> 866b988a76
2015-07-14 16:38:56 +00:00
manu
f818b4bf14 Restore SSL functionality with OpenSSL 1.0.1p
With the OpenSSL 1.0.1p upgrade, DH parameters below 1024 bits are now
refused. MySQL hardcodes 512-bit DH parameters and will therefore
fail to run SSL connections with OpenSSL 1.0.1p

Apply fix from upstream:
866b988a76
2015-07-14 12:09:24 +00:00
jperkin
92ec5df6e0 Update COMMENT to reflect reality. Bump PKGREVISION. 2015-07-13 21:36:35 +00:00
wiz
69745065ee Switch to py-boost and bump PKGREVISION. 2015-07-13 15:09:32 +00:00
wiz
51d021cae3 Comment out another one. 2015-07-12 19:02:03 +00:00
wiz
40bbad7ac6 Comment out dependencies of the style
{perl>=5.16.6,p5-ExtUtils-ParseXS>=3.15}:../../devel/p5-ExtUtils-ParseXS
since pkgsrc enforces the newest perl version anyway, these patterns
should always pick perl, but some tools (pkg_add) don't always do so
due to the design of the {,} syntax.

No effective change for the above reason.

Ok joerg
2015-07-12 18:56:06 +00:00
wen
6cbded313e Update to 2.030001
Update DEPENDS

Upstream changes:
2.030001  2015-07-10 22:38:58-07:00 America/Los_Angeles
 - Make ::Schema::Verifier aggregate errors instead of dying on first one

2.030000  2015-07-01 10:11:42-07:00 America/Los_Angeles
 - Add ::Row::OnColumnMissing (Thanks ZipRecruiter!)

2.029000  2015-06-27 14:16:31-07:00 America/Los_Angeles
 - Add ::ResultSet::OneRow (Thanks Aran Deltac!)

2.028000  2015-05-30 17:06:01-05:00 America/Chicago
 - Add ::Verifier::RelationshipColumnName (Thanks for the idea mcsnolte!)
 - Add ::ResultSet::Shortcut::Search (Closes GH#44 and GH#47) (Thanks moltar!)

2.027001  2015-05-16 11:47:15-05:00 America/Chicago
 - Fix missing POD in ::ResultSet::Explain

2.027000  2015-05-08 19:35:13-05:00 America/Chicago
 - Add ::Verifier::Parent

2.026000  2015-05-02 00:27:28-05:00 America/Chicago
 - Add new ::Schema::Verifier framework
 - ... including inaugural ::Verifier::C3
2015-07-12 03:21:34 +00:00
wen
fdd15273c9 Update to 0.082820
Update DEPENDS

Upstream changes:
0.082820 2015-03-20 20:35 (UTC)
    * Fixes
        - Protect destructors from rare but possible double execution, and
          loudly warn the user whenever the problem is encountered (GH#63)
        - Relax the 'self_result_object' argument check in the relationship
          resolution codepath, restoring exotic uses of inflate_result
          http://lists.scsys.co.uk/pipermail/dbix-class/2015-January/011876.html
        - Fix updating multiple CLOB/BLOB columns on Oracle
        - Fix exception on complex update/delete under a replicated setup
          http://lists.scsys.co.uk/pipermail/dbix-class/2015-January/011903.html
        - Fix uninitialized warnings on empty hashes passed to join/prefetch
          https://github.com/vanstyn/RapidApp/commit/6f41f6e48 and
          http://lists.scsys.co.uk/pipermail/dbix-class/2015-February/011921.html
        - Fix hang in t/72pg.t when run against DBD::Pg 3.5.0. The ping()
          implementation changes due to RT#100648 made an alarm() based
          timeout lock-prone.

    * Misc
        - Remove warning about potential side effects of RT#79576 (scheduled)
        - Various doc improvements (GH#35, GH#62, GH#66, GH#70, GH#71, GH#72)
        - Depend on newer Moo, to benefit from a safer runtime (RT#93004)
        - Fix intermittent failures in the LeakTracer on 5.18+
        - Fix failures of t/54taint.t on Windows with spaces in the $^X
          executable path (RT#101615)
2015-07-12 03:10:28 +00:00
gdt
805e52630a Add a TODO about a build issue.
When postgis is built as a non-root user, but postgresql was built as
root, postgis's use of pgxs.mk leads to install -o root, which fails.
2015-07-07 17:26:10 +00:00
gdt
3d5ead950a Fix PLIST for 2.1.8 update. 2015-07-07 16:19:19 +00:00
gdt
ffdb07ae7d Update to 2.1.8. Upstream changes:
- #3159, do not force a bbox cache on ST_Affine
  - #3018, GROUP BY geography sometimes returns duplicate rows
  - #3048, shp2pgsql - illegal number format when specific system locale set
  - #3094, Malformed GeoJSON inputs crash backend
  - #3104, st_asgml introduces random characters in ID field
  - #3155, Remove liblwgeom.h on make uninstall
  - #3177, gserialized_is_empty cannot handle nested empty cases
  - Fix crash in ST_LineLocatePoint
2015-07-07 15:49:48 +00:00
fhajny
f2ba5c66db Update databases/elasticsearch to 1.6.0
elasticsearch 1.6.0

Breaking changes
- Benchmark api: removed leftovers
- Wildcard field names in highlighting should only return fields that
  can be highlighted
- Remove unsafe options
- Fix FSRepository location configuration

Deprecations
- Deprecate async replication
- Query DSL: deprecate BytesFilterBuilder in favour of WrapperFilterBuilder
- Deprecate async replication
- Deprecate the More-Like-This API in favour of the MLT query
- Deprecate rivers
- Warning in documentation for deprecation of rivers
- Deprecated the thrift and memcached transports
- Deprecate the top_children query
- Plugins: deprecate addQuery methods that are going to be removed in 2.0
- Deprecate Groovy sandbox and related settings
- Deprecate delete-by-query in client/transport/action APIs too
- Deprecate filter option in PhraseSuggester collate

New features
- Add ability to specify a SizeBasedTriggeringPolicy for log configuration
- Bring back numeric_resolution
- API: Add response filtering with filter_path parameter
- Synced flush backport
- Move index sealing terminology to synced flush
- Seal indices for faster recovery
- Add support for fine-grained settings
- Validate API: provide more verbose explanation
- Add ability to prompt for selected settings on startup
- bootstrap.mlockall for Windows (VirtualLock)
- Allow shards on shared filesystems to be recovered on any node
- Add field stats api

For a full changelog see here:

  https://www.elastic.co/downloads/past-releases/elasticsearch-1-6-0

elasticsearch 1.5.2

Security
- Ensure URL expansion only works within the plugins directory

Enhancements
- Only flush for checkindex if we have uncommitted changes
- Update tree_level and precision parameter priorities
- Add merge conflicts to GeoShapeFieldMapper
- pom.xml updates to allow m2e integration to work correctly
- Fix to pom.xml to allow eclipse maven integration using m2e
- Eclipse fixes
- Implement retries for ShadowEngine creation
- Allow rebalancing primary shards on shared filesystems
- Update forbiddenapis to version 1.8

Bug fixes
- Fix _as_string output to only show when format specified
- _default_ mapping should be picked up from index template during
  auto create index
- Correct ShapeBuilder coordinate parser to ignore values in 3rd+ dimension
- Fix hole intersection at tangential coordinate
- Search: FielddataTermsFilter equality is based on hash codes
- Fix possible NPE in InternalClusterService$NotifyTimeout, the future
  field is set from a different thread
- Add missing hashCode method to RecoveryState#File
- Make GeoContext mapping idempotent
- Fixed an equality check in StringFieldMapper.
- Score value is 0 in _explanation with random_score query
- Fix updating templates.
- Analysis: fix ignoring tokenizer settings in SynonymTokenFilterFactory
- ShardTermVectorsService calls docFreq() on unpositioned TermsEnum
- FSTranslog#snapshot() can enter infinite loop

elasticsearch 1.5.1

Deprecations
- Warning in documentation for deprecation of rivers

Enhancements
- Core: also refresh if many deletes in a row use up too much
  version map RAM
- Use static logger name in Engine.java
- service.bat file should explicitly use the Windows find command.
- AbstractBlobContainer.deleteByPrefix() should not list all blobs

Bug fixes
- Core: Lucene merges should run on the target shard during recovery
- Sync translog before closing engine
- Fix validate_* merge policy for GeoPointFieldMapper
- Make sure size=0 works on the inner_hits level.
- Make sure inner hits also work for nested fields defined in object field
- Fix bug where parse error is thrown if a inner filter is used in
  a nested filter/query.
- Fix nested stored field support.
- Bugfix+unittest for unnecessary mapping refreshes caused by unordered
  fielddata settings
- Don't try to send a mapping refresh if there is no master
- Fix _field_names to be disabled on pre 1.3.0 indexes
- Transport: fix racing condition in timeout handling
- Children aggregation: Fix 2 bugs in children agg
- Fix wrong use of currentFieldName outside of a parsing loop
- Avoid NPE during query parsing
- Function score: apply min_score to sub query score if
  no function provided
- Function_score: undo "Remove explanation of query score from functions"
- State: Refactor state format to use incremental state IDs
- Recovery: RecoveryState.File.toXContent reports file length
  as recovered bytes
- Fail shard when index service/mappings fails to instantiate
- Don't reuse source index UUID on restore
- Snapshot/Restore: separate repository registration
- Automatically add "index." prefix to settings that are changed
  on restore...

elasticsearch 1.5.0

Breaking changes
- Aliases: Throw exception if index is null or missing when creating
  an alias
- Benchmark api: removed leftovers
- Resiliency: Throw exception if the JVM will corrupt data
- [ENGINE] Remove full flush / FlushType.NEW_WRITER
- Change behaviour of indices segments api to allow no indices
- Plugins: Don't overwrite plugin configuration when removing/upgrading
  plugins
- [QUERY] Remove lowercase_expanded_terms and locale options
- Recovery: RecoveryState clean up
- Scripting: cleanup ScriptService & friends
- Disable dynamic Groovy scripting by marking Groovy as not sandboxed
- Scripting: Script with _score: remove dependency of DocLookup and scorer

Deprecations
- Deprecate async replication
- Core: deprecate index.fail_on_merge_failure
- Mappings: Deprecate _analyzer and per query analyzers
- Deprecation: MLT Field Query
- Deprecated the thrift and memcached transports
- Core: add deprecation messages for delete-by-query

New features
- New aggregations feature - "PercentageScore" heuristic
  for significant_terms
- significant terms: add scriptable significance heuristic
- Cat API: show open and closed indices in _cat/indices
- Circuit Breakers: Add NoopCircuitBreaker used in NoneCircuitBreakerService
- Shadow replicas on shared filesystems
- MLT Query: Support for artificial documents
- Add time_zone setting for query_string
- Search: add format support for date range filter and queries
- Add min_score parameter to function score query to only match docs
  above this threshold
- Add inner hits to nested and parent/child queries
- Add index.data_path setting
- Term Vectors/MLT Query: support for different analyzers than default
  at the field

For a full changelog see here:

  https://www.elastic.co/downloads/past-releases/elasticsearch-1-5-0
2015-07-07 14:11:59 +00:00
joerg
b6b50914b6 Initialising an iterator from 0 is a GCCism. Avoid forcing a dependency
of the backend on a module by avoiding a dynamic cast. Don't use false
as string. Fix build with newer cTemplate.
2015-07-07 11:45:52 +00:00
joerg
afb93d9aba Lua 5.3 is not supported. 2015-07-07 11:43:47 +00:00
joerg
b80ab84d35 Restore patch for src/api/php/sql_relay.cpp:
Do not pretend that C++ is C.
2015-07-05 12:52:39 +00:00
joerg
11d2712a27 Remove USE_X11BASE and X11PREFIX. 2015-07-04 16:18:28 +00:00
fhajny
e70580bf72 Update databases/mongo-c-driver to 1.1.9.
1.1.9
 * This release fixes a common crash in 1.1.8, which itself was introduced
   while fixing a rare crash in 1.1.7

1.1.8
 * Crash freeing client after a replica set auth error.
 * Compile error strict C89 mode.
2015-07-04 15:02:58 +00:00
rodent
76bfea2553 Doesn't depend on a specific version of pbr now.
0.9.6
-----

* Fix ibmdb2 index name handling

0.9.5
-----

* Don't run the test if _setup() fails
* Correcting minor typo
* Fix .gitignore for .tox and .testrepository
* allow dropping fkeys with sqlite
* Add pretty_tox setup
* script: strip comments in SQL statements

0.9.4
-----

* Remove svn version tag setting

0.9.3
-----

* Ignore transaction management statements in SQL scripts
* Use native sqlalchemy 0.9 quote attribute with ibmdb2
* Don't add warnings filter on import
* Replace assertNotEquals with assertNotEqual
* Update requirements file matching global requ
* Work toward Python 3.4 support and testing
* pep8: mark all pep8 checks that currently fail as ignored

0.9.2
-----

* SqlScript: execute multiple statements one by one
* Make sure we don't throw away exception on SQL script failure
* Pin testtools to < 0.9.36
* Fix ibmdb2 unique constraint handling for sqlalchemy 0.9
* Fixes the auto-generated manage.py
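Two of the SqlScript fixes above (executing multiple statements one by one, and stripping comments from SQL statements) can be sketched in a few lines. This is an illustrative sketch against an in-memory SQLite database, not sqlalchemy-migrate's actual code, and `split_statements` is a hypothetical helper:

```python
import sqlite3

def split_statements(script):
    """Strip '--' line comments and split a SQL script on semicolons.

    A rough sketch only: a real parser must also handle semicolons
    inside string literals and block comments.
    """
    lines = []
    for line in script.splitlines():
        # Drop '--' line comments (naive: ignores '--' inside strings).
        code = line.split("--", 1)[0]
        if code.strip():
            lines.append(code)
    return [s.strip() for s in "\n".join(lines).split(";") if s.strip()]

conn = sqlite3.connect(":memory:")
script = """
-- create the table
CREATE TABLE t (id INTEGER);  -- inline comment
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
"""
# Execute one statement at a time, so a failure can be pinpointed.
for stmt in split_statements(script):
    conn.execute(stmt)
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2
```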
2015-07-02 01:46:11 +00:00
wiz
c44347dc8e Update to 2.6.1:
What's new in psycopg 2.6.1
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Lists consisting of only `None` are escaped correctly (ticket #285).
- Fixed deadlock in multithread programs using OpenSSL (ticket #290).
- Correctly unlock the connection after error in flush (ticket #294).
- Fixed ``MinTimeLoggingCursor.callproc()`` (ticket #309).
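The first fix concerns psycopg's adaptation of Python lists to SQL ARRAY literals, where a list containing only `None` previously escaped incorrectly. As a rough idea of what such an adapter does (a hand-rolled sketch, not psycopg's implementation; `sql_array_literal` is a hypothetical helper):

```python
def sql_array_literal(values):
    """Render a Python list as a SQL array literal.

    Illustrative sketch only; psycopg's real adapter handles many
    more types and server-side quoting rules.
    """
    def render(v):
        if v is None:
            return "NULL"           # None maps to SQL NULL
        if isinstance(v, (int, float)):
            return repr(v)
        # Naive string quoting: double any embedded single quotes.
        return "'" + str(v).replace("'", "''") + "'"
    return "ARRAY[" + ", ".join(render(v) for v in values) + "]"

print(sql_array_literal([None, None]))  # ARRAY[NULL, NULL]
```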
2015-07-01 09:59:06 +00:00
jaapb
487d6ab2d3 Recursive revbump associated with update of lang/ocaml to 4.02.2. 2015-06-30 11:52:55 +00:00
joerg
75b6985f2e Fix build with Perl 5.22. 2015-06-27 18:42:42 +00:00
rodent
0e3b6dbebf Version 0.7 - 2015-05-19
* Fix WINDOW and HAVING params order in Select
* Add window functions
* Add filter and within group to aggregate
* Add limitstyle with 'offset' and 'limit'
* Add Lateral
2015-06-26 16:13:51 +00:00
ryoon
76f7627381 Update to 3.0.4
Changelog:
MongoDB 3.0.4 is released
June 6, 2015

MongoDB 3.0.4 is out and is ready for production deployment. This release contains only fixes since 3.0.3, and is a recommended upgrade for all 3.0 users.

Fixed in this release:

    SERVER-17923 Creating/dropping multiple background indexes on the same collection can cause fatal error on secondaries
    SERVER-18079 Large performance drop with documents > 16k on Windows
    SERVER-18190 Secondary reads block replication
    SERVER-18213 Lots of WriteConflict during multi-upsert with WiredTiger storage engine
    SERVER-18316 Database with WT engine fails to recover after system crash
    SERVER-18475 authSchemaUpgrade fails when the system.users contains non MONGODB-CR users
    SERVER-18629 WiredTiger journal system syncs wrong directory
    SERVER-18822 Sharded clusters with WiredTiger primaries may lose writes during chunk migration


Announcing MongoDB 3.0 and Bug Hunt Winners
March 3, 2015

Today MongoDB 3.0 is generally available; you can download now.

Our community was critical to ensuring the quality of the release. Thank you to everyone who participated in our 3.0 Bug Hunt. From the submissions, we've selected winners based on the user impact and severity of the bugs found.

First Prize

Mark Callaghan, Member of Technical Staff, Facebook
During the 3.0 release cycle, Mark submitted 10 bug reports and collaborated closely with the MongoDB engineering team to debug the issues he uncovered. As a first place winner, Mark will receive a free pass to MongoDB World in New York City on June 1-2, including a front row seat to the keynote sessions. Mark was also eligible to receive a $1,000 Amazon gift card but opted to donate the award to a charity. We are donating $1,000 to CodeStarters.org in his name.

Honorable Mentions

Nick Judson, Conevity
Nick submitted SERVER-17299, uncovering excessive memory allocation on Windows when using "snappy" compression in WiredTiger.

Koshelyaev Konstantin, RTEC
Koshelyaev submitted SERVER-16664, which uncovered a memory overflow in WiredTiger when using "zlib" compression.

Tim Callaghan, Crunchtime!
In submitting SERVER-16867, Tim found an uncaught WriteConflict exception affecting replicated writes during insert-heavy workloads.

Nathan Arthur, PreEmptive Solutions
Nathan submitted SERVER-16724, which found an issue with how collection metadata is persisted.

Thijs Cadier, AppSignal
Thijs submitted SERVER-16197, which revealed a bug in the build system interaction with the new MongoDB tools.

Nick, Koshelyaev, Tim, Nathan, and Thijs will also receive tickets to MongoDB World in New York City on June 1-2 (with reserved front-row seat for keynote sessions), $250 Amazon Gift Cards, and MongoDB t-shirts.

Congratulations to the winners and thanks to everyone who downloaded, tested and gave feedback on the release candidates.
2015-06-23 13:31:24 +00:00
jperkin
134b6261dc Substitute hardcoded paths to compiler wrappers. Fixes CHECK_WRKREF builds. 2015-06-22 15:16:24 +00:00
taca
0361608c77 Update ruby-activerecord32 to 3.2.22.
## Rails 3.2.22 (Jun 16, 2015) ##

* No changes.
2015-06-22 13:52:10 +00:00
adam
f22360d8f9 Changes:
This release primarily fixes issues not successfully fixed in prior releases. It should be applied as soon as possible by all users of major versions 9.3 and 9.4. Other users should apply it at the next available downtime.

Crash Recovery Fixes:
Earlier update releases attempted to fix an issue in PostgreSQL 9.3 and 9.4 with "multixact wraparound", but failed to account for issues doing multixact cleanup during crash recovery. This could cause servers to be unable to restart after a crash. As such, all users of 9.3 and 9.4 should apply this update as soon as possible.
2015-06-18 14:46:13 +00:00
dholland
4be5ac2c21 Refresh the lists of man pages. Closes PR 38998.
(Because of the partitioning into client and server packages, the man
pages have to be partitioned to match; this interferes with the
configure script's handling of them so the list of pages ends up
hardcoded in these patches. And it seems the lists haven't been
updated since the first mysql 5.x package.)
2015-06-18 04:29:51 +00:00
wiz
bc21bc8ada Update to 1.48:
1.48 2015-06-12
  - Switched to a production version. (ISHIGAKI)

1.47_05 2015-05-08
  - Updated to SQLite 3.8.10

1.47_04 2015-05-02
  - Used MY_CXT instead of a global variable

1.47_03 2015-04-16
  - Added :all to EXPORT_TAGS in ::Constants

1.47_02 2015-04-16
  - Updated to SQLite 3.8.9
  - Added DBD::SQLite::Constants, from which you can import any
    "useful" constants into your applications.
  - Removed previous Cygwin hack as SQLite 3.8.9 compiles well again
  - Now create_function/aggregate accepts an extra bit
    (SQLITE_DETERMINISTIC) for better performance.
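DBD::SQLite is Perl, but the SQLITE_DETERMINISTIC idea above is easy to illustrate with Python's stdlib sqlite3 bindings, which expose an equivalent `deterministic` flag (Python 3.8+, SQLite 3.8.3+). Marking a function deterministic tells SQLite its result depends only on its arguments, so the engine may cache calls and allow the function in more contexts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Registering the function as deterministic promises SQLite that the
# result depends only on the arguments, enabling optimizations such
# as fewer repeated calls within a statement.
conn.create_function("double_it", 1, lambda x: x * 2, deterministic=True)

row = conn.execute("SELECT double_it(21)").fetchone()
print(row[0])  # 42
```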

1.47_01 2015-02-17
    *** (EXPERIMENTAL) CHANGES THAT MAY POSSIBLY BREAK YOUR OLD APPLICATIONS ***
    - Commented OPTIMIZE out of WriteMakefile (RT #94207).
      If your perl is not compiled with -O2, your DBD::SQLite may
      possibly behave differently under some circumstances.
      (This release is to find notable examples from CPAN Testers).
    - Set THREADSAFE to 0 under Cygwin to cope with an upstream
      regression since 3.8.7 (GH #7).

    - Updated to SQLite 3.8.8.2
    - Resolved #35449: Fast DBH->do (ptushnik, ISHIGAKI)
2015-06-14 16:51:32 +00:00
fhajny
41605d6106 Add php-mongo 2015-06-13 15:28:22 +00:00
fhajny
35ba63f1bc Set maintainership to bartoszkuzma, didn't notice his wip/php-mongo before. 2015-06-13 14:29:54 +00:00
fhajny
004a6aee2f Import the PECL mongo 1.6.9 module as databases/php-mongo.
Provides an interface for communicating with the Mongo database in PHP.
2015-06-13 13:48:37 +00:00
wiz
0982effce2 Recursive PKGREVISION bump for all packages mentioning 'perl',
having a PKGNAME of p5-*, or depending such a package,
for perl-5.22.0.
2015-06-12 10:48:20 +00:00
fhajny
c6b90866b3 Update databases/py-peewee to 2.6.1.
2.6.1
- #606, support self-referential joins with prefetch and aggregate_rows()
  methods.
- #588, accommodate changes in SQLite's PRAGMA index_list() return value.
- #607, fixed bug where pwiz was not passing table names to introspector.
- #591, fixed bug with handling of named cursors in older psycopg2 version.
- Removed some cruft from the APSWDatabase implementation.
- Added CompressedField and AESEncryptedField
- #609, #610, added Django-style foreign key ID lookup.
- Added support for Hybrid Attributes (cool idea courtesy of SQLAlchemy).
- Added upsert keyword argument to the Model.save() function (SQLite only).
- #587, added support for ON CONFLICT SQLite clause for INSERT and UPDATE
  queries.
- #601, added hook for programmatically defining table names.
- #581, #611, support connection pools with playhouse.db_url.connect().
- Added Contributing section section to docs.

2.6.0
- get_or_create() now returns a 2-tuple consisting of the model instance
  and a boolean indicating whether the instance was created. The function
  now behaves just like the Django equivalent.
- #574, better support for setting the character encoding on Postgresql
  database connections. Thanks @klen!
- Improved implementation of get_or_create().
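The new get_or_create() contract (return the instance plus a created flag, like Django's equivalent) can be sketched independently of peewee; here `table` is just a list of dicts standing in for a model table, and the function is hypothetical illustration code, not peewee's implementation:

```python
def get_or_create(table, defaults=None, **kwargs):
    """Sketch of the 2-tuple contract: (row, created)."""
    for row in table:
        if all(row.get(k) == v for k, v in kwargs.items()):
            return row, False            # found: not created
    row = dict(kwargs, **(defaults or {}))
    table.append(row)                    # miss: insert with defaults
    return row, True

users = [{"name": "alice", "active": True}]
_, created = get_or_create(users, name="alice")
print(created)  # False: already present
_, created = get_or_create(users, defaults={"active": False}, name="bob")
print(created)  # True: inserted
```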
2015-06-10 17:34:25 +00:00
fhajny
9b3da83b0e Update databases/mongo-c-driver to 1.1.7.
mongo-c-driver 1.1.7

- Thread-safe use of Cyrus SASL library.
- Experimental support for building with CMake and SASL.
- Faster reconnection to replica set with some hosts down.
- Crash iterating a cursor after reconnecting to a replica set.
- Unchecked errors decoding invalid UTF-8 in MongoDB URIs.
- Fix error reporting from mongoc_client_get_database_names.

mongo-c-driver 1.1.6

- mongoc_bulk_operation_execute now coalesces consecutive update operations
  into a single message to a MongoDB 2.6+ server, yielding huge performance
  gains. Same for remove operations. (Inserts were always coalesced.)
- Large numbers of insert operations are now properly batched according
  to number of documents and total data size.
- GSSAPI / Kerberos auth now works.
- The driver no longer tries three times in vain to reconnect to a primary,
  so socketTimeoutMS and connectTimeoutMS now behave closer to what you
  expect for replica sets with down members. A full fix awaits 1.2.0.
- mongoc_matcher_t now supports basic subdocument and array matching

mongo-c-driver 1.1.5

- The fsync and j write concern flags now imply acknowledged writes
- Prevent using fsync or j with conflicting w=0 write concern
- Obey socket timeout consistently in TLS/SSL mode
- Return an error promptly after a network hangup in TLS mode
- Prevent crash using SSL in FIPS mode
- Always return NULL from mongoc_database_get_collection_names on error
- Fix version check for GCC 5 and future versions of Clang
- Fix warnings and errors building on various platforms
- Add configure flag to enable/disable shared memory performance counters
- Minor docs improvements and fix links from C Driver docs to Libbson docs
2015-06-10 17:22:57 +00:00
wiedi
8f4529af03 fix buildlink 2015-06-10 01:43:00 +00:00
fhajny
03c25be005 Remove stale patch file. 2015-06-09 15:07:42 +00:00
fhajny
7ae271ace6 Update databases/py-barman to 1.4.1.
Version 1.4.1 - 05 May 2015
  * Fix for WAL archival stop working if first backup is EMPTY
    (Closes: #64)
  * Fix exception during error handling in Barman recovery (Closes:
    #65)
  * After a backup, limit cron activity to WAL archiving only
    (Closes: #62)
  * Improved robustness and error reporting of the backup delete
    command (Closes: #63)
  * Fix computation of WAL production ratio as reported in the
    show-backup command
  * Improved management of the xlogdb file, which is now correctly fsynced
    when updated. Also, the rebuild-xlogdb command now operates on a
    temporary new file, which overwrites the main one when finished.
  * Add unit tests for dateutil module compatibility
  * Modified Barman version following PEP 440 rules and added support
    of tests in Python 3.4
2015-06-09 15:06:39 +00:00
fhajny
98c0abb137 Update databases/redis to 3.0.2.
--[ Redis 3.0.2 ] Release date: 4 Jun 2015

Upgrade urgency: HIGH for Redis because of a security issue.
                 LOW for Sentinel.

* [FIX] Critical security issue fix by Ben Murphy: http://t.co/LpGTyZmfS7
* [FIX] SMOVE reply fixed when src and dst keys are the same. (Glenn Nethercutt)
* [FIX] Lua cmsgpack lib updated to support str8 type. (Sebastian Waisbrot)

* [NEW] ZADD support for options: NX, XX, CH. See new doc at redis.io.
        (Salvatore Sanfilippo)
* [NEW] Sentinel: CKQUORUM and FLUSHCONFIG commands backported.
        (Salvatore Sanfilippo and Bill Anderson)
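The new ZADD options have well-defined semantics: NX only adds new members, XX only updates existing ones, and CH makes the reply count changed members (added or updated) rather than only added ones. A minimal dict-based sketch of those rules, not Redis source code:

```python
def zadd(zset, mapping, nx=False, xx=False, ch=False):
    """Sketch of ZADD NX/XX/CH semantics on a plain dict
    (member -> score). Illustration only, not Redis code."""
    assert not (nx and xx), "NX and XX are mutually exclusive"
    added = changed = 0
    for member, score in mapping.items():
        exists = member in zset
        if (nx and exists) or (xx and not exists):
            continue  # NX skips updates; XX skips inserts
        if not exists:
            added += 1
            changed += 1
        elif zset[member] != score:
            changed += 1
        zset[member] = score
    # CH changes the reply from "number added" to "number changed".
    return changed if ch else added

z = {"a": 1.0}
print(zadd(z, {"a": 2.0, "b": 3.0}, ch=True))  # 2: a updated, b added
print(zadd(z, {"a": 9.0}, nx=True))            # 0: NX never updates
```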

--[ Redis 3.0.1 ] Release date: 5 May 2015

Upgrade urgency: LOW for Redis and Cluster, MODERATE for Sentinel.

* [FIX] Sentinel memory leak due to hiredis fixed. (Salvatore Sanfilippo)
* [FIX] Sentinel memory leak on duplicated instance. (Charsyam)
* [FIX] Redis crash on Lua reaching output buffer limits. (Yossi Gottlieb)
* [FIX] Sentinel flushes config on +slave events. (Bill Anderson)
2015-06-09 12:17:56 +00:00
fhajny
ca7739d134 Update databases/py-cassandra-driver to 2.5.1.
- Fix thread safety in DC-aware load balancing policy (PYTHON-297)
- Fix race condition in node/token rebuild (PYTHON-298)
- Set and send serial consistency parameter (PYTHON-299)
2015-06-09 11:07:13 +00:00
fhajny
82ef2c2028 Extend SunOS epoll quirk to fix build on recent Illumos platforms. 2015-06-08 16:27:35 +00:00
adam
39d7e51a32 Changes:
* File Permissions Fix
* Have pg_get_functiondef() show the LEAKPROOF property
* Make pushJsonbValue() function push jbvBinary type
* Allow building with threaded Python on OpenBSD
2015-06-08 12:52:07 +00:00
joerg
2e1b72b19b Revert unintentional change. 2015-06-07 22:42:49 +00:00
joerg
475cb5f448 Update PostgreSQL 9.3 to 9.3.8:
- Avoid failures while fsync'ing data directory during crash restart
- Fix pg_get_functiondef() to show functions' LEAKPROOF property, if set
- Remove configure's check prohibiting linking to a threaded libpython
  on OpenBSD
- Allow libpq to use TLS protocol versions beyond v1
2015-06-07 22:42:15 +00:00
gdt
6afd0c766c Update to 0.47.
Upstream changes are mainly housekeeping and minor build system
changes not visible to pkgsrc users, plus the usual bugfixes.  Some
procedures previously advertised for deprecation have been dropped,
and some new ones are added to the deprecation list, notably
dbcoltypes.
2015-06-07 11:24:16 +00:00
wiedi
51ab81958e Update hiredis to 0.13.1
### 0.13.1 - May 03, 2015

This is a bug fix release.
The new `reconnect` method introduced new struct members, which clashed with pre-defined names in pre-C99 code.
Another commit forced C99 compilation just to make it work, but of course this is not desirable for outside projects.
Other non-C99 code can now use hiredis as usual again.
Sorry for the inconvenience.

* Fix memory leak in async reply handling (Salvatore Sanfilippo)
* Rename struct member to avoid name clash with pre-c99 code (Alex Balashov, ncopa)

### 0.13.0 - April 16, 2015

This release adds a minimal Windows compatibility layer.
The parser, standalone since v0.12.0, can now be compiled on Windows
(and thus used in other client libraries as well)

* Windows compatibility layer for parser code (tzickel)
* Properly escape data printed to PKGCONF file (Dan Skorupski)
* Fix tests when assert() undefined (Keith Bennett, Matt Stancliff)
* Implement a reconnect method for the client context, this changes the structure of `redisContext` (Aaron Bedra)

### 0.12.1 - January 26, 2015

* Fix `make install`: DESTDIR support, install all required files, install PKGCONF in proper location
* Fix `make test` as 32 bit build on 64 bit platform

### 0.12.0 - January 22, 2015

* Add optional KeepAlive support
* Try again on EINTR errors
* Add libuv adapter
* Add IPv6 support
* Remove possibility of multiple close on same fd
* Add ability to bind source address on connect
* Add redisConnectFd() and redisFreeKeepFd()
* Fix getaddrinfo() memory leak
* Free string if it is unused (fixes memory leak)
* Improve redisAppendCommandArgv performance 2.5x
* Add support for SO_REUSEADDR
* Fix redisvFormatCommand format parsing
* Add GLib 2.0 adapter
* Refactor reading code into read.c
* Fix errno error buffers to not clobber errors
* Generate pkgconf during build
* Silence _BSD_SOURCE warnings
* Improve digit counting for multibulk creation
2015-06-05 14:07:27 +00:00
gdt
7792b74475 Update to 2.1.7.
Upstream changes (plus many bug fixes):

PostGIS 2.1.7
2015/03/30

PostGIS 2.1.6
2015/03/20

  - #3000, Ensure edge splitting and healing algorithms use indexes
  - #3048, Speed up geometry simplification (J.Santana @ CartoDB)
  - #3050, Speed up geometry type reading (J.Santana @ CartoDB)

PostGIS 2.1.5
2014/12/18

  - #2933, Speedup construction of large multi-geometry objects
2015-06-05 13:31:39 +00:00
taca
f53cec0304 Fix build problem on Ruby 2.2 and later. 2015-06-03 12:04:16 +00:00
taca
606c38fdd6 This package is work on Ruby 2.2. 2015-06-03 11:12:36 +00:00
taca
bd39d68aca Update ruby-sequel to 4.23.0.
=== 4.23.0 (2015-06-01)

* Make dataset.call_sproc(:insert) work in the jdbc adapter (flash-gordon) (#1013)

* Add update_refresh plugin, for refreshing a model instance when updating (jeremyevans)

* Add delay_add_association plugin, for delaying add_* method calls on new objects until after saving the object (jeremyevans)

* Add validate_associated plugin, for validating associated objects when validating the current object (jeremyevans)

* Make Postgres::JSONBOp#[] and #get_text return JSONBOp instances (jeremyevans) (#1005)

* Remove the fdbsql, jdbc/fdbsql, and openbase adapters (jeremyevans)

* Database#transaction now returns block return value if :rollback=>:always is used (jeremyevans)

* Allow postgresql:// connection strings as aliases to postgres://, for compatibility with libpq (jeremyevans) (#1004)

* Make Model#move_to in the list plugin handle out-of-range targets without raising an exception (jeremyevans) (#1003)

* Make Database#add_named_conversion_proc on PostgreSQL handle conversion procs for enum types (celsworth) (#1002)

=== 4.22.0 (2015-05-01)

* Deprecate the db2, dbi, fdbsql, firebird, jdbc/fdbsql, informix, and openbase adapters (jeremyevans)

* Avoid hash allocations and rehashes (jeremyevans)

* Don't silently ignore :jdbc_properties Database option in jdbc adapter (jeremyevans)

* Make tree plugin set reciprocal association for children association correctly (lpil, jeremyevans) (#995)

* Add Sequel::MassAssignmentRestriction exception, raised for mass assignment errors in strict mode (jeremyevans) (#994)

* Handle ODBC::SQL_BIT type as boolean in the odbc adapter, fixing boolean handling on odbc/mssql (jrgns) (#993)

* Make :auto_validations plugin check :default entry instead of :ruby_default entry for checking existence of default value (jeremyevans) (#990)

* Adapters should now set :default schema option to nil when adapter can determine that the value is nil (jeremyevans)

* Do not add a schema :max_length entry for a varchar(max) column on MSSQL (jeremyevans)

* Allow :default value for PostgreSQL array columns to be a ruby array when using the pg_array extension (jeremyevans) (#989)

* Add csv_serializer plugin for serializing model objects to and from csv (bjmllr, jeremyevans) (#988)

* Make Dataset#to_hash and #to_hash_groups handle single array argument for model datasets (jeremyevans)

* Handle Model#cancel_action in association before hooks (jeremyevans)

* Use a condition variable instead of busy waiting in the threaded connection pools on ruby 1.9+ (jeremyevans)

* Use Symbol#to_proc instead of explicit blocks (jeremyevans)

=== 4.21.0 (2015-04-01)

* Support :tsquery and :tsvector options in Dataset#full_text_search on PostgreSQL, for using existing tsquery/tsvector expressions (jeremyevans)

* Fix TinyTds::Error being raised when trying to cancel a query on a closed connection in the tinytds adapter (jeremyevans)

* Add GenericExpression#!~ for inverting =~ on ruby 1.9 (similar to inverting a hash) (jeremyevans) (#979)

* Add GenericExpression#=~ for equality, inclusion, and pattern matching (similar to using a hash) (jeremyevans) (#979)

* Add Database#add_named_conversion_proc on PostgreSQL to make it easier to add conversion procs for types by name (jeremyevans)

* Make Sequel.pg_jsonb return JSONBOp instances instead of JSONOp instances when passed other than Array or Hash (jeremyevans) (#977)

* Demodulize default root name in json_serializer plugin (janko-m) (#968)

* Make Database#transaction work in after_commit/after_rollback blocks (jeremyevans)
2015-06-03 11:11:15 +00:00
taca
b36143fbbb Update ruby-pg to 0.18.2.
== v0.18.2 [2015-05-14] Michael Granger <ged@FaerieMUD.org>

Enhancements:

- Allow URI connection string (thanks to Chris Bandy)

Bugfixes:

- Speedups and fixes for PG::TextDecoder::Identifier and quoting behavior
- Revert addition of PG::Connection#hostaddr [#202].
- Fix decoding of fractional timezones and timestamps [#203]
- Fixes for non-C99 compilers
- Avoid possible symbol name clash when linking against static libpq.
2015-06-03 10:48:18 +00:00
taca
2c50f9e07f Update ruby-moneta to 0.8.0.
0.8.0

* Rename Moneta::Adapters::Mongo to Moneta::Adapters::MongoOfficial
* Add Moneta::Adapters::MongoMoped
* Drop Ruby 1.8 support
2015-06-03 10:46:46 +00:00
tron
3d6d812983 Use "editline" package from pkgsrc to fix the build under NetBSD. 2015-06-03 07:13:30 +00:00
ryoon
46fdcd4c38 Fix typo in comment of patch. 2015-06-03 03:20:03 +00:00
taca
1271d2de33 Update ruby-do_sqlite3 to 0.10.16.
No change except version.
2015-06-01 12:56:23 +00:00
taca
c31d350d35 Update ruby-do_postgres to 0.10.16.
## 0.10.16 2015-05-17

* Fix compile issue with do_postgres on stock OS X Ruby
2015-06-01 12:55:56 +00:00
taca
0ef3392491 Update ruby-do_mysql to 0.10.16.
No change except version.
2015-06-01 12:54:54 +00:00
taca
372267d0cb Update ruby-data_objects to 0.10.16.
No change except version.
2015-06-01 12:54:14 +00:00
adam
5cfccbaa2d Changes 5.6.25:
Functionality Added or Changed
* MySQL Enterprise Firewall operates on parser states and does not work well together with the query cache, which circumvents the parser. MySQL Enterprise Firewall now checks whether the query cache is enabled. If so, it displays a message that the query cache must be disabled and does not load.

* my_print_defaults now masks passwords. To display passwords in cleartext, use the new --show option.

* MySQL distributions now include an innodb_stress suite of test cases. Thanks to Mark Callaghan for the contribution.

Bugs Fixed
* InnoDB; Partitioning: The CREATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows the correct table creation time for partitioned InnoDB tables. The CREATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows the correct partition creation time for a partition of partitioned InnoDB tables.

The UPDATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows when a partitioned InnoDB table was last updated by an INSERT, DELETE, or UPDATE. The UPDATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows when a partition of a partitioned InnoDB table was last updated.

* InnoDB: An assertion was raised on shutdown due to XA PREPARE transactions holding explicit locks.

* InnoDB: The strict_* forms of innodb_checksum_algorithm settings (strict_none, strict_innodb, and strict_crc32) caused the server to halt when a non-matching checksum was encountered, even though the non-matching checksum was valid. For example, with innodb_checksum_algorithm=strict_crc32, encountering a valid innodb checksum caused the server to halt. Instead of halting the server, a message is now printed to the error log and the page is accepted as valid if it matches an innodb, crc32 or none checksum.

* InnoDB: The memcached set command permitted a negative expire time value. Expire time is stored internally as an unsigned integer. A negative value would be converted to a large number and accepted. The maximum expire time value is now restricted to INT_MAX32 to prevent negative expire time values.
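The expire-time fix above amounts to a range check on the raw unsigned value. A minimal sketch (names hypothetical; this is not the actual InnoDB memcached code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: a negative client-supplied expire time wraps to a
 * huge value once stored in an unsigned field, so capping accepted values
 * at INT32_MAX (the "INT_MAX32" limit above) rejects wrapped negatives. */
static bool exptime_valid(uint32_t raw_exptime)
{
    return raw_exptime <= (uint32_t) INT32_MAX;
}
```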

* InnoDB: Removal of a foreign key object from the data dictionary cache during error handling caused the server to exit.

* InnoDB: SHOW ENGINE INNODB STATUS output showed negative reservation and signal count values due to a counter overflow error.

* InnoDB: Failure to check the status of a cursor transaction read-only option before reusing the cursor transaction for a write operation resulted in a server exit during a memcached workload.

* InnoDB: MDL locks taken by memcached clients caused a MySQL Enterprise Backup FLUSH TABLES WITH READ LOCK operation to hang.

* InnoDB: Estimates that were too low for the size of merge chunks in the result sorting algorithm caused a server exit.

* InnoDB: For full-text searches, the optimizer could choose an index that does not produce correct relevancy rankings.

* Partitioning: When creating a partitioned table, partition-level DATA DIRECTORY or INDEX DIRECTORY option values that contained an excessive number of characters were handled incorrectly.

* Partitioning: Executing an ALTER TABLE on a partitioned table on which a write lock was in effect could cause subsequent SQL statements on this table to fail.

* Replication: When binary logging was enabled, using stored functions and triggers resulting in a long running procedure that inserted many records caused the memory use to increase rapidly. This was due to memory being allocated per variable. The fix ensures that in such a situation, memory is allocated once and the same memory is reused.

* Replication: If an error was encountered while adding a GTID to the received GTID set, the log lock was not being correctly released. This could cause a deadlock.

more...
2015-06-01 08:15:05 +00:00
adam
4d48049824 Changes 5.5.44:
Bugs fixed:
* InnoDB; Partitioning: The CREATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows the correct table creation time for partitioned InnoDB tables. The CREATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows the correct partition creation time for a partition of partitioned InnoDB tables.

The UPDATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows when a partitioned InnoDB table was last updated by an INSERT, DELETE, or UPDATE. The UPDATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows when a partition of a partitioned InnoDB table was last updated.

* InnoDB: An assertion was raised on shutdown due to XA PREPARE transactions holding explicit locks.

* InnoDB: Removal of a foreign key object from the data dictionary cache during error handling caused the server to exit.

* InnoDB: SHOW ENGINE INNODB STATUS output showed negative reservation and signal count values due to a counter overflow error.

* InnoDB: Estimates that were too low for the size of merge chunks in the result sorting algorithm caused a server exit.

* SHOW VARIABLES mutexes were being locked twice, resulting in a server exit.

* A Provides rule in RPM .spec files misspelled “mysql-embedded” as “mysql-emdedded”.

* Under certain conditions, the libedit command-line library could write outside an array boundary and cause a client program crash.

* Host value matching for the grant tables could fail to use the most specific of values that contained wildcard characters.

* A user with a name of event_scheduler could view the Event Scheduler process list without the PROCESS privilege.

* SHOW GRANTS after connecting using a proxy user could display the password hash of the proxied user.

* For a prepared statement with an ORDER BY that refers by column number to a GROUP_CONCAT() expression that has an outer reference, repeated statement execution could cause a server exit.

* Loading corrupt spatial data into a MyISAM table could cause the server to exit during index building.

* Certain queries for the INFORMATION_SCHEMA TABLES and COLUMNS tables could lead to excessive memory use when there were large numbers of empty InnoDB tables.

* MySQL failed to compile using OpenSSL 0.9.8e.
2015-06-01 07:40:36 +00:00
taca
121a9dfd1b Make this package build on Ruby 2.2. 2015-05-31 15:25:41 +00:00
adam
26d5497a40 The PostgreSQL Global Development Group has released an update with multiple functionality and security fixes to all supported versions of the PostgreSQL database system, which includes minor versions 9.4.2, 9.3.7, 9.2.11, 9.1.16, and 9.0.20. The update contains a critical fix for a potential data corruption issue in PostgreSQL 9.3 and 9.4; users of those versions should update their servers at the next possible opportunity. 2015-05-27 13:27:27 +00:00
jnemeth
cc6147e399 Update to MySQL Cluster 7.4.6:
----

Changes in MySQL Cluster NDB 7.4.6 (5.6.24-ndb-7.4.6)

Bugs Fixed

    During backup, loading data from one SQL node followed by
repeated DELETE statements on the tables just loaded from a different
SQL node could lead to data node failures. (Bug #18949230)

    When an instance of NdbEventBuffer was destroyed, any references
to GCI operations that remained in the event buffer data list were
not freed. Now these are freed, and items from the event buffer data
list are returned to the free list when purging GCI containers.
(Bug #76165, Bug #20651661)

    When a bulk delete operation was committed early to avoid an
additional round trip, while also returning the number of affected
rows, but failed with a timeout error, an SQL node performed no
verification that the transaction was in the Committed state. (Bug
#74494, Bug #20092754)

    References: See also Bug #19873609.

Changes in MySQL Cluster NDB 7.4.5 (5.6.23-ndb-7.4.5)

Bugs Fixed

    In the event of a node failure during an initial node restart
followed by another node start, the restart of the affected
node could hang with a START_INFOREQ that occurred while invalidation
of local checkpoints was still ongoing. (Bug #20546157, Bug #75916)

    References: See also Bug #34702.

    It was found during testing that problems could arise when the
node registered as the arbitrator disconnected or failed during
the arbitration process.

    In this situation, the node requesting arbitration could never
receive a positive acknowledgement from the registered arbitrator;
this node also lacked a stable set of members and could not initiate
selection of a new arbitrator.

    Now in such cases, when the arbitrator fails or loses contact
during arbitration, the requesting node immediately fails rather
than waiting to time out. (Bug #20538179)

    DROP DATABASE failed to remove the database when the database
directory contained a .ndb file which had no corresponding table
in NDB. Now, when executing DROP DATABASE, NDB performs a check
specifically for leftover .ndb files, and deletes any that it finds.
(Bug #20480035)

    References: See also Bug #44529.

    The maximum failure time calculation used to ensure that normal
node failure handling mechanisms are given time to handle survivable
cluster failures (before global checkpoint watchdog mechanisms
start to kill nodes due to GCP delays) was excessively conservative,
and neglected to consider that there can be at most number_of_data_nodes
/ NoOfReplicas node failures before the cluster can no longer
survive. Now the value of NoOfReplicas is properly taken into
account when performing this calculation.  (Bug #20069617, Bug
#20069624)

    References: See also Bug #19858151, Bug #20128256, Bug #20135976.
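The corrected bound can be illustrated with a toy calculation (a sketch only; parameter names follow the changelog, the arithmetic is illustrative):

```c
/* At most number_of_data_nodes / NoOfReplicas nodes can fail before the
 * cluster can no longer survive, so a maximum failure-handling time
 * should scale with that bound, not with the total node count. */
static unsigned max_survivable_failures(unsigned data_nodes,
                                        unsigned no_of_replicas)
{
    return data_nodes / no_of_replicas;
}

static unsigned max_failure_time_ms(unsigned data_nodes,
                                    unsigned no_of_replicas,
                                    unsigned per_failure_ms)
{
    return max_survivable_failures(data_nodes, no_of_replicas) * per_failure_ms;
}
```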

    When performing a restart, it was sometimes possible to find
a log end marker which had been written by a previous restart, and
that should have been invalidated. Now when searching for the
last page to invalidate, the same search algorithm is used as when
searching for the last page of the log to read.  (Bug #76207, Bug
#20665205)

    During a node restart, if there was no global checkpoint
completed between the START_LCP_REQ for a local checkpoint and its
LCP_COMPLETE_REP it was possible for a comparison of the LCP ID
sent in the LCP_COMPLETE_REP signal with the internal value
SYSFILE->latestLCP_ID to fail. (Bug #76113, Bug #20631645)

    When sending LCP_FRAG_ORD signals as part of master takeover,
it is possible that the master is not synchronized with complete
accuracy in real time, so that some signals must be dropped. During
this time, the master can send a LCP_FRAG_ORD signal with its
lastFragmentFlag set even after the local checkpoint has been
completed. This enhancement causes this flag to persist until the
start of the next local checkpoint, which causes these signals to
be dropped as well.

    This change affects ndbd only; the issue described did not
occur with ndbmtd. (Bug #75964, Bug #20567730)

    When reading and copying transporter short signal data, it was
possible for the data to be copied back to the same signal with
overlapping memory. (Bug #75930, Bug #20553247)

    NDB node takeover code made the assumption that there would be
only one takeover record when starting a takeover, based on the
further assumption that the master node could never perform copying
of fragments. However, this is not the case in a system restart,
where a master node can have stale data and so needs to perform such
copying to bring itself up to date. (Bug #75919, Bug #20546899)

    Cluster API: A scan operation, whether it is a single table
scan or a query scan used by a pushed join, stores the result set
in a buffer. The maximum size of this buffer is calculated and
preallocated before the scan operation is started. This buffer may
consume a considerable amount of memory; in some cases we observed
a 2 GB buffer footprint in tests that executed 100 parallel scans
with 2 single-threaded (ndbd) data nodes.  This memory consumption
was found to scale linearly with additional fragments.

    A number of root causes, listed here, were discovered that led
to this problem:

	Result rows were unpacked to full NdbRecord format before
they were stored in the buffer. If only some but not all columns
of a table were selected, the buffer contained empty space (essentially
wasted).

	Due to the buffer format being unpacked, VARCHAR and
VARBINARY columns always had to be allocated for the maximum size
defined for such columns.

	BatchByteSize and MaxScanBatchSize values were not taken
into consideration as a limiting factor when calculating the maximum
buffer size.

    These issues became more evident in NDB 7.2 and later MySQL
Cluster release series. This was due to the fact that buffer size is
scaled by BatchSize, and that the default value for this parameter
was increased fourfold (from 64 to 256) beginning with MySQL Cluster
NDB 7.2.1.

    This fix causes result rows to be buffered using the packed
format instead of the unpacked format; a buffered scan result row
is now not unpacked until it becomes the current row. In addition,
BatchByteSize and MaxScanBatchSize are now used as limiting factors
when calculating the required buffer size.

    Also as part of this fix, refactoring has been done to separate
handling of buffered (packed) from handling of unbuffered result
sets, and to remove code that had been unused since NDB 7.0 or
earlier. The NdbRecord class declaration has also been cleaned up
by removing a number of unused or redundant member variables.  (Bug
#73781, Bug #75599, Bug #19631350, Bug #20408733)
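The buffer-size limiting described above can be sketched as follows (illustrative names; this is not the actual NDB API code):

```c
#include <stddef.h>

static size_t min_size(size_t a, size_t b)
{
    return a < b ? a : b;
}

/* The preallocated scan buffer is now bounded by BatchByteSize and
 * MaxScanBatchSize, rather than only by batch rows * packed row size. */
static size_t scan_buffer_bytes(size_t batch_rows, size_t packed_row_bytes,
                                size_t batch_byte_size,
                                size_t max_scan_batch_size)
{
    size_t need = batch_rows * packed_row_bytes;
    return min_size(need, min_size(batch_byte_size, max_scan_batch_size));
}
```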

-----

Changes in MySQL Cluster NDB 7.4.4 (5.6.23-ndb-7.4.4)

Bugs Fixed

    When upgrading a MySQL Cluster from NDB 7.3 to NDB 7.4, the
first data node started with the NDB 7.4 data node binary caused
the master node (still running NDB 7.3) to fail with Error 2301,
then itself failed during Start Phase 5. (Bug #20608889)

    A memory leak in NDB event buffer allocation caused an event
to be leaked for each epoch. (Due to the fact that an SQL node uses
3 event buffers, each SQL node leaked 3 events per epoch.) This
meant that a MySQL Cluster mysqld leaked an amount of memory that
was inversely proportional to the size of TimeBetweenEpochs; that
is, the smaller the value for this parameter, the greater the amount
of memory leaked per unit of time. (Bug #20539452)
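The leak rate described above is simple arithmetic: one event leaked per buffer per epoch, with 3 buffers per SQL node. A hypothetical back-of-the-envelope sketch:

```c
/* Leaked events per second for one SQL node: 3 event buffers, one event
 * leaked per epoch, and 1000 / TimeBetweenEpochs epochs per second
 * (TimeBetweenEpochs in milliseconds; integer arithmetic for brevity). */
static unsigned leaked_events_per_second(unsigned time_between_epochs_ms)
{
    const unsigned buffers_per_sql_node = 3;
    return buffers_per_sql_node * (1000 / time_between_epochs_ms);
}
```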

    The values of the Ndb_last_commit_epoch_server and
Ndb_last_commit_epoch_session status variables were incorrectly
reported on some platforms. To correct this problem, these values
are now stored internally as long long, rather than long. (Bug
#20372169)

    When restoring a MySQL Cluster from backup, nodes that failed
and were restarted during restoration of another node became
unresponsive, which subsequently caused ndb_restore to fail and
exit. (Bug #20069066)

    When a data node fails or is being restarted, the remaining
nodes in the same nodegroup resend to subscribers any data which
they determine has not already been sent by the failed node.
Normally, when a data node (actually, the SUMA kernel block) has
sent all data belonging to an epoch for which it is responsible,
it sends a SUB_GCP_COMPLETE_REP signal, together with a count, to
all subscribers, each of which responds with a SUB_GCP_COMPLETE_ACK.
When SUMA receives this acknowledgment from all subscribers, it
reports this to the other nodes in the same nodegroup so that they
know that there is no need to resend this data in case of a subsequent
node failure. If a node failed before all subscribers sent this
acknowledgement but before all the other nodes in the same nodegroup
received it from the failing node, data for some epochs could be
sent (and reported as complete) twice, which could lead to an
unplanned shutdown.

    The fix for this issue adds to the count reported by
SUB_GCP_COMPLETE_ACK a list of identifiers which the receiver can
use to keep track of which buckets are completed and to ignore any
duplicate reported for an already completed bucket.  (Bug #17579998)
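The fix's bookkeeping can be sketched with a bitmap of completed buckets (a hypothetical structure; the real SUMA code differs):

```c
#include <stdbool.h>
#include <stdint.h>

/* Track completed buckets for an epoch so that a duplicate completion
 * report for an already-completed bucket can be recognized and ignored. */
typedef struct {
    uint64_t completed; /* one bit per bucket, up to 64 buckets */
} epoch_tracker;

static bool mark_bucket_complete(epoch_tracker *t, unsigned bucket)
{
    uint64_t bit = (uint64_t) 1 << bucket;
    if (t->completed & bit)
        return false;   /* duplicate report: ignore */
    t->completed |= bit;
    return true;        /* first completion for this bucket */
}
```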

    The output format of SHOW CREATE TABLE for an NDB table containing
foreign key constraints did not match that for the equivalent InnoDB
table, which could lead to issues with some third-party applications.
(Bug #75515, Bug #20364309)

    An ALTER TABLE statement containing comments and a partitioning
option against an NDB table caused the SQL node on which it was
executed to fail. (Bug #74022, Bug #19667566)

    Cluster API: When a transaction is started from a cluster
connection, Table and Index schema objects may be passed to this
transaction for use. If these schema objects have been acquired
from a different connection (Ndb_cluster_connection object), they
can be deleted at any point by the deletion or disconnection of
the owning connection. This can leave a connection with invalid
schema objects, which causes an NDB API application to fail when
these are dereferenced.

    To avoid this problem, if your application uses multiple
connections, you can now set a check to detect sharing of schema
objects between connections when passing a schema object to a
transaction, using the NdbTransaction::setSchemaObjectOwnerChecks()
method added in this release. When this check is enabled, the schema
objects having the same names are acquired from the connection and
compared to the schema objects passed to the transaction. Failure
to match causes the application to fail with an error. (Bug #19785977)

    Cluster API: The increase in the default number of hashmap
buckets (DefaultHashMapSize API node configuration parameter) from
240 to 3480 in MySQL Cluster NDB 7.2.11 increased the size of the
internal DictHashMapInfo::HashMap type considerably.  This type
was allocated on the stack in some getTable() calls which could
lead to stack overflow issues for NDB API users.

    To avoid this problem, the hashmap is now dynamically allocated
from the heap. (Bug #19306793)

-----

Changes in MySQL Cluster NDB 7.4.3 (5.6.22-ndb-7.4.3)

Functionality Added or Changed

    Important Change; Cluster API: This release introduces an
epoch-driven Event API for the NDB API that supersedes the earlier
GCI-based model. The new version of this API also simplifies error
detection and handling, and monitoring of event buffer memory usage
has been improved.

    New event handling methods for Ndb and NdbEventOperation added
by this change include NdbEventOperation::getEventType2(),
pollEvents2(), nextEvent2(), getHighestQueuedEpoch(),
getNextEventOpInEpoch2(), getEpoch(), isEmptyEpoch(), and isErrorEpoch().
The pollEvents(), nextEvent(), getLatestGCI(), getGCIEventOperations(),
isConsistent(), isConsistentGCI(), getEventType(), getGCI(),
getLatestGCI(), isOverrun(), hasError(), and clearError() methods
are deprecated beginning with the same release.

    Some (but not all) of the new methods act as replacements for
deprecated methods; not all of the deprecated methods map to new
ones. The Event Class provides information as to which old methods
correspond to new ones.

    Error handling using the new API is no longer handled using
dedicated hasError() and clearError() methods, which are now
deprecated as previously noted. To support this change, TableEvent
now supports the values TE_EMPTY (empty epoch), TE_INCONSISTENT
(inconsistent epoch), and TE_OUT_OF_MEMORY (insufficient event
buffer memory).

    Event buffer memory management has also been improved with the
introduction of the get_eventbuffer_free_percent(),
set_eventbuffer_free_percent(), and get_eventbuffer_memory_usage()
methods, as well as a new NDB API error Free percent out of range
(error code 4123). Memory buffer usage can now be represented in
applications using the EventBufferMemoryUsage data structure, and
checked from MySQL client applications by reading the
ndb_eventbuffer_free_percent system variable.

    For more information, see the detailed descriptions for the
Ndb and NdbEventOperation methods listed. See also The Event::TableEvent
Type, as well as The EventBufferMemoryUsage Structure.
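The free-percent monitoring can be sketched as follows (illustrative only; the real API exposes get_eventbuffer_free_percent() and friends as described above):

```c
/* Free event-buffer memory as a percentage of the total, plus a check
 * against a threshold such as ndb_eventbuffer_free_percent. */
static unsigned eventbuffer_free_percent(unsigned long long used_bytes,
                                         unsigned long long total_bytes)
{
    return (unsigned) ((total_bytes - used_bytes) * 100 / total_bytes);
}

static int eventbuffer_below_threshold(unsigned long long used_bytes,
                                       unsigned long long total_bytes,
                                       unsigned threshold_pct)
{
    return eventbuffer_free_percent(used_bytes, total_bytes) < threshold_pct;
}
```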

    Additional logging is now performed of internal states occurring
during system restarts such as waiting for node ID allocation and
master takeover of global and local checkpoints. (Bug #74316, Bug
#19795029)

    Added the MaxParallelCopyInstances data node configuration
parameter. In cases where the parallelism used during restart copy
phase (normally the number of LDMs up to a maximum of 16) is
excessive and leads to system overload, this parameter can be used
to override the default behavior by reducing the degree of parallelism
employed.

    Added the operations_per_fragment table to the ndbinfo information
database. Using this table, you can now obtain counts of operations
performed on a given fragment (or fragment replica).  Such operations
include reads, writes, updates, and deletes, scan and index operations
performed while executing them, and operations refused, as well as
information relating to rows scanned on and returned from a given
fragment replica. This table also provides information about
interpreted programs used as attribute values, and values returned
by them.

    Cluster API: Two new example programs, demonstrating reads and
writes of CHAR, VARCHAR, and VARBINARY column values, have been
added to storage/ndb/ndbapi-examples in the MySQL Cluster source
tree. For more information about these programs, including source
code listings, see NDB API Simple Array Example, and NDB API Simple
Array Example Using Adapter.

Bugs Fixed

    The global checkpoint commit and save protocols can be delayed
by various causes, including slow disk I/O. The DIH master node
monitors the progress of both of these protocols, and can enforce
a maximum lag time during which the protocols are stalled by killing
the node responsible for the lag when it reaches this maximum. This
DIH master GCP monitor mechanism did not perform its task more than
once per master node; that is, it failed to continue monitoring
after detecting and handling a GCP stop. (Bug #20128256)

    References: See also Bug #19858151, Bug #20069617, Bug #20062754.

    When running mysql_upgrade on a MySQL Cluster SQL node, the
expected drop of the performance_schema database on this node was
instead performed on all SQL nodes connected to the cluster.  (Bug
#20032861)

    The warning shown when an ALTER TABLE ALGORITHM=INPLACE ...
ADD COLUMN statement automatically changes a column's COLUMN_FORMAT
from FIXED to DYNAMIC now includes the name of the column whose
format was changed. (Bug #20009152, Bug #74795)

    The local checkpoint scan fragment watchdog and the global
checkpoint monitor can each exclude a node when it is too slow when
participating in their respective protocols. This exclusion was
implemented by simply asking the failing node to shut down, which
in case this was delayed (for whatever reason) could prolong the
duration of the GCP or LCP stall for other, unaffected nodes.

    To minimize this time, an isolation mechanism has been added
to both protocols whereby any other live nodes forcibly disconnect
the failing node after a predetermined amount of time. This allows
the failing node the opportunity to shut down gracefully (after
logging debugging and other information) if possible, but limits
the time that other nodes must wait for this to occur. Now, once
the remaining live nodes have processed the disconnection of any
failing nodes, they can commence failure handling and restart the
related protocol or protocols, even if the failed node takes an
excessively long time to shut down.  (Bug #19858151)

    References: See also Bug #20128256, Bug #20069617, Bug #20062754.

    The matrix of values used for thread configuration when applying
the setting of the MaxNoOfExecutionThreads configuration parameter
has been improved to align with support for greater numbers of LDM
threads. See Multi-Threading Configuration Parameters (ndbmtd),
for more information about the changes.  (Bug #75220, Bug #20215689)

    When a new node failed after connecting to the president but
not to any other live node, then reconnected and started again, a
live node that did not see the original connection retained old
state information. This caused the live node to send redundant
signals to the president, causing it to fail. (Bug #75218, Bug
#20215395)

    In the NDB kernel, it was possible for a TransporterFacade
object to reset a buffer while the data contained by the buffer
was being sent, which could lead to a race condition. (Bug #75041,
Bug #20112981)

    mysql_upgrade failed to drop and recreate the ndbinfo database
and its tables as expected. (Bug #74863, Bug #20031425)

    Due to a lack of memory barriers, MySQL Cluster programs such
as ndbmtd did not compile on POWER platforms. (Bug #74782, Bug
#20007248)

    In spite of the presence of a number of protection mechanisms
against overloading signal buffers, it was still in some cases
possible to do so. This fix adds block-level support in the NDB
kernel (in SimulatedBlock) to make signal buffer overload protection
more reliable than when implementing such protection on a case-by-case
basis. (Bug #74639, Bug #19928269)

    Copying of metadata during local checkpoints caused node restart
times to be highly variable which could make it difficult to diagnose
problems with restarts. The fix for this issue introduces signals
(including PAUSE_LCP_IDLE, PAUSE_LCP_REQUESTED, and
PAUSE_NOT_IN_LCP_COPY_META_DATA) to pause LCP execution and flush
LCP reports, making it possible to block LCP reporting at times
when LCPs during restarts become stalled in this fashion. (Bug
#74594, Bug #19898269)

    When a data node was restarted from its angel process (that
is, following a node failure), it could be allocated a new node ID
before failure handling was actually completed for the failed node.
(Bug #74564, Bug #19891507)

    In NDB version 7.4, node failure handling can require completing
checkpoints on up to 64 fragments. (This checkpointing is performed
by the DBLQH kernel block.) The requirement that master takeover
wait for completion of all such checkpoints meant that, in such
cases, master takeover could take an excessively long time.

    To address these issues, the DBLQH kernel block can now report
that it is ready for master takeover before it has completed any
ongoing fragment checkpoints, and can continue processing these
while the system completes the master takeover. (Bug #74320, Bug
#19795217)

    Local checkpoints were sometimes started earlier than necessary
during node restarts, while the node was still waiting for copying
of the data distribution and data dictionary to complete.  (Bug
#74319, Bug #19795152)

    The check used to determine when a node was restarting (and
thus when to accelerate local checkpoints) sometimes reported a
false positive. (Bug #74318, Bug #19795108)

    Values in different columns of the ndbinfo tables
disk_write_speed_aggregate and disk_write_speed_aggregate_node were
reported using differing multiples of bytes. Now all of these
columns display values in bytes.

    In addition, this fix corrects an error made when calculating
the standard deviations used in the std_dev_backup_lcp_speed_last_10sec,
std_dev_redo_speed_last_10sec, std_dev_backup_lcp_speed_last_60sec,
and std_dev_redo_speed_last_60sec columns of the
ndbinfo.disk_write_speed_aggregate table. (Bug #74317, Bug #19795072)
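
    A quick sanity check of the now-uniform units can be made with
a query like the following; the column names follow the ndbinfo
schema discussed above, but should be verified against your release:

```sql
-- All speed and standard-deviation columns should now report bytes.
SELECT node_id,
       backup_lcp_speed_last_10sec,
       redo_speed_last_10sec,
       std_dev_backup_lcp_speed_last_10sec,
       std_dev_redo_speed_last_10sec
FROM ndbinfo.disk_write_speed_aggregate;
```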

    Recursion in the internal method Dblqh::finishScanrec() led to
an attempt to create two list iterators with the same head.  This
regression was introduced during work done to optimize scans for
version 7.4 of the NDB storage engine. (Bug #73667, Bug #19480197)

    Transporter send buffers were not updated properly following
a failed send. (Bug #45043, Bug #20113145)

    Disk Data: An update on many rows of a large Disk Data table
could in some rare cases lead to node failure. In the event that
such problems are observed with very large transactions on Disk
Data tables you can now increase the number of page entries allocated
for disk page buffer memory by raising the value of the
DiskPageBufferEntries data node configuration parameter added in
this release. (Bug #19958804)
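
    As a sketch only (the value shown is illustrative, not a tuned
recommendation), the new parameter is set in the data node defaults
section of config.ini:

```ini
[ndbd default]
# DiskPageBufferEntries (added in this release) raises the number of
# page entries allocated for disk page buffer memory; the value here
# is illustrative only.
DiskPageBufferEntries=20
```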

    Disk Data: In some cases, during DICT master takeover, the new
master could crash while attempting to roll forward an ongoing
schema transaction. (Bug #19875663, Bug #74510)

    Cluster API: It was possible to delete an Ndb_cluster_connection
object while there remained instances of Ndb using references to
it. Now the Ndb_cluster_connection destructor waits for all related
Ndb objects to be released before completing. (Bug #19999242)

    References: See also Bug #19846392.

-----

Changes in MySQL Cluster NDB 7.4.2 (5.6.21-ndb-7.4.2)

Functionality Added or Changed

    Added the restart_info table to the ndbinfo information database
to provide current status and timing information relating to node
and system restarts. By querying this table, you can observe the
progress of restarts in real time. (Bug #19795152)
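
    For example, restart progress could be watched with a query
such as the following (column names per the ndbinfo schema; check
them against your release):

```sql
-- One row per restarting node; rerun to watch restart phases advance.
SELECT node_id, node_restart_status
FROM ndbinfo.restart_info;
```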

    After adding new data nodes to the configuration file of a
MySQL Cluster having many API nodes, but prior to starting any of
the data node processes, API nodes tried to connect to these missing
data nodes several times per second, placing extra loads on management
nodes and the network. To reduce unnecessary traffic caused in this
way, it is now possible to control the amount of time that an API
node waits between attempts to connect to data nodes which fail to
respond; this is implemented in two new API node configuration
parameters StartConnectBackoffMaxTime and ConnectBackoffMaxTime.

    Time elapsed during node connection attempts is not taken into
account when applying these parameters, both of which are given in
milliseconds with approximately 100 ms resolution. As long as the
API node is not connected to any data nodes as described previously,
the value of the StartConnectBackoffMaxTime parameter is applied;
otherwise, ConnectBackoffMaxTime is used.

    In a MySQL Cluster with many unstarted data nodes, the values
of these parameters can be raised to circumvent connection attempts
to data nodes which have not yet begun to function in the cluster,
as well as moderate high traffic to management nodes.

    For more information about the behavior of these parameters,
see Defining SQL and Other API Nodes in a MySQL Cluster. (Bug
#17257842)
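
    A hedged sketch of how these parameters might be set in
config.ini (values are illustrative only, given in milliseconds):

```ini
[api default]
# Applied while the API node has no data node connections yet:
StartConnectBackoffMaxTime=1500
# Applied once at least one data node connection exists:
ConnectBackoffMaxTime=1500
```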

Bugs Fixed

    When performing a batched update, where one or more successful
write operations from the start of the batch were followed by write
operations which failed without being aborted (due to the AbortOption
being set to AO_IgnoreError), the failure handling for these by
the transaction coordinator leaked CommitAckMarker resources. (Bug
#19875710)

    References: This bug was introduced by Bug #19451060, Bug #73339.

    Online downgrades to MySQL Cluster NDB 7.3 failed when a MySQL
Cluster NDB 7.4 master attempted to request a local checkpoint with
32 fragments from a data node already running NDB 7.3, which supports
only 2 fragments for LCPs. Now in such cases, the NDB 7.4 master
determines how many fragments the data node can handle before making
the request. (Bug #19600834)

    The fix for a previous issue with the handling of multiple node
failures required determining the number of TC instances the failed
node was running, then taking them over. The mechanism to determine
this number sometimes provided an invalid result which caused the
number of TC instances in the failed node to be set to an excessively
high value. This in turn caused redundant takeover attempts, which
wasted time and had a negative impact on the processing of other
node failures and of global checkpoints. (Bug #19193927)

    References: This bug was introduced by Bug #18069334.

    The server side of an NDB transporter disconnected an incoming
client connection very quickly during the handshake phase if the
node at the server end was not yet ready to receive connections
from the other node. This led to problems when the client immediately
attempted once again to connect to the server socket, only to be
disconnected again, and so on in a repeating loop, until it succeeded.
Since each client connection attempt left behind a socket in
TIME_WAIT, the number of sockets in TIME_WAIT increased rapidly,
leading in turn to problems with the node on the server side of
the transporter.

    Further analysis of the problem and code showed that the root
of the problem lay in the handshake portion of the transporter
connection protocol. To keep the issue described previously from
occurring, the node at the server end now sends back a WAIT message
instead of disconnecting the socket when the node is not yet ready
to accept a handshake. This means that the client end should no
longer need to create a new socket for the next retry, but can
instead begin immediately with a new handshake hello message. (Bug
#17257842)

    Corrupted messages to data nodes sometimes went undetected,
causing a bad signal to be delivered to a block which aborted the
data node. This failure in combination with disconnecting nodes
could in turn cause the entire cluster to shut down.

    To keep this from happening, additional checks are now made
when unpacking signals received over TCP, including checks for byte
order, compression flag (which must not be used), and the length
of the next message in the receive buffer (if there is one).

    Whenever two consecutive unpacked messages fail the checks just
described, the current message is assumed to be corrupted. In this
case, the transporter is marked as having bad data and no more
unpacking of messages occurs until the transporter is reconnected.
In addition, an entry is written to the cluster log containing the
error as well as a hex dump of the corrupted message. (Bug #73843,
Bug #19582925)

    During restore operations, an attribute's maximum length was
used when reading variable-length attributes from the receive buffer
instead of the attribute's actual length. (Bug #73312, Bug #19236945)

-----

Changes in MySQL Cluster NDB 7.4.1 (5.6.20-ndb-7.4.1)

Node Restart Performance and Reporting Enhancements

    Performance: A number of performance and other improvements
have been made with regard to node starts and restarts. The following
list contains a brief description of each of these changes:

	Before memory allocated on startup can be used, it must be
touched, causing the operating system to allocate the actual physical
memory needed. The process of touching each page of memory that
was allocated has now been multithreaded, with touch times on the
order of 3 times shorter than with a single thread when performed
by 16 threads.

	When performing a node or system restart, it is necessary
to restore local checkpoints for the fragments. This process
previously used delayed signals at a point which was found to be
critical to performance; these have now been replaced with normal
(undelayed) signals, which should shorten significantly the time
required to back up a MySQL Cluster or to restore it from backup.

	Previously, there could be at most 2 LDM instances active
with local checkpoints at any given time. Now, up to 16 LDMs can
be used for performing this task, which increases utilization of
available CPU power, and can speed up LCPs by a factor of 10, which
in turn can greatly improve restart times.

	Better reporting of disk writes and increased control over
these also make up a large part of this work. New ndbinfo tables
disk_write_speed_base, disk_write_speed_aggregate, and
disk_write_speed_aggregate_node provide information about the speed
of disk writes for each LDM thread that is in use. The DiskCheckpointSpeed
and DiskCheckpointSpeedInRestart configuration parameters have been
deprecated, and are subject to removal in a future MySQL Cluster
release. This release adds the data node configuration parameters
MinDiskWriteSpeed, MaxDiskWriteSpeed, MaxDiskWriteSpeedOtherNodeRestart,
and MaxDiskWriteSpeedOwnRestart to control write speeds for LCPs
and backups when the present node, another node, or no node is
currently restarting.

	For more information, see the descriptions of the ndbinfo
tables and MySQL Cluster configuration parameters named previously.

	Reporting of MySQL Cluster start phases has been improved,
with more frequent printouts. New and better information about the
start phases and their implementation has also been provided in
the sources and documentation. See Summary of MySQL Cluster Start
Phases.

Improved Scan and SQL Processing

    Performance: Several internal methods relating to the NDB
receive thread have been optimized to make mysqld more efficient
in processing SQL applications with the NDB storage engine. In
particular, this work improves the performance of the
NdbReceiver::execTRANSID_AI() method, which is commonly used
to receive a record from the data nodes as part of a scan
operation. (Since the receiver thread sometimes has to process
millions of received records per second, it is critical that
this method does not perform unnecessary work, or tie up
resources that are not strictly needed.) The associated internal
functions receive_ndb_packed_record() and handleReceivedSignal()
have also been improved and made more efficient.

Per-Fragment Memory Reporting

    Information about memory usage by individual fragments can now
be obtained from the memory_per_fragment view added in this release
to the ndbinfo information database. This information includes
pages having fixed and variable element size, rows, fixed-element
free slots, variable-element free bytes, and hash index memory
usage. For information, see The ndbinfo memory_per_fragment Table.
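
    For instance, the fragments using the most fixed-element memory
could be listed with a query along these lines (column names per
the documented view; verify against your release):

```sql
SELECT fq_name, node_id, fragment_num,
       fixed_elem_alloc_bytes, var_elem_alloc_bytes,
       hash_index_alloc_bytes
FROM ndbinfo.memory_per_fragment
ORDER BY fixed_elem_alloc_bytes DESC
LIMIT 10;
```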

Bugs Fixed

    In some cases, transporter receive buffers were reset by one
thread while being read by another. This happened when a race
condition occurred between a thread receiving data and another
thread initiating disconnect of the transporter (disconnection
clears this buffer). Concurrency logic has now been implemented to
keep this race from taking place. (Bug #19552283, Bug #73790)

    When a new data node started, API nodes were allowed to attempt
to register themselves with the data node for executing transactions
before the data node was ready. This forced the API node to wait
an extra heartbeat interval before trying again.

    To address this issue, a number of HA_ERR_NO_CONNECTION errors
(Error 4009) that could be issued during this time have been changed
to Cluster temporarily unavailable errors (Error 4035), which should
allow API nodes to use new data nodes more quickly than before. As
part of this fix, some errors which were incorrectly categorized
have been moved into the correct categories, and some errors which
are no longer used have been removed. (Bug #19524096, Bug #73758)

    Executing ALTER TABLE ... REORGANIZE PARTITION after increasing
the number of data nodes in the cluster from 4 to 16 led to a crash
of the data nodes. This issue was shown to be a regression caused
by a previous fix which added a new dump handler using a dump code
that was already in use (7019), which caused the command to execute
two different handlers with different semantics. The new handler
was assigned a new DUMP code (7024).  (Bug #18550318)

    References: This bug is a regression of Bug #14220269.
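
    In outline, the failing scenario looked like the following (t1
is a hypothetical NDB table; the cluster has just been scaled from
4 to 16 data nodes):

```sql
-- Redistribute existing rows across the newly added node groups:
ALTER TABLE t1 REORGANIZE PARTITION;
-- Reclaim the space freed on the original node groups:
OPTIMIZE TABLE t1;
```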

    When certain queries generated signals having more than 18 data
words prior to a node failure, such signals were not written
correctly in the trace file. (Bug #18419554)

    Failure of multiple nodes while using ndbmtd with multiple TC
threads was not handled gracefully under a moderate amount of
traffic, which could in some cases lead to an unplanned shutdown
of the cluster. (Bug #18069334)

    For multithreaded data nodes, some threads do not communicate often,
with the result that very old signals can remain at the top of the
signal buffers. When performing a thread trace, the signal dumper
calculated the latest signal ID from what it found in the signal
buffers, which meant that these old signals could be erroneously
counted as the newest ones. Now the signal ID counter is kept as
part of the thread state, and it is this value that is used when
dumping signals for trace files. (Bug #73842, Bug #19582807)

    Cluster API: When an NDB API client application received a
signal with an invalid block or signal number, NDB provided only
a very brief error message that did not accurately convey the nature
of the problem. Now in such cases, appropriate printouts are provided
when a bad signal or message is detected.  In addition, the message
length is now checked to make certain that it matches the size of
the embedded signal. (Bug #18426180)

-----

The following improvements to MySQL Cluster have been made in MySQL
Cluster NDB 7.4:

    Conflict detection and resolution enhancements.  A reserved
column name namespace NDB$ is now employed for exceptions table
metacolumns, allowing an arbitrary subset of main table columns to
be recorded, even if they are not part of the original table's
primary key.

    Recording the complete original primary key is no longer
required, due to the fact that matching against exceptions table
columns is now done by name and type only. It is now also possible
for you to record values of columns which are not part of the main
table's primary key in the exceptions table.

    Read conflict detection is now possible. All rows read by the
conflicting transaction are flagged, and logged in the exceptions
table. Rows inserted in the same transaction are not included among
the rows read or logged. This read tracking depends on the slave
having an exclusive read lock which requires setting
ndb_log_exclusive_reads in advance. See Read conflict detection
and resolution, for more information and examples.

    Existing exceptions tables remain supported. For more information,
see Section 18.6.11, "MySQL Cluster Replication Conflict Resolution".
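
    As an illustrative sketch only (the table and non-key column
are hypothetical, and the NDB$ metacolumn layout should be checked
against Section 18.6.11), an exceptions table for a main table
t1(a INT PRIMARY KEY, b VARCHAR(30)) can now also record the
non-key column b:

```sql
CREATE TABLE t1$EX (
  NDB$server_id INT UNSIGNED,        -- metacolumns in the reserved
  NDB$master_server_id INT UNSIGNED, -- NDB$ namespace
  NDB$master_epoch BIGINT UNSIGNED,
  NDB$count INT UNSIGNED,
  a INT NOT NULL,                    -- primary key column of t1
  b VARCHAR(30),                     -- non-key column, now recordable
  PRIMARY KEY (NDB$server_id, NDB$master_server_id,
               NDB$master_epoch, NDB$count)
) ENGINE=NDB;
```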

    Circular ("active-active") replication improvements.  When
using a circular or "active-active" MySQL Cluster Replication
topology, you can assign one of the roles of primary or secondary
to a given MySQL Cluster using the ndb_slave_conflict_role server
system variable, which can be employed when failing over from a
MySQL Cluster acting as primary, or when using conflict detection
and resolution with NDB$EPOCH2() and NDB$EPOCH2_TRANS() (MySQL
Cluster NDB 7.4.2 and later), which support delete-delete conflict
handling.

    See the description of the ndb_slave_conflict_role variable,
as well as NDB$EPOCH2(), for more information. See also Section
18.6.11, MySQL Cluster Replication Conflict Resolution.
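
    For example (a hedged sketch; replication on the server should
be stopped before changing the role), the role is assigned from a
MySQL client:

```sql
-- Valid values are 'NONE', 'PRIMARY', 'SECONDARY', and 'PASS':
SET GLOBAL ndb_slave_conflict_role = 'SECONDARY';
```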

    Per-fragment memory usage reporting.  You can now obtain data
about memory usage by individual MySQL Cluster fragments from the
memory_per_fragment view, added in MySQL Cluster NDB 7.4.1 to the
ndbinfo information database. For more information, see Section
18.5.10.17, "The ndbinfo memory_per_fragment Table".

    Node restart improvements.  MySQL Cluster NDB 7.4 includes a
number of improvements which decrease the time needed for data
nodes to be restarted. These are described in the following list:

	Memory that is allocated on node startup cannot be used
until it has been touched, which causes the operating system to
set aside the actual physical memory required. In previous versions
of MySQL Cluster, the process of touching each page of memory that
was allocated was single-threaded, which made it relatively
time-consuming. This process has now been reimplemented with
multithreading. In tests with 16 threads, touch times on the order
of 3 times shorter than with a single thread were observed.

	Increased parallelization of local checkpoints; in MySQL
Cluster NDB 7.4, LCPs now support 32 fragments rather than 2 as
before. This greatly increases utilization of CPU power that would
otherwise go unused, and can make LCPs faster by up to a factor of
10; this speedup in turn can greatly improve node restart times.

	The degree of parallelization used for the node copy phase
during node and system restarts can be controlled in MySQL Cluster
NDB 7.4.3 and later by setting the MaxParallelCopyInstances data
node configuration parameter to a nonzero value.

	Reporting on disk writes is provided by new ndbinfo tables
disk_write_speed_base, disk_write_speed_aggregate, and
disk_write_speed_aggregate_node, which provide information about
the speed of disk writes for each LDM thread that is in use.

	This release also adds the data node configuration parameters
MinDiskWriteSpeed, MaxDiskWriteSpeed, MaxDiskWriteSpeedOtherNodeRestart,
and MaxDiskWriteSpeedOwnRestart to control write speeds for LCPs
and backups when the present node, another node, or no node is
currently restarting.

	These changes are intended to supersede configuration of
disk writes using the DiskCheckpointSpeed and DiskCheckpointSpeedInRestart
configuration parameters.  These 2 parameters have now been
deprecated, and are subject to removal in a future MySQL Cluster
release.
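
	A sketch of how the four new parameters might appear in
config.ini (values are illustrative only, not tuned recommendations):

```ini
[ndbd default]
MinDiskWriteSpeed=10M
MaxDiskWriteSpeed=20M
MaxDiskWriteSpeedOtherNodeRestart=50M
MaxDiskWriteSpeedOwnRestart=200M
```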

	Faster times for restoring a MySQL Cluster from backup have
been obtained by replacing delayed signals found at a point which
was found to be critical to performance with normal (undelayed)
signals. The elimination or replacement of these unnecessary delayed
signals should noticeably reduce the amount of time required to
back up a MySQL Cluster, or to restore a MySQL Cluster from backup.

	Several internal methods relating to the NDB receive thread
have been optimized, to increase the efficiency of SQL processing
by NDB. The receiver thread at times may have to process several
million received records per second, so it is critical that it not
perform unnecessary work or waste resources when retrieving records
from MySQL Cluster data nodes.

    Improved reporting of MySQL Cluster restarts and start phases.
The restart_info table (included in the ndbinfo information database
beginning with MySQL Cluster NDB 7.4.2) provides current status
and timing information about node and system restarts.

    Reporting and logging of MySQL Cluster start phases also provides
more frequent and specific printouts during startup than previously.
See Section 18.5.1, Summary of MySQL Cluster Start Phases, for more
information.

    NDB API: new Event API.  MySQL Cluster NDB 7.4.3 introduces an
epoch-driven Event API that supersedes the earlier GCI-based model.
The new version of the API also simplifies error detection and
handling. These changes are realized in the NDB API by implementing
a number of new methods for Ndb and NdbEventOperation, deprecating
several other methods of both classes, and adding new type values
to Event::TableEvent.

    The event handling methods added to Ndb in MySQL Cluster NDB
7.4.3 are pollEvents2(), nextEvent2(), getHighestQueuedEpoch(),
and getNextEventOpInEpoch2(). The Ndb methods pollEvents(),
nextEvent(), getLatestGCI(), getGCIEventOperations(), isConsistent(),
and isConsistentGCI() are deprecated beginning with the same release.

    MySQL Cluster NDB 7.4.3 adds the NdbEventOperation event handling
methods getEventType2(), getEpoch(), isEmptyEpoch(), and isErrorEpoch();
it obsoletes getEventType(), getGCI(), getLatestGCI(), isOverrun(),
hasError(), and clearError().

    While some (but not all) of the new methods are direct replacements
for deprecated methods, not all of the deprecated methods map to
new ones. The Event Class provides information as to which old
methods correspond to new ones.

    Error handling using the new API is no longer handled using
dedicated hasError() and clearError() methods, which are now
deprecated (and thus subject to removal in a future release of
MySQL Cluster). To support this change, the list of TableEvent
types now includes the values TE_EMPTY (empty epoch), TE_INCONSISTENT
(inconsistent epoch), and TE_OUT_OF_MEMORY (inconsistent data).

    Improvements in event buffer management have also been made by
implementing new get_eventbuffer_free_percent(),
set_eventbuffer_free_percent(), and get_eventbuffer_memory_usage()
methods. Memory buffer usage can now be represented in application
code using EventBufferMemoryUsage. The ndb_eventbuffer_free_percent
system variable, also implemented in MySQL Cluster NDB 7.4, makes
it possible for event buffer memory usage to be checked from MySQL
client applications.

    For more information, see the detailed descriptions for the
Ndb and NdbEventOperation methods listed. See also The Event::TableEvent
Type, as well as The EventBufferMemoryUsage Structure.
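
    From a MySQL client, the server-side variable mentioned above
can be inspected and adjusted as follows (the value shown is
illustrative only):

```sql
SHOW VARIABLES LIKE 'ndb_eventbuffer_free_percent';
SET GLOBAL ndb_eventbuffer_free_percent = 20;
```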

    Per-fragment operations information.  In MySQL Cluster NDB
7.4.3 and later, counts of various types of operations on a given
fragment or fragment replica can be obtained easily using the
operations_per_fragment table in the ndbinfo information database.
This includes read, write, update, and delete operations, as well
as scan and index operations performed by these. Information about
operations refused, and about rows scanned and returned from a
given fragment replica, is also shown in operations_per_fragment.
This table also provides information about interpreted programs
used as attribute values, and values returned by them.
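
    For example, the most heavily scanned fragments could be found
with a query along these lines (column names follow the documented
table; verify them against your release):

```sql
SELECT fq_name, node_id, fragment_num,
       tot_key_reads, tot_key_writes, tot_frag_scans
FROM ndbinfo.operations_per_fragment
ORDER BY tot_frag_scans DESC
LIMIT 10;
```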

MySQL Cluster NDB 7.4 is also supported by MySQL Cluster Manager,
which provides an advanced command-line interface that can simplify
many complex MySQL Cluster management tasks. See MySQL Cluster
Manager 1.3.5 User Manual, for more information.

-----

Changes in MySQL Cluster NDB 7.3.9 (5.6.24-ndb-7.3.9)

Bugs Fixed

    It was found during testing that problems could arise when the
node registered as the arbitrator disconnected or failed during
the arbitration process.

    In this situation, the node requesting arbitration could never
receive a positive acknowledgement from the registered arbitrator;
this node also lacked a stable set of members and could not initiate
selection of a new arbitrator.

    Now in such cases, when the arbitrator fails or loses contact
during arbitration, the requesting node immediately fails rather
than waiting to time out. (Bug #20538179)

    The values of the Ndb_last_commit_epoch_server and
Ndb_last_commit_epoch_session status variables were incorrectly
reported on some platforms. To correct this problem, these values
are now stored internally as long long, rather than long. (Bug
#20372169)

    The maximum failure time calculation used to ensure that normal
node failure handling mechanisms are given time to handle survivable
cluster failures (before global checkpoint watchdog mechanisms
start to kill nodes due to GCP delays) was excessively conservative,
and neglected to consider that there can be at most number_of_data_nodes
/ NoOfReplicas node failures before the cluster can no longer
survive. Now the value of NoOfReplicas is properly taken into
account when performing this calculation.  (Bug #20069617, Bug
#20069624)

    References: See also Bug #19858151, Bug #20128256, Bug #20135976.

    When a data node fails or is being restarted, the remaining
nodes in the same nodegroup resend to subscribers any data which
they determine has not already been sent by the failed node.
Normally, when a data node (actually, the SUMA kernel block) has
sent all data belonging to an epoch for which it is responsible,
it sends a SUB_GCP_COMPLETE_REP signal, together with a count, to
all subscribers, each of which responds with a SUB_GCP_COMPLETE_ACK.
When SUMA receives this acknowledgment from all subscribers, it
reports this to the other nodes in the same nodegroup so that they
know that there is no need to resend this data in case of a subsequent
node failure. If a node failed before all subscribers sent this
acknowledgement but before all the other nodes in the same nodegroup
received it from the failing node, data for some epochs could be
sent (and reported as complete) twice, which could lead to an
unplanned shutdown.

    The fix for this issue adds to the count reported by
SUB_GCP_COMPLETE_ACK a list of identifiers which the receiver can
use to keep track of which buckets are completed and to ignore any
duplicate reported for an already completed bucket.  (Bug #17579998)

    When performing a restart, it was sometimes possible to find
a log end marker which had been written by a previous restart, and
that should have been invalidated. Now when searching for the
last page to invalidate, the same search algorithm is used as when
searching for the last page of the log to read.  (Bug #76207, Bug
#20665205)

    When reading and copying transporter short signal data, it was
possible for the data to be copied back to the same signal with
overlapping memory. (Bug #75930, Bug #20553247)

    When a bulk delete operation was committed early to avoid an
additional round trip, while also returning the number of affected
rows, but failed with a timeout error, an SQL node performed no
verification that the transaction was in the Committed state. (Bug
#74494, Bug #20092754)

    References: See also Bug #19873609.

    An ALTER TABLE statement containing comments and a partitioning
option against an NDB table caused the SQL node on which it was
executed to fail. (Bug #74022, Bug #19667566)

    Cluster API: When a transaction is started from a cluster
connection, Table and Index schema objects may be passed to this
transaction for use. If these schema objects have been acquired
from a different connection (Ndb_cluster_connection object), they
can be deleted at any point by the deletion or disconnection of
the owning connection. This can leave a connection with invalid
schema objects, which causes an NDB API application to fail when
these are dereferenced.

    To avoid this problem, if your application uses multiple
connections, you can now set a check to detect sharing of schema
objects between connections when passing a schema object to a
transaction, using the NdbTransaction::setSchemaObjectOwnerChecks()
method added in this release. When this check is enabled, the schema
objects having the same names are acquired from the connection and
compared to the schema objects passed to the transaction. Failure
to match causes the application to fail with an error. (Bug #19785977)

    Cluster API: The increase in the default number of hashmap
buckets (DefaultHashMapSize API node configuration parameter) from
240 to 3480 in MySQL Cluster NDB 7.2.11 increased the size of the
internal DictHashMapInfo::HashMap type considerably.  This type
was allocated on the stack in some getTable() calls which could
lead to stack overflow issues for NDB API users.

    To avoid this problem, the hashmap is now dynamically allocated
from the heap. (Bug #19306793)

    Cluster API: A scan operation, whether it is a single table
scan or a query scan used by a pushed join, stores the result set
in a buffer. The maximum size of this buffer is calculated and
preallocated before the scan operation is started. This buffer may
consume a considerable amount of memory; in some cases we observed
a 2 GB buffer footprint in tests that executed 100 parallel scans
with 2 single-threaded (ndbd) data nodes.  This memory consumption
was found to scale linearly with additional fragments.

    A number of root causes, listed here, were discovered that led
to this problem:

	Result rows were unpacked to full NdbRecord format before
they were stored in the buffer. If only some but not all columns
of a table were selected, the buffer contained empty space (essentially
wasted).

	Due to the buffer format being unpacked, VARCHAR and
VARBINARY columns always had to be allocated for the maximum size
defined for such columns.

	BatchByteSize and MaxScanBatchSize values were not taken
into consideration as a limiting factor when calculating the maximum
buffer size.

    These issues became more evident in NDB 7.2 and later MySQL
Cluster release series. This was due to the fact that the buffer size is
scaled by BatchSize, and that the default value for this parameter
was increased fourfold (from 64 to 256) beginning with MySQL Cluster
NDB 7.2.1.

    This fix causes result rows to be buffered using the packed
format instead of the unpacked format; a buffered scan result row
is now not unpacked until it becomes the current row. In addition,
BatchByteSize and MaxScanBatchSize are now used as limiting factors
when calculating the required buffer size.

    Also as part of this fix, refactoring has been done to separate
handling of buffered (packed) from handling of unbuffered result
sets, and to remove code that had been unused since NDB 7.0 or
earlier. The NdbRecord class declaration has also been cleaned up
by removing a number of unused or redundant member variables.  (Bug
#73781, Bug #75599, Bug #19631350, Bug #20408733)
2015-05-25 22:17:36 +00:00
fhajny
9ebb57b120 Fix some issues raised by the previous update
- Never force install the init script, this is done by pkgsrc automatically
- Pre-create ${PKG_SYSCONFDIR}/sqlrelay.conf.d that was added in 0.48
- Doesn't really need libiconv directly
- Improve SMF manifest
- Remove some unneeded definitions.

Bump PKGREVISION.
2015-05-21 15:11:57 +00:00
adam
7846ac7eee Changes 3.8.10.2:
Fix an index corruption issue introduced by version 3.8.7. An index with a TEXT key can be corrupted by an INSERT into the corresponding table if the table has two nested triggers that convert the key value to INTEGER and back to TEXT again.
2015-05-21 10:38:53 +00:00
ryoon
a9499af770 Reset PKGREVISION. 2015-05-20 13:29:58 +00:00
ryoon
36613f5fa9 Update to 0.59
* Fix build with Ruby 2.2.

Changelog:
0.59 - updated docs, removed some Cygwin-specific info
	added support for login warnings
	made bind variable buffers dynamic on the client side
	added maxbindvars parameter on the server side
	binding a NULL to an integer works with db2 now
	moved getting started with DB docs into the cloud
	added a semaphore to ensure that the listener doesn't hand off the
		client to the connection until the connection is ready,
		eliminating a race condition on the handoff socket that could
		occur if the connection timed out waiting for the listener
		just after the listener had decided to use that connection
	oracle temp tables that need to be truncated at the end of the session
		are truncated with "truncate table xxx" now rather than
		"delete from xxx"
	oracle temp tables that need to be dropped at the end of the session
		are truncated first, rather than the connection re-logging in
	an ora-14452 error (basically indicating that a temp table can only be
		dropped after being truncated, or if the current session ends)
		does not automatically trigger a re-login any more
	updated cachemanager to use directory::read() directly instead of
		directory::getChildName(index)
	added cache and opencache commands to sqlrsh
	made cache ttl a 64-bit number
	added enabled="yes"/"no" parameter to logger modules
	updated odbc connection code to use new/delete and rudiments methods
		rather than malloc/free and native calls
	retired Ruby DBI driver
	fixed command line client crash when using -id "instance" with an
		instance that uses authtier="database"
	fixed bugs that could make reexecuted db2 selects fail and cause a
		database re-login loop
	tweaked spec file to remove empty directories on uninstall
	fixed typo that could sometimes cause a listener crash
	postgresql and mdbtools return error code of 1 rather than 0 for all
		errors now
	tweaked odbc driver to work with Oracle Heterogenous Agent (dblinks)
	fixed bugs related to autocommit with db's that support transaction
		blocks
	implemented the ODBC driver-manager dialog for windows
	updated windows installer to install ODBC registry settings
	ODBC driver copies references now
	fixed various bugs in sqlrconfigfile that caused sqlr-start with no
		-id to crash or behave strangely sometimes
	refactored build process to use nmake and be compatible with many
		different versions of MS Visual Studio
	updated the slow query logger to show the date/time that the query
		was executed
	consolidated c, c++ and server source/includes down a few levels
	implemented column-remapping for get db/table/column commands to
		enable different formats for mysql, odbc, etc.
	odbc connection correctly returns database/table lists now
	added support for maxselectlistsize/maxitembuffersize to MySQL
		connection
	updated mysql connection to fetch blob columns in chunks and not be
		bound by maxitembuffersize
	fixed a misspelling in sqlrelay.dtd
	swapped order of init directory detection, looking for /etc/init.d
		ahead of /etc/rc.d/init.d to resolve conflict with dkms on
		SuSE Enterprise
	C# api and tests compile and work under Mono on unix/linux now
	sqlr-start spawns a new window on Windows now
	added global temp table tracking for firebird
	added droptemptables parameter for firebird
	added globaltemptables parameter for oracle and firebird
	updated mysql connection to allow mysql_init to allocate a mysql
		struct on platforms that support mysql_init, rather than
		using a static struct
	fixed subtle noon/midnight-related bugs in date/time translation
	updated mysql connection to get affected rows when not using the
		statement api
	updated mysql connection not to use the statement API on windows,
		for now
	disabled mysql_change_user, for now
	fixed blob-input binds on firebird

0.58 - updated spawn() calls to detach on windows
	added support for sqlrelay.conf.d
	removed support for undocumented ~/.sqlrelay.conf
	fixed detection of oracle jdk 7 and 8 on debian and ubuntu systems
	added ini files for PHP and PDO modules
	added resultsetbuffersize, dontgetcolumninfo and nullsasnulls connect
		string variables to the PHP PDO driver
	refactored sqlr-status and removed dependency on libsqlrserver
	cleaned up and refactored server-side classes quite a bit
	fixed a bug where sqlrsh was losing the timezone when binding dates
	server-devel headers are now installed
	removed backupschema script
	moved triggers, translations, resultsettranslations and parser into
		separate project
	blobs work when using fake input binds now
	replaced sqlr-stop script with a binary (for Windows)
	preliminary support for server components on Windows
	sessionhandler="thread" is now forced on Windows
	added various compile flags for clang's aggressive -Wall
	added support for sybase 16.0
	removed unnecessary -lsybdb/-lsybdb64 for sybase 15+
	fixed PQreset, PQresetStart, PQresetPoll in postgresql drop-in
		replacement lib
	added debug-to-file support to PHP PDO driver
	fixed subtle row-fetch bug in sybase/freetds drivers that could cause
		the total row count to be set to garbage
	fixed support for older versions of perl (5.00x)
	fixed a bug in the DB2 connection that caused blob input binds to be
		truncated at the first null
	added support for binding streams to output bind blobs in the PHP PDO
		driver
	updated PHP PDO guide with notes about bind variable formats
	integrated Samat Yusup's dbh driver methods for PHP PDO
	added stmt driver methods for suspending/resuming result sets to the
		PHP PDO driver
	added row cache to mysql drop-in replacement library to fix issues on
		systems with 32-bit pointers
	fixed subtle db2 output bind bug; the entire result set is buffered
		by default now
	implemented an ext_SQLR_Debug database handle attribute for perl DBI
	added support for type, length, precision, scale bind variable
		attributes in perl DBI
	output bind clobs and blobs work in perl DBI now
	added support for perl DBI ParamValues, ParamTypes and ParamArrays
		attributes
	tweaked the odbc driver so it works with the jdbc-odbc bridge and
		jmeter
	added custom db/statement attributes to perl DBI for
		DontGetColumnInfo, GetNullsAsEmptyStrings and
		ResultSetBufferSize
	added note about JDBC-ODBC bridge removal in Oracle Java 8
	made threaded listener the default
	tweaks to sqlr-connection/sqlr-scaler processes to deal with lack of
		SIGCHLD/waitpid() on windows
	the signal on semaphore 2 is now undone manually when sqlr-connections
		shut down and don't rely on semaphore undos for normal
		operation
	subtly tweaked freeing of Oracle column-info buffers to work around
		a crash that could occur after using a cursor bind
2015-05-20 13:26:45 +00:00
abs
97bf5b9670 Record error message that prompted MAKE_JOBS_SAFE=no 2015-05-20 11:33:01 +00:00
abs
0bcb4a2de7 Add MAKE_JOBS_SAFE=no 2015-05-20 09:18:24 +00:00
ryoon
499a568f7f Update to 2.9.13
* Fix build with Ruby 2.2.
* Use setup.rb instead of extconf.rb.
* Use GITHUB framework.

Changelog:
Not available.
2015-05-19 13:32:38 +00:00