Packages Collection.
The Perl 5 module DBIx::Class::TimeStamp is a DBIx::Class component
providing automatic setting and updating of date and time based
fields.
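A minimal sketch of how a result class might use the component, assuming the
set_on_create/set_on_update column flags described in its documentation; the
package, table and column names here are made up for illustration:

    # Hypothetical result class; names are illustrative only.
    package MyApp::Schema::Note;
    use strict;
    use warnings;
    use base 'DBIx::Class';

    __PACKAGE__->load_components(qw/TimeStamp Core/);
    __PACKAGE__->table('note');
    __PACKAGE__->add_columns(
        id      => { data_type => 'integer', is_auto_increment => 1 },
        body    => { data_type => 'text' },
        created => { data_type => 'datetime', set_on_create => 1 },
        updated => { data_type => 'datetime', set_on_create => 1,
                     set_on_update => 1 },
    );
    __PACKAGE__->set_primary_key('id');

    1;

With such a class, created and updated are filled in automatically on insert
and update, so callers never touch those columns directly.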
NetBSD Packages Collection.
The Perl 5 module DBIx::Class::DynamicDefault is a DBIx::Class
component for automatically setting and updating fields with values
calculated at runtime.
MAJOR BUG-FIX:
* When running rrdtool update with multiple updates in one go and
MMAP enabled, there was a data corruption bug at wrap around.
See http://oss.oetiker.ch/rrdtool-trac/ticket/178 for details.
Thanks to Kevin Brintnall.
OTHER FIXES:
* Forward ported rra cur_row randomization patch from rrdtool
1.2.28 (it got lost in development).
* Contrary to the documentation, imginfo returned the full path
of the image and not only the file name.
* Make --lazy mode work even when PRINT commands are present.
http://oss.oetiker.ch/rrdtool-trac/ticket/163
* Fix Ruby Bindings memory leak.
* Fix compilation on Solaris 2.8
* Fix a ton of memory leaks in rrd_create and some in rrd_tool as
well. Based on valgrind analysis by Sven Engelhardt. Thanks!
* Fix handling of error conditions in rrd_tool.c (errno is not the
ideal indicator)
ENHANCEMENTS:
* Text Strings entered in the current locale will automatically be
transformed to utf8 for proper handling by Pango.
* Dramatically improved Pango Performance by introducing a static
fontmap. On my test system the persistent fontmap causes the
second graph with the same fonts in a single session to be
created about 0.18s faster than the first one. For a total graph
creation time of 0.21s this is a pretty substantial improvement.
With this patch, performance for the second graph is back to
1.2.x levels or even better.
Packages Collection.
The Perl 5 module DBICx::TestDatabase creates a temporary SQLite
database, deploys your DBIC schema, and then connects to it. This
lets you easily test your DBIC schema. Since you have a fresh
database for every test, you don't have to worry about cleaning up
after your tests, ordering of tests affecting failure, etc.
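A hedged sketch of typical test-script usage; MyApp::Schema and the Person
result set are placeholders for your own schema:

    use Test::More tests => 1;
    use DBICx::TestDatabase;

    # Creates a temporary SQLite database, deploys MyApp::Schema into it
    # and returns a connected schema object.
    my $schema = DBICx::TestDatabase->new('MyApp::Schema');

    $schema->resultset('Person')->create({ name => 'Alice' });
    is($schema->resultset('Person')->count, 1, 'fresh database per test run');

Because every test script gets its own throwaway database, no teardown code is
needed.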
* Split the pager subsystem into separate pager and pcache subsystems.
* Factor out identifier resolution procedures into separate files.
* Bug fixes.
Berkeley DB 4.7.25 Change Log
Database or Log File On-Disk Format Changes:
1. The log file format changed in 4.7.
New Features:
1. The lock manager may now be fully partitioned, improving performance
on some multi-CPU systems. [#15880]
2. Replication groups are now architecture-neutral, supporting
connections between differing architectures (big-endian or
little-endian, independent of structure padding). [#15787] [#15840]
3. Java: A new Direct Persistence Layer adds a built-in Plain Old Java
Object (POJO)-based persistent object model, which provides support
for complex object models without compromises in performance. For an
introduction to the Direct Persistence Layer API, see Getting Started
with Data Storage. [#15936]
4. Add the DB_ENV->set_intermediate_dir_mode method to support the
creation of intermediate directories needed during recovery. [#15097]
5. The DB_ENV->failchk method can now abort transactions for threads
which have failed while blocked on a concurrency lock. This
significantly decreases the need for database environment recovery
after thread of control failure. [#15626]
6. Replication Manager clients now can be configured to monitor the
connection to the master using heartbeat messages, in order to
promptly discover connection failures. [#15714]
7. The logging system may now be configured to pre-zero log files when
they are created, improving performance on some systems. [#15758]
Database Environment Changes:
1. Restructure aborted page allocation handling on systems without an
ftruncate system call. This enables the Berkeley DB High Availability
product on systems which do not support ftruncate. [#15602]
2. Fix a bug where closing a database handle after aborting a transaction
which included a failed open of that handle could result in
application failure. [#15650]
3. Fix minor memory leaks when closing a private database environment.
[#15663]
4. Fix a bug leading to a panic of "unpinned page returned" if a cursor
was used for a delete multiple times and deadlocked during one of the
deletes. [#15944]
5. Optionally signal processes still running in the environment before
running recovery. [#15984]
Concurrent Data Store Changes:
None.
General Access Method Changes:
1. Fix a bug where closing a database handle after aborting a transaction
which included a failed open of that database handle could result in
application failure. [#15650]
2. Fix a bug that could cause panic in a database environment configured
with POSIX-style thread locking, if a database open failed. [#15662]
3. Fix bug in the DB->compact method which could cause a panic if a
thread was about to release a page while another thread was truncating
the database file. [#15671]
4. Fix an obscure case of interaction between a cursor scan and delete
that was prematurely returning DB_NOTFOUND. [#15785]
5. Fix a bug in the DB->compact method where if read-uncommitted was
configured, a reader reading uncommitted data may see an inconsistent
entry between when the compact method detects an error and when it
aborts the enclosing transaction. [#15856]
6. Fix a bug in the DB->compact method where a thread of control may
fail if two threads are compacting the same section of a Recno
database. [#15856]
7. Fix a bug in the DB->compact method to avoid an assertion failure when zero
pages can be freed. [#15965]
8. Fix a bug to return a non-zero error when DB->truncate is called with
open cursors. [#15973]
9. Fix a bug by adding HANDLE_DEAD checking for DB cursors. [#15990]
10. Fix a bug to now generate errors when DB_SEQUENCE->stat is called
without first opening the sequence. [#15995]
11. Fix a bug to no longer dereference a pointer into a hash structure,
when hash functionality is disabled. [#16095]
Btree Access Method Changes:
None.
Hash Access Method Changes:
1. Fix a bug where a database store into a Hash database could
self-deadlock in a database environment configured for the Berkeley DB
Concurrent Data Store product, and with a free-threaded DB_ENV or DB
handle. [#15718]
Queue Access Method Changes:
1. Fix a bug that could cause a put or delete of a queue element to
return a DB_NOTGRANTED error, if blocked. [#15933]
Recno Access Method Changes:
None.
C-specific API Changes:
1. Expose db_env_set_func_malloc, db_env_set_func_realloc, and
db_env_set_func_free through the Windows API for the DB dll. [#16045]
Java-specific API Changes:
1. Fix a bug where enabling MVCC on a database through the Java API was
ignored. [#15644]
2. Fixed memory leak bugs in error message buffering in the Java API.
[#15843]
3. Fix a bug where Java SecondaryConfig was not setting
SecondaryMultiKeyCreator from the underlying db handle. [OTN FORUM]
4. Fix a bug so that getStartupComplete will now return a boolean instead
of an int. [#16067]
5. Fix a bug in the Java API, where Berkeley DB would hang on exit when
using replication. [#16142]
Direct Persistence Layer (DPL), Bindings and Collections API:
1. A new Direct Persistence Layer adds a built-in Plain Old Java Object
(POJO)-based persistent object model, which provides support for
complex object models without compromises in performance. For an
introduction to the Direct Persistence Layer API, see Getting Started
with Data Storage. [#15936]
2. Fixed a bug in the remove method of the Iterator instances returned
by the StoredCollection.iterator method in the collections package.
This bug caused ArrayIndexOutOfBoundsException in some cases when
calling next, previous, hasNext or hasPrevious after calling remove.
(Note that this issue does not apply to StoredIterator instances
returned by the StoredCollection.storedIterator method.) This bug was
reported in this forum thread:
http://forums.oracle.com/forums/thread.jspa?messageID=2187896
[#15858]
3. Fixed a bug in the remove method of the StoredIterator instances
returned by StoredCollection.storedIterator method in the collections
package. If the sequence of methods next-remove-previous was called,
previous would sometimes return the removed record. If the sequence of
methods previous-remove-next was called, next would sometimes return
the removed record. (Note that this issue does not apply to Iterator
instances returned by the StoredCollection.iterator method.) [#15909]
4. Fixed a bug that causes a memory leak for applications where many
Environment objects are opened and closed and the CurrentTransaction
or TransactionRunner class is used. The problem was reported in this
JE Forum thread:
http://forums.oracle.com/forums/thread.jspa?messageID=1782659 [#15444]
5. Added StoredContainer.areKeyRangesAllowed method. Key ranges and the
methods in SortedMap and SortedSet such as subMap and subSet are now
explicitly disallowed for RECNO and QUEUE databases -- they are only
supported for BTREE databases. Before, using key ranges in a RECNO or
QUEUE database did not work, but was not explicitly prohibited in the
Collections API. [#15936]
Tcl-specific API Changes:
1. The Berkeley DB Tcl API does not attempt to avoid evaluating input as
Tcl commands. For this reason, it may be dangerous to pass unreviewed
user input through the Berkeley DB Tcl API, as the input may
subsequently be evaluated as a Tcl command. To minimize the
effectiveness of a Tcl injection attack, the Berkeley DB Tcl API in
the 4.7 release routinely resets the process' effective user and group IDs
to the real user and group IDs. [#15597]
RPC-specific Client/Server Changes:
None.
Replication Changes:
1. Fix a bug where a master failure resulted in multiple attempts to
perform a "fast election"; subsequent elections, when necessary, now
use the normal nsites value. [#15099]
2. Replication performance enhancements to speed up failover. [#15490]
3. Fix a bug where replication could self-block in a database environment
configured for in-memory logging. [#15503]
4. Fix a bug where replication would attempt to read log file version
numbers in a database configured for in-memory logging. [#15503]
5. Fix a bug where log files were not removed during client
initialization in a database configured for in-memory logging.
[#15503]
6. The 4.7 release no longer supports live replication upgrade from the
4.2 or 4.3 releases, only from the 4.4 and later releases. [#15602]
7. Fix a bug where replication could re-request missing records on every
arriving record. [#15629]
8. Change the DB_ENV->rep_set_request method to use time, not the number
of messages, when re-requesting missed messages on a replication
client. [#15629]
9. Fix a minor memory leak on the master when updating a client during
internal initialization. [#15634]
10. Fix a bug where a client error when syncing with a new replication
group master could result in an inability to ever re-join the group.
[#15648]
11. Change dbenv->rep_set_request to use time-based values instead of
counters. [#15682]
12. Fix a bug where a LOCK_NOTGRANTED error could be returned from the
DB_ENV->rep_process_message method, instead of being handled
internally by replication. [#15685]
13. Fix a bug where the Replication Manager would reject a fresh
connection from a remote site that had crashed and restarted,
displaying the message: "redundant incoming connection will be
ignored". [#15731]
14. The Replication Manager now supports dynamic negotiation of the best
available wire protocol version, on a per-connection basis. [#15783]
15. Fix a bug, which could lead to slow performance of internal
initialization under the Replication Manager, as evidenced by "queue
limit exceeded" messages in verbose replication diagnostic output.
[#15788]
16. Fix a bug where replication control messages were not portable between
replication clients with different endian architectures. [#15793]
17. Add a configuration option to turn off Replication Manager's special
handling of elections in 2-site groups. [#15873]
18. Fix a bug making it impossible to call replicationManagerAddRemoteSite
in the Java API after having called replicationManagerStart. [#15875]
19. Fix a bug where the DB_EVENT_REP_STARTUPDONE event could be triggered
too early. [#15887]
20. Fix a bug where the rcvd_ts timestamp is reset when the user just
changes the threshold. [#15895]
21. Fix a bug where the master in a 2-site replication group might wait
for client acknowledgement, even when there was no client connected.
[#15927]
22. Fix a bug, clean up and restart internal init if master log is gone.
[#16006]
23. Fix a bug, ignore page messages that are from an old internal init.
[#16075] [#16059]
24. Fix a bug where checkpoint records do not indicate a database was a
named in-memory database. [#16076]
25. Fix a bug with in-memory replication, where we returned with the log
region mutex held in an error path, leading to self-deadlock. [#16088]
26. Fix a bug which causes the DB_REP_CHECKPOINT_DELAY setting in
rep_set_timeout() to be interpreted in seconds, rather than
microseconds. [#16153]
XA Resource Manager Changes:
1. Fix a bug where the DB_ENV->failchk method and replication in general
could fail in database environments configured for XA. [#15654]
Locking Subsystem Changes:
1. Fix a bug causing a lock or transaction timeout to not be set properly
after the first timeout triggers on a particular lock id. [#15847]
2. Fix a bug that would cause a trap if DB_ENV->lock_id_free was passed
an invalid locker id. [#16005]
3. Fix a bug when thread tracking is enabled where an attempt is made to
release a mutex that is not locked. [#16011]
Logging Subsystem Changes:
1. Fix a bug to handle zero-length log records when doing HA sync with in-memory
logs. [#15838]
2. Fix a bug that could cause DB_ENV->failchk to leak log region
memory. [#15925]
3. Fix a bug where the abort of a transaction that opened a database
could leak log region memory. [#15953]
4. Fix a bug that could leak memory in the DB_ENV->log_archive interface
if a log file was not found. [#16013]
Memory Pool Subsystem Changes:
1. Fix multiple MVCC bugs including a race, which could result in
incorrect data being returned to the application. [#15653]
2. Fixed a bug that left an active file in the buffer pool after a
database create was aborted. [#15918]
3. Fix a bug where there could be uneven distribution of pages if a
single database and multiple cache regions are configured. [#16015]
4. Fix a bug where DB_MPOOLFILE->set_maxsize was dropping the wrong mutex
after open. [#16050]
Mutex Subsystem Changes:
1. Fix a bug where mutex contention in database environments configured
for hybrid mutex support could result in performance degradation.
[#15646]
2. Set the DB_MUTEX_PROCESS_ONLY flag on all mutexes in private
environments; they can't be shared, so we can use the faster,
intra-process-only mutex implementations. [#16025]
3. Fix a bug so that mutexes are now removed from the environment
signature if mutexes are disabled. [#16042]
Transaction Subsystem Changes:
1. Fix a bug that could cause a checkpoint to self-block attempting to
flush a file, when the file handle was closed by another thread during
the flush. [#15692]
2. Fix a bug that could cause DB_ENV->failchk to hang if there were
pending prepared transactions in the environment. [#15925]
3. Prepared transactions will now use the sync setting from the
environment. Default to flushing the log on commit (was nosync).
[#15995]
4. If __txn_getactive fails, we now return with the log region mutex
held. This is not a bug since __txn_getactive cannot really fail.
[#16088]
Utility Changes:
1. Update db_stat with the -x option for mutex stats.
2. Fix an incorrect assumption about buffer size when getting an overflow
page in db_verify. [#16064]
Configuration, Documentation, Sample Application, Portability and Build
Changes:
1. Fix an installation bug where the Berkeley DB PHP header file was not
installed in the correct place.
2. Merge the run-time configuration sleep and yield functions. [#15037]
3. Fix HANDLE_DEAD and other expected replication errors in the C++
sample application RepQuoteExample.cpp. [#15568]
4. Add support for monotonic timers. [#15670]
5. Fix bugs where applications using the db_env_func_map and
db_env_func_unmap run-time configuration functions could not join
existing database environments, or open multiple DB_ENV handles for a
single environment. [#15930]
6. Add documentation about building Berkeley DB for VxWorks 6.x.
7. Remove the HAVE_FINE_GRAINED_LOCK_MANAGER flag, it is obsolete in 4.7.
8. Fix a bug in ex_rep, add a missing break which could cause a
segmentation fault.
9. Fix build warnings from 64 bit Windows build. [#16029]
10. Fix an alignment bug on ARM Linux. Force the assignment to use
memcpy. [#16125]
11. Fix a bug in the Windows specific code of ex_sequence.c, where there
was an invalid printf specifier. [#16131]
12. Improve the timer in ex_tpcb to use high resolution timers. [#16154]
13. Mention in the documentation that env->open() requires DB_THREAD to be
specified when using repmgr. [#16163]
14. Disable support for mmap on Windows CE. The only effect is that we do
not attempt to mmap small read only databases into the mpool. [#16169]
2.5.0:
- Windows binaries are now cross-built using mingw on Linux
- import various fixes from Python 2.6 version
- Connection has new method iterdump() that allows you to create a script file
that can be used to clone a database
- the docs are now built using Sphinx and were imported from Python 2.6's
sqlite3 module
- Connection.enable_load_extension(enabled) to allow/disallow extension
loading. Allows you to use fulltext search extension, for example ;-)
- Give the remaining C functions used in multiple .c source files the pysqlite_
prefix.
- Release GIL during sqlite3_prepare() calls for better concurrency.
- Automatically download the SQLite amalgamation when building statically.
2.4.1:
- Made unicode strings for the database parameter in connect() work again
- Removed bad defaults from setup.cfg
2.4.0:
- Implemented context managers. pysqlite's connections can now be used as
context managers with Python 2.5 or later:
    from __future__ import with_statement
    from pysqlite2 import dbapi2 as sqlite
    con = sqlite.connect(":memory:")
    con.execute("create table person (id integer primary key, firstname varchar unique)")
    # Successful, con.commit() is called automatically afterwards
    with con:
        con.execute("insert into person(firstname) values (?)", ("Joe",))
    # con.rollback() is called after the with block finishes with an exception; the
    # exception is still raised and must be caught
    try:
        with con:
            con.execute("insert into person(firstname) values (?)", ("Joe",))
    except sqlite.IntegrityError:
        print "couldn't add Joe twice"
- pysqlite connections can now be created from APSW connections. This enables
users to use APSW functionality in applications using the DB-API from
pysqlite:
    from pysqlite2 import dbapi2 as sqlite
    import apsw
    apsw_con = apsw.Connection(":memory:")
    apsw_con.createscalarfunction("times_two", lambda x: 2*x, 1)
    # Create pysqlite connection from APSW connection
    con = sqlite.connect(apsw_con)
    result = con.execute("select times_two(15)").fetchone()[0]
    assert result == 30
    con.close()
Caveat: This will only work if both pysqlite and APSW are dynamically
linked against the same SQLite shared library. Otherwise you will
experience a segfault.
- Fixed shuffled docstrings for fetchXXX methods.
- Workaround for SQLite 3.5.x versions which apparently return NULL for
"no-operation" statements.
- Disable the test for rollback detection on old SQLite versions. This prevents
test failures on systems that ship outdated SQLite libraries like MacOS X.
- Implemented set_progress_handler for progress callbacks from SQLite. This is
particularly useful to update GUIs during long-running queries. Thanks to
exarkun for the original patch.
oriented interface to databases like DBIx-Class is for Perl. It is quite
extensible and widely deployed.
It contains compilers for a number of database engines, which are used
only if they're requested explicitly; nevertheless, the package offers
to depend on some of them explicitly, as requested by
PKG_OPTIONS.py-sqlalchemy.
Tokyo Cabinet is a library of routines for managing a database. The database is
a simple data file containing records, each of which is a pair of a key and a
value. Every key and value is a sequence of bytes of variable length. Both
binary data and character strings can be used as keys and values. There is no
concept of data tables or data types. Records are organized in a hash table, a
B+ tree, or a fixed-length array.
This package provides the Ruby binding of Tokyo Cabinet.
Tokyo Cabinet is a library of routines for managing a database. The database is
a simple data file containing records, each of which is a pair of a key and a
value. Every key and value is a sequence of bytes of variable length. Both
binary data and character strings can be used as keys and values. There is no
concept of data tables or data types. Records are organized in a hash table, a
B+ tree, or a fixed-length array.
This package provides the Perl binding of Tokyo Cabinet.
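A small, hedged sketch of what the hash-database API looks like through the
Perl binding; the method and constant names follow the Tokyo Cabinet C/Ruby
APIs, so treat them as assumptions and check the binding's documentation:

    use TokyoCabinet;

    # Open (creating if necessary) a hash database file.
    my $hdb = TokyoCabinet::HDB->new();
    $hdb->open('casket.tch', $hdb->OWRITER | $hdb->OCREAT)
        or die $hdb->errmsg($hdb->ecode());

    # Keys and values are plain byte strings.
    $hdb->put('foo', 'hop');
    my $value = $hdb->get('foo');

    $hdb->close();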
Tokyo Cabinet is a library of routines for managing a database. The database is
a simple data file containing records, each of which is a pair of a key and a
value. Every key and value is a sequence of bytes of variable length. Both
binary data and character strings can be used as keys and values. There is no
concept of data tables or data types. Records are organized in a hash table, a
B+ tree, or a fixed-length array.
in the NetBSD Packages Collection.
The Perl 5 module DBIx::Class::InflateColumn::IP is a DBIx::Class
component to declare columns as IP addresses and treat them as
NetAddr::IP objects.
NetBSD Packages Collection.
The Perl 5 module DBIx::Class::Fixtures allows you to dump fixtures
from a source database to the filesystem and then import them into
another database (with the same schema) at any time. Use them as a
constant dataset for running tests against, or for populating
development databases when it is impractical to use production
clones. Describe the fixture set using relations and conditions based
on your DBIx::Class schema.
Collection.
The Perl 5 module Jifty::DBI deals with databases, so that you
don't have to. This module provides an object-oriented mechanism
for retrieving and updating data in a DBI-accessible database.
This module is the direct descendant of DBIx::SearchBuilder. If
you're familiar with SearchBuilder, Jifty::DBI should be quite
familiar to you.
Collection.
The Perl 5 module DBM::Deep is a unique flat-file database module,
written in pure perl. It offers true multi-level hash/array support
(unlike MLDBM, where it is faked), a hybrid OO / tie() interface,
cross-platform FTPable files, and ACID transactions, and it is quite
fast. It can handle millions of keys and unlimited levels without
significant slow-down. Written from the ground up in pure perl -- this
is NOT a wrapper around a C-based DBM. Out-of-the-box compatibility
with Unix, Mac OS X and Windows.
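A short sketch of typical usage; the file name and keys are illustrative:

    use DBM::Deep;

    # The whole database lives in a single portable file.
    my $db = DBM::Deep->new('foo.db');

    # Plain hash-style access, including true multi-level structures.
    $db->{mykey} = 'myvalue';
    $db->{users}{alice}{roles} = [ 'admin', 'editor' ];

    print $db->{users}{alice}{roles}[0], "\n";   # prints "admin"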
Set archive server to MASTER_SITES instead.
Outdated MASTER_SITES noticed by Zafer Aydogan via private mail.
XXX: Should be switched to individual package found in
XXX: ${MASTER_SITE_PGSQL:=odbc/versions/src} ?
* Added the lookaside memory allocator for a speed improvement in excess of 15%
on some workloads. (Your mileage may vary.)
* Added the SQLITE_CONFIG_LOOKASIDE verb to sqlite3_config() to control the
default lookaside configuration.
* Added verbs SQLITE_STATUS_PAGECACHE_SIZE and SQLITE_STATUS_SCRATCH_SIZE to
the sqlite3_status() interface.
* Modified SQLITE_CONFIG_PAGECACHE and SQLITE_CONFIG_SCRATCH to remove the "+4"
magic number in the buffer size computation.
* Added the sqlite3_db_config() and sqlite3_db_status() interfaces for
controlling and monitoring the lookaside allocator separately on each
database connection.
* Numerous other performance enhancements
* Miscellaneous minor bug fixes
0.28 11 Aug 2008
* API for ModelAdapter changed to pass controller instance in do_model()
* add get_primary_key() and make_primary_key_string()
methods to base Controller.
This allows for PKs composed of multiple columns.
Move from BSD Makefile to libtool
Use DESTDIR
Use PKG_MANDIR
Install the programs with this package as well as the library
Make this build on Mac OS X - there was a problem with case sensitivity
Use modern regexp calls
Get rid of lint
Bump PKGREVISION
Use DESTDIR
Use PKG_MANDIR
Install the programs with this package as well as the library
Make this build on Mac OS X - there was a problem with case sensitivity
Use modern regexp calls
Get rid of lint
Bump PKGREVISION
1.54 Sun Feb 10 21:35:02 PST 2008
Modify the fromFileGetTopLines method to remove the dependency on bytes;
bytes::substr causes an infinite loop in some older versions of perl.
1.53 Thu Jan 3 21:13:40 PST 2008
add "use bytes" to Table.pm
Just patched test.pl, because some OSes cannot open in-memory files.
1.52 Fri Dec 14 11:48:42 PST 2007
1.51 Wed Dec 12 15:36:22 PST 2007
1. Add a class method Data::Table::fromFile(file_name), which can
guess the file format and call fromCSV/fromTSV internally.
fromFile relies on the following new methods
fromFileGuessOS(file_name)
fromFileGetTopLines($file_name, $OS, $lineNumber)
fromFileIsHeader($string)
fromFileGuessDelimiter($arrayRefToLines)
to figure out if the input file is from UNIX/PC/MAC, whether its first
row contains column headers, and whether it uses ",", "\t" or ":" as
field delimiters.
It then calls either fromCSV or fromTSV to return the table object.
    $t = Data::Table::fromFile("myFileName_CSVorTSV_HeaderOrNoHeader_UNIXorPCorMAC");
Please refer to the updated documentation for details.
2. When fromFile/fromCSV/fromTSV reads from an empty file, it returns
an undef object, rather than quitting.
3. Provide a more informative error message when an invalid column
header is found.
4. fixed a bug in 1.51 where fromFileGuessOS failed in Windows
Thanks to patches provided by "whitebell".
- patch #1987593 [interface] Table list pagination in navi
- bug #1989081 [profiling] Profiling causes query to be executed again
(really causes a problem in case of INSERT/UPDATE)
- bug #1990342 [import] SQL file import very slow on Windows
- bug [XHTML] problem with tabindex and radio fields
- bug #1971221 [interface] tabindex not set correctly
- bug [views] VIEW name created via the GUI was not protected
with backquotes
- bug #1989813 [interface] Deleting multiple views (space in name)
- bug #1992628 [parser] SQL parser removes essential space
- bug #1989281 [export] CSV for MS Excel incorrect escaping of
double quotes
- bug #1959855 [interface] Font size option problem when no
config file
- bug #1982489 [relation] Relationship view should check for changes
- bug [history] Do not save too big queries in history
- [security] Do not show version info on login screen
- bug #2018595 [import] Potential data loss on import resubmit
- patch #2020630 [export] Safari and timedate
- bug #2022182 [import, export] Import/Export fails because of
Mac files
- [security] protection against cross-frame scripting and
new directive AllowThirdPartyFraming
- [security] possible XSS during setup
- [interface] revert language changing problem introduced
with 2.11.7.1
- small fix for notice about "lang"
This update fixes the security vulnerability reported in PMASA-2008-6.
* image size now gets returned properly even with --lazy active;
this broke a number of frontends, which should work now.
* fix rrd_restore to be able to read rrd 1.0.x generated dumps again.
* several documentation fixes
* make rrdtool.spec work without php
* complain when someone tries to create an rrd file with step size zero.
* added the filename to the illegal update interval error message.
* fix the number of rows returned by the python module's fetch implementation.
directory traversal), CVE-2007-1232 and CVE-2008-0516. Update to 1.2.0 in
order to make this possible at all. Also remove manu as maintainer as he
suggested in mail.
took maintainership
updated REPLACE_PERL section
ChangeLog:
Changes in DBI 1.607 (svn r11571) 22nd July 2008
NOTE: Perl 5.8.1 is now the minimum supported version.
If you need support for earlier versions send me a patch.
Fixed missing import of carp in DBI::Gofer::Execute.
Added note to docs about effect of execute(@empty_array).
Clarified docs for ReadOnly thanks to Martin Evans.
ChangeLog:
Changes log for Perl extension SQL::Statement
Version 1.15, released 2 February, 2006
----------------------------------------
* fixed placeholder bug in SQL::Statement::UPDATE
thanks for bug report Tanktalus
Version 1.14, released 21 April, 2005
----------------------------------------
* fixed circular dependency in tests (one mistakenly required AnyData)
Version 1.13, released 18 April, 2005
----------------------------------------
* pod fixes
Version 1.12, released 18 April, 2005
----------------------------------------
* added support for GROUP BY
(several people sent suggestions for this in the past, please email me
so I can credit you, sorry I lost the names)
* added support for true LIMIT - if a LIMIT clause is specified and
no ORDER BY clause is specified, the SELECT will stop searching
when the limit is reached; with an ORDER BY clause it will still
search the entire table because we can only ORDER a set; using
LIMIT without an ORDER BY will greatly increase speed
* added support for CREATE/DROP keyword|operator|type|function
* optimized process_predicate to only look up scalars once
* completely re-wrote the POD
* fixed bug in primary key search optimization
thanks for bug report and test scripts: Jim Lambert, <jimlambrtATmac.com>
* fixed problem with all_cols slowing inserts
thanks for patch and test Cosimo Streppone <cosimoATcpan.org>
* cleaned up case of temp table column names
thanks for bug report: Dan Wright
* added a META.YML and extra tests
1.817 27 March 2008
* Updated dbinfo
* Applied core patch 32299 - Re-apply change #30562
* Applied core patch 32208
* Applied core patch 32884 - use MM->parse_version() in Makefile.PL
* Applied core patch 32883 - Silence new warning grep in void
context warning
* Applied core patch 32704 to remove use of PL_na in typemap
* Applied core patch 30562 to fix a build issue on OSF
1.816 28 October 2007
* Clarified the warning about building with a different version of
Berkeley DB than is used at runtime.
* Also made the boot version check less strict.
[rt.cpan.org #30013]
* Modifications to the virtual file system interface to support a wider range
of embedded systems.
* All C-preprocessor macros used to control compile-time options now begin
with the prefix "SQLITE_".
* The SQLITE_MUTEX_APPDEF compile-time option is no longer supported.
* The handling of IN and NOT IN operators that contain a NULL on their
right-hand side expression is brought into compliance with the SQL standard
and with other SQL database engines. This is a bug fix, but it has the
potential to break legacy applications that depend on the older buggy
behavior (see the sketch after this list).
* The result column names generated for compound subqueries have been
simplified to show only the name of the column of the original table and
omit the table name. This makes SQLite operate more like other SQL database
engines.
* Added the sqlite3_config() interface for doing run-time configuration of the
entire SQLite library.
* Added the sqlite3_status() interface used for querying run-time status
information about the overall SQLite library and its subsystems.
* Added the sqlite3_initialize() and sqlite3_shutdown() interfaces.
* The SQLITE_OPEN_NOMUTEX option was added to sqlite3_open_v2().
* Added the PRAGMA page_count command.
* Added the sqlite3_next_stmt() interface.
* Added a new R*Tree virtual table
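The IN/NOT IN change above is easiest to see with a tiny query. The following
hedged sketch uses Perl's DBI with DBD::SQLite and assumes a DBD::SQLite built
against SQLite 3.6.0 or later:

    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                           { RaiseError => 1 });

    # Under SQL-standard semantics, 1 NOT IN (2, NULL) is NULL (unknown),
    # not true, because 1 <> NULL is unknown.
    my ($v) = $dbh->selectrow_array('SELECT 1 NOT IN (2, NULL)');
    print defined $v ? "$v\n" : "NULL\n";   # prints NULL, not 1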
- verify the dependencies added: p5-File-Temp, p5-Encode
ChangeLog:
1.54 Wed Jul 9 09:34:25 EDT 2008
When aborting transactions, we need to flush our cache,
because SQLite is reusing the primary id for later inserts and the cache
can otherwise become inconsistent.
- bug #1908719 [interface] New field cannot be auto-increment and
primary key
- [dbi] Incorrect interpretation for some mysqli field flags
- bug #1910621 [display] part 1: do not display a TEXT utf8_bin
as BLOB (fixed for mysqli extension only)
- [interface] sanitize the after_field parameter,
thanks to Norman Hippert
- [structure] do not remove the BINARY attribute in drop-down
- bug #1955386 [session] Overriding session.hash_bits_per_character
- [interface] sanitize the table comments in table print view,
thanks to Norman Hippert
- bug #1939031 Auto_Increment selected for TimeStamp by Default
- patch #1957998 [display] No tilde for InnoDB row counter when
we know it for sure, thanks to Vladyslav Bakayev - dandy76
- bug #1955572 [display] alt text causes duplicated strings
- bug #1762029 [interface] Cannot upload BLOB into existing row
- bug #1981043 [export] HTML in exports getting corrupted,
thanks to Jason Judge - jasonjudge
- bug #1936761 [interface] BINARY not treated as BLOB:
update/delete issues
- protection against XSS when register_globals is on and .htaccess
has no effect, thanks to Tim Starling
- bug #1996943 [export] Firefox 3 and .sql.gz (corrupted);
detect Gecko 1.9, thanks to Juergen Wind
- (2.11.7.1) [security] XSRF/CSRF by manipulating the db,
convcharset and collation_connection parameters,
thanks to YGN Ethical Hacker Group
This update fixes the security vulnerability reported in PMASA-2008-5.
Terminate HOMEPAGE url with /
Adapt PERL5_PACKLIST to what the package does.
Changes:
0.09 Mon Jul 10 03:40:00 2006
* "I'm doind this as I watch the World Cup Finals" release
- Add POD tests
0.08_02 Mon May 29 15:30:00 2006
- Apply patches from Boris Sukholitko. Adds "Primary As Option" and
"Column Groups" features
0.08_01 Sat May 20 10:00:00 2006
- Fix typo in the sequence detection
- Restructure directory structure
0.08 Sat Mar 11 17:00:00 2006
- Stop using _croak (#18093)
0.07 Thu Jan 26 03:00:00 2006
- work with PostgreSQL 8.1's new sequence display
- pg_version(full_version => 1) gets you the major, minor, micro
version strings
- maintainer changed to Daisuke Maki
0.34 27th March 2008
* Updates to support building with Berkeley DB version 4.7
* Typo in #ifdef for ThreadCount support. Spotted by Mark Hindley
* Updated dbinfo
This is a pure Java (Type IV) JDBC driver for the PostgreSQL
database. It allows Java programs to connect to a PostgreSQL
database using standard, database independent Java code.
The driver provides a reasonably complete implementation of the
JDBC 3 specification in addition to some PostgreSQL specific
extensions.
# Updated Brazilian Portuguese translation. (jurka) Thanks to Euler Taveira de Oliveira.
# fix While the driver currently doesn't support the copy protocol, it needs to understand it enough to ignore it. Now the connection will not be irreparably broken when a COPY request is sent. (jurka) Thanks to Altaf Malik.
# fix The JDBC spec says that when you have two duplicately named columns in a ResultSet, a search by name should return the first one. Previously our code was returning the second match. (jurka) Thanks to Magne Mahre.
This is a pure Java (Type IV) JDBC driver for the PostgreSQL
database. It allows Java programs to connect to a PostgreSQL
database using standard, database independent Java code.
The driver provides a reasonably complete implementation of the
JDBC 3 specification in addition to some PostgreSQL specific
extensions.
module. Hence add a build dependency on time/p5-DateTime-Format-MySQL
package.
- Unbreak build when the package textproc/p5-Data-FormValidator is
not installed: add it as a dependency.
Bump PKGREVISION to 1.
Rose::DBx::Garden::Catalyst extends Rose::DBx::Garden to create
Catalyst components that use the RDBO and RHTMLO classes that the
Garden class produces.
By default this class creates stub Template Toolkit files for use
with the RDBO and RHTMLO CRUD components. If you use a different
templating system, just set the tt option to 0.
CatalystX::CRUD provides a simple and generic API for Catalyst CRUD
applications. CatalystX::CRUD is agnostic with regard to data model
and data input, instead providing a common API that different
projects can implement for greater compatibility with one another.
The project was born out of a desire to make Rose::HTML::Objects
easy to use with Rose::DB::Object and DBIx::Class ORMs, using the
Catalyst::Controller::Rose project. However, any ORM could implement
the CatalystX::CRUD::Model API, and any form management project
could use the resulting CatalystX::CRUD::Model subclass.
Catalyst Model base class for Rose::DB::Object. This class provides
convenience access to your existing Rose::DB::Object class.
The assumption is one Model class per Rose::DB::Object class.
This is a Catalyst Model for DBIx::Class::Schema-based Models. See the
documentation for Catalyst::Helper::Model::DBIC::Schema and
Catalyst::Helper::Model::DBIC::SchemaLoader for information on
generating these Models via Helper scripts. The latter of the two will
also generate a DBIx::Class::Schema::Loader-based Schema class for you.
Rose::DBx::Garden bootstraps Rose::DB::Object and Rose::HTML::Form
based projects. The idea is that you can point the module at a
database and end up with work-able RDBO and Form classes with a
single method call.
Rose::DBx::Garden inherits from Rose::DB::Object::Loader, so all
the magic there is also available here.
This DBIx::Class component resembles the behaviour of Class::DBI::UUID:
selected columns are implicitly populated with UUIDs when rows are created.
When loaded, UUIDColumns will search for a suitable uuid generation
module from the following list of supported modules:
Data::UUID APR::UUID* UUID Win32::Guidgen Win32API::GUID
If no supporting module can be found, an exception will be thrown.
*APR::UUID will not be loaded under OpenBSD due to an as yet
unidentified XS issue.
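A hedged sketch of the usual setup in a result class; uuid_columns is the
component's documented entry point, while the table and column names are
illustrative:

    package MyApp::Schema::Artist;
    use strict;
    use warnings;
    use base 'DBIx::Class';

    __PACKAGE__->load_components(qw/UUIDColumns Core/);
    __PACKAGE__->table('artist');
    __PACKAGE__->add_columns(qw/ artist_id name /);
    __PACKAGE__->set_primary_key('artist_id');

    # artist_id is filled in with a freshly generated UUID on insert.
    __PACKAGE__->uuid_columns('artist_id');

    1;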
DBIx::Class::Schema::Loader automates the definition of a
DBIx::Class::Schema by scanning table schemas and setting up columns and
primary keys.
DBIx::Class::Schema::Loader supports MySQL, Postgres, SQLite and DB2.
See DBIx::Class::Schema::Loader::Generic for more, and
DBIx::Class::Schema::Loader::Writing for notes on writing your own
db-specific subclass for an unsupported db.
This module requires DBIx::Class 0.05 or later, and obsoletes
DBIx::Class::Loader for DBIx::Class version 0.05 and later.
While on the whole, the bare table definitions are fairly straightforward,
relationship creation is somewhat heuristic, especially in the choosing
of relationship types, join types, and relationship names. The relationships
generated by this module will probably never be as well-defined as
hand-generated ones. Because of this, over time a complex project will
probably wish to migrate off of L<DBIx::Class::Schema::Loader>.
It is designed more to get you up and running quickly against an existing
database, or to be effective for simple situations, rather than to be what
you use in the long term for a complex database/project.
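For a quick, hedged illustration, the loader's one-call interface can dump a
schema from an existing database roughly like this (the class name, DSN and
dump directory are placeholders):

    use DBIx::Class::Schema::Loader qw/ make_schema_at /;

    # Scan the database and write My::Schema classes under ./lib.
    make_schema_at(
        'My::Schema',
        { debug => 1, dump_directory => './lib' },
        [ 'dbi:SQLite:dbname=test.db', '', '' ],
    );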
DBIx::Class::Loader automates the definition of DBIx::Class
sub-classes by scanning table schemas and setting up columns and
primary keys.
This module is deprecated in favor of DBIx::Class::Schema::Loader
for use with DBIx::Class versions 0.05 and higher. It continues to
function as well as it ever did, even for recent DBIx::Class
releases, and will be maintained for some time to counter bugs,
but it doesn't use the now-preferred DBIx::Class::Schema way of
doing things, and tends to promote bad DBIx::Class usage habits.
This DBIx::Class component can be used to automatically insert a
message digest of selected columns. By default DigestColumns will
use Digest::MD5 to insert a 128-bit hexadecimal message digest of
the column value.
The length of the inserted string will be 32 and it will only
contain characters from this set: '0'..'9' and 'a'..'f'.
If you would like to use a specific digest module to create your
message digest, you can set "digest_algorithm":
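    # A minimal sketch; the call name is taken from the description above,
    # and 'SHA-1' assumes the corresponding Digest:: module is installed.
    __PACKAGE__->digest_algorithm('SHA-1');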
This is an SQL to OO mapper, inspired by the Class::DBI framework, and
meant to support compatibility with it, while restructuring the internals
and making it possible to support some new features like self-joins,
distinct, group bys and more.
This project is still at an early stage, so the maintainers don't make
any absolute promise that full backwards-compatibility will be
supported; however, if we can without compromising the improvements
we're trying to make, we will, and any non-compatible changes will merit
a full justification on the mailing list and a CPAN developer release
for people to test against.
Changes in DBI 1.605 XXX
Make trace level 2 show method entry but not fetched rows; leave that
for trace level 3. So trace level 2 can be used to aid debugging
without being flooded by data.
1 = return from top level only, no rows
2 = +entry to top level, no rows
3 = +return from nested, no rows
4 = +entry to nested, with rows
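For instance, with the standard DBI trace API (the log file name is just an
example), level 2 now records method entry and exit without flooding the log
with row data:

    use DBI;

    DBI->trace(2, 'dbi-trace.log');   # all handles: entry/exit, no rows
    # or per handle:
    # $dbh->trace(3);                 # also show returns from nested calls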
Fixed broken DBIS macro with threads on big-endian machines
with 64bit ints but 32bit pointers. Ticket #32309.
Fixed the selectall_arrayref, selectrow_arrayref, and selectrow_array
methods that get embedded into compiled drivers to use the
inner sth handle when passed a $sth instead of an sql string.
Drivers will need to be recompiled to pick up this change.
Fixed leak in neat() for some kinds of values thanks to Rudolf Lippan.
Fixed DBI::PurePerl neat() to behave more like XS neat().
Increased default $DBI::neat_maxlen from 400 to 1000.
Increased timeout on tests to accommodate very slow systems.
Changed behaviour of trace levels 1..4 to show less information
at lower levels.
Changed the format of the key used for $h->{CachedKids}
(which is undocumented so you shouldn't depend on it anyway)
Changed gofer error handling to avoid duplicate error text in errstr.
Clarified docs re ":N" style placeholders.
Improved gofer retry-on-error logic and refactored to aid subclassing.
Improved gofer trace output in assorted ways.
Removed the beeps "\a" from Makefile.PL warnings.
Removed check for PlRPC-modules from Makefile.PL
Added sorting of ParamValues reported by ShowErrorStatement
thanks to Rudolf Lippan.
Added cache miss trace message to DBD::Gofer transport class.
Added $drh->dbixs_revision method.
Added explicit LICENSE specification (perl) to META.yaml