### engrampa 1.22.1
* Sync with Transifex
* Help: replace link linkend with xref linkend
* file-utils: avoid out-of-bounds memory access
* actions: avoid use of memory after it is freed
* fr-process: Fix memory leak: the result of 'g_shell_quote' needs to be freed
* fr-process: Fix memory leak: the result of 'g_strconcat' needs to be freed
* [Security] fr-process: avoid 'strcpy' and 'strcat'
* fr-process: Fix memory leak
* Help: Fix version to 1.22
* help: update copyright
* Upgrade the manual to DocBook 5.0
Zstandard v1.4.0
perf: Improve level 1 compression speed in most scenarios by 6% by @gbtucker and @terrelln
api: Move the advanced API, including all functions in the staging section, to the stable section
api: Make ZSTD_e_flush and ZSTD_e_end block for maximum forward progress
api: Rename ZSTD_CCtxParam_getParameter to ZSTD_CCtxParams_getParameter
api: Rename ZSTD_CCtxParam_setParameter to ZSTD_CCtxParams_setParameter
api: Don't export ZSTDMT functions from the shared library by default
api: Require ZSTD_MULTITHREAD to be defined to use ZSTDMT
api: Add ZSTD_decompressBound() to provide an upper bound on decompressed size by @shakeelrao
api: Fix ZSTD_decompressDCtx() corner cases with a dictionary
api: Move ZSTD_getDictID_*() functions to the stable section
api: Add ZSTD_c_literalCompressionMode flag to enable or disable literal compression by @terrelln
api: Allow compression parameters to be set when a dictionary is used
api: Allow setting parameters before or after ZSTD_CCtx_loadDictionary() is called
api: Fix ZSTD_estimateCStreamSize_usingCCtxParams()
api: Setting ZSTD_d_maxWindowLog to 0 means use the default
cli: Ensure that a dictionary is not used to compress itself by @shakeelrao
cli: Add --[no-]compress-literals flag to enable or disable literal compression
doc: Update the examples to use the advanced API
doc: Explain how to transition from old streaming functions to the advanced API in the header
build: Improve the Windows release packages
build: Improve CMake build by @hjmjohnson
build: Build fixes for FreeBSD by @lwhsu
build: Remove redundant warnings by @thatsafunnyname
build: Fix tests on OpenBSD by @bket
build: Extend fuzzer build system to work with the new clang engine
build: CMake now creates the libzstd.so.1 symlink
build: Improve Meson build by @lzutao
misc: Fix symbolic link detection on FreeBSD
misc: Use physical core count for -T0 on FreeBSD by @cemeyer
misc: Fix zstd --list on truncated files by @kostmo
misc: Improve logging in debug mode by @felixhandte
misc: Add CirrusCI tests by @lwhsu
misc: Optimize dictionary memory usage in corner cases
misc: Improve the dictionary builder on small or homogeneous data
misc: Fix spelling across the repo by @jsoref
LZ4 v1.9.1
Changes
fix : decompression functions were reading a few bytes beyond input size
api : fix : lz4frame initializers compatibility with c++
cli : added command --list
build: improved Windows build
build: AIX, by Norman Green
LZ4 v1.9.0
This release brings an assortment of small improvements and bug fixes, as detailed below:
perf: large decompression speed improvement on x86/x64 (up to +20%)
api : changed : _destSize() compression variants are promoted to stable API
api : new : LZ4_initStream(HC), replacing LZ4_resetStream(HC)
api : changed : LZ4_resetStream(HC) as recommended reset function, for better performance on small data
cli : support custom block sizes
build: source code can be amalgamated, by Bing Xu
build: added meson build
build: new build macros : LZ4_DISTANCE_MAX, LZ4_FAST_DEC_LOOP
install: MidnightBSD
install: msys2 on Windows 10
Libaec provides fast lossless compression of 1- to 32-bit wide signed
or unsigned integers (samples). The library achieves the best results for
low-entropy data as often encountered in space imaging instrument data or
numerical model output from weather or climate simulations. While floating
point representations are not directly supported, they can also be efficiently
coded by grouping exponents and mantissas.
Libaec implements Golomb-Rice coding as defined in the Space Data System
Standard documents 121.0-B-2 and 120.0-G-2.
Libaec includes a free drop-in replacement for the SZIP library.
Upstream changes:
0.0946 2019-04-05 20:11:47Z
- Added copyright holder/year meta to dist.ini. (GH#6) (Mohammad S Anwar)
- Auto generate META.yml using the plugin [MetaYAML]. (GH#8) (Mohammad S
Anwar)
libarchive 3.3.3:
Avoid super-linear slowdown on malformed mtree files
Many fixes for building with Visual Studio
NO_OVERWRITE doesn't change existing directory attributes
New support for Zstandard read and write filters
Update provided by Michael Bäuerle via pkgsrc-wip.
Changelog
=========
Release 2018-11-22:
- libschily: resolvenpath() did not work as expected when some path names
do not exist. A stat() call that should check whether we already
reached the "/" directory caused a return (-1) even with
(flags & RSPF_EXIST) == 0.
This bug caused star to classify more symlinks as dangerous than
needed.
- star: A typo in the function dolchmodat() has been fixed. The bug has been
introduced in July 2018 while adding support for very long path names.
- star: added a new timestamp to the star version.
- star: The man page now mentions incremental backups and restores in the
FEATURES section.
Release 2018-12-06:
- star: hole.c: A memory leak in hole.c::put_sparse() has been fixed.
Thanks to Pavel Raiskup for reporting this coverity result.
- star: xheader.c: the macro scopy() no longer has a semicolon at the end.
Thanks to Pavel Raiskup for reporting this coverity result.
Release 2019-01-22:
- libstrar & star unicode.c: iconv() may return > 0 if there are
characters that could not be converted into an
identical meaning.
We therefore now check for ret != 0 instead of
ret == -1.
- star: added support for auto detection of "zstd" compressed archives.
- star: added a new option -zstd to support compression and uncompression
using the program "zstd".
- star: Recently, star did hang in the FIFO code on Solaris. This had
not happened on Solaris in over 20 years...
On Linux - on fast multi CPU machines - the probability that a
child process from fork() starts up before the parent is 1000x higher
than on Solaris, where 10 million tries were needed to reproduce the
same problem.
As a result, the FIFO in star on Linux could in rare cases (1 of
~ 10000 tries) even finish the 1st read() from the input file before
the "tar"-process starts, e.g. with command lines like "star -tv" or
"star -x". Since star introduced auto-byte-order detection and
handling in 1985, star needs a special start-up sequence to do that.
Star introduced the FIFO in the late 1980s and the machines from that
time did always restart the parent before the fork()ed child starts.
The new OS behavior thus caused a situation that was not foreseeable
when the FIFO was designed. This new OS behavior caused a
deadlock in approx. 1 of 10000 star calls on Linux and 1 of 10000000
star calls on Solaris.
Star now waits when entering the FIFO fill process until the
FIFO get process did start up, before trying to wake up a waiting
get process.
- star: On Linux, in 1 of 1.5 million tries, star did die from SIGPIPE.
Note that this never happened on Solaris.
Star now ignores SIGPIPE and it seems that this fixed the problem,
since it did not happen again after that change, even with 100 million
tries.
- star: The debug printing for the FIFO has been enhanced to print more
information from the FIFO control structure to make it easier to debug
problems like the ones mentioned above.
- star: There seems to be a problem in pipe handling in the Linux kernel.
It seems that in rare cases, the read(2) on a pipe returns 0 even though
the write side did write(2) one byte to the pipe just before calling
exit(). Unfortunately, this problem is hard to debug as it happens only
once every ~30 million tries. Our workaround is to behave as if the
expected byte could be read and star currently prints something like:
star: Erfolg. Sync pipe read error pid 8141 ret 0
star: Erfolg. Ib 0 Ob 1 e 0 p 1 g 0 chan 5.
star: Erfolg. Trying to work around Kernel Pipe botch.
before it continues. Since the star exit code in such a case is 0,
we assume that this is a correct workaround and this case thus may
be made completely silent in the future.
- star: an even less frequent FIFO problem (occurs once every 50 million
tries on fast multi CPU machines) has been identified. Star reports a
hard EOF on input even though the complete file with logical EOF has
been read and there is still input to process. In order to debug this
problem a debug message has been added to the code.
With this debug message, it turned out that this problem happened
because a context switch occurred in the FIFO read process after it did
see an empty FIFO and later, after the process was resumed, the
following check for the FIFO_MEOF flag did see EOF. We now first check
for the FIFO_MEOF flag and later for the amount of data inside the
FIFO, as FIFO_MEOF is set after the FIFO content has been updated and
thus a context switch is no longer able to cause a wrong assumption
about the content of the FIFO.
If you still see this, please send a report.
- star: added support to print debug malloc statistics to better debug
memory problems in star.
- star: pathname.c: free_pspace() now only frees the path buffer if it
is != NULL.
- star: fixed a bug in the file create.c that caused star to incorrectly
grow the path buffer by 2 bytes for every archived file. This caused
star to grow constantly if a large number of files is archived and to
eat up all memory available to 32-bit processes if the archived
filesystem is larger than approx. 1 TB.
- star: If the path name now cannot be handled because of low memory,
we print a warning that includes the text "out of memory".
- star: Now checking whether open of /dev/null failed while running a
compress pipe. This avoids a core dump on defective OS installations.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: props.c: Added a missing /* FALLTHROUGH */ comment.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: create.c: Add more comments for the CPIO CRC format handler to
explain why the last instance of a series of hard links for a file
needs to archive the data.
- star: diff.c: added a filling fillbytes(&finfo, ...) to make sure that
ACL pointers are initialized.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: Several /* NOTREACHED */ comments have been added to tell
programs like Coverity that after a NULL pointer check, there is no
continuation of the program.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: extract.c: An if (path->ps_path == '\0') check has been corrected to
if (path->ps_path[0] == '\0') after a mktemp() call. This was a typo
introduced with the new support for extremely long path names.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: extract.c: An initialization for a struct pathstore has been
moved to the front to ensure that path.ps_path is always initialized.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: header.c: isgnumagic(&ptb->dbuf.t_vers) has been changed to
isgnumagic(ptb->ustar_dbuf.t_magic) as it is a "ustar" structure
that is going to be checked.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: some Cstyle changes
- bsh / Bourne Shell / star: the function hop_dirs() no longer checks
for p2 != NULL before calling *p2 = '/' as p2 is
guaranteed to be != NULL from a break taken when
strchr(p, '/') == NULL.
Release 2019-02-18:
- star: another problem has been fixed, similar to what was already fixed
in the 2019-01-22 release:
An even less frequent FIFO problem (occurs once every 50 million
tries on fast multi CPU machines) has been identified. Star reports a
hard EOF on input even though the complete file with logical EOF has
been read and there is still input to process. In order to debug this
problem a debug message has been added to the code.
With this debug message, it turned out that this problem happened
because a context switch occurred in the FIFO read process after it did
see an empty FIFO and later, after the process was resumed, the
following check for the FIFO_MEOF flag did see EOF. We now first check
for the FIFO_MEOF flag and later for the amount of data inside the
FIFO, as FIFO_MEOF is set after the FIFO content has been updated and
thus a context switch is no longer able to cause a wrong assumption
about the content of the FIFO.
We now did run 250 million tests without seeing another problem.
If you still see this, please send a report.
- star: Note that the debug output for this problem now has been
disabled. If you need to debug this, call:
smake clean COPTX=-DFIFO_EOF_DEBUG all
in the star directory.
- star: The message "Sync pipe read error" is no longer printed when
the FIFO background process dies instead of sending a final wakeup.
This is needed since there is a possibility for a context switch in
the foreground process that can make it later wait for a wakeup while
the background process fails to see the wait flag and just exits.
- star: In rare conditions (once every 2 million tries), a hang could
occur with "star -c" if the tar process fills the FIFO and sets the
EOF flag and then calls wait() to wait for the FIFO tape output
process. This happens in case that the tape output did not see the
EOF flag because it has undergone a context switch after it checked
for the not yet existing EOF flag and before waiting for a wakeup
from the tar FIFO fill process.
Star now closes the sync pipes before calling wait() as this always
wakes up the waiting other side.
We did run another 300 million tests for this condition and did not
see any problem now.
- star: The version is now 1.6
Short overview for what changed since the last "stable" version:
- Support for "infinitely" long path names has been added.
- Support for comparing timestamps with nanosecond granularity
- -secure-links has been made the default when extracting
archives (except when doing an incremental restore).
- Added support for NFSv4 ACLs on FreeBSD. Solaris has been
supported since 2013.
- Added support to archive SELinux attributes.
- Allow configuring whether "star -fsync" is the default in
order to support filesystems that are slow with granted
transactions (like ZFS) or platforms that are generally
slow with fsync() (like Linux).
- Full UNICODE support has been added for tar headers.
- Support for -zstd compression has been added.
- Some rare FIFO problems have been fixed.
Note that we did recently run more than a billion tests to
verify the FIFO after we identified a method to trigger the
problem on Linux.
Release 2019-03-11:
- star: Support for base-256 numbers in timestamps and uid/gid has been
added. This had been planned in the 1990s already, when star invented
the base-256 coding, but it was forgotten in favor of the
POSIX.1-2001 enhanced archive headers. Now it seems that GNU tar,
which copied the format from star, uses it for timestamps and uid/gid,
and we need to implement it in order to get archive compatibility.
Thanks to Michal Górny (mgorny@gentoo.org) for detecting the missing
feature.
- star: The t_rdev field in the old star header now may use base-256
as well.
- star: The function stoli() gained a new parameter "fieldwidth" that
allows configuring when an "unterminated octal number" warning is
printed. This is needed since this function is used for 8-byte and
for 12-byte fields.
- star: star did print archives with illegal 32-byte user/group
names (where the nul terminator is missing) "correctly" when in
list mode, but it used only the first 31 bytes when extracting
such archives.
- star: a new function istarnumber() is used to do better heuristics on
what a valid TAR archive is. We have some special handling to work
around the non-compliance of GNU tar in some known cases. If you
discover other GNU tar archives that are not detected as TAR archives,
please report them to help make the heuristics better.
The background is to make star better at detecting bogus archives.
- star: The directory testscripts added new files:
testscripts/not_a_tar_file1 and testscripts/not_a_tar_file3
with correct checksums that fool tar implementations that use too
few heuristics to identify tar archives.
- star: fixed a bug in the FIFO related to extracting multi-volume
archives. The bug was introduced with release 2019-02-18 and the
effect was that the FIFO complained at the end of the last volume.
- star/libschily: Added new error checking codes:
"ID"    allows to control error behaviour with range errors in uid_t
and gid_t values.
"TIME"  allows to control error behaviour with range errors in time_t.
- star: Creating multi volume archives without using the FIFO did dump
core. We thus no longer set mp->chreel = TRUE; when the FIFO has
been disabled. The related bug was introduced in January 2012.
- star: Creating multi volume archives with a very small volume size
could cause a hang at the end as the function startvol() did not
check whether the TAR process did already decide to exit while
waiting for the TAR process to calm down (stop) before writing the
next multi volume header. We no longer wait in this case.
- star: exprstats() now calls fifo_exit(ret) in order to avoid a
FIFO Sync pipe read error message in case that star was terminated
with an error.
- star: Since we added better Unicode support in May 2018, star did
dump core when a multi volume header with POSIX.1-2001 extensions
was written in multi volume create mode. We now check for NULL
pointers before we call nameascii() to decide whether the file
name needs a UTF-8 translation.
- star: Creating multi volume archives without POSIX.1-2001 support
no longer sets POSIX.1-2001 extension flags for the volume header.
- star: The flag XF_NOTIME now works when creating POSIX.1-2001
extended headers and thus the 'x'-header with time stamps for the
volume header tar header is no longer created. This avoids
writing atime=1 for volume number 1, since we encode the
volume number in the otherwise useless atime of the volume header
when in POSIX.1-1988 TAR mode.
- star: the star.1 man page now mentions that the first tar program
appeared in 1979 (3 years before star was started as a project).
- star: the star.4 man page now has a "SEE ALSO", a HISTORY and
an AUTHOR section.
- star: the star.4 man page now has a MULTI VOLUME ARCHIVE HANDLING
section.
- star: the star.4 man page added a new "BASIC TAR STRUCTURE" section.
- star: The ACL reference test archives (formerly available from e.g.:
http://sf.net/projects/s-tar/files/alpha/) have been added
to the directory star/testscripts/. The files
acl-test.tar.gz
acl-test2.tar.gz
acl-test3.tar.gz
acl-test4.tar.gz
acl-test5.tar.gz
contain ACLs that use the obsolete method from a POSIX proposal
from around 1993 that was withdrawn in 1997 and never has become
part of a standard. This method has been implemented in 1993 for
UFS on Solaris.
GNU tar claims to support this format but really does not support
it at all. GNU tar fails to extract the reference tar archives from
above and it fails to create a compliant tar archive in create mode.
It is strange to see that GNU tar never has been tested against the
reference archives that have been created in collaboration with
SuSE in 2001 already.
The files
acl-nfsv4-test.tar.gz
acl-nfsv4-test2.tar.gz
acl-nfsv4-test3.tar.gz
acl-nfsv4-test4.tar.gz
acl-nfsv4-test5.tar.gz
contain ACLs that have become part of the NFSv4 standard and that
are also used on NTFS and ZFS. This format is completely unsupported
by GNU tar.
- star TODO: create unit tests in order to avoid future problems
with multi volume archives similar to the problems we recently
fixed.
- star: Updated version 1.6 (not yet published in separate tarball)
Short overview for what changed since the last "stable" version:
- Support for "infinitely" long path names has been added.
- Support for base-256 numbers in timestamps and uid/gid
has been added. This had been planned in the 1990s already,
when star invented the base-256 coding, but it was
forgotten in favor of the POSIX.1-2001 enhanced archive
headers.
- Support for comparing timestamps with nanosecond granularity
- -secure-links has been made the default when extracting
archives (except when doing an incremental restore).
- Added support for NFSv4 ACLs on FreeBSD. Solaris has been
supported since 2013.
- Added support to archive SELinux attributes.
- Allow configuring whether "star -fsync" is the default in
order to support filesystems that are slow with granted
transactions (like ZFS) or platforms that are generally
slow with fsync() (like Linux).
- Full UNICODE support has been added for tar headers.
- Support for -zstd compression has been added.
- Some rare FIFO problems have been fixed.
Note that we did recently run more than a billion tests to
verify the FIFO after we identified a method to trigger the
problem on Linux.
1.5.2 [2019-03-12]
==================
* Fix bug in AES encryption affecting certain file sizes
* Keep file permissions when modifying zip archives
* Support systems with small stack size.
* Support mbed TLS as crypto backend.
* Add nullability annotations.
### engrampa 1.22.0
* Translations update
* Avoid array index out of bounds parsing dpkg-deb --info
* warning: Use of memory after it is freed
* Read authors (updated) from engrampa.about gresource
* Enable Travis CI
* eggsmclient: avoid deprecated 'g_type_class_add_private'
* update copyright year to 2019
* rar/unrar: Fix: "overwrite existing files" disabled must work
* fix fr-command-cfile.c: fr_process_set_working_dir
* fr-command-cfile.c: fix indentation
* Added an integrity test for brotli
* Added integrity tests for the cfile compressors: gzip, bzip2, etc.
* move appdata to metainfo directory
* fr-window: show the pause button only if the dialog is working
* disable deprecation warnings for distcheck
* fr-window: avoid 'gtk_dialog_add_button' with stock ids
* fr-window: hide the progress bar if the process is paused
* fr-window: change the info label if process is paused/resumed
* fr-window: little improvements in the look of pause/resume button
* Adding pause and start functions
* Fix implementation and use of the alternative package name lookup
* Added support for brotli (*.tar.br) compressed tar archives
* Add brotli support
* Use make functions for HELP_LINGUAS
* Replace -Dokumentationteam
* Replace -Dokumentationsprojekt with Documentation Project
* Manual: Update file format descriptions using shared-mime-info
* Fix url of ulinks to point to mate-user-guide
* UNIX and Linux systems -> Linux and UNIX-like systems
* tx: add atril help to transifex config
* Add the ability to support 'unar' over .zip archives
* Add support for OpenDocument formats
* UI: on the properties dialog, focus the Close button instead of the Help button by default
0.11.0 (released 2019-02-24)
============================
Backwards Compatibility Notes
-----------------------------
* ZstdDecompressionReader.read() now allows reading sizes of -1 or 0
and defaults to -1, per the documented behavior of
io.RawIOBase.read(). Previously, we required an argument that was
a positive value.
* The readline(), readlines(), __iter__, and __next__ methods
of ZstdDecompressionReader() now raise io.UnsupportedOperation
instead of NotImplementedError.
* ZstdDecompressor.stream_reader() now accepts a read_across_frames
argument. The default value will likely be changed in a future release
and consumers are advised to pass the argument to avoid an unwanted change
of behavior in the future (see the sketch after this list).
* setup.py now always disables the CFFI backend if the installed
CFFI package does not meet the minimum version requirements. Before, it was
possible for the CFFI backend to be generated and a run-time error to
occur.
* In the CFFI backend, CompressionReader and DecompressionReader
were renamed to ZstdCompressionReader and ZstdDecompressionReader,
respectively so naming is identical to the C extension. This should have
no meaningful end-user impact, as instances aren't meant to be
constructed directly.
* ZstdDecompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes
read from the source / written to the decompressor. It defaults to off,
which preserves the existing behavior of returning the number of bytes
emitted from the decompressor. The default will change in a future release
so behavior aligns with the specified behavior of io.RawIOBase.
* ZstdDecompressionWriter.__exit__ now calls self.close(). This
will result in that stream plus the underlying stream being closed as
well. If this behavior is not desirable, do not use instances as
context managers.
* ZstdCompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source / written to the compressor. It defaults to off, which
preserves the existing behavior of returning the number of bytes emitted
from the compressor. The default will change in a future release so
behavior aligns with the specified behavior of io.RawIOBase.
* ZstdCompressionWriter.__exit__ now calls self.close(). This will
result in that stream plus any underlying stream being closed as well. If
this behavior is not desirable, do not use instances as context managers.
* ZstdDecompressionWriter no longer requires being used as a context
manager.
* ZstdCompressionWriter no longer requires being used as a context
manager.
* The overlap_size_log attribute on CompressionParameters instances
has been deprecated and will be removed in a future release. The
overlap_log attribute should be used instead.
* The overlap_size_log argument to CompressionParameters has been
deprecated and will be removed in a future release. The overlap_log
argument should be used instead.
* The ldm_hash_every_log attribute on CompressionParameters instances
has been deprecated and will be removed in a future release. The
ldm_hash_rate_log attribute should be used instead.
* The ldm_hash_every_log argument to CompressionParameters has been
deprecated and will be removed in a future release. The ldm_hash_rate_log
argument should be used instead.
* The compression_strategy argument to CompressionParameters has been
deprecated and will be removed in a future release. The strategy
argument should be used instead.
* The SEARCHLENGTH_MIN and SEARCHLENGTH_MAX constants are deprecated
and will be removed in a future release. Use MINMATCH_MIN and
MINMATCH_MAX instead.
* The zstd_cffi module has been renamed to zstandard.cffi. As had
been documented in the README file since the 0.9.0 release, the
module should not be imported directly at its new location. Instead,
import zstandard to cause an appropriate backend module to be loaded
automatically.
Bug Fixes
---------
* CFFI backend could encounter a failure when sending an empty chunk into
ZstdDecompressionObj.decompress(). The issue has been fixed (see the
sketch after this list).
* CFFI backend could encounter an error when calling
ZstdDecompressionReader.read() if there was data remaining in an
internal buffer. The issue has been fixed.
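For illustration, a minimal sketch of the now-working empty-chunk path (the frame contents are illustrative only):

    import zstandard

    frame = zstandard.ZstdCompressor().compress(b"payload")
    dobj = zstandard.ZstdDecompressor().decompressobj()
    out = dobj.decompress(frame)
    out += dobj.decompress(b"")  # empty chunk: previously failed on the CFFI backend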
Changes
-------
* ZstdDecompressionObj.decompress() now properly handles empty inputs in
the CFFI backend.
* ZstdCompressionReader now implements read1() and readinto1().
These are part of the io.BufferedIOBase interface.
* ZstdCompressionReader has gained a readinto(b) method for reading
compressed output into an existing buffer.
* ZstdCompressionReader.read() now defaults to size=-1 and accepts
read sizes of -1 and 0. The new behavior aligns with the documented
behavior of io.RawIOBase.
* ZstdCompressionReader now implements readall(). Previously, this
method raised NotImplementedError.
* ZstdDecompressionReader now implements read1() and readinto1().
These are part of the io.BufferedIOBase interface.
* ZstdDecompressionReader.read() now defaults to size=-1 and accepts
read sizes of -1 and 0. The new behavior aligns with the documented
behavior of io.RawIOBase.
* ZstdDecompressionReader() now implements readall(). Previously, this
method raised NotImplementedError.
* The readline(), readlines(), __iter__, and __next__ methods
of ZstdDecompressionReader() now raise io.UnsupportedOperation
instead of NotImplementedError. This reflects a decision to never
implement text-based I/O on (de)compressors and keep the low-level API
operating in the binary domain.
* README.rst now documents how to achieve linewise iteration using
an io.TextIOWrapper with a ZstdDecompressionReader (see the sketch
after this list).
* ZstdDecompressionReader has gained a readinto(b) method for
reading decompressed output into an existing buffer. This allows chaining
to an io.TextIOWrapper on Python 3 without using an io.BufferedReader.
* ZstdDecompressor.stream_reader() now accepts a read_across_frames
argument to control behavior when the input data has multiple zstd
*frames*. When False (the default for backwards compatibility), a
read() will stop when the end of a zstd *frame* is encountered. When
True, read() can potentially return data spanning multiple zstd
*frames*. The default will likely be changed to True in a future
release.
* setup.py now performs CFFI version sniffing and disables the CFFI
backend if CFFI is too old. Previously, we only used install_requires
to enforce the CFFI version and not all build modes would properly enforce
the minimum CFFI version.
* CFFI's ZstdDecompressionReader.read() now properly handles data
remaining in any internal buffer. Before, repeated read() could
result in *random* errors.
* Upgraded various Python packages in CI environment.
* Upgrade to hypothesis 4.5.11.
* In the CFFI backend, CompressionReader and DecompressionReader
were renamed to ZstdCompressionReader and ZstdDecompressionReader,
respectively.
* ZstdDecompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source. It defaults to False to preserve backwards
compatibility.
* ZstdDecompressor.stream_writer() now implements the io.RawIOBase
interface and behaves as a proper stream object.
* ZstdCompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source. It defaults to False to preserve backwards
compatibility.
* ZstdCompressionWriter now implements the io.RawIOBase interface and
behaves as a proper stream object. close() will now close the stream
and the underlying stream (if possible). __exit__ will now call
close(). Methods like writable() and fileno() are implemented.
* ZstdDecompressionWriter no longer must be used as a context manager.
* ZstdCompressionWriter no longer must be used as a context manager.
When not using it as a context manager, it is important to call
flush(FLUSH_FRAME) or the compression stream won't be properly
terminated and decoders may complain about malformed input (see the
sketch after this list).
* ZstdCompressionWriter.flush() (what is returned from
ZstdCompressor.stream_writer()) now accepts an argument controlling the
flush behavior. Its value can be one of the new constants
FLUSH_BLOCK or FLUSH_FRAME.
* ZstdDecompressionObj instances now have a flush([length=None]) method.
This provides parity with standard library equivalent types.
* CompressionParameters no longer redundantly store individual compression
parameters on each instance. Instead, compression parameters are stored inside
the underlying ZSTD_CCtx_params instance. Attributes for obtaining
parameters are now properties rather than instance variables.
* Exposed the STRATEGY_BTULTRA2 constant.
* CompressionParameters instances now expose an overlap_log attribute.
This behaves identically to the overlap_size_log attribute.
* CompressionParameters() now accepts an overlap_log argument that
behaves identically to the overlap_size_log argument. An error will be
raised if both arguments are specified.
* CompressionParameters instances now expose an ldm_hash_rate_log
attribute. This behaves identically to the ldm_hash_every_log attribute.
* CompressionParameters() now accepts a ldm_hash_rate_log argument that
behaves identically to the ldm_hash_every_log argument. An error will be
raised if both arguments are specified.
* CompressionParameters() now accepts a strategy argument that behaves
identically to the compression_strategy argument. An error will be raised
if both arguments are specified.
* The MINMATCH_MIN and MINMATCH_MAX constants were added. They are
semantically equivalent to the old SEARCHLENGTH_MIN and
SEARCHLENGTH_MAX constants.
* Bundled zstandard library upgraded from 1.3.7 to 1.3.8.
* setup.py denotes support for Python 3.7 (Python 3.7 was supported and
tested in the 0.10 release).
* zstd_cffi module has been renamed to zstandard.cffi.
* ZstdCompressor.stream_writer() now reuses a buffer in order to avoid
allocating a new buffer for every operation. This should result in faster
performance in cases where write() or flush() are being called
frequently.
* Bundled zstandard library upgraded from 1.3.6 to 1.3.7.
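Two short sketches of the streaming changes above, assuming this 0.11.0 release; file names are illustrative and FLUSH_BLOCK/FLUSH_FRAME are assumed to be exposed as module-level constants. First, linewise iteration over decompressed text via io.TextIOWrapper, which works because ZstdDecompressionReader now implements readinto():

    import io
    import zstandard

    dctx = zstandard.ZstdDecompressor()
    with open("log.txt.zst", "rb") as fh:  # illustrative input file
        reader = dctx.stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            pass  # process each decompressed line

Second, continuing with the imports above, using ZstdCompressionWriter outside a context manager and ending the frame explicitly with flush(FLUSH_FRAME):

    out = io.BytesIO()
    writer = zstandard.ZstdCompressor().stream_writer(out)
    writer.write(b"first chunk")
    writer.write(b"second chunk")
    writer.flush(zstandard.FLUSH_FRAME)  # terminate the zstd frame so decoders accept the output
    compressed = out.getvalue()
    writer.close()  # in this release close() also closes the underlying stream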
version 1.32 - Sergey Poznyakoff, 2019-02-23
* Fix the use of --checkpoint without explicit --checkpoint-action
* Fix extraction with the -U option
See http://lists.gnu.org/archive/html/bug-tar/2019-01/msg00015.html
for details.
* Fix iconv usage on BSD-based systems
* Fix possible NULL dereference (savannah bug #55369)
* Improve the testsuite
v2.1.5: Made the md5sum detection consistent with the header code. Check for
the presence of the archive directory. Added --encrypt for symmetric encryption
through gpg (Eric Windisch). Added support for the digest command on Solaris 10
for MD5 checksums. Check for available disk space before extracting to the
target directory (Andreas Schweitzer). Allow extraction to run asynchronously
(patch by Peter Hatch). Use file descriptors internally to avoid error messages
(patch by Kay Tiong Khoo).
v2.1.6: Replaced one dot per file progress with a realtime progress percentage
and a spinning cursor. Added --noprogress to prevent showing the progress during
the decompression. Added --target dir to allow extracting directly to a target
directory. (Guy Baconniere)
v2.2.0: First major new release in years! Includes many bugfixes and user
contributions. Please look at the project page on Github for all the details.
v2.3.0: Support for archive encryption via GPG or OpenSSL. Added LZO and LZ4
compression support. Options to set the packaging date and stop the umask from
being overridden. Optionally ignore check for available disk space when
extracting. New option to check for root permissions before extracting.
v2.3.1: Various compatibility updates. Added unit tests for Travis CI in the
GitHub repo. New --tar-extra, --untar-extra, --gpg-extra,
--gpg-asymmetric-encrypt-sign options.
v2.4.0: Added optional support for SHA256 archive integrity checksums.
Changes in version 1.21:
The options '--dump', '--remove' and '--strip' have been added, mainly as
support for the tarlz archive format: http://www.nongnu.org/lzip/tarlz.html
These options replace '--dump-tdata', '--remove-tdata' and '--strip-tdata',
which are now aliases and will be removed in version 1.22.
'--dump=[<member_list>][:damaged][:tdata]' dumps the members listed, the
damaged members (if any), or the trailing data (if any) of one or more
regular multimember files to standard output.
'--remove=[<member_list>][:damaged][:tdata]' removes the members listed,
the damaged members (if any), or the trailing data (if any) from regular
multimember files in place.
'--strip=[<member_list>][:damaged][:tdata]' copies one or more regular
multimember files to standard output, stripping the members listed, the
damaged members (if any), or the trailing data (if any) from each file.
Detection of forbidden combinations of characters in trailing data has been
improved.
'--split' can now detect trailing data and gaps between members, and save
each gap in its own file. Trailing data (if any) are saved alone in the last
file. (Gaps may contain garbage or may be members with corrupt headers or
trailers).
'--ignore-errors' now makes '--list' show gaps between members, ignoring
format errors.
'--ignore-errors' now makes '--range-decompress' ignore a truncated last
member.
Errors are now also checked when closing the input file in decompression
mode.
Some diagnostic messages have been improved.
'\n' is now printed instead of '\r' when showing progress of merge or repair
if stdout is not a terminal.
Lziprecover now compiles on DOS with DJGPP. (Patch from Robert Riebisch).
The new chapter 'Tarlz', explaining the ways in which lziprecover can
recover and process multimember tar.lz archives, has been added to the
manual.
The configure script now accepts appending options to CXXFLAGS using the
syntax 'CXXFLAGS+=OPTIONS'.
The use of CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO' when compiling on MinGW
has been documented in INSTALL.
Changes in version 1.21:
Detection of forbidden combinations of characters in trailing data has been
improved.
Errors are now also checked when closing the input file.
Lzip now compiles on DOS with DJGPP. (Patch from Robert Riebisch).
The descriptions of '-0..-9', '-m' and '-s' in the manual have been
improved.
The configure script now accepts appending options to CXXFLAGS using the
syntax 'CXXFLAGS+=OPTIONS'.
The use of CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO' when compiling on MinGW
has been documented in INSTALL.
This update was prompted in part because prior releases fail to build on Linux
distributions that use glibc >= 2.27 (relates to PR pkg/53826).
* Noteworthy changes in release 1.10 (2018-12-29) [stable]
** Changes in behavior
Compressed gzip output no longer contains the current time as a
timestamp when the input is not a regular file. Instead, the output
contains a null (zero) timestamp. This makes gzip's behavior more
reproducible when used as part of a pipeline. (As a reminder, even
regular files will use null timestamps after the year 2106, due to a
limitation in the gzip format.)
** Bug fixes
A use of uninitialized memory on some malformed inputs has been fixed.
[bug present since the beginning]
A few theoretical race conditions in signal handlers have been fixed.
These bugs most likely do not happen on practical platforms.
[bugs present since the beginning]
* Noteworthy changes in release 1.9 (2018-01-07) [stable]
** Bug fixes
gzip -d -S SUFFIX file.SUFFIX would fail for any upper-case byte in SUFFIX.
E.g., before, this command would fail:
$ :|gzip > kT && gzip -d -S T kT
gzip: kT: unknown suffix -- ignored
[bug present since the beginning]
When decompressing data in 'pack' format, gzip no longer mishandles
leading zeros in the end-of-block code. [bug introduced in gzip-1.6]
When converting from system-dependent time_t format to the 32-bit
unsigned MTIME format used in gzip files, if a timestamp does not
fit gzip now substitutes zero instead of the timestamp's low-order
32 bits, as per Internet RFC 1952. When converting from MTIME to
time_t format, if a timestamp does not fit gzip now warns and
substitutes the nearest in-range value instead of crashing or
silently substituting an implementation-defined value (typically,
the timestamp's low-order bits). This affects timestamps before
1970 and after 2106, and timestamps after 2038 on platforms with
32-bit signed time_t. [bug present since the beginning]
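A sketch of the substitution rule described above (not gzip's actual code): the MTIME field in a gzip header is a 32-bit unsigned value per RFC 1952, so out-of-range timestamps are stored as zero rather than truncated to their low-order 32 bits.

    # Sketch only; mirrors the rule described above, not gzip's implementation.
    def mtime_field(timestamp: int) -> int:
        # Value to store in the 32-bit unsigned MTIME field of a gzip header.
        if 0 <= timestamp < 2**32:
            return timestamp
        return 0  # out of range: store zero instead of the low-order 32 bits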
Commands implemented via shell scripts are now more consistent about
failure status. For example, 'gunzip --help >/dev/full' now
consistently exits with status 1 (error), instead of with status 2
(warning) on some platforms. [bug present since the beginning]
Support for VMS and Amiga has been removed. It was not working anyway,
and it reportedly caused file name glitches on MS-Windowsish platforms.
* Noteworthy changes in release 1.8 (2016-04-26) [stable]
** Bug fixes
gzip -l no longer falsely reports a write error when writing to a pipe.
[bug introduced in gzip-1.7]
Port to Oracle Solaris Studio 12 on x86-64.
[bug present since at least gzip-1.2.4]
When configuring gzip, ./configure DEFS='...-DNO_ASM...' now
suppresses assembler again. [bug introduced in gzip-1.3.5]
* Noteworthy changes in release 1.7 (2016-03-27) [stable]
** Changes in behavior
The GZIP environment variable is now obsolescent; gzip now warns if
it is used, and rejects attempts to use dangerous options or operands.
You can use an alias or script instead.
Installed programs like 'zgrep' now use the PATH environment variable
as usual to find subsidiary programs like 'gzip' and 'grep'.
Previously they prepended the installation directory to the PATH,
which sometimes caused 'make check' to test the wrong gzip executable.
[bug introduced in gzip-1.3.13]
** New features
gzip now accepts the --synchronous option, which causes it to use
fsync and similar primitives to transfer output data to the output
file's storage device when the file system supports this. Although
this option makes gzip safer in the presence of system crashes, it
can make gzip considerably slower.
gzip now accepts the --rsyncable option. This option is accepted in
all modes, but has effect only when compressing: it makes the resulting
output more amenable to efficient use of rsync. For example, when a
large input file gets a small change, a gzip --rsyncable image of
that file will remain largely unchanged, too. Without --rsyncable,
even a tiny change in the input could result in a totally different
gzip-compressed output file.
** Bug fixes
gzip -k -v no longer reports that files are replaced.
[bug present since the beginning]
zgrep -f A B C no longer reads A more than once if A is not a regular file.
This better supports invocations like 'zgrep -f <(COMMAND) B C' in Bash.
[bug introduced in gzip-1.2]
Bz2file is a Python library for reading and writing bzip2-compressed files. It
contains a drop-in replacement for the file interface in the standard library's
bz2 module, including features from the latest development version of CPython
that are not available in older releases.
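A minimal usage sketch, assuming the package is importable as bz2file; the file name is illustrative:

    import bz2file

    # BZ2File mirrors the standard library's bz2.BZ2File interface, including
    # newer features such as multi-stream files and reading from file objects.
    with bz2file.BZ2File("example.bz2", "rb") as fh:
        data = fh.read()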
Changelog:
version 1.31 - Sergey Poznyakoff, 2019-01-02
* Fix heap-buffer-overrun with --one-top-level.
Bug introduced with the addition of that option in 1.28.
* Support for zstd compression
New option '--zstd' instructs tar to use zstd as compression program.
When listing, extracting and comparing, zstd compressed archives are
recognized automatically.
When '-a' option is in effect, zstd compression is selected if the
destination archive name ends in '.zst' or '.tzst'.
* The -K option interacts properly with member names given in the command line
Names of members to extract can be specified along with the "-K NAME"
option. In this case, tar will extract NAME and those of named members
that appear in the archive after it, which is consistent with the
semantics of the option.
Previous versions of tar extracted NAME, those of named members that
appeared before it, and everything after it.
* Fix CVE-2018-20482
When creating archives with the --sparse option, previous versions of
tar would loop endlessly if a sparse file had been truncated while
being archived.
Zstandard v1.3.8
perf: better decompression speed on large files (+7%) and cold dictionaries (+15%)
perf: slightly better compression ratio at high compression modes
api : finalized advanced API, last stage before "stable" status
api : new --rsyncable mode
api : support decompression of empty frames into NULL (used to be an error)
build: new set of build macros to generate a minimal size decoder
build: fix compilation on MIPS32
build: fix compilation with multiple -arch flags
build: highly upgraded meson build
build: improved buck support
build: fix cmake script : can create debug build
build: Makefile : grep works on both colored consoles and systems without color support
build: fixed zstd-pgo target
cli : support ZSTD_CLEVEL environment variable
cli : --no-progress flag, preserving final summary
cli : ensure destination file is not source file
cli : clearer error messages, notably when input file not present
doc : clarified zstd_compression_format.md
misc: fixed zstdgrep, returns 1 on failure
misc: NEWS renamed as CHANGELOG, in accordance with fb.oss policy
2.1.5
This release contains no functional changes other than changes to the Appveyor configuration for publishing wheels.
2.1.4
This release contains no functional changes other than changes to the Travis configuration for publishing wheels.
2.1.3
A simplification of the tox.ini file
More robust checking for pkgconfig availability
Integration of cibuildwheel into travis builds so as to build and publish binary wheels for Linux and OSX
Only require pytest-runner if pytest/test is being called
Blacklists version 3.3.0 of pytest which has a bug that can cause the tests to fail.