1.5.2 [2019-03-12]
==================
* Fix bug in AES encryption affecting certain file sizes
* Keep file permissions when modifying zip archives
* Support systems with small stack size.
* Support mbed TLS as crypto backend.
* Add nullability annotations.
### engrampa 1.22.0
* Translations update
* Avoid array index out of bounds parsing dpkg-deb --info
* Fix warning: use of memory after it is freed
* Read authors (updated) from engrampa.about gresource
* Enable Travis CI
* eggsmclient: avoid deprecated 'g_type_class_add_private'
* update copyright year to 2019
* rar/unrar: fix so that the disabled "overwrite existing files" option is honored
* fix fr-command-cfile.c: fr_process_set_working_dir
* fr-command-cfile.c: fix indentation
* Added integrity test for brotli
* Added integrity tests for the cfile compressors: gzip, bzip2, etc.
* move appdata to metainfo directory
* fr-window: show the pause button only if the dialog is working
* disable deprecation warnings for distcheck
* fr-window: avoid 'gtk_dialog_add_button' with stock ids
* fr-window: hide the progress bar if the process is paused
* fr-window: change the info label if process is paused/resumed
* fr-window: little improvements in the look of pause/resume button
* Adding pause and start functions
* Fix implementation and use of the alternative package name lookup
* Added support for brotli (*.tar.br) compressed tar archives
* Add brotli support
* Use make functions for HELP_LINGUAS
* Replace -Dokumentationteam
* Replace -Dokumentationsprojekt with Documentation Project
* Manual: Update file format descriptions using shared-mime-info
* Fix url of ulinks to point to mate-user-guide
* UNIX and Linux systems -> Linux and UNIX-like systems
* tx: add atril help to transifex config
* Add the ability to support 'unar' over .zip archives
* Add support for OpenDocument formats
* UI: on the properties dialog, focus the Close button instead of the Help button by default
0.11.0 (released 2019-02-24)
============================
Backwards Compatibility Notes
-----------------------------
* ZstdDecompressor.read() now allows reading sizes of -1 or 0
and defaults to -1, per the documented behavior of
io.RawIOBase.read(). Previously, we required an argument that was
a positive value.
* The readline(), readlines(), __iter__, and __next__ methods
of ZstdDecompressionReader() now raise io.UnsupportedOperation
instead of NotImplementedError.
* ZstdDecompressor.stream_reader() now accepts a read_across_frames
argument. The default value will likely be changed in a future release,
and consumers are advised to pass the argument to avoid an unwanted
change of behavior in the future (see the sketch below).
* setup.py now always disables the CFFI backend if the installed
CFFI package does not meet the minimum version requirements. Before, it was
possible for the CFFI backend to be generated and a run-time error to
occur.
* In the CFFI backend, CompressionReader and DecompressionReader
were renamed to ZstdCompressionReader and ZstdDecompressionReader,
respectively so naming is identical to the C extension. This should have
no meaningful end-user impact, as instances aren't meant to be
constructed directly.
* ZstdDecompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes
read from the source / written to the decompressor. It defaults to off,
which preserves the existing behavior of returning the number of bytes
emitted from the decompressor. The default will change in a future release
so behavior aligns with the specified behavior of io.RawIOBase.
* ZstdDecompressionWriter.__exit__ now calls self.close(). This
will result in that stream plus the underlying stream being closed as
well. If this behavior is not desirable, do not use instances as
context managers.
* ZstdCompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source / written to the compressor. It defaults to off, which
preserves the existing behavior of returning the number of bytes emitted
from the compressor. The default will change in a future release so
behavior aligns with the specified behavior of io.RawIOBase.
* ZstdCompressionWriter.__exit__ now calls self.close(). This will
result in that stream plus any underlying stream being closed as well. If
this behavior is not desirable, do not use instances as context managers.
* ZstdDecompressionWriter no longer requires being used as a context
manager.
* ZstdCompressionWriter no longer requires being used as a context
manager.
* The overlap_size_log attribute on CompressionParameters instances
has been deprecated and will be removed in a future release. The
overlap_log attribute should be used instead.
* The overlap_size_log argument to CompressionParameters has been
deprecated and will be removed in a future release. The overlap_log
argument should be used instead.
* The ldm_hash_every_log attribute on CompressionParameters instances
has been deprecated and will be removed in a future release. The
ldm_hash_rate_log attribute should be used instead.
* The ldm_hash_every_log argument to CompressionParameters has been
deprecated and will be removed in a future release. The ldm_hash_rate_log
argument should be used instead.
* The compression_strategy argument to CompressionParameters has been
deprecated and will be removed in a future release. The strategy
argument should be used instead.
* The SEARCHLENGTH_MIN and SEARCHLENGTH_MAX constants are deprecated
and will be removed in a future release. Use MINMATCH_MIN and
MINMATCH_MAX instead.
* The zstd_cffi module has been renamed to zstandard.cffi. As had
been documented in the README file since the 0.9.0 release, the
module should not be imported directly at its new location. Instead,
import zstandard to cause an appropriate backend module to be loaded
automatically.
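For illustration, a minimal sketch of the read_across_frames argument
referenced above; the input file name is an assumption:

    import zstandard as zstd

    dctx = zstd.ZstdDecompressor()
    with open("data.zst", "rb") as fh:  # assumed multi-frame zstd input
        # Pass read_across_frames explicitly so behavior stays fixed even
        # if the default changes in a later release.
        reader = dctx.stream_reader(fh, read_across_frames=True)
        data = reader.read()  # size defaults to -1, i.e. read to EOF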
Bug Fixes
---------
* CFFI backend could encounter a failure when sending an empty chunk into
ZstdDecompressionObj.decompress(). The issue has been fixed.
* CFFI backend could encounter an error when calling
ZstdDecompressionReader.read() if there was data remaining in an
internal buffer. The issue has been fixed.
Changes
-------
* ZstdDecompressionObj.decompress() now properly handles empty inputs in
the CFFI backend.
* ZstdCompressionReader now implements read1() and readinto1().
These are part of the io.BufferedIOBase interface.
* ZstdCompressionReader has gained a readinto(b) method for reading
compressed output into an existing buffer.
* ZstdCompressionReader.read() now defaults to size=-1 and accepts
read sizes of -1 and 0. The new behavior aligns with the documented
behavior of io.RawIOBase.
* ZstdCompressionReader now implements readall(). Previously, this
method raised NotImplementedError.
* ZstdDecompressionReader now implements read1() and readinto1().
These are part of the io.BufferedIOBase interface.
* ZstdDecompressionReader.read() now defaults to size=-1 and accepts
read sizes of -1 and 0. The new behavior aligns with the documented
behavior of io.RawIOBase.
* ZstdDecompressionReader() now implements readall(). Previously, this
method raised NotImplementedError.
* The readline(), readlines(), __iter__, and __next__ methods
of ZstdDecompressionReader() now raise io.UnsupportedOperation
instead of NotImplementedError. This reflects a decision to never
implement text-based I/O on (de)compressors and keep the low-level API
operating in the binary domain.
* README.rst now documented how to achieve linewise iteration using
an io.TextIOWrapper with a ZstdDecompressionReader.
* ZstdDecompressionReader has gained a readinto(b) method for
reading decompressed output into an existing buffer. This allows chaining
to an io.TextIOWrapper on Python 3 without using an io.BufferedReader.
* ZstdDecompressor.stream_reader() now accepts a read_across_frames
argument to control behavior when the input data has multiple zstd
*frames*. When False (the default for backwards compatibility), a
read() will stop when the end of a zstd *frame* is encountered. When
True, read() can potentially return data spanning multiple zstd
*frames*. The default will likely be changed to True in a future
release.
* setup.py now performs CFFI version sniffing and disables the CFFI
backend if CFFI is too old. Previously, we only used install_requires
to enforce the CFFI version and not all build modes would properly enforce
the minimum CFFI version.
* CFFI's ZstdDecompressionReader.read() now properly handles data
remaining in any internal buffer. Before, repeated read() could
result in *random* errors.
* Upgraded various Python packages in CI environment.
* Upgrade to hypothesis 4.5.11.
* In the CFFI backend, CompressionReader and DecompressionReader
were renamed to ZstdCompressionReader and ZstdDecompressionReader,
respectively.
* ZstdDecompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source. It defaults to False to preserve backwards
compatibility.
* ZstdDecompressor.stream_writer() now implements the io.RawIOBase
interface and behaves as a proper stream object.
* ZstdCompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source. It defaults to False to preserve backwards
compatibility.
* ZstdCompressionWriter now implements the io.RawIOBase interface and
behaves as a proper stream object. close() will now close the stream
and the underlying stream (if possible). __exit__ will now call
close(). Methods like writable() and fileno() are implemented.
* ZstdDecompressionWriter no longer must be used as a context manager.
* ZstdCompressionWriter no longer must be used as a context manager.
When not used as a context manager, it is important to call
flush(FLUSH_FRAME) or the compression stream won't be properly
terminated and decoders may complain about malformed input (see the
sketch below).
* ZstdCompressionWriter.flush() (on the object returned from
ZstdCompressor.stream_writer()) now accepts an argument controlling the
flush behavior. Its value can be one of the new constants
FLUSH_BLOCK or FLUSH_FRAME.
* ZstdDecompressionObj instances now have a flush([length=None]) method.
This provides parity with standard library equivalent types.
* CompressionParameters no longer redundantly store individual compression
parameters on each instance. Instead, compression parameters are stored inside
the underlying ZSTD_CCtx_params instance. Attributes for obtaining
parameters are now properties rather than instance variables.
* Exposed the STRATEGY_BTULTRA2 constant.
* CompressionParameters instances now expose an overlap_log attribute.
This behaves identically to the overlap_size_log attribute.
* CompressionParameters() now accepts an overlap_log argument that
behaves identically to the overlap_size_log argument. An error will be
raised if both arguments are specified.
* CompressionParameters instances now expose an ldm_hash_rate_log
attribute. This behaves identically to the ldm_hash_every_log attribute.
* CompressionParameters() now accepts a ldm_hash_rate_log argument that
behaves identically to the ldm_hash_every_log argument. An error will be
raised if both arguments are specified.
* CompressionParameters() now accepts a strategy argument that behaves
identically to the compression_strategy argument. An error will be raised
if both arguments are specified.
* The MINMATCH_MIN and MINMATCH_MAX constants were added. They are
semantically equivalent to the old SEARCHLENGTH_MIN and
SEARCHLENGTH_MAX constants.
* Bundled zstandard library upgraded from 1.3.7 to 1.3.8.
* setup.py denotes support for Python 3.7 (Python 3.7 was supported and
tested in the 0.10 release).
* zstd_cffi module has been renamed to zstandard.cffi.
* ZstdCompressor.stream_writer() now reuses a buffer in order to avoid
allocating a new buffer for every operation. This should result in faster
performance in cases where write() or flush() are being called
frequently.
* Bundled zstandard library upgraded from 1.3.6 to 1.3.7.
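The sketch referenced above shows ZstdCompressionWriter used without a
context manager, in which case the frame must be terminated explicitly
with flush(FLUSH_FRAME); the input bytes and in-memory destination are
assumptions:

    import io
    import zstandard as zstd

    cctx = zstd.ZstdCompressor()
    dest = io.BytesIO()                # assumed destination stream

    writer = cctx.stream_writer(dest)  # not used as a context manager
    writer.write(b"chunk one")
    writer.write(b"chunk two")
    writer.flush(zstd.FLUSH_FRAME)     # explicitly terminate the zstd frame

    compressed = dest.getvalue()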
version 1.32 - Sergey Poznyakoff, 2019-02-23
* Fix the use of --checkpoint without explicit --checkpoint-action
* Fix extraction with the -U option
See http://lists.gnu.org/archive/html/bug-tar/2019-01/msg00015.html,
for details
* Fix iconv usage on BSD-based systems
* Fix possible NULL dereference (savannah bug #55369)
* Improve the testsuite
v2.1.5: Made the md5sum detection consistent with the header code. Check for
the presence of the archive directory. Added --encrypt for symmetric encryption
through gpg (Eric Windisch). Added support for the digest command on Solaris 10
for MD5 checksums. Check for available disk space before extracting to the
target directory (Andreas Schweitzer). Allow extraction to run asynchronously
(patch by Peter Hatch). Use file descriptors internally to avoid error messages
(patch by Kay Tiong Khoo).
v2.1.6: Replaced one dot per file progress with a realtime progress percentage
and a spinning cursor. Added --noprogress to prevent showing the progress during
the decompression. Added --target dir to allow extracting directly to a target
directory. (Guy Baconniere)
v2.2.0: First major new release in years! Includes many bugfixes and user
contributions. Please look at the project page on Github for all the details.
v2.3.0: Support for archive encryption via GPG or OpenSSL. Added LZO and LZ4
compression support. Options to set the packaging date and stop the umask from
being overridden. Optionally ignore check for available disk space when
extracting. New option to check for root permissions before extracting.
v2.3.1: Various compatibility updates. Added unit tests for Travis CI in the
GitHub repo. New --tar-extra, --untar-extra, --gpg-extra,
--gpg-asymmetric-encrypt-sign options.
v2.4.0: Added optional support for SHA256 archive integrity checksums.
Changes in version 1.21:
The options '--dump', '--remove' and '--strip' have been added, mainly as
support for the tarlz archive format: http://www.nongnu.org/lzip/tarlz.html
These options replace '--dump-tdata', '--remove-tdata' and '--strip-tdata',
which are now aliases and will be removed in version 1.22.
'--dump=[<member_list>][:damaged][:tdata]' dumps the members listed, the
damaged members (if any), or the trailing data (if any) of one or more
regular multimember files to standard output.
'--remove=[<member_list>][:damaged][:tdata]' removes the members listed,
the damaged members (if any), or the trailing data (if any) from regular
multimember files in place.
'--strip=[<member_list>][:damaged][:tdata]' copies one or more regular
multimember files to standard output, stripping the members listed, the
damaged members (if any), or the trailing data (if any) from each file.
Detection of forbidden combinations of characters in trailing data has been
improved.
'--split' can now detect trailing data and gaps between members, and save
each gap in its own file. Trailing data (if any) are saved alone in the last
file. (Gaps may contain garbage or may be members with corrupt headers or
trailers).
'--ignore-errors' now makes '--list' show gaps between members, ignoring
format errors.
'--ignore-errors' now makes '--range-decompress' ignore a truncated last
member.
Errors are now also checked when closing the input file in decompression
mode.
Some diagnostic messages have been improved.
'\n' is now printed instead of '\r' when showing progress of merge or repair
if stdout is not a terminal.
Lziprecover now compiles on DOS with DJGPP. (Patch from Robert Riebisch).
The new chapter 'Tarlz', explaining the ways in which lziprecover can
recover and process multimember tar.lz archives, has been added to the
manual.
The configure script now accepts appending options to CXXFLAGS using the
syntax 'CXXFLAGS+=OPTIONS'.
It has been documented in INSTALL the use of
CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO' when compiling on MinGW.
Changes in version 1.21:
Detection of forbidden combinations of characters in trailing data has been
improved.
Errors are now also checked when closing the input file.
Lzip now compiles on DOS with DJGPP. (Patch from Robert Riebisch).
The descriptions of '-0..-9', '-m' and '-s' in the manual have been
improved.
The configure script now accepts appending options to CXXFLAGS using the
syntax 'CXXFLAGS+=OPTIONS'.
It has been documented in INSTALL the use of
CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO' when compiling on MinGW.
Prompted in part by prior releases failing to build on Linux
distributions that use glibc >= 2.27 (relates to PR pkg/53826).
* Noteworthy changes in release 1.10 (2018-12-29) [stable]
** Changes in behavior
Compressed gzip output no longer contains the current time as a
timestamp when the input is not a regular file. Instead, the output
contains a null (zero) timestamp. This makes gzip's behavior more
reproducible when used as part of a pipeline. (As a reminder, even
regular files will use null timestamps after the year 2106, due to a
limitation in the gzip format.)
** Bug fixes
A use of uninitialized memory on some malformed inputs has been fixed.
[bug present since the beginning]
A few theoretical race conditions in signal handlers have been fixed.
These bugs most likely do not happen on practical platforms.
[bugs present since the beginning]
* Noteworthy changes in release 1.9 (2018-01-07) [stable]
** Bug fixes
gzip -d -S SUFFIX file.SUFFIX would fail for any upper-case byte in SUFFIX.
E.g., before, this command would fail:
$ :|gzip > kT && gzip -d -S T kT
gzip: kT: unknown suffix -- ignored
[bug present since the beginning]
When decompressing data in 'pack' format, gzip no longer mishandles
leading zeros in the end-of-block code. [bug introduced in gzip-1.6]
When converting from system-dependent time_t format to the 32-bit
unsigned MTIME format used in gzip files, if a timestamp does not
fit gzip now substitutes zero instead of the timestamp's low-order
32 bits, as per Internet RFC 1952. When converting from MTIME to
time_t format, if a timestamp does not fit gzip now warns and
substitutes the nearest in-range value instead of crashing or
silently substituting an implementation-defined value (typically,
the timestamp's low-order bits). This affects timestamps before
1970 and after 2106, and timestamps after 2038 on platforms with
32-bit signed time_t. [bug present since the beginning]
Commands implemented via shell scripts are now more consistent about
failure status. For example, 'gunzip --help >/dev/full' now
consistently exits with status 1 (error), instead of with status 2
(warning) on some platforms. [bug present since the beginning]
Support for VMS and Amiga has been removed. It was not working anyway,
and it reportedly caused file name glitches on MS-Windowsish platforms.
* Noteworthy changes in release 1.8 (2016-04-26) [stable]
** Bug fixes
gzip -l no longer falsely reports a write error when writing to a pipe.
[bug introduced in gzip-1.7]
Port to Oracle Solaris Studio 12 on x86-64.
[bug present since at least gzip-1.2.4]
When configuring gzip, ./configure DEFS='...-DNO_ASM...' now
suppresses assembler again. [bug introduced in gzip-1.3.5]
* Noteworthy changes in release 1.7 (2016-03-27) [stable]
** Changes in behavior
The GZIP environment variable is now obsolescent; gzip now warns if
it is used, and rejects attempts to use dangerous options or operands.
You can use an alias or script instead.
Installed programs like 'zgrep' now use the PATH environment variable
as usual to find subsidiary programs like 'gzip' and 'grep'.
Previously they prepended the installation directory to the PATH,
which sometimes caused 'make check' to test the wrong gzip executable.
[bug introduced in gzip-1.3.13]
** New features
gzip now accepts the --synchronous option, which causes it to use
fsync and similar primitives to transfer output data to the output
file's storage device when the file system supports this. Although
this option makes gzip safer in the presence of system crashes, it
can make gzip considerably slower.
gzip now accepts the --rsyncable option. This option is accepted in
all modes, but has effect only when compressing: it makes the resulting
output more amenable to efficient use of rsync. For example, when a
large input file gets a small change, a gzip --rsyncable image of
that file will remain largely unchanged, too. Without --rsyncable,
even a tiny change in the input could result in a totally different
gzip-compressed output file.
** Bug fixes
gzip -k -v no longer reports that files are replaced.
[bug present since the beginning]
zgrep -f A B C no longer reads A more than once if A is not a regular file.
This better supports invocations like 'zgrep -f <(COMMAND) B C' in Bash.
[bug introduced in gzip-1.2]
Bz2file is a Python library for reading and writing bzip2-compressed files. It
contains a drop-in replacement for the file interface in the standard library's
bz2 module, including features from the latest development version of CPython
that are not available in older releases.
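A minimal usage sketch, assuming an existing bzip2-compressed file named
example.bz2:

    import bz2file

    # BZ2File is a drop-in replacement for bz2.BZ2File; it also accepts an
    # already-open file object, a feature backported from newer CPython.
    with bz2file.BZ2File("example.bz2", "rb") as fh:
        data = fh.read()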
Changelog:
version 1.31 - Sergey Poznyakoff, 2019-01-02
* Fix heap-buffer-overrun with --one-top-level.
Bug introduced with the addition of that option in 1.28.
* Support for zstd compression
New option '--zstd' instructs tar to use zstd as compression program.
When listing, extracting and comparing, zstd compressed archives are
recognized automatically.
When '-a' option is in effect, zstd compression is selected if the
destination archive name ends in '.zst' or '.tzst'.
* The -K option interacts properly with member names given in the command line
Names of members to extract can be specified along with the "-K NAME"
option. In this case, tar will extract NAME and those of named members
that appear in the archive after it, which is consistent with the
semantics of the option.
Previous versions of tar extracted NAME, those of named members that
appeared before it, and everything after it.
* Fix CVE-2018-20482
When creating archives with the --sparse option, previous versions of
tar would loop endlessly if a sparse file had been truncated while
being archived.
Zstandard v1.3.8
perf: better decompression speed on large files (+7%) and cold dictionaries (+15%)
perf: slightly better compression ratio at high compression modes
api : finalized advanced API, last stage before "stable" status
api : new --rsyncable mode
api : support decompression of empty frames into NULL (used to be an error)
build: new set of build macros to generate a minimal size decoder
build: fix compilation on MIPS32
build: fix compilation with multiple -arch flags
build: highly upgraded meson build
build: improved buck support
build: fix cmake script : can create debug build
build: Makefile : grep works on both colored consoles and systems without color support
build: fixed zstd-pgo target
cli : support ZSTD_CLEVEL environment variable
cli : --no-progress flag, preserving final summary
cli : ensure destination file is not source file
cli : clearer error messages, notably when input file not present
doc : clarified zstd_compression_format.md
misc: fixed zstdgrep, returns 1 on failure
misc: NEWS renamed as CHANGELOG, in accordance with fb.oss policy
2.1.5
This release contains no functional changes other than changes to the Appveyor configuration for publishing wheels.
2.1.4
This release contains no functional changes other than changes to the Travis configuration for publishing wheels.
2.1.3
A simplification of the tox.ini file
More robust checking for pkgconfig availability
Integration of cibuildwheel into travis builds so as to build and publish binary wheels for Linux and OSX
Only require pytest-runner if pytest/test is being called
Blacklists version 3.3.0 of pytest which has a bug that can cause the tests to fail.
1.0.7
cross compilation support:
added ability to run cross-compiled ARM tests in qemu
added arm-linux-gnueabihf-gcc entry to Travis build matrix
faster decoding on ARM:
implemented prefetching HuffmanCode entry as uint32_t if target platform is ARM
fixed NEON extension detection
combed Huffman table building code for better readability
improved precision of window size calculation in CLI
minor fixes:
fixed typos
improved internal comments / parameter names
fixed BROTLI_PREDICT_TRUE/_FALSE detection for SunPro compiler
unburdened JNI (Bazel) builds from fetching the full JDK
1.0.6
Fixes
fix unaligned 64-bit accesses on AArch32
add missing files to the sources list
add ASAN/MSAN unaligned read specializations
fix CoverityScan "unused assignment" warning
fix JDK 8<->9 incompatibility
unbreak Travis builds
fix auto detect of bundled mode in cmake
* libmspack is now distributed with its test-suite, which now runs
as part of "make check"
* libmspack's programs in src/ have been moved to examples/ and do
not auto-install
Set TEST_TARGET.
New in 1.9
* Fixed invisible bad extraction when using cabextract -F (broken in 1.8)
* Fixed configure --with-external-libmspack which was broken in 1.8
* configure --with-external-libmspack will now use pkg-config. To configure
it manually, set environment variables libmspack_CFLAGS and libmspack_LIBS
before running configure.
* Now includes the test suite (make check)
New in 1.8
* cabextract -f now extracts even more badly damaged files than before
Upstream changes:
2.32 13/09/2018 (CBERRY)
- Fix absolute path handling on VMS
2.30 19/06/2018
- skip white_space test on MSWin32 as Windows will report that both
files exist, which is obviously a 'feature'
2.1.2:
Improves the speed of importing the module by avoiding the use of pkg_resources
Fixes some flake8 warnings
Resolves a small issue with the test suite when detecting memory usage increases
* On Arch Linux, the build failed; makedev(3) indicates that
#include <sys/sysmacros.h> is required
* On Debian Buster, the build succeeds, but a big warning is displayed:
warning: In the GNU C Library, "minor" is defined
by <sys/sysmacros.h>. For historical compatibility, it is
currently defined by <sys/types.h> as well, but we plan to
remove this soon. To use "minor", include <sys/sysmacros.h>
directly. If you did not intend to use a system-defined macro
"minor", you should undefine it after including <sys/types.h>.
0.10.1:
Backwards Compatibility Notes
* ZstdCompressor.stream_reader().closed is now a property instead of a
method.
* ZstdDecompressor.stream_reader().closed is now a property instead of a
method.
Changes
* Stop attempting to package Python 3.6 for Miniconda. The latest version of
Miniconda is using Python 3.7. The Python 3.6 Miniconda packages were a lie
since they were built against Python 3.7.
* ZstdCompressor.stream_reader()'s and ZstdDecompressor.stream_reader()'s
closed attribute is now a read-only property instead of a method. This now
properly matches the IOBase API and allows instances to be used in more
places that accept IOBase instances.
0.10.0:
Backwards Compatibility Notes
* ZstdDecompressor.stream_reader().read() now consistently requires an
argument in both the C and CFFI backends. Before, the CFFI implementation
would assume a default value of -1, which was later rejected.
* The compress_literals argument and attribute has been removed from
zstd.ZstdCompressionParameters because it was removed by the zstd 1.3.5
API.
* ZSTD_CCtx_setParametersUsingCCtxParams() is no longer called on every
operation performed against ZstdCompressor instances. The reason for this
change is that the zstd 1.3.5 API no longer allows this without calling
ZSTD_CCtx_resetParameters() first. But if we called
ZSTD_CCtx_resetParameters() on every operation, we'd have to redo
potentially expensive setup when using dictionaries. We now call
ZSTD_CCtx_reset() on every operation and don't attempt to change
compression parameters.
* Objects returned by ZstdCompressor.stream_reader() no longer need to be
used as a context manager. The context manager interface still exists and its
behavior is unchanged.
* Objects returned by ZstdDecompressor.stream_reader() no longer need to be
used as a context manager (sketched below). The context manager interface
still exists and its behavior is unchanged.
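A minimal sketch of the stream_reader() behavior in this release, as
referenced above; the sample payload is an assumption:

    import io
    import zstandard as zstd

    compressed = zstd.ZstdCompressor().compress(b"sample data")
    dctx = zstd.ZstdDecompressor()

    reader = dctx.stream_reader(io.BytesIO(compressed))  # no context manager needed
    chunk = reader.read(8192)  # in 0.10.0, read() requires an explicit size
    reader.close()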
Bug Fixes
* ZstdDecompressor.decompressobj().decompress() should now return all data
from internal buffers in more scenarios. Before, it was possible for data to
remain in internal buffers. This data would be emitted on a subsequent call
to decompress(). The overall output stream would still be valid. But if
callers were expecting input data to exactly map to output data (say the
producer had used flush(COMPRESSOBJ_FLUSH_BLOCK) and was attempting to
map input chunks to output chunks), then the previous behavior would be
wrong. The new behavior is such that output from
flush(COMPRESSOBJ_FLUSH_BLOCK) fed into decompressobj().decompress()
should produce all available compressed input.
* ZstdDecompressor.stream_reader().read() should no longer segfault after
a previous context manager resulted in error.
* ZstdCompressor.compressobj().flush(COMPRESSOBJ_FLUSH_BLOCK) now returns
all data necessary to flush a block. Before, it was possible for the
flush() to not emit all data necessary to fully represent a block. This
would mean decompressors wouldn't be able to decompress all data that had
been fed into the compressor and flush()ed (see the sketch below).
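A minimal round-trip sketch of the flush(COMPRESSOBJ_FLUSH_BLOCK)
behavior described above; the payload is an arbitrary assumption:

    import zstandard as zstd

    cobj = zstd.ZstdCompressor().compressobj()
    dobj = zstd.ZstdDecompressor().decompressobj()

    payload = b"example payload " * 64
    block = cobj.compress(payload) + cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK)

    # With the fix, the flushed block decompresses back to the full payload
    # in a single call.
    assert dobj.decompress(block) == payload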
New Features
* New module constants BLOCKSIZELOG_MAX, BLOCKSIZE_MAX,
TARGETLENGTH_MAX that expose constants from libzstd.
* New ZstdCompressor.chunker() API for manually feeding data into a
compressor and emitting chunks of a fixed size. Like compressobj(), the
API doesn't impose restrictions on the input or output types for the
data streams. Unlike compressobj(), it ensures output chunks are of a
fixed size. This makes the API useful when the compressed output is being
fed into an I/O layer where uniform write sizes matter (see the sketch
below).
* ZstdCompressor.stream_reader() no longer needs to be used as a context
manager.
* ZstdDecompressor.stream_reader() no longer needs to be used as a context
manager.
* Bundled zstandard library upgraded from 1.3.4 to 1.3.6.
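A minimal sketch of the chunker() API referenced above; the input chunks
and chunk size are assumptions:

    import zstandard as zstd

    cctx = zstd.ZstdCompressor()
    chunker = cctx.chunker(chunk_size=16384)  # emit fixed-size 16 KiB chunks

    out = []
    for raw in (b"first input block", b"second input block"):  # assumed inputs
        out.extend(chunker.compress(raw))  # yields zero or more complete chunks
    out.extend(chunker.finish())           # flush; the final chunk may be shorter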
Changes
* Added zstd_cffi.py and NEWS.rst to MANIFEST.in.
* zstandard.__version__ is now defined.
* Upgrade pip, setuptools, wheel, and cibuildwheel packages to latest versions.
* Upgrade various packages used in CI to latest versions. Notably tox (in
order to support Python 3.7).
* Use relative paths in setup.py to appease Python 3.7.
* Added CI for Python 3.7.
Zstandard v1.3.7
perf: slightly better decompression speed on clang (depending on hardware target)
fix: ratio for dictionary compression at levels 9 and 10, reported by @indygreg
build: no longer build backtrace by default in release mode; restrict further automatic mode
build: control backtrace support through build macro BACKTRACE
misc: added man pages for zstdless and zstdgrep, by @samrussell