- improvements for Android APK and JAR archives
- better support for non-recursive list and extract
- tar --exclude-vcs support
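The `--exclude-vcs` behavior above can be sketched with Python's standard `tarfile` module, whose `add()` accepts a `filter` callback; the set of directory names below is illustrative, not the exact list the upstream option uses:

```python
import tarfile

# Illustrative set of version-control directories to skip
VCS_DIRS = {".git", ".svn", ".hg", ".bzr", "CVS"}

def exclude_vcs(tarinfo):
    """tarfile 'filter' callback: return None to drop an entry,
    or the TarInfo unchanged to keep it."""
    parts = tarinfo.name.split("/")
    if any(p in VCS_DIRS for p in parts):
        return None
    return tarinfo
```

Usage would be `tar.add("project", filter=exclude_vcs)` on an open `tarfile.TarFile`.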
- fixes for file attributes and flags handling
- zipx support
- rar 5.0 reader
0.12.0:
Backwards Compatibility Notes
* Support for Python 3.4 has been dropped since Python 3.4 is no longer
a supported Python version upstream. (But it will likely continue to
work until Python 2.7 support is dropped and we port to Python 3.5+
APIs.)
Bug Fixes
* Fix ``ZstdDecompressor.__init__`` on 64-bit big-endian systems.
* Fix memory leak in ``ZstdDecompressionReader.seek()``.
Changes
* CI transitioned to Azure Pipelines (from AppVeyor and Travis CI).
* Switched to ``pytest`` for running tests (from ``nose``).
* Bundled zstandard library upgraded from 1.3.8 to 1.4.3.
Version 3.1:
This will be the last version with support for Python 2.x
New feature:
Accept pathlib objects as filenames.
Accept bytes filenames in Python 3
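Accepting both pathlib objects and bytes filenames usually comes down to one normalization step; this is a generic sketch of that pattern (the helper name is hypothetical, not rarfile's actual API):

```python
import os
import pathlib

def normalize_filename(fn):
    """Accept str, bytes, or os.PathLike and return a str path.
    Hypothetical helper illustrating the pattern described above."""
    if hasattr(fn, "__fspath__"):
        fn = os.fspath(fn)          # unwraps pathlib.Path and friends
    if isinstance(fn, bytes):
        fn = fn.decode("utf-8")     # bytes filenames allowed on Python 3
    return fn
```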
Fixes:
Use bug-compatible SHA1 for longer passwords (> 28 chars) in RAR3 encrypted headers.
Return true/false from _check_unrar_tool
Include all test files in archive
Include volume number in NeedFirstVolume exception if available (rar5).
Cleanups:
Convert tests to pytest.
v2.2.1:
Update the bundled LZ4 library to version 1.9.1
This release updates the bundled LZ4 library to version 1.9.1.
The 2.2.x releases will be the final releases supporting Python 2.7. In the near future we'll begin work on the 3.0.x releases, which will only support Python >= 3.5 and will require LZ4 > 1.9.0.
v2.2.0:
Add more detail to the install section of docs
v0.6.0
When adding implicit dirs, ensure that ancestral directories
are added and that duplicates are excluded.
The library now relies on more_itertools
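The implicit-directory behavior described above can be sketched in a few lines: walk each entry's ancestors and emit any directory not already present, skipping duplicates. A minimal illustration (not the library's actual implementation):

```python
def implied_dirs(names):
    """Yield each missing ancestral directory entry exactly once,
    in discovery order. Illustrative sketch only."""
    seen = set(names)
    for name in names:
        parts = name.split("/")[:-1]      # drop the final file component
        for i in range(1, len(parts) + 1):
            d = "/".join(parts[:i]) + "/"
            if d not in seen:
                seen.add(d)
                yield d
```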
5.61.0
KTar::openArchive: Don't assert if file has two root dirs
KZip::openArchive: Don't assert when opening broken files
5.60.0
Do not crash if the inner file wants to be bigger than QByteArray max size
5.59.0
Test reading and seeking in KCompressionDevice
KCompressionDevice: Remove bIgnoreData
KAr: fix out-of-bounds read (on invalid input) by porting to QByteArray
KAr: fix parsing of long filenames with Qt-5.10
KAr: the permissions are in octal, not decimal
KAr::openArchive: Also check ar_longnamesIndex is not < 0
KAr::openArchive: Fix invalid memory access on broken files
KAr::openArchive: Protect against Heap-buffer-overflow in broken files
KTar::KTarPrivate::readLonglink: Fix crash in malformed files
5.58.0
KTar: Protect against negative longlink sizes
Fix invalid memory write on malformed tar files
Fix memory leak when reading some tar files
Fix uninitialized memory use when reading malformed tar files
Fix stack-buffer-overflow read on malformed files
Fix null-dereference on malformed tar files
Install krcc.h header
Fix double delete on broken files
Disallow copy of KArchiveDirectoryPrivate and KArchivePrivate
Fix KArchive::findOrCreate running out of stack on VERY LONG paths
Introduce and use KArchiveDirectory::addEntryV2
removeEntry can fail so it's good to know if it did
KZip: fix Heap-use-after-free in broken files
LZ4 v1.9.2
fix : out-of-bound read in exceptional circumstances when using decompress_partial()
fix : slim opportunity for out-of-bound write with compress_fast() with a large enough input and when providing an output smaller than recommended (< LZ4_compressBound(inputSize))
fix : rare data corruption bug with LZ4_compress_destSize()
fix : data corruption bug when Streaming with an Attached Dict in HC Mode
perf: enable LZ4_FAST_DEC_LOOP on aarch64/GCC by default
perf: improved lz4frame streaming API speed
perf: speed up lz4hc on slow patterns when using external dictionary
api: better in-place decompression and compression support
cli : --list supports multi-frames files
cli: --version outputs to stdout
cli : add option --best as an alias of -12
misc: Integration into oss-fuzz
Zstandard v1.4.3
Dictionary Compression Regression
We discovered an issue in the v1.4.2 release, which can degrade the effectiveness of dictionary compression. This release fixes that issue.
Detailed Changes
* bug: Fix Dictionary Compression Ratio Regression
* bug: Fix Buffer Overflow in v0.3 Decompression
* build: Add support for IAR C/C++ Compiler for Arm
* misc: Add NULL pointer check in util.c
The canonical form [1] of an R package Makefile includes the
following:
- The first stanza includes R_PKGNAME, R_PKGVER, PKGREVISION (as
needed), and CATEGORIES.
- HOMEPAGE is not present but defined in math/R/Makefile.extension to
refer to the CRAN web page describing the package. Other relevant
web pages are often linked from there via the URL field.
This updates all current R packages to this form, which will make
regular updates _much_ easier, especially using pkgtools/R2pkg.
[1] http://mail-index.netbsd.org/tech-pkg/2019/08/02/msg021711.html
2019-02-18 Stuart Caie <kyzer@cabextract.org.uk>
* chmd_read_headers(): a CHM file name beginning "::" but shorter
than 33 bytes will lead to reading past the freshly-allocated name
buffer - checks for specific control filenames didn't take length
into account. Thanks to ADLab of Venustech for the report and
proof of concept.
2019-02-18 Stuart Caie <kyzer@cabextract.org.uk>
* chmd_read_headers(): CHM files can declare their chunks are any
size up to 4GB, and libmspack will attempt to allocate that to
read the file.
This is not a security issue; libmspack doesn't promise how much
memory it'll use to unpack files. You can set your own limits by
returning NULL in a custom mspack_system.alloc() implementation.
However, it would be good to validate chunk size further. With no
official specification, only empirical data is available. All files
created by hhc.exe have a chunk size of 4096 bytes, and this is
matched by all the files I've found in the wild, except for one
which has a chunk size of 8192 bytes, which was created by someone
developing a CHM file creator 15 years ago, and they appear to
have abandoned it, so it seems 4096 is a de-facto standard.
I've changed the "chunk size is not a power of two" warning to
"chunk size is not 4096", and now only allow chunk sizes between
22 and 8192 bytes. If you have CHM files with a larger chunk size,
please send them to me and I'll increase this upper limit.
Thanks to ADLab of Venustech for the report.
2019-02-18 Stuart Caie <kyzer@cabextract.org.uk>
* oabd.c: replaced one-shot copying of uncompressed blocks (which
requires allocating a buffer of the size declared in the header,
which can be 4GB) with a fixed-size buffer. The buffer size is
user-controllable with the new msoab_decompressor::set_param()
method (check you have version 2 of the OAB decompressor), and
also controls the input buffer used for OAB's LZX decompression.
Reminder: compression formats can dictate how much memory is
needed to decompress them. If memory usage is a security concern
to you, write a custom mspack_system.alloc() that returns NULL
if "too much" memory is requested. Do not rely on libmspack adding
special heuristics to know not to request "too much".
Thanks to ADLab of Venustech for the report.
Zstandard v1.4.2
Legacy Decompression Fix
This is a small release that corrects an issue discovered in the previous one. Zstandard v1.4.1 included a bug in decompressing v0.5 legacy frames, which is fixed in v1.4.2.
Detailed Changes
bug: Fix bug in zstd-0.5 decoder
bug: Fix seekable decompression in-memory API
bug: Close minor memory leak in CLI
misc: Validate blocks are smaller than size limit
misc: Restructure source files
1.0.8 (13 Jul 19)
~~~~~~~~~~~~~~~~~
* Accept as many selectors as the file format allows.
This relaxes the fix for CVE-2019-12900 from 1.0.7
so that bzip2 allows decompression of bz2 files that
use (too) many selectors again.
* Fix handling of large (> 4GB) files on Windows.
* Cleanup of bzdiff and bzgrep scripts so they don't use
any bash extensions and handle multiple archives correctly.
* There is now a bz2-files testsuite at
https://sourceware.org/git/bzip2-tests.git
1.0.7 (27 Jun 19)
~~~~~~~~~~~~~~~~~
* Fix undefined behavior in the macros SET_BH, CLEAR_BH, & ISSET_BH
* bzip2: Fix return value when combining --test,-t and -q.
* bzip2recover: Fix buffer overflow for large argv[0]
* bzip2recover: Fix use after free issue with outFile (CVE-2016-3189)
* Make sure nSelectors is not out of range (CVE-2019-12900)
v1.4.1
bug: Fix data corruption in niche use cases by @terrelln (#1659)
bug: Fuzz legacy modes, fix uncovered bugs by @terrelln (#1593, #1594, #1595)
bug: Fix out of bounds read by @terrelln (#1590)
perf: Improve decode speed by ~7% @mgrice (#1668)
perf: Slightly improved compression ratio of level 3 and 4 (ZSTD_dfast) by @cyan4973 (#1681)
perf: Slightly faster compression speed when re-using a context by @cyan4973 (#1658)
perf: Improve compression ratio for small windowLog by @cyan4973 (#1624)
perf: Faster compression speed in high compression mode for repetitive data by @terrelln (#1635)
api: Add parameter to generate smaller dictionaries by @tyler-tran (#1656)
cli: Recognize symlinks when built in C99 mode by @felixhandte (#1640)
cli: Expose cpu load indicator for each file on -vv mode by @ephiepark (#1631)
cli: Restrict read permissions on destination files by @chungy (#1644)
cli: zstdgrep: handle -f flag by @felixhandte (#1618)
cli: zstdcat: follow symlinks by @vejnar (#1604)
doc: Remove extra size limit on compressed blocks by @felixhandte (#1689)
doc: Fix typo by @yk-tanigawa (#1633)
doc: Improve documentation on streaming buffer sizes by @cyan4973 (#1629)
build: CMake: support building with LZ4 @leeyoung624 (#1626)
build: CMake: install zstdless and zstdgrep by @leeyoung624 (#1647)
build: CMake: respect existing uninstall target by @j301scott (#1619)
build: Make: skip multithread tests when built without support by @michaelforney (#1620)
build: Make: Fix examples/ test target by @sjnam (#1603)
build: Meson: rename options out of deprecated namespace by @lzutao (#1665)
build: Meson: fix build by @lzutao (#1602)
build: Visual Studio: don't export symbols in static lib by @scharan (#1650)
build: Visual Studio: fix linking by @absotively (#1639)
build: Fix MinGW-W64 build by @myzhang1029 (#1600)
misc: Expand decodecorpus coverage by @ephiepark (#1664)
Update ruby-zip to 1.2.3; here are the release notes.
1.2.3 (2019-05-23)
* Allow tilde in zip entry names #391 (fixes regression in 1.2.2 from #376)
* Support frozen string literals in more files #390
* Require pathname explicitly #388 (fixes regression in 1.2.2 from #376)
Tooling / Documentation:
* CI updates #392, #394
- Bump supported ruby versions and add 2.6
- JRuby failures are no longer ignored (reverts #375 / part of #371)
* Add changelog entry that was missing for last release #387
* Comment cleanup #385
Since the GitHub release information for 1.2.2 is missing, I will also include
it here:
1.2.2 (2018-09-01)
NB: This release drops support for extracting symlinks, because there was no
clear way to support this securely. See #376 (comment) for details.
* Fix CVE-2018-1000544 #376 / #371
* Fix NoMethodError: undefined method `glob' #363
* Fix handling of stored files (i.e. files not using compression) with general
purpose bit 3 set #358
* Fix close on StringIO-backed zip file #353
* Add Zip.force_entry_names_encoding option #340
* Update rubocop, apply auto-fixes, and fix regressions caused by said
auto-fixes #332, #355
* Save temporary files to temporary directory (rather than current directory)
#325
Tooling / Documentation:
* Turn off all terminal output in all tests #361
* Several CI updates #346, #347, #350, #352
* Several README improvements #345, #326, #321
### engrampa 1.22.1
sync with transifex
Help: replace link linkend with xref linkend
file-utils: avoid out of bound memory access
actions: avoid use of memory after it is freed
fr-process: Fix memory leak: 'g_shell_quote' needs to be freed
fr-process: Fix memory leak: 'g_strconcat' needs to be freed
[Security] fr-process: avoid 'strcpy' and 'strcat'
fr-process: Fix memory leak
Help: Fix version to 1.22
help: update copyright
Upgrade the manual to docbook 5.0
v1.4.0
perf: Improve level 1 compression speed in most scenarios by 6% by @gbtucker and @terrelln
api: Move the advanced API, including all functions in the staging section, to the stable section
api: Make ZSTD_e_flush and ZSTD_e_end block for maximum forward progress
api: Rename ZSTD_CCtxParam_getParameter to ZSTD_CCtxParams_getParameter
api: Rename ZSTD_CCtxParam_setParameter to ZSTD_CCtxParams_setParameter
api: Don't export ZSTDMT functions from the shared library by default
api: Require ZSTD_MULTITHREAD to be defined to use ZSTDMT
api: Add ZSTD_decompressBound() to provide an upper bound on decompressed size by @shakeelrao
api: Fix ZSTD_decompressDCtx() corner cases with a dictionary
api: Move ZSTD_getDictID_*() functions to the stable section
api: Add ZSTD_c_literalCompressionMode flag to enable or disable literal compression by @terrelln
api: Allow compression parameters to be set when a dictionary is used
api: Allow setting parameters before or after ZSTD_CCtx_loadDictionary() is called
api: Fix ZSTD_estimateCStreamSize_usingCCtxParams()
api: Setting ZSTD_d_maxWindowLog to 0 means use the default
cli: Ensure that a dictionary is not used to compress itself by @shakeelrao
cli: Add --[no-]compress-literals flag to enable or disable literal compression
doc: Update the examples to use the advanced API
doc: Explain how to transition from old streaming functions to the advanced API in the header
build: Improve the Windows release packages
build: Improve CMake build by @hjmjohnson
build: Build fixes for FreeBSD by @lwhsu
build: Remove redundant warnings by @thatsafunnyname
build: Fix tests on OpenBSD by @bket
build: Extend fuzzer build system to work with the new clang engine
build: CMake now creates the libzstd.so.1 symlink
build: Improve Meson build by @lzutao
misc: Fix symbolic link detection on FreeBSD
misc: Use physical core count for -T0 on FreeBSD by @cemeyer
misc: Fix zstd --list on truncated files by @kostmo
misc: Improve logging in debug mode by @felixhandte
misc: Add CirrusCI tests by @lwhsu
misc: Optimize dictionary memory usage in corner cases
misc: Improve the dictionary builder on small or homogeneous data
misc: Fix spelling across the repo by @jsoref
LZ4 v1.9.1
Changes
fix : decompression functions were reading a few bytes beyond input size
api : fix : lz4frame initializers compatibility with c++
cli : added command --list
build: improved Windows build
build: AIX, by Norman Green
LZ4 v1.9.0
This release brings an assortment of small improvements and bug fixes, as detailed below :
perf: large decompression speed improvement on x86/x64 (up to +20%)
api : changed : _destSize() compression variants are promoted to stable API
api : new : LZ4_initStream(HC), replacing LZ4_resetStream(HC)
api : changed : LZ4_resetStream(HC) as recommended reset function, for better performance on small data
cli : support custom block sizes
build: source code can be amalgamated, by Bing Xu
build: added meson build
build: new build macros : LZ4_DISTANCE_MAX, LZ4_FAST_DEC_LOOP
install: MidnightBSD
install: msys2 on Windows 10
Libaec provides fast lossless compression of 1 up to 32 bit wide signed
or unsigned integers (samples). The library achieves best results for
low entropy data as often encountered in space imaging instrument data or
numerical model output from weather or climate simulations. While floating
point representations are not directly supported, they can also be efficiently
coded by grouping exponents and mantissa.
Libaec implements Golomb-Rice coding as defined in the Space Data System
Standard documents 121.0-B-2 and 120.0-G-2.
Libaec includes a free drop-in replacement for the SZIP library.
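The Golomb-Rice coding that libaec implements splits each sample n into a quotient n >> k (coded in unary) and k remainder bits. The following is a minimal bit-string sketch of that idea, not the actual CCSDS 121.0-B-2 stream format:

```python
def rice_encode(n, k):
    """Golomb-Rice code with divisor 2**k: unary quotient ('1'*q + '0')
    followed by k binary remainder bits. Sketch for nonnegative n."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, "b").zfill(k) if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = 0
    i = 0
    while bits[i] == "1":       # count the unary quotient
        q += 1
        i += 1
    i += 1                      # skip the '0' terminator
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r
```

Small k favors low-entropy data (short unary runs), which matches the "best results for low entropy data" claim above.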
Upstream changes:
0.0946 2019-04-05 20:11:47Z
- Added copyright holder/year meta to dist.ini. (GH#6) (Mohammad S Anwar)
- Auto generate META.yml using the plugin [MetaYAML]. (GH#8) (Mohammad S
Anwar)
libarchive 3.3.3:
Avoid super-linear slowdown on malformed mtree files
Many fixes for building with Visual Studio
NO_OVERWRITE doesn't change existing directory attributes
New support for Zstandard read and write filters
Update provided by Michael Bäuerle via pkgsrc-wip.
Changelog
=========
Release 2018-11-22:
- libschily: resolvenpath() did not work as expected when some path names
do not exist. A stat() call that should check whether we already
reached the "/" directory caused a return (-1) even with
(flags & RSPF_EXIST) == 0.
This bug caused star to classify more symlinks as dangerous than
needed.
- star: A typo in the function dolchmodat() has been fixed. The bug has been
introduced in July 2018 while adding support for very long path names.
- star: added a new timestamp to the star version.
- star: The man page now mentions incremental backups and restores in the
FEATURES section.
Release 2018-12-06:
- star: hole.c: A memory leak in hole.c::put_sparse() has been fixed.
Thanks to Pavel Raiskup for reporting this coverity result.
- star: xheader.c: the macro scopy() no longer has a semicolon at the end.
Thanks to Pavel Raiskup for reporting this coverity result.
Release 2019-01-22:
- libstrar & star unicode.c: iconv() may return > 0 if there are
characters that could not be converted into an
identical meaning.
We therefore now check for ret != 0 instead of
ret == -1.
- star: added support for auto detection of "zstd" compressed archives.
- star: added a new option -zstd to support compression and uncompression
using the program "zstd".
- star: Recently, star did hang in the FIFO code on Solaris. This had
not happened on Solaris in over 20 years...
On Linux - on fast multi-CPU machines - the probability that a
child process from fork() starts up before the parent is 1000x higher
than on Solaris, where 10 million tries were needed to reproduce the
same problem.
As a result, the FIFO in star on Linux could in rare cases (1 of
~10000 tries) even finish the 1st read() from the input file before
the "tar" process starts with e.g. command lines like "star -tv" or
"star -x". Since star introduced auto-byte-order detection and
handling in 1985, star needs a special start-up sequence to do that.
Star introduced the FIFO in the late 1980s and the machines from that
time did always restart the parent before the fork()ed child starts.
The new OS behavior thus caused a situation that was not foreseeable
when the FIFO was designed. This new OS behavior caused a
deadlock in approx. 1 of 10000 star calls on Linux and 1 of 10000000
star calls on Solaris.
Star now waits when entering the FIFO fill process until the
FIFO get process has started up before trying to wake up a waiting
get process.
- star: On Linux, in 1 of 1.5 million tries, star did die from SIGPIPE.
Note that this never happened on Solaris.
Star now ignores SIGPIPE, and it seems that this fixed the problem,
since it did not happen again after that change, even with 100 million
tries.
- star: The debug printing for the FIFO has been enhanced to print more
information from the FIFO control structure to make it easier to debug
problems like the ones mentioned above.
- star: There seems to be a problem in pipe handling in the Linux kernel.
It seems that in rare cases, the read(2) on a pipe returns 0 even though
the write side did write(2) one byte to the pipe just before calling
exit(). Unfortunately, this problem is hard to debug as it happens only
once every ~30 million tries. Our workaround is to behave as if the
expected byte could be read and star currently prints something like:
star: Erfolg. Sync pipe read error pid 8141 ret 0
star: Erfolg. Ib 0 Ob 1 e 0 p 1 g 0 chan 5.
star: Erfolg. Trying to work around Kernel Pipe botch.
before it continues. Since the star exit code in such a case is 0,
we assume that this is a correct workaround and this case thus may
be made completely silent in the future.
- star: an even less frequent FIFO problem (occurs once every 50 million
tries on fast multi CPU machines) has been identified. Star reports a
hard EOF on input even though the complete file with logical EOF has
been read and there is still input to process. In order to debug this
problem a debug message has been added to the code.
With this debug message, it turned out that this problem happened
because a context switch occurred in the FIFO read process after it did
see an empty FIFO and later, after the process was resumed, the
following check for the FIFO_MEOF flag did see EOF. We now first check
for the FIFO_MEOF flag and only later for the amount of data inside the
FIFO, as FIFO_MEOF is set after the FIFO content has been updated and
thus a context switch is no longer able to cause a wrong assumption
about the content of the FIFO.
If you still see this, please send a report.
- star: added support to print debug malloc statistics to better debug
memory problems in star.
- star: pathname.c:: free_pspace() now only frees the path buffer if it
is != NULL
- star: fixed a bug in the file create.c that caused star to incorrectly
grow the path buffer by 2 bytes for every archived file. This caused
star to constantly grow if a larger amount of files are archived and
eat up all memory available to 32 bit processes if the archived
filesystem is larger than approx. 1 TB.
- star: If the path name now cannot be handled because of low memory,
we print a warning that includes the text "out of memory".
- star: Now checking whether open of /dev/null failed while running a
compress pipe. This avoids a core dump on defective OS installations.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: props.c: Added a missing /* FALLTHROUGH */ comment.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: create.c: Add more comment for the CPIO CRC format handler to
explain why the last instance of a series of hard links for a file
needs to archive the data.
- star: diff.c: added a filling fillbytes(&finfo, ...) to make sure that
ACL pointers are initialized.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: Several /* NOTREACHED */ comments have been added to tell
programs like coverity that after a NULL pointer check, there is no
continuation of the program.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: extract.c: An if (path->ps_path == '\0') check has been corrected
to if (path->ps_path[0] == '\0') after a mktemp() call. This was a typo
introduced with the new support for extremely long path names.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: extract.c: An initialization for a struct pathstore has been
moved to the front to verify that path.ps_path is always initialized.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: header.c: isgnumagic(&ptb->dbuf.t_vers) has been changed to
isgnumagic(ptb->ustar_dbuf.t_magic) as it is a "ustar" structure
that is going to be checked.
Thanks to Pavel Raiskup for pointing to a related Coverity message.
- star: some Cstyle changes
- bsh / Bourne Shell / star: the function hop_dirs() no longer checks
for p2 != NULL before calling *p2 = '/', as p2 is
guaranteed to be != NULL by a break that is taken when
strchr(p, '/') == NULL
Release 2019-02-18:
- star: another problem similar to what was already fixed in the
2019-01-22 release has been fixed:
An even less frequent FIFO problem (occurs once every 50 million
tries on fast multi CPU machines) has been identified. Star reports a
hard EOF on input even though the complete file with logical EOF has
been read and there is still input to process. In order to debug this
problem a debug message has been added to the code.
With this debug message, it turned out that this problem happened
because a context switch occurred in the FIFO read process after it did
see an empty FIFO and later, after the process was resumed, the
following check for the FIFO_MEOF flag did see EOF. We now first check
for the FIFO_MEOF flag and only later for the amount of data inside the
FIFO, as FIFO_MEOF is set after the FIFO content has been updated and
thus a context switch is no longer able to cause a wrong assumption
about the content of the FIFO.
We now did run 250 million tests without seeing another problem.
If you still see this, please send a report.
- star: Note that the debug output for this problem now has been
disabled. If you need to debug this, call:
smake clean COPTX=-DFIFO_EOF_DEBUG all
in the star directory.
- star: The message "Sync pipe read error" is no longer printed when
the FIFO background process dies instead of sending a final wakeup.
This is needed since there is a possibility for a context switch in
the foreground process that can make it later wait for a wakeup while
the background process misses to see the wait flag and just exits.
- star: In rare conditions (once every 2 million tries), a hang could
occur with "star -c" if the tar process fills the FIFO and sets the
EOF flag and then calls wait() to wait for the FIFO tape output
process. This happens in case that the tape output did not see the
EOF flag because it has undergone a context switch after it checked
for the not yet existing EOF flag and before waiting for a wakeup
from the tar FIFO fill process.
Star now closes the sync pipes before calling wait() as this always
wakes up the waiting other side.
We did run another 300 million tests for this condition and did not
see any problem now.
- star: The version is now 1.6
Short overview for what changed since the last "stable" version:
- Support for "infinitely" long path names has been added.
- Support for comparing timestamps with nanosecond granularity
- -secure-links has been made the default when extracting
archives (except when doing an incremental restore).
- Added Support for NFSv4 ACLs on FreeBSD. Solaris has been
supported since 2013.
- Added support to archive SELinux attributes.
- Allow to configure whether "star -fsync" is the default in
order to support filesystems that are slow with granted
transactions (like ZFS) or platforms that are generally
slow with fsync() (like Linux).
- Full UNICODE support has been added for tar headers.
- Support for -zstd compression has been added.
- Some rare FIFO problems have been fixed.
Note that we did recently run more than a billion tests to
verify the FIFO after we identified a method to trigger the
problem on Linux.
Release 2019-03-11:
- star: Support for base-256 numbers in timestamps and uid/gid has been
added. This had been planned in the 1990s already, when star invented
the base-256 coding, but it was forgotten in favor of the
POSIX.1-2001 enhanced archive headers. Now it seems that GNU tar,
which copied the format from star, uses it for timestamps and uid/gid,
and we need to implement it in order to get archive compatibility.
Thanks to Michal Górny (mgorny@gentoo.org) for detecting the missing
feature.
- star: The t_rdev field in the old star header now may use base-256
as well.
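For illustration, the base-256 (binary) numeric field convention that GNU tar and star share can be sketched as follows for the common nonnegative case (timestamps, uid/gid): the first byte has its top bit set, and the remaining bytes hold the big-endian value. This is a simplified sketch; negative values use an 0xFF-prefixed two's-complement form not shown here.

```python
def encode_base256(value, width):
    """Encode a nonnegative value into a 'width'-byte tar numeric field
    using the base-256 extension: first byte 0x80, then the big-endian
    value. Minimal sketch, nonnegative values only."""
    if value < 0 or value >= 256 ** (width - 1):
        raise ValueError("value out of range for this field width")
    return b"\x80" + value.to_bytes(width - 1, "big")

def decode_base256(field):
    """Decode a nonnegative base-256 tar numeric field."""
    if field[0] != 0x80:
        raise ValueError("not a (nonnegative) base-256 field")
    return int.from_bytes(field[1:], "big")
```

This coding is only needed when a value no longer fits the traditional octal ASCII field, e.g. timestamps past year 2242 in a 12-byte field or uid/gid above 0o7777777.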
- star: The function stoli() added a new parameter "fieldwidth" that
allows to configure when a "unterminated octal number" warning is
printed. This is needed since this function is used for 8 byte and
for 12 byte fields.
- star: star did print archives with illegal 32 byte user/group
names (where the nul terminator is missing) "correctly" when in
list mode, but it used only the first 31 bytes when extracting
such archives.
- star: a new function istarnumber() is used to do better heuristics on
what a valid TAR archive is. We have some special handling to work
around the non-compliance of GNU tar in some known cases. If you
discover other GNU tar archives that are not detected as TAR archives,
please report them to help make the heuristics better.
The background is to make star better at detecting archives that try
to fool tar implementations.
- star: The directory testscripts added new files:
testscripts/not_a_tar_file1 and testscripts/not_a_tar_file3
with correct checksums that fool tar implementations that use too
few heuristics to identify tar archives.
- star: fixed a bug in the FIFO related to extracting multi-volume
archives. The bug was introduced with release 2019-02-18 and the
effect was that the FIFO complained at the end of the last volume.
- star/libschily: Added new error checking codes:
"ID"<-->allows to control error behaviour with range errors in uid_t
and gid_t values.
"TIME"<>allows to control error behaviour with range errors in time_t
- star: Creating multi volume archives without using the FIFO did dump
core. We thus no longer set mp->chreel = TRUE; when the FIFO has
been disabled. The related bug was introduced in January 2012.
- star: Creating multi volume archives with a very small volume size
could cause a hang at the end as the function startvol() did not
check whether the TAR process did already decide to exit while
waiting for the TAR process to calm down (stop) before writing the
next multi volume header. We no longer wait in this case.
- star: exprstats() now calls fifo_exit(ret) in order to avoid a
FIFO Sync pipe read error message in case that star was terminated
with an error.
- star: Since we added better Unicode support in May 2018, star did
dump core when a multi volume header with POSIX.1-2001 extensions
was written in multi volume create mode. We now check for NULL
pointers before we call nameascii() to decide whether the file
name needs a UTF-8 translation.
- star: Creating multi volume archives without POSIX.1-2001 support
no longer sets POSIX.1-2001 extension flags for the volume header.
- star: The flag XF_NOTIME now works when creating POSIX.1-2001
extended headers and thus the 'x'-header with time stamps for the
volume header tar header is no longer created. This avoids
writing atime=1 for volume number 1, since we encode the
volume number in the otherwise useless atime of the volume header
when in POSIX.1-1988 TAR mode.
- star: the star.1 man page now mentions that the first tar program
appeared in 1979 (3 years before star has been started as a project).
- star: the star.4 man page now has a "SEE ALSO", a HISTORY and
an AUTHOR section.
- star: the star.4 man page now has a MULTI VOLUME ARCHIVE HANDLING
section.
- star: the star.4 man page added a new "BASIC TAR STRUCTURE" section.
- star: The ACL reference test archives (formerly available from e.g.:
http://sf.net/projects/s-tar/files/alpha/) have been added
to the directory star/testscripts/. The files
acl-test.tar.gz
acl-test2.tar.gz
acl-test3.tar.gz
acl-test4.tar.gz
acl-test5.tar.gz
contain ACLs that use the obsolete method from a POSIX proposal
from around 1993 that was withdrawn in 1997 and never has become
part of a standard. This method has been implemented in 1993 for
UFS on Solaris.
GNU tar claims to support this format but really does not support
it at all. GNU tar fails to extract the reference tar archives from
above and it fails to create a compliant tar archive in create mode.
It is strange to see that GNU tar never has been tested against the
reference archives that have been created in collaboration with
SuSE in 2001 already.
The files
acl-nfsv4-test.tar.gz
acl-nfsv4-test2.tar.gz
acl-nfsv4-test3.tar.gz
acl-nfsv4-test4.tar.gz
acl-nfsv4-test5.tar.gz
contain ACLs that have become part of the NFSv4 standard and that
are also used on NTFS and ZFS. This format is completely unsupported
by GNU tar.
- star TODO: create unit tests in order to avoid future problems
with multi volume archives similar to the problems we recently
fixed.
- star: Updated version 1.6 (not yet published in separate tarball)
Short overview for what changed since the last "stable" version:
- Support for "infinitely" long path names has been added.
- Support for base-256 numbers in timestamps and uid/gid
has been added. This had been planned in the 1990s already,
when star invented the base-256 coding, but it was
forgotten in favor of the POSIX.1-2001 enhanced archive
headers.
- Support for comparing timestamps with nanosecond granularity
- -secure-links has been made the default when extracting
archives (except when doing an incremental restore).
- Added support for NFSv4 ACLs on FreeBSD. Solaris has been
supported since 2013.
- Added support for archiving SELinux attributes.
- Allow configuring whether "star -fsync" is the default, in
order to support filesystems that are slow with granted
transactions (like ZFS) or platforms that are generally
slow with fsync() (like Linux).
- Full UNICODE support has been added for tar headers.
- Support for -zstd compression has been added.
- Some rare FIFO problems have been fixed.
Note that we recently ran more than a billion tests to
verify the FIFO, after we identified a method to trigger the
problem on Linux.
1.5.2 [2019-03-12]
==================
* Fix bug in AES encryption affecting certain file sizes
* Keep file permissions when modifying zip archives
* Support systems with small stack size.
* Support mbed TLS as crypto backend.
* Add nullability annotations.
### engrampa 1.22.0
* Translations update
* Avoid array index out of bounds parsing dpkg-deb --info
* Fix a "use of memory after it is freed" warning
* Read authors (updated) from engrampa.about gresource
* Enable Travis CI
* eggsmclient: avoid deprecated 'g_type_class_add_private'
* update copyright year to 2019
* rar/unrar: Fix: "overwrite existing files" disabled must work
* fix fr-command-cfile.c: fr_process_set_working_dir
* fr-command-cfile.c: fix indentation
* Added integrity test for brotli
* Added integrity tests for the cfile compressors: gzip, bzip2, etc.
* move appdata to metainfo directory
* fr-window: show the pause button only if the dialog is working
* disable deprecation warnings for distcheck
* fr-window: avoid 'gtk_dialog_add_button' with stock ids
* fr-window: hide the progress bar if the process is paused
* fr-window: change the info label if process is paused/resumed
* fr-window: little improvements in the look of pause/resume button
* Adding pause and start functions
* Fix implementation and use of the alternative package name lookup
* Added support for brotli (*.tar.br) compressed tar archives
* Add brotli support
* Use make functions for HELP_LINGUAS
* Replace -Dokumentationteam
* Replace -Dokumentationsprojekt with Documentation Project
* Manual: Update file format descriptions using shared-mime-info
* Fix url of ulinks to point to mate-user-guide
* UNIX and Linux systems -> Linux and UNIX-like systems
* tx: add atril help to transifex config
* Add the ability to support 'unar' over .zip archives
* Add support for OpenDocument formats
* UI: on the properties dialog, focus the Close button instead of the Help button by default
0.11.0 (released 2019-02-24)
============================
Backwards Compatibility Notes
-----------------------------
* ZstdDecompressor.read() now allows reading sizes of -1 or 0
and defaults to -1, per the documented behavior of
io.RawIOBase.read(). Previously, we required an argument that was
a positive value.
* The readline(), readlines(), __iter__, and __next__ methods
of ZstdDecompressionReader() now raise io.UnsupportedOperation
instead of NotImplementedError.
* ZstdDecompressor.stream_reader() now accepts a read_across_frames
argument. The default value will likely be changed in a future release
and consumers are advised to pass the argument to avoid unwanted change
of behavior in the future.
* setup.py now always disables the CFFI backend if the installed
CFFI package does not meet the minimum version requirements. Before, it was
possible for the CFFI backend to be generated and a run-time error to
occur.
* In the CFFI backend, CompressionReader and DecompressionReader
were renamed to ZstdCompressionReader and ZstdDecompressionReader,
respectively so naming is identical to the C extension. This should have
no meaningful end-user impact, as instances aren't meant to be
constructed directly.
* ZstdDecompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes
read from the source / written to the decompressor. It defaults to off,
which preserves the existing behavior of returning the number of bytes
emitted from the decompressor. The default will change in a future release
so behavior aligns with the specified behavior of io.RawIOBase.
* ZstdDecompressionWriter.__exit__ now calls self.close(). This
will result in that stream plus the underlying stream being closed as
well. If this behavior is not desirable, do not use instances as
context managers.
* ZstdCompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source / written to the compressor. It defaults to off, which
preserves the existing behavior of returning the number of bytes emitted
from the compressor. The default will change in a future release so
behavior aligns with the specified behavior of io.RawIOBase.
* ZstdCompressionWriter.__exit__ now calls self.close(). This will
result in that stream plus any underlying stream being closed as well. If
this behavior is not desirable, do not use instances as context managers.
* ZstdDecompressionWriter no longer requires being used as a context
manager.
* ZstdCompressionWriter no longer requires being used as a context
manager.
* The overlap_size_log attribute on CompressionParameters instances
has been deprecated and will be removed in a future release. The
overlap_log attribute should be used instead.
* The overlap_size_log argument to CompressionParameters has been
deprecated and will be removed in a future release. The overlap_log
argument should be used instead.
* The ldm_hash_every_log attribute on CompressionParameters instances
has been deprecated and will be removed in a future release. The
ldm_hash_rate_log attribute should be used instead.
* The ldm_hash_every_log argument to CompressionParameters has been
deprecated and will be removed in a future release. The ldm_hash_rate_log
argument should be used instead.
* The compression_strategy argument to CompressionParameters has been
deprecated and will be removed in a future release. The strategy
argument should be used instead.
* The SEARCHLENGTH_MIN and SEARCHLENGTH_MAX constants are deprecated
and will be removed in a future release. Use MINMATCH_MIN and
MINMATCH_MAX instead.
* The zstd_cffi module has been renamed to zstandard.cffi. As had
been documented in the README file since the 0.9.0 release, the
module should not be imported directly at its new location. Instead,
import zstandard to cause an appropriate backend module to be loaded
automatically.
Bug Fixes
---------
* CFFI backend could encounter a failure when sending an empty chunk into
ZstdDecompressionObj.decompress(). The issue has been fixed.
* CFFI backend could encounter an error when calling
ZstdDecompressionReader.read() if there was data remaining in an
internal buffer. The issue has been fixed.
Changes
-------
* ZstdDecompressionObj.decompress() now properly handles empty inputs in
the CFFI backend.
* ZstdCompressionReader now implements read1() and readinto1().
These are part of the io.BufferedIOBase interface.
* ZstdCompressionReader has gained a readinto(b) method for reading
compressed output into an existing buffer.
* ZstdCompressionReader.read() now defaults to size=-1 and accepts
read sizes of -1 and 0. The new behavior aligns with the documented
behavior of io.RawIOBase.
* ZstdCompressionReader now implements readall(). Previously, this
method raised NotImplementedError.
* ZstdDecompressionReader now implements read1() and readinto1().
These are part of the io.BufferedIOBase interface.
* ZstdDecompressionReader.read() now defaults to size=-1 and accepts
read sizes of -1 and 0. The new behavior aligns with the documented
behavior of io.RawIOBase.
* ZstdDecompressionReader() now implements readall(). Previously, this
method raised NotImplementedError.
* The readline(), readlines(), __iter__, and __next__ methods
of ZstdDecompressionReader() now raise io.UnsupportedOperation
instead of NotImplementedError. This reflects a decision to never
implement text-based I/O on (de)compressors and keep the low-level API
operating in the binary domain.
* README.rst now documents how to achieve linewise iteration using
an io.TextIOWrapper with a ZstdDecompressionReader.
* ZstdDecompressionReader has gained a readinto(b) method for
reading decompressed output into an existing buffer. This allows chaining
to an io.TextIOWrapper on Python 3 without using an io.BufferedReader.
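The chaining described above is plain io-module plumbing. A minimal sketch, using io.BytesIO as a stand-in for a ZstdDecompressionReader (the only assumption is that the reader supports readinto(), as stated above):

```python
import io

# io.BytesIO stands in here for ZstdDecompressionReader: any binary
# reader that implements readinto() can be wrapped the same way.
binary_reader = io.BytesIO(b"first line\nsecond line\n")

# TextIOWrapper performs the decoding and line splitting, with no
# io.BufferedReader needed in between.
text = io.TextIOWrapper(binary_reader, encoding="utf-8")
lines = [line.rstrip("\n") for line in text]
```

With the real library, the same wrapper would be placed around the object returned by ZstdDecompressor.stream_reader().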
* ZstdDecompressor.stream_reader() now accepts a read_across_frames
argument to control behavior when the input data has multiple zstd
*frames*. When False (the default for backwards compatibility), a
read() will stop when the end of a zstd *frame* is encountered. When
True, read() can potentially return data spanning multiple zstd
*frames*. The default will likely be changed to True in a future
release.
* setup.py now performs CFFI version sniffing and disables the CFFI
backend if CFFI is too old. Previously, we only used install_requires
to enforce the CFFI version and not all build modes would properly enforce
the minimum CFFI version.
* CFFI's ZstdDecompressionReader.read() now properly handles data
remaining in any internal buffer. Before, repeated read() could
result in *random* errors.
* Upgraded various Python packages in CI environment.
* Upgrade to hypothesis 4.5.11.
* In the CFFI backend, CompressionReader and DecompressionReader
were renamed to ZstdCompressionReader and ZstdDecompressionReader,
respectively.
* ZstdDecompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source. It defaults to False to preserve backwards
compatibility.
* ZstdDecompressor.stream_writer() now implements the io.RawIOBase
interface and behaves as a proper stream object.
* ZstdCompressor.stream_writer() now accepts a write_return_read
argument to control whether write() returns the number of bytes read
from the source. It defaults to False to preserve backwards
compatibility.
* ZstdCompressionWriter now implements the io.RawIOBase interface and
behaves as a proper stream object. close() will now close the stream
and the underlying stream (if possible). __exit__ will now call
close(). Methods like writable() and fileno() are implemented.
* ZstdDecompressionWriter no longer must be used as a context manager.
* ZstdCompressionWriter no longer must be used as a context manager.
When not used as a context manager, it is important to call
flush(FLUSH_FRAME), or the compression stream won't be properly
terminated and decoders may complain about malformed input.
* ZstdCompressionWriter.flush() (what is returned from
ZstdCompressor.stream_writer()) now accepts an argument controlling the
flush behavior. Its value can be one of the new constants
FLUSH_BLOCK or FLUSH_FRAME.
* ZstdDecompressionObj instances now have a flush([length=None]) method.
This provides parity with standard library equivalent types.
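The FLUSH_BLOCK/FLUSH_FRAME distinction described above mirrors the flush modes of the standard library's zlib module. A stdlib sketch of the same idea, where Z_SYNC_FLUSH stands in for a block flush and Z_FINISH for the frame-terminating flush (an analogy, not the zstandard API itself):

```python
import zlib

c = zlib.compressobj()
# Block-style flush: emits a decodable boundary but keeps the stream open.
part1 = c.compress(b"hello ") + c.flush(zlib.Z_SYNC_FLUSH)
# Frame-style flush: terminates the stream; without it, decoders would
# see truncated (malformed) input.
part2 = c.compress(b"world") + c.flush(zlib.Z_FINISH)

assert zlib.decompress(part1 + part2) == b"hello world"
```

Because part1 ends on a block boundary, a streaming decompressor can already recover b"hello " from it before the stream is terminated.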
* CompressionParameters no longer redundantly store individual compression
parameters on each instance. Instead, compression parameters are stored inside
the underlying ZSTD_CCtx_params instance. Attributes for obtaining
parameters are now properties rather than instance variables.
* Exposed the STRATEGY_BTULTRA2 constant.
* CompressionParameters instances now expose an overlap_log attribute.
This behaves identically to the overlap_size_log attribute.
* CompressionParameters() now accepts an overlap_log argument that
behaves identically to the overlap_size_log argument. An error will be
raised if both arguments are specified.
* CompressionParameters instances now expose an ldm_hash_rate_log
attribute. This behaves identically to the ldm_hash_every_log attribute.
* CompressionParameters() now accepts a ldm_hash_rate_log argument that
behaves identically to the ldm_hash_every_log argument. An error will be
raised if both arguments are specified.
* CompressionParameters() now accepts a strategy argument that behaves
identically to the compression_strategy argument. An error will be raised
if both arguments are specified.
* The MINMATCH_MIN and MINMATCH_MAX constants were added. They are
semantically equivalent to the old SEARCHLENGTH_MIN and
SEARCHLENGTH_MAX constants.
* Bundled zstandard library upgraded from 1.3.7 to 1.3.8.
* setup.py denotes support for Python 3.7 (Python 3.7 was supported and
tested in the 0.10 release).
* zstd_cffi module has been renamed to zstandard.cffi.
* ZstdCompressor.stream_writer() now reuses a buffer in order to avoid
allocating a new buffer for every operation. This should result in faster
performance in cases where write() or flush() are being called
frequently.
* Bundled zstandard library upgraded from 1.3.6 to 1.3.7.
version 1.32 - Sergey Poznyakoff, 2019-02-23
* Fix the use of --checkpoint without explicit --checkpoint-action
* Fix extraction with the -U option
See http://lists.gnu.org/archive/html/bug-tar/2019-01/msg00015.html,
for details
* Fix iconv usage on BSD-based systems
* Fix possible NULL dereference (savannah bug #55369)
* Improve the testsuite
v2.1.5: Made the md5sum detection consistent with the header code. Check for
the presence of the archive directory. Added --encrypt for symmetric encryption
through gpg (Eric Windisch). Added support for the digest command on Solaris 10
for MD5 checksums. Check for available disk space before extracting to the
target directory (Andreas Schweitzer). Allow extraction to run asynchronously
(patch by Peter Hatch). Use file descriptors internally to avoid error messages
(patch by Kay Tiong Khoo).
v2.1.6: Replaced one dot per file progress with a realtime progress percentage
and a spinning cursor. Added --noprogress to prevent showing the progress during
the decompression. Added --target dir to allow extracting directly to a target
directory. (Guy Baconniere)
v2.2.0: First major new release in years! Includes many bugfixes and user
contributions. Please look at the project page on Github for all the details.
v2.3.0: Support for archive encryption via GPG or OpenSSL. Added LZO and LZ4
compression support. Options to set the packaging date and stop the umask from
being overridden. Optionally ignore the check for available disk space when
extracting. New option to check for root permissions before extracting.
v2.3.1: Various compatibility updates. Added unit tests for Travis CI in the
GitHub repo. New --tar-extra, --untar-extra, --gpg-extra,
--gpg-asymmetric-encrypt-sign options.
v2.4.0: Added optional support for SHA256 archive integrity checksums.
Changes in version 1.21:
The options '--dump', '--remove' and '--strip' have been added, mainly as
support for the tarlz archive format: http://www.nongnu.org/lzip/tarlz.html
These options replace '--dump-tdata', '--remove-tdata' and '--strip-tdata',
which are now aliases and will be removed in version 1.22.
'--dump=[<member_list>][:damaged][:tdata]' dumps the members listed, the
damaged members (if any), or the trailing data (if any) of one or more
regular multimember files to standard output.
'--remove=[<member_list>][:damaged][:tdata]' removes the members listed,
the damaged members (if any), or the trailing data (if any) from regular
multimember files in place.
'--strip=[<member_list>][:damaged][:tdata]' copies one or more regular
multimember files to standard output, stripping the members listed, the
damaged members (if any), or the trailing data (if any) from each file.
Detection of forbidden combinations of characters in trailing data has been
improved.
'--split' can now detect trailing data and gaps between members, and save
each gap in its own file. Trailing data (if any) are saved alone in the last
file. (Gaps may contain garbage or may be members with corrupt headers or
trailers).
'--ignore-errors' now makes '--list' show gaps between members, ignoring
format errors.
'--ignore-errors' now makes '--range-decompress' ignore a truncated last
member.
Errors are now also checked when closing the input file in decompression
mode.
Some diagnostic messages have been improved.
'\n' is now printed instead of '\r' when showing progress of merge or repair
if stdout is not a terminal.
Lziprecover now compiles on DOS with DJGPP. (Patch from Robert Riebisch).
The new chapter 'Tarlz', explaining the ways in which lziprecover can
recover and process multimember tar.lz archives, has been added to the
manual.
The configure script now accepts appending options to CXXFLAGS using the
syntax 'CXXFLAGS+=OPTIONS'.
The use of CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO' when compiling
on MinGW has been documented in INSTALL.
Changes in version 1.21:
Detection of forbidden combinations of characters in trailing data has been
improved.
Errors are now also checked when closing the input file.
Lzip now compiles on DOS with DJGPP. (Patch from Robert Riebisch).
The descriptions of '-0..-9', '-m' and '-s' in the manual have been
improved.
The configure script now accepts appending options to CXXFLAGS using the
syntax 'CXXFLAGS+=OPTIONS'.
The use of CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO' when compiling
on MinGW has been documented in INSTALL.
Prompted in part because prior releases fail to build on Linux
distributions that use glibc >= 2.27 (relates to PR pkg/53826).
* Noteworthy changes in release 1.10 (2018-12-29) [stable]
** Changes in behavior
Compressed gzip output no longer contains the current time as a
timestamp when the input is not a regular file. Instead, the output
contains a null (zero) timestamp. This makes gzip's behavior more
reproducible when used as part of a pipeline. (As a reminder, even
regular files will use null timestamps after the year 2106, due to a
limitation in the gzip format.)
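Python's standard gzip module exposes the same MTIME header field, which makes the reproducibility point easy to demonstrate; this is a stdlib illustration of the format behavior, not the gzip CLI itself:

```python
import gzip

def gzip_reproducible(data):
    # mtime=0 writes a null MTIME field, so the output does not depend on
    # the current clock -- the behavior gzip 1.10 adopts for piped input.
    return gzip.compress(data, mtime=0)

first = gzip_reproducible(b"payload")
second = gzip_reproducible(b"payload")
assert first == second  # byte-identical across runs
```

With the default (current-time) MTIME, two compressions of the same data at different times would differ in those four header bytes.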
** Bug fixes
A use of uninitialized memory on some malformed inputs has been fixed.
[bug present since the beginning]
A few theoretical race conditions in signal handlers have been fixed.
These bugs most likely do not happen on practical platforms.
[bugs present since the beginning]
* Noteworthy changes in release 1.9 (2018-01-07) [stable]
** Bug fixes
gzip -d -S SUFFIX file.SUFFIX would fail for any upper-case byte in SUFFIX.
E.g., before, this command would fail:
$ :|gzip > kT && gzip -d -S T kT
gzip: kT: unknown suffix -- ignored
[bug present since the beginning]
When decompressing data in 'pack' format, gzip no longer mishandles
leading zeros in the end-of-block code. [bug introduced in gzip-1.6]
When converting from system-dependent time_t format to the 32-bit
unsigned MTIME format used in gzip files, if a timestamp does not
fit, gzip now substitutes zero instead of the timestamp's low-order
32 bits, as per Internet RFC 1952. When converting from MTIME to
time_t format, if a timestamp does not fit, gzip now warns and
substitutes the nearest in-range value instead of crashing or
silently substituting an implementation-defined value (typically,
the timestamp's low-order bits). This affects timestamps before
1970 and after 2106, and timestamps after 2038 on platforms with
32-bit signed time_t. [bug present since the beginning]
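The substitution rule in the time_t-to-MTIME direction can be stated compactly; to_gzip_mtime below is a hypothetical helper illustrating the described behavior, not gzip's actual code:

```python
def to_gzip_mtime(t):
    """Map a system time_t to the 32-bit unsigned MTIME gzip header field.

    Out-of-range timestamps (before 1970 or after 2106) become 0,
    i.e. "no timestamp available" per RFC 1952, rather than being
    silently truncated to their low-order 32 bits.
    """
    return t if 0 <= t <= 0xFFFFFFFF else 0
```

So a pre-1970 (negative) or post-2106 timestamp yields a null MTIME instead of a misleading wrapped value.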
Commands implemented via shell scripts are now more consistent about
failure status. For example, 'gunzip --help >/dev/full' now
consistently exits with status 1 (error), instead of with status 2
(warning) on some platforms. [bug present since the beginning]
Support for VMS and Amiga has been removed. It was not working anyway,
and it reportedly caused file name glitches on MS-Windowsish platforms.
* Noteworthy changes in release 1.8 (2016-04-26) [stable]
** Bug fixes
gzip -l no longer falsely reports a write error when writing to a pipe.
[bug introduced in gzip-1.7]
Port to Oracle Solaris Studio 12 on x86-64.
[bug present since at least gzip-1.2.4]
When configuring gzip, ./configure DEFS='...-DNO_ASM...' now
suppresses assembler again. [bug introduced in gzip-1.3.5]
* Noteworthy changes in release 1.7 (2016-03-27) [stable]
** Changes in behavior
The GZIP environment variable is now obsolescent; gzip now warns if
it is used, and rejects attempts to use dangerous options or operands.
You can use an alias or script instead.
Installed programs like 'zgrep' now use the PATH environment variable
as usual to find subsidiary programs like 'gzip' and 'grep'.
Previously they prepended the installation directory to the PATH,
which sometimes caused 'make check' to test the wrong gzip executable.
[bug introduced in gzip-1.3.13]
** New features
gzip now accepts the --synchronous option, which causes it to use
fsync and similar primitives to transfer output data to the output
file's storage device when the file system supports this. Although
this option makes gzip safer in the presence of system crashes, it
can make gzip considerably slower.
gzip now accepts the --rsyncable option. This option is accepted in
all modes, but has effect only when compressing: it makes the resulting
output more amenable to efficient use of rsync. For example, when a
large input file gets a small change, a gzip --rsyncable image of
that file will remain largely unchanged, too. Without --rsyncable,
even a tiny change in the input could result in a totally different
gzip-compressed output file.
** Bug fixes
gzip -k -v no longer reports that files are replaced.
[bug present since the beginning]
zgrep -f A B C no longer reads A more than once if A is not a regular file.
This better supports invocations like 'zgrep -f <(COMMAND) B C' in Bash.
[bug introduced in gzip-1.2]
Bz2file is a Python library for reading and writing bzip2-compressed files. It
contains a drop-in replacement for the file interface in the standard library's
bz2 module, including features from the latest development version of CPython
that are not available in older releases.
Changelog:
version 1.31 - Sergey Poznyakoff, 2019-01-02
* Fix heap-buffer-overrun with --one-top-level.
Bug introduced with the addition of that option in 1.28.
* Support for zstd compression
New option '--zstd' instructs tar to use zstd as compression program.
When listing, extracting and comparing, zstd compressed archives are
recognized automatically.
When '-a' option is in effect, zstd compression is selected if the
destination archive name ends in '.zst' or '.tzst'.
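The suffix-driven selection used by '-a' can be sketched as a simple lookup; the mapping below follows this changelog entry plus common tar suffixes, and is an illustration rather than tar's source:

```python
# Suffix -> compression program, as '-a' would choose it.
# The .zst/.tzst entries are the additions described above.
SUFFIX_TO_COMPRESSOR = {
    ".tar.gz": "gzip", ".tgz": "gzip",
    ".tar.bz2": "bzip2",
    ".tar.xz": "xz",
    ".tar.zst": "zstd", ".tzst": "zstd",
}

def compressor_for(archive_name):
    """Return the compression program implied by the archive suffix,
    or None for a plain, uncompressed tar archive."""
    for suffix, prog in SUFFIX_TO_COMPRESSOR.items():
        if archive_name.endswith(suffix):
            return prog
    return None
```

For example, `tar -acf backup.tar.zst dir/` would compress with zstd, while `tar -acf backup.tar dir/` stays uncompressed.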
* The -K option interacts properly with member names given in the command line
Names of members to extract can be specified along with the "-K NAME"
option. In this case, tar will extract NAME and those of named members
that appear in the archive after it, which is consistent with the
semantics of the option.
Previous versions of tar extracted NAME, those of named members that
appeared before it, and everything after it.
* Fix CVE-2018-20482
When creating archives with the --sparse option, previous versions of
tar would loop endlessly if a sparse file had been truncated while
being archived.
Zstandard v1.3.8
perf: better decompression speed on large files (+7%) and cold dictionaries (+15%)
perf: slightly better compression ratio at high compression modes
api : finalized advanced API, last stage before "stable" status
api : new --rsyncable mode
api : support decompression of empty frames into NULL (used to be an error)
build: new set of build macros to generate a minimal size decoder
build: fix compilation on MIPS32
build: fix compilation with multiple -arch flags
build: highly upgraded meson build
build: improved buck support
build: fix cmake script : can create debug build
build: Makefile : grep works on both colored consoles and systems without color support
build: fixed zstd-pgo target
cli : support ZSTD_CLEVEL environment variable
cli : --no-progress flag, preserving final summary
cli : ensure destination file is not source file
cli : clearer error messages, notably when input file not present
doc : clarified zstd_compression_format.md
misc: fixed zstdgrep, returns 1 on failure
misc: NEWS renamed as CHANGELOG, in accordance with fb.oss policy
2.1.5
This release contains no functional changes other than changes to the Appveyor configuration for publishing wheels.
2.1.4
This release contains no functional changes other than changes to the Travis configuration for publishing wheels.
2.1.3
A simplification of the tox.ini file
More robust checking for pkgconfig availability
Integration of cibuildwheel into travis builds so as to build and publish binary wheels for Linux and OSX
Only require pytest-runner if pytest/test is being called
Blacklists version 3.3.0 of pytest which has a bug that can cause the tests to fail.
1.0.7
cross compilation support:
added ability to run cross-compiled ARM tests in qemu
added arm-linux-gnueabihf-gcc entry to Travis build matrix
faster decoding on ARM:
implemented prefetching HuffmanCode entry as uint32_t if target platform is ARM
fixed NEON extension detection
combed Huffman table building code for better readability
improved precision of window size calculation in CLI
minor fixes:
fixed typos
improved internal comments / parameter names
fixed BROTLI_PREDICT_TRUE/_FALSE detection for SunPro compiler
unburdened JNI (Bazel) builds from fetching the full JDK
1.0.6
Fixes
fix unaligned 64-bit accesses on AArch32
add missing files to the sources list
add ASAN/MSAN unaligned read specializations
fix CoverityScan "unused assignment" warning
fix JDK 8<->9 incompatibility
unbreak Travis builds
fix auto detect of bundled mode in cmake
* libmspack is now distributed with its test-suite, which now runs
as part of "make check"
* libmspack's programs in src/ have been moved to examples/ and do
not auto-install
Set TEST_TARGET.
New in 1.9
* Fixed invisible bad extraction when using cabextract -F (broken in 1.8)
* Fixed configure --with-external-libmspack which was broken in 1.8
* configure --with-external-libmspack will now use pkg-config. To configure
it manually, set environment variables libmspack_CFLAGS and libmspack_LIBS
before running configure.
* Now includes the test suite (make check)
New in 1.8
* cabextract -f now extracts even more badly damaged files than before
Upstream changes:
2.32 13/09/2018 (CBERRY)
- Fix absolute path handling on VMS
2.30 19/06/2018
- skip white_space test on MSWin32 as Windows will report that both
files exist, which is obviously a 'feature'
2.1.2:
Improves the speed of importing the module by avoiding the use of pkg_resources
Fixes some flake8 warnings
Resolves a small issue with the test suite when detecting memory usage increases
* On Arch Linux, the build failed; makedev(3) indicates that
#include <sys/sysmacros.h> is required.
* On Debian Buster, the build succeeds but a big warning is displayed:
warning: In the GNU C Library, "minor" is defined
by <sys/sysmacros.h>. For historical compatibility, it is
currently defined by <sys/types.h> as well, but we plan to
remove this soon. To use "minor", include <sys/sysmacros.h>
directly. If you did not intend to use a system-defined macro
"minor", you should undefine it after including <sys/types.h>.
0.10.1:
Backwards Compatibility Notes
* ZstdCompressor.stream_reader().closed is now a property instead of a
method.
* ZstdDecompressor.stream_reader().closed is now a property instead of a
method.
Changes
* Stop attempting to package Python 3.6 for Miniconda. The latest version of
Miniconda is using Python 3.7. The Python 3.6 Miniconda packages were
misleading, since they were built against Python 3.7.
* ZstdCompressor.stream_reader()'s and ZstdDecompressor.stream_reader()'s
closed attribute is now a read-only property instead of a method. This now
properly matches the IOBase API and allows instances to be used in more
places that accept IOBase instances.
0.10.0:
Backwards Compatibility Notes
* ZstdDecompressor.stream_reader().read() now consistently requires an
argument in both the C and CFFI backends. Before, the CFFI implementation
would assume a default value of -1, which was later rejected.
* The compress_literals argument and attribute has been removed from
zstd.ZstdCompressionParameters because it was removed by the zstd 1.3.5
API.
* ZSTD_CCtx_setParametersUsingCCtxParams() is no longer called on every
operation performed against ZstdCompressor instances. The reason for this
change is that the zstd 1.3.5 API no longer allows this without calling
ZSTD_CCtx_resetParameters() first. But if we called
ZSTD_CCtx_resetParameters() on every operation, we'd have to redo
potentially expensive setup when using dictionaries. We now call
ZSTD_CCtx_reset() on every operation and don't attempt to change
compression parameters.
* Objects returned by ZstdCompressor.stream_reader() no longer need to be
used as a context manager. The context manager interface still exists and its
behavior is unchanged.
* Objects returned by ZstdDecompressor.stream_reader() no longer need to be
used as a context manager. The context manager interface still exists and its
behavior is unchanged.
Bug Fixes
* ZstdDecompressor.decompressobj().decompress() should now return all data
from internal buffers in more scenarios. Before, it was possible for data to
remain in internal buffers. This data would be emitted on a subsequent call
to decompress(). The overall output stream would still be valid. But if
callers were expecting input data to exactly map to output data (say the
producer had used flush(COMPRESSOBJ_FLUSH_BLOCK) and was attempting to
map input chunks to output chunks), then the previous behavior would be
wrong. The new behavior is such that output from
flush(COMPRESSOBJ_FLUSH_BLOCK) fed into decompressobj().decompress()
should produce all available compressed input.
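The fixed chunk-to-chunk mapping can be illustrated with the standard library's zlib, where Z_SYNC_FLUSH plays the role of flush(COMPRESSOBJ_FLUSH_BLOCK); this is a stdlib analogy, not the zstandard API itself:

```python
import zlib

c = zlib.compressobj()
d = zlib.decompressobj()
roundtripped = []
for chunk in [b"alpha", b"beta", b"gamma"]:
    # The block-level flush completes the block, so each compressed
    # piece decodes back to exactly the input chunk that produced it.
    piece = c.compress(chunk) + c.flush(zlib.Z_SYNC_FLUSH)
    roundtripped.append(d.decompress(piece))
```

This one-in, one-out correspondence is what the fixed behavior above now guarantees for zstandard's decompressobj().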
* ZstdDecompressor.stream_reader().read() should no longer segfault after
a previous context manager resulted in error.
* ZstdCompressor.compressobj().flush(COMPRESSOBJ_FLUSH_BLOCK) now returns
all data necessary to flush a block. Before, it was possible for the
flush() to not emit all data necessary to fully represent a block. This
would mean decompressors wouldn't be able to decompress all data that had been
fed into the compressor and flush()ed.
New Features
* New module constants BLOCKSIZELOG_MAX, BLOCKSIZE_MAX,
TARGETLENGTH_MAX that expose constants from libzstd.
* New ZstdCompressor.chunker() API for manually feeding data into a
compressor and emitting chunks of a fixed size. Like compressobj(), the
API doesn't impose restrictions on the input or output types for the
data streams. Unlike compressobj(), it ensures output chunks are of a
fixed size. This makes this API useful when the compressed output is being
fed into an I/O layer, where uniform write sizes are useful.
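The fixed-size-output idea behind chunker() can be sketched generically in pure Python; rechunk below is a hypothetical helper for illustration, not part of the zstandard API:

```python
def rechunk(pieces, size):
    """Re-slice an iterable of byte strings into fixed-size chunks,
    buffering partial data; only the final chunk may be shorter."""
    buf = bytearray()
    for data in pieces:
        buf += data
        while len(buf) >= size:
            yield bytes(buf[:size])
            del buf[:size]
    if buf:
        yield bytes(buf)

chunks = list(rechunk([b"ab", b"cdef", b"g"], 3))  # [b"abc", b"def", b"g"]
```

In the real API, the variable-size pieces would be the compressor's output, and the uniform chunks are what gets handed to the I/O layer.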
* ZstdCompressor.stream_reader() no longer needs to be used as a context
manager.
* ZstdDecompressor.stream_reader() no longer needs to be used as a context
manager.
* Bundled zstandard library upgraded from 1.3.4 to 1.3.6.
Changes
* Added zstd_cffi.py and NEWS.rst to MANIFEST.in.
* zstandard.__version__ is now defined.
* Upgrade pip, setuptools, wheel, and cibuildwheel packages to latest versions.
* Upgrade various packages used in CI to latest versions. Notably tox (in
order to support Python 3.7).
* Use relative paths in setup.py to appease Python 3.7.
* Added CI for Python 3.7.
Zstandard v1.3.7
perf: slightly better decompression speed on clang (depending on hardware target)
fix: ratio for dictionary compression at levels 9 and 10, reported by @indygreg
build: no longer build backtrace by default in release mode; restrict further automatic mode
build: control backtrace support through build macro BACKTRACE
misc: added man pages for zstdless and zstdgrep, by @samrussell
Zstandard v1.3.6 release is focused on intensive dictionary compression for database scenarios.
This is a new environment we are experimenting with. The success of dictionary compression on small data, which databases tend to store in abundance, led to increased adoption, and we now see scenarios where literally thousands of dictionaries are used simultaneously, with permanent generation or update of new dictionaries.
== 1.0.0 (2018-05-20)
* *BreakingChange* The XZ module's methods now take any parameters
beyond the IO object as real Ruby keyword arguments rather than
a long argument list.
* *BreakingChange* XZ.decompress_stream now honours Ruby's
external and internal encoding concept instead of just
returning BINARY-tagged strings.
* *BreakingChange* Remove deprecated API on stream reader/writer
class and instead sync the API with Ruby's zlib library
(Ticket #12 by me).
* *BreakingChange* StreamWriter.new and StreamReader.new do not accept
a block anymore. This is part of syncing with Ruby's zlib API.
* *BreakingChange* StreamReader.open and StreamWriter.open always
return the new instance, even if a block is given to the method
(previous behaviour was to return the return value of the block).
This is part of the syncing with Ruby's zlib API.
* *BreakingChange* StreamReader.new and StreamWriter.new as well as
the ::open variants take additional arguments as real Ruby keyword
arguments now instead of a long parameter list plus options hash.
This is different from Ruby's own zlib API as that one takes both
a long parameter list and a hash of additional options. ruby-xz
is meant to follow zlib's semantics mostly, but not as a drop-in
replacement, so this divergence from zlib's API is okay (also
given that it isn't possible to replicate all possible options
1:1 anyway, since liblzma simply accepts different options than
libz). If you've never used these methods' optional arguments,
you should be fine.
* *BreakingChange* Stream#close now returns nil instead of the
number of bytes written. This syncs Stream#close with Ruby's
own IO#close, which also returns nil.
* *BreakingChange* Remove Stream#pos=, Stream#seek, Stream#stat. These
methods irritated the minitar gem, which doesn't expect them to
raise NotImplementedError, but directly to be missing if the object
does not support seeking.
* *BreakingChange* StreamReader and StreamWriter now honour Ruby's
encoding system instead of returning only BINARY-tagged strings.
* *Dependency* Remove dependency on ffi. ruby-xz now uses fiddle from
the stdlib instead.
* *Dependency* Remove dependency on io-like. ruby-xz now implements
all the IO mechanics itself. (Ticket #10 by me)
* *Dependency* Bump required Ruby version to 2.3.0.
* *Fix* liblzma.dylib not being found on OS X (Ticket #15 by
s0nspark).
- perf: minor decompression speed improvement (~+2%) with gcc
- fix : corruption in v1.8.2 at level 9 for files > 64KB under rare
conditions (#560)
- cli : new command --fast, by @jennifermliu
- api : LZ4_decompress_safe_partial() now decodes exactly the nb of
bytes requested (feature request #566)
- build : added Haiku target, by @fbrosson, and MidnightBSD, by @laffer1
- doc : updated documentation regarding dictionary compression
This is based on the decision The NetBSD Foundation made in 2008 to
do so, which was already applied to src.
This change has been applied to code which is likely not in other
repositories.
ok board@, reviewed by riastradh@
1.61 Sat 18 Aug 2018
- File::Find will not untaint [github/ThisUsedToBeAnEmail]
- Prevent traversing symlinks and parent directories when extracting [github/ppisar]
Changes:
improve q=1 compression on small files
inverse Bazel workspace tree
add rolling-composite-hasher for large-window mode
add tools to download and transform static dictionary data
Changes:
2018-03-15 guidod <guidod@gmx.de>
* fix a number of CVEs reported with special *.zip PoC files
* man-pages are generated with new dbk2man.py - docbook xmlto is optional now
* completing some doc strings while checking the new man-pages to look good
* allow the zziptests.py testsuite to run with an installed /bin path
* try to fix some issues on testing with non-installed binaries on non-linux platforms
* update autotools to allow compiling on some newer Mac / Win machines
* a zip-program is still required for testing, but some errors are gone when not there
* complete the approximation of fnmatch for the test binaries (on platforms without)
* allow windows __mmap.h to be simpler, helping with some problems on MinGW
* integrate 'fopen("wb")' from TexLive to be more portable across platforms
* more portability as well for helpers like strnlen being used in the sources
* update doc refs to point to github instead of sf.net
* update the sf.net pages to have a prominent hint on newer github.com location
* release v0.13.69
2018-04-26 Stuart Caie <kyzer@cabextract.org.uk>
* read_chunk(): the test that chunk numbers are in bounds was off
by one, so read_chunk() returned a pointer taken from outside
allocated memory that usually crashes libmspack when accessed.
Thanks to Hanno Böck for finding the issue and providing a sample.
* chmd_read_headers(): reject files with blank filenames. Thanks
again to Hanno Böck for finding the issue and providing a sample file.
2018-02-06 Stuart Caie <kyzer@cabextract.org.uk>
* chmd.c: fixed an off-by-one error in the TOLOWER() macro, reported
by Dmitry Glavatskikh. Thanks Dmitry!
2017-11-26 Stuart Caie <kyzer@cabextract.org.uk>
* kwajd_read_headers(): fix up the logic of reading the filename and
extension headers to avoid a one or two byte overwrite. Thanks to
Jakub Wilk for finding the issue.
* test/kwajd_test.c: add tests for KWAJ filename.ext handling
2017-10-16 Stuart Caie <kyzer@cabextract.org.uk>
* test/cabd_test.c: update the short string tests to expect not only
MSPACK_ERR_DATAFORMAT but also MSPACK_ERR_READ, because of the recent
change to cabd_read_string(). Thanks to maitreyee43 for spotting this.
* test/msdecompile_md5: update the setup instructions for this script,
and also change the script so it works with current Wine. Again, thanks
to maitreyee43 for trying to use it and finding it not working.
2017-08-13 Stuart Caie <kyzer@cabextract.org.uk>
* src/chmextract.c: support MinGW one-arg mkdir(). Thanks to AntumDeluge
for reporting this.
2017-08-13 Stuart Caie <kyzer@cabextract.org.uk>
* read_spaninfo(): a CHM file can have no ResetTable and have a
negative length in SpanInfo, which then feeds a negative output length
to lzxd_init(), which then sets frame_size to a value of your choosing,
the lower 32 bits of output length, larger than LZX_FRAME_SIZE. If the
first LZX block is uncompressed, this writes data beyond the end of the
window. This issue was raised by ClamAV as CVE-2017-6419. Thanks to
Sebastian Andrzej Siewior for finding this by chance!
* lzxd_init(), lzxd_set_output_length(), mszipd_init(): due to the issue
mentioned above, these functions now reject negative lengths
2017-08-05 Stuart Caie <kyzer@cabextract.org.uk>
* cabd_read_string(): add missing error check on result of read().
If an mspack_system implementation returns an error, it's interpreted
as a huge positive integer, which leads to reading past the end of the
stack-based buffer. Thanks to Sebastian Andrzej Siewior for explaining
the problem. This issue was raised by ClamAV as CVE-2017-11423
2016-04-20 Stuart Caie <kyzer@cabextract.org.uk>
* configure.ac: change my email address to kyzer@cabextract.org.uk
2015-05-10 Stuart Caie <kyzer@4u.net>
* cabd_read_string(): correct rejection of empty strings. Thanks to
Hanno Böck for finding the issue and providing a sample file.
2015-05-10 Stuart Caie <kyzer@4u.net>
* Makefile.am: Add subdir-objects option as suggested by autoreconf.
* configure.ac: Add AM_PROG_AR as suggested by autoreconf.
2015-01-29 Stuart Caie <kyzer@4u.net>
* system.h: if C99 inttypes.h exists, use its PRI{d,u}{32,64} macros.
Thanks to Johnathan Kollasch for the suggestion.
New in 1.7
* cabextract now supports an --encoding parameter, to specify the character
encoding of CAB filenames if they are not ASCII or UTF8
* cabextract -L now lowercases non-ASCII characters
Performing substitutions during post-patch breaks tools such as mkpatches,
making it very difficult to regenerate correct patches after making changes,
and often leading to substituted string replacements being committed.
Update LICENSE
Upstream changes:
0.26 (2018/06/09)
Implemented refactoring due to warnings from Perl::Critic.
0.25 (2018/06/04)
Implemented refactoring due to warnings from Perl::Critic.
Merge pull request #3 from manwar/suggest-code-tidy
0.24 (2018/06/02)
Added a LICENSE file (GNU GPL v3).
Removed MYMETA files (see https://rt.cpan.org/Ticket/Display.html?id=108171).
Improved Kwalitee by adding information to Makefile.PL
Fixed tests under OpenBSD
Added some code to check for OpenBSD tar, which is not quite compatible with the command line options passed by this module.
Also made the method is_gnu() more robust, testing the return code and properly handling STDOUT and STDERR when trying "tar --version".
Dependencies added are those already available on standard perl (Config and IPC::Open3).
Added a README.md for better formatting in Github project page.
Small refactorings and code formatting with perltidy.
Upstream changes:
2.30 19/06/2018
- skip white_space test on MSWin32 as Windows will report that both
files exist, which is obviously a 'feature'
2.28 08/06/2018 (madroach, ARC, OCBNET, ppisar)
- fix creating file with trailing whitespace on filename - fixes 103279
- allow archiving with absolute pathnames - fixes 97748
- small POD fix
- Speed up extract when archive contains lots of files
- CVE-2018-12015 directory traversal vulnerability [RT#125523]
2.0.1:
This release fixes an issue where tests failed when run under python setup.py test but passed when run under tox.
2.0.0:
It's now possible to specify a compression dictionary for block compression.
The bundled LZ4 libraries have been updated to 1.8.2
A compatibility fix for 2.x memoryview objects has been added.
Various flake8 cleanups and test additions.
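Preset dictionaries are not specific to LZ4; the standard library's zlib exposes the same idea through its zdict parameter, which allows a self-contained sketch of why a dictionary helps on small, similar payloads (the dictionary and payload bytes below are invented for the example):

```python
import zlib

# Bytes that typical payloads are expected to share.
zdict = b'{"user": "", "action": "", "timestamp": 0}'
payload = b'{"user": "alice", "action": "login", "timestamp": 1528000000}'

# Plain compression of one small payload.
comp_plain = zlib.compress(payload, 9)

# Dictionary-assisted compression of the same payload.
c = zlib.compressobj(level=9, zdict=zdict)
comp_dict = c.compress(payload) + c.flush()

# The same dictionary must be supplied to decompress.
d = zlib.decompressobj(zdict=zdict)
assert d.decompress(comp_dict) == payload
assert len(comp_dict) < len(comp_plain)
```

The gain comes from the compressor finding back-references into the dictionary for bytes that would otherwise be stored as literals.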
This Go language package supports the reading and writing of xz
compressed streams. It also includes a gxz command for compressing and
decompressing data. The package is completely written in Go and
doesn't have any dependency on any C code.
Changes 2.8:
add support for setting atime, ctime, mtime and birthtime
tell libarchive when writing an archive is aborted due to an exception
add support for getting uid and gid
add support for high resolution timestamps
add two new archive readers: stream_reader and custom_reader
add missing archive extraction flags
add the lz4 and warc formats
add support for write options and uid/gid lookup
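Those are bindings-specific additions, but the underlying idea, controlling per-entry timestamps and ownership, can be sketched with the standard library's tarfile module (entry names and values are illustrative, not the libarchive binding's API):

```python
import io
import tarfile

payload = b"hello"
buf = io.BytesIO()

# Write one member with explicit mtime, uid and gid. The pax format
# stores these values as text, so large ids are not clamped.
with tarfile.open(fileobj=buf, mode="w", format=tarfile.PAX_FORMAT) as tf:
    info = tarfile.TarInfo(name="greeting.txt")
    info.size = len(payload)
    info.mtime = 1528000000
    info.uid, info.gid = 1000, 1000
    tf.addfile(info, io.BytesIO(payload))

# Read it back and confirm the metadata round-trips.
buf.seek(0)
with tarfile.open(fileobj=buf) as tf:
    member = tf.getmember("greeting.txt")
assert (member.uid, member.gid) == (1000, 1000)
assert int(member.mtime) == 1528000000
```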
innoextract 1.7 (2018-06-12)
- Added support for Inno Setup 5.6.0 installers
- Added support for new GOG installers with GOG Galaxy file parts
- Added support for encrypted installers with the --password (-P) and --password-file options
- Added a --show-password option to print password check information
- Added a --check-password option to abort if the provided password does not match the stored checksum
- Added a --info (-i) convenience option to print information about the installer
- Added a --list-sizes option to print file sizes even with --quiet or --silent
- Added a --list-checksums option to print file checksums
- Added a --data-version (-V) option to print the data version and exit
- Added a --no-extract-unknown (-n) option to abort on unknown Inno Setup data versions
- Fixed building in paths that contain regex expressions
- Fixed case-sensitivity in parent directory when creating subdirectories
- Fixed .bin slice file names used with Inno Setup versions older than 4.1.7
- Fixed build with newer libc++ versions
- Made loading of .bin slice files case-insensitive
- The --test option can now be combined with --extract to abort on file checksum errors
- Now compiles in C++17 mode if supported
5.2.4:
* liblzma:
- Allow 0 as memory usage limit instead of returning
LZMA_PROG_ERROR. Now 0 is treated as if 1 byte was specified,
which effectively is the same as 0.
- Use "noexcept" keyword instead of "throw()" in the public
headers when a C++11 (or newer standard) compiler is used.
- Added a portability fix for recent Intel C Compilers.
- Microsoft Visual Studio build files have been moved under
windows/vs2013 and windows/vs2017.
* xz:
- Fix "xz --list --robot missing_or_bad_file.xz" which would
try to print an uninitialized string and thus produce garbage
output. Since the exit status is non-zero, most uses of such
a command won't try to interpret the garbage output.
- "xz --list foo.xz" could print "Internal error (bug)" in a
corner case where a specific memory usage limit had been set.
Engrampa, the archive viewer, has improved support for encrypted 7z archives.
Full changelog:
build: use PKG_CONFIG to fix cross-build
Add our copyright to About dialog and Caja extension
7z: Fix: rename files with password without the list encrypted
7z: Fix: delete/rename files/folders with the list encrypted
avoid deprecated gdk_screen_make_display_name
don’t use deprecated gtk_show_uri
use a more common gtk+ function
avoid deprecated gdk_screen_get_number
Add the button “Show the Files and Quit” in the progress dialog
Fix: create zip files in “maximum” compression level
Fix: Browsing history not correct
hide folders in “View All Files”
Fix: Wrong behavior of Skip button in Replace file dialog
UI files: avoid deprecations
gtk-utils: remove some GTK_STOCK deprecations
gtk-utils: avoid deprecated gtk_icon_size_lookup_for_settings
fr-window: fix some GTK_STOCK deprecations
add style class frame to scrolledwindows
fr-window: avoid deprecated GtkMisc and GtkAlignment
dlg-add-folder: avoid deprecated gtk_alignment_new()
build: use variable instead of hardcoded file name when cleaning
Translations update
v1.8.2
perf: *much* faster dictionary compression on small files
perf: improved decompression speed and binary size
perf: slightly faster HC compression and decompression speed
perf: very small compression ratio improvement
fix : compression compatible with low memory addresses (< 0xFFFF)
fix : decompression segfault when provided with NULL input
cli : new command --favor-decSpeed
cli : benchmark mode more accurate for small inputs
fullbench : can bench _destSize() variants
doc : clarified block format parsing restrictions
1.5.1 [2018-04-11]
==================
* Choose format of installed documentation based on available tools.
* Fix visibility of symbols.
* Fix zipcmp directory support.
* Don't set RPATH on Linux.
* Use Libs.private for link dependencies in pkg-config file.
* Fix build with LibreSSL.
* Various bugfixes.
0.9.0:
Backwards Compatibility Notes
CFFI 1.11 or newer is now required (previous requirement was 1.8).
The primary module is now zstandard. Please change imports of zstd and zstd_cffi to import zstandard. See the README for more. Support for importing the old names will be dropped in the next release.
ZstdCompressor.read_from() and ZstdDecompressor.read_from() have been renamed to read_to_iter(). read_from() is aliased to the new name and will be deleted in a future release.
Support for Python 2.6 has been removed.
Support for Python 3.3 has been removed.
The selectivity argument to train_dictionary() has been removed, as the feature disappeared from zstd 1.3.
Support for legacy dictionaries has been removed. Cover dictionaries are now the default. train_cover_dictionary() has effectively been renamed to train_dictionary().
The allow_empty argument from ZstdCompressor.compress() has been deleted and the method now allows empty inputs to be compressed by default.
estimate_compression_context_size() has been removed. Use CompressionParameters.estimated_compression_context_size() instead.
get_compression_parameters() has been removed. Use CompressionParameters.from_level() instead.
The arguments to CompressionParameters.__init__() have changed. If you were using positional arguments before, the positions now map to different arguments. It is recommended to use keyword arguments to construct CompressionParameters instances.
TARGETLENGTH_MAX constant has been removed (it disappeared from zstandard 1.3.4).
ZstdCompressor.write_to() and ZstdDecompressor.write_to() have been renamed to ZstdCompressor.stream_writer() and ZstdDecompressor.stream_writer(), respectively. The old names are still aliased, but will be removed in the next major release.
Content sizes are written into frame headers by default (ZstdCompressor(write_content_size=True) is now the default).
CompressionParameters has been renamed to ZstdCompressionParameters for consistency with other types. The old name is an alias and will be removed in the next major release.
Bug Fixes
Fixed memory leak in ZstdCompressor.copy_stream().
Fixed memory leak in ZstdDecompressor.copy_stream().
Fixed memory leak of ZSTD_DDict instances in CFFI's ZstdDecompressor.
New Features
Bundled zstandard library upgraded from 1.1.3 to 1.3.4. This delivers various bug fixes and performance improvements. It also gives us access to newer features.
Support for negative compression levels.
Support for long distance matching (facilitates compression ratios that approach LZMA).
Support for reading empty zstandard frames (with an embedded content size of 0).
Support for writing and partial support for reading zstandard frames without a magic header.
New stream_reader() API that exposes the io.RawIOBase interface (allows you to .read() from a file-like object).
Several minor features, bug fixes, and performance enhancements.
Wheels for Linux and macOS are now provided with releases.
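The stream_reader() idea, a read-only file-like object that decompresses on demand, can be sketched against the standard library by implementing io.RawIOBase over zlib (a simplified illustration; the real zstandard reader has more capability, and class and parameter names here are invented):

```python
import io
import zlib

class DecompressReader(io.RawIOBase):
    """Read-only stream that yields decompressed bytes pulled
    lazily from an underlying file object of zlib data."""

    def __init__(self, source, read_size=8192):
        self._source = source
        self._decomp = zlib.decompressobj()
        self._buffer = b""
        self._read_size = read_size

    def readable(self):
        return True

    def readinto(self, b):
        # Refill the internal buffer until we have data or hit EOF.
        while not self._buffer and not self._decomp.eof:
            raw = self._source.read(self._read_size)
            if not raw:
                break
            self._buffer = self._decomp.decompress(raw)
        n = min(len(b), len(self._buffer))
        b[:n] = self._buffer[:n]
        self._buffer = self._buffer[n:]
        return n

data = b"streaming " * 1000
reader = io.BufferedReader(DecompressReader(io.BytesIO(zlib.compress(data))))
assert reader.read(9) == b"streaming"
assert reader.read() == data[9:]
```

Because the class implements the io.RawIOBase contract, it can be wrapped in io.BufferedReader or io.TextIOWrapper like any other stream.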
Changes
Functions accepting bytes data now use the buffer protocol and can accept more types (like memoryview and bytearray).
Add #includes so compilation on OS X and BSDs works.
New ZstdDecompressor.stream_reader() API to obtain a read-only i/o stream of decompressed data for a source.
New ZstdCompressor.stream_reader() API to obtain a read-only i/o stream of compressed data for a source.
Renamed ZstdDecompressor.read_from() to ZstdDecompressor.read_to_iter(). The old name is still available.
Renamed ZstdCompressor.read_from() to ZstdCompressor.read_to_iter(). read_from() is still available at its old location.
Introduce the zstandard module to import and re-export the C or CFFI backend as appropriate. Behavior can be controlled via the PYTHON_ZSTANDARD_IMPORT_POLICY environment variable. See README for usage info.
Vendored version of zstd upgraded to 1.3.4.
Added module constants CONTENTSIZE_UNKNOWN and CONTENTSIZE_ERROR.
Add STRATEGY_BTULTRA compression strategy constant.
Switch from deprecated ZSTD_getDecompressedSize() to ZSTD_getFrameContentSize() replacement.
ZstdCompressor.compress() can now compress empty inputs without requiring special handling.
ZstdCompressor and ZstdDecompressor now have a memory_size() method for determining the current memory utilization of the underlying zstd primitive.
train_dictionary() has new arguments and functionality for trying multiple variations of COVER parameters and selecting the best one.
Added module constants LDM_MINMATCH_MIN, LDM_MINMATCH_MAX, and LDM_BUCKETSIZELOG_MAX.
Converted all consumers to zstandard's new advanced API, which uses ZSTD_compress_generic().
CompressionParameters.__init__ now accepts several more arguments, including support for long distance matching.
ZstdCompressionDict.__init__ now accepts a dict_type argument that controls how the dictionary should be interpreted. This can be used to force the use of content-only dictionaries or to require the presence of the dictionary magic header.
ZstdCompressionDict.precompute_compress() can be used to precompute the compression dictionary so it can efficiently be used with multiple ZstdCompressor instances.
Digested dictionaries are now stored in ZstdCompressionDict instances, created automatically on first use, and automatically reused by all ZstdDecompressor instances bound to that dictionary.
All meaningful functions now accept keyword arguments.
ZstdDecompressor.decompressobj() now accepts a write_size argument to control how much work to perform on every decompressor invocation.
ZstdCompressor.write_to() now exposes a tell() method, which returns the total number of bytes written so far.
ZstdDecompressor.stream_reader() now supports seek() when moving forward in the stream.
Removed TARGETLENGTH_MAX constant.
Added frame_header_size(data) function.
Added frame_content_size(data) function.
Consumers of ZSTD_decompress* have been switched to the new advanced decompression API.
ZstdCompressor and ZstdCompressionParams can now be constructed with negative compression levels.
ZstdDecompressor now accepts a max_window_size argument to limit the amount of memory required for decompression operations.
FORMAT_ZSTD1 and FORMAT_ZSTD1_MAGICLESS constants to be used with the format compression parameter to control whether the frame magic header is written.
ZstdDecompressor now accepts a format argument to control the expected frame format.
ZstdCompressor now has a frame_progression() method to return information about the current compression operation.
Error messages in CFFI no longer have b'' literals.
Compiler warnings and underlying overflow issues on 32-bit platforms have been fixed.
Builds in CI now build with compiler warnings as errors. This should prevent new compiler warnings from being introduced.
Make ZstdCompressor(write_content_size=True) and CompressionParameters(write_content_size=True) the default.
CompressionParameters has been renamed to ZstdCompressionParameters.
1.1.0:
This release removes the deprecated functions that were marked for removal in 1.0 but nonetheless remained:
lz4.lz4version()
LZ4FrameCompressor.finalize()
As a side effect, we no longer have a dependency on the deprecation package.
1.5.0 [2018-03-11]
==================
* Use standard cryptographic library instead of custom AES implementation.
This also simplifies the license.
* Use `clang-format` to format the source code.
* More Windows improvements.
version 1.30 - Sergey Poznyakoff, 2017-12-17
* Member names containing '..' components are now skipped when extracting.
This fixes tar's behavior to match its documentation, and is a bit
safer when extracting untrusted archives over old files (an unsafe
practice that the tar manual has long recommended against).
* Report erroneous use of position-sensitive options.
During archive creation or update, tar keeps track of positional
options (see the manual, subsection 3.4.4 "Position-Sensitive
Options"), and reports those that had no effect. For example, when
invoked as
tar -cf a.tar . --exclude '*.o'
tar will create the archive, but will exit with status 2, having
issued the following error message
tar: The following options were used after non-optional
arguments in archive create or update mode. These options are
positional and affect only arguments that follow them. Please,
rearrange them properly.
tar: --exclude '*.o' has no effect
tar: Exiting with failure status due to previous errors
* --numeric-owner now affects private headers too.
This helps the output of 'tar' to be more deterministic.
* Fixed the --delay-directory-restore option
In some cases tar would restore the directory permissions too early,
causing subsequent link extractions in that directory to fail.
* The --warnings=failed-read option
This new warning control option suppresses warning messages about
unreadable files and directories. It has effect only if used together
with the --ignore-failed-read option.
* The --warnings=none option now suppresses all warnings
This includes warnings about unreadable files produced when
--ignore-failed-read is in effect. To output these, use
--warnings=none --warnings=no-failed-read.
* Fix reporting of hardlink mismatches during compare
Tar reported incorrect target file name in the 'Not linked to'
diagnostic message.
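GNU tar's --exclude is positional because exclusion is decided per member as the archive is walked. The same per-member decision can be sketched with Python's tarfile filter callback (directory layout, file names, and the pattern are illustrative):

```python
import fnmatch
import io
import os
import tarfile
import tempfile

def exclude_objects(tarinfo):
    """Filter callback: drop members matching *.o, keep the rest."""
    if fnmatch.fnmatch(os.path.basename(tarinfo.name), "*.o"):
        return None
    return tarinfo

with tempfile.TemporaryDirectory() as d:
    for name in ("main.c", "main.o"):
        with open(os.path.join(d, name), "w") as fh:
            fh.write("x")
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tf:
        # The filter sees each member before it is written.
        tf.add(d, arcname="src", filter=exclude_objects)
    buf.seek(0)
    with tarfile.open(fileobj=buf) as tf:
        names = tf.getnames()

assert "src/main.c" in names
assert "src/main.o" not in names
```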
Changes in version 1.20:
The option '--loose-trailing' has been added.
The test used by lzip to discriminate trailing data from a corrupt
header in multimember or concatenated files has been improved to a
Hamming distance (HD) of 3, and the 3 bit flips must happen in different
magic bytes for the test to fail. As a consequence, some kinds of files
can no longer be appended to a lzip file as trailing data unless the
'--loose-trailing' option is used when decompressing.
Lziprecover can be used to remove conflicting trailing data from a file.
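The distance computation behind this test can be sketched in a few lines (the function below only counts bit flips against the magic string; lzip's surrounding decision logic is more involved, and the function name is invented):

```python
def bit_flips(candidate, magic=b"LZIP"):
    """Compare a 4-byte candidate against the lzip magic string and
    return (total flipped bits, number of bytes containing a flip)."""
    per_byte = [bin(a ^ b).count("1") for a, b in zip(candidate, magic)]
    return sum(per_byte), sum(1 for n in per_byte if n)

# An exact magic string has no flips.
assert bit_flips(b"LZIP") == (0, 0)
# 'p' differs from 'P' by a single bit, confined to one byte.
assert bit_flips(b"LZIp") == (1, 1)
```

Requiring the flips to be spread across different magic bytes makes a burst of corruption in a single byte less likely to be mistaken for trailing data.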
The contents of a corrupt or truncated header found in a multimember
file are now shown, after the error message, in the same format as
trailing data.
Option '-S, --volume-size' now keeps input files unchanged.
When creating multimember files or splitting the output in volumes, the
dictionary size is now adjusted for each member individually.
The 'bits/byte' ratio has been replaced with the inverse compression
ratio in the output.
The progress of decompression is now shown at verbosity level 2 (-vv) or
higher.
Progress of (de)compression is only shown if stderr is a terminal.
A final diagnostic is now shown at verbosity level 1 (-v) or higher if
any file fails the test when testing multiple files.
A second '.lz' extension is no longer added to the argument of '-o' if
it already ends in '.lz' or '.tlz'.
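That suffix rule is simple enough to state as code (a sketch; output_name is not lzip's actual function name):

```python
def output_name(arg):
    """Append '.lz' unless the argument already ends in '.lz' or '.tlz'."""
    return arg if arg.endswith((".lz", ".tlz")) else arg + ".lz"

assert output_name("file") == "file.lz"
assert output_name("file.lz") == "file.lz"
assert output_name("file.tlz") == "file.tlz"
```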
In case of (de)compressed size mismatch, the stored size is now also
shown in hexadecimal to ease visual comparison.
The dictionary size is now shown at verbosity level 4 (-vvvv) when
decompressing or testing.
The new chapter "Meaning of lzip's output" has been added to the manual.
Changelog:
2018-02-02 guidod <guidod@gmx.de>
* fix a number of CVEs reported with special *.zip files
* the testsuite has been expanded to cover all the CVEs
* some minor doc updates referencing GitHub instead of sf.net
* release v0.13.68
0.23.2:
Fixes an error in the deprecated LZ4Compressor.finalize() method
Improves documentation
Has all example code in documentation verified via doctest