The makefile.include fragment included by all of the project
makefiles unconditionally sets $(CC), $(LD), $(AR) and $(RANLIB)
to $(PREFIX){gcc,ld,ar,ranlib}. The intent was to provide a
facility for cross-compiling the code, but the use of $(PREFIX)
for this purpose was unfortunate.
This change adds a patch to set $(PREFIX) to the empty string in
the makefiles, which should fix the problem with the smallest
set of changes.
Allow overriding MKOCTFILE_* and use it to provide a full path to gfortran,
so the invocation doesn't fail because gfortran isn't normally in PATH.
Force the use of bsdtar. Unpacking that yields random PaxHeaders.1234 entries
triggers an Octave package sanity check, making the build of the 'signal'
package fail with a cryptic error and no further diagnostics.
Bump PKGREVISION
Fixes the build with PKGSRC_FORTRAN=gfortran (6.4) on NetBSD. The resulting
binary works fine. I suspect the issue is that the wrong gcc (one without
Fortran support) was being invoked.
Changes:
- Compatibility with the new Yahoo iCharts API. Yahoo removed the older API;
this release restores the ability to download from Yahoo (see the usage
sketch after this list).
- ``DataReader`` now supports Quandl.
- Removed Oanda as it became subscription only.
- web sessions are closed properly at the end of use
- Handle commas in large price quotes
- Test suite fixes for test_get_options_data
- Test suite fixes for test_wdi_download
- avoid monkey patching requests.Session
- `get_data_yahoo` now treats ``'null'`` strings as missing values
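A rough usage sketch of the restored Yahoo reader and the new Quandl data
source; the ticker, dates, and variable names are illustrative assumptions,
not taken from the release notes:

    import pandas_datareader.data as web

    # Daily quotes via the restored Yahoo reader.
    aapl = web.DataReader("AAPL", "yahoo", start="2017-01-01", end="2017-06-30")

    # 'quandl' is now also accepted as a data_source string; the exact symbol
    # format depends on the Quandl dataset being queried.
    # q = web.DataReader("AAPL", "quandl", start="2017-01-01", end="2017-06-30")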
Docs are now generated with pdoc.
Merged Pull Request 24, from @czlee:
* Change to step 4: When it looks for an uncovered zero, rather than starting at row 0, column 0, it starts where it left off, i.e. at the last uncovered zero it found. Since it doesn't start at (0,0), when it gets to the last column it now loops around to the first, and exits unsuccessfully if it gets back to where it started. This change reduces the solving time for (certain) large matrices. For instance, in tests, solving a matrix of size 394×394 went from about 2 minutes to about 4 seconds.
* Since Python 3 started cracking down on unnatural comparisons, the DISALLOWED constant added in Pull Request 19 no longer works. (It raises a TypeError for unorderable types, as is expected in Python 3.) Since this constant is meant to act like infinity, this modification just changes the two lines where it would otherwise try to make an illegal (in Python 3) comparison between a number and DISALLOWED_OBJ() and gets it to behave as if DISALLOWED is always larger.
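A rough Python sketch of the two ideas above (hypothetical names, not the
library's actual code): resume the search for an uncovered zero where the
previous search stopped and wrap around, and use a sentinel that always
orders as larger than any number so Python 3 comparisons never raise
TypeError.

    # Hypothetical sketch, assuming a square cost matrix stored as a list of
    # lists plus per-row/per-column "covered" flags.
    def find_uncovered_zero(cost, row_covered, col_covered, start_row, start_col):
        n = len(cost)
        row, col = start_row, start_col
        while True:
            if cost[row][col] == 0 and not row_covered[row] and not col_covered[col]:
                return row, col
            col += 1
            if col == n:              # past the last column: wrap around
                col = 0
                row = (row + 1) % n
            if (row, col) == (start_row, start_col):
                return -1, -1         # full pass without an uncovered zero

    # A DISALLOWED-style sentinel that orders as larger than any number.
    class _Disallowed:
        def __lt__(self, other):
            return False
        def __gt__(self, other):
            return True

    DISALLOWED = _Disallowed()
    assert DISALLOWED > 10**12 and not (DISALLOWED < 0)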
Added Travis CI integration.
Added some unit tests. See tests and tests/README.md.
Bug Fixes
* Fixed a bug where rolling computations failed on a column-MultiIndexed DataFrame
* Fixed a pytest marker failing downstream packages' test suites
Conversion
* Bug in pickle compat prior to the v0.20.x series, when UTC is a timezone in a Series/DataFrame/Index
* Bug in Series construction when passing a Series with dtype='category'.
* Bug in DataFrame.astype() when passing a Series as the dtype kwarg.
Indexing
* Bug in Float64Index causing an empty array instead of None to be returned from .get(np.nan) on a Series whose index did not contain any NaNs
* Bug in MultiIndex.isin causing an error when passing an empty iterable
* Fixed a bug in slicing a DataFrame/Series that has a TimedeltaIndex
I/O
* Bug in read_csv() in which files weren't opened as binary files by the C engine on Windows, so EOF characters appearing mid-field would cause the read to fail
* Bug in read_hdf() in which reading a Series saved to an HDF file in ‘fixed’ format fails when an explicit mode='r' argument is supplied
* Bug in DataFrame.to_latex() where bold_rows was wrongly specified to be True by default, whereas in reality row labels remained non-bold whatever parameter was provided.
* Fixed an issue with DataFrame.style() where generated element ids were not unique
* Fixed loading a DataFrame with a PeriodIndex, from a format='fixed' HDFStore, in Python 3, that was written in Python 2
Plotting
* Fixed regression that prevented RGB and RGBA tuples from being used as color arguments
* Fixed an issue with DataFrame.plot.scatter() that incorrectly raised a KeyError when categorical data is used for plotting
Reshaping
* PeriodIndex / TimedeltaIndex.join was missing the sort= kwarg
* Bug in joining on a MultiIndex with a category dtype for a level.
* Bug in merge() when merging/joining with multiple categorical columns
Categorical
* Bug in DataFrame.sort_values not respecting the kind parameter with categorical data
Packaged by Filip Hajny and updated by Kamel Derouiche and me.
scikit-learn is a Python module integrating classic machine learning
algorithms in the tightly-knit scientific Python world (numpy, scipy,
matplotlib). It aims to provide simple and efficient solutions to
learning problems, accessible to everybody and reusable in various
contexts: machine-learning as a versatile tool for science and
engineering.
The package does not include a change log, but one major feature
added since 20140106 is support for transforms of sizes other
than powers of two by means of the chirp Z transform.
* What is new in gsl-2.4:
** migrated documentation to Sphinx software, which has built-in
support for latex equations and figures in HTML output
** add const to declaration of appropriate gsl_rstat routines
** bug fix for #45730: change gsl_sf_sin/cos to libm sin/cos
** fix Cholesky documentation regarding upper triangle on output
** added routines to compute integrals with fixed-point quadrature,
based on IQPACK (Konrad Griessinger)
** added routines for Hermite polynomials, gsl_sf_hermite_*
(Konrad Griessinger)
** added new nonlinear least squares example for fitting
a Gaussian to data
** deprecated routines:
gsl_sf_coupling_6j_INCORRECT
gsl_sf_coupling_6j_INCORRECT_e
** deprecated routine 'gsl_linalg_hessenberg' (replaced
by gsl_linalg_hessenberg_decomp)
** removed routines which were deprecated in v2.1:
gsl_bspline_deriv_alloc
gsl_bspline_deriv_free
** changed COD expression to Q R Z^T instead of Q R Z to
be consistent with standard texts
** added check for nz == 0 in gsl_spmatrix_get
(reported by Manuel Schmitz)
** permit zero-dimension blocks, vectors, matrices, subvectors,
submatrices, and views of the above (bug #49988)
** added routine gsl_linalg_COD_lssolve2 for regularized
least squares problems
some typos.
From DESCR:
Math::Calc::Units is a simple calculator that keeps track of units. It
currently handles combinations of byte sizes and duration only, although
adding any other multiplicative types is easy. Any unknown type is
treated as a unique user type (with some effort to map English plurals
to their singular forms).
being one example), and the logic for building it is conditional, causing
PLIST mismatches for GLIBC users.
Reported by Jason Bacon in pkgsrc-users.
Bump PKGREVISION
This package will provide a complete implementation of unicode
maths for XeLaTeX and LuaLaTeX. Unicode maths is currently
supported by the following fonts: Cambria Math (Microsoft),
Minion Math (Johannes Kuster, typoma GmbH), Latin Modern Math
(Boguslaw Jackowski, Janusz M. Nowacki), TeX Gyre Pagella Math
(Boguslaw Jackowski, Janusz M. Nowacki), Asana-Math fonts
(Apostolos Syropolous), Neo Euler (Khaled Hosny), STIX (STI
Pub), and XITS Math (Khaled Hosny). As well as running XeTeX or
LuaTeX, this package requires recent versions of the fontspec,
expl3, xpackages, filehook, ucharcat and lualatex-math
packages.
The mathspec package provides an interface to typeset
mathematics in XeLaTeX with arbitrary text fonts using fontspec
as a backend. The package is under development and later
versions might be incompatible with this version, as this
version is incompatible with earlier versions. The package
requires at least version 0.9995 of XeTeX.
Highlights
* Operations like a + b + c will reuse temporaries on some platforms,
resulting in less memory use and faster execution.
* Inplace operations check if inputs overlap outputs and create temporaries
to avoid problems.
* New __array_ufunc__ attribute provides improved ability for classes to
override default ufunc behavior (see the sketch after this list).
* New np.block function for creating blocked arrays.
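A minimal sketch of the __array_ufunc__ override protocol; the wrapper class
and its names are illustrative, not part of NumPy:

    import numpy as np

    class LoggedArray:
        """Thin wrapper that intercepts ufunc calls via __array_ufunc__."""
        def __init__(self, values):
            self.values = np.asarray(values)

        def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
            # Unwrap LoggedArray operands, apply the underlying ufunc, re-wrap.
            unwrapped = [x.values if isinstance(x, LoggedArray) else x
                         for x in inputs]
            result = getattr(ufunc, method)(*unwrapped, **kwargs)
            print("intercepted %s.%s" % (ufunc.__name__, method))
            return LoggedArray(result)

    a = LoggedArray([1.0, 2.0, 3.0])
    b = np.add(a, 10)        # logs "intercepted add.__call__"
    print(b.values)          # the wrapped result: 11., 12., 13.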
New functions
* New np.positive ufunc.
* New np.divmod ufunc provides more efficient divmod.
* New np.isnat ufunc tests for NaT special values.
* New np.heaviside ufunc computes the Heaviside function.
* New np.isin function, improves on in1d.
* New np.block function for creating blocked arrays.
* New PyArray_MapIterArrayCopyIfOverlap added to NumPy C-API.
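A short usage sketch of the new Python-level functions listed above; the
array values are chosen only for illustration:

    import numpy as np

    A, B = np.eye(2), np.zeros((2, 2))
    np.block([[A, B], [B, A]])              # 4x4 array assembled from blocks
    np.positive(np.array([-1.0, 3.0]))      # elementwise +x (companion to np.negative)
    np.divmod(np.arange(10), 3)             # (quotient, remainder) in one pass
    np.isnat(np.datetime64("NaT"))          # True
    np.heaviside([-1.5, 0.0, 2.0], 0.5)     # 0., 0.5, 1.; second arg is the value at 0
    np.isin([1, 2, 3, 4], [2, 4])           # False, True, False, True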
This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes, bug fixes and performance improvements. We recommend that all users upgrade to this version.
We have improved the trust-region update rule in the primal-based Newton method. It is significantly faster (e.g., twice as fast or more) on some problems (see the technical report).
We now support scipy objects in the Python interface
The main features of this release are several new time series models based on the state space framework and multiple imputation using MICE, as well as many other enhancements. The codebase has also been updated to be compatible with recent numpy and pandas releases.
Statsmodels now uses GitHub to host the updated documentation, which is available at http://www.statsmodels.org/stable for the latest release and at http://www.statsmodels.org/dev/ for the development version.
New .agg() API for Series/DataFrame similar to the groupby/rolling/resample APIs, see here and the sketch after this list
Integration with the feather-format, including a new top-level pd.read_feather() and DataFrame.to_feather() method, see here.
The .ix indexer has been deprecated, see here
Panel has been deprecated, see here
Addition of an IntervalIndex and Interval scalar type, see here
Improved user API when grouping by index levels in .groupby(), see here
Improved support for UInt64 dtypes, see here
A new orient for JSON serialization, orient='table', that uses the Table Schema spec and that gives the possibility for a more interactive repr in the Jupyter Notebook, see here
Experimental support for exporting styled DataFrames (DataFrame.style) to Excel, see here
Window binary corr/cov operations now return a MultiIndexed DataFrame rather than a Panel, as Panel is now deprecated, see here
Support for S3 handling now uses s3fs, see here
Google BigQuery support now uses the pandas-gbq library, see here
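A small usage sketch of a few of the items above (.agg(), orient='table',
IntervalIndex, feather); the DataFrame contents and file name are
illustrative assumptions:

    import pandas as pd

    df = pd.DataFrame({"A": [1, 2, 3], "B": [10.0, 20.0, 30.0]})

    # New .agg() API, mirroring the groupby/rolling/resample aggregation interface.
    df.agg(["sum", "mean"])                     # one row per aggregation function
    df.agg({"A": "sum", "B": ["min", "max"]})   # per-column aggregations

    # Table Schema JSON via the new orient='table'.
    df.to_json(orient="table")

    # The new IntervalIndex type.
    pd.IntervalIndex.from_breaks([0, 1, 2, 3])

    # Feather round trip (requires the feather-format dependency).
    # df.to_feather("df.feather"); pd.read_feather("df.feather")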
Improvements
------------
- setup.py detects conda env and uses installed conda (hdf5, bzip2, lzo
and/or blosc) packages when building from source.
Bugs fixed
----------
- Linux wheels are now built against the built-in blosc.
- Fixed Windows absolute paths in ptrepack, ptdump, ptree.
Updates to keep up with API changes in newer NumPy versions
Removed several warnings
Fix bugs in the string contains() function
Detection of the POWER processor
Fix pow result casting
Fix integers to negative integer powers
Detect numpy exceptions in expression evaluation
Better handling of RC versions
Upstream changes:
0.65 2017-05-03
[API Changes]
- Config options irand and primeinc are deprecated. They will carp if set.
[FUNCTIONALITY AND PERFORMANCE]
- Add Math::BigInt::Lite to list of known bigint objects.
- sum_primes fix for certain ranges with results near 2^64.
- is_prime, next_prime, prev_prime do a lock-free check for a find-in-cache
optimization. This is a big help on some platforms with many threads.
- C versions of LogarithmicIntegral and inverse_li rewritten.
inverse_li honors the documentation promise within FP representation.
Thanks to Kim Walisch for motivation and discussion.
- Slightly faster XS nth_prime_approx.
- PP nth_prime_approx uses inverse_li past 1e12, which should run
at a reasonable speed now.
- Adjusted crossover points for segment vs. LMO interval prime_count.
- Slightly tighter prime_count_lower, nth_prime_upper, and Ramanujan bounds.
0.64 2017-04-17
[FUNCTIONALITY AND PERFORMANCE]
- inverse_li switched to Halley instead of binary search. Faster.
- Don't call pre-0.46 GMP backend directly for miller_rabin_random.
0.63 2017-04-16
[FUNCTIONALITY AND PERFORMANCE]
- Moved miller_rabin_random to separate interface.
Make catching of negative bases more explicit.
0.62 2017-04-16
[API Changes]
- The 'irand' config option is removed, as we now use our own CSPRNG.
It can be seeded with csrand() or srand(). The latter is not exported.
- The 'primeinc' config option is deprecated and will go away soon.
[ADDED]
- irand() Returns uniform random 32-bit integer
- irand64() Returns uniform random 64-bit integer
- drand([fmax]) Returns uniform random NV (floating point)
- urandomb(n) Returns uniform random integer less than 2^n
- urandomm(n) Returns uniform random integer in [0, n-1]
- random_bytes(nbytes) Return a string of CSPRNG bytes
- csrand(data) Seed the CSPRNG
- srand([UV]) Insecure seed for the CSPRNG (not exported)
- entropy_bytes(nbytes) Returns data from our entropy source
- :rand Exports srand, rand, irand, irand64
- nth_ramanujan_prime_upper(n) Upper limit of nth Ramanujan prime
- nth_ramanujan_prime_lower(n) Lower limit of nth Ramanujan prime
- nth_ramanujan_prime_approx(n) Approximate nth Ramanujan prime
- ramanujan_prime_count_upper(n) Upper limit of Ramanujan prime count
- ramanujan_prime_count_lower(n) Lower limit of Ramanujan prime count
- ramanujan_prime_count_approx(n) Approximate Ramanujan prime count
[FUNCTIONALITY AND PERFORMANCE]
- vecsum is faster when returning a bigint from native inputs (we
construct the 128-bit string in C, then call _to_bigint).
- Add a simple Legendre prime sum using uint128_t, which means it is only
available with modern 64-bit compilers. It allows reasonably fast prime sums for
larger inputs, e.g. 10^12 in 10 seconds. Kim Walisch's primesum is
much more sophisticated and over 100x faster.
- is_pillai about 10x faster for composites.
- Much faster Ramanujan prime count and nth prime. These also now use
vastly less memory even with large inputs.
- small speed ups for cluster sieve.
- faster PP is_semiprime.
- Add prime option to forpart restrictions for all prime / non-prime.
- is_primitive_root needs two args, as documented.
- We do random seeding ourselves now, so remove dependency.
- Random primes functions moved to XS / GMP, 3-10x faster.
0.61 2017-03-12
[ADDED]
- is_semiprime(n) Returns 1 if n has exactly 2 prime factors
- is_pillai(p) Returns 0 or v where v! % p == p-1 and p % v != 1
- inverse_li(n) Integer inverse of Logarithmic Integral
[FUNCTIONALITY AND PERFORMANCE]
- is_power(-1,k) now returns true for odd k.
- RiemannZeta with GMP was not subtracting 1 from results > 9.
- PP Bernoulli algorithm changed to Seidel from Brent-Harvey. 2x speedup.
Math::BigNum is 10x faster, and our GMP code is 2000x faster.
- LambertW changes in C and PP. Much better initial approximation, and
switch iteration from Halley to Fritsch. 2 to 10x faster.
- Try to use GMP LambertW for bignums if it is available.
- Use Montgomery math in more places:
= sqrtmod. 1.2-1.7x faster.
= is_primitive_root. Up to 2x faster for some inputs.
= p-1 factoring stage 1.
- Tune AKS r/s selection above 32-bit.
- primes.pl uses twin_primes function for ~3x speedup.
- native chinese can handle some cases that used to overflow. Use Shell
sort on moduli to prevent pathological-but-reasonable test case.
- chinese directly to GMP
- Switch to Bytes::Random::Secure::Tiny -- fewer dependencies.
- PP nth_prime_approx has better MSE and uses inverse_li above 10^12.
- All random prime functions will use GMP versions if possible and
if a custom irand has not been configured.
They are much faster than the PP versions at smaller bit sizes.
- is_carmichael and is_pillai small speedups.
Upstream changes: 2017-03-15 v1.999811 pjacklam
* Fix an old bug in the Math::BigFloat methods as_hex(), as_oct(), and as_bin()
resulting in loss of accuracy. This bug was introduced in Math-BigInt-1.76.
Due to a naive copy and paste by me, and lack of tests, this bug was also
present in the newer to_hex(), to_oct(), and to_bin() methods. This shows
the bug, as it did not print "0xffff...":
print Math::BigFloat -> from_hex("f" x 30) -> as_hex();
* Fix incorrect formatting in the output from the Math::BigFloat methods
to_hex(), to_oct(), and to_bin() when the output was zero. A prefix was
added when it shouldn't have been.
* Add tests to bigintpm.inc and bigfltpm.inc for better testing of as_hex(),
as_oct(), and as_bin() as well as to_hex(), to_oct(), and to_bin().
* "Synchronize" tests and code formatting in bigintpm.inc and bigfltpm.inc.
2017-03-01 v1.999810 pjacklam
* CPAN RT #120240 revealed that the problems with undefined values are still
present. After a close examination, I believe the only way to get this
really working is to make blog() call objectify() differently depending
on whether the base for the logarithm is undefined or not. That way we can
avoid objectify() converting the undefined value to a zero. Ideally, we
should warn about undefined values when used in any other context, but we'll
handle that in a later release. See also the related changelog entry for
v1.999801.
* Fix the way the argument count is computed in objectify(). When an argument
count of 0 is given, it means that we should objectify all input arguments.
However, it turned out that the actual argument count was computed
incorrectly.
* Fix CPAN RT #120242 regarding c3 method resolution.