Upstream changes:
0.65 2017-05-03
[API Changes]
- Config options irand and primeinc are deprecated. They will carp if set.
[FUNCTIONALITY AND PERFORMANCE]
- Add Math::BigInt::Lite to list of known bigint objects.
- sum_primes fix for certain ranges with results near 2^64.
- is_prime, next_prime, prev_prime do a lock-free check for a find-in-cache
optimization. This is a big help on some platforms with many threads.
- C versions of LogarithmicIntegral and inverse_li rewritten.
inverse_li honors the documentation promise within FP representation.
Thanks to Kim Walisch for motivation and discussion.
- Slightly faster XS nth_prime_approx.
- PP nth_prime_approx uses inverse_li past 1e12, which should run
at a reasonable speed now.
- Adjusted crossover points for segment vs. LMO interval prime_count.
- Slightly tighter prime_count_lower, nth_prime_upper, and Ramanujan bounds.
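
For orientation, a minimal sketch of how the bound and approximation
functions mentioned above fit together; the function names are the
documented Math::Prime::Util ones, but the inputs and the check are only
illustrative:

    use Math::Prime::Util qw(prime_count prime_count_lower prime_count_upper
                             nth_prime_approx nth_prime_upper);

    my $n = 10**9;
    # The lower/upper bounds always bracket the exact count; 0.65 tightens
    # prime_count_lower slightly.
    my ($lo, $hi) = (prime_count_lower($n), prime_count_upper($n));
    my $pi = prime_count($n);
    die "bound violated" unless $lo <= $pi && $pi <= $hi;

    # nth_prime_approx is an estimate (PP now uses inverse_li past 1e12);
    # nth_prime_upper is a guaranteed upper bound on the n-th prime.
    printf "p_1e7 ~= %d, and certainly <= %d\n",
           nth_prime_approx(10**7), nth_prime_upper(10**7);
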
0.64 2017-04-17
[FUNCTIONALITY AND PERFORMANCE]
- inverse_li switched to Halley instead of binary search. Faster.
- Don't call pre-0.46 GMP backend directly for miller_rabin_random.
0.63 2017-04-16
[FUNCTIONALITY AND PERFORMANCE]
- Moved miller_rabin_random to separate interface.
Make catching of negative bases more explicit.
0.62 2017-04-16
[API Changes]
- The 'irand' config option is removed, as we now use our own CSPRNG.
It can be seeded with csrand() or srand(). The latter is not exported.
- The 'primeinc' config option is deprecated and will go away soon.
[ADDED]
- irand() Returns uniform random 32-bit integer
- irand64() Returns uniform random 64-bit integer
- drand([fmax]) Returns uniform random NV (floating point)
- urandomb(n) Returns uniform random integer less than 2^n
- urandomm(n) Returns uniform random integer in [0, n-1]
- random_bytes(nbytes) Return a string of CSPRNG bytes
- csrand(data) Seed the CSPRNG
- srand([UV]) Insecure seed for the CSPRNG (not exported)
- entropy_bytes(nbytes) Returns data from our entropy source
- :rand Exports srand, rand, irand, irand64
- nth_ramanujan_prime_upper(n) Upper limit of nth Ramanujan prime
- nth_ramanujan_prime_lower(n) Lower limit of nth Ramanujan prime
- nth_ramanujan_prime_approx(n) Approximate nth Ramanujan prime
- ramanujan_prime_count_upper(n) Upper limit of Ramanujan prime count
- ramanujan_prime_count_lower(n) Lower limit of Ramanujan prime count
- ramanujan_prime_count_approx(n) Approximate Ramanujan prime count
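
A brief usage sketch of the functions added above, assuming they are
imported by name as listed (srand is not exported; the :rand tag provides
srand, rand, irand, irand64); the sizes and arguments are illustrative:

    use Math::Prime::Util qw(csrand entropy_bytes irand irand64 drand
                             urandomb urandomm random_bytes
                             nth_ramanujan_prime_upper
                             ramanujan_prime_count_lower);

    csrand( entropy_bytes(32) );   # seed the CSPRNG with 32 entropy bytes
    my $r32 = irand();             # uniform random 32-bit integer
    my $r64 = irand64();           # uniform random 64-bit integer
    my $f   = drand();             # uniform random NV (like rand)
    my $b   = urandomb(20);        # uniform random integer below 2^20
    my $m   = urandomm(100);       # uniform random integer in [0, 99]
    my $key = random_bytes(16);    # 16 CSPRNG bytes as a string

    # Inexpensive bounds on Ramanujan primes from the new upper/lower calls.
    printf "R_1000 <= %d\n", nth_ramanujan_prime_upper(1000);
    printf "at least %d Ramanujan primes below 10^6\n",
           ramanujan_prime_count_lower(1_000_000);
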
[FUNCTIONALITY AND PERFORMANCE]
- vecsum is faster when returning a bigint from native inputs (we
construct the 128-bit string in C, then call _to_bigint).
- Add a simple Legendre prime sum using uint128_t, so it is only available
with modern 64-bit compilers. It allows reasonably fast prime sums for
larger inputs, e.g. 10^12 in 10 seconds. Kim Walisch's primesum is
much more sophisticated and over 100x faster.
- is_pillai about 10x faster for composites.
- Much faster Ramanujan prime count and nth prime. These also now use
vastly less memory even with large inputs.
- Small speedups for the cluster sieve.
- Faster PP is_semiprime.
- Add prime option to forpart restrictions for all prime / non-prime parts (sketch below).
- is_primitive_root needs two args, as documented.
- We do random seeding ourselves now, so the external dependency was removed.
- Random prime functions moved to XS / GMP, 3-10x faster.
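
A short sketch of the forpart prime restriction noted above; this assumes
prime => 1 restricts all parts to primes and prime => 0 to non-primes, per
the entry:

    use Math::Prime::Util qw(forpart);

    # Partitions of 20 into prime parts only.
    forpart { print join("+", @_), "\n" } 20, { prime => 1 };

    # Partitions of 20 with no prime parts.
    forpart { print join("+", @_), "\n" } 20, { prime => 0 };
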
0.61 2017-03-12
[ADDED]
- is_semiprime(n) Returns 1 if n has exactly 2 prime factors
- is_pillai(n)     Returns 0 or v where v! % n == n-1 and n % v != 1
- inverse_li(n) Integer inverse of Logarithmic Integral
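
A minimal sketch of the three new functions with illustrative inputs
(23 is the smallest Pillai prime; 14! % 23 == 22 and 23 % 14 != 1):

    use Math::Prime::Util qw(is_semiprime is_pillai inverse_li nth_prime);

    print is_semiprime(15), "\n";  # 1: 15 = 3*5 has exactly two prime factors
    print is_semiprime(30), "\n";  # 0: 30 = 2*3*5 has three

    print is_pillai(23), "\n";     # nonzero v (14 satisfies the conditions)
    print is_pillai(24), "\n";     # 0: no valid v exists for 24

    # inverse_li(n) gives a good first approximation to the n-th prime.
    printf "inverse_li(1e6) = %d vs nth_prime(1e6) = %d\n",
           inverse_li(10**6), nth_prime(10**6);
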
[FUNCTIONALITY AND PERFORMANCE]
- is_power(-1,k) now returns true for odd k.
- RiemannZeta with GMP was not subtracting 1 from results > 9.
- PP Bernoulli algorithm changed to Seidel from Brent-Harvey. 2x speedup.
Math::BigNum is 10x faster, and our GMP code is 2000x faster.
- LambertW changes in C and PP. Much better initial approximation, and
switch iteration from Halley to Fritsch. 2 to 10x faster.
- Try to use GMP LambertW for bignums if it is available.
- Use Montgomery math in more places:
= sqrtmod. 1.2-1.7x faster.
= is_primitive_root. Up to 2x faster for some inputs.
= p-1 factoring stage 1.
- Tune AKS r/s selection above 32-bit.
- primes.pl uses twin_primes function for ~3x speedup.
- Native chinese can handle some cases that used to overflow. Use Shell sort
on the moduli so a reasonable test case no longer triggers pathological
behavior.
- chinese goes directly to the GMP backend when it is available (usage sketch
at the end of this list).
- Switch to Bytes::Random::Secure::Tiny -- fewer dependencies.
- PP nth_prime_approx has better MSE and uses inverse_li above 10^12.
- All random prime functions will use the GMP versions if possible and if a
custom irand has not been configured. The GMP versions are much faster than
the PP versions at smaller bit sizes.
- is_carmichael and is_pillai small speedups.
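
For reference, a usage sketch of chinese as touched by the entries above;
arguments are [remainder, modulus] pairs and the residues are illustrative:

    use Math::Prime::Util qw(chinese);

    # Solve x = 14 mod 643, x = 254 mod 419, x = 87 mod 733.
    my $x = chinese([14, 643], [254, 419], [87, 733]);
    print "$x\n";

    # Inconsistent congruences return undef: x = 0 mod 4 forces x even,
    # while x = 3 mod 6 forces x odd.
    my $none = chinese([0, 4], [3, 6]);
    print defined($none) ? $none : "no solution", "\n";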