An evolution of 'reshape2'. It's designed specifically for data
tidying (not general reshaping or aggregating) and works well with
'dplyr' data pipelines.
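A minimal sketch of the tidying workflow, using the older gather()/spread() interface (the data frame and column names are illustrative):

    library(tidyr)

    # A small "wide" table with one column per year.
    cases <- data.frame(country = c("A", "B"),
                        `2019` = c(10, 20),
                        `2020` = c(12, 25),
                        check.names = FALSE)

    # Tidy into one observation per row, then back to wide form.
    long <- gather(cases, year, n, -country)
    wide <- spread(long, year, n)
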
Provides functions for the consistent analysis of compositional data
(e.g. portions of substances) and positive numbers (e.g.
concentrations) in the way proposed by J. Aitchison and V.
Pawlowsky-Glahn.
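A minimal sketch, assuming the illustrative portions below; acomp() and clr() are part of the package's Aitchison-geometry toolkit:

    library(compositions)

    # Treat portions as an Aitchison composition on the simplex.
    x <- acomp(c(sand = 0.3, silt = 0.5, clay = 0.2))

    clr(x)    # centred log-ratio coordinates
    mean(x)   # compositional mean (closed geometric mean)
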
"Essential" Robust Statistics. Tools allowing to analyze data with
robust methods. This includes regression methodology including model
selections and multivariate statistics where we strive to cover the
book "Robust Statistics, Theory and Methods" by 'Maronna, Martin and
Yohai'; Wiley 2006.
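A minimal sketch, assuming this describes the 'robustbase' package; lmrob() fits an MM-type robust regression on the bundled coleman data:

    library(robustbase)

    # MM-type robust regression, compared with ordinary least squares.
    data(coleman)
    fit_rob <- lmrob(Y ~ ., data = coleman)
    fit_ols <- lm(Y ~ ., data = coleman)
    summary(fit_rob)
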
Provides convenience functions for advanced linear algebra with
tensors and computation with datasets of tensors at a higher level of
abstraction. It includes Einstein and Riemann summation conventions,
dragging, co- and contravariant indices, and parallel computations on
sequences of tensors.
Differential Evolution (DE) stochastic algorithms for global
optimization of problems with and without constraints. The aim is to
curate a collection of its state-of-the-art variants that (1) do not
sacrifice simplicity of design, (2) are essentially tuning-free, and
(3) can be efficiently implemented directly in the R language.
Currently, it only provides an implementation of the 'jDE' algorithm
by Brest et al. (2006) <doi:10.1109/TEVC.2006.872133>.
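A minimal sketch, assuming this is the 'DEoptimR' package and its JDEoptim() entry point; only box constraints are supplied:

    library(DEoptimR)

    # Minimise the 2-D Rosenbrock function with jDE.
    rosenbrock <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
    res <- JDEoptim(lower = c(-2, -2), upper = c(2, 2), fn = rosenbrock)
    res$par    # should be close to c(1, 1)
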
E-statistics (energy) tests and statistics for multivariate and
univariate inference, including distance correlation and one-sample,
two-sample, and multi-sample tests for comparing multivariate
distributions. Implemented methods include measuring and testing
multivariate independence based on distance correlation, partial
distance correlation, multivariate goodness-of-fit tests, k-groups and
hierarchical clustering based on energy distance, testing for
multivariate normality, distance components (disco) for non-parametric
analysis of structured data, and other energy statistics.
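A minimal sketch of the distance-correlation functions in the 'energy' package (R is the number of permutation replicates; the simulated data are illustrative):

    library(energy)

    set.seed(1)
    x <- matrix(rnorm(100), ncol = 2)
    y <- x + matrix(rnorm(100, sd = 0.5), ncol = 2)

    dcor(x, y)                 # distance correlation statistic
    dcov.test(x, y, R = 199)   # permutation test of independence
    mvnorm.etest(x, R = 199)   # energy test of multivariate normality
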
Provides a %<-% operator to perform multiple, unpacking, and
destructuring assignment in R. The operator unpacks the right-hand
side of an assignment into multiple values and assigns these values to
variables on the left-hand side of the assignment.
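A minimal sketch of the operator:

    library(zeallot)

    # Unpack a vector into several variables at once.
    c(lat, lon) %<-% c(38.061944, -122.643889)

    # Destructure a function's return value, e.g. model coefficients.
    c(intercept, slope) %<-% coef(lm(mpg ~ wt, data = mtcars))
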
Defines new notions of prototype and size that are used to provide
tools for consistent and well-founded type-coercion and
size-recycling, and are in turn connected to ideas of type- and
size-stability useful for analyzing function interfaces.
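A minimal sketch of the core verbs built on these notions (function names as in recent 'vctrs' releases):

    library(vctrs)

    vec_ptype2(integer(), double())   # common prototype: double
    vec_cast(1:3, double())           # cast using those coercion rules

    vec_size(mtcars)                  # size = number of rows (32)
    vec_recycle(1, size = 3)          # recycle to a common size: 1 1 1
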
Summary statistics, two-sample tests, rank tests, generalised linear
models, cumulative link models, Cox models, loglinear models, and
general maximum pseudolikelihood estimation for multistage stratified,
cluster-sampled, unequally weighted survey samples. Variances by
Taylor series linearisation or replicate weights. Post-stratification,
calibration, and raking. Two-phase subsampling designs. Graphics. PPS
sampling without replacement. Principal components, factor analysis.
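A minimal sketch using the California schools (api) data shipped with the package:

    library(survey)

    data(api)
    dstrat <- svydesign(id = ~1, strata = ~stype, weights = ~pw,
                        fpc = ~fpc, data = apistrat)

    svymean(~api00, dstrat)                        # design-based mean and SE
    svyglm(api00 ~ ell + meals, design = dstrat)   # survey-weighted GLM
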
Functions to facilitate inference on the relative importance of
predictors in a linear or generalized linear model, and a couple of
useful Tcl/Tk widgets.
Helpers for reordering factor levels (including moving specified
levels to front, ordering by first appearance, reversing, and randomly
shuffling), and tools for modifying factor levels (including
collapsing rare levels into other, 'anonymising', and manually
'recoding').
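A minimal sketch of the helpers mentioned above, assuming this is the 'forcats' package:

    library(forcats)

    f <- factor(c("b", "b", "a", "c", "c", "c"))

    fct_relevel(f, "c")          # move "c" to the front
    fct_inorder(f)               # order levels by first appearance
    fct_rev(f)                   # reverse the level order
    fct_shuffle(f)               # randomly shuffle the levels
    fct_lump(f, n = 1)           # collapse rare levels into "Other"
    fct_anon(f)                  # anonymise level names
    fct_recode(f, alpha = "a")   # manually recode a level
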
Provides tools for determining estimability of linear functions of
regression coefficients, and 'epredict' methods that handle
non-estimable cases correctly. Estimability theory is discussed in
many linear-models textbooks including Chapter 3 of Monahan, JF
(2008), "A Primer on Linear Models", Chapman and Hall (ISBN
978-1-4200-6201-4).
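A minimal sketch with a deliberately rank-deficient model (the data are illustrative):

    library(estimability)

    # x3 is a linear combination of x1 and x2, so the fit is rank-deficient.
    d <- data.frame(x1 = 1:6, x2 = c(3, 1, 4, 1, 5, 9))
    d$x3 <- d$x1 + d$x2
    d$y  <- rnorm(6)
    fit <- lm(y ~ x1 + x2 + x3, data = d)

    nb <- nonest.basis(fit)          # basis for non-estimable functions
    is.estble(c(0, 1, 0, 0), nb)     # FALSE: the x1 coefficient alone
    is.estble(c(0, 1, 0, 1), nb)     # TRUE: x1 + x3 is estimable
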
The ellipsis is a powerful tool for extending functions. Unfortunately
this power comes at a cost: misspelled arguments will be silently
ignored. The ellipsis package provides a collection of functions to
catch problems and alert the user.
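A minimal sketch (the wrapper function is illustrative): check_dots_used() flags arguments passed through ... that are never evaluated, such as a misspelled trim:

    library(ellipsis)

    mean2 <- function(x, ..., trim = 0) {
      check_dots_used()          # complain about ignored ... arguments
      mean(x, trim = trim, ...)
    }

    mean2(1:10, trimm = 0.1)     # "trimm" is misspelled and never used
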
Functions introduced or changed since R v3.0.0 are re-implemented in
this package. The backports are conditionally exported in order to let
R resolve the function name to either the implemented backport, or the
respective base version, if available. Package developers can make use
of new functions or arguments by selectively importing specific
backports to support older installations.
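A minimal sketch of the documented import pattern (the package name "mypkg" and the use of trimws(), introduced in R 3.2.0, are illustrative):

    # In mypkg/R/zzz.R: resolve trimws() to the backport on old R,
    # or to the base version where it is already available.
    .onLoad <- function(libname, pkgname) {
      backports::import(pkgname, "trimws")
    }
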
Two nonparametric methods for selecting multiple regression
transformations are provided. The first, Alternating Conditional
Expectations (ACE), is an algorithm to find the fixed point of maximal correlation, i.e.
it finds a set of transformed response variables that maximizes R^2
using smoothing functions [see Breiman, L., and J.H. Friedman. 1985.
"Estimating Optimal Transformations for Multiple Regression and
Correlation". Journal of the American Statistical Association.
80:580-598. <doi:10.1080/01621459.1985.10478157>]. Also included is
the Additivity and Variance Stabilization (AVAS) method, which works
better than ACE when correlation is low [see Tibshirani, R. 1988.
"Estimating Transformations for Regression via Additivity and Variance
Stabilization". Journal of the American Statistical Association.
83:394-405. <doi:10.1080/01621459.1988.10478610>]. A good introduction
to these two methods is in Chapter 16 of Frank Harrell's "Regression
Modeling Strategies" in the Springer Series in Statistics.
Various utilities are provided that might be used in spatial
statistics and elsewhere. It delivers a method for solving linear
equations that checks the sparsity of the matrix before any algorithm
is used. Furthermore, it includes the Struve functions.
Infrastructure for extended formulas with multiple parts on the
right-hand side and/or multiple responses on the left-hand side (see
<DOI:10.18637/jss.v034.i01>).
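A minimal sketch of a multi-part, multi-response formula:

    library(Formula)

    # Two responses on the left, two parts on the right, separated by "|".
    f <- Formula(y1 | y2 ~ x1 + x2 | z1 + z2)
    length(f)                      # c(2, 2): two LHS parts, two RHS parts

    # Extract ordinary formulas for individual parts.
    formula(f, lhs = 1, rhs = 1)   # y1 ~ x1 + x2
    formula(f, lhs = 0, rhs = 2)   # ~z1 + z2
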
uncertainties allows calculations such as (2 +/- 0.1)*2 = 4 +/- 0.2 to be
performed transparently. Much more complex mathematical expressions involving
numbers with uncertainties can also be evaluated directly.
The uncertainties package takes the pain and complexity out of uncertainty
calculations.
An R interface to the NetCDF file format designed by Unidata for
efficient storage of array-oriented scientific data and descriptions.
The R interface is closely based on the C API of the NetCDF library,
and it includes calendar conversions from the Unidata UDUNITS library.
The current implementation supports all operations on NetCDF datasets
in classic and 64-bit offset file formats, and NetCDF4-classic format
is supported for reading and modification of existing files.
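A minimal sketch of reading a dataset (the file name, variable names, and units string are placeholders):

    library(RNetCDF)

    nc <- open.nc("example.nc")
    print.nc(nc)                          # dimensions, variables, attributes

    temp <- var.get.nc(nc, "temperature")
    time <- var.get.nc(nc, "time")
    utcal.nc("hours since 1900-01-01 00:00:00", time)   # UDUNITS calendar

    close.nc(nc)
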
From Kai-Uwe Eckhardt, updated as the previous distfile wasn't available.
PR pkg/51607
Changes from 3.4.3 to 3.4.4
Improvements
Environment variable to control the use of embedded libraries.
Include citation in repository. gh-690.
Bugs fixed
Fixed import error with numexpr 2.6.5.dev0. gh-685.
Fixed linter warnings.
Fixed use of re.split() in version detection. gh-687.
Fixed test failures with Python 2.7 and NumPy 1.14.3. gh-688 & gh-689.