dask_ml.decomposition.IncrementalPCA

class dask_ml.decomposition.IncrementalPCA(n_components=None, whiten=False, copy=True, batch_size=None, svd_solver='auto', iterated_power=0, random_state=None)

Incremental principal components analysis (IPCA).

Linear dimensionality reduction using Singular Value Decomposition of the data, keeping only the most significant singular vectors to project the data to a lower-dimensional space. The input data is centered but not scaled for each feature before the SVD is applied.

Depending on the size of the input data, this algorithm can be much more memory efficient than a PCA, and allows sparse input.

This algorithm has constant memory complexity, on the order of batch_size * n_features, enabling use of np.memmap files without loading the entire file into memory. For sparse matrices, the input is converted to dense in batches (in order to be able to subtract the mean), which avoids storing the entire dense matrix at any one time.

The computational overhead of each SVD is O(batch_size * n_features ** 2), but only 2 * batch_size samples remain in memory at a time. There will be n_samples / batch_size SVD computations to get the principal components, versus one large SVD of complexity O(n_samples * n_features ** 2) for PCA.

Read more in the User Guide.

New in version 0.16.
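
A minimal usage sketch, assuming chunked dask arrays as input; the sizes and chunking below are illustrative:

>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> # 10,000 samples with 20 features, in chunks of 1,000 rows
>>> X = da.random.random((10000, 20), chunks=(1000, 20))
>>> ipca = IncrementalPCA(n_components=5).fit(X)
>>> X_reduced = ipca.transform(X)
>>> X_reduced.shape
(10000, 5)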

Parameters:
n_components : int or None, (default=None)

Number of components to keep. If n_components is None, then n_components is set to min(n_samples, n_features).

whiten : bool, optional

When True (False by default), the components_ vectors are multiplied by the square root of n_samples and then divided by the singular values to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of the downstream estimators by making data respect some hard-wired assumptions.

copy : bool, (default=True)

If False, X will be overwritten. copy=False can be used to save memory but is unsafe for general use.

batch_size : int or None, (default=None)

The number of samples to use for each batch. Only used when calling fit. If batch_size is None, then batch_size is inferred from the data and set to 5 * n_features, to provide a balance between approximation accuracy and memory consumption.

svd_solver : string {‘auto’, ‘full’, ‘tsqr’, ‘randomized’}
auto :

the solver is selected by a default policy based on X.shape and n_components: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient ‘randomized’ method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.

full :

run exact full SVD and select the components by postprocessing

randomized :

run randomized SVD by using da.linalg.svd_compressed.

iterated_power : int, (default=0)
random_state : int or None, (default=None)

Parameters used for the randomized SVD.

Attributes:
components_ : array, shape (n_components, n_features)

Components with maximum variance.

explained_variance_ : array, shape (n_components,)

Variance explained by each of the selected components.

explained_variance_ratio_ : array, shape (n_components,)

Percentage of variance explained by each of the selected components. If all components are stored, the sum of the explained variance ratios is equal to 1.0.

singular_values_ : array, shape (n_components,)

The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space.

mean_ : array, shape (n_features,)

Per-feature empirical mean, aggregated over calls to partial_fit.

var_ : array, shape (n_features,)

Per-feature empirical variance, aggregated over calls to partial_fit.

noise_variance_ : float

The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf.

n_components_ : int

The estimated number of components. Relevant when n_components=None.

n_samples_seen_ : int

The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across partial_fit calls.

Methods

fit(X[, y]) Fit the model with X.
fit_transform(X[, y]) Fit the model with X and apply the dimensionality reduction on X.
get_covariance() Compute data covariance with the generative model.
get_params([deep]) Get parameters for this estimator.
get_precision() Compute data precision matrix with the generative model.
inverse_transform(X) Transform data back to its original space.
partial_fit(X[, y, check_input]) Incremental fit with X.
score(X[, y]) Return the average log-likelihood of all samples.
score_samples(X) Return the log-likelihood of each sample.
set_params(**params) Set the parameters of this estimator.
transform(X) Apply dimensionality reduction on X.

__init__(n_components=None, whiten=False, copy=True, batch_size=None, svd_solver='auto', iterated_power=0, random_state=None)

Initialize self. See help(type(self)) for accurate signature.

fit(X, y=None)

Fit the model with X.

Parameters:
X : array-like, shape (n_samples, n_features)

Training data, where n_samples is the number of samples and n_features is the number of features.

y : None

Ignored variable.

Returns:
self : object

Returns the instance itself.
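
A short sketch of fit on a dask array; batch_size=500 here is arbitrary, and with the default batch_size=None it would be inferred as 5 * n_features:

>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> X = da.random.random((50000, 10), chunks=(5000, 10))
>>> ipca = IncrementalPCA(n_components=3, batch_size=500).fit(X)
>>> int(ipca.n_samples_seen_)
50000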

fit_transform(X, y=None)

Fit the model with X and apply the dimensionality reduction on X.

Parameters:
X : array-like, shape (n_samples, n_features)

New data, where n_samples is the number of samples and n_features is the number of features.

y : Ignored

Returns:
X_new : array-like, shape (n_samples, n_components)
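
A sketch; this is the convenience equivalent of fit(X) followed by transform(X):

>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> X = da.random.random((4000, 12), chunks=(1000, 12))
>>> X_new = IncrementalPCA(n_components=2).fit_transform(X)
>>> X_new.shape
(4000, 2)
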
get_covariance()

Compute data covariance with the generative model.

cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances.

Returns:
cov : array, shape=(n_features, n_features)

Estimated covariance of data.
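
The documented formula can be transcribed directly from the fitted attributes. A sketch assuming whiten=False; the library implementation may fold in the noise variance with minor numerical differences:

>>> import numpy as np
>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> X = da.random.random((2000, 8), chunks=(500, 8))
>>> ipca = IncrementalPCA(n_components=3).fit(X)
>>> W = np.asarray(ipca.components_)           # (n_components, n_features)
>>> S2 = np.asarray(ipca.explained_variance_)  # (n_components,)
>>> sigma2 = float(ipca.noise_variance_)
>>> cov = (W.T * S2) @ W + sigma2 * np.eye(W.shape[1])
>>> cov.shape
(8, 8)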

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.
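
For example:

>>> from dask_ml.decomposition import IncrementalPCA
>>> IncrementalPCA(n_components=4).get_params()["n_components"]
4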

get_precision()

Compute data precision matrix with the generative model.

Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency.

Returns:
precision : array, shape=(n_features, n_features)

Estimated precision of data.
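
A consistency sketch: up to floating-point error, the precision matrix should match a direct inverse of the covariance returned by get_covariance:

>>> import numpy as np
>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> X = da.random.random((2000, 8), chunks=(500, 8))
>>> ipca = IncrementalPCA(n_components=3).fit(X)
>>> prec = np.asarray(ipca.get_precision())
>>> cov = np.asarray(ipca.get_covariance())
>>> bool(np.allclose(prec, np.linalg.inv(cov)))
True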

inverse_transform(X)

Transform data back to its original space.

Returns an array X_original whose transform would be X.

Parameters:
X : array-like, shape (n_samples, n_components)

New data, where n_samples is the number of samples and n_components is the number of components.

Returns:
X_original : array-like, shape (n_samples, n_features)

Notes

If whitening is enabled, inverse_transform does not compute the exact inverse operation of transform.
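
A round-trip sketch: with all components kept and whiten=False, inverse_transform recovers the input up to floating-point error:

>>> import numpy as np
>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> X = da.random.random((1000, 10), chunks=(250, 10))
>>> ipca = IncrementalPCA(n_components=10).fit(X)  # keep all components
>>> X_back = ipca.inverse_transform(ipca.transform(X))
>>> bool(np.allclose(np.asarray(X), np.asarray(X_back)))
True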

partial_fit(X, y=None, check_input=True)

Incremental fit with X. All of X is processed as a single batch.

Parameters:
X : array-like, shape (n_samples, n_features)

Training data, where n_samples is the number of samples and n_features is the number of features.

check_input : bool

Run check_array on X.

y : Ignored

Returns:
self : object

Returns the instance itself.
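
A streaming sketch: each call treats its input as one batch and updates the running estimates, so n_samples_seen_ accumulates across calls:

>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> ipca = IncrementalPCA(n_components=3)
>>> for _ in range(4):
...     chunk = da.random.random((1000, 8), chunks=(1000, 8))
...     ipca = ipca.partial_fit(chunk)
>>> int(ipca.n_samples_seen_)
4000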

score(X, y=None)

Return the average log-likelihood of all samples.

See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf

Parameters:
X : array, shape (n_samples, n_features)

The data.

y : Ignored

Returns:
ll : float

Average log-likelihood of the samples under the current model.

score_samples(X)

Return the log-likelihood of each sample.

See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf

Parameters:
X : array, shape (n_samples, n_features)

The data.

Returns:
ll : array, shape (n_samples,)

Log-likelihood of each sample under the current model.
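
A sketch of how the two scoring methods relate, assuming the same convention as scikit-learn, where score is the mean of score_samples:

>>> import numpy as np
>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> X = da.random.random((2000, 8), chunks=(500, 8))
>>> ipca = IncrementalPCA(n_components=3).fit(X)
>>> ll = np.asarray(ipca.score_samples(X))  # one log-likelihood per sample
>>> bool(np.isclose(float(ipca.score(X)), ll.mean()))
True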

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : object

Estimator instance.
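
A sketch of the nested form using a scikit-learn Pipeline (the step name "pca" is arbitrary):

>>> from sklearn.pipeline import Pipeline
>>> from dask_ml.decomposition import IncrementalPCA
>>> pipe = Pipeline([("pca", IncrementalPCA())])
>>> pipe = pipe.set_params(pca__n_components=4)
>>> pipe.named_steps["pca"].n_components
4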

transform(X)

Apply dimensionality reduction on X.

X is projected onto the first principal components previously extracted from a training set.

Parameters:
X : array-like, shape (n_samples, n_features)

New data, where n_samples is the number of samples and n_features is the number of features.

Returns:
X_new : array-like, shape (n_samples, n_components)
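
A sketch projecting held-out data with a fitted model:

>>> import dask.array as da
>>> from dask_ml.decomposition import IncrementalPCA
>>> X_train = da.random.random((5000, 16), chunks=(1000, 16))
>>> X_test = da.random.random((500, 16), chunks=(500, 16))
>>> ipca = IncrementalPCA(n_components=4).fit(X_train)
>>> ipca.transform(X_test).shape
(500, 4)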