# dask_ml.decomposition.PCA

class dask_ml.decomposition.PCA(n_components=None, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power=0, random_state=None)

Principal component analysis (PCA)

Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space.

It uses the “tsqr” algorithm from Benson et al. (2013). See the References for more.

Read more in the User Guide.

Parameters:

    n_components : int or None
        Number of components to keep. If n_components is not set, all components are kept: n_components == min(n_samples, n_features).

        Note: unlike scikit-learn, n_components='mle' and fractional n_components between 0 and 1 are not currently supported.

    copy : bool (default True)
        Ignored.

    whiten : bool, optional (default False)
        When True (False by default), the components_ vectors are multiplied by the square root of n_samples and then divided by the singular values to ensure uncorrelated outputs with unit component-wise variances (a brief sketch follows this list). Whitening removes some information from the transformed signal (the relative variance scales of the components) but can sometimes improve the predictive accuracy of downstream estimators by making their data respect some hard-wired assumptions.

    svd_solver : string {'auto', 'full', 'tsqr', 'randomized'}
        auto : the solver is selected by a default policy based on X.shape and n_components: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient 'randomized' method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
        full : run exact full SVD and select the components by postprocessing.
        randomized : run randomized SVD using da.linalg.svd_compressed.

    tol : float >= 0, optional (default 0.0)
        Ignored.

    iterated_power : int >= 0, default 0
        Number of iterations for the power method used when svd_solver == 'randomized'.

    random_state : int, RandomState instance or None, optional (default None)
        If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by da.random. Used when svd_solver == 'randomized'.

Attributes:

    components_ : array, shape (n_components, n_features)
        Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by explained_variance_.

    explained_variance_ : array, shape (n_components,)
        The amount of variance explained by each of the selected components. Equal to the n_components largest eigenvalues of the covariance matrix of X.

    explained_variance_ratio_ : array, shape (n_components,)
        Percentage of variance explained by each of the selected components. If n_components is not set, all components are stored and the sum of the ratios is equal to 1.0.

    singular_values_ : array, shape (n_components,)
        The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space.

    mean_ : array, shape (n_features,)
        Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0).

    n_components_ : int
        The estimated number of components. When n_components is set to 'mle' or a number between 0 and 1 (with svd_solver == 'full') this number is estimated from input data. Otherwise it equals the parameter n_components, or the lesser of n_features and n_samples if n_components is None.

    noise_variance_ : float
        The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1, p. 574, or http://www.miketipping.com/papers/met-mppca.pdf. It is required to compute the estimated data covariance and score samples.

        Equal to the average of the (min(n_features, n_samples) - n_components) smallest eigenvalues of the covariance matrix of X.
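As an illustrative sketch of whiten=True (synthetic data; the array sizes and parameter values are arbitrary), the transformed components should come out with approximately unit sample variance:

>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = da.random.random((1000, 10), chunks=(500, 10))
>>> pca = PCA(n_components=3, whiten=True)
>>> Xt = pca.fit_transform(X)
>>> Xt.var(axis=0, ddof=1).compute()  # each entry close to 1.0  # doctest: +SKIP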

Notes

Differences from scikit-learn:

• svd_solver : ‘randomized’ uses da.linalg.svd_compressed, ‘full’ uses da.linalg.svd; ‘arpack’ is not valid (see the sketch after this list).
• iterated_power : defaults to 0, the default for da.linalg.svd_compressed.
• n_components : n_components='mle' is not allowed. Fractional n_components between 0 and 1 is not allowed.
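
A hedged sketch of selecting the randomized solver explicitly (synthetic data; the parameter values here are illustrative, not recommendations). Extra power iterations via iterated_power trade time for accuracy:

>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = da.random.random((10000, 100), chunks=(1000, 100))
>>> pca = PCA(n_components=10, svd_solver='randomized',
...           iterated_power=4, random_state=0)
>>> pca.fit(X)  # doctest: +SKIP
>>> pca.components_.shape  # doctest: +SKIP
(10, 100)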

References

Direct QR factorizations for tall-and-skinny matrices in MapReduce architectures. A. Benson, D. Gleich, and J. Demmel. IEEE International Conference on Big Data, 2013. http://arxiv.org/abs/1301.1071

Examples

>>> import numpy as np
>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> dX = da.from_array(X, chunks=X.shape)
>>> pca = PCA(n_components=2)
>>> pca.fit(dX)
PCA(copy=True, iterated_power=0, n_components=2, random_state=None,
  svd_solver='auto', tol=0.0, whiten=False)
>>> print(pca.explained_variance_ratio_)  # doctest: +ELLIPSIS
[ 0.99244...  0.00755...]
>>> print(pca.singular_values_)  # doctest: +ELLIPSIS
[ 6.30061...  0.54980...]

>>> pca = PCA(n_components=2, svd_solver='full')
>>> pca.fit(dX)                 # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
PCA(copy=True, iterated_power=0, n_components=2, random_state=None,
  svd_solver='full', tol=0.0, whiten=False)
>>> print(pca.explained_variance_ratio_)  # doctest: +ELLIPSIS
[ 0.99244...  0.00755...]
>>> print(pca.singular_values_)  # doctest: +ELLIPSIS
[ 6.30061...  0.54980...]


Methods

fit(self, X[, y])
    Fit the model with X.
fit_transform(self, X[, y])
    Fit the model with X and apply the dimensionality reduction on X.
get_covariance(self)
    Compute data covariance with the generative model.
get_params(self[, deep])
    Get parameters for this estimator.
get_precision(self)
    Compute data precision matrix with the generative model.
inverse_transform(self, X)
    Transform data back to its original space.
score(self, X[, y])
    Return the average log-likelihood of all samples.
score_samples(self, X)
    Return the log-likelihood of each sample.
set_params(self, **params)
    Set the parameters of this estimator.
transform(self, X)
    Apply dimensionality reduction on X.
__init__(self, n_components=None, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power=0, random_state=None)

Initialize self. See help(type(self)) for accurate signature.

fit(self, X, y=None)

Fit the model with X.

Parameters:
    X : array-like, shape (n_samples, n_features)
        Training data, where n_samples is the number of samples and n_features is the number of features.
    y : None
        Ignored variable.

Returns:
    self : object
        Returns the instance itself.
fit_transform(self, X, y=None)

Fit the model with X and apply the dimensionality reduction on X.

Parameters:
    X : array-like, shape (n_samples, n_features)
        New data, where n_samples is the number of samples and n_features is the number of features.
    y : Ignored

Returns:
    X_new : array-like, shape (n_samples, n_components)
get_covariance(self)

Compute data covariance with the generative model.

cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances.

Returns:
    cov : array, shape (n_features, n_features)
        Estimated covariance of data.
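
A minimal sanity-check sketch (synthetic data; it assumes all components are kept, so noise_variance_ is zero and the model covariance should closely match the empirical covariance):

>>> import numpy as np
>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = da.random.random((500, 4), chunks=(250, 4))
>>> pca = PCA(n_components=4).fit(X)
>>> cov = pca.get_covariance()
>>> np.allclose(cov, np.cov(X.compute(), rowvar=False))  # doctest: +SKIP
True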
get_params(self, deep=True)

Get parameters for this estimator.

Parameters:
    deep : bool, default=True
        If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
    params : mapping of string to any
        Parameter names mapped to their values.
get_precision(self)

Compute data precision matrix with the generative model.

Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency.

Returns:
    precision : array, shape (n_features, n_features)
        Estimated precision of data.
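
A hedged sketch checking that the precision is the inverse of the covariance (synthetic data; illustrative only):

>>> import numpy as np
>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = da.random.random((500, 4), chunks=(250, 4))
>>> pca = PCA(n_components=2).fit(X)
>>> np.allclose(pca.get_precision(),
...             np.linalg.inv(pca.get_covariance()))  # doctest: +SKIP
True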
inverse_transform(self, X)

Transform data back to its original space.

Returns an array X_original whose transform would be X.

Parameters:
    X : array-like, shape (n_samples, n_components)
        New data, where n_samples is the number of samples and n_components is the number of components.

Returns:
    X_original : array-like, shape (n_samples, n_features)

Notes

If whitening is enabled, inverse_transform does not compute the exact inverse operation of transform.
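
A hedged round-trip sketch: with whiten=False and all components kept, inverse_transform should recover the input up to numerical error (synthetic data; illustrative only):

>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = da.random.random((100, 5), chunks=(50, 5))
>>> pca = PCA(n_components=5).fit(X)
>>> X_back = pca.inverse_transform(pca.transform(X))
>>> da.allclose(X, X_back).compute()  # doctest: +SKIP
True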

score(self, X, y=None)

Return the average log-likelihood of all samples.

See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1, p. 574, or http://www.miketipping.com/papers/met-mppca.pdf.

Parameters:
    X : array, shape (n_samples, n_features)
        The data.
    y : Ignored

Returns:
    ll : float
        Average log-likelihood of the samples under the current model.
score_samples(self, X)

Return the log-likelihood of each sample.

See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1, p. 574, or http://www.miketipping.com/papers/met-mppca.pdf.

Parameters:
    X : array, shape (n_samples, n_features)
        The data.

Returns:
    ll : array, shape (n_samples,)
        Log-likelihood of each sample under the current model.
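
A hedged sketch of the relationship between score_samples and score: score is documented as the average of the per-sample log-likelihoods (synthetic data; it assumes the returned value supports mean()):

>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = da.random.random((200, 6), chunks=(100, 6))
>>> pca = PCA(n_components=3).fit(X)
>>> ll = pca.score_samples(X)
>>> float(ll.mean())  # should match pca.score(X)  # doctest: +SKIP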
set_params(self, **params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
    **params : dict
        Estimator parameters.

Returns:
    self : object
        Estimator instance.
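
An illustrative sketch of the nested <component>__<parameter> form inside a scikit-learn Pipeline (the step names here are hypothetical):

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.linear_model import LogisticRegression
>>> from dask_ml.decomposition import PCA
>>> pipe = Pipeline([('pca', PCA()), ('clf', LogisticRegression())])
>>> pipe.set_params(pca__n_components=3)  # doctest: +SKIP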
transform(self, X)

Apply dimensionality reduction on X.

X is projected onto the first principal components previously extracted from a training set.

Parameters:
    X : array-like, shape (n_samples, n_features)
        New data, where n_samples is the number of samples and n_features is the number of features.

Returns:
    X_new : array-like, shape (n_samples, n_components)
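
A minimal sketch of projecting previously unseen data with a fitted model (synthetic data; the output shapes are the point here):

>>> import dask.array as da
>>> from dask_ml.decomposition import PCA
>>> X = da.random.random((100, 5), chunks=(50, 5))
>>> pca = PCA(n_components=2).fit(X)
>>> X_new = da.random.random((10, 5), chunks=(10, 5))
>>> pca.transform(X_new).shape  # projected onto the 2 fitted components
(10, 2)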