class dask_ml.decomposition.TruncatedSVD(n_components=2, algorithm='tsqr', n_iter=5, random_state=None, tol=0.0, compute=True)


fit(X[, y])

Fit truncated SVD on training data X.

fit_transform(X[, y])

Fit model to X and perform dimensionality reduction on X.


get_metadata_routing()

Get metadata routing of this object.


get_params([deep])

Get parameters for this estimator.


inverse_transform(X)

Transform X back to its original space.

set_output(*[, transform])

Set output container.


set_params(**params)

Set the parameters of this estimator.

transform(X[, y])

Perform dimensionality reduction on X.

__init__(n_components=2, algorithm='tsqr', n_iter=5, random_state=None, tol=0.0, compute=True)

Dimensionality reduction using truncated SVD (aka LSA).

This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this estimator does not center the data before computing the singular value decomposition.
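To make the contrast with PCA concrete, here is a small NumPy-only sketch (illustrative, not part of dask-ml): truncated SVD factors X as-is, while PCA would first subtract the column means, so on data with a nonzero mean the two decompositions disagree.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, size=(100, 20))  # data with a nonzero mean

# Truncated SVD: factor X directly and keep the top-k singular triplets.
k = 5
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_reduced = U[:, :k] * s[:k]  # shape (100, k), analogous to transform(X)

# PCA would instead decompose the *centered* data.
Xc = X - X.mean(axis=0)
sc = np.linalg.svd(Xc, compute_uv=False)

# Because X is not centered, the leading spectra differ.
print(X_reduced.shape)             # (100, 5)
print(np.allclose(s[:k], sc[:k]))  # False
```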

Parameters

n_components : int, default 2

Desired dimensionality of output data. Must be less than or equal to the number of features. The default value is useful for visualization.

algorithm : {'tsqr', 'randomized'}, default 'tsqr'

SVD solver to use. Both use the tsqr (for "tall-and-skinny QR") algorithm internally. 'randomized' uses an approximate algorithm that is faster, but not exact. See the References for more.

n_iter : int, optional (default 5)

Number of power iterations, useful when the singular values decay slowly. Error decreases exponentially as n_iter increases. In practice, set n_iter <= 4.

random_state : int, RandomState instance or None, optional

If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

tol : float, optional

compute : bool, default True

Whether or not SVD results should be computed eagerly. By default True.
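The effect of n_iter is easiest to see in a NumPy-only sketch of a randomized SVD (an illustration of the general technique, not dask's internal code; the function name randomized_svd is mine): a random projection captures an approximate range of X, and each power iteration sharpens that subspace when the spectrum decays slowly.

```python
import numpy as np

def randomized_svd(X, k, n_iter=2, seed=0):
    # Illustrative sketch of a randomized SVD; not dask's implementation.
    rng = np.random.default_rng(seed)
    # Random projection: an approximate basis for the range of X.
    Q, _ = np.linalg.qr(X @ rng.normal(size=(X.shape[1], k)))
    # Power iterations sharpen the basis when singular values decay slowly.
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(X @ (X.T @ Q))
    # Project X into the small subspace and take an exact SVD there.
    Ub, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return Q @ Ub, s, Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50)) * (0.7 ** np.arange(50))  # decaying spectrum
U, s, Vt = randomized_svd(X, k=5, n_iter=2)
exact = np.linalg.svd(X, compute_uv=False)[:5]
err = np.max(np.abs(s - exact))  # shrinks as n_iter grows
```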

Attributes

components_ : array, shape (n_components, n_features)

explained_variance_ : array, shape (n_components,)

The variance of the training samples transformed by a projection to each component.

explained_variance_ratio_ : array, shape (n_components,)

Percentage of variance explained by each of the selected components.

singular_values_ : array, shape (n_components,)

The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space.


Notes

SVD suffers from a problem called "sign indeterminacy", which means the sign of the components_ and the output from transform depend on the algorithm and random state. To work around this, fit instances of this class to data once, then keep the instance around to do transformations.
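A tiny NumPy demonstration of why the sign is ambiguous (illustrative only): flipping the sign of a matched pair of left and right singular vectors leaves the factorization, and hence the data it reconstructs, unchanged, so either sign is equally valid.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Flip the first left/right singular-vector pair together.
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1
Vt2[0, :] *= -1

# Both factorizations reconstruct X, so the sign is not determined.
print(np.allclose(U @ np.diag(s) @ Vt, X))    # True
print(np.allclose(U2 @ np.diag(s) @ Vt2, X))  # True
```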


The implementation currently does not support sparse matrices.


References

Direct QR factorizations for tall-and-skinny matrices in MapReduce architectures. A. Benson, D. Gleich, and J. Demmel. IEEE International Conference on Big Data, 2013. http://arxiv.org/abs/1301.1071


Examples

>>> from dask_ml.decomposition import TruncatedSVD
>>> import dask.array as da
>>> X = da.random.normal(size=(1000, 20), chunks=(100, 20))
>>> svd = TruncatedSVD(n_components=5, n_iter=3, random_state=42)
>>> svd.fit(X)  
TruncatedSVD(algorithm='tsqr', n_components=5, n_iter=3,
             random_state=42, tol=0.0)
>>> print(svd.explained_variance_ratio_)  
[0.06386323 0.06176776 0.05901293 0.0576399  0.05726607]
>>> print(svd.explained_variance_ratio_.sum())
0.2995...
>>> svd.singular_values_
array([35.92469517, 35.32922121, 34.53368856, 34.138..., 34.013...])

Note that transform returns a dask.array.Array.

>>> svd.transform(X)
dask.array<sum-agg, shape=(1000, 5), dtype=float64, chunksize=(100, 5)>
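Because the result is lazy, no work happens until you ask for concrete values. A minimal sketch using dask.array's svd_compressed (used here only to illustrate laziness; it is not a call into TruncatedSVD itself): call .compute() to materialize a NumPy array.

```python
import dask.array as da

X = da.random.normal(size=(1000, 20), chunks=(100, 20))

# Lazy compressed SVD: u, s, v are dask arrays; nothing has run yet.
u, s, v = da.linalg.svd_compressed(X, 5)

print(s)            # dask.array<...>, still lazy
s_np = s.compute()  # triggers execution, returns a NumPy array
print(s_np.shape)   # (5,)
```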