# Parallel Meta-estimators¶

dask-ml provides meta-estimators that parallelize and scale out certain tasks that may not be parallelized within scikit-learn itself. For example, ParallelPostFit parallelizes the predict, predict_proba, and transform methods, enabling them to work on large (possibly larger-than-memory) datasets.

## Parallel Prediction and Transformation¶

wrappers.ParallelPostFit is a meta-estimator for parallelizing post-fit tasks like prediction and transformation. It can wrap any scikit-learn estimator to provide parallel predict, predict_proba, and transform methods.

Warning

ParallelPostFit does not parallelize the training step. The underlying estimator’s .fit method is called normally.

Since only the predict, predict_proba, and transform methods are wrapped, wrappers.ParallelPostFit is most useful in situations where your training dataset is relatively small (fits in a single machine's memory), and prediction or transformation must be done on a much larger dataset (perhaps larger than a single machine's memory).

In [1]: from sklearn.ensemble import GradientBoostingClassifier

In [2]: import sklearn.datasets

In [3]: import dask_ml.datasets

In [4]: from dask_ml.wrappers import ParallelPostFit


In this example, we'll make a small 1,000-sample training dataset.

In [5]: X, y = sklearn.datasets.make_classification(n_samples=1000,
...:                                             random_state=0)
...:


Training is identical to just calling estimator.fit(X, y). Aside from copying over learned attributes, that’s all that ParallelPostFit does.

In [6]: clf = ParallelPostFit(estimator=GradientBoostingClassifier())

In [7]: clf.fit(X, y)
Out[7]:
ParallelPostFit(estimator=GradientBoostingClassifier(learning_rate=0.1, loss='deviance',
                                                     max_depth=3, max_features=None,
                                                     max_leaf_nodes=None,
                                                     min_impurity_decrease=0.0,
                                                     min_impurity_split=None,
                                                     min_samples_leaf=1, min_sampl...
                                                     subsample=1.0, tol=0.0001,
                                                     validation_fraction=0.1,
                                                     verbose=0, warm_start=False),
                scoring=None)


This class is useful for predicting on or transforming large datasets. We'll make a larger dask array, X_big, with 10,000 samples per block.

In [8]: X_big, _ = dask_ml.datasets.make_classification(n_samples=100000,
...:                                                 chunks=10000,
...:                                                 random_state=0)
...:

In [9]: clf.predict(X_big)


This returns a dask array. Like any dask array, the computation is lazy; calling .compute() causes the scheduler to execute tasks in parallel. If you've connected to a dask.distributed.Client, the computation will be parallelized across your cluster of machines.

In [10]: clf.predict_proba(X_big).compute()[:10]
Out[10]:
array([[0.24519991, 0.75480009],
[0.00464304, 0.99535696],
[0.00902734, 0.99097266],
[0.01299259, 0.98700741],
[0.93776132, 0.06223868],
[0.755881  , 0.244119  ],
[0.94838903, 0.05161097],
[0.0203811 , 0.9796189 ],
[0.021179  , 0.978821  ],
[0.95012543, 0.04987457]])


See parallelizing prediction for an example of how this scales for a support vector classifier.

## Comparison to other Estimators in dask-ml¶

dask-ml re-implements some estimators from scikit-learn, for example dask_ml.cluster.KMeans and dask_ml.preprocessing.QuantileTransformer. This raises a question: should you use the re-implemented dask-ml version, or wrap the scikit-learn version in a meta-estimator? The answer varies from estimator to estimator and depends on your tolerance for approximate solutions and the size of your training data. In general, if your training data is small enough to fit in a single machine's memory, wrapping the scikit-learn version with a dask-ml meta-estimator should work well.