Incremental Learning
Some estimators can be trained incrementally, without seeing the entire dataset at once. Scikit-learn provides the partial_fit API to stream batches of data to an estimator that can be fit in batches.
Normally, if you pass a Dask Array to an estimator expecting a NumPy array, the Dask Array will be converted to a single, large NumPy array. On a single machine, you’ll likely run out of RAM and crash the program. On a distributed cluster, all the workers will send their data to a single machine and crash it.
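By contrast, streaming batches to partial_fit yourself in plain scikit-learn looks roughly like the sketch below (in-memory NumPy data and an illustrative batch size of 25; Incremental automates this loop over the blocks of a Dask Array).

import numpy as np
from sklearn.linear_model import SGDClassifier

# Illustrative in-memory data, split into batches by hand.
X = np.random.randn(100, 20)
y = np.random.randint(0, 2, size=100)

est = SGDClassifier()
for i in range(0, len(X), 25):
    # Each call updates the model with one batch of 25 samples.
    # classes must be provided (at least on the first call).
    est.partial_fit(X[i:i + 25], y[i:i + 25], classes=[0, 1])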
dask_ml.wrappers.Incremental provides a bridge between Dask and scikit-learn estimators supporting the partial_fit API. You wrap the underlying estimator in Incremental. Dask-ML will sequentially pass each block of a Dask Array to the underlying estimator's partial_fit method.
Note
dask_ml.wrappers.Incremental currently does not work well with hyper-parameter optimization like sklearn.model_selection.GridSearchCV. If you need to do hyper-parameter optimization on larger-than-memory datasets, we recommend dask_ml.model_selection.IncrementalSearchCV. See "Incremental Hyperparameter Optimization" for an introduction.
Incremental Meta-estimator
dask_ml.wrappers.Incremental | Metaestimator for feeding Dask Arrays to an estimator blockwise.
dask_ml.wrappers.Incremental is a meta-estimator (an estimator that takes another estimator) that bridges scikit-learn estimators expecting NumPy arrays and users with large Dask Arrays.
Each block of a Dask Array is fed to the underlying estimator's partial_fit method. The training is entirely sequential, so you won't notice massive training-time speedups from parallelism. In a distributed environment, you should notice some speedup from avoiding extra IO, and from the fact that models are typically much smaller than data, and so faster to move between machines.
In [1]: from dask_ml.datasets import make_classification
In [2]: from dask_ml.wrappers import Incremental
In [3]: from sklearn.linear_model import SGDClassifier
In [4]: X, y = make_classification(chunks=25)
In [5]: X
Out[5]: dask.array<normal, shape=(100, 20), dtype=float64, chunksize=(25, 20), chunktype=numpy.ndarray>
In [6]: estimator = SGDClassifier(random_state=10, max_iter=100)
In [7]: clf = Incremental(estimator)
In [8]: clf.fit(X, y, classes=[0, 1])
Out[8]: Incremental(estimator=SGDClassifier(max_iter=100, random_state=10))
In this example, we make a (small) random Dask Array. It has 100 samples, broken into 4 blocks of 25 samples each. The chunking is only along the first axis (the samples); there is no chunking along the features.
You instantiate the underlying estimator as usual. It really is just a scikit-learn-compatible estimator, and it will be trained normally via its partial_fit.
Notice that we call the regular .fit method, not partial_fit, for training. Dask-ML takes care of passing each block to the underlying estimator for you.
Just like sklearn.linear_model.SGDClassifier.partial_fit(), we need to pass the classes argument to fit. In general, any argument that is required for the underlying estimator's partial_fit becomes required for the wrapped fit.
Note
Take care with the behavior of Incremental.score(). Most estimators inherit the default scoring methods of R2 score for regressors and accuracy score for classifiers. For these estimators, we automatically use Dask-ML's scoring methods, which are able to operate on Dask arrays.
If your underlying estimator uses a different scoring method, you'll need to ensure that the scoring method is able to operate on Dask arrays. You can also explicitly pass a Dask-aware scorer via scoring=.
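For example, a minimal sketch of passing a scorer name at construction time (the 'accuracy' string is an assumption that it names one of the Dask-aware scorers Dask-ML can build):

from dask_ml.wrappers import Incremental
from sklearn.linear_model import SGDClassifier

# Assumption: 'accuracy' resolves to a Dask-aware scorer in Dask-ML.
clf = Incremental(SGDClassifier(max_iter=100, random_state=10),
                  scoring='accuracy')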
We can get the accuracy score on our dataset.
In [9]: clf.score(X, y)
Out[9]: np.float64(0.62)
All of the attributes learned during training, like coef_, are available on the Incremental instance.
In [10]: clf.coef_
Out[10]:
array([[-35.9445656 , 49.73992669, -2.87680832, 13.73346329,
-14.26998178, -1.31088012, 26.54761929, -13.63342126,
12.48523186, -14.24566313, -64.32235796, -29.13811848,
-31.70766329, -35.77446991, -1.04224674, -11.70995257,
-20.41152723, 1.89513282, -26.68256021, -6.31896171]])
If necessary, the actual estimator that was trained is available as Incremental.estimator_:
In [11]: clf.estimator_
Out[11]: SGDClassifier(max_iter=100, random_state=10)
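The fitted wrapper can also be used for prediction. A small sketch continuing the session above, assuming the wrapper's predict operates blockwise on Dask Arrays (as with Dask-ML's other wrappers) and returns a lazy result:

# Assumption: predict is applied block-by-block and returns a Dask Array.
predictions = clf.predict(X)
predictions.compute()  # materialize as a NumPy array when needed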
Incremental Learning and Hyper-parameter Optimization
See "Incremental Hyperparameter Optimization" for more on how to do hyperparameter optimization on larger-than-memory datasets. A rough sketch of what that can look like is shown below.
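This sketch assumes a dask.distributed Client (which IncrementalSearchCV relies on) and uses a hypothetical alpha grid for SGDClassifier purely for illustration:

import numpy as np
from dask.distributed import Client
from dask_ml.datasets import make_classification
from dask_ml.model_selection import IncrementalSearchCV
from sklearn.linear_model import SGDClassifier

client = Client()  # assumption: a distributed scheduler is available

X, y = make_classification(chunks=25)

# Hypothetical parameter grid, for illustration only.
params = {'alpha': np.logspace(-5, -2, 10)}

search = IncrementalSearchCV(SGDClassifier(), params, n_initial_parameters=10)
search.fit(X, y, classes=[0, 1])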