class dask_ml.ensemble.BlockwiseVotingClassifier(estimator, voting='hard', classes=None)

Blockwise training and ensemble voting classifier.

This classifier trains on blocks / partitions of Dask Arrays or DataFrames. A cloned version of estimator will be fit independently on each block or partition of the Dask collection. This is useful when the sub-estimator only works on small in-memory data structures like a NumPy array or pandas DataFrame.

Prediction is done by the ensemble of learned models.


Warning: Ensure that your data are sufficiently shuffled prior to training! If the values of the various blocks / partitions of your dataset are not distributed similarly, the classifier will give poor results.
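The shuffling caveat above can be illustrated with plain NumPy: apply one global permutation to the features and labels together before chunking, so every block sees a similar class mix. (The toy data and seed here are illustrative only.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(20).reshape(10, 2)  # toy features; row i is [2*i, 2*i + 1]
y = np.repeat([0, 1], 5)          # labels grouped by class -> bad for blockwise fitting

# One global permutation applied to X and y together preserves the pairing
# while mixing the classes across what will become the blocks.
perm = rng.permutation(len(y))
X_shuffled, y_shuffled = X[perm], y[perm]
# da.from_array(X_shuffled, chunks=5) would now yield class-mixed blocks.
```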

Parameters

estimator : Estimator
    The underlying estimator. A clone of it is fit independently on each block or partition of the input.

voting : str, {'hard', 'soft'} (default='hard')
    If 'hard', uses predicted class labels for majority rule voting. If 'soft', predicts the class label based on the argmax of the sums of the predicted probabilities, which is recommended for an ensemble of well-calibrated classifiers.

classes : list-like, optional
    The set of classes that y can take. This can also be provided as a fit param if the underlying estimator requires classes at fit time.
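The difference between the two voting rules can be sketched with plain NumPy for a single sample and three hypothetical sub-estimators (the numbers are made up, and chosen so the two rules disagree):

```python
import numpy as np

# voting='hard': each sub-estimator contributes its predicted label,
# and the majority label wins.
hard_votes = np.array([0, 1, 1])                # one label per sub-estimator
hard_result = np.bincount(hard_votes).argmax()  # majority rule -> class 1

# voting='soft': the per-class probabilities are summed across
# sub-estimators and the argmax of the sums is taken.
proba = np.array([[0.90, 0.10],                 # predict_proba per sub-estimator
                  [0.40, 0.60],
                  [0.45, 0.55]])
soft_result = proba.sum(axis=0).argmax()        # sums [1.75, 1.25] -> class 0
```

Note that the two rules can disagree: here one confident sub-estimator outweighs two weakly opposed ones under soft voting.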

Attributes

estimators_ : list of classifiers
    The collection of fitted sub-estimators: one clone of estimator fit per partition / block of the inputs.

classes_ : array-like, shape (n_predictions,)
    The class labels.


Examples

>>> import dask_ml.datasets
>>> import dask_ml.ensemble
>>> import sklearn.linear_model
>>> X, y = dask_ml.datasets.make_classification(n_samples=100_000,
...                                             chunks=10_000)
>>> subestimator = sklearn.linear_model.RidgeClassifier(random_state=0)
>>> clf = dask_ml.ensemble.BlockwiseVotingClassifier(
...     subestimator,
...     classes=[0, 1]
... )
>>> clf.fit(X, y)
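For intuition, the blockwise scheme can be approximated in memory with plain scikit-learn and NumPy. This is a sketch of the idea, not dask-ml's implementation: the synthetic data, block count, and the `majority` voting step are all illustrative.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import RidgeClassifier

# Synthetic, already-shuffled data with a simple linear decision boundary.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fit one clone of the sub-estimator per block, as BlockwiseVotingClassifier
# does per partition / block of the Dask collection.
blocks = zip(np.array_split(X, 4), np.array_split(y, 4))
estimators_ = [clone(RidgeClassifier()).fit(Xb, yb) for Xb, yb in blocks]

# Hard voting: majority label across the fitted sub-estimators.
votes = np.stack([est.predict(X) for est in estimators_])  # (n_estimators, n_samples)
majority = (votes.sum(axis=0) > len(estimators_) / 2).astype(int)
accuracy = (majority == y).mean()
```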



Get parameters for this estimator.

score(X, y[, sample_weight])

Return the mean accuracy on the given test data and labels.


Set the parameters of this estimator.



__init__(estimator, voting='hard', classes=None)