class dask_ml.ensemble.BlockwiseVotingClassifier(estimator, voting='hard', classes=None)

Blockwise training and ensemble voting classifier.

This classifier trains on blocks / partitions of Dask Arrays or DataFrames. A cloned version of estimator is fit independently on each block or partition of the Dask collection. This is useful when the sub-estimator only works on small, in-memory data structures like a NumPy array or pandas DataFrame.

Prediction is done by the ensemble of learned models.


Ensure that your data are sufficiently shuffled prior to training! If the values of the various blocks / partitions of your dataset are not distributed similarly, the classifier will give poor results.
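As a minimal sketch of what "sufficiently shuffled" means (plain NumPy, not part of dask-ml; the arrays are illustrative): apply one shared permutation to features and labels before chunking, so every block sees a similar label distribution.

```python
import numpy as np

# Illustrative only: sorted labels would give each block a single class.
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 5 + [1] * 5)

# One shared permutation keeps rows of X aligned with their labels.
rng = np.random.default_rng(0)
idx = rng.permutation(len(y))
X_shuffled, y_shuffled = X[idx], y[idx]
```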

Parameters

estimator : Estimator

The sub-estimator that is cloned and fit on each block or partition.
voting : str, {‘hard’, ‘soft’} (default=’hard’)

If ‘hard’, uses predicted class labels for majority rule voting. Else if ‘soft’, predicts the class label based on the argmax of the sums of the predicted probabilities, which is recommended for an ensemble of well-calibrated classifiers.

classes : list-like, optional

The set of classes that y can take. This can also be provided as a fit param if the underlying estimator requires classes at fit time.

Attributes

estimators_ : list of classifiers

The collection of fitted sub-estimators: one clone of estimator fit on each partition / block of the inputs.

classes_ : array-like, shape (n_classes,)

The class labels.
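The 'hard' and 'soft' voting rules described above can be sketched with plain NumPy (the sub-estimator outputs below are made-up numbers, not dask-ml internals):

```python
import numpy as np

classes = np.array([0, 1])

# 'hard': majority vote over the labels predicted by each sub-estimator.
labels = np.array([[0, 1], [1, 1], [0, 0]])  # (n_estimators, n_samples)
hard = np.array([np.bincount(col).argmax() for col in labels.T])

# 'soft': argmax of the summed class probabilities.
proba = np.array([
    [[0.9, 0.1], [0.4, 0.6]],   # estimator 1: (n_samples, n_classes)
    [[0.6, 0.4], [0.3, 0.7]],   # estimator 2
    [[0.8, 0.2], [0.2, 0.8]],   # estimator 3
])
soft = classes[proba.sum(axis=0).argmax(axis=1)]
```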


Examples

>>> import dask_ml.datasets
>>> import dask_ml.ensemble
>>> import sklearn.linear_model
>>> X, y = dask_ml.datasets.make_classification(n_samples=100_000,
...                                             chunks=10_000)
>>> subestimator = sklearn.linear_model.RidgeClassifier(random_state=0)
>>> clf = dask_ml.ensemble.BlockwiseVotingClassifier(
...     subestimator,
...     classes=[0, 1]
... )
>>> clf.fit(X, y)
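Conceptually, fitting behaves like cloning the sub-estimator once per in-memory block; a rough NumPy/scikit-learn sketch of that idea (an illustration only, not dask-ml's actual implementation):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)

# One independent clone of the sub-estimator per "block", as if X were
# chunked into four partitions.
blocks = zip(np.array_split(X, 4), np.array_split(y, 4))
estimators_ = [clone(RidgeClassifier()).fit(Xb, yb) for Xb, yb in blocks]
```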


Methods

get_params([deep]) : Get parameters for this estimator.
score(X, y[, sample_weight]) : Return the mean accuracy on the given test data and labels.
set_params(**params) : Set the parameters of this estimator.
__init__(estimator, voting='hard', classes=None)

Initialize self. See help(type(self)) for accurate signature.


get_params(deep=True)

Get parameters for this estimator.

Parameters

deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : dict

Parameter names mapped to their values.
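For example, with any scikit-learn-compatible estimator (RidgeClassifier is used purely for illustration):

```python
from sklearn.linear_model import RidgeClassifier

est = RidgeClassifier(alpha=0.5)

# Returns a plain dict mapping parameter names to their current values.
params = est.get_params(deep=True)
```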

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set for each sample be predicted correctly.

This matches the scikit-learn implementation with the difference that dask_ml.metrics.accuracy_score() is used rather than sklearn.metrics.accuracy_score().

Parameters

X : array-like of shape (n_samples, n_features)

Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns

score : float

Mean accuracy of self.predict(X) with respect to y.
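Mean accuracy is just the fraction of correct predictions; a quick NumPy illustration:

```python
import numpy as np

y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])

# Elementwise comparison gives booleans; their mean is the accuracy.
acc = (y_true == y_pred).mean()  # 3 of 4 correct -> 0.75
```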


set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params : dict

Estimator parameters.

Returns

self : estimator instance

Estimator instance.
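For example, with a scikit-learn Pipeline (the step names "scale" and "clf" are illustrative):

```python
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("clf", RidgeClassifier())])

# The <component>__<parameter> form reaches into the nested estimator.
pipe.set_params(clf__alpha=2.0)
```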