class dask_ml.model_selection.SuccessiveHalvingSearchCV(estimator, parameters, n_initial_parameters=10, n_initial_iter=None, max_iter=None, aggressiveness=3, test_size=None, patience=False, tol=0.001, random_state=None, scoring=None, verbose=False, prefix='')

Perform the successive halving algorithm [1].

This algorithm trains estimators for a certain number of calls to partial_fit, then kills the worst-performing half. It trains the surviving estimators for twice as long, and repeats this until one estimator survives.

The value of 1/2 above is used for clarity of explanation. By default, this class kills off the worst-performing 1 - 1/aggressiveness fraction of models, trains the surviving estimators aggressiveness times longer, and repeats until fewer than aggressiveness models remain.
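As a rough illustration of the schedule just described, the following sketch computes how many models survive each round and how many cumulative partial_fit calls each survivor has received. The helper name halving_schedule is hypothetical, not part of dask-ml, and it assumes an idealized run with exact integer culling:

```python
def halving_schedule(n_models, n_initial_iter, aggressiveness=3):
    """Return a list of (surviving_models, cumulative_partial_fit_calls)
    pairs, one per round of successive halving (illustrative only)."""
    schedule = []
    n, calls = n_models, n_initial_iter
    while n >= aggressiveness:
        schedule.append((n, calls))
        n = max(1, n // aggressiveness)  # keep the best 1/aggressiveness fraction
        calls *= aggressiveness          # train survivors `aggressiveness` times longer
    schedule.append((n, calls))          # final round: fewer than `aggressiveness` models
    return schedule
```

For example, 10 models with n_initial_iter=3 and aggressiveness=3 give rounds of 10, 3, and 1 models trained for 3, 9, and 27 cumulative calls respectively.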

Parameters

estimator : estimator object

An object of that type is instantiated for each initial hyperparameter combination. It is assumed to implement the scikit-learn estimator interface. The estimator must either provide a score function, or scoring must be passed. It must also implement partial_fit and set_params, and work well with clone.


parameters : dict

Dictionary with parameter names (strings) as keys and distributions or lists of parameters to try. Distributions must provide an rvs method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly.
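A minimal sketch of how such a dict might be sampled (the helper sample_params is hypothetical, not the dask-ml implementation): list values are drawn uniformly, while objects exposing an rvs method are called.

```python
import random

def sample_params(parameters, rng=random):
    """Draw one hyperparameter combination from a parameters dict
    (illustrative only, not the dask-ml implementation)."""
    sampled = {}
    for name, spec in parameters.items():
        if hasattr(spec, "rvs"):        # e.g. a scipy.stats distribution
            sampled[name] = spec.rvs()
        else:                           # a list is sampled uniformly
            sampled[name] = rng.choice(spec)
    return sampled
```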

aggressiveness : float, default=3

How aggressive to be in culling the estimators. Higher values imply higher confidence in the scores (or that the hyperparameters influence estimator.score more than the data does).

n_initial_parameters : int, default=10

Number of parameter settings that are sampled. This trades off runtime vs quality of the solution.


n_initial_iter : int, default=None

Number of times to call partial_fit initially, before any scoring. Higher values of n_initial_iter train the estimators longer before a decision is made. Metadata on the number of calls to partial_fit is in metadata (and metadata_).

max_iter : int, default=None

Maximum number of partial fit calls per model. If None, will allow SuccessiveHalvingSearchCV to run until (about) one model survives. If specified, models will stop being trained when max_iter calls to partial_fit are reached.


test_size : float, optional

Fraction of the dataset to hold out for computing test scores. Defaults to the size of a single partition of the input training set.


Note: the training dataset should fit in memory on a single machine. Adjust the test_size parameter as necessary to achieve this.

patience : int, default=False

If specified, training stops when the score does not increase by tol after patience calls to partial_fit. Off by default.

tol : float, default=0.001

The required level of improvement to consider stopping training on that model. Training stops for a model when its most recent score is at most tol better than all of its previous patience scores. Increasing tol will tend to reduce training time, at the cost of worse models.
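The patience/tol stopping rule can be sketched as follows (should_stop is a hypothetical name, not dask-ml API): a model stops training once its most recent score beats none of its previous patience scores by more than tol.

```python
def should_stop(scores, patience, tol=0.001):
    """Return True when the most recent score is at most `tol` better than
    every one of the previous `patience` scores (illustrative only)."""
    if len(scores) <= patience:
        return False                     # not enough history yet
    recent = scores[-1]
    previous = scores[-1 - patience:-1]  # the `patience` scores before the latest
    return all(recent <= old + tol for old in previous)
```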

scoring : string, callable, or None, default=None

A single string (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions) to evaluate the predictions on the test set.

If None, the estimator’s default scorer (if available) is used.

random_state : int, RandomState instance, or None, optional, default=None

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

verbose : bool, float, int, optional, default=False

If False (default), don’t print logs (or pipe them to stdout). However, standard logging will still be used.

If True, print logs and use standard logging.

If float, print/log approximately verbose fraction of the time.

prefix : str, optional, default=''

While logging, add prefix to each message.

Attributes

cv_results_ : dict of np.ndarray

This dictionary has keys

  • mean_partial_fit_time

  • mean_score_time

  • std_partial_fit_time

  • std_score_time

  • test_score

  • rank_test_score

  • model_id

  • partial_fit_calls

  • params

  • param_{key}, where key is every key in params.

The values in the test_score key correspond to the last score a model received on the hold-out dataset. The key model_id corresponds with history_. This dictionary can be imported into Pandas.

metadata and metadata_ : dict[str, int]

Dictionary describing the computation. metadata describes the computation that will be performed, and metadata_ describes the computation that has been performed. Both dictionaries have keys

  • n_models: the number of models for this run of successive halving

  • max_iter: the maximum number of times partial_fit is called. At least one model will have this many partial_fit calls.

  • partial_fit_calls: the total number of partial_fit calls. All models together will receive this many partial_fit calls.

When patience is specified, the reduced computation will be reflected in metadata_ but not metadata.
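For an idealized run (no patience, exact integer culling as sketched earlier), these keys can be estimated ahead of time. The helper halving_metadata below is a hypothetical name, not dask-ml code; it counts only the additional partial_fit calls each round adds:

```python
def halving_metadata(n_models, n_initial_iter, aggressiveness=3):
    """Estimate the metadata keys described above for an idealized
    successive-halving run (illustrative only)."""
    n, target, done, total = n_models, n_initial_iter, 0, 0
    while True:
        total += n * (target - done)     # additional partial_fit calls this round
        if n < aggressiveness:
            break                        # fewer than `aggressiveness` models remain
        done = target
        n = max(1, n // aggressiveness)  # keep the best 1/aggressiveness fraction
        target *= aggressiveness         # survivors train `aggressiveness` times longer
    return {"n_models": n_models, "max_iter": target, "partial_fit_calls": total}
```

With 10 models, n_initial_iter=3, and aggressiveness=3, one model reaches 27 partial_fit calls and 66 calls are made in total.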

model_history_ : dict of lists of dict

A dictionary of each model's history. This is a reorganization of history_: the same information is present, but organized per model.

This data has the structure {model_id: hist} where hist is a subset of history_ and model_id are model identifiers.

history_ : list of dicts

Information about each model after each partial_fit call. Each dict has the keys

  • partial_fit_time

  • score_time

  • score

  • model_id

  • params

  • partial_fit_calls

The key model_id corresponds to the model_id in cv_results_. This list of dicts can be imported into Pandas.
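The relationship between history_ and model_history_ can be sketched with a simple grouping (group_history is a hypothetical helper, not dask-ml API):

```python
from collections import defaultdict

def group_history(history):
    """Group a flat history_-style list of records by their model_id,
    mirroring the model_history_ layout described above."""
    model_history = defaultdict(list)
    for record in history:
        model_history[record["model_id"]].append(record)
    return dict(model_history)
```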


best_estimator_ : BaseEstimator

The model with the highest validation score among all the models retained by the successive halving algorithm.


best_score_ : float

Score achieved by best_estimator_ on the validation set after the final call to partial_fit.


best_index_ : int

Index indicating which estimator in cv_results_ corresponds to the highest score.


best_params_ : dict

Dictionary of best parameters found on the hold-out data.

scorer_ : callable

The function used to score models, which has a call signature of scorer_(estimator, X, y).


n_splits_ : int

Number of cross validation splits.


multimetric_ : bool

Whether this cross validation search uses multiple metrics.



References

[1] "Non-stochastic best arm identification and hyperparameter optimization" by Kevin Jamieson and Ameet Talwalkar, 2016. https://arxiv.org/abs/1502.07943



Methods

fit(X[, y])

Find the best parameters for a particular model.


get_params([deep])

Get parameters for this estimator.



predict(X)

Predict for X.


predict_log_proba(X)

Log of probability estimates.


predict_proba(X)

Probability estimates.

score(X[, y])

Returns the score on the given data.


set_params(**params)

Set the parameters of this estimator.


transform(X)

Transform block or partition-wise for dask inputs.


__init__(estimator, parameters, n_initial_parameters=10, n_initial_iter=None, max_iter=None, aggressiveness=3, test_size=None, patience=False, tol=0.001, random_state=None, scoring=None, verbose=False, prefix='')