dask_ml.model_selection.InverseDecaySearchCV

class dask_ml.model_selection.InverseDecaySearchCV(estimator, parameters, n_initial_parameters=10, test_size=None, patience=False, tol=0.001, fits_per_score=1, max_iter=100, random_state=None, scoring=None, verbose=False, prefix='', decay_rate=1.0)
Incrementally search for hyper-parameters on models that support partial_fit
This incremental hyper-parameter optimization class starts training the model on many hyper-parameters on a small amount of data, and then only continues training those models that seem to be performing well.
This class will decay the number of models over time: at time step k, it retains a 1 / (k + 1) fraction of the highest-performing models.

Parameters
- estimator : estimator object
An object of this type is instantiated for each initial hyperparameter combination. It is assumed to implement the scikit-learn estimator interface. Either the estimator must provide a score function, or scoring must be passed. The estimator must implement partial_fit and set_params, and work well with clone.
- parameters : dict
Dictionary with parameter names (string) as keys and distributions or lists of parameters to try. Distributions must provide an rvs method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly.
- n_initial_parameters : int, default=10
Number of parameter settings that are sampled. This trades off runtime vs quality of the solution.
Alternatively, you can set this to "grid" to do a full grid search.
- patience : int, default=False
If specified, training stops when the score does not increase by tol after patience calls to partial_fit. Off by default.
- fits_per_score : int, optional, default=1
If patience is used, the maximum number of partial_fit calls between score calls.
- tol : float, default=0.001
The required level of improvement to consider stopping training on that model. The most recent score must be at most tol better than all of the previous patience scores for that model. Increasing tol will tend to reduce training time, at the cost of worse models.
- max_iter : int, default=100
Maximum number of partial_fit calls per model.
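The patience/tol rule above can be sketched as a small helper. This is an illustrative simplification of the stopping logic, not dask-ml's internal implementation; the function name and exact comparison are assumptions:

```python
def should_stop(scores, patience, tol):
    """Illustrative sketch of the patience/tol stopping rule.

    Stop when the most recent score is at most `tol` better than
    every one of the previous `patience` scores for that model.
    """
    if len(scores) <= patience:
        return False  # not enough score history yet
    recent = scores[-1]
    previous = scores[-1 - patience:-1]
    return all(recent <= s + tol for s in previous)

# A model whose score has plateaued would be stopped:
print(should_stop([0.50, 0.60, 0.605, 0.606], patience=2, tol=0.01))  # True
# A model still improving keeps training:
print(should_stop([0.10, 0.20, 0.30, 0.40], patience=2, tol=0.01))    # False
```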
- test_size : float
Fraction of the dataset to hold out for computing test scores. Defaults to the size of a single partition of the input training set.

Note: The training dataset should fit in memory on a single machine. Adjust the test_size parameter as necessary to achieve this.

- random_state : int, RandomState instance or None, optional, default=None
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
- scoring : string, callable, list/tuple, dict or None, default=None
A single string (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions) to evaluate the predictions on the test set.
For evaluating multiple metrics, either give a list of (unique) strings or a dict with names as keys and callables as values.
NOTE that when using custom scorers, each scorer should return a single value. Metric functions returning a list/array of values can be wrapped into multiple scorers that return one value each.
See Specifying multiple metrics for evaluation for an example.
If None, the estimator’s default scorer (if available) is used.
- verbose : bool, float, int, optional, default=False
If False (default), don't print logs (or pipe them to stdout). However, standard logging will still be used.
If True, print logs and use standard logging.
If float, print/log approximately a verbose fraction of the time.
- prefix : str, optional, default=''
While logging, add prefix to each message.
- decay_rate : float, default=1.0
How quickly to decrease the number of future partial_fit calls. Higher decay_rate values result in lower training times, at the cost of worse models.
The default decay_rate=1.0 is chosen because it has some theoretical motivation [1].
Attributes
- cv_results_ : dict of np.ndarray
This dictionary has keys:

- mean_partial_fit_time
- mean_score_time
- std_partial_fit_time
- std_score_time
- test_score
- rank_test_score
- model_id
- partial_fit_calls
- params
- param_{key}, where key is every key in params

The values in the test_score key correspond to the last score a model received on the hold-out dataset. The key model_id corresponds with history_. This dictionary can be imported into Pandas.
- model_history_ : dict of lists of dict
A dictionary of each model's history. This is a reorganization of history_: the same information is present, but organized per model.
This data has the structure {model_id: hist}, where hist is a subset of history_ and model_id are model identifiers.
- history_ : list of dicts
Information about each model after each partial_fit call. Each dict has the keys:

- partial_fit_time
- score_time
- score
- model_id
- params
- partial_fit_calls
- elapsed_wall_time

The key model_id corresponds to the model_id in cv_results_. This list of dicts can be imported into Pandas.
- best_estimator_ : BaseEstimator
The model with the highest validation score among all the models retained by the “inverse decay” algorithm.
- best_score_ : float
Score achieved by best_estimator_ on the validation set after the final call to partial_fit.
- best_index_ : int
Index indicating which estimator in cv_results_ corresponds to the highest score.
- best_params_ : dict
Dictionary of best parameters found on the hold-out data.
- scorer_
The function used to score models, which has a call signature of scorer_(estimator, X, y).
- n_splits_ : int
Number of cross validation splits.
- multimetric_ : bool
Whether this cross validation search uses multiple metrics.
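The relationship between history_ and model_history_ can be sketched in plain Python. The records below are hypothetical; only their keys are taken from the descriptions above:

```python
from collections import defaultdict

# Hypothetical records shaped like entries of history_ (a subset of its keys).
history = [
    {"model_id": 0, "partial_fit_calls": 1, "score": 0.61},
    {"model_id": 1, "partial_fit_calls": 1, "score": 0.55},
    {"model_id": 0, "partial_fit_calls": 2, "score": 0.67},
]

# model_history_ holds the same information, reorganized per model:
# {model_id: [records for that model, in order]}
model_history = defaultdict(list)
for record in history:
    model_history[record["model_id"]].append(record)

print(dict(model_history))
```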
Notes
When decay_rate == 1, this class approximates the number of partial_fit calls that SuccessiveHalvingSearchCV performs. If n_initial_parameters is configured properly with decay_rate=1, it's possible this class will mirror the most aggressive bracket of HyperbandSearchCV. This might yield good results and/or find good models, but is untested.

References
[1] Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., & Talwalkar, A. (2017). Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1), 6765-6816. http://www.jmlr.org/papers/volume18/16-558/16-558.pdf
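A rough numeric illustration of the Notes above: with decay_rate=1, the 1 / (k + 1) rule thins a pool of models almost as aggressively as successive halving. The survivor counts here (and their rounding) are simplified assumptions, not the library's exact bookkeeping:

```python
import math

n = 16  # n_initial_parameters
steps = range(8)

# Inverse decay: keep a 1 / (k + 1) fraction of the pool at step k.
inverse_decay = [max(1, math.ceil(n / (k + 1))) for k in steps]

# Successive halving: keep half the pool at each step.
halving = [max(1, n // 2 ** k) for k in steps]

print(inverse_decay)  # [16, 8, 6, 4, 4, 3, 3, 2]
print(halving)        # [16, 8, 4, 2, 1, 1, 1, 1]
```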
Methods
- decision_function(X)
- fit(X[, y]) : Find the best parameters for a particular model.
- get_metadata_routing() : Get metadata routing of this object.
- get_params([deep]) : Get parameters for this estimator.
- inverse_transform(Xt)
- predict(X) : Predict for X.
- predict_log_proba(X) : Log of probability estimates.
- predict_proba(X) : Probability estimates.
- score(X[, y]) : Returns the score on the given data.
- set_params(**params) : Set the parameters of this estimator.
- set_score_request(*[, compute]) : Request metadata passed to the score method.
- transform(X) : Transform block or partition-wise for dask inputs.
- partial_fit
- __init__(estimator, parameters, n_initial_parameters=10, test_size=None, patience=False, tol=0.001, fits_per_score=1, max_iter=100, random_state=None, scoring=None, verbose=False, prefix='', decay_rate=1.0)