dask_ml.model_selection.ShuffleSplit¶
- class dask_ml.model_selection.ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, blockwise=True, random_state=None)¶
Random permutation cross-validator.
Yields indices to split data into training and test sets.
Warning
By default, this performs a blockwise shuffle. That is, each block is shuffled internally, but data are not shuffled between blocks. If your data is ordered, then set blockwise=False.

Note: contrary to other cross-validation strategies, random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets.
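For ordered data, a minimal sketch (not from the original documentation; parameter values are illustrative) of opting out of the blockwise shuffle:

    from dask_ml.model_selection import ShuffleSplit

    # blockwise=False lets indices move between blocks, which costs more
    # communication but removes ordering bias for sorted data.
    cv = ShuffleSplit(n_splits=5, test_size=0.2, blockwise=False, random_state=0)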
- Parameters
- n_splits : int, default=10
Number of re-shuffling & splitting iterations.
- test_size : float, int, or None, default=0.1
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size.
- train_size : float, int, or None, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.
- blockwise : bool, default=True
Whether to shuffle data only within blocks (True), or allow data to be shuffled between blocks (False). Shuffling between blocks can be much more expensive, especially in distributed environments.
- random_state : int, RandomState instance, or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
Methods
- get_metadata_routing(): Get metadata routing of this object.
- get_n_splits([X, y, groups]): Returns the number of splitting iterations in the cross-validator.
- split(X[, y, groups]): Generate indices to split data into training and test set.
- __init__(n_splits=10, test_size=0.1, train_size=None, blockwise=True, random_state=None)¶
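As a usage illustration (a minimal sketch assuming a chunked Dask array as input; the array and parameter values below are made up for demonstration), the splitter is constructed like a scikit-learn cross-validator and split is iterated to obtain train/test index arrays:

    import dask.array as da
    from dask_ml.model_selection import ShuffleSplit

    # 100 rows split into 10 row-blocks of 10 rows each.
    X = da.random.random((100, 4), chunks=(10, 4))

    cv = ShuffleSplit(n_splits=3, test_size=0.2, random_state=0)
    for train_idx, test_idx in cv.split(X):
        # With the default blockwise=True, indices are shuffled only within
        # each block of X. Use the index arrays to slice the original array.
        X_train, X_test = X[train_idx], X[test_idx]
        print(X_train.compute().shape, X_test.compute().shape)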