dask_ml.metrics.mean_squared_log_error(y_true: ArrayLike, y_pred: ArrayLike, sample_weight: Optional[ArrayLike] = None, multioutput: Optional[str] = 'uniform_average', compute: bool = True) → ArrayLike

Mean squared logarithmic error regression loss

This docstring was copied from sklearn.metrics.mean_squared_log_error.

Some inconsistencies with the Dask version may exist.

Read more in the User Guide.

Parameters:
    y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
        Ground truth (correct) target values.
    y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
        Estimated target values.
    sample_weight : array-like of shape (n_samples,), optional
        Sample weights.
    multioutput : string in ['raw_values', 'uniform_average'] or array-like of shape (n_outputs,)
        Defines aggregating of multiple output values. Array-like value defines
        weights used to average errors.
        'raw_values' : Returns a full set of errors when the input is of
        multioutput format.
        'uniform_average' : Errors of all outputs are averaged with uniform
        weight.

Returns:
    loss : float or ndarray of floats
        A non-negative floating point value (the best value is 0.0), or an
        array of floating point values, one for each individual target.

Examples

>>> from sklearn.metrics import mean_squared_log_error  # doctest: +SKIP
>>> y_true = [3, 5, 2.5, 7]  # doctest: +SKIP
>>> y_pred = [2.5, 5, 4, 8]  # doctest: +SKIP
>>> mean_squared_log_error(y_true, y_pred)  # doctest: +SKIP
0.039...
>>> y_true = [[0.5, 1], [1, 2], [7, 6]]  # doctest: +SKIP
>>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]  # doctest: +SKIP
>>> mean_squared_log_error(y_true, y_pred)  # doctest: +SKIP
0.044...
>>> mean_squared_log_error(y_true, y_pred, multioutput='raw_values')  # doctest: +SKIP
array([0.00462428, 0.08377444])
>>> mean_squared_log_error(y_true, y_pred, multioutput=[0.3, 0.7])  # doctest: +SKIP
0.060...
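The first scalar example can be reproduced directly from the metric's definition. The sketch below uses plain NumPy to show what the loss computes (the mean of squared differences between `log1p`-transformed targets and predictions); it is an illustration of the formula, not the dask_ml implementation, which evaluates the same quantity lazily over chunked Dask arrays.

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# MSLE = mean((log(1 + y_true) - log(1 + y_pred))**2)
msle = np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)
print(round(msle, 3))  # matches the 0.039... shown above
```

Because the loss is taken on `log1p`-transformed values, all inputs must be greater than -1, and the metric penalizes under-prediction more heavily than over-prediction of the same absolute size.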