Why is scoring equal to "neg_mean_squared_error"?

Why is scoring set to “neg_mean_squared_error”? I read the documentation referenced in the text, and it states that it returns the negated value of the metric. Does that mean it nulls it and makes it ineffectual, so it doesn’t do anything?

@vroomvroom

There are different kinds of scoring metrics depending on the problem. Please see here.

This problem uses the mean squared error (MSE) as the evaluation metric. However, the neg_mean_squared_error scorer returns the negated value of the MSE, so the scores come back negative. Therefore, we apply abs() to the results to get the positive MSE values back.
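As a rough sketch (the toy dataset, model, and cv value below are placeholders, not the mission's actual code), this is how the negated scores typically come back from cross_val_score and how abs() recovers the positive MSE:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy regression data standing in for the mission's dataset
X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=1)

model = LinearRegression()

# With scoring="neg_mean_squared_error", each fold's score is the
# *negated* MSE, so every value is <= 0.
neg_mses = cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=5)
print(neg_mses)            # an array of negative numbers

# Take the absolute value to recover the ordinary (positive) MSE,
# then average and take the square root if you want the RMSE.
mses = np.abs(neg_mses)
rmse = np.sqrt(mses.mean())
print(rmse)
```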

The MSE takes the errors (the differences between the actual values and those predicted by the model), squares them, and takes their mean.
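For example, a tiny hand-worked check (the actual and predicted numbers are made up purely for illustration) showing that squaring the errors and averaging matches sklearn's mean_squared_error:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

actual = np.array([3.0, 5.0, 2.5, 7.0])
predicted = np.array([2.5, 5.0, 4.0, 8.0])

# MSE by hand: square the errors, then take their mean.
errors = actual - predicted
mse_manual = np.mean(errors ** 2)

# Same result from sklearn's metric function, which is always >= 0.
mse_sklearn = mean_squared_error(actual, predicted)

print(mse_manual, mse_sklearn)   # both 0.875
```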

It isn’t null, and the negative sign does not make it ineffective. A large MSE still means the error is large; with the negated scorer, that simply shows up as a score far below zero.

I understand what you’re saying about how to calculate the MSE. On the page you referred to, it states: "Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error which return the negated value of the metric."
What is meant by the negated value of the metric? How does the MSE return negative values when the differences are squared?

It means that scikit-learn puts a negative sign in front of every MSE value it reports, i.e. the scorer returns -MSE rather than MSE.

The MSE itself cannot return negative values. Although the difference between an actual value and its prediction can be negative, that difference is squared, so every term, and therefore the mean, is either positive or zero.
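Here is a small sketch of the sign flip (the toy data and model are just placeholders): the plain metric is non-negative, while the "neg_mean_squared_error" scorer returns the same number negated, so that "greater is better" holds for every scorer in scikit-learn:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, get_scorer

X, y = make_regression(n_samples=100, n_features=3, noise=5, random_state=0)
model = LinearRegression().fit(X, y)

# The plain metric: always >= 0.
mse = mean_squared_error(y, model.predict(X))

# The "neg_mean_squared_error" scorer: the same number with a minus sign.
scorer = get_scorer("neg_mean_squared_error")
neg_mse = scorer(model, X, y)

print(mse, neg_mse)                  # e.g. 23.4  -23.4
print(np.isclose(mse, -neg_mse))     # True
```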


I initially didn’t understand why this would return the negative of the MSE values. I found a great answer here: python - Is sklearn.metrics.mean_squared_error the larger the better (negated)? - Stack Overflow
