Hi community,
I have run into something I find difficult to interpret (related to the Guided Project about k-nearest neighbors).
Let’s say we create a univariate model and we want to test two different features. We also want to compare two error metrics: RMSE and Mean Absolute Percentage Error (MAPE). I have tuned the k parameter (n_neighbors) over a range of values and obtained the scores for each feature shown in the figure below.
The caveat here is that Feature 1 gives the better RMSE scores (see the two red points in the upper left) but worse MAPE scores than Feature 2!
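For reference, I am using the standard definitions of the two metrics, which weight errors differently (RMSE is dominated by large absolute errors, while MAPE measures errors relative to the target value), so I understand they can in principle rank models differently:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2},\qquad \mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i - \hat{y}_i\right|}{\left|y_i\right|}$$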
Both models have been trained on the same random train/test split (see the sketch below).
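For context, here is roughly what I am doing, as a minimal sketch with synthetic placeholder data rather than the actual project code (the column names feature_1, feature_2, and price are made up, and mean_absolute_percentage_error requires scikit-learn 0.24 or later):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

# Synthetic stand-in data; in the project this is the real DataFrame.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "feature_1": rng.normal(size=200),
    "feature_2": rng.normal(size=200),
})
df["price"] = 50 + 10 * df["feature_1"] + 5 * df["feature_2"] + rng.normal(size=200)

# Same random split reused for both features, so the two models are comparable.
train, test = train_test_split(df, test_size=0.25, random_state=1)

results = []
for feature in ["feature_1", "feature_2"]:
    for k in range(1, 26):
        # Univariate KNN regressor on a single feature, for each value of k.
        knn = KNeighborsRegressor(n_neighbors=k)
        knn.fit(train[[feature]], train["price"])
        preds = knn.predict(test[[feature]])
        rmse = np.sqrt(mean_squared_error(test["price"], preds))
        mape = mean_absolute_percentage_error(test["price"], preds)
        results.append({"feature": feature, "k": k, "rmse": rmse, "mape": mape})

scores = pd.DataFrame(results)
print(scores.sort_values("rmse").head())
```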
Also, for Feature 2, the RMSE and MAPE scores show a pretty clean linear relationship, but this is not the case for Feature 1, at least to my eyes.
Does this mean that Feature 1 induces some kind of instability in the model, and that Feature 2 should be seen as the more conservative choice? What lesson should we draw from this?
I am aware this is probably the kind of caveat that arises when no clear choice has been made about which error metric to optimize, but I still find it puzzling.
Best
W.