Potentially use sklearn.model_selection.cross_val_score?

Screen Link: https://app.dataquest.io/m/240/guided-project%3A-predicting-house-sale-prices/4/train-and-test

Hi guys,

In the solution, we used a custom-defined function to do k-fold cross-validation. Wouldn't it have been easier and simpler to use sklearn.model_selection's cross_val_score function instead (with the scoring parameter set to "neg_root_mean_squared_error")?
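For anyone curious, here is a minimal sketch of that approach. It uses synthetic data, and `LinearRegression` plus a 4-fold split are assumptions standing in for the project's actual model and variables, not the official solution:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic regression data as a stand-in for the housing DataFrame
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([3.0, -2.0, 1.5]) + rng.normal(scale=0.5, size=200)

kf = KFold(n_splits=4, shuffle=True, random_state=1)

# sklearn returns *negated* RMSE (higher is better by its convention),
# so flip the sign to recover the usual positive RMSE values
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_root_mean_squared_error", cv=kf)
avg_rmse = -scores.mean()
print(avg_rmse)
```

This replaces the manual loop over folds from the solution with a single call; the averaging over folds happens in `scores.mean()`.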



I agree, and I used cross_val_score to get the RMSE. On that project, for k=4, the average RMSE is 29111.434972870542, which differs slightly from the number obtained from the 'manual' calculation (29112.149223755107). You decide which one is more convenient :slight_smile:

Hello,
I tried to use the cross validation score function but I am unable to get the answer. Could someone send me the code for the train_test() function?

My code, which is not working:

from sklearn.model_selection import KFold, cross_val_score
import numpy as np

kf = KFold(n_splits=k, shuffle=True, random_state=1)
mses = cross_val_score(lr, numeric_df[features], numeric_df['SalePrice'],
                       scoring='neg_root_mean_squared_error', cv=kf)
# The scores are already RMSE values, just negated by sklearn's convention.
# Taking np.sqrt of these negative numbers produces NaN, which is the bug;
# negate them instead.
rmse = -mses
avg = np.mean(rmse)
return avg

I did the same thing dudes, I was hung up on this and played around. Make sure you use the same random state and you’ll get the same results!
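To illustrate the point about random_state, here is a small sketch on synthetic data (the model and data are assumptions for demonstration): reusing the same seed in KFold reproduces the exact same splits and therefore the exact same scores, while a different seed generally does not.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic data standing in for the project's DataFrame
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X.sum(axis=1) + rng.normal(scale=0.1, size=100)

def fold_scores(seed):
    # Same seed -> identical shuffled fold assignments
    kf = KFold(n_splits=4, shuffle=True, random_state=seed)
    return cross_val_score(LinearRegression(), X, y,
                           scoring="neg_root_mean_squared_error", cv=kf)

same_seed_matches = np.allclose(fold_scores(1), fold_scores(1))
diff_seed_matches = np.allclose(fold_scores(1), fold_scores(2))
print(same_seed_matches, diff_seed_matches)
```

So if your numbers differ from the solution's, check that both runs build their folds with the same random_state before suspecting the math.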