Workflow of k-fold cross validation

When performing regression with a KNN machine learning algorithm, we need to split the data into train and test sets. So when using k-fold cross-validation, we do the splitting before building the model, as shown in the link below, right?
https://app.dataquest.io/m/154/cross-validation/5/function-for-training-models
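
The workflow there looks roughly like this (just a rough sketch from memory, not the exact mission code, with column names like `accommodates`, `price`, and `fold` assumed):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

def train_and_validate(df, fold):
    # df is assumed to have an 'accommodates' feature, a 'price' target,
    # and a 'fold' column assigned before any model is built.
    train = df[df['fold'] != fold]
    test = df[df['fold'] == fold]

    # Explicit fit() and predict() on each fold
    model = KNeighborsRegressor()
    model.fit(train[['accommodates']], train['price'])
    predictions = model.predict(test[['accommodates']])

    return np.sqrt(mean_squared_error(test['price'], predictions))
```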

But if we are using scikit-learn to perform k-fold CV, don't we still need to explicitly call the model.fit() and model.predict() methods of the KNeighborsRegressor class?
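
For example, I would have expected something like cross_val_score to handle the folds, fitting, and predicting internally (again just a sketch of what I mean, assuming a `dc_listings` dataframe with `accommodates` and `price` columns):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor

kf = KFold(n_splits=5, shuffle=True, random_state=1)
model = KNeighborsRegressor()

# No explicit model.fit() or model.predict() here --
# cross_val_score fits and scores the model on each fold by itself.
mses = cross_val_score(model, dc_listings[['accommodates']], dc_listings['price'],
                       scoring='neg_mean_squared_error', cv=kf)
rmses = np.sqrt(np.abs(mses))
```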

I am not sure I understand. Both model.fit() and model.predict() are being used in the code in the link you shared. Could you clarify what you mean?

What I mean is that in the link above we use the fit() and predict() methods, get the predicted values, and then add those as a column in our dataframe.
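
Roughly like this, in other words (sketch only, with the same assumed column names and train/test dataframes):

```python
from sklearn.neighbors import KNeighborsRegressor

model = KNeighborsRegressor()
model.fit(train_df[['accommodates']], train_df['price'])

# Predicted values added back as a new column of the test dataframe
test_df['predicted_price'] = model.predict(test_df[['accommodates']])
```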

But when performing k-fold CV through scikit-learn, don't we need to call the fit() and predict() methods ourselves?

Any ideas on this?