I hope this message finds you well.
I seem to have run into a slight complication. As per the screenshot I have shared, I got an error saying that my model was not able to reach convergence within the number of iterations it ran. I went ahead and changed the `max_iter` parameter to 1000 to see if it would make a difference, but it did not help; I got the same error.
This does seem normal, as the same error also appears in the solutions notebook. My problem, however, is that after running my models I am unable to get an output in any of the subsequent cells, or in fact in the cell in which I run my model. I see in the solution notebook that the person who did the project was still able to plot their results despite the error, but I am unable to do so. I do not get an output number, just an asterisk. Can anyone offer me some guidance? I would be forever grateful!
Below you will see a screenshot of my pipeline for training, testing, and cross-validation, and below that a screenshot showing the error and highlighting the problem of not being able to return any output for my code:
Did you try changing the solver?
According to the scikit-learn docs:
Note: The default solver ‘adam’ works pretty well on relatively large datasets (with thousands of training samples or more) in terms of both training time and validation score. For small datasets, however, ‘lbfgs’ can converge faster and perform better.
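Following that note, a minimal sketch of switching `MLPClassifier` from the default `adam` to `lbfgs` might look like this (the dataset here is synthetic, just for illustration; your project will use its own data):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Small synthetic dataset standing in for the project's data
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# On small datasets, 'lbfgs' often converges faster than the default 'adam'
clf = MLPClassifier(solver='lbfgs', max_iter=1000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

Everything else in your pipeline can stay the same; only the `solver` argument changes.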
Also, about the "adam" solver you are using (this is the default solver), I found this:
- `tol`: float, default=1e-4. Tolerance for the optimization. When the loss or score is not improving by at least `tol` for `n_iter_no_change` consecutive iterations, unless `learning_rate` is set to 'adaptive', convergence is considered to be reached and training stops.
- `n_iter_no_change`: int, default=10. Maximum number of epochs to not meet `tol` improvement. Only effective when `solver='sgd'` or `'adam'`. New in version 0.20.
Maybe you can play with the `n_iter_no_change` parameter if you absolutely want to use the "adam" solver. Also make sure you are using scikit-learn version 0.20 or later.
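For instance, a hedged sketch of keeping `adam` but loosening the stopping criteria (the exact values here are guesses to experiment with, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder data for illustration
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

clf = MLPClassifier(solver='adam',
                    tol=1e-3,             # less strict than the default 1e-4
                    n_iter_no_change=25,  # allow more non-improving epochs (default 10)
                    max_iter=2000,
                    random_state=0)
clf.fit(X, y)
```

A looser `tol` lets training stop as "converged" sooner, while a larger `n_iter_no_change` gives the optimizer more patience before giving up.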
It looks like, despite the warning, the training doesn't stop, as if it were lost in an infinite loop, so you have to find a way to force it to stop when it is not converging.
Thanks for your help!
I changed the solver and got the model to converge!
Really appreciate your help on this. Learning new things every day!
Can you mark my post as “solution”? (I don’t know how to do it)
OK, just in case someone else still has the issue after changing the solver: these settings worked for me and the errors are gone. I bumped `max_iter` up to 2,000.
```python
MLPClassifier(hidden_layer_sizes=(neuron_set,), solver='sgd',
              activation='logistic', random_state=0, max_iter=2000)
```
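For anyone who wants to try these settings end to end, here is a self-contained sketch. The dataset and the `neuron_set` value are placeholders (the original project loops over several hidden-layer sizes with its own data):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical small dataset; substitute the project's own data here
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

neuron_set = 16  # placeholder for one value from the hidden-layer-size loop

mlp = MLPClassifier(hidden_layer_sizes=(neuron_set,), solver='sgd',
                    activation='logistic', random_state=0, max_iter=2000)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```

With `max_iter=2000` the `sgd` solver has enough epochs to either converge or stop once the loss stops improving for `n_iter_no_change` consecutive epochs.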