Confused with the output

Hi,
I have a few points of confusion about the screens below.
For screen 9 (Learn data science with Python and R projects), the output I got is as follows:
[screenshot of my output]

But as per the next screen,
[screenshot]

As per screen 9, the FPR is 60% and the TPR is ~62%, whereas as per screen 10, they should have been 39% and 66% respectively.

Also, the output I got on screen 10 (Learn data science with Python and R projects) is:
[screenshot]

But screen 11 (Learn data science with Python and R projects) mentions a different FPR:
[screenshot]

I am confused by the above; can anyone please clarify?

Thanks,
Debasmita

5 Likes

@dash.debasmita I’ve encountered the same problem. This seems to be a bug in the content; it’s best to report it through Contact Us.

Also, @Sahil, will you kindly take a look at this? I’m wondering if it’s related to a version change in scikit-learn. The TPR and FPR in the output are so close in both penalized cases that it feels like the overall accuracy dropped so much that the optimization isn’t really valid.
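For anyone comparing results locally, it may help to note which scikit-learn version is installed; a quick check, nothing more:

import sklearn
print(sklearn.__version__)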

3 Likes

I am having the exact same problem!

It’s difficult to see the induced parameter changes at work in attaining lower FPs and higher TPs.

Regards,
John

1 Like

Hi @veratsien,

Thank you for mentioning me. I will get this issue logged.

Best,
Sahil

1 Like

@Sahil Awesome, thank you!

1 Like

Thank you, Sahil and veratsien; I forgot to follow up on this.

2 Likes

Any update on the issue mentioned here? I am seeing the same problem that is being reported above.

1 Like

Hi @Mathew.Thomas,

Thank you for asking. I just checked the ticket for updates and it seems like the issue is scheduled to be fixed by January 11, 2021.

Best,
Sahil

Hi @Sahil - just to note that these issues still seem to be present. Thanks

1 Like

Hi @Sahil: The issue on both screens still persists. The model actually deteriorates when penalties are applied on both screens. In fact, it gets worse with manual penalties.
Is this the correct way of applying penalties? We seem to be making our model worse with each step.

1 Like

Hi @joe.gamse,

Sorry about that; it seems like the fix has been delayed so that the content team can focus on our SQL skills path, which is a high-priority task for this quarter. I will update this topic once the issue is fixed. Until then, please use this workaround to mark the screen as completed:

https://dataquest.elevio.help/en/articles/151-how-to-mark-a-lesson-screen-as-complete

Best,
Sahil

Hi @vinayak.naik87,

Sorry, I don’t understand what you mean by “worse” here. The goal is to reduce the false positive rate and to demonstrate how to use manual penalties.

We reduced the false positive rate from 60% to 21% using a manual penalty. However, the manual penalty unintentionally reduced the true positive rate as well, which is expected behavior, as mentioned on screen 4:

Generally, if we want to reduce false positive rate, true positive rate will also go down. This is because if we want to reduce the risk of false positives, we wouldn’t think about funding riskier loans in the first place.

Why it is best for us to focus on the false positive rate (in the case of loans) is explained on screen 11:

Note that this comes at the expense of true positive rate. While we have fewer false positives, we’re also missing opportunities to fund more loans and potentially make more money. Given that we’re approaching this as a conservative investor, this strategy makes sense, but it’s worth keeping in mind the tradeoffs.
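In case it helps to see what a “manual penalty” looks like in code: in scikit-learn it is typically passed through the class_weight parameter. Here is a minimal sketch; the weights and the features/target variable names are illustrative, not necessarily the exact ones from the lesson:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Illustrative penalty: errors on class 0 (defaulted loans) are weighted 10x,
# which discourages false positives. The lesson's actual weights may differ.
penalty = {0: 10, 1: 1}

lr = LogisticRegression(class_weight=penalty)
# features and target stand in for the feature DataFrame and label Series used in the mission
predictions = cross_val_predict(lr, features, target, cv=3)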

Hope this helps! :slightly_smiling_face:

Hi there,

I also struggled with the results in this mission. I finally realized that the variable “target” is not equal to loans["loan_status"], even though it should be, since it is defined that way at the beginning of the mission. Because of this, you can get different TPR and FPR results depending on whether you compare your predictions against “target” or against loans["loan_status"].

I reckon this is why the results don’t match those mentioned during the mission. Maybe they were calculated using the variable “target” instead of loans["loan_status"].
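A quick way to check this locally (a sketch, assuming both are pandas Series and using the mission’s variable names):

import numpy as np

# True only if values, dtype and index all match
print(target.equals(loans["loan_status"]))

# Compare the raw values while ignoring the index
print(np.array_equal(target.to_numpy(), loans["loan_status"].to_numpy()))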

I hope this helps.

Regards.

2 Likes

Great question, @vinayak.naik87!

First, yes, the issues on all the screens are still there, and that is annoying, especially since a couple of people have now pointed out the source of the issue.

Second, this is an interesting notion, that the model deteriorates as we add these penalties. Looking only at the scores for the metrics we have chosen, we see the numbers going down as we add penalties, so it is easy to think that going from 66% to 30% (or whatever the actual numbers are) indicates a decrease in the performance of the model. But it’s important to remember why we created this model in the first place, and what these numbers actually mean.

For this lesson we are a potential investor looking to use Lending Club to make money. However, there is no guarantee that our investment will pay off, and in fact some of these ‘investment opportunities’ end up losing a lot of money. How can we guarantee that we don’t pick one of the ‘bad’ borrowers? Simple: we build a model that predicts that EVERY borrower is bad, and we don’t invest. That model predicts all zeros and has a 100% True Negative rate. Unfortunately, that model does not serve the original purpose of selecting an investment to make money.

So, how can we guarantee that we find good investments? Simple: we build a model that predicts all ones and has a 100% True Positive rate. Unfortunately, this model has no discriminating power to help us select a borrower who will pay back the loan, so we might as well just pick a borrower at random.
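To see what those two degenerate baselines look like in TPR/FPR terms, here is a small sketch (the rates helper is mine, and it assumes target is the 0/1 label Series from the mission, with 1 meaning the loan was paid off):

import numpy as np
import pandas as pd

def rates(predictions, target):
    # Confusion-matrix counts, then true positive rate and false positive rate
    tp = ((predictions == 1) & (target == 1)).sum()
    fp = ((predictions == 1) & (target == 0)).sum()
    tn = ((predictions == 0) & (target == 0)).sum()
    fn = ((predictions == 0) & (target == 1)).sum()
    return tp / (tp + fn), fp / (fp + tn)

all_zeros = pd.Series(np.zeros(len(target)), index=target.index)
all_ones = pd.Series(np.ones(len(target)), index=target.index)

print(rates(all_zeros, target))  # (0.0, 0.0): funds nothing, 100% True Negative rate
print(rates(all_ones, target))   # (1.0, 1.0): funds every loan, no discrimination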

So why don’t we just build a model that accurately predicts the category to which each borrower belongs? We know that this model will never be ‘perfect’, but let’s assume we could create a model with 90% accuracy: it correctly categorizes 9 of every 10 borrowers. Dataquest actually played out this example pretty well here:
https://app.dataquest.io/m/135/machine-learning-project-walkthrough%3A-making-predictions/4/class-imbalance

Even with this ‘accurate’ model we end up losing money! So we need another metric to judge the success of our model. We want a model that selects as few ‘bad’ borrowers as possible, but unlike our all-zeros model, it does need to select at least some borrowers. If the model assigns a 1 to a bad borrower, we would say the model gave us a False Positive, since it falsely predicted that the borrower would be good. For this scenario, then, we want a model with a LOW False Positive rate, ideally zero, since that would mean it never chooses a bad borrower. We don’t really care if we miss out on some potentially good borrowers, but we do need the model to correctly identify at least some of them. Successfully selecting a good borrower here is a True Positive, since it is a positive borrower that is truthfully identified. We don’t really care if this number is low, meaning it doesn’t identify many good borrowers, but we can’t have it go all the way to zero like our all-zeros model did.
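To make the “accurate but unprofitable” point concrete, here is a back-of-the-envelope calculation with made-up figures (purely illustrative, not the numbers from the linked screen):

# Hypothetical portfolio: roughly 6 of every 7 borrowers pay back
n_good, n_bad = 6, 1
interest_per_good = 150   # profit if a $1000 loan is paid back at 15% interest
loss_per_bad = 1000       # principal lost when a borrower defaults

# A model that funds every loan is ~86% "accurate" on these labels...
accuracy = n_good / (n_good + n_bad)
# ...but the portfolio still loses money overall
profit = n_good * interest_per_good - n_bad * loss_per_bad
print(accuracy, profit)   # 0.857..., -100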

It’s important to remember, though, that which metric you use to evaluate the model depends on what you are trying to accomplish with it. If you are trying to identify international terrorists at the airport, you need a low False Negative rate because you don’t want to miss even one! However, a high False Positive rate would mean that you are arresting everyone simply for being at the airport!

2 Likes

Hi all, I was encountering the same problem, but Daniel_H’s solution seemed to work for me on DataQuest.

In the True/False/Pos/Neg matrix, I replaced loans[“loan_status”] with target, and got the same FPR and TPR %s as described in the guide.

# Confusion-matrix counts, comparing predictions against target
tn_filter = (predictions == 0) & (target == 0)
tn = len(predictions[tn_filter])

tp_filter = (predictions == 1) & (target == 1)
tp = len(predictions[tp_filter])

fn_filter = (predictions == 0) & (target == 1)
fn = len(predictions[fn_filter])

fp_filter = (predictions == 1) & (target == 0)
fp = len(predictions[fp_filter])

# False positive rate and true positive rate
fpr = fp / (fp + tn)
tpr = tp / (tp + fn)

However, when I tried to follow this same project in my own Jupyter, I seemed to get the ‘wrong’ %s again…

import pandas as pd

def true_false_matrix(df, column, predictions):
    # Compare predictions against the labels in df[column] and print confusion-matrix stats
    target = pd.Series(df[column])
    tn_filter = (predictions == 0) & (target == 0)
    tn = len(predictions[tn_filter])

    tp_filter = (predictions == 1) & (target == 1)
    tp = len(predictions[tp_filter])

    fn_filter = (predictions == 0) & (target == 1)
    fn = len(predictions[fn_filter])

    fp_filter = (predictions == 1) & (target == 0)
    fp = len(predictions[fp_filter])

    print(' True Negatives: {} ({}%)'.format(tn, round(100 * tn / (tn+tp+fn+fp), 1)))
    print(' True Positives: {} ({}%)'.format(tp, round(100 * tp / (tn+tp+fn+fp), 1)))
    print('False Negatives: {} ({}%)'.format(fn, round(100 * fn / (tn+tp+fn+fp), 1)))
    print('False Positives: {} ({}%)'.format(fp, round(100 * fp / (tn+tp+fn+fp), 1)))
    print('-----------------------------')
    print(' True Positive Rate: {}%'.format(round(100 * tp / (tp + fn), 2)))
    print('False Positive Rate: {}%'.format(round(100 * fp / (fp + tn), 2)))
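For reference, with the mission’s variable names the function above would be called with something like the following (exact names assumed):

true_false_matrix(loans, "loan_status", predictions)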

E.g. output:

Logistic Regression (k-fold Cross Validation, Harsh Penalty) Metrics:
 True Negatives: 4303 (12.1%)
 True Positives: 4738 (13.3%)
False Negatives: 25774 (72.4%)
False Positives: 762 (2.1%)
-----------------------------
 True Positive Rate: 15.53%
False Positive Rate: 15.04%

Instead of the TPR of 24% and FPR of 9% that were mentioned in DataQuest’s guide.

I’d appreciate some clarity on what the ‘correct’ values should be, if possible, as I’m a bit confused about why Jupyter and DataQuest are producing slightly different results.

Thanks!

1 Like

It’s been well over a year since this issue was reported, and it’s still ongoing. Kind of insane. This is a paid service.

When you use target for calculating TPR/FPR, you get the actual correct answer (referenced in the text), because we actually trained the LR model on target. In order to get the “right” answer for completion of the screen, the filters need to be created with loans[‘loan_status’]. This shouldn’t matter, because in theory loans[‘loan_status’] == target, since that is how it was defined.

Super weird. Please fix things like this instead of continually updating your frontend to make it less usable.

1 Like

100% agree. It is very disappointing that such an issue is still here after two years of discussion about it.