Actual and expected have "different" memory usage?

Screen Link:

Line Graphs And Time Series — WHO Time Series Data | Dataquest

My Code:

import pandas as pd

who_time_series = pd.read_csv('WHO_time_series.csv')
who_time_series["Date_reported"] = pd.to_datetime(who_time_series["Date_reported"])

print(who_time_series.head(5))
print(who_time_series.tail(5))
who_time_series.info()

What I expected to happen:

I expected to pass this screen

What actually happened:

The actual and expected outputs did not match, but not in a way I could figure out how to debug. Specifically (with red as actual and green as expected):

Any tips appreciated.

Well, I peeked at the solution, and I see that the problem was that I hadn’t wrapped df.info() in a print statement. But why did the error appear the way it did? How does the missing print statement mess up the memory usage in some invisible way?

It doesn’t mess up the memory usage. Our answer checking here could be more refined than it is: at the time you ran into this, the answer checker expected None to be printed, and you didn’t print it. Printing it is pointless, though; your answer should be considered correct, and our expected answer needs to be fixed.
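For anyone else who hits this: DataFrame.info() writes its summary to stdout as a side effect and returns None, so print(df.info()) shows the same table followed by a literal None line, and that extra line is what the checker was diffing on. A minimal sketch of the behaviour, using a throwaway DataFrame rather than the WHO data:

import io

import pandas as pd

# Throwaway frame, just to illustrate the behaviour (not the WHO data).
df = pd.DataFrame({"a": [1, 2, 3]})

# info() prints its summary to stdout and returns None...
df.info()

# ...so wrapping it in print() shows the same summary plus a trailing "None",
# which is the line the answer checker happened to expect here.
print(df.info())

# If you want the summary as a string instead of printing it,
# pass a buffer explicitly:
buffer = io.StringIO()
df.info(buf=buffer)
print(buffer.getvalue())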

Because it’s checking the differences between your output and Dataquest’s expected output.


Ah - got it. I had thought there was an issue with the “memory usage” part of the output also. Thanks for the clarification.

Darn. I spent so much time trying to fix that “error”.

Thanks for this. Saved me from pulling out all my remaining hair.
