Why Dataframe memory footprint displays 'None' at the end

Screen Link: https://app.dataquest.io/m/163/optimizing-dataframe-memory-footprint/15/selecting-types-while-reading-the-data-in

My Code:

keep_cols = ['ExhibitionID', 'ExhibitionNumber', 'ExhibitionBeginDate', 'ExhibitionEndDate', 'ExhibitionSortOrder', 'ExhibitionRole', 'ConstituentType', 'DisplayName', 'Institution', 'Nationality', 'Gender']

moma = pd.read_csv('moma.csv', parse_dates=["ExhibitionBeginDate", "ExhibitionEndDate"], usecols=keep_cols)


What I expected to happen:
I used the df.info(memory_usage='deep') method to obtain the deep memory footprint of the 'moma' dataframe in MB.

It seems to give me the expected info. However, I see a None at the bottom and wonder what the reason is. Could you please tell me why it says None at the end of the memory usage displayed this way?

What actually happened:

Data columns (total 11 columns):
ExhibitionID           34129 non-null float64
ExhibitionNumber       34558 non-null object
ExhibitionBeginDate    34558 non-null datetime64[ns]
ExhibitionEndDate      33354 non-null datetime64[ns]
ExhibitionSortOrder    34558 non-null float64
ExhibitionRole         34424 non-null object
ConstituentType        34424 non-null object
DisplayName            34424 non-null object
Institution            2458 non-null object
Nationality            26072 non-null object
Gender                 25796 non-null object
dtypes: datetime64[ns](2), float64(2), object(7)
memory usage: 14.6 MB


df.info() writes its report to stdout and returns None. So if you wrap the call in print(), e.g. print(moma.info()), Python prints that returned None after the report. The behavior is the same with or without memory_usage='deep'.
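Here's a minimal sketch you can run to confirm this (using a small stand-in DataFrame, since the original moma.csv isn't needed to reproduce the behavior):

```python
import io

import pandas as pd

# Tiny stand-in DataFrame; any DataFrame shows the same behavior.
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# info() writes its report to a buffer (stdout by default) and
# returns None. Capturing the report in a StringIO keeps the
# demonstration quiet.
buf = io.StringIO()
result = df.info(memory_usage="deep", buf=buf)

print(result is None)          # True: info() has no return value
print("memory usage" in buf.getvalue())  # True: the report was written
```

So calling moma.info(memory_usage='deep') on its own line in a script or notebook prints only the report; wrapping it in print() additionally prints the None it returns.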


haha :slight_smile: Noted what works by default. Thanks @Bruno!
