Why is the rating count different?

Hi all, I’m trying to figure out why my output count is 3 lower than it should be.

Screen Link:
https://app.dataquest.io/m/314/dictionaries-and-frequency-tables/13/filtering-for-the-intervals

My Code:

from csv import reader

opened_file = open('AppleStore.csv')
read_file = reader(opened_file)
apps_data = list(read_file)

n_user_ratings = []

rating_dictionary = {'<1m ratings': 0, '<2m ratings': 0, '<3m ratings': 0}

for row in apps_data[1:]:
    rating = int(row[5])
    n_user_ratings.append(rating)

    if rating <= 1000000:
        rating_dictionary['<1m ratings'] += 1
    elif 10000000 < rating <= 2000000:
        rating_dictionary['<2m ratings'] += 1
    elif 2000000 < rating <= 3000000:
        rating_dictionary['<3m ratings'] += 1

print(min(n_user_ratings))
print(max(n_user_ratings))
print(rating_dictionary)
print(len(apps_data[1:]))

What I expected to happen:
The rating counts should sum to the number of apps, len(apps_data[1:]) = 7197

What actually happened:
The total is 7194

This is the final output:

0
2974676
{'<3m ratings': 3, '<1m ratings': 7191, '<2m ratings': 0}
7197

Try adding an else clause to capture ratings greater than 3000000, or any that don’t fall into the other if/elif conditions…
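The suggestion above can be sketched as follows, with an added catch-all key (the `'other'` key and the sample values are mine, standing in for the real AppleStore.csv data). Note that the catch-all fires for a rating of 1500000, which points at the second elif: it compares against 10000000 (seven zeros) rather than 1000000, so that range can never match.

```python
# Illustrative rating counts; real values come from AppleStore.csv.
sample_ratings = [500, 1500000, 2500000, 2974676]

rating_dictionary = {
    '<1m ratings': 0,
    '<2m ratings': 0,
    '<3m ratings': 0,
    'other': 0,  # catch-all for ratings the branches above miss
}

for rating in sample_ratings:
    if rating <= 1000000:
        rating_dictionary['<1m ratings'] += 1
    elif 10000000 < rating <= 2000000:  # extra zero: this range is empty
        rating_dictionary['<2m ratings'] += 1
    elif 2000000 < rating <= 3000000:
        rating_dictionary['<3m ratings'] += 1
    else:
        rating_dictionary['other'] += 1  # 1500000 lands here, exposing the bug

print(rating_dictionary)
```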

Thank you!! That was the problem :slight_smile: I thought I had picked a value greater than the maximum rating count.
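For completeness, a sketch with the threshold corrected at the root: the second elif should compare against 1000000, not 10000000. With that fix every rating lands in exactly one bucket and the counts sum to the number of apps (sample values here are illustrative, not from the real dataset).

```python
# Illustrative stand-ins for the AppleStore.csv rating counts.
sample_ratings = [0, 999999, 1500000, 2500000, 2974676]

rating_dictionary = {'<1m ratings': 0, '<2m ratings': 0, '<3m ratings': 0}

for rating in sample_ratings:
    if rating <= 1000000:
        rating_dictionary['<1m ratings'] += 1
    elif 1000000 < rating <= 2000000:  # corrected: was 10000000
        rating_dictionary['<2m ratings'] += 1
    elif 2000000 < rating <= 3000000:
        rating_dictionary['<3m ratings'] += 1

print(rating_dictionary)
# every sample is counted, so the totals match len(sample_ratings)
```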