Feedback on my "duplicate remover function"

Hello kind humans!
I wrote this function, which I find a lot more straightforward than the way Dataquest does it in their
solution. Is my approach a good take, or do you have some tweaks to make it even more efficient and easy to read?

def check_for_duplicates_v2(dataset, title_column, rating_column):
    clean_dict = {}
    fake_duplicates_index = []
    index = 1  # not 0, since the for loop starts at index 1

    #### only keep unique names with the highest review number ####
    for counter, value in enumerate(dataset[1:]):
        title = value[title_column]
        rating = float(value[rating_column])
        if title in clean_dict and rating < clean_dict[title]:
            # fake_duplicates_index.append(index)
            del dataset[index]
        else:
            clean_dict[title] = rating
            index += 1

    print("Length of my dict without header:", len(clean_dict))

    #### clean dict to clean list type ####
    index = 0
    global clean_data
    clean_data = []
    clean_data.append(dataset[0])  # header
    for key in clean_dict:
        clean_data.append([key, clean_dict[key]])
    print("First two indexes of clean list:", clean_data[:2])
    print("Length of my final list:", len(clean_data), '\n')

###google###    
check_for_duplicates_v2(apps_data_google,0,3)
google_clean=clean_data
###apple###
check_for_duplicates_v2(apps_data_apple,1,5)
apple_clean=clean_data

print("global google: ",google_clean[:2])
print("global apple: ",apple_clean[:2])
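For comparison, here is a minimal sketch of the same dictionary idea that returns a new list instead of mutating `dataset` or relying on `global clean_data`. The function name `remove_duplicates_sketch` and the sample rows are my own, not the Dataquest solution:

```python
def remove_duplicates_sketch(dataset, title_column, rating_column):
    # Keep only the highest rating seen for each title.
    highest = {}
    for row in dataset[1:]:  # skip the header row
        title = row[title_column]
        rating = float(row[rating_column])
        if title not in highest or rating > highest[title]:
            highest[title] = rating
    # Build and return a fresh list; the input dataset is left untouched.
    return [dataset[0]] + [[t, r] for t, r in highest.items()]

sample = [
    ["name", "rating"],
    ["AppA", "4.0"],
    ["AppB", "3.5"],
    ["AppA", "4.5"],  # duplicate of AppA with a higher rating
]
print(remove_duplicates_sketch(sample, 0, 1))
# → [['name', 'rating'], ['AppA', 4.5], ['AppB', 3.5]]
```

Returning the result also lets you write `google_clean = remove_duplicates_sketch(apps_data_google, 0, 3)` directly, with no global needed.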




Hello @phibaar1!

In your function you are using the enumerate() function, but you never use the index counter it produces. Could you avoid using enumerate()?

Could you also explain your idea with index? I’m not sure how it works.
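
For illustration, a plain `for` loop yields the same rows without the unused counter. The stand-in `rows` list below is mine, not from your post:

```python
rows = [["header"], ["App A", "4.0"], ["App B", "3.5"]]  # stand-in dataset

# enumerate() is only needed when you actually use the position;
# iterating the slice directly gives each row on its own.
values = []
for value in rows[1:]:  # same rows as enumerate(rows[1:]), minus the counter
    values.append(value)

print(values)  # → [['App A', '4.0'], ['App B', '3.5']]
```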