Advanced Regex 3/12: Why did my code pass in the Dataquest script but not in my Jupyter notebook?

I feel extremely frustrated because lately, whether I work on a Dataquest project or a mini mission/exercise, my own code (as well as Dataquest's solution code) passes in the Dataquest script runner or Dataquest's Jupyter notebook.
But when I run the same code locally in my own Jupyter notebook, some lines of code don't pass.

Here is a mini-exercise from 3/12, Advanced Regular Expressions.

Here is my page from the Dataquest script runner (both my own code and DQ's code passed):


Here is the screenshot from my own locally run Jupyter notebook, where I ran Dataquest's solution:

I did check py_versions. It is a 1-D array, so it should be a Series (a DataFrame is an n-D array).
So why does the system in this case recognize py_versions as a DataFrame and not a Series?
Could anyone tell me what went wrong? TIA for your help!
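One quick thing to check (a minimal sketch — `py_versions` and the column name here are stand-ins, since the mission data isn't shown in the post): selecting a column with single brackets gives a Series, while double brackets give a single-column DataFrame, even though both look "1-D" when printed.

```python
import pandas as pd

# Hypothetical stand-in for the mission data
df = pd.DataFrame({"py_version": ["2.7", "3.6", "3.6", "3.7"]})

# Single brackets -> Series; double brackets -> one-column DataFrame
as_series = df["py_version"]
as_frame = df[["py_version"]]

print(type(as_series).__name__)  # Series
print(type(as_frame).__name__)   # DataFrame

# value_counts() works on the Series
counts = as_series.value_counts()
print(counts["3.6"])  # 2
```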

[screenshot]

Yes, `value_counts()` exists for Series, not DataFrame. `value_counts()` produces an output Series whose index is the set of unique values in the input Series. A DataFrame contains multiple columns, each with a different `nunique()`, so each column would generate a different number (and set) of index labels, which would make joining the per-column outputs back together (the way `groupby.apply` does) rather senseless. You can see this by googling "dataframe value_counts" and noticing there isn't a documentation page for it (a `DataFrame.value_counts` that counts unique *row combinations* was only added later, in pandas 1.1). It's good to notice which methods can be applied to both DataFrame and Series and which can only be applied to Series. There are usually more Series methods than DataFrame methods, and going from Series to DataFrame you just need to specify an optional axis (row-wise, column-wise, or element-wise).
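A short sketch of the point above (toy data, not the mission's): `value_counts()` works directly on a Series, and applying it column by column on a DataFrame shows why the results don't line up into a single frame.

```python
import pandas as pd

s = pd.Series(["a", "b", "a", "a"])
print(s.value_counts())  # index = unique values, values = counts

# Per-column value_counts on a DataFrame: each column produces its own
# index of unique values, so the outputs have different shapes and labels.
df = pd.DataFrame({"x": ["a", "a", "b"], "y": [1, 2, 2]})
per_column = {col: df[col].value_counts() for col in df.columns}
print(per_column["x"]["a"])    # 2
print(per_column["y"].loc[2])  # 2
```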

You can think of `value_counts()` in terms of what it's really doing as part of EDA: it's a univariate analysis. Under univariate analysis there are histograms for numerical features and bar charts for categorical features. People usually chain `series.value_counts().plot.bar()` to create that visualization of a single column.
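For example, with a made-up categorical column (the `langs` data here is invented for illustration):

```python
import pandas as pd

langs = pd.Series(["Python", "R", "Python", "SQL", "Python"])

# Univariate summary of a categorical feature: counts per unique value,
# sorted in descending order by default.
lang_counts = langs.value_counts()
print(lang_counts)

# The bar-chart version (requires matplotlib):
# lang_counts.plot.bar()
```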

Another trick, if you don't know it yet: `column.value_counts()` is the same as `column.groupby(column).size()`, and similar to `column.groupby(column).count()` (when using the default `value_counts(dropna=True)`).
`value_counts()` is a shortcut for single-index grouping. You would use the more general `groupby` for multi-index grouping, where you pass a list of columns to `groupby()` in the correct order.
That allows unstacking later and other forms of analysis.
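The equivalence and the multi-index case above can be sketched like this (again with toy data):

```python
import pandas as pd

df = pd.DataFrame({
    "lang": ["Python", "R", "Python", "Python", "R"],
    "os":   ["linux", "mac", "linux", "mac", "mac"],
})

# Single column: value_counts is shorthand for grouping a Series by itself
col = df["lang"]
vc = col.value_counts()
gb = col.groupby(col).size()
print(vc.to_dict() == gb.to_dict())  # True

# Multi-index grouping: pass a list of columns, then unstack into a table
multi = df.groupby(["lang", "os"]).size()
table = multi.unstack()  # rows = lang, columns = os, values = counts
print(table)
```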