I’ll try to explain how to work through the notation. First, let’s set up our data:
>>> import pandas as pd
>>> data = {
...     "age": [25, 50, 30, 50, 80],
...     "high_income": [1, 1, 0, 0, 1],
...     "split_age": [0, 0, 0, 0, 1]
... }
>>> df = pd.DataFrame(data)
>>> df
   age  high_income  split_age
0   25            1          0
1   50            1          0
2   30            0          0
3   50            0          0
4   80            1          1
We will be looking at the formula. . .
\displaystyle IG(T,A) = \text{Entropy}(T) - \sum \limits_{v\in A} \left(\dfrac{\left\vert T_{v}\right\vert}{\left\vert T\right\vert} \cdot \text{Entropy}(T_{v})\right)
where A represents the (unique) values in split_age. In other words, A = \{0, 1\}. Thus, the summation. . .
\sum \limits_{v\in A} \left(\dfrac{\left\vert T_{v}\right\vert}{\left\vert T\right\vert} \cdot \text{Entropy}(T_{v})\right)
. . . can be unpacked as follows:
\begin{align}
\sum \limits_{v\in A} \left(\dfrac{\left\vert T_{v}\right\vert}{\left\vert T\right\vert} \cdot \text{Entropy}(T_{v})\right) &= \sum \limits_{v\in \{0, 1\}} \left(\dfrac{\left\vert T_{v}\right\vert}{\left\vert T\right\vert} \cdot \text{Entropy}(T_{v})\right)\\
&= \dfrac{\left\vert T_{0}\right\vert}{\left\vert T\right\vert} \text{Entropy}(T_{0}) + \dfrac{\left\vert T_{1}\right\vert}{\left\vert T\right\vert} \text{Entropy}(T_{1}) \tag 1
\end{align}
Let’s dig into the notation again. The symbol T denotes the whole dataset, while T_v represents the rows for which split_age takes the value v. In particular, and more explicitly, we have that:

T_0 is the set of rows for which split_age equals 0:
>>> df[df["split_age"] == 0]
   age  high_income  split_age
0   25            1          0
1   50            1          0
2   30            0          0
3   50            0          0

T_1 is the set of rows for which split_age equals 1:
>>> df[df["split_age"] == 1]
   age  high_income  split_age
4   80            1          1
The vertical bars with a set in between them denote the number of elements of that set (its cardinality). Therefore we have:
 \left\vert T\right\vert = 5
 \left\vert T_0\right\vert = 4
 \left\vert T_1\right\vert = 1
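These cardinalities can be double-checked directly with len in pandas:

```python
import pandas as pd

df = pd.DataFrame({
    "age": [25, 50, 30, 50, 80],
    "high_income": [1, 1, 0, 0, 1],
    "split_age": [0, 0, 0, 0, 1],
})

print(len(df))                        # |T|   = 5
print(len(df[df["split_age"] == 0]))  # |T_0| = 4
print(len(df[df["split_age"] == 1]))  # |T_1| = 1
```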
Replacing back in (1) we obtain:
\dfrac{4}{5} \text{Entropy}(T_{0}) + \dfrac{1}{5} \text{Entropy}(T_{1}) \tag 2
Now let’s compute the remaining terms, namely \text{Entropy}(T_{0}) and \text{Entropy}(T_{1}).
In a previous screen, we can read that the entropy is given by \displaystyle -\sum \limits_{i=1}^{c} {\mathrm{P}(x_i) \log_2 \left(\mathrm{P}(x_i)\right)} where:

x_1, \ldots, x_c are the unique values in our target variable (high_income);

c is the number of unique values in our target column.

N.B.: If \mathrm{P}(x_i) above is 0, then \log_2\left(\mathrm{P}(x_i)\right) isn’t defined. In this case, by convention, the term \mathrm{P}(x_i) \log_2 \left(\mathrm{P}(x_i)\right) is replaced by 0.
We thus have c=2, x_1 = 0 and x_2 = 1.
Finally, \mathrm{P}(x_i) is the ratio between the number of times x_i occurs in the high_income column of S (where S is either T_0 or T_1) and the number of elements in S.
For T_0 we have \mathrm{P}(x_1) = \dfrac{2}{4} and \mathrm{P}(x_2) = \dfrac{2}{4}. Consequently:
\begin{align}
\text{Entropy}(T_{0}) &= -\sum \limits_{i=1}^{2} {\mathrm{P}(x_i) \log_2 \left(\mathrm{P}(x_i)\right)}\\
&=  -\left(\color{blue}{\left(\dfrac{2}{4}\log_2\left(\dfrac{2}{4}\right)\right)} \color{black}{+} \color{brown}{\left(\dfrac{2}{4}\log_2\left(\dfrac{2}{4}\right)\right)}\right) = 1
\end{align}
For T_1 we have \mathrm{P}(x_1) = 0 and \mathrm{P}(x_2) = 1, thus:
\begin{align}
\text{Entropy}(T_{1}) &= -\sum \limits_{i=1}^{2} {\mathrm{P}(x_i) \log_2 \left(\mathrm{P}(x_i)\right)}\\
&=  -\left(\color{blue}{0} \color{black}{+} \color{brown}{\left(1\cdot \log_2\left(1\right)\right)}\right) = 0
\end{align}
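As a sketch, both entropies can be verified with a small helper function (the function name and the 0·log₂(0) handling via value_counts are my own choices, not from the course):

```python
import numpy as np
import pandas as pd

def entropy(series):
    # P(x_i) for each unique value of the target column;
    # value_counts(normalize=True) only returns values that
    # actually occur, so the P(x_i) == 0 terms are dropped,
    # matching the convention 0 * log2(0) := 0.
    probs = series.value_counts(normalize=True).to_numpy()
    return -np.sum(probs * np.log2(probs))

df = pd.DataFrame({
    "age": [25, 50, 30, 50, 80],
    "high_income": [1, 1, 0, 0, 1],
    "split_age": [0, 0, 0, 0, 1],
})

t0 = df[df["split_age"] == 0]["high_income"]
t1 = df[df["split_age"] == 1]["high_income"]
print(entropy(t0))  # 1.0
print(entropy(t1))  # -0.0 (i.e. zero)
```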
Replacing back in (2) we obtain:
-\dfrac{4}{5} \left(\color{blue}{\left(\dfrac{2}{4}\log_2\left(\dfrac{2}{4}\right)\right)} \color{black}{+} \color{brown}{\left(\dfrac{2}{4}\log_2\left(\dfrac{2}{4}\right)\right)}\right) - \dfrac{1}{5} \left(\color{blue}{0} \color{black}{+} \color{brown}{\left(1\cdot \log_2\left(1\right)\right)}\right)
The rest is just arithmetic: the expression above evaluates to \dfrac{4}{5}\cdot 1 + \dfrac{1}{5}\cdot 0 = \dfrac{4}{5}.
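Putting everything together, here is a sketch of the full information-gain computation (the entropy helper and the variable names are my own, not from the course):

```python
import numpy as np
import pandas as pd

def entropy(series):
    # Probabilities of each unique target value; zero-probability
    # terms are dropped, matching the convention 0 * log2(0) := 0.
    probs = series.value_counts(normalize=True).to_numpy()
    return -np.sum(probs * np.log2(probs))

df = pd.DataFrame({
    "age": [25, 50, 30, 50, 80],
    "high_income": [1, 1, 0, 0, 1],
    "split_age": [0, 0, 0, 0, 1],
})

# Weighted sum of entropies over the values of split_age:
# sum over v in A of |T_v| / |T| * Entropy(T_v).
weighted = sum(
    len(subset) / len(df) * entropy(subset["high_income"])
    for _, subset in df.groupby("split_age")
)

# IG(T, A) = Entropy(T) - weighted sum.
ig = entropy(df["high_income"]) - weighted
print(round(ig, 3))  # 0.171
```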