I have a question regarding the code output for the screen below:

Screen Link: https://app.dataquest.io/m/161/vectors/6/dot-product

My code is shown below

```
vector_one = np.asarray([
[1],
[2],
[1]
], dtype=np.float32)
vector_two = np.asarray([
[3],
[0],
[1]
], dtype=np.float32)
dot_product = np.dot(vector_one[:,0], vector_two)
dot_product2 = np.dot(vector_one.T, vector_two)
print(dot_product)
print(dot_product2)
print(dot_product.shape)
print(dot_product2.shape)
```

While taking the dot product we need to convert one of the column vectors to a row vector. In the provided solution this was done using **vector_one[:,0]**

The result is [4.]

I implemented the solution using the transpose attribute **vector_one.T**

The result is [[4.]]

Both of the above outputs are numpy arrays, but the shape of the 1st one is (1,) whereas the shape of the 2nd is (1, 1). What is the difference between the two outputs?

Hi @vinayak.naik87

`vector_one[:,0]` gives you a vector, so the result of your dot product is a vector too.

`T` (matrix transpose) just outputs the transposed matrix, so your dot product is a 1 x 1 matrix.

As said in the course:

- matrix-vector multiplication: dot product between a matrix and a column vector
- matrix multiplication: dot product between each row of the first matrix and each column of the 2nd matrix. When you use the transpose you perform matrix multiplication, so your output is a matrix.
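Using the vectors from the question, here is a quick sketch of the two cases side by side:

```python
import numpy as np

vector_one = np.asarray([[1], [2], [1]], dtype=np.float32)
vector_two = np.asarray([[3], [0], [1]], dtype=np.float32)

# Matrix-vector multiplication: a 1D array of shape (3,) with a (3, 1) column
mv = np.dot(vector_one[:, 0], vector_two)
print(mv, mv.shape)   # [4.] (1,)

# Matrix multiplication: a (1, 3) matrix with a (3, 1) matrix gives a (1, 1) matrix
mm = np.dot(vector_one.T, vector_two)
print(mm, mm.shape)   # [[4.]] (1, 1)
```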

Thanks for your response @WilfriedF

Can you please explain one more thing

The column vector has shape (3,1)

Going by that, a row vector should have shape (1, 3), which is the case when I use the transpose.

What does the shape (3,) mean when we do **vector_one[:,0]**, and given the different dimensions, how come both work in the above code?

Hi @vinayak.naik87,

I agree this is confusing. And now I am a little bit in trouble too!

If you want to take the dot product with the 2d array after using `T`, you will need to select the first row so that you have a row vector:

```
transposed = vector_one.T
dot_product = np.dot(transposed[0,:], vector_two)
```

Let’s print the shapes so maybe it will become more intuitive:

```
print(vector_one[:,0].shape)
print(vector_two.shape)
print(vector_one.T.shape)
print(vector_one.T[0,:].shape)
```

Output:

```
(3,)
(3, 1)
(1, 3)
(3,)
```

First case: dot product with (m,) and (m,n) => returns a 1d array of shape (n,)

Second case: dot product with (n,m) and (m,n) => returns a 2d array of shape (n,n)

What numpy is doing in the first case:

[3x1 + 0x2 + 1x1] = [4.]

What numpy is doing in the second case:

[[3x1 + 0x2 + 1x1]] = [[4.]]
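To double-check the two shape rules above with a made-up non-square example (the arrays of ones are just placeholders):

```python
import numpy as np

a = np.ones(3)        # shape (3,)   -- 1D array
b = np.ones((3, 4))   # shape (3, 4) -- 2D array
c = np.ones((4, 3))   # shape (4, 3) -- 2D array

print(np.dot(a, b).shape)   # (4,)   : (m,) . (m, n) -> (n,)
print(np.dot(c, b).shape)   # (4, 4) : (n, m) . (m, n) -> (n, n)
```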

I think we need other fellows for better explanations; please correct me if I have made any incorrect statements.

Actually you don’t need to do `transposed[0,:]` for the dot product. You can just do `np.dot(transposed, vector_two)` and still get the same result, the only difference being the dimension of the output.

I think the important takeaway from this would be:

- the dot product of a 1D array (m,) with a 2D (m,n) array returns a 1D array
- the dot product of a 2D array (n,m) with a 2D (m,n) array returns a 2D array

I need to understand the 1st case a bit better, since it looks counterintuitive from a matrix multiplication perspective.
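One way to think about the 1st case (a sketch of the rule, not numpy's official wording): numpy treats the 1D array as if it were a row vector, does the matrix multiplication, and then drops the extra dimension from the result:

```python
import numpy as np

v = np.array([1., 2., 1.])        # 1D, shape (3,)
M = np.array([[3.], [0.], [1.]])  # 2D, shape (3, 1)

direct = np.dot(v, M)                    # shape (1,)
# Equivalent, spelled out: promote v to a (1, 3) row, multiply, drop the row axis
manual = np.dot(v.reshape(1, -1), M)[0]  # also shape (1,)
print(direct, manual)   # [4.] [4.]
```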

I know, but I just wanted to highlight the fact that by doing `transposed[0,:]` you get the same output dimension as in the course.

Regarding the difference between (3,) and (3,1): think of a list vs. a list of lists.
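A tiny illustration of that analogy:

```python
import numpy as np

a = np.array([1, 2, 3])        # shape (3,)  -- behaves like a plain list
b = np.array([[1], [2], [3]])  # shape (3, 1) -- like a list of one-element lists
print(a.tolist())   # [1, 2, 3]
print(b.tolist())   # [[1], [2], [3]]
```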
