Why do I need to use np.dot() instead of scalar * matrix? (Matrix Algebra 6/11)

Screen Link:

My Code:

import numpy as np

matrix_a = np.asarray([
    [1.5, 3],
    [1, 4]
])

def matrix_inverse_two(matrix):
    det = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0]
    # Check invertibility before building the adjugate
    if det == 0:
        raise ValueError("The matrix isn't invertible")
    # Adjugate: swap the diagonal entries, negate the off-diagonal ones
    adjugate = np.asarray([
        [matrix[1][1], -matrix[0][1]],
        [-matrix[1][0], matrix[0][0]]
    ])
    return np.dot(1/det, adjugate)

inverse_a = matrix_inverse_two(matrix_a)
i_2 = np.dot(inverse_a, matrix_a)

What I expected to happen:

What actually happened:

np.dot(1/det, matrix) doesn’t make sense to me, because I’m not computing a dot product here: 1/det is a scalar, not a matrix or a vector.

But without using it, I couldn’t get the exercise to pass.


According to the documentation, np.dot() does accept scalar operands. That said, I think you’re right that it shouldn’t be used this way. From the numpy.dot docs:

If either a or b is 0-D (scalar), it is equivalent to multiply and using numpy.multiply(a, b) or a * b is preferred.
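As a quick sanity check (a minimal sketch using the same 2x2 matrix from the exercise), all three forms produce the same result when one argument is a scalar, which is why the lesson's np.dot(1/det, matrix) works even though scalar multiplication is the clearer spelling:

```python
import numpy as np

matrix = np.asarray([
    [1.5, 3],
    [1, 4]
])
det = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0]

# With a 0-D (scalar) first argument, np.dot falls back to elementwise multiply
a = np.dot(1 / det, matrix)
b = (1 / det) * matrix           # preferred: plain scalar * matrix
c = np.multiply(1 / det, matrix)  # explicit elementwise multiply

print(np.allclose(a, b) and np.allclose(b, c))  # True
```

So the result is identical; the docs simply recommend `a * b` or `np.multiply` for readability when a scalar is involved.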
