Both methods serve different purposes.
Min-max normalization only rescales values into a fixed range (typically [0, 1]); it does not change the shape of the distribution, so right-skewed data remains right-skewed.
Z-score standardization likewise preserves the shape of the distribution: it shifts and rescales the data to zero mean and unit standard deviation, but it does not make skewed data normally distributed. (That is a common misconception; only a nonlinear transform such as a log or Box-Cox transform can change the shape.)
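A quick sketch with NumPy makes the point: both transforms are linear, so the skewness of right-skewed synthetic data is identical before and after either one. The `skewness` helper below is a hypothetical name for illustration (the standardized third moment).

```python
import numpy as np

def skewness(a):
    """Sample skewness: standardized third central moment."""
    a = np.asarray(a, dtype=float)
    m = a.mean()
    return ((a - m) ** 3).mean() / a.std() ** 3

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)   # right-skewed data

z = (x - x.mean()) / x.std()                  # z-score standardization
mm = (x - x.min()) / (x.max() - x.min())      # min-max normalization

# Both are linear transforms, so skewness is unchanged.
print(skewness(x), skewness(z), skewness(mm))
```

The three printed skewness values agree (up to floating-point noise), even though `z` now has mean 0 and standard deviation 1 and `mm` lies in [0, 1].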
Some machine learning algorithms assume features with zero mean and unit variance (for example, PCA or models trained with gradient descent and L2 regularization), and for those z-score standardization is the right choice.
If you just want to bring features of very different magnitudes onto a common scale, min-max normalization works well. For example, if one column holds values in the thousands and another in the millions, min-max maps both into the same range, such as [0, 1].
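To illustrate that last case, here is a minimal sketch with two hypothetical columns on very different scales; after min-max normalization both land in [0, 1] and become directly comparable:

```python
import numpy as np

def min_max(a):
    """Linearly rescale an array into the range [0, 1]."""
    a = np.asarray(a, dtype=float)
    return (a - a.min()) / (a.max() - a.min())

# Hypothetical columns: one in the thousands, one in the millions.
revenue = np.array([1_200.0, 3_400.0, 2_100.0, 5_000.0])
population = np.array([1.5e6, 3.2e6, 0.8e6, 4.1e6])

print(min_max(revenue))
print(min_max(population))
```

Both outputs now span exactly [0, 1], so the two features carry comparable weight in distance-based methods such as k-nearest neighbors.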