The usual usage pattern is pipeline.fit_transform on the training set, then pipeline.transform on the test set.
You can split fit_transform into separate fit and transform calls on the training set; it's just unnecessarily verbose and more prone to programming mistakes.
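A minimal sketch of that pattern (the dataset and the StandardScaler step are just illustrative assumptions): fit the preprocessing on the training set only, then reuse the learned parameters on the test set.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([("scaler", StandardScaler())])

X_train_t = pipeline.fit_transform(X_train)  # learn mean/std from the training data and transform it
X_test_t = pipeline.transform(X_test)        # reuse the training statistics on the test data

# Equivalent but longer: pipeline.fit(X_train) followed by pipeline.transform(X_train)
```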
This paper explains the design considerations of sklearn and the terms it uses (transformer, estimator, predictor): https://arxiv.org/pdf/1309.0238.pdf
You can stop at fit if you're just studying the statistics of the data, but usually you'd want to feed the transformed data into a model to fit and predict. Some models require preprocessed data, not to run without error, but to give meaningful results. So you transform the data with the fitted preprocessing transformer first, then fit the model and predict with it. (As the paper above defines the terms: an estimator is any object with a fit method, a transformer is an estimator that also implements transform, and a predictor is an estimator that also implements predict.)
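A minimal sketch of that preprocess-then-model flow (the iris dataset, StandardScaler, and SVC are illustrative choices, not the only ones): the scaler is a transformer, the SVC is a predictor, and both are estimators because both implement fit.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit the transformer, then transform the training data
X_test_scaled = scaler.transform(X_test)        # transform only, reusing the training statistics

model = SVC()                                   # an SVM gives much more meaningful results on scaled features
model.fit(X_train_scaled, y_train)              # fit the predictor on the preprocessed data
print(model.score(X_test_scaled, y_test))       # predict/score on the preprocessed test data

# The same flow wrapped in a Pipeline, so fit/predict run the transform step for you:
clf = Pipeline([("scaler", StandardScaler()), ("svc", SVC())])
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

Wrapping both steps in a single Pipeline, as in the last three lines, is usually preferable because it guarantees the test data only ever goes through transform, never fit.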