The global minimum is not the target in machine learning

Contrary to what many believe, reaching the global optimum via gradient descent is NOT the goal of supervised learning. Driving the empirical risk all the way to its global minimum would virtually guarantee overfitting: the model would memorize the training set, noise included. A good local minimum is more than enough, and in practice we stop training early, when the loss on held-out validation data starts to increase.
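The idea above is usually implemented as early stopping. Below is a minimal sketch on a hypothetical toy regression task (all names and hyperparameters are illustrative, not from the original text): gradient descent minimizes the training loss, but we monitor a held-out validation loss and stop once it fails to improve for a few consecutive steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: y = 2x + noise, split into training and validation sets.
X = rng.uniform(-1, 1, size=200)
y = 2.0 * X + rng.normal(scale=0.3, size=200)
X_tr, y_tr = X[:150], y[:150]
X_va, y_va = X[150:], y[150:]

def mse(w, X, y):
    """Mean squared error of the one-parameter model y_hat = w * x."""
    return float(np.mean((w * X - y) ** 2))

w = 0.0        # single weight, initialised at zero
lr = 0.05      # learning rate
patience = 5   # stop after this many non-improving validation steps

best_w, best_loss = w, mse(w, X_va, y_va)
bad_steps = 0

for step in range(1000):
    grad = np.mean(2 * (w * X_tr - y_tr) * X_tr)  # dMSE/dw on the training split
    w -= lr * grad
    val_loss = mse(w, X_va, y_va)
    if val_loss < best_loss:
        best_w, best_loss = w, val_loss  # keep the best weights seen so far
        bad_steps = 0
    else:
        bad_steps += 1
        if bad_steps >= patience:  # validation loss stopped improving
            break

print(f"stopped at step {step}, best w = {best_w:.2f}")
```

Note that the final model uses the weights from the best validation step, not the last one: training continues for `patience` extra steps only to confirm that validation loss is no longer improving.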