Selection of the optimal learning rate for training neural networks has long been a concern for the machine learning community. Existing learning-rate schemes depend on multiple scaling factors. This paper proposes Cost-Responsive Learning (CoRL), which requires no manual hyper-parameter tuning. The learning rate maintains a linear relationship with the prediction error of the neural network, which is expected to yield the lowest learning rate at the global minimum and higher learning rates elsewhere. Hence, a value proportional to the prediction error is used as the learning rate, subject to the constraint that it lies within an acceptable range (here [0,1]). The derivation of an optimal learning rate from a given cost function is illustrated with the popular binary and categorical cross-entropy cost functions. Experiments performed under multiple settings demonstrate that the CoRL optimizer needs no parameter tuning to obtain state-of-the-art results, with significantly lower training time for equivalent performance.
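A minimal sketch of the idea described above, assuming a proportionality constant of one and a plain gradient-descent update (the function names and the update rule are illustrative; the paper derives the exact rate from the cost function itself):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy, one of the cost functions named in the abstract."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

def corl_step(params, grads, cost):
    """One gradient step with a cost-responsive learning rate.

    The rate is taken proportional to the current cost (constant of 1
    assumed here for illustration) and clipped to the acceptable range
    [0, 1] stated in the abstract, so the step size shrinks as the
    cost approaches zero near a minimum.
    """
    lr = float(np.clip(cost, 0.0, 1.0))
    return [p - lr * g for p, g in zip(params, grads)]

# Illustration: a higher cost yields a larger step, a lower cost a smaller one.
cost = binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.2]))
params = corl_step([np.zeros(3)], [np.ones(3)], cost)
```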