This paper makes a practical contribution to classification in credit risk assessment by providing empirical evidence on which class distribution in the training sample should be used to maximize the performance of different classification algorithms. Data used for credit risk assessment are often imbalanced because defaulting clients, who represent the minority class, are far less numerous than the non-defaulting majority. Classification algorithms are usually biased towards the majority class and can show deceptively high overall prediction accuracy while exhibiting poor performance in predicting the minority class. Although altering the class distribution can be an effective way to alleviate the adverse impact of class imbalance, limited research effort has been devoted to empirically evaluating the role of class distribution in credit risk assessment, especially with real-life data samples. To address this issue, an empirical study is presented on how the proportion of training examples belonging to each class affects classifier performance. In addition to logistic regression, neural networks and gradient boosting were evaluated on several real-life, publicly available datasets from the UCI Machine Learning Repository.
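The core operation the study varies, altering the class distribution of the training sample, can be sketched as random undersampling of the majority class to reach a target minority proportion. This is a minimal illustrative helper, not the paper's method: the function name, its parameters, and the choice of undersampling (rather than oversampling) are assumptions for the example.

```python
import random

def resample_to_ratio(majority, minority, minority_fraction, seed=0):
    """Undersample the majority class so that the minority class makes up
    roughly `minority_fraction` of the returned training sample.
    Illustrative sketch only; names and API are assumptions."""
    rng = random.Random(seed)
    # Solve minority / (minority + majority) == minority_fraction
    # for the target majority size, capped at the available examples.
    target_majority = int(len(minority) * (1 - minority_fraction) / minority_fraction)
    target_majority = min(target_majority, len(majority))
    sampled_majority = rng.sample(majority, target_majority)
    return sampled_majority + list(minority)

# Example: 900 non-defaulters, 100 defaulters, rebalanced to ~30% defaulters.
majority = [("client", 0)] * 900
minority = [("client", 1)] * 100
train = resample_to_ratio(majority, minority, 0.30)
```

In an experiment like the one described, the same base sample would be resampled to several such proportions (e.g. 10%, 30%, 50% minority) and each classifier retrained and evaluated at each setting.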