With the prevalence of machine learning in high-stakes decision-making processes such as hiring and admission, it is important for practitioners to take fairness into consideration when designing and deploying machine learning models. Although many approaches have been developed for fair machine learning, most focus on classification. In this paper, we target a notable but under-explored task, selection, where the number of selected individuals cannot exceed a pre-defined budget, as in employee hiring or university admission with limited positions or capacity. Existing fairness notions designed for classification are not suitable for the selection task. In particular, our experimental results show that selection models trained subject to common fairness notions may still make biased predictions against the underrepresented group. Hence, we propose a novel fairness notion, Selection Parity, which captures demographic diversity among the selected groups in this budget-constrained selection problem. Since selecting qualified individuals under a fixed budget is non-differentiable, existing fairness regularization terms cannot be directly integrated into the selection task. To close this gap, we develop a novel in-processing framework, Fair Selection with the Differentiable Distribution Difference constraint (FS-DD), which incorporates a differentiable constraint into the training process and produces fair decisions for selection problems. Our theoretical analysis shows that common fairness metrics are bounded by the proposed Distribution Difference measure; in other words, the FS-DD framework can guarantee fairness with respect to these common metrics. We evaluate the performance of our method as well as several baselines on four real-world datasets. The experimental results demonstrate that the proposed method achieves fairness across various selection settings.
In addition, the proposed method attains a better fairness-accuracy trade-off than existing baselines.