Dental caries, the most prevalent oral disease, poses a significant healthcare challenge. Deep Neural Network (DNN)-based object detection techniques offer promising solutions for improving the efficiency of dental caries diagnosis. It is widely acknowledged that the performance of DNN models relies heavily on the availability of sufficient, accurately labeled data. However, the collection and annotation of dental X-ray images face obstacles due to privacy concerns and the need for specialized expertise. Consequently, limited access to labeled dental image datasets restricts the potential of DNNs in supporting oral and dental healthcare. Self-Training (ST) is a semi-supervised machine learning approach that largely addresses this problem: it repeatedly trains a model on the labeled dataset, applies the model to generate pseudo labels for the unlabeled dataset, and then trains new models on the combined data with the original and pseudo labels. However, latent errors in the pseudo labels can arise and even be amplified throughout the ST pipeline, leading to a significant performance decline for DNN models. In this paper, we propose a Greedy algorithm-based Self-Training (Greedy-ST) pipeline to address this problem. At each iteration, Greedy-ST selects an optimal confidence threshold to generate predictions as pseudo labels, and uses static fine-tuning (SFT) and dynamic fine-tuning (DFT) to refine them. Experimental results demonstrate that, using the pseudo labels generated by the Greedy-ST pipeline, the selected baseline model achieves better performance than with the pseudo labels generated by the vanilla ST approach.
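To make the iterative procedure concrete, the following is a minimal, self-contained sketch of a confidence-thresholded self-training loop with greedy threshold selection on a held-out validation set. The nearest-mean "model", the candidate thresholds, and all helper names are illustrative assumptions for a 1-D toy task; they stand in for the paper's DNN object detector and do not reproduce the SFT/DFT refinement steps.

```python
def train(examples):
    # Toy "model": class-conditional means of a 1-D feature (nearest-mean classifier).
    return {label: sum(x for x, y in examples if y == label) /
                   sum(1 for _, y in examples if y == label)
            for label in (0, 1)}

def predict(model, x):
    # Return (label, confidence); confidence grows with the margin between distances.
    d0, d1 = abs(x - model[0]), abs(x - model[1])
    label = 0 if d0 < d1 else 1
    margin = abs(d0 - d1)
    return label, margin / (margin + 1.0)

def accuracy(model, examples):
    return sum(predict(model, x)[0] == y for x, y in examples) / len(examples)

def greedy_self_train(labeled, unlabeled, val, rounds=3,
                      thresholds=(0.3, 0.5, 0.7)):
    model = train(labeled)
    data = list(labeled)
    for _ in range(rounds):
        best = None
        # Greedy step: keep the confidence threshold whose pseudo labels
        # yield the best validation accuracy in this round.
        for t in thresholds:
            pseudo = [(x, lbl) for x in unlabeled
                      for lbl, conf in [predict(model, x)] if conf >= t]
            candidate = train(data + pseudo) if pseudo else model
            score = accuracy(candidate, val)
            if best is None or score > best[0]:
                best = (score, candidate, pseudo)
        _, model, kept = best
        data += kept  # retrain next round on original plus pseudo labels
    return model
```

A real pipeline would replace the toy classifier with the detection model, score pseudo boxes by detector confidence, and refine them before retraining; the greedy structure of the loop is the same.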