Traffic Light Recognition (TLR) aims to detect Traffic Lights (TLs) and classify the status of their light signals, and is an essential component of autonomous driving perception systems. However, it is challenging for existing TLR methods to accurately distinguish both the color and shape status of TLs due to small object sizes, illumination variations, close resemblance to other objects, and varying weather conditions. Existing public datasets for TLR have three main drawbacks: (i) poor diversity, (ii) sample imbalance, and (iii) insufficient category labels, which greatly hinder the development of TLR. To overcome these problems, we propose a Robust Traffic Light Recognition Pipeline based on YOLOv8 (RTLRP-YOLO) that recognizes TLs accurately and robustly from adaptively generated high-quality images. Specifically, we develop a Self-Adaptive Preprocessing Module (SAPM) designed to adaptively generate high-quality images under hostile conditions, followed by a Two-stage Traffic Light Recognition Model based on YOLOv8 (TTRM) to obtain both the location and status information of TLs. Moreover, we provide our self-made Tongji Small Traffic Light Dataset (TSTLD), covering a variety of weather conditions, regions, light intensities, and shooting angles. To the best of our knowledge, our proposed method is the first capable of simultaneously identifying three colors (i.e., red, yellow, and green) and four shapes (i.e., circle, left arrow, right arrow, and up arrow) of TLs, achieving 95.43% accuracy on TSTLD with an inference time of 26 ms per image.