Traditional visual SLAM methods are susceptible to moving objects in dynamic environments, which degrades the system's localization accuracy. To address this problem, this paper presents a dynamic visual SLAM system named YOLOv5s-SLAM. YOLOv5s is embedded in the ORB-SLAM3 system to recognize dynamic objects in the surrounding environment via deep learning, running in parallel with ORB feature extraction at the SLAM front end to preserve real-time performance. Combined with the constructed dynamic-object detection rules, dynamic feature points are identified and removed, which effectively reduces mismatches during feature extraction and their impact on system performance, thereby optimizing the localization performance of visual SLAM. Experiments on the TUM dataset show that the proposed method achieves smaller trajectory errors and higher pose estimation accuracy, effectively enhancing localization robustness.
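As a rough illustration of the dynamic-point removal described above, the following sketch (not the authors' implementation) discards ORB feature points whose pixel coordinates fall inside a detection box belonging to an assumed dynamic class; the box format, class list, and keypoint tuples are all illustrative assumptions:

```python
# Hypothetical set of object classes treated as dynamic (assumption).
DYNAMIC_CLASSES = {"person", "car", "bicycle"}

def filter_dynamic_points(keypoints, detections):
    """keypoints: list of (u, v) pixel coordinates from the ORB extractor.
    detections: list of (class_name, x1, y1, x2, y2) boxes from the detector.
    Returns only the keypoints lying outside every dynamic-class box."""
    dynamic_boxes = [(x1, y1, x2, y2)
                     for cls, x1, y1, x2, y2 in detections
                     if cls in DYNAMIC_CLASSES]

    def is_static(pt):
        u, v = pt
        # A point is kept only if it falls in no dynamic-object box.
        return not any(x1 <= u <= x2 and y1 <= v <= y2
                       for x1, y1, x2, y2 in dynamic_boxes)

    return [pt for pt in keypoints if is_static(pt)]

# Example: the point inside the "person" box is removed; the point inside
# the "chair" box survives because "chair" is not a dynamic class.
kps = [(50, 50), (200, 120), (400, 300)]
dets = [("person", 150, 100, 260, 240), ("chair", 380, 280, 420, 320)]
print(filter_dynamic_points(kps, dets))  # [(50, 50), (400, 300)]
```

The surviving (static) points would then be passed to the tracking and optimization back end as usual; the paper's actual rules also distinguish potentially dynamic from truly dynamic points, which this sketch omits.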