Simultaneous Localization and Mapping (SLAM) is a fundamental technology for intelligent mobile robots and virtual reality, and many impressive SLAM systems have been proposed, including ORB-SLAM, LSD-SLAM, and SVO. However, most of them assume a static environment, which severely restricts their use in real dynamic scenes. With the widespread adoption of deep learning, a number of deep-learning-based solutions have been proposed that use semantic information to reduce the impact of dynamic objects on the system. Among them are methods built on Mask R-CNN, but these typically run slowly and limit the overall speed of SLAM. We therefore propose DYS-SLAM, a real-time SLAM system for dynamic scenes based on YOLOv7. By integrating the lightweight and fast object detection network YOLOv7, the system improves the accuracy of ORB-SLAM2 and the speed of dynamic object detection in dynamic environments. Finally, we evaluate DYS-SLAM on the TUM RGB-D public dataset. The experimental results show that, for highly dynamic scenes, the RMSE and S.D. of both the absolute pose error and the relative pose error of DYS-SLAM improve by more than 90% relative to ORB-SLAM2; for low-dynamic scenes, the improvement is about 10%.
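The abstract does not specify how the YOLOv7 detections are used inside the ORB-SLAM2 pipeline; a common approach in semantic SLAM, sketched below under that assumption, is to discard feature points that fall inside bounding boxes of potentially dynamic objects before tracking. All names here (`Box`, `filter_keypoints`, `DYNAMIC_CLASSES`) are illustrative and not taken from the DYS-SLAM source.

```python
# Hypothetical sketch: reject ORB keypoints inside YOLOv7 bounding boxes
# of potentially dynamic classes, so only (presumed) static points feed
# into pose estimation. Not the authors' actual implementation.
from dataclasses import dataclass
from typing import List, Tuple

# Assumed set of classes treated as dynamic; the real system may differ.
DYNAMIC_CLASSES = {"person", "car", "bicycle"}

@dataclass
class Box:
    cls: str                                  # detected class label
    x1: float; y1: float; x2: float; y2: float  # box corners in pixels

def filter_keypoints(keypoints: List[Tuple[float, float]],
                     boxes: List[Box]) -> List[Tuple[float, float]]:
    """Keep only keypoints that lie outside every dynamic-object box."""
    dynamic = [b for b in boxes if b.cls in DYNAMIC_CLASSES]
    def is_static(pt: Tuple[float, float]) -> bool:
        x, y = pt
        return not any(b.x1 <= x <= b.x2 and b.y1 <= y <= b.y2
                       for b in dynamic)
    return [pt for pt in keypoints if is_static(pt)]
```

Because this is a simple per-point box test, it adds negligible cost per frame, which is consistent with the paper's emphasis on keeping the detection stage fast enough for real-time operation.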