Visual simultaneous localization and mapping (SLAM) is essential for robots to localize themselves and adapt to new environments and is therefore widely used in robotics. However, accurate pose estimation and map consistency remain challenging in dynamic environments. In addition, building dense scene maps is critical for spatial artificial intelligence (AI) applications such as visual localization and navigation. We propose DN-SLAM, a visual SLAM system with ORB features and NeRF mapping for dynamic environments, built on oriented FAST and rotated BRIEF (ORB)-SLAM3. DN-SLAM uses ORB features for tracking, applies semantic segmentation to identify potentially moving objects, and combines optical flow with the Segment Anything Model (SAM) to refine the segmentation, culling features on dynamic objects to improve the performance of the SLAM system in dynamic environments. Meanwhile, a neural radiance field (NeRF) removes dynamic objects and renders a dense 3-D map of the static scene. We conducted experiments on the Technical University of Munich (TUM) RGB-D dataset and the Bonn dataset and compared our results with state-of-the-art dynamic SLAM algorithms. The results show that, compared with ORB-SLAM3, DN-SLAM significantly improves trajectory accuracy in highly dynamic environments, achieves more accurate localization than other advanced dynamic SLAM methods, and successfully reconstructs static scenes in 3-D.
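The abstract describes culling features on dynamic objects using optical-flow cues. As a rough illustration of that idea (not the paper's actual pipeline, which combines semantic segmentation with SAM), the sketch below flags ORB features as dynamic when their optical-flow displacement deviates strongly from the dominant scene motion; the function name, the median-flow heuristic, and the threshold are all assumptions introduced here for illustration.

```python
# Illustrative sketch, NOT the DN-SLAM implementation: classify feature
# points as dynamic when their optical-flow vector deviates from the
# dominant (median) camera-induced motion by more than a threshold.

def _median(values):
    """Median of a list of floats (pure-Python helper)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def cull_dynamic_features(flows, threshold=2.0):
    """flows: list of (dx, dy) optical-flow vectors, one per ORB feature.

    Returns (static_indices, dynamic_indices): features whose residual
    motion relative to the median flow exceeds `threshold` (pixels) are
    treated as lying on dynamic objects and excluded from tracking.
    """
    mx = _median([dx for dx, _ in flows])
    my = _median([dy for _, dy in flows])
    static, dynamic = [], []
    for i, (dx, dy) in enumerate(flows):
        # Residual motion after removing the dominant scene flow.
        residual = ((dx - mx) ** 2 + (dy - my) ** 2) ** 0.5
        (dynamic if residual > threshold else static).append(i)
    return static, dynamic

# Three features consistent with camera motion, one on a moving object.
flows = [(1.0, 0.1), (0.9, 0.0), (1.1, -0.1), (10.0, 5.0)]
static_idx, dynamic_idx = cull_dynamic_features(flows)
# static_idx == [0, 1, 2], dynamic_idx == [3]
```

In the actual system, segmentation masks rather than a simple flow threshold decide which features are discarded; this sketch only conveys why motion consistency helps separate static from dynamic points.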