In this study, we propose a wearable guiding system that helps visually impaired people walk outdoors. The system consists of an embedded computer, the Nvidia Jetson AGX Xavier, and an RGB-D binocular depth camera, the Stereolabs ZED2. Using a deep learning image segmentation model together with the depth map obtained from the ZED2, the image in front of the user is divided into seven divisions, and a confidence of walkability is computed for each division by our proposed methods. Based on these confidences, the most suitable walking direction is selected, and voice prompts are played to guide the user forward along the sidewalk or across a crosswalk safely. An experiment is performed to verify the effectiveness of the proposed system.
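The direction-selection idea described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual confidence computation: it assumes the segmentation model yields a binary walkable-pixel mask, splits it into seven vertical divisions, and scores each division by its fraction of walkable pixels (the function names and the scoring rule are hypothetical).

```python
import numpy as np

def walkability_confidences(walkable_mask, num_divisions=7):
    """Split a binary walkable-pixel mask into vertical divisions and
    score each division by its fraction of walkable pixels.
    Illustrative scoring only; the paper's confidence method differs.
    """
    h, w = walkable_mask.shape
    # Division boundaries along the image width.
    bounds = np.linspace(0, w, num_divisions + 1, dtype=int)
    return [float(walkable_mask[:, bounds[i]:bounds[i + 1]].mean())
            for i in range(num_divisions)]

def best_direction(confidences):
    """Pick the division with the highest walkability confidence."""
    return int(np.argmax(confidences))

# Toy 10x70 mask: only the fourth division (index 3) is walkable.
mask = np.zeros((10, 70))
mask[:, 30:40] = 1
conf = walkability_confidences(mask)
print(best_direction(conf))  # division index pointing to the walkable area
```

In a full system, this index would then be mapped to a voice prompt (e.g. "veer slightly left"), with the depth map used to veto divisions containing nearby obstacles.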