Relocalization is a fundamental component of a mobile robot positioning system: through relocalization, the robot obtains its initial pose in a known map. However, commonly used visual relocalization schemes are susceptible to changes in lighting, weather, and season. This paper therefore proposes a relocalization method based on vision and LiDAR sensor fusion. By fully exploiting the complementary strengths of the two sensors, the method achieves good relocalization performance even in scenarios where an individual sensor degrades. The algorithm consists of three parts: visual relocalization, LiDAR relocalization, and point cloud registration verification. Visual relocalization uses the Bag-of-Words method, LiDAR relocalization uses the Scan Context method, and point cloud registration verification uses ICP or NDT. Notably, each part of the algorithm is an independent module, so it can easily be replaced by another method, which greatly improves the flexibility of the algorithm. We evaluated the algorithm on the KITTI, M2DGR, and NCLT datasets, and the experimental results show that visual-LiDAR fusion relocalization substantially outperforms visual-only relocalization. When environmental factors such as lighting, weather, and season change, visual-LiDAR fusion relocalization still operates robustly, which is key to long-term robot positioning.
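The modular structure described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names `Candidate`, `fuse_and_verify`, and the stage callables are hypothetical, and the actual Bag-of-Words, Scan Context, and ICP/NDT components are assumed to be supplied as interchangeable callables, which is what makes each stage replaceable.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Candidate:
    """A loop-closure candidate returned by a place-recognition module."""
    frame_id: int   # index of the matched keyframe in the map
    score: float    # similarity score from the retrieval method

def fuse_and_verify(
    visual_retrieve: Callable[[], List[Candidate]],   # e.g. Bag-of-Words
    lidar_retrieve: Callable[[], List[Candidate]],    # e.g. Scan Context
    verify: Callable[[int], Optional[Tuple[float, ...]]],  # e.g. ICP/NDT pose
) -> Optional[Tuple[int, Tuple[float, ...]]]:
    """Pool candidates from both sensors, try them best-score first, and
    return the first one whose point-cloud registration check succeeds."""
    candidates = sorted(visual_retrieve() + lidar_retrieve(),
                        key=lambda c: c.score, reverse=True)
    for c in candidates:
        pose = verify(c.frame_id)
        if pose is not None:      # registration converged: accept this match
            return c.frame_id, pose
    return None                   # relocalization failed for this query
```

Because each stage is just a callable, swapping Scan Context for another LiDAR descriptor, or ICP for NDT, only changes the argument passed in, mirroring the flexibility claimed for the algorithm.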