In general, motion vector information stems primarily from moving objects, while static objects contribute little to the estimation task. Traditional scene flow methods for motion vector estimation rely on deep models that extract features from individual points at high computational cost, then recover flow through complex matching mechanisms or feature decoding. Such approaches are computationally expensive, exhibit substantial latency, and neglect both the importance of moving objects in motion vector estimation and the interference from static objects. This paper therefore introduces a novel method that first performs point cloud motion segmentation and subsequently estimates motion vectors, leveraging point clouds annotated with moving objects to estimate three-dimensional scene flow more effectively. Motion segmentation yields annotations of moving objects, allowing the estimator to concentrate on the more challenging motion vectors. In experiments on the KITTI dataset, the proposed method outperforms existing scene flow estimation methods: without accounting for motion segmentation errors, the motion direction error is only 0.0363 m/s, and the three-dimensional endpoint error (EPE3D) is 0.076 m, demonstrating clear advantages over current scene flow networks.
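The EPE3D figure quoted above is the standard endpoint-error metric for scene flow: the Euclidean distance between each predicted and ground-truth 3D flow vector, averaged over all points. A minimal NumPy sketch (the function name `epe3d` and the toy flow fields are illustrative, not from the paper):

```python
import numpy as np

def epe3d(pred_flow: np.ndarray, gt_flow: np.ndarray) -> float:
    """Mean Euclidean distance (in metres) between predicted and
    ground-truth 3D flow vectors, shape (N, 3) each."""
    return float(np.mean(np.linalg.norm(pred_flow - gt_flow, axis=1)))

# Toy example: two points, each off by 0.1 m along one axis.
gt = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
pred = np.array([[1.1, 0.0, 0.0], [0.0, 1.9, 0.0]])
print(round(epe3d(pred, gt), 3))  # → 0.1
```

A method's EPE3D of 0.076 m thus means that, on average, each predicted per-point motion vector ends within 7.6 cm of its ground-truth endpoint.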