Severe occlusion among parts in industrial stacking scenes poses a significant challenge to automatic sorting. This paper proposes a 3D vision-guided robotic grasping method to solve the part-sorting problem in stacking scenes. First, a large-scale synthetic dataset is generated automatically by simulation to train a Point-wise Pose Regression Network (PPRNet), which extends PointNet++ with Hough voting. The trained PPRNet then performs instance segmentation and pose estimation simultaneously. Finally, based on the estimated poses of the segmented instances in the stacking scene, a grasp pose selection model is established to determine the optimal grasp target and the corresponding grasp pose. We carried out 200 grasping experiments on random stacking scenes of single-class objects and obtained a success rate of 78.0%. The results show that PPRNet improves the efficiency and stability of the robotic grasping method, which basically meets the requirements of industrial application.