The perceptual information acquired by a single vehicle-side LiDAR in autonomous driving is limited, and this limitation is especially pronounced at intersections where vehicles are turning. Existing solutions improve vehicle perception by designing complex systems that match homogeneous point clouds acquired by sensors of the same type. In this study, we propose a heterogeneous point cloud registration framework for vehicle-infrastructure collaboration (HPCR-VI) that supplements the missing sensory information of the vehicle-side mechanical LiDAR with point clouds acquired by an infrastructure-side solid-state LiDAR. HPCR-VI removes the restriction to homogeneous point clouds and quickly aligns two frames of heterogeneous point clouds whose densities and viewing angles differ greatly, solving a registration problem on which traditional point cloud alignment methods fail. Evaluated on the DAIR-V2X dataset, our method achieves a registration success rate 40-50 percentage points higher than that of the baseline method.
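To make the registration objective concrete, the sketch below shows the classical closed-form rigid-alignment step (the Kabsch/SVD solution) that underlies ICP-style point cloud registration. This is a generic illustration, not the HPCR-VI method: it assumes point correspondences between the two clouds are already known, which is precisely the assumption that breaks down for heterogeneous clouds with very different densities and viewing angles. All function and variable names here are illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||
    over corresponding point pairs (Kabsch/SVD solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Demo: recover a known rotation (30 deg about z) and translation
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
dst = src @ R_true.T + t_true

R_est, t_est = rigid_align(src, dst)
residual = np.linalg.norm(src @ R_est.T + t_est - dst)
```

With exact correspondences the residual is numerically zero; in vehicle-infrastructure settings the hard part is establishing those correspondences across heterogeneous sensors in the first place.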