Light detection and ranging (LiDAR) plays a vital role in autonomous driving applications. Due to the enormous volume of point cloud data acquired by LiDAR, point cloud compression schemes are desirable under limited storage and transmission bandwidth. This paper proposes an improved coarse-to-fine motion estimation scheme that significantly reduces the temporal redundancy among consecutive LiDAR point clouds by combining registration-based global motion estimation with block-based local motion estimation. First, in the global motion estimation stage, an efficient scheme is introduced to segment the original point cloud into a ground level and an object level. An ICP-based registration method is then applied to the extracted object point cloud to estimate the global motion transformation. Second, an irregular prediction unit (PU) partition method is introduced in the block-based local motion estimation stage, which provides more flexibility for estimating complex motions. Experimental results show significant gains of the proposed scheme over the state-of-the-art test model (TMC13) of the MPEG Geometry-based PCC (G-PCC) standard. For dynamically acquired point clouds, average coding gains of 8.0% and 1.2% are obtained in lossy and lossless geometry coding, respectively.
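As a minimal illustration of the ground/object segmentation step that precedes the ICP-based registration, the Python sketch below splits a point cloud by a simple height threshold. This is a hedged simplification: the paper's segmentation scheme is more elaborate, and the threshold value used here is an assumption, not a parameter from the paper.

```python
def segment_ground(points, z_threshold=-1.5):
    """Split a LiDAR point cloud into ground-level and object-level points.

    A simplified stand-in for the segmentation scheme described in the
    abstract: points at or below `z_threshold` (an assumed sensor-height
    offset, in metres) are labelled ground; everything above is treated as
    object points, which would feed the ICP-based global motion estimation.
    """
    ground = [p for p in points if p[2] <= z_threshold]
    objects = [p for p in points if p[2] > z_threshold]
    return ground, objects


# Tiny synthetic cloud: two near-ground returns and two elevated returns.
cloud = [(0.0, 0.0, -1.7), (2.0, 1.0, -1.6), (3.0, 4.0, 0.5), (5.0, 2.0, 1.2)]
ground, objects = segment_ground(cloud)
```

In practice the split is what makes global motion estimation tractable: ground points dominate the cloud but carry little motion information, so registering only the object points keeps the ICP step both cheaper and better conditioned.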