In this paper, we present a hybrid-residual-based odometry approach that uses observations from a monocular camera and sparse depth information from a LiDAR. Our approach takes full advantage of the complementary information in the image and the laser scan to build an accurate odometry. It utilizes both reprojected and photometric features, and the reprojected and photometric residuals are jointly minimized. To enhance accuracy and robustness, occluded points are explicitly filtered during the odometry process. Experiments on both real-world data and open-source datasets are conducted to evaluate the accuracy and robustness of the proposed odometry algorithm. The results suggest that our approach achieves competitive localization accuracy.
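To illustrate the idea of jointly minimizing geometric (reprojected) and photometric residuals in a single least-squares problem, the following is a minimal toy sketch. It is not the paper's method: it estimates only a 2D translation on synthetic data, with a hypothetical analytic intensity function standing in for real image interpolation, and an assumed weight `w_photo` balancing the two residual types.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t_true = np.array([0.4, -0.2])  # ground-truth 2D motion to recover

# Reprojected features: points observed in both frames (toy stand-in
# for LiDAR points projected into the camera image).
pts_ref = rng.uniform(-1, 1, size=(20, 2))
pts_cur = pts_ref + t_true + rng.normal(0, 0.01, size=(20, 2))

def intensity(p):
    # Smooth synthetic image intensity (hypothetical; a real system
    # would sample and interpolate actual pixel values).
    return np.sin(3.0 * p[:, 0]) * np.cos(2.0 * p[:, 1])

def residuals(t, w_photo=0.5):
    # Geometric residuals: predicted vs. observed feature locations.
    r_geo = (pts_ref + t - pts_cur).ravel()
    # Photometric residuals: brightness-constancy mismatch after
    # warping the reference points by the motion estimate t.
    r_photo = intensity(pts_ref + t) - intensity(pts_cur)
    # Stack both residual types into one joint least-squares problem.
    return np.concatenate([r_geo, w_photo * r_photo])

sol = least_squares(residuals, x0=np.zeros(2))
print(np.round(sol.x, 3))  # estimate should be close to t_true
```

Stacking the two residual vectors lets a single solver exploit both cues at once, which is the essence of the hybrid-residual formulation; the relative weight between the geometric and photometric terms is a design choice.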