Accurate localization is a key technology for automated mobile robot systems. Light detection and ranging (LiDAR) is widely used in simultaneous localization and mapping (SLAM) systems because of its stable and high-precision measurements. Feature extraction and motion constraint construction, the two core modules of feature-based SLAM, have attracted extensive research attention in recent years. However, existing methods mostly improve the two modules separately, ignoring the interaction between features and motion constraints. To this end, this article constructs a highly accurate and robust LiDAR SLAM system based on features with good motion observability. The method screens features with good motion observability by estimating each feature's unit contribution to the motion constraints on the six degrees of freedom (DoF). In addition, the reprojection constraint of each feature is weighted according to its cumulative contribution to the motion constraints on each DoF. Compared with traditional methods, this tight coupling between feature extraction and motion constraint construction reduces redundant features and constraints. The resulting balanced motion constraints effectively improve the robustness and accuracy of the proposed method, especially in feature-poor environments. Furthermore, feature vectors are introduced into the map, and the feature vectors of the current keyframe are verified against multiframe observations in the map, which improves the consistency and accuracy of the map and reduces the impact of feature vector errors on localization accuracy. The proposed SLAM system is tested in environments with sparse and inhomogeneously distributed features and compared with existing methods. The experimental results show that our method achieves higher accuracy and robustness.
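The abstract does not specify the exact screening procedure, but the idea of selecting features by their contribution to motion constraints on each DoF can be illustrated with a minimal sketch. Assuming planar features with point-to-plane constraints (a common choice in LiDAR SLAM, not confirmed by the abstract), each feature's 1x6 constraint Jacobian indicates how much it constrains each DoF; a greedy loop can then pick features that strengthen the currently weakest DoF. The function names and the greedy selection rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def constraint_jacobian(point, normal):
    """Row Jacobian of a point-to-plane residual w.r.t. a 6-DoF pose.

    For residual r = n^T (R p + t - q), linearized at the identity pose,
    dr/d[rotation, translation] ~= [ (p x n)^T , n^T ]  (a 1x6 row).
    """
    return np.hstack([np.cross(point, normal), normal])

def select_by_dof_contribution(points, normals, k):
    """Greedily pick k features, each time reinforcing the weakest DoF.

    Illustrative sketch only: the selection rule (argmin over cumulative
    per-DoF contribution) is an assumption, not the paper's exact criterion.
    """
    J = np.array([constraint_jacobian(p, n) for p, n in zip(points, normals)])
    cum = np.zeros(6)              # cumulative contribution per DoF
    remaining = list(range(len(points)))
    selected = []
    for _ in range(min(k, len(points))):
        weakest = int(np.argmin(cum))            # least-constrained DoF
        scores = np.abs(J[remaining, weakest])   # unit contribution to it
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        cum += J[best] ** 2                      # squared-Jacobian energy
        remaining.remove(best)
    return selected, cum
```

The cumulative per-DoF vector `cum` could also serve as the basis for the weighting step described in the abstract, e.g., down-weighting constraints on DoFs that are already well observed.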