Lane detection requires identifying a high-level semantic object with a slender structure, which demands both high-level features to determine lane existence and low-level features to determine lane shape and position. This study introduces a hierarchical lane detection framework (HLDNet) that establishes stronger coarse-to-fine relationships between global and local lane features to achieve stable and accurate lane detection. Specifically, we reformulate the existing lane representation by expressing each lane as a combination of a lane instance (for existence detection) and a lane point set (for shape detection). Our proposed Lane IoU loss improves model performance by regressing each lane instance as a whole. To enhance stability, we capture complete lanes with high-level features using multiple receptive fields. In addition, our model adaptively fuses high- and low-level features and propagates key features between channels, further enhancing local lane feature representation. Experimental results show significant improvements in the stability and accuracy of lane detection across multiple benchmark datasets, and the model runs in real time at 91 FPS.
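The idea of regressing a lane instance as a whole via an IoU-style loss can be sketched as follows. This is a minimal illustration, assuming lanes are represented as x-coordinates sampled at a fixed set of image rows and that each point is widened horizontally by a radius `radius`; the function names and the radius value are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def lane_iou(pred_x, gt_x, radius=7.5):
    """Illustrative line-style IoU between two lanes sampled at the
    same fixed rows. Each lane point is extended horizontally by
    `radius` pixels into a segment; IoU is the ratio of the summed
    per-row segment intersections to the summed per-row unions.
    (Sketch only; not the paper's exact definition.)"""
    pred_lo, pred_hi = pred_x - radius, pred_x + radius
    gt_lo, gt_hi = gt_x - radius, gt_x + radius
    # Per-row overlap of the two horizontal segments (clamped at 0).
    inter = np.clip(np.minimum(pred_hi, gt_hi) - np.maximum(pred_lo, gt_lo), 0.0, None)
    # Per-row span covered by either segment.
    union = np.maximum(pred_hi, gt_hi) - np.minimum(pred_lo, gt_lo)
    return inter.sum() / union.sum()

def lane_iou_loss(pred_x, gt_x, radius=7.5):
    # Regress the lane instance as a whole: 1 - IoU over all rows.
    return 1.0 - lane_iou(pred_x, gt_x, radius)
```

Because every sampled row contributes to a single scalar, the loss couples all points of a lane and penalizes the instance-level misalignment rather than each point independently.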