As smart cars and electric vehicles become mainstream, autonomous driving systems, and crucial components such as lane detection, grow increasingly important. However, even advanced solutions face daunting challenges under adverse environmental and lighting conditions, where detection results can vary widely. This variability prompts automakers to impose numerous restrictions on driver-assistance functions, often deterring potential customers from adopting those services. This study addresses these limitations by introducing a deep learning-based lane detection model designed to supersede existing technologies. The model fuses an efficient feature extraction network with a Recurrent Feature-Shift Aggregator (RESA) and a Bilateral Up-Sampling Decoder (BUSD); together, these components improve global information extraction and strengthen detection capability under challenging conditions. The methodology is divided into three parts: an Efficient Backbone and Neck for feature extraction, a Global Feature Enhancement Decoder for preserving feature map detail, and the implementation of a Lane Assist System. These modules jointly produce high-accuracy lane coordinates, overcoming the limitations of previous lane detection methods. Following the model's development and validation, the study extends the implementation to the Hexagon vehicle control platform, where the Lane Departure and Lane Keeping assistance systems are deployed. The proposed technology is validated through rigorous experiments and analyses on the widely recognized TuSimple dataset, custom-collected Taiwanese road-scene data, and actual road tests. Results demonstrate high, consistent detection performance across diverse environmental conditions, confirming the technology's effectiveness and feasibility.
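The recurrent feature-shift aggregation mentioned above can be illustrated with a simplified sketch. The snippet below is not the study's implementation; it is a hypothetical NumPy illustration of the core idea behind RESA-style aggregation: feature-map slices are repeatedly shifted along each spatial axis with exponentially growing strides and merged back, so every location quickly accumulates information from distant rows and columns (the global context that helps under poor lighting). The function name, iteration count, and the use of `np.roll` with a fixed blend weight in place of learned convolutions are all assumptions for illustration.

```python
import numpy as np

def recurrent_feature_shift(feat, iters=3, alpha=0.5):
    """Simplified sketch of RESA-style spatial aggregation (not the paper's code).

    feat: (H, W, C) feature map. At iteration k the map is shifted by
    stride 2**k in both directions along height and width, passed through
    a ReLU, and blended back in, so the effective receptive field along
    each spatial axis grows exponentially with the iteration count.
    """
    H, W, _ = feat.shape
    out = feat.astype(np.float64).copy()
    for k in range(iters):
        stride = 2 ** k
        # Shift along height (axis 0) and width (axis 1), both directions;
        # the modulo keeps the shift valid for small feature maps.
        for axis, s in ((0, stride % H), (1, stride % W)):
            out += alpha * np.maximum(np.roll(out, s, axis=axis), 0.0)
            out += alpha * np.maximum(np.roll(out, -s, axis=axis), 0.0)
    return out

# A single activated location spreads its signal across the map,
# mimicking how RESA propagates lane evidence along rows and columns.
feat = np.zeros((8, 8, 1))
feat[0, 0, 0] = 1.0
aggregated = recurrent_feature_shift(feat)
```

In the actual model, each shifted slice would pass through a learned convolution before merging, and the shifts run as a recurrent sequence rather than independent adds; this sketch only conveys the information-propagation pattern.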