Traffic scenes are complex and highly varied. Experienced drivers routinely allocate visual attention to critical targets and regions in advance and anticipate their behavior to ensure driving safety. Consequently, research on drivers' visual attention allocation is essential for the development of driver-assistance and autonomous driving systems. Data-driven visual attention research has benefited greatly from the recent progress of Convolutional Neural Networks. In this paper, we present a novel model that predicts accurate visual attention maps by constructing a deep neural network, integrating multiscale pyramid features, and incorporating neural attention mechanisms. Extensive experimental evaluations show that the proposed model outperforms classical models and several strong deep learning-based approaches on the TDV dataset.
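To make the architectural ingredients named above concrete, the following is a minimal PyTorch sketch of one plausible combination of a convolutional backbone, multiscale pyramid feature fusion, and a spatial attention module that outputs a single-channel attention (saliency) map. All module names, channel widths, and layer choices here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: CNN backbone + multiscale pyramid fusion + spatial
# attention producing a driver visual attention map. Names and sizes are
# assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidAttentionSaliency(nn.Module):
    def __init__(self, in_channels=3, base=32):
        super().__init__()
        # Simple convolutional backbone producing features at three scales.
        self.stage1 = nn.Sequential(nn.Conv2d(in_channels, base, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU())
        # 1x1 convolutions project each scale to a common channel width.
        self.lat1 = nn.Conv2d(base, base, 1)
        self.lat2 = nn.Conv2d(base * 2, base, 1)
        self.lat3 = nn.Conv2d(base * 4, base, 1)
        # Spatial attention: a learned single-channel soft mask over fused features.
        self.attn = nn.Conv2d(base * 3, 1, 1)
        # Readout to a one-channel visual attention (saliency) map.
        self.readout = nn.Conv2d(base * 3, 1, 1)

    def forward(self, x):
        f1 = self.stage1(x)   # 1/2 resolution
        f2 = self.stage2(f1)  # 1/4 resolution
        f3 = self.stage3(f2)  # 1/8 resolution
        size = f1.shape[-2:]
        # Upsample coarser levels and concatenate: the multiscale pyramid fusion.
        p = torch.cat([
            self.lat1(f1),
            F.interpolate(self.lat2(f2), size=size, mode="bilinear", align_corners=False),
            F.interpolate(self.lat3(f3), size=size, mode="bilinear", align_corners=False),
        ], dim=1)
        # Neural attention: reweight the fused features by the spatial mask.
        p = p * torch.sigmoid(self.attn(p))
        # Predict the attention map and upsample back to the input resolution.
        sal = self.readout(p)
        sal = F.interpolate(sal, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(sal)

if __name__ == "__main__":
    model = PyramidAttentionSaliency()
    frame = torch.randn(1, 3, 192, 320)   # a single dashcam-style frame
    print(model(frame).shape)             # torch.Size([1, 1, 192, 320])
```

In a sketch of this kind the predicted map would typically be trained against ground-truth fixation maps with a pixel-wise loss (e.g., binary cross-entropy or KL divergence); the specific loss and training protocol used in the paper are not implied here.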