Unmanned aerial vehicle (UAV) images are characterized by high spatial resolution, multiple temporal phases, and a wide variety of target types, and they are widely used in the development of intelligent transportation systems. However, existing object detection models face significant challenges from the drastic variation in object scale and the prevalence of small targets in the UAV viewpoint. To address these issues, this paper introduces Dense-YOLO, a densely connected object detection model. An additional small-object detection layer alleviates the problems caused by drastic scale changes during detection. A dense connection module is incorporated into the convolutional blocks of the backbone network to reduce the loss of shallow positional information. A multi-scale feature fusion module performs cross-scale connections and weighting operations on contextual information, strengthening feature extraction for the targets to be detected and reducing missed detections of small objects. Finally, a new decoupled head separates the localization regression and classification tasks, improving the model's accuracy in both. In ablation experiments on the VisDrone2019 dataset, the proposed algorithm improves mean average precision (mAP) by 6.5% over the original YOLOv5L structure, a gain that is particularly significant for small-object detection from the UAV perspective. On the DOTA dataset, mAP increases by 1.6% to reach 76.3%. These results confirm the effectiveness of the proposed algorithm in various application scenarios.
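The weighting operation in the multi-scale fusion module can be illustrated with a minimal sketch of fast normalized weighted fusion, in which each scale's feature map receives a non-negative learnable weight before the maps are combined. The abstract does not specify the exact fusion formula, so the scheme and names below (`fused_features`, the epsilon-normalized weights) are assumptions for illustration only, not the paper's implementation.

```python
def fused_features(feature_maps, weights, eps=1e-4):
    """Fast normalized weighted fusion of same-shape feature maps.

    Each input map (a list of rows) gets a non-negative weight; weights
    are normalized to sum to ~1 before the maps are combined elementwise.
    NOTE: the exact fusion used by Dense-YOLO is an assumption here.
    """
    w = [max(wi, 0.0) for wi in weights]      # clamp weights to be non-negative
    total = sum(w) + eps
    w = [wi / total for wi in w]              # normalize so the weights sum to ~1
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(w[k] * feature_maps[k][i][j] for k in range(len(w)))
             for j in range(cols)]
            for i in range(rows)]

# Two toy 2x2 "feature maps" from different pyramid levels, already
# resized to the same spatial size (the resizing step is omitted here).
p_small = [[2.0, 2.0], [2.0, 2.0]]
p_large = [[4.0, 4.0], [4.0, 4.0]]
out = fused_features([p_small, p_large], [1.0, 3.0])  # each entry ≈ 3.5
```

Giving the network explicit, learnable per-scale weights lets it favor the resolution that carries the most signal for a given target size, which is one common way a fusion module reduces missed small objects.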