An Attention Fusion Network For Event-Based Vehicle Object Detection
- Resource Type
- Conference
- Authors
- Liu, Mengyun; Qi, Na; Shi, Yunhui; Yin, Baocai
- Source
- 2021 IEEE International Conference on Image Processing (ICIP), pp. 3363-3367, Sep. 2021
- Subject
- Computing and Processing; Signal Processing and Analysis; Location awareness; Uncertainty; Fuses; Conferences; Object detection; Predictive models; Feature extraction; vehicle detection; Event-based cameras; DVS; Attention module
- Language
- English
- ISSN
- 2381-8549
- Abstract
Under extreme conditions such as excessive light, insufficient light, or high-speed motion, vehicle detection with frame-based cameras remains challenging. Event cameras capture frame and event data asynchronously, which greatly helps object detection under the aforementioned extreme conditions. We propose a fusion network with an Attention Fusion module for vehicle object detection that jointly exploits the features of both frame and event data. The frame and event data are separately fed into a symmetric framework based on Gaussian YOLOv3, which models the bounding-box (bbox) coordinates of YOLOv3 as Gaussian parameters and predicts the localization uncertainty of each bbox with a redesigned cross-entropy loss function. The feature maps of these Gaussian parameters and the confidence map in each layer are deeply fused in the Attention Fusion module. Finally, the feature maps of the frame and event data are concatenated and passed to the detection layer to improve detection accuracy. Experimental results show that the proposed method outperforms state-of-the-art methods that use only a traditional frame-based network, as well as joint networks combining event and frame information.
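The abstract names two ingredients: Gaussian YOLOv3's uncertainty-aware bbox parameterization, and an attention-based fusion of frame and event feature maps. The paper's exact module design is not given in this record, so the following is only a minimal NumPy sketch under stated assumptions: `gaussian_nll` is the standard Gaussian negative log-likelihood that Gaussian YOLOv3 applies per bbox coordinate (mean plus variance, where the variance serves as a localization-uncertainty estimate), and `attention_fuse` is a generic sigmoid-gated channel-attention fusion with a hypothetical fixed projection standing in for learned weights.

```python
import numpy as np

def gaussian_nll(mu, sigma, target, eps=1e-9):
    """Negative log-likelihood of a Gaussian-parameterized bbox coordinate.

    Gaussian YOLOv3 predicts a mean (mu) and variance (sigma) per coordinate;
    sigma doubles as a localization-uncertainty estimate for the bbox.
    """
    return 0.5 * np.log(2.0 * np.pi * sigma + eps) \
        + (target - mu) ** 2 / (2.0 * sigma + eps)

def attention_fuse(frame_feat, event_feat):
    """Fuse frame and event feature maps of shape (C, H, W) by channel attention.

    Attention weights are derived from globally pooled, concatenated features;
    this sigmoid gating scheme is an assumption for illustration, not the
    paper's exact Attention Fusion module.
    """
    # Global average pooling of both branches -> (2C,) descriptor.
    pooled = np.concatenate([frame_feat.mean(axis=(1, 2)),
                             event_feat.mean(axis=(1, 2))])
    # Hypothetical learned projection; a fixed random matrix in this sketch.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((frame_feat.shape[0], pooled.shape[0])) * 0.1
    gate = 1.0 / (1.0 + np.exp(-(W @ pooled)))   # sigmoid, per-channel in (0, 1)
    gate = gate[:, None, None]                    # broadcast over H, W
    # Convex per-channel combination of the two modalities.
    return gate * frame_feat + (1.0 - gate) * event_feat

# Example: fuse a frame feature map with an event feature map.
frame = np.ones((4, 8, 8))
event = np.zeros((4, 8, 8))
fused = attention_fuse(frame, event)   # shape (4, 8, 8), values in (0, 1)
```

Because the fusion is a per-channel convex combination, each fused value stays between the corresponding frame and event activations, so neither modality can be fully discarded by the gate.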