DDANet: Dilated Deformable Attention Network for Dynamic Scene Deblurring
- Resource Type
- Conference
- Authors
- Kim, Byungnam; Jung, Hyungjoo; Sohn, Kwanghoon
- Source
- 2024 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1-4, Jan. 2024
- Subject
- Bioengineering
- Communication, Networking and Broadcast Technologies
- Components, Circuits, Devices and Systems
- Computing and Processing
- Engineered Materials, Dielectrics and Plasmas
- Fields, Waves and Electromagnetics
- Photonics and Electrooptics
- Deformable models
- Correlation
- Convolution
- Dynamics
- Computer architecture
- Benchmark testing
- Digital cameras
- Deformable convolution network
- Dilated convolution layer
- Spatial-attention method
- Multi-scale architecture
- Dynamic scene deblurring
- Language
- ISSN
- 2767-7699
- Abstract
- Image motion blur typically arises from the movement of dynamic objects or the shaking of a digital camera, and the resulting blur is non-uniform and non-directional. Recent deblurring research addresses these problems with self-attention and scale-transformation approaches. However, self-attention can be distracted by the attributes of unrelated regions across the entire spatial domain, and multi-scale approaches incur high computational cost due to their recurrent frameworks. In this paper, we propose a dilated deformable attention network (DDANet) that focuses on relevant blur attributes at every position and handles large variations in blur attributes distributed over different spatial regions. DDANet also adopts a multi-scale architecture to exploit the correlation of blur attributes through progressive spatial variations of the input image. Extensive experiments on the GoPro benchmark demonstrate that the proposed DDANet restores blurred images effectively in both subjective and objective evaluations.
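The record does not include the authors' implementation. As a rough illustration of the core idea — sampling features on a dilated grid whose points are displaced by learned offsets and then combined with spatial attention weights — here is a minimal NumPy sketch. All function names, tensor shapes, and the fixed 3x3 grid are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample a (H, W, C) feature map at fractional (y, x)."""
    H, W, _ = feat.shape
    y, x = np.clip(y, 0, H - 1), np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def dilated_deformable_attention(feat, offsets, attn_logits, dilation=2):
    """For each query position, sample K = 9 points on a 3x3 grid
    spaced by `dilation`, displaced by learned per-point offsets,
    and combine the samples with softmax attention weights.

    feat:        (H, W, C) feature map
    offsets:     (H, W, K, 2) learned (dy, dx) offsets (hypothetical layout)
    attn_logits: (H, W, K) unnormalized per-point attention scores
    """
    H, W, C = feat.shape
    # Dilated 3x3 base grid around each query position.
    grid = [(dy * dilation, dx * dilation)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = np.zeros_like(feat)
    for i in range(H):
        for j in range(W):
            w = np.exp(attn_logits[i, j] - attn_logits[i, j].max())
            w /= w.sum()  # softmax over the K sampling points
            acc = np.zeros(C)
            for k, (gy, gx) in enumerate(grid):
                dy, dx = offsets[i, j, k]
                acc += w[k] * bilinear_sample(feat, i + gy + dy, j + gx + dx)
            out[i, j] = acc
    return out
```

With zero offsets and uniform logits this reduces to averaging a fixed dilated neighborhood; the learned offsets are what let the sampling pattern deform toward relevant blur attributes rather than attending over the whole spatial domain.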