Retiformer: Retinex-Based Enhancement In Transformer For Low-Light Image
- Resource Type
- Conference
- Authors
- Ruan, Junxiang; Kong, Xiangtao; Huang, Wenqi; Yang, Wenming
- Source
- ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, Jun. 2023
- Subject
- Bioengineering
- Communication, Networking and Broadcast Technologies
- Computing and Processing
- Signal Processing and Analysis
- Reflectivity
- Head
- Lighting
- Tail
- Speech enhancement
- Signal processing
- Transformers
- Transformer
- Retinex
- low-light image enhancement
- self-attention
- decomposition
- Language
- ISSN
- 2379-190X
Transformer-based methods have shown impressive potential in many low-level vision tasks but are rarely used for low-light image enhancement (LLIE). Applying a Transformer directly to LLIE produces unnatural visual effects, which motivated us to draw on Retinex theory. After experimentation and analysis, we propose Retiformer. Retiformer decomposes images into reflectance and illumination attention maps with Retinex Window Self-Attention (R-WSA), which replaces the element-wise multiplication of Retinex theory with an attention mechanism. Built on the R-WSA, a Decom-Retiformer block and an Enhance-Retiformer block are applied at the head and tail of a Transformer-based backbone, respectively; like RetinexNet, they decompose and then align the reflectance and illumination components. With this pipeline, Retiformer combines the advantages of the Transformer and Retinex theory, and achieves state-of-the-art performance among Retinex-based methods.
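The Retinex model underlying the abstract treats an observed image I as the element-wise product of a reflectance map R and an illumination map L, i.e. I = R * L. As a minimal, self-contained illustration of that decomposition (a classical sketch, not the paper's R-WSA; the box-blur illumination estimate and function names here are our own assumptions), one can estimate L by smoothing and recover R by division:

```python
# Toy Retinex decomposition: I = R * L (element-wise).
# NOTE: purely illustrative; Retiformer instead learns the decomposition and
# replaces this element-wise interaction with window self-attention.

def box_blur(img, radius=1):
    """Estimate a smooth illumination map with a mean filter (toy surround)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def retinex_decompose(img, eps=1e-6):
    """Split a 2D grayscale image into (reflectance, illumination)."""
    L = box_blur(img)
    # eps avoids division by zero in fully dark regions.
    R = [[img[y][x] / (L[y][x] + eps) for x in range(len(img[0]))]
         for y in range(len(img))]
    return R, L
```

In this formulation, enhancement amounts to brightening L while keeping R fixed and recomposing R * L, which is the pipeline structure (decompose, then enhance) that the Decom-Retiformer and Enhance-Retiformer blocks mirror.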