Convolutional Neural Networks (CNNs) and Transposed Convolutional Neural Networks (TCNNs) play a significant role in deep learning and have a wide range of applications in algorithms such as generative adversarial networks (GANs), U-Net-based image segmentation networks, and super-resolution networks. However, TCNN has a computational mode completely different from that of CNN, which poses a significant challenge for designing a unified architecture supporting both. Previous researchers have proposed several hardware architectures, but these introduce substantial hardware overhead and reduce energy efficiency, which is a serious problem in low-power embedded applications. To alleviate this problem, we propose UACT, an energy-efficient Unified Architecture for CNN and TCNN, co-designed with a unified address-generation scheme for both. UACT is implemented and evaluated with a TSMC 28 nm library; it costs 655 K logic gates and 128 KB of SRAM, consumes 106.7 mW, and delivers a throughput of 117.8 GOPS at 1 GHz, achieving an energy efficiency of 1104 GOPS/W, higher than previous state-of-the-art TCNN processors.
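To illustrate why the two computational modes differ, the following is a minimal NumPy sketch (an assumption for exposition, not the paper's hardware dataflow): a stride-s transposed convolution can be expressed as zero-insertion between input elements followed by an ordinary convolution with the flipped kernel, which is precisely the irregular access pattern that complicates a unified CNN/TCNN address generator.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D cross-correlation (the 'convolution' used in deep learning)."""
    H, W = x.shape
    k, _ = w.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def transposed_conv2d(x, w, stride=2):
    """Transposed convolution via zero-insertion + ordinary convolution.

    Insert (stride - 1) zeros between input elements, pad by (k - 1),
    then cross-correlate with the 180-degree-rotated kernel.
    Output size is (H - 1) * stride + k, matching the usual TCNN formula.
    """
    H, W = x.shape
    k, _ = w.shape
    up = np.zeros((H + (H - 1) * (stride - 1),
                   W + (W - 1) * (stride - 1)))
    up[::stride, ::stride] = x          # zero-insertion upsampling
    up = np.pad(up, k - 1)              # full padding around the border
    return conv2d(up, w[::-1, ::-1])    # flip kernel for true transposition
```

For example, a 2x2 input with a 3x3 kernel at stride 2 yields a 5x5 output, since (2 - 1) * 2 + 3 = 5. The zero-inserted feature map is mostly zeros, which is why naively reusing a CNN datapath for TCNN wastes both cycles and energy.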