An 1-bit by 1-bit High Parallelism In-RRAM Macro with Co-Training Mechanism for DCNN Applications
- Resource Type
- Conference
- Authors
- Liu, Chi; Li, Shao-Tzu; Pan, Tong-Lin; Ni, Cheng-En; Sung, Yun; Hu, Chia-Lin; Chang, Kang-Yu; Hou, Tuo-Hung; Chang, Tian-Sheuan; Jou, Shyh-Jye
- Source
- 2022 International Symposium on VLSI Design, Automation and Test (VLSI-DAT), pp. 1-4, Apr. 2022
- Subject
- Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Resistance
Power demand
Voltage
Parallel processing
Very large scale integration
Real-time systems
Inference algorithms
Computing In-Memory
In-RRAM Computing
Co-Training
CIFAR-10
- Language
- ISSN
- 2472-9124
A methodology for Artificial Intelligence (AI) edge Deep Convolutional Neural Network (DCNN) hardware design that increases computation parallelism and decreases latency is needed for real-time applications. To increase computation parallelism, a 1-bit by 1-bit high-parallelism in-RRAM computing (IRC) macro is proposed. The goals of this test macro are to characterize the RRAM and to propose a co-training mechanism between the DCNN algorithm and the RRAM module that deals with the non-linearity issues of IRC.
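The abstract does not detail the macro's dataflow, so the following is only a minimal sketch of the two ideas it names. Assuming a {0,1} encoding of binary activations and weights, a 1-bit by 1-bit product reduces to a logical AND, and a bitline accumulates many such products in parallel; the saturating readout model (`bitline_readout`) and its `alpha` parameter are hypothetical stand-ins for the macro's measured non-linearity, used here only to show how co-training would evaluate the network through that non-linearity so the weights adapt to it.

```python
def irc_dot(acts, weights):
    """Ideal 1-bit x 1-bit MAC along one bitline: with {0,1} encodings,
    each bitwise product is a logical AND, and the bitline sums all
    AND results in parallel (linear, ideal accumulation)."""
    return sum(a & w for a, w in zip(acts, weights))

def bitline_readout(ideal_sum, n_cells, alpha=0.5):
    """Hypothetical non-linear bitline model (NOT from the paper):
    large accumulated currents saturate, compressing high sums."""
    return ideal_sum / (1.0 + alpha * ideal_sum / n_cells)

def cotrained_forward(acts, weights, n_cells, alpha=0.5):
    """Co-training idea: run the forward pass THROUGH the modeled
    non-linearity during training, so the DCNN weights are learned
    against the RRAM macro's actual transfer curve rather than an
    ideal linear sum."""
    return bitline_readout(irc_dot(acts, weights), n_cells, alpha)

# Illustration: a high partial sum is compressed by the non-linearity,
# which is the mismatch the co-training mechanism compensates for.
ideal = irc_dot([1] * 128 + [0] * 128, [1] * 256)   # ideal sum = 128
read = bitline_readout(ideal, n_cells=256)          # saturated readout < 128
```

The design point this illustrates: if training assumed the ideal `irc_dot` output, inference on the real macro would see the compressed `bitline_readout` values instead; folding the non-linearity into the training-time forward pass removes that train/inference mismatch.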