Data Shepherding: A Last Level Cache Design for Large Scale Chips
- Resource Type
- Conference
- Authors
- Jang, Ganghee; Gaudiot, Jean-Luc
- Source
- 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). pp. 1920-1927, Aug. 2019
- Subject
- Communication, Networking and Broadcast Technologies
Computing and Processing
Memory management
Cache memory
Hardware
Software
Random access memory
Resource management
last level shared cache
multi-bank cache
tagless cache design
shared coherent TLB
mesh on-chip network
- Language
- Abstract
- Newer chips include cache memories as large as 128 MB to sustain the bandwidth demands of the GPGPU module. Since 128 MB was a reasonable main memory size only a decade ago, we examine the design impact of managing caches at a larger granularity. We thus propose a cache memory design called the Data Shepherding Cache for large last level caches. Even with a granularity as large as a page for the management of the last level cache, our Data Shepherding Cache achieves reasonable performance with a smaller area footprint than a same-sized set associative cache.
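The area argument in the abstract can be illustrated with a back-of-the-envelope comparison of tag-array overhead when a last level cache tracks data at cache-line versus page granularity. This is a sketch of the general motivation only; all parameters (128 MB cache, 64 B lines, 4 KB pages, 48-bit physical addresses) are illustrative assumptions, not figures taken from the paper, and the model deliberately ignores associativity, index bits, and coherence state.

```python
# Back-of-the-envelope tag-array sizing for a large last level cache,
# comparing cache-line granularity with page granularity.
# All parameters below are illustrative assumptions.

CACHE_SIZE = 128 * 2**20   # 128 MB last level cache
LINE_SIZE = 64             # conventional cache line, in bytes
PAGE_SIZE = 4 * 2**10      # 4 KB page, in bytes
PHYS_ADDR_BITS = 48        # assumed physical address width

def tag_array_bits(block_size: int) -> int:
    """Total tag bits if the cache tracks blocks of `block_size` bytes.

    Each block is charged a full (address minus offset) tag as a
    pessimistic bound; index bits and metadata are ignored.
    """
    num_blocks = CACHE_SIZE // block_size
    offset_bits = block_size.bit_length() - 1
    tag_bits_per_block = PHYS_ADDR_BITS - offset_bits
    return num_blocks * tag_bits_per_block

line_tags = tag_array_bits(LINE_SIZE)
page_tags = tag_array_bits(PAGE_SIZE)
print(f"line-granularity tag array: {line_tags / 8 / 2**20:.2f} MB")
print(f"page-granularity tag array: {page_tags / 8 / 2**10:.2f} KB")
print(f"reduction factor: {line_tags / page_tags:.0f}x")
```

Under these assumed parameters, moving from line to page granularity shrinks the tag storage by roughly two orders of magnitude, which is the kind of area saving a tagless, page-granular design such as the one described above targets.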