Graph convolutional neural networks (GCNs) are neural networks designed for graph-structured data with large vertex features [1]. GCNs comprise two phases: aggregation and combination. During combination, matrix multiplications are performed between trained weights and the aggregated features. The aggregation phase, in contrast, traverses the graph to gather features from neighboring vertices. Performing aggregation on highly irregular and sparse graphs, however, poses challenges for memory bandwidth utilization. SRAM computing-in-memory (CIM) can address this issue by reducing data movement and increasing parallelism. Nevertheless, mapping GCNs onto CIM faces several challenges: (1) Aggregation is highly irregular because the number of neighbors varies from vertex to vertex, which leads to substantial partial-sum (PSUM) overhead; moreover, the degree-matrix data required for refined aggregation is difficult to obtain within CIM. (2) Aggregation and combination are distinct operations with different operator types, data sparsity, and operation scales, making it difficult to execute both within a standard memory architecture. (3) Operational efficiency is affected by input sparsity in aggregation and by bit sparsity in both aggregation and combination.
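The two phases described above can be sketched as follows. This is a minimal dense NumPy illustration of one GCN layer on a toy four-vertex graph; the graph, feature sizes, and variable names are illustrative assumptions, not taken from the paper, and a real workload would use sparse, irregular adjacency data.

```python
import numpy as np

# Toy adjacency matrix with self-loops (hypothetical 4-vertex graph).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Degree normalization D^{-1/2} A D^{-1/2}: this is where the
# degree-matrix data needed for refined aggregation comes in.
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

H = np.random.rand(4, 8)   # vertex features (4 vertices, 8 dims)
W = np.random.rand(8, 16)  # trained weights

H_agg = A_norm @ H              # aggregation: gather neighbor features
H_out = np.maximum(H_agg @ W, 0)  # combination: dense matmul + ReLU
print(H_out.shape)
```

Note the asymmetry the text points out: the aggregation matmul is sparse and irregular (its structure follows the graph), while the combination matmul is a regular dense product with trained weights.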