Computing-in-memory (CIM) has become one of the most energy-efficient neural network accelerator architectures, as it overcomes the memory-wall problem. CIM built on novel non-volatile memory (nvCIM), such as resistive random access memory (RRAM), offers further potential performance gains. By additionally exploiting the sparsity of deep neural networks (DNNs), nvCIM can improve energy efficiency even further. In this brief, we propose a weight- and multiply-accumulate (MAC) sparsity-aware nvCIM system that optimizes structured weight sparsity and the dynamic MAC range. Experimental results show that the proposed sparsity-aware nvCIM system achieves up to a $4.65\times $ improvement in energy efficiency over the baseline while maintaining 89.56% accuracy on CIFAR-10 with the ResNet-18 network.