Computing-in-memory (CIM) has emerged as a promising architecture for energy-efficient edge neural network inference. While CIM designs have shown great potential to exploit sparsity, current designs remain limited in supporting sparsity at the output analog-to-digital converter (ADC) stage, owing to redundant circuits and weight-refreshing overhead. In this brief, we propose a novel CIM design called OASIS (Output Activation SparsIty Support). OASIS employs a single-bit-line computation circuit to support output ADC sparsity without incurring circuit redundancy or weight-refreshing overhead. Our design achieves up to 812.2 TOPS/W normalized energy efficiency at a typical output sparsity of 50%. We demonstrate its effectiveness by deploying all layers of quantized neural networks to the chip and evaluating them on the MNIST and CIFAR-10 datasets, achieving 99.20% and 84.33% accuracy, respectively.
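The energy argument behind output ADC sparsity can be illustrated with a minimal first-order model (a sketch, not the paper's circuit or measurement methodology): if a converter fires only for nonzero output activations, ADC energy scales with the fraction of nonzero outputs, so 50% output sparsity halves the conversion count. The `e_conv` parameter and the 50%-sparse post-ReLU activation distribution below are illustrative assumptions.

```python
# First-order sketch (assumption, not the OASIS implementation): models how
# gating the ADC on nonzero output activations reduces conversion energy.
import numpy as np

def adc_energy(outputs, e_conv=1.0):
    """Total ADC energy, assuming the converter is activated only for
    nonzero output activations (e_conv = energy per conversion, in
    arbitrary units; a hypothetical parameter for illustration)."""
    return np.count_nonzero(outputs) * e_conv

rng = np.random.default_rng(0)
# Post-ReLU outputs: roughly half are zero, matching the "typical 50%
# sparsity" operating point quoted in the abstract.
acts = np.maximum(rng.standard_normal(1000), 0.0)

dense = acts.size * 1.0    # baseline: convert every output, sparse or not
sparse = adc_energy(acts)  # gated: zero outputs skip conversion entirely
print(f"ADC conversions saved: {1 - sparse / dense:.0%}")
```

Under this idealized model the saving tracks the output sparsity directly; the reported 812.2 TOPS/W figure additionally reflects circuit-level factors the sketch does not capture.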