The energy efficiency of artificial intelligence networks must be increased if they are to be deployed on edge devices. Brain-inspired spiking neural networks (SNNs) are promising candidates for this purpose because they require no multiplication operations, performing only additions and shifts. Combining an SNN with a convolutional neural network (CNN) yields a spiking CNN (SCNN), which reduces the required computational power. However, achieving a high operation speed with an SCNN often requires a large memory, which occupies a relatively large area and consumes a relatively large amount of power. In this paper, a data-flow method is proposed to reduce the required on-chip memory and power consumption and to avoid the idling of operation units that high sparsity would otherwise cause in an SCNN. This method decreases the overall on-chip memory required by an SCNN and increases the network's energy efficiency. With the proposed method, an SCNN exhibited an energy efficiency of 104.76 TOPS/W when processing the CIFAR-10 dataset.
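The multiplication-free property claimed above can be illustrated with a minimal integrate-and-fire neuron sketch. Because SNN activations are binary spikes (0 or 1), each synaptic update reduces to a conditional addition, and a leaky decay can be realized as a right bit-shift; the function name, threshold, and leak parameter below are illustrative assumptions, not the paper's actual circuit.

```python
def if_neuron_step(v, spikes, weights, threshold=64, leak_shift=1):
    """One time step of a leaky integrate-and-fire neuron using only
    additions, shifts, and comparisons (illustrative sketch)."""
    # Accumulate weights of synapses that spiked: since a spike is 0 or 1,
    # the product w * s reduces to a conditional add (no multiplier needed).
    for s, w in zip(spikes, weights):
        if s:
            v += w
    # Leaky decay implemented as a bit shift instead of a multiply.
    v -= v >> leak_shift
    # Fire and reset when the membrane potential crosses the threshold.
    if v >= threshold:
        return 0, 1
    return v, 0
```

For example, a step with spikes `[1, 1, 0]` and weights `[30, 40, 10]` accumulates 70, decays to 35, and stays below the default threshold, so no output spike is emitted.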