This study uses deep learning methods to investigate differences in brain functional connectivity between deaf children and hearing children when processing three tones (first, second, and third tone), and to identify differences in the brain regions activated during tone processing between deaf children and typically developing children. The participant pool consisted of five deaf children and two typically developing children. Resting-state functional magnetic resonance imaging (fMRI) scans were acquired, followed by data preprocessing and fMRI data analysis. A one-dimensional convolutional network was employed to extract features from the input brain functional connectivity and classify it. A series of theoretical and comparative analyses showed that the one-dimensional convolutional network is adept at capturing local patterns within sequential data: it can effectively extract information about connectivity patterns from the one-dimensional sequence of connection weights, making it particularly suitable for processing brain functional connectivity data from deaf children.
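The classification approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a hypothetical atlas of 90 regions of interest (so the upper triangle of each connectivity matrix yields a one-dimensional sequence of 90·89/2 = 4005 connection weights), a hypothetical network depth and filter configuration, and PyTorch as the framework. The three output classes correspond to the three tones.

```python
import torch
import torch.nn as nn

# Assumed setup (not from the paper): 90 ROIs, so the upper triangle of
# the 90x90 connectivity matrix gives 90*89/2 = 4005 connection weights.
N_ROIS = 90
N_FEATURES = N_ROIS * (N_ROIS - 1) // 2  # 4005
N_CLASSES = 3  # first, second, third tone


class ConnectivityCNN(nn.Module):
    """1D convolutional network over a sequence of connectivity weights.

    The convolutions slide along the flattened connection-weight sequence,
    capturing local patterns; pooling reduces the sequence to a fixed-size
    descriptor fed to a linear classifier.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),   # local pattern filters
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # fixed-size descriptor
        )
        self.classifier = nn.Linear(32, N_CLASSES)

    def forward(self, x):
        # x: (batch, 1, N_FEATURES) -> logits: (batch, N_CLASSES)
        return self.classifier(self.features(x).squeeze(-1))


model = ConnectivityCNN()
batch = torch.randn(4, 1, N_FEATURES)  # 4 synthetic connectivity sequences
logits = model(batch)
print(tuple(logits.shape))  # (4, 3)
```

Treating the flattened connectivity matrix as a one-dimensional sequence is what lets the Conv1d layers exploit locality among neighboring connection weights, which is the property the study attributes to this architecture.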