In this paper, we proposed a multi-layer pre-training approach to overcome the limitations of extractive machine reading comprehension (EMRC) in capturing global semantic information and in enabling deep interaction between the passage and the question. The proposed framework adopts the MLM-as-correction BERT (MacBERT) model together with a multi-layer perceptron (MLP) to predict the probability of each token position being part of the answer. To handle span-extraction, unanswerable, and YES/NO questions, we employ bidirectional long short-term memory (BiLSTM) and self-attention to construct dedicated output layers for each question type. The resulting model exhibits strong generalization and text–question interaction capability. On the Chinese Judicial Reading Comprehension (CJRC) dataset, the experimental results show that the proposed model improves the F1 score by 3.5% on civil cases and 4.9% on criminal cases over the baseline.
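For concreteness, the PyTorch sketch below illustrates one way such a layered architecture could be assembled: a MacBERT encoder followed by a BiLSTM layer, a self-attention layer, an MLP span predictor, and a question-type classifier. The checkpoint name, hidden size, and head layout are illustrative assumptions, not the exact configuration used in this work.

```python
# Minimal sketch (assumed shapes and head design; not the authors' exact implementation).
import torch
import torch.nn as nn
from transformers import AutoModel

class MRCModel(nn.Module):
    def __init__(self, encoder_name="hfl/chinese-macbert-base", hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # BiLSTM layer refines the contextual token representations.
        self.bilstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        # Self-attention layer models interaction across the passage-question sequence.
        self.self_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # MLP predicts start/end logits for every token position.
        self.span_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, 2))
        # Classifier over the [CLS] representation: unanswerable / YES / NO / span.
        self.type_clf = nn.Linear(hidden, 4)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.bilstm(h)
        # key_padding_mask marks padded positions (True) to be ignored by attention.
        attn_out, _ = self.self_attn(h, h, h, key_padding_mask=~attention_mask.bool())
        logits = self.span_mlp(attn_out)                      # (batch, seq_len, 2)
        start_logits, end_logits = logits.split(1, dim=-1)
        type_logits = self.type_clf(attn_out[:, 0])           # [CLS] position
        return start_logits.squeeze(-1), end_logits.squeeze(-1), type_logits
```

In this sketch the span head and the question-type head share the BiLSTM and self-attention layers; training would combine a cross-entropy loss over start/end positions with a cross-entropy loss over the question-type labels.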