In graph-to-text generation, a central challenge is preserving the structural information of the graph as fully as possible while reducing knowledge loss during training. Existing research has focused mainly on increasing model size and refining graph representations to improve a model's ability to learn graph structure. In contrast, our work emphasizes perceiving and exploiting the edges of the graph itself. Edges give a graph its structural variety, affording it freedom and diversity; improving a model's ability to perceive edges should therefore improve performance on graph-to-text generation. To this end, we propose MKGS, a graph-to-text model that preserves the original structure of the knowledge graph and effectively reduces knowledge loss during learning. Our approach operates at three levels: reorganizing the knowledge sequence fed to the model as input, enhancing edge perception during processing, and applying a graph rational activation function at the output. We validate our method on the KG-to-text benchmark dataset WebNLG, where MKGS achieves a score of 66.22%. The model also produces text with fewer syntactic errors and smoother expression.