Extracting valuable information from the biomedical literature is gaining attention among researchers, and Biomedical Named Entity Recognition (BioNER) has become one of the most essential tasks in text mining. Previous studies have shown that long-distance interactions between words play an important role in enhancing hidden representations for NER. However, existing mainstream NER models such as BiLSTM-CRF treat texts as plain linear sequences, resulting in the loss of structural information in sentences. To overcome this shortcoming, this paper proposes BGGF, a novel model that captures both the semantic information in text sequences and the structural information in dependency trees. Experiments conducted on the BC2GM and NCBI Disease datasets demonstrate the effectiveness of the proposed model, which achieves state-of-the-art performance on both.