Graph neural networks (GNNs) have achieved strong performance on a variety of graph-related tasks, such as node classification. However, existing GNN models are limited in their ability to capture complex graph topologies, particularly on large, sparse graphs. To address this limitation, we incorporate an attention mechanism into the GNN architecture. Our proposed model, Attention-Based Graph Neural Networks (AB-GNN), uses a learned attention mechanism during the message-passing phase to weight the importance of neighboring nodes differentially. We evaluate AB-GNN on several benchmark node-classification datasets and show that it outperforms current state-of-the-art GNN models; in particular, our experiments show that AB-GNN improves accuracy by up to 1% over the strongest baseline. Our findings indicate that the attention mechanism enhances the model's capacity to identify the most informative parts of the graph, leading to more accurate node classification on the Cora and CiteSeer datasets in our experiments. Overall, our work demonstrates the potential of attention mechanisms to enhance GNN models and suggests directions for future research in this area.
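To make the core idea concrete, the following is a minimal single-head sketch of one attention-weighted message-passing step, in the spirit of graph attention networks. The function name, the NumPy formulation, and the parameter shapes are illustrative assumptions, not the paper's actual AB-GNN implementation: each node aggregates its neighbors' transformed features, weighted by a learned, softmax-normalized attention score.

```python
import numpy as np

def attention_message_passing(X, A, W, a, alpha=0.2):
    """One attention-weighted message-passing step (illustrative sketch).

    X: (N, F) node features; A: (N, N) adjacency with self-loops (1 = edge);
    W: (F, F') learned weight matrix; a: (2*F',) learned attention vector.
    """
    H = X @ W                              # linearly transform node features
    d = H.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [h_i || h_j]), computed as a
    # sum of per-node source and target contributions
    src = H @ a[:d]
    dst = H @ a[d:]
    e = src[:, None] + dst[None, :]
    e = np.where(e > 0, e, alpha * e)      # LeakyReLU
    e = np.where(A > 0, e, -np.inf)        # mask out non-neighbors
    # softmax over each node's neighborhood (numerically stabilized)
    e = e - e.max(axis=1, keepdims=True)
    att = np.exp(e)
    att = att / att.sum(axis=1, keepdims=True)
    return att @ H                         # attention-weighted aggregation
```

With `a` all zeros the attention weights reduce to a uniform average over neighbors, so the layer degenerates to plain mean-aggregation message passing; a trained `a` is what lets the model weight neighbors unequally.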