Hand gesture recognition (HGR) is a powerful means of communication, especially for individuals facing challenges related to hearing loss and speech impediments. According to the World Health Organization (WHO), roughly 466 million people worldwide were living with disabling hearing loss as of 2020, a figure projected to exceed 900 million by 2050. It is therefore essential to conduct a detailed investigation to better understand the benefits of this technique. This paper explores the application areas of HGR and reviews the extant literature to identify the state of the art. Evaluating diverse approaches, including Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), and K-Nearest Neighbour (K-NN) algorithms, alongside contemporary advancements such as Hidden Markov Models (HMMs), wearable-glove approaches, and CNN-based systems, the study emphasizes the pivotal role of sign languages in communication, cognitive development, and emotional well-being. The findings reveal that the literature points toward hybrid models as a means of increasing recognition accuracy.