Sign Language Recognition (SLR) aims to bridge the communication barrier between hearing people and people who are deaf or hard of hearing and unable to speak. Deep-learning-based SLR uses computer vision to detect hand gestures and movements and translate them into the corresponding alphabet characters or speech. This paper gives an overview of advances in SLR techniques, including datasets, preprocessing, feature extraction methods, classifiers, and applications. The proposed method uses a Convolutional Neural Network (CNN) and the Inception v3 architecture, both of which have demonstrated strong results on image-based recognition tasks, with the aim of improving the performance and accuracy of sign language gesture recognition models. The Inception v3 model achieved an accuracy of 98.94%, outperforming previous approaches. Accurate sign language recognition can have a significant social impact by easing communication with differently abled people.
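As background for readers unfamiliar with CNNs, the core operation a convolutional layer applies to a gesture image can be sketched in plain NumPy. This is an illustrative toy example only, not the paper's actual model: the image, kernel, and function names here are hypothetical, and a real SLR pipeline would stack many learned kernels, nonlinearities, and pooling layers (as Inception v3 does internally).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the image patch under the kernel
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 grayscale "gesture" image with a vertical edge (hypothetical data)
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# A hand-crafted vertical-edge detector; in a CNN these weights are learned
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

features = conv2d(image, kernel)
print(features.shape)  # → (3, 3)
```

The resulting feature map responds strongly where the edge lies under the kernel; stacking many such learned filters is what lets a CNN (and, at much greater depth, Inception v3) extract the hand-shape features used for gesture classification.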