In the current landscape of social media communication, accurately interpreting expressed sentiment is crucial for diverse applications, including brand management and public opinion analysis. This study explores ways to improve sentiment analysis in social media by using contextual embeddings, placing its emphasis on well-established models such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer). By harnessing the power of these dynamic embeddings, the study seeks to overcome the limitations of conventional sentiment analysis methods in capturing the nuanced, context-dependent nature of language prevalent in social media discourse. Through a comprehensive review of existing literature and recent developments, the paper systematically evaluates the impact of dynamic contextual embeddings, emphasizing prominent models such as BERT and GPT. The main goal is to provide an in-depth overview of how these techniques have contributed to the enhancement of sentiment analysis in the ever-evolving landscape of social media platforms. The research involves a comprehensive investigation into the fine-tuning and pre-training of these models on social media datasets. This paper provides an outline, based on previous studies, of how sentiment in social media posts can be understood using models such as BERT and GPT.
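The distinction the abstract draws between conventional methods and contextual embeddings can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the vocabulary, dimensions, and single-head attention step are toy assumptions chosen only to show that a contextual model assigns the same word different vectors in different sentences, whereas a static (context-free) embedding cannot.

```python
import numpy as np

np.random.seed(0)

# Toy static (context-free) embeddings, as a bag-of-words-style model
# would use; the vocabulary and 4-dim vectors are illustrative only.
vocab = ["the", "movie", "was", "sick", "i", "feel"]
static = {w: np.random.randn(4) for w in vocab}

def contextual_embedding(tokens, target):
    """One simplified self-attention step: the target word's vector
    becomes a softmax-weighted mix of all tokens' static vectors,
    so its representation depends on the surrounding sentence."""
    q = static[target]
    keys = np.stack([static[t] for t in tokens])
    scores = keys @ q / np.sqrt(len(q))           # scaled dot-product
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    return weights @ keys

# "sick" as slang praise vs. "sick" as illness: same static vector,
# but different contextual vectors once neighbors are mixed in.
v1 = contextual_embedding(["the", "movie", "was", "sick"], "sick")
v2 = contextual_embedding(["i", "feel", "sick"], "sick")

print(np.allclose(v1, v2))
```

In a full model such as BERT, many stacked attention layers with learned projections perform this mixing, which is what lets a fine-tuned sentiment classifier separate the two readings of "sick" above.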