This study introduces a novel approach to abstractive text summarisation that harnesses sequence-to-sequence (Seq2Seq) models together with word sense disambiguation (WSD) techniques. We used the TensorFlow and NLTK libraries and trained the model on the CNN/Daily Mail dataset. We integrated word sense disambiguation [17] into the preprocessing phase: with the help of WordNet, we identified and used the most common sense of each word within the texts, a step that played a crucial role in improving the model's understanding. We built a Seq2Seq model with LSTM units, embedding layers, and TimeDistributed Dense layers, optimising the vocabulary size, embedding dimensions, and number of LSTM units for efficient summarisation. The model was trained on the preprocessed dataset over numerous epochs to refine its summarisation capabilities. Our approach shows a significant improvement in capturing the context and nuances of the input texts, resulting in more coherent and accurate summaries. The results indicate promising advances in abstractive text summarisation and open the way for context-aware, semantically rich summarisation tools. This study contributes to ongoing academic discussion in natural language processing and offers practical implications for summarisation applications across domains.
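The WSD preprocessing step described above could be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes NLTK with the WordNet corpus available, relies on the fact that WordNet lists synsets in decreasing frequency order, and the function names are illustrative.

```python
import nltk

# Assumption: the WordNet corpus is available locally; download it if not.
nltk.download("wordnet", quiet=True)

from nltk.corpus import wordnet


def most_common_sense(word):
    """Return the most frequent WordNet sense of `word`, or None if unknown.

    WordNet orders synsets by decreasing frequency of use, so the first
    synset is the most common sense of the word.
    """
    synsets = wordnet.synsets(word)
    return synsets[0] if synsets else None


def disambiguate(tokens):
    """Map each token to the name of its most common sense (e.g. 'bank.n.01').

    Tokens with no WordNet entry (function words, rare strings) are skipped.
    """
    senses = {}
    for tok in tokens:
        sense = most_common_sense(tok)
        if sense is not None:
            senses[tok] = sense.name()
    return senses
```

Annotating each content word with its dominant sense in this way gives the downstream model a coarse but consistent semantic signal during preprocessing.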
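The Seq2Seq architecture described above (embedding layers, LSTM units, and a TimeDistributed Dense output) can be sketched in TensorFlow/Keras. This is a hedged illustration rather than the paper's exact model: the hyperparameter values are placeholders, and attention, dropout, and inference-time decoding are omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative hyperparameters; the paper tunes these, values here are assumed.
VOCAB_SIZE = 20000   # vocabulary size
EMBED_DIM = 128      # embedding dimensions
LSTM_UNITS = 256     # LSTM hidden units

# Encoder: embed the source article and keep the final LSTM state.
enc_in = layers.Input(shape=(None,), name="article_tokens")
enc_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(enc_in)
_, state_h, state_c = layers.LSTM(LSTM_UNITS, return_state=True)(enc_emb)

# Decoder: generate the summary conditioned on the encoder's final state.
dec_in = layers.Input(shape=(None,), name="summary_tokens")
dec_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(dec_in)
dec_seq = layers.LSTM(LSTM_UNITS, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])

# TimeDistributed Dense projects every decoder timestep onto the vocabulary.
probs = layers.TimeDistributed(
    layers.Dense(VOCAB_SIZE, activation="softmax"))(dec_seq)

model = Model(inputs=[enc_in, dec_in], outputs=probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Training then feeds teacher-forced (article, shifted-summary) pairs, and the softmax over the vocabulary at each timestep yields the next summary token.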