Sentiment analysis and emotion classification are two crucial tasks in natural language processing (NLP) that have been widely explored in recent years owing to their broad applications. Sentiment analysis identifies the polarity of written text, ranging from positive to negative, whereas emotion classification recognizes and categorizes the emotional states expressed in the text. To achieve a deeper understanding of sentiments and emotions, it is essential to use models such as BERT transformers that can effectively interpret context. The process begins with data preprocessing, including tokenization and noise removal, followed by fine-tuning to adapt the BERT model to the target tasks. We applied the BERT model to four datasets obtained from various sources, including Twitter, news websites, and restaurant reviews, each representing a distinct Arabic dialect. Our proposed model outperforms commonly used techniques such as LSTM and CNN. Despite this progress, challenges remain, such as handling Arabic diacritics, Arabizi (an informal rendering of Arabic in Latin characters), and Arabic idioms. Further research is required to address these challenges adequately.
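The noise-removal step mentioned above can be illustrated with a minimal sketch. The function name and the exact cleaning rules below are illustrative assumptions, not the authors' implementation; they show one common way to strip URLs, Arabic diacritics (tashkeel), and excess whitespace from social-media text before tokenization:

```python
import re

# Arabic diacritic marks (tashkeel) occupy Unicode range U+064B-U+0652
DIACRITICS = re.compile(r"[\u064B-\u0652]")
# Simple pattern for URLs, a frequent noise source in Twitter data
URLS = re.compile(r"https?://\S+")

def preprocess(text: str) -> str:
    """Hypothetical cleaning step: remove URLs and diacritics, collapse whitespace."""
    text = URLS.sub(" ", text)        # drop links
    text = DIACRITICS.sub("", text)   # strip diacritic marks
    return re.sub(r"\s+", " ", text).strip()

# Example: a diacritized greeting followed by a link
print(preprocess("مَرْحَبًا https://example.com/x"))  # → مرحبا
```

The cleaned string would then be passed to the BERT tokenizer; removing diacritics in this way sidesteps vocabulary mismatches, at the cost of losing the disambiguating information that diacritics carry, which is one reason the abstract flags them as an open challenge.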