Brain tumor segmentation is essential in medical imaging for accurate diagnosis and effective treatment planning. Brain tumors arise from the uncontrolled growth of cells and are divided into two categories: primary and secondary. Gliomas, the most common type of primary brain tumor, are the focus of this research. In recent years, deep learning models have shown promising results in automating tumor segmentation from medical imaging data. This study compares the performance of Variational Autoencoder (VAE) and Adversarial Autoencoder (AAE) networks for brain tumor segmentation, both based on an encoder-decoder architecture. The encoder extracts image features, while the decoder reconstructs segmentation labels. Both networks are powerful generative models that capture the underlying distribution of the data, which makes them well suited to extracting meaningful tumor features. In the VAE network, a variational autoencoder branch is added to reconstruct the input images. In the adversarial network, a discriminator is added to train the encoder-decoder network, helping the decoder generate outputs that closely match the tumor labels. The results indicate that both the VAE and the AAE achieve satisfactory segmentation accuracy; however, the AAE attains higher Dice scores, indicating that it is more effective at segmenting tumor regions.
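The core mechanics mentioned above, a VAE-style encoder-decoder with a reparameterized latent space, a reconstruction-plus-KL loss, and the Dice overlap metric used for evaluation, can be illustrated with a minimal toy sketch. This is not the paper's actual model: the linear "encoder"/"decoder", the toy dimensions, and the thresholding used to form binary masks are all illustrative assumptions, standing in for the convolutional networks a real segmentation pipeline would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    # Toy linear "encoder": maps an image vector to latent mean and log-variance.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decoder(z, W_dec):
    # Toy linear "decoder": maps the latent code back to image-sized output.
    return z @ W_dec

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) regularization term of the VAE loss, batch-averaged.
    return -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))

def dice_score(pred, target, eps=1e-8):
    # Dice coefficient 2|A∩B| / (|A|+|B|): the overlap metric reported in the study.
    inter = np.sum(pred * target)
    return 2.0 * inter / (np.sum(pred) + np.sum(target) + eps)

# Assumed toy dimensions: 16-pixel "images", 4-dim latent space, batch of 8.
d_in, d_z, n = 16, 4, 8
x = rng.standard_normal((n, d_in))
W_mu = rng.standard_normal((d_in, d_z)) * 0.1
W_logvar = rng.standard_normal((d_in, d_z)) * 0.1
W_dec = rng.standard_normal((d_z, d_in)) * 0.1

mu, logvar = encoder(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
x_rec = decoder(z, W_dec)

recon_loss = np.mean((x - x_rec) ** 2)   # reconstruction term (VAE branch)
kl_loss = kl_divergence(mu, logvar)      # latent regularization term
total_loss = recon_loss + kl_loss

# Evaluation sketch: threshold to binary masks and compute Dice overlap.
mask_true = (x > 0).astype(float)
mask_pred = (x_rec > 0).astype(float)
dice = dice_score(mask_pred, mask_true)
```

In the adversarial (AAE) variant described above, the KL term would be replaced by a discriminator trained to distinguish decoder outputs from ground-truth tumor labels, pushing the decoder toward label-like segmentations.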