The rapid advancement of deep learning models originally designed for computer vision has unlocked a multitude of opportunities for addressing complex visual problems. Although these models were developed for tasks such as image classification and image processing, they have since opened new horizons in the domains of remote sensing and geoinformatics. This research presents a comprehensive comparison of semantic segmentation outcomes generated by multiple deep learning models. A comparative analysis of FPN, UNET, and DeepLabv3+ on the L-band Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) dataset of the Houston region in Texas, USA, shows that DeepLabv3+ with an Xception backbone outperforms the FPN and UNET models. The overall accuracies of FPN, UNET, and DeepLabv3+ are 84.62%, 86.52%, and 87.78%, respectively.