Satellite image segmentation has become an important auxiliary tool for decision-making in various fields, including city planning, building extraction, traffic control, and environmental protection. However, satellite image datasets pose a significant challenge: they are often small and lack labels. To address this issue, we propose a novel framework called the Triple Generative Adversarial Network (TripleGAN), which employs domain adaptation to transfer models from labeled datasets to unlabeled ones. TripleGAN is a deep neural network consisting of one segmentation module and three discriminators that together reduce the domain gap between labeled and unlabeled datasets. By jointly training the three GANs, our model handles differing data distributions significantly better. In experiments on the ISPRS public datasets, TripleGAN outperforms state-of-the-art models, demonstrating its advantages and providing a promising multi-adversarial learning scheme for future research.
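As a rough illustration of the multi-adversarial objective sketched above (not the paper's actual implementation), the snippet below composes a supervised segmentation loss with three per-level adversarial terms, one per discriminator. The linear discriminators, random features, feature dimensions, and the weight `lam` are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(features, w, b):
    # Tiny linear domain classifier mapping a feature vector to a
    # source-vs-target probability. Real discriminators would be
    # convolutional; this stand-in only shows the loss structure.
    return sigmoid(features @ w + b)

def bce(p, label):
    # Binary cross-entropy against a constant domain label (1 = source).
    eps = 1e-8
    return -np.mean(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

# Hypothetical feature maps tapped at three depths of the segmentation
# network, flattened to vectors (batch of 8; dimensions are illustrative).
dims = [64, 128, 256]
src_feats = [rng.normal(size=(8, d)) for d in dims]  # labeled domain
tgt_feats = [rng.normal(size=(8, d)) for d in dims]  # unlabeled domain

# One discriminator per feature level, i.e. three in total.
discs = [(rng.normal(size=d) * 0.01, 0.0) for d in dims]

# Supervised segmentation loss on the labeled domain (placeholder value).
seg_loss = 0.35

# Each discriminator tries to tell source features (label 1) from target
# features (label 0); the segmenter would be trained to fool all three.
lam = 0.01  # assumed weight on the adversarial terms
adv_losses = []
for (w, b), fs, ft in zip(discs, src_feats, tgt_feats):
    d_loss = bce(discriminator(fs, w, b), 1.0) + bce(discriminator(ft, w, b), 0.0)
    adv_losses.append(d_loss)

total_loss = seg_loss + lam * sum(adv_losses)
print(f"seg={seg_loss:.3f}  adv={[round(a, 3) for a in adv_losses]}  total={total_loss:.3f}")
```

In a full training loop the segmentation module and the three discriminators would be updated in alternation, the discriminators minimizing these terms and the segmenter maximizing them so that features from the two domains become indistinguishable at every tapped level.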