Multimodal Co-learning: A Domain Adaptation Method for Building Extraction from Optical Remote Sensing Imagery
- Resource Type
- Conference
- Authors
- Xie, Yuxing; Tian, Jiaojiao
- Source
- 2023 Joint Urban Remote Sensing Event (JURSE), pp. 1-4, May 2023
- Subject
- Computing and Processing
- Geoscience
- Signal Processing and Analysis
- Point cloud compression
- Training
- Three-dimensional displays
- Buildings
- Transfer learning
- Training data
- Optical imaging
- building extraction
- multimodal data
- co-learning
- domain adaptation
- transfer learning
- Language
- ISSN
- 2642-9535
In this paper, we aim to improve the transfer learning ability of 2D convolutional neural networks (CNNs) for building extraction from optical imagery and digital surface models (DSMs) using a 2D-3D co-learning framework. Unlabeled target domain data are incorporated as unlabeled training data pairs to optimize the training procedure. Our framework adaptively transfers unsupervised mutual information between the 2D and 3D modalities (i.e., DSM-derived point clouds) during the training phase via a soft connection, utilizing a predefined loss function. Experimental results from a spaceborne-to-airborne cross-domain case demonstrate that the presented framework can quantitatively and qualitatively improve building extraction results at test time from single-modality optical images.
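The abstract describes a training objective that combines supervised learning on labeled source data with an unsupervised "soft connection" between the 2D and 3D branches on unlabeled target pairs. The paper does not specify the loss; the following is a minimal NumPy sketch of one plausible formulation, where the soft connection is approximated as a mean-squared consistency term between the class probabilities of the two branches (all function names and the `weight` parameter are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over class logits
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class
    n = labels.shape[0]
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

def co_learning_loss(logits_2d_src, labels_src,
                     logits_2d_tgt, logits_3d_tgt, weight=0.5):
    """Hypothetical combined objective: supervised cross-entropy on
    labeled source samples, plus a soft consistency term coupling the
    2D (optical/DSM) branch and the 3D (point-cloud) branch on
    unlabeled target-domain pairs."""
    sup = cross_entropy(softmax(logits_2d_src), labels_src)
    p2d = softmax(logits_2d_tgt)
    p3d = softmax(logits_3d_tgt)
    consistency = np.mean((p2d - p3d) ** 2)  # zero when branches agree
    return sup + weight * consistency
```

In this reading, the consistency term lets unlabeled target data shape the 2D branch during training, so that at test time building extraction can run on optical images alone.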