Pixel-level segmentation with deep learning is widely applied to interpretation tasks for high-resolution remote sensing imagery, such as change detection and object extraction. However, most existing work focuses on designing deep learning model structures; few methods consider improving segmentation accuracy at the inference stage with an already trained model. The main advantage of improving accuracy at the inference stage is that it requires no retraining and therefore raises detection accuracy in a cost-effective way. In this letter, a novel decision-level fusion method based on Dempster–Shafer (DS) theory, named DeepDSFusion, is proposed. As a general method, it can be seamlessly integrated into any pixel-level segmentation model. In DeepDSFusion, several classical data augmentation operations, such as rotation and scaling, are first applied to obtain multiscale probability maps. Then, DS theory is used to fuse these multiscale probability maps into a single probability map. Finally, a simple threshold is applied to the fused probability map to obtain the segmentation result. Experiments on three classical pixel-level segmentation tasks on high-resolution imagery, namely deforestation detection, road extraction, and land-cover mapping, demonstrate the effectiveness of DeepDSFusion.
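To make the fusion step concrete, the following is a minimal sketch of the pipeline described above, not the authors' implementation. It assumes binary segmentation and treats each augmented prediction as a Bayesian basic probability assignment (mass only on the two singleton hypotheses, none on the full frame of discernment), under which Dempster's rule of combination reduces to a normalized product of per-source masses. The `model`, `augment`, and `invert` names in the usage comment are hypothetical placeholders.

```python
import numpy as np

def ds_fuse(prob_maps, eps=1e-12):
    """Fuse per-augmentation foreground-probability maps with Dempster's rule.

    Each map is read as a per-pixel Bayesian basic probability assignment:
    m({fg}) = p and m({bg}) = 1 - p, with no mass on the uncertain set.
    Under this simplifying assumption, Dempster's rule of combination
    reduces to a normalized product of the per-source masses.
    """
    m_fg = np.ones_like(prob_maps[0], dtype=np.float64)
    m_bg = np.ones_like(prob_maps[0], dtype=np.float64)
    for p in prob_maps:
        m_fg *= p          # evidence supporting "foreground" from this source
        m_bg *= 1.0 - p    # evidence supporting "background" from this source
    # Normalization discards the conflicting mass, as in Dempster's rule.
    return m_fg / (m_fg + m_bg + eps)

# Inference-time use: run the trained model on rotated / rescaled copies of the
# image, map each prediction back to the original geometry, fuse, and threshold.
# prob_maps = [invert(model(augment(img))) for augment, invert in tta_transforms]
# mask = ds_fuse(prob_maps) > 0.5
```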