The GIST and RIST of Iterative Self-Training for Semi-Supervised Segmentation
- Resource Type
- Conference
- Authors
- Teh, Eu Wern; DeVries, Terrance; Duke, Brendan; Jiang, Ruowei; Aarabi, Parham; Taylor, Graham W.
- Source
- 2022 19th Conference on Robots and Vision (CRV), pp. 58-66, May 2022
- Subject
- Computing and Processing; Training; Degradation; Semantics; Semisupervised learning; Behavioral sciences; Iterative methods; Task analysis; semi-supervised learning; semantic segmentation; self-training
- Language
- English
- Abstract
We consider the task of semi-supervised semantic segmentation, where we aim to produce pixel-wise semantic object masks given only a small number of human-labeled training examples. We focus on iterative self-training methods in which we explore the behavior of self-training over multiple refinement stages. We show that iterative self-training leads to performance degradation if done naïvely with a fixed ratio of human-labeled to pseudo-labeled training examples. We propose Greedy Iterative Self-Training (GIST) and Random Iterative Self-Training (RIST) strategies that alternate between training on either human-labeled data or pseudo-labeled data at each refinement stage, resulting in a performance boost rather than degradation. We further show that GIST and RIST can be combined with existing semi-supervised learning methods to boost performance.
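The core idea in the abstract, choosing between human-labeled and pseudo-labeled data at each refinement stage rather than mixing them at a fixed ratio, can be sketched as a stage-selection schedule. The sketch below is illustrative only: the function names and the scalar `eval_fn` stand-in are assumptions, not the authors' implementation, and the actual method trains a full segmentation model and regenerates pseudo-labels at every stage.

```python
import random

def rist_schedule(num_stages, seed=0):
    """RIST sketch: at each refinement stage, randomly pick which data
    source to train on -- human-labeled or pseudo-labeled."""
    rng = random.Random(seed)
    return [rng.choice(["human", "pseudo"]) for _ in range(num_stages)]

def gist_schedule(num_stages, eval_fn):
    """GIST sketch: at each stage, greedily keep the source whose
    resulting model scores higher on a held-out set.

    eval_fn(stage, source) is a hypothetical stand-in for training on
    that source and evaluating the refined model."""
    schedule = []
    for stage in range(num_stages):
        scores = {src: eval_fn(stage, src) for src in ("human", "pseudo")}
        schedule.append(max(scores, key=scores.get))
    return schedule
```

In the full method, each stage would retrain the segmenter on the chosen split and then refresh the pseudo-labels with the new model; the schedule above only captures the alternation that avoids the degradation seen with a fixed labeled-to-pseudo-labeled ratio.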