Rethinking Data Distillation: Do Not Overlook Calibration
- Resource Type
- Conference
- Authors
- Zhu, Dongyao; Fang, Yanbo; Lei, Bowen; Xie, Yiqun; Xu, Dongkuan; Zhang, Jie; Zhang, Ruqi
- Source
- 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4912-4922, Oct. 2023
- Subject
- Computing and Processing; Signal Processing and Analysis; Training; Temperature distribution; Computer vision; Computer network reliability; Neural networks; Encoding; Calibration
- ISSN
- 2380-7504
Neural networks trained on distilled data often produce over-confident outputs and require correction by calibration methods. Existing calibration methods such as temperature scaling and mixup work well for networks trained on original large-scale data. However, we find that these methods fail to calibrate networks trained on data distilled from large source datasets. In this paper, we show that distilled data lead to networks that are not calibratable due to (i) a more concentrated distribution of the maximum logits and (ii) the loss of information that is semantically meaningful but unrelated to classification tasks. To address this problem, we propose Masked Temperature Scaling (MTS) and Masked Distillation Training (MDT), which mitigate the limitations of distilled data and achieve better calibration results while maintaining the efficiency of dataset distillation.
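The abstract names temperature scaling as the baseline calibration method that fails on networks trained with distilled data. For context, below is a minimal sketch of standard temperature scaling (Guo et al., 2017), which fits a single scalar T on held-out validation logits; the function and variable names (`fit_temperature`, `val_logits`, `val_labels`) are illustrative assumptions, not code from the paper, and the masked variants (MTS, MDT) proposed in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

# Sketch of standard temperature scaling: learn one scalar T > 0 that
# minimizes the negative log-likelihood of softmax(logits / T) on a
# held-out validation set. Names below are illustrative assumptions.
def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor) -> float:
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)
    nll = nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage: rescale test-time logits by the fitted temperature before softmax.
# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = torch.softmax(test_logits / T, dim=1)
```

Since temperature scaling only rescales logits monotonically, it leaves accuracy unchanged and merely softens confidence, which is why a highly concentrated distribution of maximum logits, as the abstract describes for distilled data, limits how much it can help.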