Abstract

We study the problem of Continual Distillation Learning (CDL), which considers Knowledge Distillation (KD) in the Continual Learning (CL) setup. A teacher model and a student model learn a sequence of tasks, and the knowledge of the teacher is distilled to the student in order to improve the student model. We introduce a novel method named CDL-Prompt that leverages prompt-based continual learning models to build the teacher-student pair. We investigate how to utilize the prompts of the teacher model in the student model for knowledge distillation, and propose an attention-based prompt mapping scheme that adapts the teacher's prompts for the student. We demonstrate that our method can be applied to different prompt-based continual learning models, such as L2P, DualPrompt, and CODA-Prompt, to improve their performance using powerful teacher models. While recent CL methods focus on prompt learning, we show that our method can be used to build efficient CL models via prompt-based knowledge distillation.
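For context, knowledge distillation typically trains the student to match the teacher's softened predictions. Below is a minimal sketch of the generic Hinton-style soft-target KD loss; the exact distillation objective used by CDL-Prompt may differ, so treat the function and its temperature parameter as illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation loss: KL(teacher || student) at temperature T,
    scaled by T^2 (customary so gradients stay comparable across T) and
    averaged over the batch."""
    p_t = softmax(teacher_logits / T)
    log_p_t = np.log(p_t + 1e-12)
    log_p_s = np.log(softmax(student_logits / T) + 1e-12)
    return (T * T) * np.mean(np.sum(p_t * (log_p_t - log_p_s), axis=-1))
```

In practice this term is added to the student's usual cross-entropy loss on the ground-truth labels, with a weight balancing the two.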

CDL-Prompt

CDL-Prompt is a framework for continual distillation learning that can be integrated into various prompt-based methods to improve performance.
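The core idea is to reuse the teacher's selected prompts inside the student via an attention-based mapping, since the two backbones generally have different embedding dimensions (e.g. ViT-Large uses 1024, ViT-Base 768). The sketch below shows one plausible form of such a mapping; the query/projection parameter names and shapes are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def map_teacher_prompts(teacher_prompts, student_queries, W_k, W_v):
    """Cross-attention from learnable student queries to projected teacher prompts.

    teacher_prompts: (n_p, d_t) prompts the teacher selected for this input
    student_queries: (n_q, d_s) hypothetical learnable queries in the student space
    W_k, W_v:        (d_t, d_s) hypothetical learnable key/value projections
    Returns prompts of shape (n_q, d_s) to prepend to the student's input tokens.
    """
    keys = teacher_prompts @ W_k                                  # (n_p, d_s)
    values = teacher_prompts @ W_v                                # (n_p, d_s)
    scores = student_queries @ keys.T / np.sqrt(keys.shape[-1])   # (n_q, n_p)
    return softmax(scores) @ values                               # (n_q, d_s)
```

The attention weights let the student softly select which teacher prompts are relevant, while the projections bridge the dimensionality gap between backbones.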


Experiment Results

CDL-Prompt (using a CODA-Prompt baseline and a ViT-Base backbone) outperforms the other prompt-based methods on both the CIFAR-100 and ImageNet-R datasets. Accuracy refers to the average accuracy over all 10 tasks. We train multiple times and report the average.

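The tables below report average accuracy and forgetting. As a reference, here is a sketch of how these two standard continual-learning metrics are commonly computed from a per-task accuracy matrix; the exact definitions used for the tables are assumed to follow this common convention.

```python
import numpy as np

def cl_metrics(acc):
    """Standard continual-learning metrics from an accuracy matrix.

    acc[i][j] = accuracy on task j after training on task i (defined for
    j <= i; entries with j > i may be filled with 0).
    Returns (average accuracy after the final task, average forgetting),
    where forgetting for an old task is its best earlier accuracy minus
    its accuracy after the final task.
    """
    A = np.asarray(acc, dtype=float)
    avg_acc = float(A[-1].mean())
    forgetting = float(np.mean(
        [A[:-1, j].max() - A[-1, j] for j in range(A.shape[0] - 1)]
    ))
    return avg_acc, forgetting
```

Lower forgetting means the model retains earlier tasks better as new ones are learned.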

More Prompt-based Models

Adding more methods: CDL-Prompt not only uses CODA-Prompt as the baseline model but also reports results for several other prompt-based methods. We will continue to add and update our CDL method on more prompt-based models.
CIFAR-100

| #  | Teacher   | Student   | Baseline | Tasks | Accuracy (%) | Forgetting (%) |
|----|-----------|-----------|----------|-------|--------------|----------------|
| 1  | —         | ViT-Large | CODA [3] | 10    | 88.97        | 3.97           |
| 9  | ViT-Large | ViT-Base  | CODA [3] | 10    | 87.69        | 5.40           |
| 2  | —         | ViT-Large | Dual [2] | 10    | 87.56        | 4.99           |
| 7  | ViT-Large | ViT-Base  | Dual [2] | 10    | 86.57        | 5.72           |
| 3  | —         | ViT-Large | L2P [1]  | 10    | 86.36        | 5.98           |
| 8  | —         | ViT-Base  | CODA [3] | 10    | 86.16        | 5.63           |
| 6  | —         | ViT-Base  | Dual [2] | 10    | 84.66        | 5.46           |
| 5  | ViT-Large | ViT-Base  | L2P [1]  | 10    | 83.78        | 7.43           |
| 15 | ViT-Base  | ViT-Small | CODA [3] | 10    | 83.24        | 7.63           |
| 4  | —         | ViT-Base  | L2P [1]  | 10    | 83.02        | 6.06           |
| 13 | ViT-Base  | ViT-Small | Dual [2] | 10    | 82.29        | 6.60           |
| 14 | —         | ViT-Small | CODA [3] | 10    | 82.18        | 6.48           |
| 11 | ViT-Base  | ViT-Small | L2P [1]  | 10    | 80.24        | 7.31           |
| 12 | —         | ViT-Small | Dual [2] | 10    | 79.85        | 6.12           |
| 10 | —         | ViT-Small | L2P [1]  | 10    | 77.71        | 7.12           |
| 21 | ViT-Base  | ViT-Tiny  | CODA [3] | 10    | 70.05        | 14.33          |
| 19 | ViT-Base  | ViT-Tiny  | Dual [2] | 10    | 68.58        | 10.79          |
| 17 | ViT-Base  | ViT-Tiny  | L2P [1]  | 10    | 67.61        | 10.99          |
| 20 | —         | ViT-Tiny  | CODA [3] | 10    | 65.05        | 13.55          |
| 18 | —         | ViT-Tiny  | Dual [2] | 10    | 62.63        | 14.74          |
| 16 | —         | ViT-Tiny  | L2P [1]  | 10    | 60.68        | 13.98          |

— : no teacher (the baseline model trained without distillation).
ImageNet-R
| #  | Teacher   | Student   | Baseline | Tasks | Accuracy (%) | Forgetting (%) |
|----|-----------|-----------|----------|-------|--------------|----------------|
| 1  | —         | ViT-Large | CODA [3] | 10    | 78.79        | 4.46           |
| 9  | ViT-Large | ViT-Base  | CODA [3] | 10    | 77.95        | 5.64           |
| 7  | ViT-Large | ViT-Base  | Dual [2] | 10    | 76.36        | 4.27           |
| 8  | —         | ViT-Base  | CODA [3] | 10    | 75.78        | 5.70           |
| 2  | —         | ViT-Large | Dual [2] | 10    | 74.95        | 4.93           |
| 3  | —         | ViT-Large | L2P [1]  | 10    | 74.19        | 5.31           |
| 5  | ViT-Large | ViT-Base  | L2P [1]  | 10    | 74.01        | 4.26           |
| 6  | —         | ViT-Base  | Dual [2] | 10    | 72.44        | 3.80           |
| 4  | —         | ViT-Base  | L2P [1]  | 10    | 71.59        | 5.65           |
| 15 | ViT-Base  | ViT-Small | CODA [3] | 10    | 70.06        | 8.70           |
| 13 | ViT-Base  | ViT-Small | Dual [2] | 10    | 67.75        | 6.61           |
| 14 | —         | ViT-Small | CODA [3] | 10    | 67.44        | 8.52           |
| 11 | ViT-Base  | ViT-Small | L2P [1]  | 10    | 65.04        | 7.38           |
| 12 | —         | ViT-Small | Dual [2] | 10    | 64.27        | 5.93           |
| 10 | —         | ViT-Small | L2P [1]  | 10    | 61.95        | 6.52           |
| 19 | ViT-Base  | ViT-Tiny  | Dual [2] | 10    | 53.88        | 9.60           |
| 21 | ViT-Base  | ViT-Tiny  | CODA [3] | 10    | 53.13        | 13.92          |
| 17 | ViT-Base  | ViT-Tiny  | L2P [1]  | 10    | 51.00        | 9.18           |
| 20 | —         | ViT-Tiny  | CODA [3] | 10    | 50.23        | 12.75          |
| 18 | —         | ViT-Tiny  | Dual [2] | 10    | 46.54        | 10.25          |
| 16 | —         | ViT-Tiny  | L2P [1]  | 10    | 44.98        | 8.79           |

— : no teacher (the baseline model trained without distillation).

References

Official Code: Source code from the authors of the method
  1. Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149, 2022. [ Official Code | Paper ]
  2. Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, et al. Dualprompt: Complementary prompting for rehearsal-free continual learning. In European Conference on Computer Vision, pages 631–648. Springer, 2022. [ Official Code | Paper ]
  3. James Seale Smith, Leonid Karlinsky, Vyshnavi Gutta, Paola Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Rameswar Panda, Rogerio Feris, and Zsolt Kira. Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11909–11919, June 2023. [ Official Code | Paper ]

Code (coming soon...)

CDL

The code for CDL-Prompt.

BibTeX

Please cite CDL if it helps your research:
@misc{2024CDL,
  title={Continual Distillation Learning},
  author={Qifan Zhang and Yunhui Guo and Yu Xiang},
  year={2024},
  eprint={2407.13911},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Contact

Send any comments or questions to Qifan Zhang: qifan.zhang@utdallas.edu

Acknowledgements

This work was supported in part by the DARPA Perceptually-enabled Task Guidance (PTG) Program under contract number HR00112220005 and the Sony Research Award Program.