LightCL

1 Introduction

LightCL is a compact algorithm for continual learning (CL). Specifically, we consider two factors of generalizability, learning plasticity and memory stability, and design metrics for both to quantitatively assess the generalizability of neural networks during CL. This evaluation shows that generalizability varies significantly across the layers of a neural network. Thus, as shown in the following figure, we Maintain Generalizability by freezing the well-generalized parts, avoiding their resource-intensive training, and Memorize Feature Patterns by stabilizing the feature extraction of previous tasks to enhance generalizability in the less-generalized parts. This costs a little extra memory, far less than the savings from freezing. Experiments illustrate that LightCL outperforms other state-of-the-art methods and substantially reduces memory footprint. We also verify the effectiveness of LightCL on an edge device.
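The two mechanisms above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the actual LightCL implementation: `ToyNet`, `freeze_generalized`, and `feature_pattern_loss` are hypothetical names, and the tiny two-stage network merely stands in for the early/late layers of the real ResNet18 backbone.

```python
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    # Toy two-stage network standing in for ResNet18's early (generalized)
    # and late (less-generalized) layers.
    def __init__(self):
        super().__init__()
        self.early = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.late = nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(8, 10)

    def forward(self, x):
        f = self.late(self.early(x))
        return self.head(f.mean(dim=(2, 3))), f

def freeze_generalized(model):
    # Maintain Generalizability: freeze the well-generalized (early) part,
    # so it incurs no gradient computation or optimizer state.
    for p in model.early.parameters():
        p.requires_grad = False

def feature_pattern_loss(feat, stored_feat, vital_idx, beta=2e-4):
    # Memorize Feature Patterns: keep the vital channels of the current
    # feature map close to those recorded on previous tasks.
    return beta * ((feat[:, vital_idx] - stored_feat[:, vital_idx]) ** 2).mean()

model = ToyNet()
freeze_generalized(model)
x = torch.randn(2, 3, 8, 8)
logits, feat = model(x)
stored = feat.detach().clone()                # feature patterns from a "previous task"
# Select the top-Ratio channels by mean activation magnitude as "vital".
vital = torch.topk(feat.abs().mean(dim=(0, 2, 3)), k=2).indices
loss = feature_pattern_loss(feat, stored, vital)
```

During training on a new task, this regularization term would be added to the task loss, weighted by `Beta` as in the parameters below.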

2 Implementation

Requirements

  • Python==3.11.4
  • numpy==1.24.3
  • torch==2.1.2
  • torchvision==0.16.2
  • tqdm==4.65.0, etc. (the remaining packages can be easily installed with pip)

Usage

Run LightCL.py with commands like the following:

  • Default
python LightCL.py 
  • With sparse
python LightCL.py --Sparse

Here the major parameters are:

  • lr: learning rate (default=0.01)
  • Beta: hyperparameter weighting the regulation loss (default=0.0002)
  • BufferNum: size of the Memory Buffer (default=15)
  • Ratio: ratio of vital feature maps to select (default=0.15)
  • Seed: random seed (default=0)
  • pretrain: whether to use the pre-trained model (default=True)
  • Dataset: dataset to train on (default: CIFAR10; other option: TinyImageNet)
  • Sparse: whether to enable the sparse variant (default=False)
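For reference, the parameter list above could correspond to an argparse interface like the sketch below. This is a hypothetical reconstruction from the README, not the actual CLI of LightCL.py, which may differ in flag names, types, and defaults.

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of LightCL.py's command-line interface
    # from the documented parameter list.
    p = argparse.ArgumentParser(description="LightCL")
    p.add_argument("--lr", type=float, default=0.01, help="learning rate")
    p.add_argument("--Beta", type=float, default=0.0002,
                   help="weight of the regulation loss")
    p.add_argument("--BufferNum", type=int, default=15,
                   help="size of the Memory Buffer")
    p.add_argument("--Ratio", type=float, default=0.15,
                   help="ratio of vital feature maps to select")
    p.add_argument("--Seed", type=int, default=0, help="random seed")
    p.add_argument("--pretrain", default=True,
                   type=lambda s: s.lower() not in ("false", "0"),
                   help="whether to use the pre-trained model")
    p.add_argument("--Dataset", default="CIFAR10",
                   choices=["CIFAR10", "TinyImageNet"], help="dataset to train on")
    p.add_argument("--Sparse", action="store_true",
                   help="enable the sparse variant (bare flag, as in the usage above)")
    return p

default_args = build_parser().parse_args([])
sparse_args = build_parser().parse_args(["--Sparse", "--Dataset", "TinyImageNet"])
```

Note that `--Sparse` is modeled as a bare flag, matching the `python LightCL.py --Sparse` invocation shown above.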

Note that the pre-trained model (ResNet18_for_LightCL.pth) is already included in the directory. If you want to pre-train the model yourself, download the ImageNet32x32 dataset into data/ and run get_parameter.py.
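Mechanically, producing and reusing such a checkpoint follows the standard PyTorch save/load pattern, sketched below with a tiny stand-in model. This only illustrates the mechanism; get_parameter.py and the real ResNet18 checkpoint may differ in detail.

```python
import torch
import torch.nn as nn

# Tiny stand-in for the ResNet18 backbone that ResNet18_for_LightCL.pth holds.
net = nn.Linear(4, 2)
torch.save(net.state_dict(), "demo_ckpt.pth")      # what a pre-training script emits

fresh = nn.Linear(4, 2)                            # new model with random weights
fresh.load_state_dict(torch.load("demo_ckpt.pth")) # restore the saved parameters

weights_match = all(
    torch.equal(a, b) for a, b in zip(net.parameters(), fresh.parameters())
)
```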

3 Cite

@inproceedings{wang2025lightCL,
  author    = {Wang, Zeqing and Cheng, Fei and Ji, Kangye and Huang, Bohu},
  title     = {LightCL: Compact Continual Learning with Low Memory Footprint For Edge Device},
  booktitle = {Proceedings of the 30th Asia and South Pacific Design Automation Conference},
  year      = {2025},
}

About

[ASP-DAC 2025] LightCL: Compact Continual Learning with Low Memory Footprint For Edge Device
