14 October 2024

Publication: Adaptive LoRA Merging for Efficient Domain Incremental Learning

Large pre-trained models have become the default choice for many Machine Learning applications, largely because of the wide range of problems on which they perform effectively. However, because their training distribution is fixed and limited, these models still need to be adapted to perform well on most downstream tasks. Low-Rank Adaptation (LoRA) is a Parameter-Efficient Fine-Tuning (PEFT) technique that adapts large pre-trained models to downstream tasks by modifying only a small subset of the model's original parameters. It remains unclear, however, how such methods behave in dynamic Continual Learning scenarios such as Domain Incremental Learning (DIL).

In this work, EMERGE partners from the University of Pisa propose an adaptive LoRA merging approach for DIL that dynamically computes the merging coefficients, allowing continuous adaptation to new domains while adjusting the influence of previous ones. They evaluate their approach against current state-of-the-art merging algorithms on two DIL benchmarks, PACS and OfficeHome. The results show that the adaptive merging technique matches or exceeds the performance of fixed-weight methods while eliminating the need for manual weight selection. In particular, the method maintains high accuracy with minimal memory requirements, using as little as one sample per class to learn the merging coefficients. This work showcases a promising use of LoRA adapters and merging algorithms in continual learning and points to a valuable direction for future research.
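The core idea, merging per-domain LoRA updates with learned rather than fixed coefficients, can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function names are hypothetical, each adapter's low-rank update is given as an already-formed dense matrix for simplicity, and the softmax parameterisation of the coefficients is an assumption.

```python
import math

def softmax(logits):
    # Turn raw, learnable merging logits into coefficients that sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def merge_lora_deltas(deltas, logits):
    """Merge per-domain LoRA updates into one weight update.

    deltas: list of K matrices (each the dense form of a low-rank
            update B @ A from one domain's adapter), same shape.
    logits: K raw coefficients; in the adaptive setting these would be
            optimised on a tiny buffer (e.g. one sample per class).
    Returns the coefficient-weighted sum of the updates and the
    normalised coefficients.
    """
    coeffs = softmax(logits)
    rows, cols = len(deltas[0]), len(deltas[0][0])
    merged = [[0.0] * cols for _ in range(rows)]
    for c, delta in zip(coeffs, deltas):
        for i in range(rows):
            for j in range(cols):
                merged[i][j] += c * delta[i][j]
    return merged, coeffs
```

With equal logits this reduces to a plain average of the domain updates; the adaptive method instead tunes the logits on a small held-out buffer so that more relevant domains receive larger coefficients.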

Read the paper at the link below.