Selective Aggregation for Low-Rank Adaptation in Federated Learning
Abstract
We investigate LoRA in federated learning (FL) through the lens of an asymmetry analysis of the learned A and B matrices. In doing so, we uncover that the A matrices are responsible for learning general knowledge, while the B matrices focus on capturing client-specific knowledge. Based on this finding, we introduce Federated Share-A Low-Rank Adaptation (FedSA-LoRA), which employs two low-rank trainable matrices, A and B, to model the weight update, but shares only the A matrices with the server for aggregation. Moreover, we examine the relationship between the learned A and B matrices in other LoRA variants, such as rsLoRA and VeRA, and find a consistent pattern. Consequently, we extend FedSA-LoRA to these variants, resulting in FedSA-rsLoRA and FedSA-VeRA. In this way, we establish a general paradigm for integrating LoRA with FL, offering guidance for future work combining subsequent LoRA variants with FL. Extensive experimental results on natural language understanding and generation tasks demonstrate the effectiveness of the proposed method.
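To make the selective-aggregation idea concrete, the sketch below shows one communication round of FedSA-LoRA-style sharing in PyTorch: each client holds LoRA factors A and B so that the weight update is dW = B @ A, the server averages only the A factors, and the B factors stay local. This is a minimal illustration under assumed names (the `client_states` structure, the `lora_A`/`lora_B` key convention, and the helper functions are all hypothetical), not the authors' implementation; client-size weighting, optimizers, and transport are omitted.

```python
import torch

def aggregate_lora_A(client_states):
    """Server side: FedAvg restricted to the A factors.

    client_states: list of dicts mapping parameter names to tensors;
    keys containing 'lora_A' are treated as the shared low-rank factors
    (a naming assumption for this sketch).
    """
    a_keys = [k for k in client_states[0] if "lora_A" in k]
    return {
        k: torch.stack([s[k] for s in client_states]).mean(dim=0)
        for k in a_keys
    }

def apply_global_A(client_state, global_A):
    """Client side: overwrite local A factors with the aggregated ones.

    The B factors are never touched, so they remain client-specific.
    """
    client_state.update({k: v.clone() for k, v in global_A.items()})
    return client_state

# Toy round: two clients, one LoRA layer with rank r=4 and d=k=8,
# so the weight update dW = B @ A has shape (8, 8).
if __name__ == "__main__":
    clients = [
        {"layer.lora_A": torch.randn(4, 8), "layer.lora_B": torch.randn(8, 4)}
        for _ in range(2)
    ]
    global_A = aggregate_lora_A(clients)                       # server averages A only
    clients = [apply_global_A(c, global_A) for c in clients]   # broadcast A back
```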
Community
The following related papers were recommended by the Semantic Scholar API:
- Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA (2024)
- FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations (2024)
- Exploring Selective Layer Fine-Tuning in Federated Learning (2024)
- Communication-Efficient Federated Low-Rank Update Algorithm and its Connection to Implicit Regularization (2024)
- FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts (2024)