appliedml42 
posted an update 1 day ago
I am trying to find resources that explain how to protect against degradation of instruction-following capability caused by LoRA fine-tuning.

For example, I fine-tuned Llama 3.2 3B Instruct on the cornell-movie-review-data/rotten_tomatoes dataset and saw a significant drop in IFEval benchmark scores.

I would appreciate any pointers 🙏🏽
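One commonly suggested mitigation for this kind of forgetting is replay (data mixing): blend a fraction of general instruction-following examples into the task dataset so the model keeps seeing instruction-style data during fine-tuning. Below is a minimal, dependency-free sketch of that idea; the function name, the placeholder example strings, and the 20% mixing ratio are illustrative assumptions, not a prescribed recipe.

```python
import random

def mix_datasets(task_examples, instruct_examples, replay_ratio=0.2, seed=0):
    """Blend general instruction-following examples into a task dataset.

    replay_ratio is the fraction of the *final* mixture drawn from
    instruct_examples. Mixing replay data during LoRA fine-tuning is one
    common way to reduce forgetting of instruction-following behavior.
    """
    rng = random.Random(seed)
    # Solve n_replay / (n_task + n_replay) = replay_ratio for n_replay.
    n_replay = int(len(task_examples) * replay_ratio / (1 - replay_ratio))
    # Sample replay examples with replacement from the instruction pool.
    replay = [rng.choice(instruct_examples) for _ in range(n_replay)]
    mixed = task_examples + replay
    rng.shuffle(mixed)
    return mixed

# Toy usage with placeholder strings standing in for real training examples.
task = [f"review_{i}" for i in range(80)]
instruct = [f"instruct_{i}" for i in range(1000)]
mixed = mix_datasets(task, instruct, replay_ratio=0.2)
```

With 80 task examples and a 0.2 ratio, the mixture adds 20 instruction examples, so roughly one in five training steps still reinforces instruction following.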