LoRA-TMLR-2024's Collections:
- Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)
- Continued Pretraining - Code (StarCoder-Python)
- Instruction Finetuning - Math (MetaMathQA)
- Continued Pretraining - Math (OpenWebMath)
Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)
Updated Sep 26

Full finetuning and LoRA adapters for Llama-2-7B finetuned on Magicoder-Evol-Instruct-110K.
Models in this collection:
- LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128 (updated Sep 27, 102 downloads)
- LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32 (updated Oct 16, 65 downloads)
- LoRA-TMLR-2024/magicoder-lora-rank-256-alpha-512 (updated Sep 27, 1 download)
- LoRA-TMLR-2024/magicoder-lora-rank-2048-alpha-4096 (updated Sep 26, 2 downloads)
- LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05 (updated Sep 27)
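The repository names encode each adapter's LoRA rank and scaling alpha (alpha = 2 × rank throughout the collection). Below is a minimal sketch of loading one of these adapters onto the Llama-2-7B base model with `transformers` and `peft`. The base checkpoint name `meta-llama/Llama-2-7b-hf` and the example prompt are assumptions not stated on this page; any of the adapter repos listed above can be substituted for the rank-64 one used here.

```python
# Sketch: attach a LoRA adapter from this collection to Llama-2-7B.
# Assumes "meta-llama/Llama-2-7b-hf" is the base checkpoint (not stated
# on this page) and that you have access to it on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the rank-64 / alpha-128 adapter from this collection.
model = PeftModel.from_pretrained(
    base, "LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128"
)

# Optionally merge the adapter weights into the base model for inference.
model = model.merge_and_unload()

# Hypothetical usage: generate code from a prompt.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the full-finetuning checkpoint (magicoder-full-finetuning-lr-5e-05), no adapter step is needed; it can be loaded directly with `AutoModelForCausalLM.from_pretrained`.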