---
library_name: transformers
tags:
- peft
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
---
A LoRA adapter for kaitchup/Maixtchup-4x7b, briefly fine-tuned on the UltraChat 200k dataset (HuggingFaceH4/ultrachat_200k).

To load and use this adapter:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

model_name = "kaitchup/Maixtchup-4x7b"

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# 4-bit NF4 quantization with double quantization and float16 compute
compute_dtype = getattr(torch, "float16")
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)

# Base model, quantized and dispatched automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    attn_implementation="flash_attention_2",
)
model.config.use_cache = True

# Attach the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, "kaitchup/Maixtchup-4x7b-QLoRA-SFT-UltraChat")
```
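Once the adapter is attached, the model can be queried like any causal LM. A minimal generation sketch, assuming `model` and `tokenizer` are the objects created by the loading snippet above (the plain-text prompt and sampling parameters are illustrative choices, not the adapter's official chat template):

```python
# Assumes `model` and `tokenizer` come from the loading snippet above.
# The prompt format here is a plain-text assumption; check the tokenizer's
# chat template for the format actually used during fine-tuning.
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since `use_cache` is enabled on the base model, generation reuses past key/value states, which is the setting you want for inference (it is typically disabled only during gradient checkpointing in training).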