smd29 committed on
Commit
6a2fd72
1 Parent(s): 4b20464

Update README.md

Files changed (1)
  1. README.md +62 -7
README.md CHANGED
@@ -1,7 +1,62 @@
- ---
- language:
- - en
- tags:
- - code
- ---
- This is finetuned the LLAMA2-7b-chat model using LoRA.
+ # Model Card for smd29/llama-2-7b-chat-finetuned
+
+ This is the Llama-2-7b-chat model fine-tuned with LoRA (Low-Rank Adaptation).
+
+ This model card is based on the default Hugging Face template and was generated from [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
+
+ ## Training Details
+
+ ### Training Data
+
+ The model was fine-tuned on the [mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) dataset.
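+
+ A minimal sketch of how to inspect this training data, assuming the `datasets` library is installed (the dataset exposes a single `text` column with prompts already wrapped in the Llama 2 chat format):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the ~1k-example Guanaco subset reformatted for Llama 2 chat
+ dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")
+
+ print(dataset)               # number of rows and column names
+ print(dataset[0]["text"])    # one example in "<s>[INST] ... [/INST] ... </s>" form
+ ```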
+
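+ ### Training Procedure
+
+ The exact LoRA hyperparameters for this model are not recorded in this card. The snippet below is only an illustrative sketch of a typical PEFT LoRA setup, assuming the `peft` library and `meta-llama/Llama-2-7b-chat-hf` as the base checkpoint; the rank, alpha, dropout, and target modules are assumptions, not the values actually used.
+
+ ```python
+ import torch
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM
+
+ base_model = "meta-llama/Llama-2-7b-chat-hf"
+
+ model = AutoModelForCausalLM.from_pretrained(
+     base_model, torch_dtype=torch.float16, device_map="auto"
+ )
+
+ # Illustrative LoRA configuration (assumed values)
+ peft_config = LoraConfig(
+     r=16,
+     lora_alpha=32,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM",
+ )
+
+ # Wrap the base model so that only the low-rank adapter weights are trainable
+ model = get_peft_model(model, peft_config)
+ model.print_trainable_parameters()
+ ```
+
+ The wrapped model can then be trained on the dataset above with a standard `transformers` `Trainer` or `trl`'s `SFTTrainer`.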
+
+ ## Usage
+
+ Install the dependencies:
+
+ ```bash
+ pip install transformers accelerate
+ ```
+
+ ```python
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "smd29/llama-2-7b-chat-finetuned"
+ prompt = "What is a large language model?"
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+
+ # Text-generation pipeline running the fine-tuned model in fp16
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ # Wrap the prompt in the Llama 2 chat instruction format
+ sequences = pipeline(
+     f'<s>[INST] {prompt} [/INST]',
+     do_sample=True,
+     top_k=10,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+     max_length=200,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
+
+ # A second, longer generation with a different prompt
+ prompt = "explain me in a simple way,what the equation for finding the nth triangle number is and how it can be proved by using only high school level math. please give each step of a proof using LaTeX."
+ sequences = pipeline(
+     f'<s>[INST] {prompt} [/INST]',
+     do_sample=True,
+     top_k=10,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+     max_length=400,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
+ ```
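+
+ The `<s>[INST] ... [/INST]` wrapper above is the standard Llama 2 chat prompt format. If you also want to pass a system prompt, the documented format adds a `<<SYS>>` block (sketch below; the system text is just an example):
+
+ ```python
+ system = "You are a helpful, concise assistant."
+ user = "What is a large language model?"
+
+ # Llama 2 chat format with an optional system prompt block
+ prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
+ ```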