aashish1904 committed
Commit 3b8dbc4
1 Parent(s): 477b20c

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +74 -0
README.md ADDED
@@ -0,0 +1,74 @@
---
library_name: transformers
tags:
- trl
- sft
license: apache-2.0
datasets:
- gokaygokay/prompt-enhancement-75k
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-135M-Instruct
pipeline_tag: text-generation
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/SmolLM2-Prompt-Enhance-GGUF

This is a quantized version of [gokaygokay/SmolLM2-Prompt-Enhance](https://huggingface.co/gokaygokay/SmolLM2-Prompt-Enhance) created using llama.cpp.
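
Below is a minimal sketch of running one of these GGUF files with llama-cpp-python; the quant filename pattern is an assumption, so match it against an actual `.gguf` file in this repo's file list:

```python
# Sketch: load a GGUF quant of SmolLM2-Prompt-Enhance via llama-cpp-python.
# The filename glob below is an assumption -- pick a .gguf file from this repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/SmolLM2-Prompt-Enhance-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; any quant in the repo works
)

# The model expands a short prompt ("cat") into a detailed, caption-style
# description; repeat_penalty mirrors the setting in the original card below.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "cat"}],
    max_tokens=256,
    repeat_penalty=1.2,
)
print(out["choices"][0]["message"]["content"])
```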

# Original Model Card

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "gokaygokay/SmolLM2-Prompt-Enhance"
tokenizer_id = "HuggingFaceTB/SmolLM2-135M-Instruct"

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# Model response generation functions
def generate_response(model, tokenizer, instruction, device="cpu"):
    """Generate a response from the model based on an instruction."""
    messages = [{"role": "user", "content": instruction}]
    input_text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
    outputs = model.generate(
        inputs, max_new_tokens=256, repetition_penalty=1.2
    )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response

def print_response(response):
    """Print the model's response."""
    print("Model response:")
    print(response.split("assistant\n")[-1])
    print("-" * 100)

prompt = "cat"

response = generate_response(model, tokenizer, prompt, device)
print_response(response)

# Example output:
# a gray cat with white fur and black eyes is in the center of an open window on a concrete floor.
# The front wall has two large windows that have light grey frames behind them.
# here is a small wooden door to the left side of the frame at the bottom right corner.
# A metal fence runs along both sides of the image from top down towards the middle ground.
# Behind the cats face away toward the camera's view it appears as if there is another cat sitting next to the one
# they're facing forward against the glass surface above their head.
```
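
Note that `tokenizer.decode` returns the full chat transcript, including the prompt, so `print_response` splits on the `assistant\n` turn marker to show only the enhanced prompt generated by the model.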

### Training Script

https://colab.research.google.com/drive/1Gqmp3VIcr860jBnyGYEbHtCHcC49u0mo?usp=sharing