---
license: cc-by-sa-4.0
inference: false
---

# SLIM-SUMMARY

**slim-summary** is a small, specialized model fine-tuned for summarization function calls, generating output consisting of a Python list of distinct summary points.

As an experimental feature, an optional list size can be passed with the params when invoking the model, to guide it toward a specific number of response elements.

Input is a text passage, and output is a list of the form:

`['summary_point1', 'summary_point2', 'summary_point3']`

This model has 2.7B parameters, is small enough to run on a CPU, and is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which is in turn a fine-tune of stabilityai/stablelm-3b-4e1t.

For fast inference, we recommend using the quantized 'tool' version of this model, e.g., [**'slim-summary-tool'**](https://huggingface.co/llmware/slim-summary-tool).
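
As a brief sketch of that route, the snippet below loads the quantized tool version through the llmware `ModelCatalog`, mirroring the LLMWare function-call example later in this card; the model name 'slim-summary-tool' is taken from the link above, and we assume it is registered in the catalog under that name:

```python
from llmware.models import ModelCatalog

# load the quantized 'tool' version for faster CPU inference
# (assumes the model is registered in the llmware ModelCatalog under this name)
slim_tool = ModelCatalog().load_model("slim-summary-tool")

response = slim_tool.function_call("Tesla stock declined 8% yesterday in premarket trading.",
                                   params=["key points (3)"], function="summarize")

print("llm_response: ", response)
```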

## Usage Tips

-- Automatic conversion of the LLM output to a Python list via `ast.literal_eval` is often complicated by the presence of '"' (ASCII 34, double quote) and "'" (ASCII 39, single quote) inside the generated strings. We have provided a straightforward string remediation handler in [llmware](https://www.github.com/llmware-ai/llmware.git) that automatically remediates the output and returns a well-formed Python list (a minimal illustrative sketch of this kind of post-processing follows these tips). We have tried multiple ways to handle 34/39 in training, each with its own set of trade-offs, and we will continue to look for ways to better automate this in future releases of the model.

-- If you are looking for a single output point, try the params: "brief description (1)"

-- If the document has a lot of financial points, try the params "financial data points" or "financial data points (5)"

-- Param counts are an experimental feature, but they work reasonably well to guide the scope of the model's output length. At times, the model's attempt to match the target number of output points will result in some repetitive points.
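
As a minimal sketch of the post-processing described in the first tip, the helper below falls back to simple string cleanup when `ast.literal_eval` fails on stray quotes. The function name and the splitting heuristic are illustrative assumptions, not the actual llmware handler; see `ModelCatalog.remediate_function_call_string()` in llmware for a production example.

```python
import ast

def parse_summary_list(llm_output: str) -> list:
    # illustrative helper (not part of llmware): best-effort conversion of
    # the model's string output into a Python list of summary points
    try:
        # happy path: the output is already a well-formed Python literal
        result = ast.literal_eval(llm_output.strip())
        return result if isinstance(result, list) else [str(result)]
    except (ValueError, SyntaxError):
        # fallback: strip the outer brackets, split on commas, and trim
        # stray ascii 34 / ascii 39 quote characters from each element
        inner = llm_output.strip().strip("[]")
        return [p.strip().strip("'\"") for p in inner.split(",") if p.strip()]

print(parse_summary_list("['summary_point1', 'summary_point2']"))
```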

## Prompt Format

`function = "summarize"`
`params = "key points (3)"`
`prompt = "<human>: " + {text} + "\n" + `
`"<{function}> " + {params} + " </{function}>" + "\n<bot>:"`
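
For concreteness, with the values above, the assembled prompt string sent to the model renders as follows (where `{text}` stands in for your passage):

```
<human>: {text}
<summarize> key points (3) </summarize>
<bot>:
```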

<details>
<summary>Transformers Script</summary>

import ast

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-summary")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-summary")

function = "summarize"
params = "key points (3)"

text = "Tesla stock declined 8% yesterday in premarket trading after a poorly-received event in San Francisco, in which the company indicated a likely shortfall in revenue."

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

# decode only the newly generated tokens, skipping the prompt
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)

# here's the fun part - convert the string output into a Python list
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python list automatically")
except (ValueError, SyntaxError):
    # note: rules-based remediation may be required - see the usage tip above - and see
    # ModelCatalog.remediate_function_call_string() @ https://www.github.com/llmware-ai/llmware/blob/main/llmware/models.py
    # for a good example of a post-processing conversion script
    print("fail - could not convert to python list automatically - ", output_only)

</details>

<details>

<summary>Using as Function Call in LLMWare</summary>

from llmware.models import ModelCatalog

text = "Tesla stock declined 8% yesterday in premarket trading after a poorly-received event in San Francisco, in which the company indicated a likely shortfall in revenue."

slim_model = ModelCatalog().load_model("llmware/slim-summary")
response = slim_model.function_call(text, params=["key points (3)"], function="summarize")

print("llmware - llm_response: ", response)

</details>

## Model Card Contact

Darren Oberst & llmware team

[Join us on Discord](https://discord.gg/MhZn5Nc39h)