Commit bef895b (parent: e75a97e) · Update README.md

README.md CHANGED
@@ -101,58 +101,80 @@ To utilize the prompt format without a system prompt, simply leave the line out.
Removed (lines that could not be recovered from the diff view are elided as "…"):

- Our model was trained on specific system prompts and structures for Function Calling.
- …
- <|im_start|>assistant
- {"arguments": {"
- Once you parse the tool call, …
- …
- The stock fundamentals data for Tesla (TSLA) are as follows:
- - **Symbol**: TSLA
- - **Company Name**: Tesla, Inc.
- - **Sector**: Consumer Cyclical
- - **Industry**: Auto Manufacturers
- - **Market Capitalization**: $566,160,130,480
- - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- - **Price-to-Book Ratio (PB Ratio)**: 9.04
- - **Dividend Yield**: N/A
- - **Trailing Earnings Per Share (EPS)**: $4.3
- - **Beta Value of the Stock**: 2.42
- - **52-Week High Price of the Stock**: $299.29
- - **52-Week Low Price of the Stock**: $152.37
-
- This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and relatively high P/E and P/B ratios compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
Added (the section now reads):

## Prompt Format for Function Calling

Our model was trained on specific system prompts and structures for Function Calling. These are handled by the `tool_use` chat template. To use this template, first define a list of tool functions. It's okay if these are dummy functions - what matters is their name, type hints, and docstring, as these will be extracted and made available to the model:

```python
def get_current_temperature(location: str, unit: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
        unit: The unit to return the temperature in. (choices: ["celsius", "fahrenheit"])
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

def get_current_wind_speed(location: str) -> float:
    """
    Get the current wind speed in km/h at a given location.

    Args:
        location: The location to get the wind speed for, in the format "City, Country"
    Returns:
        The current wind speed at the given location in km/h, as a float.
    """
    return 6.  # A real function should probably actually get the wind speed!

tools = [get_current_temperature, get_current_wind_speed]
```
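
As an aside (not part of the README): since the template only sees what it can extract from each function, it can help to inspect that extraction directly. A minimal sketch, assuming a recent `transformers` release that exposes the `get_json_schema` helper it uses for this conversion:

```python
from transformers.utils import get_json_schema

# Build the JSON schema the chat template will receive for this tool,
# derived from the function's name, type hints, and docstring.
schema = get_json_schema(get_current_temperature)
print(schema["function"]["name"])        # get_current_temperature
print(schema["function"]["parameters"])  # includes the ["celsius", "fahrenheit"] choices for `unit`
```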

Now, prepare a chat and apply the chat template, then generate the model's response:

```python
messages = [
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

# "model" and "tokenizer" are assumed to be loaded as shown earlier in the README
inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
```

The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):

```
<tool_call>
{"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}
</tool_call><|im_end|>
```
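
For illustration, a minimal way to perform that parsing (a sketch, not the README's code; it assumes the decoded generation contains well-formed `<tool_call>` blocks like the one above):

```python
import json
import re

def parse_tool_calls(generated_text: str) -> list[dict]:
    """Return the JSON payload of every <tool_call>...</tool_call> block."""
    blocks = re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", generated_text, re.DOTALL)
    return [json.loads(block) for block in blocks]

# For the generation above, this yields:
# [{"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}]
```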

Once you parse the tool call, add it to the chat as an `assistant` response, using the `tool_calls` key, then append the tool output as a response with the `tool` role:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France", "unit": "celsius"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```
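
Rather than hard-coding the `"22.0"` result, the last `messages.append(...)` above could dispatch the parsed call to the matching Python function. A sketch, reusing `tools` and `tool_call` from the earlier snippets:

```python
# Look up the callable by the name the model requested and run it
registry = {fn.__name__: fn for fn in tools}
result = registry[tool_call["name"]](**tool_call["arguments"])  # calls get_current_temperature(...)
messages.append({"role": "tool", "name": tool_call["name"], "content": str(result)})  # content: "22.0"
```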

Now you can apply the chat template again to format the conversation, and generate a response from the model:

```python
inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
```

and we get:

```
The current temperature in Paris, France is 22.0 degrees Celsius.<|im_end|>
```
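
Tying these steps together, one might loop generate → parse → execute → append until the model answers without a tool call. An illustrative sketch, not from the README, reusing the hypothetical `parse_tool_calls` helper above plus the same `model`, `tokenizer`, and `tools`:

```python
def run_tool_chat(messages: list[dict], max_rounds: int = 3) -> str:
    """Alternate between model generations and tool executions until a plain answer arrives."""
    registry = {fn.__name__: fn for fn in tools}
    answer = ""
    for _ in range(max_rounds):
        inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools,
                                               add_generation_prompt=True, return_dict=True, return_tensors="pt")
        inputs = {k: v.to(model.device) for k, v in inputs.items()}
        out = model.generate(**inputs, max_new_tokens=128)
        answer = tokenizer.decode(out[0][len(inputs["input_ids"][0]):])
        calls = parse_tool_calls(answer)
        if not calls:
            break  # no tool call: the model answered directly
        for call in calls:
            messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": call}]})
            result = registry[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"], "content": str(result)})
    return answer
```
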
## Prompt Format for JSON Mode / Structured Outputs