Commit 286b8a3 · Parent: 6debac6
Create README.md (#1)
Co-authored-by: XUELING LIU <Anitaliu98@users.noreply.huggingface.co>
README.md
ADDED
@@ -0,0 +1,59 @@
---
language:
- en
pipeline_tag: text-generation
tags:
- code
---

<h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1>

<p align="center">
<img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png">
</p>
<p align="center">
<a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a>
|
<a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a>
</p>
<hr>

## Introduction
OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between open large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation by integrating execution and iterative refinement: generated code is executed, and the execution feedback is fed back to the model so it can revise its own output over multiple rounds.
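
To make the refinement idea concrete, here is a minimal sketch of such an execution-feedback loop. It is illustrative only, under our own assumptions rather than the project's actual pipeline (see the GitHub repository for that): `generate` is a placeholder for any prompt-to-code model call, and `run_code`/`refine` are hypothetical helper names introduced here.

```python
# Illustrative sketch of an execution-feedback refinement loop.
# `generate` stands in for any prompt -> code model call (e.g. a wrapper
# around the inference snippet below); `run_code` and `refine` are helper
# names introduced here, not part of the OpenCodeInterpreter codebase.
import subprocess
import sys
import tempfile

def run_code(code: str, timeout: int = 10) -> tuple[bool, str]:
    """Run a Python snippet in a subprocess; return (succeeded, output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return False, "execution timed out"

def refine(generate, task: str, max_rounds: int = 3) -> str:
    """Generate code for `task`, execute it, and feed failures back to the
    model until it runs cleanly or the round budget is exhausted."""
    code = generate(task)
    for _ in range(max_rounds):
        ok, feedback = run_code(code)
        if ok:
            break
        code = generate(
            f"{task}\nYour previous solution failed when executed:\n"
            f"{feedback}\nPlease return a corrected version."
        )
    return code
```

The feedback signal need not be limited to interpreter errors; test failures or reviewer comments can be folded into the follow-up prompt in the same way.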

## Model Usage
### Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Local path (or Hugging Face repo id) of the model; adjust as needed.
model_path = "OpenCodeInterpreter-CL-13B"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

prompt = "Write a function to find the shared elements from the given two lists."
# Build the chat-formatted input; add_generation_prompt=True appends the
# assistant-turn marker so the model starts answering instead of
# continuing the user message.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
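
A note on these settings: with `do_sample=False`, `model.generate` decodes greedily, so repeated runs on the same prompt produce the same completion; pass sampling arguments (e.g. `do_sample=True`, `temperature`, `top_p`) for more varied outputs. Loading in `torch.bfloat16` with `device_map="auto"` spreads the weights across available devices; at 2 bytes per parameter, the 13B weights alone need roughly 26 GB of memory.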


## Contact

If you have any inquiries, please feel free to raise an issue or reach out to us via email at xiangyue.work@gmail.com or zhengtianyu0428@gmail.com.
We're here to assist you!