PEFT
Safetensors
henryen committed on
Commit 17629dc
1 Parent(s): be21e27

update readme

Files changed (1)
  1. README.md +13 -2
README.md CHANGED
@@ -10,14 +10,25 @@ library_name: peft
### Introduction
OriGen is a fine-tuned lora model designed for Verilog code generation. It is trained on top of DeepSeek Coder 7B using datasets generated from code-to-code augmentation and self-reflection.

-
- - **Repository:** [pku-liang/OriGen](https://github.com/pku-liang/OriGen)
+ The model has been uploaded to Hugging Face, and the repository contains the inference scripts. The dataset and the data generation flow will be released soon.

### Evaluation Results
<img src="figures/evaluation.png" alt="evaluation" width="1000"/>

### Quick Start

+ Before running the following code, please install the required packages:
+
+ ```bash
+ conda create -n origen python=3.11
+ conda activate origen
+ pip install -r requirements.txt
+ ```
+
+ Here is an example of how to use the model. Please note that the base model, DeepSeek Coder 7B, is loaded in float16 precision, although its default precision is bfloat16.
+
+ The reason is that, in our experiments, we find that a LoRA trained in float16 performs better than one trained in bfloat16.
+
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
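# --- Sketch of the remaining quick-start steps, continuing the imports above.
# This is not the repository's exact script: it illustrates loading the
# DeepSeek Coder base model in float16 (as explained in the README) and
# attaching the OriGen LoRA adapter with PEFT. The repository ids below
# ("deepseek-ai/deepseek-coder-7b-base-v1.5" and "henryen/OriGen") are
# assumptions for illustration, not confirmed by this diff.
from peft import PeftModel

base_model_id = "deepseek-ai/deepseek-coder-7b-base-v1.5"  # assumed base model id
adapter_id = "henryen/OriGen"                              # assumed LoRA adapter id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,  # float16 rather than the default bfloat16
    device_map="auto",          # requires the accelerate package
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Stream a completion for a simple Verilog prompt.
prompt = "module counter(input clk, input rst, output reg [3:0] out);\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
streamer = TextStreamer(tokenizer, skip_prompt=True)
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=256, streamer=streamer)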