rooa committed on
Commit e1bf58f (1 parent: 970beaf)

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -16,7 +16,7 @@ This checkpoint (CodeGen-Mono 2B) was firstly initialized with *CodeGen-Multi 2B
 ## Training procedure
 
 CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
-The family of models are trained using 4 TPU-v4 chips by Google, leveraging data and model parallelism.
+The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
 See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
 
 ## Evaluation results
@@ -35,8 +35,8 @@ This model can be easily loaded using the `AutoModelForCausalLM` functionality:
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained('Salesforce/codegen-2B-mono')
-model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-2B-mono')
+tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
+model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono")
 
 text = "def hello_world():"
 input_ids = tokenizer(text, return_tensors="pt").input_ids
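The snippet in the hunk above stops at `input_ids`; the published model card continues by generating a completion from the prompt. A minimal sketch of that full flow (greedy decoding with `max_length=128` is an assumption here; note that loading `Salesforce/codegen-2B-mono` downloads several gigabytes of weights):

```python
# Load the tokenizer and model, then generate a continuation for a code prompt.
# Warning: the 2B checkpoint is a multi-gigabyte download on first use.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Greedy decoding; max_length bounds the total length (prompt + continuation).
generated_ids = model.generate(input_ids, max_length=128)
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(completion)
```

The decoded output includes the original prompt followed by the model's continuation, since `generate` returns the full token sequence.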