loubnabnl committed on
Commit 0bdfe76
1 Parent(s): 52d5fe3

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -31,7 +31,7 @@ It is important to note that the primary intended use case of this model is to c
 # pip install -q transformers
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = "HuggingFaceTB/finemath-ablation-finemath-infimath-3plus"
+model = "HuggingFaceTB/finemath-ablation-3plus-160B"
 device = "cuda" # for GPU usage or "cpu" for CPU usage
 
 tokenizer = AutoTokenizer.from_pretrained(model)
@@ -48,12 +48,12 @@ We are releasing intermediate checkpoints for this model at intervals of every 1
 
 You can load a specific model revision with `transformers` using the argument `revision`:
 ```python
-model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/finemath-ablation-finemath-infimath-3plus", revision="10B")
+model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/finemath-ablation-3plus-160B", revision="10B")
 ```
 You can access all the revisions for the models via the following code:
 ```python
 from huggingface_hub import list_repo_refs
-out = list_repo_refs("HuggingFaceTB/finemath-ablation-finemath-infimath-3plus")
+out = list_repo_refs("HuggingFaceTB/finemath-ablation-3plus-160B")
 print([b.name for b in out.branches])
 ```
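The diffed README lists intermediate checkpoints published as revision branches named by training-token count (e.g. `revision="10B"`). As a minimal sketch, assuming the branch names returned by `list_repo_refs` follow that `<N>B` pattern (the helper name and the example branch list are hypothetical, not from the repo), one can pick the most-trained checkpoint without hitting the network:

```python
def latest_revision(branch_names):
    """Return the checkpoint branch with the highest token count.

    Assumes revision branches are named like "10B" or "100B"
    (billions of training tokens), as in the README snippet;
    non-matching branches such as "main" are ignored.
    """
    numeric = [b for b in branch_names if b.endswith("B") and b[:-1].isdigit()]
    return max(numeric, key=lambda b: int(b[:-1]), default=None)

# Hypothetical branch list, as list_repo_refs(...) might return:
branches = ["main", "10B", "20B", "100B"]
print(latest_revision(branches))  # -> 100B
```

The numeric sort matters because a plain string comparison would rank "20B" above "100B".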