norabelrose committed
Commit: d31d399
Parent: 427ffac

Update README.md

Files changed (1): README.md (+8, -1)
README.md CHANGED
@@ -9,4 +9,11 @@ library_name: transformers
 
 This is a set of sparse autoencoders (SAEs) trained on the residual stream of [Llama 3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using the 10B sample of the [RedPajama v2 corpus](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2), which comes out to roughly 8.5B tokens using the Llama 3 tokenizer. The SAEs are organized by layer, and can be loaded using the EleutherAI [`sae` library](https://github.com/EleutherAI/sae).
 
-These are early checkpoints of an ongoing training run which can be tracked [here](https://wandb.ai/eleutherai/sae/runs/7r5puw5z?nw=nwusernorabelrose). They will be updated as the training run progresses. The last upload was at 7,000 steps.
+The `layers.24` SAE in this repo has finished training on all 8.5B tokens of the RedPajama V2 sample. With the `sae` library installed, you can access it like this:
+```python
+from sae import Sae
+
+sae = Sae.load_from_hub("EleutherAI/sae-llama-3-8b-32x-v2", hookpoint="layers.24")
+```
+
+The rest of the SAEs are early checkpoints of an ongoing training run which can be tracked [here](https://wandb.ai/eleutherai/sae/runs/7r5puw5z?nw=nwusernorabelrose). They will be updated as the training run progresses. The last upload was at 7,000 steps.
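For context on what the committed snippet loads, here is a minimal sketch of running Llama 3 8B residual-stream activations through the `layers.24` SAE. The hidden-state extraction uses the standard `transformers` API; the `sae.encode` call, its return value, and the dtype handling are assumptions about the `sae` library's interface, so check the library's README before relying on them.

```python
# Sketch: encode Llama 3 residual-stream activations with the layers.24 SAE.
# `Sae.load_from_hub` comes from the commit above; `sae.encode` and its output
# are assumptions about the EleutherAI `sae` library, not its documented API.
import torch
from transformers import AutoModel, AutoTokenizer
from sae import Sae

sae = Sae.load_from_hub("EleutherAI/sae-llama-3-8b-32x-v2", hookpoint="layers.24")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)

inputs = tokenizer("Sparse autoencoders decompose activations into features.", return_tensors="pt")
with torch.inference_mode():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output, so hidden_states[25] is the
# residual stream after decoder layer 24 (the "layers.24" hookpoint).
resid = outputs.hidden_states[25].squeeze(0).float()  # cast assumes the SAE is in fp32

# Assumed API: returns the sparse feature activations for each token position.
features = sae.encode(resid)
```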