ZennyKenny committed on
Commit
b868591
Parent: 5ec30f4

fix broken pytorch link


Protocol not present in the URL: the link is treated as a relative path and renders a 404.

Was:
- [torch.compile](pytorch.org/docs/stable/generated/torch.compile.html)
(Without a scheme, the link resolves relative to the page, so it looks for: https://huggingface.co/OPI-PG/Qra-1b/edit/main/pytorch.org/docs/stable/generated/torch.compile.html)

Must be:
- [torch.compile](https://pytorch.org/docs/stable/generated/torch.compile.html)
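For illustration (not part of the commit), Python's `urllib.parse.urljoin` mimics the relative-URL resolution that produces the 404; the page URL below is taken from the "looks for" path above:

```python
from urllib.parse import urljoin

page = "https://huggingface.co/OPI-PG/Qra-1b/edit/main/README.md"

# Without a scheme, the target is treated as a relative path and
# resolved against the current page, yielding the broken 404 URL.
print(urljoin(page, "pytorch.org/docs/stable/generated/torch.compile.html"))
# https://huggingface.co/OPI-PG/Qra-1b/edit/main/pytorch.org/docs/stable/generated/torch.compile.html

# With the scheme present, the absolute URL is used as-is.
print(urljoin(page, "https://pytorch.org/docs/stable/generated/torch.compile.html"))
# https://pytorch.org/docs/stable/generated/torch.compile.html
```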

Files changed (1):
README.md (+1 −1)
README.md CHANGED

@@ -24,7 +24,7 @@ The final distribution of documents by topic is shown in the chart below:
 ## Model details
 
 The models were trained for one epoch on sequences of 4096 tokens. During training, we used many modern optimizations such as:
-- [torch.compile](pytorch.org/docs/stable/generated/torch.compile.html)
+- [torch.compile](https://pytorch.org/docs/stable/generated/torch.compile.html)
 - [adamw_apex_fused](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#optimizer-choice) optimizer
 - [Flash Attention 2](github.com/Dao-AILab/flash-attention)
 - [Mixed precision](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#bf16) (`--bf16` and `--tf32` options)
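As an aside, a minimal sketch of how the optimizations listed in the README map onto the Hugging Face `transformers` Trainer API. This is an assumption for illustration, not the authors' actual training script; `output_dir` is a placeholder, and Flash Attention 2 requires the `flash-attn` package to be installed:

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

# Hypothetical mapping of the README's optimization list onto
# TrainingArguments / from_pretrained options (sketch only).
model = AutoModelForCausalLM.from_pretrained(
    "OPI-PG/Qra-1b",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # Flash Attention 2
)

args = TrainingArguments(
    output_dir="out",          # placeholder
    bf16=True,                 # --bf16: bfloat16 mixed precision
    tf32=True,                 # --tf32: TF32 matmuls on Ampere+ GPUs
    optim="adamw_apex_fused",  # fused AdamW optimizer from NVIDIA Apex
    torch_compile=True,        # wrap the model in torch.compile
)
```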