Text Generation
Transformers
PyTorch
llama
text-generation-inference
Inference Endpoints
compasszzn committed
Commit 8397a1b • 1 Parent(s): fa1230c

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -50,9 +50,9 @@ Our dataset and models are all available at Huggingface.
  |----|---------------------------------------------------------------|---------------------------------------------------------------------------|
  | 7B-LLaMA 2 | 🐙 [MathOctopus-Parallel-7B](https://huggingface.co/Mathoctopus/Parallel_7B) | 🐙 [MathOctopus-Cross-7B](https://huggingface.co/Mathoctopus/Cross_7B) |
  || 🐙[MathOctopus-Parallel-xRFT-7B](https://huggingface.co/Mathoctopus/Parallel_xRFT_7B)|🐙[MathOctopus-Cross-xRFT-7B](https://huggingface.co/Mathoctopus/Cross_xRFT_7B)|
- | 13B-LLaMA 2 | 🐙 [MathOctopus-Parallel-13B] | 🐙 [MathOctopus-Cross-13B] |
- || 🐙[MathOctopus-Parallel-xRFT-13B](https://huggingface.co/Mathoctopus/Parallel_xRFT_13B/tree/main)|🐙[MathOctopus-Cross-xRFT-13B]|
- | 33B-LLaMA 1 | 🐙 [MathOctopus-Parallel-33B] | 🐙 [MathOctopus-Cross-33B] |
+ | 13B-LLaMA 2 | 🐙 [MathOctopus-Parallel-13B](https://huggingface.co/Mathoctopus/Parallel_13B) | 🐙 [MathOctopus-Cross-13B](https://huggingface.co/Mathoctopus/Cross_13B) |
+ || 🐙[MathOctopus-Parallel-xRFT-13B](https://huggingface.co/Mathoctopus/Parallel_xRFT_13B)|🐙[MathOctopus-Cross-xRFT-13B]|
+ | 33B-LLaMA 1 | 🐙 [MathOctopus-Parallel-33B](https://huggingface.co/Mathoctopus/Parallel_33B) | 🐙 [MathOctopus-Cross-33B] |
  | 70B-LLaMA 2 | Coming soon! | Coming Soon! |

  *-Parallel refers to our model trained with the parallel-training strategy.
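The tags on this repo (Text Generation, Transformers, PyTorch, llama) suggest the checkpoints linked in the table load through the standard `transformers` causal-LM interface. Below is a minimal sketch for the 7B parallel model, using the repo id taken from the table link above; the prompt, precision, and generation settings are illustrative assumptions, not part of this commit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mathoctopus/Parallel_7B"  # repo id from the table link above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision to fit a single GPU
    device_map="auto",          # requires the `accelerate` package
)

# Illustrative prompt only; check the model card for the expected instruction format.
prompt = "Janet has 3 apples and buys 5 more. How many apples does she have in total?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```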