Text Generation
Transformers
English
llm-rs
ggml
Inference Endpoints
LLukas22 committed on
Commit
85a3cc1
1 Parent(s): cf68004

Update README_TEMPLATE.md

Files changed (1)
  1. README_TEMPLATE.md +8 -15
README_TEMPLATE.md CHANGED
@@ -9,24 +9,17 @@ language:
 datasets:
 - togethercomputer/RedPajama-Data-1T
 ---
-# GGML converted versions of [Together](https://huggingface.co/togethercomputer)'s RedPajama models
+# GGML converted versions of [OpenLM Research](https://huggingface.co/openlm-research)'s LLaMA models
 
-# RedPajama-INCITE-7B-Base
+# OpenLLaMA: An Open Reproduction of LLaMA
 
-RedPajama-INCITE-7B-Base was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
-The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program.
+In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
 
-- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
-- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
-- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)
+## Weights Release, License and Usage
+
+We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
 
 
-## Model Details
-- **Developed by**: Together Computer.
-- **Model type**: Language Model
-- **Language(s)**: English
-- **License**: Apache 2.0
-- **Model Description**: A 6.9B parameter pretrained language model.
 
 ## Converted Models:
 
@@ -44,7 +37,7 @@ Via pip: `pip install llm-rs`
 from llm_rs import AutoModel
 
 #Load the model, define any model you like from the list above as the `model_file`
-model = AutoModel.from_pretrained("rustformers/redpajama-7b-ggml",model_file="RedPajama-INCITE-7B-Base-q4_0-ggjt.bin")
+model = AutoModel.from_pretrained("rustformers/open-llama-ggml",model_file="open_llama_7b-q4_0-ggjt.bin")
 
 #Generate
 print(model.generate("The meaning of life is"))
@@ -68,5 +61,5 @@ cargo build --release
 
 #### Run inference
 ```
-cargo run --release -- gptneox infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
+cargo run --release -- llama infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
 ```