jphme committed
Commit 25ec893 (1 parent: fb6fdb7)

small readme fix

Files changed (1)
README.md +2 −2
README.md CHANGED
@@ -34,7 +34,7 @@ DiscoLM Mixtral 8x7b alpha is a [DiscoResearch](https://huggingface.co/DiscoRese
 1. [Download](#download)
 2. [Benchmarks](#benchmarks)
 3. [Prompt Format](#prompt-format)
-4. [Dataset](#dataset)
+4. [Dataset](#datasets)
 5. [Acknowledgements](#acknowledgements)
 6. [Contact](#contact)
 7. [About DiscoResearch](#about-discoresearch)
@@ -123,7 +123,7 @@ DiscoResearch is an aspiring open research community. Disco should be a place wh
 ## Acknowledgements
 
 Many thanks first and foremost to [Mistral AI](https://huggingface.co/mistralai) for releasing another awesome model and their release strategy that is much fun for the whole community.
-Additionally, many thanks in particular to [Dmytro Dzhulgakov](https://huggingface.co/dzhulgakov) who was the first one with a running [inference implementation](https://github.com/dzhulgakov/llama-mistral), [Vik](https://huggingface.co/vikhyatk) who spotted a critical bug in our first implementation (he actually read the paper!), [winglian](https://huggingface.co/winglian) for helpful advice and Axolotl which was used to finetune the model, [Teknium](https://huggingface.co/teknium), [Verzora](https://huggingface.co/Vezora) [MetaMath](https://huggingface.co/meta-math) for their great datasets, and everyone who participated in this awesome speedrun on either our, the [Nous Research](https://huggingface.co/NousResearch) or one of the other Discords (please contact us if we forgot to mention you here!).
+Additionally, many thanks in particular to [Dmytro Dzhulgakov](https://huggingface.co/dzhulgakov) who was the first one with a running [inference implementation](https://github.com/dzhulgakov/llama-mistral), [Vik](https://huggingface.co/vikhyatk) who spotted a critical bug in our first implementation (he actually read the paper!), [winglian](https://huggingface.co/winglian) for helpful advice and Axolotl which was used to finetune the model, [Migel Tissera](https://huggingface.co/migtissera), [Nous Research](https://huggingface.co/NousResearch) and [MetaMath](https://huggingface.co/meta-math) for their great datasets, and everyone who participated in this awesome speedrun on any of the Discords (please contact us if we forgot to mention you here!).
 
 **DiscoLM Mixtral is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was created by [Björn Plüster](https://huggingface.co/bjoernp).
 The model was trained with compute provided by [HessianAI](https://hessian.ai/); many thanks as well to [LAION](https://laion.ai) for their coordination and providing invaluable contacts + advice.**
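
Why the anchor changes from `#dataset` to `#datasets`: Hugging Face (like GitHub) derives heading anchors from the heading text itself, so a `## Datasets` heading is reachable only as `#datasets`, and the old link pointed at a non-existent anchor. Below is a minimal sketch of that slug rule, assuming GitHub-style slugification; real renderers differ in edge cases:

```python
import re

def heading_anchor(heading: str) -> str:
    """Approximate GitHub-style anchor slug: lowercase, drop punctuation,
    replace spaces with hyphens. A sketch, not the renderer's exact code."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)  # strip punctuation, keep hyphens/spaces
    return "#" + slug.replace(" ", "-")

print(heading_anchor("Datasets"))             # -> #datasets (the fixed link target)
print(heading_anchor("About DiscoResearch"))  # -> #about-discoresearch
```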