Fix typo in the name Goddard
README.md CHANGED

@@ -47,7 +47,7 @@ Please note that Mistral-22b is still in a WIP. v0.3 has started training now, w
 
 ## Thank you!
 
 - Thank you to [Daniel Han](https://twitter.com/danielhanchen), for Unsloth AI which was used to train this model. this led to a 2-3x speed increae and 2-3x decrease in memmory consumption.
-- Thank you to [Charles
+- Thank you to [Charles Goddard](https://twitter.com/chargoddard), for providng me with a script that was nessary to make this model.
 - Thank you to Mistral, for releasing Another Wonderful open source model, under Apache 2.0.
 - Thank you to [Tim Dettmers](https://twitter.com/Tim_Dettmers), for creating QLora
 - Thank you to [Tri Dao](https://twitter.com/tri_dao), for creating Flash Attention