phanerozoic committed on
Commit a0991ef
Parent: bf7089a

Update README.md

PirateTalk erroneously referred to as PirateSpeak

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -1,14 +1,14 @@
  ---
  license: cc-by-nc-4.0
  ---
- This repository contains the PirateSpeak-13b-v1 model, an advanced derivative of the 13b Llama 2 Chat model. It has been fine-tuned on a comprehensive dataset encompassing a wide spectrum of pirate-themed content, from standard pirate lexemes to intricate elements of pirate vernacular.
+ This repository contains the PirateTalk-13b-v1 model, an advanced derivative of the 13b Llama 2 Chat model. It has been fine-tuned on a comprehensive dataset encompassing a wide spectrum of pirate-themed content, from standard pirate lexemes to intricate elements of pirate vernacular.

- Objective: The inception of PirateSpeak-13b-v1 was driven by the objective to integrate a specific dialect—pirate language—into the model. Our ambition was to ensure that the model not only adopts pirate vocabulary but also the nuanced syntactic structures inherent to pirate discourse.
+ Objective: The inception of PirateTalk-13b-v1 was driven by the objective to integrate a specific dialect—pirate language—into the model. Our ambition was to ensure that the model not only adopts pirate vocabulary but also the nuanced syntactic structures inherent to pirate discourse.

- Model Evolution: PirateSpeak-13b-v1 epitomizes our continued efforts in domain-specific model fine-tuning. While our preliminary merged model was anchored in the OpenOrca series, with PirateSpeak-13b-v1, we've leveraged the lessons from that experiment and incorporated the fine-tuning directly into the Llama 2 architecture. This methodology, combined with a curated dataset, reflects our ongoing commitment to pushing the boundaries of model adaptability.
+ Model Evolution: PirateTalk-13b-v1 epitomizes our continued efforts in domain-specific model fine-tuning. While our preliminary merged model was anchored in the OpenOrca series, with PirateTalk-13b-v1, we've leveraged the lessons from that experiment and incorporated the fine-tuning directly into the Llama 2 architecture. This methodology, combined with a curated dataset, reflects our ongoing commitment to pushing the boundaries of model adaptability.

- Performance Insights: Comparative evaluations indicate that PirateSpeak-13b-v1 surpasses its OpenOrca-based predecessor in terms of both response accuracy and dialect consistency. The enhanced performance of PirateSpeak-13b-v1 can likely be attributed to our refined dataset and optimized hyperparameter settings. It's important to emphasize that this improvement isn't a reflection of any shortcomings of the OpenOrca model but rather the advancements in our training strategies.
+ Performance Insights: Comparative evaluations indicate that PirateTalk-13b-v1 surpasses its OpenOrca-based predecessor in terms of both response accuracy and dialect consistency. The enhanced performance of PirateTalk-13b-v1 can likely be attributed to our refined dataset and optimized hyperparameter settings. It's important to emphasize that this improvement isn't a reflection of any shortcomings of the OpenOrca model but rather the advancements in our training strategies.

- Technical Specifications: PirateSpeak-13b-v1 underwent training at half precision (16) and is optimized for inference at this precision level.
+ Technical Specifications: PirateTalk-13b-v1 underwent training at half precision (16) and is optimized for inference at this precision level.

- Future Endeavors: While we acknowledge the success of PirateSpeak-13b-v1 as a testament to our proof-of-concept, our exploration doesn't conclude here. We envisage extending this methodology to larger quantized models, aiming to further enhance the model's knowledge depth, practical utility, and linguistic flair in subsequent iterations.
+ Future Endeavors: While we acknowledge the success of PirateTalk-13b-v1 as a testament to our proof-of-concept, our exploration doesn't conclude here. We envisage extending this methodology to larger quantized models, aiming to further enhance the model's knowledge depth, practical utility, and linguistic flair in subsequent iterations.
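
Since the Technical Specifications paragraph above states that the model is trained and tuned for inference at half precision, a minimal loading sketch may help. This is an illustration only: the repo id "phanerozoic/PirateTalk-13b-v1" is assumed from the committer and model names rather than stated in this diff, and the prompt wrapper assumes the standard Llama 2 Chat format, which the base model uses.

```python
# Minimal inference sketch for a half-precision Llama 2 Chat derivative.
# Assumption: the repo id below is illustrative, not confirmed by this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phanerozoic/PirateTalk-13b-v1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # README: trained and optimized for half-precision inference
    device_map="auto",          # requires the accelerate package; places layers on available GPUs
)

# Llama 2 Chat models expect the [INST] ... [/INST] prompt wrapper.
prompt = "[INST] How do I trim a sail in heavy weather? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```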