HimankJ committed on
Commit da38199
1 Parent(s): e9437a6

Update app.py

Files changed (1)
  1. app.py +2 -3
app.py CHANGED
@@ -19,12 +19,11 @@ def generateText(inputText, num_tokens=200):
 
 
 title = "Fine tuned Phi3.5 instruct model on OpenAssist dataset using QLora"
-description =
-'''
+description = '''
 Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data.
 The model belongs to the Phi-3 model family and supports 128K token context length.
 The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
-This demo utilises a fine tuned version of Phi3.5 instruct model using QLora on OpenAssist dataset - a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees..
+This demo utilises a fine tuned version of Phi3.5 instruct model using QLora on OpenAssist dataset - a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees.
 '''
 examples = [
  ["How do I build a PC?", 200],