How to finetune?
I have come across this model and have been very impressed with its SOTA performance. I was curious how to fine-tune this model on our own custom datasets. In this case, I have ~2500 q/a pairs that are in the style of an AI assistant I have created. Are there any libraries you could recommend, or a particular way I should prompt the model? Any additional info would be helpful. Thank you in advance!
i'm glad you like the model!
this model is in some sense reaching the limits of its "general" capabilities, meaning that any fine-tuning might make it a little worse at other things. but if you would like to fine-tune it for your use case, you could try using unsloth: it's very fast and even has an option to adapt the model to specific domains (essentially continued pretraining) by fine-tuning the embeddings/lm_head.
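if it helps, this is roughly how i would shape your q/a pairs into plain training text before handing them to a trainer like unsloth. the "Question:/Answer:" template and the eos marker here are just assumptions for illustration, not a required format:

```python
# Turn question/answer pairs into flat training strings.
# The "Question:/Answer:" template is an arbitrary choice for
# illustration; use whatever format you want the model to see
# at inference time, and your tokenizer's real EOS token.
def format_pairs(pairs, eos="</s>"):
    return [f"Question: {q}\nAnswer: {a}{eos}" for q, a in pairs]

texts = format_pairs([("What is Google?", "Google is a search engine.")])
print(texts[0])
```

the important part is that the training format matches the inference format, otherwise the model learns a template it will never see again.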
regarding your other question, there is no prompt template at all. you can prompt it like:
"What is Google?" and it will answer, so basically the prompt is just the question/instruction that you ask it directly.
Oh ok, thanks for the response! I am curious about what you said about fine-tuning the embeddings/lm_head. Could I adapt them to, for example, respond in a conversational tone? Or could I use them to hard-code some information into the model, such as its name, siblings, hobbies, just to name a few? I have never heard of doing that, so I am really curious! Thanks for your help thus far.
sure! that's fundamentally what fine-tuning does. you can either give it the instructions in the prompt or fine-tune it; personally i would first try with a prompt like:
"The following script showcases the personal information of Nick.
Age: 25
Hobbies: To play guitar, to watch youtube videos
Personality: Nick is a little shy, his responses are often short but he is willing to help anyone that needs it
Me: What is Google?
Nick:"
Using the prompt above, i got this from a quantized version (~500MB in size):
"Me: What is Google?
Nick: Google is a search engine
Me: What are your hobbies?
Nick: I like to watch youtube videos, listen to music, and play guitar"
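a small sketch of how a persona prompt like the one above could be built programmatically; the function and field names are made up for this example:

```python
def build_persona_prompt(name, fields, question):
    """Build a persona-style prompt like the 'Nick' example above:
    a header line, one 'Key: value' line per field, then the question."""
    lines = [f"The following script showcases the personal information of {name}."]
    for key, value in fields.items():
        lines.append(f"{key}: {value}")
    lines.append(f"Me: {question}")
    lines.append(f"{name}:")
    return "\n".join(lines)

prompt = build_persona_prompt(
    "Nick",
    {"Age": "25",
     "Hobbies": "To play guitar, to watch youtube videos",
     "Personality": "Nick is a little shy, his responses are often short "
                    "but he is willing to help anyone that needs it"},
    "What is Google?",
)
print(prompt)
```

the trailing "Nick:" is what cues the model to complete the next turn in character.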
if you are just getting started with fine-tuning, i suggest you first learn a little about how to fine-tune using unsloth, and evaluate the models for your specific use case, maybe using lm-evaluation-harness with some custom tasks for the model. it really depends on which tasks you want the model to perform
i hope that helps!
Thanks for that thus far! I have been researching in the AI field for the past 2.5 years, and I have been using unsloth for a year or so for training. The .pdf that I am using with the personality of said AI is 4-5 pages, so tokenizing all of that text would take a while, and in my use case (robotics) I need responses to be completed in 2-3 seconds locally. So, would the AI be able to understand contextually if I used a vectordb to return the relevant snippet of context to the AI so it could respond? Or, as you have said in the readme.md, should I just fine-tune it? Thanks so far for your help!
that's a good question!
i haven't experimented a lot with rag for this model yet. i suggest you first try using langchain or your own method to retrieve the information; if it doesn't work as expected, then you should fine-tune it
one way you could approach this is by letting the model learn from examples that contain retrieved (rag) text, formatted as similarly as possible to how you would like the model to respond at inference time
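as a toy sketch of that retrieval step, here's a simple word-overlap retriever standing in for a real embedding search / vectordb, which prepends the best-matching snippet to the question (the snippet texts and the "Me:/Nick:" format are just carried over from the example above):

```python
import re

def words(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(snippets, question, k=1):
    """Toy retriever: rank snippets by word overlap with the question.
    A real setup would use embeddings and a vector DB instead."""
    q = words(question)
    return sorted(snippets, key=lambda s: len(q & words(s)), reverse=True)[:k]

def build_rag_prompt(snippets, question):
    """Prepend the retrieved context to the question, in the same
    'Me:/Nick:' format used to prompt the model above."""
    context = "\n".join(retrieve(snippets, question))
    return f"{context}\nMe: {question}\nNick:"

snippets = [
    "Nick likes to play guitar and watch youtube videos.",
    "Nick is 25 years old.",
]
print(build_rag_prompt(snippets, "How old are you, Nick?"))
# -> Nick is 25 years old.
#    Me: How old are you, Nick?
#    Nick:
```

since only the single most relevant snippet gets tokenized instead of the whole 4-5 page pdf, this also keeps the prompt short enough for your 2-3 second latency budget.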
Thanks for your help thus far! I will try to implement the RAG with this model over the weekend, and see how it goes. Thanks once again for your help!
You're welcome and good luck!