Problem with generating anything
Hello, I've tried to run this model both on my machine and on together.ai. I've tried many prompts, such as "write a poem about a dog" and asking it to answer a question based on provided context, among others. However, I never received a correct answer; instead, I got responses like this:
Yeah, I actually ran into the same issue in Google Colab. I'm not sure what's causing it.
@wempoo @Jordancole21 thanks for playing with the model and for the feedback!
@wempoo What we have here is a base model, so it might not be aligned to follow instructions. For example, the same query would produce something similar with Llama-2-7B.
We are currently working on a chat version, which should respond much better to instructions! Stay tuned!
@Jordancole21 BTW, we just made a small update to the HF repo (disabling the start token), which should help the model generate better -- but again, the base model's instruction-following capacity is probably still limited.
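If you downloaded the weights before that change, you may need to re-fetch the repo files so the update is actually picked up. Here is a minimal sketch with transformers (the repo id below is an assumption -- substitute whichever 32K base repo you are using):

```python
# Minimal sketch: re-download the repo files so the updated tokenizer/config is used.
# The repo id is an assumption -- replace it with the 32K base repo you downloaded.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "togethercomputer/LLaMA-2-7B-32K"  # assumed base repo id

# force_download=True bypasses the local cache and pulls the latest files from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id, force_download=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    force_download=True,
    trust_remote_code=True,  # may be needed if the repo ships custom modeling code
)
```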
@wempoo
@Jordancole21
We now have an instruct version that is fine-tuned on QA: https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct
More details here: https://together.ai/blog/llama-2-7b-32k-instruct
and here is the poem for the dog prompt :)
Any feedback would be awesome!
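If anyone wants a quick way to try the instruct model, here is a minimal sketch using the standard transformers API. The "[INST] ... [/INST]" prompt format is an assumption based on the usual Llama-2 instruct convention -- please check the model card for the exact format.

```python
# Minimal sketch (not an official snippet): query the instruct model with transformers.
# The "[INST] ... [/INST]" prompt format is an assumption -- check the model card.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "togethercomputer/Llama-2-7B-32K-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,  # may be needed if the repo ships custom modeling code
)

prompt = "[INST]\nwrite a poem about a dog\n[/INST]\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```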