This is a tiny random Llama model derived from "meta-llama/Llama-2-7b-hf".
See make_tiny_model.py for how this was done.
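Roughly, the approach is to build a Llama config with tiny dimensions and instantiate the model from it without loading any pretrained weights. The sketch below illustrates this in general terms only; the exact sizes and steps in make_tiny_model.py may differ, and the tokenizer shrinking is not shown.

```python
# Illustrative sketch only -- see make_tiny_model.py for the actual procedure.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=3000,            # matches the shrunk ~3k-item tokenizer
    hidden_size=16,             # tiny illustrative sizes, not the real values
    intermediate_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=4,
    max_position_embeddings=256,
)
model = LlamaForCausalLM(config)          # randomly initialized weights
model.save_pretrained("tiny-random-llama")
```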
This is useful for functional testing, not quality generation, since its weights are random and the tokenizer has been shrunk to 3k items.
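For example, a functional test can load the model and run a quick generation pass to verify the pipeline end to end. The snippet below is a minimal sketch; `REPO_ID` is a placeholder for this repository's id on the Hub, and the generated text will be gibberish because the weights are random.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "path/to/this-tiny-random-llama"  # placeholder, replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID)

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))  # gibberish output; only checks that the plumbing works
```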