# Llama3-ChatQA-1.5-8B-lora
This is a LoRA adapter extracted from a language model using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from nvidia/Llama3-ChatQA-1.5-8B and uses meta-llama/Meta-Llama-3-8B as a base.
### Parameters
The following command was used to extract this LoRA adapter:

```sh
mergekit-extract-lora meta-llama/Meta-Llama-3-8B nvidia/Llama3-ChatQA-1.5-8B OUTPUT_PATH --no-lazy-unpickle --rank=64
```
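A sketch of how the extracted adapter could be applied back onto the base model with `transformers` and `peft`; this usage is an assumption, not part of the card (running it requires network access and the gated Meta-Llama weights):

```python
# Hypothetical usage sketch: attach the extracted LoRA adapter to the base
# model. Repository names below are taken from this model card.
BASE_MODEL = "meta-llama/Meta-Llama-3-8B"
ADAPTER = "beratcmn/Llama3-ChatQA-1.5-8B-lora"

def load_model():
    # Imports are kept inside the function so the constants above can be
    # inspected without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype="auto")
    # Attach the rank-64 LoRA weights; merge_and_unload() folds them back
    # into the base weights, approximating the original ChatQA checkpoint.
    model = PeftModel.from_pretrained(model, ADAPTER)
    model = model.merge_and_unload()
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
```

Because LoRA extraction is a rank-limited approximation, the merged model may differ slightly from the original nvidia/Llama3-ChatQA-1.5-8B checkpoint.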