LLM Loras
Collection
Low-rank adapters for LLMs, to be applied during inference or used in merging.
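As a rough illustration of the inference path, the sketch below attaches one of these adapters to a base causal LM with the peft library and optionally merges it into the base weights. The model and adapter repository IDs are placeholders, not items from this collection.

```python
# Minimal sketch: apply a LoRA adapter at inference, or merge it into the base.
# The repository IDs below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-7b"             # hypothetical base model
adapter_id = "your-username/supercot-lora"  # hypothetical adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the low-rank adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter into the base weights, e.g. to export a single
# merged checkpoint or avoid the adapter overhead at inference time.
model = model.merge_and_unload()
```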
A standard 8-bit LoRA trained with axolotl on the SuperCOT dataset, using the Alpaca instruct format at a 4096-token context length:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
<instruction>
### Input:
<any additional context. Remove this if it's not necessary>
### Response:
<make sure to leave a single new-line here for optimal results>
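As a rough sketch (not part of any released code), the helper below fills in the template above, dropping the Input block when no extra context is supplied; the function name and structure are assumptions for illustration.

```python
# Hypothetical helper that fills in the Alpaca-style template shown above.
def build_prompt(instruction: str, context: str = "") -> str:
    header = (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
    )
    prompt = header + f"### Instruction:\n{instruction}\n\n"
    if context:
        # Omit this block entirely when there is no additional context.
        prompt += f"### Input:\n{context}\n\n"
    # Leave a single new-line after "### Response:" as the template advises.
    prompt += "### Response:\n"
    return prompt
```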
The model will show biases similar to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.