Description
This model further fine-tunes Locutusque/Hyperion-2.0-Mistral-7B at a higher learning rate to test whether performance would improve; a slight gain was observed. See the Locutusque/Hyperion-2.0-Mistral-7B model card for more information. More checkpoints will be released in the future.
Disclaimer
This model is highly compliant and will respond to almost any request without refusal. If you intend to deploy it in an enterprise setting, I recommend aligning it with DPO first, as sketched below.
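A minimal sketch of what that alignment step could look like with the TRL library is below. This is not the recipe used for this model: the repo id, dataset name, and hyperparameters are placeholders, and the exact `DPOTrainer` argument names vary across TRL versions.

```python
# Hedged sketch: DPO alignment with TRL. Repo id, dataset, and hyperparameters
# are placeholders; check the TRL docs for the API of your installed version.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Locutusque/Hyperion-2.1-Mistral-7B"  # assumed repo id for this card
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical preference dataset with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your-org/your-preference-dataset", split="train")

args = DPOConfig(
    output_dir="hyperion-2.1-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    beta=0.1,  # strength of the implicit KL penalty toward the reference model
)

trainer = DPOTrainer(
    model=model,                 # a frozen reference model is created automatically if omitted
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL releases take tokenizer= instead
)
trainer.train()
```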
Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF
AWQ: https://huggingface.co/solidrust/Hyperion-2.1-Mistral-7B-AWQ
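To run the GGUF quant linked above locally, something like the following works with llama-cpp-python. The exact `.gguf` filename is an assumption, so browse the repo's file list to pick the quantization level you want.

```python
# Hedged sketch: running one of the GGUF quants with llama-cpp-python.
# The .gguf filename below is an assumption; check the repo on Hugging Face
# for the quantization level (Q4_K_M, Q5_K_M, Q8_0, ...) you actually want.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="bartowski/Hyperion-2.1-Mistral-7B-GGUF",
    filename="Hyperion-2.1-Mistral-7B-Q4_K_M.gguf",  # assumed filename
)

# n_gpu_layers=-1 offloads all layers to the GPU when one is available.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

output = llm("Explain direct preference optimization in one paragraph.", max_tokens=256)
print(output["choices"][0]["text"])
```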