This project is sponsored by PrimeLine

Model Card

This model is a fine-tuned version for German instructions and conversations in the style of Alpaca, using the prompt markers "### User:" and "### Assistant:" (see the sketch below). The dataset used is deduplicated and cleaned and contains no code. The focus is on instruction following and conversational tasks.
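A minimal sketch of how the prompt format above might be used with the transformers library. The loading arguments, generation parameters, and the example German prompt are illustrative assumptions and are not prescribed by this card.

```python
# Sketch only: loading options and generation settings below are assumptions,
# not official recommendations for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flozi00/Llama-2-7b-german-assistant-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # card lists FP16 weights
    device_map="auto",    # requires accelerate; assumption for convenience
)

# Build a prompt with the "### User:" / "### Assistant:" markers described above.
prompt = "### User: Erkläre kurz, was ein Sprachmodell ist.\n### Assistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Print only the newly generated assistant reply.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```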

The model architecture is based on Llama 2 with 7B parameters, trained on hardware powered by 100% renewable energy.

This work is the result of private research by flozi00.

Join discussions about German LLM research and plan larger training runs together: https://join.slack.com/t/slack-dtc7771/shared_invite/zt-219keplqu-hLwjm0xcFAOX7enERfBz0Q

