---
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
---
# LAION LeoLM: Linguistically Enhanced Open Language Model
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer 42, we release two foundation models trained with an 8k context length, `LeoLM/leo-hessianai-7b` and `LeoLM/leo-hessianai-13b`, under the Llama-2 community license (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our blog post or our paper (preprint coming soon) for more details!
A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.
## Model Details
- Finetuned from: `meta-llama/Llama-2-13b-hf`
- Model type: Causal decoder-only transformer language model
- Language: English and German
- License: LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Contact: LAION Discord or Björn Plüster
## Use in 🤗Transformers
First install direct dependencies:
```bash
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git@v2.1.1#subdirectory=csrc/rotary
```
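As a quick sanity check (not part of the original instructions), you can confirm that the build succeeded by importing the package:

```python
# Sanity check: verify that flash-attn was built and imports correctly.
import flash_attn

print(flash_attn.__version__)  # should print 2.1.1 for the pinned install above
```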
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# The model ID is the first positional argument (there is no `model=` keyword).
model = AutoModelForCausalLM.from_pretrained(
    "LeoLM/leo-hessianai-13b",
    device_map="auto",          # spread layers across available devices
    torch_dtype=torch.float16,
    trust_remote_code=True      # True for flash-attn2 else False
)
tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-hessianai-13b")
```
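From here you can run a short generation to confirm everything works end to end. A minimal sketch; the German prompt and the sampling parameters below are illustrative choices, not from the model card:

```python
# Illustrative generation example; prompt and sampling settings are arbitrary.
inputs = tokenizer("Die Hauptstadt von Deutschland ist", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=50,   # cap on newly generated tokens
    do_sample=True,      # sample rather than greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```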