
Full-parameter pretraining checkpoint on Polish content, continued from the base model openlm-research/open_llama_3b_v2.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Place the whole model (embeddings, transformer body, and LM head) on one CUDA device.
device = "cuda"
device_map = {
    "lm_head": device,
    "model": device
}

tokenizer = AutoTokenizer.from_pretrained("piotr-ai/polanka-3b-pretrain-full-v0.4")
model = AutoModelForCausalLM.from_pretrained(
    "piotr-ai/polanka-3b-pretrain-full-v0.4",
    torch_dtype=torch.bfloat16,
    device_map=device_map,
)

prompt = "Psychologia to"

# Tokenize the prompt, sample a continuation, and decode it back to text.
model_input = tokenizer(prompt, return_tensors="pt").to(device)
generated = model.generate(**model_input, do_sample=True, temperature=0.6, max_new_tokens=100)[0]
decoded = tokenizer.decode(generated, skip_special_tokens=False)
print(decoded)
```
Example output:

```
<s> Psychologia to fascynująca dziedzina wiedzy, która wciąż kryje przed nami wiele tajemnic. Mechanizmy, które rządzą naszymi zachowaniami, decyzjami i wyborami, to nadal przedmiot badań wielu naukowców. Niektóre z nich ok
```
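For quick experiments, the same sampling setup can also be run through the `transformers` text-generation pipeline, which handles tokenization and decoding internally. A minimal sketch, assuming a single CUDA GPU (device 0); the prompt and sampling parameters mirror the example above:

```python
from transformers import pipeline
import torch

# Build a text-generation pipeline around the same checkpoint.
generator = pipeline(
    "text-generation",
    model="piotr-ai/polanka-3b-pretrain-full-v0.4",
    torch_dtype=torch.bfloat16,
    device=0,  # first CUDA GPU
)

# Generation kwargs are forwarded to model.generate().
output = generator("Psychologia to", do_sample=True, temperature=0.6, max_new_tokens=100)
print(output[0]["generated_text"])
```

The returned `generated_text` already includes the prompt, so no separate decoding step is needed.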
Model size: 3.43B parameters (Safetensors, BF16)