---
language:
  - 'no'
  - nb
  - nn
inference: false
tags:
  - T5
  - NorT5
  - Norwegian
  - encoder-decoder
license: cc-by-4.0
pipeline_tag: text2text-generation
---

# NorT5 base

The official release of a new generation of NorT5 language models described in the paper [NorBench — A Benchmark for Norwegian Language Models](https://aclanthology.org/2023.nodalida-1.61). Please read the paper for more details about the model.

Other sizes:
- [NorT5 xs](https://huggingface.co/ltg/nort5-xs)
- [NorT5 small](https://huggingface.co/ltg/nort5-small)
- [NorT5 base](https://huggingface.co/ltg/nort5-base)
- [NorT5 large](https://huggingface.co/ltg/nort5-large)

Encoder-only NorBERT siblings:
- [NorBERT 3 xs](https://huggingface.co/ltg/norbert3-xs)
- [NorBERT 3 small](https://huggingface.co/ltg/norbert3-small)
- [NorBERT 3 base](https://huggingface.co/ltg/norbert3-base)
- [NorBERT 3 large](https://huggingface.co/ltg/norbert3-large)

## Example usage

This model currently needs a custom wrapper from `modeling_nort5.py`, so you should load it with `trust_remote_code=True`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-base", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-base", trust_remote_code=True)


# MASKED LANGUAGE MODELING
# the [MASK_0] sentinel marks the span that the decoder should fill in

sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er: å[MASK_0]."
encoding = tokenizer(sentence)

input_tensor = torch.tensor([encoding.input_ids])
# token ids 7 and 8 are passed as the decoder start and end-of-sequence tokens
output_tensor = model.generate(input_tensor, decoder_start_token_id=7, eos_token_id=8)
tokenizer.decode(output_tensor.squeeze(), skip_special_tokens=True)

# should output: ' varme opp et rom.'


# PREFIX LANGUAGE MODELING
# you need to finetune this model for that, or use a `nort5-{size}-lm` model,
# which is already finetuned on prefix language modeling (see the sketch after this block)

sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er (Wikipedia) "
encoding = tokenizer(sentence)

input_tensor = torch.tensor([encoding.input_ids])
output_tensor = model.generate(input_tensor, max_new_tokens=50, num_beams=4, do_sample=False)
tokenizer.decode(output_tensor.squeeze())

# should output: [BOS]ˈoppvarming, det vil si at det skjer en endring i temperaturen i et medium, f.eks. en ovn eller en radiator, slik at den blir varmere eller kaldere, eller at den blir varmere eller kaldere, eller at den blir
```
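
For ready-to-use prefix language modeling without finetuning, the finetuned checkpoints mentioned in the comment above can be loaded in exactly the same way. A minimal sketch, assuming the base-sized variant is published as `ltg/nort5-base-lm`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# assumption: the prefix-LM finetuned variant of this size is named "ltg/nort5-base-lm"
tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-base-lm", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-base-lm", trust_remote_code=True)

sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er (Wikipedia) "
input_tensor = torch.tensor([tokenizer(sentence).input_ids])
output_tensor = model.generate(input_tensor, max_new_tokens=50, num_beams=4, do_sample=False)
print(tokenizer.decode(output_tensor.squeeze()))
```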

The following classes are currently implemented: `AutoModel` and `AutoModelForSeq2SeqLM`.
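
For feature extraction, the same checkpoint can be loaded through `AutoModel`. A minimal sketch; the `decoder_input_ids` value reuses the decoder start token id (7) from the generation example above, and the T5-style output attribute is an assumption about the custom wrapper:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-base", trust_remote_code=True)
model = AutoModel.from_pretrained("ltg/nort5-base", trust_remote_code=True)

encoding = tokenizer("Elektrisk oppvarming.", return_tensors="pt")
with torch.no_grad():
    outputs = model(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        decoder_input_ids=torch.tensor([[7]]),  # decoder start token, as in the example above
    )
# assuming a T5-style Seq2Seq output object:
print(outputs.last_hidden_state.shape)
```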

## Cite us

```bibtex
@inproceedings{samuel-etal-2023-norbench,
    title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models",
    author = "Samuel, David  and
      Kutuzov, Andrey  and
      Touileb, Samia  and
      Velldal, Erik  and
      {\O}vrelid, Lilja  and
      R{\o}nningstad, Egil  and
      Sigdel, Elina  and
      Palatkina, Anna",
    booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
    month = may,
    year = "2023",
    address = "T{\'o}rshavn, Faroe Islands",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2023.nodalida-1.61",
    pages = "618--633",
    abstract = "We present NorBench: a streamlined suite of NLP tasks and probes for evaluating Norwegian language models (LMs) on standardized data splits and evaluation metrics. We also introduce a range of new Norwegian language models (both encoder and encoder-decoder based). Finally, we compare and analyze their performance, along with other existing LMs, across the different benchmark tests of NorBench.",
}
```