---
license: apache-2.0
language:
- en
- gl
- de
- es
- ca
- it
- fr
- eu
- pt
metrics:
- comet
- bleu
pipeline_tag: translation
---

# Plume32k

## Table of Contents
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [Run the model](#run-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Additional information](#additional-information)
## Summary

Plume is the first LLM trained from scratch for Neural Machine Translation using only parallel, Catalan-centric data. It is a language model with the same architecture as Gemma 2B, trained for general translation tasks at sentence level. For more information about the training, architecture, and interpretability of the model, check out the paper "Investigating the translation capabilities of Large Language Models trained on parallel data only". The preprint is available on [arXiv]().

- **Developed by:** The Language Technologies Unit from Barcelona Supercomputing Center (BSC).
- **Languages:** Spanish, French, Italian, Portuguese, Galician, German, English, and Basque.
- **License:** Apache License, Version 2.0

## Model Description

In recent years, Large Language Models (LLMs) have demonstrated exceptional proficiency across a broad spectrum of Natural Language Processing (NLP) tasks, including Machine Translation. However, previous methodologies predominantly relied on iterative processes such as instruction fine-tuning or continual pre-training, leaving unexplored the challenges of training LLMs solely on parallel data. In this work, we introduce Plume (**P**arallel **L**ang**u**age **M**od**e**l), a collection of three 2B LLMs with varying vocabulary sizes (32k, 128k, and 256k) trained exclusively on Catalan-centric parallel examples. These models perform comparably to previous encoder-decoder architectures on 16 supervised translation directions and 56 zero-shot ones. For more details regarding the model architecture, the dataset, and model interpretability, take a look at the paper, which is available on [arXiv]().

## Intended Uses and Limitations

The model is proficient in 16 supervised translation directions that include Catalan and is also capable of translating in 56 other zero-shot directions. At the time of submission, no measures have been taken to estimate the bias and added toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Run the model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Language codes: spa_Latn (Spanish), cat_Latn (Catalan), eng_Latn (English),
# ita_Latn (Italian), eus_Latn (Basque), deu_Latn (German),
# por_Latn (Portuguese), glg_Latn (Galician), fra_Latn (French)
model_id = "projecte-aina/Plume32k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

src_lang_code = 'spa_Latn'
tgt_lang_code = 'cat_Latn'
sentence = 'Ayer se fue, tomó sus cosas y se puso a navegar.'

# Prompt format: " [src_lang] sentence \n[tgt_lang]"
prompt = ' [{}] {} \n[{}]'.format(src_lang_code, sentence, tgt_lang_code)

input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(input_ids, max_length=200, num_beams=5)

# Decode only the generated continuation, skipping the prompt tokens
input_length = input_ids.shape[1]
generated_text = tokenizer.decode(output_ids[0, input_length:], skip_special_tokens=True).strip()
# Ahir se'n va anar, va agafar les seves coses i es va posar a navegar.
```
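If you need to translate across several of the supported directions, the prompt construction and generation above can be wrapped in a small helper. This is a minimal sketch of our own (the `translate` function is not part of the official repository); it simply reuses the prompt format and decoding steps shown above.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "projecte-aina/Plume32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def translate(sentence: str, src_lang_code: str, tgt_lang_code: str) -> str:
    # Same prompt format as above: " [src_lang] sentence \n[tgt_lang]"
    prompt = ' [{}] {} \n[{}]'.format(src_lang_code, sentence, tgt_lang_code)
    input_ids = tokenizer(prompt, return_tensors='pt').input_ids
    output_ids = model.generate(input_ids, max_length=200, num_beams=5)
    # Strip the prompt tokens and decode only the generated translation
    return tokenizer.decode(
        output_ids[0, input_ids.shape[1]:], skip_special_tokens=True
    ).strip()

print(translate('Ahir va ploure tot el dia.', 'cat_Latn', 'eng_Latn'))
```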
## Training

Training details are specified in the [paper](). Code for training the model and running other experiments can be found in our [GitHub repository](https://github.com/projecte-aina/Plume).

## Evaluation

Below are the evaluation results on Flores-200 and NTREX for the supervised MT directions. For more details about model evaluation, check out the [paper]().

| Model          | FLORES BLEU | FLORES COMET | NTREX BLEU | NTREX COMET |
|----------------|-------------|--------------|------------|-------------|
| NLLB-1.3B      | 31.02       | 0.86         | 29.68      | 0.85        |
| NLLB-600M      | 29.24       | 0.85         | 28.37      | 0.84        |
| Bilinguals BSC | 31.93       | 0.86         | 29.77      | 0.84        |
| **Plume 32k**  | 30.44       | 0.86         | 28.46      | 0.84        |
| **Plume 128k** | 30.81       | 0.86         | 28.78      | 0.84        |
| **Plume 256k** | 30.72       | 0.86         | 28.87      | 0.84        |
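Scores of this kind can be reproduced with standard tooling. The following is a minimal sketch, assuming the `sacrebleu` and `unbabel-comet` packages; the `Unbabel/wmt22-comet-da` checkpoint is our assumption for illustration, not necessarily the one used in the paper, and the toy lists stand in for full Flores-200 or NTREX test sets.

```python
# pip install sacrebleu unbabel-comet
import sacrebleu
from comet import download_model, load_from_checkpoint

# Toy data; replace with the full test set for meaningful corpus-level scores.
sources = ['Ayer se fue, tomó sus cosas y se puso a navegar.']
hypotheses = ["Ahir se'n va anar, va agafar les seves coses i es va posar a navegar."]
references = ["Ahir se'n va anar, va agafar les seves coses i es va posar a navegar."]

# Corpus-level BLEU with sacrebleu's default tokenization
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f'BLEU: {bleu.score:.2f}')

# COMET: score (source, hypothesis, reference) triples with a reference-based model
comet_path = download_model('Unbabel/wmt22-comet-da')
comet_model = load_from_checkpoint(comet_path)
comet_out = comet_model.predict(
    [{'src': s, 'mt': h, 'ref': r} for s, h, r in zip(sources, hypotheses, references)],
    batch_size=8,
    gpus=0,  # set to 1 if a GPU is available
)
print(f'COMET: {comet_out.system_score:.2f}')
```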
## Citation

```bibtex
```

## Additional information

### Author
The Language Technologies Unit from Barcelona Supercomputing Center.

### Contact
Feel free to write to us with any questions you may have at {javier.garcia1, carlos.escolano, aleix.santsavall, francesca.delucafornaciari, audrey.mash, xixian.liao, maite.melero}@bsc.es.

### Copyright
Copyright (c) 2023 by the Language Technologies Unit, Barcelona Supercomputing Center.

### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

### Disclaimer

The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0. Be aware that the model may have biases and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it), or become users of the model themselves, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the model (Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties.