|
--- |
|
license: apache-2.0 |
|
language: |
|
- zh |
|
- en |
|
tags: |
|
- openba |
|
--- |
|
|
|
# Introduction |
|
|
|
OpenBA is an open-sourced 15B-parameter bilingual (Chinese/English) asymmetric seq2seq model pre-trained from scratch.
|
|
|
## Open Source Plan |
|
|
|
We are excited to release two versions of our model, with a third on the horizon:
|
|
|
- [OpenBA-LM](https://huggingface.co/OpenBA/OpenBA-LM): The backbone language model, pre-trained from scratch on 340B English, Chinese, and code tokens.
|
- [OpenBA-Flan](https://huggingface.co/OpenBA/OpenBA-Flan): The base model after supervised fine-tuning on an additional 40B tokens from our collected BiFlan Dataset (a loading sketch follows this list).
|
- OpenBA-Chat: coming soon |
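
Both released checkpoints load through the same `transformers` interface. Below is a minimal sketch for the instruction-tuned checkpoint, assuming OpenBA-Flan ships the same remote-code Seq2Seq setup as the OpenBA-LM demo under "Usage" below:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: OpenBA-Flan uses the same custom remote code as OpenBA-LM,
# so it loads with an identical AutoModelForSeq2SeqLM call.
tokenizer = AutoTokenizer.from_pretrained("OpenBA/OpenBA-Flan", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("OpenBA/OpenBA-Flan", trust_remote_code=True).half().cuda().eval()
```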
|
|
|
## Model Description |
|
- **Model type:** Language model |
|
- **Language(s) (NLP):** zh, en (the tokenizer is multilingual, which also leaves room for learning on further languages; see the tokenizer sketch after this list)
|
- **License:** Apache 2.0 |
|
- **Resources for more information:** |
|
- [Paper](https://arxiv.org/abs/2309.10706) |
|
- [GitHub Repo](https://github.com/OpenNLG/OpenBA/) |
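
Here is a minimal sketch of inspecting the multilingual tokenizer on mixed Chinese/English input; the exact pieces printed depend on the released sentencepiece vocabulary:

```python
from transformers import AutoTokenizer

# Load the tokenizer shipped with the checkpoint (a sentencepiece model).
tokenizer = AutoTokenizer.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True)

# Mixed-script input: "OpenBA is a bilingual model." written in Chinese, then in English.
text = "OpenBA 是一个双语模型。OpenBA is a bilingual model."
print(tokenizer.tokenize(text))
```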
|
|
|
# Usage |
|
|
|
## Install requirements |
|
|
|
```bash
# Quote the version specifier so the shell does not treat ">" as a redirection.
pip install transformers "torch>=2.0" sentencepiece
```
|
|
|
## Demo usage |
|
|
|
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True)
>>> # Load in half precision on GPU; the 15B weights alone take roughly 30 GB of GPU memory in fp16.
>>> model = AutoModelForSeq2SeqLM.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True).half().cuda()
>>> model = model.eval()
>>> # Span-corruption prompt: "<S>" opens the input and "<extra_id_0>" marks the span for the model to fill.
>>> # The Chinese text reads: "Suzhou lies on the Taihu Lake plain; along the river are high sandy plains; the riv[ers]"
>>> query = "<S>" + "苏州处太湖平原,沿江为高沙平原,河" + "<extra_id_0>"
>>> inputs = tokenizer(query, return_tensors="pt").to("cuda")
>>> # Sampling is stochastic, so the generated completion may differ from the one shown below.
>>> outputs = model.generate(**inputs, do_sample=True, max_new_tokens=32)
>>> response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(response)
流两侧为河淤平原,苏州平原是江苏平原主体,地势低平,土地肥沃,气候温和
```
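
The model fills in the `<extra_id_0>` span: the sampled completion above continues the prompt and reads, roughly, "[the riv]ers are flanked by alluvial plains; the Suzhou plain is the main body of the Jiangsu plain, with low, flat terrain, fertile soil, and a mild climate." For reproducible output you can switch to deterministic decoding; a minimal sketch reusing the variables above and standard `generate` arguments (the beam width of 4 is an illustrative choice, not a recommendation from the paper):

```python
>>> # Beam search instead of sampling gives a deterministic completion.
>>> outputs = model.generate(**inputs, do_sample=False, num_beams=4, max_new_tokens=32)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```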