You agree to not use the models for any harmful, inappropriate, unethical or illegal purpose or intention. You agree to perform your own red teaming and provide related safety and security measures before deployment for any product relevant to our models and demos, and you must abide by and comply with local governance and regulations. In no event shall the models' authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos. The models and demos may be subject to export controls or restrictions in the United States or other countries or regions. You shall comply with applicable laws and regulations in your use of the demos.

SeaLLMs - Large Language Models for Southeast Asia

🤗 Tech Memo    🤗 DEMO    Github    Technical Report

SeaLLM-hybrid-7b

This is a 7B pre-train & SFT hybrid version of SeaLLMs. It supports Vietnamese 🇻🇳, Indonesian 🇮🇩, Thai 🇹🇭, Malay 🇲🇾, Khmer 🇰🇭, Lao 🇱🇦, Tagalog 🇵🇭 and Burmese 🇲🇲. SeaLLM-hybrid-7b is pre-trained from Llama-2 with unlabeled raw text, and then fine-tuned with a mix of English-only SFT data and unlabeled text from other languages.

This hybrid model should be treated as a base model: it should not be expected to follow instructions and should instead be used with few-shot prompting (see the example under How to Run below).

It may have lower capability and performance than the 13B models, but it is much more memory-efficient and faster.

Visit our Technical Report and 🤗 Tech Memo for more details.

Terms of Use and License: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our SeaLLMs Terms Of Use.

Disclaimer: We must note that even though the weights, codes, and demos are released openly, and despite our best efforts in red teaming, safety fine-tuning, and enforcement, our models, like other pre-trained language models, come with potential risks, including but not limited to inaccurate, misleading, or potentially harmful generation. Developers and stakeholders should perform their own red teaming and put related security measures in place before deployment, and they must abide by and comply with local governance and regulations. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.

The logo was generated by DALL-E 3.

How to Run:

SeaLLM models work the same way as Llama-2, so any Llama-2 generation codebase should be sufficient to run them.
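As a minimal sketch (not an official recipe), the snippet below loads the model with Hugging Face transformers and runs a few-shot translation prompt, which matches the base-model usage described above. The repository ID SeaLLMs/SeaLLM-hybrid-7b, the BF16 dtype, and the device settings are assumptions; adjust them to your setup.

# A minimal sketch, assuming the repo ID below: load the hybrid base model
# with Hugging Face transformers and run a few-shot prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLM-hybrid-7b"  # assumed repository ID; adjust if needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 roughly halves memory versus F32
    device_map="auto",
)

# Base-model usage: prompt with a few in-context examples instead of instructions.
prompt = (
    "English: Hello, how are you?\n"
    "Vietnamese: Xin chào, bạn khỏe không?\n"
    "English: Thank you very much.\n"
    "Vietnamese:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Greedy decoding is used here for a deterministic completion; sampling parameters can be passed to generate() exactly as with any Llama-2 model.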

Citation

If you find our project useful, we hope you will kindly star our repo and cite our work as follows:

Corresponding Author: l.bing@alibaba-inc.com

@article{damonlpsg2023seallm,
  author = {Xuan-Phi Nguyen* and Wenxuan Zhang* and Xin Li* and Mahani Aljunied* and
            Qingyu Tan and Liying Cheng and Guanzheng Chen and Yue Deng and Sen Yang and
            Chaoqun Liu and Hang Zhang and Lidong Bing},
  title = {SeaLLMs - Large Language Models for Southeast Asia},
  year = {2023},
  eprint = {arXiv:2312.00738},
}