
SeaLLMs-v3 - Large Language Models for Southeast Asia

Website    Model    πŸ€— DEMO    Github    [NEW] Technical Report

We introduce SeaLLMs-v3, the latest series in the SeaLLMs (Large Language Models for Southeast Asian languages) family. It achieves state-of-the-art performance among models of similar size, excelling across a diverse array of tasks such as world knowledge, mathematical reasoning, translation, and instruction following. At the same time, it was specifically enhanced to be more trustworthy, exhibiting reduced hallucination and providing safe responses, particularly for queries closely related to Southeast Asian culture.

πŸ”₯ Highlights

  • State-of-the-art performance compared to open-source models of similar sizes, evaluated across various dimensions such as human exam questions, instruction-following, mathematics, and translation.
  • Significantly enhanced instruction-following capability, especially in multi-turn settings.
  • Safer to use, with significantly fewer instances of hallucination and greater sensitivity to local contexts.

Uses

SeaLLMs is tailored for handling a wide range of languages spoken in the SEA region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.

This page introduces the SeaLLMs-v3-7B model, which can be fine-tuned for your specific downstream tasks, especially in SEA languages. Note that this is a base model; if you are looking for a model that can be directly applied to your downstream applications, you may want to check the chat version: SeaLLMs-v3-7B-Chat.
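As an illustration, below is a minimal sketch of loading the base model for plain text completion with the Hugging Face transformers library. The dtype and generation settings are illustrative choices, not official recommendations:

```python
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLMs-v3-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit the 7B model on a single GPU
    device_map="auto",
)

# A base model does plain text completion, not chat: give it a prefix to continue.
prompt = "Bahasa Indonesia adalah"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For chat-style prompting with a built-in chat template, use the SeaLLMs-v3-7B-Chat checkpoint instead.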

Evaluation

We evaluate SeaLLMs-v3-7B on human exam questions and multilingual mathematics benchmarks.

Multilingual World Knowledge - M3Exam

M3Exam consists of local exam questions collected from each country. It reflects the model's world knowledge (e.g., with language or social science subjects) and reasoning abilities (e.g., with mathematics or natural science subjects).

| Model | en | zh | id | th | vi | avg | avg_sea |
|---|---|---|---|---|---|---|---|
| Gemma-7B | 0.732 | 0.519 | 0.475 | 0.460 | 0.594 | 0.556 | 0.510 |
| Sailor-7B-Chat | 0.660 | 0.652 | 0.475 | 0.462 | 0.513 | 0.552 | 0.483 |
| SeaLLM-7B-v2.5 | 0.758 | 0.581 | 0.499 | 0.502 | 0.622 | 0.592 | 0.541 |
| Sailor-14B | 0.748 | 0.840 | 0.536 | 0.528 | 0.621 | 0.655 | 0.562 |
| Sailor-14B-Chat | 0.749 | 0.843 | 0.553 | 0.566 | 0.637 | 0.670 | 0.585 |
| Qwen2-7B | 0.815 | 0.874 | 0.530 | 0.479 | 0.628 | 0.665 | 0.546 |
| Qwen2-7B-Instruct | 0.809 | 0.880 | 0.558 | 0.555 | 0.624 | 0.685 | 0.579 |
| SeaLLMs-v3-7B | 0.809 | 0.863 | 0.545 | 0.530 | 0.628 | 0.675 | 0.568 |
| SeaLLMs-v3-7B-Chat | 0.809 | 0.874 | 0.558 | 0.569 | 0.649 | 0.692 | 0.592 |

Multilingual World Knowledge - MMLU

MMLU questions are translated into SEA languages for evaluation. This primarily tests the model's cross-lingual alignment, since the required knowledge remains largely Western-centric.

| Model | en | zh | id | th | vi | avg | avg_sea |
|---|---|---|---|---|---|---|---|
| Gemma-7B | 0.634 | 0.509 | 0.545 | 0.490 | 0.494 | 0.535 | 0.510 |
| Sailor-7B-Chat | 0.558 | 0.472 | 0.484 | 0.414 | 0.462 | 0.478 | 0.454 |
| SeaLLM-7B-v2.5 | 0.652 | 0.544 | 0.565 | 0.479 | 0.528 | 0.553 | 0.524 |
| Sailor-14B | 0.618 | 0.564 | 0.570 | 0.482 | 0.535 | 0.554 | 0.529 |
| Sailor-14B-Chat | 0.627 | 0.561 | 0.567 | 0.496 | 0.541 | 0.558 | 0.535 |
| Qwen2-7B | 0.710 | 0.642 | 0.602 | 0.520 | 0.566 | 0.608 | 0.563 |
| Qwen2-7B-Instruct | 0.708 | 0.635 | 0.599 | 0.524 | 0.568 | 0.607 | 0.564 |
| SeaLLMs-v3-7B | 0.706 | 0.654 | 0.617 | 0.536 | 0.587 | 0.620 | 0.580 |
| SeaLLMs-v3-7B-Chat | 0.713 | 0.647 | 0.625 | 0.544 | 0.578 | 0.622 | 0.582 |

Multilingual Math - MGSM

We evaluate multilingual math capability using the MGSM dataset with 5-shot prompting. MGSM originally contains only English, Chinese, and Thai test sets; we use Google Translate to translate the same English questions into the other SEA languages. Note that we follow each country's local number-formatting conventions: in Indonesian and Vietnamese, for example, dots are used as thousands separators and commas as decimal separators, the opposite of the English convention. A sketch of the prompt construction is shown after the table.

| MGSM | en | id | ms | th | vi | zh | avg |
|---|---|---|---|---|---|---|---|
| Gemma-7B | 64.8 | 41.2 | 43.2 | 38.0 | 34.0 | 39.6 | 43.5 |
| Sailor-7B | 34.4 | 25.2 | 22.8 | 24.8 | 22.4 | 26.4 | 26.0 |
| Meta-Llama-3-8B | 56.8 | 36.0 | 33.6 | 34.8 | 33.6 | 43.6 | 39.7 |
| GLM-4-9B | 78.0 | 53.6 | 57.2 | 46.0 | 56.8 | 69.6 | 60.2 |
| Qwen2-7B | 79.6 | 58.8 | 56.8 | 54.8 | 54.8 | 69.2 | 62.3 |
| SeaLLMs-v3-7B | 78.8 | 59.2 | 56.8 | 56.8 | 54.8 | 72.0 | 63.1 |
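To make the 5-shot setup concrete, here is a minimal sketch of how such a prompt can be assembled. The exemplar and the `build_prompt` helper are hypothetical illustrations, not the actual MGSM items or the evaluation harness used here:

```python
# Sketch of 5-shot prompt construction for MGSM-style evaluation.
# The exemplar below is a made-up placeholder, not a real MGSM item.
few_shot_exemplars = [
    ("Pertanyaan: Ani membeli 3 buku seharga 2.500 rupiah per buku. Berapa totalnya?",
     "Jawaban: 3 x 2.500 = 7.500. Jadi totalnya 7.500 rupiah."),
    # ...four more (question, worked answer) pairs in the target language...
]

def build_prompt(test_question: str) -> str:
    """Concatenate the solved exemplars, then append the unanswered test question."""
    parts = [f"{q}\n{a}" for q, a in few_shot_exemplars]
    parts.append(f"{test_question}\nJawaban:")
    return "\n\n".join(parts)

# Note the locale-specific number format: Indonesian writes 2.500 where
# English would write 2,500 (dot as the thousands separator).
print(build_prompt("Pertanyaan: Budi punya 1.200 kelereng dan kehilangan 300. Berapa sisanya?"))
```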

Acknowledgement to Our Linguists

We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi, and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT datasets, and who evaluated our models across different aspects, especially safety.

Citation

If you find our project useful, we hope you would kindly star our repo and cite our work as follows:

@article{damonlp2024seallm3,
  author = {Wenxuan Zhang*, Hou Pong Chan*, Yiran Zhao*, Mahani Aljunied*,
            Jianyu Wang*, Chaoqun Liu, Yue Deng, Zhiqiang Hu, Weiwen Xu,
            Yew Ken Chia, Xin Li, Lidong Bing},
  title = {SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages},
  year = {2024},
  url = {https://arxiv.org/abs/2407.19672}
}

Corresponding Author: l.bing@alibaba-inc.com
