---
language:
- en
license: apache-2.0
model-index:
- name: lamatama
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 61.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.67
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
name: Open LLM Leaderboard
---
# Model Card: kevin009/lamatama
## Model Description
`kevin009/lamatama` is a compact, 1.1B-parameter language model fine-tuned for chat. It combines large-scale pretraining (3 trillion tokens) with a Zephyr-style fine-tuning recipe, described below, to provide capable natural language understanding and generation in a small footprint.
### Training Details
- **Model Architecture**: `kevin009/lamatama` uses the architecture and tokenizer of Llama 2, so it drops into open-source tooling that already supports Llama models.
- **Dataset**: The base model was pretrained on roughly 3 trillion tokens, a scale that supports a broad and nuanced grasp of language.
- **Training Period**: Pretraining ran for about 90 days on 16 A100-40G GPUs.
### Fine-tuning
This version of the model is fine-tuned for chat-based applications. It starts from `TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T` and follows Hugging Face's Zephyr training recipe.
- **Initial Phase**: The model was first fine-tuned on a variant of the UltraChat dataset, a collection of synthetic dialogues generated by ChatGPT.
- **Further Alignment**: It was then aligned with 🤗 TRL's `DPOTrainer` on the openbmb/UltraFeedback dataset, which pairs 64k prompts with model completions ranked by GPT-4 (a sketch of this step follows the list).
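As a rough illustration of the alignment step, here is a minimal DPO sketch with TRL. The dataset id, hyperparameters, and argument names are assumptions for illustration, not the exact recipe used for this model, and TRL's `DPOTrainer` signature has changed across versions:

```python
# Minimal DPO sketch (illustrative only, not this model's exact recipe).
# Note: in the Zephyr recipe, DPO starts from the UltraChat SFT checkpoint,
# not the raw base model used here for brevity.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# A binarized UltraFeedback variant: each row pairs a prompt with a
# "chosen" and a "rejected" completion (dataset id is an assumption).
train = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = DPOConfig(
    output_dir="lamatama-dpo",
    beta=0.1,  # preference-loss temperature (assumed value)
    per_device_train_batch_size=2,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train,
    processing_class=tokenizer,  # older TRL releases name this `tokenizer`
)
trainer.train()
```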
## How to Use
Ensure you have `transformers>=4.34`. For detailed instructions and updates, check out the GitHub page for `kevin009/lamatama`.
### Installation (only needed for `transformers<=v4.34`)
```bash
pip install git+https://github.com/huggingface/transformers.git
pip install accelerate
```
### Example Usage
Here's a quick guide on using `kevin009/lamatama` for generating text:
```python
import torch
from transformers import pipeline

# Initialize the pipeline (device_map="auto" requires the accelerate package)
pipe = pipeline("text-generation", model="kevin009/lamatama", torch_dtype=torch.bfloat16, device_map="auto")
# Sample dialogue with templating
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"}
]
# Generate prompt and outputs
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
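By default, the text-generation pipeline echoes the rendered prompt back as part of `generated_text`. If you only want the model's reply, slice the prompt off (or pass `return_full_text=False` to the pipeline call):

```python
# Keep only the completion by removing the rendered prompt prefix.
reply = outputs[0]["generated_text"][len(prompt):]
print(reply)
```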
## Acknowledgements
This model is a product of collaboration and innovative approaches to language modeling. We extend our thanks to all contributors, as well as the creators of the datasets and training methodologies that made `kevin009/lamatama` a reality.
---
This model card introduces `kevin009/lamatama`, a compact language model fine-tuned for chat applications via supervised fine-tuning and DPO alignment.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kevin009__lamatama).
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.15|
|AI2 Reasoning Challenge (25-Shot)|36.35|
|HellaSwag (10-Shot) |61.12|
|MMLU (5-Shot) |24.72|
|TruthfulQA (0-shot) |37.67|
|Winogrande (5-shot) |60.77|
|GSM8k (5-shot) | 2.27|
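As a sanity check, the reported average is the unweighted mean of the six benchmark scores:

```python
# Unweighted mean of the six Open LLM Leaderboard scores above.
scores = [36.35, 61.12, 24.72, 37.67, 60.77, 2.27]
print(f"{sum(scores) / len(scores):.2f}")  # 37.15
```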