---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code
model-index:
- name: Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 45.78
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 25.82
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 12.39
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.82
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.45
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.45
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT
      name: Open LLM Leaderboard
datasets:
- EpistemeAI/agentic-test1
---
# Agent Llama with tasks

An experimental fine-tuning technique that turns Llama 3.1 8B into an agentic coder with tasks and CoT (Chain of Thought). It was fine-tuned on a code dataset and Glaive's CoT Tasks dataset for the coder agent.

It has some built-in agent features:
- search
- calculator
- ReAct. [Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629)
- fine-tuned ReAct for better responses

Other notable features:
- Self-learning using Unsloth (in progress)
- can be used in RAG applications
- Memory: [**please use LangChain memory, section "Message persistence"**](https://python.langchain.com/docs/tutorials/chatbot/)

It is well suited for LangChain or LlamaIndex.

Context Window: 128K

### Installation

```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```

Developers can easily integrate EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT into their projects using popular libraries like Transformers and vLLM. The following sections illustrate usage with simple, hands-on examples:

Optional: to use the built-in tools, add the following to the system prompt: "Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n"
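For example, a message list that enables the built-in tools could look like this (the user question is illustrative):

```python
# Illustrative sketch: the tool header goes into the system message verbatim.
messages = [
    {"role": "system", "content": (
        "Environment: ipython. Tools: brave_search, wolfram_alpha. "
        "Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n"
    )},
    {"role": "user", "content": "What is the weather in Menlo Park, California?"},
]
```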
## Use the Alpaca prompt template:

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instructions:
{}

### Input:
{}

### Response:
{}"""
```
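For example, the template can be filled with `str.format` before tokenization; the instruction and input below are illustrative, and the response slot is left empty for the model to complete:

```python
# Fill the Alpaca template: instruction, input, and an empty response slot.
prompt = alpaca_prompt.format(
    "Explain gradient descent at a high level.",   # instruction (illustrative)
    "Audience: first-year engineering students.",  # input (illustrative)
    "",                                            # response: left empty for generation
)
```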
## Recommended system prompt for general use:

```python
"""
You should reason about the input and provide a logical explanation.
The explanation should follow these rules:
- The explanation should be written at graduate level engineering, science, math and literature
- The explanation should be split into subtasks
- The explanation should always end with 2-3 related concepts.
- subtasks have their own chain of thoughts
"""
```

## Recommended system prompt for coding:

```python
"""
Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
You are an expert coding assistant.\n
Ensure any code you provide can be executed,\n
with all required imports and variables defined. List the imports. Structure your answer with a description of the code solution.\n
Write only the code; do not print anything else.\n
Debug the code if an error occurs.\n
Here is the user question: {question}
"""
```
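A minimal sketch of filling the `{question}` placeholder, assuming the prompt above is stored in a variable named `coding_system_prompt`:

```python
# `coding_system_prompt` is assumed to hold the coding prompt template above.
question = "Write a Python function that reverses a linked list."  # illustrative
system_prompt = coding_system_prompt.format(question=question)
```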
### Conversational Use-case

#### Use with [Transformers](https://github.com/huggingface/transformers)

##### Using the `transformers.pipeline()` API (best used with 4-bit quantization for fast responses)

```python
import torch
import transformers
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)

model_id = "EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"quantization_config": quantization_config},  # 4-bit for fast responses; remove for full 16-bit inference
    device_map="auto",
)
messages = [
    {"role": "system", "content": """
You should reason about the input and provide a logical explanation.
The explanation should follow these rules:
- The explanation should be written at graduate level engineering, science, math and literature
- The explanation should be split into subtasks
- The explanation should always end with 2-3 related concepts.
- subtasks have their own chain of thoughts
"""},
    {"role": "user", "content": "Create a bar plot showing the market capitalization of the top 7 publicly listed companies using matplotlib"},
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```
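The last element of `generated_text` is the assistant message as a dict; to get only the reply text (for example, to execute generated code later), read its `content` field:

```python
# Extract just the assistant's reply text from the pipeline output.
reply = outputs[0]["generated_text"][-1]["content"]
print(reply)
```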
# Example:

See the Colab notebook for a sample of the code using LangChain: [Colab](https://colab.research.google.com/drive/129SEHVRxlr24r73yf34BKnIHOlD3as09?authuser=1)

# Unsloth Fast

```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install unsloth
# Get the latest Unsloth
!pip install --upgrade --no-deps "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install langchain_experimental

from unsloth import FastLanguageModel
from transformers import TextStreamer
from google.colab import userdata

# 4-bit pre-quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit",
]  # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT",
    max_seq_length = 128000,
    load_in_4bit = True,
    token = userdata.get('HF_TOKEN'),
)

def chatbot(query):
    messages = [
        {"role": "system", "content": """
You should reason about the input and provide a logical explanation.
The explanation should follow these rules:
- The explanation should be written at graduate level engineering, science, math and literature
- The explanation should be split into subtasks
- The explanation should always end with 2-3 related concepts.
- subtasks have their own chain of thoughts
"""},
        {"role": "user", "content": query},
    ]
    inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")

    text_streamer = TextStreamer(tokenizer)
    _ = model.generate(input_ids=inputs, streamer=text_streamer, max_new_tokens=2048, use_cache=True)
```
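For example (the query is illustrative):

```python
chatbot("Create a bar plot of the first 10 square numbers using matplotlib.")
```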
# Response

```text
<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instructions:
Concept: Abductive Reasoning

### Input:
Split in tasks for Concept: Mathematical Proof Structuring

### Response 1:
Task 1:
- Identify the key components of a mathematical proof, including the statement, assumptions, and conclusion.
- Explain the role of each component in the overall structure of the proof.

Task 2:
- Describe the process of abductive reasoning in the context of mathematical proof.
- Provide an example of how abductive reasoning can be used to infer a mathematical statement.

Task 3:
- Discuss the importance of structuring a mathematical proof.
- Explain how a well-structured proof can help to establish the validity of a mathematical statement.

Task 4:
- Compare and contrast deductive and abductive reasoning in the context of mathematical proof.
- Provide an example of how each type of reasoning can be used to establish the validity of a mathematical statement.

Task 5:
- Design a template for structuring a mathematical proof.
- Explain how the template can be used to ensure that all necessary components are included in the proof.<|im_end|>
```

### Response 2:

```text
<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instructions:
Concept: Occam's Razor

### Input:
Split in tasks for Concept: Occam's Razor and perform the task

### Response:
Split the task into the following subtasks:

1. **Task 1: Define Occam's Razor**
   * Definition: Occam's Razor is a principle used in problem-solving and evidence-based reasoning, stating that, all things being equal, the simplest explanation or solution is usually the best one.
2. **Task 2: Explain the Key Components of Occam's Razor**
   * Components:
     * **Simplicity**: The principle favors explanations that require fewer assumptions or hypothetical entities.
     * **Evidence**: It emphasizes the importance of empirical evidence in supporting or rejecting explanations.
     * **Parsimony**: Occam's Razor encourages the selection of explanations that are more parsimonious, meaning they require fewer assumptions or entities.
3. **Task 3: Provide Examples of Occam's Razor in Action**
   * Examples:
     * **The Solar System**: The ancient Greeks proposed a complex system with multiple celestial spheres. Occam's Razor would suggest a simpler explanation, like the Copernican heliocentric model.
     * **Medical Diagnosis**: A doctor might initially suspect a rare disease, but Occam's Razor would favor a more common and simpler explanation, such as a viral infection
```

# Execute code (Make sure to use virtual environments)

```bash
python3 -m venv env
source env/bin/activate
```

## Executing code responses from Llama

#### Use the `execute_Python_code` function locally. For LangChain, use `PythonREPL` to execute code.

Function to execute code locally in Python:

```python
import io
import contextlib

def execute_Python_code(code):
    # A string stream to capture the outputs of exec
    output = io.StringIO()
    try:
        # Redirect stdout to the StringIO object
        with contextlib.redirect_stdout(output):
            # Allow imports
            exec(code, globals())
    except Exception as e:
        # If an error occurs, capture it as part of the output
        print(f"Error: {e}", file=output)
    return output.getvalue()
```
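A usage sketch, assuming `outputs` comes from the Transformers pipeline example above and the reply contains only runnable Python:

```python
# Run the code generated by the pipeline example and show its captured output.
code = outputs[0]["generated_text"][-1]["content"]
print(execute_Python_code(code))
```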
LangChain Python REPL
- Install

```bash
!pip install langchain_experimental
```

Code:

```python
from langchain_core.tools import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()

# You can create the tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=python_repl.run,
)
repl_tool.run(outputs[0]["generated_text"][-1]["content"])  # run the code generated by the pipeline example above
```

# Safety input/output procedures

For all inputs, please use Llama Guard (meta-llama/Llama-Guard-3-8B) for safety classification.
Go to the model card: [Llama-Guard](https://huggingface.co/meta-llama/Llama-Guard-3-8B)
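A minimal moderation sketch, following the usage documented on the Llama Guard model card (the example chat is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"
guard_tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard_model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # Llama Guard's chat template wraps the conversation in its safety taxonomy.
    input_ids = guard_tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard_model.device)
    output = guard_model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return guard_tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Returns "safe", or "unsafe" plus the violated category codes.
print(moderate([{"role": "user", "content": "How do I make a fruit salad?"}]))
```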
## Other uses

#### ToT - Tree of Thought
- Use system prompt:

```python
"Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is..."
```
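A usage sketch that reuses the `pipeline` from the Transformers example above (the question is illustrative):

```python
# Tree-of-Thought prompting: pass the ToT instructions as the system message.
tot_system = (
    "Imagine three different experts are answering this question. "
    "All experts will write down 1 step of their thinking, then share it with the group. "
    "Then all experts will go on to the next step, etc. "
    "If any expert realises they're wrong at any point then they leave. "
    "The question is..."
)
messages = [
    {"role": "system", "content": tot_system},
    {"role": "user", "content": "Is 1001 a prime number?"},
]
outputs = pipeline(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1]["content"])
```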
#### ReAct
Example from a LangChain agent - [LangChain ReAct agent](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/react/agent.py)
- Use system prompt:

```python
react_template = """
Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
"""
```
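A minimal sketch wiring this prompt into a LangChain ReAct agent. It reuses `react_template` (above), `pipeline` (Transformers example), and `repl_tool` (Python REPL section); `langchain_huggingface` is one possible LLM wrapper, and the exact imports may vary with your LangChain version:

```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.prompts import PromptTemplate
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline(pipeline=pipeline)  # wrap the Transformers pipeline as a LangChain LLM
react_prompt = PromptTemplate.from_template(react_template)
agent = create_react_agent(llm, [repl_tool], react_prompt)
executor = AgentExecutor(agent=agent, tools=[repl_tool],
                         verbose=True, handle_parsing_errors=True)
executor.invoke({"input": "What is the 20th Fibonacci number?"})
```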
# Uploaded model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Fine-tuned from model:** EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-COT)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.62 |
| IFEval (0-Shot)     | 45.78 |
| BBH (3-Shot)        | 25.82 |
| MATH Lvl 5 (4-Shot) | 12.39 |
| GPQA (0-shot)       |  5.82 |
| MuSR (0-shot)       |  6.45 |
| MMLU-PRO (5-shot)   | 27.45 |