|
--- |
|
language: |
|
- en |
|
license: apache-2.0 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- llama |
|
- trl |
|
base_model: |
|
- EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto |
|
model-index: |
|
- name: Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: IFEval (0-Shot) |
|
type: HuggingFaceH4/ifeval |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: inst_level_strict_acc and prompt_level_strict_acc |
|
value: 72.05 |
|
name: strict accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: BBH (3-Shot) |
|
type: BBH |
|
args: |
|
num_few_shot: 3 |
|
metrics: |
|
- type: acc_norm |
|
value: 26.45 |
|
name: normalized accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MATH Lvl 5 (4-Shot) |
|
type: hendrycks/competition_math |
|
args: |
|
num_few_shot: 4 |
|
metrics: |
|
- type: exact_match |
|
value: 13.67 |
|
name: exact match |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GPQA (0-shot) |
|
type: Idavidrein/gpqa |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 0 |
|
name: acc_norm |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MuSR (0-shot) |
|
type: TAUR-Lab/MuSR |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 2.08 |
|
name: acc_norm |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU-PRO (5-shot) |
|
type: TIGER-Lab/MMLU-Pro |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 28.31 |
|
name: accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
## Model Description |
|
|
|
### Coding and ToT/CoT Assistant
|
|
|
<img src="https://huggingface.co/EpistemeAI/Fireball-Llama-3.1-8B-v1dpo/resolve/main/fireball-llama.JPG" width="200"/> |
|
|
|
<a href="https://ko-fi.com/epistemeai">>>Please support and donate<<</a> |
|
|
|
|
|
We are **introducing a revolutionary fine-tuned model** for general purpose use, research, advanced coding, data scientists/engineers, and machine learning scientists.

Use this model for data mining, large-scale data processing, EDA on large datasets, and data visualization, as well as machine learning, AI engineering, and MLOps.

This model can also generate code to define and train new AI models.
|
|
|
It has some built-in agent features:

- search

- calculator

- ReAct. [Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629)

- fine-tuned ReAct for better responses

- Automatic Reasoning and Tool-use (ART)
|
|
|
Other notable features:

- Self-learning (automatically training) chatbot using Unsloth.

- Can be used in RAG applications.

- Memory. [**Please use LangChain memory; see the "Message persistence" section**](https://python.langchain.com/docs/tutorials/chatbot/) (a minimal sketch follows below).
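A minimal message-persistence sketch following the linked LangChain chatbot tutorial; the `llm` chat-model wrapper and the example prompts are assumptions for illustration, not part of this model card:

```python
# A minimal message-persistence sketch, following the linked LangChain
# chatbot tutorial. `llm` (any LangChain chat model wrapping this model)
# and the thread id are assumed for illustration.
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

workflow = StateGraph(state_schema=MessagesState)

def call_model(state: MessagesState):
    # Invoke the chat model on the accumulated conversation history.
    return {"messages": llm.invoke(state["messages"])}

workflow.add_node("model", call_model)
workflow.add_edge(START, "model")

# MemorySaver checkpoints the conversation per thread_id, giving the bot memory.
app = workflow.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "demo-thread"}}
app.invoke({"messages": [("user", "Hi, I'm Bob.")]}, config)
result = app.invoke({"messages": [("user", "What's my name?")]}, config)
print(result["messages"][-1].content)
```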
|
|
|
It works well with LangChain or LlamaIndex.
|
|
|
It is best used for autochat (an auto-training AI chatbot); you can still use it with standard Transformers, see the **How to Use** section. Please add a request in the Community section for an auto-train chatbot Colab.

This model is updated often; please delete the previous model and load the latest version.
|
|
|
Context Window: 128K |
|
|
|
## Intended Use |
|
|
|
**Intended Use Cases:** Agent Llama 004 auto is intended for commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.
|
|
|
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and the Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
|
|
|
|
|
## How to Use |
|
|
|
|
|
### Installation |
|
```bash
!pip install --upgrade --no-cache-dir "git+https://github.com/huggingface/transformers.git"
!pip install --upgrade tokenizers

# For Unsloth (in a notebook cell):
%%capture
!pip install unsloth
# Also get the latest nightly Unsloth:
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
```
|
|
|
Developers can easily integrate EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto into their projects using popular libraries like Transformers and vLLM. The following sections illustrate usage with simple hands-on examples:
|
|
|
Optional: to use the built-in tools, add this to the system prompt: "Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n"
|
|
|
### Fine-tuned for Automatic Reasoning and Tool-use (ART)
|
[ART](https://arxiv.org/abs/2303.09014) |
|
|
|
|
|
### ToT - Tree of Thought |
|
- Use system prompt: |
|
```python
tot_system_prompt = """Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is..."""
```
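For example, a minimal sketch that passes this prompt as the system message; it assumes the `pipeline` object created in the **Use with transformers** section below, and the question is a placeholder:

```python
# A minimal ToT usage sketch. Assumes `pipeline` from the
# "Use with transformers" section below; the question is a placeholder.
messages = [
    {"role": "system", "content": tot_system_prompt},
    {"role": "user", "content": "Is 3307 a prime number?"},
]
outputs = pipeline(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```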
|
|
|
### ReAct (Preferred) |
|
Example from a LangChain agent: [LangChain ReAct agent](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/react/agent.py)
|
- Use system prompt: |
|
```python
react_template = """Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}"""
```
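A sketch of wiring this prompt into a LangChain ReAct agent; it assumes the `pipeline` object from the **Use with transformers** section and the `repl_tool` from the **LangChain Python REPL** section below:

```python
# A ReAct agent sketch using the template above. Assumes `pipeline`
# ("Use with transformers" section) and `repl_tool` ("LangChain Python
# REPL" section) are defined as shown elsewhere in this card.
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.prompts import PromptTemplate
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline(pipeline=pipeline)
agent = create_react_agent(llm, [repl_tool], PromptTemplate.from_template(react_template))
executor = AgentExecutor(agent=agent, tools=[repl_tool], handle_parsing_errors=True)
executor.invoke({"input": "What is 2 ** 16? Use python_repl to compute it."})
```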
|
|
|
### Use with [Transformers](https://github.com/huggingface/transformers)

#### Conversational use case

##### Using the `transformers.pipeline()` API; 4-bit quantization is recommended for fast responses.
|
```python
import torch
import transformers
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)

model_id = "EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"quantization_config": quantization_config},  # for fast responses; remove for full 16-bit inference
    device_map="auto",
)
messages = [
    {"role": "system", "content": """
Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
You are a coding assistant with expertise in everything.\n
Ensure any code you provide can be executed\n
with all required imports and variables defined. List the imports. Structure your answer with a description of the code solution.\n
Write only the code. Do not print anything else.\n
Debug the code if an error occurs.\n
### Question: {}\n
### Answer: {} \n
"""},
    {"role": "user", "content": "Train an AI model to predict the number of purchases made per customer in a given store."},
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```
|
|
|
# Example:

See the Colab notebook for sample code using LangChain: [Colab](https://colab.research.google.com/drive/129SEHVRxlr24r73yf34BKnIHOlD3as09?authuser=1)
|
|
|
# Unsloth Fast |
|
|
|
```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install unsloth
# Get the latest Unsloth
!pip install --upgrade --no-deps "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install langchain_experimental

from unsloth import FastLanguageModel
from transformers import TextStreamer
from google.colab import userdata

# 4bit pre-quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit",
]  # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto",
    max_seq_length = 128000,
    load_in_4bit = True,
    token = userdata.get('HF_TOKEN'),
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

def chatbot(query):
    messages = [
        {"role": "system", "content":
        """
        Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
        You are a coding assistant with expertise in everything.\n
        Ensure any code you provide can be executed\n
        with all required imports and variables defined. List the imports. Structure your answer with a description of the code solution.\n
        Write only the code. Do not print anything else.\n
        Use ipython for the search tool.\n
        Debug the code if an error occurs.\n
        Here is the user question:
        ### Question: {}\n
        ### Answer: {} \n
        """
        },
        {"role": "user", "content": query},
    ]
    inputs = tokenizer.apply_chat_template(messages, tokenize = True, add_generation_prompt = True, return_tensors = "pt").to("cuda")
    text_streamer = TextStreamer(tokenizer)
    _ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 2048, use_cache = True)

chatbot("Write an algorithm for predicting the stock market using an AI model.")
```
|
|
|
|
|
|
|
# Execute code (make sure to use a virtual environment)
|
```bash |
|
python3 -m venv env |
|
source env/bin/activate |
|
``` |
|
|
|
## Executing code responses from Llama

#### For local execution, use the Python function below. With LangChain, use PythonREPL() to execute code.

Function to execute code locally in Python:
|
```python
import io
import contextlib

def execute_Python_code(code):
    # A string stream to capture the outputs of exec
    output = io.StringIO()
    try:
        # Redirect stdout to the StringIO object
        with contextlib.redirect_stdout(output):
            # Allow imports
            exec(code, globals())
    except Exception as e:
        # If an error occurs, capture it as part of the output
        print(f"Error: {e}", file=output)
    return output.getvalue()
```
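For example, a sketch of running the model's generated code (assuming the `outputs` variable from the pipeline example above):

```python
# Sketch: run the code string the model returned. `outputs` is assumed
# to come from the transformers pipeline example above.
generated_code = outputs[0]["generated_text"][-1]["content"]
print(execute_Python_code(generated_code))
```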
|
|
|
LangChain Python REPL

- Install:
|
|
|
```bash |
|
!pip install langchain_experimental |
|
``` |
|
|
|
Code: |
|
```python
from langchain_core.tools import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()

# You can create the tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=python_repl.run,
)
# Run the code string from the last generated message
repl_tool.run(outputs[0]["generated_text"][-1]["content"])
```
|
|
|
# Safety input/output procedures

For all inputs, please use Llama Guard (meta-llama/Llama-Guard-3-8B) for safety classification.

Go to the model card: [Llama-Guard](https://huggingface.co/meta-llama/Llama-Guard-3-8B)
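A minimal sketch of classifying a user prompt with Llama Guard before passing it to this model, following the usage documented on the Llama Guard model card (access to the gated meta-llama repository is assumed):

```python
# A minimal sketch: classify a user prompt with Llama Guard before
# forwarding it to the agent model. Assumes access to the gated
# meta-llama/Llama-Guard-3-8B repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"
guard_tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard_model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    input_ids = guard_tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard_model.device)
    output = guard_model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    # Returns "safe", or "unsafe" plus the violated category, per the Llama Guard card.
    return guard_tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

print(moderate([{"role": "user", "content": "Train a model to predict customer purchases."}]))
```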
|
|
|
**Critical and Other Risks** |
|
|
|
We specifically focused our efforts on mitigating the following critical risk areas: |
|
|
|
**1. Data Privacy** |
|
|
|
To assess risks related to data privacy, we performed uplift testing designed to assess whether use of Llama 3.1 models could lead to unauthorized access, disclosure, or exfiltration of sensitive user data. |
|
|
|
**2. Inclusivity and Bias** |
|
|
|
Inclusivity and bias risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in discriminatory or biased outcomes and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. |
|
|
|
**3. Misinformation and Disinformation** |
|
|
|
Our misinformation and disinformation uplift study investigated whether LLMs can enhance human capabilities in spreading false information or propaganda. Our study of Llama-3.1-405B’s potential to amplify misinformation was conducted to assess the model's effectiveness in aiding malicious actors in spreading false narratives. |
|
|
|
**4. Intellectual Property Infringement** |
|
|
|
Our intellectual property infringement study evaluated the model's potential to infringe on copyrights, trademarks, or patents. This assessment was conducted to identify potential risks related to the use of Llama 3.1 models in generating or disseminating copyrighted materials without permission. |
|
|
|
**5. Emotional Manipulation** |
|
|
|
Our emotional manipulation uplift study investigated whether LLMs can enhance human capabilities in exploiting emotional vulnerabilities for malicious purposes. Our study of Llama-3.1-405B’s potential to manipulate users emotionally was conducted to assess the model's effectiveness in aiding malicious actors in exploiting emotional vulnerabilities. |
|
|
|
**6. Cyber Attack Enablement** |
|
|
|
Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. |
|
|
|
**7. Physical Harm** |
|
|
|
Our physical harm uplift study evaluated the model's potential to cause physical harm to individuals or communities. This assessment was conducted to identify potential risks related to the use of Llama 3.1 models in generating or disseminating content that could lead to physical harm. |
|
|
|
**8. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness** |
|
To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. |
|
**9. Child Safety** |
|
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. |
|
|
|
|
|
## Ethical Considerations and Limitations |
|
|
|
The core values of Agent Llama are openness, inclusivity, and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences, and perspectives. Agent Llama addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. |
|
|
|
It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. However, Agent Llama is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. |
|
|
|
For these reasons, as with all LLMs, Agent Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. |
|
|
|
Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development. |
|
|
|
# For commercial use

Please go to the Community tab and open a New Discussion to apply for commercial use.
|
|
|
# Changelog |
|
- 10/28: Updated Transformers and Unsloth inference with model EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto.

- 10/29: Auto-tuned with first-level protection against jailbreaking and prompt injection to protect the AI from malicious attacks. We provided some prompts for auto-training.

  - Tested with the Open LLM Leaderboard; results exceeded expectations, especially on IFEval and MMLU-Pro.

- 11/13: Fine-tuned on math and more accurate facts, including the latest news on advancements in science.
|
|
|
|
|
## Thanks to Ed for the dataset: [ed001/ds-coder-instruct-v2](https://huggingface.co/datasets/ed001/ds-coder-instruct-v2)
|
|
|
## Fine-tuning or distillation is allowed; please cite this page when you fine-tune
|
|
|
# Uploaded model |
|
|
|
- **Developed by:** EpistemeAI |
|
- **License:** apache-2.0 |
|
- **Fine-tuned from model:** EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto
|
|
|
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.004-128K-code-ds-auto) |
|
|
|
| Metric |Value| |
|
|-------------------|----:| |
|
|Avg. |23.76| |
|
|IFEval (0-Shot) |72.05| |
|
|BBH (3-Shot) |26.45| |
|
|MATH Lvl 5 (4-Shot)|13.67| |
|
|GPQA (0-shot) | 0.00| |
|
|MuSR (0-shot) | 2.08| |
|
|MMLU-PRO (5-shot) |28.31| |