
Model Card for ChartGPT-Llama3

Model Details

Model Description

This model generates charts from natural language descriptions. For more information, please refer to the paper ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language.

Model Input Format


Model input at step x:

Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Your response should follow the following format:
{Step 1 prompt}

{Step x-1 prompt}
{Step x prompt}

### Instruction:
{instruction}

### Input:
Table Name: {table name}
Table Header: {column names}
Table Header Type: {column types}
Table Data Example:
{data row 1}
{data row 2}
Previous Answer:
{previous answer}

### Response:

The model should then output the answer corresponding to step x.

The prompts for steps 1-6 are as follows:

Step 1. Select the columns:
Step 2. Filter the data:
Step 3. Add aggregate functions:
Step 4. Choose chart type:
Step 5. Select encodings:
Step 6. Sort the data:
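
For programmatic use, the template above can be assembled per step. The following Python sketch is illustrative rather than part of the released code: build_prompt and STEP_PROMPTS are hypothetical names, and it assumes the step-by-step usage described above, where the prompt for step x lists the prompts for steps 1 through x and the answers to earlier steps fill the Previous Answer field.

STEP_PROMPTS = [
    "Step 1. Select the columns:",
    "Step 2. Filter the data:",
    "Step 3. Add aggregate functions:",
    "Step 4. Choose chart type:",
    "Step 5. Select encodings:",
    "Step 6. Sort the data:",
]

HEADER = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\nYour response should follow the following format:\n"
)

def build_prompt(step, instruction, table_name, columns, column_types,
                 example_rows, previous_answer=""):
    """Assemble the model input for step `step` (1-indexed), following the
    template documented above."""
    return (
        HEADER
        + "\n".join(STEP_PROMPTS[:step])
        + "\n\n### Instruction:\n" + instruction
        + "\n\n### Input:\n"
        + "Table Name: " + table_name + "\n"
        + "Table Header: " + ",".join(columns) + "\n"
        + "Table Header Type: " + ",".join(column_types) + "\n"
        + "Table Data Example:\n" + "\n".join(example_rows) + "\n"
        + "Previous Answer:\n" + previous_answer
        + "\n\n### Response:"
    )

# Example: the step 1 prompt for the Faculty table used in the example below.
prompt = build_prompt(
    step=1,
    instruction="Give me a visual representation of the faculty members "
                "by their professional status.",
    table_name="Faculty",
    columns=["FacID", "Lname", "Fname", "Rank", "Sex", "Phone", "Room", "Building"],
    column_types=["quantitative", "nominal", "nominal", "nominal",
                  "nominal", "quantitative", "nominal", "nominal"],
    example_rows=[
        "1082,Giuliano,Mark,Instructor,M,2424,224,NEB",
        "1121,Goodrich,Michael,Professor,M,3593,219,NEB",
    ],
)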

How to Get Started with the Model

Running the Model on a GPU

Below is an example using a faculty dataset with the instruction "Give me a visual representation of the faculty members by their professional status." Because the prompt lists all six step prompts, the model should output the answers to all steps at once. You can use the code below to test whether you can run the model successfully.

from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
)

tokenizer = AutoTokenizer.from_pretrained("yuan-tian/chartgpt-llama3")
# Llama 3 tokenizers typically define no padding token; fall back to EOS so
# that padding=True below does not raise an error.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("yuan-tian/chartgpt-llama3", device_map="auto")
input_text = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Your response should follow the following format:
Step 1. Select the columns:
Step 2. Filter the data:
Step 3. Add aggregate functions:
Step 4. Choose chart type:
Step 5. Select encodings:
Step 6. Sort the data:

### Instruction:
Give me a visual representation of the faculty members by their professional status.

### Input:
Table Name: Faculty
Table Header: FacID,Lname,Fname,Rank,Sex,Phone,Room,Building
Table Header Type: quantitative,nominal,nominal,nominal,nominal,quantitative,nominal,nominal
Table Data Example:
1082,Giuliano,Mark,Instructor,M,2424,224,NEB
1121,Goodrich,Michael,Professor,M,3593,219,NEB
Previous Answer:


### Response:"""
# Tokenize the prompt, move it to the GPU, and generate the step answers.
inputs = tokenizer(input_text, return_tensors="pt", padding=True).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
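
Note that decoding outputs[0] prints the prompt followed by the generated answer. To print only the newly generated tokens, a standard transformers pattern (not specific to this model) is to slice the output at the prompt length:

# Print only the generated continuation, dropping the echoed prompt tokens.
prompt_len = inputs["input_ids"].shape[1]
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))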

Training Details

Training Data

This model is fine-tuned from Meta-Llama-3-8B-Instruct on the chartgpt-dataset-llama3 dataset.

Training Procedure

Details of the data preprocessing and training procedure will be added in a future update.

Citation

BibTeX:

@article{tian2024chartgpt,
  title={ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language},
  author={Tian, Yuan and Cui, Weiwei and Deng, Dazhen and Yi, Xinjing and Yang, Yurun and Zhang, Haidong and Wu, Yingcai},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2024},
  pages={1--15},
  doi={10.1109/TVCG.2024.3368621}
}