---
configs:
  - config_name: multi-turn_chat
    data_files:
      - split: test
        path: multi-turn_chat.parquet
  - config_name: code_completion
    data_files:
      - split: test
        path: code_completion.parquet
  - config_name: instruction_tuning
    data_files:
      - split: test
        path: instruction_tuning.parquet
  - config_name: code_fixing
    data_files:
      - split: test
        path: code_fixing.parquet
  - config_name: rag
    data_files:
      - split: test
        path: rag.parquet
  - config_name: large_summarization
    data_files:
      - split: test
        path: large_summarization.parquet
  - config_name: docstring
    data_files:
      - split: test
        path: docstring.parquet
---

This dataset contains inference performance benchmark results obtained with vLLM version 0.6.1.post2 across different use-case scenarios. The scenarios are defined as follows:

| Use case              | Prompt tokens | Generated tokens |
|-----------------------|---------------|------------------|
| Code Completion       | 256           | 1024             |
| Docstring Generation  | 768           | 128              |
| Code Fixing           | 1024          | 1024             |
| RAG                   | 1024          | 128              |
| Instruction Following | 256           | 128              |
| Multi-turn Chat       | 512           | 256              |
| Large Summarization   | 4096          | 512              |
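
Each scenario is exposed as a separate configuration with a single `test` split. A minimal loading sketch with the 🤗 `datasets` library, assuming `<repo_id>` is replaced with this dataset's path on the Hugging Face Hub:

```python
from datasets import load_dataset

# Replace "<repo_id>" with this dataset's path on the Hugging Face Hub.
# Configuration names match the scenarios above: "code_completion",
# "docstring", "code_fixing", "rag", "instruction_tuning",
# "multi-turn_chat", and "large_summarization".
ds = load_dataset("<repo_id>", "code_completion", split="test")
print(ds)
```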

Benchmarking was conducted with GuideLLM using the following syntax:

```bash
guidellm --model <model name> --data-type emulated --data "prompt_tokens=<prompt tokens>,generated_tokens=<generated tokens>"
```
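
For example, plugging the Code Completion scenario from the table above (256 prompt tokens, 1024 generated tokens) into that syntax gives the following invocation, with `<model name>` replaced by the model under test:

```bash
guidellm --model <model name> --data-type emulated --data "prompt_tokens=256,generated_tokens=1024"
```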