# llm-perf-leaderboard / hardware.yaml
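# Each entry below describes one benchmark machine. The field meanings are
# inferred from the entries in this file, not from separate documentation:
#   machine:           short machine identifier (e.g. 1xA10)
#   description:       human-readable label (GPU/CPU model, memory, power)
#   detail:            optional free-form note about the instance
#   hardware_provider: hardware vendor (nvidia, intel)
#   hardware_type:     device type the backend targets (cuda, cpu)
#   subsets:           quantization subsets benchmarked (unquantized, awq, bnb, gptq)
#   backends:          inference backends benchmarked on this machine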
- machine: 1xA10
  description: A10-24GB-150W 🖥️
  hardware_provider: nvidia
  hardware_type: cuda
  subsets:
    - unquantized
    - awq
    - bnb
    - gptq
  backends:
    - pytorch
- machine: 1xA100
  description: A100-80GB-275W 🖥️
  hardware_provider: nvidia
  hardware_type: cuda
  subsets:
    - unquantized
    - awq
    - bnb
    - gptq
  backends:
    - pytorch
- machine: 1xT4
  description: T4-16GB-70W 🖥️
  hardware_provider: nvidia
  hardware_type: cuda
  subsets:
    - unquantized
    - awq
    - bnb
    - gptq
  backends:
    - pytorch
- machine: 32vCPU-C7i
  description: Intel-Xeon-SPR-385W 🖥️
  detail: |
    We tested the [32vCPU AWS C7i](https://aws.amazon.com/ec2/instance-types/c7i/) instance for the benchmark.
  hardware_provider: intel
  hardware_type: cpu
  subsets:
    - unquantized
  backends:
    - pytorch
    - openvino
    - onnxruntime