---
title: DmxMetric
emoji: πŸŒ–
colorFrom: purple
colorTo: pink
sdk: gradio
sdk_version: 4.41.0
app_file: app.py
pinned: false
license: apache-2.0
tags:
  - evaluate
  - metric
description: >-
  Evaluation function using lm-eval with d-Matrix integration. This function
  allows for the evaluation of language models across various tasks, with the
  option to use d-Matrix compressed models. For more information, see
  https://github.com/EleutherAI/lm-evaluation-harness and
  https://github.com/d-matrix-ai/dmx-compressor
---

# Metric Card for dmxMetric

## How to Use

```python
>>> import evaluate
>>> metric = evaluate.load("d-matrix/dmxMetric", module_type="metric")
>>> results = metric._compute(model="d-matrix/gpt2", revision="distilgpt2", tasks="wikitext", dmx_config="BASIC")
>>> print(results)
```
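
Since `tasks` accepts either a single task name or a list of names (see Inputs below), several benchmarks can be scored in one call. A minimal sketch, assuming the same metric object as above; the second task name is only an illustration and must exist in your lm-eval installation:

```python
>>> # Hypothetical multi-task call; task names follow lm-evaluation-harness conventions.
>>> results = metric._compute(
...     model="d-matrix/gpt2",
...     revision="distilgpt2",
...     tasks=["wikitext", "lambada_openai"],
...     dmx_config="BASIC",
... )
```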

## Inputs

- `model` (str): The name or path of the model to evaluate.
- `tasks` (Union[str, List[str]]): The task or list of tasks to evaluate on.
- `dmx_config` (Optional[str]): Configuration string for d-Matrix transformations, defaults to None.
- `num_fewshot` (Optional[int]): Number of examples in few-shot context, defaults to None.
- `batch_size` (Optional[Union[int, str]]): Batch size for evaluation, defaults to None.
- `max_batch_size` (Optional[int]): Maximum batch size to try with automatic batch size detection, defaults to None.
- `limit` (Optional[Union[int, float]]): Limit the number of examples per task, defaults to None.
- `device` (Optional[str]): Device to run on. If None, defaults to 'cuda' if available, otherwise 'cpu'.
- `revision` (str): Model revision to use, defaults to 'main'.
- `trust_remote_code` (bool): Whether to trust remote code, defaults to False.
- `log_samples` (bool): If True, logs all model outputs and documents, defaults to True.
- `verbosity` (str): Logging verbosity level, defaults to 'INFO'.
- `kwargs`: Additional keyword arguments to pass to `lm_eval.evaluate`.
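
The optional arguments above can be combined to scale a run down for a quick check. A minimal sketch, assuming the same `metric` object as in the usage example; the argument values are illustrative, not recommendations:

```python
>>> # Illustrative values only; every keyword argument besides model and tasks may be omitted.
>>> results = metric._compute(
...     model="d-matrix/gpt2",
...     revision="distilgpt2",
...     tasks="wikitext",
...     dmx_config="BASIC",
...     num_fewshot=0,        # no few-shot examples in the prompt context
...     batch_size=8,         # fixed batch size; max_batch_size only matters with automatic detection
...     limit=100,            # evaluate at most 100 examples per task
...     device="cpu",         # force CPU; defaults to CUDA when available
...     verbosity="WARNING",  # reduce logging output
... )
```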

## Output Values

- `results` (dict): A dictionary containing the evaluation results for each task.

Output Example:

```python
{
    'wikitext': {
        'alias': 'wikitext',
        'word_perplexity,none': 56.66175009356436,
        'word_perplexity_stderr,none': 'N/A',
        'byte_perplexity,none': 2.127521665015424,
        'byte_perplexity_stderr,none': 'N/A',
        'bits_per_byte,none': 1.0891738232631387,
        'bits_per_byte_stderr,none': 'N/A'
    }
}
```

This metric outputs a dictionary containing the evaluation results for each task. In this example, the results are shown for the `wikitext` task. The output includes perplexity and bits-per-byte metrics, along with their standard errors where available. The specific metrics may include:

- `alias`: The name or alias of the task.
- `word_perplexity,none`: The perplexity calculated at the word level.
- `word_perplexity_stderr,none`: The standard error of the word perplexity (if available).
- `byte_perplexity,none`: The perplexity calculated at the byte level.
- `byte_perplexity_stderr,none`: The standard error of the byte perplexity (if available).
- `bits_per_byte,none`: The average number of bits required to encode each byte of the text.
- `bits_per_byte_stderr,none`: The standard error of the bits-per-byte metric (if available).

Note that 'N/A' values indicate that the standard error was not calculated or not available for that metric.
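
In the example output, `bits_per_byte,none` agrees with the base-2 logarithm of `byte_perplexity,none`, which is a quick way to sanity-check a run. A small sketch using the numbers from the example above (not part of the metric's API):

```python
import math

# Values copied from the example output above.
byte_perplexity = 2.127521665015424
bits_per_byte = 1.0891738232631387

# Bits per byte equals log2(byte perplexity), up to floating-point error.
assert math.isclose(math.log2(byte_perplexity), bits_per_byte, rel_tol=1e-6)

# Individual metrics are read from the results dictionary by task name and key,
# e.g. results["wikitext"]["word_perplexity,none"].
```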

## Citation(s)

https://github.com/EleutherAI/lm-evaluation-harness