---
title: Expected Calibration Error (ECE)
emoji: 🧮
colorFrom: yellow
colorTo: blue
tags:
  - evaluate
  - metric
description: Expected Calibration Error (ECE)
sdk: gradio
sdk_version: 5.5.0
app_file: app.py
pinned: false
---

# Metric Card for the Expected Calibration Error (ECE)

## Metric Description

This metric computes the expected calibration error (ECE). ECE evaluates how well a model is calibrated, i.e. how closely its output probabilities match the actual ground-truth distribution. It measures the $L^p$ norm difference between a model's posterior probabilities and the true likelihood of being correct. This module directly calls the torchmetrics implementation, so all of its arguments can be passed through.
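For reference, with `norm="l1"` this reduces to the standard binned estimator (shown here as the usual formulation; the exact binning details follow torchmetrics):

$$
\mathrm{ECE} = \sum_{b=1}^{B} \frac{n_b}{N}\,\bigl|\operatorname{acc}(b) - \operatorname{conf}(b)\bigr|
$$

where $B$ is the number of bins (`n_bins`), $n_b$ is the number of samples whose top-class confidence falls into bin $b$, and $\operatorname{acc}(b)$ and $\operatorname{conf}(b)$ are the mean accuracy and mean confidence within that bin. Other norms replace the weighted $L^1$ sum accordingly.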

## How to Use

### Inputs


- `predictions` (float32): model predictions (probabilities after softmax), of shape `(N, C)` for multiclass or `(N, ...)` for binary;
- `references` (int64): ground-truth label for each prediction, of shape `(N, ...)`;
- `**kwargs`: keyword arguments passed through to the torchmetrics calibration error method (e.g. `num_classes`, `n_bins`, `norm`).

### Output Values

The ECE, returned as a single float.

### Examples

```python
import evaluate
import numpy as np

ece = evaluate.load("Natooz/ece")
results = ece.compute(
    # softmax probabilities, shape (N, C)
    predictions=np.array([[0.25, 0.20, 0.55],
                          [0.55, 0.05, 0.40],
                          [0.10, 0.30, 0.60],
                          [0.90, 0.05, 0.05]]),
    # ground-truth class indices, shape (N,) (illustrative labels)
    references=np.array([2, 0, 2, 0]),
    num_classes=3,
    n_bins=3,
    norm="l1",
)
print(results)
```
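To make the binned estimator above concrete, here is a minimal NumPy sketch of the `norm="l1"` case for the same inputs. It is illustrative only; torchmetrics may differ in binning and edge-case details.

```python
import numpy as np

def binned_ece(probs: np.ndarray, labels: np.ndarray, n_bins: int = 3) -> float:
    """Rough reference implementation of the binned L1 ECE (illustrative only)."""
    conf = probs.max(axis=1)       # top-class confidence per sample
    pred = probs.argmax(axis=1)    # predicted class per sample
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |bin accuracy - bin mean confidence|, weighted by bin population
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

probs = np.array([[0.25, 0.20, 0.55],
                  [0.55, 0.05, 0.40],
                  [0.10, 0.30, 0.60],
                  [0.90, 0.05, 0.05]])
labels = np.array([2, 0, 2, 0])
print(binned_ece(probs, labels, n_bins=3))
```

For these inputs all four predictions are correct, so the sketch prints 0.35: the gap between perfect accuracy and the comparatively low confidences in the two populated bins.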

## Citation

```bibtex
@InProceedings{pmlr-v70-guo17a,
  title     = {On Calibration of Modern Neural Networks},
  author    = {Chuan Guo and Geoff Pleiss and Yu Sun and Kilian Q. Weinberger},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1321--1330},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/guo17a/guo17a.pdf},
  url       = {https://proceedings.mlr.press/v70/guo17a.html},
}

@inproceedings{NEURIPS2019_f8c0c968,
  author    = {Kumar, Ananya and Liang, Percy S and Ma, Tengyu},
  booktitle = {Advances in Neural Information Processing Systems},
  editor    = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
  publisher = {Curran Associates, Inc.},
  title     = {Verified Uncertainty Calibration},
  url       = {https://papers.nips.cc/paper_files/paper/2019/hash/f8c0c968632845cd133308b1a494967f-Abstract.html},
  volume    = {32},
  year      = {2019},
}

@InProceedings{Nixon_2019_CVPR_Workshops,
  author    = {Nixon, Jeremy and Dusenberry, Michael W. and Zhang, Linchuan and Jerfel, Ghassen and Tran, Dustin},
  title     = {Measuring Calibration in Deep Learning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2019},
  url       = {https://openaccess.thecvf.com/content_CVPRW_2019/html/Uncertainty_and_Robustness_in_Deep_Visual_Learning/Nixon_Measuring_Calibration_in_Deep_Learning_CVPRW_2019_paper.html},
}
```