---
license: apache-2.0
---

This is the LACIE fine-tuned version of Mistral-7B, finetuned according to our paper [LACIE: Listener-Aware Finetuning for Confidence Calibration in Large Language Models](https://arxiv.org/abs/2405.21028).

This model is a version of [Mistral-7B base](https://huggingface.co/mistralai/Mistral-7B-v0.1) finetuned on data from [TriviaQA](https://huggingface.co/datasets/mandarjoshi/trivia_qa).
LACIE is a pragmatic, preference-based finetuning method that optimizes models to be better calibrated with respect to both implicit and explicit statements of confidence.
The preferences in the dataset are based on answer correctness and on whether a listener accepted or rejected the answer.
For more details, please see our paper.

## Model Architecture
The architecture is the same as Mistral-7B; the weights in this repo are adapter weights for Mistral.
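
Below is a minimal sketch of how these adapter weights might be loaded on top of the base model using the Hugging Face `transformers` and `peft` libraries. The adapter id, dtype/device settings, and prompt format are illustrative assumptions, not the paper's prescribed usage.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "<this-repo-id>"  # placeholder: substitute this repository's Hugging Face id

# Load the frozen Mistral-7B base model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LACIE adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example: a TriviaQA-style question; the finetuned model should hedge or
# qualify its answer in proportion to how likely it is to be correct.
prompt = "Question: Who wrote the novel 'Dune'?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```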