Update README.md
README.md CHANGED
@@ -27,7 +27,8 @@ We also have two other MiniCheck model variants:
 ### Model Performance
 The performance of these models is evaluated on our newly collected benchmark (unseen by our models during training), [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact),
 drawn from 10 recent human-annotated datasets on fact-checking and grounding LLM generations. MiniCheck-RoBERTa-Large outperforms all
-existing specialized fact-checkers of a similar scale by a large margin but is 2% worse than our best model MiniCheck-Flan-T5-Large
+existing specialized fact-checkers of a similar scale by a large margin but is 2% worse than our best model, MiniCheck-Flan-T5-Large, which
+is on par with GPT-4 but 400x cheaper. See full results in our work.
 
 Note: We only evaluated the performance of our models on real claims -- without any human intervention in
 any format, such as injecting certain error types into model-generated claims. Those edited claims do not reflect