A few weeks ago, we uploaded the MERIT Dataset to Hugging Face 🤗!
Now, we are excited to share the MERIT Dataset paper via arXiv!
The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts (arXiv:2409.00447)
The MERIT Dataset is a fully synthetic, labeled dataset created for training and benchmarking LLMs on Visually Rich Document Understanding tasks. It is also designed to help detect biases and improve interpretability in LLMs, areas we are actively working on.
MERIT contains synthetically rendered students' academic transcripts from different schools, in English and Spanish. We plan to expand the dataset to other contexts (synthetic medical/insurance documents, synthetic IDs, etc.). Want to collaborate? Do you have any feedback?
Resources:
- Dataset: de-Rodrigo/merit
- Code and generation pipeline: https://github.com/nachoDRT/MERIT-Dataset
PS: We are grateful to Hugging Face 🤗 for the fantastic tools and resources available on the platform and, more specifically, to @nielsr for sharing the fine-tuning/inference scripts we used in our benchmark.