LayoutLMv2

Multimodal (text + layout/format + image) pre-training for document AI

The documentation of this model in the Transformers library can be found at https://huggingface.co/docs/transformers/model_doc/layoutlmv2.

Microsoft Document AI | GitHub

Introduction

LayoutLMv2 is an improved version of LayoutLM with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. It outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672).

LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding. Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. ACL 2021.
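
Usage

As a minimal sketch, the checkpoint can be loaded through the Transformers library. This assumes `transformers`, `detectron2` (for the visual backbone), and `pytesseract` (for the processor's built-in OCR) are installed; the image path is a hypothetical placeholder.

```python
from transformers import LayoutLMv2Processor, LayoutLMv2Model
from PIL import Image

# The processor combines OCR (via pytesseract), tokenization,
# and image preparation for the visual backbone.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")

# "document.png" is a placeholder path to a document image.
image = Image.open("document.png").convert("RGB")

# The processor extracts words and bounding boxes from the image,
# then returns input_ids, bbox, image, and attention_mask tensors.
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)

# The last hidden states cover both text tokens and visual tokens.
print(outputs.last_hidden_state.shape)
```

For downstream tasks such as form understanding or document classification, the corresponding task-specific heads (e.g. `LayoutLMv2ForTokenClassification`, `LayoutLMv2ForSequenceClassification`) can be fine-tuned from this base checkpoint in the same way.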
