---
library_name: Doc-UFCN
license: mit
tags:
  - Doc-UFCN
  - PyTorch
  - object-detection
  - dla
  - historical
  - handwritten
metrics:
  - IoU
  - F1
  - AP@.5
  - AP@.75
  - AP@[.5,.95]
pipeline_tag: image-segmentation
language:
  - 'no'
---

# Doc-UFCN - NorHand v1 - Line detection

The NorHand v1 line detection model predicts the following elements from NorHand document images:

- vertical text lines;
- horizontal text lines.

This model was developed during the HUGIN-MUNIN project.

## Model description

The model was trained with the Doc-UFCN library on the NorHand dataset. Training images were resized so that their largest dimension equals 768 pixels, keeping the original aspect ratio.
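The Doc-UFCN library applies this resizing itself at prediction time, but as a minimal sketch of the rule (not the library's own preprocessing code), the resize can be expressed as follows; the input file name is a placeholder:

```python
from PIL import Image


def resize_largest_dim(image: Image.Image, target: int = 768) -> Image.Image:
    """Resize so the largest dimension equals `target`, keeping the aspect ratio."""
    scale = target / max(image.size)
    new_size = (round(image.width * scale), round(image.height * scale))
    return image.resize(new_size, Image.BILINEAR)


page = Image.open("page.jpg")  # placeholder path
page_768 = resize_largest_dim(page)
```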

## Evaluation results

The model achieves the following results:

| set   | class      | IoU   | F1    | AP@[.5] | AP@[.75] | AP@[.5,.95] |
|:------|:-----------|:-----:|:-----:|:-------:|:--------:|:-----------:|
| train | vertical   | 88.29 | 89.67 | 71.37   | 33.26    | 36.32       |
| train | horizontal | 69.81 | 81.35 | 91.73   | 36.62    | 45.67       |
| val   | vertical   | 73.01 | 75.13 | 46.02   | 4.99     | 15.58       |
| val   | horizontal | 61.65 | 75.69 | 87.98   | 11.18    | 31.55       |
| test  | vertical   | 78.62 | 80.03 | 59.93   | 15.90    | 24.11       |
| test  | horizontal | 63.59 | 76.49 | 95.93   | 24.18    | 41.45       |

## How to use?

Please refer to the Doc-UFCN library page to use this model.
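As a non-authoritative sketch of the usual pattern documented for the Doc-UFCN library, loading and running a downloaded model might look like the code below. The model identifier string and the image path are placeholders: check the Doc-UFCN library page for the exact identifier of this NorHand v1 line model.

```python
import cv2

from doc_ufcn import models
from doc_ufcn.main import DocUFCN

# Download the trained model and its parameters (classes, input size,
# normalization statistics). The name below is a placeholder; see the
# Doc-UFCN library page for the identifier of the NorHand v1 line model.
model_path, parameters = models.download_model("doc-ufcn-norhand-v1-line")

# Instantiate the model on CPU and load the downloaded weights.
model = DocUFCN(
    len(parameters["classes"]),
    parameters["input_size"],
    "cpu",
)
model.load(model_path, parameters["mean"], parameters["std"])

# Read an image (placeholder path) and predict the text line polygons.
image = cv2.cvtColor(cv2.imread("page.jpg"), cv2.COLOR_BGR2RGB)
detected_polygons = model.predict(image)
```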

## Cite us!

```bibtex
@inproceedings{doc_ufcn2021,
    author = {Boillet, Mélodie and Kermorvant, Christopher and Paquet, Thierry},
    title = {{Multiple Document Datasets Pre-training Improves Text Line Detection With
              Deep Neural Networks}},
    booktitle = {2020 25th International Conference on Pattern Recognition (ICPR)},
    year = {2021},
    month = Jan,
    pages = {2134-2141},
    doi = {10.1109/ICPR48806.2021.9412447}
}
```