Update README.md #4
by mboillet - opened

README.md CHANGED
@@ -4,23 +4,26 @@ license: mit
 tags:
 - Doc-UFCN
 - PyTorch
--
+- object-detection
+- dla
+- historical
+- handwritten
 metrics:
 - IoU
 - F1
 - AP@.5
 - AP@.75
 - AP@[.5,.95]
+pipeline_tag: image-segmentation
 ---

+# Doc-UFCN - NorHand v1 - Line detection

-
-
-The Hugin-Munin line detection model predicts text lines from Hugin-Munin document images. This model was developed during the [HUGIN-MUNIN project](https://hugin-munin-project.github.io/).
+The NorHand v1 line detection model predicts text lines from NorHand document images. This model was developed during the [HUGIN-MUNIN project](https://hugin-munin-project.github.io/).

 ## Model description

-The model has been trained using the Doc-UFCN library on
+The model has been trained using the Doc-UFCN library on NorHand document images.
 It has been trained on images with their largest dimension equal to 768 pixels, keeping the original aspect ratio.
 The model predicts two classes: vertical and horizontal text lines.

@@ -29,7 +32,7 @@ The model predicts two classes: vertical and horizontal text lines.
 The model achieves the following results:

 | set | class | IoU | F1 | AP@[.5] | AP@[.75] | AP@[.5,.95] |
-| ----- | ---------- |
+| ----- | ---------- | ----: | ----: | ------: | -------: | ----------: |
 | train | vertical | 88.29 | 89.67 | 71.37 | 33.26 | 36.32 |
 | | horizontal | 69.81 | 81.35 | 91.73 | 36.62 | 45.67 |
 | val | vertical | 73.01 | 75.13 | 46.02 | 4.99 | 15.58 |

@@ -54,4 +57,4 @@ Please refer to the Doc-UFCN library page (https://pypi.org/project/doc-ufcn/) t
 pages = {2134-2141},
 doi = {10.1109/ICPR48806.2021.9412447}
 }
-```
+```
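
As a usage note to go with the model description in this diff: the card defers to the Doc-UFCN library page (https://pypi.org/project/doc-ufcn/) for inference instructions. The sketch below follows the usage pattern documented there; it is an illustration only, and the image path, model path, normalization values and class count are placeholder assumptions rather than values shipped with this repository.

```python
# Inference sketch following the usage pattern documented on the
# doc-ufcn PyPI page. All paths, the normalization values and the
# class count are placeholders, not values from this repository.
import cv2
from doc_ufcn.main import DocUFCN

# Doc-UFCN expects an RGB image; OpenCV loads BGR by default.
image = cv2.cvtColor(cv2.imread("page.jpg"), cv2.COLOR_BGR2RGB)

nb_of_classes = 3   # assumption: background + vertical + horizontal text lines
input_size = 768    # largest input dimension used during training, per the card
mean = [0, 0, 0]    # placeholder: use the mean/std distributed with the model
std = [1, 1, 1]

model = DocUFCN(nb_of_classes, input_size, "cpu")  # pass a CUDA device string for GPU inference
model.load("model.pth", mean, std)

# The result holds the detected text-line polygons per class; see the
# doc-ufcn documentation for the exact return structure.
prediction = model.predict(image)
```

The 768-pixel resize (largest dimension, aspect ratio preserved) described in the model description is expected to be handled internally by the library from `input_size`, so the input image should not need to be resized beforehand.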