---
language:
- ru
- ar
- fr
- pt
- en
size_categories:
- 10K<n<100K
---

## About the dataset
The main purpose of this dataset is to train and evaluate a model that determines the orientation of a document and the number of text columns in it. As the model, we chose EfficientNet B0. We constructed this dataset to represent the variety of documents we usually deal with. It contains open data in the form of scientific papers, legal acts, reports, tables, etc. The languages represented in this dataset are: Russian, English, French, Spanish, Portuguese, Arabic, Armenian, Chinese, Georgian, Greek, Italian, Japanese, Korean, and Mongolian. More specifically, it contains 2426 one-column and 1695 multiple-column source documents. Each source file is then rotated at four possible angles (0, 90, 180, and 270 degrees) to cover all possible orientations.  
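The rotation step can be illustrated in plain Python by rotating a 2D pixel grid in 90-degree increments (a minimal sketch of the idea only; the actual generation scripts operate on image files, and `rotate90` below is a hypothetical helper, not part of the repository):

```python
def rotate90(grid):
    """Rotate a 2D grid (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def all_orientations(grid):
    """Return the grid rotated by each of the four covered angles."""
    out = {0: grid}
    for angle in (90, 180, 270):
        out[angle] = rotate90(out[angle - 90])
    return out

# A tiny 2x3 "image": 2 rows, 3 columns.
page = [[1, 2, 3],
        [4, 5, 6]]
views = all_orientations(page)
# Rotating by 90 or 270 degrees swaps height and width.
```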

Formally, document orientation is the angle by which a text document has been rotated relative to its upright position (the one in which a person can read it). We consider four possible orientations: 0 degrees (upright position), 90, 180, and 270 degrees.  

A document is considered a one-column document if most of its text is arranged in a single column. Similarly, a document is considered a multiple-column document if most of its text is divided into two columns.  

## Description 
The initial repository structure goes as follows:
```
└─orientation_columns_dataset
  ├─.gitattributes
  ├─README.md
  └─generate_dataset_orient_classifier.zip
```
The structure of the `generate_dataset_orient_classifier.zip` archive after unzipping goes as follows:
```
└─generate_dataset_orient_classifier
  ├─README.md
  ├─src
  │ ├─one_column
  │ └─multiple_column
  └─scripts
    ├─gen_dataset.py
    └─get_imgs_from_pdf.py
```
The `one_column` and `multiple_column` folders above contain the source pictures for the dataset: the `one_column` folder holds documents with a single column of text, and the `multiple_column` folder holds documents with two columns of text.
After running the generation scripts `gen_dataset.py` and `get_imgs_from_pdf.py`, you will get the dataset in its final form, ready for training and evaluating the model. The structure of the output dataset folder looks as follows:
```
└─columns_orientation_dataset
  ├─test
  └─train
```
Both the `train` and `test` folders above contain the rotated document pictures and a markup file named `labels.csv`. These markup tables have the columns `image_name`, `orientation`, and `columns`, which provide all the necessary information about the dataset documents. The markup files are generated automatically.
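As a minimal sketch of consuming the markup (assuming the three-column layout described above; the file names and label values below are made up for illustration), `labels.csv` can be read with the standard `csv` module:

```python
import csv
import io

# Hypothetical contents of a labels.csv file (illustrative values only).
sample = """image_name,orientation,columns
doc_0001.png,0,1
doc_0001_90.png,90,1
doc_0002.png,0,2
"""

# Each row maps a rotated image to its orientation angle and column count.
rows = list(csv.DictReader(io.StringIO(sample)))
orientations = [int(r["orientation"]) for r in rows]
```

In the real dataset, `io.StringIO(sample)` would be replaced by an open file handle on `train/labels.csv` or `test/labels.csv`.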

## About the generation scripts
* `scripts/gen_dataset.py` - generates the output dataset for model training and testing. It rotates the document images and creates a `labels.csv` markup file in each dataset folder.
  * `-i`, `--input_path_img`: absolute path to the source folder
  * `-o`, `--output_path_img`: absolute path to the output folder
  * `-l`, `--output_path_lbl`: absolute path for the label file; by default it is placed inside the output folders

* `scripts/get_imgs_from_pdf.py` - a helper script for extracting page images from PDF files and adding them to the `src` folder
  * `-i`, `--input_path_img`: absolute path to the source folder
  * `-o`, `--output_path_img`: absolute path to the output folder
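Based on the flags listed above, the scripts' command-line interface can be sketched with `argparse` (a hypothetical reconstruction for illustration; the actual scripts in the archive may differ):

```python
import argparse

def build_parser():
    """Hypothetical CLI mirroring the documented gen_dataset.py flags."""
    parser = argparse.ArgumentParser(
        description="Generate the orientation/columns dataset."
    )
    parser.add_argument("-i", "--input_path_img", required=True,
                        help="absolute path to the source folder")
    parser.add_argument("-o", "--output_path_img", required=True,
                        help="absolute path to the output folder")
    parser.add_argument("-l", "--output_path_lbl", default=None,
                        help="absolute path for the label file "
                             "(defaults to the output folders)")
    return parser

# Example invocation with made-up paths.
args = build_parser().parse_args(
    ["-i", "/data/src", "-o", "/data/columns_orientation_dataset"]
)
```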