Text Detection

Overview

The text detection dataset directory is organized as follows.

├── ctw1500
│   ├── annotations
│   ├── imgs
│   ├── instances_test.json
│   └── instances_training.json
├── icdar2015
│   ├── imgs
│   ├── instances_test.json
│   └── instances_training.json
├── icdar2017
│   ├── imgs
│   ├── instances_training.json
│   └── instances_val.json
├── synthtext
│   ├── imgs
│   └── instances_training.lmdb
│       ├── data.mdb
│       └── lock.mdb
├── textocr
│   ├── train
│   ├── instances_training.json
│   └── instances_val.json
├── totaltext
│   ├── imgs
│   ├── instances_test.json
│   └── instances_training.json
├── CurvedSynText150k
│   ├── syntext_word_eng
│   ├── emcs_imgs
│   └── instances_training.json
├── funsd
│   ├── annotations
│   ├── imgs
│   ├── instances_test.json
│   └── instances_training.json
Dataset           | Images                 | Annotation Files (training)                  | Annotation Files (validation) | Annotation Files (testing)
CTW1500           | homepage               | -                                            | -                             | -
ICDAR2015         | homepage               | instances_training.json                      | -                             | instances_test.json
ICDAR2017         | homepage               | instances_training.json                      | instances_val.json            | -
Synthtext         | homepage               | instances_training.lmdb (data.mdb, lock.mdb) | -                             | -
TextOCR           | homepage               | -                                            | -                             | -
Totaltext         | homepage               | -                                            | -                             | -
CurvedSynText150k | homepage, Part1, Part2 | instances_training.json                      | -                             | -
FUNSD             | homepage               | -                                            | -                             | -

Important Note

:::{note} For users training models on the CTW1500, ICDAR 2015/2017, and Totaltext datasets: some images carry orientation info in their EXIF data. The default OpenCV backend used in MMCV reads this info and rotates the images accordingly, but the gold annotations are made on the raw pixels, and this inconsistency introduces false examples into the training set. Therefore, use dict(type='LoadImageFromFile', color_type='color_ignore_orientation') in the pipelines to change MMCV's default loading behaviour (see DBNet's pipeline config for an example). :::
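
For reference, here is a minimal sketch of how this loader entry could sit at the top of a training pipeline; only the first entry comes from the note above, everything after it is a placeholder that depends on the model config in use.

# Sketch of a detection training pipeline whose image loader ignores EXIF
# orientation; only the first entry is prescribed by the note above.
train_pipeline = [
    dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
    # ... annotation loading, augmentation, and formatting transforms
    #     as defined by the chosen model config (e.g. DBNet's pipeline) ...
]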

Preparation Steps

ICDAR 2015

  • Step0: Read Important Note
  • Step1: Download ch4_training_images.zip, ch4_test_images.zip, ch4_training_localization_transcription_gt.zip, Challenge4_Test_Task1_GT.zip from homepage
  • Step2:
mkdir icdar2015 && cd icdar2015
mkdir imgs && mkdir annotations
# For images,
mv ch4_training_images imgs/training
mv ch4_test_images imgs/test
# For annotations,
mv ch4_training_localization_transcription_gt annotations/training
mv Challenge4_Test_Task1_GT annotations/test
  • Step3: Generate instances_training.json and instances_test.json with the following command:
python tools/data/textdet/icdar_converter.py /path/to/icdar2015 -o /path/to/icdar2015 -d icdar2015 --split-list training test
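
Once converted, the annotation files can be referenced from a dataset config. The snippet below is only a sketch: IcdarDataset as the dataset type and data/icdar2015 as the data root are assumptions, and train_pipeline/test_pipeline are defined elsewhere in the config.

# Hypothetical dataset entries pointing at the converted ICDAR 2015 files.
data_root = 'data/icdar2015'  # assumed location; adjust to your layout

train = dict(
    type='IcdarDataset',  # assumed COCO-style dataset wrapper
    ann_file=f'{data_root}/instances_training.json',
    img_prefix=f'{data_root}/imgs',
    pipeline=train_pipeline)  # defined elsewhere in the config

test = dict(
    type='IcdarDataset',
    ann_file=f'{data_root}/instances_test.json',
    img_prefix=f'{data_root}/imgs',
    pipeline=test_pipeline)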

ICDAR 2017
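
The preparation presumably mirrors ICDAR 2015: download the training and validation images together with their ground truth from the homepage, arrange them under imgs/ and annotations/ by split, and run tools/data/textdet/icdar_converter.py with -d icdar2017 and the training/val split list, which should produce instances_training.json and instances_val.json.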

CTW1500

  • Step0: Read Important Note
  • Step1: Download train_images.zip, test_images.zip, train_labels.zip, test_labels.zip from github
mkdir ctw1500 && cd ctw1500
mkdir imgs && mkdir annotations

# For annotations
cd annotations
wget -O train_labels.zip https://universityofadelaide.box.com/shared/static/jikuazluzyj4lq6umzei7m2ppmt3afyw.zip
wget -O test_labels.zip https://cloudstor.aarnet.edu.au/plus/s/uoeFl0pCN9BOCN5/download
unzip train_labels.zip && mv ctw1500_train_labels training
unzip test_labels.zip -d test
cd ..
# For images
cd imgs
wget -O train_images.zip https://universityofadelaide.box.com/shared/static/py5uwlfyyytbb2pxzq9czvu6fuqbjdh8.zip
wget -O test_images.zip https://universityofadelaide.box.com/shared/static/t4w48ofnqkdw7jyc4t11nsukoeqk9c3d.zip
unzip train_images.zip && mv train_images training
unzip test_images.zip && mv test_images test
  • Step2: Generate instances_training.json and instances_test.json with the following command:
python tools/data/textdet/ctw1500_converter.py /path/to/ctw1500 -o /path/to/ctw1500 --split-list training test
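
A quick way to confirm the conversion succeeded is to load the generated file and print some counts. This assumes the converter writes COCO-style keys (images, annotations, categories); /path/to/ctw1500 is the same placeholder as above.

import json

# Load the generated annotation file and report basic statistics.
with open('/path/to/ctw1500/instances_training.json') as f:
    coco = json.load(f)

print(f"{len(coco['images'])} images, "
      f"{len(coco['annotations'])} annotations, "
      f"{len(coco['categories'])} categories")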

SynthText
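
Create a synthtext/ directory matching the layout shown in the overview: put the SynthText images under synthtext/imgs and place the pre-generated LMDB files data.mdb and lock.mdb under synthtext/instances_training.lmdb/.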

TextOCR

  • Step1: Download the TextOCR images and annotations, then extract the training/validation images:
mkdir textocr && cd textocr

# Download TextOCR dataset
wget https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip
wget https://dl.fbaipublicfiles.com/textvqa/data/textocr/TextOCR_0.1_train.json
wget https://dl.fbaipublicfiles.com/textvqa/data/textocr/TextOCR_0.1_val.json

# For images
unzip -q train_val_images.zip
mv train_images train
  • Step2: Generate instances_training.json and instances_val.json with the following command:
python tools/data/textdet/textocr_converter.py /path/to/textocr
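
To verify that the generated annotations line up with the extracted images, a check like the following can help; it assumes COCO-style image entries whose file_name paths are relative to the textocr/ root (e.g. train/...).

import json
import os

data_root = '/path/to/textocr'

# Check that every image referenced by the generated annotation file
# exists on disk.
with open(os.path.join(data_root, 'instances_training.json')) as f:
    coco = json.load(f)

missing = [img['file_name'] for img in coco['images']
           if not os.path.exists(os.path.join(data_root, img['file_name']))]
print(f"{len(coco['images'])} images referenced, {len(missing)} missing")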

Totaltext

  • Step0: Read Important Note
  • Step1: Download totaltext.zip and groundtruth_text.zip, then reorganize the images and annotations as follows:
mkdir totaltext && cd totaltext
mkdir imgs && mkdir annotations

# For images
# in ./totaltext
unzip totaltext.zip
mv Images/Train imgs/training
mv Images/Test imgs/test

# For annotations
unzip groundtruth_text.zip
cd Groundtruth
mv Polygon/Train ../annotations/training
mv Polygon/Test ../annotations/test
  • Step2: Generate instances_training.json and instances_test.json with the following command:
python tools/data/textdet/totaltext_converter.py /path/to/totaltext -o /path/to/totaltext --split-list training test
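
Before running the converter, it can be worth confirming that the reorganization above left images and annotations in place. A small sketch:

from pathlib import Path

root = Path('/path/to/totaltext')

# Count files per split to confirm the moves above worked.
for split in ('training', 'test'):
    n_imgs = sum(1 for _ in (root / 'imgs' / split).iterdir())
    n_anns = sum(1 for _ in (root / 'annotations' / split).iterdir())
    print(f'{split}: {n_imgs} images, {n_anns} annotation files')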

CurvedSynText150k

  • Step1: Download syntext1.zip and syntext2.zip to CurvedSynText150k/
  • Step2: Extract the archives and rename the annotation files:
unzip -q syntext1.zip
mv train.json train1.json
unzip images.zip
rm images.zip

unzip -q syntext2.zip
mv train.json train2.json
unzip images.zip
rm images.zip
  • Step3: Download instances_training.json to CurvedSynText150k/
  • Or, generate instances_training.json with the following command:
python tools/data/common/curvedsyntext_converter.py PATH/TO/CurvedSynText150k --nproc 4
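
Since the converter merges the two parts into a single annotation file, a quick consistency check is to compare image counts; this assumes train1.json, train2.json, and the output all use COCO-style 'images' lists.

import json

def n_images(path):
    """Return the number of COCO-style image records in an annotation file."""
    with open(path) as f:
        return len(json.load(f)['images'])

root = 'PATH/TO/CurvedSynText150k'
parts = n_images(f'{root}/train1.json') + n_images(f'{root}/train2.json')
merged = n_images(f'{root}/instances_training.json')
print(f'part1 + part2 = {parts}, merged = {merged}')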

FUNSD

  • Step1: Download and extract dataset.zip, then reorganize the images and annotations:
mkdir funsd && cd funsd

# Download FUNSD dataset
wget https://guillaumejaume.github.io/FUNSD/dataset.zip
unzip -q dataset.zip

# For images
mv dataset/training_data/images imgs && mv dataset/testing_data/images/* imgs/

# For annotations
mkdir annotations
mv dataset/training_data/annotations annotations/training && mv dataset/testing_data/annotations annotations/test

rm dataset.zip && rm -rf dataset
  • Step2: Generate instances_training.json and instances_test.json with the following command:
python tools/data/textdet/funsd_converter.py PATH/TO/funsd --nproc 4
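
To confirm the annotation files are intact after the reorganization, one option is to peek at a single training annotation; treating FUNSD annotations as JSON files with a 'form' key holding the labelled boxes is an assumption worth verifying against the download.

import json
from pathlib import Path

# Inspect one training annotation (hypothetical layout check).
ann_file = next(Path('PATH/TO/funsd/annotations/training').glob('*.json'))
with open(ann_file) as f:
    ann = json.load(f)
print(ann_file.name, '->', len(ann['form']), 'annotated regions')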