---
language:
- zgh
- ber
tags:
- OCR
pipeline_tag: image-to-text
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
...                           reco_arch=model,
...                           pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
...                           reco_arch='crnn_mobilenet_v3_small',
...                           pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
  "arch": "crnn_mobilenet_v3_large",
  "train_path": "train",
  "val_path": "val",
  "train_samples": 1000,
  "val_samples": 20,
  "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
  "min_chars": 1,
  "max_chars": 12,
  "name": "crnn_mobilenet_v3_large_gen_hw",
  "epochs": 3,
  "batch_size": 64,
  "device": null,
  "input_size": 32,
  "lr": 0.001,
  "weight_decay": 0,
  "workers": 2,
  "resume": "crnn_mobilenet_v3_large_printed.pt",
  "vocab": "tamazight",
  "test_only": false,
  "show_samples": false,
  "wb": true,
  "push_to_hub": true,
  "pretrained": false,
  "sched": "cosine",
  "amp": false,
  "find_lr": false
}
```
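As a quick sanity check on the schedule implied by these numbers, the configuration can be parsed and a rough step count derived. This is a hedged sketch: it assumes `train_samples` is the raw number of synthetic images per epoch, whereas doctr's synthetic character generator may scale it (e.g. by the vocab length), so the derived figures are illustrative only.

```python
import json

# Subset of the run configuration above.
config = json.loads("""
{
  "train_samples": 1000,
  "epochs": 3,
  "batch_size": 64
}
""")

# Hypothetical derived quantities, assuming train_samples is the raw
# per-epoch sample count (doctr may multiply it by the vocab size):
steps_per_epoch = -(-config["train_samples"] // config["batch_size"])  # ceil division
total_steps = steps_per_epoch * config["epochs"]
print(steps_per_epoch, total_steps)  # 16 48
```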