Korean TrOCR model
- A TrOCR model cannot OCR characters that are missing from its decoder's tokenizer. This model therefore uses a decoder whose tokenizer also covers bare Korean initial consonants (choseong), so choseong no longer decode to UNK (see the tokenizer check below).
- It was built using the know-how gained from the 2023 Kyowon Group AI OCR Challenge.
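A quick way to see what the choseong coverage means in practice is to tokenize a few strings containing bare initial consonants and check for the UNK id. This is a minimal sketch, not part of the original card; the exact jamo coverage depends on the released tokenizer.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ddobokki/ko-trocr")

# Strings with bare initial consonants (choseong) that a syllable-only
# vocabulary would typically map to [UNK].
for text in ["ㅋㅋ", "ㄱㄴㄷ", "안녕하세요"]:
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    tokens = tokenizer.convert_ids_to_tokens(ids)
    print(text, tokens, "contains UNK:", tokenizer.unk_token_id in ids)
```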
train datasets
- AI Hub
model structure
- encoder: trocr-base-stage1's encoder
- decoder: KR-BERT-char16424 (a wiring sketch follows below)
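The card does not include assembly code, but a pair like this is typically stitched together with Transformers' `VisionEncoderDecoderModel`: borrow the vision encoder from trocr-base-stage1 and attach KR-BERT as a causal decoder with cross-attention. The sketch below is an illustration under assumptions (including the `snunlp/KR-BERT-char16424` Hub id), not the author's training script; the released checkpoint is simply `ddobokki/ko-trocr`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, VisionEncoderDecoderModel

# Borrow the ViT encoder from the stage-1 TrOCR checkpoint.
trocr = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-stage1")

# Load KR-BERT as a decoder with cross-attention so it can attend to image features.
# "snunlp/KR-BERT-char16424" is an assumed Hub id for KR-BERT-char16424.
decoder = AutoModelForCausalLM.from_pretrained(
    "snunlp/KR-BERT-char16424", is_decoder=True, add_cross_attention=True
)
tokenizer = AutoTokenizer.from_pretrained("snunlp/KR-BERT-char16424")

model = VisionEncoderDecoderModel(encoder=trocr.encoder, decoder=decoder)

# Register the decoder's special tokens so generation starts and stops correctly.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.vocab_size = model.config.decoder.vocab_size
```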
how to use
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel, AutoTokenizer
import requests
import unicodedata
from io import BytesIO
from PIL import Image

# Load the processor (image preprocessing), the model, and the Korean tokenizer.
processor = TrOCRProcessor.from_pretrained("ddobokki/ko-trocr")
model = VisionEncoderDecoderModel.from_pretrained("ddobokki/ko-trocr")
tokenizer = AutoTokenizer.from_pretrained("ddobokki/ko-trocr")

# Fetch an example image.
url = "https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/g.jpg"
response = requests.get(url)
img = Image.open(BytesIO(response.content))

# Preprocess, generate token ids, and decode them to text.
pixel_values = processor(img, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=64)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Recombine any decomposed jamo into precomposed syllables.
generated_text = unicodedata.normalize("NFC", generated_text)
print(generated_text)
```
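For local files, the same pipeline works on a batch: the processor accepts a list of images and `batch_decode` returns one string per image. A small sketch continuing from the variables defined above; the file paths are placeholders.

```python
from pathlib import Path

from PIL import Image

# Hypothetical local image paths for illustration.
paths = [Path("sample1.png"), Path("sample2.png")]
images = [Image.open(p).convert("RGB") for p in paths]

pixel_values = processor(images, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=64)
texts = [
    unicodedata.normalize("NFC", t)
    for t in tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
]
for path, text in zip(paths, texts):
    print(path, "->", text)
```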