Alex Strick van Linschoten

I've been working through the first two lessons of the fastai course. For lesson one I trained a model to recognise my cat, Mr Blupus. Lesson two's emphasis is on getting those models out into the world as some kind of demo or application, and Gradio together with Hugging Face Spaces makes it super easy to get a prototype of your model on the internet.

This model classifies whether a document page contains redactions, and it reaches an accuracy of ~96% on the validation set.

The Dataset

I downloaded a few thousand publicly available FOIA documents from a government website. I split the PDFs up into individual .jpg files and then used Prodigy to annotate the data. (This process was described in a blog post written last year.)
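The PDF-to-image step can be sketched roughly as follows. This assumes the pdf2image library (a Python wrapper around poppler); the original post doesn't name the tool actually used, and the paths and filename pattern are illustrative.

```python
from pathlib import Path

from pdf2image import convert_from_path  # requires poppler installed

def split_pdf_to_jpgs(pdf_path: Path, out_dir: Path) -> list[Path]:
    """Render each page of a PDF as a separate .jpg file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for i, page in enumerate(convert_from_path(pdf_path)):
        out_path = out_dir / f"{pdf_path.stem}_page{i:03d}.jpg"
        page.save(out_path, "JPEG")
        written.append(out_path)
    return written
```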

Training the Model

I trained the model with fastai's flexible vision_learner, fine-tuning a resnet18, which was both smaller than resnet34 (no surprises there) and less prone to early overfitting. I trained the model for 10 epochs.
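The training setup described above can be sketched with fastai's high-level API. This assumes the annotated page images are arranged in class-labelled folders; the data path, folder layout, validation split, and image size are assumptions, not details from the post.

```python
from fastai.vision.all import (
    ImageDataLoaders, Resize, accuracy, resnet18, vision_learner,
)

# Hypothetical layout: data/pages/<class_name>/*.jpg
dls = ImageDataLoaders.from_folder(
    "data/pages",
    valid_pct=0.2,        # hold out 20% of images for validation
    item_tfms=Resize(224),
)

# Fine-tune a pretrained resnet18, tracking validation accuracy
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(10)       # 10 epochs, as described above
```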

Further Reading

This initial dataset spurred an ongoing interest in the domain and I've since been working on the problem of object detection, i.e. identifying exactly which parts of the image contain redactions.

Some of the key blog posts I've written about this project:

  • How to annotate data for an object detection problem with Prodigy (link)
  • How to create synthetic images to supplement a small dataset (link)
  • How to use error analysis and visual tools like FiftyOne to improve model performance (link)
  • Creating more synthetic data focused on the tasks my model finds hard (link)
  • Data validation for object detection / computer vision (a three-part series — part 1, part 2, part 3)