merve (HF staff) committed — commit f88c6d8, parent 3976308

Update README.md
Files changed (1): README.md (+16, −2)
README.md CHANGED
@@ -11,6 +11,20 @@ widget:
     example_title: Cat in a Crate
   - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-3.jpg
     example_title: Two Cats Chilling
- license: apache-2.0
+ license: cc0-1.0
  ---
- Keras image captioning model with encoder-decoder network. 🌃🌅🎑
+ ## TensorFlow Keras Implementation of an Image Captioning Model with an Encoder-Decoder Network 🌃🌅🎑
+
+ This repo contains the models and the notebook [on image captioning with visual attention](https://www.tensorflow.org/tutorials/text/image_captioning?hl=en).
+
+ Full credits go to the TensorFlow Team.
+
+ ## Background Information
+ This notebook provides a TensorFlow Keras implementation of image captioning with visual attention.
+ Given an image like the example below, the goal is to generate a caption such as "a surfer riding on a wave".
+ ![image](https://www.tensorflow.org/images/surf.jpg)
+ To accomplish this, you'll use an attention-based model, which lets us see which parts of the image the model focuses on as it generates a caption.
+ ![attention](https://www.tensorflow.org/images/imcap_prediction.png)
+ The model architecture is similar to [Show, Attend and Tell: Neural Image Caption Generation with Visual Attention](https://arxiv.org/abs/1502.03044).
+
+ This notebook is an end-to-end example. When you run it, it downloads the [MS-COCO](https://cocodataset.org/#home) dataset, preprocesses and caches a subset of the images using Inception V3, trains an encoder-decoder model, and generates captions for new images using the trained model.
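The attention step described above can be sketched in a few lines. This is a minimal NumPy illustration of the additive (Bahdanau-style) attention the tutorial's decoder uses, not the tutorial's actual `tf.keras` code; the function name, array shapes, and random weights here are illustrative assumptions.

```python
import numpy as np

def bahdanau_attention(features, hidden, W1, W2, v):
    """Additive (Bahdanau-style) attention over image features.

    features: (num_locations, feat_dim) -- encoder outputs for one image
    hidden:   (hidden_dim,)             -- current decoder hidden state
    W1, W2, v: learned projections (here: plain random arrays)
    """
    # Project image features and the hidden state into a shared space,
    # then score each spatial location with the vector v
    score = np.tanh(features @ W1 + hidden @ W2) @ v   # (num_locations,)
    # Softmax over spatial locations (numerically stabilized)
    weights = np.exp(score - score.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of the image features
    context = weights @ features                        # (feat_dim,)
    return context, weights

# Toy shapes: 64 spatial locations with 256-d features, 512-d decoder state
rng = np.random.default_rng(0)
features = rng.normal(size=(64, 256))
hidden = rng.normal(size=(512,))
W1 = rng.normal(size=(256, 128)) * 0.05
W2 = rng.normal(size=(512, 128)) * 0.05
v = rng.normal(size=(128,))

context, weights = bahdanau_attention(features, hidden, W1, W2, v)
print(context.shape, weights.shape)  # weights sum to 1 by construction
```

At each decoding step the decoder feeds its hidden state back in, so the attention weights (and the "focus" visualized in the image above) change from word to word.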