
This model detects throw-capture moments in Tekken 8 gameplay.
It is based on the VGG16 architecture, with the top removed so the convolutional base serves as a feature extractor.
The model was trained with Keras on frames extracted from Tekken 8 fight compilations: 701,990 images in total at a resolution of 640x360, of which approximately 5,000 show throw captures. Training used augmentation such as slight color shifting and the addition of mild color or black-and-white noise to improve robustness.
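
The snippet below is a minimal sketch of that style of augmentation. The exact shift ranges, noise strength, and the assumption that pixels are scaled to [0, 1] are illustrative guesses, not the author's actual pipeline.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Augment one RGB frame given as a float32 array scaled to [0, 1] (assumed)."""
    # Slight color shift: add a small random per-channel offset.
    image = image + rng.uniform(-0.05, 0.05, size=(1, 1, 3)).astype(np.float32)

    # Mild noise: either per-channel color noise or black-and-white noise
    # shared across all three channels.
    if rng.random() < 0.5:
        noise = rng.normal(0.0, 0.02, size=image.shape)               # color noise
    else:
        noise = rng.normal(0.0, 0.02, size=image.shape[:2] + (1,))    # B/W noise
    image = image + noise.astype(np.float32)

    return np.clip(image, 0.0, 1.0)
```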

The model was trained for 65 cycles of 13 epochs each.
For every cycle, 250 images were randomly sampled from the dataset, at least 40 of which depict throw captures; training within a cycle used a batch size of 20.
The custom top added for this task consists of a Flatten layer, a Dense layer with 128 units and 'relu' activation, a Dropout layer with a rate of 0.4, and a final Dense layer with 1 unit and 'sigmoid' activation that predicts whether a frame shows a throw capture.
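
A minimal sketch of that architecture in Keras is shown below. Whether the VGG16 base used ImageNet weights, was frozen during training, and how inputs were normalized is not stated; those details are assumptions here.

```python
from tensorflow.keras import Model, Input
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense, Dropout

inputs = Input(shape=(360, 640, 3))                    # 640x360 RGB frames
base = VGG16(include_top=False, weights="imagenet")    # feature extractor (weights assumed)
x = base(inputs)
x = Flatten()(x)
x = Dense(128, activation="relu")(x)
x = Dropout(0.4)(x)
outputs = Dense(1, activation="sigmoid")(x)            # probability of a throw capture
model = Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```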

Processing a single 640x360 image takes around 14.5 ms on an RTX 3060 Ti while Tekken 8 is running.
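
For inference, a hedged sketch assuming the model built above and simple [0, 1] scaling (the actual preprocessing is not documented); the file name is purely illustrative:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")                             # hypothetical 640x360 screenshot
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)              # OpenCV loads BGR
batch = frame.astype(np.float32)[np.newaxis, ...] / 255.0   # scale to [0, 1] (assumed)

prob = float(model.predict(batch, verbose=0)[0, 0])         # sigmoid output in [0, 1]
print(f"Throw capture probability: {prob:.2f}")
```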
