PlayerDirection SqueezeNet Model

Overview

This model is trained for ice hockey player orientation detection: it classifies cropped player images into one of eight orientations: Top, Top-Right, Right, Bottom-Right, Bottom, Bottom-Left, Left, and Top-Left. It is based on the SqueezeNet architecture and achieves an F1 score of approximately 75%.

Model Details

  • Architecture: SqueezeNet, modified for 8-class classification (a construction sketch follows this list).
  • Training Configuration:
    • Learning rate: 1e-4
    • Batch size: 24
    • Epochs: 300
    • Weight decay: 1e-4
    • Dropout: 0.3
    • Early stopping: patience = 50
    • Augmentations: Color jitter (no rotation)
  • Performance:
    • Accuracy: ~75%
    • F1 Score: ~75%
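
The 8-class head can be reproduced along these lines. This is a minimal sketch only: the squeezenet1_1 variant and the ImageNet-pretrained starting weights are assumptions, not details confirmed by this card.

    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 8  # Top, Top-Right, Right, Bottom-Right, Bottom, Bottom-Left, Left, Top-Left

    def build_player_direction_model(dropout: float = 0.3) -> nn.Module:
        # Assumed backbone: ImageNet-pretrained SqueezeNet 1.1 from torchvision.
        model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
        # SqueezeNet classifies with a 1x1 convolution rather than a fully connected
        # layer, so the head swap replaces that conv and the dropout probability.
        model.classifier = nn.Sequential(
            nn.Dropout(p=dropout),
            nn.Conv2d(512, NUM_CLASSES, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        model.num_classes = NUM_CLASSES
        return model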

Usage

  1. Extract frames from a video using OpenCV (see the first sketch after this list).
  2. Detect player bounding boxes with a YOLO model.
  3. Crop player images, resize them to 224x224, and preprocess them with the following PyTorch transformations (see the second sketch after this list):
    • Resize to (224, 224)
    • Normalize with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
  4. Classify the direction of each cropped player image using the SqueezeNet model:
    model.eval()  # switch to inference mode (disables dropout)
    with torch.no_grad():
        output = model(image_tensor)  # logits over the 8 orientation classes
        direction_class = torch.argmax(output, dim=1).item()
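
The following is a minimal sketch of steps 1 and 2. The ultralytics package, the yolo_players.pt weights file, and the frame_stride sampling rate are assumptions for illustration; the card does not specify which YOLO model or weights were used.

    import cv2
    from ultralytics import YOLO

    detector = YOLO("yolo_players.pt")  # hypothetical player-detection weights

    def iter_player_crops(video_path, frame_stride=5):
        """Yield cropped player images (BGR arrays) from a video."""
        cap = cv2.VideoCapture(video_path)
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % frame_stride == 0:
                # One detector pass per sampled frame; each box is one player.
                for result in detector(frame, verbose=False):
                    for x1, y1, x2, y2 in result.boxes.xyxy.cpu().numpy().astype(int):
                        yield frame[y1:y2, x1:x2]
            frame_idx += 1
        cap.release()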
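And a sketch of steps 3 and 4 applied to a single crop, using the resize and normalization values listed above. The index-to-label order in DIRECTIONS is an assumption taken from the order given in the Overview.

    import cv2
    import torch
    from PIL import Image
    from torchvision import transforms

    DIRECTIONS = ["Top", "Top-Right", "Right", "Bottom-Right",
                  "Bottom", "Bottom-Left", "Left", "Top-Left"]  # assumed index order

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def classify_direction(model, crop_bgr):
        """Return the predicted direction label for one cropped player image."""
        image = Image.fromarray(cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2RGB))
        image_tensor = preprocess(image).unsqueeze(0)  # add batch dimension
        model.eval()
        with torch.no_grad():
            output = model(image_tensor)
            direction_class = torch.argmax(output, dim=1).item()
        return DIRECTIONS[direction_class]

Chained together, iter_player_crops and classify_direction yield a direction label for every detected player in every sampled frame of a video.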
    