# SPIGA: Shape Preserving Facial Landmarks with Graph Attention Networks

[![Project Page](https://badgen.net/badge/color/Project%20Page/purple?icon=atom&label)](https://bmvc2022.mpi-inf.mpg.de/155/) [![arXiv](https://img.shields.io/badge/arXiv-2210.07233-b31b1b.svg)](https://arxiv.org/abs/2210.07233) [![PyPI version](https://badge.fury.io/py/spiga.svg)](https://badge.fury.io/py/spiga) [![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](LICENSE) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/andresprados/SPIGA/blob/main/colab_tutorials/video_demo.ipynb)

This repository contains the source code of **SPIGA, a face alignment and headpose estimator** that takes advantage of the complementary benefits of CNN and GNN architectures, producing plausible face shapes in the presence of strong appearance changes.
**It achieves top-performing results in:**

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/pose-estimation-on-300w-full)](https://paperswithcode.com/sota/pose-estimation-on-300w-full?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/head-pose-estimation-on-wflw)](https://paperswithcode.com/sota/head-pose-estimation-on-wflw?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/pose-estimation-on-merl-rav)](https://paperswithcode.com/sota/pose-estimation-on-merl-rav?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-merl-rav)](https://paperswithcode.com/sota/face-alignment-on-merl-rav?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-wflw)](https://paperswithcode.com/sota/face-alignment-on-wflw?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-300w-split-2)](https://paperswithcode.com/sota/face-alignment-on-300w-split-2?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-cofw-68)](https://paperswithcode.com/sota/face-alignment-on-cofw-68?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-300w)](https://paperswithcode.com/sota/face-alignment-on-300w?p=shape-preserving-facial-landmarks-with-graph)

## Setup

The repository has been tested on Ubuntu 20.04 with CUDA 11.4, the latest version of cuDNN, Python 3.8 and PyTorch 1.12.1. To run the video analyzer demo or evaluate the algorithm, install the repository from the source code:

```
# Best practices:
# 1. Create a virtual environment.
# 2. Install PyTorch according to your CUDA version.
# 3. Install SPIGA from the source code:
git clone https://github.com/andresprados/SPIGA.git
cd SPIGA
pip install -e .

# To run the video analyzer demo, install the extra requirements:
pip install -e .[demo]
```

**Models:** By default, model weights are automatically downloaded on demand and stored at ```./spiga/models/weights/```. You can also download them from [Google Drive](https://drive.google.com/drive/folders/1olrkoiDNK_NUCscaG9BbO3qsussbDi7I?usp=sharing).

***Note:*** All the callable files provide a detailed parser that describes the behaviour of the program and its inputs. Please check the available operational modes by using the ```--help``` flag.

## Inference and Demo

We provide an inference framework for SPIGA at ```./spiga/inference```. The models can be easily deployed in third-party projects by adding a few lines of code, as in the sketch below. Check out our inference and application tutorials for more information.
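The snippet below is a minimal sketch of that deployment path. The image path and bounding box are placeholders, and the dataset key and output dictionary fields are assumptions that should be verified against the code in ```./spiga/inference```:

```python
import cv2

from spiga.inference.config import ModelConfig
from spiga.inference.framework import SPIGAFramework

# Load an image and provide a face bounding box as (x, y, w, h).
image = cv2.imread('face.jpg')          # placeholder path
bbox = [100, 50, 200, 200]              # placeholder box

# Initialize the framework; 'wflw' is an assumed dataset key
# (see ./spiga/inference/config.py for the supported options).
processor = SPIGAFramework(ModelConfig('wflw'))

# Run inference: one landmark set and one headpose per bounding box.
features = processor.inference(image, [bbox])
landmarks = features['landmarks'][0]    # assumed output field
headpose = features['headpose'][0]      # assumed output field
```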
***Note:*** For more information, check the [Demo Readme](spiga/demo/readme.md) or call the app parser with ```--help```.

## Dataloaders and Benchmarks

This repository provides general-use tools for the tasks of face alignment and headpose estimation:

* **Dataloaders:** Training and inference dataloaders are available at ```./spiga/data```, including the data augmentation tools used to train SPIGA and a data visualizer for analyzing dataset images and features. For more information, check the [Data Readme](spiga/data/readme.md). A usage sketch is provided at the end of this section.
* **Benchmark:** A common benchmark framework for testing any face alignment and headpose estimation algorithm is available at ```./spiga/eval/benchmark```. For more information, check the following Evaluation section and the [Benchmark Readme](spiga/eval/benchmark/readme.md).

**Datasets:** To run the data visualizers or the evaluation benchmark, please download the dataset images from the official websites ([300W](https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/), [AFLW](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/), [WFLW](https://wywu.github.io/projects/LAB/WFLW.html), [COFW](http://www.vision.caltech.edu/xpburgos/ICCV13/)). By default, they should be saved following this folder structure:

```
./spiga/data/databases/  # Default path can be updated by modifying 'db_img_path' in ./spiga/data/loaders/dl_config.py
|
└───/300w
│   └─── /images
│        | /private
│        | /test
|        └ /train
|
└───/cofw
│   └─── /images
|
└───/aflw
│   └─── /data
|        └ /flickr
|
└───/wflw
    └─── /images
```

**Annotations:** For simplicity, we have stored the dataset annotations directly in ```./spiga/data/annotations```. We strongly recommend moving them out of the repository if you plan to use it as a git directory.

**Results:** As with the annotations, we have stored the SPIGA results directly in ```./spiga/eval/results/```.
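As a rough illustration of the dataloaders, the sketch below wraps a training split in a standard PyTorch-style loop. The entry points (```AlignConfig```, ```get_dataloader```) and the batch keys are assumptions for illustration, not a confirmed API; check ```./spiga/data/loaders/dl_config.py``` and the [Data Readme](spiga/data/readme.md) for the actual interfaces:

```python
# Hypothetical entry points: verify against ./spiga/data/loaders before use.
from spiga.data.loaders.dl_config import AlignConfig
from spiga.data.loaders.dataloader import get_dataloader

# Configure the WFLW training split (argument names are illustrative).
data_config = AlignConfig(database_name='wflw', mode='train')

# Build a dataloader that applies SPIGA's augmentation pipeline.
train_loader = get_dataloader(batch_size=32, data_config=data_config)

for batch in train_loader:
    images = batch['image']          # augmented face crops (assumed key)
    landmarks = batch['landmarks']   # ground-truth landmarks (assumed key)
    break                            # inspect a single batch
```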