Mediapipe 68-Point Eyes-Closed and Mouth-Opened

Mediapipe Face Detection

This Space uses the Apache 2.0 licensed Mediapipe FaceLandmarker.
One of the JSON output formats follows the MIT-licensed face_recognition library.
I should clarify, because this is often confusing: I'm not using dlib's non-MIT-licensed 68-point model at all.
The 68-point landmark scheme is decade-old technology. Even so, many impressive talking-head models,
while often releasing their core code under MIT or Apache licenses, rely on datasets or NVIDIA libraries with more restrictive licenses.
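As a rough sketch of how the pieces fit together, the snippet below runs the Mediapipe FaceLandmarker (Tasks API) and repackages some of its 478 landmarks into the dict-of-feature-lists shape that face_recognition's `face_landmarks()` returns. The index map is a hypothetical placeholder (only the eye rings are filled in, using commonly cited FaceMesh indices); the actual 478-to-68 mapping this Space ships is not shown here.

```python
# A minimal sketch, assuming the MediaPipe Tasks Python API and a locally
# downloaded face_landmarker.task model file.
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

# Hypothetical partial map from dlib/face_recognition feature names to
# MediaPipe landmark indices; only the eyes are filled in for illustration.
MP_TO_68 = {
    "left_eye":  [33, 160, 158, 133, 153, 144],   # commonly cited eye-ring indices
    "right_eye": [362, 385, 387, 263, 373, 380],
}

def face_recognition_style_landmarks(image_path: str) -> list[dict]:
    """Return landmarks shaped like face_recognition.face_landmarks():
    one dict per detected face, feature name -> list of (x, y) pixel tuples."""
    base = mp_python.BaseOptions(model_asset_path="face_landmarker.task")
    options = vision.FaceLandmarkerOptions(base_options=base, num_faces=4)
    detector = vision.FaceLandmarker.create_from_options(options)

    image = mp.Image.create_from_file(image_path)
    result = detector.detect(image)

    w, h = image.width, image.height
    faces = []
    for lm in result.face_landmarks:  # one list of normalized points per face
        faces.append({
            name: [(int(lm[i].x * w), int(lm[i].y * h)) for i in idxs]
            for name, idxs in MP_TO_68.items()
        })
    return faces
```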
[Article] Results: Converted guide images (eyes-closed and mouth-opened) with Flux.1 schnell img2img/inpaint
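The eyes-closed and mouth-opened labels themselves can be derived from such a landmark dict with simple geometric ratios. The sketch below applies the eye aspect ratio (EAR) heuristic of Soukupová and Čech plus a lip bounding-box ratio for the mouth; the helper names and both thresholds are my own illustrative assumptions, not values taken from this Space.

```python
# A minimal sketch of landmark-based state checks on a face_recognition-style
# dict. An EAR below roughly 0.2 is a commonly quoted "eye closed" cutoff;
# the 0.5 mouth ratio is an illustrative guess, not a value from this Space.
import math

def _dist(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """EAR over 6 points ordered p1..p6 around the eye:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    return (_dist(eye[1], eye[5]) + _dist(eye[2], eye[4])) / (2.0 * _dist(eye[0], eye[3]))

def eyes_closed(landmarks: dict, threshold: float = 0.2) -> bool:
    """Both eyes must fall below the EAR threshold to count as closed."""
    ears = [eye_aspect_ratio(landmarks[k]) for k in ("left_eye", "right_eye")]
    return max(ears) < threshold

def mouth_opened(landmarks: dict, threshold: float = 0.5) -> bool:
    """Treat the mouth as open when the lip bounding box is tall relative
    to its width; scale-free, so it works at any image resolution."""
    lips = landmarks["top_lip"] + landmarks["bottom_lip"]
    xs = [p[0] for p in lips]
    ys = [p[1] for p in lips]
    return (max(ys) - min(ys)) / max(max(xs) - min(xs), 1) > threshold
```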