
🔞 WARNING: SENSITIVE CONTENT 🔞

THIS MEDIA CONTAINS SENSITIVE CONTENT (E.G. NUDITY, VIOLENCE, PROFANITY, PORNOGRAPHY) THAT SOME PEOPLE MAY FIND OFFENSIVE. YOU MUST BE 18 OR OLDER TO VIEW THIS CONTENT.


EraX-Anti-NSFW-V1.1

A Highly Efficient Model for NSFW Detection. It is very effective for pre-publication image and video moderation, and for limiting children's access to harmful content. You can simply predict the classes and their bounding boxes, mask the predicted harmful object(s), or mask the entire image. Please see the deployment code below.

Model Details / Overview

  • Model Architecture: YOLO11 (Nano, Small, Medium)
  • Task: Object Detection (NSFW Detection)
  • Dataset: Private datasets (collected from the Internet).
  • Training set: 40,192 images.
  • Validation set: 3,495 images.
  • Classes: anus, make_love, nipple, penis, vagina.
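
As a quick sanity check, the id-to-name mapping for these classes can be read directly off a downloaded checkpoint. A minimal sketch; the numeric order shown in the comment is an assumption, the printed output is authoritative:

from ultralytics import YOLO

# Load a checkpoint and inspect its class-id -> class-name mapping
model = YOLO("erax-anti-nsfw-yolo11n-v1.1.pt")
print(model.names)
# Expected names: anus, make_love, nipple, penis, vagina
# (the index order here is an assumption; trust the printed dict)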

Labels

(label distribution figure)

Training Configuration

Evaluation Metrics

Below are the key metrics from the model evaluation on the validation set: coming soon.

Benchmark

  • CPU: 11th Gen Intel Core(TM) i7-11800H @ 2.30 GHz
  • GPU: NVIDIA GeForce RTX 3050 Ti (3902 MiB)
| Format | Model | mAP50-95(B) | GPU inference (ms/im) | GPU FPS | CPU inference (ms/im) | CPU FPS |
|---|---|---|---|---|---|---|
| PyTorch | erax-anti-nsfw-yolo11n-v1.1.pt | 0.438 | 3.500 | 286 | 27.900 | 36 |
| PyTorch | erax-anti-nsfw-yolo11s-v1.1.pt | 0.453 | 7.000 | 143 | 71.000 | 14 |
| PyTorch | erax-anti-nsfw-yolo11m-v1.1.pt | 0.467 | 16.500 | 61 | 206.600 | 5 |
| TorchScript | erax-anti-nsfw-yolo11n-v1.1.torchscript | 0.435 | 3.700 | 270 | 38.500 | 26 |
| TorchScript | erax-anti-nsfw-yolo11s-v1.1.torchscript | 0.449 | 8.100 | 123 | 108.500 | 9 |
| TorchScript | erax-anti-nsfw-yolo11m-v1.1.torchscript | 0.463 | 20.300 | 49 | 394.900 | 3 |
| ONNX | erax-anti-nsfw-yolo11n-v1.1.onnx | 0.435 | - | - | 28.300 | 35 |
| ONNX | erax-anti-nsfw-yolo11s-v1.1.onnx | 0.449 | - | - | 59.800 | 17 |
| ONNX | erax-anti-nsfw-yolo11m-v1.1.onnx | 0.463 | - | - | 157.800 | 6 |
| OpenVINO | erax-anti-nsfw-yolo11n-v1.1_openvino_model | 0.435 | 13.900 | 72 | 15.900 | 63 |
| OpenVINO | erax-anti-nsfw-yolo11s-v1.1_openvino_model | 0.449 | 72.300 | 14 | 40.800 | 25 |
| OpenVINO | erax-anti-nsfw-yolo11m-v1.1_openvino_model | 0.463 | 245.900 | 4 | 121.700 | 8 |
| TensorRT | erax-anti-nsfw-yolo11n-v1.1.engine | 0.435 | 3.500 | 286 | - | - |
| TensorRT | erax-anti-nsfw-yolo11s-v1.1.engine | 0.449 | 6.800 | 147 | - | - |
| TensorRT | erax-anti-nsfw-yolo11m-v1.1.engine | 0.463 | 15.700 | 64 | - | - |
| PaddlePaddle | erax-anti-nsfw-yolo11n-v1.1_paddle_model | 0.435 | 214.700 | 5 | 136.200 | 7 |
| PaddlePaddle | erax-anti-nsfw-yolo11s-v1.1_paddle_model | 0.449 | 517.700 | 2 | 234.600 | 4 |
| PaddlePaddle | erax-anti-nsfw-yolo11m-v1.1_paddle_model | 0.463 | 887.000 | 1 | 506.300 | 2 |
| MNN | erax-anti-nsfw-yolo11n-v1.1.mnn | 0.435 | 55.800 | 18 | 59.300 | 17 |
| MNN | erax-anti-nsfw-yolo11s-v1.1.mnn | 0.449 | 147.600 | 7 | 146.300 | 7 |
| MNN | erax-anti-nsfw-yolo11m-v1.1.mnn | 0.463 | 378.500 | 3 | 380.700 | 3 |
| NCNN | erax-anti-nsfw-yolo11n-v1.1_ncnn_model | 0.435 | 57.100 | 18 | 61.100 | 16 |
| NCNN | erax-anti-nsfw-yolo11s-v1.1_ncnn_model | 0.449 | 141.200 | 7 | 137.200 | 7 |
| NCNN | erax-anti-nsfw-yolo11m-v1.1_ncnn_model | 0.463 | 375.500 | 3 | 367.400 | 3 |
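
The non-PyTorch formats above can be regenerated from the .pt checkpoint with Ultralytics' export API. A minimal sketch; TensorRT export additionally requires an NVIDIA GPU with CUDA:

from ultralytics import YOLO

model = YOLO("erax-anti-nsfw-yolo11n-v1.1.pt")

# Each call writes the converted model next to the source .pt file
model.export(format="onnx")         # -> erax-anti-nsfw-yolo11n-v1.1.onnx
model.export(format="openvino")     # -> erax-anti-nsfw-yolo11n-v1.1_openvino_model/
model.export(format="torchscript")  # -> erax-anti-nsfw-yolo11n-v1.1.torchscript
# model.export(format="engine")     # TensorRT; needs a CUDA-capable GPU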

Training and Validation Results

Training and Validation Losses

(training and validation loss curves)

Confusion Matrix

(confusion matrix figure)

Inference

To use the trained model, follow these steps:

  1. Install the necessary packages:

pip install ultralytics supervision huggingface-hub

  2. Download the pretrained model:

from huggingface_hub import snapshot_download

# Download all model checkpoints from the Hub into the current directory
snapshot_download(repo_id="erax-ai/EraX-Anti-NSFW-V1.1", local_dir="./", force_download=True)

  3. Simple use case:
from ultralytics import YOLO
import supervision as sv

IOU_THRESHOLD        = 0.3
CONFIDENCE_THRESHOLD = 0.2

# Pick one of the three sizes: nano is fastest, medium is most accurate
# pretrained_path = "erax-anti-nsfw-yolo11m-v1.1.pt"
# pretrained_path = "erax-anti-nsfw-yolo11s-v1.1.pt"
pretrained_path = "erax-anti-nsfw-yolo11n-v1.1.pt"

image_path_list = ["test_images/img_1.jpg", "test_images/img_2.jpg"]

model = YOLO(pretrained_path)
results = model(image_path_list,
                conf=CONFIDENCE_THRESHOLD,
                iou=IOU_THRESHOLD)

for result in results:
    annotated_image = result.orig_img.copy()
    h, w = annotated_image.shape[:2]
    anchor = max(h, w)  # scale annotations to the longer image side

    detections = sv.Detections.from_ultralytics(result)
    label_annotator = sv.LabelAnnotator(text_color=sv.Color.BLACK,
                                        text_position=sv.Position.CENTER,
                                        text_scale=anchor / 1700)

    # pixel_size must be a positive integer number of pixels
    pixelate_annotator = sv.PixelateAnnotator(pixel_size=max(1, int(anchor / 50)))

    # Pixelate the detected regions, then overlay the class labels
    annotated_image = pixelate_annotator.annotate(scene=annotated_image,
                                                  detections=detections)
    annotated_image = label_annotator.annotate(annotated_image,
                                               detections=detections)

    sv.plot_image(annotated_image, size=(10, 10))
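
The overview also mentions masking the entire image instead of only the detected boxes. Below is a minimal sketch of that variant; the helper name mask_whole_image and the output path are illustrative, and cv2 ships as a dependency of ultralytics. It blurs the whole frame whenever any detection fires:

import cv2
from ultralytics import YOLO

model = YOLO("erax-anti-nsfw-yolo11n-v1.1.pt")

def mask_whole_image(image_path, conf=0.2, iou=0.3):
    """Return the image fully blurred if any NSFW object is detected."""
    result = model(image_path, conf=conf, iou=iou)[0]
    image = result.orig_img.copy()
    if len(result.boxes) > 0:
        # A large, odd-sized Gaussian kernel hides all visual context
        image = cv2.GaussianBlur(image, (151, 151), 0)
    return image

cv2.imwrite("masked_output.jpg", mask_whole_image("test_images/img_1.jpg"))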

More examples

  1. Example 01: (example figure)

  2. Example 02: (example figure)

  3. Example 03: It is SAFEST to use the make_love class, as it covers the entire surrounding context.

     (Comparison figures: without the make_love class vs. with the make_love class)
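
For the video-control use case mentioned in the overview, the same model can be run frame by frame over a stream. A sketch, assuming a hypothetical input file video.mp4 and an assumed 25 FPS output rate (read the real rate with cv2.VideoCapture if it matters); stream=True keeps memory bounded by yielding one result at a time:

import cv2
from ultralytics import YOLO

model = YOLO("erax-anti-nsfw-yolo11n-v1.1.pt")

writer = None
for result in model("video.mp4", stream=True, conf=0.2, iou=0.3):
    frame = result.orig_img
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("moderated.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 25, (w, h))  # 25 FPS is an assumption
    if len(result.boxes) > 0:
        # Blur every frame that contains a flagged detection
        frame = cv2.GaussianBlur(frame, (151, 151), 0)
    writer.write(frame)

if writer is not None:
    writer.release()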

Citation

If you find our project useful, we would appreciate it if you could star our repository and cite our work as follows:

@article{EraX-Anti-NSFW-V1.1,
  author       = {Lê Chí Tài and
                  Phạm Đình Thục and
                  Nguyễn Anh Nguyên and
                  Đoàn Thành Khang and
                  Trần Hải Khương and
                  Trương Công Đức and
                  Phan Nguyễn Tuấn Kha and
                  Phạm Huỳnh Nhật},
  title        = {EraX-Anti-NSFW-V1.1: A Highly Efficient Model for NSFW Detection},
  organization = {EraX JS Company},
  year         = {2024},
  url          = {https://huggingface.co/erax-ai/EraX-Anti-NSFW-V1.1}
}