---
license: apache-2.0
language:
- en
base_model:
- Ultralytics/YOLO11
tags:
- yolo
- yolo11
- nsfw
pipeline_tag: object-detection
---
🔞 WARNING: SENSITIVE CONTENT 🔞
THIS MEDIA CONTAINS SENSITIVE CONTENT (I.E. NUDITY, VIOLENCE, PROFANITY, PORN) THAT SOME PEOPLE MAY FIND OFFENSIVE. YOU MUST BE 18 OR OLDER TO VIEW THIS CONTENT.
----------
# EraX-Anti-NSFW-V1.1
A Highly Efficient Model for NSFW Detection, very effective for **pre-publication image and video control** and for **limiting children's access to harmful publications**.
You can simply predict the classes and their bounding boxes, mask only the predicted harmful object(s), or mask the entire image.
Please see the deployment code below.
- **Developed by**:
- Lê Chí Tài (tai.le@erax.ai)
- Phạm Đình Thục (thuc.pd@erax.ai)
- Mr. Nguyễn Anh Nguyên (nguyen@erax.ai)
- **Model version**: v1.1
- **License**: Apache 2.0
## Model Details / Overview
- **Model Architecture**: YOLO11 (Nano, Small, Medium)
- **Task**: Object Detection (NSFW Detection)
- **Dataset**: Private datasets (collected from the Internet).
- **Training set**: 40,192 images.
- **Validation set**: 3,495 images.
- **Classes**: anus, make_love, nipple, penis, vagina (see the snippet below for inspecting this mapping at runtime).
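The class names above map to integer indices inside the checkpoint. The snippet below is a small assumed example (not part of the official deployment code) showing how to inspect that mapping and how to restrict predictions to a subset of classes; the index order in the comment is an assumption based on the list above.
```python
from ultralytics import YOLO

# Load a downloaded checkpoint and print its class index -> name mapping.
model = YOLO("erax-anti-nsfw-yolo11n-v1.1.pt")
print(model.names)  # assumed: {0: 'anus', 1: 'make_love', 2: 'nipple', 3: 'penis', 4: 'vagina'}

# Predictions can be limited to a subset of classes, e.g. only 'make_love'
# (index 1 is an assumption; check model.names on your checkpoint first):
# results = model("image.jpg", classes=[1])
```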
### Labels
![Labels](./train_result/erax-anti-nsfw-yolo11n-v1.1/labels.jpg)
## Training Configuration
- **Model Weights Files**:
- Nano: [`erax-anti-nsfw-yolo11n-v1.1.pt`](./erax-anti-nsfw-yolo11n-v1.1.pt) (5.45 MB)
- Small: [`erax-anti-nsfw-yolo11s-v1.1.pt`](./erax-anti-nsfw-yolo11s-v1.1.pt) (19.2 MB)
- Medium: [`erax-anti-nsfw-yolo11m-v1.1.pt`](./erax-anti-nsfw-yolo11m-v1.1.pt) (40.5 MB)
- **Training Config** (a reproduction sketch follows this list):
- **Number of Epochs**: 100
- **Learning Rate**: 0.01
- **Batch Size**: 336/192/92 (Nano/Small/Medium)
- **Image Size**: 640x640
- **Training server**: 4 x NVIDIA RTX A4000 (16GB GDDR6)
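The settings above can be reproduced with the standard Ultralytics training API. The sketch below is illustrative only: `nsfw.yaml` is a hypothetical dataset definition for the private train/val splits and the five classes, and is not shipped with this repository.
```python
from ultralytics import YOLO

# Fine-tune YOLO11 Nano with the hyperparameters listed above.
model = YOLO("yolo11n.pt")          # start from the official YOLO11 Nano weights
model.train(
    data="nsfw.yaml",               # hypothetical dataset config (not in this repo)
    epochs=100,                     # Number of Epochs
    lr0=0.01,                       # Learning Rate
    batch=336,                      # Nano batch size (192 for Small, 92 for Medium)
    imgsz=640,                      # 640x640 input resolution
    device=[0, 1, 2, 3],            # 4 x NVIDIA RTX A4000
)
```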
## Evaluation Metrics
Below are the key metrics from the model evaluation on the validation set:
Coming soon.
## Benchmark
- **CPU**: 11th Gen Intel Core(TM) i7-11800H @ 2.30GHz
- **GPU**: NVIDIA GeForce RTX 3050 Ti (3902 MiB)
| Format | Model | mAP50-95(B) | GPU inference (ms/im) | GPU FPS | CPU inference (ms/im) | CPU FPS |
|---|---|---|---|---|---|---|
| PyTorch | erax-anti-nsfw-yolo11n-v1.1.pt | 0.438 | 3.500 | 286 | 27.900 | 36 |
| PyTorch | erax-anti-nsfw-yolo11s-v1.1.pt | 0.453 | 7.000 | 143 | 71.000 | 14 |
| PyTorch | erax-anti-nsfw-yolo11m-v1.1.pt | 0.467 | 16.500 | 61 | 206.600 | 5 |
| TorchScript | erax-anti-nsfw-yolo11n-v1.1.torchscript | 0.435 | 3.700 | 270 | 38.500 | 26 |
| TorchScript | erax-anti-nsfw-yolo11s-v1.1.torchscript | 0.449 | 8.100 | 123 | 108.500 | 9 |
| TorchScript | erax-anti-nsfw-yolo11m-v1.1.torchscript | 0.463 | 20.300 | 49 | 394.900 | 3 |
| ONNX | erax-anti-nsfw-yolo11n-v1.1.onnx | 0.435 | - | - | 28.300 | 35 |
| ONNX | erax-anti-nsfw-yolo11s-v1.1.onnx | 0.449 | - | - | 59.800 | 17 |
| ONNX | erax-anti-nsfw-yolo11m-v1.1.onnx | 0.463 | - | - | 157.800 | 6 |
| OpenVINO | erax-anti-nsfw-yolo11n-v1.1_openvino_model | 0.435 | 13.900 | 72 | 15.900 | 63 |
| OpenVINO | erax-anti-nsfw-yolo11s-v1.1_openvino_model | 0.449 | 72.300 | 14 | 40.800 | 25 |
| OpenVINO | erax-anti-nsfw-yolo11m-v1.1_openvino_model | 0.463 | 245.900 | 4 | 121.700 | 8 |
| TensorRT | erax-anti-nsfw-yolo11n-v1.1.engine | 0.435 | 3.500 | 286 | - | - |
| TensorRT | erax-anti-nsfw-yolo11s-v1.1.engine | 0.449 | 6.800 | 147 | - | - |
| TensorRT | erax-anti-nsfw-yolo11m-v1.1.engine | 0.463 | 15.700 | 64 | - | - |
| PaddlePaddle | erax-anti-nsfw-yolo11n-v1.1_paddle_model | 0.435 | 214.700 | 5 | 136.200 | 7 |
| PaddlePaddle | erax-anti-nsfw-yolo11s-v1.1_paddle_model | 0.449 | 517.700 | 2 | 234.600 | 4 |
| PaddlePaddle | erax-anti-nsfw-yolo11m-v1.1_paddle_model | 0.463 | 887.000 | 1 | 506.300 | 2 |
| MNN | erax-anti-nsfw-yolo11n-v1.1.mnn | 0.435 | 55.800 | 18 | 59.300 | 17 |
| MNN | erax-anti-nsfw-yolo11s-v1.1.mnn | 0.449 | 147.600 | 7 | 146.300 | 7 |
| MNN | erax-anti-nsfw-yolo11m-v1.1.mnn | 0.463 | 378.500 | 3 | 380.700 | 3 |
| NCNN | erax-anti-nsfw-yolo11n-v1.1_ncnn_model | 0.435 | 57.100 | 18 | 61.100 | 16 |
| NCNN | erax-anti-nsfw-yolo11s-v1.1_ncnn_model | 0.449 | 141.200 | 7 | 137.200 | 7 |
| NCNN | erax-anti-nsfw-yolo11m-v1.1_ncnn_model | 0.463 | 375.500 | 3 | 367.400 | 3 |
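Numbers like those above can be generated with the Ultralytics benchmark utility. The sketch below is a hedged illustration: `nsfw.yaml` is an assumed validation dataset config, and actual timings depend on your hardware and export toolchains.
```python
from ultralytics.utils.benchmarks import benchmark

# Export the Nano checkpoint to each format and measure mAP50-95 and latency.
# device=0 benchmarks on the first GPU; use device="cpu" for the CPU column.
benchmark(
    model="erax-anti-nsfw-yolo11n-v1.1.pt",
    data="nsfw.yaml",   # assumed dataset config with the validation split
    imgsz=640,
    device=0,
)
```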
## Training and Validation Results
### Training and Validation Losses
![Training and Validation Losses](./train_result/erax-anti-nsfw-yolo11n-v1.1/results.png)
### Confusion Matrix
![Confusion Matrix](./train_result/erax-anti-nsfw-yolo11n-v1.1/confusion_matrix_normalized.png)
## Inference
To use the trained model, follow these steps:
1. **Install the necessary packages**:
```bash
pip install ultralytics supervision huggingface-hub
```
2. **Download Pretrained model**:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="erax-ai/EraX-Anti-NSFW-V1.1", local_dir="./", force_download=True)
```
3. **Simple Use Case**:
```python
from ultralytics import YOLO
from PIL import Image
import supervision as sv
import numpy as np

IOU_THRESHOLD = 0.3
CONFIDENCE_THRESHOLD = 0.2

# pretrained_path = "erax-anti-nsfw-yolo11m-v1.1.pt"
# pretrained_path = "erax-anti-nsfw-yolo11s-v1.1.pt"
pretrained_path = "erax-anti-nsfw-yolo11n-v1.1.pt"

image_path_list = ["test_images/img_1.jpg", "test_images/img_2.jpg"]

model = YOLO(pretrained_path)
results = model(image_path_list,
                conf=CONFIDENCE_THRESHOLD,
                iou=IOU_THRESHOLD)

for result in results:
    annotated_image = result.orig_img.copy()
    h, w = annotated_image.shape[:2]
    anchor = h if h > w else w  # scale annotations to the larger image side

    detections = sv.Detections.from_ultralytics(result)
    label_annotator = sv.LabelAnnotator(text_color=sv.Color.BLACK,
                                        text_position=sv.Position.CENTER,
                                        text_scale=anchor/1700)
    pixelate_annotator = sv.PixelateAnnotator(pixel_size=anchor/50)

    # Pixelate the detected regions, then overlay the class labels.
    annotated_image = pixelate_annotator.annotate(
        scene=annotated_image.copy(),
        detections=detections
    )
    annotated_image = label_annotator.annotate(
        annotated_image,
        detections=detections
    )
    sv.plot_image(annotated_image, size=(10, 10))
```
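The example above pixelates only the detected regions. For stricter pre-publication control you can instead mask the entire image whenever any NSFW object is detected, as mentioned in the overview. The following is a minimal sketch; the file paths and the helper name are placeholders, not part of the official API.
```python
import cv2
import numpy as np
from ultralytics import YOLO

IOU_THRESHOLD = 0.3
CONFIDENCE_THRESHOLD = 0.2

model = YOLO("erax-anti-nsfw-yolo11n-v1.1.pt")

def mask_if_nsfw(image_path, output_path):
    """Write a fully blacked-out copy of the image if any NSFW object is found;
    otherwise write the original. Returns True when the image was masked."""
    result = model(image_path, conf=CONFIDENCE_THRESHOLD, iou=IOU_THRESHOLD)[0]
    image = result.orig_img.copy()
    if len(result.boxes) > 0:          # at least one NSFW detection
        image = np.zeros_like(image)   # replace the whole frame with black
        cv2.imwrite(output_path, image)
        return True
    cv2.imwrite(output_path, image)
    return False

# Example usage (placeholder paths):
# masked = mask_if_nsfw("test_images/img_1.jpg", "masked_img_1.jpg")
```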
## More examples
1. **Example 01**:
![Example 01](./examples/img_3.jpg)
2. **Example 02**:
![Example 02](./examples/img_6.jpg)
3. **Example 03**: It is SAFEST to use the make_love class, as it covers the entire context.
Without make_love class | With make_love class
:-------------------------:|:-------------------------:
![](./examples/img_2.jpg) | ![](./examples/img_2_make_love.jpg)
![](./examples/img_4.jpg) | ![](./examples/img_4_make_love.jpg)
![](./examples/img_5.jpg) | ![](./examples/img_5_make_love.jpg)
## Citation
If you find our project useful, we would appreciate it if you could star our repository and cite our work as follows:
```bibtex
@article{EraX-Anti-NSFW-V1.1,
author = {Lê Chí Tài and
Phạm Đình Thục and
Mr. Nguyễn Anh Nguyên and
Đoàn Thành Khang and
Mr. Trần Hải Khương and
Mr. Trương Công Đức and
Phan Nguyễn Tuấn Kha and
Phạm Huỳnh Nhật},
title = {EraX-Anti-NSFW-V1.1: A Highly Efficient Model for NSFW Detection},
organization={EraX JS Company},
year={2024},
url={https://huggingface.co/erax-ai/EraX-Anti-NSFW-V1.1}
}
```