---
license: cc-by-nc-sa-4.0
language:
- en
---
# 🚀 FocalNet NSFW Image Classifier: Your Content Moderation Superhero! 🦸‍♂️
## 🌟 Discover the Power of Intelligent Moderation!
👋 Are you ready for a revolution in content moderation? Meet the FocalNet NSFW Image Classifier - your new, lightning-fast, and super-smart assistant in the battle against inappropriate content!
## 🎭 Who Am I?
I'm an advanced AI model built on the powerful microsoft/focalnet-base. My superpower is the lightning-fast classification of images into three categories:
- 🟢 SAFE: "Green light! Let's roll with this content!"
- 🟡 QUESTIONABLE: "Hmm... Maybe we should take a second look?"
- 🔴 UNSAFE: "Whoa! Let's stop this before anyone sees it!"
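In code, those traffic lights map onto the model's raw output labels. A minimal sketch of the mapping (the same one used in the full example below):

```python
# Maps the model's raw labels to the three moderation buckets.
label_to_category = {
    "LABEL_0": "Safe",          # 🟢 publish as-is
    "LABEL_1": "Questionable",  # 🟡 flag for human review
    "LABEL_2": "Unsafe",        # 🔴 block before anyone sees it
}
```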
## 🦾 What Can I Do?
Imagine you're the guardian of the internet galaxy. Your mission? Protect users from shocking, inappropriate content. But how do you review millions of images daily? That's where I come in!
- 🕵️‍♂️ Lightning-Fast Detection: I'll analyze every pixel faster than you can say "safe content"!
- 🛡️ Protective Shield: I'll stand guard over your platforms, shielding users from unwanted content.
- 🎯 Sniper's Precision: My eye is so sharp that I can spot potential threats with surgical accuracy.
## 🚀 How to Use Me?
Ready for an adventure? Here's how you can harness my power:
Install my powers:

```bash
pip install transformers==4.37.2 torch==2.3.1 torchvision Pillow
```
Summon me in your code:

```python
import os

import torch
from PIL import Image
from transformers import AutoProcessor, FocalNetForImageClassification

# Path to the folder with images
image_folder = ""
# Path to the model
model_path = "MichalMlodawski/nsfw-image-detection-large"

# List of jpg files in the folder
jpg_files = [file for file in os.listdir(image_folder) if file.lower().endswith(".jpg")]

# Check if there are jpg files in the folder
if not jpg_files:
    print("🚫 No jpg files found in folder:", image_folder)
    exit()

# Load the model and its processor (the processor handles resizing and normalization)
feature_extractor = AutoProcessor.from_pretrained(model_path)
model = FocalNetForImageClassification.from_pretrained(model_path)
model.eval()

# Mapping from model labels to NSFW categories
label_to_category = {
    "LABEL_0": "Safe",
    "LABEL_1": "Questionable",
    "LABEL_2": "Unsafe",
}

# Processing and prediction for each image
results = []
for jpg_file in jpg_files:
    image_path = os.path.join(image_folder, jpg_file)
    image = Image.open(image_path).convert("RGB")

    # Preprocess the image using the model's processor
    inputs = feature_extractor(images=image, return_tensors="pt")

    # Prediction using the model
    with torch.no_grad():
        outputs = model(**inputs)
        probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
        confidence, predicted = torch.max(probabilities, 1)

    # Get the label from the model's configuration
    label = model.config.id2label[predicted.item()]
    results.append((jpg_file, label, confidence.item() * 100))

# Display results
print("🖼️ NSFW Classification Results 🖼️")
print("=" * 40)
for jpg_file, label, confidence in results:
    category = label_to_category.get(label, "Unknown")
    emoji = {"Safe": "✅", "Questionable": "⚠️", "Unsafe": "🔞"}.get(category, "❓")
    confidence_bar = "🟩" * int(confidence // 10) + "⬜" * (10 - int(confidence // 10))
    print(f"📄 File name: {jpg_file}")
    print(f"🏷️ Model label: {label}")
    print(f"{emoji} NSFW category: {category}")
    print(f"🎯 Confidence: {confidence:.2f}% {confidence_bar}")
    print("=" * 40)
print("🏁 Classification completed! 🎉")
```
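Prefer less boilerplate? The transformers pipeline API can drive the same checkpoint end to end. A minimal sketch, assuming the checkpoint's bundled processor config handles preprocessing (`my_image.jpg` is a hypothetical path):

```python
from transformers import pipeline

# One-liner setup: the pipeline loads the model and its processor together.
classifier = pipeline("image-classification", model="MichalMlodawski/nsfw-image-detection-large")

# "my_image.jpg" is a hypothetical path -- point it at your own file.
for prediction in classifier("my_image.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.2%}")
```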
## 🎉 What Sets Me Apart?
- 🚄 Speed of Light: I'll analyze thousands of images before you finish your morning coffee!
- 🧠 Intelligence Level 100: I've learned from millions of examples, so I know all the tricks!
- 🛠️ Easy Integration: I'll hop into your code faster than a cat on a keyboard!
- 🌐 Cross-Cultural Coverage: I understand images from all cultures and contexts!
- 🔄 Continuous Learning: I'm always improving, adapting to new trends and challenges!
## 🔬 Technical Specifications
- Base Model: microsoft/focalnet-base
- Model Type: FocalNetForImageClassification
- Input Size: 512x512 pixels
- Output: 3 classes (Safe, Questionable, Unsafe)
- Framework: PyTorch
- Language: Python 3.8+ (required by the pinned transformers and torch versions)
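Want to double-check those specs? The output classes can be read straight from the checkpoint's configuration. A quick sketch (the exact label strings depend on the uploaded config):

```python
from transformers import FocalNetForImageClassification

model = FocalNetForImageClassification.from_pretrained("MichalMlodawski/nsfw-image-detection-large")

# The classification head should expose exactly three labels.
print(model.config.num_labels)  # expected: 3
print(model.config.id2label)    # e.g. {0: "LABEL_0", 1: "LABEL_1", 2: "LABEL_2"}
```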
## 🚀 Use Cases
- Social Media Platforms: Keep user-generated content clean and safe.
- E-commerce Sites: Ensure product images meet community standards.
- Dating Apps: Maintain a respectful environment for all users.
- Content Sharing Platforms: Automatically filter potentially inappropriate uploads (a sketch of such a gate follows this list).
- Educational Platforms: Ensure learning materials are age-appropriate.
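As a concrete sketch of that filtering gate, here's one way to route uploads based on the category and confidence produced by the classification loop above. The routing function and its thresholds are hypothetical; calibrate them on your own traffic:

```python
def route_upload(category: str, confidence: float) -> str:
    """Hypothetical moderation gate: decide what happens to an upload.

    `category` and `confidence` (in percent) come from the classification
    loop above. Thresholds are illustrative -- tune them on real data.
    """
    if category == "Unsafe":
        return "block"            # 🔴 never publish
    if category == "Questionable" or confidence < 80.0:
        return "human_review"     # 🟡 borderline content or low confidence
    return "publish"              # 🟢 safe and confident

# Example: a Safe prediction at 97% confidence goes straight through.
print(route_upload("Safe", 97.0))  # publish
```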
## 🏋️ Training and Performance
- Training Data: Millions of diverse images across various categories
- Fine-tuning: Specialized NSFW dataset for precise categorization
- Accuracy: 95%+ on benchmark NSFW detection tasks
- Latency: <100ms per image on standard GPU hardware
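Want to verify the latency figure on your own hardware? A rough timing sketch (assumes the model fits on a local GPU if one is available; `example.jpg` is a hypothetical test image):

```python
import time

import torch
from PIL import Image
from transformers import AutoProcessor, FocalNetForImageClassification

model_path = "MichalMlodawski/nsfw-image-detection-large"
processor = AutoProcessor.from_pretrained(model_path)
model = FocalNetForImageClassification.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

# "example.jpg" is a hypothetical test image -- substitute your own.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt").to(device)

with torch.no_grad():
    model(**inputs)  # warm-up pass (kernel compilation, cache fills)
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(**inputs)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    elapsed = (time.perf_counter() - start) / runs

print(f"~{elapsed * 1000:.1f} ms per image on {device}")
```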
## ⚠️ Important Warnings (Because Every Superhero Has Their Weaknesses)
- 🎢 Not for Extreme Challenges: I'm great, but don't use me where an error could cost more than burnt toast!
- 🤖 I'm Not Skynet: I can make mistakes sometimes, so don't leave me alone with the red button!
- 🕵️‍♂️ Respect Privacy: Make sure you have the right to process the images you show me. I don't like prying eyes!
- 🔄 I Need Updates: The world changes, and so must I! Regularly check if I need a refresh.
- 🤝 Collaboration is Key: I'm a great assistant, but let's leave final decisions to humans. Together, we're unbeatable!
## 🌈 The Future is Bright!
Remember, I'm part of an ongoing research process. With each update, I become smarter, faster, and even more incredible!
Ready to revolutionize content moderation together? Bring me on board your project and watch the magic happen! 🎩✨
Join the AI revolution today and make the internet a safer place! 🌍💪
Let's make the digital world safer, one image at a time! 🌟