---
license: cc-by-nc-nd-4.0
datasets:
- MichalMlodawski/closed-open-eyes
language:
- en
tags:
- eye
- eyes
model-index:
- name: mobilenet_v2 Eye State Classifier
  results:
  - task:
      type: image-classification
    dataset:
      name: MichalMlodawski/closed-open-eyes
      type: custom
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.99
    - name: Precision
      type: precision
      value: 0.99
    - name: Recall
      type: recall
      value: 0.99
---

# 👁️ Open-Closed Eye Classification mobilenet_v2 👁️

## Model Overview 🔍

This model is a fine-tuned version of mobilenet_v2 for classifying eye images as either open or closed. It reports 99% accuracy on the MichalMlodawski/closed-open-eyes dataset and distinguishes open from closed eyes across a variety of contexts.

## Model Details 📊

- **Model Name**: open-closed-eye-classification-mobilev2
- **Base Model**: google/mobilenet_v2_1.4_224
- **Fine-tuned By**: Michał Młodawski
- **Categories** (the 0/1 label mapping can be verified at runtime, as shown below):
  - 0: Closed Eyes 😴
  - 1: Open Eyes 👀
- **Accuracy**: 99% 🎯
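
The 0/1 category IDs above come from the label mapping stored in the model's configuration. Here is a minimal sketch for checking that mapping at runtime (it assumes the repository config defines `id2label`, as is standard for Transformers image-classification checkpoints):

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the label mapping
config = AutoConfig.from_pretrained("MichalMlodawski/open-closed-eye-classification-mobilev2")

# Prints the id -> label dictionary, e.g. {0: ..., 1: ...},
# with label names depending on how they were set during fine-tuning
print(config.id2label)
```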

## Use Cases 💡

This high-accuracy model is particularly useful for applications involving:

- Driver Drowsiness Detection 🚗
- Attentiveness Monitoring in Educational Settings 🏫
- Medical Diagnostics related to Eye Conditions 🏥
- Facial Analysis in Photography and Videography 📸
- Human-Computer Interaction Systems 💻

## How It Works 🛠️

The model takes an input image and classifies it into one of two categories:

- **Closed Eyes** (0): Images where the subject's eyes are fully or mostly closed.
- **Open Eyes** (1): Images where the subject's eyes are open.

The classification leverages the MobileNetV2 architecture (base model google/mobilenet_v2_1.4_224), fine-tuned on a curated dataset of eye images.
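
For a quick single-image check, the Transformers `pipeline` API wraps the preprocessing, forward pass, and label decoding that the full script below performs step by step. A minimal sketch (`example_eye.jpg` is a placeholder path, and the returned label strings depend on the `id2label` mapping stored in the repository config):

```python
from transformers import pipeline

# Build an image-classification pipeline around the fine-tuned checkpoint
classifier = pipeline(
    "image-classification",
    model="MichalMlodawski/open-closed-eye-classification-mobilev2",
)

# Returns a list of {"label": ..., "score": ...} dicts, highest score first
# ("example_eye.jpg" is a placeholder; use any local eye image)
print(classifier("example_eye.jpg"))
```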

## Getting Started 🚀

To start using open-closed-eye-classification-mobilev2, integrate it into your project with the following steps:

### Installation

```bash
pip install transformers==4.37.2
pip install torch==2.3.1
pip install Pillow
```
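
As an optional sanity check after installation, you can confirm that the pinned versions are the ones actually imported at runtime:

```python
import torch
import transformers

# Quick check against the versions pinned above
print(transformers.__version__)  # should report 4.37.2
print(torch.__version__)         # should start with 2.3.1 (build suffixes may vary)
```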

### Usage

```python
import os
from PIL import Image
import torch
from transformers import AutoImageProcessor, MobileNetV2ForImageClassification

# Path to the folder with images (set this to your images directory before running)
image_folder = ""
# Path to the model
model_path = "MichalMlodawski/open-closed-eye-classification-mobilev2"

# List of jpg files in the folder
jpg_files = [file for file in os.listdir(image_folder) if file.lower().endswith(".jpg")]

# Check if there are jpg files in the folder
if not jpg_files:
    print("🚫 No jpg files found in folder:", image_folder)
    exit()

# Load the model and image processor
image_processor = AutoImageProcessor.from_pretrained(model_path)
model = MobileNetV2ForImageClassification.from_pretrained(model_path)
model.eval()

# Processing and prediction for each image
results = []
for jpg_file in jpg_files:
    selected_image = os.path.join(image_folder, jpg_file)
    image = Image.open(selected_image).convert("RGB")
    
    # Process image using image_processor
    inputs = image_processor(images=image, return_tensors="pt")
    
    # Prediction using the model
    with torch.no_grad():
        outputs = model(**inputs)
        probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
        confidence, predicted = torch.max(probabilities, 1)
    
    results.append((jpg_file, predicted.item(), confidence.item() * 100))

# Display results
print("🖼️  Image Classification Results 🖼️")
print("=" * 40)

for jpg_file, prediction, confidence in results:
    emoji = "👁️" if prediction == 1 else "❌"
    confidence_bar = "🟩" * int(confidence // 10) + "⬜" * (10 - int(confidence // 10))
    
    print(f"📄 File name: {jpg_file}")
    print(f"{emoji} Prediction: {'Open' if prediction == 1 else 'Closed'}")
    print(f"🎯 Confidence: {confidence:.2f}% {confidence_bar}")
    print(f"{'=' * 40}")

print("🏁 Classification completed! 🎉")
```
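
For larger folders, the per-image loop above can be replaced with batched inference: the image processor accepts a list of PIL images and the model runs the whole batch in a single forward pass. Below is a minimal sketch that reuses `image_folder`, `jpg_files`, `image_processor`, and `model` from the script above (it assumes all images fit in memory at once):

```python
# Load every image up front (assumes the folder fits in memory)
images = [
    Image.open(os.path.join(image_folder, jpg_file)).convert("RGB")
    for jpg_file in jpg_files
]

# The image processor stacks the list into a single batched tensor
inputs = image_processor(images=images, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
    confidences, predictions = torch.max(probabilities, dim=1)

for jpg_file, prediction, confidence in zip(jpg_files, predictions, confidences):
    label = "Open 👁️" if prediction.item() == 1 else "Closed ❌"
    print(f"{jpg_file}: {label} ({confidence.item() * 100:.2f}%)")
```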

## Disclaimer ⚠️

This model is provided for research and development purposes only. The creators and distributors of this model do not assume any legal responsibility for its use or misuse. Users are solely responsible for ensuring that their use of this model complies with applicable laws, regulations, and ethical standards. The model's performance may vary depending on the quality and nature of input images. Always validate results in critical applications.

🚫 Do not use this model for any illegal, unethical, or potentially harmful purposes.

📝 Please note that while the model demonstrates high accuracy, it should not be used as a sole decision-making tool in safety-critical systems without proper validation and human oversight.