Abstract: Driving can be risky when a driver’s mind wanders, even if their eyes are on the road. This “look but don’t see” problem, known as cognitive distraction, is a major cause of car crashes. As self-driving cars become more common, humans will still need to stay alert and take control in emergencies for years to come. To address this, we developed a new model, Self-DSNet, to detect when drivers are cognitively distracted. Self-DSNet is built on a Self-Organizing Neural Network (Self-ONN), a type of network that can capture complex patterns in data. Tested on camera footage alone, it detected distraction with 94.23% accuracy; adding signals such as heart rate, breathing rate, and steering behavior boosted accuracy to 95.13%. The model makes its predictions with the help of machine-learning tools such as Random Forests, Decision Trees, and Support Vector Machines. We also found that focusing on just a few key signs, such as changes in a driver’s pupil size or eye movements, still gave solid results, with 90% accuracy across different types of roads. The study also showed that road type can affect how distracted a driver gets. These findings could help build better systems for keeping drivers focused. In future work, we plan to test the model in real-time driving situations and add more data sources to make it even more reliable across all kinds of roads and scenarios.
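The Self-ONN mentioned above replaces the fixed linear neuron of a conventional network with a "generative" neuron that learns a higher-order (Maclaurin-style) transformation of each input. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation; the function name, array shapes, and expansion order `Q` are illustrative assumptions.

```python
import numpy as np

def generative_neuron(x, w, b):
    """Illustrative generative neuron from the Self-ONN literature:
    y = b + sum_k sum_q w[k, q] * x[k]**(q + 1).
    With Q = 1 (one weight column) this reduces to an ordinary linear neuron.

    x: input vector, shape (K,)
    w: learnable weights, shape (K, Q) -- one weight per input per power
    b: scalar bias
    """
    K, Q = w.shape
    # Stack the powers x, x^2, ..., x^Q column-wise: shape (K, Q)
    powers = np.stack([x ** (q + 1) for q in range(Q)], axis=1)
    return b + float(np.sum(w * powers))
```

In a full network, many such neurons would be stacked into layers and the weights learned by backpropagation; here the higher-order terms are what let the model fit more complex input-output patterns than a purely linear unit.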
Keywords: Driver distraction, Human driving supervision, Vehicle sensors, Self-DSNet model, Self-Organizing Neural Network (Self-ONN), Driver distraction monitoring systems.
DOI: 10.17148/IJIREEICE.2025.131040
[1] Ramisetty T M Surya, Sai G, Allam Reddy Charan, Sriram Sanjay S, Neelam Sanjeev Kumar, "Advanced Machine Learning for Real-Time Driver Distraction Analysis with Visual Inputs," International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering (IJIREEICE), DOI: 10.17148/IJIREEICE.2025.131040.