Session: 08-24-01: Human-Machine Interaction: Design, Dynamics, and Control
Paper Number: 166362
Predicting Human Takeover Actions in Autonomous Driving Using Bioanalytical Sensors and Deep Neural Networks
Introduction
Autonomous vehicles (AVs) are revolutionizing transportation, yet human intervention remains essential in critical situations. SAE Level 3 AVs, which drive autonomously under specific conditions but require the driver to take over on request, present challenges in ensuring safe and timely takeovers. Takeover responses are often delayed or inefficient, reducing their effectiveness in preventing collisions. Ensuring safe and timely takeovers requires understanding human intention and physiological responses. This study explores the use of bioanalytical sensors, including EEG, EMG, IMU, and heart rate monitors, to predict driver takeover actions using deep neural networks. By analyzing sensor data, the model aims to enhance AV responsiveness and driver readiness. The findings contribute to improving human-machine interaction in semi-autonomous driving, reducing reaction times, and increasing overall road safety. This research bridges the gap between advanced automation and real-time human intervention for a safer autonomous future.
Contribution of the work
This project integrates bioanalytical sensors, including EMG, IMU, and EEG, with deep learning to predict human intention during autonomous vehicle (AV) takeovers. By capturing muscle activity, hand position, and brain waves, the study develops a deep neural network model that classifies driver actions (left turn, right turn, no turn) with 79.45% validation accuracy. The research highlights the importance of monitoring multiple human factors to enhance driver engagement and reduce takeover response times in Level 3 AVs. The findings contribute to the development of advanced human-machine interfaces, improving safety and efficiency in AV systems. Future work includes refining sensor reliability, expanding data collection, and enhancing simulation realism to further optimize the model. This study provides a foundation for integrating biometric feedback into AV training programs, fostering safer human-AV collaboration.
Methodology
We first integrated off-the-shelf bioanalytical sensors (EMG, IMU, pulse, eye tracking, and EEG) with an Arduino Mega for data acquisition. EMG sensors track forearm muscle activity, an IMU monitors hand position, and EEG sensors measure brain waves. Data are captured through a Simulink model, processed in MATLAB, and exported to CSV files. Participants in the human-subjects study perform three repeated tasks to train the takeover-prediction model: left turns, right turns, and no turns, with the data normalized before analysis. A convolutional neural network (CNN) is trained using the adaptive moment estimation (Adam) optimizer. The network architecture comprises input, convolutional, fully connected, and output layers. Training involves data preprocessing, partitioning, and evaluation, and achieves 79.45% validation accuracy. The study identifies challenges in sensor reliability and proposes hardware and software improvements for future work.
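The normalization and windowing of the multichannel sensor recordings described above can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the channel count, window length, stride, and the choice of per-channel z-score normalization are all assumptions.

```python
import numpy as np

def zscore_normalize(data: np.ndarray) -> np.ndarray:
    """Z-score normalize each sensor channel (column) independently."""
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    return (data - mean) / np.where(std == 0, 1.0, std)

def window_signal(data: np.ndarray, window: int, stride: int) -> np.ndarray:
    """Slice a (samples x channels) recording into overlapping windows
    of shape (n_windows, window, channels) suitable as CNN input."""
    starts = range(0, data.shape[0] - window + 1, stride)
    return np.stack([data[s:s + window] for s in starts])

# Example: 1000 samples of 4 hypothetical channels (e.g., 2 EMG + 2 IMU axes)
recording = np.random.default_rng(0).normal(size=(1000, 4))
normalized = zscore_normalize(recording)
windows = window_signal(normalized, window=200, stride=100)
print(windows.shape)  # (9, 200, 4)
```

Each window would then be paired with a task label (left turn, right turn, no turn) before partitioning into training and validation sets.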
To integrate this platform with SAE Level 3 autonomous driving systems, such as Tesla’s Full Self-Driving (FSD) Supervised, the bioanalytical sensor data could be used to enhance driver monitoring and takeover readiness. For example, real-time EEG, eye tracking, and EMG data could detect driver disengagement or fatigue, triggering escalating alerts (auditory, visual,
haptic) to re-engage the driver. The IMU and EMG data could predict hand movements, ensuring that the driver is prepared to take control when needed. This integration would align with Level 3 requirements, where the vehicle handles driving tasks but requires human intervention in complex scenarios. By combining human factor analysis with AV systems, this approach could improve safety and driver-AV collaboration in semi-autonomous driving environments.
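The escalating-alert logic described in this paragraph can be sketched as a simple state mapping. The stage thresholds and ordering below are illustrative assumptions, not part of any production Level 3 or FSD interface.

```python
from enum import Enum

class Alert(Enum):
    NONE = 0
    AUDITORY = 1
    VISUAL = 2
    HAPTIC = 3

def escalate(disengaged_seconds: float) -> Alert:
    """Map continuous driver-disengagement time (e.g., inferred from EEG
    and eye tracking) to an escalating alert stage.
    Thresholds are hypothetical."""
    if disengaged_seconds < 2.0:
        return Alert.NONE
    if disengaged_seconds < 5.0:
        return Alert.AUDITORY
    if disengaged_seconds < 8.0:
        return Alert.VISUAL
    return Alert.HAPTIC

print(escalate(1.0).name)   # NONE
print(escalate(6.0).name)   # VISUAL
```

In a real system the input would be a fused disengagement estimate rather than a raw timer, and each stage would reset once the driver re-engages.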
Preliminary results and conclusions
The preliminary results demonstrate a CNN model achieving 79.45% validation accuracy in classifying driver actions (left turn, right turn, and no turn) based on EMG and IMU data. Experiments involved participants performing repeated left turns, right turns, and no turns while sensor data were collected. The confusion matrix indicates moderate per-class performance on the driving motor functions. Overall accuracy is 74%, suggesting the model effectively predicts driver intentions but requires refinement. The study underscores the potential of integrating bioanalytical sensors and deep learning to enhance autonomous driving safety. Future work will focus on improving sensor reliability, expanding data collection, and refining the experimental setup for more robust results.
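The per-class performance and overall accuracy reported above both derive from a confusion matrix. A minimal sketch of computing them from predicted labels follows; the label encoding (0 = no turn, 1 = left turn, 2 = right turn) and the toy predictions are assumptions for illustration only.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels: 0 = no turn, 1 = left turn, 2 = right turn
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 0, 2, 2, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()  # correct predictions on the diagonal
print(cm)
print(f"overall accuracy = {accuracy:.2f}")  # 0.70
```

The diagonal entries count correct classifications for each action, so inspecting the off-diagonal cells shows which turn directions the model confuses most often.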
Presenting Author: Lin Jiang San Jose State University
Presenting Author Biography: Dr. Lin Jiang is an Assistant Professor of Mechanical Engineering at San José State University. She earned her PhD and MS in Mechanical Engineering from the University of Texas at Dallas. Her research focuses on human biomechanics, robotics, human-centered design, medical assistive devices, and AI for healthcare. Dr. Jiang holds a patent for the SmartLact8 breast pump and has published in leading journals such as IEEE TBME, IEEE RAL, and ABME. Her work has been supported by NSF, CSUBiotech, and the Honda Foundation. She has received multiple awards, including the Exemplary Teaching Award, the ASME IMECE Best Paper Award, and an IEEE TBME paper recognition. She is an active member of IEEE, BMES, ASME, and ISHRML.
Authors:
Shane Sharp San Jose State University
Faraz Shaikh San Jose State University
David Peña San Jose State University
Gaojian Huang San Jose State University
Lin Jiang San Jose State University
Paper Type
Technical Paper Publication
