Session: Government Agency Student Posters
Paper Number: 173263
Uncertainty-Aware Shared Autonomy: Learning From Minimal Human Guidance via Conformal Prediction
Robotic systems leveraging deep learning have demonstrated strong potential in addressing complex tasks across manufacturing, service robotics, and autonomous systems. However, deploying these models in real-world environments is often hindered by their dependence on large volumes of annotated training data, which are impractical to collect in robotic applications where manual labeling is labor-intensive, time-consuming, and sometimes infeasible. As a result, the generalization capability of such models is limited, reducing reliability and safety under novel or unstructured conditions. Shared autonomy frameworks have emerged as a promising solution, combining human input with robot autonomy to enhance operational safety, adaptability, and task success rates. However, traditional shared autonomy methods rely on static pre-trained policies that do not evolve through interaction and experience, limiting their effectiveness during long-term deployment. Human-in-the-loop (HITL) learning extends these capabilities by enabling continuous policy adaptation based on human guidance during operation. Despite this advancement, existing HITL approaches often demand constant human supervision to prevent catastrophic failures, leading to high labor costs and reduced scalability in real-world applications.
To address these challenges, this research proposes an uncertainty-aware shared autonomy framework that integrates HITL with Conformal Prediction (CP) to enhance robot learning efficiency while minimizing human intervention. Conformal Prediction is a mathematically rigorous technique that provides prediction sets with guaranteed coverage probabilities based on a user-defined error tolerance or confidence level. Our framework uses a small calibration set collected via HITL to model task-specific uncertainty distributions. When the robot encounters a new input state, CP evaluates its prediction confidence and determines whether human assistance is required. The methodology involves three key steps. First, the robot performs standard policy-based task execution while collecting calibration samples through limited human guidance. Second, CP computes non-conformity scores for these samples to establish quantile thresholds corresponding to the desired error tolerance levels.
Finally, during deployment, the system assesses its prediction uncertainty in real time; if uncertainty exceeds the threshold, it requests human intervention; otherwise, it continues autonomously.
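The three steps above can be sketched as a standard split conformal calibration-and-gating loop. This is a minimal illustration, not the authors' implementation: the nonconformity score (one minus the policy's probability for the chosen or demonstrated action), the calibration data, and all names are illustrative assumptions.

```python
import numpy as np

def conformal_threshold(cal_scores: np.ndarray, alpha: float) -> float:
    """Step 2: compute the quantile threshold from calibration nonconformity
    scores, using the finite-sample-corrected quantile of split conformal
    prediction so that, for exchangeable data, a new score exceeds the
    threshold with probability at most alpha."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, level, method="higher"))

def act_or_ask(policy_probs: np.ndarray, q_hat: float) -> str:
    """Step 3: act autonomously when the current state's nonconformity score
    is within the calibrated threshold; otherwise request human help."""
    score = 1.0 - float(np.max(policy_probs))  # low confidence -> high score
    return "request_human" if score > q_hat else "act_autonomously"

# Step 1 (illustrative stand-in): nonconformity scores gathered during
# policy execution with limited human guidance, e.g. one minus the policy
# probability assigned to the human-demonstrated action.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(0.0, 0.3, size=500)

q_hat = conformal_threshold(cal_scores, alpha=0.10)
confident = act_or_ask(np.array([0.96, 0.03, 0.01]), q_hat)  # score 0.04
uncertain = act_or_ask(np.array([0.40, 0.35, 0.25]), q_hat)  # score 0.60
```

The error tolerance `alpha` is the single knob behind the accuracy-efficiency trade-off described above: it moves the threshold and thereby how often the robot defers to a human.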
Preliminary experiments demonstrate the effectiveness of our framework. The system achieves a false negative rate as low as 5.4% at a 0.05 error tolerance level and 6.5% at a 0.10 level, indicating minimal false assurances of safety. Moreover, it successfully identifies 43.5% of uncertain predictions at the 0.05 level and 40.2% at the 0.10 level, ensuring human oversight when necessary while maintaining high operational efficiency.
In conclusion, this research contributes a novel approach to enhancing robot autonomy by:
* Reducing human labor through targeted intervention requests based on quantified uncertainty,
* Enabling dynamic accuracy-efficiency trade-offs that adapt to situational error tolerance levels, and
* Facilitating scalable deployment of robot learning algorithms in real-world environments.
These advancements support safer, more efficient, and more practical integration of learning-enabled robots into various human-centric applications.
Presenting Author: Yongshuai Wu, Kennesaw State University
Presenting Author Biography: Yongshuai Wu received his B.Eng. degree in Microelectronic Science and Engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2022, and his M.Sc. degree in Information Technology from Kennesaw State University, Marietta, GA, USA, in 2024. He is currently pursuing a Ph.D. in Computer Science at Kennesaw State University, beginning in August 2024. His research interests include autonomous robotics, imitation learning, and robotic reasoning. He is a recipient of the IEEE Globecom 2023 Best Paper Award (BPA) in the IoT and Sensor Networks category.
Authors:
Yongshuai Wu, Kennesaw State University
Jian Zhang, Kennesaw State University
Shaoen Wu, Kennesaw State University
Shiwen Mao, Auburn University
Paper Type: Government Agency Student Poster Presentation
