Session: 07-17-01: Machine Learning and Artificial Intelligence in Dynamics, Vibrations and Control
Paper Number: 147280
147280 - AI-Computing, Reinforcement Learning-Based Optimization Method for Closed-Loop Human-Robot Interactions
Wearable robots have demonstrated the capability to improve human performance during walking for able-bodied individuals and to restore mobility for people with disabilities. Wearable robots (exoskeletons) can provide torque and force assistance to the wearer by detecting human intention, forming a human-robot interaction (HRI) system. Many AI methods have been proposed to optimize HRI in simulation. However, the development of exoskeleton robots faces two significant challenges. First, intensive human testing is necessary to validate the performance of the designed exoskeleton controller. This can lead to fatigue and increased risk for human subjects over long testing periods, as well as higher testing costs due to economic compensation for people with mobility disabilities. Second, although simulation-based learning is a potential way to replace intensive human testing, such simulations have not demonstrated their benefits in experiments with a physical robot because they either do not incorporate controller design or do not consider HRI, creating a gap between simulation analysis and human testing. To address these challenges, this work introduces a physics-informed, data-driven, reinforcement learning-based simulation that predicts realistic muscle responses to exoskeleton assistance and autonomously teaches the exoskeleton how to enhance human mobility given the current human activity.
In our simulation framework, three deep neural networks are developed to simulate the closed-loop human-robot interaction. The human model (a motion imitation network and a muscle coordination network) and the exoskeleton controller (a neural network-based control policy) exchange state information to establish the HRI. The exoskeleton neural network produces high-level, real-time assistance torque commands to support the current activity. It takes the proprioceptive history (joint angles and angular velocities) measured by the IMU sensor on each leg as input and outputs joint target positions for the exoskeleton motors. The objective of the exoskeleton neural network is to learn a continuous control policy for the exoskeleton that reduces human muscle effort and maintains human balance during walking.
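As a rough illustration of the control-policy structure described above, the sketch below shows a minimal MLP that maps an IMU-derived proprioceptive history to joint target positions, together with a reward that penalizes muscle effort and loss of balance. The network sizes, history length, pelvis-tilt balance proxy, and reward weights are illustrative assumptions, not the authors' actual design.

```python
# Minimal sketch of the exoskeleton control policy (assumed dimensions and weights).
import torch
import torch.nn as nn

class ExoControlPolicy(nn.Module):
    """MLP policy: proprioceptive history -> joint target positions."""
    def __init__(self, n_joints=2, history_len=10, hidden=128):
        super().__init__()
        # Each history frame holds joint angles and angular velocities per actuated joint.
        obs_dim = history_len * n_joints * 2
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints),  # target position per actuated joint
        )

    def forward(self, obs_history):
        # obs_history: (batch, history_len, n_joints * 2), from IMU-derived joint states
        return self.net(obs_history.flatten(start_dim=1))


def reward(muscle_activations, pelvis_tilt, w_effort=1.0, w_balance=0.5):
    """Hypothetical reward: penalize muscle effort and deviation from upright posture."""
    effort = (muscle_activations ** 2).sum(dim=-1)
    balance = pelvis_tilt ** 2
    return -(w_effort * effort + w_balance * balance)


# Usage example: one batch element, 10 history frames, 2 joints x (angle, velocity).
policy = ExoControlPolicy()
targets = policy(torch.zeros(1, 10, 4))  # -> shape (1, 2)
```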
To evaluate the effectiveness of the proposed physics-informed, data-driven, reinforcement learning-based simulation framework, we conduct numerical experiments and comparisons to demonstrate its ability to predict realistic human muscle responses and to enable the exoskeleton to provide proper assistance for improving human mobility. Compared with ground-truth data reported in the literature, our simulation results align closely with actual human test results, validating the effectiveness of our approach and bridging the gap between simulation and human testing. Our results also show that the exoskeleton control policy reduces the human model's muscle activation.
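For the comparison step, the sketch below shows two plausible evaluation metrics: root-mean-square error between simulated and reference muscle trajectories, and percent reduction in mean muscle activation with versus without exoskeleton assistance. These particular metrics are assumptions for illustration; the abstract does not state which quantitative measures were used.

```python
# Minimal sketch of possible evaluation metrics (assumed, not the authors' stated measures).
import numpy as np

def rmse(simulated, reference):
    """Root-mean-square error between simulated and reference trajectories."""
    sim, ref = np.asarray(simulated), np.asarray(reference)
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

def activation_reduction(act_no_exo, act_with_exo):
    """Percent reduction in mean muscle activation due to exoskeleton assistance."""
    baseline = np.mean(act_no_exo)
    return 100.0 * (baseline - np.mean(act_with_exo)) / baseline
```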
Presenting Author: Shuzhen Luo, Department of Mechanical Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA.
Presenting Author Biography: Corresponding author of this paper.
Dr. Shuzhen Luo earned her Ph.D. in control science and engineering from Nankai University in 2018. Since 2018, she has worked as a research associate at Rutgers University and then as a postdoctoral fellow at the New Jersey Institute of Technology, UNC Chapel Hill, and NC State University. Her research interests lie in bio-inspired wearable assistive robotics, large-scale AI computation, and deep reinforcement learning-based autonomous control. Her current research is rooted in the translation of AI-powered wearable robots to healthcare (treatment and medicine for people with musculoskeletal disorders). She has more than 20 publications, including journal articles and conference papers. Her work is geared towards engineering solutions that augment human mobility for people with disabilities in community settings.
Authors:
Mingyi Wang, Department of Mechanical Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA.
Shuzhen Luo, Department of Mechanical Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA.
AI-Computing, Reinforcement Learning-Based Optimization Method for Closed-Loop Human-Robot Interactions
Paper Type
Technical Presentation