Session: 16-01-01: Government Agency Student Poster Competition
Paper Number: 150576
150576 - Learning-Finding-Giving: A Natural Vision-Speech-Based Approach for Robots to Assist Humans in Human-Robot Collaborative Manufacturing Contexts
Human-Robot Collaboration (HRC) is becoming an increasingly popular research topic and industry tool. In many types of jobs, such as manufacturing, robots can handle tasks that involve dangerous conditions, harmful materials, or extreme precision. Humans, on the other hand, can apply their creativity and adaptability to handle tasks dynamically, but they require high safety standards. HRC can improve and enhance current manufacturing processes by having robots provide collaborative assistance to humans, allowing for increased productivity and minimal time waste. Handover tasks, in which a robot must hand over or otherwise interact with different objects, are essential and common within HRC applications, since the use of mechanical appliances and tools is unavoidable in manufacturing. This presents a goal that research can help reach. A typical handover task can be performed in three general steps: object identification, object grasping, and object handover.

In this work, we propose a learning-finding-giving framework based on computer vision and speech recognition approaches for robots to dynamically identify and deliver tools to human partners in collaborative tasks. Learning refers to object detection via the YOLOv5 algorithm, which is used to identify common mechanical tools in HRC. To teach robots to understand the target objects, a custom dataset is created from over 2000 images of mechanical tools.
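As a minimal sketch of this Learning step, a fine-tuned YOLOv5 detector can be loaded and queried as below via the standard Ultralytics hub interface; the weights file name `tools_best.pt` and the test image are hypothetical, standing in for the model trained on the custom dataset:

```python
import torch

# Load a fine-tuned YOLOv5 model through the Ultralytics hub wrapper.
# "tools_best.pt" is a hypothetical weights file produced by training
# on the custom ~2000-image mechanical-tool dataset.
model = torch.hub.load("ultralytics/yolov5", "custom", path="tools_best.pt")

# Run inference on a single workspace image.
results = model("workspace.jpg")

# Each detection row: [x_min, y_min, x_max, y_max, confidence, class_id].
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}: conf={conf:.2f}, box={box}")
```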
Finding refers to using the trained model together with hand-eye calibration to locate the common tools and return their accurate real-world coordinates.
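A minimal sketch of this coordinate retrieval, assuming a pinhole camera model with known intrinsics `K`, a depth reading for each detection, and a camera-to-robot-base transform `T_base_cam` obtained from hand-eye calibration; the numeric values here are placeholders, not the paper's calibration results:

```python
import numpy as np

# Hypothetical calibration results: camera intrinsics K and the 4x4
# camera-to-robot-base transform T_base_cam from hand-eye calibration.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_base_cam = np.eye(4)  # placeholder; a real calibration fills this in

def pixel_to_base(u, v, depth):
    """Back-project pixel (u, v) with known depth (meters) into camera
    coordinates, then map the point into the robot base frame."""
    # Pinhole back-projection: p_cam = depth * K^-1 [u, v, 1]^T
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Homogeneous transform into the base frame.
    p_base = T_base_cam @ np.append(p_cam, 1.0)
    return p_base[:3]

# Example: center of a detected tool's bounding box at 0.8 m depth.
print(pixel_to_base(352.0, 261.0, 0.8))
```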
Giving refers to the robot acting on speech commands given by the human: using the generated real-world coordinates, the robot moves to the location of the requested tool and hands that tool to the human.
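A minimal sketch of this speech trigger, assuming the off-the-shelf SpeechRecognition Python library (the paper does not name its speech toolkit) and an illustrative tool vocabulary:

```python
import speech_recognition as sr

TOOLS = {"screwdriver", "wrench", "hammer", "pliers"}  # illustrative set

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening for a tool request...")
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio).lower()
    requested = next((t for t in TOOLS if t in command), None)
    if requested:
        print(f"Requested tool: {requested}")
        # Hand off to the Finding step: look up the tool's real-world
        # coordinates and command the robot to grasp and deliver it.
    else:
        print("No known tool mentioned.")
except sr.UnknownValueError:
    print("Could not understand the speech command.")
```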
A real-world experiment, in which the robot, the human, the tools, the object to work on, and a microphone exist together in a shared workspace, is conducted to validate the proposed approach. The human's task is to repair the object while requesting necessary tools from the collaborative robot using speech via the microphone. This research aims to develop an approach for safe handovers and HRC implementations. Experimental results and evaluations show that the proposed solution allows robots to dynamically understand and grasp tools with high accuracy, effectively assisting human teammates in handover tasks. Future work includes improving the Learning aspect's object recognition to increase its accuracy and improving hand-eye calibration for more accurate positioning and coordinate retrieval to reinforce the Giving aspect. A real-world human-robot collaboration demo is available at: https://www.youtube.com/watch?v=ucAgSIK6crA.
Presenting Author: Emilio Herrera, Montclair State University
Presenting Author Biography: Student Research Assistant in the CRoSS Lab at Montclair State University, which focuses on empowering human-robot collaboration.
Authors:
Emilio Herrera, Montclair State University
Weitian Wang, Montclair State University
Paper Type: Government Agency Student Poster Presentation