Session: 06-11-01: Applications of Artificial Intelligence/Machine Learning in Aerospace Engineering
Paper Number: 166206
Development of an Image Classification System for Use in an Autonomous Electric Aircraft Tug
The airport environment represents both an exceptional opportunity and a unique challenge for autonomous vehicles such as the electric, autonomous tug being developed by researchers. On one hand, an airport, including the apron, taxiways, and other paved surfaces, is more tightly controlled and predictable than the environments that autonomous passenger vehicles encounter, such as freeways or city streets. On the other hand, much less work has been done to characterize the airport environment and develop the datasets needed for autonomous vehicles to operate in such a space.
This research contributes to that effort by developing a labeled image dataset of a commercial airport for use with image recognition software, thereby enabling an aircraft tug to navigate autonomously in an airport environment. Researchers, with the help of assistants, gathered images from various sources, including photographs, videos, and scans taken during airport visits, and uploaded those and other collected images to a central database. The resulting dataset contains over 650 images depicting not only aircraft common to the airport but also other airport equipment such as fuel trucks, snow removal equipment, airport personnel, ground markings, and more. All objects that an electric autonomous tug must identify were labeled manually, and the labeled images were then used to fine-tune the pretrained YOLOv11 model provided by Ultralytics.
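To make the training setup concrete, the following is a minimal sketch of how manually labeled images are typically prepared and used to fine-tune a pretrained Ultralytics YOLO model. The label-parsing helper, file names, and training parameters here are illustrative assumptions, not details taken from the paper.

```python
def parse_yolo_label(line):
    """Parse one line of a YOLO-format label file.

    Each line is 'class x_center y_center width height', with the four
    coordinates normalized to [0, 1] relative to the image size.
    """
    parts = line.split()
    cls = int(parts[0])
    coords = [float(v) for v in parts[1:5]]
    if not all(0.0 <= v <= 1.0 for v in coords):
        raise ValueError(f"coordinates must be normalized: {line!r}")
    return cls, coords


# Fine-tuning starts from a pretrained checkpoint so the custom airport
# classes can reuse general visual features. Requires `pip install
# ultralytics`; checkpoint and dataset names below are placeholders:
#
#   from ultralytics import YOLO
#   model = YOLO("yolo11n.pt")                       # pretrained YOLOv11 weights
#   model.train(data="airport.yaml", epochs=100)     # custom labeled dataset
```

A label line such as `"2 0.5 0.5 0.25 0.1"` would describe class index 2 centered in the image, occupying a quarter of its width and a tenth of its height.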
After developing an initial prototype of the model, testing was conducted to evaluate the model’s ability to recognize and track objects at the Provo Municipal Airport. The trained YOLOv11-based model demonstrated a reasonable ability to identify DA20, DA40, and Seminole aircraft, correctly distinguishing between them in many cases. However, its reliability proved inconsistent at long ranges and when aircraft are partially obscured. To counteract some of these inconsistencies, a two-step classification process was implemented in which the standard pretrained YOLOv11 model first detects any aircraft, which are then cropped and passed to the customized model for classification by type. This approach has proved useful but still requires further refinement to ensure high levels of accuracy.
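The two-step detect-then-classify process described above can be sketched as follows. This is a hypothetical illustration assuming the Ultralytics Python API and NumPy-style image arrays; the `detector`, `classifier`, and class names are placeholders, not the authors' actual code.

```python
def clamp_box(box, width, height):
    """Clamp an (x1, y1, x2, y2) box to image bounds; None if degenerate."""
    x1, y1, x2, y2 = box
    x1, y1 = max(0, int(x1)), max(0, int(y1))
    x2, y2 = min(width, int(x2)), min(height, int(y2))
    return (x1, y1, x2, y2) if x2 > x1 and y2 > y1 else None


def classify_aircraft(image, detector, classifier):
    """Two-stage pipeline: generic detection, then type classification.

    Stage 1: a stock COCO-pretrained YOLO model flags anything it labels
    "airplane". Stage 2: each detection is cropped and passed to the
    custom model, which assigns an aircraft type (e.g. DA20, DA40).
    Returns a list of (box, type_label) pairs.
    """
    height, width = image.shape[:2]
    results = []
    for result in detector(image):
        for det in result.boxes:
            if result.names[int(det.cls)] != "airplane":
                continue  # stage 1 keeps only generic aircraft detections
            box = clamp_box(det.xyxy[0].tolist(), width, height)
            if box is None:
                continue  # skip boxes that collapse after clamping
            x1, y1, x2, y2 = box
            crop = image[y1:y2, x1:x2]  # stage 2 sees only the aircraft
            type_result = classifier(crop)[0]
            results.append((box, type_result.names[int(type_result.probs.top1)]))
    return results
```

Cropping before the second stage means the custom classifier never has to localize aircraft itself, which is one plausible reason this arrangement helps with distant or partially obscured targets.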
Because the classification model was trained mostly on images of aircraft, other airport infrastructure such as standard tugs, fuel trucks, and runway markings remain areas for improvement. Future work will focus on expanding the dataset and refining the model to improve its stability and effectiveness when used on an autonomous tug, ultimately contributing to a more reliable system for autonomous airport navigation.
Presenting Author: Max Ostler Utah Valley University
Presenting Author Biography: Max Ostler is a Computer Science and Information Technology student at Utah Valley University. His interests include software development, machine learning, and cybersecurity. He is currently working on projects involving object classification, web development, and systems programming.
Authors:
Alejandro Renteria Chavira Utah Valley University
Max Ostler Utah Valley University
Brett Stone Utah Valley University
Matt Jensen Utah Valley University
George Rudolph Utah Valley University
Paper Type
Technical Paper Publication
