Session: 14-02: Applications and Reliability of Sensors
Paper Number: 145130
145130 - Methane Detection and Characterization With AI Sensor Fusion and Decision-Analytic Placement of Rapidly Deployable Sensors
Multiple studies and international assessments (e.g., reports of the United Nations' Intergovernmental Panel on Climate Change and agreements reached at COP26) demonstrate that immediate action is required to mitigate climate change. Methane (CH4) is responsible for up to one-third of the global warming experienced today. Because methane traps far more heat in the atmosphere per molecule than carbon dioxide (CO2), it is roughly 80 times as potent a warming agent as CO2 over the first 20 years after it is released.
Methane is difficult to measure, and site access raises cost and safety concerns. Many sites are in remote, hard-to-reach locations above tanks or along pipelines. Such sites usually do not have full-time operators, increasing the risk that methane emissions will go unnoticed. Today, to detect and quantify methane emissions at such locations, human operators often must drive long distances to investigate potential leaks using hand-held sensors. These remote-area assessments are costly in terms of personnel-hours, equipment, and safety.
Sensing by small unmanned aerial systems (sUAS) is a partial solution, but onboard sensors lose accuracy because of flight-induced motion and atmospheric turbulence. They also cannot monitor over extended periods due to limited battery power, a critical deficiency since many emissions are cyclical (e.g., once a day, driven by temperature cycles).
This paper describes a methane-specific tunable sensor robot that can be deployed during periodic inspections, or when a larger-area methane survey (such as from satellite data) indicates that a leak is present in a general area. The panoramic multimodal sensor robot can be rapidly deployed by a sUAS to gather a more localized, ground/vessel-level assessment of the leak. Protected by an impact-resistant tensegrity structure, the robot can remain on the ground or vessel, providing persistent monitoring. An automatic pan-tilt system increases its field of view and provides accurate concentration data as well as visuals. This sUAS and sensor robot combination increases data fidelity and expands the scope of what an sUAS-mounted system can survey on its own.
The system described uses embedded multimodal, multi-scale AI computation to pinpoint the source and size of the methane leak for use in maintenance, remediation planning, and environmental reporting. This paper describes deep learning models trained and evaluated on the GasVid dataset, collected at the Methane Emissions Technology Evaluation Center (METEC). Several AI models are compared and evaluated: the GasVid baseline, Vision Transformers (ViTs), and multi-sensor fusion. An experimental cascading model architecture categorizes videos into eight distinct levels of methane leakage. This cascading approach, with its uniform network structure, simplifies implementation and training, ultimately yielding high performance metrics. Cross-modal attention mechanisms enable the system to focus on the most relevant information across modalities, enhancing detection accuracy.
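The cascading idea can be illustrated with a minimal sketch: eight leak-rate levels are produced by a sequence of uniform binary stages, each asking whether the leak is at or above the next level, stopping at the first "no." The stage functions, thresholds, and units below are illustrative stand-ins, not the trained networks or cut-points from the paper.

```python
# Hypothetical sketch of a cascading eight-level classifier built from
# seven uniform binary stages. Each stage answers "is the leak at or
# above level k?"; the cascade stops at the first negative answer.
# Thresholds and the scalar feature are illustrative assumptions only.

def cascade_level(binary_stages, feature):
    """Walk the cascade; return the deepest level whose stage fires."""
    level = 0
    for stage in binary_stages:
        if not stage(feature):
            break  # first "no" ends the cascade
        level += 1
    return level  # 0 = no leak, 7 = highest leakage level

# Stand-in "classifiers": simple threshold tests on one scalar feature.
thresholds = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]  # illustrative cut-points
stages = [lambda x, t=t: x >= t for t in thresholds]

print(cascade_level(stages, 0.2))    # below every threshold -> level 0
print(cascade_level(stages, 5.0))    # passes four stages -> level 4
print(cascade_level(stages, 100.0))  # passes all stages -> level 7
```

Because every stage shares the same structure, the same network design and training recipe can be reused at each level, which is the simplification the cascading approach provides.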
This work also describes a hierarchical multimodal framework for optimizing the dynamic deployment of the best sensor combinations for detecting methane emissions at both the aerial level and the ground level, where the robots are deployed directly next to potential sources. A key aspect of this system is the integration of mesh networks, allowing multiple robots equipped with real-time sensing technology to operate collectively near the leak source. Within each mesh, a single robot carries an optical gas imaging camera, owing to that camera's higher cost and its specialized function of visualizing methane emissions. This configuration maximizes coverage and data accuracy while optimizing resource allocation, ensuring that high-cost imaging capabilities are used efficiently alongside more numerous standard sensors. The cost function for optimization considers the costs of false-positive and false-negative errors. The Expected Value of Information (EVI) is used to identify the most valuable type and location of physical sensors to deploy, increasing the decision-analytic value of the sensor network.
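The EVI calculation for a single binary sensor can be sketched as follows: compare the best expected cost of an act/ignore decision without the sensor against the expected cost after Bayes-updating on each possible reading. The prior, sensor error rates, and dollar figures below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the Expected Value of Information (EVI) for one binary
# methane sensor in an act/ignore decision. All numbers are illustrative.

def evi(p_leak, tpr, fpr, cost_act, cost_miss):
    """EVI = expected cost without the sensor - expected cost with it.

    cost_act  : cost of acting (e.g., dispatching a crew)
    cost_miss : cost of an undetected leak (false-negative consequence)
    tpr, fpr  : sensor true- and false-positive rates
    """
    # Best expected cost with no sensor: always act, or never act.
    cost_without = min(cost_act, p_leak * cost_miss)

    # With the sensor: condition on each possible reading.
    p_pos = tpr * p_leak + fpr * (1 - p_leak)  # P(reading = positive)
    cost_with = 0.0
    for reading, p_reading in (("pos", p_pos), ("neg", 1.0 - p_pos)):
        if p_reading == 0:
            continue
        p_like = tpr if reading == "pos" else (1.0 - tpr)
        posterior = p_like * p_leak / p_reading          # Bayes update
        cost_with += p_reading * min(cost_act, posterior * cost_miss)

    return cost_without - cost_with  # expected cost saved by the reading

# Illustrative numbers: 10% prior leak probability, a 95%-TPR / 5%-FPR
# sensor, $1k dispatch cost, $50k undetected-leak cost.
print(round(evi(0.10, 0.95, 0.05, 1_000, 50_000), 2))
```

Ranking candidate sensor types and locations by this quantity, net of deployment cost, is the decision-analytic criterion the abstract describes: a sensor is worth placing where its readings most reduce the expected cost of false-positive and false-negative errors.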
Presenting Author: Alice Agogino University of California at Berkeley
Presenting Author Biography: Alice M. Agogino is the Roscoe and Elizabeth Hughes Professor of Mechanical Engineering Emeritus at the University of California at Berkeley and is affiliated faculty in the Energy Resources Group and Women & Gender Studies. She served as Founding Chair of the Graduate Group in Development Engineering, Education Director at the Blum Center for Developing Economies, Chair of the UC Berkeley Academic Senate, and Associate Dean of Engineering. She has supervised 197 MS projects/theses, 66 doctoral dissertations, and numerous undergraduate researchers. Agogino has authored over 300 peer-reviewed publications and has won numerous teaching, mentoring, best paper, and research awards. She is a member of the National Academy of Engineering (NAE) and has served on a number of committees of the National Academies. She received a Ph.D. from Stanford University, an M.S. from UC Berkeley, and a B.S. from the University of New Mexico.
Authors:
Alice M. Agogino University of California at Berkeley
Adam Goldstein Squishy Robotics, Inc.
Ian Yijun Chen Amazon Web Services
R. Lily Hu Google
Yu Fei Huang University of California, Berkeley
Douglas Hutchings Squishy Robotics, Inc.
Christiana Kang University of California, Berkeley
Angeline Ying Kee Lee University of California, Berkeley
Yuxin Miao University of California, Berkeley
Jeffrey Millan University of California, Berkeley
Vivek Rao Duke University
Methane Detection and Characterization With AI Sensor Fusion and Decision-Analytic Placement of Rapidly Deployable Sensors
Paper Type
Technical Paper Publication
