Session: Government Agency Student Posters
Paper Number: 174000
Optimizing Energy Usage for Controlled Environment Agriculture Systems With Reinforcement Learning
The ability to produce nutritious food year-round is constrained by the local climate, available farmland, and access to water and other key resources. These factors contribute to ongoing challenges in food and nutrition insecurity, particularly in regions with harsh weather conditions, short growing seasons, or poor soil quality. Controlled environment agriculture (CEA) offers a promising solution by enabling crop production in enclosed, climate-controlled facilities that operate independently of the external environment. By decoupling food production from geographic and climatic factors, CEA can support reliable, high-yield harvests in both urban and rural areas. However, CEA carries high energy costs due to the lighting and HVAC demands necessary for crop growth. Most CEA facilities rely on rule-based controls, which are often inefficient and inflexible. In contrast, advanced control through deep reinforcement learning (RL) has shown success in applications such as self-driving vehicles and legged robots. While RL has been used for greenhouse automation and building HVAC optimization, its application in CEA remains limited. The intent of this research is to explore advanced optimal control in CEA to minimize energy demand while maintaining plant health and productivity.
To address the energy-efficiency challenges in CEA, this study investigates whether an RL control strategy, specifically a Deep Q-Network (DQN) algorithm informed by a resistance-capacitance (RC) grey-box model, can effectively optimize HVAC and grow-lighting operations. Unlike rule-based systems, the proposed RL framework learns adaptive control through iterative interaction with the environment. However, standard RL methods often require extensive data and long training times. To support the learning process, we incorporate a physics-informed RC grey-box model that captures system dynamics by simulating heat transfer and energy usage within the facility; this model serves to inform and train the DQN agent. The RL controller uses state variables such as envelope and internal temperatures, and inputs including HVAC and lighting power, to predict future states and associated rewards.
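As a minimal sketch of the kind of RC grey-box dynamics described above, the snippet below implements a simple two-resistance, two-capacitance ("2R2C") thermal model with one node for the envelope and one for the interior air. The structure, parameter values, and function name are illustrative assumptions, not the fitted model used in the study:

```python
import numpy as np

# Hypothetical 2R2C grey-box model: one thermal node for the envelope
# (T_env) and one for the interior air (T_in). All parameter values are
# illustrative placeholders, not fitted to any real facility.
R_OE = 0.05   # K/W, thermal resistance: outdoor air <-> envelope
R_EI = 0.02   # K/W, thermal resistance: envelope <-> interior air
C_ENV = 5e5   # J/K, envelope thermal capacitance
C_IN = 1e5    # J/K, interior-air thermal capacitance
DT = 60.0     # s, integration time step

def rc_step(T_env, T_in, T_out, q_hvac, q_light):
    """Advance the 2R2C model one explicit-Euler step.

    q_hvac  : HVAC heat input to the interior node, W (negative = cooling)
    q_light : heat released by the grow lights, W
    """
    dT_env = ((T_out - T_env) / R_OE + (T_in - T_env) / R_EI) / C_ENV
    dT_in = ((T_env - T_in) / R_EI + q_hvac + q_light) / C_IN
    return T_env + DT * dT_env, T_in + DT * dT_in
```

Iterating `rc_step` with candidate HVAC and lighting power trajectories yields the predicted temperature states (and, via the power inputs, the energy use) that a value-based agent can score.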
A theoretical CEA facility located in Denver, CO, is used to generate the energy consumption data needed to train the RL agent. The building is a three-story plant factory in which growth chambers are isolated from outdoor conditions, allowing precise environmental control. The facility uses full-spectrum LEDs to meet lighting demands and a variable air volume (VAV) HVAC system with individual air handling units (AHUs) for precise temperature control. An additional AHU serves the germination areas. Heating and cooling needs for the entire building are met by a single boiler and an air-cooled chiller. These system components and their energy consumption patterns are modeled in EnergyPlus and used to generate the data that trains the RC model and RL agent.
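To illustrate the value-based learning loop the DQN builds on, the sketch below uses tabular Q-learning (the same Bellman target a DQN regresses a network toward) on a toy problem: interior temperature discretized into bins, grow lights adding heat each step, and an HVAC action removing it. The dynamics, reward weights, and hyperparameters are all made-up illustrations, not the study's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the DQN setup: every step the grow lights add one bin
# of heat, and the action removes 0, 1, or 2 bins via HVAC cooling.
N_STATES = 10         # discretized temperature bins
N_ACTIONS = 3         # cooling intensity: 0, 1, or 2 bins removed
T_SET = 5             # target temperature bin
ENERGY_COST = 0.3     # penalty per unit of cooling power
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def env_step(state, cool):
    """Toy dynamics: +1 bin of lighting heat, -`cool` bins of cooling.
    Reward penalizes both energy use and deviation from the setpoint."""
    nxt = int(np.clip(state + 1 - cool, 0, N_STATES - 1))
    reward = -ENERGY_COST * cool - abs(nxt - T_SET)
    return nxt, reward

for _ in range(300):                      # episodes
    s = int(rng.integers(N_STATES))       # random initial temperature
    for _ in range(50):                   # steps per episode
        # epsilon-greedy action selection
        if rng.random() < EPS:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(Q[s]))
        s2, r = env_step(s, a)
        # Bellman update; a DQN regresses a network toward this target
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
        s = s2

greedy_policy = Q.argmax(axis=1)
```

After training, the greedy policy cools aggressively when far above the setpoint and spends no energy when below it, which is the energy/comfort trade-off the full controller must learn at facility scale.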
We expect that the DQN agent, informed by the RC grey-box model and trained on EnergyPlus data, will outperform conventional rule-based control methods in energy efficiency. Specifically, we anticipate a measurable reduction in HVAC and lighting energy consumption while maintaining thermal conditions suitable for plant growth. We also expect the RL agent to adapt to varying weather conditions and operational constraints, enabling year-round deployment across diverse climates. The physics-informed RC model is expected to enhance learning stability, leading to faster convergence and more reliable control behavior than black-box RL methods.
This modeling approach offers an energy-efficient solution for CEA systems. By applying reinforcement learning to HVAC and lighting systems, we introduce advanced control strategies that improve energy efficiency and system resilience. For the CEA industry, this approach provides adaptive control strategies that respond to dynamic environmental conditions, ultimately lowering operational costs. From a reinforcement learning perspective, this study contributes to an emerging field by demonstrating how RL can be applied to resource-constrained agricultural environments. More broadly, by improving the efficiency and scalability of CEA systems, this research supports food and nutrition security by making it more feasible to grow fresh, nutrient-rich crops in regions where traditional agriculture is limited by climate and infrastructure.
Presenting Author: Rona Lei Duldulao, University of Hawaii at Manoa
Presenting Author Biography: Rona Lei Duldulao is an undergraduate student majoring in Mechanical Engineering at the University of Hawaiʻi at Mānoa. She recently conducted research at the University of Wyoming through the NSF Research Experiences for Undergraduates (REU) program, where she worked on optimizing energy usage in Controlled Environment Agriculture (CEA) systems using reinforcement learning. Rona is the founder and project lead of Team ʻĀINA, UH Mānoa’s Farm Robotics Challenge team, where she leads interdisciplinary efforts to develop robotic solutions for sustainable agriculture. She plans to pursue graduate studies in Agricultural Engineering, with a focus on robotics, automation, and machine systems for agricultural technologies. Rona is excited to share her team’s progress at ASME and connect with others at the intersection of robotics, energy, and sustainable food systems.
Authors:
Rona Lei Duldulao, University of Hawaii at Manoa
Andrew Martin, University of California, Davis
Elisha Ntakiritimana, Texas Tech University
Liping Wang, University of Wyoming
Paper Type
Government Agency Student Poster Presentation
