CPS: Small: Learning How to Control: A Meta-Learning Approach for the Adaptive Control of Cyber-Physical Systems
Lead PI: Michael Lemmon
Abstract

Internet-of-Things (IoT)-enabled manufacturing systems form a particularly important class of cyber-physical systems (CPS). These systems have a physical fabric woven from a heterogeneous mix of machines that carry and process materials across the factory floor, and a cyber fabric of wired and wireless digital communication networks that provides global visibility of the data streams used to manage the physical fabric's workflows. IoT-enabled systems are complex CPS with a great deal of modeling uncertainty. Both fabrics are open to an external environment that can shift abruptly and unpredictably; such shifts may be caused by changes in customer work orders or by environmental changes that congest the cyber fabric's wireless networks. The dynamics of the two fabrics are coupled, since congestion in the physical fabric may create congestion in the cyber fabric and vice versa. This complexity and uncertainty stand as major obstacles to the broader acceptance of IoT technologies by U.S. manufacturers. To lower the risk of adopting IoT technologies, this project proposes developing meta-learning methods that learn how to control the complex CPS found in IoT-enabled manufacturing. The project will develop algorithms and software implementations of this meta-learning approach and will benchmark its performance on a testbed that captures the complex interactions between an IoT-enabled manufacturing system's physical and cyber fabrics.

This project uses meta-learning algorithms to control complex and uncertain cyber-physical systems. The approach adopts a new type of machine learning model called a behaviorally ordered abstraction (BOA). This model has greater cross-task generalization capacity, better sample efficiency, and greater interpretability than other deep learning methods. The modeling approach allows the project to address the robust stability of deep reinforcement learning by embedding meta-learning in a generalized regulator that learns "how" to configure controller synthesis across all tasks. The project will evaluate this "learning-how-to-control" framework on a multi-robot testbed that mimics Wi-Fi-connected robots moving materials across a factory floor, and will investigate how to transfer the models and policies learned on the testbed to IoT-enabled factories at local manufacturing facilities.
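
As a rough illustration of the "learning-how-to-control" idea, the sketch below runs a Reptile-style meta-learning loop that learns an initial feedback gain for a family of simple scalar control tasks, so that only a few adaptation steps are needed on a newly drawn task. The task distribution, quadratic cost, normalized finite-difference inner update, and all parameter values are hypothetical stand-ins chosen for readability; they are not the project's BOA models or generalized-regulator formulation.

# Illustrative sketch only (not the project's method): a Reptile-style
# meta-learning loop that learns an initial feedback gain k0 for a
# hypothetical family of scalar regulation tasks
#   x[t+1] = a*x[t] + b*u[t],  u[t] = -k*x[t].
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(k, a, b, x0=1.0, horizon=20, r=0.1):
    # Finite-horizon quadratic cost of the closed loop x[t+1] = (a - b*k)*x[t].
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        cost += x * x + r * u * u
        x = a * x + b * u
    return cost

def adapt(k, a, b, steps=5, step_size=0.1, eps=1e-4):
    # Inner loop: normalized finite-difference descent on a single task.
    for _ in range(steps):
        grad = rollout_cost(k + eps, a, b) - rollout_cost(k - eps, a, b)
        k -= step_size * np.sign(grad)
    return k

def sample_task():
    # Hypothetical task distribution: open-loop unstable scalar plants (a, b).
    return rng.uniform(1.0, 1.5), rng.uniform(0.5, 1.5)

# Outer (meta) loop: nudge the shared initialization k0 toward each
# task-adapted gain (Reptile update) so new tasks adapt in a few steps.
k0, meta_lr = 0.0, 0.1
for _ in range(200):
    a, b = sample_task()
    k0 += meta_lr * (adapt(k0, a, b) - k0)

# A freshly sampled task should now be handled well after a short adaptation.
a, b = sample_task()
print("meta-learned gain:", round(k0, 3),
      "cost before adapting:", round(rollout_cost(k0, a, b), 2),
      "cost after adapting:", round(rollout_cost(adapt(k0, a, b), a, b), 2))

The normalized (sign-based) inner step is an assumption made to keep the toy example numerically stable on open-loop unstable plants; any standard gradient-based or model-based adaptation rule could take its place.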

Performance Period: 06/15/2023 - 05/31/2026
Institution: University of Notre Dame
Sponsor: National Science Foundation
Award Number: 2228092