High-level Perception and Control for Autonomous Reconfigurable Modular Robots



The objective of this research is to develop the theory, hardware, and computational infrastructure for automatically transforming user-defined, high-level tasks into correct, low-level, perception-informed control and configurations for modular robots. Modular robots are composed of simple individual modules with limited sensing and actuation; while each module can locomote in the environment on its own, connecting multiple modules in different configurations allows modular robots to perform complex actions such as climbing, manipulating objects, and traveling in unstructured environments. Furthermore, modular robots can divide into multiple independent robots and then reassemble into a single larger structure. This project addresses the design, perception, and control of a physical system capable not only of locomotion in its environment but also of such reconfiguration in order to achieve high-level task success. Novel algorithms and theory will be developed that inherently exploit the simple nature of each module and the modular robot's ability to reconfigure. Probabilistic information from perception algorithms will be tightly integrated with control tasks through the use of unique metrics, thus providing guarantees on robot behavior and enabling reasoning about task completion and optimality. This poster describes progress in Year 1 of the project: the hardware development, the modular robot simulator developed for controller design, and the initial set of controllers created in a competition between Cornell and UPenn.

License: CC-2.5
Submitted by Hadas Kress-Gazit on