Fishes are masters of locomotion in fluids owing to their highly integrated biological sensing, computing, and motor systems. They are adept at collecting and exploiting rich information from the surrounding fluid for underwater sensing and locomotion control. Inspired and informed by fish swimming, this research aims to develop a novel bio-inspired cyber-physical system (CPS) that integrates the "physical" robot fish and fluid environment with the "cyber" robot control and machine learning algorithms. Specifically, this CPS includes i) a pressure-sensory skin with distributed sensing capability to collect flow information; ii) control and learning algorithms in which central pattern generators (CPGs), modulated by pressure sensory feedback, compute the robot motor signals; iii) a robot fish platform to implement and validate the CPS framework for underwater sensing and control tasks; and iv) experimental and computational methods to investigate and model the underlying fluid physics. This CPS will have immediate impacts on core CPS research areas such as design, control, data analytics, autonomy, and real-time systems. It will also significantly impact a wide range of engineering applications that demand distributed sensing, control, and adaptive actuation. Examples include human-machine interaction, medical robots, unmanned aerial/underwater vehicles, drug dosing, medical therapeutics, and space-deployable structures, among others. Leveraging the multidisciplinary nature of this research, this award will support a variety of educational and outreach activities. In particular, a set of activities aimed at broadening participation in engineering will be carried out.
This research project integrates multiple CPS technologies to develop bio-inspired techniques for the swarm control of robotic fish. These include innovations in sensing modality via a stretchable, pressure-sensitive skin, physics-inspired learning, and swarm control. The project will first develop a distributed pressure-sensitive synthetic skin, which will be installed on robotic fishes to map the pressure distribution on their body and caudal-fin surfaces. The distributed pressure information will then be used in a feedback control policy that modulates CPGs to produce caudal-fin motion patterns of the robotic fishes. The control policy and the caudal-fin motion patterns will be optimized via reinforcement learning, first in a surrogate fluid environment and then in the true fluid environment. The surrogate fluid environment will be developed using data-driven non-parametric models informed by physics-based hydrodynamic models of fish swimming, trained using combined experimental and Computational Fluid Dynamics (CFD) simulation data. The above control-learning methods will also be used to achieve efficient schooling in a group of robotic fishes, each controlled by a CPG, which interact with one another through the surrounding fluid and pressure sensory feedback. The optimized swimming/schooling performance of the robotic fishes and the underlying physics will be studied using CFD simulation. Together, this research will advance CPS knowledge on: 1) the design and creation of electronic and sensor materials and devices for robot skin applications; 2) the development of data-efficient, physics-informed learning methods for robotic systems that operate in complex environments, especially leveraging recent progress in deep learning to exploit the spatial and temporal richness of the pressure data for underwater sensing and robot control; and 3) the flow physics and modeling of fish swimming.
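The CPG-with-pressure-feedback loop described above can be illustrated with a single phase oscillator. This is a minimal sketch in which the feedback gain, the scalar pressure summary, and the motor mapping are hypothetical placeholders, not the project's actual controller:

```python
import math

def cpg_step(phase, freq, amp, pressure_fb, dt=0.01, gain=0.5):
    """One Euler step of a single phase-oscillator CPG.

    pressure_fb is assumed to be a scalar summary of the distributed
    pressure sensors (the preprocessing is a placeholder); it perturbs
    the oscillator phase so the commanded caudal-fin angle adapts to
    the surrounding flow.
    """
    phase = (phase + dt * (2.0 * math.pi * freq + gain * pressure_fb)) % (2.0 * math.pi)
    fin_angle = amp * math.sin(phase)  # motor command for the caudal fin
    return phase, fin_angle
```

In a school, each robot would run its own oscillator, with coupling arising implicitly through the surrounding fluid via the pressure feedback term.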
The objective of this Computer and Information Science and Engineering (CISE) Research Initiation Initiative (CRII) proposal is to develop a cognizant learning framework for cyber-physical systems (CPS) that incorporates risk-sensitive and irrational decision making. The necessity for such a framework is exemplified by two observations. First, CPS such as self-driving cars will share an environment with other CPS and human users. Human drivers demonstrate a heightened sensitivity to changes in speed and can easily adapt to changes in the environment and road conditions, which makes it essential for a CPS to be able to recognize such non-rational behaviors. Second, the large amounts of data generated during operation and limited access to models of the environment can make a CPS reliant on machine learning algorithms for decision making to meet performance requirements such as reachability and safety. Our research will be grounded in improving the behavior of autonomous vehicles in realistic traffic situations. Outcomes from this effort will contribute to the development of a research paradigm unifying control, learning, and behavioral economics. Students at a Primarily Undergraduate Institution will benefit by being directly involved in all aspects of the research process. Research tasks will involve a team of undergraduate students in a vertically integrated manner in which more experienced students mentor newer team members.
The proposed effort comprises two thrusts. Thrust 1 will construct utilities that encode CPS performance objectives consistent with practical models of risk-sensitive and irrational decision making. Strategies will be learned by formulating and solving a reinforcement learning problem that maximizes this utility. Methods will be developed to enable learned strategies to adequately account for the delays between evaluation and execution of actions that arise from the physical components of the CPS. Thrust 2 will design algorithms to learn decentralized cognizant strategies when multiple CPS operate in the same environment. To improve reliability in uncertain environments, or when feedback is sparse, techniques to identify the contribution of each CPS to a shared utility will be developed. Solution methodologies will be evaluated empirically through extensive experiments and theoretically by deriving probabilistic performance guarantees. The PI will develop a research agenda and new undergraduate curriculum in CPS and machine learning at Western Washington University (WWU). Research and educational goals of the project will be integrated through the CARLA simulator for autonomous vehicle research and the F1/10 Autonomous Vehicle platform. The multidisciplinary scope of the project will be emphasized in outreach efforts through Student Outreach Services and STEM Clubs at WWU to encourage and broaden participation from traditionally underrepresented student groups.
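One concrete way to encode such risk-sensitive, non-rational objectives is the Kahneman-Tversky value function from prospect theory, which is concave over gains, convex over losses, and loss-averse. Using it to shape a reinforcement learning reward, as sketched below, is an illustrative assumption rather than the project's chosen utility; the default parameters are the commonly cited 1992 estimates:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: risk-averse over gains (x >= 0),
    risk-seeking over losses (x < 0), with loss aversion factor lam.
    A cognizant policy would maximize the expected prospect_value of
    its returns instead of the raw returns."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)
```

Losses loom larger than gains under this utility: a loss of 1 is valued at -2.25, while a gain of 1 is valued at only 1.0.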
This cooperative agreement with MetroLab Network aims to build capacity for the Civic Innovation Challenge (CIC), a research and action competition in the Smart & Connected Communities (S&CC) domain, as well as the broader S&CC research ecosystem. Building on NSF's S&CC program, the CIC aims to flip the community-university dynamic, asking communities to identify civic priorities ripe for innovation and to partner with researchers to address those priorities. The CIC will help bridge the gap between research and deployment, ensuring that research is conducted in a context that allows for realistic testing and evaluation of impact. Features include engagement of other S&CC-focused funders as partners; shortening the typical timeline of civic research projects; and fostering cohorts organized around specific problem statements to encourage sharing of information across teams. MetroLab will work with Smart Cities Lab to support the CIC through outreach, capacity building, support and programming for finalists and winners, and joint-funder engagement.
The CIC is an opportunity to transform civic research. It will enable data-driven, research-informed communities, engage residents in the process, and build a more cohesive research-to-innovation pipeline by finding synergies between funders of use-inspired research and funders of civic innovation. It will lay a foundation for a broader and more fluid exchange of research interests and civic priorities that will create new instances of collaboration and introduce new areas of technical and social scientific discovery. The CIC is designed to support transformative projects while fostering a collaborative spirit.
The specific activities supporting the CIC include "boot camps" for teams of communities and universities to strengthen their partnerships and projects. It will also involve the cultivation of "communities-of-practice" oriented around specific domains, to facilitate knowledge-sharing and cross-site collaboration. The long-term impacts of this work will include deeper collaborations across sectors and regions; improved information sharing and best practices development; and greater impact from research outcomes in cities and communities around the United States.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Over the last three decades we have witnessed historic missions to Mars, where unmanned space vehicles successfully landed on and explored the Martian surface in search of evidence of past life. Recently, reusable rockets have captured the public's imagination by delivering payloads to orbit and then landing safely back on Earth. A common requirement for these space vehicles is that they must operate autonomously during atmospheric entry, descent, and landing (EDL). Furthermore, the first time they are ever tested as a fully integrated system is during the actual mission. This makes EDL extremely challenging and risky. A key technology that has enabled these recent successful space missions is the onboard software that controls the vehicle's motion during EDL, which must work properly under all expected variations in mission conditions. Motivated by these effective point-design solutions from aerospace engineering, our research aims to develop a unified algorithmic framework for motion planning and control for a large class of Earth-based autonomous vehicles that operate in challenging environments with increasingly complex performance requirements. Applications include autonomous aerial, ground, and underwater vehicles serving many safety-critical tasks, for example, search and rescue, disaster relief, terrain mapping and monitoring, and toxic spill cleanup, to name a few.
Our main hypothesis is that optimization-based motion planning and control provides an effective and unifying mathematical framework that is able to handle the autonomy problems encountered in space applications, and that this framework can be generalized to a large variety of autonomous vehicles. Our project aims to build this optimization-based framework by leveraging invaluable insights and experiences from NASA's flagship missions to Mars. These missions had to succeed on their first attempt, and any failure would have led to catastrophic results, i.e., there was no margin for error. Hence, Mars landing can be considered a prototypical benchmark problem, as it encompasses complexities that one would also face with other (Earth-based) autonomous vehicles: switching between a variety of operational modes; limited fuel, power, and mission time; state and control constraints; and uncertainties in situational awareness, sensing, actuation, vehicle dynamics, and the environment. Our project aims to provide algorithmic foundations for optimization-based motion planning and control. It has both a theoretical component to produce fundamental results that can be used to build trustworthy algorithms and a comprehensive experimental component to produce the empirical evidence necessary to evaluate these algorithms on real-world examples, i.e., autonomous quadrotors and underwater vehicles. Our research team is assembled to build on the lessons learned in space applications and to develop optimization-based planning and control methods that can seamlessly be transitioned to practice.
This project will develop novel, body-worn, flexible sensors fabricated using low-cost inkjet printing technology on thin film polymers, develop novel algorithms capable of automatically detecting health events in different contexts, and develop a novel data reliability metric by analyzing sensor and context data in real-time. The project will produce practice components for test and validation in a clinical setting with cardiac patients to determine their effectiveness for monitoring heart conditions. If successful, the project will provide patients and clinicians with a tool to improve health monitoring in natural environments. The research is expected to impact additive manufacturing methods, flexible electronics, health monitoring, and smart and connected communities initiatives. It will also provide training for undergraduate and graduate students and expose the next generation of scholars and workers to these technologies through a Summer Code Camp for high school students.
Next-generation Cyber-Physical Systems (CPS) must utilize resilient and reliable cyber/physical interfacing, be economically viable, and be capable of processing extremely large data streams automatically and reliably. Achieving this requires overcoming current technological barriers associated with the seamless integration of the computational and physical domains and the meaningful interpretation of multimodal, multigrain data in scalable CPS. Balancing theory with experimentation, this project will: 1) produce foundational engineering processes for CPS interfaces with flexible thin-film electrodes and low-cost sensors fabricated with inkjet printing; 2) develop new algorithms for autonomous processing of sensor data to detect context-aware events of interest and to compute a data reliability metric for closed-loop CPS, using real-time machine learning implemented at the edge; and 3) deploy CPS practice components in a real-life pilot study to explore the detection of cardiac episodes and various closed-loop feedback approaches.
With the growing world population and diminishing agricultural lands, it becomes imperative to maximize crop yield by protecting crop health and mitigating pests and diseases. Though decades-old practices are still in place, there is also growing adoption of so-called precision agriculture solutions, which employ emerging technologies in sensing, automation, and analytics in daily farmland operations. As farmers gain real-time access to critical data (e.g., land and weather conditions) and can quickly share any untoward findings with others, farmland operations are morphing into full-fledged cyber-physical systems. To this end, this project seeks to develop, implement, and evaluate a multi-robot agricultural information collection system that is autonomous, efficient, and secure.
This project, led by the University of North Florida (UNF) and supported by the University of Central Florida (UCF), has two main goals: (i) to develop and implement novel information collection techniques for autonomous mobile robots that collect, store, and share data in an efficient yet secure manner using blockchain, and (ii) to train undergraduate and graduate students to conduct basic and applied research while working closely with local farmland partners in northeast Florida. Current technologies already use robots for agricultural purposes, but they typically have a high maintenance cost and do not necessarily consider issues related to security and data integrity. The primary objective is to design and deploy a set of autonomous robots that communicate wirelessly and navigate through planned paths in order to collect valuable data. The project will also consider the threat of security attacks by which collected data can be corrupted, seeking new distributed blockchain-based consensus protocols that mitigate the adversarial influence of such attacks. The project also contains a significant research and education component leveraging the leadership of UNF in the context of a primarily undergraduate institution (RUI). Because UNF is predominantly an undergraduate institution, opportunities for pursuing advanced degrees in the Jacksonville area are limited. This project aligns with an established Memorandum of Understanding (MoU) between UNF and UCF that provides a conduit for computing/engineering students to pursue M.S. degrees at UNF that feed seamlessly into Ph.D. programs at UCF. Students will benefit from the new robotics course to be developed at UNF and the ones being offered at UCF. Research progress will be showcased via technical workshops to be held annually at both institutions. Developed solutions are expected to transfer to other cyber-physical system applications, such as search and rescue, patrolling, and advanced manufacturing.
Most broadly, this project will raise awareness among today's teenagers and young adults of the impending agricultural crisis if worldwide food production falls even further behind meeting demands of an increasing global population.
The goal of this research is to enable a broad spectrum of programmers to successfully create apps for distributed computing systems, including smart and connected communities and systems that require tight coordination or synchronization of time. Creating an application for, say, a smart intersection necessitates gathering information from multiple sources, e.g., cameras, traffic sensors, and passing vehicles; performing distributed computation; and then triggering some action, such as a warning. This requires synchronization and coordination among multiple interacting devices, including Internet of Things (IoT) devices that may be connected to safety-critical infrastructure. Rather than burden the programmer with understanding and managing this complexity, we seek a new programming language, sensor and actuator architecture, and communications networks that can take the programmer's statements of "what to do" and "when to do it" and translate these into "how to do it" by managing the mechanisms for synchronization, power, and communication. This approach will enable more rapid development of these types of systems and can have significant economic development impact.
The proposed approach has four parts: (1) creating a new programming language that embeds the notion of timing islands -- groups of devices that cooperate and are occasionally synchronized; (2) creating a network-wide runtime system that distributes and coordinates the action of code blocks -- portions of the program -- across devices; (3) extending the capabilities of communications networks to improve the ability to synchronize devices and report the quality of synchronization back to the runtime system, enabling adaptive program behavior; and (4) extending device hardware architecture to support synchronization and time-respecting operation.
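The "what to do / when to do it" separation in part (2) can be illustrated with a toy single-device runtime. The class name and API below are hypothetical; a real implementation would distribute code blocks across the devices of a timing island and account for the reported synchronization quality:

```python
import heapq

class TimingIsland:
    """Toy runtime sketch: the programmer states WHAT to run (an action)
    and WHEN (a logical timestamp); the runtime decides HOW by ordering
    and dispatching the actions."""

    def __init__(self):
        self._queue = []  # min-heap of (time, sequence number, action)
        self._seq = 0     # tie-breaker so actions are never compared

    def at(self, t, action):
        """Schedule a zero-argument callable to run at logical time t."""
        heapq.heappush(self._queue, (t, self._seq, action))
        self._seq += 1

    def run_until(self, t_end):
        """Dispatch all actions due at or before t_end, in time order."""
        log = []
        while self._queue and self._queue[0][0] <= t_end:
            t, _, action = heapq.heappop(self._queue)
            log.append((t, action()))
        return log
```

The scheduling order depends only on the declared timestamps, which is the property the runtime must preserve when the actions are spread over many devices.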
This NSF CPS CAREER project studies the hardware/software co-design of sub-millisecond machine learning control for high-rate dynamic systems with non-stationary inputs that change the system's state (i.e., damage). Such systems include combustion processes in jet engines, vehicle structures during crashes, and active blast mitigation structures. The novelty of the approach taken in this project is to co-design the control systems with the computing hardware they will run on to constrain system latency to within 1 millisecond. The developed solutions will be able to learn a system's dynamics at the data rates required by high-rate dynamic systems. Machine learning models will learn the dynamics of the non-linear system online and will then be used to predict the system's behavior out to the appropriate prediction horizon. The project is developing an automated programming methodology that enables the deployment of these real-time controllers onto compact and power-efficient computing devices. This research will support the mission of the NSF and benefit society by enabling a better understanding of dynamic systems operating in high-rate environments while enabling intelligent decision-making at speeds never before reached. The project will leverage existing resources at the University of South Carolina to involve several high school and undergraduate students in the project, with emphasis on providing research experiences to underrepresented, first-generation, and low-income students. This project will also train Ph.D. students in real-time machine learning and control.
More specifically, this research addresses the fundamental question of how programmable hardware can be used to enable machine learning and control for systems that demand ultra-low latency. This is being done by formulating a framework for real-time machine learning control that co-designs hardware and software and provides a path to deployment on field-programmable gate arrays (FPGAs). The project is: 1) training a novel long short-term memory (LSTM) model on-chip with a custom online trainer that maps sensor signals and actuator inputs for a high-rate system to the system state in real time; 2) developing approaches to share FPGA signal processing and memory resources for the parallel utilization of multiple LSTM forward-pass cores while maintaining deterministic timing; and 3) studying trade-offs between accuracy, performance, and resource requirements for real-time machine learning control at the microsecond timescale. Validation of the developed approach is being performed using a hardware-in-the-loop testing methodology with fast-acting actuators to control the outer mold line of a structural panel in simulated hypersonic flight.
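The forward-pass computation that each FPGA core would implement is the standard LSTM cell recurrence. The sketch below is a plain-Python reference with a stacked-gate weight layout chosen for illustration; an actual FPGA design would use fixed-point arithmetic and pipelined multiply-accumulate units:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_cell(x, h, c, W, U, b):
    """One LSTM forward step over hidden size H = len(h).

    W (4H x D), U (4H x H), and b (4H) stack the input, forget,
    cell-candidate, and output gates in that order (an assumed layout).
    Returns the new hidden state and cell state.
    """
    H = len(h)
    # Pre-activations for all four gates: z = W x + U h + b.
    z = [sum(W[k][j] * x[j] for j in range(len(x)))
         + sum(U[k][j] * h[j] for j in range(H)) + b[k]
         for k in range(4 * H)]
    i = [sigmoid(v) for v in z[0:H]]          # input gate
    f = [sigmoid(v) for v in z[H:2 * H]]      # forget gate
    g = [math.tanh(v) for v in z[2 * H:3 * H]]  # cell candidate
    o = [sigmoid(v) for v in z[3 * H:4 * H]]  # output gate
    c_new = [f[k] * c[k] + i[k] * g[k] for k in range(H)]
    h_new = [o[k] * math.tanh(c_new[k]) for k in range(H)]
    return h_new, c_new
```

The per-step cost is dominated by the two matrix-vector products, which is what the shared signal processing resources in objective 2) would parallelize across cores.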
The interactions of light with objects in a scene are often complex. An image --- which only captures 2D spatial variations --- is poorly equipped to unravel these interactions and infer properties of a scene, including its shape, reflectance, and composition. This is especially true for scenes that exhibit sharp reflections, refractions, and volumetric scattering. This research models the interactions of light with scenes using light rays and their transformations. The central hypothesis underlying the research is that problems of shape, reflectance, and material composition estimation are often simpler and well-posed when they are studied using light rays and their transformations. A wide range of real-world objects and scenes stand to benefit from progress made in this research; this includes scenes with complex configurations that lead to inter-reflections, glossy objects with specularities and spatially varying reflectance, as well as objects that are transparent or translucent. A diverse set of applications, including machine vision, microscopy, and consumer photography, stand to benefit from this research. The education and outreach components of this project disseminate image processing research in the broader Pittsburgh area via camera-building workshops and lab demos for middle/high-school students, and professional development courses for physics teachers.
The focus of the research is to develop novel acquisition and processing methods for scene understanding by studying characterizations of light that go beyond images. In particular, the research analyzes the properties of two signals: the plenoptic function, which captures spatial, temporal, angular, and spectral variations of light, and the plenoptic light transport, which captures how light propagates through a scene. The central hypothesis of the research is that the plenoptic function and light transport provide a rich encoding of how light interacts with a scene; hence, unlike image-based inference, plenoptic inference can be fundamentally well-conditioned even for scenes that interact with light in a complex manner. To this end, the research develops novel low-dimensional models for plenoptic functions that are based on the physical laws governing the interaction of light with a scene. The research also builds novel computational cameras that capture how light propagates in a scene by decomposing it into light paths of varying complexity and subsequently estimating the 3D shape, reflectance, and material composition.
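In the paraxial regime, the ray transformations underlying the plenoptic light transport are linear maps on a (position, angle) ray parameterization. The two standard operations below, free-space propagation (which shears the light field) and an ideal thin lens, are textbook ray-transfer identities rather than the project's camera designs:

```python
def propagate_ray(x, theta, d):
    """Free-space propagation by distance d: position advances by
    d * theta (paraxial approximation), the ray angle is unchanged."""
    return x + d * theta, theta

def thin_lens(x, theta, f):
    """Ideal thin lens of focal length f: position is unchanged,
    the ray angle is deflected by -x / f."""
    return x, theta - x / f
```

Composing such maps along a light path gives the linear, low-dimensional structure that makes plenoptic inference well-conditioned for simple paths; deviations from it signal the more complex interactions the research targets.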
Wearable sensors show much promise for medical, sports, defense, emergency, and consumer applications, but are currently limited to obtrusive implementations. Akin to the evolution of cell phones that evolved from foot-long prototypes to recent smart devices, next-generation wearables are envisioned to be seamlessly embedded in fabrics. This CAREER project aims to understand the unique challenges of operating such textile sensors "in-the-wild" and to empower their reliable operation via closed-loop interaction among fabrics, electronics, and humans. To serve as a model and to inspire new applications, the project focuses on new classes of functionalized garments that can seamlessly monitor kinematics and/or tissue abnormalities with unique advantages over the state-of-the-art. Concurrently, the integrated education/outreach efforts aim to increase student and public exposure to bio-electromagnetics that are now confined to specialized research, yet can enable interdisciplinary training for all via appealing activities with direct societal impact.
This CAREER project will pioneer a design, modeling, and implementation framework that reconciles human-in-the-loop Cyber-Physical Systems (CPS) with conductive e-textile sensors operating in complex (a human wearing a sensing fabric) and dynamic (real-world) environments. Cognitive and fully adaptive e-textile CPS are proposed that: (a) are cognizant of inputs received by the wearer, the fabric, and the environment, and (b) integrate agility in both the cyber and physical sides for closed-loop adaptability on the fly. In turn, the potential to optimize performance, minimize resources, and enhance opportunities for myriad human-in-the-loop CPS is envisioned to be significant. Without loss of generality, the focus is on a novel multi-utility sensor that addresses two of the most challenging sensing modalities in the area of wearables, i.e., motion capture and tissue abnormality monitoring. These modalities may be individually or concurrently employed to create models for dense-data (motion captured on the go), sparse-data (tissues monitored over sparse intervals), and context-aware (both of the above) human-in-the-loop CPS. As a case study, a novel CPS will be progressively designed -- from concept to in vivo testing -- to improve outcomes after anterior cruciate ligament reconstruction.