Autonomous sensors that monitor and control physical or environmental conditions.
All cyber-physical systems (CPS) depend on properly calibrated sensors to sense the surrounding environment. Unfortunately, the current state of the art is that calibration is often a manual and expensive operation; moreover, many types of sensors, especially economical ones, must be recalibrated often. Recalibration is typically costly because it is performed in a lab environment and requires that sensors be removed from service. MetaSense will reduce the cost and management burden of calibrating sensors. The basic idea is that if two sensors are co-located, then they should report similar values; if they do not, the least-recently-calibrated sensor is suspect. Building on this idea, this project will provide an autonomous system and a set of algorithms that will automate the detection of calibration issues and perform recalibration of sensors in the field, removing the need to take sensors offline and send them to a laboratory for calibration. The outcome of this project will transform the way sensors are engineered and deployed, increasing the scale of sensor network deployment. This in turn will increase the availability of environmental data for research, medical, personal, and business use. MetaSense researchers will leverage this new data to provide early warning for factors that could negatively affect health. In addition, graduate student engagement in the research will help to maintain the STEM pipeline.
This project will leverage large networks of mobile sensors connected to the cloud. The cloud will enable using large data repositories and computational power to cross-reference data from different sensors and detect loss of calibration. The theory of calibration will go beyond classical models for computation and physics of CPS. The project will combine big data, machine learning, and analysis of the physics of sensors to calculate two factors that will be used in the calibration. First, MetaSense researchers will identify measurement transformations that, applied in software after the data collection, will generate calibrated results. Second, the researchers will compute the input for an on-board signal-conditioning circuit that will enable improving the sensitivity of the physical measurement. The project will contribute research results in multiple disciplines. In the field of software engineering, the project will contribute a new theory of service reconfiguration that will support new architecture and workflow languages. New technologies are needed because the recalibration will happen when the machine learning algorithms discover calibration errors, after the data has already been collected and processed. These technologies will support modifying not only the raw data in the database by applying new calibration corrections, but also the results of calculations that used the data. In the field of machine learning, the project will provide new algorithms for dealing with spatiotemporal maps of noisy sensor readings. In particular, the algorithms will work with Gaussian processes and the results of the research will provide more meaningful confidence intervals for these processes, substantially increasing the effectiveness of MetaSense models compared to the current state of the art. 
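As a rough illustration of the Gaussian-process machinery the abstract refers to, a minimal GP regressor with an RBF kernel yields both a posterior mean and a confidence half-width for noisy readings. This is a textbook sketch under assumed hyperparameters, not the project's algorithm.

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, length=1.0, noise=0.1):
    """Gaussian-process regression with a unit-amplitude RBF kernel.
    Returns the posterior mean and 95% confidence half-width at x_test."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    K_s = k(x_test, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    # Posterior variance: prior variance (1.0) minus explained variance.
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
    return mean, 1.96 * np.sqrt(np.maximum(var, 0.0))
```

Far from any training point the half-width approaches the prior's 1.96, which is the kind of honest uncertainty reporting the project aims to sharpen for spatiotemporal sensor maps.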
In the field of pervasive computing, the project will build on the existing techniques for context-aware sensing to increase the amount of information available to the machine learning algorithms for inferring calibration parameters. Adding information about the sensing context is paramount to achieve correct calibration results. For example, a sensor that measures air pollution inside a car on a highway will get very different readings if the car window is open or closed. Finally, the project will contribute innovations in sensor calibration hardware. Here, the project will contribute innovative signal-conditioning circuits that will interact with the cloud system and receive remote calibration parameters identified by the machine learning algorithms. This will be a substantial advance over current circuits based on simple feedback loops because it will have to account for the cloud and machine learning algorithms in the loop and will have to perform this more complex calibration with power and bandwidth constraints. Inclusion of graduate students in the research helps to maintain the STEM pipeline.
University of California at San Diego
National Science Foundation
Submitted by William Griswold on December 22nd, 2015
Brain-computer interfaces (BCIs) are cyber-physical systems (CPSs) that record human brain waves and translate them into the control commands for external devices such as computers and robots. They may allow individuals with spinal cord injury (SCI) to assume direct brain control of a lower extremity prosthesis to regain the ability to walk. Since the lower extremity paralysis due to SCI leads to as much as $50 billion of health care cost each year in the US alone, the use of a BCI-controlled lower extremity prosthesis to restore walking can have a significant public health impact. Recent results have demonstrated that a person with paraplegia due to SCI can use a non-invasive BCI to regain basic walking. While encouraging, this BCI is unlikely to become a widely adopted solution since the poor signal quality of non-invasively recorded brain waves may lead to unreliable BCI operation. Moreover, lengthy and tedious mounting procedures of the non-invasive BCI systems are impractical. A permanently implantable BCI CPS can address these issues, but critical challenges must be overcome to achieve this goal, including the elimination of protruding electronics and reliance on an external computer for brain signal processing. The goal of this study is to develop a benchtop version of a fully implantable BCI CPS, capable of acquiring electrocorticogram signals, recorded directly from the surface of the brain, and analyzing them internally to enable direct brain control of a robotic gait exoskeleton (RGE) for walking.
The BCI CPS will be designed as a low-power system with revolutionary adaptive power management in order to meet stringent heat and power consumption constraints for future human implantation. Comprehensive measurements and benchtop tests will ensure proper function of the BCI CPS. Finally, the system will be integrated with an RGE, and its ability to facilitate brain-controlled walking will be tested in a small group of human subjects. The successful completion of this project will have broad bioengineering and scientific impact. It will revolutionize medical device technology by minimizing power consumption and heat production while enabling complex operations to be performed. The study will also help deepen the understanding of how the human brain controls walking, which has long been a mystery to neuroscientists. Finally, this study's broader impact is to promote education and lifelong learning in engineering students and the community, broaden the participation of underrepresented groups in engineering, and increase the scientific literacy of persons with disabilities. Research opportunities will be provided to undergraduate and graduate students. Their findings will be broadly disseminated and integrated into teaching activities. To inspire underrepresented K-12 and community college students to pursue higher education in STEM fields, and to increase the scientific literacy of persons with disabilities, outreach activities will be undertaken in the form of live scientific exhibits and actual BCI demonstrations.
Recent results have demonstrated that a person with paraplegia due to SCI can use an electroencephalogram (EEG)-based BCI to regain basic walking. While encouraging, this EEG-based BCI is unlikely to become a widely adopted solution due to EEG's inherent noise and susceptibility to artifacts, which may lead to unreliable operation. Also, lengthy and tedious EEG (un-)mounting procedures are impractical. A permanently implantable BCI CPS can address these issues, but critical CPS challenges must be overcome to achieve this goal, including the elimination of protruding electronics and reliance on an external computer for neural signal processing. The goal of this study is to implement a benchtop analogue of a fully implantable BCI CPS, capable of acquiring high-density (HD) electrocorticogram (ECoG) signals, and analyzing them internally to facilitate direct brain control of a robotic gait exoskeleton (RGE) for walking. The BCI CPS will be designed as a low-power modular system with revolutionary adaptive power management in order to meet stringent heat dissipation and power consumption constraints for future human implantation. The first module will be used for acquisition of HD-ECoG signals. The second module will internally execute optimized BCI algorithms and wirelessly transmit commands to an RGE for walking. System and circuit-level characterizations will be conducted through comprehensive measurements. Benchtop tests will ensure the proper system function and conformity to biomedical constraints. Finally, the system will be integrated with an RGE, and its ability to facilitate brain-controlled walking will be tested in a group of human subjects. The successful completion of this project will have broad bioengineering and scientific impact. It will revolutionize medical device technology by minimizing power consumption and heat dissipation while enabling complex algorithms to be executed in real time.
The study will also help deepen the physiological understanding of how the human brain controls walking. This study will promote education and lifelong learning in engineering students and the community, broaden the participation of underrepresented groups in engineering, and increase the scientific literacy of persons with disabilities. Research opportunities will be provided to undergraduate students. Their findings will be broadly disseminated and integrated into teaching activities. To inspire underrepresented K-12 and community college students to pursue higher education in STEM fields, and to increase the scientific literacy of persons with disabilities, outreach activities will be undertaken in the form of live scientific exhibits and actual BCI demonstrations.
University of California at Irvine
National Science Foundation
Submitted by Payam Heydari on December 22nd, 2015
All cyber-physical systems (CPS) depend on properly calibrated sensors to sense the surrounding environment. Unfortunately, the current state of the art is that calibration is often a manual and expensive operation; moreover, many types of sensors, especially economical ones, must be recalibrated often. Recalibration is typically costly because it is performed in a lab environment and requires that sensors be removed from service. MetaSense will reduce the cost and management burden of calibrating sensors. The basic idea is that if two sensors are co-located, then they should report similar values; if they do not, the least-recently-calibrated sensor is suspect. Building on this idea, this project will provide an autonomous system and a set of algorithms that will automate the detection of calibration issues and perform recalibration of sensors in the field, removing the need to take sensors offline and send them to a laboratory for calibration. The outcome of this project will transform the way sensors are engineered and deployed, increasing the scale of sensor network deployment. This in turn will increase the availability of environmental data for research, medical, personal, and business use. MetaSense researchers will leverage this new data to provide early warning for factors that could negatively affect health. In addition, graduate student engagement in the research will help to maintain the STEM pipeline.
This project will leverage large networks of mobile sensors connected to the cloud. The cloud will enable using large data repositories and computational power to cross-reference data from different sensors and detect loss of calibration. The theory of calibration will go beyond classical models for computation and physics of CPS. The project will combine big data, machine learning, and analysis of the physics of sensors to calculate two factors that will be used in the calibration. First, MetaSense researchers will identify measurement transformations that, applied in software after the data collection, will generate calibrated results. Second, the researchers will compute the input for an on-board signal-conditioning circuit that will enable improving the sensitivity of the physical measurement. The project will contribute research results in multiple disciplines. In the field of software engineering, the project will contribute a new theory of service reconfiguration that will support new architecture and workflow languages. New technologies are needed because the recalibration will happen when the machine learning algorithms discover calibration errors, after the data has already been collected and processed. These technologies will support modifying not only the raw data in the database by applying new calibration corrections, but also the results of calculations that used the data. In the field of machine learning, the project will provide new algorithms for dealing with spatiotemporal maps of noisy sensor readings. In particular, the algorithms will work with Gaussian processes and the results of the research will provide more meaningful confidence intervals for these processes, substantially increasing the effectiveness of MetaSense models compared to the current state of the art. 
In the field of pervasive computing, the project will build on the existing techniques for context-aware sensing to increase the amount of information available to the machine learning algorithms for inferring calibration parameters. Adding information about the sensing context is paramount to achieve correct calibration results. For example, a sensor that measures air pollution inside a car on a highway will get very different readings if the car window is open or closed. Finally, the project will contribute innovations in sensor calibration hardware. Here, the project will contribute innovative signal-conditioning circuits that will interact with the cloud system and receive remote calibration parameters identified by the machine learning algorithms. This will be a substantial advance over current circuits based on simple feedback loops because it will have to account for the cloud and machine learning algorithms in the loop and will have to perform this more complex calibration with power and bandwidth constraints. Inclusion of graduate students in the research helps to maintain the STEM pipeline.
University of Colorado at Boulder
National Science Foundation
The confluence of new networked sensing technologies (e.g., cameras), distributed computational resources (e.g., cloud computing), and algorithmic advances (e.g., computer vision) is offering new and exciting opportunities for solving a variety of new problems that are of societal importance including emergency response, disaster recovery, surveillance, and transportation. Solutions to this new class of problems, referred to as "situation awareness" applications, include surveillance via large-scale distributed camera networks and personalized traffic alerts in vehicular networks using road and traffic sensing. A breakthrough in system software technology is needed to meet the challenges posed by these applications since they are latency-sensitive, data intensive, involve heavy-duty processing, and must run 24x7 while dealing with the vagaries of the physical world. This project aims to make such a breakthrough, through new distributed programming idioms and resource allocation strategies. To better identify the challenges posed by situation awareness applications, the project includes experimental deployment of the new technologies in partnership with the City of Baton Rouge, Louisiana.
The central activity is to develop appropriate system abstractions for design of situation awareness applications and encapsulate them in distributed programming idioms for domain experts (e.g., vision researchers). The resulting programming framework allows association of critical attributes such as location, time, and mobility with sensed data to reason about causal events along these axes. To meet the latency constraints of these applications, the project develops geospatial resource allocation mechanisms that complement and support the distributed programming idioms, extending the utility-computing model of cloud computing to the edge of the network. Since the applications often have to work with inexact knowledge of what is happening in the physical environment, owing to limitations of the distributed sensing sources, the project also investigates system support for application-specific information fusion and spatio-temporal analyses to increase the quality of results. Efforts toward development of a future cyber-physical systems workforce include creation of a new multidisciplinary curriculum around situation awareness, exploration of new immersive learning pedagogical styles, and mentoring undergraduate students through research experiences and internships aimed at increasing the participation of women and minorities.
Georgia Tech Research Corporation
National Science Foundation
Designing semi-autonomous networks of miniature robots for inspection of bridges and other large civil infrastructure
According to the U.S. Department of Transportation, the United States has 605,102 bridges, of which 64% are 30 years or older and 11% are structurally deficient. Visual inspection is a standard procedure to identify structural flaws and possibly predict the imminent collapse of a bridge and determine effective precautionary measures and repairs. Experts who carry out this difficult task must travel to the location of the bridge and spend many hours assessing the integrity of the structure.
The proposal is to establish (i) new design and performance analysis principles and (ii) technologies for creating a self-organizing network of small robots to aid visual inspection of bridges and other large civilian infrastructure. The main idea is to use such a network to aid the experts in remotely and routinely inspecting complex structures, such as the typical girder assemblage that supports the decks of a suspension bridge. The robots will use wireless information exchange to autonomously coordinate and cooperate in the inspection of pre-specified portions of a bridge. At the end of the task, or whenever possible, they will report images as well as other key measurements back to the experts for further evaluation.
Common systems to aid visual inspection rely either on stationary cameras with restricted field of view, or tethered ground vehicles. Unmanned aerial vehicles cannot access constricted spaces and must be tethered due to power requirements and the need for uninterrupted communication to support the continual safety critical supervision by one or more operators. In contrast, the system proposed here would be able to access tight spaces, operate under any weather, and execute tasks autonomously over long periods of time.
The fact that the proposed framework allows remote expert supervision will reduce cost and time between inspections. The added flexibility as well as the increased regularity and longevity of the deployments will improve the detection and diagnosis of problems, which will increase safety and support effective preventive maintenance.
This project will be carried out by a multidisciplinary team specialized in diverse areas of cyber-physical systems and robotics, such as locomotion, network science, modeling, control systems, hardware sensor design and optimization. It involves collaboration between faculty from the University of Maryland (UMD) and Resensys, which specializes in remote bridge monitoring. The proposed system will be tested in collaboration with the Maryland State Highway Administration, which will also provide feedback and expertise throughout the project.
This project includes concrete plans to involve undergraduate students throughout its duration. The investigators, who have an established record of STEM outreach and education, will also leverage existing programs and resources at the Maryland Robotics Center to support this initiative and carry out outreach activities. In order to make student participation more productive and educational, the structure of the proposed system conforms to a hardware architecture adopted at UMD and many other schools for the teaching of undergraduate courses relevant to cyber-physical systems and robotics.
This grant will support research on fundamental principles and design of robotic and cyber-physical systems. It will focus on algorithm design for control and coordination, network science, performance evaluation, microfabrication and system integration to address the following challenges: (i) Devise new locomotion and adhesion principles to support mobility within steel and concrete girder structures. (ii) Investigate the design of location estimators, omniscience and coordination algorithms that are provably optimal, subject to power and computational constraints. (iii) Develop methods to design and analyze the performance of energy-efficient communication protocols to support robot coordination and localization in the presence of the severe propagation barriers caused by metal and concrete structures of a bridge.
University of Maryland College Park
National Science Foundation
Submitted by Nuno Martins on December 22nd, 2015
This project aims to enable cyber-physical systems that can be worn on the body in order to one day allow their users to touch, feel, and manipulate computationally simulated three-dimensional objects or digital data in physically realistic ways, using the whole hand. It will do this by precisely measuring touch- and movement-induced displacements of the skin in the hand, and by reproducing these signals interactively, via new technologies to be developed in the project. The resulting systems will offer the potential to impact a wide range of human activities that depend on touch and interaction with the hands. The project seeks to enable new applications for wearable cyber physical interfaces that may have broad applications in health care, manufacturing, consumer electronics, and entertainment. Although human interactive technologies have advanced greatly, current systems employ only a fraction of the sensorimotor capabilities of their users, greatly limiting applications and usability. The development of whole-hand haptic interfaces that allow their wearers to feel and manipulate digital content has been a longstanding goal of engineering research, but has remained far from reality. The reason can be traced to the difficulty of reproducing or even characterizing the complex, action-dependent stimuli that give rise to touch sensations during everyday activities.
This project will pioneer new methods for imaging complex haptic stimuli, consisting of movement-dependent skin strain and contact-induced surface waves propagating in skin, and for modeling the dependence of these signals on hand kinematics during grasping. It will use the resulting fundamental advances to catalyze the development of novel wearable CPS, in the form of whole-hand haptic interfaces. The latter will employ surface wave and skin strain feedback to supply haptic feedback to the hand during interaction with real and computational objects, enabling a range of new applications in VR. The project will be executed through research in three main research areas. In the first, it will utilize novel contact and non-contact techniques based on data acquired through on-body sensor arrays to measure whole-hand mechanical stimuli and grasping kinematics at high spatial and temporal resolution. In a second research area, it will undertake data-driven systems modeling and analysis of statistical contingencies between the kinematic and cutaneous signals sensed during everyday activities. In a third research area, it will engineer and perceptually evaluate novel cyber physical systems consisting of haptic interfaces for whole hand interaction.
In order to further advance the applications of these systems in medicine, through a collaboration with the Drexel College of Medicine, the project will develop new methods for assessing clinical skills of palpation during medical examination, with the aim of improving the efficacy of what is often the first, most common, and best opportunity for diagnosis, using the physician's own sense of touch.
Drexel University
National Science Foundation
More than one million people, including many wounded warfighters from recent military missions, are living with lower-limb amputation in the United States. This project will design wearable body area sensor systems for real-time measurement of an amputee's energy expenditure and will develop computer algorithms for automatic lower-limb prosthesis optimization. The developed technology will offer a practical tool for optimal prosthetic tuning that may maximally reduce an amputee's energy expenditure during walking. Further, this project will develop user-control technology to support the user's volitional control of lower-limb prostheses. The developed volitional control technology will allow the prosthesis to adapt to altered environments and situations so that amputees can walk as if using their own biological limbs. An optimized prosthesis with user-control capability will promote more equal force distribution on the intact and prosthetic limbs and decrease the risk of damage to the intact limb from musculoskeletal imbalance or pathologies. Maintenance of health in these areas is essential for the amputee's quality of life and well-being. Student participation is supported.
This research will advance Cyber-Physical Systems (CPS) science and engineering through the integration of sensor and computational technologies for the optimization and control of physical systems. This project will design body area sensor network systems that integrate spatiotemporal information from electromyography (EMG), electroencephalography (EEG) and inertia measurement unit (IMU) sensors, providing quantitative, real-time measurements of the user's physical load and mental effort for personalized prosthesis optimization. This project will design machine learning technology-based, automatic prosthesis parameter optimization technology to support in-home prosthesis optimization by users themselves. This project will also develop an EEG-based, embedded computing-supported volitional control technology to support the user's volitional control of a prosthesis in real-time by their thoughts to cope with altered situations and environments. The technical advances from this project will provide wearable and wireless body area sensing solutions for broader applications in healthcare and human-CPS interaction applications. The explored computational methods will be broadly applicable for real-time, automatic target recognition from spatiotemporal, multivariate data in CPS-related communication and control applications. This synergistic project will be implemented under multidisciplinary team collaboration among computer scientists and engineers, clinicians and prosthetic industry engineers. This project will also provide interdisciplinary, CPS relevant training for both undergraduate and graduate students by integrating computational methods with sensor network, embedded processors, human physical and mental activity recognition, and prosthetic control.
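A common front end for the kind of EMG/EEG intent recognition described above is sliding-window feature extraction over multichannel signals. The sketch below is illustrative only; the window sizes, sampling rate, and choice of features (RMS and mean absolute value) are assumptions, not the project's pipeline.

```python
import numpy as np

def window_features(signal, fs=1000, win_s=0.25, step_s=0.125):
    """Slide a window over a multichannel signal (channels x samples) and
    extract simple per-channel features (RMS and mean absolute value),
    a typical front end for EMG/EEG intent classifiers."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, signal.shape[1] - win + 1, step):
        w = signal[:, start:start + win]
        rms = np.sqrt(np.mean(w ** 2, axis=1))
        mav = np.mean(np.abs(w), axis=1)
        feats.append(np.concatenate([rms, mav]))
    return np.array(feats)  # shape: (n_windows, 2 * n_channels)
```

The resulting feature matrix is what a downstream classifier would consume to map brain and muscle activity to prosthesis commands in real time.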
Virginia Commonwealth University
National Science Foundation
In the next few decades, autonomous vehicles will become an integral part of the traffic flow on highways. However, they will constitute only a small fraction of all vehicles on the road. This research develops technologies to employ autonomous vehicles already in the traffic stream to improve the flow of human-controlled vehicles. The goal is to mitigate undesirable jamming and traffic waves, and ultimately to reduce fuel consumption. Contemporary control of traffic flow, such as ramp metering and variable speed limits, is largely limited to local and highly aggregate approaches. This research represents a step towards global control of traffic using a few autonomous vehicles, and it provides the mathematical, computational, and engineering structure to address and employ these new connections. Even if autonomous vehicles can provide only a small percentage reduction in fuel consumption, this will have a tremendous economic and environmental impact due to the heavy dependence of the transportation system on non-renewable fuels. The project is highly collaborative and interdisciplinary, involving personnel from different disciplines in engineering and mathematics. It includes the training of PhD students and a postdoctoral researcher, and outreach activities to disseminate traffic research to the broader public.
This project develops new models, computational methods, software tools, and engineering solutions to employ autonomous vehicles to detect and mitigate traffic events that adversely affect fuel consumption and congestion. The approach is to combine data measured by autonomous vehicles in the traffic flow, as well as other traffic data, with appropriate macroscopic traffic models to detect and predict congestion trends and events. Based on this information, the loop is closed via carefully prescribed velocity controllers that have been demonstrated to reduce congestion. These controllers require detection and response times that are beyond the limit of a human's ability. The choice of the best control strategy is determined via optimization approaches applied to the multiscale traffic model and suitable fuel consumption estimation. The communication between the autonomous vehicles, combined with the computational and control tasks on each individual vehicle, requires a cyber-physical approach to the problem. This research considers new types of traffic models (micro-macro models, network approaches for higher-order models), new control algorithms for traffic flow regulation, and new sensing and control paradigms that are enabled by a small number of controllable systems available in a flow.
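One simple family of congestion-damping velocity controllers has the autonomous vehicle drive at a running average of recently observed traffic speeds rather than mimicking the lead vehicle's oscillations. The sketch below is a simplified stand-in for the controllers studied in this line of work; the class name, horizon, and safety rule are illustrative assumptions.

```python
from collections import deque

class SmoothingController:
    """Command the AV to drive at the running average of recently observed
    lead-vehicle speeds, damping stop-and-go waves instead of amplifying
    them. A toy sketch, not the project's controller."""
    def __init__(self, horizon=30, v_max=30.0):
        self.history = deque(maxlen=horizon)  # recent lead speeds (m/s)
        self.v_max = v_max

    def command(self, lead_speed, gap, min_gap=5.0):
        self.history.append(lead_speed)
        target = sum(self.history) / len(self.history)
        if gap < min_gap:  # safety override: never close the gap further
            target = min(target, lead_speed)
        return min(target, self.v_max)
```

Because the average responds more slowly than the lead vehicle, the AV absorbs speed oscillations, which is the wave-damping effect the project seeks to exploit at scale.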
Rutgers University Camden
National Science Foundation
In the next few decades, autonomous vehicles will become an integral part of the traffic flow on highways. However, they will constitute only a small fraction of all vehicles on the road. This research develops technologies to employ autonomous vehicles already in the traffic stream to improve the flow of human-controlled vehicles. The goal is to mitigate undesirable jamming and traffic waves, and ultimately to reduce fuel consumption. Contemporary control of traffic flow, such as ramp metering and variable speed limits, is largely limited to local and highly aggregate approaches. This research represents a step towards global control of traffic using a few autonomous vehicles, and it provides the mathematical, computational, and engineering structure to address and employ these new connections. Even if autonomous vehicles can provide only a small percentage reduction in fuel consumption, this will have a tremendous economic and environmental impact due to the heavy dependence of the transportation system on non-renewable fuels. The project is highly collaborative and interdisciplinary, involving personnel from different disciplines in engineering and mathematics. It includes the training of PhD students and a postdoctoral researcher, and outreach activities to disseminate traffic research to the broader public.
This project develops new models, computational methods, software tools, and engineering solutions to employ autonomous vehicles to detect and mitigate traffic events that adversely affect fuel consumption and congestion. The approach is to combine data measured by autonomous vehicles in the traffic flow, as well as other traffic data, with appropriate macroscopic traffic models to detect and predict congestion trends and events. Based on this information, the loop is closed via carefully prescribed velocity controllers that have been demonstrated to reduce congestion. These controllers require detection and response times that are beyond the limit of a human's ability. The choice of the best control strategy is determined via optimization approaches applied to the multiscale traffic model and suitable fuel consumption estimation. The communication between the autonomous vehicles, combined with the computational and control tasks on each individual vehicle, requires a cyber-physical approach to the problem. This research considers new types of traffic models (micro-macro models, network approaches for higher-order models), new control algorithms for traffic flow regulation, and new sensing and control paradigms that are enabled by a small number of controllable systems available in a flow.
Temple University
National Science Foundation
Recent developments in nanotechnology and synthetic biology have enabled a new direction in biological engineering: synthesis of collective behaviors and spatio-temporal patterns in multi-cellular bacterial and mammalian systems. This will have a dramatic impact in such areas as amorphous computing, nano-fabrication, and, in particular, tissue engineering, where patterns can be used to differentiate stem cells into tissues and organs. While recent technologies such as tissue- and organoid-on-a-chip have the potential to produce a paradigm shift in tissue engineering and drug development, the synthesis of user-specified, emergent behaviors in cell populations is a key step to unlock this potential and remains a challenging, unsolved problem.
This project brings together synthetic biology and micron-scale mobile robotics to define the basis of a next-generation cyber-physical system (CPS) called biological CPS (bioCPS). Synthetic gene circuits for decision making and local communication among the cells are automatically synthesized using a Bio-Design Automation (BDA) workflow. A Robot Assistant for Communication, Sensing, and Control in Cellular Networks (RA), which is designed and built as part of this project, is used to generate desired patterns in networks of engineered cells. In RA, the engineered cells interact with a set of micro-robots that implement control, sensing, and long-range communication strategies needed to achieve the desired global behavior. The micro-robots include both living and non-living matter (engineered cells attached to inorganic substrates that can be controlled using externally applied fields). This technology is applied to test the formation of various patterns in living cells.
The project has a rich education and outreach plan, which includes nationwide activities for CPS education of high-school students, lab tours and competitions for high-school and undergraduate students, workshops, seminars, and courses for graduate students, as well as specific initiatives for under-represented groups. Central to the project is the development of theory and computational tools that will significantly advance the state of the art in CPS at large. A novel, formal methods approach is proposed for synthesis of emergent, global behaviors in large collections of locally interacting agents. In particular, the project develops a new logic whose formulas can be efficiently learned from quad-tree representations of partitioned images. The quantitative semantics of the logic maps the synthesis of local control and communication protocols to an optimization problem. The project contributes to the nascent area of temporal logic inference by developing a machine learning method to learn temporal logic classifiers from large amounts of data. Novel abstraction and verification techniques for stochastic dynamical systems are defined and used to verify the correctness of the gene circuits in the BDA workflow.
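The quad-tree representation mentioned above can be illustrated with a short sketch: a square binary image of a cell-population pattern is recursively split into four quadrants until each block is uniform. This is a generic construction, not the project's implementation; the function names and the 8x8 test pattern are illustrative.

```python
import numpy as np

def build_quadtree(img):
    """Recursively partition a square binary image into a quad-tree.
    A node is either a leaf value (0 or 1) for a uniform block,
    or a dict holding the four quadrant subtrees."""
    if img.min() == img.max():            # uniform block -> leaf
        return int(img[0, 0])
    h = img.shape[0] // 2
    return {
        "nw": build_quadtree(img[:h, :h]),
        "ne": build_quadtree(img[:h, h:]),
        "sw": build_quadtree(img[h:, :h]),
        "se": build_quadtree(img[h:, h:]),
    }

def reconstruct(tree, size):
    """Invert build_quadtree: expand a tree back into a size x size array."""
    if not isinstance(tree, dict):
        return np.full((size, size), tree, dtype=int)
    h = size // 2
    top = np.hstack([reconstruct(tree["nw"], h), reconstruct(tree["ne"], h)])
    bot = np.hstack([reconstruct(tree["sw"], h), reconstruct(tree["se"], h)])
    return np.vstack([top, bot])

# An 8x8 pattern whose top-right quadrant is a solid block.
img = np.zeros((8, 8), dtype=int)
img[:4, 4:] = 1
tree = build_quadtree(img)
```

Large uniform regions collapse to single leaves, so formulas defined over such trees can be evaluated (and learned) without inspecting every pixel, which is the efficiency the abstract alludes to.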
University of Pennsylvania
National Science Foundation