Rapid growth in additive manufacturing (AM) has improved the accessibility, customizability, and affordability of making products using personal printers. Designs can be developed by consumers, if they have sufficient knowledge of mechanical design and 3D modeling, or they can be obtained from third parties. However, the process of translating a design into a program that can be successfully executed by a 3D printer often requires specialized domain knowledge that many end-users currently lack. Meanwhile, objects that are similar or identical to what a non-technical user aims to design and print have already been produced by experts in industry, so millions of proven part designs already exist. This research aims to fill the above-mentioned gap by developing a theoretically sound and practically deployable, domain-specific online search engine for 3D models, called Srch3D. Srch3D will provide non-technical end-users with a user-friendly solution for efficiently searching a large repository of existing, proven part designs for the components they need.
Saman Zonouz is an Associate Professor at Georgia Tech in the Schools of Cybersecurity and Privacy (SCP) and Electrical and Computer Engineering (ECE). Saman directs the Cyber-Physical Security Laboratory (CPSec). His research focuses on security and privacy problems in cyber-physical systems, including attack detection and response capabilities that draw on techniques from systems security, control theory, and artificial intelligence. His research has been recognized with the Presidential Early Career Award for Scientists and Engineers (PECASE), the NSF CAREER Award in Cyber-Physical Systems (CPS), the National Security Agency (NSA) award for Significant Research in Cyber Security, and a Faculty Fellowship Award from the Air Force Office of Scientific Research (AFOSR). His research group has disclosed several security vulnerabilities, with published CVEs, in widely used industrial controllers from vendors such as Siemens, Allen Bradley, and Wago. Saman is currently a Co-PI on the $65M Georgia AI Manufacturing (GA-AIM) project funded by President Biden's American Rescue Plan. Saman was invited to co-chair the NSF CPS PI Meeting as well as the NSF CPS Next Big Challenges Workshop. He has served as chair and/or program committee member for several conferences (e.g., IEEE Security and Privacy, CCS, NDSS, DSN, and ICCPS). Saman obtained his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.
This project aims to transform the software development process in modern cars, which are witnessing significant innovation as many new autonomous functions are introduced, culminating in the fully autonomous vehicle. Most of these new features are implemented in software, at the heart of which lie several control algorithms. Such control algorithms operate in a feedback loop: they sense the state of the plant (the system to be controlled), compute a control input, and actuate the plant in order to enforce a desired behavior on it. Examples range from brake and engine control to cruise control, automated parking, and fully autonomous driving. Current development flows start with mathematically designing a controller, followed by implementing it in software on the embedded systems in a car. This flow has worked well in the past, when automotive embedded systems were simple, with few processors, few communication buses, and simple sensors. The control algorithms were simple as well, and important functions were largely implemented by mechanical subsystems. But modern cars have over 100 processors connected by several miles of cables, and multiple sensors such as cameras, radars, and lidars, whose data needs complex processing before it can be used by a controller. Further, the control algorithms themselves are more complex, since they must implement new autonomous features that did not exist before. As a result, computation, communication, and memory accesses in such a complex hardware/software system can now be organized in many different ways, each associated with different tradeoffs in accuracy, timing, and resource requirements. These in turn have considerable impact on control performance and on how the control strategy needs to be designed. Consequently, the clean separation between designing the controller and then implementing it in software in the car no longer works well. This project aims to develop both the theoretical foundations and the tool support to adapt this design flow to emerging automotive control strategies and embedded systems. This will not only result in more cost-effective design of future cars, but will also help with certifying the implemented controllers, thereby leading to safer autonomous cars.
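At the core of every such feature is the sense-compute-actuate pattern described above. The following is a minimal sketch of that loop for a hypothetical cruise controller; the plant model, gains, and sampling period are illustrative assumptions, not details from the project.

```python
import numpy as np

DT = 0.01     # control period in seconds (assumed)
DRAG = 0.1    # hypothetical linear drag coefficient
KP = 2.0      # hypothetical proportional gain
V_REF = 25.0  # desired speed in m/s

def sense(v_true, noise_std=0.05):
    """Sensing: return a noisy measurement of the plant state."""
    return v_true + np.random.normal(0.0, noise_std)

def control(v_meas):
    """Compute a control input from the measured state (P-control)."""
    return KP * (V_REF - v_meas)

def actuate(v_true, u):
    """Actuation: apply input u to the plant for one control period."""
    return v_true + DT * (u - DRAG * v_true)

v = 0.0  # initial speed
for _ in range(3000):
    v = actuate(v, control(sense(v)))
print(f"speed after 30 s: {v:.2f} m/s (setpoint {V_REF} m/s)")
```

A pure proportional controller leaves a small steady-state error; the point here is the loop structure itself, whose timing and placement across processors is exactly what the project's design flow must reason about.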
This project aims to create a cyber-physical system for remotely controlling cellular processes in real time, leveraging the biomedical potential of synthetic biology and microrobotics to create pancreatic tissue. With 114,000 people currently on the waitlist for a lifesaving organ transplant in the United States alone, the ability to directly produce patient-compatible organs, obviating the need for animal and clinical studies, could revolutionize personalized medicine. Tissues in the human body such as the liver, kidney, and pancreatic islets comprise cells arranged in complex patterns spanning both 2D and 3D structures. However, scaffold- and microgel-based tissue engineering approaches, along with 3D bioprinting, are often unable to create these complex 3D structures. In this project, the team focuses on the pancreas, which has a unique anatomical structure composed of the regular arrangement of circular cell clusters called islets. The proposed research aims to overcome the hurdle of recreating these spatial patterns in vitro by developing a cyber-physical process in which swarms of microrobots are steered in 3D to regulate the differentiation of genetically engineered stem cells and drive them into forming the desired pancreatic tissue. The broader impacts of this line of work are significant because it is a key first step toward the synthesis of new, or the repair of ailing, human organs, providing for interactive behavior between computer-controlled microrobots and genetically programmed stem cells. Manufacturing living tissue is revolutionary, as it could act as a bridge between preclinical and clinical trials, ensure better drug-testing models, and enable more personalized precision medicine. For pancreatic components in particular, generating human organoids compliant with pharmaceutical standards is an exceptional challenge, and current methods are laborious, time-consuming, expensive, and irreproducible, which has caused industry to shy away from this organ. The education and outreach activities that complement the research component of this project address the need to increase the participation of underrepresented groups (including women and under-served populations) in problem-solving research careers such as engineering, beginning in K-12.
Today, operators of cellular networks and electricity grids collect large volumes of measurement data, which can provide rich insights into city-wide mobility and congestion patterns. Sharing such real-time societal trends with independent, external entities, such as a taxi fleet operator, can enhance city-scale resource allocation and control tasks, such as electric taxi routing and battery storage optimization. However, the owner of a rich time series and an external control authority must communicate across a data boundary, which limits the scope and volume of data they can share. This project will develop novel algorithms and systems to jointly compress, anonymize, and price rich time series data in a way that shares only minimal, task-relevant data across organizational boundaries. By emphasizing communication efficiency, the developed algorithms will incentivize data sharing and collaboration in future smart cities.
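To make the compress-and-anonymize idea concrete, here is a minimal sketch in which a raw per-minute series is reduced to coarse aggregates with additive Laplace noise before crossing the data boundary. The window size and noise scale are illustrative assumptions; the project's algorithms would tune both to the downstream control task and to a formal privacy budget, and would add a pricing layer not shown here.

```python
import numpy as np

def compress_and_anonymize(series, window=15, noise_scale=0.5):
    """Reduce a raw time series to coarse, noised aggregates before sharing.

    window      -- samples merged into one shared value (compression)
    noise_scale -- Laplace noise scale added to each aggregate (anonymization)
    Both parameters are illustrative assumptions.
    """
    n = len(series) // window * window
    blocks = np.asarray(series[:n]).reshape(-1, window)
    aggregates = blocks.mean(axis=1)  # task-relevant summary of each window
    noise = np.random.laplace(0.0, noise_scale, aggregates.shape)
    return aggregates + noise

# Example: one day of per-minute cell-tower load, shared as noised 15-min means
raw = 100 + 20 * np.sin(np.linspace(0, 2 * np.pi, 1440)) + np.random.randn(1440)
shared = compress_and_anonymize(raw)
print(f"{len(raw)} raw samples -> {len(shared)} shared values")
```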
Smart home products have become extremely popular with consumers due to the convenience offered by home automation. In bridging the cyber-physical gap, however, home automation widens the cyber attack surface of the home. Research toward analyzing and preventing security and safety failures in a smart home faces a fundamental obstacle in practice: the poor characterization of home automation usage. That is, without knowledge of how users automate their homes, it is difficult to address several critical challenges in designing and analyzing security systems, potentially rendering solutions ineffective in actual deployments. This project aims to bridge this gap and provide researchers, end-users, and system designers with the means to collect, generate, and analyze realistic examples of home automation usage. The approach builds upon a unique characteristic of emerging smart home platforms: the presence of "user-driven" automation in the form of trigger-action programs that users configure via platform-provided user interfaces. In particular, this project devises methods to capture and model such user-driven home automation in order to generate statistically significant and useful usage scenarios. The techniques developed during the course of this project will allow researchers and practitioners to analyze various security, safety, and privacy properties of the cyber-physical systems that comprise modern smart homes, ultimately leading to deployments of smart home Internet of Things (IoT) devices that are more secure. The project will also produce and disseminate educational materials on best practices for developing secure software, with an emphasis on IoT devices, suitable for integration into existing computer literacy courses at all levels of education. In addition, the project will focus on recruiting and retaining computer science students from traditionally underrepresented groups.
This project is centered on three specific goals. First, it will develop novel data collection strategies that allow end-users to easily specify routines in a flexible manner, as well as techniques based on Natural Language Processing (NLP) for automatically processing and transforming the data into a format suitable for modeling. Second, it will introduce approaches for transforming routines into realistic home automation event sequences, understanding their latent properties, and modeling them using well-understood language modeling techniques. Third, it will contextualize the smart home usage models to make predictions that cater specifically to security analyses, and develop tools that allow for the inspection of a smart home's state alongside the execution of predicted event sequences on real products. The techniques and models developed during the course of this project will be validated with industry partners and are expected to become instrumental for developers and researchers in understanding the security and privacy properties of smart homes.
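As a concrete illustration of the second goal, the sketch below fits a bigram model, a minimal stand-in for the well-understood language modeling techniques the project mentions, to a few hypothetical trigger-action routines and samples a plausible event sequence from it. The event vocabulary is invented for illustration; in the project, routines would come from the NLP-processed user data.

```python
from collections import Counter, defaultdict
import random

# Hypothetical trigger-action event vocabulary (illustrative only)
routines = [
    ["motion_hall", "light_hall_on", "thermostat_heat"],
    ["door_unlock", "light_hall_on", "music_on"],
    ["motion_hall", "light_hall_on", "music_on"],
]

# Bigram counts over event transitions, with start/end markers
bigrams = defaultdict(Counter)
for seq in routines:
    for prev, nxt in zip(["<s>"] + seq, seq + ["</s>"]):
        bigrams[prev][nxt] += 1

def sample_sequence(max_len=10):
    """Generate a plausible home-automation event sequence from the model."""
    event, out = "<s>", []
    while len(out) < max_len:
        choices = bigrams[event]
        event = random.choices(list(choices), weights=choices.values())[0]
        if event == "</s>":
            break
        out.append(event)
    return out

print(sample_sequence())
```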
The aim of this proposal is to enable people to control robots remotely using virtual reality. Using cameras mounted on the robot and a virtual reality headset, a person can see the environment around the robot. However, controlling the robot using existing technologies is hard: there is a time delay, because sending high-quality video over the Internet is slow. In addition, the fidelity of the image is worse than that of human vision, with a fixed and narrow view. This proposal will address these limitations by creating a new system that understands the geometry and appearance of the robot's environment. Instead of sending high-quality video over the Internet, the new system will send only a smaller amount of information describing how the environment's geometry and appearance have changed over time. Further, understanding the geometry and appearance will make it possible to expand the view visible to the person. Together, these improvements will increase a human operator's ability to remotely control the robot by improving fidelity and responsiveness. We will demonstrate this technology on household tasks, on assembly tasks, and by manipulating small objects.
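A hedged sketch of the change-based transmission idea follows: rather than streaming whole frames, only the image blocks whose content changed appreciably are sent. The block size and change threshold are illustrative assumptions; the proposed system would operate on richer geometry and appearance representations than raw pixel blocks.

```python
import numpy as np

def changed_blocks(prev, curr, block=8, tau=0.05):
    """Return (position, content) pairs for blocks that changed appreciably.

    Sends a block only when its mean absolute difference exceeds tau,
    so static regions of the scene cost no bandwidth.
    """
    h, w = curr.shape
    updates = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            a = prev[i:i + block, j:j + block]
            b = curr[i:i + block, j:j + block]
            if np.abs(b - a).mean() > tau:
                updates.append(((i, j), b.copy()))
    return updates

prev = np.zeros((64, 64))
curr = prev.copy()
curr[10:20, 30:40] = 1.0  # a small region of the scene changes
ups = changed_blocks(prev, curr)
print(f"sent {len(ups)} of {(64 // 8) ** 2} blocks")  # far less than a full frame
```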
The aim of this proposal is to test the hypothesis that integrating scene and network understanding can enable efficient transmission and rendering for dexterous control of remote robots through virtual reality interfaces. The resulting system will enable dexterous teleoperation in which remote human operators perform complex tasks with remote robot manipulators, such as cleaning a room or repairing a machine. Such tasks have not previously been demonstrated under teleoperation, for two reasons: 1) the lack of an intuitive awareness and understanding of the scene around the remote robot, and 2) the lack of an effective low-latency interface to control the robot. We will address these problems by creating new scene- and network-aware algorithms that tightly couple sensing, display, interaction, and transmission, enabling the operator to quickly and intuitively understand the environment around the robot. This project will research new interfaces that allow the operator to use their hand to directly specify the robot's end effector pose in six degrees of freedom, combined with spatial- and semantic-object-based models that allow safe high-level commands. The project will evaluate the proposed system by assessing the speed and accuracy with which remote operators complete complex tasks, including assembly tasks; the aim is to complete unstructured assembly tasks that have never before been accomplished through remote teleoperation.
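One way to realize a hand-driven six-degree-of-freedom interface is clutched relative teleoperation, sketched below: when the operator engages a clutch, the current hand and end-effector poses are latched, and subsequent hand motion is applied as a relative transform. The class name, pose formats, and frame conventions are assumptions for illustration, not the project's actual interface.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

class ClutchedTeleop:
    """Map tracked hand poses to robot end-effector targets in six DoF."""

    def engage(self, hand_pos, hand_quat, ee_pos, ee_quat):
        """Latch the current hand and end-effector poses as references."""
        self.h0_pos, self.h0_rot = hand_pos, R.from_quat(hand_quat)
        self.e0_pos, self.e0_rot = ee_pos, R.from_quat(ee_quat)

    def target(self, hand_pos, hand_quat):
        """Return the commanded end-effector position and quaternion."""
        d_rot = R.from_quat(hand_quat) * self.h0_rot.inv()  # relative rotation
        d_pos = hand_pos - self.h0_pos                      # relative translation
        return self.e0_pos + d_pos, (d_rot * self.e0_rot).as_quat()

tele = ClutchedTeleop()
tele.engage(np.zeros(3), [0, 0, 0, 1], np.array([0.4, 0.0, 0.3]), [0, 0, 0, 1])
pos, quat = tele.target(np.array([0.05, 0.0, 0.02]), [0, 0, 0, 1])
print(pos, quat)  # -> [0.45 0.   0.32] with unchanged orientation
```

Because only relative motion is applied, the operator can release the clutch, reposition their hand comfortably, and re-engage, which matters for long dexterous tasks.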
This project is in response to the NSF Cyber-Physical Systems 20-563 solicitation.
This project aims to enable mutualistic interaction between cyber damage prognostics and physical reconfigurable sensing for mutualistic and self-adaptive cyber-physical systems (CPS). Drawing inspiration from mutualism in biology, where two species interact in a way that benefits both, the cyber and the physical components interact so that each simultaneously benefits from and contributes to the other, enhancing the ability of the CPS to predict, reconfigure, and adapt. Such interaction is generalizable, allowing it to enhance CPS applications in various domains. In the civil infrastructure systems domain, the mutualistic interaction-enabled CPS will allow a single type of sensor to be reconfigured, adaptively and based on damage prognostics, to monitor multiple classes of infrastructure damage, thereby improving the cost-effectiveness of multi-damage infrastructure monitoring by reducing the types and number of sensors needed while maximizing the timeliness and accuracy of damage assessment and prediction. Enabling cost-effective multi-damage monitoring promises to leapfrog the development of safer, more resilient, and sustainable infrastructure, which would stimulate economic growth and social welfare for the benefit of the nation and its people. This project will also contribute to NSF's commitment to broadening participation in engineering (BPE) by developing innovative, interdisciplinary, and inclusive BPE programs to attract, train, and reward the next generation of engineering researchers and practitioners, who will be capable creators of CPS technology and not merely passive consumers, thereby enhancing the U.S. economy, security, and well-being.
The envisioned CPS includes three integrated components: (1) data-driven, knowledge-informed deep learning methods for generalizable damage prognostics, which predict the onset and propagation of infrastructure damage and provide information about target damages to inform reconfigurable sensing; (2) reconfigurable sensing methods based on signal difference maximization theory, which optimize and physically control the configurations of the sensors to actively seek out each of the predicted target damages, providing damage-seeking feedback to inform damage prognostics; and (3) quality-aware edge cloud computing methods for efficient and effective extraction of damage information from raw sensing signals, serving as the bridge between damage prognostics and reconfigurable sensing. The proposed CPS will be tested in multi-damage monitoring of bridges using simulation-based and actual CPS prototypes, and could later be generalized to monitoring other civil infrastructure. The proposed CPS methods have the potential to transform the way we design, create, and operate CPS, enabling next-generation CPS with greater predictive ability, reconfigurability, and adaptability.
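To convey the flavor of the second component, the sketch below selects the sensor configuration that maximizes the expected difference between the predicted healthy and damaged signals, which is the intuition behind signal difference maximization. The signal models, candidate configurations, and numbers are illustrative stand-ins for the project's physics-based or learned models.

```python
import numpy as np

def best_configuration(configs, healthy_model, damaged_model):
    """Pick the sensor configuration maximizing the expected signal
    difference between the healthy state and a predicted target damage.

    configs       -- candidate configurations (here, excitation frequencies)
    healthy_model -- config -> expected sensor signal without damage
    damaged_model -- config -> expected sensor signal with predicted damage
    """
    scores = [np.linalg.norm(damaged_model(c) - healthy_model(c)) for c in configs]
    return configs[int(np.argmax(scores))], max(scores)

# Toy example: a predicted crack shifts a resonance peak from 35 Hz to 45 Hz
freqs = list(np.linspace(10, 100, 10))
healthy = lambda f: np.array([np.exp(-((f - 35.0) ** 2) / 50.0)])
damaged = lambda f: np.array([np.exp(-((f - 45.0) ** 2) / 50.0)])
cfg, score = best_configuration(freqs, healthy, damaged)
print(f"reconfigure sensor to {cfg:.0f} Hz (signal difference {score:.2f})")
```

The chosen configuration is the one where healthy and damaged responses diverge most, which is precisely the damage-seeking feedback loop the component describes.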
A future in which non-autonomous systems like human-driven cars are replaced by autonomous, driverless cars is now within reach. This reduction in human effort comes at a cost: in existing systems, human operators implicitly define high-level system objectives through their actions, and autonomous systems lack this guidance. Popular design techniques for autonomy, such as those based on deep reinforcement learning, obtain such guidance from user-specified, state-based reward functions or user-provided demonstrations. Unfortunately, such techniques generally do not provide guarantees on the safe behavior of the trained controllers. This project argues for a different approach, in which mathematically unambiguous, system-level behavioral specifications expressed in temporal logic are used to guide deep reinforcement learning algorithms in training neural network-based controllers. This approach allows reasoning about the safety of learning-based control through scalable methods for formal verification of the trained controllers against the given specifications.
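A common way to connect temporal logic specifications to reinforcement learning is through the quantitative (robustness) semantics of signal temporal logic, where a positive value means a trajectory satisfies the specification and the value itself can serve as a reward signal. The sketch below computes the robustness of a standard reach-avoid pattern over a toy 2D trajectory; the specification, thresholds, and trajectory are illustrative, not the project's specifics.

```python
import numpy as np

def robustness(traj, goal, obstacle, safe_dist=1.0):
    """Robustness of G(dist_to_obstacle > safe_dist) AND F(at_goal).

    Standard quantitative semantics: G (always) takes the minimum margin
    over time, F (eventually) the maximum, conjunction the minimum.
    Positive result => the trajectory satisfies the specification.
    """
    d_obs = np.linalg.norm(traj - obstacle, axis=1) - safe_dist
    d_goal = 0.5 - np.linalg.norm(traj - goal, axis=1)  # "at goal" = within 0.5 m
    always_safe = d_obs.min()
    eventually_goal = d_goal.max()
    return min(always_safe, eventually_goal)

# Toy trajectory that skirts the obstacle and reaches the goal;
# the returned value could be used as an episode reward for an RL agent.
traj = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.1], [3.0, 3.0]])
print(robustness(traj, goal=np.array([3.0, 3.0]), obstacle=np.array([1.5, 0.0])))
```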
To address the lack of explainability of neural controllers, this project devises new techniques to distill neural-network-controlled autonomous systems into human-interpretable symbolic automata. The project blends methods from statistical learning, control theory, optimization, and formal methods to give deterministic or probabilistic guarantees on the safe behavior of autonomous systems. It integrates education and research through new graduate courses on verifiable reinforcement learning. The investigator will broadly disseminate the scientific outcomes of the project through technology transfer to industrial partners and through publications at top research conferences and journals. The expected societal impact is improved safety and explainable control for future autonomous cyber-physical systems across application domains.
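The project targets symbolic automata; as a hedged stand-in, the sketch below distills a black-box policy into a decision tree via imitation, which conveys the distillation workflow (query the opaque controller on sampled states, fit an interpretable surrogate, measure fidelity) without claiming the project's actual method. The placeholder policy and state ranges are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def opaque_policy(state):
    """Placeholder for a trained neural controller: brake when the
    time-to-collision (distance / speed) drops below 2 seconds."""
    distance, speed = state
    return int(distance / max(speed, 0.1) < 2.0)  # 1 = brake, 0 = cruise

# Query the black-box controller on sampled states
states = np.random.uniform([0.0, 0.0], [50.0, 20.0], size=(5000, 2))
actions = np.array([opaque_policy(s) for s in states])

# Fit a small, human-readable surrogate and check how faithful it is
surrogate = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print(export_text(surrogate, feature_names=["distance", "speed"]))
print("fidelity:", (surrogate.predict(states) == actions).mean())
```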
The goals of Automated Driving Systems (ADS) and Advanced Driver Assistance Systems (ADAS) include a reduction in accidental deaths, enhanced mobility for differently abled people, and an overall improvement in the quality of life for the general public. Such systems typically operate in open and highly uncertain environments, for which robust perception systems are essential. However, despite tremendous theoretical and experimental progress in computer vision, machine learning, and sensor fusion, the form and conditions under which guarantees should be provided for perception components are still unclear. The state of the art is to perform scenario-based evaluation of data against ground truth values, but this has only limited impact. The lack of formal metrics to analyze the quality of perception systems has already led to several catastrophic incidents and a plateau in ADS/ADAS development. This project develops formal languages for specifying and evaluating the quality and robustness of perception sub-systems within ADS and ADAS applications. To enable broader dissemination of this technology, the project develops graduate and undergraduate curricula to train engineers in the use of such methods, and new educational modules to explain the challenges in developing safe and robust ADS for outreach and public engagement activities. To broaden participation in computing, the investigators target the inclusion of undergraduate women in the research and development phases through summer internships.
The formal language developed in this project is based on a new spatio-temporal logic pioneered by the investigators. This logic allows one to simultaneously perform temporal reasoning about streaming perception data and spatial reasoning about objects, both within a single frame of the data and across frames. The project also develops quantitative semantics for this logic, which provide the user with quantifiable quality metrics for perception sub-systems. These semantics enable comparisons between different perception systems and architectures. Crucially, the formal language facilitates abstracting away implementation details, which in turn allows system designers and regulators to specify assumptions and guarantees for system performance at a higher level of abstraction. A further benefit of this formal language is that it enables querying databases of perception data for specific driving scenarios without the highly manual process of creating ground truth annotations. No such formal language currently exists, and this gap is a major impediment to building a thriving marketplace for perception components used in safety-critical systems. This framework sets the foundation for a requirements language between suppliers of perception components and automotive companies. The open-source, publicly available software tools developed in this project will assist engineers and governmental agencies with testing of perception systems.
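To illustrate what quantitative semantics over perception streams can look like, the sketch below evaluates a simple spatio-temporal property, namely that a tracked object's bounding boxes overlap across consecutive frames with IoU above a threshold, and returns the worst-case margin (positive means satisfied). The property, threshold, and track are illustrative stand-ins for the project's actual logic.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def persistence_robustness(track, theta=0.5):
    """Worst-case margin of: "in every pair of consecutive frames, the
    tracked object's boxes overlap with IoU > theta". Positive => satisfied.
    Combines a spatial check (IoU) with a temporal "always" (min over time).
    """
    margins = [iou(a, b) - theta for a, b in zip(track, track[1:])]
    return min(margins)

# Hypothetical pedestrian track over three frames
track = [(10, 10, 50, 90), (12, 11, 52, 91), (15, 12, 55, 92)]
print(persistence_robustness(track))  # positive -> the detection is stable
```

The same style of query, run over a database of recorded drives, is what lets one retrieve scenarios (e.g., flickering detections) without hand-made ground truth annotations.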
This Cyber-Physical Systems (CPS) project will develop advanced artificial intelligence and machine learning (AI/ML) techniques to harness the extensive untapped climatic resources that exist for direct solar heating, natural ventilation, and radiative and evaporative cooling in buildings. Although these mechanisms for conditioning building environments are colloquially termed "passive," their performance depends strongly on the intelligent control of operable elements such as windows and shading, as well as fans in hybrid systems. Toward this goal, the project will create design methodologies for climate- and occupant-responsive strategies that control these operable elements intelligently, in coordination with existing building heating, ventilation, and air conditioning systems, based on sensor measurements of the indoor and outdoor environments, weather and energy forecasts, occupancy, and occupant preferences. The solutions developed in this project can potentially yield substantial reductions in the greenhouse gas emissions generated by space heating, cooling, and ventilation. The developed techniques may be particularly valuable in affordable housing, reducing energy costs under normal conditions and improving passive survivability during extreme events and power outages.
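As a minimal sketch of climate-responsive control of an operable element, the rule below decides whether to open windows for free cooling instead of running mechanical cooling. The thresholds and inputs are illustrative assumptions; the project's controllers would additionally use weather and energy forecasts and occupant preferences learned over time.

```python
def ventilation_action(t_in, t_out, t_set, occupied, wind_ok=True):
    """Choose between natural ventilation, mechanical cooling, or no action.

    t_in / t_out -- indoor and outdoor temperatures in deg C (from sensors)
    t_set        -- occupant setpoint in deg C
    occupied     -- whether the zone is occupied
    wind_ok      -- whether outdoor conditions permit opening windows
    All thresholds below are illustrative assumptions.
    """
    cooling_needed = t_in > t_set + 0.5
    outdoor_useful = t_out < t_in - 1.0 and 12.0 <= t_out <= 26.0
    if cooling_needed and outdoor_useful and occupied and wind_ok:
        return "open_windows"        # passive cooling, HVAC stays off
    if cooling_needed:
        return "mechanical_cooling"  # fall back to the existing HVAC
    return "hold"                    # no action needed

# Warm interior, mild evening outside: ventilate instead of running the AC
print(ventilation_action(t_in=27.0, t_out=20.0, t_set=24.0, occupied=True))
```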