Applications of CPS technologies that deal with automated machines able to take the place of humans in dangerous environments or manufacturing processes, or that resemble humans in appearance, behavior, and/or cognition.
Event
CyPhy 2016
Call for Papers: Workshop on Design, Modeling and Evaluation of Cyber Physical Systems (CyPhy 2016), held in conjunction with ESWEEK 2016. October 6, 2016 | Pittsburgh, PA, USA | http://www.cyphy.org/
Submitted by Anonymous on June 10th, 2016
Event
FISP 2016
The Second International Workshop on Future Information Security, Privacy and Forensics for Complex Systems (FISP 2016), in conjunction with the 11th International Conference on Future Networks and Communications (FNC'16). Topics of Interest:
Submitted by Anonymous on April 26th, 2016
Legged robots have captured the imagination of society at large, through entertainment and through the dissemination of research findings. Yet, today's reality of what (bipedal) legged robots can do falls short of society's vision. A big part of the reason is that legged robots are viewed as surrogates for humans, able to go wherever humans can, as aids or assistants, including where it might be too dangerous or risky for humans. It is in the expectation of robustness and walking facility that today's research hits its limits, especially when the terrain has granular properties. Impeding progress is the lack of a holistic approach to the cyber-physical modeling and control of legged robots. The vision of this work is to unite experts in granular mechanics, optimal control, and learning theory in order to define a methodology for advancing cyber-physical systems (CPS) involving a tight coupling of the physical with the cyber through dynamic interactions that must be learned online. The proposed work will advance the science of cyber-physical systems by more explicitly tying sensing, perception, and computing to the optimization and control of physical systems whose properties are variable and uncertain. Achieving reliable, adaptive legged locomotion over terrain with arbitrary granular properties would transform several application domain areas of robotics; e.g., disaster response, agricultural and industrial robotics, and planetary robotics. More broadly, the same tools would apply to related CPS such as terrain-aware exoskeletons and rehabilitation prosthetics for persons with missing, non-functional, or injured legs, as well as to energy networks with time-varying, nonlinear dynamics models. The CPS platform to be studied is that of a bipedal robot locomoting over granular ground material with uncertain physical properties (sand, gravel, dirt, etc.).
The proposed work seeks to overcome current impediments to reliable legged locomotion over uncertain terrain types, which fundamentally relies on the controlled interaction of the robot's feet with the physical environment. The research goal is to improve the perception and control of legged locomotion over granular media for the express purpose of achieving robust, adaptive, terrain-aware locomotion. It revolves around the hypothesis that simple models with decent predictive performance and low computational overhead are sufficient for the optimal control formulations, as the compute-constrained adaptive subsystem will both learn and classify the peculiarities of the terrain online. The main research objectives will involve: [1] a validated co-simulation platform for legged robot movement over granular media; [2] terrain-dependent, stable gait generation and gait transition strategies via optimal control; [3] online, compute-constrained learning of granular interactions for adaptation and terrain classification; and [4] validated contributions using experimental testbeds involving variable and unknown (to the robot) granular media. Given the outreach and participation value of the robotic platforms and the research, they will be used as outreach tools and to create new educational modules promoting STEM fields. Further, the multi-disciplinary nature of the work will be highlighted in order to emphasize its importance.
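The online terrain-classification objective above can be illustrated with a minimal sketch: a nearest-centroid classifier whose class centroids are updated incrementally from per-footstep contact features. All feature values, labels, and class names here are invented for illustration; they are not from the project.

```python
# Hypothetical sketch: online terrain classification from foot-contact
# features, assuming each footstep yields e.g. a (stiffness, damping)
# estimate. Centroids are updated with a running mean, so the classifier
# adapts online under the tight compute budget the abstract describes.

class OnlineTerrainClassifier:
    """Nearest-centroid classifier with running-mean centroid updates."""

    def __init__(self):
        self.centroids = {}  # terrain label -> (mean feature vector, count)

    def update(self, label, features):
        # Incrementally fold a labeled footstep into the class centroid.
        mean, n = self.centroids.get(label, ([0.0] * len(features), 0))
        n += 1
        mean = [m + (f - m) / n for m, f in zip(mean, features)]
        self.centroids[label] = (mean, n)

    def classify(self, features):
        # Return the label of the nearest centroid (squared Euclidean).
        def dist(mean):
            return sum((m - f) ** 2 for m, f in zip(mean, features))
        return min(self.centroids, key=lambda k: dist(self.centroids[k][0]))

clf = OnlineTerrainClassifier()
clf.update("sand",   [0.2, 0.8])   # low stiffness, high damping
clf.update("gravel", [0.6, 0.4])
clf.update("dirt",   [0.9, 0.2])   # high stiffness, low damping
print(clf.classify([0.25, 0.75]))  # nearest to the "sand" centroid
```

A real system would use richer contact features and a classifier validated against the granular-media testbeds, but the constant-time update per footstep is the property that matters for the compute-constrained setting.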
Georgia Institute of Technology
National Science Foundation
Daniel Goldman
Erik Verriest
Submitted by Patricio Vela on April 25th, 2016
Part 1: Upper-limb motor impairments arise from a wide range of clinical conditions including amputations, spinal cord injury, or stroke. Addressing lost hand function, therefore, is a major focus of rehabilitation interventions, and research on robotic hands and hand exoskeletons aimed at restoring fine motor control has gained significant momentum recently. Integration of these robots with neural control mechanisms is also an ongoing research direction. We will develop prosthetic and wearable hands controlled via nested control that seamlessly blends neural control based on human brain activity and dynamic control based on sensors on robots. These Hand Augmentation using Nested Decision (HAND) systems will also provide rudimentary tactile feedback to the user. The HAND design framework will contribute to the assistive and augmentative robotics field. The resulting technology will improve the quality of life for individuals with lost limb function. The project will help train engineers skilled in addressing multidisciplinary challenges. Through outreach activities, STEM careers will be promoted at the K-12 level, and individuals from underrepresented groups in engineering will be recruited to engage in this research project, contributing to the diversity of the STEM workforce. Part 2: The team previously introduced the concept of human-in-the-loop cyber-physical systems (HILCPS). Using the HILCPS hardware-software co-design and automatic synthesis infrastructure, we will develop prosthetic and wearable HAND systems that are robust to uncertainty in human intent inference from physiological signals. One challenge arises from the fact that the human and the cyber system jointly operate on the same physical element. Synthesis of networked real-time applications from algorithm design environments poses a framework challenge.
These challenges will be addressed by a tightly coupled optimal nested control strategy that relies on EEG-EMG-context fusion for human intent inference. Custom distributed embedded computational and robotic platforms will be built and iteratively refined. This work will enhance the HILCPS design framework, while simultaneously making novel contributions to body/brain interface technology and assistive/augmentative robot technology. Specifically, we will (1) develop a theoretical EEG-EMG-context fusion framework for agile HILCPS application domains; (2) develop theory for and design novel control theoretic solutions to handle uncertainty, blend motion/force planning with high-level human intent and ambient intelligence to robustly execute daily manipulation activities; (3) further develop and refine the HILCPS domain-specific design framework to enable rapid deployment of HILCPS algorithms onto distributed embedded systems, empowering a new class of real-time algorithms that achieve distributed embedded sensing, analysis, and decision making; (4) develop new paradigms to replace, retrain, or augment hand function via the prosthetic/wearable HAND by optimizing performance on a subject-by-subject basis.
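One simple way to picture EEG-EMG-context fusion for intent inference is naive-Bayes evidence combination: each channel contributes an independent likelihood over a small set of intents, and the product is normalized into a posterior. This is only an illustrative sketch; the likelihood numbers and intent labels are invented placeholders, not the project's actual fusion framework.

```python
# Hypothetical sketch of EEG-EMG-context fusion as naive-Bayes evidence
# combination over a small set of grasp intents. All numbers below are
# illustrative placeholders, not measured data.

def fuse_intent(priors, *likelihoods):
    """Combine a prior over intents with independent channel likelihoods."""
    posterior = dict(priors)
    for channel in likelihoods:          # e.g. EEG, EMG, task context
        for intent in posterior:
            posterior[intent] *= channel.get(intent, 1e-9)
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

priors  = {"grasp": 0.5, "release": 0.5}
eeg     = {"grasp": 0.7, "release": 0.3}   # motor-imagery decoder output
emg     = {"grasp": 0.6, "release": 0.4}   # forearm-muscle activation
context = {"grasp": 0.8, "release": 0.2}   # object detected near the hand

post = fuse_intent(priors, eeg, emg, context)
print(max(post, key=post.get))  # "grasp"
```

The appeal of this structure for a nested controller is that a weak or ambiguous channel (likelihoods near uniform) simply contributes little, so the downstream dynamic controller can condition its aggressiveness on the posterior's confidence.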
WPI
National Science Foundation
Cagdas Onal
Taskin Padir
Submitted by Taskin Padir on April 6th, 2016
During the last decade, we have witnessed a rapid penetration of autonomous systems technology into aerial, road, underwater, and sea vehicles. The autonomy assumed by these vehicles holds the potential to increase performance significantly, for instance, by reducing delays and increasing capacity, while enhancing safety, in a number of transportation systems. However, to exploit the full potential of these autonomy-enabled transportation systems, we must rethink transportation networks and control algorithms that coordinate autonomous vehicles operating on such networks. This project focuses on the design and operation of autonomy-enabled transportation networks that provide provable guarantees on achieving high performance and maintaining safety at all times. The foundational problems arising in this domain involve taking into account the physics governing the vehicles in order to coordinate them using cyber means. This research effort aims to advance the science of cyber-physical systems by following a unique and radical approach, drawing inspiration and techniques from non-equilibrium statistical mechanics and self-organizing systems, and blending this inspiration with the foundational tools of queueing theory, control theory, and optimization. This approach may allow orders of magnitude improvement in the servicing capabilities of various transportation networks for moving goods or people. The applications include the automation of warehouses, factory floors, sea ports, aircraft carrier decks, transportation networks involving driverless cars, drone-enabled delivery networks, air traffic management, and military logistics networks. The project also aims to start a new wave of classes and tutorials that will create trained engineers and a research community in the area of safe and efficient transportation networks enabled by autonomous cyber-physical systems.
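The abstract's claim that autonomy can reduce delay while increasing capacity has a classic queueing-theory illustration: in an M/M/1 queue the mean time in system is W = 1/(mu - lambda), so raising the effective service rate (for instance, through tighter autonomous-vehicle coordination) cuts delay sharply near saturation. The numbers below are purely illustrative and not from the project.

```python
# Illustrative M/M/1 sketch (not the project's model): mean delay
# W = 1 / (mu - lambda) for arrival rate lambda < service rate mu.

def mm1_mean_delay(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue; requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

lam = 9.0                       # vehicles arriving per minute (illustrative)
for mu in (10.0, 12.0, 15.0):   # service rates under increasing coordination
    print(f"mu={mu}: mean delay {mm1_mean_delay(lam, mu):.2f} min")
```

Even this toy model shows the nonlinearity the project exploits: moving mu from 10 to 12 vehicles/min (a 20% capacity gain) cuts mean delay by roughly two thirds when the system is operating near saturation.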
Massachusetts Institute of Technology
National Science Foundation
Submitted by Sertac Karaman on April 5th, 2016
Cyber-physical systems of the near future will collaborate with humans. Such cognitive systems will need to understand what the humans are doing. They will need to interpret human action in real-time and predict the humans' immediate intention in complex, noisy and cluttered environments. This proposal puts forward a new architecture for cognitive cyber-physical systems that can understand complex human activities, and focuses specifically on manipulation activities. The proposed architecture, motivated by biological perception and control, consists of three layers. At the bottom layer are vision processes that detect, recognize and track humans, their body parts, objects, tools, and object geometry. The middle layer contains symbolic models of the human activity, and it assembles through a grammatical description the recognized signal components of the previous layer into a representation of the ongoing activity. Finally, at the top layer is the cognitive control, which decides which parts of the scene will be processed next and which algorithms will be applied where. It modulates the vision processes by fetching additional knowledge when needed, and directs attention by controlling the active vision system to point its sensors at specific places. Thus, the bottom layer is the perception, the middle layer is the cognition, and the top layer is the control. All layers have access to a knowledge base, built in offline processes, which contains the semantics about the actions. The feasibility of the approach will be demonstrated through the development of a smart manufacturing system, called MONA LISA, which assists humans in assembly tasks. This system will monitor humans as they perform assembly tasks. It will recognize each assembly action, determine whether it is correct, communicate possible errors to the human, and suggest ways to proceed.
The system will have advanced visual sensing and perception; action understanding grounded in robotics and human studies; semantic and procedural-like memory and reasoning; and a control module linking high-level reasoning and low-level perception for real-time, reactive and proactive engagement with the human assembler. The proposed work will bring new tools and methodology to the areas of sensor networks and robotics and is applicable, besides smart manufacturing, to a large variety of sectors and applications. Being able to analyze human behavior using vision sensors will have impact on many sectors, ranging from healthcare and advanced driver assistance to human robot collaboration. The project will also catalyze K-12 outreach, new courseware (undergraduate and graduate), publication and open-source software.
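The middle "cognition" layer's grammatical assembly of recognized actions can be pictured as a toy sequence checker: a small grammar over action symbols validates an observed assembly sequence and reports the first out-of-order step, which is the error the system would communicate to the human. The grammar and action names below are invented for illustration, not MONA LISA's actual representation.

```python
# Hypothetical sketch of grammar-based activity checking: for each state,
# the set of action symbols that may legally follow. Action names invented.

ASSEMBLY_GRAMMAR = {
    "start":        {"pick_base"},
    "pick_base":    {"place_base"},
    "place_base":   {"pick_screw"},
    "pick_screw":   {"fasten_screw"},
    "fasten_screw": {"pick_screw", "done"},   # repeat screws, or finish
}

def check_sequence(actions):
    """Return (True, None) if valid, else (False, first offending action)."""
    state = "start"
    for action in actions:
        if action not in ASSEMBLY_GRAMMAR.get(state, set()):
            return False, action     # the error to report to the human
        state = action
    return True, None

print(check_sequence(["pick_base", "place_base", "pick_screw",
                      "fasten_screw", "done"]))        # valid sequence
print(check_sequence(["pick_base", "pick_screw"]))     # out-of-order step
```

A full system would use a probabilistic grammar over noisy detections rather than exact symbols, but the structure is the same: the grammar turns a stream of recognized components into a judgment about the ongoing activity.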
University of Maryland at College Park
National Science Foundation
Cornelia Fermuller
Submitted by Cornelia Fermuller on March 31st, 2016
Part 1: Upper-limb motor impairments arise from a wide range of clinical conditions including amputations, spinal cord injury, or stroke. Addressing lost hand function, therefore, is a major focus of rehabilitation interventions, and research on robotic hands and hand exoskeletons aimed at restoring fine motor control has gained significant momentum recently. Integration of these robots with neural control mechanisms is also an ongoing research direction. We will develop prosthetic and wearable hands controlled via nested control that seamlessly blends neural control based on human brain activity and dynamic control based on sensors on robots. These Hand Augmentation using Nested Decision (HAND) systems will also provide rudimentary tactile feedback to the user. The HAND design framework will contribute to the assistive and augmentative robotics field. The resulting technology will improve the quality of life for individuals with lost limb function. The project will help train engineers skilled in addressing multidisciplinary challenges. Through outreach activities, STEM careers will be promoted at the K-12 level, and individuals from underrepresented groups in engineering will be recruited to engage in this research project, contributing to the diversity of the STEM workforce. Part 2: The team previously introduced the concept of human-in-the-loop cyber-physical systems (HILCPS). Using the HILCPS hardware-software co-design and automatic synthesis infrastructure, we will develop prosthetic and wearable HAND systems that are robust to uncertainty in human intent inference from physiological signals. One challenge arises from the fact that the human and the cyber system jointly operate on the same physical element. Synthesis of networked real-time applications from algorithm design environments poses a framework challenge.
These challenges will be addressed by a tightly coupled optimal nested control strategy that relies on EEG-EMG-context fusion for human intent inference. Custom distributed embedded computational and robotic platforms will be built and iteratively refined. This work will enhance the HILCPS design framework, while simultaneously making novel contributions to body/brain interface technology and assistive/augmentative robot technology. Specifically, we will (1) develop a theoretical EEG-EMG-context fusion framework for agile HILCPS application domains; (2) develop theory for and design novel control theoretic solutions to handle uncertainty, blend motion/force planning with high-level human intent and ambient intelligence to robustly execute daily manipulation activities; (3) further develop and refine the HILCPS domain-specific design framework to enable rapid deployment of HILCPS algorithms onto distributed embedded systems, empowering a new class of real-time algorithms that achieve distributed embedded sensing, analysis, and decision making; (4) develop new paradigms to replace, retrain, or augment hand function via the prosthetic/wearable HAND by optimizing performance on a subject-by-subject basis.
Northeastern University
National Science Foundation
Deniz Erdogmus
Submitted by Deniz Erdogmus on March 31st, 2016
This proposal addresses the safety and security issues that arise when giving users remote access to a multi-robot research test-bed, where mobile robots can coordinate their behaviors in a collaborative manner. Through a public interface, users are able to schedule, and subsequently upload, their own code and run their experiments, while being provided with the scientific data produced through the experiment. Such an open-access framework has the potential to significantly lower the barriers to entry in robotics research and education, yet is inherently vulnerable from a safety and security point of view. This proposal aims at the development and definition of appropriate cyber-physical security notions, formal verification algorithms, and safety-critical, real-time control code for teams of mobile robots that will ultimately make such a system both useful and safe. On top of the research developments, this proposal contains a Transition to Practice component that will allow the system to become a highly usable, shared test-bed; one that can serve as a model for other open, remote-access test-beds. Safety is of central importance to the successful realization of any remote-access test-bed, and failure to enforce safety could result in injury to local operators and damage to equipment. To guarantee safe operation, while allowing users to test algorithms remotely, new science is required in the domain of safety-critical control. To address this need, the proposed work follows a three-pronged approach, namely (1) development and use of novel types of barrier certificates in the context of minimally invasive, optimization-based controllers with provable safety properties, (2) formal methods for verification of safety-critical control code for networked cyber-physical systems, and (3) novel methods for protecting against machine-to-machine cyber attacks.
By bringing together ideas from multi-agent robotics, safety-critical control, formal verification, and cyber-security, this project will result in a unified and coherent approach to security in networked cyber-physical systems. The potential impact of the resulting open-access multi-robot test-bed is significant along the research, education, and general outreach dimensions in that a future generation of roboticists at institutions across the country will have open and remote access to a world-class research facility, and educators at all levels will be able to run experiments on actual robots.
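The "minimally invasive, optimization-based controller" idea behind barrier certificates can be sketched for the simplest case: a single-integrator robot whose nominal velocity command is projected onto the half-space that keeps a barrier function h(x) = ||x - obstacle||^2 - r^2 nonnegative. With one linear constraint, the quadratic program has a closed-form solution. The dynamics, gains, and geometry here are illustrative assumptions, not the test-bed's actual controllers.

```python
# Hedged sketch of a barrier-certificate safety filter: solve
#   min ||u - u_nom||^2  subject to  dh/dt >= -alpha * h(x)
# in closed form for a single-integrator robot and one circular obstacle.

def safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Minimally correct u_nom so the barrier constraint holds."""
    dx = [xi - oi for xi, oi in zip(x, obstacle)]
    h = sum(d * d for d in dx) - radius ** 2           # barrier value h(x)
    grad = [2 * d for d in dx]                         # gradient of h
    lhs = sum(g * u for g, u in zip(grad, u_nom))      # dh/dt under u_nom
    slack = lhs + alpha * h
    if slack >= 0:
        return list(u_nom)                             # nominal command is safe
    gg = sum(g * g for g in grad)
    lam = -slack / gg                                  # active-constraint multiplier
    return [u + lam * g for u, g in zip(u_nom, grad)]  # minimal correction

# Robot at (1, 0) commanded straight at an obstacle at the origin (r = 0.5):
u = safety_filter([1.0, 0.0], [-1.0, 0.0], [0.0, 0.0], 0.5)
print(u)  # the approach velocity is reduced just enough to satisfy the barrier
```

The filter leaves safe commands untouched and alters unsafe ones by the smallest amount that satisfies the constraint, which is exactly the "minimally invasive" property: user code runs unmodified until it would violate safety.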
Georgia Tech Research Corporation
National Science Foundation
Submitted by Magnus Egerstedt on March 28th, 2016
13th International Conference on Informatics in Control, Automation and Robotics (ICINCO). In cooperation with: AAAI, EUROMICRO, INNS, euRobotics AISBL, APCA and APNNA. Co-sponsored by: IFAC. Sponsored by: INSTICC. INSTICC is a member of: WfMC and FIPA. Logistics partner: SCITEVENTS
Submitted by Anonymous on March 25th, 2016
Event
MORSE 2016
MORSE 2016 - Third Workshop on Model-Driven Robot Software Engineering. MORSE'16 is co-located with RoboCup 2016. RoboCup Dates: June 30 - July 4, 2016. Workshop Date: July 1, 2016. Location: Messe Leipzig, Leipzig, Germany. Website: http://st.inf.tu-dresden.de/MORSE16
Submitted by Anonymous on March 11th, 2016