WAISE 2020 - Virtual

Third International Workshop on Artificial Intelligence Safety Engineering (WAISE 2020)

Research, engineering and regulatory frameworks are needed to realise the full potential of Artificial Intelligence (AI): they guarantee a standard level of safety and settle issues such as compliance with ethical standards and liability for accidents involving, for example, autonomous cars. Designing AI-based systems to operate in proximity to, or in collaboration with, humans means that current safety engineering and legal mechanisms need to be revisited to ensure that individuals and their property are not harmed, and that the desired benefits outweigh the potential unintended consequences.

Approaches to AI safety range from the purely theoretical (moral philosophy, ethics) to the purely practical (engineering). Combining philosophy and theoretical science with applied science and engineering is essential to creating safe machines. This calls for an interdisciplinary approach covering the technical (engineering) aspects of how to actually create, test, deploy, operate and evolve safe AI-based systems, as well as broader strategic, ethical and policy issues.

Increasing levels of AI in "smart" sensory-motor loops allow intelligent systems to operate in increasingly dynamic, uncertain and complex environments with increasing degrees of autonomy, with humans progressively removed from the control loop. Adaptation to the environment is achieved through Machine Learning (ML) methods rather than more traditional engineering approaches, such as system modelling and programming. Certain ML methods, such as deep learning, reinforcement learning and their combination, have recently proved very promising. However, the inscrutability, or opaqueness, of the statistical models for perception and decision-making that they produce poses yet another challenge. Moreover, the combination of autonomy and inscrutability in these AI-based systems is particularly challenging in safety-critical applications, such as autonomous vehicles, personal care or assistive robots, and collaborative industrial robots.

The WAISE workshop explores new ideas on safety engineering, ethically aligned design, and regulation and standards for AI-based systems. In particular, WAISE provides a forum for thematic presentations and in-depth discussions on safe AI architectures, bounded morality, ML safety, safe human-machine interaction and safety considerations in automated decision-making systems, in ways that make AI-based systems more trustworthy, accountable and ethically aligned.

WAISE aims to bring together experts, researchers and practitioners from diverse communities, such as AI, safety engineering, ethics, standardization and certification, robotics, cyber-physical systems, safety-critical systems, and application domains including automotive, healthcare, manufacturing, agriculture, aerospace, critical infrastructures and retail.

Topics of interest include, but are not limited to:
  • Regulating AI-based systems: safety standards and certification
  • Safety in AI-based system architectures: safety by design
  • Runtime AI safety monitoring and adaptation
  • Safe machine learning and meta-learning
  • Safety constraints and rules in decision-making systems
  • AI-based system predictability
  • Continuous Verification and Validation of safety properties
  • Avoiding negative side effects
  • Algorithmic bias and AI discrimination
  • Model-based engineering approaches to AI safety
  • Ethically aligned design of AI-based systems
  • Machine-readable representations of ethical principles and rules
  • Uncertainty in AI
  • Accountability, responsibility and liability of AI-based systems
  • AI safety risk assessment and reduction
  • Confidence, self-esteem and the distributional shift problem
  • Reward hacking and training corruption
  • Self-explanation, self-criticism and the transparency problem
  • Safety in the exploration vs exploitation dilemma
  • Simulation for safe exploration and training
  • Human-machine interaction safety
  • AI applied to safety engineering
  • AI safety education and awareness
  • Shared autonomy and human-autonomy teaming
  • AI safety regulation and education
  • Safety testing, verification and validation
  • Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others


Organization Committee

  • Orlando Avila-Garcia, Atos, Spain
  • Mauricio Castillo-Effen, Lockheed Martin, USA
  • Chih-Hong Cheng, DENSO, Germany
  • Zakaria Chihani, CEA LIST, France
  • Simos Gerasimou, University of York, UK

Steering Committee

  • Rob Alexander, University of York, UK
  • Nozha Boujemaa, DATAIA Institute & INRIA, France
  • Virginia Dignum, Umea University, Sweden
  • Huascar Espinoza, CEA LIST, France
  • Philip Koopman, Carnegie Mellon University, USA
  • Stuart Russell, UC Berkeley, USA
  • Raja Chatila, ISIR - Sorbonne University, France

Programme Committee

  • Rob Alexander, University of York, UK
  • Vincent Aravantinos, Argo AI, Germany
  • Rob Ashmore, Defence Science and Technology Laboratory, UK
  • Alec Banks, Defence Science and Technology Laboratory, UK
  • Markus Borg, RISE SICS, Sweden
  • Lionel Briand, University of Ottawa, Canada
  • Simon Burton, Bosch, Germany
  • Guillaume Charpiat, INRIA, France
  • Jose M. Faria, Safe Perspective Ltd, UK
  • John Favaro, Trust-IT, Italy
  • Michael Fisher, University of Liverpool, UK
  • Jelena Frtunikj, Argo AI, Germany
  • Simon Fuerst, BMW, Germany
  • Mario Gleirscher, University of York, UK
  • Stephane Graham-Lengrand, SRI International, USA
  • Jeremie Guiochet, LAAS-CNRS, France
  • Jose Hernandez-Orallo, Universitat Politecnica de Valencia, Spain
  • Nico Hochgeschwende, Hochschule Bonn-Rhein-Sieg, Germany
  • Xiaowei Huang, University of Liverpool, UK
  • Bernhard Kaiser, ANSYS, Germany
  • Guy Katz, Hebrew University of Jerusalem, Israel
  • Philip Koopman, Carnegie Mellon University, USA
  • Timo Latvala, Space Systems Finland, Finland
  • Chokri Mraidha, CEA LIST, France
  • Jonas Nilsson, Nvidia, USA
  • Sebastiano Panichella, Zurich University of Applied Sciences, Switzerland
  • Davy Pissoort, KU Leuven, Belgium
  • Philippa Ryan, Adelard, UK
  • Mehrdad Saadatmand, RISE SICS, Sweden
  • Rick Salay, University of Waterloo, Canada
  • Erwin Schoitsch, Austrian Institute of Technology, Austria
  • Mario Trapp, Fraunhofer ESK, Germany
  • Ilse Verdiesen, TU Delft, Netherlands
  • Alan Winfield, Bristol Robotics Lab, UK