Interpreting a Black-Box Model used for SCADA Attack detection in Gas Pipelines Control System

Title: Interpreting a Black-Box Model used for SCADA Attack detection in Gas Pipelines Control System
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Rathod, Jash, Joshi, Chaitali, Khochare, Janavi, Kazi, Faruk
Conference Name: 2020 IEEE 17th India Council International Conference (INDICON)
Keywords: autoencoder, compositionality, explainable AI, Human Behavior, human factors, Industries, Interpretable Machine Learning, LIME, LRP, machine learning, machine learning algorithms, Pipelines, Prediction algorithms, Predictive models, pubcrawl, resilience, Resiliency, SCADA, SCADA System Security, SCADA systems, SCADA Systems Security
Abstract: Various Machine Learning techniques are considered "black-boxes" because of their limited interpretability and explainability. Such opacity cannot be afforded, especially in the domain of Cyber-Physical Systems, where attacks can cause huge losses to industrial and government infrastructure. Supervisory Control And Data Acquisition (SCADA) systems need to detect and be protected from cyber-attacks. Thus, we need to adopt approaches that make the system secure, can explain the predictions made by the model, and interpret the model in a human-understandable format. Recently, Autoencoders have shown great success in attack detection in SCADA systems. Numerous interpretable machine learning techniques have been developed to help us explain and interpret models. The work presented here is a novel approach that uses techniques like Local Interpretable Model-Agnostic Explanations (LIME) and Layer-wise Relevance Propagation (LRP) to interpret Autoencoder networks trained on a Gas Pipelines Control System to detect attacks in the system.
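The abstract's core idea, applying LIME-style local surrogates to an autoencoder's anomaly score, can be sketched as follows. This is not the paper's implementation: the `reconstruction_error` function is a hypothetical stand-in for a trained autoencoder, and `lime_style_explanation` is a minimal hand-rolled version of LIME's perturb-and-fit-a-weighted-linear-model procedure, written here with NumPy only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained autoencoder on SCADA sensor readings:
# this toy "reconstruction" copies only the first feature, so the anomaly
# score (mean squared reconstruction error) depends on the remaining ones.
def reconstruction_error(X):
    recon = np.zeros_like(X)
    recon[:, 0] = X[:, 0]
    return np.mean((X - recon) ** 2, axis=1)

def lime_style_explanation(score_fn, x, n_samples=500, scale=0.1, kernel_width=0.5):
    """Fit a locally weighted linear surrogate to score_fn around instance x.

    Returns one weight per feature: the surrogate's estimate of how much each
    feature locally drives the black-box anomaly score.
    """
    d = x.shape[0]
    # 1. Perturb the instance in its neighbourhood.
    Z = x + rng.normal(scale=scale, size=(n_samples, d))
    # 2. Query the black box on the perturbations.
    y = score_fn(Z)
    # 3. Weight samples by proximity to x (RBF kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Weighted least squares for a linear model with a bias term.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # drop the bias; keep per-feature local weights

x0 = np.array([1.0, 0.5, -0.3])
weights = lime_style_explanation(reconstruction_error, x0)
```

For this toy score the error is (x1² + x2²)/3, so the surrogate should assign near-zero weight to feature 0 and weights with the signs of x1 and x2 to the other two, which is the kind of per-feature attribution LIME produces for an attack alarm.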
Citation Key: rathod_interpreting_2020