Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems

Title: Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Ganji, F., Amir, S., Tajik, S., Forte, D., Seifert, J.-P.
Conference Name: 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)
Date Published: March 2020
ISBN Number: 978-3-9819263-4-7
Keywords: Adversary Models, Approximation algorithms, Boolean functions, Composed Hardware, cryptanalysis attacks, cryptographic scheme, cryptography, Hardware, Human Behavior, learning (artificial intelligence), logic locking, machine learning, machine learning-based adversary model, machine learning-based attacks, Metrics, physically unclonable functions, Picture archiving and communication systems, pubcrawl, resilience, Resiliency, Root-of-trust, Scalability

The concept of the adversary model has been widely applied in the context of cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in formalizing the capabilities and limitations of potential attackers. These models further enable the designer to verify the security of the scheme or protocol under investigation. Although well established for conventional cryptanalysis attacks, adversary models for attackers who enjoy the advantages of machine learning techniques have not yet been developed thoroughly. In particular, for composed hardware, which is often security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper explores adversary models from the machine learning perspective. In this regard, we provide examples of machine learning-based attacks against hardware primitives, e.g., obfuscation schemes and hardware roots-of-trust, that were claimed to be infeasible. We demonstrate that this assumption is invalid because inaccurate adversary models have been considered in the literature.
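The abstract cites machine learning-based attacks on hardware primitives such as physically unclonable functions (PUFs). As a rough, self-contained illustration of such an attack (not taken from the paper itself), the sketch below simulates the well-known linear additive-delay model of an arbiter PUF and learns it from challenge-response pairs with logistic regression. All sizes, learning-rate values, and names here are hypothetical choices for the demonstration.

```python
import math
import random

random.seed(0)
N_STAGES = 16            # hypothetical number of PUF delay stages
N_TRAIN, N_TEST = 2000, 500

# Secret delay vector of the simulated arbiter PUF (linear additive-delay model).
w_secret = [random.gauss(0, 1) for _ in range(N_STAGES + 1)]

def features(challenge):
    # Parity feature transform: phi_i = prod_{j>=i} (1 - 2*c_j), plus a bias term.
    # Under this transform the arbiter PUF response is linearly separable.
    phi, prod = [], 1.0
    for c in reversed(challenge):
        prod *= (1 - 2 * c)
        phi.append(prod)
    phi.reverse()
    return phi + [1.0]

def respond(challenge):
    # Ground-truth PUF response: sign of the delay difference.
    phi = features(challenge)
    return 1 if sum(a * b for a, b in zip(w_secret, phi)) > 0 else 0

def rand_challenge():
    return [random.randint(0, 1) for _ in range(N_STAGES)]

# The "adversary" collects challenge-response pairs, as in a modeling attack.
train = [rand_challenge() for _ in range(N_TRAIN)]

# Plain logistic regression trained by stochastic gradient descent.
w = [0.0] * (N_STAGES + 1)
lr = 0.1
for epoch in range(50):
    for c in train:
        phi = features(c)
        z = sum(a * b for a, b in zip(w, phi))
        z = max(-30.0, min(30.0, z))          # clamp to avoid overflow
        g = 1 / (1 + math.exp(-z)) - respond(c)
        w = [wi - lr * g * xi for wi, xi in zip(w, phi)]

# Evaluate the learned model on challenges it has never seen.
test = [rand_challenge() for _ in range(N_TEST)]
acc = sum(
    (1 if sum(a * b for a, b in zip(w, features(c))) > 0 else 0) == respond(c)
    for c in test
) / N_TEST
print(f"model accuracy on unseen challenges: {acc:.2%}")
```

Because the parity transform makes the response a linear threshold of the features, a few thousand observed challenge-response pairs suffice for the model to predict unseen responses with high accuracy; an adversary model that ignores this learning capability would wrongly deem the primitive unclonable.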

Citation Key: ganji_pitfalls_2020