A game-theoretic defense against data poisoning attacks in distributed support vector machines

Title: A game-theoretic defense against data poisoning attacks in distributed support vector machines
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Zhang, R., Zhu, Q.
Conference Name: 2017 IEEE 56th Annual Conference on Decision and Control (CDC)
Keywords: AI Poisoning, Algorithm design and analysis, Computer crime, control units, data poisoning attacks, defense strategies, distributed algorithms, distributed support vector machines, DSVM learner, DSVMs, dynamic distributed algorithms, game theory, game-theoretic defense, game-theoretic framework, Games, Human Behavior, learning (artificial intelligence), learning algorithms, machine learning, multi-sensor classification, Nash equilibrium, networked systems, pattern classification, prediction tasks, pubcrawl, resilience, Resiliency, resilient DSVM algorithm, Scalability, secure DSVM algorithm, sensor fusion, Sensors, Support vector machines, Training

With the large number of sensors and control units in networked systems, distributed support vector machines (DSVMs) play a fundamental role in scalable and efficient multi-sensor classification and prediction tasks. However, DSVMs are vulnerable to adversaries who can modify and generate data to deceive the system into misclassification and misprediction. This work aims to design defense strategies for the DSVM learner against a potential adversary. We use a game-theoretic framework to capture the conflicting interests between the DSVM learner and the attacker. The Nash equilibrium of the game allows us to predict the outcome of learning algorithms in adversarial environments and to enhance the resilience of machine learning through dynamic distributed algorithms. We develop a secure and resilient DSVM algorithm with a rejection method, and show its resiliency against adversaries with numerical experiments.
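The rejection method mentioned in the abstract filters out suspicious training points before the learner fits its model. The sketch below is a deliberately minimal toy illustration of that general idea, not the paper's game-theoretic DSVM algorithm: it uses a 1-D centroid classifier, and the data, the `radius` threshold, and all function names are hypothetical.

```python
# Toy sketch of a rejection-based defense (illustrative only, not the
# paper's algorithm): drop training points that lie far from their own
# class centroid, then fit a simple 1-D threshold classifier.

def class_centroids(points):
    """points: list of (x, label) pairs with label in {+1, -1}."""
    sums = {+1: 0.0, -1: 0.0}
    counts = {+1: 0, -1: 0}
    for x, y in points:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (+1, -1)}

def reject_outliers(points, radius):
    """Keep only points within `radius` of their own class centroid."""
    c = class_centroids(points)
    return [(x, y) for x, y in points if abs(x - c[y]) <= radius]

def fit_threshold(points):
    """Decision threshold: midpoint between the two class centroids."""
    c = class_centroids(points)
    return (c[+1] + c[-1]) / 2.0

# Clean data clusters near +2 and -2; the attacker injects (-5.0, +1),
# a mislabeled point intended to drag the decision boundary.
train = [(2.0, +1), (2.2, +1), (1.8, +1),
         (-2.0, -1), (-1.9, -1), (-2.1, -1),
         (-5.0, +1)]

naive_threshold = fit_threshold(train)         # skewed by the poison (-0.875)
filtered = reject_outliers(train, radius=2.0)  # drops the poisoned point
robust_threshold = fit_threshold(filtered)     # back at the midpoint (0.0)
```

The poisoned point sits 5.25 units from its (corrupted) class centroid and is rejected, after which the threshold returns to the true midpoint between the classes. The paper's actual defense chooses its strategy via a Nash equilibrium of the learner-attacker game rather than a fixed distance threshold.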

Citation Key: zhang_game-theoretic_2017