Inferring OpenFlow Rules by Active Probing in Software-Defined Networks

Title: Inferring OpenFlow Rules by Active Probing in Software-Defined Networks
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Lin, P. C., Li, P. C., Nguyen, V. L.
Conference Name: 2017 19th International Conference on Advanced Communication Technology (ICACT)
ISBN Number: 978-89-968650-9-4
Keywords: active probing, Apriori algorithm, clustering, control systems, controllers, delays, denial-of-service, DoS attacks, inferring sdn by probing and rule extraction, INSPIRE, IP networks, k-means clustering, Network reconnaissance, OpenFlow rules, packet delay time, Probes, Probing, probing packets, pubcrawl, reactive rules, Receivers, Reconnaissance, Resiliency, rule inference, security threats, software defined networking, Software-Defined Networks, telecommunication security

Software-defined networking (SDN) separates the control plane from the underlying devices and allows it to control the data plane from a global view. While SDN simplifies management, it also introduces new security threats. By learning the reactive rules, attackers can launch denial-of-service (DoS) attacks that send numerous rule-matched packets, each triggering a packet-in message, to overburden the controller. In this work, we present a novel method, ``INferring SDN by Probing and Rule Extraction'' (INSPIRE), to discover the flow rules in SDN from probing packets. We measure the delay of probing packets, classify them into classes, and infer the rules. The method involves three steps: probing, clustering, and rule inference. First, forged packets with varying header fields are sent to measure processing and propagation time along the path. Second, INSPIRE classifies the packets into multiple classes using k-means clustering on packet delay time. Finally, the Apriori algorithm finds header fields common to each class to infer the rules. We show via simulation that INSPIRE can infer flow rules with accuracy up to 98.41% and very low false-positive rates.
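The abstract's pipeline (delay measurement, k-means clustering, frequent-field extraction) can be illustrated with a small sketch. This is not the authors' implementation; the delay distributions, header fields, and the minimal 1-D k-means below are illustrative assumptions, and the "rule inference" step is simplified to intersecting the (field, value) pairs shared by every packet in a delay class rather than running full Apriori.

```python
import random

random.seed(0)

def kmeans_1d(values, k=2, iters=25):
    """Minimal 1-D k-means: return a cluster label for each value."""
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels

# Hypothetical probe results: (header fields, observed delay in ms).
# Assumption: rule-matched packets are forwarded quickly, while misses
# trigger a controller round trip (packet-in) and arrive much later.
probes = (
    [({"dst_port": 80, "proto": "tcp"}, random.gauss(1.0, 0.1)) for _ in range(20)]
    + [({"dst_port": 22, "proto": "tcp"}, random.gauss(8.0, 0.5)) for _ in range(20)]
)

labels = kmeans_1d([delay for _, delay in probes], k=2)

# Simplified rule inference: within each delay class, keep the
# (field, value) pairs present in every packet of that class.
for c in range(2):
    field_sets = [set(h.items()) for (h, _), l in zip(probes, labels) if l == c]
    common = set.intersection(*field_sets) if field_sets else set()
    print(f"class {c}: common fields = {sorted(common)}")
```

With the two well-separated delay distributions assumed here, the clustering cleanly splits fast from slow probes, and each class's common fields recover the header values that distinguish the hypothetical rule.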

Citation Key: lin_inferring_2017