Biblio

S
Sudholt, Dirk.  2016.  Theory of Swarm Intelligence. Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion. :715–734.

Social animals as found in fish schools, bird flocks, bee hives, and ant colonies are able to solve highly complex problems in nature. This includes foraging for food, constructing astonishingly complex nests, and evading or defending against predators. Remarkably, these animals in many cases use very simple, decentralized communication mechanisms that do not require a single leader. This makes the animals perform surprisingly well, even in dynamically changing environments. The collective intelligence of such animals is known as swarm intelligence and it has inspired popular and very powerful optimization paradigms, including ant colony optimization (ACO) and particle swarm optimization (PSO). The reasons behind their success are often elusive. We are just beginning to understand when and why swarm intelligence algorithms perform well, and how to use swarm intelligence most effectively. Understanding the fundamental working principles that determine their efficiency is a major challenge. This tutorial will give a comprehensive overview of recent theoretical results on swarm intelligence algorithms, with an emphasis on their efficiency (runtime/computational complexity). In particular, the tutorial will show how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms. The results shed light on the working principles of swarm intelligence algorithms, identify the impact of parameters and other design choices on performance, and thus help to use swarm intelligence more effectively. The tutorial will be divided into a first, larger part on ACO and a second, smaller part on PSO. For ACO we will consider simple variants of the MAX-MIN ant system. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choices of the pheromone update strategy and the evaporation rate have a drastic impact on the running time. We further consider the performance of ACO on illustrative problems from combinatorial optimization: constructing minimum spanning trees, solving shortest path problems with and without noise, and finding short tours for the TSP. For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.
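
To make the algorithm family concrete, the following is a minimal, illustrative sketch (not taken from the tutorial) of a MAX-MIN Ant System applied to the pseudo-Boolean OneMax function; the pheromone borders [1/n, 1-1/n], the best-so-far update rule and the evaporation rate rho are standard textbook choices and are assumptions here.

    import random

    def onemax(x):
        """Pseudo-Boolean test function: number of one-bits."""
        return sum(x)

    def mmas(n=50, rho=0.1, iterations=2000, seed=1):
        """Minimal MAX-MIN Ant System sketch for maximizing OneMax.

        One ant per iteration constructs a bit string by sampling bit i as 1
        with probability tau[i]; pheromones are reinforced toward the
        best-so-far solution and kept within [1/n, 1 - 1/n].
        """
        random.seed(seed)
        tau = [0.5] * n                      # initial pheromone on each bit
        lo, hi = 1.0 / n, 1.0 - 1.0 / n      # MMAS pheromone borders
        best, best_f = None, -1
        for _ in range(iterations):
            x = [1 if random.random() < tau[i] else 0 for i in range(n)]
            f = onemax(x)
            if f > best_f:                   # best-so-far update rule
                best, best_f = x, f
            for i in range(n):               # evaporation + reinforcement
                target = 1.0 if best[i] == 1 else 0.0
                tau[i] = min(hi, max(lo, (1 - rho) * tau[i] + rho * target))
            if best_f == n:
                break
        return best_f

    if __name__ == "__main__":
        print("best OneMax value found:", mmas())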

Sudozai, M. A. K., Saleem, Shahzad.  2018.  Profiling of secure chat and calling apps from encrypted traffic. 2018 15th International Bhurban Conference on Applied Sciences and Technology (IBCAST). :502–508.
Increased use of secure chat and voice/video apps has transformed social life. While the benefits and facilitations are seemingly limitless, so are the associated vulnerabilities and threats. Besides ensuring confidentiality requirements for common users, the known fact that contents are non-readable over the network makes these apps more attractive for criminals. Though access to the contents of cryptographically secure sessions is not possible, network forensics of secure apps can provide interesting information which can be of great help during criminal investigations. In this paper, we present a novel framework for profiling secure chat and voice/video calling apps which can be employed to extract hidden patterns about the app, information on the involved parties, activities of chatting, voice/video calls, status indications and notifications, while having no information about the communication protocol of the app and its security architecture. Signatures of any secure app can be developed through our framework and can become the basis of a large-scale solution. Our methodology is considered very important for different cases of criminal investigation and business intelligence solutions for service provider networks. Our results are applicable to any mobile platform: iOS, Android and Windows.
Suebsombut, P., Sekhari, A., Sureepong, P., Ueasangkomsate, P., Bouras, A..  2017.  The using of bibliometric analysis to classify trends and future directions on “smart farm”. 2017 International Conference on Digital Arts, Media and Technology (ICDAMT). :136–141.

Climate change has affected cultivation in all countries, with extreme drought, flooding, higher temperatures, and changes in the seasons, thus leaving production uncontrolled. Consequently, the smart farm has become part of a crucial trend that is needed for application in certain farm areas. The aims of the smart farm are to control and enhance food production and productivity, and to increase farmers' profits. Applying the smart farm improves the quality of production, supports farm workers, and makes better use of resources. This study aims to explore the research trends and identify research clusters on the smart farm using bibliometric analysis, which has supported farming to improve the quality of farm production. Bibliometric analysis is a method to explore the relationships among articles from a co-citation network of the articles; science mapping is then used to identify clusters in the relationships. This study examines the selected research articles in the smart farm field. The area of research in smart farm is categorized into two clusters, namely soil carbon emission from farming activity, and food security and farm management, by using the VOSviewer tool with keywords related to research articles on smart farm, agriculture, supply chain, knowledge management, traceability, and product lifecycle management from the Web of Science (WOS) and Scopus online databases. The major cluster of smart farm research is soil carbon emission from farming activity, which impacts the climate change that affects food production and productivity. The contribution is to identify trends on the smart farm to develop future research by means of bibliometric analysis.

Sugrim, Shridatt, Venkatesan, Sridhar, Youzwak, Jason A., Chiang, Cho-Yu J., Chadha, Ritu, Albanese, Massimiliano, Cam, Hasan.  2018.  Measuring the Effectiveness of Network Deception. 2018 IEEE International Conference on Intelligence and Security Informatics (ISI). :142–147.

Cyber reconnaissance is the process of gathering information about a target network for the purpose of compromising systems within that network. Network-based deception has emerged as a promising approach to disrupt attackers' reconnaissance efforts. However, limited work has been done so far on measuring the effectiveness of network-based deception. Furthermore, given that Software-Defined Networking (SDN) facilitates cyber deception by allowing network traffic to be modified and injected on-the-fly, understanding the effectiveness of employing different cyber deception strategies is critical. In this paper, we present a model to study the reconnaissance surface of a network and model the process of gathering information by attackers as interactions with a cyber defensive system that may use deception. To capture the evolution of the attackers' knowledge during reconnaissance, we design a belief system that is updated by using a Bayesian inference method. For the proposed model, we present two metrics based on KL-divergence to quantify the effectiveness of network deception. We tested the model and the two metrics by conducting experiments with a simulated attacker in an SDN-based deception system. The results of the experiments match our expectations, providing support for the model and proposed metrics.
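
As a toy illustration of the belief-update and KL-divergence ideas described above (not the authors' model), the following sketch maintains an attacker's Bayesian belief over a few hypothetical network configurations and measures how far deceptive scan responses keep that belief from the ground truth; the hypotheses and observation likelihoods are invented.

    import math

    def bayes_update(prior, likelihoods):
        """Posterior over hypotheses given per-hypothesis observation likelihoods."""
        unnorm = {h: prior[h] * likelihoods[h] for h in prior}
        z = sum(unnorm.values())
        return {h: v / z for h, v in unnorm.items()}

    def kl_divergence(p, q, eps=1e-12):
        """KL(p || q) over a shared discrete support."""
        return sum(p[h] * math.log((p[h] + eps) / (q[h] + eps)) for h in p)

    # Three hypothetical network configurations; 'real' is the ground truth.
    hypotheses = ["real", "decoy_a", "decoy_b"]
    truth = {"real": 1.0, "decoy_a": 0.0, "decoy_b": 0.0}
    belief = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior

    # Each scan observation is modeled by the probability of seeing it under
    # each hypothesis; a deceptive responder makes the decoys look likely.
    observations = [
        {"real": 0.2, "decoy_a": 0.6, "decoy_b": 0.2},
        {"real": 0.3, "decoy_a": 0.5, "decoy_b": 0.2},
    ]
    for obs in observations:
        belief = bayes_update(belief, obs)
        print("belief:", {h: round(p, 3) for h, p in belief.items()},
              "KL(truth || belief) =", round(kl_divergence(truth, belief), 3))

A growing divergence between the attacker's belief and the true configuration is the intuition behind using KL-based metrics as a measure of deception effectiveness.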

Sugumar, G., Mathur, A..  2017.  Testing the Effectiveness of Attack Detection Mechanisms in Industrial Control Systems. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :138–145.

Industrial Control Systems (ICS) are found in critical infrastructure such as power generation and water treatment. When security requirements are incorporated into an ICS, one needs to test the additional code and devices added to improve the prevention and detection of cyber attacks. Conducting such tests in legacy systems is a challenge due to the high availability requirement. An approach using Timed Automata (TA) is proposed to overcome this challenge. This approach enables assessment of the effectiveness of an attack detection method based on process invariants. The approach has been demonstrated in a case study on one stage of a 6-stage operational water treatment plant. The model constructed captured the interactions among components in the selected stage. In addition, a set of attacks, attack detection mechanisms, and security specifications were also modeled using TA. These TA models were conjoined into a network and implemented in UPPAAL. The models so implemented were found effective in detecting the attacks considered. The study suggests the use of TA as an effective tool to model an ICS and study its attack detection mechanisms, as a complement to doing so in a real plant, whether operational or under design.
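
The timed-automata and UPPAAL models themselves cannot be reproduced from the abstract, but the underlying notion of a process invariant can be illustrated with a simple Python check on a hypothetical tank stage; all thresholds and signal names below are invented for illustration.

    def invariant_violated(level_prev, level_now, pump_on, valve_open,
                           max_rise=2.0, max_fall=2.0, tol=0.2):
        """Check a simple process invariant for one hypothetical tank stage.

        If the inlet valve is open the level may rise by at most max_rise;
        if the pump drains the tank the level may fall by at most max_fall;
        otherwise the level should stay roughly constant. Sensor readings
        that contradict the actuator state suggest a (possibly spoofed) attack.
        """
        delta = level_now - level_prev
        if valve_open and not pump_on:
            return not (-tol <= delta <= max_rise)
        if pump_on and not valve_open:
            return not (-max_fall <= delta <= tol)
        if not pump_on and not valve_open:
            return abs(delta) > tol
        # Both on: net change bounded by the two rates.
        return not (-max_fall <= delta <= max_rise)

    # Example: valve closed, pump off, yet the reported level jumps by 1.5 units.
    print(invariant_violated(level_prev=40.0, level_now=41.5,
                             pump_on=False, valve_open=False))   # True -> alarm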

Suh, Y. K., Ma, J..  2017.  SuperMan: A Novel System for Storing and Retrieving Scientific-Simulation Provenance for Efficient Job Executions on Computing Clusters. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W). :283–288.

Compute-intensive simulations typically place substantial workloads on an online simulation platform backed by limited computing clusters and storage resources. Some (or most) of the simulations initiated by users may be accompanied by input parameters/files that have already been provided by other (or the same) users in the past. Unfortunately, these duplicate simulations may degrade the performance of the platform through drastic consumption of the limited resources shared by a number of users on the platform. To minimize or avoid conducting repeated simulations, we present a novel system, called SUPERMAN (SimUlation ProvEnance Recycling MANager), that can record simulation provenances and recycle the results of past simulations. This system presents a great opportunity to not only reutilize existing results but also perform various analytics helpful for those who are not familiar with the platform. The system also offers interoperability across other systems by collecting the provenances in a standardized format. In our simulated experiments we found that over half of past computing jobs could be answered by our system without actual executions.
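
The recycling idea can be illustrated with a small, hypothetical cache keyed by a hash of the simulation inputs; this is a sketch of the general concept, not the SUPERMAN implementation.

    import hashlib
    import json

    class ProvenanceCache:
        """Toy provenance store: recycle results of simulations whose inputs
        (parameters plus input-file contents) have been seen before."""

        def __init__(self):
            self._store = {}   # fingerprint -> recorded result

        @staticmethod
        def fingerprint(params, input_files):
            """Canonical hash of the parameters and input-file bytes."""
            h = hashlib.sha256(json.dumps(params, sort_keys=True).encode())
            for name in sorted(input_files):
                h.update(name.encode())
                h.update(input_files[name])
            return h.hexdigest()

        def run(self, params, input_files, simulate):
            key = self.fingerprint(params, input_files)
            if key in self._store:                       # duplicate job: recycle
                return self._store[key], True
            result = simulate(params, input_files)       # otherwise execute it
            self._store[key] = result
            return result, False

    # Usage with a stand-in simulation function.
    cache = ProvenanceCache()
    sim = lambda p, f: {"mean": p["steps"] * 0.5}
    print(cache.run({"steps": 100}, {"mesh.dat": b"..."}, sim))   # executed
    print(cache.run({"steps": 100}, {"mesh.dat": b"..."}, sim))   # recycled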

Sui, T., Marelli, D., Sun, X., Fu, M..  2019.  Stealthiness of Attacks and Vulnerability of Stochastic Linear Systems. 2019 12th Asian Control Conference (ASCC). :734–739.
The security of cyber-physical systems has been a hot topic in recent years. There are two main focuses in this area: firstly, what kinds of attacks can avoid detection, i.e., the stealthiness of attacks; secondly, what kinds of systems can stay stable under stealthy attacks, i.e., the invulnerability of systems. In this paper, we will give a detailed characterization of stealthy attacks and a detection criterion for such attacks. We will also study conditions for the vulnerability of a stochastic linear system under stealthy attacks.
Sui, Zhiyuan, de Meer, Hermann.  2019.  BAP: A Batch and Auditable Privacy Preservation Scheme for Demand-Response in Smart Grids. IEEE Transactions on Industrial Informatics. :1–1.
Advancing network technologies allow the setup of two-way communication links between energy providers and consumers. These developing technologies aim to enhance grid reliability and energy efficiency in smart grids. To achieve this goal, energy usage reports from consumers are required to be both trustworthy and confidential. In this paper, we construct a new data aggregation scheme in smart grids based on a homomorphic encryption algorithm. In the constructed scheme, obedient consumers who follow the instruction can prove their adjustment using a range proof protocol. Additionally, we propose a new identity-based signature algorithm in order to ensure authentication and integrity of the constructed scheme. By using this signature algorithm, usage reports are verified in real time. Extensive simulations demonstrate that our scheme outperforms other data aggregation schemes.
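
The abstract does not name the homomorphic scheme used; as a generic illustration of how additively homomorphic encryption enables such aggregation, the following sketch uses textbook Paillier with tiny, insecure demo parameters (the range proof and identity-based signature components are omitted).

    import math
    import random

    # Textbook Paillier with tiny, insecure demo parameters (p, q would be
    # large primes in practice). Multiplying ciphertexts adds the plaintexts.
    p, q = 101, 113
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # mu = L(g^lam mod n^2)^-1 mod n

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Three consumers report encrypted usage; the aggregator multiplies the
    # ciphertexts, so only the decrypted *sum* is revealed to the provider.
    readings = [30, 27, 45]
    aggregate = 1
    for m in readings:
        aggregate = (aggregate * encrypt(m)) % n2
    print("decrypted total consumption:", decrypt(aggregate))   # -> 102
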
Suksomboon, Kalika, Ueda, Kazuaki, Tagami, Atsushi.  2018.  Content-centric Privacy Model for Monitoring Services in Surveillance Systems. Proceedings of the 5th ACM Conference on Information-Centric Networking. :190–191.
This paper proposes a content-centric privacy (CCP) model that enables privacy-preserving monitoring services in surveillance systems without cloud dependency. We design a simple yet powerful method that could not be obtained from a cloud-like system. The CCP model includes two key ideas: (1) the separation of the private data (i.e., target object images) from the public data (i.e., background images), and (2) service authentication with the classification model. Deploying the CCP model over ICN enables privacy to be centered on the content itself rather than relying on a cloud system. Our preliminary analysis shows that the ICN-based CCP model can preserve privacy with respect to W3-privacy, in which the private information of the target object is decoupled from the queries and cameras.
Suksomboon, Kalika, Shen, Zhishu, Ueda, Kazuaki, Tagami, Atsushi.  2019.  C2P2: Content-Centric Privacy Platform for Privacy-Preserving Monitoring Services. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:252–261.
Motivated by ubiquitous surveillance cameras in a smart city, a monitoring service can be provided to citizens. However, the rise of privacy concerns may disrupt this advanced service. Yet, the existing cloud-based services have not clearly proven that they can preserve W3-privacy, in which the relationship among three types of information, i.e., who requests the service, what the target is and where the camera is, does not leak. We address this problem by proposing a content-centric privacy platform (C2P2) that enables the construction of a W3-privacy-preserving monitoring service without cloud dependency. C2P2 uses an image classification model of a target serving as the key to access the monitoring service specific to the target. In C2P2, communication is based on information-centric networking (ICN), which enables privacy preservation to be centered on the content itself rather than relying on a centralized system. Moreover, to preserve the privacy of bystanders, C2P2 separates the sensitive information (e.g., human faces) from the non-sensitive information (e.g., image background), while the privacy-aware forwarding strategies in C2P2 enable data aggregation and prevent privacy leakage resulting from false positives of image recognition. We evaluate the privacy leakage of C2P2 compared to that of the cloud-based system. The privacy analysis shows that, compared to the cloud-based system, C2P2 achieves a lower privacy loss ratio while reducing the communication cost significantly.
Sulavko, A. E., Eremenko, A. V., Fedotov, A. A..  2017.  Users' Identification through Keystroke Dynamics Based on Vibration Parameters and Keyboard Pressure. 2017 Dynamics of Systems, Mechanisms and Machines (Dynamics). :1–7.

The paper considers the issue of protecting data from unauthorized access through users' authentication by keystroke dynamics. It proposes to use keyboard pressure parameters in combination with time characteristics of keystrokes to identify a user. The authors designed a keyboard with special sensors that allow recording of complementary parameters. The paper presents an estimation of the information value of these new characteristics and the error probabilities of users' identification based on perceptron algorithms, Bayes' rule and quadratic form networks. The best result is the following: 20 users are identified and the error rate is 0.6%.
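
As a rough illustration of the classification step (not the authors' perceptron, Bayes-rule or quadratic-form-network implementations), the following sketch applies a Gaussian naive Bayes classifier to hypothetical hold-time and key-pressure features.

    import math
    from collections import defaultdict

    class GaussianNaiveBayes:
        """Minimal Gaussian naive Bayes for identifying users from
        keystroke feature vectors (e.g., key hold times and peak pressure)."""

        def fit(self, samples, labels):
            grouped = defaultdict(list)
            for x, y in zip(samples, labels):
                grouped[y].append(x)
            self.stats = {}
            for user, rows in grouped.items():
                cols = list(zip(*rows))
                means = [sum(c) / len(c) for c in cols]
                varis = [max(sum((v - m) ** 2 for v in c) / len(c), 1e-6)
                         for c, m in zip(cols, means)]
                self.stats[user] = (means, varis, len(rows) / len(samples))
            return self

        def predict(self, x):
            def log_post(user):
                means, varis, prior = self.stats[user]
                ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                         for xi, m, v in zip(x, means, varis))
                return ll + math.log(prior)
            return max(self.stats, key=log_post)

    # Hypothetical training data: [mean hold time (ms), mean pressure (a.u.)].
    X = [[95, 0.42], [100, 0.40], [92, 0.45], [140, 0.70], [150, 0.68], [145, 0.72]]
    y = ["alice", "alice", "alice", "bob", "bob", "bob"]
    model = GaussianNaiveBayes().fit(X, y)
    print(model.predict([97, 0.41]))    # -> 'alice'
    print(model.predict([148, 0.69]))   # -> 'bob'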

Sule, Rupali, Chaudhari, Sangita.  2018.  Preserving Location Privacy in Geosocial Applications using Error Based Transformation. 2018 International Conference on Smart City and Emerging Technology (ICSCET). :1–4.
Geo-social applications deal with constantly sharing a user's current geographic information in terms of location (latitude and longitude). Such applications can be used by many people to get information about their surroundings with the help of their friends' locations and their recommendations. But without any privacy protection, these systems can be easily misused by tracking the users. We propose the Error Based Transformation (ERB) approach for location transformation, which provides significantly improved location privacy without adding uncertainty into query results or relying on strong assumptions about server security. The key insight is to apply secure, user-specific, distance-preserving coordinate transformations to all location data shared with the server. Only the friends of a user can get exact coordinates by applying the inverse transformation with the secret key shared with them. Servers can evaluate all location queries correctly on transformed data. The ERB privacy mechanism guarantees that servers are unable to see or infer actual location data from the transformed data. The ERB privacy mechanism is successful against a powerful adversary model, where prototype measurements show that it incurs very little performance overhead, making it suitable for today's mobile devices.
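
The core idea of a keyed, distance-preserving transformation can be sketched as a rotation plus translation derived from a shared secret; the key derivation and the use of planar (projected) coordinates below are illustrative assumptions, not the ERB construction itself.

    import hashlib
    import math

    def keyed_transform(secret):
        """Derive a rotation angle and translation offsets from a shared secret.
        A rotation plus translation preserves all pairwise distances."""
        digest = hashlib.sha256(secret.encode()).digest()
        theta = int.from_bytes(digest[:8], "big") / 2**64 * 2 * math.pi
        dx = int.from_bytes(digest[8:16], "big") % 10000
        dy = int.from_bytes(digest[16:24], "big") % 10000
        return theta, dx, dy

    def transform(point, secret):
        theta, dx, dy = keyed_transform(secret)
        x, y = point
        return (x * math.cos(theta) - y * math.sin(theta) + dx,
                x * math.sin(theta) + y * math.cos(theta) + dy)

    def inverse_transform(point, secret):
        theta, dx, dy = keyed_transform(secret)
        x, y = point[0] - dx, point[1] - dy
        return (x * math.cos(theta) + y * math.sin(theta),
                -x * math.sin(theta) + y * math.cos(theta))

    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

    alice, bob = (19.07, 72.88), (19.10, 72.90)     # projected plane coordinates
    ta, tb = transform(alice, "group-secret"), transform(bob, "group-secret")
    print(round(dist(alice, bob), 6) == round(dist(ta, tb), 6))   # True
    print(inverse_transform(ta, "group-secret"))    # recovers alice's location
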
Sullivan, Daniel, Colbert, Edward, Cowley, Jennifer.  2018.  Mission Resilience for Future Army Tactical Networks. 2018 Resilience Week (RWS). :11–14.

Cyber-physical systems are an integral component of weapons, sensors and autonomous vehicles, as well as cyber assets directly supporting tactical forces. Mission resilience of tactical networks affects command and control, which is important for successful military operations. Traditional engineering methods for mission assurance will not scale during battlefield operations. Commanders need useful mission resilience metrics to help them evaluate the ability of cyber assets to recover from incidents to fulfill mission essential functions. We develop 6 cyber resilience metrics for tactical network architectures. We also illuminate how psychometric modeling is necessary for future research to identify resilience metrics that are both applicable to the dynamic mission state and meaningful to commanders and planners.

Sultana, K. Z..  2017.  Towards a software vulnerability prediction model using traceable code patterns and software metrics. 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). :1022–1025.

Software security is an important aspect of ensuring software quality. The goal of this study is to help developers evaluate software security using traceable patterns and software metrics during development. The concept of traceable patterns is similar to design patterns but they can be automatically recognized and extracted from source code. If these patterns can better predict vulnerable code compared to traditional software metrics, they can be used in developing a vulnerability prediction model to classify code as vulnerable or not. By analyzing and comparing the performance of traceable patterns with metrics, we propose a vulnerability prediction model. This study explores the performance of some code patterns in vulnerability prediction and compares them with traditional software metrics. We use the findings to build an effective vulnerability prediction model. We evaluate security vulnerabilities reported for Apache Tomcat, Apache CXF and three stand-alone Java web applications. We use machine learning and statistical techniques for predicting vulnerabilities using traceable patterns and metrics as features. We found that patterns have a lower false negative rate and higher recall in detecting vulnerable code than the traditional software metrics.
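
The general evaluation workflow described here, extracting pattern/metric counts as features and measuring recall and false negatives of a classifier, can be sketched as follows with synthetic placeholder data and an arbitrary classifier choice (not the study's actual features, data or models).

    # Sketch of the evaluation workflow: features are per-file counts of
    # traceable patterns (or software metrics); labels mark files with
    # reported vulnerabilities. The data below is synthetic placeholder data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score, confusion_matrix

    rng = np.random.default_rng(0)
    n_files, n_features = 400, 12
    X = rng.poisson(3, size=(n_files, n_features)).astype(float)
    # Synthetic ground truth: a few pattern counts correlate with vulnerability.
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, n_files) > 5).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print("recall:", round(recall_score(y_te, pred), 3))
    print("false negative rate:", round(fn / (fn + tp), 3))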

Sultana, K. Z., Williams, B. J..  2017.  Evaluating micro patterns and software metrics in vulnerability prediction. 2017 6th International Workshop on Software Mining (SoftwareMining). :40–47.

Software security is an important aspect of ensuring software quality. Early detection of vulnerable code during development is essential for developers to make software testing cost- and time-effective. Traditional software metrics are used for early detection of software vulnerability, but they are not directly related to code constructs and do not specify any particular granularity level. The goal of this study is to help developers evaluate software security using class-level traceable patterns called micro patterns to reduce security risks. The concept of micro patterns is similar to design patterns, but they can be automatically recognized and mined from source code. If micro patterns can better predict vulnerable classes compared to traditional software metrics, they can be used in developing a vulnerability prediction model. This study explores the performance of class-level patterns in vulnerability prediction and compares them with traditional class-level software metrics. We studied security vulnerabilities as reported for one major release of Apache Tomcat, Apache Camel and three stand-alone Java web applications. We used machine learning techniques for predicting vulnerabilities using micro patterns and class-level metrics as features. We found that micro patterns have higher recall in detecting vulnerable classes than the software metrics.

Sultana, K. Z., Williams, B. J., Bosu, A..  2018.  A Comparison of Nano-Patterns vs. Software Metrics in Vulnerability Prediction. 2018 25th Asia-Pacific Software Engineering Conference (APSEC). :355–364.

Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they do not clearly address the granularity level of the issue to precisely pinpoint vulnerabilities. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can better predict vulnerable methods compared to software metrics, they can be used in developing vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction. We also compare them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features. We applied the same techniques using method-level software metrics as features and compared their performance with nano-patterns. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore, have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest developers use nano-patterns as features for vulnerability prediction to augment existing approaches as these code constructs outperform standard metrics in terms of prediction recall.

Sultana, K. Z., Deo, A., Williams, B. J..  2017.  Correlation Analysis among Java Nano-Patterns and Software Vulnerabilities. 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE). :69–76.

Ensuring software security is essential for developing reliable software. Software can suffer from security problems due to weaknesses in code constructs during software development. Our goal is to relate software security with different code constructs so that developers can be aware very early of coding weaknesses that might be related to a software vulnerability. In this study, we chose Java nano-patterns as code constructs; these are method-level patterns defined on the attributes of Java methods. This study aims to find the correlation between software vulnerability and method-level structural code constructs known as nano-patterns. We found the vulnerable methods from 39 versions of three major releases of Apache Tomcat for our first case study. We extracted nano-patterns from the affected methods of these releases. We also extracted nano-patterns from the non-vulnerable methods of Apache Tomcat, and for this, we selected the last version of three major releases (6.0.45 for release 6, 7.0.69 for release 7 and 8.0.33 for release 8) as the non-vulnerable versions. Then, we compared the nano-pattern distributions in vulnerable versus non-vulnerable methods. In our second case study, we extracted nano-patterns from the affected methods of three vulnerable J2EE web applications: Blueblog 1.0, Personalblog 1.2.6 and Roller 0.9.9, all of which were deliberately made vulnerable for testing purposes. We found that some nano-patterns such as objCreator, staticFieldReader, typeManipulator, looper, exceptions, localWriter, arrReader are more prevalent in affected methods whereas others such as straightLine are more common in non-affected methods. We conclude that nano-patterns can be used as indicators of the vulnerability-proneness of code.

Sultana, Kazi Zakia, Chong, Tai-Yin.  2019.  A Proposed Approach to Build an Automated Software Security Assessment Framework using Mined Patterns and Metrics. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :176–181.

Software security is a major concern of developers who intend to deliver reliable software. Although there is research that focuses on vulnerability prediction and discovery, there is still a need for building security-specific metrics to measure software security and vulnerability-proneness quantitatively. The existing methods are either based on software metrics (defined on the physical characteristics of code, e.g. complexity or lines of code), which are not security-specific, or on some generic patterns known as nano-patterns (Java method-level traceable patterns that characterize a Java method or function). Other methods predict vulnerabilities using text mining approaches or graph algorithms, which perform poorly in cross-project validation and fail to be a generalized prediction model for any system. In this paper, we envision constructing an automated framework that will assist developers in assessing the security level of their code and guide them towards developing secure code. To accomplish this goal, we aim to refine and redefine the existing nano-patterns and software metrics to make them more security-centric so that they can be used for measuring the software security level of source code (either a file or a function) with higher accuracy. In this paper, we present our visionary approach through a series of three consecutive studies where we (1) study the challenges of the current software metrics and nano-patterns in vulnerability prediction, (2) redefine and characterize the nano-patterns and software metrics so that they can capture security-specific properties of code and measure the security level quantitatively, and finally (3) implement an automated framework for developers to automatically extract the values of all the patterns and metrics for a given code segment and then flag the estimated security level as feedback based on our research results. We conducted some preliminary experiments and present the results, which indicate that our vision can be practically implemented and will have valuable implications for the community of software security.

Sultana, Nik, Kohlweiss, Markulf, Moore, Andrew W..  2016.  Light at the Middle of the Tunnel: Middleboxes for Selective Disclosure of Network Monitoring to Distrusted Parties. Proceedings of the 2016 Workshop on Hot Topics in Middleboxes and Network Function Virtualization. :1–6.

Network monitoring is vital to the administration and operation of networks, but it requires privileged access that only highly trusted parties are granted. This severely limits the opportunity for external parties, such as service or equipment providers, auditors, or even clients, to measure the health or operation of a network in which they are stakeholders, but do not have access to its internal structure. In this position paper we propose the use of middleboxes to open up network monitoring to external parties using privacy-preserving technology. This will allow distrusted parties to make more inferences about the network state than currently possible, without learning any precise information about the network or the data that crosses it. Thus the state of the network will be more transparent to external stakeholders, who will be empowered to verify claims made by network operators. Network operators will be able to provide more information about their network without compromising security or privacy.

Sultana, Subrina, Nasrin, Sumaiya, Lipi, Farhana Kabir, Hossain, Md Afzal, Sultana, Zinia, Jannat, Fatima.  2019.  Detecting and Preventing IP Spoofing and Local Area Network Denial (LAND) Attack for Cloud Computing with the Modification of Hop Count Filtering (HCF) Mechanism. 2019 International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering (IC4ME2). :1–6.
In today's world the number of consumers of cloud computing is increasing day by day, so security is a big concern for the cloud computing environment in keeping users' data safe and secure. Among the different types of attacks in the cloud, one of the most harmful and frequently occurring is the Distributed Denial of Service (DDoS) attack. DDoS is a type of flooding attack that is initiated by sending a large number of invalid packets to limit the services of the victim server; as a result, the server cannot serve legitimate requests. A DDoS attack can be carried out through many strategies, such as malformed packets, IP spoofing, smurf attacks, teardrop attacks, SYN flood attacks, local area network denial (LAND) attacks, etc. This paper focuses on IP spoofing and LAND-based DDoS attacks. The objective of this paper is to propose an algorithm to detect and prevent IP spoofing and LAND attacks. To achieve this objective, a new approach is proposed combining two existing solutions to DDoS attacks caused by IP spoofing and ill-formed packets. The proposed approach provides a transparent solution, filters out the spoofed packets and minimizes memory exhaustion by minimizing the number of insertions and updates required in the data table. Finally, the approach is implemented and simulated using the CloudSim 3.0 toolkit (a virtual cloud environment), followed by result analysis and comparison with existing algorithms.
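
The two detection checks described, the LAND condition and the Hop Count Filtering test, can be illustrated with the following sketch; the initial-TTL inference and the learned hop-count table are generic HCF ingredients, and the packet fields are invented examples rather than the paper's exact algorithm.

    # Common initial TTL values used to infer the hop count from an observed TTL.
    INITIAL_TTLS = (32, 64, 128, 255)

    def hop_count(observed_ttl):
        """Infer hops travelled: distance from the nearest initial TTL above."""
        initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
        return initial - observed_ttl

    def is_land_attack(packet):
        """LAND attack: source and destination address (and port) are identical."""
        return (packet["src_ip"] == packet["dst_ip"]
                and packet.get("src_port") == packet.get("dst_port"))

    def is_spoofed(packet, hop_table, tolerance=1):
        """Hop Count Filtering: flag packets whose inferred hop count deviates
        from the learned hop count for that source IP."""
        expected = hop_table.get(packet["src_ip"])
        if expected is None:
            return False          # unseen source: defer to the learning phase
        return abs(hop_count(packet["ttl"]) - expected) > tolerance

    hop_table = {"203.0.113.7": 14}    # learned during a legitimate-traffic phase
    pkt_land = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.5",
                "src_port": 80, "dst_port": 80, "ttl": 64}
    pkt_spoof = {"src_ip": "203.0.113.7", "dst_ip": "10.0.0.5",
                 "src_port": 4444, "dst_port": 80, "ttl": 122}   # 6 hops != 14
    print(is_land_attack(pkt_land), is_spoofed(pkt_spoof, hop_table))  # True True
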
Sultangazin, Alimzhan, Tabuada, Paulo.  2019.  Symmetries and privacy in control over the cloud: uncertainty sets and side knowledge. 2019 IEEE 58th Conference on Decision and Control (CDC). :7209–7214.
Control algorithms, like model predictive control, can be computationally expensive and may benefit from being executed over the cloud. This is especially the case for nodes at the edge of a network since they tend to have reduced computational capabilities. However, control over the cloud requires transmission of sensitive data (e.g., system dynamics, measurements) which undermines privacy of these nodes. When choosing a method to protect the privacy of these data, efficiency must be considered to the same extent as privacy guarantees to ensure adequate control performance. In this paper, we review a transformation-based method for protecting privacy, previously introduced by the authors, and quantify the level of privacy it provides. Moreover, we also consider the case of adversaries with side knowledge and quantify how much privacy is lost as a function of the side knowledge of the adversary.
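
The flavor of such transformation-based privacy can be illustrated with a small numpy sketch in which a secret change of coordinates disguises a linear system before it is shared, yet the client can always map the cloud-side trajectory back; the matrices below are arbitrary examples, not the authors' construction.

    import numpy as np

    rng = np.random.default_rng(42)

    # True plant (kept private by the edge node): x+ = A x + B u.
    A = np.array([[0.9, 0.2], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])

    # Secret invertible transformations chosen by the client (illustrative).
    P = rng.normal(size=(2, 2)) + 3 * np.eye(2)     # state map
    R = np.array([[2.5]])                           # input map

    # What the cloud sees: the disguised system (At, Bt) and disguised signals.
    At = P @ A @ np.linalg.inv(P)
    Bt = P @ B @ np.linalg.inv(R)

    x = np.array([[1.0], [0.5]])
    z = P @ x                                        # disguised initial state
    for k in range(5):
        u = np.array([[-0.3 * x[1, 0]]])             # some control input
        x = A @ x + B @ u                            # private trajectory
        z = At @ z + Bt @ (R @ u)                    # cloud-side trajectory
        assert np.allclose(np.linalg.inv(P) @ z, x)  # client can always map back

    print("cloud-side trajectory maps back to the true trajectory")
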
Sumantra, I., Gandhi, S. Indira.  2020.  DDoS attack Detection and Mitigation in Software Defined Networks. 2020 International Conference on System, Computation, Automation and Networking (ICSCAN). :1–5.
This work aims to formulate an effective scheme that can detect and mitigate Distributed Denial of Service (DDoS) attacks in Software Defined Networks. Distributed Denial of Service attacks are among the most destructive attacks on the internet; whenever you hear of a website being hacked, it has probably been the victim of a DDoS attack. A DDoS attack is aimed at disrupting the normal operation of a system by making services and resources unavailable to legitimate users, overloading the system with excessive superfluous traffic from distributed sources. The distributed set of compromised hosts that performs the attack is referred to as a botnet. Software Defined Networking, being an emerging technology, offers a solution to reduce network management complexity: it separates the control plane and the data plane, and this decoupling provides centralized control of the network with programmability and flexibility. This work harnesses this programmability and centralized control of SDN to obtain the randomness of the network flow data. The statistical approach uses the source IPs in the network and various attributes of TCP flags and calculates entropy from them. The proposed technique can detect volume-based and application-based DDoS attacks such as TCP SYN flood, ping flood and slow HTTP attacks. The methodology is evaluated through emulation using Mininet, and detection and mitigation strategies are implemented in the POX controller. The experimental results show that the proposed method improves performance evaluation parameters, including the attack detection time, the delay to serve a legitimate request in the presence of an attacker, and overall CPU utilization.
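
The entropy computation at the heart of this approach can be sketched as follows; the window contents, the baseline and the threshold are illustrative, not the values used in the paper.

    import math
    from collections import Counter

    def entropy(items):
        """Shannon entropy (bits) of the empirical distribution of items."""
        counts = Counter(items)
        total = len(items)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def detect(window_src_ips, baseline, threshold=1.0):
        """Flag a window whose source-IP entropy deviates from the baseline by
        more than the threshold (e.g., many spoofed sources inflate entropy,
        while a single aggressive source collapses it)."""
        return abs(entropy(window_src_ips) - baseline) > threshold

    # Baseline learned from normal traffic: a handful of regular clients.
    normal = ["10.0.0.%d" % (i % 5 + 1) for i in range(100)]
    baseline = entropy(normal)

    # Attack window: SYN flood with randomly spoofed source addresses.
    attack = ["198.51.100.%d" % i for i in range(100)]
    print("normal flagged:", detect(normal, baseline))   # False
    print("attack flagged:", detect(attack, baseline))   # True
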
Sumec, S..  2014.  Software tool for verification of sampled values transmitted via IEC 61850-9-2 protocol. Electric Power Engineering (EPE), Proccedings of the 2014 15th International Scientific Conference on. :113-117.

Nowadays, the process bus is increasingly used for communication between devices in substations. In addition to signaling various device statuses using GOOSE messages, it is possible to transmit measured values, which can be used for system diagnostics or other advanced functions. Transmission of such values via Ethernet is well defined in the IEC 61850-9-2 protocol. The paper introduces a tool designed for verification of sampled values generated by various devices using this protocol.

Sumit, S., Mitra, D., Gupta, D..  2014.  Proposed Intrusion Detection on ZRP based MANET by effective k-means clustering method of data mining. Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on. :156–160.

Mobile Ad-Hoc Networks (MANETs) consist of peer-to-peer, infrastructure-less communicating nodes that are highly dynamic. As a result, routing data becomes more challenging. Ultimately, routing protocols for such networks face the challenges of random topology change, the nature of the link (symmetric or asymmetric) and the power requirement during data transmission. Under such circumstances both proactive and reactive routing are usually inefficient. We consider the zone routing protocol (ZRP), which combines the qualities of the proactive (IARP) and reactive (IERP) protocols. In ZRP, an updated topological map of the zone centered on each node is maintained. Immediate routes are available inside each zone. In order to communicate outside a zone, a route discovery mechanism is employed. The local routing information of the zones helps in this route discovery procedure. In MANETs, security is always an issue: it is possible that a node turns malicious and hampers the normal flow of packets in the MANET. In order to overcome this issue we use a clustering technique to separate the nodes having intrusive behavior from those with normal behavior. We call this technique effective k-means clustering, which is motivated by k-means. We propose to implement an Intrusion Detection System on each node of the MANET that uses ZRP for packet flow. Then we use effective k-means to separate the malicious nodes from the network. Thus, our ad-hoc network will be free from any malicious activity and the normal flow of packets will be possible.
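
The abstract does not detail how effective k-means differs from plain k-means, so the following sketch uses ordinary k-means (k=2) over invented per-node behavioral features to show how nodes with intrusive behavior could be separated from normal ones.

    import random

    def kmeans(points, k=2, iters=50, seed=0):
        """Plain k-means over points given as tuples; returns (centroids, assignment)."""
        random.seed(seed)
        centroids = random.sample(points, k)
        for _ in range(iters):
            assign = [min(range(k), key=lambda j: sum((a - b) ** 2
                      for a, b in zip(p, centroids[j]))) for p in points]
            for j in range(k):
                members = [p for p, a in zip(points, assign) if a == j]
                if members:
                    centroids[j] = tuple(sum(c) / len(members)
                                         for c in zip(*members))
        return centroids, assign

    # Per-node features: (packet drop ratio, route-request frequency) - invented.
    nodes = {"n1": (0.02, 1.1), "n2": (0.03, 0.9), "n3": (0.01, 1.0),
             "n4": (0.85, 6.2), "n5": (0.90, 5.8)}     # n4, n5 behave abnormally
    centroids, assign = kmeans(list(nodes.values()))
    suspect = max(range(2), key=lambda j: centroids[j][0])   # higher drop ratio
    print([name for name, a in zip(nodes, assign) if a == suspect])  # ['n4', 'n5']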