S
Srinivasa Rao, Routhu, Pais, Alwyn R..  2017.  Detecting Phishing Websites Using Automation of Human Behavior. Proceedings of the 3rd ACM Workshop on Cyber-Physical System Security. :33–42.

In this paper, we propose a technique to detect phishing attacks based on how humans behave when exposed to a fake website. Some online users submit fake credentials to a login page before submitting their actual credentials, then observe the login status of the resulting page to check whether the website is fake or legitimate. We automate the same behavior with our application (FeedPhish), which feeds fake values into the login page. If the web page logs in successfully, it is classified as phishing; otherwise it undergoes further heuristic filtering. If the suspicious site passes through all heuristic filters, the website is classified as legitimate. In our experiments, the application achieved a true positive rate of 97.61%, a true negative rate of 94.37%, and an overall accuracy of 96.38%. It demands neither third-party services nor prior knowledge such as web history or a whitelist or blacklist of URLs. It detects not only zero-day phishing attacks but also phishing sites hosted on compromised domains.
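
A minimal Python sketch of the core heuristic, assuming a hypothetical login form (the field names and the login-failure test are illustrative assumptions, not FeedPhish's actual implementation, which adds further heuristic filters):

```python
# Probe a login page with deliberately fake credentials; a page that
# "accepts" them is a phishing candidate. Field names and the failure
# test below are illustrative assumptions.
import requests

def looks_like_phishing(login_url, user_field="username", pass_field="password"):
    fake = {user_field: "no_such_user_42", pass_field: "Wr0ngPass!"}
    resp = requests.post(login_url, data=fake, timeout=10)
    # Legitimate sites normally re-display the login form or an error;
    # a page that proceeds past login with fake credentials is suspect.
    login_failed = any(w in resp.text.lower() for w in ("invalid", "incorrect", "password"))
    return not login_failed  # True -> pass on to further heuristic filters
```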

Srinivasan, Avinash, Dong, Hunter, Stavrou, Angelos.  2017.  FROST: Anti-Forensics Digital-Dead-DROp Information Hiding RobuST to Detection & Data Loss with Fault Tolerance. Proceedings of the 12th International Conference on Availability, Reliability and Security. :82:1–82:8.

Covert operations involving clandestine dealings and communication through cryptic and hidden messages have existed since time immemorial. While these do have a negative connotation, they have had their fair share of use in situations and applications beneficial to society in general. A "Dead Drop" is one such method of espionage tradecraft used to physically exchange items or information between two individuals using a secret rendezvous point. With a "Dead Drop", to maintain operational security, the exchange itself is asynchronous. Information hiding in slack space is one modern technique that has been used extensively. Slack space is the unused space within the last block allocated to a stored file. However, hiding in slack space operates under significant constraints, with little resilience and fault tolerance. In this paper, we propose FROST – a novel asynchronous "Digital Dead Drop" robust to detection and data loss, with tunable fault tolerance. Fault tolerance is a critical attribute of a secure and robust system design. Through extensive validation of a FROST prototype implementation on Ubuntu Linux, we confirm the performance and robustness of the proposed digital dead drop against detection and data loss. We verify the recoverability of the secret message under various operating conditions, ranging from block corruption and drive defragmentation to growing existing files on the target drive.
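A small Python sketch of the slack-space notion the paper builds on: how many unused bytes the last allocated block of a file leaves for hiding. The block size is an illustrative assumption, and FROST's fault-tolerant encoding is not shown.

```python
import os

BLOCK_SIZE = 4096  # typical filesystem allocation unit; varies in practice

def slack_bytes(path, block_size=BLOCK_SIZE):
    """Unused bytes in the file's last allocated block -- the region
    slack-space hiding techniques write into."""
    remainder = os.path.getsize(path) % block_size
    return 0 if remainder == 0 else block_size - remainder

print(slack_bytes("/etc/hostname"))  # hiding capacity of one file's slack region
```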

Srinivasan, Shruthi, Mazumdar, Arka Prokash.  2019.  Mitigating Content Poisoning in Content Centric Network: A Lightweight Approach. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.
The internet paradigm was designed to forward packets from host to host, but nowadays the focal point has moved to data. Information Centric Networking (ICN) provides architectures to meet this requirement, and the Content Centric Network (CCN) is the most widely used ICN architecture. An Information Centric Network's ability to perform in-network caching leads to faster retrieval of data on subsequent requests. Although latency is reduced, caching in a router makes it vulnerable to attacks that focus on the cache. One such attack is content poisoning, which fills the router with poisoned content, making it difficult for the end user to retrieve the original valid data. In this paper, we propose a solution to mitigate content poisoning attacks that consumes minimal time and requires minimal storage overhead during the verification process.
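
A simplified Python sketch of the idea of verifying content before it enters a router's cache, assuming the consumer supplies the expected content digest; the paper's lightweight scheme is richer than this illustration.

```python
import hashlib

cache = {}

def on_data_packet(name: str, content: bytes, expected_digest: str) -> None:
    """Cache content only if its digest matches the digest the consumer
    asked for; otherwise drop it so poisoned content never pollutes the cache."""
    if hashlib.sha256(content).hexdigest() == expected_digest:
        cache[name] = content  # future interests are served from cache
    # else: discard the (potentially poisoned) data packet

good = b"video-chunk-1"
on_data_packet("/cnn/video/1", good, hashlib.sha256(good).hexdigest())
on_data_packet("/cnn/video/2", b"poisoned", hashlib.sha256(b"original").hexdigest())
print(sorted(cache))  # only the verified name is cached
```
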
Srinivasan, Venkatesh, Reps, Thomas.  2016.  An Improved Algorithm for Slicing Machine Code. Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications. :378–393.

Machine-code slicing is an important primitive for building binary analysis and rewriting tools, such as taint trackers, fault localizers, and partial evaluators. However, it is not easy to create a machine-code slicer that exhibits a high level of precision. Moreover, the problem of creating such a tool is compounded by the fact that a small amount of local imprecision can be amplified via cascade effects. Most instructions in instruction sets such as Intel's IA-32 and ARM are multi-assignments: they have several inputs and several outputs (registers, flags, and memory locations). This aspect of the instruction set introduces a granularity issue during slicing: there are often instructions at which we would like the slice to include only a subset of the instruction's semantics, whereas the slice is forced to include the entire instruction. Consequently, the slice computed by state-of-the-art tools is very imprecise, often including essentially the entire program. This paper presents an algorithm to slice machine code more accurately. To counter the granularity issue, our algorithm performs slicing at the microcode level, instead of the instruction level, and obtains a more precise microcode slice. To reconstitute a machine-code program from a microcode slice, our algorithm uses machine-code synthesis. Our experiments on IA-32 binaries of FreeBSD utilities show that, in comparison to slices computed by a state-of-the-art tool, our algorithm reduces the size of backward slices by 33%, and forward slices by 70%.
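A toy Python illustration of why micro-op granularity helps: a backward slice computed over a def-use graph whose nodes are individual micro-operations, so a flag-only dependence can be excluded. The graph and naming are invented; this is not the paper's algorithm, which also reconstitutes machine code via synthesis.

```python
def backward_slice(deps, criterion):
    """deps maps each micro-op (one definition) to the micro-ops it uses."""
    slice_set, worklist = set(), [criterion]
    while worklist:
        node = worklist.pop()
        if node not in slice_set:
            slice_set.add(node)
            worklist.extend(deps.get(node, ()))
    return slice_set

# 'add eax, ebx' split into its value and flag micro-ops:
deps = {"eax2": ["eax1", "ebx1"], "zf1": ["eax1", "ebx1"], "ecx1": ["zf1"]}
print(backward_slice(deps, "eax2"))  # flag micro-op zf1 stays out of the slice
```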

Srinu, Sesham, Reddy, M. Kranthi Kumar, Temaneh-Nyah, Clement.  2019.  Physical layer security against cooperative anomaly attack using bivariate data in distributed CRNs. 2019 11th International Conference on Communication Systems & Networks (COMSNETS). :410–413.
Wireless communication network (WCN) performance primarily depends on physical layer security, which is critical among all the layers of the OSI network model. The physical layer is typically prone to attacks by anomalous/malicious users owing to the openness of wireless channels. Cognitive radio networking (CRN) is a recently emerged wireless technology that faces numerous security challenges because of its unlicensed access to wireless channels. In CRNs, security issues arise mainly during spectrum sensing and are more pronounced during distributed spectrum sensing. In the recent past, various anomaly effects have been modelled and detectors developed by applying advanced statistical techniques. Nevertheless, many of these detectors are based on sensing data of a single variable (energy measurement), and their performance degrades drastically when the data is contaminated by multiple anomaly nodes that attack the network cooperatively. Hence, an efficient multiple-anomaly detection algorithm is needed to eliminate all possible cooperative attacks. To achieve this, in this work, the impact of anomalies on detection probability is verified beforehand in developing an efficient algorithm that uses bivariate data to detect possible attacks with the Mahalanobis distance measure. Results disclose that the detection error caused by cooperative anomaly attacks has a significant impact on eigenvalue-based sensing.
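
A brief NumPy sketch of the Mahalanobis-distance test on bivariate sensing reports; the feature values and the flagging threshold are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def mahalanobis(x, samples):
    """Distance of observation x from the bulk of bivariate 'samples'
    (rows = sensing reports, columns = the two measured features)."""
    mu = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
honest = rng.normal([1.0, 0.5], [0.1, 0.05], size=(100, 2))
print(mahalanobis(np.array([1.6, 0.9]), honest))  # large distance -> flag as anomaly
```
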
Srisopha, Kamonphop, Phonsom, Chukiat, Lin, Keng, Boehm, Barry.  2019.  Same App, Different Countries: A Preliminary User Reviews Study on Most Downloaded iOS Apps. 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME). :76–80.
Prior work on mobile app reviews has demonstrated that user reviews contain a wealth of information and are seen as a potential source of requirements. However, most of the studies done in this area mainly focused on mining and analyzing user reviews from the US App Store, leaving reviews of users from other countries unexplored. In this paper, we seek to understand if the perception of the same apps between users from other countries and that from the US differs through analyzing user reviews. We retrieve 300,643 user reviews of the 15 most downloaded iOS apps of 2018, published directly by Apple, from nine English-speaking countries over the course of 5 months. We manually classify 3,358 reviews into several software quality and improvement factors. We leverage a random forest based algorithm to identify factors that can be used to differentiate reviews between the US and other countries. Our preliminary results show that all countries have some factors that are proportionally inconsistent with the US.
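
A hedged scikit-learn sketch of the classification step, with synthetic placeholder data standing in for the study's labeled reviews (the factor names are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
factor_names = ["usability", "reliability", "feature_request"]  # invented
X = rng.integers(0, 2, size=(500, len(factor_names)))  # factor labels per review
y = rng.integers(0, 2, size=500)                       # 1 = US review, 0 = other country

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
# Feature importances hint at factors that differentiate the two stores:
print(dict(zip(factor_names, clf.feature_importances_.round(3))))
```
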
Srivastava, Animesh, Jain, Puneet, Demetriou, Soteris, Cox, Landon P., Kim, Kyu-Han.  2017.  CamForensics: Understanding Visual Privacy Leaks in the Wild. Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems. :30:1–30:13.

Many mobile apps, including augmented-reality games, bar-code readers, and document scanners, digitize information from the physical world by applying computer-vision algorithms to live camera data. However, because camera permissions for existing mobile operating systems are coarse (i.e., an app may access a camera's entire view or none of it), users are vulnerable to visual privacy leaks. An app violates visual privacy if it extracts information from camera data in unexpected ways. For example, a user might be surprised to find that an augmented-reality makeup app extracts text from the camera's view in addition to detecting faces. This paper presents results from the first large-scale study of visual privacy leaks in the wild. We build CamForensics to identify the kind of information that apps extract from camera data. Our extensive user surveys determine what kind of information users expected an app to extract. Finally, our results show that camera apps frequently defy users' expectations based on their descriptions.

Srivastava, Ankush, Ghosh, Prokash.  2019.  An Efficient Memory Zeroization Technique Under Side-Channel Attacks. 2019 32nd International Conference on VLSI Design and 2019 18th International Conference on Embedded Systems (VLSID). :76–81.
Protection of secured data content in volatile memories (processor caches, embedded RAMs, etc.) is essential in networking, wireless, automotive, and other embedded secure applications. It is of utmost importance to protect secret data, like authentication credentials and cryptographic keys, stored in volatile memories that can be hacked during normal device operations. Several security attacks, such as cold boot, disclosure attacks, data remanence, physical attacks, and cache attacks, can extract cryptographic keys or secure data from the volatile memories of a system. Memory content protection is typically done by assuring data deletion in the minimum possible time, to minimize data remanence effects. In today's state-of-the-art SoCs, dedicated hardware is used to functionally erase private memory contents in case of security violations. This paper proposes a novel approach of using existing memory built-in self-test (MBIST) hardware to zeroize (initialize to all zeros) on-chip memory contents before they can be extracted through side channels or other security attacks. Our results show that the proposed MBIST-based content zeroization approach is substantially faster than conventional techniques. By adopting the proposed approach, the functional hardware requirement for memory zeroization can be waived.
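
A software-level Python sketch of the zeroization concept only, i.e., overwriting a secret in place before releasing it; the paper's MBIST-based hardware mechanism cannot be reproduced in software, and a real implementation must also account for copies the runtime may have made.

```python
def zeroize(buf: bytearray) -> None:
    """Overwrite a sensitive buffer in place so the secret does not
    linger in memory after use (illustration only)."""
    for i in range(len(buf)):
        buf[i] = 0

key = bytearray(b"super-secret-key")
zeroize(key)
assert all(b == 0 for b in key)  # no remanent copy in this buffer
```
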
Srivastava, M..  2014.  In Sensors We Trust – A Realistic Possibility? Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on. :1-1.

Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend, directly and indirectly, on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society becomes an increasingly complex human-cyber-physical system with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers. For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs, and subverting decisions. While completely solving these challenges would require a new science of resilient, secure, and trustworthy networked sensing and decision systems, combining the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.

Srivastava, P., Pande, S.S..  2014.  A novel architecture for identity management system using virtual appliance technology. Contemporary Computing (IC3), 2014 Seventh International Conference on. :171-175.

Identity management systems have gained significance for organizations today, not only for storing details of employees but for securing sensitive information and safely managing access to resources. Being an enterprise application, such a system has a time-consuming deployment process involving many complex and error-prone steps. Also, being globally used, its continuous running on servers leads to large carbon emissions. This paper proposes a novel architecture that integrates an identity management system with virtual appliance technology to reduce the overall deployment time of the system. It provides an identity management system as a pre-installed, pre-configured, ready-to-go solution that can be easily deployed even by a common user. The proposed architecture is implemented, and the results show a decrease in deployment time and in the number of steps required compared with the previous architecture. The hardware required by the application is also reduced, as it is deployed on a virtual machine monitor platform that can be installed on servers already in use. This contributes to green computing practices and yields cost benefits for enterprises. Migration of the system from one server to another is also easier, and enterprises that do not want to depend on a third-party cloud for security and cost reasons can deploy their identity management system on their own premises.

Srivastava, V., Pathak, R. K., Kumar, A., Prakash, S..  2020.  Using a Blend of Brassard and Benett 84 Elliptic Curve Digital Signature for Secure Cloud Data Communication. 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). :738–743.

The exchange of data over the web has expanded nowadays, but it is not dependable because, during communication on the cloud, any malicious client can alter, steal, or misuse the information. Providing security to data during transmission is an active and quite challenging research topic. In this work, our proposed algorithm enhances the security of the keys by increasing their complexity, so that they cannot be guessed, breached, or stolen by a third party; the data is thereby concealed while being sent between users. The proposed algorithm also provides more security and authentication to users during cloud communication, as compared to previously existing algorithms.
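A sketch of the ECDSA half of such a blend, using the Python `cryptography` package; the BB84 quantum-key-distribution component is not shown, and the curve and message are illustrative choices rather than the paper's parameters.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # signer's key pair
public_key = private_key.public_key()

message = b"payload sent over the cloud"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid: payload unaltered")
except InvalidSignature:
    print("payload was altered in transit")
```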

Ssin, S. Y., Zucco, J. E., Walsh, J. A., Smith, R. T., Thomas, B. H..  2017.  SONA: Improving Situational Awareness of Geotagged Information Using Tangible Interfaces. 2017 International Symposium on Big Data Visual Analytics (BDVA). :1–8.

This paper introduces SONA (Spatiotemporal system Organized for Natural Analysis), a tabletop and tangible controller system for exploring geotagged information, more specifically CCTV footage. SONA's goal is to support a more natural method of interacting with data. Our new interactions are placed in the context of a physical security environment: closed-circuit television (CCTV). We present a three-layered, detail-on-demand set of view filters for CCTV feeds on a digital map. These filters are controlled with a novel tangible device for direct interaction. We validate SONA's tangible controller approach with a user study comparing SONA with the existing CCTV multi-screen method. The results show that SONA's tangible interaction method is superior to the multi-screen approach in terms of quantitative results and is preferred by users.

St-Martin, Michel, Felty, Amy P..  2016.  A Verified Algorithm for Detecting Conflicts in XACML Access Control Rules. Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs. :166–175.

We describe the formalization of a correctness proof for a conflict detection algorithm for XACML (eXtensible Access Control Markup Language). XACML is a standardized declarative access control policy language that is increasingly used in industry. In practice it is common for rule sets to grow large, and contain unintended errors, often due to conflicting rules. A conflict occurs in a policy when one rule permits a request and another denies that same request. Such errors can lead to serious risks involving both allowing access to an unauthorized user as well as denying access to someone who needs it. Removing conflicts is thus an important aspect of debugging policies, and the use of a verified algorithm provides the highest assurance in a domain where security is important. In this paper, we focus on several complex XACML constructs, including time ranges and integer intervals, as well as ways to combine any number of functions using the boolean operators and, or, and not. The latter are the most complex, and add significant expressive power to the language. We propose an algorithm to find conflicts and then use the Coq Proof Assistant to prove the algorithm correct. We develop a library of tactics to help automate the proof.
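A toy Python sketch of the pairwise conflict check restricted to time-range conditions (closed integer intervals); the paper's verified algorithm additionally handles integer intervals and arbitrary and/or/not combinations, and is proved correct in Coq.

```python
def overlaps(a, b):
    """Closed intervals overlap iff each starts before the other ends."""
    return a[0] <= b[1] and b[0] <= a[1]

def find_conflicts(rules):
    """rules: list of (effect, (start_hour, end_hour)) pairs."""
    return [
        (r1, r2)
        for i, r1 in enumerate(rules)
        for r2 in rules[i + 1:]
        if r1[0] != r2[0] and overlaps(r1[1], r2[1])
    ]

rules = [("Permit", (9, 17)), ("Deny", (16, 20)), ("Deny", (0, 8))]
print(find_conflicts(rules))  # Permit 9-17 and Deny 16-20 overlap -> conflict
```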

Staffa, M., Mazzeo, G., Sgaglione, L..  2018.  Hardening ROS via Hardware-assisted Trusted Execution Environment. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :491–494.

In recent years, humanoid robots have become quite ubiquitous, finding wide applicability in many different fields spanning education, entertainment, and assistance. They can be considered complex cyber-physical systems (CPS) and, as such, are exposed to the same vulnerabilities. This can be very dangerous for people interacting closely with these robots, since attackers who exploit these vulnerabilities can not only violate people's privacy but, more importantly, command the robot's behavior, causing bodily harm and leading to devastating consequences. In this paper, we propose a solution not yet investigated in this field, which relies on the use of secure enclaves; in our opinion, these could represent a valuable means of coping with most of the possible attacks, and we suggest that developers adopt such a precaution during the robot design phase.

Stafford, Tom.  2017.  On Cybersecurity Loafing and Cybercomplacency. SIGMIS Database. 48:8–10.
As we begin to publish more articles in the area of cybersecurity, a case in point being the fine set of security papers presented in this particular issue as well as the upcoming special issue on Advances in Behavioral Cybersecurity Research which is currently in the review phase, it comes to mind that there is an emerging rubric of interest to the research community involved in security. That rubric concerns itself with the increasingly odd and inexplicable degree of comfort that computer users appear to have while operating in an increasingly threat-rich online environment. In my own work, I have noticed over time that users are blissfully unconcerned about malware threats (Poston et al., 2005; Stafford, 2005; Stafford and Poston, 2010; Stafford and Urbaczewski, 2004). This often takes the avenue of "it can't happen to me," or, "that's just not likely," but the fact is, since I first started noticing this odd nonchalance it seems like it is only getting worse, generally speaking. Mind you, a computer user who has been exploited and suffered harm from it will be vigilant to the end of his or her days, but for those who have scraped by, "no worries," is the order of the day, it seems to me. This is problematic because the exploits that are abroad in the online world these days are a whole order of magnitude more harmful than those that were around when I first started studying the matter a decade ago. I would not have commented on the matter, having long since chalked it up to the oddities of civilian computing, so to speak, but an odd pattern I encountered when engaging in a research study with trained corporate users brought the matter back to the fore recently. I have been collecting neurocognitive data on user response to security threats, and while my primary interest was to see if skin conductance or pupillary dilation varied during exposure to computer threat scenarios, I noticed an odd pattern that commanded my attention and actually derailed my study for a while as I dug in to examine it.
Staicu, C.-A., Torp, M. T., Schäfer, M., Møller, A., Pradel, M..  2020.  Extracting Taint Specifications for JavaScript Libraries. 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). :198–209.

Modern JavaScript applications extensively depend on third-party libraries. Especially on the Node.js platform, vulnerabilities can have severe consequences for the security of applications, resulting in, e.g., cross-site scripting and command injection attacks. Existing static analysis tools that have been developed to automatically detect such issues are either too coarse-grained, looking only at package dependency structure while ignoring dataflow, or rely on manually written taint specifications for the most popular libraries to ensure analysis scalability. In this work, we propose a technique for automatically extracting taint specifications for JavaScript libraries, based on a dynamic analysis that leverages the existing test suites of the libraries and their available clients in the npm repository. Due to the dynamic nature of JavaScript, mapping observations from dynamic analysis to taint specifications that fit into a static analysis is non-trivial. Our main insight is that this challenge can be addressed by a combination of an access path mechanism that identifies entry and exit points, and the use of membranes around the libraries of interest. We show that our approach is effective at inferring useful taint specifications at scale. Our prototype tool automatically extracts 146 additional taint sinks and 7,840 propagation summaries spanning 1,393 npm modules. By integrating the extracted specifications into a commercial, state-of-the-art static analysis, 136 new alerts are produced, many of which correspond to likely security vulnerabilities. Moreover, many important specifications that were originally manually written are among the ones that our tool can now extract automatically.
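
A minimal Python sketch of what an extracted specification lets a static analysis do: follow a tainted value through propagation summaries until it reaches a known sink. The module names, flat access paths, and sink table are invented; real access paths are nested.

```python
# (source path, destination path) pairs per library function:
summaries = {"pathlib-like.join": {("arg0", "ret"), ("arg1", "ret")}}
sinks = {"child_process.exec": {"arg0"}}  # taint arriving here is an alert

def reaches_sink(steps):
    """steps: successive (function, entry path) positions of a tainted value."""
    for func, path in steps:
        if path in sinks.get(func, set()):
            return True                   # tainted data flows into a sink
        if (path, "ret") not in summaries.get(func, set()):
            return False                  # summary says taint is dropped here
    return False

print(reaches_sink([("pathlib-like.join", "arg0"), ("child_process.exec", "arg0")]))
```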

Stan, O., Bitton, R., Ezrets, M., Dadon, M., Inokuchi, M., Yoshinobu, O., Tomohiko, Y., Elovici, Y., Shabtai, A..  2020.  Extending Attack Graphs to Represent Cyber-Attacks in Communication Protocols and Modern IT Networks. IEEE Transactions on Dependable and Secure Computing. :1–1.
An attack graph is a method used to enumerate the possible paths that an attacker can take in an organizational network. MulVAL is a well-known open-source framework used to automatically generate attack graphs. MulVAL's default modeling has two main shortcomings. First, it lacks the ability to represent network protocol vulnerabilities, and thus it cannot be used to model common network attacks such as ARP poisoning. Second, it does not support advanced types of communication, such as wireless and bus communication, and thus it cannot be used to model cyber-attacks on networks that include IoT devices or industrial components. In this paper, we present an extended network security model for MulVAL that: (1) considers the physical network topology, (2) supports short-range communication protocols, (3) models vulnerabilities in the design of network protocols, and (4) models specific industrial communication architectures. Using the proposed extensions, we were able to model multiple attack techniques, including spoofing, man-in-the-middle, and denial-of-service attacks, as well as attacks on advanced types of communication. We demonstrate the proposed model in a testbed which implements a simplified network architecture comprised of both IT and industrial components.
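
A tiny Python stand-in for enumerating attacker paths over an attack graph; MulVAL's logic-based generation and the paper's protocol modeling are far richer, and the topology below is invented.

```python
graph = {                                 # edges = attacker state transitions
    "internet": ["webserver"],
    "webserver": ["db", "workstation"],   # e.g., via service exploits
    "workstation": ["plc"],               # e.g., via an ARP-poisoning pivot
    "db": [], "plc": [],
}

def attack_paths(src, dst, path=()):
    path = path + (src,)
    if src == dst:
        yield path
    for nxt in graph.get(src, []):
        if nxt not in path:               # avoid cycles
            yield from attack_paths(nxt, dst, path)

for p in attack_paths("internet", "plc"):
    print(" -> ".join(p))
```
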
Stan, Oana, Carpov, Sergiu, Sirdey, Renaud.  2016.  Dynamic Execution of Secure Queries over Homomorphic Encrypted Databases. Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. :51–58.

The wide use of cloud computing and of data outsourcing raises important concerns with regard to data security, resulting in the necessity of protection mechanisms such as encryption of sensitive data. The recent major theoretical breakthrough of finding the Holy Grail of encryption, i.e., fully homomorphic encryption, guarantees the privacy of queries and their results on encrypted data. However, there are only a few studies proposing a practical performance evaluation of the use of homomorphic encryption schemes to perform database queries. In this paper, we propose and analyse, in the context of a secure framework for a generic database query interpreter, two different methods in which client requests are dynamically executed on homomorphically encrypted data. Dynamic compilation of the requests makes it possible to take advantage of the different optimizations performed during an off-line step on an intermediate code representation, taking the form of boolean circuits, and, moreover, to specialize the execution using runtime information. For the returned encrypted results, we also assess the complexity and efficiency of the different protocols proposed in the literature in terms of overall execution time, accuracy, and communication overhead.
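To make "computing on ciphertexts" concrete, a toy additively homomorphic (Paillier-style) example in Python; the paper relies on fully homomorphic encryption and boolean circuits, whereas this shows only the simplest homomorphic building block, with insecurely small parameters.

```python
import random
from math import gcd

p, q = 293, 433                        # toy primes -- never use sizes like this
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42   # addition performed under encryption
```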

Stanciu, Valeriu-Daniel, Spolaor, Riccardo, Conti, Mauro, Giuffrida, Cristiano.  2016.  On the Effectiveness of Sensor-enhanced Keystroke Dynamics Against Statistical Attacks. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :105–112.

In recent years, simple password-based authentication systems have increasingly proven ineffective for many classes of real-world devices. As a result, many researchers have concentrated their efforts on the design of new biometric authentication systems. This trend has been further accelerated by the advent of mobile devices, which offer numerous sensors and capabilities to implement a variety of mobile biometric authentication systems. Along with the advances in biometric authentication, however, attacks have also become much more sophisticated and many biometric techniques have ultimately proven inadequate in the face of advanced attackers in practice. In this paper, we investigate the effectiveness of sensor-enhanced keystroke dynamics, a recent mobile biometric authentication mechanism that combines a particularly rich set of features. In our analysis, we consider different types of attacks, with a focus on advanced attacks that draw from general population statistics. Such attacks have already been proven effective in drastically reducing the accuracy of many state-of-the-art biometric authentication systems. We implemented a statistical attack against sensor-enhanced keystroke dynamics and evaluated its impact on detection accuracy. On one hand, our results show that sensor-enhanced keystroke dynamics are generally robust against statistical attacks, with a marginal equal-error rate impact (<0.14%). On the other hand, our results show that, surprisingly, keystroke timing features non-trivially weaken the security guarantees provided by sensor features alone. Our findings suggest that sensor dynamics may be a stronger biometric authentication mechanism against recently proposed practical attacks.
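A simplified NumPy sketch of such a statistical attack against a toy distance-threshold verifier; the population statistics, number of features, and acceptance rule are all illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_mean = np.array([110.0, 95.0, 130.0])   # population keystroke timings (ms)
pop_std = np.array([25.0, 20.0, 30.0])

victim = rng.normal(pop_mean, pop_std, size=(50, 3))       # enrolment samples
template, tol = victim.mean(axis=0), 2 * victim.std(axis=0)

def accepted(attempt):
    return bool(np.all(np.abs(attempt - template) < tol))

# Forgeries drawn purely from population statistics, no victim data needed:
forgeries = rng.normal(pop_mean, pop_std, size=(1000, 3))
print("forgery acceptance rate:", np.mean([accepted(f) for f in forgeries]))
```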

Stange, M., Tang, C., Tucker, C., Servine, C., Geissler, M..  2019.  Cybersecurity Associate Degree Program Curriculum. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1–5.

The spotlight is on cybersecurity education programs to develop a qualified cybersecurity workforce to meet the demand of the professional field. The ACM CCECC (Committee for Computing Education in Community Colleges) is leading the creation of a set of guidelines for associate degree cybersecurity programs called Cyber2yr, formerly known as CSEC2Y. A task force of community college educators has created a student-competency-focused curriculum that will serve as a global cybersecurity guide for applied (AAS) and transfer (AS) degree programs, developing a knowledgeable and capable associate-level cybersecurity workforce. Based on the importance of the Cyber2yr work, ABET, a nonprofit, non-governmental agency that accredits computing programs, has created accreditation criteria for two-year cybersecurity programs.

Stanić, B., Afzal, W..  2017.  Process Metrics Are Not Bad Predictors of Fault Proneness. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :493–499.

The correct prediction of faulty modules or classes has a number of advantages, such as improving the quality of software and assigning capable development resources to fix such faults. Different kinds of fault/defect prediction models have been proposed in the literature, but a great majority of them make use of static code metrics as independent variables for making predictions. Recently, process metrics have gained considerable attention as alternative metrics to use for making trustworthy predictions. The objective of this paper is to investigate different combinations of static code and process metrics for evaluating fault prediction performance. We have used publicly available data sets, along with a frequently used classifier, Naive Bayes, to run our experiments. We have analyzed our experimental results both statistically and visually. The statistical analysis showed evidence against any significant difference in fault prediction performance across a variety of different combinations of metrics, reinforcing earlier research results that process metrics are as good predictors of fault proneness as static code metrics. Furthermore, visual inspection of box plots revealed that the best set of metrics for fault prediction is a mix of both static code and process metrics. We also presented evidence that some process metrics are more discriminating than others, making them good predictors to use.
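A brief scikit-learn sketch of the experimental setup with Naive Bayes, using synthetic placeholder data in place of the public defect data sets (the metric names are illustrative):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
static_code = rng.normal(size=(400, 3))  # e.g., LOC, complexity, coupling
process = rng.normal(size=(400, 2))      # e.g., revisions, distinct authors
faulty = rng.integers(0, 2, size=400)    # fault-prone label per module

for name, X in [("static", static_code), ("process", process),
                ("combined", np.hstack([static_code, process]))]:
    score = cross_val_score(GaussianNB(), X, faulty, cv=5).mean()
    print(name, round(score, 3))         # compare the three metric sets
```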

Stanisavljevic, Z., Stanisavljevic, J., Vuletic, P., Jovanovic, Z..  2014.  COALA - System for Visual Representation of Cryptography Algorithms. Learning Technologies, IEEE Transactions on. 7:178-190.

Educational software systems have an increasingly significant presence in the engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms and mechanisms or of hardware systems that are often not available to educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view, with the possibility of forward and backward navigation. Benefits of the COALA system for students are observed through an increase in the percentage of students who passed the exam and in the average grade on the exams during one school year.

Stanković, I., Brajović, M., Daković, M., Stanković, L., Ioana, C..  2020.  Quantization Effect in Nonuniform Nonsparse Signal Reconstruction. 2020 9th Mediterranean Conference on Embedded Computing (MECO). :1–4.
This paper examines the influence of quantization on compressive sensing theory applied to nonuniformly sampled nonsparse signals with a reduced set of randomly positioned measurements. The reconstruction error is generalized to an exact expected squared error expression. The aim is to connect the generalized random sampling strategy with the quantization effect, finding the resulting error of the reconstruction. Small sampling deviations correspond to imprecisions of the sampling strategy, while completely random sampling schemes cause large sampling deviations. Numerical examples show agreement between the statistical results and the theoretical values.
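
The classical uniform-quantization noise model underlying such analyses can be stated as follows; this is only the standard building block, not the paper's generalized expected squared error expression.

```latex
% Quantization error for step size \Delta, modeled as uniform noise:
e \sim \mathcal{U}\!\left[-\tfrac{\Delta}{2}, \tfrac{\Delta}{2}\right],
\qquad
\mathbb{E}\{e\} = 0,
\qquad
\mathbb{E}\{e^{2}\} = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} e^{2}\, de
                    = \frac{\Delta^{2}}{12}.
```
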
Stanley Bak, University of Illinois at Urbana-Champaign, Fardin Abdi, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Marco Caccamo, University of Illinois at Urbana-Champaign.  2013.  Using Run-Time Checking to Provide Safety and Progress for Distributed Cyber-Physical Systems. 2013 IEEE 19th International Conference on Embedded and Real-Time Computing Systems and Applications.

Cyber-physical systems (CPS) may interact with and manipulate objects in the physical world, and therefore ideally would have formal guarantees about their behavior. Performing static-time proofs of safety invariants, however, may be intractable for systems with distributed physical-world interactions. This is further complicated when realistic communication models are considered, for which there may be no bounds on message delays, or even no guarantee that messages will eventually reach their destination. In this work, we address the challenge of proving safety and progress in distributed CPS communicating over an unreliable communication layer. This is done in two parts. First, we show that system safety can be verified by partially relying upon run-time checks, and that dropping messages if the run-time checks fail will maintain safety. Second, we use a notion of compatible action chains to guarantee system progress, despite unbounded message delays. We demonstrate the effectiveness of our approach on a multi-agent vehicle flocking system, and show that the overhead of the proposed run-time checks is not overbearing.
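
A small Python sketch of the run-time checking idea: before acting on a possibly delayed message, a node re-checks a safety invariant and drops the message if the check fails, so safety holds regardless of message timing. The separation invariant and message format are invented stand-ins for the paper's flocking system.

```python
MIN_SEPARATION = 5.0   # invented safety invariant: keep this distance

def safe_to_apply(target, neighbor_positions):
    """Run-time check: would moving to 'target' keep safe separation?"""
    return all(abs(target - n) >= MIN_SEPARATION for n in neighbor_positions)

def on_message(own_pos, msg, neighbors):
    if safe_to_apply(msg["target"], neighbors):
        return msg["target"]   # apply the (possibly stale) command
    return own_pos             # drop it: safety preserved, progress deferred

# Command arrives late; a neighbor is now at 7.0, so moving to 3.0 would
# violate the 5.0 separation invariant -- the message is dropped:
print(on_message(0.0, {"target": 3.0}, neighbors=[7.0]))   # -> 0.0
```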