Biblio

Found 1021 results

Filters: First Letter Of Title is C
T
Tabiban, Azadeh, Jarraya, Yosr, Zhang, Mengyuan, Pourzandi, Makan, Wang, Lingyu, Debbabi, Mourad.  2020.  Catching Falling Dominoes: Cloud Management-Level Provenance Analysis with Application to OpenStack. 2020 IEEE Conference on Communications and Network Security (CNS). :1–9.

The dynamicity and complexity of clouds highlight the importance of automated root cause analysis solutions for explaining what might have caused a security incident. Most existing works focus on either locating malfunctioning cloud components, e.g., switches, or tracing changes at lower abstraction levels, e.g., system calls. On the other hand, a management-level solution can provide a big picture about the root cause in a more scalable manner. In this paper, we propose DOMINOCATCHER, a novel provenance-based solution for explaining the root cause of security incidents in terms of management operations in clouds. Specifically, we first define our provenance model to capture the interdependencies between cloud management operations, virtual resources, and inputs. Based on this model, we design a framework to intercept cloud management operations and to extract and prune provenance metadata. We implement DOMINOCATCHER on the OpenStack platform as an attached middleware and validate its effectiveness using security incidents based on real-world attacks. We also evaluate its performance through experiments on our testbed, and the results demonstrate that DOMINOCATCHER incurs insignificant overhead and is scalable for clouds.
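
The provenance model is easy to prototype: treat management operations, virtual resources, and inputs as nodes of a dependency DAG and walk backwards from the affected resource. Below is a minimal sketch (hypothetical node names, not the authors' DOMINOCATCHER implementation) using networkx:

```python
import networkx as nx

# Provenance graph: edges point from a cause to what it affected.
G = nx.DiGraph()
G.add_edge("op:create_port(admin)", "res:port_x")
G.add_edge("op:update_port(attacker)", "res:port_x")
G.add_edge("res:port_x", "res:vm_victim")
G.add_edge("op:boot_vm(tenant)", "res:vm_victim")

# Root-cause candidates for an incident on vm_victim are its ancestors,
# i.e., every operation or resource it transitively depends on.
incident = "res:vm_victim"
candidates = nx.ancestors(G, incident)
ops = sorted(n for n in candidates if n.startswith("op:"))
print(ops)  # management operations that could explain the incident
```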

Ta, H. Q., Kim, S. W..  2019.  Covert Communication Under Channel Uncertainty and Noise Uncertainty. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1–6.

Covert or low probability of detection communication is crucial to protecting user privacy and providing strong security. We analyze the joint impact of imperfect knowledge of the channel gain (channel uncertainty) and noise power (noise uncertainty) on the average probability of detection error at the eavesdropper and on the covert throughput in a Rayleigh fading channel. We characterize the covert throughput gain provided by the channel uncertainty, as well as the covert throughput loss caused by the channel fading, as a function of the noise uncertainty. Our results show that channel fading is essential to hiding the signal transmission, particularly when the noise uncertainty is below a threshold and/or the receive SNR is above a threshold. The impact of the channel uncertainty on the average probability of detection error and covert throughput is more significant when the noise uncertainty is larger.
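
The detection-error behavior is easy to reproduce numerically. In the toy Monte Carlo below (illustrative only, not the paper's analytical model), an eavesdropper applies an energy detector while knowing its noise power only up to an uncertainty factor; with fading and a small receive SNR, the average error probability stays near 1/2, i.e., the transmission remains covert:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 20000, 100
sigma2 = 1.0   # nominal noise power known to the eavesdropper
rho = 2.0      # noise uncertainty: true power lies in [sigma2/rho, sigma2*rho]
snr = 0.05     # low covert transmit SNR

errors = 0
for _ in range(n_trials):
    tx = rng.random() < 0.5                          # transmit or stay silent
    noise_pow = sigma2 * rho ** rng.uniform(-1, 1)   # uncertain noise power
    h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # Rayleigh fading gain
    s = np.sqrt(snr) * h * rng.normal(size=n_samples) if tx else 0.0
    y = s + np.sqrt(noise_pow / 2) * (rng.normal(size=n_samples)
                                      + 1j * rng.normal(size=n_samples))
    # Energy detector thresholded at the nominal noise power.
    decide_tx = np.mean(np.abs(y) ** 2) > sigma2
    errors += decide_tx != tx

print(f"average detection error: {errors / n_trials:.3f}")  # near 0.5 = covert
```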

T, Baby H., R, Sujatha B..  2016.  Chaos based Combined Multiple Recursive KEY Generator for Crypto-Systems. 2016 2nd International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT). :411–415.

With the ever-increasing growth of internet usage, ensuring high security for information has gained great importance due to the many threats in communication channels. Hence there is continuous research towards finding suitable approaches to ensure high security for information. In recent decades, cryptography, although primarily used in the military and diplomatic communities, has been used extensively for providing security on the Internet. One such approach is the application of chaos theory in cryptosystems. In this work, we propose the use of a combined multiple recursive generator (CMRG) for key generation based on a chaotic function to generate different multiple keys. A negligible difference in the parameters of the chaotic function generates completely different keys as well as ciphertext. The main motive for developing the chaos-based cryptosystem is to attain encryption that provides high security at comparatively higher speed but with lower complexity and cost than conventional encryption algorithms.
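
The sensitivity being exploited is easy to demonstrate with the logistic map, a standard chaotic function (the authors' exact CMRG construction is not reproduced here): an initial-condition change of 10^-10 yields a completely different key.

```python
def chaotic_key(x0, r=3.99, n_bytes=16, burn_in=100):
    """Derive a key by iterating the chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):            # discard the transient iterations
        x = r * x * (1 - x)
    key = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1 - x)
        key.append(int(x * 256) % 256)  # quantize each state to one byte
    return bytes(key)

k1 = chaotic_key(0.3141592653)
k2 = chaotic_key(0.3141592653 + 1e-10)  # negligible parameter difference
print(k1.hex())
print(k2.hex())                         # diverges completely from k1
```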

S
Szabo, Roland, Gontean, Aurel.  2019.  The Creation Process of a Secure and Private Mobile Web Browser with no Ads and no Popups. 2019 IEEE 25th International Symposium for Design and Technology in Electronic Packaging (SIITME). :232–235.

The aim of this work is to create a new style of web browser. Other web browsers can have safety issues, carry many ads and popups, and fill up the cache by logging a large history of visited web pages. This app is a light-weight web browser that is both secure and private, with no ads and no popups, just the plain Internet shown in full screen. The app does not store user data, so webpages are navigated in incognito mode. The app was made to open any HTML5 web page in a secure and private mode, with a strong focus on page loading speed.
Syafalni, I., Fadhli, H., Utami, W., Dharma, G. S. A., Mulyawan, R., Sutisna, N., Adiono, T..  2020.  Cloud Security Implementation using Homomorphic Encryption. 2020 IEEE International Conference on Communication, Networks and Satellite (Comnetsat). :341–345.

With the advancement of computing and communication technologies, data transmissions on the internet are getting bigger and faster. However, it is necessary to secure the data to prevent fraud and crime over the internet. Furthermore, much of the data related to statistics, such as weather, health, financial, and other service data, requires secure analysis. This paper presents an implementation of cloud security using homomorphic encryption for data analytics in the cloud. We apply homomorphic encryption, which allows the data to be processed without being decrypted. Experimental results show that, for polynomial degrees 2^6, 2^8, and 2^10, the total execution times are 2.2 ms, 4.4 ms, and 25 ms per datum, respectively. The implementation is useful for big data security, such as for environmental, financial, and hospital data analytics.
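
The property the abstract relies on, processing data without decrypting it, can be seen with a textbook additively homomorphic scheme such as Paillier. A minimal sketch with toy parameters (not the authors' implementation, and far too small for real security):

```python
import math, random

# Toy Paillier keypair (tiny primes; insecure, for illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(41), encrypt(17)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
print(decrypt(c1 * c2 % n2))  # 58, computed without decrypting c1 or c2
```

Schemes parameterized by the polynomial degrees quoted above are lattice-based rather than Paillier, but the compute-on-ciphertext principle is the same.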

Swain, P., Kamalia, U., Bhandarkar, R., Modi, T..  2019.  CoDRL: Intelligent Packet Routing in SDN Using Convolutional Deep Reinforcement Learning. 2019 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). :1–6.

Software Defined Networking (SDN) provides opportunities for flexible and dynamic traffic engineering. However, in current SDN systems, routing strategies are based on traditional mechanisms that lack real-time adaptability and use resources less efficiently. To overcome these limitations, this paper uses deep learning to improve routing computation in SDN. It proposes the Convolutional Deep Reinforcement Learning (CoDRL) model, a deep reinforcement learning agent for routing optimization in SDN that minimizes mean network delay and packet loss rate. The CoDRL model consists of a Deep Deterministic Policy Gradient (DDPG) agent coupled with a convolutional layer. The proposed model automatically adapts packet routing using network data obtained through the SDN controller, and provides routing configurations that attempt to reduce network congestion and minimize mean network delay. Hence, the proposed deep agent exhibits good convergence towards routing configurations that improve network performance.
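
The coupling the abstract describes, a convolutional front end feeding a DDPG actor, can be sketched as follows (PyTorch; all layer sizes and names are invented for illustration, not the authors' CoDRL code). The actor maps a traffic-matrix "image" obtained via the SDN controller to per-link weights from which routes are derived; a critic network, omitted here, would score state-action pairs and drive the actor updates, with negative mean delay and loss rate as the reward.

```python
import torch
import torch.nn as nn

class ConvActor(nn.Module):
    """DDPG-style actor: traffic matrix -> per-link weights in (0, 1)."""
    def __init__(self, n_nodes=8, n_links=24):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * n_nodes * n_nodes, 128), nn.ReLU(),
            nn.Linear(128, n_links), nn.Sigmoid(),  # one weight per link
        )

    def forward(self, traffic_matrix):
        return self.head(self.conv(traffic_matrix))

actor = ConvActor()
tm = torch.rand(1, 1, 8, 8)  # batch of one 8x8 traffic matrix
link_weights = actor(tm)     # feed to a shortest-path computation, e.g. Dijkstra
print(link_weights.shape)    # torch.Size([1, 24])
```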

Sutton, Sara, Bond, Benjamin, Tahiri, Sementa, Rrushi, Julian.  2019.  Countering Malware Via Decoy Processes with Improved Resource Utilization Consistency. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :110–119.
The concept of a decoy process is a new development of defensive deception beyond traditional honeypots. Decoy processes can be exceptionally effective in detecting malware, directly upon contact or by redirecting malware to decoy I/O. A key requirement is that they resemble their real counterparts very closely to withstand adversarial probes by threat actors. To be usable, decoy processes need to consume only a small fraction of the resources consumed by their real counterparts. Our contribution in this paper is twofold. We attack the resource utilization consistency of decoy processes provided by a neural network with a heatmap training mechanism, which we find to be insufficiently trained. We then devise machine learning over control flow graphs that improves the heatmap training mechanism. A neural network retrained by our work shows higher accuracy and defeats our attacks without a significant increase in its own resource utilization.
Sutcliffe, Richard J., Kowarsch, Benjamin.  2016.  Closing the Barn Door: Re-Prioritizing Safety, Security, and Reliability. Proceedings of the 21st Western Canadian Conference on Computing Education. :1:1–1:15.

Past generations of software developers were well on the way to building a software engineering mindset/gestalt, preferring tools and techniques that concentrated on safety, security, reliability, and code re-usability. Computing education reflected these priorities and was, to a great extent, organized around these themes, providing beginning software developers a basis for professional practice. In more recent times, economic and deadline pressures and the de-professionalism of practitioners have combined to drive a development agenda that retains little respect for quality considerations. As a result, we are now deep into a new and severe software crisis. Scarcely a day passes without news of either a debilitating data or website hack, or the failure of a mega-software project. Vendors, individual developers, and possibly educators can anticipate an equally destructive flood of malpractice litigation, for the argument that they systematically and recklessly ignored known best development practice of long standing is irrefutable. Yet we continue to instruct using methods and to employ development tools we know, or ought to know, are inherently insecure, unreliable, and unsafe, and that produce software of like ilk. The authors call for a renewed professional and educational focus on software quality, focusing on redesigned tools that enable and encourage known best practice, combined with reformed educational practices that emphasize writing human-readable, safe, secure, and reliable software. Practitioners can only deploy sound management techniques, appropriate tool choice, and best practice development methodologies such as thorough planning and specification, scope management, factorization, modularity, safety, appropriate team and testing strategies, if those ideas and techniques are embedded in the curriculum from the beginning. The authors have instantiated their ideas in the form of their highly disciplined new version of Niklaus Wirth's 1980s Modula-2 programming notation under the working moniker Modula-2 R10. They are now working on an implementation that will be released under a liberal open source license in the hope that it will assist in reforming the CS curriculum around a best practices core so as to empower would-be professionals with the intellectual and practical mindset to begin resolving the software crisis. They acknowledge there is no single software engineering silver bullet, but assert that professional techniques can be inculcated throughout a student's four-year university tenure, and if implemented in the workplace, these can greatly reduce the likelihood of multiplied IT failures at the hands of our graduates. The authors maintain that professional excellence is a necessary mindset, a habit of self-discipline that must be intentionally embedded in all aspects of one's education, and subsequently drive all aspects of one's practice, including, but by no means limited to, the choice and use of programming tools.

Sun, Yueming, Zhang, Yi, Chen, Yunfei, Jin, Roger.  2016.  Conversational Recommendation System with Unsupervised Learning. Proceedings of the 10th ACM Conference on Recommender Systems. :397–398.

We will demonstrate a conversational product recommendation agent. This system shows how we combine research in personalized recommendation systems with research in dialogue systems to build a virtual sales agent. Based on new deep learning technologies we developed, the virtual agent is capable of learning how to interact with users, how to answer user questions, what the next question to ask is, and what to recommend when chatting with a human user. Normally a decent conversational agent for a particular domain requires tens of thousands of hand-labeled conversational examples or hand-written rules. This is a major barrier when launching a conversational agent for a new domain. We will explore and demonstrate the effectiveness of the learning solution even when there are no hand-written rules or hand-labeled training data.

Sun, J., Sun, K., Li, Q..  2017.  CyberMoat: Camouflaging Critical Server Infrastructures with Large Scale Decoy Farms. 2017 IEEE Conference on Communications and Network Security (CNS). :1–9.

Traditional deception-based cyber defenses often undertake reactive strategies that utilize decoy systems or services for attack detection and information gathering. Unfortunately, the effectiveness of these defense mechanisms has been largely constrained by low decoy fidelity, the poor scalability of the decoy platform, and static decoy configurations, which allow attackers to identify and bypass the deployed decoys. In this paper, we develop a decoy-enhanced defense framework that can proactively protect critical servers against targeted remote attacks through deception. To achieve both high fidelity and good scalability, our system follows a hybrid architecture that separates lightweight yet versatile front-end proxies from back-end high-fidelity decoy servers. Moreover, our system can further invalidate the attackers' reconnaissance through dynamic proxy address shuffling. To guarantee service availability, we develop a transparent connection translation strategy to maintain existing connections during shuffling. Our evaluation on a prototype implementation demonstrates the effectiveness of our approach in defeating attacker reconnaissance and shows that it introduces only a small performance overhead.
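
The shuffling-plus-translation idea reduces to a small bookkeeping exercise: advertised proxy addresses are periodically re-randomized, while a translation table keeps pre-shuffle flows pinned to their old addresses. A toy model (addresses and pool sizes invented; not the authors' system):

```python
import random

class ShufflingProxy:
    """Toy model of proxy address shuffling with connection translation."""
    def __init__(self, address_pool, n_proxies):
        self.pool = address_pool
        self.n = n_proxies
        self.current = random.sample(self.pool, self.n)  # advertised addresses
        self.flows = {}                                  # flow id -> pinned address

    def connect(self, flow_id):
        # New flows land on a currently advertised proxy address.
        addr = random.choice(self.current)
        self.flows[flow_id] = addr
        return addr

    def shuffle(self):
        # Re-randomize advertised addresses; pinned flows are untouched,
        # so established connections survive the shuffle.
        self.current = random.sample(self.pool, self.n)

pool = [f"10.0.0.{i}" for i in range(1, 255)]
proxy = ShufflingProxy(pool, n_proxies=4)
a = proxy.connect("client-1")
proxy.shuffle()                       # attacker's reconnaissance is now stale
assert proxy.flows["client-1"] == a   # existing connection still translated
```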

Sultana, K. Z., Williams, B. J., Bosu, A..  2018.  A Comparison of Nano-Patterns vs. Software Metrics in Vulnerability Prediction. 2018 25th Asia-Pacific Software Engineering Conference (APSEC). :355–364.

Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they do not clearly address the granularity level of the issue to precisely pinpoint vulnerabilities. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can better predict vulnerable methods compared to software metrics, they can be used in developing vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction. We also compare them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features. We applied the same techniques using method-level software metrics as features and compared their performance with nano-patterns. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore, have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest developers use nano-patterns as features for vulnerability prediction to augment existing approaches as these code constructs outperform standard metrics in terms of prediction recall.
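
Note that the reported false negative rates convert directly into the recall claim: recall = 1 - FNR, so the Tomcat 6 figures correspond to about 79% recall for nano-patterns versus about 65% for software metrics. A two-line check:

```python
# recall = 1 - false negative rate (Tomcat 6 figures from the abstract)
for name, fnr in [("nano-patterns", 0.21), ("software metrics", 0.347)]:
    print(f"{name}: recall = {1 - fnr:.1%}")  # 79.0% vs. 65.3%
```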

Sultana, K. Z., Deo, A., Williams, B. J..  2017.  Correlation Analysis among Java Nano-Patterns and Software Vulnerabilities. 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE). :69–76.

Ensuring software security is essential for developing reliable software. Software can suffer from security problems due to weaknesses in code constructs introduced during development. Our goal is to relate software security to different code constructs so that developers can become aware very early of coding weaknesses that might be related to a software vulnerability. In this study, we chose Java nano-patterns as code constructs: method-level patterns defined on the attributes of Java methods. This study aims to find the correlation between software vulnerability and method-level structural code constructs known as nano-patterns. We found the vulnerable methods from 39 versions of three major releases of Apache Tomcat for our first case study. We extracted nano-patterns from the affected methods of these releases. We also extracted nano-patterns from the non-vulnerable methods of Apache Tomcat, selecting the last version of each of the three major releases (6.0.45 for release 6, 7.0.69 for release 7, and 8.0.33 for release 8) as the non-vulnerable versions. Then, we compared the nano-pattern distributions in vulnerable versus non-vulnerable methods. In our second case study, we extracted nano-patterns from the affected methods of three vulnerable J2EE web applications: Blueblog 1.0, Personalblog 1.2.6, and Roller 0.9.9, all of which were deliberately made vulnerable for testing purposes. We found that some nano-patterns, such as objCreator, staticFieldReader, typeManipulator, looper, exceptions, localWriter, and arrReader, are more prevalent in affected methods, whereas others, such as straightLine, are more prevalent in non-affected methods. We conclude that nano-patterns can be used as an indicator of the vulnerability-proneness of code.

Suksomboon, Kalika, Shen, Zhishu, Ueda, Kazuaki, Tagami, Atsushi.  2019.  C2P2: Content-Centric Privacy Platform for Privacy-Preserving Monitoring Services. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:252–261.

Motivated by ubiquitous surveillance cameras in a smart city, a monitoring service can be provided to citizens. However, the rise of privacy concerns may disrupt this advanced service. Yet, the existing cloud-based services have not clearly proven that they can preserve W3-privacy, in which the relationship among three types of information, i.e., who requests the service, what the target is, and where the camera is, does not leak. We address this problem by proposing a content-centric privacy platform (C2P2) that enables the construction of a W3-privacy-preserving monitoring service without cloud dependency. C2P2 uses an image classification model of a target serving as the key to access the monitoring service specific to that target. In C2P2, communication is based on information-centric networking (ICN), which enables privacy preservation to be centered on the content itself rather than relying on a centralized system. Moreover, to preserve the privacy of bystanders, C2P2 separates sensitive information (e.g., human faces) from non-sensitive information (e.g., image background), while the privacy-aware forwarding strategies in C2P2 enable data aggregation and prevent privacy leakage resulting from false positives in image recognition. We evaluate the privacy leakage of C2P2 compared to that of a cloud-based system. The privacy analysis shows that, compared to the cloud-based system, C2P2 achieves a lower privacy loss ratio while reducing the communication cost significantly.
Suksomboon, Kalika, Ueda, Kazuaki, Tagami, Atsushi.  2018.  Content-centric Privacy Model for Monitoring Services in Surveillance Systems. Proceedings of the 5th ACM Conference on Information-Centric Networking. :190–191.

This paper proposes a content-centric privacy (CCP) model that enables privacy-preserving monitoring services in surveillance systems without cloud dependency. We design a simple yet powerful method that could not be obtained from a cloud-like system. The CCP model includes two key ideas: (1) the separation of the private data (i.e., target object images) from the public data (i.e., background images), and (2) service authentication with the classification model. Deploying the CCP model over ICN centers privacy on the content itself rather than relying on a cloud system. Our preliminary analysis shows that the ICN-based CCP model can preserve privacy with respect to W3-privacy, in which the private information of the target object is decoupled from the queries and cameras.
Su, W., Antoniou, A., Eagle, C..  2017.  Cyber Security of Industrial Communication Protocols. 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). :1–4.

In this paper, an industrial testbed built from commercial off-the-shelf equipment is proposed and used to study the weaknesses of industrial Ethernet, i.e., PROFINET. The investigation is based on observation of the principles of operation of PROFINET and the functionality of industrial control systems.

Su, Fang-Hsiang, Bell, Jonathan, Harvey, Kenneth, Sethumadhavan, Simha, Kaiser, Gail, Jebara, Tony.  2016.  Code Relatives: Detecting Similarly Behaving Software. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. :702–714.

Detecting “similar code” is useful for many software engineering tasks. Current tools can help detect code with statically similar syntactic and/or semantic features (code clones) and with dynamically similar functional input/output (simions). Unfortunately, some code fragments that behave similarly at the finer granularity of their execution traces may be ignored. In this paper, we propose the term “code relatives” to refer to code with similar execution behavior. We define code relatives and then present DyCLINK, our approach to detecting code relatives within and across codebases. DyCLINK records instruction-level traces from sample executions, organizes the traces into instruction-level dynamic dependence graphs, and employs our specialized subgraph matching algorithm to efficiently compare the executions of candidate code relatives. In our experiments, DyCLINK analyzed 422+ million prospective subgraph matches in only 43 minutes. We compared DyCLINK to one static code clone detector from the community and to our implementation of a dynamic simion detector. The results show that DyCLINK effectively detects code relatives with a reasonable analysis time.
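
The core step, testing whether one execution's dynamic dependence graph appears inside another's, can be prototyped with networkx's generic digraph matcher (toy instruction graphs below; DyCLINK's specialized algorithm is what makes this tractable at the scale of 422+ million candidate matches):

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Tiny instruction-level dynamic dependence graphs: nodes are executed
# instructions, edges are data/control dependences between them.
trace_a = nx.DiGraph([("load", "add"), ("add", "cmp"), ("cmp", "branch"),
                      ("load", "mul")])
trace_b = nx.DiGraph([("load", "add"), ("add", "cmp"), ("cmp", "branch")])

# Does trace_b's behavior appear as a subgraph of trace_a's execution?
matcher = isomorphism.DiGraphMatcher(trace_a, trace_b)
print(matcher.subgraph_is_isomorphic())  # True -> candidate code relatives
```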

Straub, Jeremy.  2019.  Cyber Mutual Assured Destruction as a System of Systems and the Implications for System Design. 2019 14th Annual Conference System of Systems Engineering (SoSE). :137–139.

Mutual assured destruction is a Cold War era principle of deterrence through causing your enemy to fear that you can destroy them to at least the same extent that they can destroy you. It is based on the threat of retaliation and requires systems that can either be triggered after an enemy attack is launched and before the destructive capability is destroyed, or systems that can survive an initial attack and be launched in response. During the Cold War, the weapons of mutual assured destruction were nuclear. However, with the incredible reliance on computers for everything from power generation control to banking to agriculture logistics, a cyber attack mutual assured destruction scenario is plausible. This paper presents this concept and considers the deterrent need, to prevent such a crippling attack from ever being launched, from a system of systems perspective.

Strasburg, Chris, Basu, Samik, Wong, Johnny.  2016.  A Cross-Domain Comparable Measurement Framework to Quantify Intrusion Detection Effectiveness. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :11:1–11:8.

As the frequency, severity, and sophistication of cyber attacks increase, along with our dependence on reliable computing infrastructure, the role of Intrusion Detection Systems (IDS) is gaining importance. One of the challenges in deploying an IDS stems from selecting a combination of detectors that are relevant and accurate for the environment where security is being considered. In this work, we propose a new measurement approach to address two key obstacles: the base-rate fallacy and the unit of analysis problem. Our key contribution is to utilize the notion of a `signal', an indicator of an event that is observable to an IDS, as the measurement target, and to apply the multiple instance paradigm (from machine learning) to enable cross-comparable measures regardless of the unit of analysis. To support our approach, we present a detailed case study and provide empirical examples of the effectiveness of both the model and the measure by demonstrating the automated construction, optimization, and correlation of signals from different domains of observation (e.g., network-based, host-based, application-based) and using different IDS techniques (signature-based, anomaly-based).
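
The multiple instance idea can be made concrete: each security event is a "bag" of signals observed across domains, and the event counts as detected if any deployed detector flags any signal in its bag, which keeps the measure comparable across units of analysis. A minimal sketch with hypothetical signals:

```python
# Each bag is one security event; its instances are the signals it emitted
# across domains (network, host, application). A bag is detected if any
# of its signals is flagged by any deployed detector.
bags = {
    "event-1": [("net", "syn-flood"), ("host", "proc-spawn")],
    "event-2": [("app", "sqli-string")],
    "event-3": [("net", "dns-burst"), ("app", "auth-fail")],
}
flagged = {("net", "syn-flood"), ("app", "auth-fail")}  # detector output

detected = {e for e, signals in bags.items() if any(s in flagged for s in signals)}
recall = len(detected) / len(bags)
print(detected, f"recall={recall:.2f}")  # event-level measure, unit-independent
```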

Stewart, Chase E., Vasu, Anne Maria, Keller, Eric.  2017.  CommunityGuard: A Crowdsourced Home Cyber-Security System. Proceedings of the ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. :1–6.

In this paper, we propose and implement CommunityGuard, a system that comprises intelligent Guardian Nodes which learn about and prevent malicious traffic from coming into and going out of a user's personal area network. In the CommunityGuard model, each Guardian Node tells the others about emerging threats, blocking these threats for all users as soon as they begin. Furthermore, Guardian Nodes regularly update themselves with the latest threat models to provide effective security against new and emerging threats. Our evaluation shows that CommunityGuard provides immunity against a range of incoming and outgoing attacks at all points of time with an acceptable impact on network performance. Oftentimes, the sources of DDoS attack traffic are personal devices that have been compromised without the owner's knowledge. We have modeled CommunityGuard to prevent such outgoing DDoS traffic on a wide scale, which can hamstring the otherwise very frightening prospect of crippling DDoS attacks.

Steinebach, Martin, Ester, Andre, Liu, Huajian.  2018.  Channel Steganalysis. Proceedings of the 13th International Conference on Availability, Reliability and Security. :9:1–9:8.

The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment poses an ideal background for those who wish to communicate covertly by the use of steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time having to deal with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images which are shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than using high performance computing and machine learning. To achieve this we first analyze the steganographic algorithm F5 applied to images with a high degree of diversity, as would be seen in a typical social network. We show that the biggest challenge lies in the detection of images whose payload is less than 50% of the available capacity of an image. We suggest new detection methods and apply these to the problem of channel detection in social networks. We show that, using our attacks, we are able to detect the majority of covert F5 channels after a mix containing 10 stego images has been classified by our scheme.
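
The scale problem the authors raise is base-rate arithmetic: at social-network volumes, even a strong detector drowns true detections in false alarms. A quick check (the prevalence and detector rates below are assumed; only the daily upload volume comes from the abstract):

```python
uploads = 100_000_000   # images per day (from the abstract)
prevalence = 1e-5       # assumed: 1 in 100,000 images carries a payload
tpr, fpr = 0.90, 0.01   # assumed detector true/false positive rates

true_pos = uploads * prevalence * tpr
false_pos = uploads * (1 - prevalence) * fpr
precision = true_pos / (true_pos + false_pos)
print(f"{false_pos:,.0f} false alarms/day, precision = {precision:.2%}")
# ~1,000,000 false alarms per day against ~900 real detections (~0.09%)
```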

Stanisavljevic, Z., Stanisavljevic, J., Vuletic, P., Jovanovic, Z..  2014.  COALA - System for Visual Representation of Cryptography Algorithms. IEEE Transactions on Learning Technologies. 7:178–190.

Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms, mechanisms, or hardware systems that are often not available to educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view with the possibility of forward and backward navigation. The benefits of the COALA system for students are observed through increases in the percentage of students who passed the exam and in the average exam grade during one school year.
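
Of the algorithms COALA animates, Diffie-Hellman is the easiest to trace by hand; the step-by-step computation the tool visualizes reduces to a few modular exponentiations (toy parameters, insecure):

```python
p, g = 23, 5                 # public prime modulus and generator
a, b = 6, 15                 # Alice's and Bob's private exponents

A = pow(g, a, p)             # Alice sends A = g^a mod p  -> 8
B = pow(g, b, p)             # Bob sends   B = g^b mod p  -> 19
shared_alice = pow(B, a, p)  # Alice computes B^a mod p
shared_bob = pow(A, b, p)    # Bob computes   A^b mod p
assert shared_alice == shared_bob == 2   # identical shared secret
```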

Stange, M., Tang, C., Tucker, C., Servine, C., Geissler, M..  2019.  Cybersecurity Associate Degree Program Curriculum. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1–5.

The spotlight is on cybersecurity education programs to develop a qualified cybersecurity workforce to meet the demand of the professional field. The ACM CCECC (Committee for Computing Education in Community Colleges) is leading the creation of a set of guidelines for associate degree cybersecurity programs called Cyber2yr, formerly known as CSEC2Y. A task force of community college educators has created a student-competency-focused curriculum that will serve as a global cybersecurity guide for applied (AAS) and transfer (AS) degree programs to develop a knowledgeable and capable associate-level cybersecurity workforce. Based on the importance of the Cyber2yr work, ABET, a nonprofit, non-governmental agency that accredits computing programs, has created accreditation criteria for two-year cybersecurity programs.

Stafford, Tom.  2017.  On Cybersecurity Loafing and Cybercomplacency. SIGMIS Database. 48:8–10.

As we begin to publish more articles in the area of cybersecurity, a case in point being the fine set of security papers presented in this particular issue as well as the upcoming special issue on Advances in Behavioral Cybersecurity Research which is currently in the review phase, it comes to mind that there is an emerging rubric of interest to the research community involved in security. That rubric concerns itself with the increasingly odd and inexplicable degree of comfort that computer users appear to have while operating in an increasingly threat-rich online environment. In my own work, I have noticed over time that users are blissfully unconcerned about malware threats (Poston et al., 2005; Stafford, 2005; Stafford and Poston, 2010; Stafford and Urbaczewski, 2004). This often takes the avenue of "it can't happen to me," or, "that's just not likely," but the fact is, since I first started noticing this odd nonchalance it seems like it is only getting worse, generally speaking. Mind you, a computer user who has been exploited and suffered harm from it will be vigilant to the end of his or her days, but for those who have scraped by, "no worries," is the order of the day, it seems to me. This is problematic because the exploits that are abroad in the online world these days are a whole order of magnitude more harmful than those that were around when I first started studying the matter a decade ago. I would not have commented on the matter, having long since chalked it up to the oddities of civilian computing, so to speak, but an odd pattern I encountered when engaging in a research study with trained corporate users brought the matter back to the fore recently. I have been collecting neurocognitive data on user response to security threats, and while my primary interest was to see if skin conductance or pupillary dilation varied during exposure to computer threat scenarios, I noticed an odd pattern that commanded my attention and actually derailed my study for a while as I dug in to examine it.
Srivastava, Animesh, Jain, Puneet, Demetriou, Soteris, Cox, Landon P., Kim, Kyu-Han.  2017.  CamForensics: Understanding Visual Privacy Leaks in the Wild. Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems. :30:1–30:13.

Many mobile apps, including augmented-reality games, bar-code readers, and document scanners, digitize information from the physical world by applying computer-vision algorithms to live camera data. However, because camera permissions for existing mobile operating systems are coarse (i.e., an app may access a camera's entire view or none of it), users are vulnerable to visual privacy leaks. An app violates visual privacy if it extracts information from camera data in unexpected ways. For example, a user might be surprised to find that an augmented-reality makeup app extracts text from the camera's view in addition to detecting faces. This paper presents results from the first large-scale study of visual privacy leaks in the wild. We build CamForensics to identify the kind of information that apps extract from camera data. Our extensive user surveys determine what kind of information users expected an app to extract. Finally, our results show that camera apps frequently defy users' expectations based on their descriptions.