Biblio

Found 116 results

Filters: Keyword is Cognitive Security
Kaveh Waddell.  2019.  The 2020 campaigns aren't ready for deepfakes. Axios.

A surge in deepfakes is expected during the 2020 presidential campaigns. According to experts, little has been done to prepare for fake videos in which candidates are depicted unfavorably in order to sway public perception.

Doron Kolton.  2018.  5 ways deception tech is disrupting cybersecurity. The Next Web.

Deception is a tactic that can be turned against adversaries in cybersecurity. Deception technology goes beyond the honeypot concept in that it can be used to actively lure and bait attackers into an environment in which deception is applied. Organizations can use deception technology to reduce false positives, trigger early threat hunting operations, and more.

Enterprises and their Security Operations Centers (SOCs) are under siege. Security events are triggered from all corners of the security stack: firewalls, endpoints, servers, intrusion detection systems, and other security solutions.

Here are the five ways deception tech is disrupting cybersecurity:
•    Maximizes accuracy with minimal human investment
•    Personalizes deception to your business
•    Ensures a post-breach defense against any type of attack
•    Triggers threat hunting operations
•    Empowers organizations toward strategic, active defense

George Hulme.  2016.  9 biases killing your security program. CSO Online.

George V. Hulme at CSO Online highlighted nine cognitive biases often held by security professionals that may be affecting the success of security programs. These biases include the availability heuristic, confirmation bias, information bias, the ostrich effect, and more. It is important to reduce these biases as they could lead to inaccurate judgments in the defense against cyberattacks. 

[Anonymous].  2019.  ADL Partners with Network Contagion Research Institute to Study How Hate and Extremism Spread on Social Media. ADL.

The Anti-Defamation League (ADL) partnered with the Network Contagion Research Institute (NCRI) to examine the ways in which extremism and hate are spread on social media. The partnership also supports the development of methods for combating the spread of both.

Carolyn Crandall.  2017.  Advanced Deception: How It Works & Why Attackers Hate It. Dark Reading.

The growing complexity and frequency of cyberattacks call for advanced methods to enhance the detection and prevention of such attacks. Deception is a cyber defense technique that is drawing more attention from organizations. This technique could be used to detect, deceive, and lure attackers away from sensitive data upon infiltration into a system. It is important to look at the most common features of distributed deception platforms such as high-interaction deception, adaptive deception, and more. 

B. Biggio, G. Fumera, P. Russu, L. Didaci, F. Roli.  2015.  Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective. IEEE Signal Processing Magazine. 32:31-41.

In this article, we review previous work on biometric security under a recent framework proposed in the field of adversarial machine learning. This allows us to highlight novel insights on the security of biometric systems when operating in the presence of intelligent and adaptive attackers that manipulate data to compromise normal system operation. We show how this framework enables the categorization of known and novel vulnerabilities of biometric recognition systems, along with the corresponding attacks, countermeasures, and defense mechanisms. We report two application examples, respectively showing how to fabricate a more effective face spoofing attack, and how to counter an attack that exploits an unknown vulnerability of an adaptive face-recognition system to compromise its face templates.

L. Chen, Y. Ye, T. Bourlai.  2017.  Adversarial Machine Learning in Malware Detection: Arms Race between Evasion Attack and Defense. 2017 European Intelligence and Security Informatics Conference (EISIC). :99-106.

Since malware has caused serious damages and evolving threats to computer and Internet users, its detection is of great interest to both anti-malware industry and researchers. In recent years, machine learning-based systems have been successfully deployed in malware detection, in which different kinds of classifiers are built based on the training samples using different feature representations. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the adversarial machine learning in malware detection. In particular, on the basis of a learning-based classifier with the input of Windows Application Programming Interface (API) calls extracted from the Portable Executable (PE) files, we present an effective evasion attack model (named EvnAttack) by considering different contributions of the features to the classification problem. To be resilient against the evasion attack, we further propose a secure-learning paradigm for malware detection (named SecDefender), which not only adopts classifier retraining technique but also introduces the security regularization term which considers the evasion cost of feature manipulations by attackers to enhance the system security. Comprehensive experimental results on the real sample collections from Comodo Cloud Security Center demonstrate the effectiveness of our proposed methods.

Robert Gutzwiller, Kimberly Ferguson-Walter, Sunny Fugate.  2019.  Are Cyber Attackers Thinking Fast and Slow? Exploratory Analysis Reveals Evidence of Decision-Making Biases in Red Teamers. Human Factors and Ergonomics Society Annual Meeting.

We report on whether cyber attacker behaviors contain decision making biases. Data from a prior experiment were analyzed in an exploratory fashion, making use of think-aloud responses from a small group of red teamers. The analysis provided new observational evidence of traditional decision-making biases in red team behaviors (confirmation bias, anchoring, and take-the-best heuristic use). These biases may disrupt red team decisions and goals, and simultaneously increase their risk of detection. Interestingly, at least part of the bias induction may be related to the use of cyber deception. Future directions include the development of behavioral measurement techniques for these and additional cognitive biases in cyber operators, examining the role of attacker traits, and identifying the conditions where biases can be induced successfully in experimental conditions.

Sunny Fugate, Kimberly Ferguson-Walter.  2019.  Artificial Intelligence and Game Theory Models for Defending Critical Networks with Cyber Deception. AI Magazine. 40(1):49-62.

Traditional cyber security techniques have led to an asymmetric disadvantage for defenders. The defender must detect all possible threats at all times from all attackers and defend all systems against all possible exploitation. In contrast, an attacker needs only to find a single path to the defender's critical information. In this article, we discuss how this asymmetry can be rebalanced using cyber deception to change the attacker's perception of the network environment, and lead attackers to false beliefs about which systems contain critical information or are critical to a defender's computing infrastructure. We introduce game theory concepts and models to represent and reason over the use of cyber deception by the defender and the effect it has on attacker perception. Finally, we discuss techniques for combining artificial intelligence algorithms with game theory models to estimate hidden states of the attacker using feedback through payoffs to learn how best to defend the system using cyber deception. It is our opinion that adaptive cyber deception is a necessary component of future information systems and networks. The techniques we present can simultaneously decrease the risks and impacts suffered by defenders and dramatically increase the costs and risks of detection for attackers. Such techniques are likely to play a pivotal role in defending national and international security concerns.

James Sanders.  2018.  Attackers are using cloud services to mask attack origin and build false trust. Tech Republic.

According to a report released by Menlo Security, the padlock in a browser's URL bar gives users a false sense of security as cloud hosting services are being used by attackers to host malware droppers. The use of this tactic allows attackers to hide the origin of their attacks and further evade detection. The exploitation of trust is a major component of such attacks.

Steven Templeton, Matt Bishop, Karl Levitt, Mark Heckman.  2019.  A Biological Framework for Characterizing Mimicry in Cyber-Deception. ProQuest. :508-517.

Deception, both offensive and defensive, is a fundamental tactic in warfare and a well-studied topic in biology. Living organisms use a variety of deception tools, including mimicry, camouflage, and nocturnality. Evolutionary biologists have published a variety of formal models for deception in nature. Deception in these models is fundamentally based on misclassification of signals between the entities of the system, represented as a tripartite relation between two signal senders, the “model” and the “mimic”, and a signal receiver, called the “dupe”. Examples of relations between entities include attraction, repulsion and expected advantage gained or lost from the interaction. Using this representation, a multitude of deception systems can be described. Some deception systems in cybersecurity are well-known. Consider, for example, all of the many different varieties of “honey-things” used to ensnare attackers. The study of deception in cybersecurity is limited compared to the richness found in biology. While multiple ontologies of deception in cyber environments exist, these are primarily lists of terms without a greater organizing structure. This is both a lost opportunity and potentially quite dangerous: a lost opportunity because defenders may be missing useful defensive deception strategies; dangerous because defenders may be oblivious to ongoing attacks using previously unidentified types of offensive deception. In this paper, we extend deception models from biology to present a framework for identifying relations in the cyber-realm analogous to those found in nature. We show how modifications of these relations can create, enhance or on the contrary prevent deception. From these relations, we develop a framework of cyber-deception types, with examples, and a general model for cyber-deception. The signals used in cyber-systems, which are not directly tied to the “Natural” world, differ significantly from those utilized in biologic mimicry systems. However, similar concepts supporting identity exist and are discussed in brief.

Rachel Alter, Tonay Flattum-Riemers.  2019.  Breaking Down the Anti-Vaccine Echo Chamber. State of the Planet.

Social media echo chambers, in which beliefs are significantly amplified and opposing views are easily blocked, can have real-life consequences. Communication between groups should still take place despite differences in views. The growth of echo chambers around the anti-vaccination movement has been blamed on those who seek to profit from ignorance and fear.

[Anonymous].  2019.  Can AI help to end fake news? Horizon Magazine.

Artificial intelligence (AI) has been used in the generation of deepfakes. However, researchers have shown that AI can also be used to fight misinformation.

Kelly Sheridan.  2019.  Cognitive Bias Can Hamper Security Decisions. Dark Reading.

A report published by Forcepoint, titled Thinking About Thinking: Exploring Bias in Cybersecurity with Insights from Cognitive Science, highlights availability bias as one of the biases held by security and business teams. Availability bias occurs when a person lets the frequency with which they receive information influence their decisions. For example, if there are more headlines about nation-state attacks, such attacks may become a greater priority to major decision-makers in the development and spending surrounding cybersecurity solutions. 

Sarah Garcia.  2019.  Cognitive Bias is the Threat Actor you may never detect. The Security Ledger.

Implicit biases held by security professionals could lead to the misinterpretation of critical data and bad decision-making, thus leaving organizations vulnerable to being attacked. It has been highlighted that biases, including aggregate bias, confirmation bias, anchoring bias, and more, can also affect cybersecurity policies and procedures. Organizations are encouraged to develop a structured decision-making plan for security professionals at the security operations levels and the executive levels in order to mitigate these biases. 

Mohammad Sujan Miah, Marcus Gutierrez, Oscar Veliz, Omkar Thakoor, Christopher Kiekintveld.  2019.  Concealing Cyber-Decoys using Two-Sided Feature Deception Games. 10th International Workshop on Optimization in Multi-agent Systems 2019.

An increasingly important tool for securing computer networks is the use of deceptive decoy objects (e.g., fake hosts, accounts, or files) to detect, confuse, and distract attackers. One of the well-known challenges in using decoys is that it can be difficult to design effective decoys that are hard to distinguish from real objects, especially against sophisticated attackers who may be aware of the use of decoys. A key issue is that both real and decoy objects may have observable features that may give the attacker the ability to distinguish one from the other. However, a defender deploying decoys may be able to modify some features of either the real or decoy objects (at some cost) making the decoys more effective. We present a game-theoretic model of two-sided deception that models this scenario. We present an empirical analysis of this model to show strategies for effectively concealing decoys, as well as some limitations of decoys for cyber security.

Gomez, Steven R., Mancuso, Vincent, Staheli, Diane.  2019.  Considerations for Human-Machine Teaming in Cybersecurity. Augmented Cognition. :153–168.

Understanding cybersecurity in an environment is uniquely challenging due to highly dynamic and potentially-adversarial activity. At the same time, the stakes are high for performance during these tasks: failures to reason about the environment and make decisions can let attacks go unnoticed or worsen the effects of attacks. Opportunities exist to address these challenges by more tightly integrating computer agents with human operators. In this paper, we consider implications for this integration during three stages that contribute to cyber analysts developing insights and conclusions about their environment: data organization and interaction, toolsmithing and analytic interaction, and human-centered assessment that leads to insights and conclusions. In each area, we discuss current challenges and opportunities for improved human-machine teaming. Finally, we present a roadmap of research goals for advanced human-machine teaming in cybersecurity operations.

Rachael Flores.  2018.  Consistent Deception vs. a Malicious Hacker. Bing U News.

Computer scientists at Binghamton University are working to increase the effectiveness of cyber deception tools against malicious hackers. Cyber deception is a security defense method that can be used to detect, deceive, and lure attackers away from sensitive data once they have infiltrated a system. Researchers want to improve the consistency of deception. The goal is to reduce the use of ‘bad lies’ in cyber deception. 

Peter Dizikes.  2019.  Could this be the solution to stop the spread of fake news? World Economic Forum.

This article pertains to cognitive security. False news is a growing problem. A study found that a crowdsourcing approach could help detect fake news sources.

Dave Climek, Anthony Macera, Walt Tirenin.  2016.  Cyber Deception. Cyber Security and Information Systems Information Analysis Center Journal. 4(1).

Defense through deception can potentially level the cyber battlefield by altering an enemy’s perception of reality through delays and disinformation, which can reveal attack methods and provide the attributions needed to identify the adversary’s strategy.

C. Wang, Z. Lu.  2018.  Cyber Deception: Overview and the Road Ahead. IEEE Security & Privacy. 16:80-85.

Since the concept of deception for cybersecurity was introduced decades ago, several primitive systems, such as honeypots, have been attempted. More recently, research on adaptive cyber defense techniques has gained momentum. The new research interests in this area motivate us to provide a high-level overview of cyber deception. We analyze potential strategies of cyber deception and its unique aspects. We discuss the research challenges of creating effective cyber deception-based techniques and identify future research directions.

Barford, Paul, Dacier, Marc, Dietterich, Thomas G., Fredrikson, Matt, Giffin, Jon, Jajodia, Sushil, Jha, Somesh, Li, Jason, Liu, Peng, Ning, Peng, et al.  2010.  Cyber SA: Situational Awareness for Cyber Defense. Cyber Situational Awareness: Issues and Research. 46:3–13.

Cyber SA is described as the current and predictive knowledge of cyberspace in relation to the Network, Missions and Threats across friendly, neutral and adversary forces. While this model provides a good high-level understanding of Cyber SA, it does not contain actionable information to help inform the development of capabilities to improve SA. In this paper, we present a systematic, human-centered process that uses a card sort methodology to understand and conceptualize Senior Leader Cyber SA requirements. From the data collected, we were able to build a hierarchy of high- and low-priority Cyber SA information, as well as uncover items that represent high levels of disagreement within and across organizations. The findings of this study serve as a first step in developing a better understanding of what Cyber SA means to Senior Leaders, and can inform the development of future capabilities to improve their SA and Mission Performance.

Herbert Lin, Jaclynn Kerr.  2019.  On Cyber-Enabled Information Warfare and Information Operations. Forthcoming, Oxford Handbook of Cybersecurity. :29 pages.

The United States has no peer competitors in conventional military power. But its adversaries are increasingly turning to asymmetric methods for engaging in conflict. Much has been written about cyber warfare as a domain that offers many adversaries ways to counter the U.S. conventional military advantages, but for the most part, U.S. capabilities for prosecuting cyber warfare are as potent as those of any other nation. This paper advances the idea of cyber-enabled information warfare and influence operations (IWIO) as a form of conflict or confrontation to which the United States (and liberal democracies more generally) are particularly vulnerable and are not particularly potent compared to the adversaries who specialize in this form of conflict. IWIO is the deliberate use of information against an adversary to confuse, mislead, and perhaps to influence the choices and decisions that the adversary makes. IWIO is a hostile activity, or at least an activity that is conducted between two parties whose interests are not well-aligned, but it does not constitute warfare in the sense that international law or domestic institutions construe it. Cyber-enabled IWIO exploits modern communications technologies to obtain benefits afforded by high connectivity, low latency, high degrees of anonymity, insensitivity to distance and national borders, democratized access to publishing capabilities, and inexpensive production and consumption of information content. Some approaches to counter IWIO show some promise of having some modest but valuable defensive effect. But on the whole, there are no good solutions for large-scale countering of IWIO in free and democratic societies. Development of new tactics and responses is therefore needed.

[Anonymous].  2015.  Cybersecurity 101: An Introduction to Cyber Deception Technology. Illusive Networks.

Deception technology is an outside-the-box cybersecurity approach that aims to turn the current paradigm on its head – from reactionary to proactive defense. Traditional, signature-based security measures continue to fall prey to sophisticated zero-day attacks and advanced persistent threats, despite the fact that companies are spending upwards of $3 million per year on information security. It is time for organizations to get proactive and use deception technology to enhance the way they architect a comprehensive security strategy. The article presents “4 Things Every CISO Must Know About Deception Cybersecurity.”