Biblio

Found 107 results

Filters: Keyword is Cognitive Security in Cyber
2019-09-25
Warren Perils.  2019.  Deception Technology.

A proactive approach to security can be adopted by organizations through the use of deception technology. Deception technology allows organizations to reduce attacker dwell time, detect attackers more quickly, and cut down on false positives. Modern deception platforms use machine learning and AI to remain scalable and easy to manage.

2019-09-11
Caleb Townsend.  2019.  Deepfake Technology: Implications for the Future. U.S. Cybersecurity Magazine.

Deepfakes' most menacing consequence is their ability to make us question what we are seeing. The more popular deepfake technology gets, the less we will be able to trust our own eyes.

2019-09-06
Gregory Barber.  2019.  Deepfakes Are Getting Better, But They're Still Easy to Spot. Wired.

This article pertains to cognitive security. There are deep concerns about the growing ability to create deepfakes, as well as about their malicious use to change how people perceive a public figure.

Lily Hay Newman.  2019.  Facebook Removes a Fresh Batch of Innovative, Iran-Linked Fake Accounts. Wired.

This article pertains to cognitive security and human behavior. Facebook announced a recent takedown of 51 Facebook accounts, 36 Facebook pages, seven Facebook groups and three Instagram accounts that it says were all involved in coordinated "inauthentic behavior." Facebook says the activity originated geographically from Iran.

2019-09-12
Omkar Thakoor, Milind Tambe, Phebe Vayanos, Haifeng Xu, Christopher Kiekintveld.  2019.  General-Sum Cyber Deception Games under Partial Attacker Valuation Information. CAIS, USC.

The rapid increase in cybercrime, causing a reported annual economic loss of $600 billion [20], has prompted a critical need for effective cyber defense. Strategic criminals conduct network reconnaissance prior to executing attacks to avoid detection and establish situational awareness via scanning and fingerprinting tools. Cyber deception attempts to foil these reconnaissance efforts by disguising network and system attributes, among other techniques. Cyber Deception Games (CDGs) are a game-theoretic model for optimizing strategic deception that can apply to various deception methods. The recently introduced initial model for CDGs assumes zero-sum payoffs, implying directly conflicting attacker motives, and perfect defender knowledge of attacker preferences. These unrealistic assumptions are fundamental limitations of the initial zero-sum model, which we address by proposing a general-sum model that can also handle uncertainty in the defender’s knowledge.
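As a rough illustration of the model the abstract describes (and not the paper's actual formulation or parameters), the toy Python sketch below sets up a tiny general-sum deception game: a defender assigns observable "masks" to two machines, a scanning attacker attacks only when its expected payoff from what it observes is positive, and the payoffs are general-sum because an attack on a honeypot helps the defender without helping the attacker. All configuration names and payoff numbers are invented for illustration.

# A minimal general-sum deception game, loosely in the spirit of the CDG
# model described above. All names and payoff numbers are illustrative
# assumptions, not parameters from the paper.
import itertools

# Two true machine configurations and two observable "masks" the defender
# can present to a scanning attacker.
TRUE_CONFIGS = ["honeypot", "database"]
MASKS = ["mask_a", "mask_b"]

# General-sum payoffs (defender_utility, attacker_utility) if the attacker
# strikes a machine with the given true configuration. Hitting the honeypot
# helps the defender and hurts the attacker, so interests are not zero-sum.
PAYOFF = {"honeypot": (1.0, -3.0), "database": (-5.0, 4.0)}

def response_to_mask(strategy, mask):
    """Expected (defender, attacker) utility when the attacker sees `mask`
    and attacks only if its expected utility is positive."""
    consistent = [c for c in TRUE_CONFIGS if strategy[c] == mask]
    if not consistent:
        return 0.0, 0.0
    exp_def = sum(PAYOFF[c][0] for c in consistent) / len(consistent)
    exp_att = sum(PAYOFF[c][1] for c in consistent) / len(consistent)
    return (exp_def, exp_att) if exp_att > 0 else (0.0, 0.0)

def defender_value(strategy):
    """Defender's expected utility under the attacker's best response,
    weighting each mask by how often the attacker observes it."""
    value = 0.0
    for mask in MASKS:
        weight = sum(1 for c in TRUE_CONFIGS if strategy[c] == mask)
        value += response_to_mask(strategy, mask)[0] * weight / len(TRUE_CONFIGS)
    return value

# Enumerate all deterministic maskings and keep the defender-optimal one.
best = max(
    (dict(zip(TRUE_CONFIGS, choice))
     for choice in itertools.product(MASKS, repeat=len(TRUE_CONFIGS))),
    key=defender_value,
)
print("best masking:", best, "-> defender value:", defender_value(best))

Under these invented payoffs, pooling both machines behind one mask dilutes the attacker's expected gain and beats revealing each machine's true role, which is one intuition for why deception has value even when attacks still occur.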

Sarah Cooney, Phebe Vayanos, Thanh H. Nguyen, Cleotilde Gonzalez, Christian Lebiere, Edward A. Cranford, Milind Tambe.  2019.  Warning Time: Optimizing Strategic Signaling for Security Against Boundedly Rational Adversaries. Teamcore, USC.

Defender-attacker Stackelberg security games (SSGs) have been applied for solving many real-world security problems. Recent work in SSGs has incorporated a deceptive signaling scheme into the SSG model, where the defender strategically reveals information about her defensive strategy to the attacker, in order to influence the attacker’s decision making for the defender’s own benefit. In this work, we study the problem of signaling in security games against a boundedly rational attacker. 
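To make the signaling idea concrete (this is a hand-rolled sketch, not the authors' algorithm), the Python snippet below models a single target: the defender covers it with some probability and may bluff a warning when it is uncovered, and the attacker is boundedly rational, modeled here with a quantal (softmax) response instead of a sharp best response. The coverage level, payoffs, and precision parameter LAMBDA are all hypothetical.

import numpy as np

# One target. The defender covers it with probability COVER. When covered
# she always sends a warning; when uncovered she bluffs a warning with
# probability q. (Hypothetical parameters for illustration.)
COVER = 0.4
U_ATT_SUCCESS, U_ATT_CAUGHT = 5.0, -10.0   # attacker's payoffs
U_DEF_ATTACKED = -5.0                      # defender's loss if hit while uncovered
LAMBDA = 0.8                               # quantal-response precision

def attack_prob_given_warning(q):
    """Boundedly rational (quantal) attack probability after a warning."""
    p_covered = COVER / (COVER + (1 - COVER) * q)   # Bayesian posterior
    eu_attack = p_covered * U_ATT_CAUGHT + (1 - p_covered) * U_ATT_SUCCESS
    eu_withdraw = 0.0
    # Softmax over {attack, withdraw} instead of a sharp best response.
    z = np.exp(LAMBDA * np.array([eu_attack, eu_withdraw]))
    return z[0] / z.sum()

def defender_utility(q):
    """Defender's expected utility as a function of the bluff rate q."""
    p_warn = COVER + (1 - COVER) * q
    p_attack = attack_prob_given_warning(q)
    # Damage occurs only when the target is uncovered and attacked.
    p_uncovered_and_warned = (1 - COVER) * q
    # No-warning branch: the target is uncovered for sure; for simplicity
    # we assume the attacker (with eu > 0) always attacks there.
    p_no_warn = (1 - COVER) * (1 - q)
    return (p_uncovered_and_warned * p_attack + p_no_warn) * U_DEF_ATTACKED

# Sweep bluff rates and report the defender-optimal one.
qs = np.linspace(0, 1, 101)
best_q = max(qs, key=defender_utility)
print(f"best bluff rate q = {best_q:.2f}, "
      f"defender utility = {defender_utility(best_q):.3f}")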

2019-09-10
Tom Warren.  2019.  Microsoft is trying to fight fake news with its Edge mobile browser. The Verge.

This article pertains to cognitive security. The Microsoft Edge mobile browser will use software called NewsGuard, which rates sites based on a variety of criteria, including their use of deceptive headlines, whether they repeatedly publish false content, and transparency regarding ownership and financing.

Peter Dizikes.  2019.  Want to squelch fake news? Let the readers take charge. MIT News.

An MIT study suggests the use of crowdsourcing to devalue false news stories and misinformation online. Despite differences in political opinions, all groups can agree that fake and hyperpartisan sites are untrustworthy.

Casey Newton.  2019.  People older than 65 share the most fake news, a new study finds. The Verge.

This article pertains to cognitive security. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted their behavior better than any other characteristic, including party affiliation.

2019-09-26
[Anonymous].  2019.  Cybersecurity Deception Technology. Blog post.

The effective deployment of deception technology still requires the foundational elements of cybersecurity to be in place. Without network segmentation, proper access control, security systems, and reporting, deception technology alone will add little value.

2019-09-10
Peter Dizikes.  2019.  Could this be the solution to stop the spread of fake news? World Economic Forum.

This article pertains to cognitive security. False news is a growing problem. A study found that a crowdsourcing approach could help detect fake news sources.

2019-09-12
Tanmoy Chakraborty, Sushil Jajodia, Jonathan Katz, Antonio Picariello, Giancarlo Sperli, V. S. Subrahmanian.  2019.  FORGE: A Fake Online Repository Generation Engine for Cyber Deception. IEEE.

Today, major corporations and government organizations must face the reality that they will be hacked by malicious actors. In this paper, we consider the case of defending enterprises that have been successfully hacked by imposing additional a posteriori costs on the attacker. Our idea is simple: for every real document d, we develop methods to automatically generate a set Fake(d) of fake documents that are very similar to d. The attacker who steals documents must wade through a large number of documents in detail in order to separate the real one from the fakes. Our FORGE system focuses on technical documents (e.g. engineering/design documents) and involves three major innovations. First, we represent the semantic content of documents via multi-layer graphs (MLGs). Second, we propose a novel concept of “meta-centrality” for multi-layer graphs. Our third innovation is to show that the problem of generating the set Fake(d) of fakes can be viewed as an optimization problem. We prove that this problem is NP-complete and then develop efficient heuristics to solve it in practice. We ran detailed experiments with a panel of 20 human subjects and show that FORGE generates highly believable fakes.
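The sketch below is a toy rendering of FORGE's core idea, not the authors' implementation: model a document's key facts as a concept graph, rank concepts by a centrality score (a single-layer stand-in for the paper's multi-layer "meta-centrality"), and generate each fake by perturbing the most central numeric facts so it stays plausible but wrong. The facts, edges, and threshold are all invented for illustration.

# A toy sketch (not the authors' FORGE system) of fake-document generation:
# rank concepts by centrality, then perturb the most central numeric facts.
import random
import networkx as nx

# Hypothetical facts extracted from a real engineering document d:
# concept -> (numeric value, unit).
facts = {
    "operating_pressure": (350, "psi"),
    "coolant_flow_rate": (12.5, "L/min"),
    "alloy_composition": (6.2, "% titanium"),
}

# Single-layer stand-in for the paper's multi-layer graph (MLG):
# edges connect concepts that co-occur in the document.
g = nx.Graph()
g.add_edges_from([
    ("operating_pressure", "coolant_flow_rate"),
    ("operating_pressure", "alloy_composition"),
])

# Stand-in for the paper's "meta-centrality": plain eigenvector
# centrality on the single layer.
centrality = nx.eigenvector_centrality(g)

def make_fake(seed):
    """Perturb the most central facts by a bounded random factor so the
    fake remains superficially plausible but factually wrong."""
    rng = random.Random(seed)
    fake = {}
    for concept, (value, unit) in facts.items():
        if centrality[concept] > 0.6:          # distort only key facts
            value = round(value * rng.uniform(0.7, 1.3), 2)
        fake[concept] = (value, unit)
    return fake

# Generate Fake(d): several believable-but-wrong variants of d.
for seed in range(5):
    print(make_fake(seed))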

2019-09-25
Mark Rockwell.  2019.  Sandia digs deeper into its cyber deception sandbox. GCN.

Sandia National Laboratories' virtual cybersecurity sandbox environment, HADES (High-fidelity Adaptive Deception & Emulation System), applies deceptive techniques. HADES lures hackers into a simulated virtual environment that includes replicated virtual hard drives, memory, and data sets. Sandia analysts can then observe the hackers' activity in real time.

2019-09-10
[Anonymous].  2019.  Peering under the hood of fake-news detectors. Science Daily.

MIT researchers conducted a study in which they examined automated fake-news detection systems. The study highlights the need for more research into the effectiveness of fake-news detectors.

Mikaela Ashburn.  2019.  Ohio University study states that information literacy must be improved to stop spread of ‘fake news’. Ohio University News.

A study done by researchers at Ohio University calls for the improvement of information literacy as it was found that most people do not take time to verify whether information is accurate or not before sharing it on social media. The study uses information literacy factors and a theoretical lens to help develop an understanding of why people share "fake news" on social media.

[Anonymous].  2019.  ADL Partners with Network Contagion Research Institute to Study How Hate and Extremism Spread on Social Media. ADL.

The Anti-Defamation League (ADL) partnered with the Network Contagion Research Institute (NCRI) to examine the ways in which extremism and hate are spread on social media. The partnership also supports the development of methods for combating the spread of both.

Jeff Grabmeier.  2019.  Tech fixes can’t protect us from disinformation campaigns. Ohio State News.

Experts at Ohio State University suggest that policymakers and diplomats further explore the psychological aspects of disinformation campaigns in order to stop nation-states from spreading false information on social media platforms. More focus needs to be placed on why people fall for "fake news".

[Anonymous].  2019.  Sprawling disinformation networks discovered across Europe ahead of EU elections. Homeland Security News Wire.

A U.K.-based global citizen activist organization called Avaaz conducted an investigation that revealed the spread of disinformation within Europe via Facebook ahead of the EU elections. According to Avaaz, these pages posted false and misleading content. Avaaz considers these disinformation networks weapons, given their significant size and complexity.

Ian Bogost.  2019.  Facebook’s Dystopian Definition of ‘Fake’. The Atlantic.

Facebook's response to an altered video of Nancy Pelosi has sparked debate over whether social media platforms should take down videos considered to be "fake". The definition of "fake" is also discussed.

Shannon Vavra.  2019.  Middle East-linked social media accounts impersonated U.S. candidates before 2018 elections. CyberScoop.

This article pertains to cognitive security and human behavior. Social media users with ties to Iran are shifting their disinformation efforts by imitating real people, including U.S. congressional candidates.

2019-09-06
Lily Hay Newman.  2019.  To Fight Deepfakes, Researchers Built a Smarter Camera. Wired.

This article pertains to cognitive security. Detecting manipulated photos, or "deepfakes," can be difficult. Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide.

2019-09-10
[Anonymous].  2019.  From viruses to social bots, researchers unearth the structure of attacked networks. Science Daily.

Researchers have developed a machine learning model of the protein interaction network to explore how viruses operate. The research can be applied to other types of attacks and network models across different fields, including network security. It has also been used to explore how trolls and bots influence users on social media platforms.

2019-09-06
Pawel Korus, Nasir Memon.  2019.  Outsmarting deep fakes: AI-driven imaging system protects authenticity. Science Daily.

Researchers at the NYU Tandon School of Engineering developed a technique to counter sophisticated methods of altering photos and videos to produce deepfakes, which are often weaponized to influence people. The technique uses artificial intelligence (AI) to determine the authenticity of images and videos.

2019-09-10
Rachel Alter, Tonay Flattum-Riemers.  2019.  Breaking Down the Anti-Vaccine Echo Chamber. State of the Planet.

Social media echo chambers, in which beliefs are significantly amplified and opposing views are easily blocked, can have real-life consequences. Communication between groups should still take place despite differences in views. The growth of echo chambers around the anti-vaccination movement has been blamed on those who seek to profit from ignorance and fear.