SoS Musings

Science of Security Musings

~ We are winning the battles but the war is elusive ~

Brief articles on areas of concern or interest in the Science of Security

SoS Musings #1 - We are winning the battles but the war is elusive!

The news continues to report cyber incidents. These intrusions compromise privacy and security, and call into question the stability and assurance of the power grid, elections, and the Internet of Things. The research community has ramped up and produced excellent theoretical and empirical research that can be leveraged to prevent compromise and to stimulate further research.

Representative work, including papers and emerging research initiatives, can be found throughout the Science of Security web site. Without knowing the exact details of the compromises, it is not unreasonable to think that existing work might have thwarted a number of these attacks but is not being incorporated. Transition to practice is problematic. Many suspected factors can be cited: rush to product, ignorance of research, poor publication and exposition of results, not-invented-here syndrome, and so on. The nation needs a cadre of researchers and companies aware of all the areas that contribute to security, and continuously aware of the state of the art in those areas, to proactively prevent future failures.

It is not clear what actions would move this forward. Not all the research produces gems, but in scholarly papers and reports the assumptions made and the hypotheses tested can be reviewed, challenged, and independently validated. The nation spends a good amount of money to stimulate strategic security work, yet it is impossible to judge how much of this work is examined or affirmed by the community. What is clear is that until we establish vetting and adoption of strategic research into the Internet of Things, we will continue pressing and spending on "the now"--correcting problems that surface rather than building science that protects the future.

Please take the online survey and let us know your views. Results will be published in a future newsletter.

SoS Musings #2 - Empirical Research

Lately I have been reading a lot of security papers, and they describe a preponderance of empirical work. What is satisfying is that there is less "fix a bug" work and more study of classes of problems. What is weak, in most cases, is the pursuit of the underlying causes of the empirical findings, which I fear yields at best incomplete results. In general, empirical work leads to better security and privacy, but without knowing the causes it can divert the science: other work built on it can chase tangential effects as if they were meaningful, diverting researchers and corrupting research. It can also lead to on-going work that never reaches the scientific conclusions needed to build a Security Science.

So, what should be done? Nothing--to a point. Well-done empirical research grounds results in reality; however, we must be on guard about its shortcomings. It would be interesting to know how much of this work ever gets put into practice.

SoS Musings #3 - Progress of Security Science on Hard Problems

The Science of Security and Privacy Annual Report (2016) provides a good account of its sponsored research and how it has moved the science which underpins security in five broad areas, with supporting papers. What additional knowledge and techniques from other disciplines might be useful in building this science? The tendency of researchers is to continue pursuing ideas with familiar techniques; new views and ideas can push Security Science forward faster. Please share your thoughts and help advance security.

What follows is a short caricature of the progress in the five research areas. For a richer description, review the report cited above.

The science underpinning Scalability and Composability has gained knowledge of the foundation and factors needed for a range of components to operate securely.

Policy-Governed Secure Collaboration has developed some key components and methods for understanding and enforcing policies and requirements for secure collaboration. Scientific foundations which reduce complexity are aiding in understanding the basis for privacy.

Security Metrics and Models is a daunting area because of its overarching goal of measuring and quantifying the security properties of a system, given how nascent the broader security science is. Fixed and controlled models have yielded good measures, and progress in understanding sources of metrics has pushed back the boundaries of the unknown.

Resilient Architectures has made progress modeling and understanding what components are necessary for resiliency. It has also developed the ability to quantify resilience.

Understanding Human Behavior through empirical studies continues to progress, but privacy concerns make large-scale data collection difficult. Progress has been made on empirical studies of collected data sets whose populations are inherently biased.

Building a science is a long, never-ending endeavor of theory, hypotheses, and repeatable experiments. Insights that hasten its progress are required.

SoS Musings #4 - Really?

The previous Musings (#3) asked what additional knowledge and techniques might be useful in building security science. At the time of this writing there have been no responses to the survey. REALLY?

It was hoped that this Musings would present and discuss the results. Instead it will highlight some of the ongoing research that is interesting and potentially useful to those who design and use software:

Some of the research at CMU has focused on highlighting and providing tools which incorporate discoveries made on the "hard problem" of scalability and composability. As an example, research has led to PoliDroid - a tool suite focused on identifying inconsistencies between Android applications and their corresponding privacy policies.

The NCSU lablet consists of a strong team of researchers. NCSU research aimed at helping software teams prioritize security efforts by approximating the attack surface of a software system via stack trace analysis was highlighted at the doctoral symposium at the 38th International Conference on Software Engineering and published (15% acceptance rate) in ICSE '16 Proceedings of the 38th International Conference on Software Engineering Companion.

The lablet at UIUC sponsored SoSSDN 2016, a workshop whose goal was to identify opportunities and challenges in using Software-Defined Networking to advance the "science of security".

A new area of research from the lablet at UMD is seeking to understand how users process security advice. "I Think They're Trying To Tell Me Something: Advice Sources and Selection for Digital Security" in the Proceedings of the IEEE Symposium on Security and Privacy is a start to finding some possible answers.

SoS Musings #5 - IoT

In June, the Pew Research Center published an article on the implications of the growing connectivity of the Internet of Things. It states that 49% of the world's population is connected online and estimates that 8.4 billion connected things are in use worldwide. Pew and Elon University canvassed a large number of technologists, scholars, practitioners, strategic thinkers, and other leaders to posit the implications. The group predicted that 15% of the population would disconnect while 85% would move deeply into connected life. They also came up with seven major themes on the future of the Internet of Things and connected life. All of these themes are driven by Human Behavior.

Human Behavior is one of the "Hard Problems" enumerated by the SoS lablets. Studies are hampered by the lack of large open databases from non-heterogeneous sources because of access and privacy concerns. The SOUPS 2017 conference proceedings give a glimpse of some of the current research in this area.

The Atlantic recently published an article on the need for the Internet of Things to have a code of ethics. It claims technology is evolving faster than the legal and moral frameworks needed to manage it, and posits that security is the critical Achilles heel and that use in the social framework must be considered from the design stage. NCSU is conducting research on norms; a paper entitled "Tosca: Operationalizing Commitments Over Information Protocols", presented at the 26th International Joint Conference on Artificial Intelligence (IJCAI), highlights some progress.

McKinsey published "Security in the Internet of Things", stating that security issues might represent the greatest obstacle to the growth of the Internet of Things and indicating a role for semiconductor companies in security. Hopefully, such security claims will require support from security science research.

Many are predicting an explosion of the IoT. A Forbes article provides an IoT roundup.

Clearly a large community of scientists is needed to provide solutions based on science. Increased funding for such research is problematic.

SoS Musings #6 - Toward Improving Security

This year the National Academies Press published a consensus study report: National Academies of Sciences, Engineering, and Medicine. 2017. Foundational Cybersecurity Research: Improving Science, Engineering, and Institutions. Washington, DC: The National Academies Press. It is focused on strategies rather than recommendations. The committee's analysis was organized under four broad aims for cybersecurity research.

It is interesting to compare the ongoing research programs and initiatives, reported on the VO, to the strategies. Some form of all four strategies is represented. What appears weak is a strategy to transition research to practice.

The New York State Department of Financial Services (NYDFS) proposed a broad set of regulations for banks, insurers, and other financial institutions in 2016. NYDFS issued a set of regulations called "Cybersecurity Requirements for Financial Services Companies" which became effective March 1, 2017, with the individual requirements phased in over two years. August 28, 2017 ended an initial 180-day transition period requiring: a formal risk-based cybersecurity program, a 14-point cybersecurity policy, a seven-point incident response plan, a qualified Chief Information Security Officer, continuously trained cybersecurity personnel, limited user access privileges, and 72-hour notice of certain events. NYDFS is mandating rules requiring financial institutions to take certain measures to safeguard their data and inform state regulators about cybersecurity incidents, intended to thwart future cyberattacks and protect consumers. It does not follow NIST Framework guidelines but creates its own. Other states are watching, and some are inclined to do the same. The effect of multiple specifications on security science is unclear.

SoS Musings #7 - Building Blocks for Security Science

Recently NSA announced the winner of the 5th Annual Best Scientific Cybersecurity Paper Competition: "You Get Where You're Looking For: The Impact of Information Sources on Code Security". The paper systematically analyzes how the use of information resources impacts code security in apps for mobile Android devices, and checks what it uncovers against prevailing anecdotal causes.

The contest was created to stimulate security research that would contribute to the building of a Science of Security. The award recognizes papers that display the attributes of scholarly scientific work in the past year and commends the author(s).

This being the paper competition's fifth year, it raises the question of the competition's effect. This is a difficult question to answer. One insight may be the number of researchers who have cited the winning papers in their own work. There is much debate as to what number would indicate influence; you can judge for yourselves. The citation counts below are as given by Google Scholar.

The initial winner "The Science of Guessing: Analyzing an Anonymized Corpus of 70 Million Passwords" was awarded in 2013 and has 396 citations. The paper offered careful and rigorous measurements of password use in practice and theoretical contributions to how to measure and model password strength.

The 2013 winning paper "Memory Trace Oblivious Program Execution", awarded in 2014, has 28 citations. This work leverages programming language techniques to offer efficient memory-trace oblivious program execution while providing formal security guarantees.

The 2014 winner "Additive and Multiplicative Notions of Leakage and Their Capacities", awarded in 2015, has 32 citations. It proposes a theory of channel capacity, generalizing the Shannon capacity of information theory, that can apply both to additive and to multiplicative forms of a recently proposed measure known as g-leakage.

Last year's winner "Nomad: Mitigating Arbitrary Cloud Side Channels via Provider-Assisted Migration", awarded in 2016, has 24 citations. It shows that Nomad, a system offering vector-agnostic defense against known and future side channels, is scalable to large cloud deployments, achieves near-optimal information leakage subject to constraints on migration overhead, and imposes minimal performance degradation for typical cloud applications such as web services and Hadoop MapReduce.

It appears that researchers are using these papers to influence their work! Hopefully they also follow suit and continue to display the attributes of scholarly scientific research.

SoS Musings #8 - Need for Scientifically Backed Security

At the request of the National Security Council, the President's National Infrastructure Advisory Council (NIAC) examined how federal authorities and capabilities can best be applied to improve the cybersecurity of the most critical infrastructure assets. The report and the cover letter sent to the President convey their findings. This appears to be a noble effort to use existing products and techniques to strengthen the infrastructure. In conjunction with this effort there needs to be scientific research detailing the input assumptions for any product used and the supporting evidence for any assurances made. As multiple sets of band-aids are applied to the infrastructure, it would be easy to unknowingly make it less secure and even more opaque. Best-in-practice techniques are sometimes driven by folklore or proprietary knowledge. With the explosion of the Internet of Things it seems doubtful that a separate, fully secure infrastructure could be achieved while giving it the up-to-date capabilities it would require to perform as intended.

The University of Illinois Urbana-Champaign (UIUC) has been doing scientific research on networks, and its work illustrates some of the research and knowledge needed to make a network more secure, reliable, and safe. Several examples:

Researchers are developing the analysis methodology needed to support scientific reasoning about the security of networks, with a particular focus on information and data flow security. The core of this vision is the Network Hypothesis Testing Methodology (NetHTM), a set of techniques for performing and integrating security analyses applied at different network layers, in different ways, to pose and rigorously answer quantitative hypotheses about the end-to-end security of a network. To fully realize NetHTM, effective evaluation methodologies for large-scale and complex networked systems are needed. Building on advances in scalable evaluation methodology and platforms, the researchers combined virtual-machine-based emulation with parallel simulation to develop DSSNet, and used it to evaluate SDN-based self-healing in critical energy systems and to study the impact of various cyber-attacks on network behavior.

Researchers have also developed a prototype design called Plankton that can predict and verify the future behavior of networks, including temporal properties. An APNet'17 paper details this research.

A summary of recent UIUC lablet research is available for review.

SoS Musings #9 - State of Cybersecurity Education

Cybersecurity education is maturing within university undergraduate and graduate programs, with an emerging surge to develop a K-12 curriculum. The acknowledgement that the topic goes beyond classic computer science and engineering and includes multidisciplinary aspects like psychology, economics, ethics, law, international relations, and safety is quite refreshing. But the numbers don't lie: the demand for cybersecurity jobs far outweighs the supply. The challenge is not only providing good K-20 curricula at all learning institutions, but also promoting an understanding of what cybersecurity careers look like. At the recent National Initiative for Cybersecurity Education (NICE) K-12 Cybersecurity Education Conference, keynote speaker Paul Vann, an 11th-grade student at the Commonwealth Governor's School in Virginia, said students know what nurses, doctors, lawyers, firefighters, and police officers do in their professions, but very few understand what cybersecurity professionals really do. As cybersecurity curricula develop and mature, it is also important to go beyond the how; we must comprehend the why. Law enforcement helps us live safely in physical space. Cybersecurity professionals help us live safely in cyberspace.

Cybersecurity education is not a new shiny object that just arrived on our national stage of challenges to solve. Cloaked in its new name, cybersecurity is the evolution of computer security, terminology dating back to the 1970s, along with the associated efforts to train and educate practitioners. Some will argue that today's cybersecurity is more complex than the computer security of yesteryear; technological advances inherently increase the complexity of designing functionality and its corresponding protections. The past provides insights into accomplishments from which we build future successes. Cybersecurity education may trace its roots back to the Computer Security Act of 1987, which mandated security awareness training. This legislation incited programs to organically blossom. Among the surviving efforts are the National INFOSEC Education and Training Program (NIETP), providing oversight of the National Security Agency and Department of Homeland Security-sponsored National Centers of Academic Excellence in Cyber Defense Education and Research (CAE-CDs & CAE-R), along with the NIST-led National Initiative for Cybersecurity Education (NICE), providing the Cybersecurity Workforce Framework (NIST SP 800-181). Adding certification programs aimed at training the practical application of cybersecurity techniques, like the (ISC)2 Certified Information Systems Security Professional, CompTIA programs, and SANS training to name a few, results in an ever-building quantity of education and training. Over the past several years, NSA's Research Directorate Science of Security (SoS) initiative has funded research to produce scientifically supported cybersecurity advancement and to establish cybersecurity as a science. The four SoS lablets, 25 sub-lablets, and over 150 additional collaborating institutions worldwide have promoted cybersecurity awareness and training, and many have added scientifically based cybersecurity courses to their curricula. These success stories plowed the field of university and workforce curriculum development, along with a taxonomy of the knowledge required to perform cybersecurity work.

Innovative cybersecurity education techniques and curricula are popping up all over, which is good news. The trend is driven by acknowledged incidents of cybercrime, attacks, and exploitation. In the past, an argument was required to gain the attention of the C-Suite; today it is the C-Suite that is demanding better cybersecurity professionals. So the demand is clearly visible. But are we doing enough? Are we seeking the graduate who understands the why and not only the how? It is more than a checklist; it is the comprehensive cyber knowledge that stimulates emerging ideas anticipating future functionality and exploitation capabilities. CAE-designated universities are fertile grounds to mature cybersecurity curricula encouraging "out of the box" thought and research. In addition, emphasis on K-12 is growing, as evidenced by NICE's National K-12 Cybersecurity Education Implementation Plan. But work within the educational community to normalize a standards-based curriculum that mandates some level of cybersecurity as a requirement for graduation remains on the to-do list. Complicating the dialog is the 10th Amendment to the Constitution, which makes education a function of the states. Grappling with educating students to fill a critical national cybersecurity workforce gap, which some feel is a national security matter, while balancing an amendment that limits a national educational approach is tricky. Efforts to address this programmatic shortfall across federal and state jurisdictional boundaries are critical.

Time is not on the side of cybersecurity education. Clearly the workforce gap demonstrates that our current national ability to educate cybersecurity-ready individuals is inadequate; the pipeline is long and mostly empty. However, positive steps are underway. A Presidential Memorandum expanding access to high-quality Science, Technology, Engineering and Math (STEM) and computer science education for K-12 students is a great step in the right direction. The inceptive private/public partnership efforts to work this challenge are evident at NICE conferences and in the plethora of cybersecurity blogs and websites, including professional journals with peer-reviewed scholarly papers. Efforts like GenCyber, which aims to inspire the next generation of cyber stars through summer camps engaging learners and teachers with sound cybersecurity principles and teaching techniques, make a positive impact on K-12 students. But more needs to be done to educate students about cybersecurity careers. Explaining them in simple everyday language is critical, and outcomes are more important than a laundry list of duties. For example, a cybersecurity career results in protecting and defending our cyber way of life, much as law enforcement protects and defends our physical way of life. Informing students and, dare say, the population at large is critical to stimulating a desire to seek cyber careers. The NICE Framework identifies the complexities of cybersecurity work roles, but we must boil them down to simple, digestible stories explaining why cybersecurity professionals are essential to our safe existence in cyberspace. Many references discuss the fact that cybersecurity is an in-demand field, yet students remain unaware. Clearly this is an area rich with opportunities to increase career field awareness.

Today's rapidly increasing dependence on computing technology for our daily living demands new and innovative approaches to inspire people to seek cybersecurity careers. The positive work begun by NICE, CAE-CD/R, the SoS initiative, and other private/public partnerships to develop robust mature cybersecurity K-20 curricula is half the challenge and must continue to mature. Perhaps it's time to dedicate energy into enticing students to seek the education needed to tackle the toughest cybersecurity workforce challenges. We must not delay in building a robust cybersecurity professional community. The bad guys continue to poke and prod our networks seeking the next breach to steal our identities, impair lifestyles and threaten our national security. Their successes are early predictors of imposed lifestyle changes. If we fail, the consequences will be unacceptable. The time to act is now.

SoS Musings #10 - 2018 Internet Security and Regulations

Around the new year, articles predicting internet security trends start popping up. 2018 is no different.

Govtech offers "The Top 18 Security Predictions for 2018". The article lists predictions from eighteen sources and includes their links. A substantial number of these are amplifications of last year's predictions, returning in a larger way. The European Union's General Data Protection Regulation (GDPR), whose requirements take effect in May, is also a factor, with mixed assessments of its possible effectiveness. It requires businesses to protect the personal data and privacy of EU citizens for transactions within EU states, and regulates the export of data outside the EU. The UK also has a new Internet Safety Strategy, focused on protecting citizens' rights and well-being online. The U.S. White House strategy, posted on the VO, is another take.

For the most part these all call out areas that need to be addressed without mentioning the details needed to achieve their goals. Forbes goes a little further, offering 11 cybersecurity resolutions for 2018 aimed at increasing security, drawn from 11 members of its tech council. A Harvard Business Review article states that the Internet of Things (IoT) is a game changer: IoT and other security issues aren't user interaction problems; they're device and system interaction problems. It argues that it is time to take the human factor out of the equation and hand those problems to intelligent systems.

How do we respond to the predictions and regulations? The devil is ALWAYS in the details. The internet is a global phenomenon whose interconnectivity we have come to need and expect. What assurance do we need for products/programs which claim to mitigate some piece of the puzzle? Do we accept the claims of industry or governments? What evidence is needed? What repercussions, actions, restitutions occur if the claim is found to be untrue?

One way to help bolster trust is to "open source" all solutions including assumptions, research and proofs so that they can be studied, verified, and built on. The Science of Security Virtual Organization is an attempt at this. It is unlikely that companies will freely divulge their intellectual capital which serves to generate their revenue. Should there be an international board that verifies (without disclosure) claims? Can it work together in a timely manner? How would it be funded?

Lacking some agreed-to equilibrium that balances security, privacy, and revenue, the Internet will contain both safe and unsafe upgrades and solutions. Believing in fixes that don't work is worse than no fix, as the flaw will no longer be studied with as much scrutiny. Perhaps the most urgent order of business is agreeing on the ground rules; otherwise we will improve some security features while continuing to chase our tails.

SoS Musings #11 - What's the Buzz in Academic Research?

Universities are abuzz with cybersecurity research activity. Significant grant money is flowing from industry and government to put our academic institutions to work. And why wouldn't we tap these centers of higher learning with their wealth of faculty and student talent? Faculty need research projects to achieve tenure; students need projects to complete course work. It is a win-win for universities and for institutions with unsolved, funded research challenges. A quick scan of the Science of Security (SoS) universities sponsored by the National Security Agency Research Directorate and the Centers of Academic Excellence in Cyber Defense Research (CAE-R), as designated by the National Security Agency and the Department of Homeland Security, reveals a broad set of research projects. Although difficult to pigeonhole into categories, the emerging themes may surprise.

In no order of preference or hierarchical importance, here are some areas of interest:

Assured Identity & Privacy: As the number of users and systems interacting with data grows, so do internal and external threats. Identity, Credential and Access Management (ICAM) plays a critical role in protecting data. Many universities are engaged in a full spectrum of research projects that fit under this area of interest. Topics include hardening against counterfeiting and tampering, trust-based approaches for securing mobile peer-to-peer networks, digital supply chains, trusted medical information systems and health informatics, just to name a few.

Forensics: Universities are exploring the use of science and technology in the process of investigating a cyber incident to maximize the effectiveness of proving the perpetrator has committed the malicious act in a court of law. For example, computational models and heuristic algorithms are being developed to improve the overall effectiveness of a cyber crime scene investigation procedure in Digital Forensics.

Human Centric Security: Human-centric security places the human front and center. Data is most valuable to the user at the point of access, when it is being displayed or used by a person; that is also when it is most vulnerable. Making certain that data is available and protected at the point of contact between human and information is vital. Research in this area includes phishing, password cracking, biometrics, whitelisting, human factors, trusted collaborative computing, and user authentication across cyberspace.

Network Security: This is a tried and true area of research with deep roots all the way back to the Orange Book days. It will always be with us as long as we have networks. Classic examples include covert communication analysis, secure data architecture, attack path complexities, attack defense development, resilience, intrusion detection systems, protocol vulnerability discovery and risk and vulnerability analysis. With the ever-changing landscape of technology, this research area remains essential to overall security.

Prevention, Detection and Response: Closely related to Network Security, this area peels the onion back a bit more and explores deep into network intrusion detection, attribution, social media analytics, visual analytics, recovery, assessing relationships in hacking and personality traits, psycholinguistics, and tracking sensor networks. It focuses on deployable tactical solutions to harden networks against adversary attacks.

Policy & Law: An increasing area of research across many universities, policy and law is addressing aspects of criminal activities, information privacy, free speech, commerce, international collaboration, intelligence, counterterrorism and national security. Impacts on cyber insurance and setting the norms for legal litigation hang in the balance.

Secure Software: Security flaws and vulnerabilities are all too common in software today. Many universities are conducting research to identify and prevent security flaws during development, where it is much more cost effective than in the test phase or post-deployment.

University cybersecurity research is healthy. These topic areas merely scratch the surface of a broad spectrum of research. It appears that cybersecurity is on everyone's research radar: funding is rich and topics are abundant. The next step is to identify the leading cybersecurity research in critical areas and implement concrete solutions.

SoS Musings #12 - The Trouble with Data

There is growing concern about data on or accessible via the Internet. Businesses are worried about their legal responsibilities and liabilities for data. An FTI Consulting study highlights the state of the union on data privacy and security. "A clear and recurring theme is that in-house legal teams are under greater pressure to meet ever-changing and increasing data-related challenges," said Chris Zohlen, a Managing Director in the Technology segment at FTI Consulting and co-author of the study. Several participants said they do not believe their organizations are safer than they were five years ago. Cloud storage and the GDPR regulation were among their concerns.

The growing IoT aggravates the problem. We are in the zettabyte era. A Tripwire article explores what that means and advocates using Artificial Intelligence (AI) tools while being smart and honest about what we are dealing with. It reports that the Cisco 2017 Annual Cybersecurity Report reveals that fewer than half of legitimate alerts actually lead to some sort of correction, and that less than 1% of severe/critical alerts are ever investigated.

In 2016, an article in the Federal Technology Insider proposed a three-step triage to avoid alert overload: Set a Goal (categorize alerts and reduce false positives), Get the Right Information (have each alert include the device name and the rule that generated it), and Consolidate (deliver informational alerts in a daily or weekly digest). Helpful advice in principle, but impractical given the volume.
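The categorize-and-consolidate steps above amount to routing alerts by severity. A minimal sketch of what such triage might look like (the alert fields, device names, and severities here are hypothetical, not drawn from any real product):

```python
from collections import Counter

# Hypothetical alert feed; in practice this would come from a SIEM.
alerts = [
    {"device": "fw-01", "rule": "port-scan", "severity": "critical"},
    {"device": "db-02", "rule": "failed-login", "severity": "informational"},
    {"device": "fw-01", "rule": "malware-c2", "severity": "critical"},
    {"device": "ws-07", "rule": "policy-change", "severity": "informational"},
]

# Steps 1 and 2: surface severe alerts immediately, with the device
# name and generating rule attached so an analyst can act on them.
urgent = [a for a in alerts if a["severity"] == "critical"]

# Step 3: consolidate informational alerts into a periodic digest
# instead of interrupting analysts with each one.
digest = Counter(a["rule"] for a in alerts if a["severity"] == "informational")

for a in urgent:
    print(f"INVESTIGATE {a['device']}: {a['rule']}")
print("weekly digest:", dict(digest))
```

Even this toy version makes the scaling problem visible: the triage logic is easy, but at hundreds of severe alerts per day the `urgent` list alone outpaces the analysts available to read it.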

An article in Infosecurity magazine reports: A full 80% of organizations receiving 500 or more severe/critical alerts per day currently investigate fewer than 1% of them. According to research from Enterprise Management Associates (EMA), it's mainly a resource issue: Not only do 68% of organizations suffer from some sort of staffing impact to their security teams, but larger organizations are collecting gigabytes to terabytes of data each day. In the end, detailed analysis showed that in aggregate, 80% of the organizations receiving 500 or more severe/critical alerts per day were only able to investigate 11 to 25 of those events per day, leaving them with what EMA characterized as "a huge, and frankly insurmountable, daily gap."

In March GCN magazine reported on GSA's recent plans for modernizing IT infrastructure. Based on direction from the White House, the General Services Administration's Technology Transformation Service is taking the lead to help agencies modernize their IT infrastructure. Joanne Collins Smee, acting director of TTS and deputy commissioner of the Federal Acquisition Service, outlined GSA's priorities for the next year at a March 1 AFCEA Bethesda event. Smee listed action items from the American Technology Council's December report on IT modernization, the creation of five Centers of Excellence at the Department of Agriculture and the administration of the Technology Modernization Fund (TMF) as her top priorities to modernize government systems. The five CoEs are: Cloud Adoption, Infrastructure Optimization, Customer Experience, Contact Center, and Service Delivery Analytics. Privacy and Security do not appear.

AI can provide help, but more is needed. Privacy and security should always be addressed, especially at the inception of a new system. We are going to continue to want and need more data. Strategic scientific research to discover provable methods that lessen the attack surface, and thereby assure designers and users, needs to be strongly supported and grown.

SoS Musings #13 - Uncle Sam Underdog in Cyber Fight

SoS Musings #13

Uncle Sam Underdog in Cyber Fight

Each time a government employee opens their email inbox, they face the risk of initiating a data breach that can inflict significant damage to the organization in which they work. Verizon's 11th edition of its Data Breach Investigations Report (DBIR) provides information on recent cyber incidents in an effort to raise awareness and understanding about the evolving state of cybersecurity threats. According to the findings highlighted in the report, a majority of the 304 confirmed data breaches experienced by the public sector in 2017 were launched by state-affiliated actors. The motive of such actors is often to perform cyber espionage in order to steal government secrets and personal data belonging to government employees. The methods used to perform this malicious activity include phishing, creating backdoors, using C2 channels, and more.

The DBIR highlights the fact that financial pretexting and phishing were involved in 93 percent of the breaches investigated by Verizon. In conjunction with this finding, email continues to be cited as the main entry point used to execute attacks. In addition, studies show that organizations are more likely to suffer a data breach as a result of social attacks. These findings call for further employee education on phishing.

SoS Musings #14 - Concerns with a Ray of Hope

SoS Musings #14

Concerns with a Ray of Hope

Nextgov reports that the Trump 2019 budget boosts cyber spending but cuts research. There are proposed funding hikes at DHS and Defense. DHS cyber research has been housed in the Science and Technology Directorate but will now move to the National Protection and Programs Directorate (NPPD), which handles cyber and infrastructure protection. The budget, however, includes a massive 18 percent cut to the government's main cyber standards organization, the National Institute of Standards and Technology. NIST just issued Version 1.1 of the Framework for Improving Critical Infrastructure Cybersecurity and plans to produce a companion document, the "Roadmap for Improving Critical Infrastructure Cybersecurity," highlighting key areas for further collaboration.

This will certainly increase tactical research at the price of strategic research. It appears funding will be aimed at current problems, plugging known gaps, rather than getting ahead by anticipating and supporting research that can solve future problems.

Are we eating our seed corn?

We certainly require both types of advances.

A 15-year-old security researcher was able to compromise the firmware on Ledger's hardware wallet.

At the end of March, the U.S. Justice Department charged nine Iranian nationals involved in a massive attack on behalf of the Islamic Revolutionary Guard Corps. The intellectual capital of academic institutions was targeted.

Eight novel flaws in computer chips have been found, dubbed Spectre Next Generation.

TechNewsWorld points out retail, industrial, and government breaches that signal increasing consumer and business vulnerabilities in an article titled "No Cure for Cyber Insecurity?"

Some current work:

Army scientists recently found that the best-performing cybersecurity teams have relatively few interactions with their team members and team captain. While this result may seem counterintuitive, it is consistent with major theoretical perspectives on professional team development.

ARL also published a paper detailing "Current and Future Applications of Machine Learning for the US Army." There are those in the community who caution that it is possible to discover the way a particular program "learns" and to use that knowledge to spoof the system.

Security magazine notes that at RSA 2018 we were cautioned that "In the Golden Age of Cyber Crime we have a People Problem."

McAfee offers some suggestions.

Whether the direction of ongoing research will fulfill these needs remains to be seen.

SoS Musings #15 - Bolstering Resilience to Defeat Automated and Distributed Cyber Threats

SoS Musings #15

Bolstering Resilience to Defeat Automated and Distributed Cyber Threats

The U.S. Department of Homeland Security (DHS) and the Department of Commerce released a joint report, titled “Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats,” in response to the May 11, 2017 Executive Order (EO) 13800, “Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure.” The order directed the Secretaries of Commerce and Homeland Security to lead a clear process with appropriate stakeholders to identify and mitigate the threats posed by botnets and similar cyberattacks. As requested, the two departments established an open and transparent process, which involved hosting workshops, publishing requests for comments, and initiating an inquiry through the President’s National Security Telecommunications Advisory Committee (NSTAC). The Departments of Defense, Justice, and State, along with the Federal Bureau of Investigation, the Federal Communications Commission, the Federal Trade Commission, sector-specific agencies, and other agencies interested in combating the threat of adversarial botnets, were also consulted during the process. As a result, themes, goals, and actions for reducing the threat of automated, distributed attacks such as botnets have been identified.

Principal Themes

Six themes were established based on input gathered from experts and stakeholders, as well as consultations. The themes describe opportunities and challenges in pursuing the reduction of threats posed by automated, distributed attacks:

Goals and Actions

The following goals and actions are in support of strengthening the resilience of the Internet and communications ecosystem against threats posed by automated, distributed cyberattacks such as botnets.

Goal 1: Identify a clear pathway toward an adaptable, sustainable, and secure technology marketplace.

The technology marketplace must encourage and reward the continuous research, development, and adoption of novel security technologies and secure processes. Because the exponential growth of insecure IoT devices has enabled massive botnets such as the infamous Mirai IoT botnet, it is important to develop performance-based security capability baselines that establish standards for the secure design, development, and lifecycle of IoT devices and systems in different threat environments. Development of these baselines should be collaborative, with participation from customers and governments as well as industry leadership. Industry-developed capability baselines should also inform federal IoT security capability baselines, ensuring that the IoT devices and systems used in the federal environment fulfill federal security requirements, which could in turn encourage international standardization. The federal government, in collaboration with industry and civil society, should support the advancement and adoption of software development tools, approaches, and processes that help manufacturers reduce the vulnerabilities in commercial off-the-shelf software. Research, development, and deployment of innovative technologies aimed at preventing and mitigating distributed attacks should also be facilitated and prioritized through collaborative technology transition activities, federal funding, and support from civil society. Government, industry, and civil society must work together to support the widespread adoption of IoT security best practices, frameworks, guidelines, and procedures for transparency.

Goal 2: Promote innovation in the infrastructure for dynamic adaptation to evolving threats.

Standards and practices developed for the prevention and mitigation of botnets and other automated, distributed threats should continue to be applied, followed, and improved upon across the digital ecosystem to manage evolving threats. The arrangements by which information about threats, network management techniques, and defensive strategies is shared domestically and globally among ISPs and their peering partners should be made comprehensive, up-to-date, and effective. A Framework for Improving Critical Infrastructure Cybersecurity (CSF) Profile should be developed to guide enterprises in preventing and mitigating DDoS attacks, through collaboration between stakeholders and subject matter experts in consultation with the National Institute of Standards and Technology (NIST). To create market incentives for early adopters of secure IoT technologies and practices, the federal government should demonstrate the effectiveness of such technologies and processes, create procurement guidelines based on IoT security baselines, and adopt procurement regulations that encourage the use of securely developed commercial off-the-shelf software. Information-sharing protocols must continue to be standardized and improved through collaboration among stakeholders in industry, government, and civil society to combat automated, distributed threats. Best practices and tools for managing network traffic across the ecosystem should also be enhanced or developed with support from the federal government, industry, academia, and civil society, in collaboration with infrastructure providers.

Goal 3: Promote innovation at the edge of the network to prevent, detect, and mitigate automated, distributed attacks.

Infrastructure services that provide security against attacks should be strengthened by improving the detection and mitigation of compromised devices in home and enterprise networks. The networking industry should continue to advance network security products and standards that ensure the security of network traffic. The secure use and configuration of IT and IoT products in home and small business networks should be simplified for owners. Enterprises should redesign their networks with security in mind, isolating insecure devices, controlling flows of communication, and more. Enterprises should also examine the ways in which their networks pose a risk to others in order to improve network security practices. The federal government should further investigate how the widespread adoption of Internet Protocol Version 6 (IPv6) may affect both the launch of automated, distributed attacks and the defense against them.

Goal 4: Promote and support coalitions between the security, infrastructure, and operational technology communities domestically and around the world.

Critical stakeholder communities must establish alliances and collaborations to counter automated, distributed threats. The sharing of actionable information about such threats should be improved through increased cooperation between ISPs, cybersecurity and incident response teams, cyber threat intelligence companies, and others, and government agencies, including law enforcement. Greater sharing of cyber threat information would significantly help law enforcement prevent and mitigate cybercrime. Cybersecurity engagements between the U.S. and international partners should continue to encourage the use of best practices, tools, and services aimed at strengthening the security of IoT products and preventing automated, distributed attacks. Sector-specific regulatory agencies and industry should collaborate to ensure that IoT products deployed within a given sector are appropriately secured and that deceptive marketing of security claims by IoT and information technology vendors is prevented. Reputation data and information-sharing measures must be leveraged and implemented to identify and examine attackers and the tools they use to launch attacks. Cybersecurity challenges posed by the growing connectivity of operational technology such as SCADA systems should also be addressed through continuous engagement between the cybersecurity and operational technology communities, facilitated by the federal government.

Goal 5: Increase awareness and education across the ecosystem.

The prevention and mitigation of distributed threats calls for greater cybersecurity awareness and enhanced skills among all stakeholders. The private sector should establish a labeling approach for home IoT devices, supported by an assessment process, to help security-conscious consumers identify securely designed IoT products and to create market incentives for the secure design and development of such products. The private sector should likewise establish voluntary labeling schemes for IoT devices deployed in industrial and critical infrastructure environments, helping security-conscious enterprises identify securely designed products and creating similar market incentives. Because the use of secure development tools and practices during the design and development of IoT products can significantly reduce their vulnerabilities, the government should encourage the adoption of secure-by-design software methodologies and security-aware software development tools within academia and the training industry. The National Initiative for Cybersecurity Education (NICE) should work with the academic sector to integrate cybersecurity principles into the curricula of engineering and related disciplines, further increasing awareness around the security of home IoT devices. The federal government should also develop a public awareness campaign to encourage users and small organizations to recognize and adopt IoT device security baselines.

Initial Next Steps for Stakeholder Action 

As the five goals and 24 supporting actions identified in the report are mutually supportive by design, failing to execute any one action could delay progress toward multiple goals for increasing the resilience of the Internet and communications ecosystem. Stakeholders should therefore take initial steps to drive the execution of these actions. The Departments of Commerce and Homeland Security should continue to work with industry, civil society, and international partners to develop an initial road map that prioritizes the identified actions. The federal government should demonstrate the efficacy of best practices for reducing automated, distributed threats in order to encourage other parties to take action. Industry, academia, and civil society should be encouraged to lead and coordinate the tracking of the prioritized road map's implementation. The Departments of Commerce and Homeland Security will also provide a 365-day status report to the President following the initial publication of the road map. International participation should be encouraged through greater engagement from stakeholders and the federal government in the development of international policies, standards, and best practices.

SoS Musings #16 - Biometrics Growth, Concerns, and Research

SoS Musings #16

Biometrics Growth, Concerns, and Research

The adoption and implementation of biometrics technology continues to increase. Biometrics can identify and authenticate a person by analyzing and measuring physical human characteristics such as the face, voice, fingerprint, retina, and more. The increasing utilization of such technology has been linked to the need for enhanced security for Internet of Things (IoT) devices, the replacement of password-based authentication systems, and the facilitation of law enforcement activities. An article in Infosecurity Magazine recently highlighted findings of an ABI Research report, Biometric Technologies and Applications, which reinforced the expectation that biometrics will become an essential factor in a user's digital ID within the IoT ecosystem. Bleeping Computer reported a new attack called Thermanator that can determine passwords by capturing thermal residue on keyboards, adding to the collection of password-stealing attacks and further indicating the need to replace passwords with alternative forms of authentication such as biometrics. Police were able to identify the suspect in the mass shooting that occurred at the Capital Gazette newsroom in Annapolis, Maryland on June 28, 2018 through the use of a facial recognition system. Biometrics offers benefits such as improved authentication and identification of individuals, but major concerns surrounding the technology remain.

The increased use and application of biometrics brings with it security concerns. Although biometric authentication offers improved security, since the biological data used to verify an individual's identity is distinctive and difficult for attackers to guess, the technology can still be defeated through methods recently demonstrated by researchers. Recent research has brought further attention to the possibility of fooling different types of biometric authentication systems, including fingerprint scanners, facial recognition, voice recognition, and iris recognition. A team of researchers from Fudan University in China, the Chinese University of Hong Kong, Indiana University, and Alibaba Inc. demonstrated an LED baseball cap they created to trick facial recognition software into misidentifying an individual. Researchers at the University of Eastern Finland conducted a study showing that voice recognition systems can be deceived by voice impersonators, as well as by technologies such as voice conversion and speech synthesis. Hackers from the Chaos Computer Club in Germany defeated the iris-recognition feature in Samsung's Galaxy S8 smartphone shortly after its release, using an artificial eye they created with a digital camera, printer, and contact lens. The fingerprint scanners on the Samsung Galaxy S6 and Huawei Honor 7 smartphones were successfully fooled by researchers at Michigan State University using an inkjet-printed fingerprint. As methods to trick biometric security systems emerge, countermeasures must continue to advance.

The expanded use of biometrics has also raised privacy concerns. A Wired article discussing security and privacy concerns surrounding biometrics emphasizes their public nature: the data used for identification are publicly visible, unlike passwords or credit cards, which are inherently private. A malicious actor could take a high-resolution picture of an individual's iris or recover a fingerprint left on glass and attempt to bypass security features in which these physical characteristics are used for identification or authentication. The increasing use of biometrics such as facial recognition in law enforcement also worries privacy advocates. A 2016 report by the Center on Privacy & Technology at Georgetown University's law school highlighted the storage of facial recognition data on over 117 million Americans by U.S. law enforcement agencies. Critics fear the use of biometric facial recognition by law enforcement because of the potential for data abuse, misidentification errors, extreme surveillance, and lack of regulation.

Efforts have been made to strengthen the accuracy, security, and privacy of biometric authentication and identification. Researchers at the Georgia Institute of Technology developed a new login authentication approach called Real-Time Captcha, which improves the security of biometric techniques that use video or images of users' faces. The technique requires a user to look into their smartphone's built-in camera and respond, within a short amount of time, to a randomly selected question presented as a Captcha. This complicates the process of spoofing legitimate users. Korean researchers have developed a smartphone fingerprint sensor that measures skin temperature and pressure to prevent the use of an artificial hand or fingerprint. A 3D facial recognition model called FR3DNet, designed by researchers from the University of Western Australia, improves the accuracy and performance of facial recognition. Microsoft has called for U.S. government regulation of facial recognition technology to protect against the threats it poses to privacy. Researchers and organizations should continue to support the evolution of biometrics and the implementation of regulations for this technology.
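Reduced to its essentials, the Real-Time Captcha idea is a challenge-response check with a time bound: a reply is accepted only if it is both correct and fast enough that pre-recorded or synthesized media is unlikely to have produced it. The following toy sketch illustrates that logic only; the challenge, the five-second threshold, and the function name are illustrative assumptions, not the researchers' actual system:

```python
import time

# Hypothetical challenge/answer pair and time limit; the real system
# presents the challenge as a Captcha while the camera verifies the face.
CHALLENGE = ("What is 3 + 4?", "7")
TIME_LIMIT = 5.0  # seconds; assumed threshold


def verify(answer: str, issued_at: float, answered_at: float) -> bool:
    """Accept only answers that are both correct and delivered quickly."""
    question, expected = CHALLENGE
    fast_enough = (answered_at - issued_at) <= TIME_LIMIT
    return fast_enough and answer.strip() == expected


now = time.time()
print(verify("7", now, now + 2.0))   # True  -- correct and timely
print(verify("7", now, now + 30.0))  # False -- too slow, likely replayed
```

The security argument is the timing bound: an attacker stitching together synthetic video of the victim answering an unpredictable question would need to do so faster than legitimate users respond.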

Biometric technology calls for continued advancement as it expands in application. The privacy, security, and performance of biometric technology and standards must continue to be supported through further research and development efforts.

SoS Musings #17 - Hacking Bodies and Networks

SoS Musings #17

Hacking Bodies and Networks

The realm of medical technology is rising in the ranks of areas targeted by cyberattackers. The purpose of medical technology is to save lives and improve quality of life, as such technology is used in the diagnosis, monitoring, and treatment of an extensive range of major illnesses, minor ailments, and injuries. However, as the healthcare sector increases in connectivity, hackers are becoming more enticed to target medical devices. In addition to higher connectivity, the healthcare sector also holds valuable data and suffers from inadequate security practices, which greatly contributes to the increased targeting of medical devices. Medical technology such as pacemakers, anesthesia systems, and electroencephalogram systems has been found to be vulnerable to cyberattacks. Such attacks are cause for concern, as the disruption of these devices endangers the operation of healthcare providers as well as the safety and privacy of patients.

Recent discoveries of security vulnerabilities in medical devices have brought further attention to the insecurity of such devices, which poses a threat to the welfare and privacy of patients as well as the security of hospital networks. Security researchers from WhiteScope and QED Secure Solutions found vulnerabilities that put widely used pacemakers and implantable insulin pumps manufactured by Medtronic at risk of being controlled by hackers. The vulnerabilities discovered in Medtronic's software delivery network could be exploited to remotely perform life-threatening actions via pacemakers and insulin pumps, such as manipulating the electrical impulses used to regulate heart rate and disrupting the administration of insulin. In addition to threatening patients' well-being, cyberattacks on medical devices can also breach patient privacy, as indicated by a Spirent SecurityLabs study examining the security of IV infusion pumps and the digital smart pens used by doctors. The study found that these devices, which deliver fluids to a patient's body and prescribe medications, contain vulnerabilities that could allow attackers to steal information such as patients' names, contact information, and sensitive medical data. Researchers at Cisco's Talos Intelligence Group uncovered vulnerabilities in NeuroWorks, software used in several electroencephalogram (EEG) devices, which could allow attackers to gain unauthorized access to patient data on EEG devices and systems connected to the hospital network, as well as execute larger attacks on the network. According to researchers at Ben-Gurion University's (BGU) Malware Lab, medical imaging devices such as computed tomography (CT) and magnetic resonance imaging (MRI) machines also contain vulnerabilities that could lead to excessive discharge of radiation, disruption of image results, and more.
Other recent research findings have also brought attention to the possibility of hackers falsifying patient vitals and invading hospital networks, owing to the use of a weak communications protocol and the presence of Wi-Fi security flaws in medical equipment; falsified vitals could lead to incorrect diagnoses. These discoveries call for further research and development to protect such technology against cyberattacks.

Efforts continue to be made to improve the security of medical devices against cyberattacks. The vulnerabilities highlighted in recent medical device cybersecurity research stem from the absence of digital code signing, the abuse of network security protocols, insecure software code, and more. An article in Infosecurity Magazine also emphasizes poor user practices that contribute to the insecurity of medical devices, including the inappropriate use of embedded browsers on medical workstations and the use of outdated software. In response to the increasing risk of cyberattacks on medical devices, the Food and Drug Administration (FDA) has released a "Medical Device Safety Action Plan," which supports the continuous patching and updating of medical devices, improved vulnerability disclosure, and more. The healthcare industry has also increased spending on cybersecurity resources to defend its systems and technology against cyber threats. In addition, researchers have developed and proposed new methods for securing medical devices. Researchers at MIT have developed an innovative transmitter that applies an ultrafast frequency-hopping method to protect data transmitted between wireless devices such as medical devices. Researchers from KU Leuven in Belgium have proposed an encryption method that would improve the security of implantable neurostimulators. In an attempt to secure CT devices, BGU Malware Lab researchers have been working on a machine learning-based algorithm to improve the detection of anomalies in such machines. Progress must continue in the research and development of cybersecurity solutions for medical devices as they are increasingly targeted by hackers.
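The frequency-hopping defense rests on both endpoints deriving the same unpredictable channel sequence from a shared secret, so an eavesdropper or jammer who lacks the secret cannot follow the hops. MIT's transmitter implements this in hardware at far higher speeds; the toy software sketch below shows only the shared-seed idea, and the channel count, seed, and function name are illustrative assumptions:

```python
import hashlib

NUM_CHANNELS = 80  # assumed number of radio channels


def channel_sequence(seed: bytes, length: int) -> list[int]:
    """Derive a pseudorandom hop sequence by iterating a hash over the seed."""
    channels = []
    state = seed
    for _ in range(length):
        state = hashlib.sha256(state).digest()
        channels.append(state[0] % NUM_CHANNELS)
    return channels


# Transmitter and receiver share the secret, so they hop in lockstep.
tx = channel_sequence(b"shared-secret", 5)
rx = channel_sequence(b"shared-secret", 5)
print(tx == rx)  # True -- both ends derive the same channel sequence
```

An attacker observing any single channel sees only a fraction of the transmission, and without the seed cannot predict where the next packet will appear.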

As cyberattacks on medical devices could lead to major consequences such as the physical harming of patients, breach of sensitive medical data, and disruption to the operation of healthcare providers, it is vital that research and developments surrounding the advancement of security for such devices continue to grow.

SoS Musings #18 - Get Smart About Smart City Cybersecurity

SoS Musings #18

Get Smart About Smart City Cybersecurity

The ultimate goal of a "smart city" is to improve the quality of life of those residing within it. However, attacks on smart city systems could have devastating consequences for residents. A smart city deploys technologies to manage the performance of urban services by analyzing data collected from internet-connected devices, including environmental sensors, traffic monitors, water level gauges, and more. Smart city systems can be implemented to manage air quality, water flow, traffic signals, transportation, disaster warnings, and more. The compromise of these systems by cyberattackers could cause mass panic similar to the incident in Hawaii on January 13, 2018, when an employee sent a false ballistic missile alert via the Emergency Alert System, leaving Hawaiians fearing for their lives. Although that incident resulted from human error, it ignited concerns about the deliberate abuse of such systems by cyberattackers to wreak havoc.

Recent research has highlighted the security weaknesses in smart city systems and the havoc that could follow if malicious actors exploited them. The panic that followed the false missile alert in Hawaii prompted researchers from Threatcare and IBM X-Force Red to investigate how vulnerable smart city systems are to hacking and what dangers could arise from such incidents. Smart city systems from the vendors Libelium, Echelon, and Battelle were found to contain 17 zero-day vulnerabilities that hackers could exploit to manipulate the sensors and data these systems rely on, causing major disruption or harm. The vulnerabilities discovered in this study could enable a number of disruptive and potentially disastrous outcomes, such as false alerts of floods or radiation leaks, traffic gridlock, and the shutdown of lights. The ways in which these vulnerabilities arose call for vendors to prioritize and examine security in the development of smart city systems.

Many of the vulnerabilities discovered in smart city systems by IBM X-Force Red and Threatcare were reported to be simple to exploit, falling into common categories such as default passwords, authentication bypass, and SQL injection. In addition, many of the smart city devices used in these systems were found to be reachable remotely via the search engines Shodan and Censys, which could allow attackers to determine how the devices are used, where they are located, who purchased them, and what security features they contain. Following the disclosure of the discovered vulnerabilities to the vendors of the affected smart city products, patches and software updates were issued. However, further steps need to be taken to ensure the security of these systems.
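The X-Force Red report does not publish exploit details, but the SQL injection category it names can be illustrated generically. The sketch below, against a hypothetical device-login table, shows why queries built by string concatenation fall to a classic bypass payload while parameterized queries do not:

```python
import sqlite3

# Toy device-management database (hypothetical schema, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: attacker-controlled input is concatenated into the query text.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: input is bound as data and never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# Classic authentication-bypass payload.
payload = "' OR '1'='1"
print(login_unsafe("admin", payload))  # True -- the bypass succeeds
print(login_safe("admin", payload))    # False -- the bypass fails
```

The only difference between the two functions is how user input reaches the database engine, which is why parameterization is the standard fix for this vulnerability class.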

Researchers have urged manufacturers and users of smart city devices to take further action to secure them. The leaders of cities in which these systems are deployed, as well as the vendors of the devices used in them, are expected to make security a priority in the development and implementation of this type of technology. The security of smart cities could be improved by examining security protocols, creating security frameworks, and establishing procedures for patching security vulnerabilities. Researchers have also emphasized specific practices such as restricting who can connect to smart city devices through IP address restrictions, using applications that identify vulnerabilities, enforcing stronger password practices, and deactivating remote administration features that are not required.
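The IP address restriction practice mentioned above amounts to an allowlist check. The sketch below shows a minimal version; the address ranges are hypothetical placeholders, and a real device would load its allowlist from configuration rather than hard-code it:

```python
import ipaddress

# Hypothetical management-network allowlist (illustrative values only).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/16"),    # operations center LAN
    ipaddress.ip_network("192.168.5.0/24"),  # on-site maintenance subnet
]

def connection_allowed(source_ip: str) -> bool:
    """Permit a connection only if it originates from an allowlisted range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(connection_allowed("10.20.33.7"))    # True: inside the operations LAN
print(connection_allowed("203.0.113.50"))  # False: arbitrary internet host
```

A check like this, applied at the device or its gateway, directly counters the Shodan/Censys exposure described above by refusing connections from outside known management networks.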

This study calls for further investigation of the vulnerabilities in smart city systems. More strategies and best practices for bolstering the security of such systems should be developed, as the exploitation of these vulnerabilities could have serious implications for the security and well-being of city residents.

SoS Musings #19 - Unpacking Cryptojacking

SoS Musings #19

Unpacking Cryptojacking

Cryptocurrency is a digital currency that is becoming an increasingly popular form of investment. In contrast to conventional currencies, cryptocurrency is not centrally managed or facilitated by banks or other financial institutions. The "crypto" in cryptocurrency derives from the use of cryptography to secure and verify the transfer of funds. Cryptocurrency transactions are processed and finalized via a decentralized distributed ledger called a blockchain. Cryptocurrency mining is the process by which the digital currency is created. Miners in a blockchain network verify cryptocurrency transactions by solving computational puzzles built on cryptographic hash functions in order to add blocks of transactions to the chain. The process is competitive, as the miner who solves the puzzle and adds the block first is rewarded with cryptocurrency. A miner must also have the equipment necessary to mine effectively. A standard PC is no longer sufficient, as mining requires a massive amount of processing power and electricity and the number of people mining has increased significantly. Miners therefore need high-quality GPUs or computers containing hardware specialized for cryptomining, which drives up the cost of participating. Rather than pay those costs, hackers have turned to the malicious practice of cryptojacking to steal computing resources.
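The hash-puzzle competition described above can be illustrated with a toy proof-of-work loop. This is a simplified sketch, not any real network's algorithm: real blockchains use binary difficulty targets, double hashing, and vastly higher difficulty, which is exactly why mining consumes so much computing power.

```python
import hashlib

def mine_block(transactions: str, prev_hash: str, difficulty: int = 4):
    """Search for a nonce whose block hash starts with `difficulty` zero hex digits.

    Each extra zero multiplies the expected work by 16, which is how a
    network tunes how expensive it is to add a block.
    """
    prefix = "0" * difficulty
    nonce = 0
    while True:
        block = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(block).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine_block("alice->bob:5", "0" * 64, difficulty=4)
print(nonce, digest)  # digest begins with "0000"
```

Verifying the solution takes a single hash, while finding it takes tens of thousands of attempts on average; that asymmetry is what makes stolen CPU time so valuable to cryptojackers.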

The computationally demanding process of mining cryptocurrency has led to a significant increase in cryptojacking, the unauthorized hijacking of unsuspecting users' computer processing power to mine cryptocurrency. Hackers continue to use cryptojacking to speed up their mining without investing in the specialized equipment required to mine legitimately and effectively. There are two main forms of cryptojacking: one uses phishing tactics to deceive users into downloading cryptomining malware onto their devices, and the other injects cryptomining scripts into websites or widely distributed web ads. According to Symantec's Internet Security Threat Report, cryptojacking attacks rose rapidly in 2017, with an estimated 8 million cryptojacking events detected and blocked in the month of December alone. In the first quarter of 2018, McAfee observed a 629% increase in cryptojacking malware samples. Although these attacks do not affect data, they drain computing resources, leading to slower computer performance, higher electricity bills, and shortened device lifespans. Both individuals and organizations are affected, but the consequences for organizations are greater: cryptojacking can significantly raise electricity and IT labor costs and diminish business opportunities.

Recent reports have highlighted the prevalence and amplification of cryptojacking attacks. According to Check Point's Global Threat Index for September 2018, two of the top malware threats detected by the security company, Coinhive and Cryptoloot, perform cryptocurrency mining. Coinhive and Cryptoloot are legitimate online services that allow website owners to generate an alternative source of revenue by mining cryptocurrency on their sites using JavaScript libraries. However, malware authors have misused these JavaScript libraries to perform cryptojacking on hacked sites, mobile apps, desktop software, and more. Check Point researchers observed Coinhive to be the most prevalent mining malware, now impacting 19% of organizations globally. Recently, Coinhive mining malware was used to infect more than 30,000 MikroTik routers across India, following an infection of over 200,000 MikroTik routers across Brazil that also used Coinhive. These incidents abused the routers' capabilities to inject the malware into web pages visited by users of the compromised devices in order to mine Monero cryptocurrency. In addition, Check Point has reported a 400% rise in cryptojacking attacks targeting Apple iPhones using Coinhive mining malware. Another cryptomining malware, called XMRig, was recently discovered by Palo Alto Networks' Unit 42 threat research team being distributed via fraudulent Adobe Flash updates that appear legitimate because they actually install a Flash Player update while planting the cryptomining malware on unsuspecting victims' PCs to mine Monero. RedLock has also drawn attention to hackers infiltrating public cloud environments to use their computing resources to mine cryptocurrency. Cryptojacking is also a threat to critical infrastructure, as shown by the injection of cryptomining malware into a European water utility's control system, which could have disrupted the management of the plant. As cryptojacking attacks continue to grow in frequency and intensity, security practices must be implemented or strengthened to prevent and detect them.

Cryptojacking attacks are expected to keep rising alongside the popularity of cryptocurrency, which calls for organizations and individuals to improve their security practices. The recent compromises of MikroTik routers for cryptojacking highlight the importance of device owners applying the patches issued for critical vulnerabilities that hackers can exploit to install cryptomining malware. The increase in cloud cryptojacking calls for database encryption and constant monitoring of cloud resources. Because cryptojacking relies mostly on phishing and the injection of malicious scripts into websites or web ads, security awareness training should be used to teach users how phishing distributes cryptomining malware, while ad blocking and advanced endpoint protection can detect and block cryptominers. Network monitoring is a recommended way for organizations to detect cryptomining activity, as monitoring all web traffic is likely to reveal it. In an effort to combat malicious cryptojacking, Google has also banned and removed cryptocurrency mining extensions from its Chrome browser and the Chrome Web Store.
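One simple form of the network monitoring recommended above is checking DNS logs against a blocklist of known mining-pool domains, since cryptominers must phone home to a pool to submit work. The domain names below are made up for illustration; a real deployment would consume a maintained threat-intelligence feed rather than a hard-coded set:

```python
# Hypothetical blocklist of mining-pool domains (illustrative names only).
MINING_POOL_DOMAINS = {"pool.minexmr.example", "xmr.pool.example"}

def flag_cryptomining_dns(dns_log):
    """Return the hosts that resolved a domain on the mining-pool blocklist.

    `dns_log` is an iterable of (source_host, queried_domain) pairs.
    """
    flagged = set()
    for src_host, queried_domain in dns_log:
        # Normalize case and the trailing dot some resolvers log.
        if queried_domain.lower().rstrip(".") in MINING_POOL_DOMAINS:
            flagged.add(src_host)
    return flagged

log = [
    ("workstation-12", "www.example.com"),
    ("workstation-31", "pool.minexmr.example."),
    ("server-02", "updates.example.org"),
]
print(flag_cryptomining_dns(log))  # {'workstation-31'}
```

Flagged hosts can then be inspected for unexplained CPU load, the other telltale symptom of cryptojacking described earlier.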

As cryptojacking attacks continue to grow, individuals and organizations must be aware of how such attacks are distributed and performed in order to avoid falling victim to them. Researchers must also continue working to detect these attacks and to encourage the use of best practices against them.

SoS Musings #20 - Time to Get Rural America up to Cyber Speed

SoS Musings #20

Time to Get Rural America up to Cyber Speed

Cyberspace continues to expand through the constant development and advancement of artificial intelligence, the Internet of Things, and more. However, many communities and institutions in rural states lack the educational means to reap the benefits of this growing domain. This lack of access to the digital economy contributes to socio-economic and political insecurity, weaker national security, and growing wealth inequality. Mark Hagerott, chancellor of the North Dakota University System and former professor and deputy director in the Center for Cyber Security Studies at the U.S. Naval Academy, suggested establishing a Cyber Land-Grant University system to address these issues in an article entitled "Silicon Valley Must Help Rural America. Here's How." The initiative would follow in the footsteps of the Land Grant College Act of 1862, also called the Morrill Act, which provided land grants to states to support the development of universities dedicated to agriculture and the mechanical arts. Similarly, the Cyber Land-Grant University system would support the development of technical courses in computer science and cybersecurity, along with related courses in business, the humanities, and law, in rural states lagging behind in cyberspace.

An educational initiative such as the Cyber Land-Grant University system would address the challenges of the digital divide more effectively than current educational efforts, which remain considerably inadequate. Education Dive highlights the lack of resources in rural K-12 schools and the difficulty such schools face in attracting and retaining STEM teachers, which leads to less interest in cybersecurity and related topics. Universities in rural and post-industrial districts have fewer resources to keep research faculty on board and to contribute significantly to digital innovation, a result of lower federal funding and donations. Vocational teachers who would prepare people to work in the cyber field are also hard to find and retain in rural states like South Dakota because of lower salaries and less access to the highly skilled workers who could fill teaching positions. The MoneyTree report from PwC and CB Insights further highlights the concentration of venture capital funding in the U.S., revealing California, Massachusetts, New York, and Texas as the top recipients of such funding in the first quarter of 2018. The report also reveals that these four states receive the most venture capital deals for cybersecurity companies, giving them a head start in the digital economy and discouraging the establishment of cyber companies in rural states. The scarcity of cyber-related companies in rural states means little access to the potential faculty experts who could help those states succeed in the digital economy. The unavailability of cybersecurity education programs in rural schools is further indicated by the absence of National Security Agency (NSA) and Department of Homeland Security (DHS) National Centers of Academic Excellence in Cyber Defense (CAE-CD) designations in states such as Alaska, North Dakota, and Wyoming. Together, these shortcomings in funding, companies, and education programs concentrate expertise and wealth in areas whose businesses and universities are already flourishing in the cyberspace domain.

The Cyber Land-Grant University system suggested by Hagerott could help communities and institutions in even the most remote rural states catch up with the innovatively bustling Silicon Valley and succeed in the digital economy. The proposed system would support the hiring, retraining, and retention of scholars who would serve as faculty members tasked with developing technical courses aimed at fostering skilled, well-rounded experts and professionals in computer science and cybersecurity. Recognizing the importance of physical interaction and engagement with mentors and teachers, these courses would mostly be offered on campus, although some online instruction might be provided as well. The faculty experts would also conduct research on digital innovation. Hagerott suggests financing the new system through a cyber education tax on wealthy cybermedia giants such as Facebook and Google, given that such companies benefit greatly from cyberspace and the digital economy, racking up billions of dollars in revenue. Benefactors could also be motivated to support the system through tax incentives created by the U.S. government. Technology companies and top universities would likewise be offered incentives, such as faculty rank and joint appointments, to contribute their resources and expertise by collaborating with the faculty, developing programs, holding teaching positions, or participating in research at a cyber land-grant institution. Federal and state governments, colleges, and Silicon Valley should consider collaborating to establish the Cyber Land-Grant University system to ensure that rural America also flourishes in the digital economy.

While work proceeds toward establishing a cyber land-grant system, other initiatives to bolster cybersecurity education and practice in rural states must continue to be developed, as the nation continues to face the threat of high-profile cyberattacks that could impact its security and safety. The Center for Cybersecurity Research and Education (CCRE) at The University of Alabama in Huntsville, which conducts research through a range of projects aimed at strengthening the cybersecurity of critical infrastructure systems, automotive systems, and more, has designed the Expanding Cybersecurity Innovative Incubator to Extended Demographics (ExCIITED) program to get high school students in rural Alabama interested in working in the field of cybersecurity. The National Rural Electric Cooperative Association's (NRECA) Rural Cooperative Cybersecurity Capabilities (RC3) Program was designed to help small- and mid-sized cooperative organizations in rural locations improve their cybersecurity through tools, resources, and training that such organizations can apply. More efforts of this kind are needed to strengthen cyber education and practice in rural locations, and ultimately to close the cybersecurity skills gap and build defenses against cyberattacks on critical infrastructure.

The bolstering of cyber education for rural communities and institutions through these initiatives would not only decrease wealth inequality but would also boost national security.

SoS Musings #21 - VR and AR Adventures in Cybersecurity

SoS Musings #21

VR and AR Adventures in Cybersecurity

Virtual reality (VR) and augmented reality (AR) have the potential to improve cybersecurity operations and training. Virtual reality is a computer-generated three-dimensional environment that a person can explore and interact with. Augmented reality differs in that users interact with computer-generated content overlaid on the real-world environment. Virtual reality is typically delivered through headsets, while augmented reality can be delivered through mobile devices such as smartphones. Although these technologies are widely known for their use in gaming, they can also enhance education, training, and operations. With the VR and AR market expected to reach $108 billion by 2021, these technologies are likely to see increasing use beyond gaming. Cybersecurity is one field that could benefit greatly from VR and AR, both in security operations and in enticing the younger generation into the field.

VR and AR can be used to improve cybersecurity operations, particularly the performance of security operations centers (SOCs). SOCs are facilities in which security specialists monitor, detect, investigate, prevent, and respond to the cybersecurity problems faced by organizations. The challenges of the traditional SOC model stem from its requirement of a central geographic site. Because a traditional SOC is tied to a physical infrastructure and location, organizations make significant investments in the hardware, configuration, and maintenance of these centers. The essential components of a SOC are digital displays and advanced servers, which help security teams monitor and collect data by means of security information and event management (SIEM) software. In an article entitled "The Emergence of Virtual Reality and Augmented Reality in the Security Operations Center", Maria Hyland and Jason Flood, Security Program Director and CTO of Security Gamification and Modeling at IBM, highlight the potential benefits of employing VR and AR in a SOC. The benefits of VR in a SOC include, but are not limited to, mobility, scalability, reduced maintenance costs, increased awareness of an organization's security posture, and lower complexity, along with the ability to monitor and examine more endpoints and to visualize potential cyber threats and vulnerabilities instantaneously. Illusive Networks harnesses the capabilities of VR and AR to deceive, detect, and evict attackers through the creation of false versions of company networks. The cybersecurity team at IBM Ireland developed a prototype VR solution that integrates with the IBM QRadar SIEM product and immerses cybersecurity professionals in a virtual 3D galaxy of planets, stars, comets, and more, each representing a node of a network or service to be monitored. In this environment, visual cues such as solar flares and supernovas draw the operator's attention to potentially malicious cybersecurity activities. A Colorado-based security company, ProtectWise, has developed a product called Immersive Grid that lets cybersecurity professionals monitor and patrol computer networks for unusual activity and security threats in a VR environment. Because AR in a SOC can overlay digital context and views on top of security data within an operator's real-world vision, activities such as forecasting, decision-making, and investigating can be enhanced. The NSA has been working on an AR system to help security professionals perform their tasks more efficiently, using AR devices similar to Google Glass that can quickly present security information to them.

Organizations can also use VR and AR technologies to attract, educate, and recruit people into the cybersecurity workforce. The talent gap in the cybersecurity industry remains a major problem, with the workforce shortfall expected to reach 1.8 million by 2022. The results of a study conducted by ESG for ProtectWise, which surveyed 524 U.S. residents between the ages of 16 and 24, indicate that the use of VR and AR tools in cybersecurity operations would entice more people to pursue careers in the field. Survey participants said they had been deterred from cybersecurity careers by feelings of inadequate technical skill and by a lack of exposure to cybersecurity, owing to the unavailability of cybersecurity courses in their schools. According to the survey's findings, millennials and post-millennials would be more likely to consider cybersecurity careers if VR and AR tools were part of cybersecurity operations, as such tools are said to decrease complexity and increase efficiency and enjoyment. Most millennials and post-millennials view VR and AR positively because of their extensive exposure to these technologies through online and video games, which has also developed their spatial reasoning and teamwork skills alongside their familiarity with gaming principles.

Although VR and AR offer benefits for cybersecurity operations and recruitment, they carry risks that must be considered before they are implemented. As these technologies continue to advance, they are expected to introduce new privacy and security threats. VR and AR headsets can gather information about users' physical behavior, such as eye and head movements and reactions to presented visual content, in addition to other personal information, presenting new privacy risks. VR and AR also give attackers additional paths for manipulating users. Companies and malicious actors can use the data collected by VR and AR technologies to learn how users interact with content in order to enhance targeted advertising and ad engagement, for example by adjusting advertisements based on the colors and screen locations that draw the most attention. In the realm of cybersecurity, security teams must be aware that hackers could compromise VR and AR displays to alter what users see and present fake information, causing security operations to fail and preventing the detection and analysis of attacks. Hackers could also manipulate VR displays in ways that induce physical discomfort, such as dizziness or nausea. VR and AR technologies may also face ransomware-style extortion, with threats to publicly release recorded behavior and interactions unless ransoms are paid. The fast pace at which VR and AR environments are updated may also diminish the quality of security checks and testing, leaving vulnerabilities undiscovered. These risks must be considered in the development and implementation of such technologies.

VR and AR could significantly improve cybersecurity operations and help close the cybersecurity talent gap. However, research and advancement surrounding the security and privacy of these technologies must continue as they are increasingly adopted in domains other than gaming, especially cybersecurity.

SoS Musings #22 - Exploring the Art of Deception in Cybersecurity

SoS Musings #22

Exploring the Art of Deception in Cybersecurity

Deception has mainly been associated with warfare, politics, and commerce, but it is now considered one of the more promising strategies for improving cybersecurity. Using deceptive strategies and technologies in cyber defense operations could improve the prevention of malicious adversarial operations and reduce the exposure and theft of real technology assets. The main goals of deception in cyber defense are to detect, examine, trick, and lure attackers away from sensitive assets once they have infiltrated a targeted system or network. Deception is performed by setting traps and placing bait: simulated assets modeled after real technology assets within a real or virtual environment. According to a report shared by MarketWatch, the deception technology market is forecast to exceed $2.5 billion by 2022, indicating an expected rise in the development and application of deception technology.

By applying deception techniques and technologies, organizations can reduce the cyber risks they face and improve their security posture. It is important for organizations to detect and respond to cyberattacks quickly, because the longer hackers stay within an infiltrated network or system, the more damage they can inflict and the more entrenched they become. The potential damage includes the theft of sensitive data, deletion or alteration of files, planting of malware, and more. Because deception technology fools hackers into believing they have gained access to assets such as workstations, servers, and applications in a real environment, security teams can observe the hackers' operations, navigation, and tools without fearing damage to real assets. Deception technology also reduces false positives: any access to the deception layer can be considered malicious, so every alert it triggers corresponds to a genuine event. The information gathered, such as attacker behavior and methods, can be used to detect and respond to attacks quickly post-breach and to develop better defense strategies and technologies.
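The "any touch is malicious" property described above can be illustrated with a minimal decoy service. This is a toy sketch, not any vendor's implementation: the decoy offers no legitimate function, so any connection to it is logged as a high-confidence alert with no false positives:

```python
import socket
import threading

alerts = []  # in a real deployment, these would flow to the SIEM

def decoy_listener(ready, stop_after=1):
    """Listen on a decoy port; every connection is treated as malicious."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # ephemeral port for the demo
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()                 # tell the main thread we are up
    for _ in range(stop_after):
        conn, addr = srv.accept()
        alerts.append(f"decoy touched from {addr[0]}:{addr[1]}")
        conn.close()
    srv.close()

ready = {"event": threading.Event()}
t = threading.Thread(target=decoy_listener, args=(ready,))
t.start()
ready["event"].wait()

# Simulate an attacker probing the decoy service.
probe = socket.create_connection(("127.0.0.1", ready["port"]))
probe.close()
t.join()
print(alerts)  # exactly one alert, for the probe
```

Real deception platforms layer believable content, consistent responses, and forensics collection on top of this basic trigger, but the detection principle is the same.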

There have been advancements in the research and development of deception technology. Cyber researchers at Sandia National Laboratories apply deceptive techniques in a patented alternative reality called HADES (High-fidelity Adaptive Deception & Emulation System). Upon entering a network, hackers are lured into a simulated reality that includes replicated virtual hard drives, memory, and data sets, some of which have been discreetly modified. Within this environment, hackers expose their tools and techniques as they operate and try to discern real data from fake. The top vendors in deception technology include Illusive Networks, Attivo Networks, Smokescreen, TrapX, and Cymmetria. Illusive Networks has been recognized by Frost & Sullivan for offering the best cyber deception technology, which allows organizations to detect and trap attackers through a deceptive path and to generate detailed forensic data, such as the attackers' specific activities, tools, and files, along with the command-and-control center to which they are connected. Assistant Professor Guanhua Yan and PhD student Zhan Shu at Binghamton University are conducting research to further improve the effectiveness of existing cyber deception tools by making their deception more consistent. The computer scientists want to ensure that the deceptive environment remains consistent with what hackers have already observed as they navigate, so that the environment is not recognized as deceptive.

Although deception technology offers benefits such as early attack detection and reduced attacker dwell time in a network, it is not a silver-bullet solution for cyber defense. Deception technology should continue to improve so that the deceptive environments in which attackers are trapped remain consistent with what the attackers have already observed, preventing the deception from being detected. In addition, deception technology must evolve alongside the ever-changing cyber threat landscape by continuously incorporating threat intelligence to update its decoys.

SoS Musings #23 - Unveiling Steganographic Cyberattacks

SoS Musings #23

Unveiling Steganographic Cyberattacks

Steganography is a method hackers can use to deliver malware covertly. The concept behind steganography is to communicate data via a format that conceals the fact that the data is being sent at all. Steganography thus differs from cryptography in that the communication itself is concealed, not just the data. Hackers apply steganography by hiding malicious data or malware in, or delivering it by way of, image files, video clips, audio files, and other innocuous-seeming media. The method is attractive to hackers because most users would not suspect that such objects could launch attacks upon being opened. Because there are many ways to perform steganography, most modern anti-malware solutions still cannot fully protect against this type of attack, which calls for the development of other defensive strategies.
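A minimal example of the pixel-level hiding described here is least-significant-bit (LSB) embedding, one of the simplest steganographic techniques; real attack tools are far more sophisticated, but the principle is the same. The sketch below hides bytes in the lowest bit of 8-bit grayscale pixel values, changing each carrier pixel by at most 1, which is visually imperceptible:

```python
def embed(pixels, message: bytes):
    """Hide `message` in the least-significant bits of 8-bit pixel values."""
    # Expand the message into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite only the lowest bit
    return stego

def extract(pixels, length: int) -> bytes:
    """Recover `length` bytes from the pixels' least-significant bits."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        out.append(byte)
    return bytes(out)

cover = [120, 121, 119, 118] * 20          # stand-in for grayscale pixel data
stego = embed(cover, b"cmd")
print(extract(stego, 3))                   # b'cmd'
```

Because the carrier looks like an ordinary image and every pixel is still within one unit of its original value, signature-based scanners that inspect only file structure have nothing obvious to match against.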

There are a number of different ways hackers can perform steganography. According to a team of experts from the CUIng initiative, cybercriminals use steganography to search for, examine, and access targets as well as to circumvent detection and cover the tracks of their malicious activity. During the different stages of an attack, steganography can be applied through information-hiding techniques including anonymization, traffic-type obfuscation, code obfuscation, and more. The information that can be hidden using steganography includes the identities of cybercriminals, communication between attackers, content, and malicious code. Notable older examples of malware that used steganography include AdGholas, FAKEM, Vawtrack, Stegano, and RedBaldKnight. AdGholas was a 2015 malvertising campaign in which encrypted malicious JavaScript code was hidden in images displayed by rogue ads, infecting thousands of computers and making it difficult for security firms to identify the impacted sites and ad networks. FAKEM, a family of remote access Trojans (RATs), avoided detection by mimicking legitimate network traffic such as that of the now-discontinued Yahoo! Messenger. Vawtrack applied steganography by hiding its updated files in favicons, the small icons representing a website that appear in a web browser's address bar. In 2016, an exploit kit called Stegano hid malicious JavaScript code in the pixels of banner ads for products named "Browser Defence" and "Broxu". Such incidents highlight the versatility of steganography.

Attacks in which hackers weaponize steganography continue, as indicated by recent reports. VeryMal, a malvertising campaign targeting Apple users, was recently reported by Confiant and Malwarebytes to be distributing malicious JavaScript code via images embedded in online banner ads, allowing the code to bypass security filters. Once the VeryMal payload executes, victims are redirected to sites where they are tricked into downloading fake Adobe Flash updates containing a strain of Mac malware called Shlayer. Matthew Rowan, a researcher at Bromium, recently discovered a malware campaign targeting Italian users in which an image of Super Mario conceals malicious code that leads to the launch of the Ursnif banking Trojan. A new type of malware makes use of memes posted on Twitter to hide the communication between attackers and their malware. According to security researchers at Trend Micro, two memes posted on Twitter were found to be malicious, as they were embedded with commands instructing malware to perform activities such as capturing screenshots of a victim's infected computer, gathering system information, capturing clipboard content, and more. Steganographic attacks are expected to rise in frequency and sophistication.

Research and development in digital steganalysis must continue in order to combat steganographic threats and attacks. Steganalysis is the study or process of detecting information that has been concealed using steganography. Researchers at Ben-Gurion University of the Negev (BGU) have worked toward preventing the use of internet videos and images to execute cyberattacks, developing a series of algorithms that prevent the infiltration and extraction of information via videos and images and thus help combat the use of steganography by attackers. In addition, as a result of the growing use of steganography by hackers, the Criminal Use of Information Hiding (CUIng) Initiative has been established, bringing together experts and researchers from academia, industry, law enforcement agencies, and other institutions to address the malicious use of steganography by cybercriminals. It is important to continue raising awareness and increasing the sharing of intelligence about steganography in order to advance defense methods as more cybercriminals adopt this technique.
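Many steganalysis tools build on simple statistical tests. The sketch below illustrates the idea behind the classic chi-square attack: LSB embedding tends to equalize the counts of each pair of adjacent byte values (2k and 2k+1), so a statistic near zero hints that the low bits have been overwritten with message data. This is a simplified illustration of the principle, not a production detector:

```python
# Simplified chi-square steganalysis heuristic. In a clean carrier the
# counts of 2k and 2k+1 usually differ; LSB embedding randomizes the low
# bit and pushes the two counts toward equality, shrinking the statistic.
from collections import Counter

def lsb_chi_square(pixels: bytes) -> float:
    counts = Counter(pixels)
    stat = 0.0
    for k in range(128):
        a, b = counts[2 * k], counts[2 * k + 1]
        if a + b:
            expected = (a + b) / 2
            stat += (a - expected) ** 2 / expected + (b - expected) ** 2 / expected
    return stat  # values near zero suggest LSB embedding
```

A carrier whose even/odd value pairs are perfectly balanced yields a statistic of zero, while natural data with skewed pairs yields a much larger value.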

SoS Musings #24 - Credential Stuffing Attacks

SoS Musings #24

Credential Stuffing Attacks

Credential stuffing is a hacker technique in which usernames and passwords obtained from data breaches at one company are used to gain access to accounts on other sites. Credential stuffing relies on automated tools to reduce the time needed to enter large numbers of username and password combinations into the login pages of different online platforms. The technique remains popular among hackers because the majority of users continue to reuse passwords across multiple accounts on different services. According to Akamai's 2019 State of the Internet / Retail Attacks and API Traffic report, there was a significant increase in credential stuffing attacks in the second half of 2018, with an estimated 28 billion attempts. Retail and financial industries continue to be the main targets of credential stuffing attacks. Credential stuffing attacks can result in significant consequences such as the hijacking of personal and business banking accounts, application downtime, damage to the reputations of affected businesses, and more. Recent incidents in which consumers and businesses have fallen victim to credential stuffing attacks highlight the importance of strengthening security against them.

Incidents of credential stuffing attacks recently faced by individuals and organizations bring further attention to the increase in such attacks and the importance of following stronger security practices. Intuit, the financial software company and maker of the popular tax preparation software TurboTax, recently informed users of the software that they may have been affected by a credential stuffing attack, which allowed an unauthorized party to access data, including Social Security numbers, financial information, addresses, and more, from a previous year's tax return or a current tax return in progress. The enterprise technology provider Citrix Systems required its customers to change their passwords after discovering that a credential stuffing attack had been performed against its ShareFile content collaboration service, allowing an unauthorized party to access information stored in customers' ShareFile accounts. Dunkin' Donuts was hit by two credential stuffing attacks in a span of three months. These attacks took aim at DD Perks rewards accounts associated with the coffee shop chain's loyalty program in order to sell direct access to the accounts, gather private information, and more. Hackers were also able to speak to and watch users of Nest home security cameras by performing credential stuffing attacks on Nest user accounts. These are just a few in a series of credential stuffing attacks, as companies including Reddit, OkCupid, and Dailymotion have also recently faced them.

Cybercriminals perform credential stuffing by using breached user credentials and botnets to automatically inject those credentials into login pages. The availability of collections of data gathered from previous massive data breaches at companies such as LinkedIn, Dropbox, and Yahoo makes credential stuffing easier for hackers. Security researchers recently discovered the sharing of a collection of 2.2 billion unique usernames together with their associated passwords among hackers via forums and torrents. Credentials exposed by such mega-dumps can then be tried against multiple online platforms in an automated manner, using tools such as Sentry MBA, Vertex, and Apex. Security defense systems and practices must be developed and bolstered against the use of stolen credentials and automated tools to perform credential stuffing attacks.
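On the defensive side, one widely used signal is login-failure velocity: a single source trying many username/password pairs fails far more often, and far faster, than a legitimate user. A hypothetical sliding-window detector might look like the following sketch, where the class name and threshold values are illustrative:

```python
# Hypothetical sketch of a sliding-window failed-login detector. An IP
# that accumulates more than max_failures failed logins inside the
# window is flagged as likely automated credential stuffing.
import time
from collections import defaultdict, deque

class StuffingDetector:
    def __init__(self, max_failures=10, window_seconds=60.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # ip -> timestamps of failures

    def record_failure(self, ip, now=None):
        """Record a failed login; return True if the IP looks automated."""
        now = time.time() if now is None else now
        q = self.failures[ip]
        q.append(now)
        while q and now - q[0] > self.window:  # drop events outside window
            q.popleft()
        return len(q) > self.max_failures
```

Real deployments combine this kind of velocity check with signals such as user-agent anomalies and IP reputation, since attackers distribute attempts across botnets precisely to stay under per-IP thresholds.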

In addition to increasing research and development on security approaches to combat credential stuffing attacks, individuals and organizations must make an effort to follow and enforce proper security practices. Shape Security, a cybersecurity firm based in California, released an artificial intelligence (AI) system called Blackfish to help companies protect themselves against credential stuffing attacks. Blackfish can identify credentials stolen in data breaches that have not yet been discovered, disclosed, or distributed on the dark web. The system detects when stolen credentials are being used to log in to an end user's account and invalidates those credentials. The Digital Identity Guidelines released by the National Institute of Standards and Technology (NIST) recommend that organizations check prospective passwords against corpuses of breached passwords in order to prevent their users from choosing passwords that have been exposed in previous data breaches. Organizations are encouraged to implement two-factor authentication (2FA) to protect their customers from credential stuffing attacks. 2FA strengthens the security of online user accounts against credential stuffing because it requires another factor, in addition to a password, to verify a user's identity. Methods of 2FA include SMS 2FA, in which the user provides a phone number, and push-based 2FA, in which login attempts are validated by the user's acknowledgment of a prompt sent to their device, among others. Individuals should follow good password hygiene, and organizations should enforce stronger password policies in which passwords are changed after a period of time and are not reused. Organizations should also continue monitoring their network traffic and systems for slowdowns, major rises in network inquiries, and other anomalies that may indicate credential stuffing attacks.
As the performance and sophistication of credential stuffing attacks continue to grow, more advanced tools and approaches to fighting such attacks must be developed.
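The NIST recommendation above, screening candidate passwords against known breach corpuses, can be sketched in a few lines. The tiny in-memory corpus and function name here are hypothetical stand-ins for a real breached-password dataset:

```python
# Sketch of breached-password screening per NIST's Digital Identity
# Guidelines. The three-entry corpus below is a stand-in; real systems
# check against millions of leaked hashes, often via a k-anonymity
# range query so the plaintext never leaves the client.
import hashlib

BREACHED_SHA1 = {
    hashlib.sha1(pw.encode()).hexdigest().upper()
    for pw in ("password", "123456", "qwerty")  # illustrative corpus
}

def is_breached(candidate: str) -> bool:
    """Return True if the candidate password appears in the breach corpus."""
    digest = hashlib.sha1(candidate.encode()).hexdigest().upper()
    return digest in BREACHED_SHA1
```

Rejecting any candidate for which this check returns True directly removes the raw material credential stuffing depends on: passwords already circulating in breach dumps.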

SoS Musings #25 - Cloudy with a Chance of Data Hauls

SoS Musings #25
Cloudy with a Chance of Data Hauls

It is imperative that cloud security improve as businesses' adoption of the cloud continues to increase. Findings of LogicMonitor's "Cloud Vision 2020: The Future of the Cloud" survey, to which 300 influencers responded, including industry analysts, consultants, vendor strategists, and Amazon Web Services (AWS) re:Invent attendees, indicate that over 80 percent of enterprise workloads will be in the cloud by 2020. Digital transformation of enterprises and the goal of attaining IT agility are the major factors behind the growing adoption of cloud services. However, security remains an issue of great concern among IT professionals, with 66 percent of respondents citing security as the biggest challenge in adopting an enterprise cloud computing strategy. The importance of securing the cloud is also indicated by predicted global spending of $12.7 billion on cloud security by 2023. The security of cloud computing must be further explored in order to develop or bring more attention to cloud security methods.

The "cloud" is a combination of networks, servers, and applications that organizations can use for different purposes in a variety of ways. According to NIST SP 800-145, "The NIST Definition of Cloud Computing", the characteristics of a cloud computing model include on-demand self-service, broad network access, the pooling of computing resources, scalable provisioning of capabilities, and measured service. NIST SP 800-145 also highlights the three main types of cloud computing services: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). IaaS offers access to fundamental virtualized computing resources such as servers, storage hardware, and networking hardware via the cloud. PaaS is the cloud computing model in which a Cloud Solution Provider (CSP) provides a platform, consisting of hardware and software, that enterprises can use to design, build, and deploy their own applications. SaaS is a distribution model in which software is hosted by a third-party provider and accessed by the customer via the internet. In addition, there are four cloud deployment models that enterprises can choose from: public cloud, private cloud, hybrid cloud, and community cloud. The public cloud refers to the use of computing services provisioned by third-party providers over the public internet, while the private cloud refers to the use of proprietary resources dedicated to the needs of a single organization. A community cloud refers to the sharing of a cloud service environment by a community of consumers from different organizations with similar missions, policies, security requirements, and more. The hybrid cloud is defined by NIST as the combination of two or more distinct cloud infrastructures (private, community, or public) through the use of standardized or proprietary technology.
Security experts have long debated which cloud deployment model is the most secure. The private cloud is often said to be the most secure because of benefits such as higher levels of visibility, control, security, and privacy, along with closer access to data. However, data breaches are possible in any cloud deployment model if best security practices are not followed. Before an enterprise considers the use of cloud computing services, it should examine the security risks and challenges associated with each type of cloud computing service and each deployment model.

There are threats, risks, and vulnerabilities unique to the realm of cloud computing, which organizations should consider prior to adoption. As highlighted by Carnegie Mellon University, the top five cloud-unique threats and risks concern reduced consumer visibility and control, the ease of unauthorized use, the compromise of internet-accessible management APIs, the failure of separation among tenants, and the incomplete deletion of data. Organizations must remember that when they move their assets and operations to the cloud, they relinquish some control and visibility over those assets and operations, and shift responsibility for some policies and infrastructure to the cloud service provider (CSP). In conjunction with this loss of visibility and control comes a reduced ability to verify that data is being deleted securely. The cloud features that enable on-demand self-service can allow employees to provision services from their organization's CSP without the permission of the IT department, increasing the risk of unauthorized use of cloud services, which could lead to more incidents of malware infection, data exposure, and loss of control. The application programming interfaces (APIs) that organizations use to supply, manage, arrange, and monitor assets and users, as well as to interact with cloud services, are exposed by CSPs. CSP APIs are accessible over the internet, increasing the likelihood of their abuse by hackers. In addition, CSP APIs may contain software vulnerabilities that could allow malicious actors to launch attacks, resulting in the hijacking of an organization's cloud assets and possibly attacks on other CSP customers.
The exploitation of vulnerabilities in a CSP's infrastructure can also defeat the separation between tenants, allowing attackers to reach an organization's resources via another organization's assets or data in the cloud. Other threats and risks to consider in the adoption of cloud computing include the theft of cloud credentials, the complexity of transitioning to other CSPs on account of vendor lock-in, increased complexity for IT staff, compromise of the CSP supply chain, misuse of authorized access by insiders, and the loss of data stored in the cloud. The Cloud Security Alliance (CSA) has also highlighted the top threats facing cloud vendors, with data breaches, insecure application programming interfaces (APIs), system and application vulnerabilities, inadequate management of cloud identities, credentials, and access, and account hijacking topping the list as the most severe. The threats, risks, and vulnerabilities surrounding cloud computing call for the development, consideration, and implementation of cloud security solutions.

Methods for bolstering cloud security must continue to be developed, explored, and implemented. Although sensitive data stored in the cloud can be encrypted, the way users access that data can still expose it to hackers. Wensheng Zhang, an associate professor of computer science at Iowa State University, brought attention to the possibility of hackers observing cloud storage access patterns to infer the value of data and to determine which parts of a file to prioritize when attempting to crack it. Therefore, computer scientists are working on technology to disguise access patterns in order to secure sensitive data stored in the cloud. Scientists from the Laboratory of Problem-Oriented Cloud Computing at South Ural State University (SUSU) have also worked on improving cloud security, particularly the security of information stored in cloud systems. SUSU scientists developed an algorithm that involves the double coding of information in the cloud to reduce the risk of collusion between cloud service providers. The U.S. National Security Agency (NSA) also funded a cybersecurity lab project conducted by Dr. Mengjun Xie, an associate professor of computer science at the University of Arkansas, called Networking and Network Security in the Cloud (NetSiC), which is aimed at helping students develop their networking and cyber defense skills, as well as addressing problems associated with the security of cloud-based computing. Google's Data Loss Prevention (DLP) tool can scan large amounts of data in the cloud to identify and redact sensitive data through machine learning capabilities such as image recognition, machine vision, natural language processing, and context analysis. The tool is used in many Google products, but because it offers an application programming interface, it can also be used by administrators outside of Google's ecosystem.
Google recently upgraded its DLP tool so that those with no technical expertise can easily use it. In addition to the development and implementation of cloud security solutions, it is important that organizations follow best practices for cloud security in order to reduce the security risks associated with cloud computing. Best practices include understanding the shared responsibility model, which establishes the security obligations of CSPs and customers; asking CSPs about the security measures they have implemented to secure their clients' applications and stored data; establishing cloud security policies specifying which cloud services can be used and how employees may use them; encrypting data stored in the cloud as well as data in transit; and more. CSPs should also follow best practices to increase the security of their services, such as performing regular examinations of their systems for vulnerabilities, establishing data deletion policies, achieving compliance certifications to highlight their ability to maintain the highest level of data security, encrypting data in motion and at rest, and providing role-based access control (RBAC) to customers. Research, development, and implementation of cloud security technologies and strategies must continue.
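Among the CSP best practices listed above, role-based access control is easy to illustrate: permissions attach to roles, and users acquire permissions only through the roles granted to them. The role names, user names, and permission strings below are hypothetical, not taken from any specific cloud provider:

```python
# Minimal RBAC sketch of the kind CSPs expose to customers: users never
# hold permissions directly; access is decided by the roles they hold.
ROLE_PERMISSIONS = {
    "viewer": {"storage:read"},
    "editor": {"storage:read", "storage:write"},
    "admin": {"storage:read", "storage:write", "iam:manage"},
}

USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}  # hypothetical users

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if some role held by the user carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

The indirection through roles is the point: revoking a role or shrinking a role's permission set immediately updates access for every user holding it, rather than requiring per-user cleanup.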

Moving to the cloud still presents many security risks; however, the development and consideration of cloud security technologies and best practices can reduce these risks.

SoS Musings #26 - Social Engineering Attacks

SoS Musings #26
Social Engineering Attacks

Organizations often fall victim to cyberattacks in which their data and/or systems are compromised as a result of social engineering, further indicating that humans remain one of the weakest links in cybersecurity. Social engineering refers to methods that exploit human weaknesses in order to gain access to sensitive information and systems. Using social engineering tactics, attackers often deceive people into exposing sensitive information that can then be used to gain access to protected systems. Social engineering attacks continue to succeed because it is often easy to exploit psychological attributes such as trust and the desire to help others. According to Proofpoint's Quarterly Threat Report for Q2 2018, there was a 500% increase in social engineering attacks, with cybercriminals continuing to explore new ways to abuse human psychological weaknesses. Proofpoint's 2018 report, titled The Human Factor: People-Centered Threats Define the Landscape, also highlights attackers' increased use of social engineering over automated exploits. The report states that in 2018, 95% of observed web-based attacks were executed using social engineering tactics, 55% of social media attacks impersonating customer-support accounts targeted the customers of financial services companies, and 35% of social media scams used links and clickbait to trick users into visiting streaming and movie download websites. It is important that organizations continue to explore the ways in which social engineering attacks can be prevented and their impacts mitigated.

There are many ways in which social engineering attacks can be performed. Infosec has cited the most common social engineering attacks as of 2019, which include phishing attacks, watering hole attacks, whaling attacks, pretexting, baiting, and tailgating. Among these, phishing is the most common, as it allows attackers to trick unsuspecting users into divulging sensitive information via emails, social media, instant messaging, and SMS, or into clicking links to malicious websites containing malware that enables users' systems to be infiltrated. Attackers execute watering hole attacks by gathering information about which websites a targeted group frequently visits and then compromising those websites to install malware that infects the group's systems. A whaling attack is a specific type of phishing attack that targets high-profile individuals such as public spokespersons, CEOs, and CFOs in order to impersonate them and gain access to sensitive data or other assets. Pretexting refers to the practice of masquerading as another person in order to gain access to private information, usually through the careful creation of fake identities. Baiting occurs when attackers exploit human curiosity by promising victims something desirable. The placement of an infected USB drive or optical disc in a public area, in the hope that someone will pick it up and use it on a device, is an example of baiting. Another common social engineering attack is tailgating, also known as piggybacking, which refers to unauthorized entry into a facility by exploiting authorized individuals who are tricked into granting access.
An example of tailgating is when an unauthorized person claims to have forgotten their RFID badge and asks an authorized person to hold a door open for them, giving them unauthorized access to the facility or another protected area. Other social engineering attacks that end users should be aware of include phone-based phishing, known as vishing; low-tech ransomware; phishing via Dropbox, Box, or OneDrive; and more. The different ways social engineering can be carried out must continue to be explored and highlighted by security professionals and researchers.

Recent incidents in which attackers used social engineering tactics to gain access to systems and sensitive data bring further attention to the continued success of these attacks. Verizon's 2019 Data Breach Investigations Report (DBIR) reveals that C-level executives are increasingly being hit with social engineering attacks because they have access to sensitive information, posing a significant threat to supply chains. Hackers have gained the trust of C-level executives through fraudulent business emails, tricking them into clicking on malicious links and revealing passwords. A British teenager named Kane Gamble successfully infiltrated email accounts belonging to CIA and DNI chiefs and accessed sensitive databases through the use of social engineering. He managed to deceive call centers and help desks into helping him gain access to these email accounts and databases. Attackers have even exploited the distress and bewilderment caused by tragic events such as the Christchurch massacre in New Zealand to perform social engineering attacks. Following this tragedy, CERT NZ received reports of phishing emails asking for donations in support of relief efforts, which instead redirected users to malicious banking pages designed to look like legitimate donation pages. According to Barracuda's 2019 Spear Phishing Report, recent findings indicate that cybercriminals have been refining brand impersonation, a social engineering tactic used in spear phishing attacks. Brand impersonation is involved in 83% of all phishing attacks, with Office 365, financial institutions, and Apple among the impersonated brands. A study of hacker-for-hire services conducted by Google and academics at the University of California found that all attacks launched by these services involve social engineering, with hackers performing spear phishing against victims.
These incidents bring attention to the advancement and potential impact of social engineering attacks, which must be prevented and mitigated.

Because human behavior is a huge component of social engineering, preventing such attacks remains a significant challenge. Security professionals often overlook the psychological aspects of social engineering and instead focus on preventing these attacks through technological implementations. According to Dr. Jessica Barker, an independent consultant and sociologist whose research has delved into the psychology of why humans often fall victim to social engineering attacks, the human instincts of curiosity, naivety, narcissism, overconfidence, and the desire to reciprocate are the main reasons why social engineering attacks are so successful. It is important that security researchers also explore the underlying psychological elements of social engineering to help combat such attacks. When asked how to prevent social engineering attacks, security professionals consistently answer that security awareness education and training should be provided to end users, IT staff, managers, and others to bring further attention to social engineering attacks and strategies for avoiding them. Users often fall victim to social engineering attacks because they are unaware of the different ways in which these attacks are performed, calling for more education and training covering social engineering in addition to other attack techniques. During training, users should be made aware of the possible spoofing of trusted sources so that they avoid clicking on links or attachments sent from suspicious sources. As social engineers often exploit the impulsive behavior that leads users to click on emails without considering the source, users should be encouraged to slow down and carefully verify the identity of a suspicious sender.
Other current best practices for protection include deleting requests to reply with personal information, as any message that asks for such information is most likely a scam, and never giving out sensitive information or credentials. Antivirus software should be installed and kept up to date, and email spam filters should be configured to significantly decrease the amount of junk mail. With regard to physical social engineering attacks, individuals should always be asked to show appropriate credentials and proof of authorization before entering premises where systems and sensitive information are handled and stored. More technological efforts must be made to deal with social engineering attacks, such as that of the Defense Advanced Research Projects Agency (DARPA), which has established the Active Social Engineering Defense (ASED) program to develop technology that uses bots to detect, disrupt, and examine such attacks. Researchers from Northumbria University proposed the use of nudges, particularly social saliency nudges, to help users better evaluate emails and detect phishing, providing further protection from social engineering attacks. External devices that nudge users toward good cybersecurity behaviors could also help users avoid falling victim to these attacks. The Adafruit Circuit Playground, for example, is a circuit board that can detect a person's movement and trigger nudges via lights, sounds, and vibration, reminding users to lock their computer screens as they leave their desks. Security professionals and researchers are urged to continue exploring the different aspects of social engineering and other ways to prevent these attacks, and to bring further awareness to them.
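Some of the message-level red flags described above, such as requests for personal information, artificial urgency, and suspicious links, can be encoded as simple filter rules. The indicator patterns and weights in this sketch are illustrative, not the rule set of any real spam filter:

```python
# Illustrative phishing-indicator scorer: each matched pattern adds a
# weight, and a higher total score means the message looks more like
# phishing. Patterns and weights here are toy examples.
import re

INDICATORS = [
    (re.compile(r"verify your (account|password)", re.I), 2),
    (re.compile(r"urgent|immediately|suspended", re.I), 1),
    (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}", re.I), 3),  # raw-IP links
    (re.compile(r"social security number", re.I), 3),
]

def phishing_score(message: str) -> int:
    """Sum the weights of all phishing indicators present in the message."""
    return sum(weight for pattern, weight in INDICATORS if pattern.search(message))
```

Production filters layer far richer signals (sender reputation, DKIM/SPF results, URL rewriting history) on top of content heuristics like these, but the scoring structure is the same.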

SoS Musings #27 - DNS Attacks

SoS Musings #27
DNS Attacks

The Domain Name System (DNS) is a fundamental element of the Internet: it acts as a phone book, providing a distributed directory that maps easily remembered hostnames to their associated IP addresses. Domain names are translated into the numerical IP addresses that computers and network devices use to locate and communicate with each other, and DNS servers are responsible for matching domain names to their associated addresses. When a user types a domain name into a browser, the computer asks a DNS server which IP address matches the requested domain name; once the connection is made, the correct web page is retrieved. Requests are most likely sent directly to DNS servers provided by an Internet service provider (ISP). However, if a user is behind a router, the computer may use that router as its DNS server, which in turn forwards requests to the ISP's default DNS servers. DNS information mapping the domain name to its IP address is then stored in a local cache, improving connection speed because the DNS request phase can be skipped when that domain is requested again. Concerns arise because security was not considered in the design of DNS, allowing hackers to abuse its weaknesses and vulnerabilities through a variety of attacks. In 2018, the findings of a survey conducted by EfficientIP brought further attention to the growth of DNS attacks in both frequency and associated cost. According to the survey, to which 1,000 IT managers in North America, Asia, and Europe responded, the average cost of DNS attacks increased by 57%, from $456,000 in 2017 to $715,000 in 2018. In addition, organizations experienced an average of seven DNS attacks within this time frame. Proofpoint's Domain Fraud Threats Report and IDC's 2019 Global DNS Threat Report also reveal the increasing frequency and cost of DNS attacks.
There has been a 34% increase in DNS attacks experienced by organizations, as well as a 49% increase in the average cost of such attacks, since 2018. Security professionals must continue to develop and follow best practices for securing DNS against attacks.
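The resolution step described above can be observed directly from a program: the operating system's configured resolver is asked to map a hostname to its addresses. A minimal sketch using the Python standard library:

```python
# Ask the system's configured DNS resolver (or local hosts file) for the
# IPv4 addresses associated with a hostname, mirroring the lookup a
# browser performs before connecting.
import socket

def resolve(hostname: str) -> list:
    """Return the sorted, de-duplicated IPv4 addresses for a hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})
```

Note that this goes through whatever resolver the host is configured to use, which is exactly why hijacking that configuration (as in the router attacks described later) silently redirects every application on the machine.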

Security experts have cited a number of different DNS attacks that need to be further explored and prevented. Several types of DNS attacks are frequently executed by hackers attempting to infiltrate networks, perform phishing, disrupt responses to legitimate DNS requests, and more. These include DNS hijacking, DNS flood attacks, distributed reflection denial-of-service (DRDoS), cache poisoning, and DNS tunneling. DNS hijacking refers to attacks in which DNS requests are intercepted and redirected to rogue or compromised DNS servers or domains through the modification of DNS records or the exploitation of vulnerabilities in a domain name registrar's system. Hackers carry out DNS flood attacks, a type of distributed denial-of-service (DDoS) attack, to disrupt DNS resolution for a targeted domain by flooding that domain's DNS server with requests; the disruption leaves the server unable to respond to legitimate traffic. Another common DNS attack is DNS cache poisoning, also known as DNS spoofing, which reroutes traffic from real DNS servers to fake ones. Attackers perform DNS cache poisoning by sending forged DNS responses from a fraudulent DNS server, which legitimate DNS servers then cache, changing the servers' information about which IP address corresponds to a given domain name. DNS cache poisoning can be used to send unsuspecting users to malicious phishing websites that spread malware. If attackers want to use DNS as a covert communication protocol or as a way to exfiltrate data from a network, they can perform DNS tunneling by embedding data from other programs inside DNS queries and responses. Through DNS tunneling, attackers can bypass network security technology such as firewalls to evade detection.
Other attacks that have been highlighted by security experts include random subdomain attacks, phantom domain attacks, and NXDOMAIN attacks.
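DNS tunneling works because the labels of a query name can carry arbitrary attacker-chosen text. The sketch below shows one plausible encoding scheme; the domain tunnel.example and the helper names are hypothetical, for illustration only, not taken from any real tool.

```python
import base64

# Hypothetical attacker-controlled domain, used for illustration only.
TUNNEL_DOMAIN = "tunnel.example"

def encode_exfil_query(data: bytes) -> str:
    """Pack bytes into DNS subdomain labels.

    Base32 is used because DNS names are case-insensitive and limited
    to letters, digits, and hyphens; each label may hold at most 63
    characters.
    """
    payload = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [payload[i:i + 63] for i in range(0, len(payload), 63)]
    return ".".join(labels) + "." + TUNNEL_DOMAIN

def decode_exfil_query(qname: str) -> bytes:
    """Reverse the encoding on the attacker's authoritative server."""
    payload = qname[: -len("." + TUNNEL_DOMAIN)].replace(".", "").upper()
    padding = "=" * (-len(payload) % 8)  # restore base32 padding
    return base64.b32decode(payload + padding)

q = encode_exfil_query(b"secret-password")
assert decode_exfil_query(q) == b"secret-password"
```

Because each query looks like an ordinary lookup for a subdomain of a domain the attacker controls, the data reaches the attacker's authoritative name server even through a firewall that only permits DNS.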

Recent research and incidents have brought further attention to the rising frequency, complexity, and severity of DNS attacks. Sea Turtle is a hacker group that was discovered to be targeting government organizations primarily located in the Middle East or North Africa, including intelligence agencies, ministries of foreign affairs, and more, in an espionage campaign that used DNS hijacking to gain access to sensitive networks. The Sea Turtle campaign hijacked the domains of 40 different organizations in 13 countries. A team of researchers discovered a new DNS cache-poisoning attack that targets the client-side DNS cache. The attack can be launched against Android, Ubuntu Linux, MacOS, and Windows to poison the DNS cache of these operating systems with malicious DNS mappings, so that different users of a machine who visit the same domain are directed to an attacker-controlled web server. Gmail, Netflix, and PayPal users recently fell victim to DNS hijacking attacks. The users of these highly popular online services were redirected to fake websites designed to trick them into providing their credentials, as a result of the modification of DNS settings in compromised consumer routers. Fidelis Cybersecurity highlighted the use of the DNS protocol by malware authors as a covert communications channel for transferring data. According to Fidelis, traffic analyzers often overlook the use of the DNS protocol as a means of communication between a victim's machine and a bad actor's command-and-control (C&C) server, allowing that communication to go undetected. DNS can be used to covertly transfer data in a number of different ways, calling for traffic analyzers to examine DNS traffic for anomalies in order to detect such malicious operations. Such attacks are expected to grow more sophisticated.
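Traffic analyzers can surface tunneling-style abuse by flagging query names that are unusually long or random-looking. The following is a simplified sketch of such an anomaly check; the length and entropy thresholds are illustrative assumptions, not tuned values, and a real analyzer would baseline them against the network's own traffic.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunneling(qname: str,
                         max_len: int = 60,
                         entropy_cutoff: float = 3.8) -> bool:
    """Flag DNS query names that are unusually long or random-looking.

    Encoded exfiltration payloads tend to produce long, high-entropy
    subdomain labels, unlike human-chosen hostnames.
    """
    host = qname.rstrip(".")
    if len(host) > max_len:
        return True
    labels = host.split(".")
    subdomain = ".".join(labels[:-2])  # strip registered domain + TLD
    return len(subdomain) > 20 and shannon_entropy(subdomain) > entropy_cutoff

assert not looks_like_tunneling("www.example.com")
assert looks_like_tunneling("mzxw6ytboi5dgllcmvwxg2lumfsa43dpbq.badguy.example")
```

In practice such per-query checks would be combined with volume-based signals, since tunneling also produces an abnormally high rate of unique subdomains under a single registered domain.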

Organizations must continue their efforts to increase the level of security for DNS. Engineers in the Internet Engineering Task Force (IETF), an international standards organization, developed DNS Security Extensions (DNSSEC) to add a layer of security to the DNS protocol by cryptographically verifying the source of DNS response data and ensuring the integrity of that data. The Internet Corporation for Assigned Names and Numbers (ICANN) encourages the full deployment of DNSSEC across all domains to prevent DNS attacks such as DNS hijacking and DNS cache poisoning. Security experts have also highlighted additional best practices that should be used in conjunction with DNSSEC. To bolster its DNS security, an organization must ensure the privacy of its resolver, which is the DNS server responsible for receiving DNS queries and tracking the IP addresses for domain names, by restricting the resolver's use to users on the organization's network; this practice helps prevent cache poisoning by external users. Organizations should make use of DNS software capabilities that add variability to outgoing requests, such as randomizing query IDs and using a random source port, in order to make it harder for fake DNS responses to be accepted. DNS servers must also be kept up to date against known vulnerabilities through the installation of patches. There are many other steps organizations can take to prevent DNS attacks, including using isolated DNS servers, using DDoS mitigation providers, and implementing two-factor authentication. As DNS attacks grow more complex and frequent, security professionals must keep exploring new ways of strengthening DNS security and encouraging organizations to follow DNS security best practices.

SoS Musings #28 - The Dark Web

SoS Musings #28
The Dark Web

The threat landscape faced by organizations has been significantly expanded by an elusive part of the World Wide Web known as the dark web. The term "dark web" refers to the collection of websites and networks that cannot be accessed via regular search engines such as Google and Yahoo. Access to the dark web requires the use of special tools and software, including peer-to-peer (P2P) browsers or The Onion Router (Tor). The dark web is often used as the grounds for a marketplace of illicit services and tools, since this part of the internet provides anonymity through encryption. Some examples of crimes that can be committed via the dark web include extortion, sex trafficking, terrorism, selling illegal drugs, and hiring assassins. In the realm of cybercrime, the dark web allows cybercriminals to collaborate with each other, purchase or sell stolen credentials to online accounts, advertise hacking tools, and more. The dark web has made many headlines in recent years and raised concern about cybercriminal activity.

Researchers and law enforcement have made many discoveries surrounding the dark web. A report published by Deloitte, titled "Black Market Ecosystem: Estimating the Cost of 'Pwnership'", emphasizes that cybercriminals do not need a high level of technical expertise to carry out cybercriminal operations, as they can purchase tools and services on the dark web to conduct such operations for them, increasing the chances of cybercrime. A study conducted by researchers from Georgia State University and the University of Surrey revealed the availability of Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates on the dark web, which are packaged with crimeware to deliver machine identities to cybercriminals. These machine identities can then be used to spoof websites, intercept encrypted traffic, steal sensitive data, and perform other attacks. Security researchers have also discovered automated social engineering services available to cybercriminals on the dark web, including an automated phone-calling service offered for $250 per month that deceives victims into giving up their credit card PINs or other sensitive information. This service was expected to garner much attention from cybercriminals, as the stolen credit card and debit card numbers often exchanged on the dark web would be useless without victims' ATM PINs if the aim was simply to steal cash. A traffic distribution system (TDS) called BlackTDS was also discovered being offered on the dark web as a service that allows low-skilled cybercriminals to execute sophisticated malicious drive-by attacks. According to researchers, BlackTDS simplifies the launch of large-scale malware campaigns by performing social engineering, redirecting victims to exploit kits, and preventing the detection of such attacks by researchers and sandboxes.
Recent observations made by researchers at IBM X-Force have brought further attention to the increasing shift of the dark web marketplace towards cybercrime services such as malware-as-a-service (MaaS) and infrastructure-as-a-service (IaaS) in which prepackaged malware and access to compromised devices are sold to threat actors. The dark web must continue to be examined for changes in available products and services, as well as shifts in business approaches.

As the dark web provides a platform for cyberattack-as-a-service (CAaaS) marketplaces and forums at which hackers can buy services and tools that facilitate the development and launch of attacks, it is important for the cyber defense community to understand how the dark web ecosystem works in order to develop more effective defenses. MIT researchers conducted a study in which they analyzed services available on the dark web, examined literature about cyberattacks, and interviewed cybersecurity professionals to better understand how cybercriminals advance and operate on the dark web. The study revealed a CAaaS value chain of activities required to create and support cyberattacks, which include discovering vulnerabilities, selecting targets, recruiting new hackers, developing a marketplace for trading, and more. The researchers used the CAaaS value chain to identify 24 primary and supporting services being sold on the dark web, such as Exploit-as-a-Service, Payload-as-a-Service, Target-Selection-as-a-Service, and Hacker-Recruiting-as-a-Service, that hackers can combine in the development and escalation of attacks. By understanding the dark web's cybercrime ecosystem in which these services are available, organizations can improve their approaches to combating cyberattacks. Organizations are encouraged to employ dark web monitoring solutions and dedicate some of their threat intelligence processes to collecting data about the services provided in dark web marketplaces in order to gain insight into potential attacks, attack trends, attacker motivations, indicators of compromise, and cybercriminals' tactics, techniques, and procedures (TTPs). In addition, intelligence collected from the dark web can be used by organizations to develop advanced defense mechanisms.

SoS Musings #29 - Ransomware Nightmare 

SoS Musings #29
Ransomware Nightmare

Ransomware attacks remain a significant threat to government agencies, financial institutions, schools, businesses, and individuals, calling for continued research and advancements surrounding the prevention of such attacks. Ransomware is a type of malware that encrypts files and demands the payment of a ransom in order to decrypt them. Ransomware is often delivered through actions initiated by users, such as clicking on malicious email attachments and URLs, as well as through malvertising and drive-by downloads. The McAfee Labs Threats Report for August 2019 highlighted a 118% increase in ransomware attacks in the first quarter of 2019. In addition, security researchers have observed the use of more powerful malware and the adoption of new attack techniques by cybercriminals launching ransomware attacks. According to Malwarebytes' quarterly report, titled Cybercrime Tactics and Techniques: Ransomware Retrospective, there was a 365% increase from Q2 2018 to Q2 2019 in detections of ransomware targeting businesses, while ransomware attacks targeting individual consumers declined, suggesting that cybercriminals are seeking to gain more profit from higher-value targets. More than 50% of Malwarebytes' ransomware detections account for attacks against machines located in the U.S. Organizations and security professionals are encouraged to continue their efforts to fight ransomware attacks.

In developing techniques to prevent ransomware attacks, it is important for security professionals to examine past and current incidents. Six ransomware strains have made the biggest impact within the last five years: TeslaCrypt, SimpleLocker, WannaCry, NotPetya, SamSam, and Ryuk. From 2015 to 2016, TeslaCrypt ransomware largely targeted the gaming community, encrypting ancillary files such as saved games and user profiles associated with 40 popular video games, including Call of Duty and World of Warcraft, as well as PDF documents, photos, iTunes files, and Word documents. A $500 Bitcoin ransom payment was demanded of TeslaCrypt victims in order to decrypt these files, and if payment was delayed, the ransom increased to $1,000. In 2014, SimpleLocker emerged as the first Android-based ransomware, encrypting SD card files, including images, documents, and videos, and demanding a payment of 260 Ukrainian hryvnia, worth about $21, to decrypt them. WannaCry ransomware arrived in 2017, infecting thousands of computers in more than 100 countries at a rapid rate and impacting the operations of over 100,000 businesses. Following closely behind WannaCry was NotPetya, which was initially reported as a variant of Petya, a strain of ransomware that emerged in early 2016 demanding that victims pay to recover their files. NotPetya was discovered to be purely destructive in that it kept computers' master boot records and master file tables encrypted despite the payment of the demanded ransom. Multinational companies, including Danish business conglomerate Maersk, pharmaceutical company Merck, FedEx's European subsidiary TNT Express, and food producer Mondelez, were impacted by NotPetya.
Since 2016, SamSam ransomware and its variants have been targeting organizations with a significantly low tolerance for downtime, such as those within the public-facing civil sector or the healthcare sector. These organizations are attractive targets for the hackers behind SamSam because they rely on real-time data and networked systems; the longer it takes to pay the ransom for the decryption of such data and systems, the more damage can occur. Ryuk is another strain of ransomware that has been active since August 2018, impacting more than 100 U.S. businesses, most of which have been logistics companies, technology firms, and small municipalities. The FBI recently issued a flash alert stating that Ryuk is capable of deleting files related to its intrusion, stealing credentials, establishing persistence in the registry, and more. The newest Ryuk ransomware instructs victims to contact the attackers via one of several email addresses to find out how much the ransom is and which Bitcoin wallet must be used to pay it. The trends in ransomware strains and incidents must be further explored.

Recent incidents indicate a rise in ransomware attacks on municipalities, educational institutions, and healthcare organizations. A ransomware attack on Johannesburg's electric utility, City Power, left some of the city's residents without power and impacted residents' ability to purchase electricity, upload invoices, and access the electricity provider's website. Baltimore City suffered a ransomware attack that disrupted city government emails, the processing of calls at the city's 311 call center, 911 services, and more. Over 20 municipalities in Texas were recently hit with ransomware, affecting computer systems, city businesses, and financial operations. Other municipalities that have fallen victim to ransomware attacks include Key Biscayne, Lake City, and Riviera Beach. Louisiana Governor John Bel Edwards declared a state of emergency in response to ransomware attacks on three Louisiana public school districts - Sabine, Morehouse, and City of Monroe - which resulted in the loss of data stored on servers, the disabling of some technology systems, and the takedown of office phone systems. Grays Harbor Community Hospital in Aberdeen, Washington, recently faced a ransomware attack in which attackers encrypted the health data of more than 85,000 patients and demanded a ransom for its release. Although much of this data was recovered, parts of the electronic medical record remain encrypted and inaccessible to the hospital and Holston Medical Group. Such incidents call for the development of solutions.

As ransomware remains a major threat, there must be continued research, development, and exploration surrounding protection against this malicious software as well as the response to it. Andrea Continella and his team of researchers at NECSTLab developed a tool, called ShieldFS, that automatically detects ransomware and performs a system restore from backups before the targeted system can be locked down by hackers. ShieldFS detects new ransomware-like attacks in addition to known types of ransomware through the identification of cryptographic behaviors attributed to ransomware. Researchers from the Coordinated Science Laboratory at the University of Illinois describe a tool that can be used to prevent ransomware attacks in a paper titled Project Almanac: A Time-Traveling Solid State Drive. According to the researchers, the tool allows ransomware victims to recover their files without having to succumb to demands for ransom payments. It enables solid-state drives, which are used in most computers as a component of the storage system, to save old versions of files instead of discarding them when the files are modified. The Cybersecurity and Infrastructure Security Agency (CISA), Multi-State Information Sharing and Analysis Center (MS-ISAC), National Governors Association (NGA), and the National Association of State Chief Information Officers (NASCIO) encourage state and local government partners to regularly back up their systems, increase employee cybersecurity awareness and education to draw further attention to the importance of not clicking suspicious links that could lead to ransomware infection or other attacks, and develop or strengthen their cyber incident response plans. In addition, organizations are advised to apply security patches, verify email senders, maintain preventive software programs such as antivirus software, and use caution when opening emails and clicking links and attachments.
When hit with ransomware, the FBI recommends that victims not give in to demands for ransom payments, as paying would motivate hackers to execute more ransomware attacks. Solutions to ransomware must continue to be explored and developed by the Science of Security community.
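One low-level signal behind ShieldFS-style detection of "cryptographic behaviors" is that well-encrypted data is statistically close to random. The toy heuristic below flags file writes whose byte entropy approaches the 8-bits-per-byte maximum; the threshold and minimum buffer size are illustrative assumptions, not ShieldFS's actual parameters, and real detectors combine entropy with other per-process I/O features.

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte buffer, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def write_looks_encrypted(buffer: bytes, threshold: float = 7.5) -> bool:
    """Heuristic: encrypted output is statistically indistinguishable
    from random bytes, so its entropy approaches 8 bits/byte.

    Tiny buffers are skipped because their entropy estimate is noisy;
    legitimately compressed files (zip, jpeg, ...) also score high,
    which is why entropy alone is never a sufficient signal.
    """
    return len(buffer) >= 256 and byte_entropy(buffer) > threshold

assert write_looks_encrypted(os.urandom(4096))  # random ~ encrypted
assert not write_looks_encrypted(b"A" * 4096)   # plaintext-like
```

A monitor built on this idea would watch for a single process rapidly rewriting many files with high-entropy content, then suspend it and trigger a restore from copy-on-write backups.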

SoS Musings #30 - Improving Cybersecurity for Aviation

SoS Musings #30
Improving Cybersecurity for Aviation

It is only a matter of time before an aircraft is significantly impacted by a hacking incident, as indicated by recent discoveries made by cybersecurity researchers and the U.S. government. According to a report titled Aviation Cyber Security Market - Growth, Trends, and Forecast, the aviation cybersecurity market is expected to grow at a compound annual growth rate (CAGR) of 11% from 2019 to 2024. Although increasing connectivity and digitalization in the aviation sector have brought benefits in regard to customer service, operations, and passenger flight experience, these advancements have also increased the sector's vulnerability to cyberattacks. The aviation industry is expected to invest more in technological advancements aimed at detecting and preventing cyberattacks on the sector's IT infrastructure and networks, which are critical for ground and flight operations. One key market trend is that North America holds the largest share of the aviation cybersecurity market, with the U.S. investing mostly in the research and development of advanced cybersecurity systems. The 2018 Air Transport Cybersecurity Insights report highlights the current cybersecurity challenges faced by the aviation industry, based on the results of a survey of 59 senior decision makers at major airlines and airports, including CEOs, CISOs, VPs, and IT Directors. According to the report, there is a high level of awareness surrounding cybersecurity in the aviation industry. However, current challenges are hindering efforts toward greater aviation cybersecurity advancements. These challenges include growing cybersecurity costs, a lack of CISOs, and low empowerment of cybersecurity teams.
The aviation industry also faces challenges similar to those of other industries when it comes to cybersecurity, such as limited resources, inadequate staff training, poor network visibility, and a skills gap. As aviation technology continues to grow in Internet connectivity, posing a greater threat to safety, research efforts and developments aimed at improving the security of this technology must increase.

Researchers have conducted studies that highlight the importance of improving aviation cybersecurity. Robert Hickey, aviation manager within the Cyber Security Division of the DHS S&T Directorate, and his team of experts from government, academia, and industry demonstrated that it is possible to remotely hack a commercial aircraft. According to Hickey, he and his team succeeded in hacking a Boeing 757 by accessing its systems through radio frequency communications, further highlighting the possibility of compromising an airplane without physical access. IOActive industrial cybersecurity expert Ruben Santamarta brought attention to the vulnerability of the Boeing 787 to remote hacking when he discovered a Boeing Co. server exposed to the internet. The server contained firmware applications for the aviation manufacturer's 787 airplane networks, in which he discovered multiple security vulnerabilities, including buffer overflow, memory corruption, stack overflow, and denial-of-service flaws. These vulnerabilities could be exploited by attackers to gain remote access to the plane's sensitive avionics network, also known as the crew information systems network. Santamarta found these security vulnerabilities by reverse-engineering binary code and examining configuration files in the firmware applications for the Boeing 787 airplane network. He also discovered the exposure of proxy servers, used by airlines to communicate with their 787 planes, to the public internet, which is another way an attacker could compromise the plane's network. Santamarta was also behind the discovery of vulnerabilities in a commercial aircraft's satellite communications equipment that could allow hackers to remotely spy on hundreds of planes from the ground. Using these vulnerabilities, hackers could compromise onboard systems, snoop on in-flight Wi-Fi, and perform surveillance on all connected passenger devices.
According to presentations and risk assessments conducted by U.S. government researchers, tests performed on an aircraft have proven the vulnerability of planes to hacking incidents in which flight operations are impacted, and have shown that cybersecurity protections for airborne vehicles are lacking. One presentation by the Pacific Northwest National Laboratory (PNNL) described the lab's attempt to hack an aircraft through its Wi-Fi Internet and information distribution systems. Researchers from the Khoury College of Computer Sciences at Northeastern University in Boston demonstrated how aircraft instrument landing systems (ILS) can be attacked and misguided into landing incorrectly. Instrument landing systems are precision approach systems that give critical real-time guidance about the plane's alignment with a runway and its angle of descent. Pilots rely on this radio-based navigation system in situations when visibility is low, such as rain or fog. According to the researchers, most wireless systems used in aviation are vulnerable to cyber-physical attacks, as supported by the demonstrated spoofing of wireless signals to critical aircraft landing systems through the use of inexpensive software-defined radios (SDRs). The spoofing attacks demonstrated by the Northeastern University researchers involved commercially available SDRs worth between $400 and $600. These SDRs were used in two varieties of spoofing attack: one in which high-powered signals were broadcast to overshadow legitimate signals sent by the airport ILS transmitter, and another in which lower-powered signals were broadcast to merge with portions of legitimate signals to cause a pilot's course deviation indicator to give incorrect readings. The researchers also developed a real-time offset correction and signal generation algorithm to continuously adjust fake signals so that misalignments remain consistent as the plane lands.
If attackers are not sophisticated enough to perform seamless spoofing, they can still use malicious signals to execute denial-of-service attacks that prevent pilots from using instrument landing systems as they approach the runway. The U.S. Department of Homeland Security's Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) issued a security alert in July for small planes following the discovery of a vulnerability that impacts modern flight systems. The ICS-CERT alert brought attention to a possible attack on a small plane in which a small device is attached to an avionics Controller Area Network (CAN) bus, allowing an attacker to alter engine readings, compass data, altitude, and other critical readings. False instrument readings could cause a pilot to lose control of the aircraft, especially when the pilot depends on such readings. As such attacks pose a threat to the safety of an aircraft, efforts to reduce vulnerabilities in avionics systems must continue.

Aviation cybersecurity has become one of the top concerns for the nation. Raytheon, a U.S. defense contractor, is building new technology aimed at alerting pilots in the event that their planes are being hacked. The lack of security in the design of avionics systems, and the U.S. military's expectation that adversaries will hack planes as a major tactic in warfare, prompted Raytheon's development of the Cyber Anomaly Detection System. This system will provide details to the pilot about a hacking incident in real time, enabling them to quickly decide what needs to be done to address the incident. According to Fry, a cyber-resiliency product manager at Raytheon, the serial data bus to which important electronics and avionics systems are connected lacks security in many U.S. military planes. Fry also stated that adding more technology and commercial products to an aircraft increases the plane's attack surface. DEF CON 2019 featured an Aviation Village, at which security researchers and representatives from the U.S. Air Force and the U.S. Department of Defense Digital Service gathered to explore and discuss how on-board airplane electronic devices communicate and operate, the security vulnerabilities in such devices that could be exploited by malicious hackers, and efforts to discover these vulnerabilities. However, there was little involvement by airplane manufacturers and commercial airlines at this event, calling for increased participation and efforts from these entities to work with security researchers. Gerard Duerrmeyer, chief information security officer at Norwegian Air Shuttle and the only representative of a commercial airline to attend the Aviation Village, said that he is looking to the automotive industry for lessons on how to improve the security of avionics systems, as efforts to secure connected vehicles are improving.
The Department of Homeland Security (DHS) decided to revive its efforts toward bolstering aircraft cybersecurity through a new program. This decision followed a recent incident in which the European aerospace and defense giant Airbus experienced state-sponsored cyberattacks through its third-party suppliers' VPNs. The program will examine and test actual aircraft, with help from the Pentagon and the Transportation Department, to identify and mitigate cybersecurity risks facing the aviation industry and improve the cyber resiliency of critical public infrastructure. Security researchers will need to further explore the major security holes in the avionics CAN bus system in order to develop countermeasures against potential attacks on this standard. According to Chris King, a cybersecurity expert who has conducted vulnerability analyses of large-scale systems, the CAN bus was never designed with security in mind, in that there is no way of validating whether the source telling the system what to do is legitimate. The Cybersecurity and Infrastructure Security Agency (CISA) recommends that aircraft manufacturers review the implementation of CAN bus networks in avionics and evaluate safeguards such as filtering, whitelisting, and segregation. There must be an increase in collaborative efforts among experts in government, academia, and private industry to develop methods and technologies for improving aviation cybersecurity.
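The filtering and whitelisting safeguards CISA recommends can be illustrated with a toy gateway check on CAN frames. The arbitration IDs and payload lengths below are hypothetical, invented for illustration, not taken from any real avionics bus standard.

```python
# Hypothetical whitelist: arbitration ID -> maximum payload length.
ALLOWED = {
    0x100: 8,  # e.g. engine data
    0x200: 4,  # e.g. compass heading
    0x300: 2,  # e.g. altitude
}

def frame_permitted(arbitration_id: int, payload: bytes) -> bool:
    """Drop any frame whose ID is not whitelisted or whose payload
    exceeds the length registered for that ID.

    CAN itself has no sender authentication, so an attached rogue
    device can transmit any ID it likes; a gateway enforcing a
    whitelist between bus segments narrows what such a device can do.
    """
    max_len = ALLOWED.get(arbitration_id)
    return max_len is not None and len(payload) <= max_len

assert frame_permitted(0x100, b"\x01" * 8)      # known ID, valid length
assert not frame_permitted(0x7FF, b"\x00")      # unknown ID: dropped
assert not frame_permitted(0x300, b"\x00" * 8)  # oversized payload: dropped
```

Filtering of this kind limits injection but cannot tell a spoofed frame with a whitelisted ID from a genuine one, which is why CISA pairs it with physical segregation of bus segments.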

SoS Musings #31 - Kid Hackers

SoS Musings #31
Kid Hackers

Young people may deliberately or inadvertently be contributing to the continuously evolving cyber threat landscape and the rise in cybercrime, calling for increased awareness among members of this demographic about ethical hacking as well as improved intervention efforts centered on young cybercrime offenders. Officials from the FBI and the U.S. Department of Justice have spoken about a noticeable surge in teenage hackers, pointing out that the increased accessibility of inexpensive, easy-to-use hacking tools via marketplaces on the dark web has contributed to this spike. In 2016, a hacking group composed of teens from Scotland, England, and the U.S. was arrested for launching attacks against U.S. government agencies and high-level officials to expose sensitive information. Young hackers often start off as "script kiddies" - inexperienced hackers who use existing programs to launch attacks on computers and network systems. A hacking incident suspected to be caused by script kiddies resulted in a diplomatic crisis in Qatar, further emphasizing that such hackers are not to be dismissed, as they can still cause quite a lot of damage despite their lack of knowledge of programming. Studies have been conducted on factors such as social behavior, relationships, environment, and level of computer competency that can indicate whether a child is likely to engage in cybercrime. In addition to the individual characteristics of adolescents that indicate an increased susceptibility to becoming cyber juvenile delinquents, studies have delved into the common pathways to cybercrime, such as an aptitude for technology, willingness to perform low-level illegal activities on the internet, and a lack of self-esteem in the real world that increases the need to build a reputation online.
Other studies have examined the correlation between those with characteristics most associated with autism spectrum disorder and those who perform cyber-deviant acts such as unethical hacking, identity theft, and computer virus creation. The findings of these studies should be further examined and communicated to parents, organizations, and government entities in order to bolster efforts toward steering kids in the direction of white hat, or ethical, hacking, in which hacking skills are applied in a constructive rather than a destructive manner. It is also especially important to encourage kids to use their hacking skills for career paths in the field of cybersecurity in order to build a workforce of highly skilled cyber professionals, as the shortage of professionals in the field is expected to reach 1.8 million by 2022. Studies on the predictors of juvenile hacking and the pathways to cybercrime should be further expanded.

There have been studies on the different factors that could lead kids to cybercrime, which also highlight the different ways in which predictors can be determined. Researchers at Michigan State University (MSU) conducted a study that explored the characteristics and gender-specific behaviors with the potential to lead children to juvenile hacking. Research in the realm of cybersecurity has largely focused on the scope and threat posed by hacking rather than the factors that indicate when and how hacking behavior is initiated. Thomas Holt, lead author of the study and cybercrime expert in the School of Criminal Justice at Michigan State University, examined responses given by 50,000 teens from all over the world to identify the predictors of hacking. Findings of the study show that low self-control and negative peer associations, along with excessive video game playing and TV watching, are predictors of juvenile hacking. Furthermore, it was discovered that the weight of these predictors may differ by gender. According to the study, peer associations are more likely to influence girls to turn to malicious hacking, while TV and video games have a greater influence on boys. Gender roles are said to contribute to the differences in predictors in that boys are often encouraged to play video games, while girls are pushed toward other activities. When children have their own bedroom and computer, as well as a lack of parental supervision over what they do on the internet, they are more likely to enter cybercrime. Other contributing factors include the use of mobile phones from an early age and the performance of digital piracy activities such as pirating movies and music.
Another study, conducted by a team of researchers from the European Cybercrime Centre, UCD Geary Institute for Public Policy, and Middlesex University, explored youth pathways to cybercrime through the lenses of adolescent psychology, criminology, cyberpsychology, and neurobiology. Theories of criminology give insight into the reasons adolescents may choose to engage in hacking behavior, which include social deviance and antisocial behavior. Adolescent psychology delves into the elements associated with the maturation of teenagers that make them more susceptible to committing cybercrimes, such as mood disruptions, impulsivity, and peer pressure. Through the lens of neurobiology, researchers can examine how the brain and its components contribute to the susceptibility of youth to cybercrime, such as the release of dopamine when achievements are made online, the link between prefrontal cortex and frontal lobe functionality and poor decision-making, and other mechanisms in the brain that reinforce certain behaviors. Exploring youth hacking from a cyberpsychology point of view allows researchers to examine the elements of anonymity/invisibility and the online disinhibition effect that can make adolescents more prone to committing potentially damaging acts online. The researchers found that the overlap of these research areas would help law enforcement and industry identify, prevent, and intervene in malicious youth hacking. The intersection of these areas of research in the exploration of youth pathways to hacking led to the identification of individual characteristics and common pathway factors that influence young individuals to commit cybercrimes.
According to key findings from interviews, researchers found that adolescents were more vulnerable to committing cybercrime if they were extremely intelligent, highly computer literate, highly curious about technology, and socially isolated from those not similar to them. Additionally, adolescents who were withdrawn and in need of emotional support and validation online were more vulnerable to juvenile hacking. Young people are more likely to take the path of hacking if they have strong interest and skills in technology, are willing to engage in low-level illegal internet activities, are easily encouraged to perform illegal behavior, exhibit addictive behavior, lack self-esteem, desire to increase their online reputation, and more. Vince Warrington, director of Protective Intelligence and a cybersecurity leader who has helped private companies and the government improve the security of their data, compiled a list of signs that a child may be a hacker. The list grew out of his involvement in Hackers to Heroes, a program run by YouthFed aimed at encouraging kids to use their computer skills for careers in cybersecurity instead of cybercrime. According to Warrington, the signs include spending a large amount of time on the computer alone, the use of hacking terminology in conversation, the use of multiple accounts on one social media platform, claims of making money from online video games, the malfunction of monitoring tools installed on their computers, and other indications that a child may be a hacker. Parents should look out for these signs so they can provide guidance that steers kids away from committing cybercrime.

Children on the autism spectrum have also been found to possess traits that make them more vulnerable to being led into cybercrime. A study conducted by researchers at Purdue University suggests that there is a correlation between individuals with traits associated with Asperger syndrome, one of the autism spectrum disorders, and those who engage in hacking, identity theft, and the creation of computer viruses. Rebecca Ledingham, vice president at Mastercard and former cyber agent for INTERPOL's Global Complex for Innovation, stated "there's no other organic set of offenders that may be predisposed to cybercrime due to the nuances of their disorder". As kids on the autism spectrum possess curiosity, willingness to learn, and other exploratory skills, they could be more easily led down the path of cybercrime. Kids with autism are often found to be highly skilled in subjects such as math and science. Hyperlexia, a condition often associated with autism, refers to an intense interest in letters and numbers as well as an advanced ability to read, which could ease switching between English and code. Those with autism are often pattern thinkers, which could help them avoid logical or syntax errors when writing code because they can easily detect missing semicolons and other instances in which a pattern is abnormal. Photographic memory is another common trait in kids with autism that could allow them to easily visualize a network setup and the potential security vulnerabilities it contains. As kids with autism often face bullying from their peers, they frequently turn to online communities, commonly associated with gaming, for solace. However, Ledingham has pointed out that gaming has been found to be one of the common paths to cybercrime.
Given the proper guidance, encouragement, and opportunities, highly skilled autistic kids could use their rare skills to improve cybersecurity and be valuable assets to any organization in the future.

It is important that parents, law enforcement, and industry increase efforts to raise awareness among young people about proper behavior in cyberspace and the consequences of cyber criminality. Kids' development of sophisticated technological skills should not be hindered, but there should be guidance as to how they can use their skills in a positive way. As human factors play a significant role in kids considering a life of cybercrime, it is important for parents to be more aware of what their kids do online and the risks posed by their activities in cyberspace. Education and knowledge are essential for parents to provide guidance as to what constitutes a cybercrime as well as the legal implications and consequences of such crimes. In addition to providing key study findings on the characteristics and common pathways to cybercrime, researchers from the European Cybercrime Centre, UCD Geary Institute for Public Policy, and Middlesex University also gave recommendations to industry on how to prevent youth from engaging in cybercrime. There should be an increase in collaboration among law enforcement entities, industry, and policy makers to foster an online environment in which intervention mechanisms are implemented to deter young people from performing illegal activities. A new study conducted by researchers from the University of Cambridge and the University of Strathclyde explored the different ways in which law enforcement attempts to prevent young people from engaging in cybercrime and how effective these methods are. According to the findings of this study, the removal of infrastructure and the launch of highly targeted messaging campaigns by law enforcement are more effective at reducing cyberattacks over the long term than high-profile arrests and convictions of cybercriminals.
There must also be an increase in support for hackathons that encourage and recognize young people for demonstrating ethical hacking skills. For example, the U.S. Army partnered with the security firm Synack to teach kids white hat, or ethical, hacking skills at DEF CON, where they were given the opportunity to learn how to hack a variety of technologies associated with door locks, computer games, hardware, and more, using open source and basic command line tools. Young first-time cybercrime offenders should also be given a second chance at using their skills ethically and legally. Police in the U.K. and the Netherlands established a legal intervention campaign, called Hack_Right, which gives individuals between the ages of 12 and 23 who have been suspected of committing cybercrimes the opportunity to do community service instead of facing legal consequences. The community service consists of 10 to 20 hours of ethical computer training. Following the completion of community service, the young hackers are put in contact with professionals who can introduce them to potential career paths and educational courses in support of their interests and skills. The U.K.'s first cybercrime intervention workshop was another initiative aimed at rehabilitating young hackers who have committed low-level cybercrimes and received low-level interventions such as warnings or cease and desist orders, in order to prevent them from going further down the path to high-level, damaging cybercrime. The workshop was also designed to encourage young offenders to consider using their skills for ethical practices and legal jobs in cybersecurity, as well as to increase their understanding of the consequences they could face if they commit serious cybercrime. This initiative was supported by PGI, BT, IRM, Grillatech, Ferox Security, and the Whitehatters Academy, further highlighting the importance of collaboration in providing intervention for young cyber criminals.
Educational courses aimed at teaching kids about ethical hacking, such as the cybersecurity course offered by CodeHS for high schoolers, should also be integrated into school curricula. CodeHS' year-long cybersecurity course was designed for schools that lack advanced computer science departments and faculty with expertise in cybersecurity, as it provides training to teachers and students using a comprehensive curriculum. Students learn about programming, cyber hygiene, and the ethical implications of hacking. In addition, students are given a chance to explore career opportunities that would be open to them if they decide to pursue such subjects after they graduate from high school.

If given proper guidance on ethical hacking and online use as well as the opportunities to learn and practice skills in white hat hacking, kids can be encouraged to use their skills to improve cybersecurity rather than engage in cybercrime. In turn, these kids could one day be the next generation of cybersecurity experts to fill the cybersecurity workforce gap, thus increasing the safety of the nation.

SoS Musings #32 - Neurodiversity in Cybersecurity


According to recent studies, embracing neurodiversity could serve as an advantage to the cybersecurity field and help fill the cybersecurity workforce gap. Neurodiversity is the concept that differences in neurological functioning are natural variations in the human genome that should be just as respected as any other type of human difference, such as race, age, gender, or religious belief. The term "neurodiversity" covers conditions including autism spectrum disorders, ADHD, dyslexia, OCD, Tourette's syndrome, and other conditions within the neurodiverse spectra. While education systems have increased their efforts to support neurodiversity, most organizations still do not seek such diversity because of the perception that neurodiverse candidates have very limited skills. Studies have shown that people with conditions such as autism possess skills and abilities that could significantly benefit companies, especially those within the cybersecurity field. The Centers for Disease Control and Prevention (CDC) estimates that 1 in 59 children in the US have autism, with an incidence of 1 in 42 among boys and 1 in 189 among girls. More than 70 million people worldwide are living with autism, with over 3.5 million Americans on the autism spectrum. One estimate suggests that 80% of adults with autism worldwide are underemployed or unemployed. A survey conducted by the Center for Strategic and International Studies (CSIS) revealed that more than 80% of employers are facing a shortage of skilled cybersecurity professionals. The Center for Cyber Safety and Education also reported an expected 20% increase in unfilled cybersecurity jobs, from 1.5 million in 2015 to 1.8 million by 2022. Given that security professionals are well aware of the shortage of cybersecurity skills and the impact that this shortage can have on their organizations, it could be beneficial to recognize the talent possessed by this group of individuals.
Cybersecurity is a discipline that requires focus, logic, problem-solving, the will to learn, and pattern detection, making many people with autism and other conditions on the neurodiversity spectra suitable for positions in this field. Companies might explore how they could tailor their approaches to recruiting, selecting, and retaining these types of candidates.

At an event organized to discuss neurodiversity and the filling of cybersecurity jobs by neurodiverse people, a neurodiversity consultant spoke about the many benefits of considering an autistic candidate for a role in cybersecurity. She highlighted the traits they possess, such as their investigative nature, inquisitiveness, dedication, logical ways of thinking, systematic approaches to operations, and intense interest in their role and the subjects associated with it. She also listed potential roles that are well suited to autistic candidates, including penetration tester and SOC (Security Operations Center) analyst. Other traits that make those on the autism spectrum suitable for jobs in cybersecurity include high levels of curiosity and a willingness to solve problems. Autistic people are often found to have hyperlexia, a deep interest in letters and numbers coupled with extraordinary reading comprehension, which could facilitate shifts between English and programming languages. The ability to think in patterns allows one to easily detect syntax errors in source code, such as a missing semicolon or an extra bracket, increasing the effectiveness, security, and safety of programs used in cybersecurity operations. Another trait commonly associated with autism, photographic memory, eases one's visualization of network architecture and the security flaws that could be present in it. Rhett Greenhagen, Casey Hurt, and Dr. Stacy Thayer gave a presentation at Black Hat USA 2018 in which they discussed how people with autism could enhance the cybersecurity workforce, presenting the results of a survey of 290 computer security professionals diagnosed with autism. The survey highlighted the ability of those on the autism spectrum to quickly filter out the noise that masks attacks and to detect concealed signals and indicators of attacks.
Organizations seeking to hire more cybersecurity professionals should not overlook the unique talents of those with autism but instead increase their efforts to utilize these talents and alter their workplace culture and recruitment for those on the spectrum.

In order for companies to tap into neurodiverse talent, HR processes should be adapted to consider behaviors and abilities that may not fit the standard neurotypical profile, as the criteria commonly used in recruitment often rule out neurodiverse people. Traditional practices in recruiting, hiring, and development need to be altered accordingly, as they often favor candidates who are proficient at social interaction, reading body language, and picking up social cues. An article published by the Harvard Business Review pointed out two factors that often cause organizations to miss out on neurodiverse talent: interviewing and strict conformity to standardized methods. Interviews present a major obstacle for autistic people in that they often lack good eye contact, are more susceptible to going off on a conversational tangent, and can be overly candid about their weaknesses due to confidence problems stemming from past interviews, causing such individuals to score lower in interviews than less-talented neurotypical prospects. Therefore, companies should consider alternative methods of preparing autism spectrum candidates, such as implementing month-long workshops and mentorships. Microsoft's Autism Hiring Program was established in 2015 with the goal of hiring autistic people for full-time jobs. The program concentrates on job assistance and training for people on the spectrum. The interview process implemented by the program is also unique in that it is more of a workshop in which potential hires can demonstrate their skills instead of just talking about them. Although the program emphasizes the demonstration of skills, candidates are still given the opportunity to practice giving presentations and talking one-on-one. Another problem typically experienced in large companies is a reluctance to deviate from standardized approaches.
Companies are encouraged to change managers' focus from enforcing compliance through the use of established practices to adjusting work environments based on individuals' needs. Although these accommodations do not require much expense, they do require managers to alter work settings to fit individuals. Companies must look into programs and other initiatives geared explicitly toward helping companies implement changes to support neurodiverse candidates and employees.

There needs to be an increase in collaborative and exploratory efforts in support of neurodiversity in the cybersecurity field. A pilot program, called Neurodiversity in Cybersecurity, was one of three grand prize winners of the Government Effectiveness Advanced Research (GEAR) Center challenge, supporting the recruitment of neurodiverse adults for cybersecurity jobs in the federal government. The program, created through the partnership of George Mason University, Mercyhurst University, Rochester Institute of Technology, Drexel University, SAP, Specialisterne, the DXC Dandelion Program, and the MITRE Corporation, supports management and co-worker training in addition to career and social development programs for neurodiverse candidates. Those involved in this effort emphasized the importance of embracing this specific part of the population when seeking to fill positions in cybersecurity, pointing out that talent attraction and retention remains a significant challenge for the US government, states, and organizations within the private sector. The Neurodiversity in Cybersecurity project aims to tap into the talent pool of neurodiverse individuals, using an approach that involves key practices and tools adopted by the private sector and non-governmental organizations. The Frist Center for Autism and Innovation at the Vanderbilt University School of Engineering gathers experts in neuroscience and education, in addition to engineers, business scholars, and disability researchers, to improve and increase the recruitment of neurodiverse talent. The Center emphasizes the exploration of autism and neurodiversity to develop and commercialize new technologies inspired by neurodiverse abilities, which in turn supports neurodiverse people in the pursuit and fulfillment of careers, including those in the cybersecurity field.
Additionally, the Center focuses on the development of tools and training programs and the establishment of policies and workplace practices that support neurodiverse people in the workforce. Other efforts to increase the hiring of neurodiverse talent include those of Specialisterne, a Danish consulting organization with locations in the USA, Canada, Australia, Spain, Singapore, and elsewhere, with a specific focus on filling technology roles with autistic people and other neurodiverse individuals. Specialisterne examines recruitment, training, and retention processes and cultures in corporations, universities, high schools, and community agencies to help them create environments in which neurodiverse people can thrive. The organization also uses its resources to help neurodiverse candidates prepare to take on roles in cybersecurity and other technology fields. Organizations should find inspiration in companies such as Microsoft, Hewlett Packard Enterprise (HPE), Ford, SAP, and Willis Towers Watson, which have tailored their human resource (HR) processes for neurodiverse people. Efforts to increase neurodiversity in the cybersecurity field should continue to flourish.

SoS Musings #33 - Put the Brakes on Deepfakes


Deepfakes--fake but realistic-looking images, text, and video generated using a Machine Learning (ML) model called a Generative Adversarial Network (GAN)--are one of the top cybersecurity threats to look out for in 2020. Security experts expect to see a rise in deepfakes in 2020 as a result of the increased implementation of biometrics in technologies used to identify and authenticate individuals, such as smartphones and airport boarding gates. Advancements in deepfakes will pose new security challenges, as cybercriminals will use such forms of fake media to masquerade as legitimate persons to steal money or other critical assets. Deepfakes can also be used to spread disinformation across social media platforms, undermine political candidates, and commit other acts of fraud. Deepfake technology will strengthen social engineering attacks, since cybercriminals will not need special hacking skills to execute attacks; they can use deepfakes to impersonate high-level users and trick others into revealing sensitive information that could be used to gain access to protected systems. According to researchers at McAfee, accurate facial recognition will be more challenging to achieve because of deepfakes, adding to the growing list of problems faced by this type of biometric system. A report released by Forrester Research, "Predictions 2020: Cybersecurity," estimates that the costs associated with deepfake attacks will exceed $250 million in 2020. Studies on the creation and malicious application of deepfakes will help push the development of techniques and tools to combat deepfake attacks in the future.

Recent incidents and studies have shown what threat actors can do through the use of AI-generated deepfakes and the manipulation of images. Engineers at Facebook demonstrated that it is possible to clone an individual's voice using their ML system, named MelNet. With the MelNet system, the engineers generated audio clips of what sounds like Microsoft founder Bill Gates saying a series of harmless phrases. According to the researchers, MelNet was trained on a 452-hour dataset consisting of TED talks and audiobooks. Deepfake voice attacks are already a significant threat to the business realm, as indicated by recent incidents in which threat actors used AI-generated audio to impersonate CEOs and steal millions of dollars. According to an article posted by Axios, Symantec observed three successful deepfake audio attacks against private companies, each of which impersonated a CEO to request money transfers. According to Symantec, in all three attacks, scammers used an AI program to mimic the voices of the targeted CEOs. The program, similar to MelNet, was trained using speech from phone calls, YouTube videos such as TED talks, and other media containing audio of the CEOs' voices. In the case of AI-generated images, Zao is a deepfake face-swapping app that quickly gained considerable popularity, as it allows users to replace the faces of their favorite characters in TV shows or movies with their own by uploading a single photograph. One user shared an example of how advanced the app is, showing a video of their face superimposed onto Leonardo DiCaprio in Titanic, generated in under 8 seconds from a picture as small as a thumbnail. Another indication of the progression of deepfakes is a site, called "This Person Does Not Exist," that continuously generates images of realistic-looking human faces using Nvidia's GAN, named StyleGAN.
Using such techniques, one can masquerade as journalists on social media with AI-generated profile pictures to press for personal information from users, such as in the case of "Maisy Kinsley," a supposed "senior journalist at Bloomberg." These studies, incidents, and technologies, which highlight deepfake capabilities, bring further attention to the increased risk posed by uploading photos and videos of one's likeness online where anyone can access and use them for malicious purposes.

Security researchers, as well as social media platforms, are being encouraged to continue their efforts to fight deepfakes in all formats. A team of researchers at the University of Oregon is studying the complex sound processing of mice to gain insight into the mechanisms the mammalian auditory system uses to detect fake audio, with the goal of developing new deepfake audio-detection algorithms based on this analysis. A Canadian AI company, Dessa, built a system, called RealTalk, aimed at discerning between real and fake audio. The system is capable of differentiating between real and fake audio samples by using spectrograms, which are visual representations of audio clips. According to engineers at Dessa, the deepfake detector model can use these visual representations to identify the fake audio clips it is fed with an accuracy of 90%. A study conducted by researchers at New York University's Tandon School of Engineering aims to combat deepfakes by making it easier to determine whether a photo has been altered. A significant challenge in detecting manipulated photos is that digital photo files are not coded to show evidence of tampering. Therefore, the NYU team proposed implementing ML in a camera's image signal processing to add watermarks to each photo's code, which would act as a tamper-resistant seal. Siwei Lyu, a professor at the University at Albany, and his team examined the steps taken by one particular deepfake video creation program, called DeepFake. The examination of this software found that the failure of such programs to pick up on physiological signals inherent to human beings, such as blinking, could be used to reveal most deepfake videos of individuals. In addition, Facebook recently announced plans to ban deepfake videos from its platform.
Monika Bickert, Facebook's vice president for global policy management, further stressed the potential deepfake videos have to significantly impact the social media industry and society in general. Therefore, any video posted on Facebook that has been generated using AI or machine learning to manipulate footage so that it appears authentic will be removed. However, this change in Facebook's policy does not apply to satirical content or videos edited for the purpose of removing or altering the order of words. Researchers and social media organizations must continue to explore and develop new tools and policies to accurately differentiate between real and fake media as well as reduce the generation of deepfakes. These efforts are essential, as deepfakes will be weaponized to spread disinformation and increase the success of social engineering attacks.
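Dessa has not published RealTalk's internals, but the spectrogram representation such detectors rely on is easy to illustrate. The sketch below is a hypothetical, numpy-only example (a synthetic tone and arbitrary frame/hop sizes, not Dessa's actual pipeline) showing how an audio clip becomes the two-dimensional time-frequency image a classifier would be trained on:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: slice the signal into overlapping windowed
    frames and take the FFT of each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Rows are frequency bins, columns are time frames.
    return np.abs(np.fft.rfft(frames, axis=1)).T

# One second of a 440 Hz tone sampled at 8 kHz stands in for an audio clip.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))

# The tone appears as a bright horizontal band near 440 Hz; a detector
# model would look for statistical artifacts in images like this one.
peak_hz = spec.mean(axis=1).argmax() * sr / 256
```

A real detector would compute spectrograms of many labeled real and synthetic clips and train an image classifier on them; the point here is only that audio becomes a picture a model can inspect.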

SoS Musings #34 - Mind the Air Gap


Is air-gapping an effective method of securing highly sensitive computer networks and systems? The idea behind air-gapping a computer is to ensure that it is not connected to the internet or any other internet-connected systems to protect it from unsecured networks. A computer is truly air-gapped when it can only accept data from a USB flash drive or other removable media. Air-gapped machines are often found in high-security environments such as those in the realms of military, government, financial services, and industrial control systems. Life-critical systems, including aviation computers, FADECs (Full Authority Digital Engine Control), and Avionics, as well as those used in nuclear power plants and medical facilities, are also often air-gapped. However, air-gapping gives organizations a false sense of security, as there have been studies that demonstrate the possibility of defeating this security strategy. Researchers have shown that even truly air-gapped networks can be compromised by determined adversaries.

Mordechai Guri, director of the Cybersecurity Research Center at Ben Gurion University (BGU), and his team of researchers have conducted many studies on how communication with air-gapped computers can occur through the development of covert channels. They created malware, called Fansmitter, that alters the speed at which a computer's internal fan rotates so that the sound it produces can be controlled, encoded, and picked up by any listening device such as a smartphone. Another proof-of-concept attack devised by BGU, called BitWhisper, uses heat emissions and an air-gapped computer's built-in thermal sensors to send commands to the computer or steal data from it. In a video demonstration of the BitWhisper attack, researchers showed one computer emitting heat and sending a command to an adjacent air-gapped computer to change the position of a missile-launch toy connected to the air-gapped system. BGU researchers also demonstrated the use of the Caps Lock, Num Lock, and Scroll Lock LEDs on a keyboard to exfiltrate data from a secure air-gapped system. The method, which they named CTRL-ALT-LED, was tested on a variety of optical capturing devices, including smartphone cameras, security cameras, and high-grade optical/light sensors. It involves the use of malware to make the LEDs of a USB-connected keyboard blink rapidly in a certain pattern, which hackers could then record and decode to extract information from the air-gapped system. In another attack demonstrated by BGU researchers, a drone was used to capture data from an air-gapped computer's blinking Hard-Disk Drive (HDD) LED and decode the information from its blinking light. Another method, called MOSQUITO, uses speakers to secretly transmit data via inaudible ultrasonic sound waves between air-gapped computers up to nine meters apart.
An attack dubbed aIR-Jumper could be executed to jump an isolated network's air gap, exfiltrating data and sending commands by controlling the infrared (IR) LEDs inside surveillance cameras. The BGU researchers also developed malware, called AirHopper, which is capable of decoding radio frequencies emitted from an isolated computer's monitor, video card, or cable to steal data. AirHopper picks up data from an air-gapped machine using the FM radio receivers contained in many mobile devices. Of all the covert channels developed by Guri and his team of researchers, MAGNETO is considered the most dangerous in that it can allow attackers to steal data from air-gapped computers inside Faraday cages, which are metallic enclosures designed to block all radio signals. MAGNETO works by installing malware on an air-gapped computer to coordinate the operations of the computer's CPU cores. These operations generate magnetic fields that can then be captured by a phone's magnetometer via an app developed by BGU researchers, called ODINI. Security researchers' studies on defeating air-gapped systems are intended to raise awareness of the potential vulnerabilities of this security strategy and the ways in which they can be avoided.
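All of these channels share the same structure: malware modulates some physical observable (fan noise, heat, LED state, a magnetic field) on a fixed clock, and a nearby sensor demodulates it. As a purely hypothetical illustration of that framing idea, and not BGU's actual modulation schemes, the sketch below encodes bytes as a sequence of on/off states behind a preamble the receiver searches for:

```python
# A toy on/off keying channel: 1 = emitter on (LED lit, fan fast, etc.),
# 0 = emitter off, one state per fixed time slot.
PREAMBLE = [1, 0, 1, 0, 1, 0, 1, 0]  # alternating pattern for receiver sync

def encode(message: bytes):
    """Frame the message: preamble first, then each byte MSB-first."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    return PREAMBLE + bits

def decode(states):
    """Receiver: locate the preamble, then read 8-bit groups after it."""
    for start in range(len(states) - len(PREAMBLE) + 1):
        if states[start:start + len(PREAMBLE)] == PREAMBLE:
            payload = states[start + len(PREAMBLE):]
            out = bytearray()
            for i in range(0, len(payload) - 7, 8):
                byte = 0
                for bit in payload[i:i + 8]:
                    byte = (byte << 1) | bit
                out.append(byte)
            return bytes(out)
    return b""

# Round-trip through the simulated channel, with some idle time first.
recovered = decode([0, 0, 0] + encode(b"key"))
```

A real covert channel would also need timing recovery, error correction, and a threshold to turn noisy sensor readings into bits; defenses accordingly target the physical emitter (fan-speed lockdown, LED covers, shielding) rather than any particular encoding.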

The ability to defeat air gaps poses a significant threat to critical systems such as those used to monitor and control industrial processes. Security researchers at CyberX, a major ICS security vendor, were able to exfiltrate sensitive data from an air-gapped ICS network by injecting specially written ladder logic code into Programmable Logic Controllers (PLCs). The injected code encoded sensitive data into radio signals that can be received by regular AM radios, allowing the data to be extracted from the air-gapped ICS network. According to the researchers, PLCs ease the performance of data exfiltration because they run embedded real-time operating systems and are limited in processor and memory resources, making it difficult to run anti-malware programs. The execution of this attack relies on the abuse of intrinsic characteristics of most modern industrial protocols that stem from insecure design, such as poor authentication. The technique demonstrated by the researchers could be used by hackers to steal highly sensitive data such as proprietary formulas, nuclear blueprints, and other corporate trade or military secrets. Hackers could also gather reconnaissance data pertaining to ICS network topologies and device configurations that could later be used to execute damaging attacks. Operations supposedly conducted by Russian government hackers from 2016 to 2018 brought further attention to the potential dangers of jumping air-gapped networks. According to the US Department of Homeland Security (DHS), the Russian hackers were allegedly able to gain access to isolated air-gapped networks in control rooms at US electric utilities to gather confidential information and blueprints that would give them insight into the inner workings of America's power plants and grid.
The Kudankulam Nuclear Power Plant (KKNPP) in Tamil Nadu, India, suffered a cyberattack that showed air-gapped systems at nuclear facilities are still vulnerable to targeted attacks in which hackers exploit human weaknesses and supply chains. Critical infrastructure requires more than isolation to be adequately protected from cyberattacks.

While isolated systems are more secure than most, air-gapping is not a silver-bullet solution for protecting critical networks and systems from cyberattacks. Studies continue to prove that an attacker with the right resources, technical skills, and level of determination can gain access to air-gapped systems. Therefore, defenses surrounding highly sensitive systems must go beyond the restriction of personal computers, laptops, and removable media. In addition to air-gapping, security experts are encouraged to implement advanced defenses for isolated systems such as performing deep packet inspection, deploying firewalls, using intrusion detection systems, applying layered authentication controls, and more.

SoS Musings #35 - Better Secure Those Satellites


Satellites are human-built objects placed into orbit that enhance our lives in various ways, from sending television signals directly to our homes, to powering navigation systems such as the Navstar Global Positioning System (GPS), to monitoring weather. Earlier in the year, SpaceX successfully launched 60 Starlink satellites into orbit, bringing the total number of satellites launched by the company to 242 and giving SpaceX the world's largest active satellite constellation. Other companies, including Amazon and OneWeb, are racing to put satellites in space as well. According to Northern Sky Research's (NSR) "Small Satellite Markets Report, 5th Edition," over 7,000 additional small satellites will be launched by 2027. These satellites are expected to increase internet access in remote areas of the world, improve global navigation systems, and enhance environmental monitoring. Global Navigation Satellite Systems (GNSS) encompass all the satellite navigation systems that provide Positioning, Navigation, and Timing (PNT) services with comprehensive coverage. If GNSS were to suffer a significant one-day outage because of an attack, it would cost the U.S. an estimated $1 billion in damages, as these systems support automation and safety and maintain efficiency. Cybersecurity and policy experts have expressed concerns about the vulnerability of satellite systems to attacks by hackers, which poses a significant threat to global security and safety. As the number of satellites increases, and nation-states and rogue actors increasingly target critical infrastructure, more attention must be given to the protection of these systems.

Studies have shown the impact that cyberattacks on satellites could have on safety and security. Ruben Santamarta, a principal security consultant at IOActive, gave a presentation at the 2018 Black Hat conference in Las Vegas in which he brought attention to the vulnerability of popular satellite communication systems to cyber-physical attacks. These attacks pose a risk to the ships, planes, and military units that use these systems to connect to the internet. Research conducted by Santamarta revealed that hackers could execute attacks aimed at turning satellite antennas into radio frequency weapons acting as "microwave ovens" to cause physical damage to electrical systems and possibly injure soldiers or passengers. The exploitation of security vulnerabilities in the software that operates satellite antennas could also allow attackers to interrupt, prevent, or alter satellite communications, as well as execute additional attacks against other equipment connected to the satellite network. In the military realm, such attacks pose an even higher safety risk, as they could be used to extract the precise GPS coordinates of a satellite antenna, potentially exposing the exact location of a military base. Chatham House, the London-based independent policy institute, released a paper titled "Cybersecurity of NATO's Space-based Strategic Assets" that emphasizes the possible execution of GPS spoofing attacks against satellite systems to interrupt the transmission of radio frequency signals or send fake messages. Attackers could use these attacks to present false information, leading to the confusion and redirection of military troops as well as the hijacking of autonomous vehicles and robotic devices. These studies suggest the need to bolster satellite security, as the compromise of this technology by hackers could have significant consequences.

There have already been incidents in which hackers took control of satellites. In 1998, hackers compromised the U.S.-German ROSAT X-Ray satellite and aimed its solar panels directly at the sun, damaging its batteries. Hackers supposedly sponsored by the Chinese government took control of two NASA satellites, Landsat-7 and Terra EOS, in 2007 and 2008. In 2018, Symantec revealed the detection of a hacking campaign launched by Chinese state-backed actors that aimed to hack two U.S. satellite companies and gain access to the operational technology used to send commands to satellite systems. It is essential to examine the different factors that contribute to satellite security breaches.

There are several contributing factors to the vulnerability of satellites to hacking. Brian Weeden, Director of Program Planning for the Secure World Foundation, pointed out that satellites and their ground systems are just as vulnerable to the same cyberattacks faced by other computer systems because they often run widely used operating systems such as Unix or Linux in addition to some specialized software. Satellite-makers, especially those that build the inexpensive miniature satellites known as CubeSats, use off-the-shelf technology. As the components used to build CubeSats are within easy reach of hackers, they can readily be examined for exploitable vulnerabilities that could allow attackers to take control of these satellites. Many of these components are also based on open-source technologies. The technical structure, launch, and management of satellites also rely on contributions from multiple manufacturers, which increases outsourcing and thus the number of entry points for hackers. The complexity of satellite supply chains and management makes it difficult to determine which party is responsible if a satellite suffers a cyberattack, which impedes efforts to secure satellite systems. Also, as SpaceX, Amazon, and other companies compete to become the most powerful satellite operator, they are cutting costs to increase the speed at which satellites are manufactured. The increasing pressure to reduce costs leads companies to skimp on the implementation of cybersecurity measures when producing satellites. Cybersecurity standards and regulations for space assets such as satellites and their controls are also lacking. There needs to be an increase in efforts and research toward the improvement of satellite security.

Improving satellite security requires increased efforts in the realms of both technology and regulation. An article published by Defense One discusses the efforts of researchers at the National Security Agency (NSA) to improve satellite cybersecurity by exploring the use of Artificial Intelligence to characterize unusual behavior exhibited by small satellites and to help determine whether adversaries have secretly compromised them. According to the encryption solutions office of NSA's Capabilities Directorate, the NSA research team is also looking at how malware can be deployed to small satellites via a ground station to better understand the vulnerability of these devices to cyberattacks. The U.S. Air Force is planning to launch the Infrastructure Asset Pre-Assessment (IA-Pre) program, which will function as a cybersecurity screening program for commercial satellite communications providers that involves third-party audits to verify compliance with NIST 800-53 cybersecurity standards. Studies conducted by researchers at Penn State University further highlight the dangers posed by unauthorized access to or exposure of satellite data to national security and civil liberties, calling for the appropriate regulation and handling of this type of data by lawmakers, satellite owners, and operators. Efforts toward exploring and ensuring the cybersecurity of satellite systems must continue.

SoS Musings #36 - Stop Attackers From Pulling the Strings on the Internet of Things


The Internet of Things (IoT) refers to the system or network of physical devices that are connected to the internet and capable of sending and receiving data. The expanded internet connectivity provided by IoT devices extends beyond traditional devices such as laptops, desktops, smartphones, and tablets to various consumer devices embedded with technology that makes it possible for them to communicate or interact over the internet, such as smart appliances, wearables, toys, speakers, TVs, and more. According to a report released by Market Forecast in April 2019, titled "Global IoT Security - Market and Technology Forecast to 2027," the current market for IoT security spending is estimated at around $10 billion and is expected to reach $74 billion by 2027. The Statista Research Department projected the number of IoT devices to hit 31 billion by 2020 and 75 billion by 2025. IoT offers a number of advantages for consumers, including support for efficiency through Machine-to-Machine (M2M) communication, automation, ease of control, information-sharing, and product monitoring. However, the amount of data being transmitted via IoT devices, as well as the insecurity in the design, development, configuration, and implementation of these devices, makes them especially vulnerable to hacking. The exponential growth in the use of IoT devices by commercial entities and consumers, together with recent discoveries of vulnerabilities in IoT devices, calls on manufacturers, consumers, the security community, and government to increase efforts toward improving the security and privacy of these devices.

In recent years, there have been several discoveries of vulnerabilities in IoT devices, and incidents involving the exploitation of these flaws, that pose threats to the security and privacy of users. Paul Marrapese, a security engineer, discovered more than two million vulnerable IoT devices, including IP security cameras, baby monitors, and smart doorbells manufactured and distributed by vendors such as HiChip, TENVIS, SV3C, VStarcam, Wanscam, NEO Coolcam, Sricam, Eye Sight, and HVCAM. According to Marrapese, the vulnerability of these devices to hijacking by hackers derives from flaws in a peer-to-peer (P2P) communication component called iLinkP2P, firmware that enables the devices to talk to vendors' servers through the P2P protocol. Researchers at North Carolina State University discuss their findings of extensive design flaws in "smart home" IoT devices in a paper titled "Blinded and Confused: Uncovering Systemic Flaws in Device Telemetry for Smart-Home Internet of Things." Such flaws could allow threat actors to execute remote suppression attacks in which security-related signals from IoT devices are blocked; for example, an attacker could suppress the signal sent when a motion sensor is tripped, preventing homeowners from being notified that a break-in is occurring. Security researchers at F-Secure disclosed flaws in the KeyWe Smart Lock, an IoT device designed to allow homeowners to lock and unlock their homes via an app. The vulnerabilities found in these smart locks could be exploited by attackers using inexpensive network-sniffing equipment to intercept traffic between the mobile app and the smart lock and recover the key required to unlock the device. The FBI issued a warning about cybercriminals' abuse of unsecured smart TVs to gain entry into homes and listen in on users via TV microphones and built-in cameras.
Similarly, hackers can spy on users via IoT robot vacuums, as discovered by researchers at Checkmarx. The exploitation of vulnerabilities found in the internet-connected Trifo Ironpie M6 smart vacuum cleaner enables hackers to control the vacuum and monitor the video feeds recorded by the device's cameras. One incident that garnered major headlines was a hacker's access to an indoor Ring camera, used to harass an 8-year-old girl in her bedroom. Ring claimed that the incident was not the result of a breach or compromise of Ring's security, but of a potential credential-stuffing attack, calling on users to stop reusing credentials across different services. Other major IoT security incidents include the leak of millions of voice recordings by an IoT teddy bear and the exposure of kids' GPS data by an IoT smartwatch.
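
Credential stuffing is mechanically trivial, which is why password reuse is so dangerous: credentials leaked from one service are simply replayed against another. The toy below (all accounts and passwords are made up for illustration) shows the attack's entire logic.

```python
# Credentials leaked in a (hypothetical) breach of service A.
leaked_from_service_a = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correct-horse"),
]

# What the same people chose on an unrelated service B.
service_b_accounts = {
    "alice@example.com": "hunter2",    # reused password -> vulnerable
    "bob@example.com": "tr0ub4dor&3",  # unique password -> stuffing fails
}

def stuffing_attempt(leak, target_accounts):
    """Replay each leaked credential pair; return the accounts that fall."""
    return [user for user, pw in leak if target_accounts.get(user) == pw]

compromised = stuffing_attempt(leaked_from_service_a, service_b_accounts)
print(compromised)  # only the reused credential succeeds
```

Real attacks run this replay at scale through botnets, which is why unique passwords and multi-factor authentication, rather than any cleverness on the service's side, are the primary defense.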

The growth in botnets accompanies the increasing number of insecure IoT devices. An IoT botnet is a network of IoT devices infected with malware that allows malicious actors to gain control over them and perform different types of attacks, such as Distributed Denial-of-Service (DDoS) attacks to overwhelm targets, credential-stuffing attacks to take over accounts, web application attacks to steal data, and spamming. IoT botnets can have a wider impact than traditional botnets in that they can be composed of hundreds of thousands of devices. According to Radware, there are several reasons why IoT devices are attractive targets for attackers building botnets. Cybercriminals perceive IoT as low-hanging fruit because of problems such as default passwords and exposed services. Also, IoT devices are rarely monitored, inadequately maintained, poorly configured, and always on, allowing attackers to strike them at any time and exploit significantly large numbers of devices. In addition, the malware used to enslave IoT devices is often capable of changing devices' factory-set (default) passwords in order to block users from logging into their own devices. One IoT botnet that gained worldwide attention in 2016 is the infamous Mirai botnet, which crippled Krebs on Security, the French cloud computing company OVH, and the internet performance management and web application security company Dyn through massive DDoS attacks performed via more than 600,000 IoT devices such as air-quality monitors, personal surveillance cameras, routers, and digital video recorders (DVRs). Other notable large-scale IoT botnets include Linux.Aidra, Bashlite, LuaBot, Remaiten, and Linux/IRCTelnet. In 2019, researchers at Imperva discovered a massive botnet attack, similar to a Mirai botnet, which used more than 400,000 IoT devices to perform DDoS attacks against an online streaming application.
Researchers said this particular botnet produced more than 292,000 requests per minute. The recruitment of IoT devices by botnets can create a significant loss of operation and downtime for organizations.
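
One basic building block of DDoS mitigation is per-source rate accounting: count requests from each source over a sliding window and flag sources whose volume is far beyond human behavior. The sketch below is a minimal illustration of that idea, not any vendor's product; the window and threshold values are arbitrary placeholders that a real deployment would tune.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 1000  # requests per window; hypothetical limit for illustration

class RateFlagger:
    def __init__(self):
        self.hits = defaultdict(deque)  # source address -> recent timestamps

    def record(self, source, now):
        """Record one request; return True if the source now looks abusive."""
        q = self.hits[source]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()           # drop timestamps outside the sliding window
        return len(q) > THRESHOLD

flagger = RateFlagger()
# A normal client (one request every 2 s) vs. a flooding bot (100 req/s).
normal = any(flagger.record("10.0.0.5", t) for t in range(0, 60, 2))
flood = any(flagger.record("203.0.113.9", t * 0.01) for t in range(2000))
print(normal, flood)
```

The weakness this exposes is exactly why IoT botnets are effective: with 400,000 distinct source devices, each bot can stay under any per-source threshold while the aggregate still overwhelms the target, so real defenses must also aggregate across sources.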

Manufacturers, developers, and consumers must be aware of the security problems commonly found in IoT devices so that better choices can be made in the production, implementation, and management of such devices. The Open Web Application Security Project (OWASP), a nonprofit organization dedicated to improving software security, released a list of vulnerabilities commonly associated with IoT devices. Included on the list is the hardcoding of weak credentials into IoT devices, which can be easily brute-forced by attackers. IoT devices often run insecure network services, leaving them vulnerable to attacks in which the data they store or transfer is stolen or the devices are remotely controlled by hackers. Insecure ecosystem interfaces resulting from a lack of authentication, encryption, and filtering contribute to the vulnerability of IoT devices to compromise. Many IoT applications lack mechanisms for security updates, such as firmware validation, since vendors and enterprises often do not consider the future of IoT devices and how they might be implemented. IoT devices also often use insecure third-party software or hardware components that leave them vulnerable to compromise. Other vulnerabilities highlighted by OWASP include inadequate privacy protection, lack of data encryption, poor security management, and insecure default settings. Increased understanding of these vulnerabilities can improve efforts to bolster IoT security.
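
"Easily brute-forced" is worth making concrete. A hardcoded or default credential drawn from a small space, say a 4-digit PIN, gives an attacker only 10,000 candidates to try, which a single machine exhausts almost instantly. The sketch below uses a hypothetical factory-set PIN to show the arithmetic; a real device would at least slow the attacker with lockouts and rate limits, which is why hardcoded credentials that cannot be changed are listed as a top flaw.

```python
import itertools
import time

DEVICE_PIN = "8472"  # hypothetical factory-set credential

def crack(check):
    """Try every 4-digit PIN in order until the check accepts one."""
    for attempt in itertools.product("0123456789", repeat=4):
        pin = "".join(attempt)
        if check(pin):
            return pin

start = time.perf_counter()
found = crack(lambda pin: pin == DEVICE_PIN)
elapsed = time.perf_counter() - start
print(found, f"{elapsed:.3f}s")  # the whole keyspace falls in well under a second
```

Against a network service the per-attempt latency dominates, but botnets parallelize the search across devices, so the only durable fix is unique, user-settable credentials of adequate length.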

Efforts are being made in academia and industry to improve IoT security. Researchers at the Massachusetts Institute of Technology (MIT) built a chip that efficiently executes public-key encryption at significantly higher speed for IoT devices while consuming less power and memory. MIT researchers also conducted research aimed at securing IoT devices in the coming era of quantum computers, developing a novel circuit architecture capable of protecting low-power IoT devices from quantum computer attacks using lattice-based cryptography. Perry Alexander, director of the Information and Telecommunication Technology Center at the University of Kansas, and his multidisciplinary team of researchers, including computer engineers, psychologists, sociologists, and philosophers, received funding from the National Security Agency (NSA) to improve IoT cybersecurity. This team is working to develop technology to address IoT side-channel attacks, enhance IoT devices' resiliency against interruptions, and better understand human behavior to improve secure interaction with such devices. A team of Penn State World Campus researchers developed a multi-pronged data analysis approach that combines statistical data, machine learning, intrusion detection tools, visualization tools, and more to strengthen security for IoT devices, including smart TVs, home video cameras, baby monitors, and wearables. Innovators at Purdue University developed hardware technology that uses mixed-signal circuits to reduce the electromagnetic and power information leakage that can be leveraged in side-channel attacks against IoT devices. Another hardware-based technique to increase IoT security has been developed by engineers at Rice University, which aims to defend against new types of attacks specifically designed to compromise IoT and mobile systems.
The engineers' custom-built circuits are energy efficient and would make IoT systems 14,000 times stronger than existing protective technologies. Rapid7 IoT research lead Deral Heiland gave a talk in which he emphasized the importance of developing a comprehensive IoT security testing methodology that would help companies determine the traits of IoT to improve the detection and security of IoT devices. Heiland addressed the characteristics of IoT technology, which are based on four key areas: management control, cloud service APIs and storage, the capability to be moved to the cloud, and embedded technology. According to Heiland, companies can improve the protection of their IoT ecosystem if they know the traits of this technology and apply a methodology in the development and testing of IoT. Teserakt, a Swiss firm specializing in cryptography, introduced a cryptographic implant called E4 that IoT manufacturers can integrate into their servers to ensure end-to-end encryption for IoT devices. In an effort to save IoT from botnets, researchers at the Department of Information Engineering at the University of L'Aquila, Italy, are developing an approach to detecting and stopping botnet attacks using deep learning techniques. Testing of their approach showed that it could detect botnet attacks on systems with an accuracy of 97%. Security experts call for continued collaboration and innovation in IoT security research and development.

The government is continuing efforts toward promoting and regulating IoT security. In 2019, the Internet of Things Cybersecurity Improvement Act was introduced to establish a vulnerability disclosure process through which agencies report the vulnerabilities found in the IoT devices used by federal agencies. The bipartisan bill would prohibit U.S. government agencies from purchasing IoT devices from companies that fail to adopt coordinated vulnerability disclosure policies. The bill would also require the National Institute of Standards and Technology (NIST) to provide guidance to federal agencies on how to manage IoT security risk and properly use such devices. Such legislative efforts push manufacturers of connected devices to consider security in the design and building of these devices.

As the number of IoT devices, as well as the frequency and sophistication of IoT attacks, continue to grow, research and development efforts surrounding IoT cybersecurity solutions must continue.

SoS Musings #37 - The Double-Edged Sword of AI and ML


Artificial Intelligence (AI) and Machine Learning (ML) technologies are increasingly being implemented by organizations to protect their assets from cyberattacks. AI is defined as the concept and development of intelligent machines that are capable of performing activities that would usually require human intelligence, such as problem-solving, reasoning, visual perception, speech recognition, language translation, and more. ML is an application or subset of AI involving the use of training algorithms that enables machines to learn from provided data and make decisions. Many applications implement AI and ML to enhance our everyday life. These applications include image recognition, news classification, video surveillance, virtual personal assistants, medical diagnosis, and social media services. The realm of cybersecurity is experiencing an acceleration in the importance and use of AI and ML applications. Stefaan Hinderyckx, the Senior Vice President for Security at NTT Ltd., recently spoke about the growing importance of advanced AI and ML tools in identifying, detecting, and combating cybersecurity threats. Organizations have been encouraged to adopt solutions that can help identify security threats efficiently and quickly as they handle large amounts of data and face challenges in recruiting professionals with the skills needed to maintain the security of their systems against cyber adversaries. The growth of AI and ML in cybersecurity calls for the security community to explore further the benefits and potential risks posed by this technology.

Security professionals can benefit from the use of AI and ML in their operations. According to Avertium's 2019 Cybersecurity and Threat Preparedness Survey, to which over 200 cybersecurity and IT executives in the US responded, most professionals believe technology will play a significant role in the future of cybersecurity operations. AI and ML were cited by most professionals as technologies that will solve more problems than humans. However, other survey findings still highlight the importance of human intervention in identifying and combating cyber threats, with more than half of the respondents revealing plans to expand their cybersecurity teams. One example of a platform that aims to support human-machine collaboration in security analysis is PatternEx's Virtual Analyst Platform. Cybersecurity analysts are likely to be overwhelmed by the amount of data produced by employees and customers at their respective companies, making it increasingly difficult to identify data generated by attacks before any damage occurs. The platform developed by the Massachusetts Institute of Technology (MIT) startup PatternEx uses machine learning models to flag potential attacks and allows cybersecurity analysts to provide feedback to the models, which reduces false positives and increases analyst productivity. In comparison with a generic anomaly detection software program, the Virtual Analyst Platform identified ten times more threats from the same number of alerts. Another AI tool, called DeepCode, has been developed by Boston University computer scientists in collaboration with researchers at Draper. DeepCode uses a class of ML algorithms known as neural networks to help identify software flaws that could be exploited by hackers to infiltrate corporate networks. The tool is expected to eventually be capable of fixing the software vulnerabilities it identifies.
A team of computer scientists led by Prasad Calyam from the University of Missouri designed a deception-based cybersecurity system, called Dolus, that uses ML techniques to mislead malicious actors into thinking they are successfully attacking a targeted site or system in order to give security teams extra time to respond and prevent the success of Distributed Denial-of-Service (DDoS) attacks and Advanced Persistent Threats (APT). Dolus applies ML techniques to improve the detection of and defense against attacks aimed at gaining access to data and resources in small-to-large scale enterprise networks. Although there are many ways the security community can use AI and ML to improve security operations and the prevention of cyberattacks, there are still issues associated with this technology that must be considered, such as potential abuse by threat actors.

Several studies have brought further attention to the potential abuse of ML and AI systems, and to other issues surrounding this advanced technology. Dawn Song, a professor at UC Berkeley who focuses on the security risks associated with AI and ML, warned of the emergence of new techniques that malicious entities can use to probe ML systems and manipulate their functions. These techniques, known as "adversarial machine learning" methods, are capable of exposing the information used to train an ML algorithm and causing an ML system to produce incorrect output. Researchers at Princeton University conducted a series of studies exploring how an adversary can trick ML systems. They demonstrated three broad types of adversarial ML attacks that target different phases of the ML life cycle: data poisoning attacks, evasion attacks, and privacy attacks. Data poisoning attacks occur when adversaries inject bad data into an AI system's training set, causing the system to produce incorrect output or predictions. Evasion attacks abuse the successful training and high accuracy of an ML model by modifying its input so that the system misclassifies it during real-world decision-making, while the perturbations remain unnoticeable to the human eye. Adversaries could execute privacy attacks against ML models to retrieve sensitive information such as credit card numbers, health records, and users' locations from the data in an ML model's training pool. A research group led by De Montfort University Leicester (DMU) found that online hackers target the AI behind search engines, social media platforms, and recommendation websites to execute attacks more often than people realize.
SHERPA, a project funded by the European Union with a focus on the impact of AI and big data analytics, published a report highlighting this research, which also states that hackers often focus more on manipulating existing AI systems to perform malicious activities than on introducing novel attacks that apply ML methods. However, security researchers have pointed out how hackers can use ML to launch attacks such as social engineering attacks, ransomware, CAPTCHA violations, and DDoS attacks. ML can enhance social engineering with its capability to quickly collect information about businesses and employees that could be used to trick individuals into giving up sensitive data. For example, criminals successfully impersonated CEOs to steal millions of dollars in three separate attacks, using AI programs trained on hours of the executives' speech drawn from YouTube videos such as TED talks and other audio sources. Researchers in China and at Lancaster University were able to trick Google's bot and spammer detection system, reCAPTCHA, into thinking an AI program was a human user. Ransomware attacks driven by AI can significantly increase the damage inflicted, in that ML models with the right training data can disable a system's security measures as well as quickly create convincing malware-loaded fake emails, altering the wording of messages for each target. Other potential uses of AI by adversaries include taking control of digital home assistants, hijacking autonomous military drones, and spreading fake news on social media. There are also privacy concerns about AI-driven systems such as recommendation engines, as AI and ML algorithms cannot forget the customer or user data they use for training. In addition to the problems with controlling data once it is fed into ML algorithms, researchers have found that AI can be coaxed into revealing secrets, posing a significant threat to privacy and global security.
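
The core of an evasion attack can be shown on a toy model. In the spirit of the well-known fast gradient sign method, an attacker nudges each input feature a small, bounded step in the direction that moves the model's score toward the wrong class. The weights and input below are invented for illustration and have nothing to do with any real deployed model.

```python
import math

w = [2.0, -3.0, 1.5]  # "trained" weights of a toy logistic classifier
b = 0.5

def predict(x):
    """Probability of class 1 under a logistic model."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-score))

def evade(x, eps):
    """Step each feature against the model.

    For a linear score the gradient w.r.t. x_i is just w_i, so subtracting
    eps * sign(w_i) from each feature lowers the class-1 score as fast as
    possible under a bounded per-feature perturbation.
    """
    return [xi - eps * math.copysign(1.0, wi) for wi, xi in zip(w, x)]

x = [0.2, -0.1, 0.3]     # originally classified as class 1
adv = evade(x, eps=0.5)  # small, bounded perturbation per feature
print(predict(x) > 0.5, predict(adv) > 0.5)  # the decision flips
```

Against deep networks the gradient is computed by backpropagation rather than read off the weights, and the perturbation that flips an image classifier can be small enough to be invisible, which is exactly the property the Princeton evasion studies exploit.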

The possible exploitation of AI and ML mechanisms for malicious purposes has sparked efforts to protect this technology. Researchers at the Berryville Institute of Machine Learning (BIML) developed a formal risk framework to support the development of secure ML systems. The architectural risk analysis conducted by BIML focuses on issues that engineers and developers must keep in mind when designing and building ML systems. Their analysis explored the common elements of the setup, training, and deployment of a typical ML system, including raw data, datasets, learning algorithms, inputs, and outputs. The data security risks associated with each of these components, such as adversarial examples, data poisoning, and online system manipulation, were then identified, ranked, and categorized to inform the implementation of mitigation controls by engineers and developers. Kaggle, the data science community, held a competition to encourage the exploration of the best defenses against adversarial AI. Participants were asked to battle each other using offensive and defensive AI algorithms in hopes of improving insights into how ML systems can be protected against attacks. The growing advancement and execution of AI-based attacks call for continued exploration of how such attacks can be prevented.

AI is considered a double-edged sword because it can be used by security teams to improve cybersecurity, or it can be used by hackers to execute heightened attacks. The battle against adversarial ML requires collaborative efforts among researchers, academics, policymakers, and private entities that develop advanced AI systems.

SoS Musings #38 - Critical Infrastructure Cybersecurity


According to the U.S. Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA), there are 16 critical infrastructure sectors, which include chemical plants, energy, communications, critical manufacturing, emergency services, dams, transportation, information technology, healthcare, and more. These infrastructure sectors are deemed critical due to the necessity and sensitivity of their assets, systems, and networks. National security, economic security, and public health and safety would be significantly weakened if such elements of a critical infrastructure sector were disabled or destroyed by malicious hackers. Claroty, a cybersecurity company focused on developing security solutions for industrial control networks, released a report in March titled "The Global State of Industrial Cybersecurity," which reveals that IT security professionals across the U.S., UK, Germany, France, and Australia are more concerned about cyberattacks on critical infrastructure than about an enterprise breach. Over 70 percent of the 1,000 participants in Claroty's survey believe that cyberattacks on critical infrastructure are likely to inflict more damage than a data breach experienced by a company. A major cyberattack on U.S. critical infrastructure could have significant consequences. Research conducted by Lloyd's of London and the University of Cambridge's Centre for Risk Studies found that if the electric grid in fifteen states and Washington, D.C. were taken down by hackers, 93 million people would be left without power. Such an incident would lead to increased mortality rates, a decline in trade, disrupted water supplies, and damage to transport networks. A cyberattack of such a scale on critical infrastructure could cost the U.S. economy $243 billion to $1 trillion.
As cyberattacks on critical infrastructure have the potential to impact people's health and well-being as well as economic security, it is essential to explore the vulnerabilities and threats faced by such infrastructure and improve efforts to address them.

The different critical infrastructure components face threats and contain vulnerabilities that call for the continued development and research of security solutions. Operational Technology (OT) encompasses the hardware and software used to monitor and control the performance of physical devices, processes, and infrastructure. Industrial Control Systems (ICS), the main component of OT, refer to the various kinds of control systems and related tools, including devices, systems, networks, and controls, used in the operation or automation of industrial processes. SCADA (Supervisory Control and Data Acquisition) is a subset of ICS that refers to systems of software and hardware-based components that enable industrial organizations to locally control industrial processes, monitor real-time data, log events, and directly interact with devices such as sensors via Human-Machine Interface (HMI) software. SCADA systems help support industrial organizations' efficiency and decision-making, and communicate system problems to reduce downtime. Several critical infrastructure and SCADA/ICS cybersecurity vulnerabilities and threats exist due in part to the lack of basic security controls for OT systems. According to Check Point Software Technologies, a leading provider of cybersecurity solutions for governments and corporate enterprises globally, the most common vulnerabilities include the use of legacy software, default configurations, poor remote access policies and procedures, and a lack of encryption. The top threats are distributed denial-of-service attacks, web application attacks, malware, command injection and parameter manipulation, and a lack of network segmentation.
Other top cyber threats that critical infrastructure firms must be aware of include the growing use of vulnerable Internet of Things (IoT) devices that hackers could use to infiltrate critical infrastructure networks, the lack of security in the design of OT, and the inability to identify all devices connected to an OT network as well as the security flaws those devices possess. The recent growth in remote work due to COVID-19 increases the risk of cyberattacks on critical infrastructure, as employees must now access ICSs and OT networks from home, where secure connections and data protection are often inadequate. The cybersecurity skills shortage also plays a role in the vulnerability of critical infrastructure to cyberattacks, as indicated by a study conducted by EY (Ernst & Young), which pointed out the lack of skilled professionals to help identify and remediate threats to OT systems as well as the inadequate organization of the cybersecurity function within the Oil and Gas (O&G) sector. Jeanette Manfra, former Assistant Director for Cybersecurity at CISA, also emphasized the threat posed to national security by the cybersecurity workforce gap. The growing skills shortage means fewer professionals to protect the nation's critical assets from cyberattacks that could allow adversaries to cause damage, inflict harm, or manipulate the public's trust. These vulnerabilities and threats must be addressed.

Several studies highlight the vulnerability of critical infrastructure systems and the different ways in which adversaries can exploit it. A researcher who goes by the online name Wojciech used a tool he developed, called "Kamerka," along with open-source intelligence (OSINT) to demonstrate how easily adversaries can gather information on critical infrastructure in the U.S. Using Kamerka, Wojciech scanned the internet for ICS devices and protocols, leading to the discovery of 26,000 internet-exposed ICS devices in the U.S. Kamerka can also determine where ICS devices are geographically located and which critical infrastructure targets may be attractive to an adversary, further highlighting the ease with which a threat actor could gather intelligence on U.S. critical infrastructure and use it to find valuable targets. According to researchers at the New York University Tandon School of Engineering, public electric vehicle charging stations could be exploited by hackers to execute remote attacks against urban power grids, using information about a station's location, charging times, and average hourly power draw to manipulate demand at a particular charging station. Another study, by researchers at Princeton, found that a botnet made up of thousands of compromised connected home appliances such as air conditioners and water heaters could be used to overwhelm the power grid and cause mass blackouts. Security researchers at Ben-Gurion University of the Negev (BGU) also warned of the potential exploitation of firmware vulnerabilities in widely sold commercial smart irrigation systems that would allow attackers to control watering systems remotely. The BGU researchers said attackers could form massive botnets of smart irrigation systems capable of emptying an urban water tower or a flood water reservoir in a short amount of time.
The North American Electric Reliability Corporation (NERC) published a report discussing an incident that occurred in March 2019 in which external entities exploited a known vulnerability to cause firewalls at multiple U.S. power generation sites to reboot repeatedly over a ten-hour period. Findings from an audit of the water system in the city of Middletown in upstate New York, conducted last year by the New York State Comptroller's Office, revealed cybersecurity flaws in policies and procedures that could have allowed hackers to infiltrate the city's networked water system. The policies and procedures lacked information on technology employees' duties, proper portable device usage, and monitoring of networked water system devices. Employees were also not provided with security awareness training. These studies call for improved cybersecurity policies, IoT device security, maintenance of security systems such as firewalls, security training, and more, to bolster the security of critical infrastructure.
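To make the idea of internet-exposed ICS devices concrete, the sketch below probes whether a host answers on the default TCP ports of a few common industrial protocols, roughly the kind of reachability check a scanner like Kamerka automates at internet scale. The port list and the `exposed_ics_ports` helper are illustrative inventions for this example, not part of any real tool, and such probing should only ever be run against systems you are authorized to test.

```python
import socket

# Default TCP ports of a few widely deployed industrial protocols.
ICS_PORTS = {502: "Modbus/TCP", 102: "Siemens S7", 20000: "DNP3"}

def exposed_ics_ports(host, ports=ICS_PORTS, timeout=1.0):
    """Return the names of ICS protocols whose default TCP port accepts a
    connection on `host` -- a sign the device may be internet-reachable."""
    found = []
    for port, proto in ports.items():
        try:
            # A successful TCP connect means something is listening there.
            with socket.create_connection((host, port), timeout=timeout):
                found.append(proto)
        except OSError:
            pass  # closed, filtered, or unreachable -- not exposed
    return found
```

A real scanner would go further and speak each protocol to fingerprint the device, then combine results with geolocation data, which is how a map of attractive targets gets built from nothing but public reachability.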

There are efforts to bolster critical infrastructure cybersecurity. The U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) awarded Cyber Apex Solutions, LLC, a five-year Other Transaction Agreement (OTA) valued at a maximum of $70 million in support of applied research on prototype cybersecurity defense technologies that would bolster the protection of critical national infrastructure sectors. The funding provided by S&T through this OTA helps further the testing, evaluation, and transition of prototype cyber-defenses that would reduce the potential damage cyberattacks could cause to critical infrastructure sectors. Security training is one area of focus in improving critical infrastructure cybersecurity. Jeanette Manfra said the agency is improving its prioritization of training for cybersecurity professionals to fill the talent gap and help strengthen security for U.S. critical infrastructure. According to Manfra, DHS is working on developing a curriculum aimed at cultivating the cybersecurity skills of kids in grades K-12 as well as a workforce training program to recruit and retain those skilled in cybersecurity. Researchers are also continuing their efforts to improve the security of different critical infrastructure sectors. For example, Milos Manic, professor of computer science and director of Virginia Commonwealth University's Cybersecurity Center, in collaboration with researchers at the Idaho National Laboratory (INL), developed a power grid protection system called the Autonomic Intelligent Cyber Sensor (AICS), which is inspired by the human body's autonomic nervous system in that it uses Artificial Intelligence (AI) algorithms to continually learn and improve itself as the power grid faces attempted hacking attacks.
INL itself has a cybersecurity program that works to protect control systems such as those used for energy pipelines, nuclear power plants, and drinking water systems across the U.S. A new global alliance, called the Operational Technology Cyber Security Alliance (OTCSA), was formed to improve OT security through a five-pronged approach that involves strengthening the cyber-physical risk posture of OT interfaces and guiding OT operators on best security practices. Research, training, and development are essential to improving critical infrastructure security.

The manipulation of these critical infrastructure systems by malicious actors poses significant threats to citizens' safety and well-being. Government agencies, private companies, and the security community are encouraged to develop or improve methods for bolstering the security of such systems.

SoS Musings #39 - Cryptographers Prepare for the Arrival of Quantum Computers

SoS Musings #39 -

Cryptographers Prepare for the Arrival of Quantum Computers

Quantum computing is the leading application of quantum physics in technology. Quantum computers are expected to provide several advancements and benefits for many different fields such as artificial intelligence, molecular modeling, financial modeling, weather forecasting, drug design, and more. However, much preparation is needed for the risks and threats presented by quantum computers.

Quantum computers are expected to solve the most complex problems because the way they operate allows them to perform calculations that classical computers cannot. Classical computers encode information in bits, with each bit representing a 0 or a 1. These zeros and ones behave as on-off switches that enable computers to carry out operations. In quantum computing, qubits are used to encode information. Qubits apply the principles of quantum physics known as superposition and entanglement to solve problems. Superposition describes a qubit's ability to exist in many states simultaneously, which allows a qubit to be one, zero, or both one and zero at once. Entanglement refers to the correlation between two qubits in a superposition that forces the state of one qubit (e.g., a zero, a one, or both) to depend on the state of the other. The application of entanglement reduces the number of logic operations needed to solve a problem. These quantum mechanics principles enable quantum computers to solve certain classes of problems exponentially faster than classical computers. IBM further illustrates why quantum computers are more powerful than traditional computers, using an example involving solving a maze. A classical computer would use its bits to test each possible route individually until it finds the correct one. In contrast, a quantum computer would apply the principles of superposition and entanglement via qubits to find the right path significantly faster with fewer calculations. However, quantum computers' ability to solve complex problems will require the development of new cryptographic approaches.
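The superposition and entanglement described above can be made concrete with a toy state-vector simulation. The sketch below is a pure-Python illustration, not a real quantum device, and the helper names are invented for this example; it prepares a Bell state by putting one qubit into superposition with a Hadamard gate and then entangling it with a second qubit via a CNOT gate.

```python
import math

# Amplitudes for the 2-qubit basis states |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def apply_hadamard_q0(s):
    """Put qubit 0 (the left bit) into superposition: |0> -> (|0>+|1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def apply_cnot(s):
    """Entangle: flip qubit 1 whenever qubit 0 is 1 (swaps |10> and |11>)."""
    return [s[0], s[1], s[3], s[2]]

bell = apply_cnot(apply_hadamard_q0(state))
# Measurement probabilities are the squared amplitudes.
probs = [round(abs(a) ** 2, 3) for a in bell]
print(probs)  # [0.5, 0.0, 0.0, 0.5]
```

Measuring this state yields 00 or 11 with equal probability but never 01 or 10: the two qubits' outcomes are perfectly correlated, which is what entanglement means operationally.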

While there are significant potential benefits to quantum computing, security experts have expressed concern about the threat posed by quantum computers to the security protocols that currently protect passwords, digital signatures, health records, and other types of data stored in systems managed by the government, military, financial industry, and more. The quantum-mechanical properties that enable quantum computers to calculate significantly faster than today's computers give them the potential to break current public-key encryption algorithms, including RSA and ECC. If quantum computing renders modern encryption algorithms useless, troves of sensitive data will be left open as attackers use quantum computers to break secure communications.

The potential impact of such attacks has prompted a race among researchers and security firms to develop new approaches to cryptography, characterized as Post-Quantum Cryptography (PQC), that can withstand them. An article published by MIT Technology Review defines PQC as the development of new types of cryptographic methods that can be applied via modern computers while being impervious to future quantum attacks. The U.S. National Institute of Standards and Technology (NIST) is making an effort to get quantum-resistant cryptographic standards ready before the age of practical quantum computing arrives. The agency initiated the PQC Standardization Process, in which researchers from academia and private industry are challenged to develop a new generation of cryptographic algorithms that are resistant to quantum attacks and can replace modern cryptography. The NIST process seeks algorithms in two general categories: key-establishment algorithms and digital signature algorithms. Key-establishment algorithms aim to enable two parties that have never met to agree on a shared key, while digital signature algorithms verify whether data is authentic. Both categories call for new algorithms based on mathematical approaches that quantum computers cannot decipher. NIST recently announced that the PQC process has entered its third phase, in which the 69 submissions initially received have been narrowed down to 15. At the end of this round, NIST will standardize one or more of the quantum-resistant algorithms. NIST plans to conclude the process and draft standards for PQC in 2022. Qrypt, Inc.
licensed a Quantum Random Number Generator (QRNG) from the Department of Energy's Oak Ridge National Laboratory (ORNL) to include in its existing encryption platform, leveraging the inherent randomness of quantum technology to create unique, unpredictable encryption said to be unbreakable by cyberattacks, including those executed by quantum computers. The QRNG technology is said to detect and measure the characteristics of electromagnetic waves, called photons, to create truly unique, unpredictable, and indecipherable encryption keys. IT experts at Monash University devised a post-quantum secure privacy-preserving algorithm, called the Lattice-Based One Time Linkable Ring Signature (L2RS), powerful enough to prevent attacks launched using quantum supercomputers in the future. The L2RS enhances the security and privacy of large transactions and data transfers to the point that they cannot be hacked by quantum computers. IBM researchers have also proposed lattice cryptography as a security method to protect data from crypto-breaking quantum computers. Researchers must continue their efforts to develop post-quantum cryptography before practical quantum computers arrive.
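To give a flavor of the lattice-based approaches mentioned above, the toy sketch below implements a Regev-style encryption of a single bit on top of the Learning With Errors (LWE) problem: recovering the secret from noisy inner products is believed hard even for quantum computers. All parameter values and helper names here are invented for illustration; real schemes use far larger parameters and carefully chosen error distributions.

```python
import random

random.seed(1)
q, n, m = 97, 4, 10  # toy modulus, secret dimension, sample count

# Secret key: a random vector over Z_q.
s = [random.randrange(q) for _ in range(n)]

# Public key: m noisy inner products (a_i, b_i = <a_i, s> + e_i mod q).
# The small error e_i is what makes recovering s hard (the LWE problem).
A, b = [], []
for _ in range(m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    A.append(a)
    b.append((sum(x * y for x, y in zip(a, s)) + e) % q)

def encrypt(bit):
    """Encrypt one bit by summing a random subset of public-key samples."""
    subset = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """Errors stay small, so v - <u, s> is near 0 for bit 0, near q/2 for 1."""
    d = (v - sum(x * y for x, y in zip(u, s))) % q
    return 0 if min(d, q - d) < q // 4 else 1
```

Decryption works because the accumulated error (at most m in magnitude here) stays well below q/4, so the ciphertext decodes unambiguously; an attacker without the secret faces the LWE problem, for which no efficient classical or quantum algorithm is known.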

Although it remains unknown as to when quantum computing will render modern cryptography algorithms obsolete, researchers from academia and private industry, as well as the government, must collaborate and continue advancing cryptography for the post-quantum future.

SoS Musings #40 - The Need for Stronger Social Media Security

SoS Musings #40 -

The Need for Stronger Social Media Security

Social media has changed the way we communicate in the personal and professional realms of our everyday lives. According to Statista, an online portal for statistics, an estimated 3.6 billion people are using social media worldwide in 2020, with the number projected to reach 4.41 billion by 2025. Statista has also revealed that Facebook is the most popular social media platform worldwide, with more than 2.6 billion monthly active users. Other highly popular social media platforms include Instagram, Twitter, and LinkedIn. Using social media can help people build relationships, share their expertise, increase brand visibility, and learn about current events. Social media networks could also benefit the security community in that they could be used to find information about newly discovered software vulnerabilities and raise awareness about them so researchers, organizations, or governments can fix them before threat actors exploit them. Though there are benefits to using social media, its pervasive presence in users' lives can negatively impact security and privacy in many different ways.

One issue is that social media networks can be a breeding ground for malware. Research from Bromium has shown that high-traffic social media sites such as Facebook, Twitter, and Instagram have become massive centers for the distribution of malware and the conduct of cybercriminal activity. Not only are individual users at risk of their personal systems being infected by malware distributed via social media sites, but organizations must also pay attention to the threat posed by social media sites to the security of their customer data and intellectual property, as more than 12% of businesses revealed that they had experienced a security incident because of employees' use of these platforms. A report published by Bromium, titled "Social Media Platforms and the Cybercrime Economy," highlights the various techniques cybercriminals apply to abuse the social media ecosystem and spread malware infections. The tactics used to lure users into downloading malware include inserting malicious code or links into ads, plug-ins, apps, and posts with news, friend updates, photos, and videos. The wide variety of content that can be accessed on social media platforms also leaves users vulnerable to drive-by downloads, which in the case of social media refers to inadvertent malware downloads triggered by visiting a website recommended in a post that contains malicious code or redirects users to another page infected with malware.

Social engineering attacks like phishing are prevalent on social media platforms as well. Arkose Labs' analysis of over 1.2 billion transactions across industry segments including social media, technology, financial services, travel, and retail revealed that more than half (53%) of all social media logins are fraudulent, and 25% of all new social media accounts are fake. These findings suggest that cybercriminals can easily abuse social media platforms using various forms of phishing. In 2018, PhishLabs found that social media abuse through phishing attacks increased significantly, with the number of such attacks against these platforms continuing to grow. There are several forms of phishing that cybercriminals can perform on social media networks for fraud, information theft, and more. Threat actors can befriend a targeted user, gather information about them, and create personalized posts containing links to fake login pages where the user's credentials are captured. Using these credentials, threat actors can access the user's account to launch further attacks against new targets. Impersonation plays a significant role in the success of phishing attacks on social media--by posing as someone with authority, an attacker can easily gain a targeted user's trust and push them into performing a specific action such as revealing personal information or clicking a malicious link. Some of Twitter's employees were recently targeted in a coordinated social engineering attack, later revealed to have involved a form of phishing called "phone spear phishing," also known as "vishing" or "voice phishing." The employees received phone calls from hackers posing as IT staff to deceive them into giving up their passwords for internal Twitter tools. Access to these tools led to the compromise of 45 accounts belonging to CEOs, celebrities, politicians, and other high-profile users.
Their accounts were used to promote a bitcoin scam. Elected officials have expressed major concern about the incident, as the compromise of an account belonging to a world leader could impact national security in the U.S. and other countries, as well as crash markets and create political conflict.

Social media bots could also facilitate the execution of phishing attacks on social media networks. These bots are social media accounts that use artificial intelligence to automate news aggregation, customer assistance for online retailers, and more. As these bots continue to become more advanced at mimicking human behavior, it is getting more difficult to detect them. Not only can social media bots help perform phishing scams, but they could also be used to spread disinformation. The security community is encouraged to continue developing solutions to reduce phishing attacks on social media and raise awareness among users on how to avoid falling victim to such attacks.
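One simple building block for the kind of anti-phishing tooling called for above is lookalike-domain detection: many phishing pages sit on domains a character or two away from the brand they impersonate. The sketch below is an illustrative heuristic, not a production defense; the function names and brand list are invented for this example.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

KNOWN_BRANDS = ("facebook.com", "twitter.com", "instagram.com", "linkedin.com")

def looks_like_impersonation(domain: str) -> bool:
    """Flag domains close to -- but not exactly matching -- a known brand."""
    return any(0 < edit_distance(domain.lower(), brand) <= 2
               for brand in KNOWN_BRANDS)

print(looks_like_impersonation("twiitter.com"))  # True: one edit from twitter.com
print(looks_like_impersonation("twitter.com"))   # False: exact match is legitimate
```

In practice, defenders combine heuristics like this with homoglyph checks, certificate-transparency monitoring, and reputation feeds, since attackers also register domains that differ only in confusable Unicode characters.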

Social media users should limit the amount of information they share about themselves on social media. According to Joseph Turow, a Professor of Communication at the Annenberg School for Communication, photos and other personal information shared on social media leave users' accounts vulnerable to access by unauthorized entities. Attackers can use photos shared on Facebook and other social platforms to gain more insight into an individual's context and their relationships with others. Shared photos with hashtags can reveal information about a user, such as where they went to high school, when they graduated, what type of car they have, their favorite shows, and more. This information could provide answers to the most common security questions for bank accounts and other online financial accounts. Sharing too much about yourself on social media could increase the success of online scams, hacks, or spear-phishing attacks.

Social media puts one's privacy and security at risk, whether they have an account on such platforms or not. A team of scientists from the University of Vermont and the University of Adelaide found that information in Twitter messages from 8 or more of a person's contacts can be used to predict what that person will tweet later. The study also showed that even if a person has left a social media platform or never had a social media account, that person's future activities and identity could be predicted based on their friends' online posts and words. This finding also suggests that information gathered from other people's social media accounts and posts can be used to track users and potentially help facilitate phishing attacks against them on other types of platforms.

Incidents have highlighted flaws in social media platforms' systems and features that could impact users' privacy, security, reputation, and safety. Facebook experienced a massive security incident that impacted almost 50 million user accounts. Attackers exploited a series of bugs associated with a Facebook feature called "View As," which is designed to let users see how their profile appears to another user when privacy settings are enabled. The set of bugs related to this feature enabled Facebook's video upload tool to appear on the "View As" page and caused the uploader to generate an access token, which hackers then used to access the affected user's account. A bug in Facebook's software left 14 million users' posts publicly viewable to anyone, even posts meant to be private. Another vulnerability associated with the Facebook app caused iPhone cameras to activate when the app was opened, potentially allowing users to be recorded. Instagram faced a security slip-up due to a vulnerability in its contact importer feature that could have been exploited, using a brute-force algorithm and automated bots, to link users' phone numbers to their accounts.

It has also been demonstrated that attackers can abuse certain functions of social media platforms to incriminate or physically harm users. Security researchers at the Ben-Gurion University of the Negev identified weaknesses in the management of the posting systems of Facebook, Twitter, and LinkedIn that could be exploited by an attack in which the mechanisms implemented to prevent posts from being changed, or to indicate when a post has been edited, can be overridden. The online social network (OSN) attack, dubbed "Chameleon," can change the way a user's content is displayed publicly without indicating that any change has occurred until the user logs back in. For example, a user could watch and "like" a post displaying a video of a kitten, only to log back into their account and find that the same post they liked is now an ISIS execution video. The Chameleon attack can destroy one's reputation and even incriminate a user. Such an attack could also facilitate the creation and management of fake profiles on social media platforms as well as the circumvention of censorship and monitoring. Attackers sent GIFs and videos to followers of the Epilepsy Foundation's Twitter account during National Epilepsy Awareness Month using the account's handle and hashtags. The GIFs and videos displayed flashing strobe lights, which could have caused seizures in those with epilepsy. Although this incident does not resemble a traditional cyberattack, since the Twitter account was not hacked and users were not tricked into clicking malicious links, it is still a cyberattack designed to cause physical harm. Security researchers must explore the untraditional ways in which attackers can harm users through social media platform features.

There are efforts to bolster social media security. Researchers at the University of North Georgia (UNG) are working to provide Facebook, Twitter, and Instagram users with tools to protect their sensitive data. They conducted controlled experiments to see how information is stored on social media account holders' computers and web browsers, and how easy it is to extract personal data when those users are logged into their accounts on a given machine. Their research also looks for security flaws in popular social media platforms and new ways for people to secure their accounts and information. Researchers at ZeroFox investigated 40,000 fake social media accounts using honeypot accounts to better understand how social engineering attacks such as impersonation work on social networking sites. More research is needed in the development of social media security solutions.

As social media usage continues to grow, especially during the COVID-19 pandemic due to stay-at-home recommendations, there are more opportunities for executing cyberattacks. The security community is encouraged to develop and improve techniques for securing social media users and the information they share on such platforms.

SoS Musings #41 - 5G Security: Are We Ready?

SoS Musings #41 -

5G Security: Are We Ready?

The improvements offered by the 5G mobile communication standard are expected to be accompanied by new security challenges. 5G is the fifth generation of wireless technology, now being rolled out nationwide by all of the major US carriers, including Verizon, T-Mobile, and AT&T. A survey conducted by Gartner, a global research and advisory company, found that more than 60% of organizations planned to adopt 5G by 2020, with many intending to use 5G networks mostly to support IoT devices across their business. A report from Cisco predicts that 5G's significantly faster broadband will support over 12 billion mobile-ready devices and IoT connections by 2022, up from an estimated 9 billion in 2017. This next generation of mobile internet connectivity is expected to vastly improve our lives by offering faster speeds, lower latency, and greater capacity and bandwidth, among other benefits. However, 5G is expected to have a major impact on cybersecurity strategy, as indicated by a survey conducted by Information Risk Management (IRM) in which over 80% of the participating senior cybersecurity and risk management decision-makers expressed concern that 5G developments will introduce new cybersecurity challenges. It is essential to understand these challenges in order to prepare for them.

As a result of the transition from centralized hardware to distributed software-based functions, 5G networks create new opportunities for hackers. In a study by Accenture, a global professional services company, of over 2,600 business and technology decision-makers across 12 industry sectors in Europe, North America, and Asia-Pacific, more than 60% of respondents said they fear that 5G will increase their organization's vulnerability to cyberattacks. The Brookings Institution published an article highlighting a number of reasons why 5G networks are sparking security concerns. The transition from the centralized, hardware-based switching of earlier mobile standards such as LTE to software-defined, distributed digital routing removes the hardware choke points where inspection and control could be implemented. 5G also virtualizes in software the higher-level network functions previously performed by physical appliances, increasing their vulnerability to hijacking by malicious actors. And even if all of the software vulnerabilities within a 5G network were addressed, the network itself is managed by software, which an attacker could infiltrate to gain control of the network. The significant bandwidth boost offered by 5G will further expand the attack surface, since the short-range, small-cell antennas proliferating in urban areas will serve as new targets. These cell sites will use 5G's Dynamic Spectrum Sharing capability, which allows streams of information to share bandwidth in "slices," with each slice carrying its own level of cyber risk. Finally, the growth in IoT driven by 5G will expand cyber risks as billions of new IoT devices with varied security levels connect to 5G networks.
The security community must prepare for these new challenges.

Researchers have made efforts to discover new security flaws in 5G networks. Researchers from Purdue University and the University of Iowa discovered 11 vulnerabilities in the design of 5G protocols that attackers could exploit to expose a user's location, downgrade their service to older mobile data networks, raise their wireless bill, and track their calls, text conversations, and web browsing history. They also found that 5G networks inherit vulnerabilities from 3G and 4G, stemming from the adoption of security features from those generations of cellular networks. These discoveries were made using a custom tool the team developed, called 5GReasoner. A comprehensive security analysis of the 5G standard conducted by researchers in the Information Security Group at ETH Zurich revealed security gaps in the 5G Authentication and Key Agreement (AKA) protocol, which is supposed to guarantee security by allowing the device and network to authenticate each other, ensuring the confidentiality of data exchanges, and protecting the privacy of a user's identity and location. Using a tool called Tamarin, they found that the 5G standard is not sufficient to achieve the security objectives set for the 5G AKA protocol; gaps in the standard could still allow traceability attacks despite the implementation of 5G AKA. Researchers at Positive Technologies released a report covering the Diameter protocol, a component of the Long Term Evolution (LTE) standard that supports communication between Internet protocol network elements. 4G networks use this protocol for the authentication and authorization of messages, and, according to the report, architectural flaws in the Diameter protocol leave every 4G network vulnerable to Denial-of-Service (DoS) attacks.
As 5G networks are built on this existing architecture and these protocols, they are expected to inherit the same security weaknesses. An article published by IEEE Spectrum further highlights 5G networks' inheritance of 3G and 4G security flaws, stemming from the different timetables for 5G deployments among network operators. Since 5G networks will have to work in conjunction with legacy networks for the next few years, the next-generation cellular networks will remain vulnerable to spoofing, fraud, user impersonation, and other attacks. Network operators' continued dependence on the General Packet Radio Service (GPRS) Tunneling Protocol (GTP), used to carry packets between different wireless networks, will also leave 5G networks vulnerable to attack, as the protocol contains several vulnerabilities, one of which could allow an attacker to steal credentials or spoof user session data to impersonate a user.

In efforts to address 5G network security, Idaho National Laboratory (INL) established the INL Wireless Security Institute last year to guide and organize the research efforts of government, academia, and industry aimed at improving the security of 5G wireless technology. The Cybersecurity and Infrastructure Security Agency (CISA) released a strategic plan for 5G infrastructure outlining five strategic initiatives, with specific actions and responsibilities, that seek to ensure the security and resilience of 5G technology in the United States. These initiatives include supporting the development of 5G policies and standards, raising situational awareness of 5G supply chain security, partnering with stakeholders to bolster existing infrastructure, increasing the number of trusted vendors in the 5G marketplace by encouraging innovation, and sharing information about newly discovered vulnerabilities and risk management strategies associated with 5G technology. These efforts require increased collaboration and information-sharing. Verizon recently shared details about its efforts to enhance 5G network security, which include exploring the use of an Artificial Intelligence (AI) and Machine Learning (ML) security framework to detect security anomalies and analyze the performance of cell towers. Other 5G network security solutions being tested by Verizon apply network accelerators and data fingerprints to increase the speed at which security breaches are detected and to help companies determine whether the integrity of their data has been compromised following a cyberattack. Such efforts to strengthen 5G security must continue.

As the adoption and implementation of 5G technology continues to grow, technical solutions, protocols, and research in support of bolstering 5G security must continue to be developed, validated, and conducted.

SoS Musings #42 - Medical Device Vulnerabilities: Healthcare is at Risk

The cybersecurity risk to connected medical devices has grown during the COVID-19 pandemic, making it more important than ever to bolster the security of these devices by addressing their vulnerabilities. The U.S. Department of Health and Human Services (HHS) reported a 50% increase in cybersecurity attacks against hospitals and healthcare providers' networks during the COVID-19 crisis, with hackers increasingly targeting medical devices as the number of hospital patients increases. A study conducted by researchers from Vanderbilt University and the University of Central Florida provided further evidence that healthcare cyberattacks can decrease the quality of medical treatment provided to patients. The influx of patients during the pandemic, and the resulting increase in the use of healthcare devices, has expanded the attack surface for hackers, further heightening the threat posed to patient privacy and security.

There are several factors contributing to the vulnerability of medical devices to hacking. Based on Palo Alto Networks' analysis of 1.2 million Internet of Things (IoT) devices in thousands of healthcare organizations in the U.S., more than 80% of healthcare devices run on outdated operating systems, including Windows 7 and Windows XP. Medical equipment such as X-ray machines, Magnetic Resonance Imaging (MRI) machines, and Computerized Axial Tomography (CAT) scanners has been found running on old, unsupported operating systems, leaving it significantly vulnerable to being targeted by cybercriminals. In addition to the continued reliance on outdated software, medical devices are often found configured with default passwords and left with standard management ports open. These weaknesses expose medical devices to manipulation of device functions, Denial-of-Service (DoS), remote code execution, and other attacks that could put a patient's life at risk.
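Open management ports and default credentials are among the easier weaknesses for a hospital security team to find on its own. The sketch below is a minimal illustration of such an audit, not a vetted clinical tool: it probes a host for a handful of commonly exposed management services. The port list and timeout are assumptions chosen for the example.

```python
import socket

# Management services commonly left exposed on medical devices (illustrative list)
DEFAULT_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 80: "HTTP", 8080: "HTTP-alt"}

def scan_device(host, ports=None, timeout=1.0):
    """Return the names of management services that accept a TCP connection."""
    open_services = []
    for port, name in (ports or DEFAULT_PORTS).items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted the connection
                open_services.append(name)
    return open_services
```

A device reporting Telnet or FTP open on a hospital network would be a candidate for immediate reconfiguration or segmentation.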

There have been many discoveries of medical device vulnerabilities that threaten patients' safety and privacy. Researchers at the healthcare security firm CyberMDX uncovered two vulnerabilities in the Becton Dickinson Alaris Gateway Workstation used in hospital wards and intensive care units to run, monitor, and control infusion pumps. As infusion pumps are medical devices used to deliver specific doses of medicine such as insulin and painkillers, continually or intermittently, any attack on these devices could put a patient's life at risk. The exploitation of one of the vulnerabilities discovered in the Alaris Gateway could allow attackers to remotely install malicious firmware on the workstation to adjust specific commands on the infusion pump, such as those that alter the rate at which drugs are administered to a patient. The U.S. Food and Drug Administration (FDA) issued an alert about a set of vulnerabilities named URGENT/11, stemming from a third-party software component, that impacts medical devices and hospital networks. These vulnerabilities could be used by attackers to remotely take over devices, alter their functions, launch Denial-of-Service (DoS) attacks, leak sensitive information, and trigger logical flaws. The FDA also raised awareness among patients, healthcare providers, and manufacturers about a set of cybersecurity vulnerabilities called SweynTooth that affect various medical devices using Bluetooth Low Energy (BLE), which may include pacemakers, blood glucose monitors, and ultrasound devices. Through the abuse of SweynTooth vulnerabilities, attackers can disable devices or access functions that should only be available to authorized users. 
Philips, a global leader in health technology, reported to the Cybersecurity and Infrastructure Security Agency (CISA) a vulnerability discovered in its ultrasound systems, which are used to produce pictures of soft body tissue structures to help in the diagnosis of various diseases and conditions. The vulnerability in Philips' ultrasound medical devices could allow an attacker to view or alter information using an alternative path or channel that does not require authentication, potentially leading to misdiagnosis. JSOF security researchers disclosed another series of security flaws, dubbed Ripple20, originating from a low-level TCP/IP software library that many IoT device manufacturers implement in their devices or use via embedded third-party components. Ripple20 vulnerabilities could also enable DoS, information disclosure, remote code execution, and device takeover. According to the researchers, Ripple20 affects Baxter infusion pumps and other connected devices essential for providing medical care.

There are efforts from academia, industry, and government agencies to bolster medical device security. Researchers at Purdue University developed a prototype device aimed at preventing remote hacks on medical devices by keeping these devices' signals from radiating outside the human body. The technology works by facilitating medical device communication in the electro-quasistatic range, which is much lower on the electromagnetic spectrum than Bluetooth communication. The Sensing, Processing, Analytics, and Radio Communication (SPARC) lab is working with entities in government and industry to implement this technology in pacemakers, insulin pumps, and other medical devices. Researchers at Ben-Gurion University of the Negev developed a new Artificial Intelligence (AI)-based method for protecting medical imaging devices such as Computed Tomography (CT), MRI, and ultrasound machines from malicious, abnormal, or anomalous operating instructions that may indicate or lead to a cyberattack. Their technique uses a dual-layer architecture that applies AI to analyze instructions sent from a host PC to a medical device's physical components, allowing the detection of different types of anomalous instructions. The National Institute of Standards and Technology (NIST), together with the National Cybersecurity Center of Excellence (NCCoE), worked with industry vendors and integrators to develop a set of standards that Healthcare Delivery Organizations (HDOs) should follow to strengthen the security of connected medical devices. The FDA also has a guide for managing cybersecurity in medical devices, called the Postmarket Management of Cybersecurity in Medical Devices, which urges manufacturers to monitor, identify, and remediate cybersecurity vulnerabilities, as well as address exploits, in their management of medical devices. Further research and guidance toward the improvement of medical device security are encouraged.
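The dual-layer idea can be made concrete with a toy sketch. The code below is a loose approximation of that architecture, not the Ben-Gurion system: a rule layer rejects instructions outside a device's safe envelope, and a simple statistical layer flags doses that deviate sharply from the device's recent history. The dose field, limits, and z-score threshold are all illustrative assumptions.

```python
from statistics import mean, stdev

# Layer 1: policy rules -- reject instructions outside the device's safe envelope.
SAFE_DOSE_RANGE = (0.0, 50.0)  # illustrative CT dose bound, in mGy

def rule_check(dose):
    lo, hi = SAFE_DOSE_RANGE
    return lo <= dose <= hi

# Layer 2: context -- flag doses far from what this protocol normally uses.
def context_check(dose, history, z_max=3.0):
    if len(history) < 2:
        return True  # not enough history to judge context
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return dose == mu
    return abs(dose - mu) / sigma <= z_max

def is_instruction_safe(dose, history):
    """An instruction must pass both the rule layer and the context layer."""
    return rule_check(dose) and context_check(dose, history)
```

The value of the second layer is that an attacker's instruction can be within the device's absolute limits yet still wildly abnormal for the protocol in use.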

Healthcare providers, device manufacturers, and the security community must continue to be informed about the vulnerability of medical devices and other risks to healthcare, such as ransomware, to develop or improve security strategies or mechanisms.

SoS Musings #43 - Crowdsourcing Security with Bug Bounty Programs

Companies are increasingly enlisting the help of ethical hackers through bug bounty programs. Bug bounty programs are crowdsourcing initiatives that encourage security researchers to find security issues and report them appropriately to the sponsoring organization. These programs foster cooperation between security researchers and organizations, allowing researchers to receive rewards for discovering zero-day exploits and flaws in a particular application. By recognizing and compensating individuals for submitting findings on security exploits and software bugs, organizations can crowdsource security testing to a community of hackers with the intent of enhancing the security of the Internet ecosystem. According to HackerOne, a company that hosts bug bounty programs for organizations including the US Department of Defense and Google, participating white hat hackers have discovered and reported more than 565,000 software vulnerabilities and have earned over $100 million through their reports as of September 2020. HackerOne's fourth annual report also reveals an increase in organizations turning to hackers to help find security holes in their cyber defenses and software during the COVID-19 pandemic, as the acceleration of digital initiatives supporting the transition to remote work has created new vulnerabilities. The report highlights an 86% year-on-year increase in total bounties, with more than $44.75 million paid out to hackers over the past year, most of it awarded by organizations in the US. Companies have adopted the bug bounty program model for a variety of systems.

Organizations in different sectors have turned to bug bounty programs to discover vulnerabilities in various applications. Microsoft launched a bug bounty program for its free, open-source Software Development Kit (SDK) called ElectionGuard aimed at improving the security, transparency, and accessibility of the voting process. Participating security researchers were asked to find flaws in ElectionGuard specification and documentation, verifier reference implementation, proof generation, proof checking code, and more, in return for rewards ranging from $500 to $15,000. The US Department of Defense's (DoD) tenth bug bounty challenge and fourth Air Force program called Hack the Air Force 4.0 invited ethical hackers to find and disclose vulnerabilities in the Air Force Virtual Data Center, which is a group of cloud-based servers and systems. A total of 60 hackers were able to uncover more than 460 vulnerabilities in the data center, which earned them over $290,000 in bounties. The submissions made by hackers in the previous editions of the Hack the Air Force bug bounty challenge resulted in the discovery of a total of more than 430 vulnerabilities and the payout of over $360,000 for valid findings. Since the Hack the Pentagon program launched in 2016, more than 12,000 vulnerabilities in DoD's public-facing web sites and applications and internal systems have been discovered. A team of security researchers participating in Apple's bug bounty program was rewarded $288,500 for discovering 55 critical vulnerabilities in the company's online services, some of which could allow attackers to compromise customer and employee applications and execute a worm capable of taking over a victim's iCloud account. Facebook paid security researchers a total of $2.2 million for reporting their discoveries of vulnerabilities to the social media platform's bug bounty program. 
One participating researcher received the program's highest reward, $65,000, for discovering a vulnerability that could result in data leaks from a copyright management endpoint. A security researcher reported through Tesla's bug bounty program a Denial-of-Service (DoS) vulnerability in the Tesla Model 3's web browser that could disable the vehicle's touchscreen once the car's onboard computer visits a specific website. This vulnerability posed a significant threat to safety, as it could disable autopilot notifications, climate controls, navigation, the speedometer, and other essential functions. The expected increase in Internet of Things (IoT) devices resulting from the rollout of 5G connections will not only increase cybersecurity risks but will also likely drive growth in the adoption of bug bounty programs. The effectiveness of these programs depends on various factors.

There are certain aspects organizations should consider before adopting a bug bounty program to increase its effectiveness. Organizations must realize that these programs do not eliminate the need for secure software development, ongoing vulnerability scanning, software testing, and penetration testing. Bug bounty programs are meant to be incremental to those practices in that they are designed to find the security bugs that internal and external testing processes miss. Organizations must already have an in-house security program in place to protect valuable assets, because sole dependence on bug bounty programs is not enough to fill gaps in an enterprise's security. Researchers from the Hong Kong University of Science and Technology published a paper titled "Bug Bounty Programs, Security Investment and Law Enforcement: A Security Game Perspective," emphasizing that bug bounty programs are not a one-size-fits-all solution and that organizations still need to assess the security environment, the value of their systems, the vulnerabilities those systems face, and their in-house protection methods to increase the effectiveness of such programs. Katie Moussouris, MIT Sloan School of Management visiting scholar and former Chief Policy Officer for HackerOne, gave a presentation at the 2018 RSA Conference on the elements needed to derive value from bug bounty programs. These elements include understanding the flaws commonly discovered by bug bounties, fixing those bugs internally, and avoiding "low-hanging fruit" security flaws, which are vulnerabilities that are easy to detect and fix, such as XSS bugs, SQL injection, and improper access control. Participating white-hat hackers should be encouraged to find new and more complex vulnerabilities. There are certain steps that organizations should take when designing and administering bug bounty programs.
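To make the "low-hanging fruit" concrete: the SQL injection bugs mentioned above are typically eliminated in-house by binding user input as query parameters instead of splicing it into SQL strings. A minimal Python/sqlite3 sketch, with a table and data invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    # The ? placeholder binds user input as data, never as SQL text, so an
    # input like "alice' OR '1'='1" matches no row instead of every row.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Fixing this entire class of bug internally lets bounty payouts go toward the novel, higher-impact flaws a program is actually meant to surface.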

The Department of Justice (DoJ) has provided guidance for organizations adopting bug bounty programs to reduce the risks associated with them. Organizations are urged to set the scope of a bug bounty program to specify which data and systems are open to exploration, as well as the methods that may be used to find vulnerabilities. If an organization wants to include systems containing sensitive information in the program, it must consider imposing restrictions on accessing, copying, transferring, storing, and using such information. It is essential to make clear the procedures and form in which participating hackers can submit discovered vulnerabilities. Organizations should determine who will serve as the point of contact responsible for receiving and handling disclosure reports. A plan should also be in place for handling accidental or deliberately malicious violations of established program policies or procedures. Vulnerability disclosure policies should clearly specify what conduct is authorized and unauthorized, as well as the consequences of violating program rules. Participating hackers should also be encouraged to seek clarification before performing actions that the policy may not address or that may result in a violation. The rules of the bug bounty program must also be easily accessible and available to participants. The procedures, policies, and channels for reporting vulnerabilities must be made clear to participants to prevent the improper disclosure and handling of discovered vulnerabilities.
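One lightweight, widely used way to publish the reporting channel and policy location this guidance calls for is a security.txt file served at /.well-known/security.txt, as standardized in RFC 9116. A sketch of such a file, with placeholder addresses and URLs:

```text
# Example /.well-known/security.txt (field names per RFC 9116; values are placeholders)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/vulnerability-disclosure-policy
Acknowledgments: https://example.com/hall-of-fame
Preferred-Languages: en
```

Publishing the contact and policy at a well-known location reduces the chance that a researcher discloses a finding through an improper channel simply because they could not find the proper one.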

Bug bounty programs are expected to grow in adoption due to the increase in remote work during the COVID-19 pandemic and advancements in technology, which will create new cybersecurity vulnerabilities. It is essential for organizations and the security community to further explore the benefits and challenges associated with such programs to increase their effectiveness in improving security.

SoS Musings #44 - Industrial Robots and Cybersecurity

Industrial robots continue to grow in use and sophistication in manufacturing, but are they secure against cyberattacks? An industrial robot is a complex cyber-physical system or mechanical device used for manufacturing that is automated, programmable, and capable of movement in three or more axes. Industrial robots are used in place of humans in manufacturing operations to perform highly dangerous or repetitive tasks with accuracy. These robots are typically applied in manufacturing areas that require high endurance, speed, and precision, such as welding, painting, ironing, assembly, palletizing, product inspection, and testing. The global industrial robotics market is expected to exceed $70 billion in value by 2023, exhibiting a Compound Annual Growth Rate (CAGR) of 9.4% during the forecast period. The major catalyst behind this growth is the significant increase in labor costs, which has pushed manufacturers to replace human labor with machines, especially during the COVID-19 pandemic. The growth in the use of industrial robots must be accompanied by increased security for the technology.

Several studies have shown the vulnerability of industrial robots to cyberattacks that could significantly impact safety and production activities. Research conducted by Trend Micro found more than 83,000 industrial robots from Belden, Eurotech, Moxa, Westermo, Sierra Wireless, Digi, and other vendors vulnerable to remote cyberattacks due to their exposure via FTP servers and the exposure of industrial routers. Over 5,100 of these vulnerable industrial robots had no authentication in place. Trend Micro's report, titled "Rogue Robots: Testing the Limits of an Industrial Robot's Security," also details five types of robot attacks that could harm human operators and damage equipment, significantly reducing safety for factory workers and the quality of products. The Trend Micro researchers showed how attackers can abuse software security flaws to carry out such attacks. Two of the demonstrated attacks involve manipulating robot status information to reduce a human operator's awareness of a robot's true status and increase the likelihood of the operator losing control and getting injured. The other demonstrated attacks allow malicious actors to alter control-loop parameters, calibration parameters, and production logic. These attacks can cause robots to move inaccurately or unexpectedly, or manipulate the programs used by the robots into introducing a flaw into the workpiece, posing a threat to operators' safety as well as to the integrity and accuracy of manufacturing operations. The researchers pointed out several weaknesses that could lead to the execution of such attacks against industrial robots, including the use of outdated software components, default credentials, weak authentication, poor transport encryption, insecure web interfaces, unencrypted storage, inadequate software protection, and the ease with which industrial routers can be found and identified using easily accessible technical materials. 
IOActive researchers pointed out the potential execution of attacks on industrial robots by insiders such as the robot operators themselves. Malicious robot operators have the potential to be major insider threats in that they can use their direct access to a robot's hardware or manual interface to alter its behavior, possibly causing operation failures and injuries to others. A malicious robot operator can tamper with exposed connectivity ports using special USB devices and Ethernet connections. A joint study by researchers at the Polytechnic University of Milan and Trend Micro brought further attention to legacy programming languages such as RAPID, KRL, AS, PDL2, and PacScript that were designed decades ago without security in mind and how they leave industrial robots vulnerable to being hijacked by attackers to disrupt production lines and steal intellectual property. The researchers analyzed 100 open-source automation programs developed with these languages and found that many of them contained vulnerabilities that could be exploited by hackers to control and interrupt industrial robot activities. Another study by researchers at IOActive demonstrated how robots could be hijacked by ransomware through the exploitation of vulnerabilities that can allow attackers to execute commands on the robot remotely, potentially crippling factories and businesses. Such attacks and vulnerabilities must be addressed by increased security development and research efforts.

Several different areas need more research and collaboration to improve cybersecurity for industrial robots. The Robotic Industries Association (RIA) calls on robot manufacturers, integrators, and operators to be held more accountable for the security of these robots. As manufacturers design and build robots, robot controllers, and supporting devices such as machine vision cameras, laser scanners, and robot end-of-arm tooling, they must ensure that security measures are implemented throughout the design process and that firmware is written securely. Implementers and systems integrators must ensure that manufacturers' robotic products are applied and configured in a way that does not leave them susceptible to tampering and remote access by unauthorized entities. Manufacturers should force robot operators to change default usernames and passwords at setup. Robot operators need to make sure that the physical environment in which the robot operates remains secure and that cyber risks can be mitigated as quickly as possible. The RIA encourages the adoption of a defense-in-depth approach to robot security, which refers to building security into each layer of a robot's control system architecture. Security defenses must be implemented for a robot's embedded operating system, application code, communications code, cloud servers, and more. The development of guidance and tools aimed at bolstering robot security must also continue. For example, Trend Micro Research, in collaboration with the Robot Operating System (ROS) Industrial Consortium, created guidelines to help Industry 4.0 developers securely write the task programs that rely on legacy programming languages and are used to control industrial robots' automatic movements, in order to reduce the risk of attacks on such robots. 
These guidelines cover secure configuration and deployment procedures, authentication for communication between systems, the implementation of access control policies and proper error handling, as well as the performance of input validation and output sanitization. The Polytechnic University of Milan, together with Trend Micro researchers, also developed a tool for the detection of malicious code in task programs used by industrial robots, helping to prevent damage at runtime.
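As a rough illustration of the input-validation guideline, the sketch below range-checks a motion target before it would be handed to a robot controller. It is written in Python rather than a robot language such as RAPID or KRL, and the joint limits are invented for the example.

```python
# Illustrative joint limits in degrees for a six-axis arm -- values invented.
JOINT_LIMITS = [(-170, 170), (-120, 120), (-170, 170),
                (-190, 190), (-120, 120), (-360, 360)]

def validate_move(target_joints):
    """Accept a motion target only if every joint angle is inside its limit."""
    if len(target_joints) != len(JOINT_LIMITS):
        raise ValueError("expected one angle per joint")
    for axis, (angle, (lo, hi)) in enumerate(zip(target_joints, JOINT_LIMITS), 1):
        if not lo <= angle <= hi:
            raise ValueError(f"axis {axis}: target {angle} outside [{lo}, {hi}]")
    return target_joints
```

Rejecting out-of-envelope targets at the boundary of the task program is exactly the kind of defense that blunts an attacker who has tampered with externally supplied motion parameters.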

As industrial robots continue to grow in use and complexity, it is essential for the security community, as well as robot manufacturers, integrators, and operators, to take further steps towards developing and implementing better security mechanisms or practices for robotics.

SoS Musings #45 - Privacy in Data Sharing and Analytics

Data privacy continues to ignite concern in all realms, including consumer protection, scientific discovery, and analytics. The consulting firm McKinsey & Co. surveyed 1,000 North American consumers on their views about data collection, privacy, hacks, breaches, regulations, and communications, as well as their trust in the companies they support. The survey revealed low levels of consumer trust in data management and privacy, with every sector earning a trust rating below 50 percent for data protection. Data privacy is essential because the exposure of highly sensitive personal information could impact an individual's livelihood, reputation, relationships, and more. However, data is also a crucial source of information for researchers. In the case of COVID-19, for example, data must be shared among government authorities, companies, and researchers to support the advancement of public health, contact tracing, and other studies related to the pandemic. Efforts are being made to enforce practices that support data privacy, such as the establishment of data privacy laws like the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), which are among the most prominent of the privacy laws enacted in more than a hundred countries. More technological advancements and methods are needed to help organizations and researchers share and effectively analyze datasets while maintaining the privacy of the data.

Studies have demonstrated that subjects in anonymized datasets can still be identified using certain methods and technology, posing a significant threat to individuals' privacy. Aleksandra Slavkovic, a professor of statistics and associate dean for graduate education at the Penn State Eberly College of Science, has pointed out the growing risks to data privacy stemming from continued technological advancements in data collection and record linkage, as well as the increased availability of various data sources that can be linked with a retained dataset. Methods for linking two datasets, such as those containing voter records and health insurance data, have improved significantly. In one study, researchers at University College London and the Alan Turing Institute were able to identify any user in a group of 10,000 Twitter users with 96.7 percent accuracy using tweets, publicly available metadata from Twitter, and three different machine learning algorithms. In another study, published in Nature Communications, researchers from Imperial College London and Belgium's Universite Catholique de Louvain developed a machine learning model claimed to enable the accurate re-identification of 99.98 percent of Americans in any anonymized dataset using 15 basic demographic attributes such as date of birth, gender, ethnicity, and marital status. Such discoveries call for the continued development of advanced methods that combat the de-anonymization of datasets and the re-identification of individuals represented in data.
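The record-linkage idea behind these re-identification results can be illustrated in a few lines: join an "anonymized" dataset to a public one on shared quasi-identifiers. Every record below is fabricated, and the attribute set is deliberately tiny.

```python
# Toy linkage attack: join an "anonymized" dataset to a public one on shared
# quasi-identifiers (birth date, ZIP code, sex). All records are fabricated.
health = [  # names stripped, but quasi-identifiers retained
    {"dob": "1984-03-02", "zip": "12065", "sex": "F", "diagnosis": "asthma"},
    {"dob": "1990-07-19", "zip": "12208", "sex": "M", "diagnosis": "diabetes"},
]
voters = [  # public roll that still carries names
    {"name": "Jane Roe", "dob": "1984-03-02", "zip": "12065", "sex": "F"},
    {"name": "John Doe", "dob": "1990-07-19", "zip": "12208", "sex": "M"},
]

def reidentify(anon_rows, public_rows, keys=("dob", "zip", "sex")):
    """Map names back onto 'anonymized' records via their quasi-identifiers."""
    index = {tuple(row[k] for k in keys): row["name"] for row in public_rows}
    return {
        index[key]: row["diagnosis"]
        for row in anon_rows
        if (key := tuple(row[k] for k in keys)) in index
    }
```

With only three quasi-identifiers the join is already unique here; the Nature Communications result shows how quickly 15 such attributes pin down almost anyone.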

There are various studies and developments aimed at bolstering data privacy. Slavkovic proposed the use of synthetic networks to satisfy the need to share confidential data for statistical analysis while maintaining the statistical accuracy and integrity of the data being shared. A new tool dubbed DoppelGANger, developed by researchers at Carnegie Mellon University's CyLab and IBM, executes the idea of using synthetic network data. DoppelGANger synthesizes new data that mimics the original dataset while ensuring that sensitive information is omitted. The tool uses powerful machine learning models called Generative Adversarial Networks (GANs) to synthesize datasets that preserve the statistics of the original training data, simplifying data sharing and protecting the privacy of sensitive information shared between companies, organizations, and governments. Google rolled out an open-source version of its differential privacy library to help organizations draw useful insights from datasets containing private and sensitive information while preventing the re-identification or distinguishing of individuals in the dataset. Differential privacy is an approach that combines random noise with data so that analysis results cannot be used to identify specific individuals. Google's Data Loss Prevention (DLP) tool applies machine learning capabilities, including image recognition, machine vision, natural language processing, and context analysis, to find sensitive data in the cloud and automatically redact it. Google introduced an Application Programming Interface (API) for the tool in 2019, allowing administrators to use it outside of Google's ecosystem. The DLP API lets administrators customize the tool for the specific types of data they want to identify, such as patient information or credit card numbers. 
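The noise idea behind differential privacy can be sketched with the textbook Laplace mechanism applied to a counting query. This is an illustration of the concept, not Google's library, and the epsilon value is an arbitrary choice for the example.

```python
import math
import random

def dp_count(records, predicate, epsilon=0.5):
    """Answer a counting query with Laplace noise. A count has sensitivity 1,
    so noise drawn from Laplace(scale = 1/epsilon) yields epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace noise by inverting the CDF of a uniform variate.
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Each individual answer is perturbed, so no single query pins down whether a particular person is in the data, while repeated or aggregate analyses still center on the true count.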
According to Scott Ellis, a Product Manager on Google's Security & Privacy team with a focus on data privacy technology for the Google Cloud Platform, the main goals behind the development of the DLP tool are to classify, mask, and de-identify sensitive data so it can still be used for research or analysis without putting the privacy of individuals at risk. Cryptographers and data scientists at Google released Private Join and Compute, a secure Multi-Party Computation (MPC) tool that helps organizations work together on valuable research without revealing information about individuals in the datasets. This tool allows one party to gain aggregated insights about another party's data without either of them being able to learn about individuals represented by the datasets being used. First, both parties encrypt their data using private keys so that no one else can access or decipher it. Then the parties send their encrypted data to each other. The Private Join and Compute tool employs a combination of cryptographic techniques to protect individual data known as private set intersection and homomorphic encryption. Private set intersection allows two parties to compute the intersection of their data (common data point, e.g., location or ID) while preventing the exposure of raw data to the other party. Homomorphic encryption enables computations to be performed on encrypted data without having to decrypt the data, thus only allowing the encrypted results of the computations to be revealed by the owner of the secret decryption key. IBM is also making efforts to change the game of data privacy within the commercial sector through the launch of its Security Homomorphic Encryption Services that lets enterprises test the encryption scheme. 
According to IBM, industry computing power has increased, and the algorithms used for Fully Homomorphic Encryption (FHE) have become more refined, allowing calculations to be performed fast enough for various types of real-world use cases and early experiments with businesses. IBM is also working on making FHE resistant to future quantum attacks. These developments and efforts call for further exploration.

We must continue to develop and improve methods that allow us to share sensitive data for research purposes while ensuring the accuracy and integrity of data, as well as the privacy of individuals in the data.

SoS Musings #46 - The Battle Against Fileless Malware Attacks Continues

SoS Musings #46 -

The Battle Against Fileless Malware Attacks Continues

As organizations adopt more advanced attack detection and prevention methods, cybercriminals continue to refine their techniques. Today's adversaries are increasingly adopting fileless attack techniques to circumvent most security protections. Fileless malware attacks, also known as zero-footprint or non-malware attacks, differ from many other malware threats in that they do not require attackers to install software on a victim's machine. Instead, fileless malware attacks are executed by taking control of tools, software, and applications already installed on the victim's machine, making them stealthy and capable of evading detection by most security solutions. Because such attacks do not rely on files and leave no footprint, they are significantly more challenging to identify and remove. According to a Ponemon Institute report, fileless attacks are ten times more likely to succeed than file-based attacks. The cybersecurity and defense company Trend Micro revealed a 265 percent increase in fileless malware attacks in the first half of 2019 compared to the same period in 2018. A recent analysis of telemetry data from Cisco found that the most common critical-severity cybersecurity threat faced by endpoints in the first half of 2020 was fileless malware. One of the more dangerous examples of fileless malware is Emotet, a highly advanced Trojan that can extract banking credentials, passwords to administrative accounts, and more, enabling cybercriminals to steal money or gain access to other key resources. ESET's latest "Cybersecurity Trends" report predicts that fileless attack methods will be used in highly sophisticated and larger-scale attacks in 2021. Fileless malware attacks call for the development, adoption, and exploration of advanced detection and prevention strategies.

A fileless malware attack is categorized as a Low-Observable Characteristics (LOC) attack: a type of stealth attack that most security solutions struggle to detect. Unlike traditional malware, fileless malware is not written to disk. Instead, it operates only in a computer's Random Access Memory (RAM), leaving no traces of its existence since nothing is ever written directly to the hard drive. The absence of traditional footprints complicates the forensic analysis that would help security teams investigate a breach and prevent future attacks. Fileless malware attacks are a subset of Living-off-the-Land (LotL) attacks, in which threat actors use trusted pre-installed system tools to hide their malicious activity. There are more than 100 Windows system tools that cybercriminals can exploit to carry out LotL attacks.

PowerShell is a default Microsoft Windows tool commonly leveraged in fileless malware attacks. Windows PowerShell is a command-line shell and scripting language with unrestricted access to the Windows operating system and deep access to a machine's inner functions. As a built-in component of Windows, PowerShell enjoys a high level of trust among administrators and is used to automate tasks across multiple machines and to manage configuration. Other tools and components commonly abused in fileless attacks include Windows Management Instrumentation (WMI), PsExec, BITSAdmin, MSIExec, RegSvr32, CertUtil, Task Scheduler, and Microsoft Office macros. As with most cyberattacks, fileless malware attacks are often initiated through social engineering. A common fileless malware attack scenario starts with a user being tricked into clicking on a malicious link in a spam or phishing email. The user is then taken to a malicious website requiring Flash to display its content. The malicious website loads the vulnerable Flash plugin on the user's computer, and Flash opens PowerShell to execute instructions through the command line while running in memory. PowerShell then downloads and executes a script from a Command-and-Control (C&C) server that finds and sends the user's data to the attacker.
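On the defensive side, one simple way monitoring tools begin to surface this kind of abuse is by scanning process command lines for indicators commonly associated with malicious PowerShell invocations. The sketch below is a hypothetical heuristic, not a production detection rule; the pattern list and function names are assumptions for illustration, and real attacks can evade simple substring checks.

```python
import re

# Substrings often seen in malicious PowerShell invocations (illustrative
# only; real detections combine many signals and still miss novel tradecraft)
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\b",        # base64-encoded payloads
    r"-nop\b|-noprofile\b",         # skip the profile to dodge logging hooks
    r"-w(indowstyle)?\s+hidden",    # hide the console window from the user
    r"downloadstring|downloadfile", # in-memory download cradles
    r"\biex\b|invoke-expression",   # execute a string as code
]

def flag_powershell(cmdline):
    """Return the list of indicator patterns matched by a command line."""
    lowered = cmdline.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

benign = "powershell.exe Get-ChildItem C:\\Logs"
malicious = ("powershell.exe -NoProfile -WindowStyle Hidden "
             "-EncodedCommand SQBFAFgAIAAoAE4AZQB3AC0A...")

print(flag_powershell(benign))     # []
print(flag_powershell(malicious))  # several indicators matched
```

Heuristics like this are noisy on their own, which is why they are usually combined with script block logging and behavioral analytics rather than used as a standalone block rule.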

Fileless malware attacks often rely on human weaknesses to get started. As a fileless malware attack typically begins with a phishing email, it is important to increase awareness about how to recognize and avoid phishing scams. According to data analyzed by Atlas VPN, Google registered 2.02 million phishing websites in 2020, a 19.91 percent increase over 2019, when the volume of malicious sites reached 1.69 million. Based on data collected during the global "2020 Gone Phishing Tournament" organized by Terranova Security and Microsoft, almost one-fifth of employees click on phishing email links despite having gone through security training. The study showed that the share of employees who clicked on a phishing link despite security training grew by 77 percent, from 11.2 percent in 2019 to 19.8 percent in 2020. A study published by the USENIX Association from a team of researchers at several German universities suggests that organizations should require employees to go through phishing awareness training at least once every six months to prevent the effects of such training from fading. The study also found that video-based and interactive training formats were the most effective at reminding employees about phishing and social engineering attacks. As the number of phishing attacks continues to grow, increasing the likelihood of a fileless malware attack, it is important to adopt solutions that prevent phishing from succeeding.

Cybersecurity experts have recommended that organizations adopt a multilayered, defense-in-depth approach to combating fileless attacks. Implementing solutions that analyze user and system behavior and detect anomalies is essential. In addition to increasing security awareness among users, organizations can adopt User and Entity Behavior Analytics (UEBA) to help stop a fileless malware attack. Traditional file-based security monitoring tools detect malware based on disk scans, signatures, or rules, making them ineffective against fileless malware. UEBA applies behavioral modeling and machine learning to identify anomalous and suspicious behaviors or entities, presenting an opportunity to detect fileless malware threats. UEBA solutions can help identify and track typical and unusual behaviors across users, hosts, software, and applications, and the detection of anomalous activities through a UEBA tool could reveal a fileless malware attack in progress. The cybersecurity software company Trend Micro recommends the use of custom sandboxes, along with Intrusion Detection and Prevention Systems (IDPS), to help detect C&C communication, data exfiltration, and other suspicious traffic. Other methods recommended for combating fileless attacks include applying policies that restrict the use of scripts and scripting languages, allowing scripts to run only from read-only network locations, limiting the use of interactive PowerShell within the organization, scanning macro scripts, and applying the latest security updates to the operating system. Fighting fileless malware attacks also requires additional research and novel solutions. For example, a team of researchers analyzed ten recently emerged fileless cyberattacks to determine the characteristics and specific techniques used in the attacks. The researchers then divided the number of techniques of each type used in a specific attack by the total number of available techniques, and analyzed the resulting ratios across three different dimensions. This process led to the classification of fileless cyberattacks into three categories: evasion, attack, and collection. The researchers expect the study to provide a foundational framework for identifying and classifying the characteristics of fileless cyberattacks likely to appear in the future, thus contributing to potential response strategies. The battle against fileless malware attacks requires more training, multiple layers of security controls, and additional research.
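The behavioral-baselining idea behind UEBA can be sketched with a deliberately simple statistical model: learn each user's normal activity level, then flag large deviations from it. The z-score example below (the metric, numbers, and threshold are assumptions for illustration) stands in for the far richer machine learning models commercial UEBA products use.

```python
import statistics

def build_baseline(history):
    """Baseline a per-user activity metric, e.g. script launches per day."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(observation, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations above normal."""
    mean, stdev = baseline
    if stdev == 0:
        return observation != mean
    return (observation - mean) / stdev > threshold

# 30 days of a hypothetical user's typical daily script executions
history = [4, 6, 5, 7, 5, 4, 6, 5, 6, 4, 5, 7, 6, 5, 4, 6, 5, 5, 7, 6,
           4, 5, 6, 5, 7, 4, 6, 5, 5, 6]
baseline = build_baseline(history)

print(is_anomalous(6, baseline))   # False: within the normal range
print(is_anomalous(60, baseline))  # True: a possible LotL abuse burst
```

Because fileless attacks leave no file to scan, this shift from "what is on disk" to "what is behaving abnormally" is precisely what makes behavior analytics effective against them.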

SoS Musings #47 - The Problem with False Positives in Security Operations

SoS Musings #47 -

The Problem with False Positives in Security Operations

False positives are an issue commonly faced in threat intelligence collection, security operations, and incident response. The National Institute of Standards and Technology (NIST) defines false positives as alerts that incorrectly indicate that a vulnerability is present or that malicious activity is occurring. Specifically, in cybersecurity, false positives denote that a file, setting, or event has been flagged as malicious when it is truly benign. False positive alerts may expose organizations to security breaches, as information security teams often waste time, resources, and effort handling such alerts when they could be addressing actual threats to the systems or networks they are responsible for protecting.

Studies have highlighted the overwhelming generation of false positives. In a survey conducted by the cybersecurity firm FireEye that polled C-level security executives at large enterprises around the world, 37% of the respondents revealed that they receive more than 10,000 alerts each month; of those alerts, over 50% were false positives. Findings from a study by the Ponemon Institute indicate that the average organization may receive significantly more alerts in a week. The Ponemon study reported that the average number of malware alerts received by an organization during a typical week is nearly 17,000, with only 19% of the alerts considered reliable. Research from the Neustar International Security Council (NISC) found that more than a quarter of security alerts handled by organizations are false positives. The NISC survey, to which senior security professionals across the US and five European markets responded, also found that over 40% of organizations experience false positives in more than 20% of cases, while 15% revealed that over half of their security alerts turn out to be false positives. A FireEye-sponsored survey conducted by the global market intelligence firm International Data Corporation (IDC) polled 350 internal and Managed Security Service Provider (MSSP) security analysts and managers working in organizations across multiple sectors, including financial, healthcare, and government. Internal security analysts and IT security managers revealed that they receive thousands of alerts each day, 45% of them false positives. MSSP analysts pointed out that 53% of the alerts they receive are false positives. Sixty-eight percent of those who participated in another survey, done by the cybersecurity company Critical Start, reported that false positives make up 25-75% of the security alerts they investigate on a daily basis.
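The operational cost implied by these numbers is easy to work out. The arithmetic sketch below uses assumed round figures in the same range as the surveys above (the ten-minute triage time, in particular, is an assumption for illustration):

```python
# Assumed figures, roughly in line with the surveys cited above
alerts_per_month = 10_000
false_positive_rate = 0.50  # half of all alerts are false positives
minutes_per_triage = 10     # assumed average time to investigate one alert

false_positives = alerts_per_month * false_positive_rate
wasted_hours = false_positives * minutes_per_triage / 60

print(f"False positives per month: {false_positives:,.0f}")  # 5,000
print(f"Analyst hours lost to them: {wasted_hours:,.0f}")    # 833
# At roughly 160 working hours per analyst per month, that is more than
# five full-time analysts doing nothing but dismissing noise.
```

Even modest reductions in the false positive rate therefore translate directly into recovered analyst capacity.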

False positives have the potential to disrupt cybersecurity efforts and result in a costly breach. The IDC InfoBrief, "The Voice of the Analysts: Improving Security Operations Center Processes Through Adapted Technologies," calls attention to the fact that many security analysts and managers are experiencing alert fatigue, leading to lower productivity, ignored alerts, increased stress, and the Fear of Missing Incidents (FOMI). Security analysts are becoming increasingly overwhelmed by the flood of false positive alerts they receive from the different kinds of solutions implemented in Security Operations Centers (SOCs). The influx of false positives is decreasing the efficiency of in-house analysts and slowing down workflow processes. The IDC survey found that 35% of internal analysts manage alert overload by ignoring alerts. Forty-four percent of analysts at MSSPs also said that they ignore alerts when their queue gets too crowded, leaving multiple clients vulnerable to a potential breach. FOMI is affecting most security analysts and managers: the more challenges analysts face when manually managing alerts, the more they worry about missing an incident. According to the same IDC survey, three in four analysts worry about missing incidents, and one in four worries significantly. FOMI seems to be impacting security managers even more than analysts, with over 6% of managers reporting that they have lost sleep over the fear of missing an incident.

Alert overload and the overwhelming number of false positives lead to high analyst turnover. The cybersecurity firm Critical Start surveyed SOC professionals working at various companies, MSSPs, and Managed Detection & Response (MDR) providers to gain insight into the state of incident response within SOCs regarding alert volume, alert management, SOC turnover, and more. Findings from the survey further indicate that the alert overload problem, together with a false positive rate of 50% or higher, can wear on security analysts in the long run, eventually leading to burnout and high turnover rates in SOC teams. More than eight out of ten respondents reported that their SOC had experienced analyst turnover rates ranging from 10% to more than 50% due to alert overload and the struggle to handle false positives. Because of the global cybersecurity skills shortage, it is increasingly difficult to find skilled, well-experienced security professionals to replace departing analysts. In addition to trying to hire more analysts to manage the onslaught of security alerts and the high false positive rate, SOCs turn off high-volume alerting features considered too noisy and ignore low- to medium-priority alerts. All of these factors leave enterprises more vulnerable to risk and security threats.

To address the challenges associated with alert overload and high false positive rates, security analysts and managers are calling for more automated SOC solutions. Most enterprise security teams are currently not using tools that automate SOC activities, as indicated by the top tools shared by analysts who participated in the FireEye-sponsored IDC survey. Less than half of the respondents reported using tools that apply Artificial Intelligence (AI) and Machine Learning (ML) to investigate alerts. Less than 50% of the respondents said that they use security tools and functions such as Security Information and Event Management (SIEM) software, Security Orchestration Automation and Response (SOAR) tools, and threat hunting. The survey also suggests that only two in five analysts use AI and ML together with other tools. Threat detection was ranked highest in the list of activities that are best to automate, followed by threat intelligence and incident triage. ML and AI are essential to automating threat detection. For example, Recurrent Neural Networks (RNNs), a class of deep neural networks, can significantly improve the accuracy of threat detection, reducing or eliminating false positives, and can be continuously retrained and tuned to improve over time. Ivan Novikov, the founder and CEO of the application security company Wallarm, gave a research-based presentation at BSides San Francisco in which he discussed how a neural network built on ML could be trained on false positive detections to continuously tune the system and prevent future false positive events. Novikov explained how automatic rule tuning by the ML network could replace traditional false positive responses such as CAPTCHA challenges or email alerts to security teams. 
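The automatic-tuning idea can be caricatured with a much simpler feedback loop: raise a detector's alert threshold whenever analysts mark its alerts as false positives. The sketch below is a deliberately simplified stand-in with assumed names and an assumed update rule, not the neural-network approach Novikov described:

```python
class TunableDetector:
    """Score-based detector whose alert threshold adapts to analyst feedback."""

    def __init__(self, threshold=0.5, step=0.05, ceiling=0.95):
        self.threshold = threshold  # minimum score that triggers an alert
        self.step = step            # how far to nudge on each false positive
        self.ceiling = ceiling      # never suppress the highest-scoring events

    def alert(self, score):
        return score >= self.threshold

    def feedback(self, score, was_false_positive):
        # Nudge the threshold just above scores analysts dismissed as noise,
        # so similar events stop paging the SOC.
        if was_false_positive and score >= self.threshold:
            self.threshold = min(self.ceiling,
                                 max(self.threshold, score) + self.step)

detector = TunableDetector()
assert detector.alert(0.6)      # initially fires on a mid-range score
detector.feedback(0.6, was_false_positive=True)
assert not detector.alert(0.6)  # the same score no longer alerts
assert detector.alert(0.9)      # genuinely high scores still fire
```

A real system would tune per-rule and per-feature rather than a single global threshold, and would also guard against attackers deliberately poisoning the feedback loop, but the closed loop between analyst verdicts and detection logic is the core of the approach.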
Security teams need advanced automated solutions to help reduce alert fatigue and increase focus on more high-skilled activities such as threat hunting.