Biblio

Found 12055 results

W
Wang, C., Jiang, Y., Zhao, X., Song, X., Gu, M., Sun, J..  2018.  Weak-Assert: A Weakness-Oriented Assertion Recommendation Toolkit for Program Analysis. 2018 IEEE/ACM 40th International Conference on Software Engineering: Companion (ICSE-Companion). :69–72.

Assertions are helpful in program analysis, such as software testing and verification. The most challenging parts of automatically recommending assertions are designing the assertion patterns and inserting assertions at the proper locations. In this paper, we develop Weak-Assert, a weakness-oriented assertion recommendation toolkit for program analysis of C code. A weakness-oriented assertion is an assertion that can help find potential program weaknesses. Weak-Assert uses well-designed patterns to match the abstract syntax trees of source code automatically. It collects relevant information from the trees and inserts assertions at the proper locations in the programs. These assertions can then be checked using program analysis techniques. The experiments are set up on the Juliet test suite and several real-world projects on GitHub. Experimental results show that Weak-Assert helps to find 125 program weaknesses in 26 real-world projects. These weaknesses were manually confirmed to be triggered by test cases.
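The pattern-matching-and-insertion step lends itself to a small illustration. The toolkit itself targets C, so the following is only a toy Python analogue (using Python's ast module, not the authors' patterns): match a weakness-prone construct, here a division, and insert a guarding assertion just before it.

```python
# Toy analogue of weakness-oriented assertion insertion: walk the AST,
# match assignments whose value is a division, and insert an assert that
# the divisor is non-zero immediately before the matched statement.
import ast

class DivisionGuard(ast.NodeTransformer):
    def visit_Assign(self, node):
        self.generic_visit(node)
        if isinstance(node.value, ast.BinOp) and isinstance(node.value.op, ast.Div):
            guard = ast.parse(f"assert {ast.unparse(node.value.right)} != 0").body[0]
            return [guard, node]          # assertion placed before the assignment
        return node

src = "ratio = total / count"
tree = DivisionGuard().visit(ast.parse(src))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))
# assert count != 0
# ratio = total / count
```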

Hastings, Marcella, Fried, Joshua, Heninger, Nadia.  2016.  Weak Keys Remain Widespread in Network Devices. Proceedings of the 2016 Internet Measurement Conference. :49–63.

In 2012, two academic groups reported having computed the RSA private keys for 0.5% of HTTPS hosts on the internet, and traced the underlying issue to widespread random number generation failures on networked devices. The vulnerability was reported to dozens of vendors, several of whom responded with security advisories, and the Linux kernel was patched to fix a boot-time entropy hole that contributed to the failures. In this paper, we measure the actions taken by vendors and end users over time in response to the original disclosure. We analyzed public internet-wide TLS scans performed between July 2010 and May 2016 and extracted 81 million distinct RSA keys. We then computed the pairwise common divisors for the entire set in order to factor over 313,000 keys vulnerable to the flaw, and fingerprinted implementations to study patching behavior over time across vendors. We find that many vendors appear to have never produced a patch, and observed little to no patching behavior by end users of affected devices. The number of vulnerable hosts increased in the years after notification and public disclosure, and several newly vulnerable implementations have appeared since 2012. Vendor notification, positive vendor responses, and even vendor-produced public security advisories appear to have little correlation with end-user security.
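The key computation the paper scales up is easy to illustrate. Below is a minimal sketch of the pairwise-common-divisor idea with toy moduli; the quadratic loop is for illustration only, since at the scale of 81 million keys a much faster batch method is needed.

```python
# If two RSA moduli were generated with a shared prime, gcd(n_i, n_j)
# recovers that prime and both keys are factored.
from math import gcd

def find_shared_factors(moduli):
    """Return (i, j, p) triples where moduli i and j share a prime factor p."""
    hits = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            p = gcd(moduli[i], moduli[j])
            if 1 < p < moduli[i]:        # non-trivial common divisor
                hits.append((i, j, p))
    return hits

# Toy "moduli": n0 and n2 share the prime 101, so both are breakable.
moduli = [101 * 103, 107 * 109, 101 * 113]
for i, j, p in find_shared_factors(moduli):
    print(f"moduli {i} and {j} share factor {p}")
```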

Nathezhtha, T., Sangeetha, D., Vaidehi, V..  2019.  WC-PAD: Web Crawling based Phishing Attack Detection. 2019 International Carnahan Conference on Security Technology (ICCST). :1–6.
Phishing is a criminal offense which involves the theft of users' sensitive data. Phishing websites target individuals, organizations, cloud storage hosting sites, and government websites. Currently, hardware-based approaches to anti-phishing are widely used, but due to cost and operational factors, software-based approaches are preferred. Existing phishing detection approaches fail to provide a solution to problems like zero-day phishing website attacks. To overcome these issues and precisely detect phishing occurrences, a three-phase attack detection system named Web Crawler based Phishing Attack Detector (WC-PAD) has been proposed. It takes web traffic, web content, and the Uniform Resource Locator (URL) as input features; based on these features, classification of phishing and non-phishing websites is done. The experimental analysis of the proposed WC-PAD is done with datasets collected from real phishing cases. From the experimental results, it is found that the proposed WC-PAD gives 98.9% accuracy in both phishing and zero-day phishing attack detection.
Goel, N., Sharma, A., Goswami, S..  2017.  A way to secure a QR code: SQR. 2017 International Conference on Computing, Communication and Automation (ICCCA). :494–497.

Nowadays, the need for fast access to data is increasing along with the exponential growth of the security field. QR codes have served as a useful tool for fast and convenient sharing of data. But with increased usage, QR codes have become vulnerable to attacks such as phishing, pharming, manipulation, and exploitation. These security flaws could pose a danger to an average user. In this paper we propose a way, called Secured QR (SQR), to fix these issues. In this approach we secure a QR code with the help of a key on the generator side, and the same key is used to recover the original information on the scanner side. We use the AES algorithm for this purpose. The SQR approach is applicable when we want to share or use sensitive information within an organization, such as sharing profile details, exchanging payment information, business cards, generating electronic tickets, etc.
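As a rough sketch of the workflow the abstract describes, the snippet below encrypts a payload with a shared key before it would be QR-encoded, and decrypts it on the scanner side with the same key. The paper specifies AES; here the AES-based Fernet scheme from the cryptography package stands in, and the payload and key handling are purely illustrative.

```python
# Generator side: encrypt the sensitive payload with a shared key; the
# resulting token is what gets rendered into the QR code. Scanner side:
# decode the QR code and decrypt with the same key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared between generator and scanner (distribution out of scope)
f = Fernet(key)

payload = b"PAY;amount=100;acct=12345"
token = f.encrypt(payload)           # ciphertext placed in the QR code
# e.g. qrcode.make(token) would render the code on the generator side

recovered = f.decrypt(token)         # scanner side, same key
assert recovered == payload
```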

Feng, W., Yan, W., Wu, S., Liu, N..  2017.  Wavelet transform and unsupervised machine learning to detect insider threat on cloud file-sharing. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :155–157.

As increasingly more enterprises are deploying cloud file-sharing services, this adds a new channel for potential insider threats to company data and IPs. In this paper, we introduce a two-stage machine learning system to detect anomalies. In the first stage, we project the access logs of cloud file-sharing services onto relationship graphs and use three complementary graph-based unsupervised learning methods: OddBall, PageRank and Local Outlier Factor (LOF) to generate outlier indicators. In the second stage, we ensemble the outlier indicators and introduce the discrete wavelet transform (DWT) method, and propose a procedure to use wavelet coefficients with the Haar wavelet function to identify outliers for insider threat. The proposed system has been deployed in a real business environment, and demonstrated effectiveness by selected case studies.
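A minimal sketch of the second-stage idea, under our own simplifying assumption that the first-stage ensemble produces one outlier score per time window: take a Haar DWT of the score series and flag intervals whose detail coefficients are unusually large.

```python
# One-level Haar DWT of a toy per-window outlier-indicator series; abrupt
# changes show up as large detail coefficients.
import numpy as np
import pywt

indicator = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 3.5, 0.2, 0.1])   # toy ensemble scores
cA, cD = pywt.dwt(indicator, 'haar')        # approximation / detail coefficients

mag = np.abs(cD)
suspect = np.where(mag > 3 * np.median(mag))[0]   # simple robust threshold
print("suspicious intervals (each covers two windows):", suspect)
```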

Nasr, Milad, Zolfaghari, Hadi, Houmansadr, Amir.  2017.  The Waterfall of Liberty: Decoy Routing Circumvention That Resists Routing Attacks. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2037–2052.

Decoy routing is an emerging approach for censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. Recent studies on decoy routing consider all decoy routing systems to be susceptible to a fundamental attack, regardless of their specific designs, in which the censors re-route traffic around decoy ASes, thereby preventing censored users from using such systems. In this paper, we propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; therefore we call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. Designing a downstream-only decoy routing system is a challenging engineering problem since decoy routers do not intercept the upstream traffic of censored users. We design the first downstream-only decoy routing system, called Waterfall, by devising unique covert communication mechanisms. We also use various techniques to make our Waterfall implementation resistant to traffic analysis attacks. We believe that downstream-only decoy routing is a significant step towards making decoy routing systems practical. This is because a downstream-only decoy routing system can be deployed using a significantly smaller number of volunteer ASes, given a target resistance to rerouting attacks. For instance, we show that a Waterfall implementation with only a single decoy AS is as resistant to routing attacks (against China) as a traditional decoy system (e.g., Telex) with 53 decoy ASes.

K. Cavalleri, B. Brinkman.  2015.  "Water treatment in context: resources and African religion". 2015 Systems and Information Engineering Design Symposium. :19-23.

Drinking water availability is a crucial problem that must be addressed in order to improve the quality of life of individuals living in developing nations. Improving water supply availability is important for public health, as it is the third highest risk factor for poor health in developing nations with high mortality rates. This project researched drinking water filtration for areas of Sub-Saharan Africa near existing bodies of water, where the populations are completely reliant on collecting from surface water sources: the most contaminated water source type. Water filtration methods that can be completely created by the consumer would alleviate aid organization dependence in developing nations, put the consumers in control, and improve public health. Filtration processes pass water through a medium that catches contaminants through physical entrapment or absorption and thus yields a cleaner effluent. When exploring different materials for filtration, removal of contaminants and hydraulic conductivity are the two most important components. Not only does the method have to treat the water, but it also has to do so in a timeframe that is quick enough to produce potable water at a rate that keeps up with everyday needs. Cement is easily accessible in Sub-Saharan regions. Most concrete mixtures are not meant to be pervious, as concrete is a construction material used for its compressive strength; however, reduced water content in a cement mixture gives it higher permeability. Several different concrete samples of varying thicknesses and water concentrations were created. Bacterial count tests were performed on both pre-filtered and filtered water samples. Concrete filtration does remove bacteria from drinking water; however, the method can still be improved upon.

Wang, Y., Wen, M., Liu, Y., Wang, Y., Li, Z., Wang, C., Yu, H., Cheung, S.-C., Xu, C., Zhu, Z..  2020.  Watchman: Monitoring Dependency Conflicts for Python Library Ecosystem. 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). :125–135.
The PyPI ecosystem has indexed millions of Python libraries to allow developers to automatically download and install dependencies of their projects based on the specified version constraints. Despite the convenience brought by automation, version constraints in Python projects can easily conflict, resulting in build failures. We refer to such conflicts as Dependency Conflict (DC) issues. Although DC issues are common in Python projects, developers lack tool support for gaining comprehensive knowledge to diagnose the root causes of these issues. In this paper, we conducted an empirical study on 235 real-world DC issues. We studied the manifestation patterns and fixing strategies of these issues and found several key factors that can lead to DC issues and their regressions. Based on our findings, we designed and implemented Watchman, a technique to continuously monitor dependency conflicts for the PyPI ecosystem. In our evaluation, Watchman analyzed PyPI snapshots between 11 Jul 2019 and 16 Aug 2019, and found 117 potential DC issues. We reported these issues to the developers of the corresponding projects. So far, 63 issues have been confirmed, 38 of which have been quickly fixed by applying our suggested patches.
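The core notion of a DC issue can be made concrete with a small hypothetical example: two upstream packages constrain the same library to disjoint version ranges, so no single installable version satisfies both. The packaging library (the version-semantics library pip itself uses) is used here only for illustration; the package names and ranges are invented.

```python
# Detect a simple dependency conflict: no available version of a library
# satisfies the constraints placed on it by two different dependents.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

constraints = {"pkg_a": SpecifierSet(">=2.0,<3.0"), "pkg_b": SpecifierSet(">=1.0,<2.0")}
available = [Version(v) for v in ("1.4", "1.9", "2.1", "2.5")]

satisfying = [v for v in available
              if all(v in spec for spec in constraints.values())]
if not satisfying:
    print("Dependency conflict: no available version satisfies both",
          {name: str(spec) for name, spec in constraints.items()})
```
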
Shalev, Noam, Keidar, Idit, Moatti, Yosef, Weinsberg, Yaron.  2016.  WatchIT: Who Watches Your IT Guy? Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :93–96.

System administrators have unlimited access to system resources. As the Snowden case shows, these permissions can be exploited to steal valuable personal, classified, or commercial data. In this work we propose a strategy that increases the organizational information security by constraining IT personnel's view of the system and monitoring their actions. To this end, we introduce the abstraction of perforated containers – while regular Linux containers are too restrictive to be used by system administrators, by "punching holes" in them, we strike a balance between information security and required administrative needs. Our system predicts which system resources should be accessible for handling each IT issue, creates a perforated container with the corresponding isolation, and deploys it in the corresponding machines as needed for fixing the problem. Under this approach, the system administrator retains his superuser privileges, while he can only operate within the container limits. We further provide means for the administrator to bypass the isolation, and perform operations beyond her boundaries. However, such operations are monitored and logged for later analysis and anomaly detection. We provide a proof-of-concept implementation of our strategy, along with a case study on the IT database of IBM Research in Israel.

Gao, Yang, Li, Borui, Wang, Wei, Xu, Wenyao, Zhou, Chi, Jin, Zhanpeng.  2018.  Watching and Safeguarding Your 3D Printer: Online Process Monitoring Against Cyber-Physical Attacks. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.. 2:108:1–108:27.

The increasing adoption of 3D printing in many safety and mission critical applications exposes 3D printers to a variety of cyber attacks that may result in catastrophic consequences if the printing process is compromised. For example, the mechanical properties (e.g., physical strength, thermal resistance, dimensional stability) of 3D printed objects could be significantly affected and degraded if a simple printing setting is maliciously changed. To address this challenge, this study proposes a model-free real-time online process monitoring approach that is capable of detecting and defending against the cyber-physical attacks on the firmwares of 3D printers. Specifically, we explore the potential attacks and consequences of four key printing attributes (including infill path, printing speed, layer thickness, and fan speed) and then formulate the attack models. Based on the intrinsic relation between the printing attributes and the physical observations, our defense model is established by systematically analyzing the multi-faceted, real-time measurements collected from the accelerometer, magnetometer and camera. The Kalman filter and the Canny filter are used to map and estimate the three aforementioned critical toolpath attributes that might affect the printing quality. Mel-frequency Cepstrum Coefficients are used to extract features for fan speed estimation. Experimental results show that, for a complex 3D printed design, our method can achieve 4% Hausdorff distance compared with the model dimension for infill path estimate, 6.07% Mean Absolute Percentage Error (MAPE) for speed estimate, 9.57% MAPE for layer thickness estimate, and 96.8% accuracy for fan speed identification. Our study demonstrates that this new approach can effectively defend against cyber-physical attacks on 3D printers and the 3D printing process.
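Of the signal-processing steps listed, the MFCC feature extraction for fan-speed identification is the easiest to sketch. The snippet below (random data standing in for a one-second sensor recording; not the paper's dataset or parameters) extracts the coefficients that would feed a fan-speed classifier.

```python
# Extract MFCC features from one window of acoustic/vibration data and
# summarise them as a single feature vector for classification.
import numpy as np
import librosa

sr = 22050
window = np.random.randn(sr)                    # stand-in for one second of sensor data
mfcc = librosa.feature.mfcc(y=window, sr=sr, n_mfcc=13)
feature_vector = mfcc.mean(axis=1)              # 13-dim summary of the window
print(feature_vector.shape)                     # (13,) -> input to a fan-speed classifier
```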

Saifuddin, K. M., Ali, A. J. B., Ahmed, A. S., Alam, S. S., Ahmad, A. S..  2018.  Watchdog and Pathrater based Intrusion Detection System for MANET. 2018 4th International Conference on Electrical Engineering and Information Communication Technology (iCEEiCT). :168–173.

Mobile Ad Hoc Networks (MANETs) are quite vulnerable to attacks because of their broad distribution and open nodes. Hence, an effective Intrusion Detection System (IDS) is vital in a MANET to deter unwanted malicious attacks. An IDS based on the watchdog and pathrater method is proposed in this paper, and its performance is evaluated using the Dynamic Source Routing (DSR) and Ad-hoc On-demand Distance Vector (AODV) routing protocols, with and without considering the effect of a sinkhole attack. The results obtained show that the proposed IDS is capable of detecting suspicious activities and identifying the malicious nodes. Moreover, it replaces the fake route with a real one in the routing table in order to mitigate the security risks. The performance appraisal also suggests that the AODV protocol can send more packets than DSR and yields higher throughput.
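The watchdog mechanism itself is simple enough to sketch. The toy class below (our illustration, not the paper's simulation setup) tracks how many packets each neighbour was asked to forward versus how many forwards were actually overheard, and flags neighbours whose drop rate exceeds a threshold; a pathrater would then route around the flagged nodes.

```python
# Minimal watchdog: compare packets handed to a neighbour with packets
# overheard being re-transmitted by that neighbour.
from collections import defaultdict

class Watchdog:
    def __init__(self, drop_threshold=0.4):
        self.handed = defaultdict(int)       # packets given to each neighbour to forward
        self.forwarded = defaultdict(int)    # forwards actually overheard on the channel
        self.drop_threshold = drop_threshold

    def sent_to(self, neighbour):
        self.handed[neighbour] += 1

    def overheard_forward(self, neighbour):
        self.forwarded[neighbour] += 1

    def suspicious_nodes(self):
        bad = []
        for n, given in self.handed.items():
            drop_rate = 1 - self.forwarded[n] / given
            if drop_rate > self.drop_threshold:
                bad.append((n, drop_rate))   # pathrater would avoid these nodes
        return bad

wd = Watchdog()
for _ in range(10):
    wd.sent_to("node_B")
for _ in range(3):
    wd.overheard_forward("node_B")
print(wd.suspicious_nodes())                 # node_B dropped 70% of packets -> flagged
```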

Han, Yi, Etigowni, Sriharsha, Liu, Hua, Zonouz, Saman, Petropulu, Athina.  2017.  Watch Me, but Don't Touch Me! Contactless Control Flow Monitoring via Electromagnetic Emanations. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1095–1108.

Trustworthy operation of industrial control systems depends on secure and real-time code execution on the embedded programmable logic controllers (PLCs). The controllers monitor and control the critical infrastructures, such as electric power grids and healthcare platforms, and continuously report back the system status to human operators. We present Zeus, a contactless embedded controller security monitor to ensure its execution control flow integrity. Zeus leverages the electromagnetic emission by the PLC circuitry during the execution of the controller programs. Zeus's contactless execution tracking enables non-intrusive monitoring of security-critical controllers with tight real-time constraints. Those devices often cannot tolerate the cost and performance overhead that comes with additional traditional hardware or software monitoring modules. Furthermore, Zeus provides an air-gap between the monitor (trusted computing base) and the target (potentially compromised) PLC. This eliminates the possibility of the monitor infection by the same attack vectors. Zeus monitors for control flow integrity of the PLC program execution. Zeus monitors the communications between the human machine interface and the PLC, and captures the control logic binary uploads to the PLC. Zeus exercises its feasible execution paths, and fingerprints their emissions using an external electromagnetic sensor. Zeus trains a neural network for legitimate PLC executions, and uses it at runtime to identify the control flow based on PLC's electromagnetic emissions. We implemented Zeus on a commercial Allen Bradley PLC, which is widely used in industry, and evaluated it on real-world control program executions. Zeus was able to distinguish between different legitimate and malicious executions with 98.9% accuracy and with zero overhead on PLC execution by design.

Quinn, Ren, Holguin, Nico, Poster, Ben, Roach, Corey, Merwe, Jacobus Kobus Van der.  2019.  WASPP: Workflow Automation for Security Policy Procedures. 2019 15th International Conference on Network and Service Management (CNSM). :1–5.

Every day, university networks are bombarded with attempts to steal the sensitive data of the various disparate domains and organizations they serve. For this reason, universities form teams of information security specialists called a Security Operations Center (SOC) to manage the complex operations involved in monitoring and mitigating such attacks. When a suspicious event is identified, members of the SOC are tasked to understand the nature of the event in order to respond to any damage the attack might have caused. This process is defined by administrative policies which are often very high-level and rarely systematically defined. This impedes the implementation of generalized and automated event response solutions, leading to specific ad hoc solutions based primarily on human intuition and experience as well as immediate administrative priorities. These solutions are often fragile, highly specific, and more difficult to reuse in other scenarios.

Hoyle, Roberto, Das, Srijita, Kapadia, Apu, Lee, Adam J., Vaniea, Kami.  2017.  Was My Message Read?: Privacy and Signaling on Facebook Messenger. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. :3838–3842.

Major online messaging services such as Facebook Messenger and WhatsApp are starting to provide users with real-time information about when people read their messages. While useful, the feature has the potential to negatively impact privacy as well as cause concern over access to self. We report on two surveys using Mechanical Turk which looked at senders' (N=402) use of and reactions to the 'message seen' feature, and recipients' (N=316) privacy and signaling behaviors in the face of such visibility. Our findings indicate that senders experience a range of emotions when their message is not read, or is read but not answered immediately. Recipients also engage in various signaling behaviors in the face of visibility, by either replying or not replying immediately.

Querel, Louis-Philippe, Rigby, Peter C..  2018.  WarningsGuru: Integrating Statistical Bug Models with Static Analysis to Provide Timely and Specific Bug Warnings. Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. :892-895.

The detection of bugs in software systems has been divided into two research areas: static code analysis and statistical modeling of historical data. Static analysis indicates precise problems at line numbers but has the disadvantage of suggesting many warnings which are often false positives. In contrast, statistical models use the history of the system to suggest which files or commits are likely to contain bugs. These coarse-grained predictions do not indicate to the developer the precise reasons for the bug prediction. We combine static analysis with statistical bug models to limit the number of warnings and provide specific warning information at the line level. Previous research was able to process only a limited number of releases; our tool, WarningsGuru, can analyze all commits in a source code repository, and we currently have processed thousands of commits and warnings. Since we process every commit, we present developers with more precise information about when a warning is introduced, allowing us to show recent warnings that are introduced in statistically risky commits. Results from two OSS projects show that CommitGuru's statistical model flags 25% and 29% of all commits as risky. When we combine this with static analysis in WarningsGuru the number of risky commits with warnings is 20% for both projects and the number of commits with new warnings is only 3% and 6%. We can drastically reduce the number of commits and warnings developers have to examine. The tool, source code, and demo is available at https://github.com/louisq/warningsguru.

M. Grottke, A. Avritzer, D. S. Menasché, J. Alonso, L. Aguiar, S. G. Alvarez.  2015.  "WAP: Models and metrics for the assessment of critical-infrastructure-targeted malware campaigns". 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE). :330-335.

Ensuring system survivability in the wake of advanced persistent threats is a major challenge that the security community faces in ensuring critical infrastructure protection. In this paper, we define metrics and models for the assessment of coordinated massive malware campaigns targeting critical infrastructure sectors. First, we develop an analytical model that allows us to capture the effect of neighborhood on different metrics (infection probability and contagion probability). Then, we assess the impact of putting operational but possibly infected nodes into quarantine. Finally, we study the implications of scanning nodes for early detection of malware (e.g., worms), accounting for false positives and false negatives. Evaluating our methodology using a small four-node topology, we find that malware infections can be effectively contained by using quarantine and appropriate rates of scanning for soft impacts.
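To make the quantities concrete, here is a toy Monte Carlo version (illustrative parameters only; the paper derives these probabilities analytically) of the infection probability on a small four-node topology, with and without quarantining active nodes.

```python
# Estimate the probability that a target node becomes infected on a
# four-node ring, with per-step infection probability beta and a
# per-step probability of quarantining an active (spreading) node.
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
neighbours = {i: [b for a, b in edges if a == i] + [a for a, b in edges if b == i]
              for i in range(4)}

def infection_prob(beta=0.3, quarantine=0.0, steps=20, runs=5000, target=2):
    hits = 0
    for _ in range(runs):
        infected, active = {0}, {0}              # node 0 is initially compromised
        for _ in range(steps):
            new = set()
            for u in active:
                for v in neighbours[u]:
                    if v not in infected and random.random() < beta:
                        new.add(v)
            infected |= new
            # quarantined nodes stay infected but stop spreading
            active = {u for u in (active | new) if random.random() >= quarantine}
        hits += target in infected
    return hits / runs

print("no quarantine :", infection_prob())
print("quarantine 0.5:", infection_prob(quarantine=0.5))
```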

Z. Zhu, M. B. Wakin.  2015.  "Wall clutter mitigation and target detection using Discrete Prolate Spheroidal Sequences". 2015 3rd International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa). :41-45.

We present a new method for mitigating wall return and a new greedy algorithm for detecting stationary targets after wall clutter has been cancelled. Given limited measurements of a stepped-frequency radar signal consisting of both wall and target return, our objective is to detect and localize the potential targets. Modulated Discrete Prolate Spheroidal Sequences (DPSS's) form an efficient basis for sampled bandpass signals. We mitigate the wall clutter efficiently within the compressive measurements through the use of a bandpass modulated DPSS basis. Then, in each step of an iterative algorithm for detecting the target positions, we use a modulated DPSS basis to cancel nearly all of the target return corresponding to previously selected targets. With this basis, we improve upon the target detection sensitivity of a Fourier-based technique.
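A rough sketch of the clutter-suppression step, under our own simplifying assumptions (the wall return occupies a known narrow band; all parameters and data are invented): build a DPSS basis, modulate it to that band, project the measurement onto the resulting subspace, and keep the residual as the clutter-suppressed signal.

```python
# Project stepped-frequency measurements onto a modulated DPSS subspace
# (assumed to capture the wall return) and subtract the projection.
import numpy as np
from scipy.signal.windows import dpss

M = 256                                  # number of frequency steps
half_bw = 0.02                           # assumed half-bandwidth of the wall return
f_wall = 0.05                            # assumed centre frequency of the wall band

S = dpss(M, M * half_bw, Kmax=8).T       # (M, 8) baseband DPSS basis
n = np.arange(M)
E = S * np.exp(2j * np.pi * f_wall * n)[:, None]    # modulate the basis to the wall band

y = np.random.randn(M) + 1j * np.random.randn(M)    # stand-in for radar measurements
proj = E @ np.linalg.lstsq(E, y, rcond=None)[0]     # orthogonal projection onto the wall subspace
y_clean = y - proj                                  # residual: target return with wall clutter suppressed
```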

Jiang, Zhongyuan, Ma, Jianfeng, Yu, Philip S..  2019.  Walk2Privacy: Limiting target link privacy disclosure against the adversarial link prediction. 2019 IEEE International Conference on Big Data (Big Data). :1381–1388.

The disclosure of an important yet sensitive link may cause a serious privacy crisis between two users of a social graph. Merely deleting the sensitive link, referred to as the target link and often the attack target of adversaries, is not enough, because adversarial link prediction can still infer the existence of the missing target link. Thus, to defend against a specific adversarial link prediction, a budget-limited number of other non-target links should be optimally removed. We first propose a path-based dissimilarity function as the optimization objective and prove that greedy link deletion to preserve target link privacy, referred to as GLD2Privacy, which has monotonicity and submodularity properties, can achieve a near-optimal solution. However, enumerating all length-limited paths between any pair of nodes for the GLD2Privacy mechanism is impossible in large-scale social graphs. Secondly, we propose a Walk2Privacy mechanism that uses self-avoiding random walks, which run efficiently in large-scale graphs, to sample paths of given lengths between the two ends of any missing target link; based on the sampled paths we select the alternative non-target links to delete for privacy purposes. Finally, we conduct experiments to demonstrate that the Walk2Privacy algorithm remarkably reduces time consumption and achieves a solution very close to that achieved by GLD2Privacy.
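The path-sampling step can be sketched directly (a toy illustration using networkx, not the authors' implementation): repeatedly run bounded-length self-avoiding random walks from one endpoint of the target link and keep the walks that reach the other endpoint.

```python
# Sample bounded-length self-avoiding paths between the two endpoints of
# a (missing) target link; the kept paths are candidates from which
# non-target links would be chosen for deletion.
import random
import networkx as nx

def self_avoiding_walk(G, src, dst, max_len):
    """Try to sample one self-avoiding path from src to dst of length <= max_len."""
    path, visited = [src], {src}
    while len(path) <= max_len:
        cur = path[-1]
        if cur == dst:
            return path
        candidates = [v for v in G.neighbors(cur) if v not in visited]
        if not candidates:
            return None                  # walk got stuck; caller simply retries
        nxt = random.choice(candidates)
        path.append(nxt)
        visited.add(nxt)
    return None

G = nx.karate_club_graph()
paths = [p for p in (self_avoiding_walk(G, 0, 33, 6) for _ in range(200)) if p]
print(f"sampled {len(paths)} candidate paths between the target endpoints")
```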

Sethi, Ricky J., Buell, Catherine A., Seeley, William P..  2018.  WAIVS: An Intelligent Interface for Visual Stylometry Using Semantic Workflows. Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. :54:1-54:2.

In this paper, we present initial work towards creating an intelligent interface that can act as an open access laboratory for visual stylometry called WAIVS, Workflows for Analysis of Images and Visual Stylometry. WAIVS allows scholars, students, and other interested parties to explore the nature of artistic style using cutting-edge research methods in visual stylometry. We create semantic workflows for this interface using various computer vision algorithms that not only facilitate artistically significant analyses but also impose intelligent semantic constraints on complex analyses. In the interface, we combine these workflows with a manually-curated dataset for analysis of artistic style based on either the school of art or the medium.

Michalevsky, Yan, Winetraub, Yonatan.  2017.  WaC: SpaceTEE - Secure and Tamper-Proof Computing in Space Using CubeSats. Proceedings of the 2017 Workshop on Attacks and Solutions in Hardware Security. :27–32.
Sensitive computation often has to be performed in a trusted execution environment (TEE), which, in turn, requires tamper-proof hardware. If the computational fabric can be tampered with, we may no longer be able to trust the correctness of the computation. We study the (wild and crazy) idea of using computational platforms in space as a means to protect data from adversarial physical access. In this paper, we propose SpaceTEE - a practical implementation of this approach using low-cost nano-satellites called CubeSats. We study the constraints of such a platform, the cost of deployment, and discuss possible applications under those constraints. As a case study, we design a hardware security module solution (called SpaceHSM) and describe how it can be used to implement a root-of-trust for a certificate authority (CA).
V
Ani, U. D., He, H., Tiwari, A..  2020.  Vulnerability-Based Impact Criticality Estimation for Industrial Control Systems. 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1—8.

Cyber threats directly affect the critical reliability and availability of modern Industrial Control Systems (ICS) in terms of operations and processes. Where there are a variety of vulnerabilities and cyber threats, it is necessary to effectively evaluate cyber security risks and control the uncertainties of cyber environments, and quantitative evaluation can be helpful. To effectively and promptly control the spread and impact produced by attacks on ICS networks, a probabilistic Multi-Attribute Vulnerability Criticality Analysis (MAVCA) model for impact estimation and prioritised remediation is presented. This offers a new approach for combining three major attributes: vulnerability severities influenced by environmental factors, the attack probabilities relative to the vulnerabilities, and functional dependencies attributed to vulnerability host components. A miniature ICS testbed evaluation illustrates the usability of the model for determining the weakest link and setting security priority in the ICS. This work can help create speedy and proactive security responses. The metrics derived in this work can serve as sub-metric inputs to a larger quantitative security metrics taxonomy, and can be integrated into the security risk assessment scheme of a larger distributed system.
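The abstract does not give the scoring formula, so the following is only a hypothetical sketch of how the three named attributes might be combined into a single criticality score for remediation ranking; the weights, component names, and values are invented.

```python
# Combine environment-adjusted severity, attack probability and a
# functional-dependency weight into one illustrative criticality score.
def criticality(severity, attack_prob, dependency_weight):
    """severity in [0,10] (CVSS-like); attack_prob and dependency_weight in [0,1]."""
    return (severity / 10.0) * attack_prob * (1.0 + dependency_weight)

vulns = {
    "PLC-firmware-CVE":  criticality(9.8, 0.6, 0.9),   # many components depend on the PLC
    "HMI-webapp-CVE":    criticality(7.5, 0.8, 0.3),
    "historian-db-CVE":  criticality(5.3, 0.4, 0.1),
}
for name, score in sorted(vulns.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")       # highest score = remediate first
```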

Munaiah, Nuthan, Meneely, Andrew.  2016.  Vulnerability Severity Scoring and Bounties: Why the Disconnect? Proceedings of the 2Nd International Workshop on Software Analytics. :8–14.

The Common Vulnerability Scoring System (CVSS) is the de facto standard for vulnerability severity measurement today and is crucial in the analytics driving software fortification. Required by the U.S. National Vulnerability Database, over 75,000 vulnerabilities have been scored using CVSS. We compare how the CVSS correlates with another, closely-related measure of security impact: bounties. Recent economic studies of vulnerability disclosure processes show a clear relationship between black market value and bounty payments. We analyzed the CVSS scores and bounty awarded for 703 vulnerabilities across 24 products. We found a weak (Spearman’s ρ = 0.34) correlation between CVSS scores and bounties, with CVSS being more likely to underestimate bounty. We believe such a negative result is a cause for concern. We investigated why these measurements were so discordant by (a) analyzing the individual questions of CVSS with respect to bounties and (b) conducting a qualitative study to find the similarities and differences between CVSS and the publicly-available criteria for awarding bounties. Among our findings were that the bounty criteria were more explicit about code execution and privilege escalation whereas CVSS makes no explicit mention of those. We also found that bounty valuations are evaluated solely by project maintainers, whereas CVSS has little provenance in practice.
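The headline statistic is straightforward to reproduce on any paired data. A minimal sketch with toy numbers follows (the study itself used 703 real vulnerabilities across 24 products).

```python
# Spearman's rank correlation between CVSS scores and bounty amounts.
from scipy.stats import spearmanr

cvss_scores = [7.5, 9.8, 5.3, 6.1, 8.8, 4.3, 9.1, 7.2]     # toy values
bounties_usd = [500, 3000, 1500, 200, 1000, 100, 5000, 250]

rho, p_value = spearmanr(cvss_scores, bounties_usd)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```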

Liu, Kai, Zhou, Yun, Wang, Qingyong, Zhu, Xianqiang.  2019.  Vulnerability Severity Prediction With Deep Neural Network. 2019 5th International Conference on Big Data and Information Analytics (BigDIA). :114–119.
The high frequency of network security incidents has brought many negative effects and even huge economic losses to countries, enterprises and individuals in recent years, so more and more attention has been paid to the problem of network security. In order to evaluate newly included vulnerability text information accurately, and to reduce the workload of experts and the false negative rate of traditional methods, multiple deep learning methods for vulnerability text classification evaluation are proposed in this paper. The standard Cross Site Scripting (XSS) vulnerability text data is processed first, and then classified using three kinds of deep neural networks (CNN, LSTM, TextRCNN) and one traditional machine learning method (XGBoost). The dropout ratio of the optimal CNN network, the number of epochs of all deep neural networks, and the training set data were tuned via experiments to improve the fit on our target task. The results show that the deep learning methods evaluate vulnerability risk levels better than traditional machine learning methods, but cost more time. We train our models on various training sets and test with the same testing set. The performance and utility of the recurrent convolutional neural network (TextRCNN) is highest in comparison to all other methods, with a classification accuracy of 93.95%.
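The paper's models are CNN, LSTM, TextRCNN and XGBoost; as a stand-in to make the text-classification pipeline concrete, here is a minimal TF-IDF plus linear-classifier baseline on invented vulnerability descriptions.

```python
# Minimal vulnerability-text classification pipeline (baseline stand-in,
# not the paper's networks): vectorise descriptions, fit, predict severity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "reflected XSS in search parameter allows script injection",
    "stored XSS in comment field executes arbitrary JavaScript",
    "minor information disclosure in verbose error message",
    "low impact open redirect on logout endpoint",
]
labels = ["high", "high", "low", "low"]          # toy severity levels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["XSS vulnerability in profile page injects script"]))
```
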
Majumder, R., Som, S., Gupta, R..  2017.  Vulnerability prediction through self-learning model. 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS). :400–402.

Vulnerability, a buzzword of modern times, is one of the most important concepts related to software and operating systems. Whenever software is developed, loopholes and incompleteness creep in during the development phase, so there always remains a vulnerability that can surface at any time. Detecting a vulnerability is one thing, and predicting its occurrence over the course of time is another. If we can learn of a software's vulnerability in due course, it acts as an active alarm for developers to build sounder, improved software the next time. The proposal discusses implementing this idea using an artificial neural network, where different data sets are given as input for further analysis toward successful results. As of now, there are models for studying the vulnerabilities in software and networks; this paper's proposal, in addition to the current work, will throw light on the predictability of vulnerabilities over the course of time.