Bi, X., Liu, X.  2020.  Chinese Character Captcha Sequential Selection System Based on Convolutional Neural Network. 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL). :554–559.

To ensure security, the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is widely used in people's online lives. This paper presents a Chinese character captcha sequential selection system based on a convolutional neural network (CNN). Captchas composed of English letters and digits can already be identified with extremely high accuracy, but Chinese character captcha recognition is still challenging. The task is to identify Chinese characters of different colors and fonts, not aligned on a straight line, subjected to rotation and affine transformation, on pictures with complex backgrounds, and then to restore the word order of the identified characters. We divide the task into several sub-processes: Chinese character detection based on Faster R-CNN, Chinese character recognition, and word order recovery based on N-Gram. Our main contribution lies in the Chinese character recognition sub-process: we constructed a single-Chinese-character data set and built a 10-layer convolutional neural network, eventually achieving an accuracy of 98.43% and completing the task successfully.

Ming, Kun.  2020.  Chinese Coreference Resolution via Bidirectional LSTMs using Word and Token Level Representations. 2020 16th International Conference on Computational Intelligence and Security (CIS). :73–76.
Coreference resolution is an important task in the field of natural language processing. Most existing methods utilize word-level representations, ignoring massive information in the texts. To address this issue, we investigate how to improve Chinese coreference resolution by using span-level semantic representations. Specifically, we propose a model which acquires word and character representations through pre-trained Skip-Gram embeddings and pre-trained BERT, then explicitly leverages span-level information by running bidirectional LSTMs over these representations. Experiments on the CoNLL-2012 shared task demonstrate that the proposed model achieves a 62.95% F1-score, outperforming our baseline methods.
Vijayakumar, P., Bose, S., Kannan, A.  2014.  Chinese remainder theorem based centralised group key management for secure multicast communication. Information Security, IET. 8:179-187.

Designing a centralised group key management scheme with minimal computation complexity to support dynamic secure multicast communication is a challenging issue in secure multimedia multicast. In this study, the authors propose a Chinese remainder theorem-based group key management scheme that drastically reduces the computation complexity of the key server. The computation complexity of the key server is reduced to O(1) in the proposed algorithm. Moreover, the computation complexity of a group member is also minimised: one modulo division operation is performed when a user join or leave operation occurs in a multicast group. The proposed algorithm has been implemented and tested using a key-star-based key management scheme, and it has been observed that the proposed algorithm reduces the computation complexity significantly.
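The CRT broadcast idea behind such schemes can be illustrated with a toy sketch. This is a generic illustration, not the authors' exact protocol: the member secrets `(m_i, k_i)`, the XOR masking, and all concrete values are assumptions for the example.

```python
from functools import reduce
from operator import mul

def crt(residues, moduli):
    """Chinese remainder theorem: the unique x mod prod(moduli) with x % m_i == r_i."""
    M = reduce(mul, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse of Mi mod m
    return x % M

# Each member secretly shares a pairwise-coprime modulus m_i and a key k_i with the
# key server (illustrative values; keys stay below the smallest modulus so the
# masked residues fit).
members = {
    "alice": (131071, 0x1F2E),
    "bob": (524287, 0x3A4B),
    "carol": (2147483647, 0x5C6D),
}

def broadcast_value(group_key, members):
    """Key server: one CRT computation produces a single broadcast for everyone."""
    residues = [group_key ^ k for (m, k) in members.values()]
    moduli = [m for (m, k) in members.values()]
    return crt(residues, moduli)

def recover(x, m, k):
    """Member side: a single modulo reduction plus an XOR."""
    return (x % m) ^ k
```

On a join or leave, the server recomputes the broadcast over the new membership; each remaining member still performs only one modulo operation, which matches the O(1) member-side cost the abstract highlights.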

Zhu, Meng, Yang, Xudong.  2019.  Chinese Texts Classification System. 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT). :149–152.
In this article, we designed an automatic Chinese text classification system aimed at classifying news texts. We propose two improved classification algorithms as alternatives for users to choose from; the system then uses the chosen method to obtain the classification result for the input text. One algorithm is k-Bayes, which uses a hierarchy conception based on the Naive Bayes (NB) method from machine learning; the other adds an attention layer to a convolutional neural network from deep learning. Our experimental results showed that the improved algorithms achieve better accuracy than the base algorithms, and that our system classifies news texts more reasonably and effectively.
Kumar, K. N., Nene, M. J.  2017.  Chip-Based symmetric and asymmetric key generation in hierarchical wireless sensors networks. 2017 International Conference on Inventive Systems and Control (ICISC). :1–6.
Realization of an application using Wireless Sensor Networks (WSNs) built from Sensor Nodes (SNs) brings the profound advantages of ad-hoc and flexible network deployments. Implementation of these networks faces immense challenges due to the short wireless range, along with the limited power, storage and computational capabilities of SNs. Also, due to their tiny physical attributes, the SNs in WSNs are prone to physical attacks. In the context of WSNs, physical attacks may range from destroying, lifting and replacing SNs to adding new ones. The work in this paper addresses the threats induced by physical attacks and further proposes a methodology to mitigate them. The methodology incorporates a newly proposed secure and efficient symmetric and asymmetric key distribution technique based on the additional commodity hardware Trusted Platform Module (TPM). Further, the paper demonstrates the merits of the proposed methodology. With some additional economical cost for the hardware, the proposed technique can fulfill the security requirements of WSNs, such as confidentiality, integrity, authenticity, resilience to attack, key connectivity and data freshness.
Abratkiewicz, K., Gromek, D., Samczynski, P.  2019.  Chirp Rate Estimation and micro-Doppler Signatures for Pedestrian Security Radar Systems. 2019 Signal Processing Symposium (SPSympo). :212–215.

A new approach to micro-Doppler signal analysis is presented in this article. Novel chirp rate estimators in the time-frequency domain were used for this purpose; they provide the chirp rate of micro-Doppler signatures, allowing the classification of objects in the urban environment. To validate the proposed chirp rate estimation algorithms, a signal from a high-resolution linear frequency-modulated continuous-wave (FMCW) radar recording an echo reflected from a pedestrian was used. The obtained results are plotted on saturated accelerograms, giving an additional parameter dedicated to target classification in security systems utilizing radar sensors for target detection.

Isaeva, N. A.  2018.  Choice of Control Parameters of Complex System on the Basis of Estimates of the Risks. 2018 Eleventh International Conference "Management of Large-Scale System Development" (MLSD). :1-4.

A method for choosing the control parameters of a complex system on the basis of risk estimates is proposed. A procedure for calculating these risk estimates, intended for choosing rational control actions by allocating a group of operating factors for a given criterion factor, is considered. The purpose of choosing the control parameters of the complex system is to minimize an estimate of the risk of the system's functioning by solving a problem of searching for an extremum of a function of many variables. An example of choosing the operating factors in the sphere of intangible assets is given.

Naik, N., Jenkins, P., Newell, D.  2017.  Choice of suitable Identity and Access Management standards for mobile computing and communication. 2017 24th International Conference on Telecommunications (ICT). :1–6.
Enterprises have recognised the importance of personal mobile devices for business and official use. Employees and consumers have been freely accessing resources and services from their principal organisation and partners' businesses on their mobile devices, to improve the efficiency and productivity of their businesses. This mobile computing-based business model has one major challenge, that of ascertaining and linking users' identities and access rights across business partners. The parent organisation owns all the confidential information about users but the collaborative organisation has to verify users' identities and access rights to allow access to their services and resources. This challenge involves resolving how to communicate users' identities to collaborative organisations without sending their confidential information. Several generic Identity and Access Management (IAM) standards have been proposed, and three have become established standards: Security Assertion Markup Language (SAML), Open Authentication (OAuth), and OpenID Connect (OIDC). Mobile computing and communication have some specific requirements and limitations; therefore, this paper evaluates these IAM standards to ascertain suitable IAM to protect mobile computing and communication. This evaluation is based on the three types of analyses: comparative analysis, suitability analysis and security vulnerability analysis of SAML, OAuth and OIDC.
Tennyson, M.F., Mitropoulos, F.J.  2014.  Choosing a profile length in the SCAP method of source code authorship attribution. SOUTHEASTCON 2014, IEEE. :1-6.

Source code authorship attribution is the task of determining the author of source code whose author is not explicitly known. One specific method of source code authorship attribution that has been shown to be extremely effective is the SCAP method. This method, however, relies on a parameter L that has heretofore been quite nebulous. In the SCAP method, each candidate author's known work is represented as a profile of that author, where the parameter L defines the profile's maximum length. In this study, alternative approaches for selecting a value for L were investigated. Several alternative approaches were found to perform better than the baseline approach used in the SCAP method. The approach that performed the best was empirically shown to improve the performance from 91.0% to 97.2% measured as a percentage of documents correctly attributed using a data set consisting of 7,231 programs written in Java and C++.
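The profile mechanics the study tunes can be sketched as follows. This is a minimal illustration of the n-gram profile and profile-intersection idea behind SCAP; the 6-gram size, the default L, and the sample snippets are assumptions for the example, not the study's settings.

```python
from collections import Counter

def profile(code, n=6, L=500):
    """An author profile: the L most frequent byte-level n-grams of the known work."""
    grams = Counter(code[i:i + n] for i in range(len(code) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def attribute(query, author_profiles, n=6, L=500):
    """Attribute the query to the author whose profile shares the most n-grams with it."""
    q = profile(query, n, L)
    return max(author_profiles, key=lambda author: len(q & author_profiles[author]))

# Toy "known work" for two authors with very different styles.
known = {
    "A": profile("for (int i = 0; i < n; i++) { sum += arr[i]; }"),
    "B": profile("def total(values):\n    return sum(v for v in values)\n"),
}
```

Here L is exactly the cutoff the paper investigates: it caps the profile at the L most frequent n-grams, so the choice of L directly changes which grams survive into the intersection and hence the attribution accuracy.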

Johnson, N., Near, J. P., Hellerstein, J. M., Song, D.  2020.  Chorus: a Programming Framework for Building Scalable Differential Privacy Mechanisms. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :535–551.
Differential privacy is fast becoming the gold standard in enabling statistical analysis of data while protecting the privacy of individuals. However, practical use of differential privacy still lags behind research progress because research prototypes cannot satisfy the scalability requirements of production deployments. To address this challenge, we present Chorus, a framework for building scalable differential privacy mechanisms which is based on cooperation between the mechanism itself and a high-performance production database management system (DBMS). We demonstrate the use of Chorus to build the first highly scalable implementations of complex mechanisms like Weighted PINQ, MWEM, and the matrix mechanism. We report on our experience deploying Chorus at Uber, and evaluate its scalability on real-world queries.
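For a flavor of what such a mechanism does at the level of a single query, the classic Laplace mechanism adds noise calibrated to the query's sensitivity. The sketch below is a generic illustration of that mechanism, not Chorus's actual API; the function names and parameters are assumptions.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count, epsilon, rng=random):
    """Epsilon-DP release of a counting query: sensitivity 1, so scale = 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

Chorus's contribution is to push this kind of rewriting into a high-performance production DBMS, so the noisy aggregate is computed at production scale rather than inside a research prototype.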
Li, P., Liu, Q., Zhao, W., Wang, D., Wang, S.  2018.  Chronic Poisoning against Machine Learning Based IDSs Using Edge Pattern Detection. 2018 IEEE International Conference on Communications (ICC). :1-7.

In the big data era, machine learning is one of the fundamental techniques in intrusion detection systems (IDSs). A poisoning attack, one of the most recognized security threats to machine learning-based IDSs, injects adversarial samples into the training phase, inducing data drift in the training data and a significant performance decrease of the target IDS on testing data. In this paper, we adopt the Edge Pattern Detection (EPD) algorithm to design a novel poisoning method that attacks several machine learning algorithms used in IDSs. Specifically, we propose a boundary pattern detection algorithm to efficiently generate points that are near abnormal data but considered normal by current classifiers. Then, we introduce a Batch-EPD Boundary Pattern (BEBP) detection algorithm to overcome the limitation on the number of edge pattern points generated by EPD and to obtain more useful adversarial samples. Based on BEBP, we further present a moderate but effective poisoning method called the chronic poisoning attack. Extensive experiments on synthetic and three real network data sets demonstrate the performance of the proposed poisoning method against several well-known machine learning algorithms and a practical intrusion detection method named FMIFS-LSSVM-IDS.

Zhang, Yuan, Xu, Chunxiang, Li, Hongwei, Yang, Haomiao, Shen, Xuemin.  2019.  Chronos: Secure and Accurate Time-Stamping Scheme for Digital Files via Blockchain. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1–6.

It is common to certify when a file was created in digital investigations, e.g., determining first inventors for patentable ideas in intellectual property systems to resolve disputes. Secure time-stamping schemes can be derived from blockchain-based storage to protect files from backdating/forward-dating, where a file is integrated into a transaction on a blockchain and the timestamp of the corresponding block reflects the latest time the file was created. Nevertheless, blocks' timestamps in blockchains suffer from time errors, which causes the inaccuracy of files' timestamps. In this paper, we propose an accurate blockchain-based time-stamping scheme called Chronos. In Chronos, when a file is created, the file and a sufficient number of successive blocks that are latest confirmed on blockchain are integrated into a transaction. Due to chain quality, it is computationally infeasible to pre-compute these blocks. The time when the last block was chained to the blockchain serves as the earliest creation time of the file. The time when the block including the transaction was chained indicates the latest creation time of the file. Therefore, Chronos makes the file's creation time corresponding to this time interval. Based on chain growth, Chronos derives the time when these two blocks were chained from their heights on the blockchain, which ensures the accuracy of the file's timestamp. The security and performance of Chronos are demonstrated by a comprehensive evaluation.
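The height-to-time mapping described above reduces to simple arithmetic once an average block interval is fixed by chain growth. The sketch below is an illustration of that idea only; the constant, function, and parameter names are assumptions, not the paper's notation.

```python
AVG_BLOCK_INTERVAL = 600  # seconds; Bitcoin's target rate, assumed here for illustration

def creation_window(height_last_referenced, height_including_tx, genesis_time):
    """Estimate the [earliest, latest] creation times of a file from block heights.

    height_last_referenced: height of the newest confirmed block embedded in the
        file's transaction (the file cannot predate it, by chain quality).
    height_including_tx: height of the block that included the transaction
        (the file cannot postdate it).
    """
    earliest = genesis_time + height_last_referenced * AVG_BLOCK_INTERVAL
    latest = genesis_time + height_including_tx * AVG_BLOCK_INTERVAL
    return earliest, latest
```

Deriving times from heights via the bounded chain-growth rate is what lets Chronos sidestep the unreliable per-block timestamps the paper identifies as the source of inaccuracy.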

Wu, Zhijun, Xu, Enzhong, Liu, Liang, Yue, Meng.  2019.  CHTDS: A CP-ABE Access Control Scheme Based on Hash Table and Data Segmentation in NDN. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :843–848.

For the future Internet, information-centric networking (ICN) is considered a potential solution to many of its current problems, such as content distribution, mobility, and security. Named Data Networking (NDN) is a popular ICN project. However, concern regarding the protection of user data persists. Information caching in NDN decouples content from content publishers, which leads to content security threats due to the lack of secure controls. Therefore, this paper presents a CP-ABE (ciphertext-policy attribute-based encryption) access control scheme based on a hash table and data segmentation (CHTDS). For data segmentation, CHTDS uses a method of linearly splitting fixed data blocks, which effectively improves data management. CHTDS also introduces a CP-ABE mechanism and a hash-table data structure to ensure secure access control, and privilege revocation does not require re-encrypting the published content. The analysis results show that CHTDS can effectively realize security and fine-grained access control in the NDN environment, and reduces the communication overhead for content access.

Drees, Maximilian, Gmyr, Robert, Scheideler, Christian.  2016.  Churn- and DoS-resistant Overlay Networks Based on Network Reconfiguration. Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures. :417–427.
We present three robust overlay networks: First, we present a network that organizes the nodes into an expander and is resistant to even massive adversarial churn. Second, we develop a network based on the hypercube that maintains connectivity under adversarial DoS-attacks. For the DoS-attacks we use the notion of a Ω(log log n)-late adversary which only has access to topological information that is at least Ω(log log n) rounds old. Finally, we develop a network that combines both churn- and DoS-resistance. The networks gain their robustness through constant network reconfiguration, i.e., the topology of the networks changes constantly. Our reconfiguration algorithms are based on node sampling primitives for expanders and hypercubes that allow each node to sample a logarithmic number of nodes uniformly at random in O(log log n) communication rounds. These primitives are specific to overlay networks and their optimal runtime represents an exponential improvement over known techniques. Our results have a wide range of applications, for example in the area of scalable and robust peer-to-peer systems.
Ke, Yu-Ming, Chen, Chih-Wei, Hsiao, Hsu-Chun, Perrig, Adrian, Sekar, Vyas.  2016.  CICADAS: Congesting the Internet with Coordinated and Decentralized Pulsating Attacks. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :699–710.

This study stems from the premise that we need to break away from the "reactive" cycle of developing defenses against new DDoS attacks (e.g., amplification) by proactively investigating the potential for new types of DDoS attacks. Our specific focus is on pulsating attacks, a particularly debilitating type that has been hypothesized in the literature. In a pulsating attack, bots coordinate to generate intermittent pulses at target links to significantly reduce the throughput of TCP connections traversing the target. With pulsating attacks, attackers can cause significantly greater damage to legitimate users than with traditional link-flooding attacks. To date, however, pulsating attacks have been deemed either ineffective or easily defendable for two reasons: (1) they require a central coordinator and can thus be tracked; and (2) they require tight synchronization of pulses, which is difficult even in normal non-congestion scenarios. This paper argues that the perceived drawbacks of pulsating attacks are in fact not fundamental. We develop a practical pulsating attack called CICADAS using two key ideas: (1) congestion as an implicit signal for decentralized implementation, and (2) a Kalman-filter-based approach to achieve tight synchronization. We validate CICADAS using simulations and wide-area experiments. We also discuss possible countermeasures against this attack.

Banakar, V., Upadhya, P., Keshavan, M.  2020.  CIED - rapid composability of rack scale resources using Capability Inference Engine across Datacenters. 2020 IEEE Infrastructure Conference. :1–4.
There are multiple steps involved in transitioning a server from the factory to being fully provisioned for an intended workload. These steps include finding the optimal slot for the hardware and composing the required resources on the hardware for the intended workload. Many different factors influence the placement of server hardware in the datacenter, such as physical limitations on connecting to a network (be it Ethernet or a storage network), power requirements, temperature/cooling considerations, physical space, etc. In addition, there may be custom requirements driven by workload policies (such as security, data privacy, power redundancy, etc.). Once the server has been placed in the right slot, it needs to be configured with the appropriate resources for the intended workload. CIED provides a ranked list of locations for server placement based on the intended workload and the connectivity and physical requirements of the server. Once the server is placed in the suggested slot, the solution automatically discovers the server and composes the required resources (compute, storage and networks) for running the appropriate workload. CIED reduces the overall time taken to move hardware from factory to production and maximizes server hardware utilization while minimizing downtime by placing the resources optimally. In the case study that was undertaken, the time taken to transition a server from factory to being fully provisioned was proportional to the number of devices in the datacenter; with CIED this time is constant irrespective of the complexity or the number of devices in a datacenter.
Korzhik, Valery, Duy Cuong, Nguyen, Morales-Luna, Guillermo.  2019.  Cipher Modification Against Steganalysis Based on NIST Tests. 2019 24th Conference of Open Innovations Association (FRUCT). :179–186.

Part of our team proposed a new steganalytic method based on NIST tests at MMM-ACNS 2017 [1], and we were encouraged to investigate cipher modifications that would prevent this type of steganalysis. In the current paper, we propose one cipher modification based on decompression by arithmetic source compression coding. Experiments show that the proposed method protects stegosystems against steganalysis based on NIST tests while preserving the security of the encrypted embedded messages. Protection of contemporary image steganography based on edge detection and modified LSB against NIST-test steganalysis is also presented.

Berti, Francesco, Koeune, François, Pereira, Olivier, Peters, Thomas, Standaert, François-Xavier.  2018.  Ciphertext Integrity with Misuse and Leakage: Definition and Efficient Constructions with Symmetric Primitives. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :37–50.

Leakage resilience (LR) and misuse resistance (MR) are two important properties for the deployment of authenticated encryption (AE) schemes. They aim at mitigating the impact of implementation flaws due to side-channel leakages and misused randomness. In this paper, we discuss the interactions and incompatibilities between these two properties. We start from the usual definition of MR for AE schemes from Rogaway and Shrimpton, and argue that it may be overly demanding in the presence of leakages. As a result, we turn back to the basic security requirements for AE: ciphertext integrity (INT-CTXT) and CPA security, and propose to focus on a new notion of CIML security, which is an extension of INT-CTXT in the presence of misuse and leakages. We discuss the extent to which CIML security is offered by previous proposals of MR AE schemes, conclude by the negative, and propose two new efficient CIML-secure AE schemes: the DTE scheme offers security in the standard model, while the DCE scheme offers security in the random oracle model, but comes with some efficiency benefits. On our way, we observe that these constructions are not trivial, and show for instance that the composition of a LR MAC and a LR encryption scheme, while providing a (traditional) MR AE scheme, can surprisingly lose the MR property in the presence of leakages and does not achieve CIML security. Eventually, we show the LR CPA security of DTE and DCE.

Malluhi, Qutaibah M., Shikfa, Abdullatif, Trinh, Viet Cuong.  2017.  A Ciphertext-Policy Attribute-Based Encryption Scheme With Optimized Ciphertext Size And Fast Decryption. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :230–240.

We address the problem of ciphertext-policy attribute-based encryption with fine access control, a cryptographic primitive which has many concrete application scenarios such as Pay-TV, e-Health, Cloud Storage and so on. In this context we improve on previous LSSS based techniques by building on previous work of Hohenberger and Waters at PKC'13 and proposing a construction that achieves ciphertext size linear in the minimum between the size of the boolean access formula and the number of its clauses. Our construction also supports fast decryption. We also propose two interesting extensions: the first one aims at reducing storage and computation at the user side and is useful in the context of lightweight devices or devices using a cloud operator. The second proposes the use of multiple authorities to mitigate key escrow by the authority.

Bahrami, H., Hajsadeghi, K.  2015.  Circuit design to improve security of telecommunication devices. 2015 IEEE Conference on Technologies for Sustainability (SusTech). :171-175.

Security in mobile handsets of telecommunication standards such as GSM, Project 25 and TETRA is very important, especially when governments and military forces use handsets and telecommunication devices. Although telecommunication can be made quite secure by using encryption, coding, tunneling and exclusive channels, attackers create new ways to bypass these measures without the knowledge of the legitimate user. In this paper we introduce a new, simple and economical circuit to warn the user in cases where the message is not encrypted because of manipulation by attackers or accidental damage. This circuit not only consumes very little power but is also designed to keep telecommunication devices secure and user-friendly. Warning the user promotes best practice in using telecommunication devices without wasting time and energy on fault detection.

Shamsi, Kaveh, Li, Meng, Meade, Travis, Zhao, Zheng, Pan, David Z., Jin, Yier.  2017.  Circuit Obfuscation and Oracle-guided Attacks: Who Can Prevail? Proceedings of the on Great Lakes Symposium on VLSI 2017. :357–362.
This paper provides a systematization of knowledge in the domain of integrated circuit protection through obfuscation with a focus on the recent Boolean satisfiability (SAT) attacks. The study systematically combines real-world IC reverse engineering reports, experimental results using the most recent oracle-guided attacks, and concepts in machine-learning and cryptography to draw a map of the state-of-the-art of IC obfuscation and future challenges and opportunities.
Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Sean Smith, Dartmouth College.  2013.  Circumvention of Security: Good Users Do Bad Things.

Conventional wisdom is that the textbook view describes reality, and only bad people (not good people trying to get their jobs done) break the rules. And yet it doesn't, and good people circumvent.

Published in IEEE Security & Privacy, volume 11, issue 5, September–October 2013.

Lee, Seung Ji.  2016.  Citywide Management of Media Facades: Case Study of Seoul City. Proceedings of the 3rd Conference on Media Architecture Biennale. :11:1–11:4.

Due to the evolution of LED lighting and information technology, the application of media facades has expanded rapidly. Despite their positive aspects, the growth of media facades can cause light pollution and add to the visual confusion of the city. This study analyzes the case of Seoul, which implements citywide management of media facades under a master plan. Through this, the study investigates the meaning of citywide management of media facades installed on individual buildings. Firstly, it surveys the conditions of media facades in Seoul City; the identified problems demonstrate the necessity of citywide management of media facades. Secondly, it analyzes the progress of Seoul City's media facade regulation. The management target has shifted from curbing the indiscreet installation of individual media facades to inducing attractive media facades for Seoul City as a whole. For this, the 'Seoul Media Facade Management MasterPlan' was drafted by the Seoul government to establish citywide management. Thirdly, it analyzes the MasterPlan itself. The management tools in the MasterPlan are classified into regional management, elemental management, and specialization plans, each with detailed approaches. Finally, the study discusses the meaning of citywide management in three respects: media facades are a cultural asset of the city, regional differentiation is adopted, and continuous maintenance of both the hardware and the content is important. Media facades utilizing building exteriors are recognized as an element of the urban landscape, securing publicness, contributing to the vitalization of the area, and ultimately providing pleasure to the citizens.

Tunc, C., Hariri, S., Montero, F. D. L. P., Fargo, F., Satam, P..  2015.  CLaaS: Cybersecurity Lab as a Service – Design, Analysis, and Evaluation. 2015 International Conference on Cloud and Autonomic Computing. :224–227.

The explosive growth of IT infrastructures, cloud systems, and the Internet of Things (IoT) has resulted in complex systems that are extremely difficult to secure and protect against cyberattacks, which are growing exponentially in both complexity and number. Overcoming these cybersecurity challenges requires cybersecurity environments that support the development of innovative cybersecurity algorithms and the evaluation of experiments. In this paper, we present the design, analysis, and evaluation of the Cybersecurity Lab as a Service (CLaaS), which offers virtual cybersecurity experiments as a cloud service that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. We exploit cloud computing systems and virtualization technologies to provide isolated, virtual cybersecurity experiments for vulnerability exploitation, launching cyberattacks, hardening cyber resources and services, and so on. We also present a performance evaluation and the effectiveness of CLaaS experiments used by students.

Jain, Bhushan, Tsai, Chia-Che, Porter, Donald E..  2017.  A Clairvoyant Approach to Evaluating Software (In)Security. Proceedings of the 16th Workshop on Hot Topics in Operating Systems. :62–68.

Nearly all modern software has security flaws, either known or unknown to the users. However, metrics for evaluating software security (or lack thereof) are noisy at best. Common evaluation methods include counting the past vulnerabilities of the program, or comparing the size of the Trusted Computing Base (TCB), measured in lines of code (LoC) or binary size. Other than deleting large swaths of code from a project, it is difficult to assess whether a code change decreased the likelihood of a future security vulnerability. Developers need a practical, constructive way of evaluating security. This position paper argues that we actually have all the tools needed to design a better, empirical method of security evaluation. We discuss related work that estimates the severity and vulnerability of certain attack vectors based on code properties that can be determined via static analysis. This paper proposes a grand, unified model that can predict the risk and severity of vulnerabilities in a program. Our prediction model uses machine learning to correlate these code features of open-source applications with the history of vulnerabilities reported in the CVE (Common Vulnerabilities and Exposures) database. Based on this model, one can incorporate an analysis into the standard development cycle that predicts whether the code is becoming more or less prone to vulnerabilities.