Biblio

Found 142 results

Filters: Keyword is Tools
2019-09-23
Zheng, N., Alawini, A., Ives, Z. G.  2019.  Fine-Grained Provenance for Matching ETL. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :184–195.
Data provenance tools capture the steps used to produce analyses. However, scientists must choose among workflow provenance systems, which allow arbitrary code but only track provenance at the granularity of files; provenance APIs, which provide tuple-level provenance, but incur overhead in all computations; and database provenance tools, which track tuple-level provenance through relational operators and support optimization, but support a limited subset of data science tasks. None of these solutions are well suited for tracing errors introduced during common ETL, record alignment, and matching tasks, for data types such as strings, images, etc. Scientists need new capabilities to identify the sources of errors, find why different code versions produce different results, and identify which parameter values affect output. We propose PROVision, a provenance-driven troubleshooting tool that supports ETL and matching computations and traces extraction of content within data objects. PROVision extends database-style provenance techniques to capture equivalences, support optimizations, and enable selective evaluation. We formalize our extensions, implement them in the PROVision system, and validate their effectiveness and scalability for common ETL and matching tasks.
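To make the tuple-level idea concrete, here is a minimal sketch, entirely ours rather than PROVision's code, of database-style provenance flowing through ETL-style operators: every output tuple carries the set of source-tuple identifiers it was derived from, which is the information needed to trace a faulty output back to its inputs.

```python
# Toy tuple-level provenance through ETL-style operators
# (a sketch of the general technique, not the PROVision system).

def scan(name, rows):
    # Tag every source tuple with a provenance token (relation, index).
    return [(row, {(name, i)}) for i, row in enumerate(rows)]

def pmap(f, tagged):
    # A map keeps each tuple's provenance unchanged.
    return [(f(row), prov) for row, prov in tagged]

def pjoin(left, right, lkey, rkey):
    # A joined tuple depends on both inputs: union their provenance sets.
    return [((l, r), lp | rp)
            for l, lp in left for r, rp in right if lkey(l) == rkey(r)]

people = pmap(lambda p: (p[0].upper(), p[1]),
              scan("people", [("ann", 1), ("bob", 2)]))
codes = scan("codes", [(1, "US"), (2, "FR")])
for row, prov in pjoin(people, codes, lkey=lambda p: p[1], rkey=lambda c: c[0]):
    print(row, sorted(prov))
# (('ANN', 1), (1, 'US')) [('codes', 0), ('people', 0)]
# (('BOB', 2), (2, 'FR')) [('codes', 1), ('people', 1)]
```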
2019-10-22
Deb Nath, Atul Prasad, Bhunia, Swarup, Ray, Sandip.  2018.  ArtiFact: Architecture and CAD Flow for Efficient Formal Verification of SoC Security Policies. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :411–416.
Verification of security policies represents one of the most critical, complex, and expensive steps of modern SoC design validation. SoC security policies are typically implemented as part of the functional design flow, with a diverse set of protection mechanisms sprinkled across various IP blocks. An obvious upshot is that their verification requires comprehension and analysis of the entire system, representing a scalability bottleneck for verification tools. The scale and complexity of industrial SoCs are far beyond the analysis capacity of state-of-the-art formal tools; even simulation-based security verification is severely limited in effectiveness because of the need to exercise subtle corner cases across the entire system. We address this challenge by developing a novel security architecture that accounts for verification needs from the ground up. Our framework, ArtiFact, provides an alternative architecture for security policy implementation that exploits a flexible, centralized infrastructure IP and enables scalable, streamlined verification of these policies. With our architecture, verification of system-level security policies reduces to analysis of this single IP and its interfaces, enabling off-the-shelf formal tools to successfully verify these policies. We introduce a CAD flow that supports both formal and dynamic (simulation-based) verification and is built on top of such off-the-shelf tools. Our approach reduces verification time by over 62X and bug detection time by 34X for illustrative policies.
2019-09-23
Chen, W., Liang, X., Li, J., Qin, H., Mu, Y., Wang, J.  2018.  Blockchain Based Provenance Sharing of Scientific Workflows. 2018 IEEE International Conference on Big Data (Big Data). :3814–3820.
In a research community, sharing the provenance of scientific workflows can enhance distributed research cooperation, verification of experiment reproducibility, and the avoidance of repeated experiments. Considering that scientists in such a community are often loosely connected and geographically distributed, traditional centralized provenance-sharing architectures have shown disadvantages in trustworthiness, reliability, and efficiency. They also make it difficult to protect the rights and interests of data providers. All of this has largely hindered the willingness of distributed scientists to share their workflow provenance. Given the advantages of blockchain in decentralization, trustworthiness, and high reliability, we propose an approach to sharing scientific workflow provenance based on blockchain in a research community. To make the approach practical, provenance is handled on-chain while the original data is delivered off-chain. We design a block structure that supports efficient provenance storage and retrieval, and we provide an algorithm for scientists to search workflow segments within the provenance as well as an algorithm for experiment backtracking; together these enhance the sharing of experiment results and save computing resources and time by avoiding repeated experiments as far as possible. Analyses show that the approach is efficient and effective.
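The on-chain/off-chain split can be illustrated with a small sketch; the block layout and field names below are our own assumptions for illustration, not the block structure designed in the paper. Provenance records and content hashes live on-chain, while the original data is stored and delivered off-chain:

```python
# Hedged sketch: provenance on-chain, raw data referenced only by hash.
import hashlib, json, time
from dataclasses import dataclass, field, asdict

def sha256(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

@dataclass
class ProvRecord:
    workflow_id: str
    step: str          # e.g. "align_reads" (hypothetical step name)
    inputs: list       # content hashes of off-chain inputs
    outputs: list      # content hashes of off-chain outputs
    agent: str         # who ran the step

@dataclass
class Block:
    prev_hash: str     # hash link to the previous block
    records: list      # provenance records stored on-chain
    timestamp: float = field(default_factory=time.time)

    def hash(self) -> str:
        return sha256(json.dumps(asdict(self), sort_keys=True).encode())

# Off-chain: store the data itself; on-chain: keep only its content hash.
raw = b"...original experiment output..."
rec = ProvRecord("wf-42", "align_reads", inputs=[], outputs=[sha256(raw)],
                 agent="alice")
genesis = Block(prev_hash="0" * 64, records=[asdict(rec)])
print(genesis.hash())
```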
2019-01-21
Kafash, S. H., Giraldo, J., Murguia, C., Cárdenas, A. A., Ruths, J.  2018.  Constraining Attacker Capabilities Through Actuator Saturation. 2018 Annual American Control Conference (ACC). :986–991.
For LTI control systems, we provide mathematical tools, in terms of Linear Matrix Inequalities, for computing outer ellipsoidal bounds on the reachable sets that attacks can induce in the system when they are subject to the physical limits of the actuators. Next, for a given set of dangerous states, i.e., states that (if reached) compromise the integrity or safe operation of the system, we provide tools for designing new artificial limits on the actuators (smaller than their physical bounds) such that the new ellipsoidal bounds (and thus the new reachable sets) are as large as possible (in terms of volume) while guaranteeing that the dangerous states are not reachable. This ensures that the new bounds cut as little as possible from the original reachable set, minimizing the loss of system performance. Computer simulations using a platoon of vehicles are presented to illustrate the performance of our tools.
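For a flavor of the LMIs involved, consider the standard ellipsoidal bound for peak-bounded inputs (in the style of Boyd et al.; the paper's exact formulation may differ). For x_{k+1} = A x_k + B w_k, where each attack input satisfies w_k^T w_k <= wbar, the ellipsoid {x : x^T P x <= 1} contains the reachable set from the origin whenever, for some a in (0,1),

```latex
\begin{bmatrix}
  a\,P - A^{\top} P A & -A^{\top} P B \\
  -B^{\top} P A       & \frac{1-a}{\bar{w}}\, I - B^{\top} P B
\end{bmatrix} \succeq 0,
\qquad P \succ 0 .
```

Fixing a makes the condition linear in P, so it can be handled by standard semidefinite programming solvers.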
2018-12-03
Ma, Y.  2018.  Constructing Supply Chains in Open Source Software. 2018 IEEE/ACM 40th International Conference on Software Engineering: Companion (ICSE-Companion). :458–459.
The supply chain is an extremely successful way to cope with the risk posed by distributed decision making in product sourcing and distribution. While open source software involves similarly distributed decision making, and code and information flows similar to those in ordinary supply chains, the actual networks necessary to quantify and communicate risks in software supply chains have not been constructed at large scale. This work proposes to close this gap by measuring dependency, code reuse, and knowledge flow networks in open source software. In preliminary work, we have developed suitable tools and methods that rely on public version control data to measure and compare these networks for R-language and Ember.js packages. We propose ways to calculate the three networks for the entirety of public software, to evaluate their accuracy, and to provide public infrastructure for building risk assessment and mitigation tools for various individual and organizational participants in open source software. We hope that this infrastructure will contribute to a more predictable experience with OSS and lead to its even wider adoption.
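As a toy version of the dependency network in question (the package data below is invented; the author's actual tooling mines public version-control data), one can already estimate which packages are exposed when a dependency ships a flaw:

```python
# Toy dependency network: edges point from a package to what it depends on;
# the set of transitive dependents approximates a flaw's "blast radius".
import networkx as nx

deps = {                          # hypothetical manifest metadata
    "app-a":    ["left-pad", "http-lib"],
    "app-b":    ["http-lib"],
    "http-lib": ["left-pad"],
    "left-pad": [],
}

g = nx.DiGraph([(pkg, d) for pkg, ds in deps.items() for d in ds])

# Who is exposed if "left-pad" ships a vulnerability? Every package that
# can reach it through dependency edges.
exposed = {n for n in g if n != "left-pad" and nx.has_path(g, n, "left-pad")}
print(sorted(exposed))            # ['app-a', 'app-b', 'http-lib']
```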
2019-09-23
Yazici, I. M., Karabulut, E., Aktas, M. S.  2018.  A Data Provenance Visualization Approach. 2018 14th International Conference on Semantics, Knowledge and Grids (SKG). :84–91.
In recent years, data provenance has created an emerging requirement for technologies that enable end users to access, evaluate, and act on the provenance of data. In the era of Big Data, the amount of data created by corporations around the world grows each year; in both the social media and e-Science domains, for example, data is growing at an unprecedented rate. As the data has grown, information on its origin and lifecycle has grown with it. This, in turn, requires technologies that enable the clarification and interpretation of data through data provenance. This study proposes methodologies for visualizing provenance data compatible with the W3C PROV-O specification. The visualizations are achieved by summarization and comparison of the provenance. We facilitated the testing of these methodologies by providing a prototype that extends an existing open-source visualization tool. We discuss the usability of the proposed methodologies in an experimental study; our initial results show that the proposed approach is usable, and its processing overhead is negligible.
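As a hedged sketch of what summarizing PROV-O data can look like (the graph content is invented, and this is far simpler than the paper's prototype), one can load provenance triples with rdflib and aggregate edge counts, the kind of quantity a summary visualization would plot:

```python
# Minimal PROV-O summarization sketch with rdflib (invented graph content).
from collections import Counter
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.table1, RDF.type, PROV.Entity))
g.add((EX.clean, RDF.type, PROV.Activity))
g.add((EX.table1, PROV.wasGeneratedBy, EX.clean))
g.add((EX.table1, PROV.wasDerivedFrom, EX.raw1))

# Summarize: how many edges of each provenance relation does the graph hold?
kinds = Counter(p.split("#")[-1] for s, p, o in g
                if p in (PROV.wasGeneratedBy, PROV.wasDerivedFrom))
print(dict(kinds))   # {'wasGeneratedBy': 1, 'wasDerivedFrom': 1}
```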
2019-02-14
Facon, A., Guilley, S., Lec'Hvien, M., Schaub, A., Souissi, Y.  2018.  Detecting Cache-Timing Vulnerabilities in Post-Quantum Cryptography Algorithms. 2018 IEEE 3rd International Verification and Security Workshop (IVSW). :7–12.
When implemented on real systems, cryptographic algorithms are vulnerable to attacks that observe their execution behavior, such as cache-timing attacks. Designing protected implementations must be done with knowledge and validation tools as early as possible in the development cycle. In this article we propose a methodology to assess the robustness of the candidates in the NIST post-quantum standardization project against cache-timing attacks. To this end we have developed a dedicated vulnerability research tool. It performs a static analysis with taint propagation of sensitive variables across the source code and detects leakage patterns. We use it to assess the security of the NIST post-quantum cryptography project submissions. Our results show that more than 80% of the analyzed implementations have at least one potential flaw, and three submissions total more than 1000 reported flaws each. Finally, this comprehensive study of the candidates' security allows us to identify the most frequent weaknesses among candidates and how they might be fixed.
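The leakage patterns in question can be shown with a toy taint propagation (ours, far simpler than the authors' tool): secret-dependent branches and secret-dependent table indices are the two classic sources of cache-timing leakage.

```python
# Toy static taint propagation over a tiny pseudo-IR: flag branches and
# array indices that depend on secret data.
SECRET = {"key"}

program = [                          # (op, dst, srcs)
    ("assign", "i", ["key"]),        # i = key & 0xff
    ("load",   "t", ["sbox", "i"]),  # t = sbox[i]  <- secret-dependent index
    ("assign", "j", ["msg"]),        # j = msg (public)
    ("branch", None, ["t"]),         # if (t) ...   <- secret-dependent branch
]

tainted = set(SECRET)
for op, dst, srcs in program:
    dep = any(s in tainted for s in srcs)
    if op == "load" and srcs[1] in tainted:
        print("cache-timing risk: secret-dependent table index", srcs)
    if op == "branch" and dep:
        print("cache-timing risk: secret-dependent branch", srcs)
    if dst and dep:
        tainted.add(dst)             # taint flows to the destination
print("tainted:", sorted(tainted))   # ['i', 'key', 't']
```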
2018-12-10
Zhu, J., Liapis, A., Risi, S., Bidarra, R., Youngblood, G. M.  2018.  Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation. 2018 IEEE Conference on Computational Intelligence and Games (CIG). :1–8.
Growing interest in eXplainable Artificial Intelligence (XAI) aims to make AI and machine learning more understandable to human users. However, most existing work focuses on new algorithms rather than on usability, practical interpretability, and efficacy for real users. In this vision paper, we propose a new research area of eXplainable AI for Designers (XAID), aimed specifically at game designers. By focusing on a specific user group, their needs, and their tasks, we propose a human-centered approach for facilitating game designers' co-creation with AI/ML techniques through XAID. We illustrate our initial XAID framework through three use cases, which require an understanding of both the innate properties of the AI techniques and the users' needs, and we identify key open challenges.
2019-10-14
Guo, Y., Chen, L., Shi, G.  2018.  Function-Oriented Programming: A New Class of Code Reuse Attack in C Applications. 2018 IEEE Conference on Communications and Network Security (CNS). :1–9.
Control-hijacking attacks include code injection attacks and code reuse attacks. In recent years, with the emergence of the data-execution prevention (DEP) defense mechanism, code reuse attacks have become mainstream, examples being return-oriented programming (ROP), jump-oriented programming (JOP), and counterfeit object-oriented programming (COOP). A series of defensive measures have been proposed in response, such as DEP, address space layout randomization (ASLR), and both coarse-grained and fine-grained control-flow integrity (CFI). In this paper, we propose a new attack called function-oriented programming (FOP) to construct malicious program behavior. FOP takes advantage of the existing functions of a C program to mount an attack. We propose concrete algorithms for constructing FOP gadgets and build a tool to identify them. FOP can successfully bypass coarse-grained CFI, and it can also bypass some existing fine-grained CFI technologies, such as shadow stacks. We show a real-world attack against the proftpd 1.3.0 server in a Linux x64 environment. We believe the FOP attack will encourage people to come up with more effective defense measures.
2019-01-21
Nicho, M., Oluwasegun, A., Kamoun, F.  2018.  Identifying Vulnerabilities in APT Attacks: A Simulated Approach. 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–4.
This research aims to identify some vulnerabilities of advanced persistent threat (APT) attacks using multiple simulated attacks in a virtualized environment. Our experimental study shows that while updating the antivirus software and the operating system with the latest patches may help in mitigating APTs, APT threat vectors could still infiltrate the strongest defenses. Accordingly, we highlight some critical areas of security concern that need to be addressed.
2018-12-03
Shearon, C. E.  2018.  IPC-1782 standard for traceability of critical items based on risk. 2018 Pan Pacific Microelectronics Symposium (Pan Pacific). :1–3.
Traceability has grown from being a specialized need for certain safety-critical segments of the industry to a recognized value-add tool for the industry as a whole, one that can be utilized for processes from manual to fully automated, end to end throughout the supply chain. The perception persists that traceability data collection is a burden that provides value only when the rarest and most disastrous of events take place. Disparate standards have evolved in the industry, mainly dictated by large OEM companies in the market, creating confusion as a multitude of requirements and definitions proliferates. The intent of the IPC-1782 project is to bring the whole principle of traceability up to date and enable business to move faster, increase revenue, increase productivity, and decrease costs as a result of increased trust. Traceability, as defined in this standard, will represent the most effective quality tool available, becoming an intrinsic part of best-practice operations, with the encouragement of automated data collection from existing manufacturing systems in a way that works well with Industry 4.0. The standard integrates quality, reliability, product safety, predictive (routine, preventative, and corrective) maintenance, throughput, manufacturing, engineering, and supply-chain data, reducing cost of ownership as well as ensuring timeliness and accuracy all the way from a finished product back through to the initial materials and granular attributes about the processes along the way. The goal of this standard is to create a single expandable and extendable data structure that can be adopted for all levels of traceability and enables information to be exchanged easily, as appropriate, across many industries. The scope includes support for the most demanding instances of detail and integrity, such as those required by critical safety systems, all the way through to situations where only basic traceability is required, such as for simple consumer products. A key driver for the adoption of the standard is the ability to find a relevant and achievable level of traceability that exactly meets the requirements following a risk assessment of the business. The wealth of data accessible from traceability for analysis (e.g., Big Data) can easily and quickly yield information that raises expectations of very significant quality and performance improvements, provides protection against the costs of issues in the market, and supplies very timely information to regulatory bodies as well as to consumers/customers as appropriate. This information can also be used to quickly raise yields, drive product innovation that resonates with consumers, and help shape development tests and design requirements that are meaningful to the marketplace. IPC-1782 can thus be leveraged to create the best value of component traceability for a business.
2019-02-14
Jenkins, J., Cai, H.  2018.  Leveraging Historical Versions of Android Apps for Efficient and Precise Taint Analysis. 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR). :265–269.
Today, computing on various Android devices is pervasive. However, growing security vulnerabilities and attacks in the Android ecosystem constitute various threats through user apps. Taint analysis is a common technique for defending against these threats, yet it suffers from challenges in simultaneously attaining practical scalability and effectiveness. This paper presents a novel approach to fast and precise taint checking, called incremental taint analysis, that exploits the evolving nature of Android apps. The analysis narrows down the search space of taint checking from an entire app, as conventionally addressed, to only the parts of the program that differ from previous versions. This technique improves the overall efficiency of checking multiple versions of an app as it evolves. We have implemented the technique in a tool prototype, EVOTAINT, and evaluated our analysis by applying it to real-world evolving Android apps. Our preliminary results show that the incremental approach greatly reduces the cost of taint analysis, by 78.6% on average, without sacrificing analysis effectiveness, relative to a representative precise taint analysis as the baseline.
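A minimal sketch of the incremental idea (our simplification, not EVOTAINT's actual implementation): fingerprint each method body and re-run the expensive taint analysis only where the fingerprint changed between versions.

```python
# Incremental scoping sketch: re-analyze only methods whose body changed.
import hashlib

def digest(body: str) -> str:
    return hashlib.sha256(body.encode()).hexdigest()

def changed_methods(old: dict, new: dict) -> set:
    """old/new map method name -> body (e.g. decompiled IR); hypothetical data."""
    return {m for m, b in new.items() if digest(b) != digest(old.get(m, ""))}

v1 = {"onCreate": "id = getDeviceId();", "render": "draw();"}
v2 = {"onCreate": "id = getDeviceId(); send(id);", "render": "draw();"}

to_analyze = changed_methods(v1, v2)
print(to_analyze)   # {'onCreate'} -- 'render' is skipped, reusing v1 results
```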
2019-01-21
Tsuda, Y., Nakazato, J., Takagi, Y., Inoue, D., Nakao, K., Terada, K.  2018.  A Lightweight Host-Based Intrusion Detection Based on Process Generation Patterns. 2018 13th Asia Joint Conference on Information Security (AsiaJCIS). :102–108.
Advanced persistent threats (APTs) have been considered globally as a serious social problem since the 2010s. Adversaries behind these threats first penetrate targeted organizations through a backdoor opened with drive-by-download attacks, malicious e-mail attachments, and the like. After intruding, they typically execute benign applications (e.g., OS built-in commands, management tools published by OS vendors) to investigate the networks of the targeted organizations. Once they have penetrated a network, it is therefore difficult to rapidly detect these malicious activities using only anti-virus software or network-based intrusion detection systems. Meanwhile, enterprise networks are generally well managed: network administrators have a good grasp of the installed applications and the applications employees routinely use in their daily work. To find anomalous behavior on well-managed networks, it is thus effective to observe changes in how applications are executed. In this paper, we propose a lightweight host-based intrusion detection system that uses process generation patterns. Our system periodically collects lists of active processes from each host and constructs process trees from these lists. It then detects anomalous processes from the process trees, considering parent-child relationships, execution sequences, and process lifetimes. We evaluated the system in our organization. It collected 2,403,230 process paths in total from 498 hosts over two months and extracted 38 anomalous processes. Among them, one PowerShell process was also detected by anti-virus software running in our organization, and our system filtered out the other 18 PowerShell processes, which were used for maintenance of our network.
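The core of the detection step can be condensed into a few lines (a sketch under our own simplifications; the paper also weighs execution sequences and process lifetimes): learn the parent-child process pairs seen during normal operation and flag pairs outside that baseline.

```python
# Baseline of parent->child process pairs observed during normal operation
# (invented data), then flag pairs that never occurred in the baseline.
baseline = [
    ("services.exe", "svchost.exe"),
    ("explorer.exe", "winword.exe"),
]
known_pairs = set(baseline)

today = [
    ("explorer.exe", "winword.exe"),
    ("winword.exe", "powershell.exe"),   # Office spawning a shell: unusual
]

for parent, child in today:
    if (parent, child) not in known_pairs:
        print(f"anomaly: {parent} -> {child}")
```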
2019-02-08
Yi, F., Cai, H. Y., Xin, F. Z.  2018.  A Logic-Based Attack Graph for Analyzing Network Security Risk Against Potential Attack. 2018 IEEE International Conference on Networking, Architecture and Storage (NAS). :1–4.
In this paper, we present LAPA, a framework for automatically analyzing network security risk and generating attack graphs for potential attacks. The key novelty of our work is that we represent the properties of networks and zero-day vulnerabilities and use a logical reasoning algorithm to generate potential attack paths, determining whether an attacker can exploit these vulnerabilities. To demonstrate its efficacy, we have implemented the LAPA framework and compared it with three previous network vulnerability analysis methods. Our analysis results have a low rate of false negatives and lower processing time, owing to the worst-case assumption and logical property specification and reasoning. We have also conducted a detailed study of the efficiency of attack graph generation with different values of attack path number, attack path depth, and network size, which most affect the processing time. We estimate that LAPA can produce high-quality results for a large portion of networks.
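Logical reasoning over network properties can be sketched as simple forward chaining (facts and rules invented for illustration; LAPA's actual algorithm and property language are richer):

```python
# Toy forward chaining for attack graph generation: derive new attacker
# privileges from network facts until a fixpoint is reached.
facts = {("access", "web"), ("vuln", "web"),
         ("flow", "web", "db"), ("vuln", "db")}

def step(f):
    new = set()
    for t in f:
        if t[0] == "vuln" and ("access", t[1]) in f:
            new.add(("root", t[1]))       # exploit a local vulnerability
        if t[0] == "flow" and ("root", t[1]) in f:
            new.add(("access", t[2]))     # pivot across an allowed flow
    return new - f

while True:
    derived = step(facts)
    if not derived:
        break
    facts |= derived

print(("root", "db") in facts)   # True: an attack path web -> db exists
```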
2019-08-26
Izurieta, C., Kimball, K., Rice, D., Valentien, T.  2018.  A Position Study to Investigate Technical Debt Associated with Security Weaknesses. 2018 IEEE/ACM International Conference on Technical Debt (TechDebt). :138–142.
Context: Managing technical debt (TD) associated with potential security breaches found during design can lead to catching vulnerabilities (i.e., exploitable weaknesses) earlier in the software lifecycle, thus anticipating TD principal and interest that can have decidedly negative impacts on businesses. Goal: To establish an approach to help assess TD associated with security weaknesses by leveraging the Common Weakness Enumeration (CWE) and its scoring mechanism, the Common Weakness Scoring System (CWSS). Method: We present a position study with a five-step approach employing the Quamoco quality model to operationalize the scoring of architectural CWEs. Results: We use static analysis to detect design-level CWEs, calculate their CWSS scores, and provide a relative ranking of weaknesses that helps practitioners identify the highest risks in an organization, with a potential to impact TD. Conclusion: CWSS is a community-agreed-upon method that should be leveraged to help inform the ranking of security-related TD items.
2019-09-26
Torkura, K. A., Sukmana, M. I. H., Meinig, M., Cheng, F., Meinel, C., Graupner, H.  2018.  A Threat Modeling Approach for Cloud Storage Brokerage and File Sharing Systems. NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium. :1–5.
Cloud storage brokerage systems abstract cloud storage complexities by mediating technical and business relationships between cloud stakeholders while providing value-added services. This, however, raises security challenges pertaining to the integration of disparate components with sometimes conflicting security policies, as well as architectural complexities. Assessing the security risks of these challenges is therefore important for Cloud Storage Brokers (CSBs). In this paper, we present a threat modeling schema to analyze and identify threats and risks in cloud brokerage systems. Our threat modeling schema works by generating attack trees, attack graphs, and data flow diagrams that represent the interconnections between identified security risks. Our proof-of-concept implementation employs the Common Configuration Scoring System (CCSS) to support the threat modeling schema, since current schemes lack the security metrics that are imperative for comprehensive risk assessment. We demonstrate the efficiency of our proposal by devising CCSS base scores for two attacks commonly launched against cloud storage systems: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then combined with CVSS-based metrics to assign probabilities in an attack tree. Thus, we show the possibility of combining CVSS and CCSS for comprehensive threat modeling, and we show that our schema can be used to improve cloud security.
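The final step, combining scores into attack tree probabilities, can be sketched as follows (the tree shape and the numbers are invented; deriving leaf probabilities from CVSS/CCSS scores is the paper's contribution and is not shown here):

```python
# Attack tree evaluation: OR = at least one child succeeds, AND = all
# children succeed (independence assumed for this sketch).
def prob(node):
    kind, children = node.get("op"), node.get("children", [])
    if not children:
        return node["p"]              # leaf: probability from a score
    ps = [prob(c) for c in children]
    out = 1.0
    if kind == "AND":
        for p in ps:
            out *= p
        return out
    for p in ps:                      # OR: 1 - product of failures
        out *= (1 - p)
    return 1 - out

tree = {"op": "OR", "children": [
    {"p": 0.3},                                           # enumeration attack
    {"op": "AND", "children": [{"p": 0.5}, {"p": 0.4}]},  # exploitation chain
]}
print(round(prob(tree), 3))   # 0.44
```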
2019-01-16
Rodríguez, R. J., Martín-Pérez, M., Abadía, I.  2018.  A tool to compute approximation matching between windows processes. 2018 6th International Symposium on Digital Forensic and Security (ISDFS). :1–6.
Finding identical digital objects (or artifacts) during a forensic analysis is commonly achieved by means of cryptographic hash functions, such as MD5, SHA-1, or SHA-256, to name a few. However, these functions exhibit the avalanche effect, which guarantees that if an input is changed slightly the output changes significantly. Hence, they are unsuitable for typical digital forensics scenarios in which a forensic memory image from a likely compromised machine must be analyzed. Such a memory image file contains a snapshot of the processes (instances of executable files) that were executing when the dump was taken. Processes, however, are relocated in memory and contain dynamic data that depend on the current execution and environmental conditions; therefore, comparing cryptographic hash values of different processes from the same executable file will always fail. Bytewise approximate matching algorithms can help in these scenarios, since they provide a similarity measurement in the range [0,1] between similar inputs instead of a yes/no answer (i.e., a value in {0,1}). In this paper, we introduce ProcessFuzzyHash, a Volatility plugin that enables us to compute approximate hash values of processes contained in a Windows memory dump.
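For readers unfamiliar with approximate matching, the following sketch uses the ssdeep Python bindings (pip install ssdeep) on ordinary byte buffers; ProcessFuzzyHash applies the same kind of hashing to process memory extracted through Volatility, which is not shown here.

```python
# Fuzzy hashing: near-identical inputs get a high similarity score,
# where a cryptographic hash would simply differ.
import ssdeep

a = b"MZ...code of process A, mostly identical..." * 100
b = b"MZ...code of process A, mostli identical..." * 100   # tiny change

ha, hb = ssdeep.hash(a), ssdeep.hash(b)
print(ssdeep.compare(ha, hb))   # similarity 0-100; high for near-duplicates
# hashlib.sha256(a) and hashlib.sha256(b) would be completely different.
```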
2019-04-05
Huang, M. Chiu, Wan, Y., Chiang, C., Wang, S.  2018.  Tor Browser Forensics in Exploring Invisible Evidence. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :3909–3914.
Given the high frequency of information security incidents, the feeling that we may soon become innocent victims of such events may be justified. Perpetrators of information security offenses take advantage of several methods to leave no evidence of their crimes, and this pattern of hiding tracks has caused difficulties for investigators searching for digital evidence. Use of the onion router (Tor) is a common way for criminals to conceal their identities and tracks. This paper aims to explain the composition and operation of onion routing; we conduct a forensic experiment to detect the use of the Tor browser and compare several browser modes, including incognito and normal. Through the experimental method described in this paper, investigators can learn to identify perpetrators of Internet crimes, which will be helpful in future digital forensics endeavors.
2019-05-08
Barni, M., Stamm, M. C., Tondi, B.  2018.  Adversarial Multimedia Forensics: Overview and Challenges Ahead. 2018 26th European Signal Processing Conference (EUSIPCO). :962–966.

In recent decades, a significant research effort has been devoted to the development of forensic tools for retrieving information and detecting possible tampering in multimedia documents. A number of counter-forensic tools have been developed as well, in order to impede a correct analysis. Such tools are often very effective due to the vulnerability of multimedia forensics tools, which are not designed to work in an adversarial environment. In this scenario, developing forensic techniques capable of good performance even in the presence of an adversary aiming to impede the analysis is becoming a necessity. This turns out to be a difficult task, given the weakness of the traces on which forensic analysis usually relies. The goal of this paper is to provide an overview of the advances made over the last decade in the field of adversarial multimedia forensics. We first consider the viewpoints of the forensic analyst and the attacker independently, then review some of the attempts to take both perspectives into account simultaneously by resorting to game theory. Finally, we discuss the most pressing open problems and outline possible paths for future research.

2019-03-15
Noor, U., Anwar, Z., Rashid, Z.  2018.  An Association Rule Mining-Based Framework for Profiling Regularities in Tactics Techniques and Procedures of Cyber Threat Actors. 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE). :1–6.

Tactics, Techniques, and Procedures (TTPs) in the cyber domain are an important form of threat information that describes the behavior and attack patterns of an adversary. Timely identification of associations between TTPs can lead to effective strategies for diagnosing Cyber Threat Actors (CTAs) and their attack vectors. This study profiles the prevalence and regularities in the TTPs of CTAs. We developed a machine learning-based framework that takes Cyber Threat Intelligence (CTI) documents as input, selects the most prevalent TTPs with high information gain as features, and, based on them, mines interesting regularities between TTPs using Association Rule Mining (ARM). We evaluated the proposed framework with publicly available TTP-based CTI documents. The results show that 28 TTPs are more prevalent than the others. Our system identified 155 interesting association rules among the TTPs of CTAs. A summary of these rules is given to effectively investigate threats in the network.
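The mining step itself is standard ARM and can be sketched in a few lines (the TTP observations below are invented, and this tiny hand-rolled miner stands in for whatever implementation the authors used):

```python
# Minimal association rule mining over per-actor TTP sets: keep rules
# {A} -> {B} whose support and confidence clear fixed thresholds.
from itertools import permutations

observations = [                     # TTPs observed per threat actor
    {"spearphishing", "powershell"},
    {"spearphishing", "powershell", "dll-sideload"},
    {"powershell", "dll-sideload"},
]
n = len(observations)

def support(itemset):
    return sum(itemset <= obs for obs in observations) / n

rules = []
for a, b in permutations(set().union(*observations), 2):
    if support({a, b}) >= 0.5:                       # frequent pair
        conf = support({a, b}) / support({a})
        if conf >= 0.8:
            rules.append((a, b, round(conf, 2)))

print(rules)   # e.g. ('spearphishing', 'powershell', 1.0)
```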

2019-09-26
Elliott, A. S., Ruef, A., Hicks, M., Tarditi, D.  2018.  Checked C: Making C Safe by Extension. 2018 IEEE Cybersecurity Development (SecDev). :53–60.

This paper presents Checked C, an extension to C designed to support spatial safety, implemented in Clang and LLVM. Checked C's design is distinguished by its focus on backward-compatibility, incremental conversion, developer control, and enabling highly performant code. Like past approaches to a safer C, Checked C employs a form of checked pointer whose accesses can be statically or dynamically verified. Performance evaluation on a set of standard benchmark programs shows overheads to be relatively low. More interestingly, Checked C introduces the notions of a checked region and bounds-safe interfaces.

2019-05-01
Barrere, M., Hankin, C., Barboni, A., Zizzo, G., Boem, F., Maffeis, S., Parisini, T.  2018.  CPS-MT: A Real-Time Cyber-Physical System Monitoring Tool for Security Research. 2018 IEEE 24th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA). :240–241.

Monitoring systems are essential to understand and control the behaviour of systems and networks. Cyber-physical systems (CPS) are particularly delicate from that perspective, since they involve real-time constraints and physical phenomena that are not usually considered in common IT solutions. There is therefore a need for publicly available monitoring tools able to take these aspects into account. In this poster/demo, we present our initiative, called CPS-MT, towards a versatile, real-time CPS monitoring tool with a particular focus on security research. We first present its architecture and main components, followed by a MiniCPS-based case study. We also describe a performance analysis and preliminary results. During the demo, we will discuss CPS-MT's capabilities and limitations for security applications.

2019-05-08
Balogun, A. M., Zuva, T.  2018.  Criminal Profiling in Digital Forensics: Assumptions, Challenges and Probable Solution. 2018 International Conference on Intelligent and Innovative Computing Applications (ICONIC). :1–7.

Cybercrime has understandably been regarded as a consequent compromise that follows the advent and perceived success of computer and internet technologies. Equally affecting the privacy, trust, finance, and welfare of wealthy and low-income individuals and organizations alike, this menace has shown no sign of slowing down. Reports across the world have consistently shown exponential increases in the number and costs of cyber-incidents and, more worryingly, low conviction rates for cybercriminals over the years. Stakeholders increasingly explore ways to keep up with containing cyber-incidents by devising tools and techniques to increase the overall efficiency of investigations, but the gap keeps getting wider. However, criminal profiling, an investigative technique proven to provide accurate and valuable directions to traditional crime investigations, has not seen widespread application to cybercrime investigations, including a formal methodology, due to difficulties in its seamless transference. To address this problem, this paper seeks to preliminarily identify the exact benefits criminal profiling has brought to successful traditional crime investigations and the benefits it could translate to cybercrime investigations, to identify the challenges posed by the cyber-scene to its implementation in cybercrime investigations, and to proffer a practicable solution.

2019-11-12
Padon, Oded.  2018.  Deductive Verification of Distributed Protocols in First-Order Logic. 2018 Formal Methods in Computer Aided Design (FMCAD). :1–1.

Formal verification of infinite-state systems, and distributed systems in particular, is a long-standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking the validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to the usability of these tools. This problem is particularly acute in the case of provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
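The deductive-verification loop described here reduces to validity checks that an SMT solver can discharge. A tiny example with the Z3 Python bindings (pip install z3-solver; the program and invariant are ours, not from the talk):

```python
# Check that the invariant x >= 0 is preserved by the loop body x := x + 1.
# The verification condition is valid iff its negation is unsatisfiable.
from z3 import Int, Implies, Not, Solver, unsat

x, x2 = Int("x"), Int("x2")
inv = x >= 0                       # inductive invariant candidate
step = x2 == x + 1                 # transition relation of the loop body
vc = Implies(inv, Implies(step, x2 >= 0))   # consecution condition

s = Solver()
s.add(Not(vc))
print("invariant inductive?", s.check() == unsat)   # True
```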

2019-02-25
Katole, R. A., Sherekar, S. S., Thakare, V. M.  2018.  Detection of SQL injection attacks by removing the parameter values of SQL query. 2018 2nd International Conference on Inventive Systems and Control (ICISC). :736–741.

The number of Internet users increases day by day, and with it the demand for web services and for mobile and desktop web applications; the chances of a system being hacked are increasing as well. All web applications maintain data in a backend database from which results are retrieved, and because web applications can be accessed from anywhere around the world, they must be available to all of their users. SQL injection is nowadays one of the topmost threats to the security of web applications, and by using it attackers can steal confidential information. In this paper, a SQL injection attack detection method based on removing the parameter values of the SQL query is discussed and results are presented.
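The idea of removing parameter values can be sketched with a few regular expressions (ours, for illustration; the paper's normalization rules may differ): strip literals from the runtime query and compare its skeleton with the skeleton of the query the application intended to issue.

```python
# Normalize a SQL query by replacing literals with placeholders; injected
# clauses change the skeleton, while benign value changes do not.
import re

def skeleton(sql: str) -> str:
    s = re.sub(r"'(?:[^']|'')*'", "?", sql)   # quoted string literals
    s = re.sub(r"\b\d+\b", "?", s)            # numeric literals
    return re.sub(r"\s+", " ", s).strip().lower()

expected = skeleton("SELECT * FROM users WHERE name = 'x' AND pin = 1")
benign   = skeleton("SELECT * FROM users WHERE name = 'bob' AND pin = 42")
attack   = skeleton("SELECT * FROM users WHERE name = '' OR '1'='1' AND pin = 1")

print(benign == expected)   # True  -- same structure, different values
print(attack == expected)   # False -- the injected OR changes the skeleton
```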