Biblio

Filters: Keyword is Instruments
2019-09-23
Zheng, N., Alawini, A., Ives, Z. G.  2019.  Fine-Grained Provenance for Matching ETL. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :184–195.
Data provenance tools capture the steps used to produce analyses. However, scientists must choose among workflow provenance systems, which allow arbitrary code but only track provenance at the granularity of files; provenance APIs, which provide tuple-level provenance but incur overhead in all computations; and database provenance tools, which track tuple-level provenance through relational operators and support optimization, but support only a limited subset of data science tasks. None of these solutions is well suited for tracing errors introduced during common ETL, record alignment, and matching tasks over data types such as strings and images. Scientists need new capabilities to identify the sources of errors, find why different code versions produce different results, and identify which parameter values affect output. We propose PROVision, a provenance-driven troubleshooting tool that supports ETL and matching computations and traces extraction of content within data objects. PROVision extends database-style provenance techniques to capture equivalences, support optimizations, and enable selective evaluation. We formalize our extensions, implement them in the PROVision system, and validate their effectiveness and scalability for common ETL and matching tasks.
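
A minimal sketch of the tuple-level provenance idea the abstract describes, applied to a toy matching step. This is illustrative only: the record layout, the `_prov` field, and the matching rule are hypothetical, not PROVision's actual design.

```python
# Minimal sketch of tuple-level provenance propagation through an ETL
# match step. Record IDs, field names, and the similarity rule are
# hypothetical stand-ins; PROVision itself is far more general.

def match(left_rows, right_rows, key):
    """Join two extracted tables, tagging each output tuple with the
    set of source tuple IDs that produced it."""
    matches = []
    for l in left_rows:
        for r in right_rows:
            if l[key].strip().lower() == r[key].strip().lower():
                out = {**l, **r}
                # Provenance: union of the lineage of both inputs.
                out["_prov"] = l["_prov"] | r["_prov"]
                matches.append(out)
    return matches

left = [{"name": "Ada Lovelace", "_prov": {"src_a:1"}}]
right = [{"name": "ada lovelace ", "affil": "London", "_prov": {"src_b:7"}}]

for row in match(left, right, "name"):
    # Each output traces back to the exact input tuples that produced it.
    print(row["name"], "<-", sorted(row["_prov"]))
```
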
2019-08-05
Headrick, W. J., Dlugosz, A., Rajcok, P.  2018.  Information Assurance in modern ATE. 2018 IEEE AUTOTESTCON. :1–4.

For modern Automatic Test Equipment (ATE), one of the most daunting tasks is now Information Assurance (IA). What was once at most a secondary item, consisting mainly of installing an anti-virus suite, is now becoming one of the most important aspects of ATE. Given the current IA climate, it has become important to ensure ATE is kept safe from any breaches of security or loss of information. Even though most ATE systems are not on the Internet (or, for many, on any network), they are still vulnerable to some of the same attack vectors plaguing common computers and other electronic devices. This paper will discuss some of the processes and procedures that must be used to ensure that modern ATE can continue to be used to test and detect faults in the systems it is designed to test. The common items that must be considered for ATE are as follows:
- The ATE system must have some form of anti-virus (as should all computers).
- The ATE system should have a minimum software footprint, providing only the software needed to perform the task.
- The ATE system should be verified to have all Operating System (OS) settings configured pursuant to the task it is intended to perform.
- The ATE OS settings should include password and password-expiration settings to prevent access by anyone not expected to be on the system.
- The ATE system software should be written and constructed such that it is not itself readily open to attack.
- The ATE system should be designed such that none of the instruments in the system can easily be attacked.
- The ATE system should ensure any paths to the outside world (such as Ethernet or USB devices) are limited to only those required to perform the task it was designed for.
These and many other common configuration concerns will be discussed in the paper.
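As a loose, illustrative companion to this checklist, the sketch below audits two of the items (password-expiration policy and open network listeners) on a Linux host. The threshold, the port allow-list, and the reliance on /etc/login.defs and the ss utility are assumptions for illustration, not anything prescribed by the paper.

```python
# Minimal, Linux-specific sketch of auditing two checklist items:
# password expiration policy and open network listeners.
# MAX_PASSWORD_AGE and ALLOWED_LISTEN_PORTS are hypothetical policy.
import re
import subprocess

MAX_PASSWORD_AGE = 90          # hypothetical policy, in days
ALLOWED_LISTEN_PORTS = {22}    # hypothetical allow-list for the ATE's task

def check_password_expiry(path="/etc/login.defs"):
    """True if PASS_MAX_DAYS is set and within policy."""
    with open(path) as f:
        text = f.read()
    m = re.search(r"^PASS_MAX_DAYS\s+(\d+)", text, re.MULTILINE)
    return m is not None and int(m.group(1)) <= MAX_PASSWORD_AGE

def check_listeners():
    """True if only expected TCP ports are listening."""
    # `ss -tlnH` lists TCP listeners without a header; field 4 is addr:port.
    out = subprocess.run(["ss", "-tlnH"], capture_output=True, text=True).stdout
    ports = {int(line.split()[3].rsplit(":", 1)[1])
             for line in out.splitlines() if line.strip()}
    return ports <= ALLOWED_LISTEN_PORTS

print("password policy OK:", check_password_expiry())
print("only expected listeners:", check_listeners())
```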

2019-02-08
Ivanova, M., Durcheva, M., Baneres, D., Rodríguez, M. E.  2018.  eAssessment by Using a Trustworthy System in Blended and Online Institutions. 2018 17th International Conference on Information Technology Based Higher Education and Training (ITHET). :1–7.

eAssessment uses technology to support the online evaluation of students' knowledge and skills. However, challenging problems must be addressed, such as ensuring trustworthiness among students and teachers in blended and online settings. The TeSLA system proposes an innovative solution to guarantee the correct authentication of students and to prove the authorship of their assessment tasks. Technologically, the system is based on the integration of five instruments: face recognition, voice recognition, keystroke dynamics, forensic analysis, and plagiarism detection. The paper analyzes and compares the results achieved after the second pilot, performed at an online university and a blended university, revealing the realization of trust-driven solutions for eAssessment.
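To make one of the five instruments concrete, here is a minimal sketch of keystroke-dynamics verification. The event format, the enrolled profile, and the acceptance threshold are hypothetical; TeSLA's actual instrument is far more sophisticated.

```python
# Minimal sketch of keystroke-dynamics verification. Events are
# (key, press_ms, release_ms); the enrolled mean dwell time and the
# threshold are hypothetical values for illustration.

def dwell_times(events):
    """Per-key hold time: how long each key stays pressed (ms)."""
    return [release - press for _, press, release in events]

def verify(sample_events, enrolled_mean, threshold=25.0):
    """Accept if the sample's mean dwell time is close to the profile."""
    sample = dwell_times(sample_events)
    mean = sum(sample) / len(sample)
    return abs(mean - enrolled_mean) <= threshold

enrolled_mean = 105.0  # learned during enrollment (hypothetical)
attempt = [("t", 0, 110), ("e", 150, 255), ("s", 300, 398), ("t", 450, 560)]
print("same typist?", verify(attempt, enrolled_mean))
```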

2018-09-12
Rafiuddin, M. F. B., Minhas, H., Dhubb, P. S.  2017.  A dark web story in-depth research and study conducted on the dark web based on forensic computing and security in Malaysia. 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI). :3049–3055.
This paper presents research conducted on the Dark Web to study and identify its ins and outs: what the dark web is all about, the various methods available to access it, and more. The researchers also describe the steps and precautions taken before the dark web was accessed. The findings and the website links/URLs are included, along with descriptions of the sites. The primary usage of the dark web and some of the researchers' experiences are further documented in this paper.
2018-09-05
Buttigieg, R., Farrugia, M., Meli, C.  2017.  Security issues in controller area networks in automobiles. 2017 18th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA). :93–98.
Modern vehicles may contain a considerable number of Electronic Control Units (ECUs), which are connected through various means of communication, the CAN (Controller Area Network) protocol being the most widely used. However, several authors have pointed out vulnerabilities such as the lack of authentication and the lack of data encryption, which ultimately render vehicles unsafe to their users and surroundings. Moreover, the lack of security in modern automobiles has been studied and analyzed by other researchers, and several reports about modern car hacking have already been published. This work analyzes and tests the level of security of the CAN protocol and how resilient it is, taking a BMW E90 (3-series) instrument cluster as a sample for a proof-of-concept study. The investigation was carried out by building a rogue device from cheap, commercially available components and connecting it to the same CAN bus as a man-in-the-middle device in order to send spoofed messages to the instrument cluster.
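
A sketch of what injecting a spoofed frame can look like with the python-can library on a virtual SocketCAN interface. The arbitration ID and payload below are hypothetical placeholders, not the actual BMW E90 cluster messages used in the paper.

```python
# Sketch of injecting a spoofed CAN frame, in the spirit of the rogue
# device described above. Requires the python-can package and a
# (virtual) SocketCAN interface for safe experimentation:
#   sudo modprobe vcan
#   sudo ip link add dev vcan0 type vcan && sudo ip link set up vcan0
import can

bus = can.interface.Bus(channel="vcan0", interface="socketcan")

spoofed = can.Message(
    arbitration_id=0x1A6,           # hypothetical cluster message ID
    data=[0x00, 0x3C, 0x00, 0x00],  # hypothetical payload (e.g. a speed value)
    is_extended_id=False,
)
bus.send(spoofed)  # any node on the bus will accept this: no authentication
print("sent", spoofed)
bus.shutdown()
```

The lack of sender authentication in CAN is exactly why this works: the cluster has no way to distinguish the rogue device's frames from the legitimate ECU's.
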
2018-02-02
Kochte, M. A., Baranowski, R., Wunderlich, H. J.  2017.  Trustworthy reconfigurable access to on-chip infrastructure. 2017 International Test Conference in Asia (ITC-Asia). :119–124.

The accessibility of on-chip embedded infrastructure for test, reconfiguration, or debug poses a serious security problem. Access mechanisms based on IEEE Std 1149.1 (JTAG), and especially reconfigurable scan networks (RSNs), as allowed by IEEE Std 1500, IEEE Std 1149.1-2013, and IEEE Std 1687 (IJTAG), require special care in the design and development. This work studies the threats to trustworthy data transmission in RSNs posed by untrusted components within the RSN and external interfaces. We propose a novel scan pattern generation method that finds trustworthy access sequences to prevent sniffing and spoofing of transmitted data in the RSN. For insecure RSNs, for which such accesses do not exist, we present an automated transformation that improves the security and trustworthiness while preserving the accessibility to attached instruments. The area overhead is reduced based on results from trustworthy access pattern generation. As a result, sensitive data is not exposed to untrusted components in the RSN, and compromised data cannot be injected during trustworthy accesses.
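A conceptual illustration (not the authors' pattern-generation method): if the RSN is modeled as a graph with some untrusted segments, a trustworthy access route is one that never passes through them. The topology and names below are hypothetical.

```python
# Conceptual illustration only: model a reconfigurable scan network as
# a graph and search for an access route to an instrument that avoids
# untrusted segments, echoing the idea of "trustworthy access
# sequences". The topology is invented; the paper works on actual scan
# pattern generation, not this toy search.
from collections import deque

edges = {                      # hypothetical RSN topology
    "TAP": ["seg1", "seg2"],
    "seg1": ["instrA"],
    "seg2": ["seg3"],
    "seg3": ["instrA", "instrB"],
}
untrusted = {"seg2"}           # e.g. a third-party IP block

def trustworthy_route(src, dst):
    """BFS that never routes scan data through an untrusted segment."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen and nxt not in untrusted:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no trustworthy access: candidate for transformation

print(trustworthy_route("TAP", "instrA"))  # ['TAP', 'seg1', 'instrA']
print(trustworthy_route("TAP", "instrB"))  # None -> needs hardening
```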

Moyer, T., Chadha, K., Cunningham, R., Schear, N., Smith, W., Bates, A., Butler, K., Capobianco, F., Jaeger, T., Cable, P.  2016.  Leveraging Data Provenance to Enhance Cyber Resilience. 2016 IEEE Cybersecurity Development (SecDev). :107–114.

Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case. Today's systems are ill-equipped to deal with attackers that are able to pierce perimeter defenses. Data provenance is a critical technology for building resilient systems that can recover from attackers who manage to overcome the "hard-shell" defenses. In this paper, we provide background information on data provenance, along with details on provenance collection, analysis, and storage techniques and their challenges. Data provenance is well situated to address the challenging problem of allowing a system to "fight through" an attack, and we help identify the work necessary to ensure that future systems are resilient.

2018-01-16
Miramirkhani, N., Appini, M. P., Nikiforakis, N., Polychronakis, M.  2017.  Spotless Sandboxes: Evading Malware Analysis Systems Using Wear-and-Tear Artifacts. 2017 IEEE Symposium on Security and Privacy (SP). :1009–1024.

Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed in an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improve, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the "wear and tear" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.
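A sketch of the kind of classifier the abstract alludes to: a shallow decision tree over wear-and-tear features. The feature names and the tiny dataset below are synthetic stand-ins; the paper derives its trees from artifacts measured on real devices and analysis services.

```python
# Sketch of a "wear and tear" classifier: a shallow decision tree over
# usage artifacts. Requires scikit-learn. The features and the small
# synthetic dataset are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Features: [browser history entries, DNS cache entries, installed programs]
X = [
    [12000, 450, 85],   # real user devices: heavy accumulated use
    [8500, 390, 60],
    [15200, 510, 110],
    [40, 12, 9],        # freshly provisioned sandboxes: almost no wear
    [15, 5, 12],
    [120, 20, 15],
]
y = [0, 0, 0, 1, 1, 1]  # 0 = real device, 1 = artificial environment

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

probe = [[60, 10, 14]]  # artifacts observed at runtime (hypothetical)
print("sandbox?", bool(clf.predict(probe)[0]))
```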

2017-12-12
Dai, D., Chen, Y., Carns, P., Jenkins, J., Ross, R.  2017.  Lightweight Provenance Service for High-Performance Computing. 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT). :117–129.

Provenance describes detailed information about the history of a piece of data, capturing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of that data. Provenance is key to supporting many data management functionalities that are increasingly important in operations, such as identifying the data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirement of providing provenance service in situ: the need to remain lightweight and always on often conflicts with the need to be transparent and to offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrumentation mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
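To illustrate the "representative execution" idea of trading granularity for overhead, the sketch below collapses repeated, identical provenance events into a single representative edge with a count. The event format is hypothetical, not LPS's actual record layout.

```python
# Sketch of collapsing repeated provenance events (e.g. a process
# re-reading the same file in a loop) into one representative edge
# with a count, keeping capture overhead controllable. The
# (actor, op, target) event format is a hypothetical stand-in.
from collections import Counter

events = [
    ("pid:2311", "read", "/data/input.h5"),
    ("pid:2311", "read", "/data/input.h5"),   # repeated -> collapsed
    ("pid:2311", "read", "/data/input.h5"),
    ("pid:2311", "write", "/data/out.h5"),
    ("pid:2312", "read", "/data/out.h5"),
]

def compress(events):
    """One provenance edge per (actor, op, target), annotated with a count."""
    return Counter(events)

for (actor, op, target), n in compress(events).items():
    print(f"{actor} -{op}x{n}-> {target}")
```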

Sun, F., Zhang, P., White, J., Schmidt, D., Staples, J., Krause, L.  2017.  A Feasibility Study of Autonomically Detecting In-Process Cyber-Attacks. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–8.

A cyber-attack detection system issues alerts when an attacker attempts to coerce a trusted software application to perform unsafe actions on the attacker's behalf. One way of issuing such alerts is to create an application-agnostic cyber-attack detection system that responds to prevalent software vulnerabilities. The creation of such an autonomic alert system, however, is impeded by the disparity in implementation language, function, quality-of-service (QoS) requirements, and architectural patterns present in applications, all of which contribute to the rapidly changing threat landscape presented by modern heterogeneous software systems. This paper evaluates the feasibility of creating an autonomic cyber-attack detection system and applying it to several exemplar web-based applications using program transformation and machine learning techniques. Specifically, we examine whether it is possible to detect cyber-attacks (1) online, i.e., as they occur, using lightweight structures derived from a call graph, and (2) offline, i.e., using machine learning techniques trained with features extracted from a trace of application execution. In both cases, we first characterize normal application behavior using supervised training with the test suites created for an application as part of the software development process. We then intentionally perturb our test applications so that they are vulnerable to common attack vectors and evaluate the effectiveness of various feature extraction and learning strategies on the perturbed applications. Our results show that both lightweight online models based on the control flow of execution paths and application-specific offline models can successfully and efficiently detect in-process cyber-attacks against web applications.
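A much-simplified sketch of the online half of this idea: learn which caller-to-callee transitions occur during trusted test-suite runs, then flag transitions never seen in training. The toy application and the attack path are hypothetical.

```python
# Much-simplified sketch of online detection from call-graph structure:
# record caller->callee edges during trusted test-suite runs, then
# alert on edges never seen in training. The demo functions and the
# "attack" are hypothetical.
import sys

def trace_calls(seen):
    """Return a trace function that records caller->callee edges."""
    def tracer(frame, event, arg):
        if event == "call" and frame.f_back is not None:
            seen.add((frame.f_back.f_code.co_name, frame.f_code.co_name))
        return None  # no per-line tracing needed
    return tracer

def handler(x):
    return sanitize(x)

def sanitize(x):
    return x.replace(";", "")

def shell_exec(x):
    return f"exec({x})"  # should never be reached outside handler's path

# Training phase: run the test suite under the tracer.
trained = set()
sys.settrace(trace_calls(trained))
handler("id")
sys.settrace(None)

# Detection phase: flag call edges absent from the trained model.
observed = set()
sys.settrace(trace_calls(observed))
shell_exec("rm -rf /")  # attacker-induced execution path (hypothetical)
sys.settrace(None)

for edge in observed - trained:
    print("ALERT: unseen call edge", edge)
```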

2017-03-08
Allen, J. H., Curtis, P. D., Mehravari, N., Crabb, G.  2015.  A proven method for identifying security gaps in international postal and transportation critical infrastructure. 2015 IEEE International Symposium on Technologies for Homeland Security (HST). :1–5.

The safety, security, and resilience of international postal, shipping, and transportation critical infrastructure are vital to the global supply chain that enables worldwide commerce and communications. But security on an international scale continues to fail in the face of new threats, such as the discovery by Panamanian authorities of suspected components of a surface-to-air missile system aboard a North Korean-flagged ship in July 2013 [1]. This reality calls for new and innovative approaches to critical infrastructure security. Owners and operators of critical postal, shipping, and transportation operations need new methods to identify, assess, and mitigate security risks and gaps in the most effective manner possible.

2015-05-05
Everspaugh, A., Zhai, Y., Jellinek, R., Ristenpart, T., Swift, M.  2014.  Not-So-Random Numbers in Virtualized Linux and the Whirlwind RNG. 2014 IEEE Symposium on Security and Privacy (SP). :559–574.

Virtualized environments are widely thought to cause problems for software-based random number generators (RNGs), due to the use of virtual machine (VM) snapshots as well as fewer, and believed-to-be lower quality, entropy sources. Despite this, we are unaware of any published analysis of the security of critical RNGs when running in VMs. We fill this gap, using measurements of Linux's RNG systems (without the aid of hardware RNGs, the most common use case today) on Xen, VMware, and Amazon EC2. Despite CPU cycle counters providing a significant source of entropy, various deficiencies in the design of the Linux RNG make its first output vulnerable during VM boots and, more critically, cause it to suffer from catastrophic reset vulnerabilities. We show cases in which the RNG will output the exact same sequence of bits each time it is resumed from the same snapshot. This can compromise, for example, cryptographic secrets generated after resumption. We explore legacy-compatible countermeasures, as well as a clean-slate solution. The latter is a new RNG called Whirlwind that provides a simpler, more secure solution for providing system randomness.
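A conceptual demonstration of the reset vulnerability using Python's userspace PRNG as a stand-in for the kernel RNG (Whirlwind itself is a kernel-level design): restoring a saved state reproduces the identical output stream, while mixing in fresh entropy breaks the repetition.

```python
# Conceptual demo of the snapshot "reset vulnerability": restoring a
# saved generator state reproduces the identical output stream, the
# failure mode the paper measured in guest Linux RNGs. Python's
# userspace PRNG stands in for the kernel RNG here.
import os
import random

rng = random.Random(1234)
snapshot = rng.getstate()          # a VM snapshot captures RNG state too

run1 = [rng.randrange(2**32) for _ in range(3)]

rng.setstate(snapshot)             # resume from the same snapshot
run2 = [rng.randrange(2**32) for _ in range(3)]
print("identical after resume:", run1 == run2)   # True: same "secrets"

rng.setstate(snapshot)             # resume again, but mix in fresh entropy
rng.seed(snapshot[1][0] ^ int.from_bytes(os.urandom(4), "big"))
run3 = [rng.randrange(2**32) for _ in range(3)]
print("identical after mixing:", run1 == run3)   # almost surely False
```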
 

2015-04-30
Yang, Y., Falcao, H., Delicado, N., Ortony, A.  2014.  Reducing Mistrust in Agent-Human Negotiations. IEEE Intelligent Systems. 29:36–43.

Face-to-face negotiations always benefit if the interacting individuals trust each other. But trust is also important in online interactions, even for humans interacting with a computational agent. In this article, the authors describe a behavioral experiment to determine whether, by volunteering information that it need not disclose, a software agent in a multi-issue negotiation can alleviate mistrust in human counterparts who differ in their propensities to mistrust others. Results indicated that when cynical, mistrusting humans negotiated with an agent that proactively communicated its issue priority and invited reciprocation, there were significantly more agreements and better utilities than when the agent didn't volunteer such information. Furthermore, when the agent volunteered its issue priority, the outcomes for mistrusting individuals were as good as those for trusting individuals, for whom the volunteering of issue priority conferred no advantage. These findings provide insights for designing more effective, socially intelligent agents in online negotiation settings.