
Information Assurance

The term "Information Assurance" was adopted in the late 1990s to cover what is now often referred to generically as "cybersecurity." The phrase remains in wide use, particularly in the U.S. government, in both teaching and research, and its generic scope means it covers a broad range of topics. The articles cited here, published from January to September 2014, address both the technology and the pedagogy of information assurance.

  • Xiaohong Yuan; Williams, K.; Huiming Yu; Bei-Tseng Chu; Rorrer, A.; Li Yang; Winters, K.; Kizza, J., "Developing Faculty Expertise in Information Assurance through Case Studies and Hands-On Experiences," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 4938-4945, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.606 Though many Information Assurance (IA) educators agree that hands-on exercises and case studies improve student learning, they are not widely adopted because of the time needed to develop them and integrate them into the curriculum. With the support of the National Science Foundation (NSF) Scholarship for Service program, we implemented two faculty development workshops to disseminate effective hands-on exercises and case studies developed through multiple previous and ongoing grants, and to develop faculty expertise in IA. This paper reports our experience of holding the faculty summer workshops on teaching information assurance through case studies and hands-on experiences. The topics presented at the workshops are briefly described and the evaluation results of the workshops are discussed. The workshops provided a valuable opportunity for IA educators to connect with each other and form collaborations in teaching and research in IA. Keywords: computer science education; continuing professional development; teacher training; teaching; IA educators; NSF Scholarship for Service program; National Science Foundation Scholarship for Service program; case studies; curriculum; faculty development workshops; faculty expertise; faculty summer workshops; hands-on exercises; hands-on experiences; information assurance educators; student learning; Access control; Authentication; Conferences; Cryptography; Educational institutions (ID#:14-2342) URL:
  • Romero-Mariona, J., "DITEC (DoD-Centric and Independent Technology Evaluation Capability): A Process for Testing Security," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp. 24-25, March 31-April 4, 2014. doi: 10.1109/ICSTW.2014.52 Information Assurance (IA) is one of the Department of Defense's (DoD) top priorities today. IA technologies are constantly evolving to protect critical information from the growing number of cyber threats. Furthermore, DoD spends millions of dollars each year procuring, maintaining, and discontinuing various IA and cyber technologies. Today, there is no standardized process or method for making informed decisions about which IA technologies are better or best. As a result, technology-selection efforts go through very disparate evaluations that are often non-repeatable and highly subjective. DITEC (DoD-centric and Independent Technology Evaluation Capability) is a new capability that streamlines IA technology evaluation. DITEC defines a Process for evaluating whether or not a product meets DoD needs, Security Metrics for measuring how well those needs are met, and a Framework for comparing various products that address the same IA technology area. DITEC seeks to reduce the time and cost of creating a test plan and to expedite the test and evaluation effort for new IA technologies, consequently streamlining the deployment of IA products across DoD and increasing the potential to meet its needs.
Keywords: data protection; decision making; military computing; security of data; DITEC; Department of Defense; DoD-centric and independent technology evaluation capability; IA technologies; critical information protection; cyber technologies; cyber threats; information assurance; informed decision making; security metrics; security testing process; Computer security; Conferences; Measurement; US Department of Defense; Usability; Decision-making Support; Evaluation; Information Assurance; Security; Security Metrics (ID#:14-2343) URL:
  • Schumann, M.A.; Drusinsky, D.; Michael, J.B.; Wijesekera, D., "Modeling Human-in-the-Loop Security Analysis and Decision-Making Processes," Software Engineering, IEEE Transactions on, vol. 40, no. 2, pp. 154-166, Feb. 2014. doi: 10.1109/TSE.2014.2302433 This paper presents a novel application of computer-assisted formal methods for systematically specifying, documenting, statically and dynamically checking, and maintaining human-centered workflow processes. This approach provides for end-to-end verification and validation of process workflows, which is needed for process workflows intended for use in developing and maintaining high-integrity systems. We demonstrate the technical feasibility of our approach by applying it to the development of the US government's process workflow for implementing, certifying, and accrediting cross-domain computer security solutions. Our approach involves identifying human-in-the-loop decision points in the process activities and then modeling these via statechart assertions. We developed techniques to specify and enforce workflow hierarchies, which was a challenge due to the existence of concurrent activities within complex workflow processes. Some of the key advantages of our approach are that it results in an executable model, supporting both upfront and runtime checking of process-workflow requirements; aids comprehension and communication among stakeholders and process engineers; and provides for incorporating accountability and risk management into the engineering of process workflows.
Keywords: decision making; formal specification; formal verification; government data processing; security of data; workflow management software; US government process workflow; United States; accountability; computer-assisted formal methods; cross-domain computer security solutions; decision-making process; end-to-end validation; end-to-end verification; high-integrity systems; human-centered workflow process; human-in-the-loop decision points; human-in-the-loop security analysis; process activities; process documentation; process dynamically checking; process maintenance; process specification; process statically checking; process workflows engineering; risk management; statechart assertions; workflow hierarchies; Analytical models; Business; Formal specifications; Object oriented modeling; Runtime; Software; Unified modeling language; Formal methods; information assurance; process modeling; software engineering; statechart assertions; verification and validation (ID#:14-2344) URL:
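The statechart-assertion idea in the entry above lends itself to a compact illustration. The following minimal Python sketch is our own invention (the class, event names, and property are assumptions, not the authors' notation): a runtime monitor enforces the human-in-the-loop property that an "accredit" event may only fire after a human "approve" decision.

```python
# Toy runtime monitor for a human-in-the-loop workflow property.
# Property checked: "accredit" must be preceded by a human "approve" decision.
class WorkflowMonitor:
    def __init__(self):
        self.approved = False

    def step(self, event: str) -> bool:
        """Feed one workflow event; return False when the property is violated."""
        if event == "approve":
            self.approved = True
            return True
        if event == "accredit":
            return self.approved  # assertion: approval precedes accreditation
        return True               # other events are unconstrained here

m = WorkflowMonitor()
assert m.step("accredit") is False        # violation caught at runtime
assert m.step("approve") is True
assert m.step("accredit") is True         # now permitted
```

The same pattern scales to upfront checking by replaying recorded or simulated event traces through the monitor before the workflow is deployed.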
  • Hershey, P.C.; Rao, S.; Silio, C.B.; Narayan, A., "System of Systems for Quality-of-Service Observation and Response in Cloud Computing Environments," Systems Journal, IEEE, vol. PP, no. 99, pp. 1-11, January 2014. doi: 10.1109/JSYST.2013.2295961 As military, academic, and commercial computing systems evolve from autonomous entities that deliver computing products into network-centric enterprise systems that deliver computing as a service, opportunities emerge to consolidate computing resources, software, and information through cloud computing. Along with these opportunities come challenges, particularly to service providers and operations centers that struggle to monitor and manage quality of service (QoS) for these services in order to meet customer service commitments. Traditional approaches fall short in addressing these challenges because they examine QoS from a limited perspective rather than from a system-of-systems (SoS) perspective applicable to a net-centric enterprise system in which any user from any location can share computing resources at any time. This paper presents a SoS approach to enable QoS monitoring, management, and response for enterprise systems that deliver computing as a service through a cloud computing environment. A concrete example is provided for application of this new SoS approach to a real-world scenario (viz., distributed denial of service). Simulated results confirm the efficacy of the approach. Keywords: Cloud computing; Delays; Monitoring; Quality of service; Security; Cloud computing; distributed denial of service (DDoS); enterprise systems; information assurance; net centric; quality of service (QoS); security; service-oriented architecture (SOA); systems of systems (SoS) (ID#:14-2345) URL:
  • Kowtko, M.A., "Biometric Authentication For Older Adults," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp. 1-6, 2 May 2014. doi: 10.1109/LISAT.2014.6845213 In recent times, cyber-attacks and cyber warfare have threatened network infrastructures across the globe. The world has reacted by increasing security measures through the use of stronger passwords, strict access control lists, and new authentication means; however, while these measures are designed to improve security and Information Assurance (IA), they may create accessibility challenges for older adults and people with disabilities. Studies have shown that the memory performance of older adults declines with age. Therefore, it becomes increasingly difficult for older adults to remember random strings of characters or passwords of 12 or more characters. How are older adults challenged by security measures (passwords, CAPTCHA, etc.) and how does this affect their ability to engage in online activities or with mobile platforms? While username/password authentication, CAPTCHA, and security questions do provide adequate protection, they are still vulnerable to cyber-attacks. Passwords can be compromised by brute-force, dictionary, and social-engineering-style attacks. CAPTCHA, a type of challenge-response test, was developed to ensure that user inputs were not manipulated by machine-based attacks. Unfortunately, CAPTCHA is now being defeated through new vulnerabilities and exploits: insecure implementations in code or server interaction have allowed it to be circumvented, and new viruses and malware use character recognition as a means to bypass it [1]. Security questions, another challenge-response test that attempts to authenticate users, can also be compromised through social engineering attacks and spyware.
Since these common security measures are increasingly being compromised, many security professionals are turning towards biometric authentication. Biometric authentication is any form of human biological measurement or metric that can be used to identify and authenticate an authorized user of a secure system. Biometric authentication can include fingerprint, voice, iris, facial, keystroke, and hand geometry [2]. Biometric authentication is also less affected by traditional cyber-attacks. However, are biometrics completely secure? This research will examine the security challenges and attacks that may put biometric authentication at risk. Recently, medical professionals in the TeleHealth industry have begun to investigate the effectiveness of biometrics. In the United States alone, the population of older adults has increased significantly, with nearly 10,000 adults per day reaching the age of 65 and older [3]. Although people are living longer, that does not mean that they are living healthier. Studies have shown the U.S. healthcare system is being inundated by older adults. As security within the healthcare industry increases, many believe that biometric authentication is the answer. However, there are potential problems, especially in the older adult population. The largest problem is authentication of older adults with medical complications. Cataracts, stroke, congestive heart failure, hard veins, and other ailments may challenge biometric authentication. Since biometrics often rely on measurements of and between biological features, any one of these conditions, among others, could potentially affect the verification of users. This research will analyze older adults and the impact of these conditions on the biometric verification process.
Keywords: authorisation; biometrics (access control); invasive software; medical administrative data processing; mobile computing; CAPTCHA; Cataracts; IA; TeleHealth industry; US healthcare system; access control lists; authentication means; biometric authentication; challenge-response test; congestive heart failure; cyber warfare; cyber-attacks; dictionary; hard veins; healthcare industry; information assurance; machine-based attacks; medical professionals; mobile platforms; network infrastructures; older adults; online activities; security measures; security professionals; social engineering style attacks; spyware; stroke; username-password authentication; Authentication; Barium; CAPTCHAs; Computers; Heart; Iris recognition; Biometric Authentication; CAPTCHA; Cyber-attacks; Information Security; Older Adults; Telehealth (ID#:14-2346) URL:
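To make concrete why the ailments listed in the entry above matter, here is a hedged toy example (our own illustration, not from the paper): biometric verification is commonly a distance test between an enrolled template and a fresh sample, so physiological drift in the measured features can push a genuine user past the acceptance threshold. The feature vectors and threshold below are invented for illustration.

```python
import math

def euclidean(a, b):
    """Distance between two biometric feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 1.0                              # illustrative acceptance threshold
enrolled = [0.2, 0.5, 0.9, 0.4]              # template captured at enrollment
fresh    = [0.25, 0.45, 0.95, 0.4]           # same user, minor day-to-day variation
drifted  = [0.8, 1.1, 0.2, 1.0]              # same user after physiological change

assert euclidean(enrolled, fresh) <= THRESHOLD    # genuine user accepted
assert euclidean(enrolled, drifted) > THRESHOLD   # genuine user falsely rejected
```

The false rejection in the last line is exactly the usability failure mode the paper flags for older adults with medical complications.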
  • Yier Jin, "EDA Tools Trust Evaluation Through Security Property Proofs," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1-4, 24-28 March 2014. doi: 10.7873/DATE.2014.260 The security concerns of EDA tools have long been ignored because IC designers and integrators focus only on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel EDA tools trust evaluation framework has been proposed to ensure the trustworthiness of EDA tools through their functional operation, rather than by scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tools evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Helped by the gate-level scheme, we expand the territory of proof-carrying based IP protection from RT-level designs to gate-level netlists, so that most commercially traded third-party IP cores are under the protection of proof-carrying based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist. Keywords: cryptography; electronic design automation; integrated circuit design; AES encryption core; EDA tools trust evaluation; Synopsys design compiler; functional operation; gate-level information assurance scheme; gate-level netlist; hardware security researchers; proof-carrying based IP protection; security property proofs; software code; third-party IP cores; trusted integrated circuits; Hardware; IP networks; Integrated circuits; Logic gates; Sensitivity; Trojan horses (ID#:14-2347) URL:
  • Whitmore, J.; Türpe, S.; Triller, S.; Poller, A.; Carlson, C., "Threat Analysis In The Software Development Lifecycle," IBM Journal of Research and Development, vol. 58, no. 1, pp. 6:1-6:13, Jan.-Feb. 2014. doi: 10.1147/JRD.2013.2288060 Businesses and governments that deploy and operate IT (information technology) systems continue to seek assurance that software they procure has the security characteristics they expect. The criteria used to evaluate the security of software are expanding from static sets of functional and assurance requirements to complex sets of evidence related to development practices for design, coding, testing, and support, plus consideration of security in the supply chain. To meet these evolving expectations, creators of software are faced with the challenge of consistently and continuously applying the most current knowledge about risks, threats, and weaknesses to their existing and new software assets. Yet the practice of threat analysis remains an art form that is highly subjective and reserved for a small community of security experts. This paper reviews the findings of an IBM-sponsored project with the Fraunhofer Institute for Secure Information Technology (SIT) and the Technische Universität Darmstadt. This project investigated aspects of security in software development, including practical methods for threat analysis. The project also examined existing methods and tools, assessing their efficacy for software development within an open-source software supply chain. These efforts yielded valuable insights plus an automated tool and knowledge base that have the potential to overcome some of the current limitations of secure development on a large scale.
Keywords: Analytical models; Business; Computer security; Encoding; Government; Information technology; Software development; information assurance (ID#:14-2348) URL:
  • Beato, F.; Peeters, R., "Collaborative Joint Content Sharing For Online Social Networks," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on, pp. 616-621, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815277 Online social networks' (OSNs) epic popularity has accustomed users to the ease of sharing information. At the same time, OSNs have been a focus of privacy concerns with respect to the information shared. It is therefore important that users have some assurance when sharing on OSNs: popular OSNs provide users with mechanisms to protect access rights to shared information. However, these mechanisms do not allow collaboration when defining access rights for joint content related to more than one user (e.g., party pictures in which different users are tagged). In fact, the access rights list for such content is the union of the access lists defined by each related user, which can result in unwanted leakage. We propose a collaborative access control scheme, based on secret sharing, in which sharing of content on OSNs is decided collaboratively by a number of related users. We demonstrate that such a mechanism is feasible and benefits users' privacy. Keywords: authorisation; data privacy; groupware; social networking (online); OSN; access rights list; collaborative access control scheme; collaborative joint content sharing; information sharing; online social networks; privacy concerns; secret sharing; unwanted leakage; user privacy; Access control; Collaboration; Encryption; Joints; Privacy (ID#:14-2349) URL:
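The secret-sharing approach the Beato and Peeters entry builds on can be sketched with a toy n-of-n XOR scheme (an illustrative assumption on our part, not the authors' exact construction): the key protecting a jointly owned item is split so that every tagged co-owner must contribute a share before the key, and hence the access decision, can be reconstructed.

```python
import functools
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """n-of-n split: n-1 random shares plus one share that XORs back to secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(functools.reduce(xor_bytes, shares, secret))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return functools.reduce(xor_bytes, shares)

key = os.urandom(16)             # content-encryption key for a shared photo
shares = split(key, 3)           # one share per tagged user
assert combine(shares) == key    # all co-owners contribute -> key recovered
```

Any coalition missing even one share sees only uniformly random bytes, which is the collaborative-consent property the paper wants instead of the union-of-access-lists behavior.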
  • Adjei, J.K., "Explaining the Role of Trust in Cloud Service Acquisition," Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014 2nd IEEE International Conference on, pp. 283-288, 8-11 April 2014. doi: 10.1109/MobileCloud.2014.48 An effective digital identity management system is a critical enabler of cloud computing, since it supports the provision of the required assurances to the transacting parties. Such assurances sometimes require the disclosure of sensitive personal information. Given the prevalence of various forms of identity abuse on the Internet, a re-examination of the factors underlying cloud service acquisition has become imperative. In order to provide better assurances, parties to cloud transactions must have confidence in service providers' ability and integrity in protecting their interests and personal information. Thus a trusted cloud identity ecosystem could promote such user confidence and assurances. Using a qualitative research approach, this paper explains the role of trust in cloud service acquisition by organizations. The paper focuses on the processes of acquisition of cloud services by financial institutions in Ghana. The study forms part of a comprehensive study on the monetization of personal identity information. Keywords: cloud computing; data protection; trusted computing; Ghana; Internet; cloud computing; cloud services acquisition; cloud transactions; digital identity management system; financial institutions; identity abuses; interest protection; organizations; personal identity information; sensitive personal information; service provider ability; service provider integrity; transacting parties; trusted cloud identity ecosystem; user assurances; user confidence; Banking; Cloud computing; Context; Law; Organizations; Privacy; cloud computing; information privacy; mediating; trust (ID#:14-2350) URL:
  • Kekkonen, T.; Kanstren, T.; Hatonen, K., "Towards Trusted Environment in Cloud Monitoring," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp. 180-185, 7-9 April 2014. doi: 10.1109/ITNG.2014.104 This paper investigates the problem of providing trusted monitoring information on a cloud environment to the cloud customers. The general trust between customer and provider is taken as a starting point. The paper discusses possible methods to strengthen this trust. It focuses on establishing a chain of trust inside the provider infrastructure to supply monitoring data for the customer. The goal is to enable delivery of state and event information to parties outside the cloud infrastructure. The current technologies and research are reviewed for the solution and the usage scenario is presented. Based on such technology, higher assurance of the cloud can be presented to the customer. This allows customers with high security requirements and responsibilities to have more confidence in accepting the cloud as their platform of choice. Keywords: cloud computing; security of data; trusted computing; cloud customers; cloud monitoring; cloud service provider infrastructure; monitoring data; security requirements; trusted environment; trusted monitoring information; Hardware; Monitoring; Operating systems; Probes; Registers; Security; Virtual machining; TPM; cloud; integrity measurement; remote attestation; security concerns; security measurement (ID#:14-2351) URL:
  • Dubrova, E.; Naslund, M.; Selander, G., "Secure and Efficient LBIST For Feedback Shift Register-Based Cryptographic Systems," Test Symposium (ETS), 2014 19th IEEE European, pp. 1-6, 26-30 May 2014. doi: 10.1109/ETS.2014.6847821 Cryptographic methods are used to protect confidential information against unauthorised modification or disclosure. Cryptographic algorithms providing high assurance exist, e.g. AES. However, many open problems remain in assuring the security of a hardware implementation of a cryptographic algorithm. Security of a hardware implementation can be compromised by a random fault or a deliberate attack. The traditional testing methods are good at detecting random faults, but they do not provide adequate protection against malicious alterations of a circuit known as hardware Trojans. For example, a recent attack on Intel's Ivy Bridge processor demonstrated that the traditional Logic Built-In Self-Test (LBIST) may fail even in the simple case of stuck-at-fault-type Trojans. In this paper, we present a novel LBIST method for Feedback Shift Register (FSR)-based cryptographic systems which can detect such Trojans. The specific properties of FSR-based cryptographic systems allow us to reach 100% single stuck-at fault coverage with a small set of deterministic tests. The test execution time of the proposed method is at least two orders of magnitude shorter than that of pseudo-random pattern-based LBIST. Our results enable an efficient protection of FSR-based cryptographic systems from random and malicious stuck-at faults.
Keywords: cryptography; logic testing; shift registers; FSR-based cryptographic systems; Ivy Bridge processor; LBIST method; confidential information protection; cryptographic algorithms; cryptographic methods; deliberate attack; feedback shift register-based cryptographic systems; hardware Trojans; logic built-in self-test; random fault attack; stuck-at fault coverage; Boolean functions; Circuit faults; Clocks; Cryptography; Logic gates; Trojan horses; Vectors (ID#:14-2352) URL:
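A toy illustration of the observation underlying the Dubrova et al. entry (our sketch, not the paper's method): a feedback shift register's output stream is fully determined by its taps and seed, so a single stuck-at fault on any stage visibly corrupts the sequence, which is exactly what deterministic LBIST test patterns can exploit.

```python
# 4-bit Fibonacci LFSR with an optional stuck-at fault injected on one stage.
def lfsr_stream(taps, state, n, stuck_at=None):
    out = []
    for _ in range(n):
        out.append(state[-1])                 # observe the output stage
        fb = 0
        for t in taps:                        # feedback = XOR of tapped stages
            fb ^= state[t]
        state = [fb] + state[:-1]             # shift right, insert feedback
        if stuck_at is not None:              # model a stuck-at-0/1 fault
            pos, val = stuck_at
            state[pos] = val
    return out

taps, seed = (0, 3), [1, 0, 0, 1]
good = lfsr_stream(taps, seed[:], 15)
faulty = lfsr_stream(taps, seed[:], 15, stuck_at=(0, 0))
assert good != faulty                         # the fault is observable at the output
```

Because the fault-free sequence is deterministic, comparing a short observed window against the expected stream suffices to flag the fault, unlike pseudo-random LBIST, which may never exercise the affected pattern.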
  • Zlomislic, Vinko; Fertalj, Kresimir; Sruk, Vlado, "Denial of Service Attacks: An Overview," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp. 1-6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876979 Denial of service (DoS) attacks present one of the most significant threats to assurance of dependable and secure information systems. Rapid development of new and increasingly sophisticated attacks requires resourcefulness in designing and implementing reliable defenses. This paper presents an overview of current DoS attack and defense concepts, from a theoretical and practical point of view. Considering the elaborated DoS mechanisms, main directions are proposed for future research required in defending against the evolving threat. Keywords: Computer crime; Filtering; Floods; Protocols; Reliability; Servers; DDoS; Denial of Service; Denial of Sustainability; DoS; Network Security; System Security (ID#:14-2353) URL:
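As a concrete example of one basic flood-mitigation building block from the DoS-defense literature surveyed above (our sketch, not drawn from the paper), a token-bucket rate limiter bounds the request rate any single client can impose while still absorbing short bursts.

```python
# Token-bucket rate limiter: tokens refill at `rate` per second up to `capacity`;
# each admitted request spends one token, so sustained floods are shed.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now: float) -> bool:
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket(rate=1.0, capacity=2.0)   # 1 request/s, burst of 2
assert tb.allow(0.0) and tb.allow(0.0)     # burst absorbed
assert not tb.allow(0.0)                   # third immediate request dropped
assert tb.allow(2.0)                       # tokens refill over time
```

Per-source buckets of this kind are a standard first line of defense; the paper's point is that sophisticated distributed attacks require defenses well beyond such local filtering.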
  • Almohri, H.M.J.; Danfeng Yao; Kafura, D., "Process Authentication for High System Assurance," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 2, pp. 168-180, March-April 2014. doi: 10.1109/TDSC.2013.29 This paper points out the need in modern operating system kernels for a process authentication mechanism, where a process of a user-level application proves its identity to the kernel. Process authentication is different from process identification. Identification is a way to describe a principal; PIDs or process names are identifiers for processes in an OS environment. However, the information such as process names or executable paths that is conventionally used by OS to identify a process is not reliable. As a result, malware may impersonate other processes, thus violating system assurance. We propose a lightweight secure application authentication framework in which user-level applications are required to present proofs at runtime to be authenticated to the kernel. To demonstrate the application of process authentication, we develop a system call monitoring framework for preventing unauthorized use or access of system resources. It verifies the identity of processes before completing the requested system calls. We implement and evaluate a prototype of our monitoring architecture in Linux. The results from our extensive performance evaluation show that our prototype incurs reasonably low overhead, indicating the feasibility of our approach for cryptographically authenticating applications and their processes in the operating system.
Keywords: Linux; authorization; cryptography; operating system kernels; software architecture; software performance evaluation; system monitoring; Linux; cryptographic authenticating applications; high system assurance; modern operating system kernels; monitoring architecture; performance evaluation; process authentication mechanism; process identification; requested system calls; secure application authentication framework; system call monitoring framework; unauthorized system resource access prevention; unauthorized system resource use prevention; user-level application; Authentication; Kernel; Malware; Monitoring; Runtime; Operating system security; process authentication; secret application credential; system call monitoring (ID#:14-2354) URL:
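The challenge-response flavor of process authentication described above can be sketched with a simple HMAC exchange. All names here (the credential store, the "backup-daemon" application, the functions) are invented for illustration; the paper's actual kernel framework differs in detail.

```python
# Hypothetical sketch: a process proves its identity to a reference monitor
# with an HMAC over a monitor-issued nonce, keyed by a secret application
# credential provisioned at install time (unlike a spoofable process name).
import hashlib
import hmac
import os

CREDENTIALS = {"backup-daemon": b"s3cret-app-credential"}  # provisioned store

def challenge() -> bytes:
    """Monitor side: issue a fresh random nonce."""
    return os.urandom(16)

def prove(app_name: str, nonce: bytes) -> bytes:
    """Application side: compute the proof from its secret credential."""
    return hmac.new(CREDENTIALS[app_name], nonce, hashlib.sha256).digest()

def verify(app_name: str, nonce: bytes, proof: bytes) -> bool:
    """Monitor side: recompute and compare in constant time."""
    expected = hmac.new(CREDENTIALS[app_name], nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

nonce = challenge()
assert verify("backup-daemon", nonce, prove("backup-daemon", nonce))
assert not verify("backup-daemon", nonce, b"\x00" * 32)   # impersonator rejected
```

The point of the nonce is that a malicious process cannot replay an old proof: without the credential it cannot answer a fresh challenge, no matter what PID or executable path it presents.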
  • Xixiang Lv; Yi Mu; Hui Li, "Non-Interactive Key Establishment for Bundle Security Protocol of Space DTNs," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 1, pp. 5-13, Jan. 2014. doi: 10.1109/TIFS.2013.2289993 To ensure the authenticity, integrity, and confidentiality of bundles (the in-transit Protocol Data Units of the bundle protocol (BP) in space delay/disruption tolerant networks (DTNs)), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification suggests four IPsec-style security headers to provide four aspects of security services. However, this specification leaves key management as an open problem. Aiming to address the key establishment issue for BP, in this paper we utilize a time-evolving topology model and two-channel cryptography to design an efficient, noninteractive key exchange protocol. A time-evolving model is used to formally model the periodic and predetermined behavior patterns of space DTNs, so that a node can schedule when and to whom it should send its public key. Meanwhile, the application of two-channel cryptography enables DTN nodes to exchange their public keys or revocation status information, with authentication assurance and in a noninteractive manner. The proposed scheme helps to establish a secure context to support BSP, tolerating the high delays and unexpected loss of connectivity of space DTNs.
Keywords: cryptographic protocols; delay tolerant networks; space communication links; telecommunication channels; telecommunication security; BSP specification; DTN nodes; IPsec style security headers; authentication assurance; authenticity; bundle security protocol; connectivity loss; consultative committee; delay-disruption tolerant networks; in-transit protocol data units; noninteractive key establishment; noninteractive key exchange protocol; noninteractive manner; revocation status information; security services; space DTN; space data systems bundle security protocol; time-evolving model; time-evolving topology model; two-channel cryptography; Authentication; Delays; Message authentication; Protocols; Public key; Space-based delay tolerant networks; bundle authentication; key establishment (ID#:14-2355) URL:
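The noninteractive core of the key-establishment idea above can be illustrated with classic Diffie-Hellman: once each node has published its public key (on a schedule known from the time-evolving topology), a shared key can be derived with no further round trips. This is only the generic building block with toy parameters far too small for real use; the paper's scheme additionally relies on two-channel cryptography for authenticated key distribution.

```python
# Classic Diffie-Hellman with toy parameters (illustration only, not secure).
P = 2**64 - 59   # a 64-bit prime; real deployments use much larger groups
G = 5            # generator for the demonstration

def keypair(secret: int) -> tuple[int, int]:
    """Return (private, public) where public = G^private mod P."""
    return secret, pow(G, secret, P)

# Each DTN node publishes its public key ahead of contact windows;
# no interactive handshake is needed when a bundle must be protected.
a_priv, a_pub = keypair(123456789)
b_priv, b_pub = keypair(987654321)

# Either side derives the same shared secret from the other's public key.
k_ab = pow(b_pub, a_priv, P)
k_ba = pow(a_pub, b_priv, P)
assert k_ab == k_ba
```

The absence of a handshake is what makes this style of key establishment attractive for space DTNs, where round trips may take minutes or links may vanish mid-exchange.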


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.