Biblio

L
Balakrishnan, R., Parekh, R..  2014.  Learning to predict subject-line opens for large-scale email marketing. 2014 IEEE International Conference on Big Data (Big Data). :579–584.

Billions of dollars of services and goods are sold through email marketing. Subject lines have a strong influence on open rates, as consumers often open emails based on the subject. Traditionally, email subject lines are composed based on the best judgment of human editors. We propose a method to help the editors by predicting subject-line open rates, learning from past subject lines. The method derives different types of features from subject lines based on keywords, performance of past subject lines, and syntax. Furthermore, we evaluate the contribution of individual subject-line keywords to overall open rates using an iterative method, namely Attribution Scoring, and use this for improved predictions. A random forest based model is trained to combine these features and predict performance. We use a dataset of more than a hundred thousand different subject lines with many billions of impressions to train and test the method. The proposed method shows significant improvement in prediction accuracy over the baselines for both new and already used subject lines.
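
As a rough illustration of the final modeling step described above, the sketch below trains a random forest to map subject-line features to open rates. The feature matrix and targets are placeholders, not the paper's actual feature set.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Placeholder feature matrix: one row per subject line, with keyword,
    # past-performance, and syntax features as the abstract describes.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 20))
    y = rng.random(1000)                      # open rates in [0, 1]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))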

Zuin, Gianlucca, Chaimowicz, Luiz, Veloso, Adriano.  2018.  Learning Transferable Features For Open-Domain Question Answering. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

Corpora used to learn open-domain Question-Answering (QA) models are typically collected from a wide variety of topics or domains. Since QA requires understanding natural language, open-domain QA models generally need very large training corpora. A simple way to alleviate data demand is to restrict the domain covered by the QA model, thus leading to domain-specific QA models. While learning improved QA models for a specific domain is still challenging due to the lack of sufficient training data in the topic of interest, additional training data can be obtained from related topic domains. Thus, instead of learning a single open-domain QA model, we investigate domain adaptation approaches in order to create multiple improved domain-specific QA models. We demonstrate that this can be achieved by stratifying the source dataset, without the need to search for complementary data, unlike many other domain adaptation approaches. We propose a deep architecture that jointly exploits convolutional and recurrent networks for learning domain-specific features while transferring domain-shared features. That is, we use transferable features to enable model adaptation from multiple source domains. We consider different transference approaches designed to learn span-level and sentence-level QA models. We found that domain adaptation greatly improves sentence-level QA performance, and span-level QA benefits from sentence information. Finally, we also show that a simple clustering algorithm may be employed when the topic domains are unknown, and that the resulting loss in accuracy is negligible.
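
The abstract's last point, discovering topic domains with a simple clustering algorithm, could look like the sketch below. The TF-IDF/k-means combination and the toy questions are our assumptions; the abstract does not name the algorithm used.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    questions = [
        "What causes rainfall?", "How do hurricanes form?",
        "Who wrote Hamlet?", "When was Shakespeare born?",
    ]
    X = TfidfVectorizer(stop_words="english").fit_transform(questions)
    domains = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(domains)   # each cluster becomes a pseudo topic domain for training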

Baba, Asif Iqbal, Jaeger, Manfred, Lu, Hua, Pedersen, Torben Bach, Ku, Wei-Shinn, Xie, Xike.  2016.  Learning-Based Cleansing for Indoor RFID Data. Proceedings of the 2016 International Conference on Management of Data. :925–936.

RFID is widely used for object tracking in indoor environments, e.g., airport baggage tracking. Analyzing RFID data offers insight into the underlying tracking systems as well as the associated business processes. However, the inherent uncertainty in RFID data, including noise (cross readings) and incompleteness (missing readings), poses challenges to high-level RFID data querying and analysis. In this paper, we address these challenges by proposing a learning-based data cleansing approach that, unlike existing approaches, requires no detailed prior knowledge about the spatio-temporal properties of the indoor space and the RFID reader deployment. Requiring only minimal information about the RFID deployment, the approach learns relevant knowledge from raw RFID data and uses it to cleanse the data. In particular, we model raw RFID readings as time series that are sparse because the indoor space is only partly covered by a limited number of RFID readers. We propose the Indoor RFID Multi-variate Hidden Markov Model (IR-MHMM) to capture the uncertainties of indoor RFID data as well as the correlation of moving object locations and object RFID readings. We propose three state space design methods for IR-MHMM that enable the learning of parameters while contending with raw RFID data time series. We solely use raw uncleansed RFID data for the learning of model parameters, requiring no special labeled data or ground truth. The resulting IR-MHMM based RFID data cleansing approach is able to recover missing readings and reduce cross readings with high effectiveness and efficiency, as demonstrated by extensive experimental studies with both synthetic and real data. Given enough indoor RFID data for learning, the proposed approach achieves a data cleansing accuracy comparable to or even better than that of state-of-the-art techniques requiring very detailed prior knowledge, making our solution superior in terms of both effectiveness and employability.
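
The IR-MHMM itself is multi-variate; as a much-simplified, single-variate illustration of the underlying idea, the toy Viterbi decoder below recovers a most-likely location sequence from sparse readings, with symbol 0 standing for a missed reading. All matrices are invented for the example, not taken from the paper.

    import numpy as np

    A = np.array([[0.8, 0.2, 0.0],          # location transition model
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])
    B = np.array([[0.7, 0.3, 0.0, 0.0],     # P(observation | location);
                  [0.7, 0.0, 0.3, 0.0],     # column 0 is "no reading"
                  [0.7, 0.0, 0.0, 0.3]])
    pi = np.array([1.0, 0.0, 0.0])

    def viterbi(obs):
        T, N = len(obs), A.shape[0]
        delta, psi = np.zeros((T, N)), np.zeros((T, N), dtype=int)
        delta[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            for j in range(N):
                scores = delta[t - 1] * A[:, j]
                psi[t, j] = scores.argmax()
                delta[t, j] = scores.max() * B[j, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t, path[-1]]))
        return path[::-1]

    print(viterbi([1, 0, 0, 2, 0, 3]))       # fills in the missed readings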

Olaimat, M. Al, Lee, D., Kim, Y., Kim, J., Kim, J..  2020.  A Learning-based Data Augmentation for Network Anomaly Detection. 2020 29th International Conference on Computer Communications and Networks (ICCCN). :1–10.

While machine learning technologies have advanced remarkably over the past several years, one of the fundamental requirements for the success of learning-based approaches is the availability of high-quality data that thoroughly represent the individual classes in a problem space. Unfortunately, it is not uncommon to observe a significant degree of class imbalance, with only a few instances for minority classes, in many datasets, including network traffic traces highly skewed toward a large number of normal connections while containing very few attack instances. A well-known approach to addressing the class imbalance problem is data augmentation, which generates synthetic instances belonging to minority classes. However, traditional statistical techniques may be limited, since data extended through statistical sampling has the same density as the original data instances with only a minor degree of variation. This paper takes a learning-based approach to data augmentation to enable effective network anomaly detection. One of the critical challenges for the learning-based approach is the mode collapse problem, which results in a limited diversity of samples and which we also observed in our preliminary experimental results. To this end, we present a novel "Divide-Augment-Combine" (DAC) strategy, which groups instances based on their characteristics and augments data on a per-group basis, representing each subset independently using a generative adversarial model. Our experimental results with two recently collected public network datasets (UNSW-NB15 and IDS-2017) show that the proposed technique improves performance by up to 21.5% for identifying network anomalies.
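
A minimal sketch of the "Divide-Augment-Combine" flow, with k-means as the grouping step and Gaussian jitter standing in for the per-group generative adversarial model (which would be far longer to show); all data are placeholders.

    import numpy as np
    from sklearn.cluster import KMeans

    def augment_group(group, n_new, rng):
        # Stand-in for a per-group GAN: resample with small jitter.
        idx = rng.integers(len(group), size=n_new)
        return group[idx] + rng.normal(0, 0.01, (n_new, group.shape[1]))

    rng = np.random.default_rng(0)
    minority = rng.random((120, 8))                    # placeholder attack flows
    labels = KMeans(n_clusters=4, n_init=10,
                    random_state=0).fit_predict(minority)        # divide
    synthetic = np.vstack([augment_group(minority[labels == k], 50, rng)
                           for k in range(4)])                   # augment
    augmented = np.vstack([minority, synthetic])                 # combine
    print(augmented.shape)                                       # (320, 8)
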
Lu, X., Wan, X., Xiao, L., Tang, Y., Zhuang, W..  2018.  Learning-Based Rogue Edge Detection in VANETs with Ambient Radio Signals. 2018 IEEE International Conference on Communications (ICC). :1–6.

Edge computing for mobile devices in vehicular ad hoc networks (VANETs) has to address rogue edge attacks, in which a rogue edge node claims to be the serving edge in the vehicle in order to steal user secrets and help launch other attacks such as man-in-the-middle attacks. Rogue edge detection in VANETs is more challenging than spoofing detection in indoor wireless networks due to the high mobility of onboard units (OBUs) and the large-scale network infrastructure with roadside units (RSUs). In this paper, we propose a physical (PHY)-layer rogue edge detection scheme for VANETs based on the shared ambient radio signals observed along the same moving trace of the mobile device and the serving edge in the same vehicle. In this scheme, the edge node under test has to send the physical properties of the ambient radio signals, including the received signal strength indicator (RSSI) of the ambient signals with the corresponding source media access control (MAC) address, during a given time slot. The mobile device can compare the received ambient signal properties against its own record, or apply the RSSI of the received signals, to detect rogue edge attacks, and it determines the test threshold for detection. We adopt a reinforcement learning technique to enable the mobile device to achieve the optimal detection policy in the dynamic VANET without being aware of the VANET model and the attack model. Simulation results show that the Q-learning based detection scheme can significantly reduce the detection error rate and increase the utility compared with existing schemes.
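
The abstract does not give the state/action formulation, so the sketch below reduces the idea to a bandit-style update that learns a detection threshold from observed utilities; the reward function is a stand-in for the actual VANET interaction.

    import random

    thresholds = [round(0.1 * i, 1) for i in range(1, 10)]
    Q = {t: 0.0 for t in thresholds}
    alpha, eps = 0.1, 0.2

    def detection_utility(t):
        # Hypothetical utility of running the RSSI test at threshold t.
        return random.gauss(1.0 - abs(t - 0.5), 0.1)

    for _ in range(2000):
        t = random.choice(thresholds) if random.random() < eps else max(Q, key=Q.get)
        Q[t] += alpha * (detection_utility(t) - Q[t])

    print("learned test threshold:", max(Q, key=Q.get))
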
Lou, Xin, Tran, Cuong, Yau, David K.Y., Tan, Rui, Ng, Hongwei, Fu, Tom Zhengjia, Winslett, Marianne.  2019.  Learning-Based Time Delay Attack Characterization for Cyber-Physical Systems. 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1–6.

Cyber-physical systems (CPSes) rely on computing and control techniques to achieve system safety and reliability. However, recent attacks show that these techniques are vulnerable once cyber-attackers have bypassed air gaps. The attacks may cause service disruptions or even physical damage. This paper designs a built-in attack characterization scheme for one general type of cyber-attack in CPS, which we call a time delay attack, that delays the transmission of system control commands. We use recurrent neural networks in deep learning to estimate the delay values from the input trace. Specifically, to deal with long time-sequence data, we design the deep learning model using stacked bidirectional long short-term memory (LSTM) units. The proposed approach is tested using data generated from a power plant control system. The results show that the LSTM-based deep learning approach works well with data traces from three sensor measurements, i.e., temperature, pressure, and power generation, in the power plant control system. Moreover, we show that the proposed approach outperforms a baseline approach based on k-nearest neighbors.
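
A minimal PyTorch sketch of the described model shape, a stacked bidirectional LSTM regressing a delay value from the three sensor traces; layer sizes and hyperparameters are guesses, not the paper's.

    import torch
    import torch.nn as nn

    class DelayEstimator(nn.Module):
        def __init__(self, n_features=3, hidden=64, layers=2):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                                bidirectional=True, batch_first=True)
            self.head = nn.Linear(2 * hidden, 1)

        def forward(self, x):                  # x: (batch, time, features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])       # delay estimate per trace

    traces = torch.randn(8, 200, 3)            # temperature, pressure, power
    print(DelayEstimator()(traces).shape)      # torch.Size([8, 1])
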
Hojjati, Avesta, Adhikari, Anku, Struckmann, Katarina, Chou, Edward, Tho Nguyen, Thi Ngoc, Madan, Kushagra, Winslett, Marianne S., Gunter, Carl A., King, William P..  2016.  Leave Your Phone at the Door: Side Channels That Reveal Factory Floor Secrets. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :883–894.

From pencils to commercial aircraft, every man-made object must be designed and manufactured. When it is cheaper or easier to steal a design or a manufacturing process specification than to invent one's own, the incentive for theft is present. As more and more manufacturing data comes online, incidents of such theft are increasing. In this paper, we present a side-channel attack on manufacturing equipment that reveals both the form of a product and its manufacturing process, i.e., exactly how it is made. In the attack, a human deliberately or accidentally places an attack-enabled phone close to the equipment or makes or receives a phone call on any phone nearby. The phone executing the attack records audio and, optionally, magnetometer data. We present a method of reconstructing the product's form and manufacturing process from the captured data, based on machine learning, signal processing, and human assistance. We demonstrate the attack on a 3D printer and a CNC mill, each with its own acoustic signature, and discuss the commonalities in the sensor data captured for these two different machines. We compare the quality of the data captured with a variety of smartphone models. Capturing data from the 3D printer, we reproduce the form and process information of objects previously unknown to the reconstructors. On average, our accuracy is within 1 mm in reconstructing the length of a line segment in a fabricated object's shape and within 1 degree in determining an angle in a fabricated object's shape. We conclude with recommendations for defending against these attacks.
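
The reconstruction pipeline starts from acoustic features; a sketch of that first step with SciPy, using synthetic audio in place of a real recording.

    import numpy as np
    from scipy.signal import spectrogram

    fs = 44100
    audio = np.random.randn(fs * 5)            # placeholder 5 s phone recording
    freqs, times, Sxx = spectrogram(audio, fs=fs, nperseg=1024)
    # Time-frequency energy (Sxx) is the kind of feature a classifier could
    # map to individual machine movements.
    print(Sxx.shape)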

Tychalas, Dimitrios, Keliris, Anastasis, Maniatakos, Michail.  2019.  LED Alert: Supply Chain Threats for Stealthy Data Exfiltration in Industrial Control Systems. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :194–199.

Industrial Internet-of-Things has been touted as the next revolution in the industrial domain, offering interconnectivity, independence, real-time operation, and self-optimization. Integration of smart systems, however, bridges the gap between information and operation technology, creating new avenues for attacks from the cyber domain. The dismantling of this air-gap, in conjunction with the devices' long lifespan (in the range of 20 to 30 years), motivates us to bring the attention of the community to emerging advanced persistent threats. We demonstrate a threat that bridges the air-gap by leaking data from memory to analog peripherals through Direct Memory Access (DMA), delivered as a firmware modification through the supply chain. The attack automatically adapts to a target device by leveraging the Device Tree and resides solely in the peripherals, completely transparent to the main CPU, by judiciously short-circuiting specific components. We implement this attack on a commercial Programmable Logic Controller, leaking information over the available LEDs. We evaluate the presented attack vector in terms of stealthiness, and demonstrate no observable overhead on both CPU performance and DMA transfer speed. Since traditional anomaly detection techniques would fail to detect this firmware trojan, this work highlights the need for industrial control system-appropriate techniques that can be applied promptly to installed devices.

Na, L., Yunwei, D., Tianwei, C., Chao, W., Yang, G..  2015.  The Legitimacy Detection for Multilevel Hybrid Cloud Algorithm Based Data Access. 2015 IEEE International Conference on Software Quality, Reliability and Security - Companion. :169–172.

In this paper, a joint algorithm is designed to detect a variety of unauthorized access risks in a multilevel hybrid cloud. First, the access history among different virtual machines in the multilevel hybrid cloud is recorded using a global flow diagram. Then, the global flow graph is taken as an auxiliary decision-making basis for a data-access legitimacy detection algorithm, which is given a formal representation. Finally, the implementation process is specified; the algorithm can effectively detect violations such as simple unauthorized level crossing, indirect unauthorized access, and other irregularities.
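
As an illustration of using the global flow graph as a decision basis, the sketch below flags accesses that cross security levels upward; the node labels and the specific rule are our assumptions, not the paper's algorithm.

    import networkx as nx

    # Hypothetical access-history graph: nodes are VMs with a security
    # level; an edge (u, v) records that u accessed data held by v.
    G = nx.DiGraph()
    G.add_nodes_from([("vm_low", {"level": 1}),
                      ("vm_mid", {"level": 2}),
                      ("vm_high", {"level": 3})])
    G.add_edges_from([("vm_low", "vm_high"), ("vm_high", "vm_mid")])

    violations = [(u, v) for u, v in G.edges
                  if G.nodes[u]["level"] < G.nodes[v]["level"]]
    print("unauthorized level crossings:", violations)   # [('vm_low', 'vm_high')]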

Afzali, Hammad, Torres-Arias, Santiago, Curtmola, Reza, Cappos, Justin.  2018.  Le-Git-Imate: Towards Verifiable Web-Based Git Repositories. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :469–482.

Web-based Git hosting services such as GitHub and GitLab are popular choices to manage and interact with Git repositories. However, they lack an important security feature: the ability to sign Git commits. Users instruct the server to perform repository operations on their behalf and have to trust that the server will execute their requests faithfully. Such trust may be unwarranted, though, because a malicious or a compromised server may execute the requested actions in an incorrect manner, leading to a different state of the repository than what the user intended. In this paper, we show a range of high-impact attacks that can be executed stealthily when developers use the web UI of a Git hosting service to perform common actions such as editing files or merging branches. We then propose le-git-imate, a defense against these attacks which provides security guarantees comparable and compatible with Git's standard commit signing mechanism. We implement le-git-imate as a Chrome browser extension. le-git-imate does not require changes on the server side and can thus be used immediately. It also preserves current workflows used in GitHub/GitLab, does not require the user to leave the browser, and allows anyone to verify that the server's actions faithfully follow the user's requested actions. Moreover, experimental evaluation using the browser extension shows that le-git-imate has performance comparable with Git's standard commit signature mechanism. With our solution in place, users can take advantage of GitHub/GitLab's web-based features without sacrificing security, thus paving the way towards verifiable web-based Git repositories.
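
The verification hinges on the fact that a Git commit ID is the SHA-1 of the serialized commit object, so a client can recompute it independently of the server. A minimal sketch (real commit objects may carry additional headers, such as GPG signatures):

    import hashlib

    def git_commit_id(tree, parent, author, committer, message):
        body = (f"tree {tree}\n"
                f"parent {parent}\n"
                f"author {author}\n"
                f"committer {committer}\n\n"
                f"{message}").encode()
        return hashlib.sha1(b"commit %d\x00" % len(body) + body).hexdigest()

    # If the ID computed from the user's intended change differs from the ID
    # the server published, the server did not execute the request faithfully.
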
Guo, Wenbo, Mu, Dongliang, Xu, Jun, Su, Purui, Wang, Gang, Xing, Xinyu.  2018.  LEMNA: Explaining Deep Learning Based Security Applications. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :364–379.

While deep learning has shown great potential in various domains, the lack of transparency has limited its application in security or safety-critical areas. Existing research has attempted to develop explanation techniques to provide interpretable explanations for each classification decision. Unfortunately, current methods are optimized for non-security tasks (e.g., image analysis). Their key assumptions are often violated in security applications, leading to poor explanation fidelity. In this paper, we propose LEMNA, a high-fidelity explanation method dedicated to security applications. Given an input data sample, LEMNA generates a small set of interpretable features to explain how the input sample is classified. The core idea is to approximate a local area of the complex deep learning decision boundary using a simple interpretable model. The local interpretable model is specially designed to (1) handle feature dependency to better work with security applications (e.g., binary code analysis); and (2) handle nonlinear local boundaries to boost explanation fidelity. We evaluate our system using two popular deep learning applications in security (a malware classifier, and a function start detector for binary reverse-engineering). Extensive evaluations show that LEMNA's explanation has a much higher fidelity level compared to existing methods. In addition, we demonstrate practical use cases of LEMNA to help machine learning developers validate model behavior, troubleshoot classification errors, and automatically patch the errors of the target models.
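
LEMNA's actual estimator combines fused lasso with mixture regression; the sketch below shows only the shared core idea, fitting a sparse linear surrogate to perturbations around one input, in the style of earlier explanation methods. The black-box model and parameters are placeholders.

    import numpy as np
    from sklearn.linear_model import Lasso

    def local_explain(f, x, n_samples=500, sigma=0.1):
        # Sample around x, query the black-box model, fit a sparse
        # linear surrogate; its nonzero weights are the explanation.
        X = x + np.random.normal(0, sigma, (n_samples, x.shape[0]))
        y = np.array([f(row) for row in X])
        return Lasso(alpha=0.01).fit(X, y).coef_

    blackbox = lambda row: float(row[:3].sum() > 1.5)   # placeholder classifier
    weights = local_explain(blackbox, np.random.rand(10))
    print(np.nonzero(weights)[0])                        # influential features
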
Deng, Zhaoxia, Feldman, Ariel, Kurtz, Stuart A., Chong, Frederic T..  2017.  Lemonade from Lemons: Harnessing Device Wearout to Create Limited-Use Security Architectures. Proceedings of the 44th Annual International Symposium on Computer Architecture. :361–374.

Most architectures are designed to mitigate the usually undesirable phenomenon of device wearout. We take a contrarian view and harness this phenomenon to create hardware security mechanisms that resist attacks by statistically enforcing an upper bound on hardware uses, and consequently attacks. For example, let us assume that a user may log into a smartphone a maximum of 50 times a day for 5 years, resulting in approximately 91,250 legitimate uses. If we assume at least 8-character passwords and we require login (and retrieval of the storage decryption key) to traverse hardware that wears out in 91,250 uses, then an adversary has a negligible chance of successful brute-force attack before the hardware wears out, even assuming real-world password cracking by professionals. M-way replication of our hardware and periodic re-encryption of storage can increase the daily usage bound by a factor of M. The key challenge is to achieve practical statistical bounds on both minimum and maximum uses for an architecture, given that individual devices can vary widely in wearout characteristics. We introduce techniques for architecturally controlling these bounds and perform a design space exploration for three use cases: a limited-use connection, a limited-use targeting system and one-time pads. These techniques include decision trees, parallel structures, Shamir's secret-sharing mechanism, Reed-Solomon codes, and module replication. We explore the cost in area, energy and latency of using these techniques to achieve system-level usage targets given device-level wearout distributions. With redundant encoding, for example, we can improve exponential sensitivity to device lifetime variation to linear sensitivity, reducing the total number of NEMS devices by 4 orders of magnitude to about 0.8 million for limited-use connections (compared with 4 billion if without redundant encoding).
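
Of the techniques listed, Shamir's secret sharing is the easiest to show compactly; a toy k-of-n split over a prime field, where the field size and interface are our choices for illustration.

    import random

    P = 2**127 - 1                      # a Mersenne prime as the field modulus

    def split(secret, n, k):
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0 recovers the secret.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(123456789, n=5, k=3)
    assert combine(shares[:3]) == 123456789   # any 3 of 5 shares suffice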

Pandes, Tiffany Lyn O., Omorog, Challiz D., Medrano, Regino B..  2018.  LeMTrac: Legislative Management and Tracking System. :1–6.

Information and Communications Technology (ICT) has rationalized government services into a more efficient and transparent government. However, a large part of government services remains manual due to the high cost of ICT. The purpose of this paper is to explore the role of e-governance and ICT in the legislative management of municipalities in the Philippines. This study adopted the phases of the Princeton Project Management Methodology (PPMM) as the approach in the development of LeMTrac. This paper utilized a developmental-quantitative research design involving two (2) sets of respondents: the end-users and IT experts. The majority of the respondents perceived the system as "highly acceptable", with an average Likert score of 4.72 for the ISO 9126 software quality metric Usability. The findings also reveal that the integration of LeMTrac within the Sangguniang Bayan (SB) Office in the Municipal Local Government Units (LGU) of Nabua and Bula, Camarines Sur provided better accessibility, security, and management of documents.

Rao, Ashwini, Hibshi, Hanan, Breaux, Travis, Lehker, Jean-Michel, Niu, Jianwei.  2014.  Less is More?: Investigating the Role of Examples in Security Studies Using Analogical Transfer. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :7:1–7:12.

Information system developers and administrators often overlook critical security requirements and best practices. This may be due to lack of tools and techniques that allow practitioners to tailor security knowledge to their particular context. In order to explore the impact of new security methods, we must improve our ability to study the impact of security tools and methods on software and system development. In this paper, we present early findings of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. To motivate this research, we formulate hypotheses from analogical transfer theory in psychology. The independent variables include number of problem surfaces and schemas, and the dependent variable is the answer accuracy. Our study results do not show a statistically significant difference in performance when the number and types of examples are varied. We discuss the limitations, threats to validity and opportunities for future studies in this area.
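
As a sketch of the kind of significance test behind the reported null result, with invented accuracy scores for two training conditions (the study's actual data are not reproduced here):

    import numpy as np
    from scipy.stats import ttest_ind

    one_example = np.array([0.62, 0.55, 0.70, 0.58, 0.66])    # placeholder
    many_examples = np.array([0.64, 0.60, 0.68, 0.61, 0.65])  # placeholder
    stat, p = ttest_ind(one_example, many_examples)
    print(f"t = {stat:.2f}, p = {p:.3f}")   # p > 0.05: no significant difference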

Muthusamy, Vinod, Slominski, Aleksander, Ishakian, Vatche, Khalaf, Rania, Reason, Johnathan, Rozsnyai, Szabolcs.  2016.  Lessons Learned Using a Process Mining Approach to Analyze Events from Distributed Applications. Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems. :199–204.

The execution of distributed applications is captured by the events generated by the individual components. However, understanding the behavior of these applications from their event logs can be a complex and error-prone task, compounded by the fact that applications continuously change, rendering any knowledge obsolete. We describe our experiences applying a suite of process-aware analytic tools to a number of real-world scenarios, and distill our lessons learned. For example, we have seen that these tools are used iteratively, where insights gained at one stage inform the configuration decisions made at an earlier stage. As well, we have observed that data onboarding, where the raw data is cleaned and transformed, is the most critical stage in the pipeline and requires the most manual effort and domain knowledge. In particular, missing, inconsistent, and low-resolution event time stamps are recurring problems that require better solutions. The experiences and insights presented here will assist practitioners applying process analytic tools to real scenarios, and reveal to researchers some of the more pressing challenges in this space.

Halderman, J. Alex, Schoen, Seth D., Heninger, Nadia, Clarkson, William, Paul, William, Calandrino, Joseph A., Feldman, Ariel J., Appelbaum, Jacob, Felten, Edward W..  2009.  Lest We Remember: Cold-boot Attacks on Encryption Keys. Commun. ACM. 52:91–98.

Contrary to widespread assumption, dynamic RAM (DRAM), the main memory in most modern computers, retains its contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard. Although DRAM becomes less reliable when it is not refreshed, it is not immediately erased, and its contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access to a machine. It poses a particular threat to laptop users who rely on disk encryption: we demonstrate that it could be used to compromise several popular disk encryption products without the need for any special devices or materials. We experimentally characterize the extent and predictability of memory retention and report that remanence times can be increased dramatically with simple cooling techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them.
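
Memory remanence can be quantified as the fraction of bits that flip between two dumps of the same region; a small helper in that spirit, with synthetic images standing in for real captures:

    def bit_error_rate(before: bytes, after: bytes) -> float:
        # Fraction of bits that decayed between two memory images.
        flipped = sum(bin(a ^ b).count("1") for a, b in zip(before, after))
        return flipped / (8 * len(before))

    dump_t0 = bytes(range(256))                               # placeholder images
    dump_t5 = bytes(b ^ 1 if b % 64 == 0 else b for b in range(256))
    print(f"decayed bits: {bit_error_rate(dump_t0, dump_t5):.4%}")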

Prabhu, Santhosh, Dong, Mo, Meng, Tong, Godfrey, P. Brighten, Caesar, Matthew.  2017.  Let Me Rephrase That: Transparent Optimization in SDNs. ACM Symposium on SDN Research (SOSR 2017).

Enterprise networks today have highly diverse correctness requirements and relatively common performance objectives. As a result, preferred abstractions for enterprise networks are those which allow matching security and correctness specifications, while transparently managing performance. Existing SDN network management architectures, however, bundle correctness and performance as a single abstraction. We argue that this creates an SDN ecosystem that is unnecessarily hard to build, maintain and evolve. We advocate a separation of the diverse correctness abstractions from generic performance optimization, to enable easier evolution of SDN controllers and platforms. We propose Oreo, a first step towards a common and relatively transparent performance optimization layer for SDN. Oreo performs the optimization by first building a model that describes every flow in the network, and then performing network-wide, multi-objective optimization based on this model, without disrupting higher-level correctness.
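
A toy version of the optimization step: place each flow on the path that minimizes the resulting worst link utilization, leaving correctness constraints to a higher layer. The topology and demands are invented, and Oreo's actual model is far richer.

    import networkx as nx

    net = nx.Graph()
    net.add_edges_from([("s", "a"), ("a", "d"), ("s", "b"), ("b", "d")])
    util = {tuple(sorted(e)): 0.0 for e in net.edges}

    def place_flow(src, dst, demand):
        best, best_cost = None, float("inf")
        for path in nx.all_simple_paths(net, src, dst):
            edges = [tuple(sorted(e)) for e in zip(path, path[1:])]
            cost = max(util[e] + demand for e in edges)
            if cost < best_cost:
                best, best_cost = edges, cost
        for e in best:
            util[e] += demand
        return best

    place_flow("s", "d", 0.4)
    place_flow("s", "d", 0.4)     # takes the other path to balance load
    print(util)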

Sachidananda, Vinay, Siboni, Shachar, Shabtai, Asaf, Toh, Jinghui, Bhairav, Suhas, Elovici, Yuval.  2017.  Let the Cat Out of the Bag: A Holistic Approach Towards Security Analysis of the Internet of Things. Proceedings of the 3rd ACM International Workshop on IoT Privacy, Trust, and Security. :3–10.

The exponential increase in Internet of Things (IoT) devices has resulted in a range of new and unanticipated vulnerabilities associated with their use. IoT devices, from smart homes to smart enterprises, can easily be compromised. One of the major problems associated with the IoT is maintaining security; the vulnerable nature of IoT devices poses a challenge to many aspects of security, including security testing and analysis. It is far from trivial to perform security analysis for IoT devices to understand the loopholes and the very nature of the devices themselves. Given these issues, there has been less emphasis on security testing and analysis of the IoT. In this paper, we show our preliminary efforts in the area of security analysis for IoT devices and introduce a security IoT testbed for performing security analysis. We also discuss the necessary design, requirements and the architecture to support our security analysis conducted via the proposed testbed.

Bukasa, Sebanjila K., Lashermes, Ronan, Lanet, Jean-Louis, Legay, Axel.  2018.  Let's Shock Our IoT's Heart: ARMv7-M Under (Fault) Attacks. Proceedings of the 13th International Conference on Availability, Reliability and Security. :33:1–33:6.

A fault attack is a well-known technique in which the behaviour of a chip is deliberately disturbed by hardware means in order to undermine the security of the information handled by the target. In this paper, we explore how electromagnetic fault injection (EMFI) can be used to create vulnerabilities in sound software, targeting a Cortex-M3 microcontroller. Several use cases are demonstrated experimentally: control flow hijacking, buffer overflow (even in the presence of a canary), covert backdoor insertion and Return Oriented Programming can be achieved even if programs are not vulnerable from a software point of view. These results suggest that protecting software against vulnerabilities must take hardware into account as well.

Castle, Sam, Pervaiz, Fahad, Weld, Galen, Roesner, Franziska, Anderson, Richard.  2016.  Let's Talk Money: Evaluating the Security Challenges of Mobile Money in the Developing World. Proceedings of the 7th Annual Symposium on Computing for Development. :4:1–4:10.

Digital money drives modern economies, and the global adoption of mobile phones has enabled a wide range of digital financial services in the developing world. Where there is money, there must be security, yet prior work on mobile money has identified discouraging vulnerabilities in the current ecosystem. We begin by arguing that the situation is not as dire as it may seem: many reported issues can be resolved by security best practices and updated mobile software. To support this argument, we diagnose the problems from two directions: (1) a large-scale analysis of existing financial service products and (2) a series of interviews with 7 developers and designers in Africa and South America. We frame this assessment within a novel, systematic threat model. In our large-scale analysis, we evaluate 197 Android apps and take a deeper look at 71 products to assess specific organizational practices. We conclude that although attack vectors are present in many apps, service providers are generally making intentional, security-conscious decisions. The developer interviews support these findings, as most participants demonstrated technical competency and experience, and all worked within established organizations with regimented code review processes and dedicated security teams.

Kothari, Suresh, Tamrawi, Ahmed, Sauceda, Jeremías, Mathews, Jon.  2016.  Let's Verify Linux: Accelerated Learning of Analytical Reasoning Through Automation and Collaboration. Proceedings of the 38th International Conference on Software Engineering Companion. :394–403.

We describe our experiences in the classroom using the internet to collaboratively verify a significant safety and security property across the entire Linux kernel. With 66,609 instances to check across three versions of Linux, the naive approach of simply dividing up the code and assigning it to students does not scale, and does little to educate. However, by teaching and applying analytical reasoning, the instances can be categorized effectively, the problems of scale can be managed, and students can collaborate and compete with one another to achieve an unprecedented level of verification. We refer to our approach as Evidence-Enabled Collaborative Verification (EECV). A key aspect of this approach is the use of visual software models, which provide mathematically rigorous and critical evidence for verification. The visual models make analytical reasoning interactive, interesting and applicable to large software. Visual models are generated automatically using a tool we have developed called L-SAP [14]. This tool generates an Instance Verification Kit (IVK) for each instance, which contains all of the verification evidence for the instance. The L-SAP tool is implemented on a software graph database platform called Atlas [6]. This platform comes with a powerful query language and interactive visualization to build and apply visual models for software verification. The course project is based on three recent versions of the Linux operating system with altogether 37 MLOC and 66,609 verification instances. The instances are accessible through a website [2] for students to collaborate and compete. The Atlas platform, the L-SAP tool, the structured labs for the project, and the lecture slides are available upon request for academic use.
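
The abstract does not name the verified property; assuming a lock/unlock pairing property of the kind L-SAP targets, a toy path check over a control-flow graph might look like this (node names are hypothetical):

    import networkx as nx

    # Hypothetical control-flow graph; "lock"/"unlock" nodes mark the events.
    cfg = nx.DiGraph([("entry", "lock"), ("lock", "a"), ("a", "unlock"),
                      ("a", "exit"), ("unlock", "exit")])

    def unsafe_paths(g):
        # A path is unsafe if it acquires the lock but never releases it.
        return [p for p in nx.all_simple_paths(g, "entry", "exit")
                if "lock" in p and "unlock" not in p]

    print(unsafe_paths(cfg))   # [['entry', 'lock', 'a', 'exit']]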

Pal, Manjish, Sahu, Prashant, Jaiswal, Shailesh.  2018.  LevelTree: A New Scalable Data Center Networks Topology. 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). :482–486.

In recent times it has become crucial for data center networks (DCNs) to broaden their system capacity to meet the increasing demands of cloud-based applications. A good DCN topology must possess numerous properties, such as low diameter, high bisection bandwidth, and ease of organization. In addition, a DCN topology should offer failure resiliency, scalability, and ease of construction and routing. In this paper, we introduce a new data center network topology termed LevelTree, built up from several modules that grow as a tree topology, where each module is constructed from a complete graph. LevelTree demonstrates good topological properties, and it outperforms important topologies such as Jellyfish, VolvoxDC, and Fattree by providing a superior design with greater capacity.
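
From the description, each module is a complete graph and modules grow as a tree; one plausible (not necessarily faithful) construction with NetworkX, useful for inspecting properties such as diameter:

    import networkx as nx

    def tree_of_cliques(depth, fanout, clique_size):
        # Hypothetical construction loosely following the LevelTree idea.
        G, counter = nx.Graph(), [0]

        def new_module():
            nodes = [f"m{counter[0]}_{i}" for i in range(clique_size)]
            counter[0] += 1
            G.add_edges_from((a, b) for i, a in enumerate(nodes)
                             for b in nodes[i + 1:])
            return nodes

        def build(level, parent):
            mod = new_module()
            if parent:
                G.add_edge(parent[0], mod[0])   # link child module to parent
            if level < depth:
                for _ in range(fanout):
                    build(level + 1, mod)

        build(0, None)
        return G

    G = tree_of_cliques(depth=2, fanout=2, clique_size=4)
    print(G.number_of_nodes(), nx.diameter(G))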

Masduki, B. W., Ramli, K., Salman, M..  2017.  Leverage Intrusion Detection System Framework for Cyber Situational Awareness System. 2017 International Conference on Smart Cities, Automation Intelligent Computing Systems (ICON-SONICS). :64–69.

As one of the security components in cyber situational awareness systems, an Intrusion Detection System (IDS) is implemented by many organizations in their networks to address the impact of network attacks. Regardless of the tools and technologies used to generate security alarms, an IDS can provide a situation overview of network traffic. Despite the security alarm data generated, most organizations do not have the right techniques and further analysis to make this alarm data more valuable for the security team in handling attacks and reducing risk to the organization. This paper proposes an IDS metrics framework for cyber situational awareness systems that includes the latest technologies and techniques that can be used to create valuable metrics for security advisors in making the right decisions. The metrics framework consists of the various tools and techniques used to evaluate the data. The evaluation of the data is then used as a measurement against one or more reference points to produce an outcome that can be very useful for the decision-making process of a cyber situational awareness system. The framework also offers Graphical User Interface (GUI) tools that produce graphical displays and provide a strong platform for analysis and decision-making by security teams.
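
The abstract leaves the concrete metrics open; a sketch of basic alarm-quality measures such a framework might compute from labeled alarm data (the counts are placeholders):

    def ids_metrics(tp, fp, tn, fn):
        # Basic alarm-quality metrics a situational awareness dashboard
        # might report to security teams.
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)            # detection rate
        fpr = fp / (fp + tn)               # false-alarm rate
        f1 = 2 * precision * recall / (precision + recall)
        return {"precision": precision, "recall": recall, "fpr": fpr, "f1": f1}

    print(ids_metrics(tp=90, fp=25, tn=860, fn=10))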

Gu, Peng, Li, Shuangchen, Stow, Dylan, Barnes, Russell, Liu, Liu, Xie, Yuan, Kursun, Eren.  2016.  Leveraging 3D Technologies for Hardware Security: Opportunities and Challenges. Proceedings of the 26th Edition on Great Lakes Symposium on VLSI. :347–352.

3D die stacking and 2.5D interposer design are promising technologies to improve integration density, performance and cost. Current approaches face serious issues in dealing with emerging security challenges such as side channel attacks, hardware trojans, secure IC manufacturing and IP piracy. By utilizing intrinsic characteristics of 2.5D and 3D technologies, we propose novel opportunities in designing secure systems. We present: (i) a 3D architecture for shielding side-channel information; (ii) split fabrication using active interposers; (iii) circuit camouflage on monolithic 3D IC, and (iv) 3D IC-based security processing-in-memory (PIM). Advantages and challenges of these designs are discussed, showing that the new designs can improve existing countermeasures against security threats and further provide new security features.