Biblio

L
Holmes, Ashton, Desai, Sunny, Nahapetian, Ani.  2016.  LuxLeak: Capturing Computing Activity Using Smart Device Ambient Light Sensors. Proceedings of the 2nd Workshop on Experiences in the Design and Implementation of Smart Objects. :47–52.

In this paper, we consider side-channel mechanisms, specifically using smart device ambient light sensors, to capture information about user computing activity. We distinguish keyboard keystrokes using only the ambient light sensor readings from a smart watch worn on the user's non-dominant hand. Additionally, we investigate the feasibility of capturing screen emanations for determining user browser usage patterns. The experimental results expose privacy and security risks, as well as the potential for new mobile user interfaces and applications.
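Illustrative only: a minimal Python sketch of how keystroke windows extracted from a smartwatch's ambient-light trace might be fed to an off-the-shelf classifier. The window size, the statistical features, and the synthetic data are assumptions, not the authors' pipeline.

    # Hypothetical sketch: classifying keystrokes from ambient-light windows.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def window_features(lux, centers, half_width=25):
        """Extract simple statistics from a lux-reading window around each keystroke."""
        feats = []
        for c in centers:
            w = lux[c - half_width:c + half_width]
            feats.append([w.mean(), w.std(), w.min(), w.max(), w[-1] - w[0]])
        return np.array(feats)

    # Synthetic stand-in data: a lux trace plus known keystroke positions and labels.
    rng = np.random.default_rng(0)
    lux = rng.normal(300.0, 5.0, 10_000)          # ambient-light samples
    centers = rng.integers(50, 9_950, 200)        # sample indices of keystrokes
    labels = rng.integers(0, 4, 200)              # which key group was pressed

    X = window_features(lux, centers)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, labels, cv=5).mean())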

Lei Xu, Pham Dang Khoa, Seung Hun Kim, Won Woo Ro, Weidong Shi.  2014.  LUT based secure cloud computing - An implementation using FPGAs. ReConFigurable Computing and FPGAs (ReConFig), 2014 International Conference on. :1-6.

Cloud computing is widely deployed to handle challenges such as big data processing and storage. Due to the outsourcing and sharing features of cloud computing, security is one of the main concerns that hinders end users from shifting their businesses to the cloud. Many cryptographic techniques have been proposed to alleviate data security issues in cloud computing, but most of these works focus on solving a specific security problem such as data sharing, comparison, or searching. At the same time, little effort has been devoted to program security and to formalizing the security requirements in the context of cloud computing. We propose a formal definition of the security of cloud computing, which captures the essence of the security requirements of both data and programs. Analysis of some existing technologies under the proposed definition shows its effectiveness. We also give a simple look-up table based solution for secure cloud computing that satisfies the given definition. Since FPGAs use look-up tables as their main computation component, they are a suitable hardware platform for the proposed secure cloud computing scheme. We therefore use FPGAs to implement the proposed solution for the k-means clustering algorithm, which demonstrates the effectiveness of the proposed solution.
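A toy Python sketch of the general look-up-table idea (outsourcing a function as a table over secretly permuted values); this illustrates LUT-based evaluation only, not the paper's construction or its security guarantees.

    # Toy sketch (not the paper's scheme): outsourcing f via a permuted look-up table.
    import secrets

    DOMAIN = 256
    f = lambda x: (3 * x + 7) % DOMAIN            # example public function

    # Client-side secret: a random permutation of the domain and its inverse.
    perm = list(range(DOMAIN))
    secrets.SystemRandom().shuffle(perm)
    inv = [0] * DOMAIN
    for i, p in enumerate(perm):
        inv[p] = i

    # Table uploaded to the cloud: maps encoded input to encoded output.
    cloud_lut = {perm[x]: perm[f(x)] for x in range(DOMAIN)}

    def cloud_eval(encoded_x):
        """Runs in the cloud: a single table look-up, never sees plaintext values."""
        return cloud_lut[encoded_x]

    x = 42
    assert inv[cloud_eval(perm[x])] == f(x)       # client decodes the result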
 

Liao, Xiaojing, Alrwais, Sumayah, Yuan, Kan, Xing, Luyi, Wang, XiaoFeng, Hao, Shuang, Beyah, Raheem.  2016.  Lurking Malice in the Cloud: Understanding and Detecting Cloud Repository As a Malicious Service. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1541–1552.

The popularity of cloud hosting services also brings new security challenges: it has been reported that these services are increasingly utilized by miscreants for their malicious online activities. Mitigating this emerging threat, posed by such "bad repositories" (simply Bar), is challenging due to their hosting strategy, which differs from that of traditional hosting services, the lack of direct observation of the repositories by those outside the cloud, the reluctance of cloud providers to scan their customers' repositories without consent, and the unique evasion strategies employed by the adversary. In this paper, we took the first step toward understanding and detecting this emerging threat. Using a small set of "seeds" (i.e., confirmed Bars), we identified a set of collective features from the websites they serve (e.g., attempts to hide Bars), which uniquely characterize the Bars. These features were utilized to build a scanner that detected over 600 Bars on leading cloud platforms like Amazon and Google, and 150K sites, including popular ones like groupon.com, using them. Highlights of our study include the pivotal roles played by these repositories in malicious infrastructures; other important discoveries include how the adversary exploited legitimate cloud repositories and why the adversary uses Bars in the first place, which has never been reported before. These findings bring such malicious services to the spotlight and contribute to a better understanding of, and ultimately to eliminating, this new threat.

Wu, Peilun, Guo, Hui.  2019.  LuNet: A Deep Neural Network for Network Intrusion Detection. 2019 IEEE Symposium Series on Computational Intelligence (SSCI). :617—624.

Network attacks are a significant security issue for modern society. From small mobile devices to large cloud platforms, almost all computing products used in our daily life are networked and potentially under the threat of network intrusion. With the fast-growing number of network users, network intrusions become more frequent, volatile, and advanced. Being able to capture intrusions in time for such a large-scale network is critical and very challenging. To this end, machine learning (or AI) based network intrusion detection (NID), due to its intelligent capability, has drawn increasing attention in recent years. Compared to traditional signature-based approaches, AI-based solutions are more capable of detecting variants of advanced network attacks. However, the high detection rate achieved by existing designs is usually accompanied by a high rate of false alarms, which may significantly discount the overall effectiveness of the intrusion detection system. In this paper, we consider the existence of spatial and temporal features in the network traffic data and propose a hierarchical CNN+RNN neural network, LuNet. In LuNet, the convolutional neural network (CNN) and the recurrent neural network (RNN) learn the input traffic data in sync with a gradually increasing granularity such that both spatial and temporal features of the data can be effectively extracted. Our experiments on two network traffic datasets show that, compared to state-of-the-art network intrusion detection techniques, LuNet not only offers a high level of detection capability but also has a much lower false-alarm rate.
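A minimal PyTorch sketch of a hierarchical CNN+RNN in the spirit of LuNet, where each level applies a convolution followed by an LSTM; the layer sizes, number of levels, and pooling are assumptions rather than the published architecture.

    import torch
    import torch.nn as nn

    class CNNRNNBlock(nn.Module):
        def __init__(self, in_ch, out_ch, hidden):
            super().__init__()
            self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
            self.rnn = nn.LSTM(out_ch, hidden, batch_first=True)

        def forward(self, x):                      # x: (batch, channels, length)
            x = torch.relu(self.conv(x))
            out, _ = self.rnn(x.transpose(1, 2))   # LSTM expects (batch, length, features)
            return out.transpose(1, 2)             # back to (batch, features, length)

    class LuNetSketch(nn.Module):
        def __init__(self, n_features, n_classes):
            super().__init__()
            self.level1 = CNNRNNBlock(1, 32, 32)   # coarse granularity
            self.level2 = CNNRNNBlock(32, 64, 64)  # finer granularity
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):                      # x: (batch, n_features) flow records
            x = x.unsqueeze(1)                     # treat features as a 1-D "signal"
            x = self.level2(self.level1(x))
            return self.head(x.mean(dim=2))        # pool over the feature axis

    model = LuNetSketch(n_features=41, n_classes=5)
    print(model(torch.randn(8, 41)).shape)         # torch.Size([8, 5])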

Schuette, J., Brost, G. S..  2018.  LUCON: Data Flow Control for Message-Based IoT Systems. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :289-299.

Today's emerging Industrial Internet of Things (IIoT) scenarios are characterized by the exchange of data between services across enterprises. Traditional access and usage control mechanisms are only able to determine if data may be used by a subject, but lack an understanding of how it may be used. The ability to control the way data is processed is, however, crucial for enterprises to guarantee (and provide evidence of) compliant processing of critical data, as well as for users who need to control whether their private data may be analyzed or linked with additional information - a major concern in IoT applications processing personal information. In this paper, we introduce LUCON, a data-centric security policy framework for distributed systems that considers data flows by controlling how messages may be routed across services and how they are combined and processed. LUCON policies prevent information leaks, bind data usage to obligations, and enforce data flows across services. Policy enforcement is based on dynamic taint analysis at runtime and an upfront static verification of message routes against policies. We discuss the semantics of these two complementing enforcement models and illustrate how LUCON policies are compiled from a simple policy language into a first-order logic representation. We demonstrate the practical application of LUCON in a real-world IoT middleware and discuss its integration into Apache Camel. Finally, we evaluate the runtime impact of LUCON and discuss performance and scalability aspects.
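A toy sketch of label-based route verification in the spirit of LUCON's static check: taint labels are propagated along a proposed message route and each hop is checked against a forbidden-flow policy. The policy model and service names are illustrative assumptions.

    # Toy data-flow enforcement sketch; the (taint label, sink) policy model is assumed.
    FORBIDDEN = {("personal", "public-cloud-export")}   # disallowed (label, sink) pairs

    SERVICE_TAINTS = {         # labels each service attaches to messages it emits
        "gps-sensor": {"personal"},
        "anonymizer": set(),   # could also be modelled as removing labels
        "aggregator": set(),
    }

    def route_allowed(route):
        """Statically verify a message route: propagate taints and check each hop."""
        taints = set()
        for service in route:
            for label in taints:
                if (label, service) in FORBIDDEN:
                    return False, f"{label!r} data must not reach {service!r}"
            taints |= SERVICE_TAINTS.get(service, set())
        return True, "route complies with policy"

    print(route_allowed(["gps-sensor", "aggregator", "public-cloud-export"]))
    print(route_allowed(["aggregator", "public-cloud-export"]))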

Xu, Yanli, Jiang, Shengming, Liu, Feng.  2016.  A LTE-based Communication Architecture for Coastal Networks. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :6:1–6:2.
Currently, coastal communication is mainly provided by satellite networks, which are expensive, offer low transmission rates, and cannot support underwater communication efficiently. In this work, we propose a communication architecture for coastal networks based on long term evolution (LTE) cellular networks, in which a cellular network architecture is designed for the maritime communication scenario. Some key technologies of next-generation cellular networks, such as device-to-device (D2D) communication and multiple input multiple output (MIMO), are integrated into the proposed architecture to support more efficient data transmission. In addition, over-water nodes aid the transmission of the underwater network to improve communication quality. With the proposed communication architecture, the coastal network can provide high-quality communication service to traffic with different quality-of-service (QoS) requirements.
Yao, Lin, Jiang, Binyao, Deng, Jing, Obaidat, Mohammad S..  2019.  LSTM-Based Detection for Timing Attacks in Named Data Network. 2019 IEEE Global Communications Conference (GLOBECOM). :1—6.

Named Data Network (NDN) is an alternative to the host-centric networking exemplified by today's Internet. One key feature of NDN is in-network caching, which reduces access delay and query overhead by caching popular contents at the source as well as at a few other nodes. Unfortunately, in-network caching suffers from various privacy risks posed by different attacks, one of which is the timing attack. This is an attack that infers whether a consumer has recently requested certain contents based on the time difference between the delivery of contents that are currently cached and those that are not. In order to prevent this privacy leakage and resist such attacks, we propose a detection scheme based on a Long Short-term Memory (LSTM) model. Based on the four input features of the LSTM (cache hit ratio, average request interval, request frequency, and types of requested contents), we capture the more important eigenvalues in a timely manner by dividing a constant time window into a few small slices in order to detect timing attacks accurately. We have performed extensive simulations to compare our scheme with several other state-of-the-art schemes in classification accuracy, detection ratio, false alarm ratio, and F-measure. The results show that our scheme performs better in all cases studied.
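A minimal PyTorch sketch of the detector shape implied by the abstract: an LSTM over per-slice feature vectors (cache hit ratio, average request interval, request frequency, content type) followed by a binary classifier. Window and slice sizes are assumptions.

    import torch
    import torch.nn as nn

    SLICES_PER_WINDOW, N_FEATURES = 10, 4   # hit ratio, avg interval, frequency, content type

    class TimingAttackDetector(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 2)          # attack vs. benign

        def forward(self, x):                        # x: (batch, slices, features)
            _, (h, _) = self.lstm(x)
            return self.out(h[-1])                   # classify from the last hidden state

    detector = TimingAttackDetector()
    window = torch.rand(16, SLICES_PER_WINDOW, N_FEATURES)  # one batch of feature windows
    print(detector(window).shape)                            # torch.Size([16, 2])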

Althubiti, Sara A., Jones, Eric Marcell, Roy, Kaushik.  2018.  LSTM for Anomaly-Based Network Intrusion Detection. 2018 28th International Telecommunication Networks and Applications Conference (ITNAC). :1–3.
Due to the massive amount of network traffic, attackers have a great chance to cause huge damage to the network system or its users. Intrusion detection plays an important role in ensuring security for the system by detecting attacks and malicious activities. In this paper, we utilize the CIDDS dataset and apply a deep learning approach, Long Short-Term Memory (LSTM), to implement an intrusion detection system. This research achieves a reasonable accuracy of 0.85.
Andoni, Alexandr, Razenshteyn, Ilya, Nosatzki, Negev Shekel.  2017.  LSH Forest: Practical Algorithms Made Theoretical. Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. :67–78.
We analyze LSH Forest [BCG05]—a popular heuristic for the nearest neighbor search—and show that a careful yet simple modification of it outperforms "vanilla" LSH algorithms. The end result is the first instance of a simple, practical algorithm that provably leverages data-dependent hashing to improve upon data-oblivious LSH. Here is the entire algorithm for the d-dimensional Hamming space. The LSH Forest, for a given dataset, applies a random permutation to all the d coordinates, and builds a trie on the resulting strings. In our modification, we further augment this trie: for each node, we store a constant number of points close to the mean of the corresponding subset of the dataset, which are compared to any query point reaching that node. The overall data structure is simply several such tries sampled independently. While the new algorithm does not quantitatively improve upon the best data-dependent hashing algorithms from [AR15] (which are known to be optimal), it is significantly simpler, being based on a practical heuristic, and is provably better than the best LSH algorithm for the Hamming space [IM98, HIM12].
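A rough Python sketch of the modified LSH Forest for Hamming space: permute the coordinates, index points by prefixes of the permuted strings, and keep a few stored points per node to compare queries against. Parameters are assumptions, and the representatives kept here are simply the first points seen rather than points close to the node mean as in the paper.

    import random
    from collections import defaultdict

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def build_trie(points, perm, reps_per_node=2):
        """Map each prefix of the permuted bit-string to a few stored points."""
        nodes = defaultdict(list)
        for p in points:
            s = [p[i] for i in perm]
            for depth in range(len(s) + 1):
                node = nodes[tuple(s[:depth])]
                if len(node) < reps_per_node:   # stand-in for "points near the node mean"
                    node.append(p)
        return nodes

    def query(trie, perm, q):
        """Descend as deep as possible, returning the closest representative seen."""
        s = [q[i] for i in perm]
        best = None
        for depth in range(len(s) + 1):
            node = trie.get(tuple(s[:depth]))
            if not node:
                break
            for p in node:
                if best is None or hamming(p, q) < hamming(best, q):
                    best = p
        return best

    d = 16
    rng = random.Random(0)
    data = [tuple(rng.randint(0, 1) for _ in range(d)) for _ in range(200)]
    perm = list(range(d)); rng.shuffle(perm)
    trie = build_trie(data, perm)
    print(query(trie, perm, data[0]))   # returns data[0] or a very close point
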
Wang, Fei, Kwon, Yonghwi, Ma, Shiqing, Zhang, Xiangyu, Xu, Dongyan.  2018.  Lprov: Practical Library-Aware Provenance Tracing. Proceedings of the 34th Annual Computer Security Applications Conference. :605-617.

With the continuing evolution of sophisticated APT attacks, provenance tracking is becoming an important technique for efficient attack investigation in enterprise networks. Most existing provenance techniques operate on system event auditing, which discloses dependence relationships by scrutinizing syscall traces. Unfortunately, such auditing-based provenance cannot track the causality of another important dimension in provenance: shared libraries. Different from other data-only system entities like files and sockets, dynamic libraries are linked at runtime and may get executed, which poses new challenges in provenance tracking. For example, library provenance cannot be tracked by syscalls and mappings; whether a library function is called and how it is called within an execution context is invisible at the syscall level; and linking a library does not guarantee its execution at runtime. Addressing these challenges is critical to tracking sophisticated attacks leveraging libraries. In this paper, to facilitate fine-grained investigation inside the execution of library binaries, we develop Lprov, a novel provenance tracking system which combines library tracing and syscall tracing. Upon a syscall, Lprov identifies the library calls, together with the stack that induces it, so that the library execution provenance can be accurately revealed. Our evaluation shows that Lprov can precisely identify attack provenance involving libraries, including malicious library attacks and library vulnerability exploitation, where syscall-based provenance tools fail. It incurs only 7.0% (geometric mean) runtime overhead and consumes 3 times less storage space than a state-of-the-art provenance tool.
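A Linux-only Python sketch of one ingredient of library-aware attribution: resolving a code address to the shared library mapped at that address via /proc/self/maps (demonstrated on the current process). Lprov itself operates at the tracing layer; this only illustrates the address-to-library mapping step.

    import ctypes

    def library_of(address, maps_path="/proc/self/maps"):
        """Return the mapped file whose address range contains `address`, if any."""
        with open(maps_path) as maps:
            for line in maps:
                fields = line.split()
                start, end = (int(x, 16) for x in fields[0].split("-"))
                if start <= address < end and len(fields) >= 6:
                    return fields[5]
        return None

    libc = ctypes.CDLL("libc.so.6")
    addr = ctypes.cast(libc.getpid, ctypes.c_void_p).value   # address of a libc function
    print(hex(addr), "->", library_of(addr))                 # e.g. .../libc.so.6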

Seliem, M., Elgazzar, K..  2020.  LPA-SDP: A Lightweight Privacy-Aware Service Discovery Protocol for IoT Environments. 2020 IEEE 6th World Forum on Internet of Things (WF-IoT). :1–7.
Latest forecasts show that 50 billion devices will be connected to the Internet by 2020. These devices will provide ubiquitous data access and enable smarter interactions in all aspects of our everyday life, including vital domains such as healthcare and battlefields, where privacy is a key requirement. With the increasing adoption of IoT and the explosion of these resource-constrained devices, manual discovery and configuration become significantly challenging. Although there are a number of resource discovery protocols that can be used efficiently in IoT deployments, none of them provides any privacy consideration. This paper presents LPA-SDP, a novel technique for service discovery that builds privacy into the design from the ground up. Performance evaluation demonstrates that LPA-SDP outperforms state-of-the-art discovery techniques for resource-constrained environments while preserving user and data privacy.
A. Papadopoulos, L. Czap, C. Fragouli.  2015.  LP formulations for secrecy over erasure networks with feedback. 2015 IEEE International Symposium on Information Theory (ISIT). :954-958.

We design polynomial-time schemes for secure message transmission over arbitrary networks, in the presence of an eavesdropper, where each edge corresponds to an erasure channel with public feedback. Our schemes are described through linear programming (LP) formulations that explicitly select (possibly different) sets of paths for key generation and message sending. Although our LPs are not always capacity-achieving, they outperform the best known alternatives in the literature and extend to incorporate several interesting scenarios.
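A toy linear program in the same spirit (not the paper's formulation): split the message over two assumed paths and bound the total message rate by the key-generation rate of a feedback path, solved with scipy.

    from scipy.optimize import linprog

    c1, c2, c3 = 3.0, 2.0, 4.0          # assumed capacities: message path 1, path 2, key path

    # Variables: [x1, x2, k] = message rate on path 1, on path 2, key-generation rate.
    res = linprog(
        c=[-1.0, -1.0, 0.0],            # maximize x1 + x2
        A_ub=[[1, 1, -1],               # x1 + x2 <= k  (message covered by secret key)
              [1, 0, 0],                # x1 <= c1
              [0, 1, 0],                # x2 <= c2
              [0, 0, 1]],               # k  <= c3
        b_ub=[0.0, c1, c2, c3],
        bounds=[(0, None)] * 3,
    )
    print(res.x, -res.fun)              # optimal rates and achievable secure message rate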

Conglei Shi, Yingcai Wu, Shixia Liu, Hong Zhou, Huamin Qu.  2014.  LoyalTracker: Visualizing Loyalty Dynamics in Search Engines. Visualization and Computer Graphics, IEEE Transactions on. 20:1733-1742.

The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze the behavior and glean insights from such complex, large-scale data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from vast amounts of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
 

Zhang, Naiji, Jaafar, Fehmi, Malik, Yasir.  2019.  Low-Rate DoS Attack Detection Using PSD Based Entropy and Machine Learning. 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :59–62.
The Distributed Denial of Service (DDoS) attack is one of the most common attacks and is hard to mitigate; it becomes even more difficult when dealing with Low-rate DoS (LDoS) attacks. LDoS exploits the vulnerability of the TCP congestion-control mechanism by sending malicious traffic at a low constant rate to affect the victim machine. Recently, machine learning approaches have been applied to detect complex DDoS attacks and improve the efficiency and robustness of the intrusion detection system. In this research, the algorithm is designed to balance the detection rate and its efficiency. The detection algorithm combines a Power Spectral Density (PSD) entropy function and a Support Vector Machine to distinguish LDoS traffic from normal traffic. In our solution, the detection rate and efficiency are adjustable based on a parameter in the decision algorithm. For high efficiency, the detection method always first calculates the PSD entropy and compares it with two adaptive thresholds. The thresholds can efficiently filter nearly 19% of the samples while maintaining a high detection rate. To minimize the computational cost and look only for the patterns that are most relevant for detection, a Support Vector Machine based model is applied to learn the traffic pattern and select appropriate features for the detection algorithm. The experimental results show that the proposed approach can detect 99.19% of LDoS attacks and has an O(n log n) time complexity in the best case.
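A minimal Python sketch of the two-stage idea described above: compute the PSD entropy of a traffic window, short-circuit on two thresholds, and fall back to an SVM for the ambiguous region. The window length, thresholds, features, and synthetic traffic are illustrative assumptions.

    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC

    def psd_entropy(traffic, fs=100):
        """Shannon entropy of the normalized power spectral density of a traffic window."""
        _, pxx = welch(traffic, fs=fs, nperseg=min(256, len(traffic)))
        p = pxx / pxx.sum()
        return -np.sum(p * np.log2(p + 1e-12))

    LOW_T, HIGH_T = 3.0, 6.5            # assumed adaptive thresholds

    def detect(window, svm_model):
        h = psd_entropy(window)
        if h < LOW_T:                   # periodic, pulse-like spectrum: flag immediately
            return True
        if h > HIGH_T:                  # broadband, noise-like spectrum: treat as normal
            return False
        features = [[h, window.mean(), window.std()]]   # ambiguous region: ask the SVM
        return bool(svm_model.predict(features)[0])

    # Train a toy SVM on synthetic windows (1 = LDoS-like periodic bursts, 0 = normal).
    rng = np.random.default_rng(1)
    t = np.arange(1000)
    normal = [rng.poisson(5, 1000).astype(float) for _ in range(50)]
    ldos = [5 + 20 * (np.sin(2 * np.pi * t / 100) > 0.95) + rng.normal(0, 1, 1000)
            for _ in range(50)]
    X = [[psd_entropy(w), w.mean(), w.std()] for w in normal + ldos]
    y = [0] * 50 + [1] * 50
    svm = SVC().fit(X, y)
    print(detect(ldos[0], svm), detect(normal[0], svm))
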
Wang, Meng, Chow, Joe H., Hao, Yingshuai, Zhang, Shuai, Li, Wenting, Wang, Ren, Gao, Pengzhi, Lackner, Christopher, Farantatos, Evangelos, Patel, Mahendra.  2019.  A Low-Rank Framework of PMU Data Recovery and Event Identification. 2019 International Conference on Smart Grid Synchronized Measurements and Analytics (SGSMA). :1–9.

The large amounts of synchrophasor data obtained by Phasor Measurement Units (PMUs) provide dynamic visibility into power systems. Extracting reliable information from the data can enhance power system situational awareness. The data quality often suffers from data losses, bad data, and cyber data attacks. Data privacy is also an increasing concern. In this paper, we discuss our recently proposed framework of data recovery, error correction, data privacy enhancement, and event identification methods by exploiting the intrinsic low-dimensional structures in the high-dimensional spatial-temporal blocks of PMU data. Our data-driven approaches are computationally efficient with provable analytical guarantees. The data recovery method can recover the ground-truth data even if simultaneous and consecutive data losses and errors happen across all PMU channels for some time. We can identify PMU channels that are under false data injection attacks by locating abnormal dynamics in the data. The data recovery method for the operator can extract the information accurately by collectively processing the privacy-preserving data from many PMUs. A cyber intruder with access to partial measurements cannot recover the data correctly even using the same approach. A real-time event identification method is also proposed, based on the new idea of characterizing an event by the low-dimensional subspace spanned by the dominant singular vectors of the data matrix.
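A generic low-rank completion sketch (iterative truncated-SVD imputation) illustrating how the low-dimensional structure of a PMU data block can be exploited to fill in lost samples; this is not the paper's algorithm or its analytical guarantees.

    import numpy as np

    def lowrank_complete(M, observed, rank=2, iters=50):
        """Fill missing entries of M (where observed==False) with a rank-`rank` estimate."""
        X = np.where(observed, M, 0.0)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            X = np.where(observed, M, approx)      # keep observed entries, update the rest
        return X

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)
    # Synthetic "PMU block": 20 channels driven by two shared low-frequency modes.
    modes = np.vstack([np.sin(2 * np.pi * 1.0 * t), np.cos(2 * np.pi * 0.5 * t)])
    M = rng.normal(size=(20, 2)) @ modes
    mask = rng.random(M.shape) > 0.3               # about 30% of samples lost
    rec = lowrank_complete(M, mask)
    print(np.abs(rec - M)[~mask].max())            # recovery error on the missing entries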

Page, Adam, Attaran, Nasrin, Shea, Colin, Homayoun, Houman, Mohsenin, Tinoosh.  2016.  Low-Power Manycore Accelerator for Personalized Biomedical Applications. Proceedings of the 26th Edition on Great Lakes Symposium on VLSI. :63–68.

Wearable personal health monitoring systems can offer a cost-effective solution for human healthcare. These systems must provide highly accurate, secure, and quick processing and delivery of vast amounts of data. In addition, wearable biomedical devices are used in inpatient, outpatient, and at-home e-Patient care, where they must constantly monitor the patient's biomedical and physiological signals 24/7. These biomedical applications require sampling and processing multiple streams of physiological signals within a strict power and area footprint. The processing typically consists of feature extraction, data fusion, and classification stages that require a large number of digital signal processing and machine learning kernels. In response to these requirements, in this paper, a low-power, domain-specific manycore accelerator named Power Efficient Nano Clusters (PENC) is proposed to map and execute the kernels of these applications. Experimental results show that the manycore is able to reduce energy consumption by up to 80% and 14% for DSP and machine learning kernels, respectively, when optimally parallelized. The performance of the proposed PENC manycore when acting as a coprocessor to an Intel Atom processor is compared with existing commercial off-the-shelf embedded processing platforms, including Intel Atom, Xilinx Artix-7 FPGA, and NVIDIA TK1 ARM-A15 with GPU SoC. The results show that the PENC manycore architecture reduces energy by as much as 10X while outperforming all off-the-shelf embedded processing platforms across all studied machine learning classifiers.

Shim, Yong, Sengupta, Abhronil, Roy, Kaushik.  2016.  Low-power Approximate Convolution Computing Unit with Domain-wall Motion Based "Spin-memristor" for Image Processing Applications. Proceedings of the 53rd Annual Design Automation Conference. :21:1–21:6.

Convolution serves as the basic computational primitive for various associative computing tasks ranging from edge detection to image matching. CMOS implementation of such computations entails significant bottlenecks in area and energy consumption due to the large number of multiplication and addition operations involved. In this paper, we propose an ultra-low power and compact hybrid spintronic-CMOS design for the convolution computing unit. Low-voltage operation of domain-wall motion based magneto-metallic "Spin-Memristor"s interfaced with CMOS circuits is able to perform the convolution operation with reasonable accuracy. Simulation results of Gabor filtering for edge detection reveal approximately 2.5× lower energy consumption compared to a baseline 45nm CMOS implementation.
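For reference, a small software version of the workload being accelerated: a Gabor-filter convolution for edge detection in numpy/scipy. The kernel parameters and test image are illustrative.

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
        """Real-valued Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

    image = np.zeros((64, 64)); image[:, 32:] = 1.0      # a vertical step edge
    response = convolve2d(image, gabor_kernel(theta=0.0), mode="same")
    print(np.unravel_index(np.abs(response).argmax(), response.shape))  # peaks near the edge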

Xu, Ke, Li, Yu, Huang, Bo, Liu, Xiangkai, Wang, Hong, Wu, Zhuoyan, Yan, Zhanpeng, Tu, Xueying, Wu, Tongqing, Zeng, Daibing.  2018.  A Low-Power 4096x2160@30Fps H.265/HEVC Video Encoder for Smart Video Surveillance. Proceedings of the International Symposium on Low Power Electronics and Design. :38:1–38:6.

This paper presents the design and VLSI implementation of a low-power HEVC main profile encoder, which is able to process up to 4096x2160@30fps 4:2:0 encoding in real time with a five-stage pipeline architecture. A pyramid ME (Motion Estimation) engine is employed to reduce search complexity. To compensate for video sequences with fast-moving objects, GME (Global Motion Estimation) is introduced to alleviate the effect of the limited search range. We also implement an alternative 5x5 search along with 3x3 to boost video quality. For intra mode decision, original pixels, instead of reconstructed ones, are used to reduce pipeline stalls. The encoder supports DVFS (Dynamic Voltage and Frequency Scaling) and features three operating modes, which helps to reduce power consumption by 25%. Scalable quality, which trades encoding quality for power by reducing the size of the search range and the number of intra prediction candidates, achieves 11.4% power reduction with 3.5% quality degradation. Furthermore, a lossless frame buffer compression is proposed, which reduces DDR bandwidth by 49.1% and power consumption by 13.6%. The entire video surveillance SoC is fabricated with TSMC 28nm technology in a 1.96 mm2 area. It consumes 2.88M logic gates and 117KB of SRAM. The measured power consumption is 103mW at 350MHz for 4K encoding in high-quality mode. The energy efficiency of 0.39nJ/pixel, which achieves 42% to 97% power reduction compared with reference designs, makes this work ideal for real-time low-power smart video surveillance applications.

Sengupta, A., Roy, D., Mohanty, S. P..  2019.  Low-Overhead Robust RTL Signature for DSP Core Protection: New Paradigm for Smart CE Design. 2019 IEEE International Conference on Consumer Electronics (ICCE). :1–6.
The design process of smart Consumer Electronics (CE) devices heavily relies on reusable Intellectual Property (IP) cores of Digital Signal Processors (DSP) and Multimedia Processors (MP). On the other hand, due to strict competition and rivalry between IP vendors, the problem of ownership conflict and IP piracy is surging. Therefore, to design a secure smart CE device, protection of the DSP/MP IP core is essential. Embedding a robust IP owner's signature can protect an IP core from ownership abuse and forgery. This paper presents a covert signature embedding process for DSP/MP IP cores at the Register-transfer level (RTL). The secret marks of the signature are distributed over the entire design such that it provides higher robustness. For example, for an 8th-order FIR filter, it incurs only 6% and 3% area overhead for the maximum- and minimum-size signatures, respectively, compared to the non-signature FIR RTL design, but with significantly enhanced security.
Zhan, Dongyang, Li, Huhua, Ye, Lin, Zhang, Hongli, Fang, Binxing, Du, Xiaojiang.  2019.  A Low-Overhead Kernel Object Monitoring Approach for Virtual Machine Introspection. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1–6.

Monitoring kernel object modifications of a virtual machine is widely used by virtual-machine-introspection-based security monitors to protect virtual machines in cloud computing, e.g., monitoring dentry objects to intercept file operations. However, most current virtual machine monitors, such as KVM and Xen, only support page-level monitoring, because the Intel EPT technology can only monitor page privileges. If out-of-virtual-machine security tools want to monitor some kernel objects, they need to intercept operations on the whole memory page. Since other objects are stored in the monitored pages, modifications to them will also trigger the monitor. Therefore, page-level memory monitoring usually introduces overhead to related kernel services of the target virtual machine. In this paper, we propose a low-overhead kernel object monitoring approach to reduce the overhead caused by page-level monitoring. The core idea is to migrate the target kernel objects to a protected memory area and then monitor the corresponding new memory pages. Since the new pages only contain the kernel objects to be monitored, other kernel objects will not trigger our monitor. Therefore, our monitor will not introduce runtime overhead to the related kernel services. The experimental results show that our system can monitor target kernel objects effectively with only very low overhead.

Ren, Kun, Diamond, Thaddeus, Abadi, Daniel J., Thomson, Alexander.  2016.  Low-Overhead Asynchronous Checkpointing in Main-Memory Database Systems. Proceedings of the 2016 International Conference on Management of Data. :1539–1551.

As it becomes increasingly common for transaction processing systems to operate on datasets that fit within the main memory of a single machine or a cluster of commodity machines, traditional mechanisms for guaranteeing transaction durability, which typically involve synchronous log flushes, incur increasingly unappealing costs to otherwise lightweight transactions. Many applications have turned to periodically checkpointing full database state. However, existing checkpointing methods, even those which avoid freezing the storage layer, often come with significant costs to operation throughput, end-to-end latency, and total memory usage. This paper presents Checkpointing Asynchronously using Logical Consistency (CALC), a lightweight, asynchronous technique for capturing database snapshots that does not require a physical point of consistency to create a checkpoint, and avoids the conspicuous latency spikes incurred by other database snapshotting schemes. Our experiments show that CALC can capture frequent checkpoints across a variety of transactional workloads with extremely small cost to transactional throughput and low additional memory usage compared to other state-of-the-art checkpointing systems.
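A heavily simplified, single-threaded Python sketch of the asynchronous-checkpointing idea: once a checkpoint begins, writers set aside a key's pre-checkpoint value on first update, so the checkpointer can later assemble a snapshot consistent with the checkpoint's start without freezing the store. This illustrates the concept only, not CALC's actual protocol.

    class Store:
        def __init__(self, data):
            self.live = dict(data)
            self.stable = {}             # pre-checkpoint values, kept only during a checkpoint
            self.checkpointing = False

        def write(self, key, value):
            if self.checkpointing and key not in self.stable:
                self.stable[key] = self.live[key]   # preserve the snapshot-time value
            self.live[key] = value

        def begin_checkpoint(self):
            self.checkpointing = True    # the logical point of consistency

        def collect_checkpoint(self):
            snap = {k: self.stable.get(k, v) for k, v in self.live.items()}
            self.stable.clear()
            self.checkpointing = False
            return snap

    db = Store({"a": 1, "b": 2})
    db.begin_checkpoint()
    db.write("a", 100)                   # update arrives while the checkpoint is in progress
    print(db.collect_checkpoint())       # {'a': 1, 'b': 2}  -- consistent with checkpoint start
    print(db.live)                       # {'a': 100, 'b': 2}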

Ozmen, Muslum Ozgur, Yavuz, Attila A..  2017.  Low-Cost Standard Public Key Cryptography Services for Wireless IoT Systems. Proceedings of the 2017 Workshop on Internet of Things Security and Privacy. :65–70.

Internet of Things (IoT) is an integral part of application domains such as smart homes and digital healthcare. Various standard public key cryptography techniques (e.g., key exchange, public key encryption, signatures) are available to provide fundamental security services for IoT. However, despite their pervasiveness and well-proven security, they have also been shown to be highly energy costly for embedded devices. Hence, it is a critical task to improve the energy efficiency of standard cryptographic services while preserving their desirable properties. In this paper, we exploit synergies among various cryptographic primitives with algorithmic optimizations to substantially reduce the energy consumption of standard cryptographic techniques on embedded devices. Our contributions are: (i) We harness special precomputation techniques, which have not been considered for some important cryptographic standards, to boost the performance of key exchange, integrated encryption, and hybrid constructions. (ii) We provide self-certification for these techniques to push their performance to the edge. (iii) We implemented our techniques and their counterparts on an 8-bit AVR ATmega 2560 and evaluated their performance. We used the microECC library and made the implementations on the NIST-recommended secp192 curve, due to its standardization. Our experiments confirmed significant improvements in battery life (up to 7x) while preserving the desirable properties of the standard techniques. Moreover, to the best of our knowledge, we provide the first open-source framework including such a set of optimizations on low-end devices.
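Baseline illustration only: a standard ECDH exchange on the NIST secp192r1 curve using the Python `cryptography` package, with HKDF for key derivation. The paper's precomputation and self-certification optimizations operate below this API level and are not shown.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    device_key = ec.generate_private_key(ec.SECP192R1())     # e.g., the IoT node
    gateway_key = ec.generate_private_key(ec.SECP192R1())    # e.g., the gateway

    # Each side derives the same shared secret from its private key and the peer's public key.
    shared1 = device_key.exchange(ec.ECDH(), gateway_key.public_key())
    shared2 = gateway_key.exchange(ec.ECDH(), device_key.public_key())
    assert shared1 == shared2

    session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                       info=b"iot session").derive(shared1)
    print(session_key.hex())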

Liu, Yuanyuan, Cheng, Jianping, Zhang, Li, Xing, Yuxiang, Chen, Zhiqiang, Zheng, Peng.  2014.  A low-cost dual energy CT system with sparse data. Tsinghua Science and Technology. 19:184-194.

Dual Energy CT (DECT) has recently gained significant research interest owing to its ability to discriminate materials, and it is hence widely applied in the field of nuclear safety and security inspection. With current technological developments, DECT is typically realized by using two sets of detectors, one for detecting lower-energy X-rays and another for detecting higher-energy X-rays. This makes the imaging system expensive, limiting its practical implementation. In 2009, our group performed a preliminary study on a new low-cost system design, using only a complete data set for the lower energy level and a sparse data set for the higher energy level. This could significantly reduce the cost of the system, as it contains a much smaller number of detector elements. The reconstruction method is the key point of this system. In the present study, we further validated this system and proposed a robust method involving three main steps: (1) estimate the missing data iteratively with TV constraints; (2) use the reconstruction from the complete lower-energy CT data set to form an initial estimate of the projection data for the higher energy level; (3) use ordered views to accelerate the computation. Numerical simulations with different numbers of detector elements have also been examined. The results obtained in this study demonstrate that 1 + 14% CT data is sufficient to provide a rather good reconstruction of both the effective atomic number and electron density distributions of the scanned object, instead of 2 sets of CT data.

Stein, G., Peng, Q..  2018.  Low-Cost Breaking of a Unique Chinese Language CAPTCHA Using Curriculum Learning and Clustering. 2018 IEEE International Conference on Electro/Information Technology (EIT). :0595–0600.

Text-based CAPTCHAs are still commonly used to attempt to prevent automated access to web services. By displaying an image of distorted text, they attempt to create a challenge image that OCR software cannot interpret correctly, but a human user can easily determine the correct response to. This work focuses on a CAPTCHA used by a popular Chinese language question-and-answer website and how resilient it is to modern machine learning methods. While the majority of text-based CAPTCHAs focus on transcription tasks, the CAPTCHA solved in this work is based on localization of inverted symbols in a distorted image. A convolutional neural network (CNN) was created to evaluate the likelihood of a region in the image belonging to an inverted character. It is used with a feature map and clustering to identify potential locations of inverted characters. Training of the CNN was performed using curriculum learning and compared to other potential training methods. The proposed method was able to determine the correct response in 95.2% of cases on a simulated CAPTCHA and 67.6% on a set of real CAPTCHAs. Potential methods to increase the difficulty of the CAPTCHA and the success rate of the automated solver are considered.

Newmarch, Jan.  2016.  Low Power Wireless: Routing to the Internet. Linux Journal. 2016.

How to get two Raspberry Pis to communicate over a 6LoWPAN network.