Biblio

Filters: First Letter Of Last Name is Q
Q
Q. Wang, Y. Ren, M. Scaperoth, G. Parmer.  2015.  "SPeCK: a kernel for scalable predictability". 21st IEEE Real-Time and Embedded Technology and Applications Symposium. :121-132.

Multi- and many-core systems are increasingly prevalent in embedded systems. Additionally, isolation requirements between different partitions and criticalities are gaining in importance. This difficult combination is not well addressed by current software systems. Parallel systems require consistency guarantees on shared data structures, often provided by locks that use predictable resource-sharing protocols. However, as the number of cores increases, even a single shared cache line (e.g., for the lock) can cause significant interference. In this paper, we present a clean-slate design of the SPeCK kernel, the next generation of our COMPOSITE OS, that attempts to provide a strong version of scalable predictability: predictability bounds made on a single core remain constant as the number of cores increases. Results show that, despite using a non-preemptive kernel, SPeCK has strong scalable predictability and low average-case overheads, and demonstrates better response times than a state-of-the-art preemptive system.

Qadir, J., Hasan, O..  2015.  Applying Formal Methods to Networking: Theory, Techniques, and Applications. Communications Surveys & Tutorials, IEEE. 17:256-291.

Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, in particular the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.

Qawasmeh, Ethar, Al-Saleh, Mohammed I., Al-Sharif, Ziad A..  2019.  Towards a Generic Approach for Memory Forensics. 2019 Sixth HCT Information Technology Trends (ITT). :94–98.

The era of information technology has, unfortunately, contributed to a tremendous rise in the number of criminal activities. However, digital artifacts can be utilized to convict cybercriminals and expose their activities. Digital forensics is the science concerned with all aspects of cybercrime; it seeks digital evidence, following standard methodologies, that can be admitted in courtrooms. This paper is concerned with memory forensics because of the unique artifacts memory holds. Memory contains information about the current state of systems and applications. Moreover, an application's data explains how a criminal has been interacting with the application just before the memory is acquired. Memory forensics at the application level is currently ad hoc and cumbersome; targeted support for specific applications is what forensic researchers and practitioners are currently striving to provide. This paper suggests a general solution for investigating any application. Our solution aims to utilize an application's data structures and variable information in the investigation process, because an application's data has to be stored and retrieved by means of variables. Data-structure and variable information can be generated by compilers for debugging purposes. We show that this application information is a valuable resource to the investigator.

Qayum, Mohammad A., Badawy, Abdel-Hameed A., Cook, Jeanine.  2017.  DyAdHyTM: A Low Overhead Dynamically Adaptive Hybrid Transactional Memory with Application to Large Graphs. Proceedings of the International Symposium on Memory Systems. :327–336.

Big data is a buzzword used to describe massive volumes of data that provide opportunities for exploring new insights through data analytics. However, big data is mostly structured but can be semi-structured or unstructured. It is normally so large that it is not only difficult but also slow to process using traditional computing systems. One solution is to format the data as graph data structures and process them on a shared-memory architecture, using fast and novel policies such as transactional memory. In most graph applications in big-data-type problems, such as bioinformatics, social networks, and cybersecurity, graphs are sparse in nature. Due to this sparsity, we have the opportunity to use Transactional Memory (TM) as the synchronization policy for critical sections to speed up applications. At low conflict probability, TM performs better than most synchronization policies due to its inherent non-blocking characteristics. TM can be implemented in software, hardware, or a combination of both. However, hardware TM implementations are fast but limited by scarce hardware resources, while software implementations have high overheads that can degrade performance. In this paper, we develop a low-overhead, yet simple, dynamically adaptive (i.e., at runtime) hybrid (i.e., combining hardware and software) TM (DyAdHyTM) scheme that combines the best features of both Hardware TM (HTM) and Software TM (STM) while adapting to the application's requirements. It performs better than a coarse-grain lock by up to 8.12x, a low-overhead STM by up to 2.68x, a couple of implementations of HTMs (by up to 2.59x), and other HyTMs (by up to 1.55x) for the SSCA-2 graph benchmark running on a multicore machine with a large shared memory.
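
The fallback pattern at the heart of any hybrid TM (try an optimistic fast path, retreat to a pessimistic lock after repeated aborts) can be sketched in a few lines. This is an illustrative toy, not DyAdHyTM itself: the class name, the fixed retry budget, and the single global version clock are all our simplifications.

```python
import threading

class HybridTM:
    """Toy hybrid transactional memory: an optimistic fast path with a
    bounded retry budget, falling back to a coarse-grain lock after
    repeated aborts. The retry budget is fixed here, whereas the
    adaptive scheme would tune it at runtime."""

    MAX_RETRIES = 3

    def __init__(self):
        self._version = 0                     # global version clock
        self._commit_lock = threading.Lock()  # serializes commits
        self._fallback_lock = threading.Lock()

    def atomic(self, read_fn, write_fn):
        for _ in range(self.MAX_RETRIES):
            start = self._version
            snapshot = read_fn()              # optimistic, unlocked read
            with self._commit_lock:
                if self._version == start:    # nobody committed meanwhile
                    write_fn(snapshot)
                    self._version += 1
                    return "fast-path"
            # a concurrent commit invalidated the snapshot: retry
        with self._fallback_lock:             # pessimistic slow path
            with self._commit_lock:
                write_fn(read_fn())
                self._version += 1
        return "fallback"

tm = HybridTM()
account = {"balance": 100}
path = tm.atomic(lambda: account["balance"],
                 lambda bal: account.update(balance=bal + 50))
```

Under contention the version check fails, the transaction retries, and after MAX_RETRIES aborts it serializes behind the fallback lock, mirroring the HTM-with-software-fallback structure that hybrid schemes adapt dynamically.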
Qazi, Zafar Ayyub, Penumarthi, Phani Krishna, Sekar, Vyas, Gopalakrishnan, Vijay, Joshi, Kaustubh, Das, Samir R..  2016.  KLEIN: A Minimally Disruptive Design for an Elastic Cellular Core. Proceedings of the Symposium on SDN Research. :2:1–2:12.

Today's cellular core, which connects the radio access network to the Internet, relies on fixed hardware appliances placed at a few dedicated locations and uses relatively static routing policies. As such, today's core design has key limitations: it induces inefficient provisioning tradeoffs and is poorly equipped to handle overload, failure scenarios, and diverse application requirements. To address these limitations, ongoing efforts envision "clean slate" solutions that depart from cellular standards and routing protocols; e.g., via programmable switches at base stations and per-flow SDN-like orchestration. The driving question of this work is whether a clean-slate redesign is necessary and, if not, how we can design a flexible cellular core that is minimally disruptive. We propose KLEIN, a design that stays within the confines of current cellular standards and addresses the above limitations by combining network functions virtualization with smart resource management. We address key challenges w.r.t. scalability and responsiveness in realizing KLEIN via backwards-compatible orchestration mechanisms. Our evaluations through data-driven simulations and real prototype experiments using OpenAirInterface show that KLEIN can scale to billions of devices and is close to optimal for a wide variety of traffic and deployment parameters.

Qbeitah, M. A., Aldwairi, M..  2018.  Dynamic malware analysis of phishing emails. 2018 9th International Conference on Information and Communication Systems (ICICS). :18–24.

Malicious software, or malware, is one of the most significant dangers facing the Internet today. In the fight against malware, users depend on anti-malware and anti-virus products to proactively detect threats before damage is done. Those products rely on static signatures obtained through malware analysis. Unfortunately, malware authors are always one step ahead in avoiding detection. This research deals with dynamic malware analysis, which examines how malware behaves after execution: what changes to the operating system, registry, and network communication take place. Dynamic analysis opens the door to automatic generation of anomaly-based and active signatures based on new malware's behavior. The research includes the design of a honeypot to capture new malware and a complete dynamic-analysis laboratory setting. We propose a standard analysis methodology: preparing the analysis tools, then running the malicious samples in a controlled environment to investigate their behavior. We analyze 173 recent phishing emails and 45 SPIM messages in search of potentially new malware, and we present two malware samples and their comprehensive dynamic analysis.

Qi, Bolun, Fan, Chuchu, Jiang, Minghao, Mitra, Sayan.  2018.  DryVR 2.0: A Tool for Verification and Controller Synthesis of Black-box Cyber-physical Systems. Proceedings of the 21st International Conference on Hybrid Systems: Computation and Control (Part of CPS Week). :269–270.
We present a demo of DryVR 2.0, a framework for verification and controller synthesis of cyber-physical systems composed of black-box simulators and white-box automata. For verification, DryVR 2.0 takes as input a black-box simulator, a white-box transition graph, a time bound and a safety specification. As output it generates over-approximations of the reachable states and returns "Safe" if the system meets the given bounded safety specification, or it returns "Unsafe" with a counter-example. For controller synthesis, DryVR 2.0 takes as input black-box simulator(s) and a reach-avoid specification, and uses RRTs to find a transition graph such that the combined system satisfies the given specification.
Qi, C., Wu, J., Chen, H., Yu, H., Hu, H., Cheng, G..  2017.  Game-Theoretic Analysis for Security of Various Software-Defined Networking (SDN) Architectures. 2017 IEEE 85th Vehicular Technology Conference (VTC Spring). :1–5.

Security evaluation of diverse SDN frameworks is of significant importance for designing resilient systems and dealing with attacks. Focusing on SDN scenarios, a game-theoretic model is proposed to analyze the security performance of existing SDN architectures. The model can describe specific traits of different structures, represent several types of information about the players (attacker and defender), and quantitatively calculate a system's reliability. Simulation results illustrate that dynamic SDN structures offer a distinct security improvement over static ones. Moreover, effective dynamic scheduling mechanisms adopted in dynamic systems can further enhance their security.

Qi, Jie, Cao, Zheng, Sun, Haixin.  2016.  An Effective Method for Underwater Target Radiation Signal Detecting and Reconstructing. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :48:1–48:2.

Using the sparsity of the signal, compressed sensing theory can sample and compress data at a rate lower than the Nyquist sampling rate; the signal must, however, have a sparse representation in some basis. Based on this theory, this article puts forward a sparsity-adaptive algorithm that can be used for the detection and reconstruction of underwater target radiation signals. The received underwater target radiation signal first passes through a stochastic resonance system, which transfers noise energy into the signal under test. Then, based on the Gerschgorin disk criterion, the number of underwater target radiation signals is estimated in order to determine the optimal sparsity level for compressed sensing. Finally, the detection and reconstruction of the original signal are realized by utilizing the compressed sensing technique. The simulation results show that this method can effectively detect underwater target radiation signals, and they can also be detected quite well under a low signal-to-noise ratio (SNR).
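
Compressed-sensing reconstruction is commonly done with a greedy pursuit. The sketch below recovers a 2-sparse signal from half as many measurements using Orthogonal Matching Pursuit over a deliberately low-coherence toy dictionary (our own construction; the paper's stochastic-resonance front end and Gerschgorin-based sparsity estimation are not modeled).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the dictionary column
    most correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Low-coherence dictionary: identity plus a normalized Hadamard matrix
# (mutual coherence 1/4, so exact recovery of any 2-sparse signal is
# guaranteed in the noiseless case).
H = np.array([[1.0]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])      # 16x16 Sylvester Hadamard
A = np.hstack([np.eye(16), H / 4.0])

x_true = np.zeros(32)
x_true[3], x_true[20] = 5.0, -3.0        # 2-sparse toy "radiation signal"
y = A @ x_true                           # compressed measurements (m=16 < n=32)
x_hat = omp(A, y, k=2)
```

With noiseless measurements and coherence below 1/(2k-1), OMP provably selects the true support, so the least-squares re-fit recovers the signal exactly.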

Qi, L. T., Huang, H. P., Wang, P., Wang, R. C..  2018.  Abnormal Item Detection Based on Time Window Merging for Recommender Systems. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :252–259.

A CFRS (Collaborative Filtering Recommendation System) is one of the most widely used individualized recommendation systems. However, CFRSs are susceptible to shilling attacks based on profile injection. Current research on shilling attacks mainly focuses on the recognition of false user profiles, but these methods depend on specific attack models and their computational cost is huge. From the item perspective, some abnormal-item detection methods have been proposed that are independent of attack models and overcome the defects of user-profile models, but their detection rate, false-alarm rate, and time overhead need to be further improved. To solve these problems, this paper proposes an abnormal-item detection method based on time window merging. The method first uses small windows to partition the rating time series and determines whether a window is suspicious based on the number of abnormal ratings within it. Then, the suspicious small windows are merged to form suspicious intervals. We use the rating distribution characteristics RAR (Ratio of Abnormal Rating), ATIAR (Average Time Interval of Abnormal Rating), DAR (Deviation of Abnormal Rating), and DTIAR (Deviation of Time Interval of Abnormal Rating) in the suspicious intervals to determine whether the item is subject to attack. Experimental results on the MovieLens 100K data set show that the method has a high detection rate and a low false-alarm rate.
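
The first two steps (partition into small windows, merge suspicious neighbors into intervals) can be sketched directly; the window size, threshold, and function names below are our illustrative choices, and the RAR/ATIAR/DAR/DTIAR scoring over the resulting intervals is not reproduced.

```python
def suspicious_intervals(timestamps, window, threshold):
    """Partition a rating time series into fixed-size windows, flag
    windows holding at least `threshold` ratings as suspicious, and
    merge runs of adjacent suspicious windows into intervals."""
    if not timestamps:
        return []
    start, end = min(timestamps), max(timestamps)
    n_windows = int((end - start) // window) + 1
    counts = [0] * n_windows
    for t in timestamps:
        counts[int((t - start) // window)] += 1
    intervals, open_at = [], None
    for i, c in enumerate(counts):
        if c >= threshold and open_at is None:
            open_at = i                       # interval begins
        elif c < threshold and open_at is not None:
            intervals.append((start + open_at * window, start + i * window))
            open_at = None                    # interval ends
    if open_at is not None:
        intervals.append((start + open_at * window, start + n_windows * window))
    return intervals

# Sparse organic ratings plus an injected burst of 10 ratings at t=500..509.
ratings = [0, 100, 200, 300] + list(range(500, 510)) + [700]
bursts = suspicious_intervals(ratings, window=50, threshold=5)
```

An item whose ratings cluster into such bursts is a candidate for the interval-level attack scoring described in the paper.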

Qi, Ling, Qiao, Yuanyuan, Abdesslem, Fehmi Ben, Ma, Zhanyu, Yang, Jie.  2016.  Oscillation Resolution for Massive Cell Phone Traffic Data. Proceedings of the First Workshop on Mobile Data. :25–30.

Cellular towers capture logs of mobile subscribers whenever their devices connect to the network. When the logs show data traffic at a cell tower generated by a device, it reveals that this device is close to the tower. The logs can then be used to trace the locations of mobile subscribers for different applications, such as studying customer behaviour, improving location-based services, or helping urban planning. However, the logs often suffer from an oscillation phenomenon. Oscillations may happen when a device, even when not moving, does not simply connect to the nearest cell tower, but instead unpredictably switches between multiple cell towers because of random noise, load balancing, or simply dynamic changes in signal strength. Detecting and removing oscillations is a challenge when analyzing location data collected from the cellular network. In this paper, we propose an algorithm called SOL (Stable, Oscillation, Leap periods) aimed at discovering and reducing oscillations in the collected logs. We apply our algorithm on real datasets containing about 18.9 TB of traffic logs generated by more than 3 million mobile subscribers, covering about 21,000 cell towers, and collected during 27 days from both GSM and UMTS networks in northern China. Experimental results demonstrate the ability and effectiveness of SOL to reduce oscillations in cellular network logs.
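
A minimal form of the oscillation problem is the "ping-pong" pattern: the trace switches A→B→A within a short time although the device has not moved. The sketch below flags and removes such events; it is a simplified heuristic of our own, not SOL itself, which classifies stable, oscillation, and leap periods with finer rules.

```python
def pingpong_events(log, max_gap):
    """Flag A->B->A tower switches completed within max_gap seconds as
    likely oscillations. `log` is a time-ordered list of
    (timestamp, tower_id) records."""
    return [(t1, t3, b)
            for (t1, a), (t2, b), (t3, c) in zip(log, log[1:], log[2:])
            if a == c and a != b and t3 - t1 <= max_gap]

def remove_oscillations(log, max_gap):
    """Drop the bouncing middle record of each detected ping-pong,
    collapsing the trace back onto the surrounding tower."""
    bouncing = {t2 for (t1, a), (t2, b), (t3, c)
                in zip(log, log[1:], log[2:])
                if a == c and a != b and t3 - t1 <= max_gap}
    return [(t, tower) for t, tower in log if t not in bouncing]

# A stationary device bouncing to tower B for 5 seconds, then moving to C.
trace = [(0, "A"), (10, "B"), (15, "A"), (300, "C")]
```

After cleaning, the 10-second visit to tower B disappears while the genuine move to tower C at t=300 survives.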

Qiao, Siyi, Hu, Chengchen, Guan, Xiaohong, Zou, Jianhua.  2016.  Taming the Flow Table Overflow in OpenFlow Switch. Proceedings of the 2016 ACM SIGCOMM Conference. :591–592.

SDN has become a wide-area networking technology of great interest to both academia and industry. The limited flow-table sizes of today's SDN switches have become one of the most prominent bottlenecks in network design and implementation. TCAM-based flow tables provide excellent matching performance but are costly, and even so, a fixed-capacity flow table cannot prevent overflow. In this paper, we design an FTS (Flow Table Sharing) mechanism that can mitigate the performance degradation caused by overflow. We demonstrate that FTS reduces both the number of control messages and the RTT by two orders of magnitude compared to the current state-of-the-art OpenFlow table-miss handler.

Qiao, Yue, Srinivasan, Kannan, Arora, Anish.  2017.  Channel Spoofer: Defeating Channel Variability and Unpredictability. Proceedings of the 13th International Conference on Emerging Networking EXperiments and Technologies. :402–413.
A vast literature on secret sharing protocols now exists based on the folk theorem that the wireless channel between communicating parties Alice and Bob cannot be controlled or predicted by a third party in a fine-grain way. We find that the folk theorem unfortunately does not hold. In particular, we show how an adversary, using a customized full-duplex forwarder, can control the channel seen by Alice and Bob in fine granularity without leaving a trace, while predicting with high probability the secrets generated by any channel reciprocity based secret sharing protocol. An implementation of our proposed secret manipulator, called Channel Spoofer, on a software-defined radio platform empirically verifies Channel Spoofer's effectiveness in breaking several representative state-of-the-art secret sharing protocols. To the best of our knowledge, the proposed Channel Spoofer is the first practical attacker against all extant channel reciprocity based secret sharing protocols.
Qiao, Z., Cheng, L., Zhang, S., Yang, L., Guo, C..  2017.  Detection of Composite Insulators Inner Defects Based on Flash Thermography. 2017 1st International Conference on Electrical Materials and Power Equipment (ICEMPE). :359–363.

Air gaps often appear inside composite insulators and can lead to serious accidents. In order to detect these internal defects in composite insulators operated on transmission lines, a new non-destructive technique has been proposed. In this study, a mathematical heat-diffusion model of composite insulators' inner defects has been built. The model helps to analyze the propagation of heat loss and to judge the structure and defects under the surface. Compared with traditional detection methods and other non-destructive techniques, this technique has many advantages. In the study, air defects were introduced into composite insulators artificially. First, the artificially fabricated samples were tested by flash thermography, and this method showed good performance in revealing the structure and defects under the surface. Comparing flash and hair-dryer excitation, the artificial samples showed better results after heating by flash, so flash excitation is preferable. Tests with different surface pollution showed that pollution does not have much influence on revealing the structure or defects under the surface; it only has some influence on heat diffusion. Finally, defective composite insulators from a work site were examined, and the defect images were clear. This new active thermography system can perform detection quickly, efficiently, and accurately, unaffected by surface pollution and other environmental restrictions, so it holds broad promise for revealing the defects and structure in composite insulators and even other styles of insulators.

Qin, Peng, Tan, Cheng, Zhao, Lei, Cheng, Yueqiang.  2019.  Defending against ROP Attacks with Nearly Zero Overhead. 2019 IEEE Global Communications Conference (GLOBECOM). :1–6.

Return-Oriented Programming (ROP) is a sophisticated exploitation technique that is able to drive target applications to perform arbitrary unintended operations by constructing a gadget chain that reuses existing small code sequences (gadgets) collected across the entire code space. In this paper, we propose to address ROP attacks from a different angle: shrinking the available code space at runtime. We present ROPStarvation, a generic and transparent ROP countermeasure that defends against all types of ROP attacks with almost zero run-time overhead. ROPStarvation does not aim to completely stop ROP attacks; instead, it attempts to significantly raise the bar by decreasing the possibility of launching a successful ROP exploit in reality. Moreover, shrinking the available code space at runtime is lightweight, which makes ROPStarvation practical for deployments with high performance requirements. Results show that ROPStarvation successfully reduces the code space of target applications by 85%. With the reduced code segments, ROPStarvation decreases the probability of building a valid ROP gadget chain by 100% and 83%, respectively, depending on whether the adversary knows that the vulnerable applications are protected by ROPStarvation. Evaluations on the SPEC CPU2006 benchmark show that ROPStarvation introduces nearly zero (0.2% on average) run-time performance overhead.
Qin, Xinghong, Li, Bin, Huang, Jiwu.  2019.  A New Spatial Steganographic Scheme by Modeling Image Residuals with Multivariate Gaussian Model. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2617–2621.

Embedding costs used in content-adaptive image steganographic schemes can be defined in a heuristic way or with a statistical model. Inspired by previous steganographic methods, i.e., MG (multivariate Gaussian model) and MiPOD (minimizing the power of optimal detector), we propose a model-driven scheme in this paper. Firstly, we model image residuals obtained by high-pass filtering with a quantized multivariate Gaussian distribution. Then, we derive the approximated Fisher Information (FI). We show that FI is related to both the Gaussian variance and the filter coefficients. Lastly, by selecting the maximum FI value derived with various filters as the final FI, we obtain the embedding costs. Experimental results show that the proposed scheme is comparable to existing steganographic methods in resisting steganalysis equipped with rich models and selection-channel-aware rich models. It is also computationally efficient when compared to MiPOD, the state-of-the-art model-driven method.
Qin, Y., Wang, H., Jia, Z., Xia, H..  2016.  A flexible and scalable implementation of elliptic curve cryptography over GF(p) based on ASIP. 2016 IEEE 35th International Performance Computing and Communications Conference (IPCCC). :1–8.

Public-key cryptography schemes are widely used due to their high level of security. As a very efficient public-key cryptosystem, elliptic curve cryptography (ECC) has been studied for years. Researchers have sought to improve the efficiency of ECC through point multiplication, which is the most important and complex operation in ECC. In our research, we use special families of curves and prime fields that have special properties. We then introduce the instruction set architecture (ISA) extension method to accelerate this algorithm (192-bit private key) and build an ECC_ASIP model with six new ECC custom instructions. Finally, the ECC_ASIP model is implemented on a field-programmable gate array (FPGA) platform. Experiments have been conducted to evaluate our new model in terms of performance, code storage space, and hardware resources. Experimental results show that our processor improves execution efficiency by 69.6% while requiring only 6.2% more hardware resources.
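
Point multiplication, the operation such designs accelerate, is the classic double-and-add loop over curve arithmetic in GF(p). The sketch below uses a textbook-small curve for illustration, not a 192-bit production curve: y² = x³ + 2x + 2 (mod 17), with generator (5, 1) of order 19.

```python
PRIME, A = 17, 2  # toy curve parameters: y^2 = x^3 + A*x + 2 (mod PRIME)

def ec_add(p1, p2):
    """Add two affine points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % PRIME == 0:
        return None                                   # inverse points
    if p1 == p2:                                      # tangent slope
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, PRIME) % PRIME
    else:                                             # chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, PRIME) % PRIME
    x3 = (s * s - x1 - x2) % PRIME
    return (x3, (s * (x1 - x3) - y1) % PRIME)

def scalar_mult(k, point):
    """Right-to-left double-and-add: one doubling per bit of k, one
    addition per set bit."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result
```

The modular inversions inside `ec_add` (here via Python's three-argument `pow`) dominate the cost, which is why hardware designs add custom field-arithmetic instructions.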

Qin, Yunchuan, Xiao, Qi.  2017.  Polynomial-based Key Management Scheme for Robotic System. Proceedings of the 8th International Conference on Computer Modeling and Simulation. :105–108.

As robots are applied in more and more fields, security issues have attracted increasing attention. In this paper, we propose that a key center in the cloud send polynomial information to each robot component; each component substitutes its information into the polynomial to obtain a group of new keys for use in the next time slot, then checks and updates its key groups after a successful hash verification. Because the degree of the polynomial is higher than the number of components, even if an attacker obtained all the key values of the components, he still could not reconstruct the polynomial. The information about the keys is discarded immediately after use, so an attacker cannot obtain a previously used session key by invading a component. This article addresses the security problems of robotic systems caused by cyber attacks and physical attacks.
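
The core idea can be sketched as polynomial evaluation over a prime field: because the degree exceeds the number of components, the components' evaluation points underdetermine the polynomial. The field modulus, seeding, and the hashing step standing in for the paper's check-and-update are all our illustrative choices.

```python
import hashlib
import random

Q = 2**61 - 1  # prime modulus for the polynomial field (our choice)

def eval_poly(coeffs, x):
    """Horner evaluation of f(x) mod Q; coeffs[0] is the constant term."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % Q
    return acc

def next_slot_keys(coeffs, component_ids):
    """Each component substitutes its id into the polynomial and hashes
    the result into its key for the next slot."""
    return {i: hashlib.sha256(str(eval_poly(coeffs, i)).encode()).hexdigest()
            for i in component_ids}

# Degree strictly greater than the number of components: n evaluation
# points cannot interpolate a degree-(n+1) polynomial, so capturing
# every component key does not reveal f.
random.seed(7)
n_components = 4
coeffs = [random.randrange(Q) for _ in range(n_components + 2)]  # degree n+1
keys = next_slot_keys(coeffs, range(1, n_components + 1))
```

Discarding `coeffs` and the field values after each slot gives the forward secrecy the abstract describes: a later compromise of one component reveals neither past session keys nor the polynomial.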

Qin, Zhan, Yan, Jingbo, Ren, Kui, Chen, Chang Wen, Wang, Cong.  2016.  SecSIFT: Secure Image SIFT Feature Extraction in Cloud Computing. ACM Trans. Multimedia Comput. Commun. Appl.. 12:65:1–65:24.

The image and multimedia data produced by individuals and enterprises is increasing every day. Motivated by the advances in cloud computing, there is a growing need to outsource computationally intensive image feature detection tasks to the cloud for its economical computing resources and on-demand ubiquitous access. However, concerns over the effective protection of private image and multimedia data when outsourcing it to cloud platforms have become the major barrier that impedes the further implementation of cloud computing techniques over massive amounts of image and multimedia data. To address this fundamental challenge, we study the state-of-the-art image feature detection algorithms and focus on the Scale-Invariant Feature Transform (SIFT), which is one of the most important local feature detection algorithms and has been broadly employed in different areas, including object recognition, image matching, robotic mapping, and so on. We analyze and model the privacy requirements in outsourcing SIFT computation and propose the Secure Scale-Invariant Feature Transform (SecSIFT), a high-performance privacy-preserving SIFT feature detection system. In contrast to previous works, the proposed design is not restricted by the efficiency limitations of current homomorphic encryption schemes. In our design, we decompose and distribute the computation procedures of the original SIFT algorithm to a set of independent, co-operative cloud servers and keep the outsourced computation procedures as simple as possible to avoid utilizing a computationally expensive homomorphic encryption scheme. The proposed SecSIFT enables implementation with practical computation and communication complexity. Extensive experimental results demonstrate that SecSIFT performs comparably to the original SIFT on image benchmarks while preserving privacy in an efficient way.

Qin, Zhan, Yang, Yin, Yu, Ting, Khalil, Issa, Xiao, Xiaokui, Ren, Kui.  2016.  Heavy Hitter Estimation over Set-Valued Data with Local Differential Privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :192–203.

In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case in which each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and to focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority of true heavy hitters in practical settings.
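
The local perturbation/aggregation loop underlying such mechanisms can be illustrated with generalized randomized response, a standard LDP primitive for categorical values. This is a sketch of the primitive only, not LDPMiner's two-phase protocol; the domain, epsilon, and data are toy choices.

```python
import math
import random

def rr_perturb(value, domain, eps, rng):
    """Generalized randomized response: keep the true value with
    probability p = e^eps / (e^eps + d - 1), else report a uniformly
    chosen different value. Satisfies eps-LDP."""
    d = len(domain)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def rr_estimate(reports, domain, eps):
    """Invert the perturbation to get unbiased frequency estimates."""
    d, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    q = (1 - p) / (d - 1)
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] - n * q) / (p - q) for v in domain}

rng = random.Random(0)
truth = ["a"] * 600 + ["b"] * 300 + ["c"] * 100
reports = [rr_perturb(v, "abc", 2.0, rng) for v in truth]
est = rr_estimate(reports, "abc", 2.0)
```

The collector sees only the noisy reports, yet the debiased estimates track the true frequencies; budget splitting across two phases, as in the paper, reduces the noise needed for large set-valued domains.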

Qin, Zhengrui, Tang, Yutao, Novak, Ed, Li, Qun.  2016.  MobiPlay: A Remote Execution Based Record-and-replay Tool for Mobile Applications. Proceedings of the 38th International Conference on Software Engineering. :571–582.

The record-and-replay approach to software testing is important and valuable to developers in designing mobile applications. However, the existing solutions for recording and replaying Android applications are far from perfect. Considering the richness of mobile phones' input capabilities, including the touch screen, sensors, GPS, etc., existing approaches either fall short of covering all these different input types or require elevated privileges that are not easily attained and can be dangerous. In this paper, we present a novel system, called MobiPlay, which aims to improve record-and-replay testing. By collaborating between a mobile phone and a server, we are the first to capture all possible inputs, by doing so at the application layer instead of at the Android framework layer or the Linux kernel layer, which would be infeasible without a server. MobiPlay runs the to-be-tested application on the server under exactly the same environment as the mobile phone and displays the GUI of the application in real time on a thin client application installed on the mobile phone. From the perspective of the mobile phone user, the application appears to be local. We have implemented our system and evaluated it with tens of popular mobile applications, showing that MobiPlay is efficient, flexible, and comprehensive. It can record all input data, including all sensor data, all touchscreen gestures, and GPS. It is able to record and replay on both the mobile phone and the server. Furthermore, it is suitable for both white-box and black-box testing.

Qing Xu, Beihang University, Chun Zhang, Extreme Networks, Inc., Geir Dullerud, University of Illinois at Urbana-Champaign.  2014.  Stabilization of Markovian Jump Linear Systems with Log-Quantized Feedback. American Society of Mechanical Engineers Journal of Dynamic Systems, Measurement and Control. 136(3)

This paper is concerned with mean-square stabilization of single-input Markovian jump linear systems (MJLSs) with logarithmically quantized state feedback. We introduce the concepts and provide explicit constructions of stabilizing mode-dependent logarithmic quantizers together with associated controllers, and a semi-convex way to determine the optimal (coarsest) stabilizing quantization density. An example application is presented as a special case of the developed framework, that of feedback stabilizing a linear time-invariant (LTI) system over a log-quantized erasure channel. A hardware implementation of this application on an inverted pendulum testbed is provided using a finite word-length approximation.
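
The logarithmic quantizer at the center of this construction has geometrically spaced levels and a density-dependent relative error bound. Below is the standard static form of that quantizer (the paper's quantizers are additionally mode-dependent, switching with the Markov jump parameter, which this sketch omits).

```python
import math

def log_quantize(x, rho, u0=1.0):
    """Logarithmic quantizer with density rho in (0, 1): the levels are
    u_i = rho**i * u0, and the relative quantization error is bounded
    by delta = (1 - rho) / (1 + rho)."""
    if x == 0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    sign, mag = (1.0, x) if x > 0 else (-1.0, -x)
    # choose the unique level u_i whose sector (u_i/(1+delta), u_i/(1-delta)]
    # contains mag
    i = math.floor(math.log(mag * (1 + delta) / u0) / math.log(rho)) + 1
    return sign * rho ** i * u0
```

A coarser density (smaller rho) means fewer levels but a larger error bound delta; the paper's contribution includes computing the coarsest rho that still stabilizes the jump system.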

Qingshan Liu, Tingwen Huang, Jun Wang.  2014.  One-Layer Continuous-and Discrete-Time Projection Neural Networks for Solving Variational Inequalities and Related Optimization Problems. Neural Networks and Learning Systems, IEEE Transactions on. 25:1308-1318.

This paper presents one-layer projection neural networks based on projection operators for solving constrained variational inequalities and related optimization problems. Sufficient conditions for global convergence of the proposed neural networks are provided based on Lyapunov stability. Compared with the existing neural networks for variational inequalities and optimization, the proposed neural networks have lower model complexities. In addition, some improved criteria for global convergence are given. Compared with our previous work, a design parameter has been added in the projection neural network models, and it results in some improved performance. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural networks.
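
The discrete-time model class described above iterates a projection of a gradient-like step back onto the feasible set. The sketch below is a minimal one-layer instance for a box-constrained problem; the mapping, step size, and bounds are our toy choices, not the paper's exact update law or convergence conditions.

```python
def box_projection(x, lo, hi):
    """Projection operator P onto the box [lo, hi]."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def projection_network(F, lo, hi, x0, step=0.1, iters=2000):
    """Discrete-time one-layer projection network
    x(k+1) = P(x(k) - step * F(x(k))); its fixed points solve the
    variational inequality VI(F, [lo, hi])."""
    x = list(x0)
    for _ in range(iters):
        g = F(x)
        x = box_projection([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Constrained QP: minimize (x0 - 2)^2 + (x1 + 1)^2 over [0, 1]^2 by
# taking F to be the gradient; the constrained optimum is (1, 0).
sol = projection_network(lambda x: [2 * (x[0] - 2), 2 * (x[1] + 1)],
                         lo=[0, 0], hi=[1, 1], x0=[0.0, 0.0])
```

The one-layer structure is visible here: each iteration needs only the mapping F and one projection, with no auxiliary variables.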

Qingyi Chen, Hongwei Kang, Hua Zhou, Xingping Sun, Yong Shen, YunZhi Jin, Jun Yin.  2014.  Research on cloud computing complex adaptive agent. Service Systems and Service Management (ICSSSM), 2014 11th International Conference on. :1-4.

It has gradually been realized in industry that the increasing complexity of cloud computing, under the interaction of technology, business, society, and the like, cannot simply be resolved by research on information technology alone; it should instead be explained and studied from a systematic, scientific perspective based on the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article studies the definition of an active adaptive agent constituting the cloud computing system, and proposes a service-agent concept and basic model by abstracting commonality from two basic levels, cloud computing technology and business, thus laying a foundation for further research on cloud computing complexity as well as for multi-agent-based cloud computing environment simulation.

Qiu, Jian, Li, Hengjian, Dong, Jiwen, Feng, Guang.  2017.  A Privacy-Preserving Cancelable Palmprint Template Generation Scheme Using Noise Data. Proceedings of the 2nd International Conference on Intelligent Information Processing. :29:1–29:5.

To achieve stronger security and privacy preservation, a new cancelable palmprint template generation scheme using noise data is proposed. Firstly, random projection is used to reduce the dimension of the palmprint image, and the reduced-dimension image is normalized. Secondly, a chaotic matrix is produced and likewise normalized. The cancelable palmprint feature is then generated by comparing the normalized chaotic matrix with the normalized reduced-dimension image. Finally, to enhance privacy protection, independent and identically distributed noise data is added to form the final palmprint features. In this article, the noise-addition algorithm is analyzed theoretically. Experimental results on the Hong Kong PolyU Palmprint Database verify that the random projection and noise are generated in an uncomplicated way, so the computational complexity is low. The theoretical analysis of the noise data is consistent with the experimental results. Depending on system requirements, and while maintaining accuracy, adding a certain amount of noise contributes to security and privacy protection.
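
The pipeline (seeded random projection, normalization, binarization against a chaotic matrix) can be sketched on a plain feature vector. Everything below is illustrative: the dimension, the logistic map as the chaos source, the seeding scheme, and the omission of the paper's final noise-addition step are all our simplifications.

```python
import random

def logistic_sequence(x0, n, r=3.99):
    """Chaotic sequence from the logistic map x <- r * x * (1 - x)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def cancelable_template(feature, user_key, dim=16):
    """Seeded random projection of the feature vector, normalization to
    [0, 1], then binarization against a chaotic sequence. Re-issuing
    with a new user_key yields a fresh, unlinkable template."""
    rng = random.Random(user_key)
    proj = [sum(rng.gauss(0, 1) * f for f in feature) for _ in range(dim)]
    lo, hi = min(proj), max(proj)
    norm = [(p - lo) / (hi - lo) for p in proj]       # scale to [0, 1]
    chaos = logistic_sequence((user_key % 97 + 1) / 100.0, dim)
    return [1 if v > c else 0 for v, c in zip(norm, chaos)]

feature = [0.1 * i for i in range(32)]
template = cancelable_template(feature, user_key=42)
```

Cancelability comes from the key: if a template is compromised, issuing a new user_key produces a different projection and chaotic threshold, so the old template can be revoked without re-enrolling the palm.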