Biblio

Found 773 results

Filters: Keyword is human factors
2019-11-04
Harrison, William L., Allwein, Gerard.  2018.  Semantics-Directed Prototyping of Hardware Runtime Monitors. 2018 International Symposium on Rapid System Prototyping (RSP). :42-48.

Building memory protection mechanisms into embedded hardware is attractive because it has the potential to neutralize a host of software-based attacks with relatively small performance overhead. A hardware monitor, being at the lowest level of the system stack, is more difficult to bypass than a software monitor, and hardware-based protections are also potentially more fine-grained than is possible in software: an individual instruction executing on a processor may entail multiple memory accesses, all of which may be tracked in hardware. Finally, hardware-based protection can be performed without the necessity of altering application binaries. This article presents a proof-of-concept codesign of a small embedded processor with a hardware monitor protecting against ROP-style code reuse attacks. While the case study is small, it indicates, we argue, an approach to the rapid prototyping of runtime monitors in hardware that is quick, flexible, and extensible, as well as being amenable to formal verification.
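
A minimal sketch of the invariant such a hardware monitor enforces, written as a software simulation (the trace format and the addresses are hypothetical; the paper's semantics-directed design is not reproduced here):

# Shadow-stack monitor: flag ROP-style returns whose target was never
# pushed by a matching call. Pure-Python simulation of the hardware check.
class ShadowStackMonitor:
    def __init__(self):
        self.shadow = []                     # return addresses seen at call time

    def on_call(self, return_addr):
        self.shadow.append(return_addr)

    def on_return(self, target_addr):
        if not self.shadow or self.shadow.pop() != target_addr:
            raise RuntimeError(f"ROP alert: return to {target_addr:#x}")

mon = ShadowStackMonitor()
mon.on_call(0x4000)                          # call site records return address
mon.on_return(0x4000)                        # legitimate return
mon.on_call(0x4100)
try:
    mon.on_return(0x7f00)                    # gadget address, not on the stack
except RuntimeError as e:
    print(e)                                 # ROP alert: return to 0x7f00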

Bukasa, Sebanjila K., Lashermes, Ronan, Lanet, Jean-Louis, Legay, Axel.  2018.  Let's Shock Our IoT's Heart: ARMv7-M Under (Fault) Attacks. Proceedings of the 13th International Conference on Availability, Reliability and Security. :33:1-33:6.

A fault attack is a well-known technique in which the behaviour of a chip is deliberately disturbed by hardware means in order to undermine the security of the information handled by the target. In this paper, we explore how electromagnetic fault injection (EMFI) can be used to create vulnerabilities in sound software, targeting a Cortex-M3 microcontroller. Several use cases are demonstrated experimentally: control flow hijacking, buffer overflow (even in the presence of a canary), covert backdoor insertion and Return Oriented Programming can be achieved even if programs are not vulnerable from a software point of view. These results suggest that protecting any software against vulnerabilities must take hardware into account as well.

Wang, Jingyuan, Xie, Peidai, Wang, Yongjun, Rong, Zelin.  2018.  A Survey of Return-Oriented Programming Attack, Defense and Its Benign Use. 2018 13th Asia Joint Conference on Information Security (AsiaJCIS). :83-88.

The return-oriented programming (ROP) attack has become a common means of exploiting software vulnerabilities in modern operating systems (OS). With the aid of ROP, an attacker can execute arbitrary code even when security mechanisms are deployed in the OS. Defense mechanisms to mitigate ROP attacks have therefore also drawn researchers' attention. Besides, research on the benign use of ROP has become a hot spot in recent years, since ROP's strong resistance to static analysis can be adapted to hide important code, with the additional benefit of low overhead on program size. This paper discusses the concepts of the ROP attack as well as extended ROP attacks proposed in recent years. Corresponding defense mechanisms based on randomization, frequency, and control flow integrity are analyzed, together with their limitations. We then discuss the benign use of ROP in steganography, code integrity verification, and software watermarking, which benefit significantly from adopting ROP. Finally, we look into the development of ROP attacks, the future of possible mitigation strategies, and the potential for benign use.

Farkhani, Reza Mirzazade, Jafari, Saman, Arshad, Sajjad, Robertson, William, Kirda, Engin, Okhravi, Hamed.  2018.  On the Effectiveness of Type-Based Control Flow Integrity. Proceedings of the 34th Annual Computer Security Applications Conference. :28-39.

Control flow integrity (CFI) has received significant attention in the community to combat control hijacking attacks in the presence of memory corruption vulnerabilities. The challenges in creating a practical CFI have resulted in the development of a new type of CFI based on runtime type checking (RTC). RTC-based CFI has been implemented in a number of recent practical efforts such as GRSecurity Reuse Attack Protector (RAP) and LLVM-CFI. While there have been a number of previous efforts that studied the strengths and limitations of other types of CFI techniques, little has been done to evaluate RTC-based CFI. In this work, we study the effectiveness of RTC from the security and practicality aspects. From the security perspective, we observe that type collisions are abundant in sufficiently large code bases, but exploiting them to build a functional attack is not straightforward. We then show how an attacker can successfully bypass RTC techniques using a variant of ROP attacks that respects type checking (called TROP), and we build two proof-of-concept exploits, one against the Nginx web server and the other against the Exim mail server. We also discuss practical challenges of implementing RTC. Our findings suggest that while RTC is more practical for applying CFI to large code bases, its policy is not strong enough when facing a motivated attacker.

2019-10-23
Alshawish, Ali, Spielvogel, Korbinian, de Meer, Hermann.  2019.  A Model-Based Time-to-Compromise Estimator to Assess the Security Posture of Vulnerable Networks. 2019 International Conference on Networked Systems (NetSys). :1-3.

Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g. security officer) has to strategically decide on the allocation of possible remediation efforts towards minimizing the inherent security risk. This, however, involves the use of comparative judgments to prioritize risks and remediation actions. Throughout this work, the security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is to provide a generic TTC estimator to comparatively assess the security posture of computer networks taking into account interdependencies between the network components, different adversary skill levels, and characteristics of (known and zero-day) vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for the input data variability and inherent prediction uncertainties.
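
To make the estimator concrete, here is a hedged Monte Carlo sketch of a TTC computation for a linear attack path; the exponential compromise times, the rates, and the skill scaling are illustrative assumptions, not the paper's model:

import random

def mc_ttc(component_rates, attacker_skill, trials=10_000):
    """Mean time to compromise a chain of components; higher skill is faster."""
    total = 0.0
    for _ in range(trials):
        for rate in component_rates:           # one rate per dependent component
            total += random.expovariate(rate * attacker_skill)
    return total / trials

rates = [0.2, 0.1, 0.05]                       # compromises/day (assumed values)
print(mc_ttc(rates, attacker_skill=1.5))       # skilled adversary
print(mc_ttc(rates, attacker_skill=0.5))       # novice adversary, larger TTC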

Isaeva, N. A..  2018.  Choice of Control Parameters of Complex System on the Basis of Estimates of the Risks. 2018 Eleventh International Conference "Management of Large-Scale System Development" (MLSD). :1-4.

A method for choosing the control parameters of a complex system based on risk estimates is proposed. A procedure for calculating the risk estimates is considered, intended for choosing rational control influences by allocating a group of operating factors for a given criterion factor. The purpose of choosing the control parameters of the complex system is to minimize the estimated risk of the system's functioning by solving an extremum-search problem for a function of many variables. An example of choosing the operating factors in the sphere of intangible assets is given.

Zieger, Andrej, Freiling, Felix, Kossakowski, Klaus-Peter.  2018.  The β-Time-to-Compromise Metric for Practical Cyber Security Risk Estimation. 2018 11th International Conference on IT Security Incident Management & IT Forensics (IMF). :115-133.

To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric that measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling to proceed, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by making it possible to incorporate information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC, using data from a modern and productively used vulnerability database of a national CERT.
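
The following sketch shows only the core modeling move, a Beta-distributed attacker skill feeding a TTC estimate; the shape parameters and the way skill rescales a base time are assumptions, not the paper's formulas:

import random

def beta_ttc(base_ttc_days, alpha=2.0, beta=5.0, trials=10_000):
    samples = []
    for _ in range(trials):
        skill = random.betavariate(alpha, beta)           # attacker skill in (0, 1)
        samples.append(base_ttc_days / max(skill, 1e-3))  # more skill => faster
    samples.sort()
    return samples[trials // 2]                           # median TTC estimate

print(beta_ttc(base_ttc_days=10.0))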

Bahirat, Kanchan, Shah, Umang, Cardenas, Alvaro A., Prabhakaran, Balakrishnan.  2018.  ALERT: Adding a Secure Layer in Decision Support for Advanced Driver Assistance System (ADAS). Proceedings of the 26th ACM International Conference on Multimedia. :1984-1992.

With the ever-increasing popularity of LiDAR (Light Detection and Ranging) sensors, a wide range of applications such as vehicle automation and robot navigation have been developed utilizing 3D LiDAR data. Many of these applications involve remote guidance - either for safety or for task performance - of these vehicles and robots. Research studies have exposed vulnerabilities in the use of LiDAR data by considering different security attack scenarios. Considering the security risks associated with improper behavior of these applications, it has become crucial to authenticate the 3D LiDAR data that strongly influence decision making in such applications. In this paper, we propose a framework, ALERT (Authentication, Localization, and Estimation of Risks and Threats), as a secure layer in the decision support system used in the navigation control of vehicles and robots. To start with, ALERT tamper-proofs 3D LiDAR data by employing an innovative mechanism for creating and extracting a dynamic watermark. Next, when tampering is detected (because of the inability to verify the dynamic watermark), ALERT carries out cross-modal authentication for localizing the tampered region. Finally, ALERT estimates the level of risk and threat based on the temporal and spatial nature of the attacks on LiDAR data. This estimation of risk and threats can then be incorporated into the decision support system used by ADAS (Advanced Driver Assistance System). We carried out several experiments to evaluate the efficacy of the proposed ALERT for ADAS, and the experimental results demonstrate the effectiveness of the proposed approach.

Redmiles, Elissa M., Mazurek, Michelle L., Dickerson, John P..  2018.  Dancing Pigs or Externalities?: Measuring the Rationality of Security Decisions. Proceedings of the 2018 ACM Conference on Economics and Computation. :215-232.

Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (i.e., utility-optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R²=0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
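
A toy version of the rationality criterion used above: adoption is utility-optimal when the expected loss avoided exceeds the time cost of the behavior priced at the participant's wage (all numbers below are invented):

def rational_to_adopt(balance, risk_reduction, seconds_per_login,
                      logins, hourly_wage):
    expected_loss_avoided = balance * risk_reduction
    time_cost = (seconds_per_login * logins / 3600.0) * hourly_wage
    return expected_loss_avoided > time_cost

# $100 account, 20 seconds per 2FA login, 30 logins, $15/hour wage.
print(rational_to_adopt(100.0, 0.01, 20, 30, 15.0))   # False: $1 < $2.50
print(rational_to_adopt(100.0, 0.50, 20, 30, 15.0))   # True: $50 > $2.50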

Dutta, Raj Gautam, Yu, Feng, Zhang, Teng, Hu, Yaodan, Jin, Yier.  2018.  Security for Safety: A Path Toward Building Trusted Autonomous Vehicles. Proceedings of the International Conference on Computer-Aided Design. :92:1-92:6.

Automotive systems have always been designed with safety in mind. In this regard, the functional safety standard ISO 26262 was drafted with the intention of minimizing risk due to random hardware faults or systematic failures in the design of the electrical and electronic components of an automobile. However, the growing complexity of a modern car has added another potential point of failure in the form of cyber or sensor attacks. Recently, researchers have demonstrated that vulnerabilities in a vehicle's software or sensing units could enable them to remotely alter the intended operation of the vehicle. As such, in addition to safety, security should be considered an important design goal. However, designing security solutions without considering safety objectives could result in potential hazards. Consequently, in this paper we propose the notion of security for safety and show that by integrating safety conditions with our system-level security solution, which comprises a modified Kalman filter and a Chi-squared detector, we can prevent potential hazards that could occur due to violation of safety objectives during an attack. Furthermore, with the help of a car-following case study, where the follower car is equipped with an adaptive cruise control unit, we show that our proposed system-level security solution preserves the safety constraints and prevents collisions between vehicles while under sensor attack.
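
A minimal sketch of the residual-based detection idea (a 1-D Kalman filter with a chi-squared test on the innovation); the constant-distance model, the noise levels, and the threshold are illustrative assumptions, not the paper's tuned design:

def detect_sensor_attack(measurements, q=0.01, r=0.5, threshold=6.63):
    x, p = measurements[0], 1.0            # state estimate and its variance
    alarms = []
    for k, z in enumerate(measurements[1:], start=1):
        p_pred = p + q                     # predict: constant-distance model
        s = p_pred + r                     # innovation variance
        nu = z - x                         # innovation (residual)
        if nu * nu / s > threshold:        # chi-squared test, 1 dof, ~1% level
            alarms.append(k)               # reject reading, keep old estimate
            continue
        gain = p_pred / s                  # standard Kalman update
        x, p = x + gain * nu, (1 - gain) * p_pred
    return alarms

readings = [10.0, 10.1, 9.9, 10.0, 25.0, 10.1]   # spoofed jump at index 4
print(detect_sensor_attack(readings))             # [4]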

Karmaker Santu, Shubhra Kanti, Bindschadler, Vincent, Zhai, ChengXiang, Gunter, Carl A..  2018.  NRF: A Naive Re-Identification Framework. Proceedings of the 2018 Workshop on Privacy in the Electronic Society. :121-132.

The promise of big data relies on the release and aggregation of data sets. When these data sets contain sensitive information about individuals, it has been scalable and convenient to protect the privacy of these individuals by de-identification. However, studies show that the combination of de-identified data sets with other data sets risks re-identification of some records. Some studies have shown how to measure this risk in specific contexts where certain types of public data sets (such as voter rolls) are assumed to be available to attackers. To the extent that it can be accomplished, such analyses enable the threat of compromises to be balanced against the benefits of sharing data. For example, a study that might save lives by enabling medical research may go forward in light of a sufficiently low probability of compromise from sharing de-identified data. In this paper, we introduce a general probabilistic re-identification framework that can be instantiated in specific contexts to estimate the probability of compromises based on explicit assumptions. We further propose a baseline of such assumptions that enable a first-cut estimate of risk for practical case studies. We refer to the framework with these assumptions as the Naive Re-identification Framework (NRF). As a case study, we show how we can apply NRF to analyze and quantify the risk of re-identification arising from releasing de-identified medical data in the context of publicly-available social media data. The results of this case study show that NRF can be used to obtain meaningful quantification of the re-identification risk, compare the risk of different social media, and assess risks of combinations of various demographic attributes and medical conditions that individuals may voluntarily disclose on social media.
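
The "naive" linkage idea reduces to a simple computation: if an attacker matches a de-identified record to public data on quasi-identifiers, the probability of picking the right individual is one over the size of the matching group. A hedged toy example (the attributes and data are invented):

from collections import Counter

public = [("F", 34, "10001"), ("F", 34, "10001"),
          ("M", 34, "10001"), ("F", 29, "10002")]
groups = Counter(public)                 # equivalence classes in public data

def reid_probability(record):
    n = groups.get(record, 0)
    return 0.0 if n == 0 else 1.0 / n

print(reid_probability(("F", 34, "10001")))   # 0.5: two matching candidates
print(reid_probability(("M", 34, "10001")))   # 1.0: unique, re-identified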

Davari, Maryam, Bertino, Elisa.  2018.  Reactive Access Control Systems. Proceedings of the 23rd ACM Symposium on Access Control Models and Technologies. :205-207.

In context-aware applications, a user's access privileges depend on both the user's identity and the context. Access control rules are usually statically defined, while contexts and the system state can change dynamically. Changes in context can result in service disruptions. To address this issue, this poster proposes a reactive access control system that associates contingency plans with access control rules. Risk scores are also associated with the actions that form part of the contingency plans. Such risks are estimated using fuzzy inference. Our approach is cast into the XACML reference architecture.

Kontogeorgis, Dimitrios, Limniotis, Konstantinos, Kantzavelou, Ioanna.  2018.  An Evaluation of the HTTPS Adoption in Websites in Greece: Estimating the Users' Awareness. Proceedings of the 22nd Pan-Hellenic Conference on Informatics. :46-51.

The adoption of the HTTPS protocol - i.e. HTTP over TLS - by Hellenic websites is studied in this work. Since this protocol constitutes a de-facto standard for secure communications on the web, our aim is to identify whether the underlying TLS protocol in popular websites in Greece is properly configured, so as to avoid known vulnerabilities. To this end, a systematic approach utilizing two well-known TLS scanner tools is adopted to evaluate 241 sites of high popularity. The results illustrate that only about half of the sites seem to be at a satisfactory level and, thus, there is still much room for improvement, mainly because obsolete ciphers and/or protocol versions are still supported; there is also a small portion - about 3% of the sites - that do not implement HTTPS at all, posing very high security risks for their users, who provide their credentials via a totally insecure channel. We also examined, using an appropriate online questionnaire, whether users are actually aware of what HTTPS means and how they check the security of websites. The outcome of this research shows that much work needs to be done to increase the knowledge and security awareness of the average Internet user.
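
A first-cut probe in the spirit of this scan can be written with Python's standard ssl module; it reports only the version and cipher negotiated in a single live handshake, whereas full scanners enumerate everything the server supports:

import socket, ssl

def probe(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()[0]

print(probe("example.com"))   # e.g. ('TLSv1.3', 'TLS_AES_256_GCM_SHA384')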

McNeil, Martha, Llansó, Thomas, Pearson, Dallas.  2018.  Application of Capability-Based Cyber Risk Assessment Methodology to a Space System. Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. :7:1-7:10.

Despite more than a decade of heightened focus on cybersecurity, cyber threats remain an ongoing and growing concern [1]-[3]. Stakeholders often perform cyber risk assessments in order to understand potential mission impacts due to cyber threats. One common approach to cyber risk assessment is event-based analysis which usually considers adverse events, effects, and paths through a system, then estimates the effort/likelihood and mission impact of such attacks. When conducted manually, this type of approach is labor-intensive, subjective, and does not scale well to complex systems. As an alternative, we present an automated capability-based risk assessment approach, compare it to manual event-based analysis approaches, describe its application to a notional space system ground segment, and discuss the results.

2019-10-22
Alzahrani, Ahmed, Johnson, Chris, Altamimi, Saad.  2018.  Information security policy compliance: Investigating the role of intrinsic motivation towards policy compliance in the organisation. 2018 4th International Conference on Information Management (ICIM). :125–132.

Recent behavioral research in information security has focused on increasing employees' motivation to enhance the security performance in an organization. This empirical study investigated employees' information security policy (ISP) compliance intentions using self-determination theory (SDT). Relevant hypotheses were developed to test the proposed research model. Data obtained via a survey (N=407) from a Fortune 600 organization in Saudi Arabia provides empirical support for the model. The results confirmed that autonomy, competence and the concept of relatedness all positively affect employees' intentions to comply. The variable 'perceived value congruence' had a negative effect on ISP compliance intentions, and the perceived legitimacy construct did not affect employees' intentions. In general, the findings of this study suggest that SDT has value in research into employees' ISP compliance intentions.

2019-10-15
Pan, Y., He, F., Yu, H..  2018.  An Adaptive Method to Learn Directive Trust Strength for Trust-Aware Recommender Systems. 2018 IEEE 22nd International Conference on Computer Supported Cooperative Work in Design (CSCWD). :10–16.

Trust relationships have shown great potential to improve recommendation quality, especially for cold-start and sparse users. Since each user trusts their friends to different degrees, a number of works have been proposed that take trust strength into account in recommender systems. However, these methods ignore the direction of trust between users. In this paper, we propose a novel method that adaptively learns directive trust strength to improve trust-aware recommender systems. Advancing previous works, we propose to establish the direction of trust strength by modeling the implicit relationships between users in the roles of trusters and trustees. In particular, under this new notion of trust strength with directions, how to compute the directive trust strength becomes a new challenge. We therefore present a novel method to adaptively learn directive trust strengths in a unified framework, enforcing the trust strength into the range [0, 1] through a mapping function. Our experiments on the Epinions and Ciao datasets demonstrate that the proposed algorithm effectively outperforms several state-of-the-art algorithms on both MAE and RMSE metrics.
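
The mapping-function idea can be sketched in a few lines: keep each directive strength in an unconstrained parameter and squash it into [0, 1] with a logistic function, so that trust from u to v need not equal trust from v to u. The gradient step below fits a single strength to an observed signal and is illustrative, not the paper's objective:

import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

w_uv = 0.0                                  # raw directive strength, u -> v
for _ in range(200):                        # fit toward an observed value 0.8
    t = sigmoid(w_uv)
    grad = 2 * (t - 0.8) * t * (1 - t)      # d/dw of (t - 0.8)^2
    w_uv -= 0.5 * grad
print(round(sigmoid(w_uv), 3))              # ~0.8, always inside [0, 1]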

Coleman, M. S., Doody, D. P., Shields, M. A..  2018.  Machine Learning for Real-Time Data-Driven Security Practices. 2018 29th Irish Signals and Systems Conference (ISSC). :1–6.

The risk of cyber-attacks exploiting vulnerable organisations has increased significantly over the past several years. These attacks may combine to exploit a vulnerability breach within a system's protection strategy, with the potential for loss, damage or destruction of assets. Consequently, every vulnerability has an accompanying risk, defined as the "intersection of assets, threats, and vulnerabilities" [1]. This research project experimentally compares the similarity-based ranking of cyber security information utilising a recommendation environment. The Memory-Based Collaborative Filtering technique was employed, specifically the User-Based and Item-Based approaches. These systems utilised information from the National Vulnerability Database for the identification and similarity-based ranking of cyber-security vulnerability information relating to hardware and software applications. Experiments were performed using the Item-Based technique to identify the optimum system parameters, evaluated through the AUC metric. Once identified, the Item-Based technique was compared with the User-Based technique using the parameters identified in the previous experiments. In these experiments, Pearson's Correlation Coefficient and the Cosine similarity measure were used. From these experiments, the Item-Based technique employing the Cosine similarity measure achieved an AUC of 0.80225.
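
Item-based collaborative filtering with cosine similarity, the best-performing combination above, fits in a short sketch; the "ratings" matrix below is invented rather than drawn from the National Vulnerability Database:

import numpy as np

R = np.array([[5, 3, 0, 1],        # rows: profiles, columns: vulnerability items
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)     # item-item cosine similarity

def predict(user, item):
    """Similarity-weighted average of the user's observed ratings."""
    rated = R[user] > 0
    weights = sim[item, rated]
    return float(weights @ R[user, rated] / weights.sum())

print(round(predict(user=1, item=1), 2))     # predicted score for an unrated item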

Qi, L. T., Huang, H. P., Wang, P., Wang, R. C..  2018.  Abnormal Item Detection Based on Time Window Merging for Recommender Systems. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :252–259.

CFRS (Collaborative Filtering Recommendation System) is one of the most widely used individualized recommendation systems. However, CFRS is susceptible to shilling attacks based on profile injection. Current research on shilling attacks mainly focuses on the recognition of false user profiles, but these methods depend on specific attack models and their computational cost is huge. From the item perspective, some abnormal item detection methods have been proposed that are independent of attack models and overcome the defects of user-profile models, but their detection rate, false alarm rate and time overhead need further improvement. To solve these problems, this paper proposes an abnormal item detection method based on time window merging. The method first uses small windows to partition the rating time series and determines whether a window is suspicious based on the number of abnormal ratings within it. Then, suspicious small windows are merged to form suspicious intervals. We use the rating distribution characteristics RAR (Ratio of Abnormal Rating), ATIAR (Average Time Interval of Abnormal Rating), DAR (Deviation of Abnormal Rating) and DTIAR (Deviation of Time Interval of Abnormal Rating) in the suspicious intervals to determine whether an item is under attack. Experiment results on the MovieLens 100K data set show that the method has a high detection rate and a low false alarm rate.
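
A sketch of the windowing step only (the RAR/ATIAR/DAR/DTIAR statistics are not reproduced, and the window size and threshold are assumed): partition an item's rating timestamps into fixed windows, flag windows with too many maximal ratings, and merge adjacent flagged windows into suspicious intervals:

def suspicious_intervals(timestamps, ratings, window=3600, min_abnormal=3):
    counts = {}
    for t, r in zip(timestamps, ratings):
        if r == 5:                                   # "abnormal" = max rating
            counts[t // window] = counts.get(t // window, 0) + 1
    flagged = sorted(w for w, n in counts.items() if n >= min_abnormal)
    intervals, start = [], None
    for i, w in enumerate(flagged):                  # merge adjacent windows
        if start is None:
            start = w
        if i + 1 == len(flagged) or flagged[i + 1] != w + 1:
            intervals.append((start * window, (w + 1) * window))
            start = None
    return intervals

ts = [10, 20, 30, 3700, 3720, 3740, 7300]
rs = [5, 5, 5, 5, 5, 5, 3]
print(suspicious_intervals(ts, rs))    # [(0, 7200)]: two windows merged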

Panagiotakis, C., Papadakis, H., Fragopoulou, P..  2018.  Detection of Hurriedly Created Abnormal Profiles in Recommender Systems. 2018 International Conference on Intelligent Systems (IS). :499–506.

Recommender systems try to predict the preferences of users for specific items. These systems suffer from profile injection attacks, where attackers with some prior knowledge of the system ratings aim to promote or demote a particular item by introducing abnormal (anomalous) ratings. Detecting both cases is a challenging problem. In this paper, we propose a framework to spot anomalous rating profiles (outliers), where the outliers hurriedly create a profile that injects into the system either random ratings or specific ratings, without any prior knowledge of the existing ratings. The proposed detection method is based on the unpredictable behavior of the outliers in a validation set, on the user-item rating matrix and on the similarity between users. The proposed system is totally unsupervised, and in the last step it uses the k-means clustering method to automatically spot the spurious profiles. For cases where labeled sample data is available, a random forest classifier is trained to show how supervised methods outperform unsupervised ones. Experimental results on the MovieLens 100k and the MovieLens 1M datasets demonstrate the high performance of the proposed schemes.
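
The unsupervised final step, k-means with k=2 over simple per-profile features, can be sketched as follows; the two features and the synthetic data are assumptions for illustration:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genuine = rng.normal([3.5, 1.0], 0.3, size=(50, 2))   # (mean rating, variance)
hurried = rng.normal([5.0, 0.05], 0.02, size=(6, 2))  # flat, rushed profiles
X = np.vstack([genuine, hurried])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
minority = np.argmin(np.bincount(labels))             # smaller cluster = spurious
print(np.where(labels == minority)[0])                # indices 50..55 flagged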

Zhang, F., Deng, Z., He, Z., Lin, X., Sun, L..  2018.  Detection of Shilling Attack in Collaborative Filtering Recommender System by PCA and Data Complexity. 2018 International Conference on Machine Learning and Cybernetics (ICMLC). 2:673–678.

Collaborative filtering (CF) recommender systems have been widely used for their good performance in personalized recommendation, but CF recommender systems are vulnerable to shilling attacks, in which shilling attack profiles are injected into the system by attackers to affect recommendations. Designing robust recommender systems and proposing attack detection methods are the main research directions for handling shilling attacks, among which unsupervised PCA is particularly effective in experiments; however, if we have no information about the number of shilling attack profiles, unsupervised PCA will suffer. In this paper, a new unsupervised detection method which combines PCA and data complexity is proposed to detect shilling attacks. In the proposed method, PCA is used to select suspected attack profiles, and data complexity is used to pick out the authentic profiles from the suspected attack profiles. Compared with traditional PCA, the proposed method performs well, and there is no need to determine the number of shilling attack profiles in advance.
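
A hedged sketch of the PCA selection step (not the paper's exact rule): injected profiles are highly correlated with one another, so after projecting profiles onto the top principal components they collapse into a tight clump that a nearest-neighbour gap exposes:

import numpy as np

rng = np.random.default_rng(1)
R = rng.integers(1, 6, size=(60, 20)).astype(float)    # genuine, noisy raters
R[54:60] = R[54] + rng.normal(0, 0.05, size=(6, 20))   # near-identical injections

Z = R - R.mean(axis=0)                                 # center each item
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
P = U[:, :3] * s[:3]                                   # top-3 PCA projection

D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)
nn_gap = D.min(axis=1)                                 # distance to nearest profile
print(np.argsort(nn_gap)[:6])                          # ~ indices 54..59 (suspects)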

Li, Gaochao, Jin, Xin, Wang, Zhonghua, Chen, Xunxun, Wu, Xiao.  2018.  Expert Recommendation Based on Collaborative Filtering in Subject Research. Proceedings of the 2018 International Conference on Information Science and System. :291–298.

This article implements a method for expert recommendation based on collaborative filtering. The recommendation model extracts potential evaluation experts from historical data, figures out the relevance between past subjects and current subjects, obtains the evaluation experience index and personal ability index of experts, calculates the relevance of research direction between experts and subjects and finally recommends the most proper experts.

Abdelhakim, Boudhir Anouar, Mohamed, Ben Ahmed, Mohammed, Bouhorma, Ikram, Ben Abdel Ouahab.  2018.  New Security Approach for IoT Communication Systems. Proceedings of the 3rd International Conference on Smart City Applications. :2:1–2:8.

Security is a permanent problem in wired and wireless communication systems. The issue becomes even more complex in the Internet of Things context, where security solutions remain poor and insufficient while the number of nodes increases hugely (around 26 billion expected by 2020). In this paper we propose a new security scheme which avoids cryptographic mechanisms based on the exchange of symmetric or asymmetric keys, which are not recommended for IoT devices due to their limited processing, storage and energy. The proposed solution is based on the use of multi-agent systems to ensure the security of connected objects. Objects programmed with agents are able to communicate with other objects without any need to compute keys. The main objective of this work is to maintain a high level of security while optimizing the energy consumption of IoT devices.

Pejo, Balazs, Tang, Qiang, Biczók, Gergely.  2018.  The Price of Privacy in Collaborative Learning. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2261–2263.

Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality a small or medium sized organization often does not have enough data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models based on a joint dataset (which is a union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate the parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain in their collaboration. In this paper, we model the collaborative training process as a two-player game where each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of Price of Privacy, a novel approach for measuring the impact of privacy protection on the accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player.

Wang, Jun, Arriaga, Afonso, Tang, Qiang, Ryan, Peter Y.A..  2018.  Facilitating Privacy-Preserving Recommendation-as-a-Service with Machine Learning. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2306–2308.

Machine-Learning-as-a-Service has become increasingly popular, with Recommendation-as-a-Service as one of the representative examples. In such services, providing privacy protection for the users is an important topic. Reviewing the privacy-preserving solutions proposed in the past decade, privacy and machine learning are often seen as two competing goals at stake. Though improving cryptographic primitives (e.g., secure multi-party computation (SMC) or homomorphic encryption (HE)) or devising sophisticated secure protocols has made remarkable achievements, combining them with state-of-the-art recommender systems often yields far-from-practical solutions. We tackle this problem from the direction of machine learning. We aim to design crypto-friendly recommendation algorithms, and thus to obtain efficient solutions by directly using existing cryptographic tools. In particular, we propose an HE-friendly recommender system, referred to as CryptoRec, which (1) decouples user features from the latent feature space, avoiding training the recommendation model on encrypted data; and (2) relies only on addition and multiplication operations, making the model straightforwardly compatible with HE schemes. These properties turn recommendation computations into simple matrix-multiplication operations. To further improve efficiency, we introduce a sparse-quantization-reuse method which reduces the recommendation-computation time by 9× (compared to using CryptoRec directly), without compromising the accuracy. We demonstrate the efficiency and accuracy of CryptoRec on three real-world datasets. CryptoRec allows a server to estimate a user's preferences on thousands of items within a few seconds on a single PC, with the user's data homomorphically encrypted, while its prediction accuracy is still competitive with state-of-the-art recommender systems computing over clear data. Our solution enables Recommendation-as-a-Service on large datasets at a nearly real-time (seconds) level.
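
The property that makes the scheme HE-friendly can be shown in two lines of linear algebra: with a pretrained item-feature matrix Q, a user's predictions are (r_u @ Q) @ Q.T, additions and multiplications only. The matrices below are random stand-ins, not a trained CryptoRec model:

import numpy as np

rng = np.random.default_rng(0)
n_items, k = 1000, 16
Q = rng.normal(size=(n_items, k))       # pretrained item features (assumed)
r_u = rng.integers(0, 6, size=n_items).astype(float)   # one user's ratings

user_features = r_u @ Q                 # no per-user training on encrypted data
predictions = user_features @ Q.T       # scores for all items, HE-compatible ops
print(predictions[:5])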

2019-10-02
Alkadi, A., Chi, H., Prodanoff, Z. G., Kreidl, P..  2018.  Evaluation of Two RFID Traffic Models with Potential in Anomaly Detection. SoutheastCon 2018. :1–5.

The use of Knuth's Rule and Bayesian Blocks constant piecewise models for the characterization of RFID traffic has been proposed already. This study presents an evaluation of the application of those two modeling techniques to various RFID traffic patterns. The data sets used in this study consist of time series of binned RFID command counts. More specifically, we compare the shape of several empirical plots of raw data sets we obtained from experimental RFID readings against the constant piecewise graphs produced as output of the two modeling algorithms. One issue limiting the applicability of modeling techniques to RFID traffic is the fact that a large number of different RFID applications are available. We consider this phenomenon to be the main motivation for this study. The general expectation is that RFID traffic traces from different applications would be sequences with different histogram shapes. Therefore, no modeling technique could be considered universal for modeling the traffic from multiple RFID applications without first evaluating its model performance for various traffic patterns. We postulate that differences in traffic patterns are present if the histograms of two different sets of RFID traces form visually different plot shapes.
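
For readers who want to reproduce the characterization, a minimal example of fitting a Bayesian Blocks piecewise-constant model to synthetic command timestamps, assuming the astropy library is available (the RFID trace itself is simulated):

import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
events = np.sort(np.concatenate([
    rng.uniform(0, 100, 200),          # background RFID command timestamps
    rng.uniform(48, 52, 200),          # burst of commands near t = 50
]))

edges = bayesian_blocks(events)        # change points of the piecewise model
print(edges)                           # block edges should bracket the burst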