Biblio

Found 145 results

Filters: First Letter Of Title is V
V
Bruno Korbar, Dartmouth College, Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Vijay Kothari, Dartmouth College, Sean Smith, Dartmouth College.  2016.  Validating an Agent-Based Model of Human Password Behavior. AAAI-16 Workshop on Artificial Intelligence for Cyber Security.

Effective reasoning about the impact of security policy decisions requires understanding how human users actually behave, rather than assuming desirable but incorrect behavior. Simulation could help with this reasoning, but it requires building computational models of the relevant human behavior and validating that these models match what humans actually do. In this paper we describe our progress on building agent-based models of human behavior with passwords, and we demonstrate how these models reproduce phenomena shown in the empirical literature.

Hadj, M. A. El, Erradi, M., Khoumsi, A., Benkaouz, Y..  2018.  Validation and Correction of Large Security Policies: A Clustering and Access Log Based Approach. 2018 IEEE International Conference on Big Data (Big Data). :5330-5332.

In big data environments with large numbers of users and high volumes of data, a correspondingly huge number of security policies must be managed. Because these policies are managed in a distributed fashion, they may contain several anomalies, such as conflicts and redundancies, which can lead to both safety and availability problems. The distributed systems governed by such security policies produce huge numbers of access logs, and due to potential security breaches, the access logs may reveal non-allowed accesses, which may also be a consequence of conflicting rules in the security policies. In this paper, we present ongoing work on an environment for verifying and correcting security policies. To make the approach efficient, an access log is used as input to determine suspicious parts of the policy that should be examined. Efficiency is further improved by clustering the policy and the access log and considering the resulting clusters separately. The clustering technique and the use of the access log significantly reduce the complexity of the suggested approach, making it scalable to large amounts of data.
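
The authors' algorithm is not included in the abstract; as a rough illustration of why grouping helps, the sketch below (hypothetical rule format and function name) buckets rules by their (subject, object, action) key so that conflict and redundancy checks stay local to a bucket instead of scanning the whole policy pairwise.

```python
from collections import defaultdict

# Hypothetical rule format: (subject, object, action, decision).
# Grouping rules about the same request plays the role of clustering:
# anomaly checks are then local to a group.
def find_anomalies(rules):
    groups = defaultdict(list)
    for subject, obj, action, decision in rules:
        groups[(subject, obj, action)].append(decision)

    conflicts, redundancies = [], []
    for key, decisions in groups.items():
        if len(set(decisions)) > 1:      # same request, contradictory decisions
            conflicts.append(key)
        elif len(decisions) > 1:         # same rule stated more than once
            redundancies.append(key)
    return conflicts, redundancies

policy = [
    ("alice", "/records", "read", "allow"),
    ("alice", "/records", "read", "deny"),   # conflict
    ("bob",   "/logs",    "read", "allow"),
    ("bob",   "/logs",    "read", "allow"),  # redundancy
]
print(find_anomalies(policy))
```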

Shu, H., Shen, X., Xu, L., Guo, Q., Sun, H..  2018.  A Validity Test Method for Transmission Betweenness and Transmission Sections Based on Chain Attack Analysis and Line Outage Distribution Factors. 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2). :1-6.

The identification of transmission sections is used to improve the efficiency of monitoring power grid operation. To test the validity of the transmission sections identified, an assessment process is necessary. In addition, transmission betweenness, an index for finding the key transmission lines in the power grid, should also be verified. In this paper, a chain attack is assumed in order to probe the weak links in the grid, thus verifying the transmission betweenness computed for the system. Moreover, line outage distribution factors (LODFs) are used to quantify the change of power flow when the leading line in a transmission section breaks down, so that the validity of the transmission sections can be proved. Case studies based on the IEEE 39- and 118-bus systems demonstrate the effectiveness of the proposed method.
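
For reference, the standard LODF relation (textbook form, not quoted from the paper) expresses the flow change on a monitored line $\ell$ when line $k$ trips:

```latex
\Delta f_{\ell} = \mathrm{LODF}_{\ell,k}\, f_{k}^{(0)},
\qquad
\mathrm{LODF}_{\ell,k} = \frac{\mathrm{PTDF}_{\ell,k}}{1 - \mathrm{PTDF}_{k,k}},
```

where $f_{k}^{(0)}$ is the pre-outage flow on line $k$ and $\mathrm{PTDF}_{\ell,k}$ is the power transfer distribution factor of line $\ell$ for a unit power transfer across the terminals of line $k$.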

Wu, N., Farokhi, F., Smith, D., Kaafar, M. A..  2020.  The Value of Collaboration in Convex Machine Learning with Differential Privacy. 2020 IEEE Symposium on Security and Privacy (SP). :304–317.
In this paper, we apply machine learning to distributed private data owned by multiple data owners, entities with access to non-overlapping training datasets. We use noisy, differentially-private gradients to minimize the fitness cost of the machine learning model using stochastic gradient descent. We quantify the quality of the trained model, using the fitness cost, as a function of privacy budget and size of the distributed datasets to capture the trade-off between privacy and utility in machine learning. This way, we can predict the outcome of collaboration among privacy-aware data owners prior to executing potentially computationally-expensive machine learning algorithms. Particularly, we show that the difference between the fitness of the trained machine learning model using differentially-private gradient queries and the fitness of the trained machine model in the absence of any privacy concerns is inversely proportional to the size of the training datasets squared and the privacy budget squared. We successfully validate the performance prediction with the actual performance of the proposed privacy-aware learning algorithms, applied to: financial datasets for determining interest rates of loans using regression; and detecting credit card frauds using support vector machines.
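
The paper's noise calibration is not reproduced in the abstract; the sketch below is only a minimal illustration of the setting it describes: several data owners jointly fit a convex model (ridge regression here), each contributing gradients perturbed with Gaussian noise, with the noise scale `sigma` a stand-in rather than a budget-calibrated value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three data owners with non-overlapping synthetic datasets.
true_w = np.array([1.0, -2.0, 0.5])
owners = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    owners.append((X, y))

w = np.zeros(3)
lam, lr = 0.01, 0.1
sigma = 0.05  # stand-in noise scale; the paper calibrates noise to the privacy budget

for step in range(200):
    grads = []
    for X, y in owners:
        g = X.T @ (X @ w - y) / len(y) + lam * w          # local ridge gradient
        grads.append(g + rng.normal(scale=sigma, size=g.shape))  # DP-style noise
    w -= lr * np.mean(grads, axis=0)  # coordinator averages the noisy gradients

print("estimate:", np.round(w, 2), "truth:", true_w)
```
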
Wang, Weina, Ying, Lei, Zhang, Junshan.  2016.  The Value of Privacy: Strategic Data Subjects, Incentive Mechanisms and Fundamental Limits. Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science. :249–260.

We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. Hence, each individual takes full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report the data with a privacy level of ε. The higher ε is, the less private the reported data is. We derive lower and upper bounds on the value of privacy which are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound assures that it is impossible to buy ε units of privacy with a smaller payment, and the upper bound is given by an achievable payment mechanism that we designed. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.

Heindorf, Stefan, Potthast, Martin, Stein, Benno, Engels, Gregor.  2016.  Vandalism Detection in Wikidata. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :327–336.

Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. Its knowledge is increasingly used within Wikipedia itself and various other kinds of information systems, imposing high demands on its integrity. Wikidata can be edited by anyone and, unfortunately, it frequently gets vandalized, exposing all information systems using it to the risk of spreading vandalized and falsified information. In this paper, we present a new machine learning-based approach to detect vandalism in Wikidata. We propose a set of 47 features that exploit both content and context information, and we report on four classifiers of increasing effectiveness tailored to this learning task. Our approach is evaluated on the recently published Wikidata Vandalism Corpus WDVC-2015, on which it achieves an area under the receiver operating characteristic curve (ROC-AUC) of 0.991. It significantly outperforms the state of the art represented by the rule-based Wikidata Abuse Filter (0.865 ROC-AUC) and a prototypical vandalism detector recently introduced by Wikimedia within the Objective Revision Evaluation Service (0.859 ROC-AUC).
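
The abstract does not enumerate the 47 features; as a generic illustration of the training-and-evaluation loop it describes (a classifier scored by ROC-AUC on a rare vandalism class), here is a sketch with synthetic stand-in features, assuming scikit-learn is available.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 47-dimensional revision feature vectors;
# vandalism is modeled as the rare positive class.
X, y = make_classification(n_samples=5000, n_features=47, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("ROC-AUC:", round(roc_auc_score(y_te, scores), 3))
```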

Tomandl, A., Herrmann, D., Fuchs, K.-P., Federrath, H., Scheuer, F..  2014.  VANETsim: An open source simulator for security and privacy concepts in VANETs. High Performance Computing Simulation (HPCS), 2014 International Conference on. :543-550.

Aside from massive advantages in safety and convenience on the road, Vehicular Ad Hoc Networks (VANETs) introduce security risks to their users. Proposals for new security concepts to counter these risks are challenging to verify because real-world implementations of VANETs are missing. To fill this gap, we introduce VANETsim, an event-driven simulation platform specifically designed to investigate application-level privacy and security implications in vehicular communications. VANETsim focuses on realistic vehicular movement on real road networks and communication between the moving nodes. A powerful graphical user interface and an experimentation environment support the user in setting up and carrying out experiments.

Nadi, Sarah, Krüger, Stefan.  2016.  Variability Modeling of Cryptographic Components: Clafer Experience Report. Proceedings of the Tenth International Workshop on Variability Modelling of Software-intensive Systems. :105–112.
Software systems need to use cryptography to protect any sensitive data they collect. However, there are various classes of cryptographic components (e.g., ciphers, digests, etc.), each suitable for a specific purpose. Additionally, each class of such components comes with various algorithms and configurations. Finding the right combination of algorithms and correct settings to use is often difficult. We believe that using variability modeling to model these algorithms, their relationships, and restrictions can help non-experts navigate this complex domain. In this paper, we report on our experience modeling cryptographic components in Clafer, a modeling language that combines feature modeling and meta-modeling. We discuss the design decisions we took as well as the challenges we ran into. Our work helps expand variability modeling into new domains and sheds light on modeling requirements that appear in practice.

Geetha, C. R., Basavaraju, S., Puttamadappa, C..  2013.  Variable load image steganography using multiple edge detection and minimum error replacement method. 2013 IEEE Conference on Information Communication Technologies. :53—58.

This paper proposes a steganography method using digital images, in which the data to be secured is embedded into a digital image. Studies of the human visual system show that changes in image edges are largely imperceptible to human eyes. We therefore use edge detection in steganography to increase the data hiding capacity by embedding more data in edge pixels: the more edge pixels we can find, the more data can be hidden in the image. To increase the number of edge pixels, multiple edge detection is employed, carried out with a sophisticated operator such as the Canny operator. To compensate for the decrease in PSNR caused by the increased amount of hidden data, the Minimum Error Replacement (MER) method is used. The main goals of image steganography, namely security with high embedding capacity and good visual quality, are thereby achieved. To extract the data, the original image and the embedding ratio are needed: extraction is done by performing multiple edge detection on the original image and extracting the data according to the embedding ratio.
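
The paper's multiple-edge-detection and MER steps are not spelled out in the abstract; the sketch below shows only the core idea of hiding payload bits in edge pixels, using a single Canny pass and plain LSB substitution (OpenCV assumed; the file name and thresholds are placeholders).

```python
import cv2
import numpy as np

def embed(cover, bits):
    # Locate edge pixels on the cover. Per the abstract, extraction uses the
    # original image, so the same mask can be recomputed at extraction time.
    edges = cv2.Canny(cover, 100, 200)
    ys, xs = np.nonzero(edges)
    assert len(bits) <= len(ys), "not enough edge pixels for the payload"
    stego = cover.copy()
    for bit, y, x in zip(bits, ys, xs):
        stego[y, x] = (stego[y, x] & 0xFE) | bit  # plain LSB substitution
    return stego

def extract(original, stego, n_bits):
    edges = cv2.Canny(original, 100, 200)  # relocate the same edge pixels
    ys, xs = np.nonzero(edges)
    return [int(stego[y, x]) & 1 for y, x in zip(ys[:n_bits], xs[:n_bits])]

cover = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)  # any 8-bit grayscale image
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, payload)
print(extract(cover, stego, len(payload)) == payload)
```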

Jian Wang, Lin Mei, Yi Li, Jian-Ye Li, Kun Zhao, Yuan Yao.  2014.  Variable Window for Outlier Detection and Impulsive Noise Recognition in Range Images. Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on. :857-864.

To improve the overall performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbor search and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed method, the sizes of the two windows used for outlier IN detection and IN denoising are investigated. Starting from a theoretical model of invader occlusion, a variable window is presented that adapts the window size to the dynamic environment of each point, together with practical criteria for determining the adaptive window size. Experiments on real range images of multi-line surfaces are conducted, with evaluations in terms of computational complexity and quality assessment, including comparison against several other popular methods. The results indicate that the proposed method detects impulsive noise with high accuracy and denoises it with strong adaptability thanks to the variable window.

Zhang, Hao, Zhang, Tao, Chen, Huajin.  2017.  Variance Analysis of Pixel-Value Differencing Steganography. Proceedings of the 2017 International Conference on Cryptography, Security and Privacy. :28–32.

Because adaptive steganography selects edge and texture areas for payload, theoretical analysis is limited by modeling difficulty. This paper introduces a novel method to study the pixel-value differencing (PVD) embedding scheme. First, the difference histogram values of the cover image are used as parameters, and a variance formula for PVD stego noise is obtained; the accuracy of this formula is verified through analysis on standard pictures. Second, the stego noise is divided into six kinds of pixel regions, and the regional noise variances are used to compare the security of PVD against least significant bit matching (LSBM) steganography. A mathematical conclusion is presented: with embedding capacity less than 2.75 bits per pixel, PVD is never safer than LSBM under the same embedding rate, regardless of region selection. Finally, 10000 image samples are used to check the validity of this conclusion. For most images and regions, the data are consistent with the theoretical prediction. The exceptional cases are analyzed carefully and found to be caused by the randomness of pixel selection and the abandoned blocks in the PVD scheme. In summary, the agreement of theory and practice indicates the effectiveness of our new method.
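
For readers unfamiliar with PVD, a bare-bones single-pair version of Wu-Tsai-style embedding is sketched below as a reference point for the analysis; the real scheme splits the difference change across both pixels and handles boundary fall-off, which this simplification (only the second pixel moves) omits.

```python
from math import log2

# Classic PVD quantization ranges; a range of width w hides log2(w) bits per pair.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bits):
    d = p2 - p1
    lo, hi = next(r for r in RANGES if r[0] <= abs(d) <= r[1])
    t = int(log2(hi - lo + 1))                 # capacity of this pair, in bits
    b = int("".join(map(str, bits[:t])), 2)
    new_abs = lo + b                           # new difference encodes the payload
    d_new = new_abs if d >= 0 else -new_abs
    return p1, p1 + d_new, t                   # simplified: only p2 moves

def extract_pair(p1, p2):
    d = abs(p2 - p1)
    lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
    t = int(log2(hi - lo + 1))
    return [int(c) for c in format(d - lo, f"0{t}b")]

p1, p2, t = embed_pair(100, 120, [1, 0, 1, 1, 0])
print(extract_pair(p1, p2))  # recovers the first t bits of the payload
```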

Xin, Wei, Wang, M., Shao, Shuai, Wang, Z., Zhang, Tao.  2015.  A variant of the Schnorr signature scheme for path-checking in RFID-based supply chains. 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). :2608–2613.

RFID technology has attracted considerable attention in recent years and brings convenience to supply chain management. In this paper, we concentrate on designing path-checking protocols to check valid paths in supply chains. Given a valid path, the checking reader can determine whether or not a tag has gone through that path. Based on a modified Schnorr signature scheme, we provide a path-checking method that achieves multi-signatures and final verification. Finally, we conduct a security and privacy analysis of the scheme.
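
The modified multi-signature construction is not given in the abstract; for orientation, a textbook Schnorr signature over a Schnorr group is sketched below, with toy parameters (p, q, g chosen for demonstration only and far too small for real use).

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4  # demonstration-only parameters, not secure

def h(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1     # private key
    return x, pow(g, x, p)               # (x, y = g^x mod p)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1     # fresh nonce per signature
    r = pow(g, k, p)
    e = h(msg, r)
    s = (k + x * e) % q
    return e, s

def verify(y, msg, sig):
    e, s = sig
    # r' = g^s * y^(-e) mod p; the signature is valid iff H(msg, r') == e.
    r = (pow(g, s, p) * pow(y, q - e, p)) % p
    return h(msg, r) == e

x, y = keygen()
print(verify(y, "tag went A->B->C", sign(x, "tag went A->B->C")))
```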

Srinate, Panpet, Chiewthanakul, Bhichate.  2018.  A variant of the Schnorr signature using an elliptic curve over a field of characteristic two. 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE). :1–5.
Digital signatures over elliptic curves are among the most important security applications because they are efficient, and they have recently been developed and defined in various security standards. Applications of digital signatures include signer authentication, data integrity, and non-repudiation. Currently, there is a growing need to implement authentication on hardware with limited resources such as energy, memory, and computing power, and developers should weigh these factors together with security when implementing on such hardware. In this paper, we propose a Schnorr signature scheme using a Koblitz curve over a field of characteristic two. The Schnorr signature scheme combines well with a Koblitz curve over a field of characteristic two, whose arithmetic can be performed efficiently on any computer. Moreover, we use Double-and-Add scalar multiplication to reduce the processing time of the system. In addition, this paper reports timing results comparing the Schnorr signature scheme on a Koblitz curve using Double-and-Add scalar multiplication with the same scheme using typical scalar multiplication. Both systems work correctly, but the Schnorr signature scheme on a Koblitz curve using Double-and-Add is more time-efficient than the one using typical scalar multiplication.
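
Field arithmetic over GF(2^m) is too long for a short example, so the sketch below runs the Double-and-Add loop on a toy curve over a small prime field instead; the bit-scanning structure is the same idea the paper applies on its Koblitz curve, and the curve, point, and scalar here are arbitrary demonstration values.

```python
# Toy short Weierstrass curve y^2 = x^3 + 2x + 3 over F_97; None is the point at infinity.
p, a = 97, 2

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def scalar_mul(k, P):
    # Double-and-Add: scan the bits of k from most to least significant.
    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)              # always double
        if bit == "1":
            R = add(R, P)          # conditionally add
    return R

G = (3, 6)  # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(scalar_mul(13, G))
```
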
Bouzar-Benlabiod, L., Méziani, L., Rubin, S. H., Belaidi, K., Haddar, N. E..  2019.  Variational Encoder-Decoder Recurrent Neural Network (VED-RNN) for Anomaly Prediction in a Host Environment. 2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI). :75–82.
Intrusion detection systems (IDS) are important security tools: an NIDS monitors network traffic, while an HIDS monitors the local host. HIDS are often based on anomaly detection, and several studies address anomaly detection using system-call traces. In this paper, we propose an anomaly detection and prediction approach in which the system-call traces invoked by running programs are analyzed in real time. For prediction, we use a sequence-to-sequence model based on a variational encoder-decoder (VED) and variants of Recurrent Neural Networks (RNN), architectures that have shown strong performance in natural language processing. To make the analogy, we exploit the semantics behind the invocation order of system calls, which are then treated as sentences. A preprocessing phase is added to optimize the input data representation of the prediction model, and a one-class classification categorizes the sequences as normal or abnormal. Tests on the ADFA-LD dataset show the advantage of prediction for the intrusion detection/prediction task.

Hossain, F. S., Shintani, M., Inoue, M., Orailoglu, A..  2018.  Variation-Aware Hardware Trojan Detection through Power Side-Channel. 2018 IEEE International Test Conference (ITC). :1-10.

A hardware Trojan (HT) denotes the malicious addition or modification of circuit elements. The purpose of this work is to improve HT detection sensitivity in ICs using power side-channel analysis. This paper presents three detection techniques for power-based side-channel analysis that increase the Trojan-to-circuit power consumption and reduce the effect of variation on the detection threshold. Incorporating the three proposed methods demonstrates that realistic fine-grained circuit partitioning, together with an improved pattern set that increases HT activation chances, can magnify Trojan detectability.

Koch, S., John, M., Worner, M., Muller, A., Ertl, T..  2014.  VarifocalReader – In-Depth Visual Analysis of Large Text Documents. Visualization and Computer Graphics, IEEE Transactions on. 20:1723-1732.

Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.

Lira, Wallace, Gama, Fernando, Barbosa, Hivana, Alves, Ronnie, de Souza, Cleidson.  2016.  VCloud: Adding Interactiveness to Word Clouds for Knowledge Exploration in Large Unstructured Texts. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :193–198.

The identification of relevant information in large text databases is a challenging task, in part because of human beings' limitations in handling large volumes of data. A common solution for scavenging data from texts is the word cloud, which illustrates word usage in a document by resizing individual words proportionally to how frequently they appear. Even though word clouds are easy to understand, they are not particularly efficient, because they are static. In addition, the presented information lacks context: words are not explained, and they may lead to radically erroneous interpretations. To tackle these problems we developed VCloud, a tool that lets the user interact with word clouds, thereby allowing informative and interactive data exploration. Furthermore, our tool allows one to compare two data sets presented as word clouds. We evaluated VCloud using real data about the evolution of gastritis research through the years: the papers indexed by PubMed related to this medical context were selected for visualization and data analysis using VCloud. A domain expert explored these visualizations and was able to extract useful information from them, illustrating how VCloud can be a valuable tool for visual text analytics.
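
As a side note on the frequency-proportional sizing summarized above, a minimal word-to-font-size weighting step might look like this; the stopword list and linear scaling are illustrative choices, not VCloud's actual design.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "in", "and", "to", "is"}  # illustrative only

def cloud_weights(text, min_size=10, max_size=48):
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    counts = Counter(words)
    top = counts.most_common(1)[0][1]
    # Font size grows linearly with frequency relative to the most frequent word.
    return {w: min_size + (max_size - min_size) * c / top for w, c in counts.items()}

print(cloud_weights("gastritis research and gastritis treatment in gastritis patients"))
```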

Khan, J..  2017.  Vehicle Network Security Testing. 2017 Third International Conference on Sensing, Signal Processing and Security (ICSSS). :119–123.

In-vehicle networks like Controller Area Network, FlexRay, and Ethernet are now subject to huge security threats, where unauthorized entities can take control of the whole vehicle. This can pose very serious risks, including accidents. Security features like encryption and message authentication are being implemented in vehicle networks to counteract these issues. This paper proposes a set of novel validation techniques to ensure that vehicle network security is foolproof. Security validation against requirements and security validation using white-box, black-box, and grey-box approaches are put forward. Test system architecture, validation of message authentication, decoding patterns from vehicle network data, using diagnostics as a security loophole, V2V/V2X loopholes, and gateway module security testing are considered in detail. The aim of this paper is to put forward a set of tools and methods for finding and reporting any security loopholes in an in-vehicle network security implementation.

Markwood, Ian D., Liu, Yao.  2016.  Vehicle Self-Surveillance: Sensor-Enabled Automatic Driver Recognition. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :425–436.

Motor vehicles are widely used, quite valuable, and often targeted for theft. Preventive measures include car alarms, proximity control, and physical locks, which can be bypassed if the car is left unlocked or if the thief obtains the keys. Reactive strategies like cameras, motion detectors, human patrolling, and GPS tracking can monitor a vehicle, but may not detect car thefts in a timely manner. We propose a fast automatic driver recognition system that identifies unauthorized drivers while overcoming the drawbacks of previous approaches. We factor drivers' trips into elemental driving events, from which we extract driving preference features that cannot be exactly reproduced by a thief driving away in the stolen car. We performed a real-world evaluation using driving data collected from 31 volunteers. Experimental results show we can distinguish the current driver as the owner with 97% accuracy, while preventing impersonation 91% of the time.

Xiong, Chen, Chen, Hua, Cai, Ming, Gao, Jing.  2019.  A Vehicle Trajectory Adversary Model Based on VLPR Data. 2019 5th International Conference on Transportation Information and Safety (ICTIS). :903–912.
Although transport agencies employ desensitization techniques to remove privacy information when publicizing vehicle license plate recognition (VLPR) data, adversaries can still eavesdrop on vehicle trajectories by certain means and further acquire the associated person and vehicle information through background knowledge. In this work, a privacy attack that links vehicle trajectories from desensitized VLPR data is proposed. First, the average road speed is estimated by analyzing changes in traffic flow, and is used to estimate a vehicle's travel time to the next VLPR system. Then a vehicle suspicion list is constructed through the time relevance of neighboring VLPR systems. Finally, since vehicles may share features like color and type, the target trajectory is located by filtering the suspicion list using qualified identifier (QI) attributes and a closest-time rule. The method is tested on Foshan City's VLPR data, and the results show that correct vehicle trajectories can be linked, proving that the current way of publishing VLPR data carries a risk of privacy disclosure. Finally, the effects of the related parameters on the proposed method are discussed, and effective suggestions are made for publicizing VLPR data in the future.
Lacroix, Jesse, El-Khatib, Khalil, Akalu, Rajen.  2016.  Vehicular Digital Forensics: What Does My Vehicle Know About Me? Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :59–66.

A major component of modern vehicles is the infotainment system, which interfaces with the vehicle's drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general content stored in an infotainment system. Based on the extracted data, analysts could determine what stored information is relevant to law enforcement agencies and what information is non-essential for solving criminal activities relating to the vehicle itself. Overall, this would solidify the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through vehicle forensics. Additionally, determining the content of these systems will let forensic analysts know whether they can learn anything about the end user directly and/or indirectly.

Setyono, R. Puji, Sarno, R..  2018.  Vendor Track Record Selection Using Best Worst Method. 2018 International Seminar on Application for Technology of Information and Communication. :41–48.
Every company depends to a large extent on other companies, which together form a larger business process. Risks arising from partner companies affect a company's business performance, so the right choice of suppliers is crucial. Each vendor has different characteristics and not every vendor is suitable, and the selection process is complex and risky. This has given rise to a problem that researchers have studied for years, known as the Supplier Selection Problem. Vendor selection with multi-criteria decision making has been widely studied over the years, and the Best Worst Method is a recent approach to Multi-Criteria Decision Making (MCDM). This research takes as a case study the XYZ company in Indonesia, which is engaged in mining and industry. It uses the transaction data recorded by the XYZ company and analyzes vendor valuations; the Best Worst Method weights are calculated from the vendor assessment results. The results show that the XYZ company still focuses on price as its key criterion.
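
The abstract does not restate the BWM optimization; the sketch below solves the standard linear BWM model (minimize the largest deviation ξ from the best-to-others and others-to-worst comparison vectors), which may differ in detail from the variant the authors apply; scipy is assumed available, and the criteria in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_best, a_worst, best, worst):
    """Linear BWM: min xi s.t. |w_best - a_best[j]*w_j| <= xi,
    |w_j - a_worst[j]*w_worst| <= xi, sum(w) = 1, w >= 0."""
    n = len(a_best)
    c = np.zeros(n + 1); c[-1] = 1.0            # variables: w_0..w_{n-1}, xi
    A_ub, b_ub = [], []
    for j in range(n):
        row = np.zeros(n); row[best] += 1.0; row[j] -= a_best[j]
        A_ub += [np.append(row, -1.0), np.append(-row, -1.0)]
        row = np.zeros(n); row[j] += 1.0; row[worst] -= a_worst[j]
        A_ub += [np.append(row, -1.0), np.append(-row, -1.0)]
        b_ub += [0.0] * 4
    A_eq = [np.append(np.ones(n), 0.0)]          # weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]                  # weights, consistency indicator xi

# Hypothetical criteria: price (best), quality, delivery, service (worst).
w, xi = bwm_weights(a_best=[1, 2, 4, 8], a_worst=[8, 4, 2, 1], best=0, worst=3)
print(np.round(w, 3), round(xi, 3))
```
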
Johnson, Claiborne, MacGahan, Thomas, Heaps, John, Baldor, Kevin, von Ronne, Jeffery, Niu, Jianwei.  2017.  Verifiable Assume-Guarantee Privacy Specifications for Actor Component Architectures. Proceedings of the 22Nd ACM on Symposium on Access Control Models and Technologies. :167–178.

Many organizations process personal information in the course of normal operations. Improper disclosure of this information can be damaging, so organizations must obey privacy laws and regulations that impose restrictions on its release or risk penalties. Since electronic management of personal information must be held in strict compliance with the law, software systems designed for such purposes must have some guarantee of compliance. To support this, we develop a general methodology for designing and implementing verifiable information systems. This paper develops the design of the History Aware Programming Language into a framework for creating systems that can be mechanically checked against privacy specifications. We apply this framework to create and verify a prototypical Electronic Medical Record System (EMRS) expressed as a set of actor components and first-order linear temporal logic specifications in assume-guarantee form. We then show that the implementation of the EMRS provably enforces a formalized Health Insurance Portability and Accountability Act (HIPAA) policy using a combination of model checking and static analysis techniques.
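
The paper's actual HIPAA formalization is not reproduced in the abstract; purely as an illustration of the assume-guarantee shape (all predicate names invented here), such a first-order LTL specification might read:

```latex
\underbrace{\square\, \forall r.\ \big( \mathit{request}(r) \rightarrow \mathit{authenticated}(\mathit{sender}(r)) \big)}_{\text{assumption on the environment}}
\;\longrightarrow\;
\underbrace{\square\, \forall r.\ \big( \mathit{disclose}(r) \rightarrow \mathit{consents}(\mathit{patient}(r)) \big)}_{\text{guarantee of the EMRS component}}
```

Read: provided every request reaching the component comes from an authenticated sender, the component never discloses a record without its patient's consent.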

Ferretti, L., Marchetti, M., Colajanni, M..  2017.  Verifiable Delegated Authorization for User-Centric Architectures and an OAuth2 Implementation. 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC). 2:718–723.

Delegated authorization protocols have become widespread in implementing Web applications and services: popular providers managing people's identity information and personal data allow their users to delegate third-party Web services to access their data. In this paper, we analyze the risks related to untrusted providers not behaving correctly, and we address this problem by proposing the first verifiable delegated authorization protocol that allows third-party services to verify the correctness of user data returned by the provider. The contribution of the paper is twofold: we show how delegated authorization can be cryptographically enforced through authenticated data structure protocols, and we extend the standard OAuth2 protocol to support efficient and verifiable delegated authorization, including database updates and privilege revocation.
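
The paper's construction is not given in the abstract; as background on authenticated data structures, the classic Merkle-tree inclusion proof is sketched below (an illustration of the primitive, not the authors' protocol): the provider answers a query with data plus a proof, and the third-party service checks the proof against a digest the user endorsed.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level, proof = [h(l) for l in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (sibling hash, sibling-on-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

data = [b"email:alice@example.org", b"dob:1990-01-01", b"plan:basic", b"country:NL"]
root = merkle_root(data)                             # user-endorsed digest
print(verify(root, data[1], merkle_proof(data, 1)))  # True: answer checks out
```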

Ye, Fanghan, Dong, Xiaolei, Shen, Jiachen, Cao, Zhenfu, Zhao, Wenhua.  2019.  A Verifiable Dynamic Multi-user Searchable Encryption Scheme without Trusted Third Parties. 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS). :896–900.
Searchable encryption is a cryptographic primitive that allows users to search for keywords over encrypted data, enabling search in archives stored on cloud servers. Among searchable encryption schemes, those supporting multi-user settings are more suitable for everyday application scenarios and more practical. However, since the cloud server is semi-trusted, the result set returned by the server cannot be assumed correct, and most existing multi-user searchable encryption schemes rely heavily on trusted third parties to manage user permissions. To address these problems, verifiable multi-user searchable encryption schemes that manage user search permissions dynamically and place little trust in third parties are desirable. In this paper, we propose such a scheme. Our scheme manages user permissions and key distribution without a trusted third party, and user search permission and user access permission matrices are generated separately to manage user permissions dynamically. In addition, our scheme can verify the result set returned by the cloud server. We also show that our scheme is index- and trapdoor-indistinguishable under chosen keyword attacks in the random oracle model. Finally, a detailed comparison experiment on a real document data set shows that our scheme is efficient and practical.