Biblio

Found 1366 results

Filters: Keyword is privacy
2021-01-25
Chen, J., Lin, X., Shi, Z., Liu, Y..  2020.  Link Prediction Adversarial Attack Via Iterative Gradient Attack. IEEE Transactions on Computational Social Systems. 7:1081–1094.
Deep neural networks are increasingly applied to graph-related tasks such as node classification and link prediction. However, the vulnerability of deep models can be revealed using carefully crafted adversarial examples generated by various adversarial attack methods. To explore this security problem, we define the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) strategy that uses the gradient information in the trained graph autoencoder (GAE) model. Not surprisingly, GAE can be fooled by an adversarial graph with only a few links perturbed on the clean one. Comprehensive experiments on different real-world graphs indicate that most deep models, even state-of-the-art link prediction algorithms such as GAE, cannot escape the adversarial attack. The attack can also serve as an efficient privacy-protection tool against unwanted link prediction. On the other hand, the adversarial attack provides a robustness evaluation metric for the defensibility of current link prediction algorithms.
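A minimal sketch of the iterative-gradient idea, assuming a one-layer GAE-style encoder and a small synthetic graph; the model, flip budget, and helper names are illustrative, not the paper's implementation (requires PyTorch):

```python
# Hedged sketch of an iterative gradient attack on a GAE-style link predictor.
import torch

def gae_scores(A, X, W):
    # One-layer GAE-style encoder: Z = A_norm X W; decoder: sigmoid(Z Z^T).
    deg = A.sum(1)
    D_inv_sqrt = torch.diag(deg.clamp(min=1).pow(-0.5))
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt
    Z = torch.tanh(A_norm @ X @ W)
    return torch.sigmoid(Z @ Z.t())

def iterative_gradient_attack(A, X, W, target=(0, 1), budget=3):
    """Flip the `budget` links whose gradient most decreases the target score."""
    A = A.clone()
    for _ in range(budget):
        A_var = A.clone().requires_grad_(True)
        score = gae_scores(A_var, X, W)[target]   # score of the attacked link
        score.backward()
        grad = A_var.grad
        # A useful flip removes a link with positive gradient (1 -> 0) or
        # adds a link with negative gradient (0 -> 1).
        gain = torch.where(A > 0.5, grad, -grad)
        gain.fill_diagonal_(float('-inf'))        # never toggle self-loops
        i, j = divmod(int(gain.argmax()), A.shape[0])
        A[i, j] = A[j, i] = 1.0 - A[i, j]         # flip symmetrically
    return A

n, d, h = 8, 4, 2
A = (torch.rand(n, n) > 0.7).float(); A = ((A + A.t()) > 0).float()
X, W = torch.rand(n, d), torch.rand(d, h)
A_adv = iterative_gradient_attack(A, X, W)
print("links changed:", int((A_adv != A).sum().item() // 2))
```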
2021-01-18
Pattanayak, S., Ludwig, S. A..  2019.  Improving Data Privacy Using Fuzzy Logic and Autoencoder Neural Network. 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Data privacy is an important problem to address when sharing data among multiple organizations, and it has become crucial in the health sector, where organizations such as hospitals store patient data in the form of Electronic Health Records. The stored data is shared with other organizations or research analysts to improve the health care of patients. However, the data records contain sensitive information such as the age, sex, and date of birth of the patients, and revealing such data can cause a privacy breach for the individuals. This has triggered research that has led to many different privacy-preserving techniques being introduced. We designed a technique that not only encrypts/hides the sensitive information but also sends the data to different organizations securely. To encrypt sensitive data we use different fuzzy logic membership functions. We then use an autoencoder neural network to send the modified data. The output data of the autoencoder can then be used by different organizations for research analysis.
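A hedged sketch of the pipeline described above: a triangular fuzzy membership function replaces a sensitive numeric field, and a small autoencoder transforms the records before sharing. The membership parameters and network sizes are assumptions for illustration (requires PyTorch):

```python
# Obfuscate a sensitive numeric field with a fuzzy membership function,
# then pass records through a small autoencoder before sharing.
import torch, torch.nn as nn

def triangular_membership(x, a, b, c):
    # Classic triangular fuzzy membership: 0 at a and c, peak 1 at b.
    return torch.clamp(torch.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

ages = torch.tensor([23., 35., 47., 61.])
fuzzy_age = triangular_membership(ages, a=0., b=45., c=90.)  # replaces raw age

records = torch.stack([fuzzy_age, torch.rand(4), torch.rand(4)], dim=1)
autoencoder = nn.Sequential(nn.Linear(3, 2), nn.Tanh(), nn.Linear(2, 3))
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-2)
for _ in range(200):                       # train to reconstruct fuzzified data
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(records), records)
    loss.backward(); opt.step()
shared = autoencoder(records).detach()     # transformed records sent onward
print(shared)
```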
Sebbah, A., Kadri, B..  2020.  A Privacy and Authentication Scheme for IoT Environments Using ECC and Fuzzy Extractor. 2020 International Conference on Intelligent Systems and Computer Vision (ISCV). :1–5.
The internet of things (IoT) consists of many complementary elements, each with its own specificities and capacities, that are gaining new applications and use cases in our lives. Nevertheless, they open a wide horizon of security and privacy issues that must be treated carefully before the deployment of any IoT. Recently, different works have emerged dealing with this branch of issues, such as the scheme of Yuwen Chen et al. called LightPriAuth. LightPriAuth has several drawbacks and weaknesses against various popular attacks, such as the insider attack and the stolen-smart-card attack. Our objective in this paper is to propose a novel solution, a three-factor authentication scheme using ECC and a fuzzy extractor, to ensure security and privacy. The obtained results prove the superiority of our scheme's performance compared to that of LightPriAuth and show that our scheme additionally overcomes the weaknesses left by LightPriAuth.
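The paper's exact construction is not reproduced here; the sketch below shows only the generic code-offset fuzzy extractor (Gen/Rep) that such schemes build on, with an illustrative 3-bit repetition code and SHA-256 key derivation as assumptions:

```python
# Code-offset fuzzy extractor sketch: a noisy biometric reading reproduces
# a stable key given public helper data.
import secrets, hashlib

def rep_encode(bits):                 # repetition code: each bit -> 3 copies
    return [b for b in bits for _ in range(3)]

def rep_decode(bits):                 # majority vote per 3-bit group
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def gen(w):                           # enrollment: w is the biometric bits
    k = [secrets.randbelow(2) for _ in range(len(w) // 3)]
    helper = [a ^ b for a, b in zip(rep_encode(k), w)]   # public helper data
    key = hashlib.sha256(bytes(k)).digest()
    return key, helper

def rep(w_noisy, helper):             # reproduction from a noisy reading
    k = rep_decode([a ^ b for a, b in zip(helper, w_noisy)])
    return hashlib.sha256(bytes(k)).digest()

w = [secrets.randbelow(2) for _ in range(12)]
key, helper = gen(w)
w_noisy = w.copy(); w_noisy[0] ^= 1   # one flipped bit, within code tolerance
assert rep(w_noisy, helper) == key    # same key despite the noisy reading
```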
2021-01-15
Pete, I., Hughes, J., Chua, Y. T., Bada, M..  2020.  A Social Network Analysis and Comparison of Six Dark Web Forums. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :484–493.

With increasing monitoring and regulation by platforms, communities with criminal interests are moving to the dark web, which hosts content ranging from whistle-blowing and privacy to drugs, terrorism, and hacking. Using post discussion data from six dark web forums, we construct six interaction graphs and use social network analysis tools to study these underground communities. We observe the structure of each network to highlight structural patterns and identify nodes of importance through network centrality analysis. Our findings suggest that in the majority of the forums some members are highly connected and form hubs, while most members have a lower number of connections. When examining the posting activities of central nodes, we found that most of the central nodes post in sub-forums with broader topics, such as general discussions and tutorials. These members play different roles in the different forums, and within each forum we identified diverse user profiles.
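A small illustration of the kind of centrality analysis described, on a toy interaction graph (nodes are members, edges are reply interactions); the data and the hub threshold are assumptions (requires networkx):

```python
# Centrality analysis on a toy forum interaction graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
                  ("d", "e"), ("e", "f"), ("a", "e")])

degree = nx.degree_centrality(G)          # hub-ness: share of direct contacts
between = nx.betweenness_centrality(G)    # brokerage between sub-communities

hubs = [n for n, c in degree.items() if c >= 0.5]
print("hub members:", hubs)
print("top broker:", max(between, key=between.get))
```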

Ebrahimi, M., Samtani, S., Chai, Y., Chen, H..  2020.  Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach. 2020 IEEE Security and Privacy Workshops (SPW). :20–26.

The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals are closely examining the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts. While machine translation (MT) is the prevailing approach to processing non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages the collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.
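A hedged sketch of the adversarial language-invariance idea: a shared LSTM encoder feeds both a threat classifier and a language discriminator whose gradients are reversed, pushing the encoder toward language-invariant features. Dimensions and the gradient-reversal formulation are assumptions, not A-CLKT itself (requires PyTorch):

```python
# Adversarial training toward a language-invariant threat representation.
import torch, torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x): return x
    @staticmethod
    def backward(ctx, g): return -g          # flip gradients for the encoder

encoder = nn.LSTM(input_size=32, hidden_size=16, batch_first=True)
threat_head = nn.Linear(16, 2)               # threat vs. non-threat
lang_disc = nn.Linear(16, 2)                 # e.g., English vs. Russian

x = torch.randn(8, 10, 32)                   # batch of token embeddings
y_threat = torch.randint(0, 2, (8,))
y_lang = torch.randint(0, 2, (8,))

_, (h, _) = encoder(x)
z = h[-1]                                    # shared sentence representation
loss = (nn.functional.cross_entropy(threat_head(z), y_threat)
        + nn.functional.cross_entropy(lang_disc(GradReverse.apply(z)), y_lang))
loss.backward()                              # encoder is pushed to hide language
```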

Liu, Y., Lin, F. Y., Ahmad-Post, Z., Ebrahimi, M., Zhang, N., Hu, J. L., Xin, J., Li, W., Chen, H..  2020.  Identifying, Collecting, and Monitoring Personally Identifiable Information: From the Dark Web to the Surface Web. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.

Personally identifiable information (PII) has become a major target of cyber-attacks, causing severe losses to data breach victims. To protect data breach victims, researchers focus on collecting exposed PII to assess privacy risk and identify at-risk individuals. However, existing studies mostly rely on exposed PII collected from either the dark web or the surface web. Due to the wide exposure of PII on both the dark web and surface web, collecting from only the dark web or the surface web could result in an underestimation of privacy risk. Despite its research and practical value, jointly collecting PII from both sources is a non-trivial task. In this paper, we summarize our effort to systematically identify, collect, and monitor a total of 1,212,004,819 exposed PII records across both the dark web and surface web. Our effort resulted in 5.8 million stolen SSNs, 845,000 stolen credit/debit cards, and 1.2 billion stolen account credentials. From the surface web, we identified and collected over 1.3 million PII records of the victims whose PII is exposed on the dark web. To the best of our knowledge, this is the largest academic collection of exposed PII, which, if properly anonymized, enables various privacy research inquiries, including assessing privacy risk and identifying at-risk populations.

2021-01-11
Johnson, N., Near, J. P., Hellerstein, J. M., Song, D..  2020.  Chorus: a Programming Framework for Building Scalable Differential Privacy Mechanisms. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :535–551.
Differential privacy is fast becoming the gold standard for enabling statistical analysis of data while protecting the privacy of individuals. However, practical use of differential privacy still lags behind research progress because research prototypes cannot satisfy the scalability requirements of production deployments. To address this challenge, we present Chorus, a framework for building scalable differential privacy mechanisms that is based on cooperation between the mechanism itself and a high-performance production database management system (DBMS). We demonstrate the use of Chorus to build the first highly scalable implementations of complex mechanisms like Weighted PINQ, MWEM, and the matrix mechanism. We report on our experience deploying Chorus at Uber, and we evaluate its scalability on real-world queries.
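A minimal sketch of the cooperation pattern Chorus is built around, under stated assumptions: the query is rewritten so the DBMS computes a clipped aggregate, and the result is post-processed with calibrated Laplace noise. The rewriting rule and clipping bounds are illustrative, not Chorus's API:

```python
# Push clipping into SQL so a production DBMS does the heavy lifting, then
# add Laplace noise calibrated to the clipped query's sensitivity.
import sqlite3
import numpy as np

def dp_sum(conn, table, col, lo, hi, epsilon):
    # SQLite's multi-argument MAX/MIN scalar functions implement clipping.
    q = f"SELECT SUM(MAX({lo}, MIN({hi}, {col}))) FROM {table}"
    true_sum = conn.execute(q).fetchone()[0] or 0
    sensitivity = max(abs(lo), abs(hi))     # one row changes the sum by <= this
    return true_sum + np.random.laplace(0, sensitivity / epsilon)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (fare REAL)")
conn.executemany("INSERT INTO trips VALUES (?)", [(5.0,), (12.5,), (80.0,)])
print(dp_sum(conn, "trips", "fare", lo=0, hi=50, epsilon=1.0))
```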
Jiang, P., Liao, S..  2020.  Differential Privacy Online Learning Based on the Composition Theorem. 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC). :200–203.
Privacy protection is becoming more and more important in the era of big data. Differential privacy is a rigorous and provable privacy protection method that can protect privacy even for a single piece of data. However, existing differential privacy online learning methods have significant limitations in their scope of application and accuracy. To address this problem, we propose a more general and accurate algorithm for differential privacy online learning, named DPOL-CT. We first distinguish the difference in differential privacy protection between offline learning and online learning. We then prove that the DPOL-CT algorithm achieves (ε, δ)-differential privacy for online learning under the Gaussian, Laplace, and Staircase mechanisms and enjoys a sublinear expected regret bound. We further discuss the trade-off between the differential privacy level and the regret bound. Theoretical analysis and experimental results show that the DPOL-CT algorithm has good performance guarantees.
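A hedged sketch of the composition idea behind differentially private online learning: each round spends an equal slice of the total budget on a Laplace-noised gradient step, and basic sequential composition bounds the overall loss. The update rule and step sizes are assumptions, not DPOL-CT:

```python
# DP online learning via per-round noise and sequential composition.
import numpy as np

rng = np.random.default_rng(0)
T, eps_total = 100, 1.0
eps_round = eps_total / T            # sequential composition: the spends sum up
w, clip = 0.0, 1.0

for t in range(T):
    x, y = rng.normal(), rng.normal()                   # one streaming example
    grad = np.clip(2 * (w * x - y) * x, -clip, clip)    # bounded sensitivity
    noisy_grad = grad + rng.laplace(0, 2 * clip / eps_round)
    w -= noisy_grad / np.sqrt(t + 1)                    # sublinear-regret step
print(f"model after {T} rounds: {w:.3f}, total epsilon spent: {eps_total}")
```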
Lobo-Vesga, E., Russo, A., Gaboardi, M..  2020.  A Programming Framework for Differential Privacy with Accuracy Concentration Bounds. 2020 IEEE Symposium on Security and Privacy (SP). :411–428.
Differential privacy offers a formal framework for reasoning about privacy and accuracy of computations on private data. It also offers a rich set of building blocks for constructing private data analyses. When carefully calibrated, these analyses simultaneously guarantee the privacy of the individuals contributing their data, and the accuracy of the data analyses results, inferring useful properties about the population. The compositional nature of differential privacy has motivated the design and implementation of several programming languages aimed at helping a data analyst in programming differentially private analyses. However, most of the programming languages for differential privacy proposed so far provide support for reasoning about privacy but not for reasoning about the accuracy of data analyses. To overcome this limitation, in this work we present DPella, a programming framework providing data analysts with support for reasoning about privacy, accuracy and their trade-offs. The distinguishing feature of DPella is a novel component which statically tracks the accuracy of different data analyses. In order to make tighter accuracy estimations, this component leverages taint analysis for automatically inferring statistical independence of the different noise quantities added for guaranteeing privacy. We evaluate our approach by implementing several classical queries from the literature and showing how data analysts can figure out the best manner to calibrate privacy to meet the accuracy requirements.
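A toy sketch of what static accuracy tracking can look like, under stated assumptions: each private value carries its Laplace noise scale, and an independence-aware (variance-based) bound beats a per-term union bound when the noise terms are independent. The tiny query API is illustrative, not DPella's:

```python
# Carry an accuracy estimate alongside each private aggregate and tighten it
# when noise terms are statistically independent.
import math

class PrivateValue:
    def __init__(self, scale):           # Laplace noise with scale b
        self.scales = [scale]
    def __add__(self, other):            # composing two noisy aggregates
        out = PrivateValue(0)
        out.scales = self.scales + other.scales
        return out
    def error_union(self, beta):         # per-term Laplace tail + union bound
        k = len(self.scales)
        return sum(b * math.log(k / beta) for b in self.scales)
    def error_independent(self, beta):   # Chebyshev on summed variances (2b^2)
        var = sum(2 * b * b for b in self.scales)
        return math.sqrt(var / beta)

q = PrivateValue(1.0) + PrivateValue(1.0) + PrivateValue(1.0)
print("union bound:       ", round(q.error_union(0.05), 2))
print("independence-aware:", round(q.error_independent(0.05), 2))  # tighter here
```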
Li, Y., Chang, T.-H., Chi, C.-Y..  2020.  Secure Federated Averaging Algorithm with Differential Privacy. 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.
Federated learning (FL), as a recent advance in distributed machine learning, is capable of learning a model over the network without directly accessing the client's raw data. Nevertheless, the clients' sensitive information can still be exposed to adversaries via differential attacks on messages exchanged between the parameter server and clients. In this paper, we consider the widely used federated averaging (FedAvg) algorithm and propose to enhance data privacy using the differential privacy (DP) technique, which obfuscates the exchanged messages by properly adding Gaussian noise. We analytically show that the proposed secure FedAvg algorithm maintains an O(1/T) convergence rate, where T is the total number of stochastic gradient descent (SGD) updates for local model parameters. Moreover, we demonstrate how various algorithm parameters can impact the communication efficiency of the algorithm. Experimental results are presented to justify the obtained analytical results on the performance of the proposed algorithm in terms of testing accuracy.
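A minimal sketch of DP federated averaging as described: clients perturb their updates with Gaussian noise before the server averages. The clipping norm, noise multiplier, and toy local step are assumptions, not the paper's parameters:

```python
# DP-FedAvg sketch: clip each client update, add Gaussian noise, average.
import numpy as np

rng = np.random.default_rng(1)
d, n_clients, clip, sigma = 5, 10, 1.0, 0.8
global_w = np.zeros(d)

def client_update(w):
    local = w + rng.normal(size=d) * 0.1                # stand-in for local SGD
    delta = local - w
    delta *= min(1.0, clip / np.linalg.norm(delta))     # bound the sensitivity
    return delta + rng.normal(0, sigma * clip, size=d)  # Gaussian mechanism

for rnd in range(20):
    deltas = [client_update(global_w) for _ in range(n_clients)]
    global_w += np.mean(deltas, axis=0)                 # FedAvg aggregation
print("global model norm:", round(float(np.linalg.norm(global_w)), 3))
```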
Farokhi, F..  2020.  Temporally Discounted Differential Privacy for Evolving Datasets on an Infinite Horizon. 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS). :1–8.
We define discounted differential privacy, as an alternative to (conventional) differential privacy, to investigate the privacy of evolving datasets containing time series over an unbounded horizon. We use privacy loss as a measure of the amount of information leaked by the reports at a certain fixed time. We observe that privacy losses are weighted equally across time in the definition of differential privacy, and therefore the magnitude of privacy-preserving additive noise must grow without bound to ensure differential privacy over an infinite horizon. Motivated by discounted utility theory in the economics literature, we use exponential and hyperbolic discounting of privacy losses across time to relax the definition of differential privacy under continual observations. This implies that privacy losses in the distant past are less important to an individual than current ones. We use discounted differential privacy to investigate the privacy of evolving datasets using additive Laplace noise and show that the magnitude of the additive noise can remain bounded under discounted differential privacy. We illustrate the quality of privacy-preserving mechanisms satisfying discounted differential privacy on smart-meter measurement time series of real households, made publicly available by Ausgrid (an Australian electricity distribution company).
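A small numeric illustration of the discounting argument, assuming exponential discounting and a fixed Laplace scale: the undiscounted privacy loss grows linearly with the number of reports, while the discounted loss converges to a finite limit:

```python
# Discounted vs. undiscounted cumulative privacy loss with constant noise.
import numpy as np

gamma, sensitivity, b = 0.9, 1.0, 2.0       # discount, sensitivity, Laplace scale
eps_per_report = sensitivity / b            # per-report loss of the mechanism

horizon = 200
undiscounted = horizon * eps_per_report                  # grows without bound
discounted = sum(gamma ** age for age in range(horizon)) * eps_per_report
limit = eps_per_report / (1 - gamma)                     # geometric-series limit
print(f"undiscounted loss after {horizon} reports: {undiscounted:.1f}")
print(f"discounted loss: {discounted:.2f} (-> {limit:.1f} as T -> infinity)")
```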
Wu, N., Farokhi, F., Smith, D., Kaafar, M. A..  2020.  The Value of Collaboration in Convex Machine Learning with Differential Privacy. 2020 IEEE Symposium on Security and Privacy (SP). :304–317.
In this paper, we apply machine learning to distributed private data owned by multiple data owners, entities with access to non-overlapping training datasets. We use noisy, differentially-private gradients to minimize the fitness cost of the machine learning model using stochastic gradient descent. We quantify the quality of the trained model, using the fitness cost, as a function of privacy budget and size of the distributed datasets to capture the trade-off between privacy and utility in machine learning. This way, we can predict the outcome of collaboration among privacy-aware data owners prior to executing potentially computationally-expensive machine learning algorithms. In particular, we show that the difference between the fitness of the trained machine learning model using differentially-private gradient queries and the fitness of the trained machine learning model in the absence of any privacy concerns is inversely proportional to the square of the size of the training datasets and the square of the privacy budget. We successfully validate the performance prediction with the actual performance of the proposed privacy-aware learning algorithms, applied to financial datasets for determining interest rates of loans using regression, and to detecting credit card fraud using support vector machines.
Lyu, L..  2020.  Lightweight Crypto-Assisted Distributed Differential Privacy for Privacy-Preserving Distributed Learning. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
The emergence of distributed learning allows multiple participants to collaboratively train a global model: instead of directly releasing their private training data to the server, participants iteratively share their local model updates (parameters) with it. However, recent attacks demonstrate that sharing local model updates is not sufficient to provide reasonable privacy guarantees, as local model updates may result in significant privacy leakage about the local training data of participants. To address this issue, in this paper we present an alternative approach that combines distributed differential privacy (DDP) with a three-layer encryption protocol to achieve a better privacy-utility tradeoff than the existing DP-based approaches. An unbiased encoding algorithm is proposed to cope with floating-point values, while largely reducing the mean squared error due to rounding. Our approach dispenses with the need for any trusted server and enables each party to add less noise to achieve the same privacy and similar utility guarantees as centralized differential privacy. Preliminary analysis and performance evaluation confirm the effectiveness of our approach, which achieves significantly higher accuracy than the local differential privacy approach and accuracy comparable to the centralized differential privacy approach.
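A hedged sketch of the distributed-DP intuition (not the paper's three-layer encryption protocol): pairwise masks cancel in the server-side sum, and each party contributes only a 1/n share of the noise power, so the aggregate carries central-DP-level noise without a trusted server:

```python
# Pairwise masking plus distributed noise shares for private aggregation.
import numpy as np

rng = np.random.default_rng(2)
n, sigma_central = 5, 1.0
updates = rng.normal(size=n)                      # each party's true scalar

pair_masks = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        m = rng.normal()                          # secret shared by pair (i, j)
        pair_masks[i, j], pair_masks[j, i] = m, -m   # cancels across the pair

reports = [u + pair_masks[i].sum()                # masked individual value
           + rng.normal(0, sigma_central / np.sqrt(n))  # 1/n of the noise power
           for i, u in enumerate(updates)]

print("true sum:       ", round(float(updates.sum()), 3))
print("server-side sum:", round(float(sum(reports)), 3))  # masks cancel, noise stays
```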
Wang, J., Wang, A..  2020.  An Improved Collaborative Filtering Recommendation Algorithm Based on Differential Privacy. 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). :310–315.
In this paper, a differential privacy protection method is applied to the matrix factorization approach used to solve the recommendation problem. For centralized recommendation scenarios, a collaborative filtering recommendation model based on matrix factorization is established, and a matrix factorization mechanism satisfying ε-differential privacy is proposed. First, the latent feature matrices of users and items are constructed. Second, noise is added to the objective by the method of target disturbance, which satisfies the differential privacy constraint, yielding the noisy matrix factorization model. The parameters of the model are obtained by the stochastic gradient descent algorithm. Finally, the differentially private matrix factorization model is used for rating prediction. The effectiveness of the algorithm is evaluated on public datasets including MovieLens and Netflix. The experimental results show that, compared with existing typical recommendation methods, the new matrix factorization method with privacy protection can make recommendations within a bounded loss of recommendation accuracy while protecting users' private information.
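A minimal sketch of differentially private matrix factorization in the spirit described: SGD on observed ratings with the prediction error disturbed by Laplace noise. The noise placement is a stand-in for the paper's target-disturbance mechanism, and a real deployment would have to account for composition across iterations:

```python
# Noisy-SGD matrix factorization for a tiny rating matrix (0 = unrated).
import numpy as np

rng = np.random.default_rng(3)
R = np.array([[5, 3, 0], [4, 0, 1], [1, 1, 5]], dtype=float)
mask = R > 0
k, lr, lam, epsilon = 2, 0.01, 0.1, 1.0
U, V = rng.normal(0, 0.1, (3, k)), rng.normal(0, 0.1, (3, k))

for _ in range(500):
    for i, j in zip(*np.nonzero(mask)):
        err = R[i, j] - U[i] @ V[j]
        err += rng.laplace(0, 1.0 / epsilon)    # disturb the target (error) term
        U[i] += lr * (err * V[j] - lam * U[i])
        V[j] += lr * (err * U[i] - lam * V[j])
print("predicted R:\n", np.round(U @ V.T, 1))
```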
Dikii, D. I..  2020.  Remote Access Control Model for MQTT Protocol. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :288–291.
The author considers Internet of Things security problems, namely, the organization of secure access control when using the MQTT protocol. Security mechanisms and methods that are employed or supported by the MQTT protocol are analyzed. The protocol employs authentication by login and password, and it supports cryptographic processing of transferred data via the TLS protocol. Third-party services based on the OAuth protocol can be used for authentication. Authorization takes place by configuring ACL files or via third-party services and databases. The author suggests a discretionary access control model for machine-to-machine interaction of devices under the MQTT protocol, based on the HRU model. The model entails six operators: the addition and deletion of a subject, the addition and deletion of an object, and the addition and deletion of access privileges. The access control model is presented in the form of an access matrix and has three types of privileges: read, write, and ownership. The model is composed in a way that makes it compatible with the widespread protocol version v3.1.1. The available message types in the MQTT protocol allow for the adjustment of access privileges. The author considers an algorithm that builds the service data unit so that it can easily be distinguished in the message body. The implementation of the suggested model will minimize the administrator's workload, since devices can determine access privileges to an information resource without human involvement. The author offers recommendations for security policies when organizing an information exchange in accordance with the MQTT protocol.
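A hedged sketch of such an HRU-style access matrix for MQTT topics, with read/write/own privileges and owner-mediated grant/revoke; all names and the ownership rule are illustrative assumptions:

```python
# HRU-style discretionary access matrix over MQTT subjects and topics.
class AccessMatrix:
    def __init__(self):
        self.subjects, self.objects = set(), set()
        self.rights = {}                          # (subject, topic) -> set

    def add_subject(self, s): self.subjects.add(s)
    def del_subject(self, s):
        self.subjects.discard(s)
        self.rights = {k: v for k, v in self.rights.items() if k[0] != s}
    def add_object(self, o): self.objects.add(o)
    def del_object(self, o):
        self.objects.discard(o)
        self.rights = {k: v for k, v in self.rights.items() if k[1] != o}
    def grant(self, owner, s, o, right):          # only an owner may grant
        if "own" in self.rights.get((owner, o), set()):
            self.rights.setdefault((s, o), set()).add(right)
    def revoke(self, owner, s, o, right):         # only an owner may revoke
        if "own" in self.rights.get((owner, o), set()):
            self.rights.get((s, o), set()).discard(right)
    def allowed(self, s, o, right):
        return right in self.rights.get((s, o), set())

m = AccessMatrix()
m.add_subject("sensor-1"); m.add_subject("dashboard"); m.add_object("home/temp")
m.rights[("sensor-1", "home/temp")] = {"own", "write"}   # device owns its topic
m.grant("sensor-1", "dashboard", "home/temp", "read")    # owner grants read
print(m.allowed("dashboard", "home/temp", "read"))       # True -> allow SUBSCRIBE
```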
Huang, K., Yang, T..  2020.  Additive and Subtractive Cuckoo Filters. 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS). :1–10.
Bloom filters (BFs) are fast and space-efficient data structures used for set membership queries in many applications. BFs are required to satisfy three key requirements: low space cost, high-speed lookups, and fast updates. Prior works do not satisfy these requirements at the same time. The standard BF does not support deletion of items, and the variants that support deletion need additional space or incur performance overhead. The state-of-the-art cuckoo filter (CF) has high performance with a seemingly low space cost. However, the CF suffers from a critical issue of varying space cost per item. This is because the exclusive-OR (XOR) operation used by the CF requires the total number of buckets to be a power of two, leading to space inflation. To address this issue, in this paper we propose a scalable variant of the cuckoo filter called the additive and subtractive cuckoo filter (ASCF). We aim to improve space efficiency while sustaining comparably high performance. The ASCF uses addition and subtraction (ADD/SUB) operations instead of the XOR operation to compute an item's two candidate bucket indexes based on its fingerprint. Experimental results show that the ASCF achieves both low space cost and high performance. Compared to the CF, the ASCF reduces the space cost per item by up to 1.9x while maintaining the same lookup and update throughput. In addition, the ASCF outperforms other filters in both space cost and performance.
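A small sketch of the ADD/SUB indexing at the heart of the ASCF: unlike the XOR rule, (i + fp) mod m works for any bucket count m, and subtraction recovers the other candidate bucket during eviction. The hash choices are assumptions:

```python
# ADD/SUB candidate-bucket computation for a cuckoo filter.
import hashlib

m = 12                                        # not a power of two: still fine

def fingerprint(x):
    return int(hashlib.md5(str(x).encode()).hexdigest(), 16) % 255 + 1

def candidates(x):
    i1 = hash(str(x)) % m
    i2 = (i1 + fingerprint(x)) % m            # ADD gives the alternate bucket
    return i1, i2

def partner(i, fp, stored_in_primary):
    # During a cuckoo eviction we know only the current bucket and fingerprint;
    # ADD or SUB (mod m) recovers the other candidate bucket.
    return (i + fp) % m if stored_in_primary else (i - fp) % m

i1, i2 = candidates("alice")
fp = fingerprint("alice")
assert partner(i1, fp, True) == i2 and partner(i2, fp, False) == i1
print("candidate buckets:", i1, i2)
```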
Zhang, H., Zhang, D., Chen, H., Xu, J..  2020.  Improving Efficiency of Pseudonym Revocation in VANET Using Cuckoo Filter. 2020 IEEE 20th International Conference on Communication Technology (ICCT). :763–769.
In VANETs, pseudonyms are often used to replace the identities of vehicles in communication. When vehicles drive out of the network or misbehave, their pseudonym certificates need to be revoked by the certificate authority (CA). Certificate revocation lists (CRLs) are usually used to store the revoked certificates before their expiration. However, using CRLs incurs additional storage, communication, and computation overhead. Some existing schemes have proposed to use a Bloom Filter to compress the original CRLs, but they are unable to delete expired certificates and introduce the false-positive problem. In this paper, we propose an improved pseudonym certificate revocation scheme that uses a Cuckoo Filter for compression to reduce the impact of these problems. To optimize deletion efficiency, we propose the concept of a Certificate Expiration List (CEL), which can be implemented with a priority queue. The experimental results show that our scheme can effectively reduce the storage and communication overhead of pseudonym certificate revocation, while retaining moderately low false-positive rates. In addition, our scheme can also greatly improve lookup performance on CRLs, and it reduces revocation operation costs by allowing deletion.
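A hedged sketch of the CEL idea: revoked certificates enter a deletion-capable filter, and a min-heap keyed by expiry time drives their removal once expired. A plain set stands in for the cuckoo filter; names are illustrative:

```python
# Certificate Expiration List (CEL) as a priority queue driving filter deletion.
import heapq

revoked = set()                      # stand-in for a deletion-capable cuckoo filter
cel = []                             # min-heap of (expiry_time, cert_id): the CEL

def revoke(cert_id, expiry_time):
    revoked.add(cert_id)
    heapq.heappush(cel, (expiry_time, cert_id))

def purge_expired(now):
    while cel and cel[0][0] <= now:  # only the earliest expiries are inspected
        _, cert_id = heapq.heappop(cel)
        revoked.discard(cert_id)     # a cuckoo filter supports true deletion

def is_revoked(cert_id):
    return cert_id in revoked

revoke("pseudo-42", expiry_time=100)
revoke("pseudo-77", expiry_time=300)
purge_expired(now=150)
print(is_revoked("pseudo-42"), is_revoked("pseudo-77"))   # False True
```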
Awad, M. A., Ashkiani, S., Porumbescu, S. D., Owens, J. D..  2020.  Dynamic Graphs on the GPU. 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS). :739–748.
We present a fast dynamic graph data structure for the GPU. Our dynamic graph structure uses one hash table per vertex to store adjacency lists and achieves 3.4-14.8x faster insertion rates over the state of the art across a diverse set of large datasets, as well as deletion speedups up to 7.8x. The data structure supports queries and dynamic updates through both edge and vertex insertion and deletion. In addition, we define a comprehensive evaluation strategy based on operations, workloads, and applications that we believe better characterize and evaluate dynamic graph data structures.
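A CPU analogue of the per-vertex hash-table design (the paper's GPU memory layout is not reproduced here): each vertex owns a hash set of neighbors, so edge and vertex insertions and deletions run in expected constant time:

```python
# Dynamic graph with one hash set per vertex for its adjacency list.
class DynamicGraph:
    def __init__(self):
        self.adj = {}                           # vertex -> set of neighbors

    def add_vertex(self, v): self.adj.setdefault(v, set())
    def del_vertex(self, v):
        for u in self.adj.pop(v, set()):        # detach v from all neighbors
            self.adj[u].discard(v)
    def add_edge(self, u, v):
        self.add_vertex(u); self.add_vertex(v)
        self.adj[u].add(v); self.adj[v].add(u)
    def del_edge(self, u, v):
        self.adj.get(u, set()).discard(v); self.adj.get(v, set()).discard(u)
    def neighbors(self, v): return self.adj.get(v, set())

g = DynamicGraph()
g.add_edge(1, 2); g.add_edge(1, 3)
g.del_edge(1, 2); g.del_vertex(3)
print(g.neighbors(1))                           # set()
```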
Cao, S., Zou, J., Du, X., Zhang, X..  2020.  A Successive Framework: Enabling Accurate Identification and Secure Storage for Data in Smart Grid. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
Due to malicious eavesdropping, forgery, and other risks, it is challenging to process and store power data collected from the smart grid in a secure manner. Blockchain technology has become a novel method to solve these problems because of its decentralization and tamper-proof characteristics. It is well known that data stored in a blockchain cannot be changed, so it is vital to find mechanisms that ensure data are of high quality (namely, that the power data are accurate) before being stored in the blockchain; this avoids the losses that would arise from needing to modify or delete low-quality data in the smart grid. Thus, we apply parallel vision theory to the identification of meter readings to obtain accurate power data. A cloud-blockchain fusion model (CBFM) is proposed for the storage of accurate power data, allowing flexible transactions to be conducted securely. Only power data calculated by the parallel visual system, rather than the image data originally collected via robots, are stored in the blockchain. Hence, we define quality assurance before data are uploaded to the blockchain and security guarantees after data are stored in the blockchain as a successive framework, a brand-new solution for managing efficiency and security as a whole for power data and similar data in other scenarios. Security analysis and performance evaluations are performed, which prove that CBFM is highly secure and efficient.
Kuperberg, M..  2020.  Towards Enabling Deletion in Append-Only Blockchains to Support Data Growth Management and GDPR Compliance. 2020 IEEE International Conference on Blockchain (Blockchain). :393–400.
Conventional blockchain implementations with append-only semantics do not support deleting or overwriting data in confirmed blocks. However, many industry-relevant use cases require the ability to delete data, especially when personally identifiable information is stored or when data growth has to be constrained. Existing attempts to reconcile these contradictions compromise on core qualities of the blockchain paradigm, as they include backdoor-like approaches such as central authorities with elevated rights or the use of specialized chameleon hash algorithms in the chaining of blocks. The contribution of this paper is a novel architecture for the blockchain ledger and consensus, which uses a tree of context chains with simultaneous validity. A context chain captures the transactions of a closed group of entities and persons, thus structuring blocks in a precisely defined way. The resulting context isolation enables consensus-steered deletion of an entire context without side effects on other contexts. We show how this architecture supports truncation, data rollover, and separation of concerns, how the GDPR regulations can be fulfilled by this architecture, and how it differs from sidechains and state channels.
Tiwari, P., Skanda, C. S., Sanjana, U., Aruna, S., Honnavalli, P..  2020.  Secure Wipe Out in BYOD Environment. 2020 International Workshop on Big Data and Information Security (IWBIS). :109–114.
Bring Your Own Device (BYOD) is a new trend in which employees use their personal devices to connect to their organization's networks to access sensitive information and work-related systems. One of the primary challenges in BYOD is to securely delete company data when an employee leaves the organization. In common BYOD programs, the personal device in use is completely wiped, which may delete personal data during exit procedures. Due to performance and deletion latency, erasure of data in most file systems today amounts to unlinking the file location and marking its data blocks as unused. This may satisfy a normal user trying to delete unwanted files, but the file content is not erased from the data blocks and can be retrieved with the help of various data recovery and forensic tools. In this paper, we discuss: (1) existing work related to secure deletion, and (2) secure and selective deletion methods that delete only the required files or directories without tampering with personal data. We present two per-file deletion methods, overwriting data and encryption-based deletion, which erase specific files securely. Our proposed per-file deletion methods reduce the latency and performance overheads caused by overwriting an entire disk.
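A hedged sketch of the two per-file methods named above: overwriting a file's bytes in place before unlinking, and encryption-based deletion (crypto-shredding) where only the key is destroyed. Pass counts and key handling are assumptions, and real designs must also cover file-system copies and flash wear-leveling (requires the third-party cryptography package):

```python
# Two per-file secure-deletion sketches: in-place overwrite and crypto-shredding.
import os
from cryptography.fernet import Fernet

def overwrite_delete(path, passes=3):
    """Method 1: overwrite the file's blocks in place, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())       # push the overwrites to the device
    os.remove(path)

keystore = {}                           # per-file keys, kept apart from the data

def encrypted_write(path, data: bytes):
    """Store the file encrypted so it can later be crypto-shredded."""
    keystore[path] = Fernet.generate_key()
    with open(path, "wb") as f:
        f.write(Fernet(keystore[path]).encrypt(data))

def crypto_delete(path):
    """Method 2: destroy the key; the ciphertext becomes unrecoverable."""
    del keystore[path]
    os.remove(path)                     # the unlink itself can even be lazy

encrypted_write("report.bin", b"company confidential")
crypto_delete("report.bin")
```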
Wang, W.-C., Ho, C.-C., Chang, Y.-M., Chang, Y.-H..  2020.  Challenges and Designs for Secure Deletion in Storage Systems. 2020 Indo – Taiwan 2nd International Conference on Computing, Analytics and Networks (Indo-Taiwan ICAN). :181–189.
Data security has risen to be one of the most critical concerns of computer professionals. Tighter legal requirements now exist for protecting user data from unauthorized use and for preserving as well as erasing/sanitizing data records to meet compliance requirements. To meet these data security requirements, many secure (data) deletion techniques have been proposed to address data security concerns at different system layers. This paper surveys state-of-the-art secure deletion techniques designed to pursue higher efficiency, verifiability, and portability for emerging types of hard disk drives and flash-based solid-state drives. The pros and cons of implementing secure deletion at different system layers are also discussed, so as to assist in pursuing better secure deletion designs for future storage systems.
Žulj, S., Delija, D., Sirovatka, G..  2020.  Analysis of secure data deletion and recovery with common digital forensic tools and procedures. 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO). :1607–1610.
This paper presents how student practicals are developed and used for the important tasks forensic specialists have to perform when using common digital forensic tools for data deletion and data recovery from various types of digital media and live systems. Digital forensic tools like EnCase, FTK Imager, BlackLight, and open-source tools are discussed in the developed practical scenarios. The paper shows how these tools can be used to train students and enhance their understanding of the capabilities and limitations of digital forensic tools in uncommon digital forensic scenarios. The practicals encourage students to use digital forensic tools efficiently in the various professional scenarios they will encounter.
2020-12-28
Padmapriya, S., Valli, R., Jayekumar, M..  2020.  Monitoring Algorithm in Malicious Vehicular Adhoc Networks. 2020 International Conference on System, Computation, Automation and Networking (ICSCAN). :1–6.

Vehicular Adhoc Networks (VANETs) ensure road safety through communication among a set of smart vehicles. VANET is a subset of Mobile Adhoc Networks (MANETs). VANET-enabled vehicles help establish communication services among one another or with the Road Side Unit (RSU). Information transmitted in a VANET is distributed in an open-access environment, and hence security is one of the most critical issues related to VANETs. Although no single vehicle is the source of all communications, most contact depends on the information that other vehicles receive from it, and each vehicle must be able to assess, decide, and respond locally to the information obtained from other vehicles to protect the VANET from malicious acts. For this reason, message verification in VANETs is difficult, owing to the protection and privacy issues of the participating vehicles. To overcome security threats, we propose a Monitoring Algorithm that detects malicious nodes based on a pre-selected threshold value. The threshold value is compared with the distrust value inherently tagged to each vehicle. The proposed Monitoring Algorithm not only detects malicious vehicles but also isolates them from the network. The proposed technique is simulated using the Network Simulator 2 (NS2) tool. The simulation results illustrate that the proposed Monitoring Algorithm outperforms existing algorithms in terms of malicious node detection, network delay, packet delivery ratio, and throughput, thereby uplifting the overall performance of the network.
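A minimal sketch of the thresholding logic described, under stated assumptions: each vehicle carries a distrust value, and crossing the pre-selected threshold isolates the node. The update rule and constants are illustrative:

```python
# Distrust-threshold monitoring: flag and isolate misbehaving vehicles.
THRESHOLD = 0.5

distrust = {"v1": 0.0, "v2": 0.0, "v3": 0.0}
isolated = set()

def report_misbehavior(node, severity=0.2):
    distrust[node] = min(1.0, distrust[node] + severity)
    if distrust[node] >= THRESHOLD:
        isolated.add(node)                 # drop/ignore traffic from this node

def accept_message(sender):
    return sender not in isolated

for _ in range(3):
    report_misbehavior("v2")               # repeated bad reports about v2
print(sorted(isolated), accept_message("v2"), accept_message("v1"))
```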

Tojiboev, R., Lee, W., Lee, C. C..  2020.  Adding Noise Trajectory for Providing Privacy in Data Publishing by Vectorization. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :432–434.

Since trajectory data is widely collected and utilized for scientific research and business purposes, publishing trajectories without a proper privacy policy leads to an acute threat to individual data. Recently, several methods, i.e., k-anonymity, l-diversity, and t-closeness, have been studied, though they tend to protect privacy by reducing the data, depending on the features of each method. When strong privacy protection is required, these methods reduce data utility so much that the results of scientific research may be affected. In this research, we suggest a novel approach to tackle this existing dilemma by adding a noise trajectory in a vector-based grid environment.
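A hedged sketch of the suggested approach as the abstract describes it: map a trajectory onto a grid, express it as cell-to-cell movement vectors, and perturb those vectors to produce a noise trajectory. The grid size and perturbation rule are assumptions:

```python
# Vectorize a grid trajectory and build a perturbed noise trajectory.
import random

CELL = 0.01                                   # grid resolution in degrees

def to_cells(points):
    return [(round(lat / CELL), round(lon / CELL)) for lat, lon in points]

def vectorize(cells):                         # movement vectors between cells
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(cells, cells[1:])]

def noise_trajectory(cells):
    out = [cells[0]]
    for dx, dy in vectorize(cells):
        dx += random.choice((-1, 0, 1))       # perturb each movement vector
        dy += random.choice((-1, 0, 1))
        out.append((out[-1][0] + dx, out[-1][1] + dy))
    return out

real = to_cells([(40.712, -74.006), (40.713, -74.004), (40.715, -74.003)])
print("real cells :", real)
print("noise cells:", noise_trajectory(real))
```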