Biblio

Found 553 results

Filters: Keyword is Computational modeling
2022-08-03
Le, Van Thanh, El Ioini, Nabil, Pahl, Claus, Barzegar, Hamid R., Ardagna, Claudio.  2021.  A Distributed Trust Layer for Edge Infrastructure. 2021 Sixth International Conference on Fog and Mobile Edge Computing (FMEC). :1—8.
Recently, Mobile Edge Cloud computing (MEC) has attracted attention from both academia and industry. The idea of moving part of the cloud resources closer to users and data sources can bring many advantages in terms of speed, data traffic, security and context-aware services. The MEC infrastructure not only hosts and serves applications next to the end-users; services can also be dynamically migrated and reallocated as mobile users move, in order to guarantee latency and performance constraints. This specific requirement calls for the involvement and collaboration of multiple MEC providers, which raises a major issue related to trustworthiness. Two main challenges need to be addressed: i) trustworthiness needs to be handled in a manner that does not affect latency or performance, and ii) trustworthiness must be considered in different dimensions - not only security metrics but also performance and quality metrics in general. In this paper, we propose a trust layer for public MEC infrastructure that handles establishing and updating trust relations among all MEC entities, making the interaction within a MEC network transparent. First, we define trust attributes affecting the trusted quality of the entire infrastructure, and then present a methodology with a computation model that combines these trust attribute values. Our experiments show that the trust model allows us to reduce latency by removing the burden from a single MEC node, while at the same time increasing network trustworthiness.
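Note (illustrative): the abstract does not give the exact aggregation rule, but a minimal sketch of how per-attribute trust values might be combined into a single node score could look as follows. The attribute names, weights, and smoothing factor are hypothetical, not the authors' model.

```python
# Hypothetical sketch: combine per-attribute trust values into one node trust score.
# Attribute names, weights and the smoothing rule are illustrative assumptions only.
ATTRIBUTE_WEIGHTS = {"latency": 0.3, "availability": 0.3, "security": 0.4}

def node_trust(attribute_scores: dict, prior: float = 0.5, alpha: float = 0.7) -> float:
    """Weighted aggregation of trust attributes, smoothed against the previous trust value."""
    observed = sum(ATTRIBUTE_WEIGHTS[a] * attribute_scores[a] for a in ATTRIBUTE_WEIGHTS)
    return alpha * observed + (1 - alpha) * prior  # exponential smoothing of trust

print(node_trust({"latency": 0.9, "availability": 0.8, "security": 0.95}))
```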
Laputenko, Andrey.  2021.  Assessing Trustworthiness of IoT Applications Using Logic Circuits. 2021 IEEE East-West Design & Test Symposium (EWDTS). :1—4.
The paper describes a methodology for assessing non-functional requirements, such as trust characteristics, for applications running on computationally constrained devices in the Internet of Things. The methodology is demonstrated through an example of a microcontroller-based temperature monitoring system. The concepts of trust and trustworthiness for software and devices of the Internet of Things are complex characteristics describing the correct and secure operation of such systems, and include aspects of operational and information security, reliability, resilience and privacy. Machine learning models, which have increasingly been used for such tasks in recent years, are resource-consuming software implementations. The paper proposes to use a logic circuit model to implement the above algorithms as an additional module that checks the trustworthiness of applications running on computationally constrained devices. Such a module could be implemented in hardware, for example on an FPGA, to achieve greater efficiency.
2022-07-15
Luo, Yun, Chen, Yuling, Li, Tao, Wang, Yilei, Yang, Yixian.  2021.  Using information entropy to analyze secure multi-party computation protocol. 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :312—318.

Secure multi-party computation (SMPC) is an important research field in cryptography with a wide range of practical applications. Accordingly, information security issues have arisen. Addressing security issues in SMPC, we consider that semi-honest participants may perform malicious operations such as collusion during information interaction, gaining an information advantage over honest parties and causing the protocol's security to deviate. To solve this problem, we use information entropy to propose an n-round information exchange protocol, in which each participant broadcasts a relevant information value in each round without revealing additional information. Because of how the uncertainty of the correct result value changes across rounds of exchanged information, no participant can determine the correct result value before the end of the protocol. Security analysis shows that our protocol guarantees the security of the output obtained by the participants after the completion of the protocol.
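Note (illustrative): as background for the entropy argument, the sketch below tracks the Shannon entropy of the candidate result value across rounds. The distributions are made-up examples, not the paper's protocol.

```python
import math

def shannon_entropy(prob_dist):
    """Shannon entropy H(X) = -sum p*log2(p) of a probability distribution."""
    return -sum(p * math.log2(p) for p in prob_dist if p > 0)

# Made-up example: uncertainty about the correct result value over protocol rounds.
rounds = [
    [0.25, 0.25, 0.25, 0.25],  # before the protocol: maximal uncertainty (2 bits)
    [0.4, 0.3, 0.2, 0.1],      # intermediate round: some information revealed
    [1.0, 0.0, 0.0, 0.0],      # after the final round: result fully determined (0 bits)
]
for i, dist in enumerate(rounds):
    print(f"round {i}: H = {shannon_entropy(dist):.3f} bits")
```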

Zhang, Dayin, Chen, Xiaojun, Shi, Jinqiao, Wang, Dakui, Zeng, Shuai.  2021.  A Differential Privacy Collaborative Deep Learning Algorithm in Pervasive Edge Computing Environment. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :347—354.
With the development of 5G technology and intelligent terminals, the future direction of Industrial Internet of Things (IIoT) evolution is Pervasive Edge Computing (PEC). In the pervasive edge computing environment, intelligent terminals can perform calculations and data processing. By migrating part of the original cloud computing model's calculations to intelligent terminals, an intelligent terminal can complete model training without uploading local data to a remote server. Pervasive edge computing solves the problem of data islands and has also been successfully applied in scenarios such as vehicle interconnection and video surveillance. However, pervasive edge computing faces serious security problems. Suppose the remote server is honest but curious. In that case, it can still design algorithms for the intelligent terminal to execute and infer sensitive content, such as identity data and private pictures, from the information returned by the intelligent terminal. In this paper, we study the problem of honest-but-curious remote servers infringing on intelligent terminal privacy and propose a differential privacy collaborative deep learning algorithm for the pervasive edge computing environment. We use a Gaussian mechanism that satisfies the differential privacy guarantee to add noise to the first layer of the neural network to protect the data of the intelligent terminal, and use analytical moments accountant technology to track the cumulative privacy loss. Experiments show that, with the Gaussian mechanism, the training data of intelligent terminals can be protected with only a small reduction in accuracy.
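Note (illustrative): a minimal sketch of the Gaussian mechanism applied to a first-layer output, assuming the activations are clipped to a sensitivity bound before noise is added. The shapes, clipping norm and noise scale are illustrative; the paper's exact noise placement and moments-accountant bookkeeping are not reproduced here.

```python
import numpy as np

def gaussian_mechanism(first_layer_output, clip_norm=1.0, sigma=1.0, rng=np.random.default_rng(0)):
    """Clip the first-layer activations to bound sensitivity, then add calibrated Gaussian noise."""
    norm = np.linalg.norm(first_layer_output)
    clipped = first_layer_output * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped.shape)
    return clipped + noise

x = np.random.rand(16, 64)          # hypothetical batch of first-layer activations
protected = gaussian_mechanism(x)   # what would be shared with the server in this sketch
```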
Aggarwal, Pranjal, Kumar, Akash, Michael, Kshitiz, Nemade, Jagrut, Sharma, Shubham, C, Pavan Kumar.  2021.  Random Decision Forest approach for Mitigating SQL Injection Attacks. 2021 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). :1—5.
Structured Query Language (SQL) is extensively used for storing, manipulating and retrieving information in relational database management systems. Using SQL statements, attackers try to gain unauthorized access to databases and launch attacks to modify or retrieve the stored data; such attacks are called SQL injection attacks. SQL Injection (SQLi) attacks have long topped the list of web application security risks. Identifying and mitigating potential SQL attack statements before their execution can prevent SQLi attacks. Various techniques have been proposed in the literature to mitigate SQLi attacks. In this paper, a random decision forest approach is introduced to mitigate SQLi attacks. From the experimental results, we can infer that the proposed approach achieves a precision of 97% and an accuracy of 95%.
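Note (illustrative): a compact sketch of the general approach, assuming character n-gram features over query strings and scikit-learn's RandomForestClassifier. The features, samples and hyperparameters are illustrative, not the paper's setup.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Tiny illustrative samples; a real study would use a labeled SQLi corpus.
queries = [
    "SELECT name FROM users WHERE id = 42",
    "SELECT * FROM items WHERE price < 10",
    "' OR '1'='1' --",
    "1; DROP TABLE users; --",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = SQL injection

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams of the query text
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(queries, labels)
print(model.predict(["admin' OR 1=1 --"]))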
Pengwei, Ma, Kai, Wei, Chunyu, Jiang, Junyi, Li, Jiafeng, Tian, Siyuan, Liu, Minjing, Zhong.  2021.  Research on Evaluation System of Relational Cloud Database. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1369—1373.
With the continuous emergence of cloud computing technology, cloud infrastructure software will become the mainstream application model in the future. Among databases, relational databases occupy the largest market share. Therefore, the relational cloud database will be the main product of the combination of database technology and cloud computing technology, and will become an important branch of the database industry. This article explores the establishment of an evaluation system framework for relational cloud databases, helping enterprises select relational cloud database products according to a clear goal and path and thus successfully implement relational cloud database projects.
Yuan, Rui, Wang, Xinna, Xu, Jiangmin, Meng, Shunmei.  2021.  A Differential-Privacy-based hybrid collaborative recommendation method with factorization and regression. 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :389—396.
Recommender systems have proved to be effective techniques for providing users with better experiences. However, when a recommender knows a user's preference characteristics or obtains their sensitive information, a series of privacy concerns is raised. A number of solutions have been proposed in the literature to enhance the privacy protection of recommender systems. Although the existing solutions enhance protection, they simultaneously decrease recommendation accuracy. In this paper, we propose a security-aware hybrid recommendation method that combines factorization and regression techniques. Specifically, the differential privacy mechanism is integrated into data pre-processing for data encryption. First, data are perturbed to satisfy differential privacy and transported to the recommender. Then the recommender calculates the aggregated data. However, applying differential privacy raises utility issues of low recommendation accuracy, and the use of a single model may cause overfitting. In order to tackle this challenge, we adopt a fusion prediction model combining linear regression (LR) and matrix factorization (MF) for collaborative recommendation. With the MovieLens dataset, we evaluate the recommendation accuracy and regression performance of our recommender system and demonstrate that our system performs better than existing recommender systems under privacy requirements.
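Note (illustrative): a sketch of the fusion idea, combining a matrix-factorization prediction with a linear-regression prediction on simple user/item features. The equal-weight blend and the feature construction are assumptions for illustration; the paper's differential-privacy perturbation step is omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(20, 15)).astype(float)   # toy user-item rating matrix

# Matrix factorization via truncated SVD: R ~ U_k @ V_k
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 5
mf_pred = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Linear regression on simple user-mean / item-mean features (illustrative features only)
user_mean = R.mean(axis=1, keepdims=True).repeat(R.shape[1], axis=1)
item_mean = R.mean(axis=0, keepdims=True).repeat(R.shape[0], axis=0)
X = np.stack([user_mean.ravel(), item_mean.ravel()], axis=1)
lr = LinearRegression().fit(X, R.ravel())
lr_pred = lr.predict(X).reshape(R.shape)

fused = 0.5 * mf_pred + 0.5 * lr_pred   # assumed equal-weight fusion of MF and LR predictions
```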
Wang, Shilei, Wang, Hui, Yu, Hongtao, Zhang, Fuzhi.  2021.  Detecting shilling groups in recommender systems based on hierarchical topic model. 2021 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :832—837.
In a group shilling attack, attackers work collaboratively to inject fake profiles aiming to obtain a desired recommendation result. This type of attack is more harmful to recommender systems than individual shilling attacks. Previous studies pay much attention to detecting individual attackers, and little work has been done on the detection of shilling groups. In this work, we introduce a topic modeling method from natural language processing into shilling attack detection and propose a shilling group detection method based on a hierarchical topic model. First, we model the given dataset as a series of user rating documents and use the hierarchical topic model to learn the specific topic distributions of each user from these rating documents to describe user rating behaviors. Second, we divide candidate groups based on rating value and rating time, which are not involved in the hierarchical topic model. Lastly, we calculate group suspicion degrees according to several indicators derived from the analysis of user rating distributions, and use the k-means clustering algorithm to distinguish shilling groups. The experimental results on the Netflix and Amazon datasets show that the proposed approach performs better than baseline methods.
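Note (illustrative): a minimal sketch of the final step, clustering candidate groups by their suspicion indicators with k-means. The indicator names and values are invented for illustration; the hierarchical topic modeling step is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is a candidate group described by illustrative suspicion indicators,
# e.g. [rating deviation, rating-time concentration, topic-distribution similarity].
group_indicators = np.array([
    [0.90, 0.80, 0.85],   # likely shilling group
    [0.10, 0.20, 0.15],   # likely genuine group
    [0.85, 0.90, 0.80],
    [0.20, 0.10, 0.20],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(group_indicators)
# Flag the cluster whose centroid has the higher mean indicator value as the shilling cluster.
shilling_cluster = kmeans.cluster_centers_.mean(axis=1).argmax()
print(kmeans.labels_ == shilling_cluster)
```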
2022-07-14
De, Rohit, Moberly, Raymond, Beery, Colton, Juybari, Jeremy, Sundqvist, Kyle.  2021.  Multi-Qubit Size-Hopping Deutsch-Jozsa Algorithm with Qubit Reordering for Secure Quantum Key Distribution. 2021 IEEE International Conference on Quantum Computing and Engineering (QCE). :473—474.
As a classic quantum computing implementation, the Deutsch-Jozsa (DJ) algorithm is taught in many courses pertaining to quantum information science and technology (QIST). We exploit the DJ framework as an educational testbed, illustrating fundamental qubit concepts while identifying associated algorithmic challenges. In this work, we present a self-contained exploration which may be beneficial in educating the future quantum workforce. Quantum Key Distribution (QKD), an improvement over the classical Public Key Infrastructure (PKI), allows two parties, Alice and Bob, to share a secret key by using quantum physical properties. For QKD, the DJ-packets, consisting of the input qubits and the target qubit for the DJ algorithm, carry the secret information between Alice and Bob. Previous research by Nagata and Nakamura in 2015 showed that the DJ algorithm for QKD allows an attacker to successfully intercept and remain undetected. Improving upon this past research, we increase the entropy of DJ-packets through: (i) size hopping (H), where the number of qubits in consecutive DJ-packets keeps changing, and (ii) reordering (R) the qubits within the DJ-packets. These concepts together illustrate the multiple scales at which entropy may be increased in a DJ algorithm to make for a more robust QKD framework, and therefore significantly decrease Eve's chance of success. The proof of concept of the new schemes is tested on Google's Cirq quantum simulator, and detailed Python simulations show that the attacker's interception success rate can be drastically reduced.
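Note (illustrative): a minimal Deutsch-Jozsa circuit in Cirq for a constant oracle, showing only the baseline building block; the size-hopping and qubit-reordering schemes described in the abstract are not reproduced here.

```python
import cirq

n = 3                                    # number of input qubits (illustrative)
inputs = cirq.LineQubit.range(n)
target = cirq.LineQubit(n)

circuit = cirq.Circuit()
circuit.append(cirq.X(target))                          # prepare the target qubit in |1>
circuit.append(cirq.H(q) for q in inputs + [target])    # Hadamard on all qubits
# Oracle for the constant function f(x) = 0: the identity, so no gates are needed.
circuit.append(cirq.H(q) for q in inputs)
circuit.append(cirq.measure(*inputs, key="result"))

result = cirq.Simulator().run(circuit, repetitions=100)
# All-zero measurement outcomes indicate a constant oracle.
print(result.histogram(key="result"))
```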
2022-07-13
Angelogianni, Anna, Politis, Ilias, Polvanesi, Pier Luigi, Pastor, Antonio, Xenakis, Christos.  2021.  Unveiling the user requirements of a cyber range for 5G security testing and training. 2021 IEEE 26th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1—6.
Cyber ranges have proven to be effective for cyber security training. Nevertheless, the existing literature on cyber ranges does not cover, to the best of our knowledge, the field of 5G security training. 5G networks, though, represent a significant field for modern cyber security, introducing a novel threat landscape. In parallel, the demand for skilled cyber security specialists is high and still rising. Therefore, it is of utmost importance to provide all means to experts aiming to increase their preparedness level in the case of an unwanted event. The EU-funded SPIDER project proposes an innovative Cyber Range as a Service (CRaaS) platform for 5G cyber security testing and training. This paper presents the evaluation framework followed by SPIDER for the extraction of user requirements. To validate the defined user requirements, SPIDER leveraged questionnaires which included both closed- and open-format questions and were circulated among the personnel of telecommunication providers, vendors, security service providers, managers, engineers, cyber security personnel and researchers. Here, we demonstrate a selected set of the most critical questions and responses received. From the conducted analysis, we reach some important conclusions regarding the 5G testing and training capabilities that should be offered by a cyber range, in addition to analyzing the different perceptions between cyber security and 5G experts.
2022-07-12
Farrukh, Yasir Ali, Ahmad, Zeeshan, Khan, Irfan, Elavarasan, Rajvikram Madurai.  2021.  A Sequential Supervised Machine Learning Approach for Cyber Attack Detection in a Smart Grid System. 2021 North American Power Symposium (NAPS). :1—6.
Modern smart grid systems are heavily dependent on Information and Communication Technology, and this dependency makes them prone to cyber-attacks. The occurrence of cyber-attacks has increased in recent years, resulting in substantial damage to power systems. For reliable and stable operation, cyber protection, control, and detection techniques are becoming essential. Automated detection of cyberattacks with high accuracy is a challenge. To address this, we propose a two-layer hierarchical machine learning model with an accuracy of 95.44% to improve the detection of cyberattacks. The first layer of the model is used to distinguish between the two modes of operation: normal state or cyberattack. The second layer is used to classify the state into different types of cyberattacks. The layered approach provides an opportunity for the model to focus its training on the targeted task of each layer, resulting in improved model accuracy. To validate the effectiveness of the proposed model, we compared its performance against other recent cyber attack detection models proposed in the literature.
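Note (illustrative): a sketch of the two-layer idea - one classifier separates normal operation from attack, and a second classifier, trained only on attack samples, assigns the attack type. The feature dimensions, labels, and choice of random forests are illustrative assumptions, not the paper's exact models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                 # toy measurement features
is_attack = rng.integers(0, 2, size=300)       # layer-1 labels: 0 = normal, 1 = attack
attack_type = rng.integers(0, 3, size=300)     # layer-2 labels, meaningful only for attack samples

layer1 = RandomForestClassifier(random_state=0).fit(X, is_attack)
layer2 = RandomForestClassifier(random_state=0).fit(X[is_attack == 1], attack_type[is_attack == 1])

def classify(sample):
    """Layer 1 decides normal vs. attack; layer 2 names the attack type."""
    sample = sample.reshape(1, -1)
    if layer1.predict(sample)[0] == 0:
        return "normal"
    return f"attack type {layer2.predict(sample)[0]}"

print(classify(X[0]))
```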
2022-07-05
Barros, Bettina D., Venkategowda, Naveen K. D., Werner, Stefan.  2021.  Quickest Detection of Stochastic False Data Injection Attacks with Unknown Parameters. 2021 IEEE Statistical Signal Processing Workshop (SSP). :426—430.
This paper considers a multivariate quickest detection problem with false data injection (FDI) attacks in Internet of Things (IoT) systems. We derive a sequential generalized likelihood ratio test (GLRT) for zero-mean Gaussian FDI attacks. Exploiting the fact that covariance matrices are positive semi-definite, we propose strategies to detect positive semi-definite matrix additions rather than arbitrary changes in the covariance matrix. The distribution of the GLRT is only known asymptotically, whereas quickest detectors deal with short sequences, thereby leading to a loss of performance. Therefore, we use a finite-sample correction to reduce the false alarm rate. Further, we provide a numerical approach to estimate the threshold sequences, which are analytically intractable to compute. We also compare the average detection delay of the proposed detector for constant and varying threshold sequences. Simulations show that the proposed detector outperforms the standard sequential GLRT detector.
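Note (background, not the paper's specific statistic): a generalized likelihood ratio test for quickest change detection typically takes the textbook CUSUM-style form below, where the unknown post-change parameter and the change time are maximized out.

```latex
\[
  G_k \;=\; \max_{1 \le j \le k}\; \sup_{\theta \in \Theta_1}
  \sum_{i=j}^{k} \log \frac{f_{\theta}(x_i)}{f_{\theta_0}(x_i)},
  \qquad
  T \;=\; \inf\{\, k : G_k \ge h_k \,\},
\]
```

Here $f_{\theta_0}$ is the pre-change (no-attack) density, $\Theta_1$ the set of post-change (attack) parameters, and $h_k$ the threshold sequence whose finite-sample calibration the paper addresses.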
Park, Ho-rim, Hwang, Kyu-hong, Ha, Young-guk.  2021.  An Object Detection Model Robust to Out-of-Distribution Data. 2021 IEEE International Conference on Big Data and Smart Computing (BigComp). :275—278.
Most studies of existing object detection models focus on better detecting the objects that should be detected. The problem of falsely detecting objects that should not be detected is not considered. When an object detection model that does not take this problem into account is applied in an industrial setting close to humans, false detections can lead to dangerous situations that greatly interfere with human life. To solve this false detection problem, this paper proposes a method of fine-tuning the backbone neural network of the object detection model using the Outlier Exposure method and applying a class-specific uncertainty constant to the confidence score to detect objects.
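Note (illustrative): a small sketch of the described confidence adjustment, where a per-class uncertainty constant penalizes the raw confidence score before thresholding. The constants and the exact combination rule are assumptions; the Outlier Exposure fine-tuning step is not shown.

```python
# Hypothetical per-class uncertainty constants (larger = class more often confused
# with out-of-distribution inputs). Values are invented for illustration only.
CLASS_UNCERTAINTY = {"person": 0.05, "forklift": 0.20, "pallet": 0.15}

def adjusted_score(class_name: str, confidence: float) -> float:
    """Penalize the detector's confidence by a class-specific uncertainty constant."""
    return confidence * (1.0 - CLASS_UNCERTAINTY[class_name])

detections = [("person", 0.92), ("forklift", 0.55), ("pallet", 0.80)]
kept = [(c, s) for c, s in detections if adjusted_score(c, s) >= 0.5]
print(kept)   # detections surviving the uncertainty-adjusted threshold
```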
2022-06-13
Dutta, Aritra, Bose, Rajesh, Chakraborty, Swarnendu Kumar, Roy, Sandip, Mondal, Haraprasad.  2021.  Data Security Mechanism for Green Cloud. 2021 Innovations in Energy Management and Renewable Resources(52042). :1–4.
Data and veracious information are important assets of any organization and require special care, like any other organizational asset. The main goal of a cloud computing system is to provide services to users, such as high-speed access to user data for storage and retrieval. A big concern now is data protection in cloud computing technology, because data leaks and various malicious attacks have occurred in cloud computing environments. This study provides user data protection for cloud storage. The article presents the architecture of a hybrid data security infrastructure that stores user data and protects it from unauthenticated users. In this hybrid model, we combine different types of security mechanisms.
Fan, Teah Yi, Rana, Muhammad Ehsan.  2021.  Facilitating Role of Cloud Computing in Driving Big Data Emergence. 2021 Third International Sustainability and Resilience Conference: Climate Change. :524–529.
Big data emerges as an important technology that addresses the storage, processing and analytics aspects of massive data characterized by the 5V's (volume, velocity, variety, veracity, value), which have grown exponentially beyond the handling capacity of traditional data architectures. The most significant technologies include parallel storage and processing frameworks, which require entirely new IT infrastructures to facilitate big data adoption. Cloud computing emerges as a successful paradigm in computing technology that has shifted the business landscape of IT infrastructures towards a service-oriented basis. Cloud service providers build IT infrastructures and technologies and offer them as services which consumers can access through the internet. This paper discusses the facilitating role of cloud computing in the field of big data analytics. Cloud deployment models concerning the architectural aspect and the current trend of adoption are introduced. The fundamental cloud service models concerning infrastructural and technological provisioning are introduced, while the emerging cloud service models related to big data are discussed with examples of technology platforms offered by the big cloud service providers: Amazon, Google, Microsoft and Cloudera. The main advantages of cloud adoption in terms of availability and scalability for big data are reiterated. Lastly, the challenges concerning cloud security, data privacy and data governance of consuming and adopting big data in the cloud are highlighted.
2022-06-10
Nguyen, Tien N., Choo, Raymond.  2021.  Human-in-the-Loop XAI-enabled Vulnerability Detection, Investigation, and Mitigation. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1210–1212.
The need for cyber resilience is increasingly important in our technology-dependent society, where computing systems, devices and data will continue to be the target of cyber attackers. Hence, we propose a conceptual framework called ‘Human-in-the-Loop Explainable-AI-Enabled Vulnerability Detection, Investigation, and Mitigation’ (HXAI-VDIM). Specifically, instead of resolving complex scenarios of security vulnerabilities as the output of an AI/ML model, we integrate the security analyst or forensic investigator into the man-machine loop and leverage explainable AI (XAI) to combine both AI and Intelligence Assistant (IA) to amplify human intelligence in both proactive and reactive processes. Our goal is that HXAI-VDIM integrates human and machine in an interactive and iterative loop with security visualization that utilizes human intelligence to guide the XAI-enabled system and generate refined solutions.
2022-06-09
Fang, Shiwei, Huang, Jin, Samplawski, Colin, Ganesan, Deepak, Marlin, Benjamin, Abdelzaher, Tarek, Wigness, Maggie B..  2021.  Optimizing Intelligent Edge-clouds with Partitioning, Compression and Speculative Inference. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :892–896.
Internet of Battlefield Things (IoBTs) are well positioned to take advantage of recent technology trends that have led to the development of low-power neural accelerators and low-cost high-performance sensors. However, a key challenge that needs to be dealt with is that, despite all the advancements, edge devices remain resource-constrained, thus preventing complex deep neural networks from being deployed and from deriving actionable insights from various sensors. Furthermore, deploying sophisticated sensors in a distributed manner to improve decision-making also poses the extra challenge of coordinating and exchanging data between the nodes and the server. We propose an architecture that abstracts away these thorny deployment considerations from an end-user (such as a commander or warfighter). Our architecture can automatically compile and deploy the inference model onto a set of distributed nodes and a server while taking into consideration resource availability, variation, and uncertainties.
Hoarau, Kevin, Tournoux, Pierre Ugo, Razafindralambo, Tahiry.  2021.  Suitability of Graph Representation for BGP Anomaly Detection. 2021 IEEE 46th Conference on Local Computer Networks (LCN). :305–310.
The Border Gateway Protocol (BGP) is in charge of route exchange at the Internet scale. Anomalies in BGP can have several causes (misconfiguration, outages and attacks). These anomalies are classified into large-scale or small-scale anomalies. Machine learning models are used to analyze and detect anomalies from the complex data extracted from BGP behavior. Two types of data representation can be used inside the machine learning models: a graph representation of the network (graph features) or a statistical computation on the data (statistical features). In this paper, we evaluate and compare the accuracy of machine learning models using graph features and statistical features on both large- and small-scale BGP anomalies. We show that statistical features have better accuracy for large-scale anomalies, while graph features increase detection accuracy by 15% for small-scale anomalies and are well suited for BGP small-scale anomaly detection.
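Note (illustrative): a sketch of what "graph features" can mean for a snapshot of the AS-level topology extracted from BGP updates, using networkx. The edge list and chosen metrics are illustrative, not the paper's feature set.

```python
import networkx as nx

# Toy AS-level graph built from announced AS paths (illustrative edges only).
as_paths = [[64500, 64501, 64502], [64500, 64503], [64503, 64502], [64501, 64504]]
G = nx.Graph()
for path in as_paths:
    nx.add_path(G, path)

graph_features = {
    "num_nodes": G.number_of_nodes(),
    "num_edges": G.number_of_edges(),
    "density": nx.density(G),
    "avg_clustering": nx.average_clustering(G),
    "max_degree": max(dict(G.degree()).values()),
}
print(graph_features)   # one feature vector per BGP time window, fed to the ML model
```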
Xu, Qichao, Zhao, Lifeng, Su, Zhou.  2021.  UAV-assisted Abnormal Vehicle Behavior Detection in Internet of Vehicles. 2021 40th Chinese Control Conference (CCC). :7500–7505.
With the advantages of low cost, high mobility, and flexible deployment, unmanned aerial vehicles (UAVs) are employed to efficiently detect abnormal vehicle behaviors (AVBs) in the Internet of Vehicles (IoV). However, due to limited resources, including battery, computing, and communication, UAVs are selfish and reluctant to work cooperatively. To solve this problem, in this paper a game-theoretical UAV incentive scheme for IoVs is proposed. Specifically, the abnormal behavior model is first constructed, where three model categories are defined: velocity abnormality, distance abnormality, and overtaking abnormality. Then, a bargaining pricing framework is designed to model the interactions between UAVs and IoVs, where the transaction prices are determined by the abnormal behavior category detected by the UAVs. At last, simulations are conducted to verify the feasibility and effectiveness of our proposed scheme.
2022-06-08
Wang, Runhao, Kang, Jiexiang, Yin, Wei, Wang, Hui, Sun, Haiying, Chen, Xiaohong, Gao, Zhongjie, Wang, Shuning, Liu, Jing.  2021.  DeepTrace: A Secure Fingerprinting Framework for Intellectual Property Protection of Deep Neural Networks. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :188–195.
Deep Neural Networks (DNNs) have achieved great success in solving several challenging problems in recent years. It is well known that training a DNN model from scratch requires a lot of data and computational resources. However, using a pre-trained model directly, or using it to initialize weights, costs less time and often gets better results. Therefore, well pre-trained DNN models are valuable intellectual property that we should protect. In this work, we propose DeepTrace, a framework for model owners to secretly fingerprint the target DNN model using a special trigger set and verify it from the outputs. An embedded fingerprint can be extracted to uniquely identify the model owner and authorized users. Our framework benefits from both white-box and black-box verification, which makes it useful whether or not we know the model details. We evaluate the performance of DeepTrace on two different datasets, with different DNN architectures. Our experiments show that, with the advantages of combining white-box and black-box verification, our framework has very little effect on model accuracy and is robust against different model modifications. It also consumes very little computing resources when extracting the fingerprint.
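Note (illustrative): a minimal sketch of black-box trigger-set verification - query the suspect model on secret trigger inputs and compare its outputs to the fingerprint labels. The match threshold and the predict interface are placeholders; the paper's fingerprint embedding procedure is not reproduced.

```python
def verify_fingerprint(model_predict, trigger_inputs, fingerprint_labels, threshold=0.9):
    """Black-box check: does the suspect model reproduce the owner's trigger-set labels?"""
    predictions = [model_predict(x) for x in trigger_inputs]
    matches = sum(p == y for p, y in zip(predictions, fingerprint_labels))
    match_rate = matches / len(fingerprint_labels)
    return match_rate >= threshold, match_rate

# Usage with a placeholder suspect model that always answers 0:
suspect = lambda x: 0
print(verify_fingerprint(suspect, trigger_inputs=[1, 2, 3], fingerprint_labels=[0, 0, 1]))
```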
2022-06-06
Yeboah-Ofori, Abel, Ismail, Umar Mukhtar, Swidurski, Tymoteusz, Opoku-Boateng, Francisca.  2021.  Cyberattack Ontology: A Knowledge Representation for Cyber Supply Chain Security. 2021 International Conference on Computing, Computational Modelling and Applications (ICCMA). :65–70.
Cyberattacks on cyber supply chain (CSC) systems and their cascading impacts have brought many challenges and different threat levels with unpredictable consequences. The embedded network nodes have various loopholes that could be exploited by threat actors, leading to various attacks, risks, and the threat of cascading attacks on the various systems. Key factors such as the lack of a common ontology vocabulary and semantic interoperability for cyberattack information, and an inadequately conceptualized ontology learning and hierarchical approach to representing relationships in the CSC security domain, have hindered explicit knowledge representation. This paper explores cyberattack ontology learning to describe the security concepts, properties and relationships required to model security goals. Providing a semantic mapping between different organizational and vendor security goals has been inherently challenging. The contributions of this paper are threefold. First, we consider CSC security modelling concepts such as goal, actor, attack, TTP, and requirements, using semantic rules for logical representation. Secondly, we model a cyberattack ontology for semantic mapping and knowledge representation. Finally, we discuss concepts for threat intelligence and knowledge reuse. The results show that the cyberattack ontology concepts can be used to improve CSC security.
2022-05-10
Ali-Eldin, Amr M.T..  2021.  A Cloud-Based Trust Computing Model for the Social Internet of Things. 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC). :161–165.
As IoT systems are expected to have a significant economic impact, they are gaining growing interest. Millions of IoT devices are expected to join the Internet of Things, which will carry both major benefits and significant security threats to consumers. For IoT systems that secure data and preserve the privacy of users, trust management is an essential component. IoT objects carry the ownership settings of their owners, allowing them to interact with each other. Social relationships are believed to be important in confidence building. In this paper, we explain how to compute trust in social IoT environments using a cloud-based approach.
Agarkhed, Jayashree, Pawar, Geetha.  2021.  Efficient Security Model for Pervasive Computing Using Multi-Layer Neural Network. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–6.

In the new technological world, pervasive computing plays an important role in data computing and communication. Pervasive computing provides a mobile environment for decentralized computational services anywhere, anytime, in any context and location. Pervasive computing is flexible, making the portable devices and computing that surround us part of our daily life. Devices such as laptops, smartphones, PDAs, and other portable devices can constitute the pervasive environment. These devices in pervasive environments are distributed worldwide and can receive various communications, including audio-visual services. The users and the system in this pervasive environment face the challenges of user trust, data privacy, and user and device node identity. To give a feasible answer to these challenges, this paper proposes an efficient security model (ESM) based on dynamic learning in the pervasive computing environment to handle trustworthy and untrustworthy attackers. The ESM model is also compared with existing generic models and provides a better accuracy rate than the existing models.
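Note (illustrative): the abstract does not detail the network, but a generic multi-layer neural network classifier over device/behavior features could be prototyped as below with scikit-learn. The features, labels, and layer sizes are illustrative assumptions, not the paper's ESM.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))             # toy features: device identity, context, behavior
y = rng.integers(0, 2, size=500)          # 0 = trustworthy, 1 = untrustworthy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```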

2022-05-09
Ma, Zhuoran, Ma, Jianfeng, Miao, Yinbin, Liu, Ximeng, Choo, Kim-Kwang Raymond, Yang, Ruikang, Wang, Xiangyu.  2021.  Lightweight Privacy-preserving Medical Diagnosis in Edge Computing. 2021 IEEE World Congress on Services (SERVICES). :9–9.
In the era of machine learning, mobile users are able to submit their symptoms to doctors at any time, anywhere, for personal diagnosis. It is prevalent to exploit edge computing for real-time diagnosis services in order to reduce transmission latency. Although data-driven machine learning is powerful, it inevitably compromises privacy by relying on vast amounts of medical data to build a diagnostic model. Therefore, it is necessary to protect data privacy without accessing local data. However, this progress has also been accompanied by various problems, i.e., the limitation of training data, vulnerabilities, and privacy concerns. As a solution to these challenges, in this paper we design a lightweight privacy-preserving medical diagnosis mechanism on the edge. Our method redesigns the extreme gradient boosting (XGBoost) model based on the edge-cloud model, adopting encrypted model parameters instead of local data and reducing ciphertext computation to plaintext computation, thus realizing lightweight privacy preservation on resource-limited edges. Additionally, the proposed scheme is able to provide a secure diagnosis on the edge while maintaining privacy, ensuring an accurate and timely diagnosis. The proposed system with secure computation can securely construct the XGBoost model with lightweight overhead and efficiently provide a medical diagnosis without privacy leakage. Our security analysis and experimental evaluation indicate the security, effectiveness, and efficiency of the proposed system.
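Note (background only): a standard, non-private XGBoost classifier on symptom-style features, to show the model family being redesigned. The paper's contribution - exchanging encrypted model parameters between edge and cloud - is not shown here, and the data below is synthetic.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 20))    # synthetic binary symptom vectors
y = rng.integers(0, 2, size=200)          # synthetic diagnosis labels

model = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
model.fit(X, y)
print(model.predict(X[:5]))               # diagnoses for the first few synthetic patients
```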
2022-05-06
Haugdal, Hallvar, Uhlen, Kjetil, Jóhannsson, Hjörtur.  2021.  An Open Source Power System Simulator in Python for Efficient Prototyping of WAMPAC Applications. 2021 IEEE Madrid PowerTech. :1–6.
An open source software package for performing dynamic RMS simulation of small to medium-sized power systems is presented, written entirely in the Python programming language. The main objective is to facilitate fast prototyping of new wide area monitoring, control and protection applications for the future power system by enabling seamless integration with other tools available for Python in the open source community, e.g. for signal processing, artificial intelligence, communication protocols etc. The focus is thus transparency and expandability rather than computational efficiency and performance. The main purpose of this paper, besides presenting the code and some results, is to share interesting experiences with the power system community, and thus stimulate wider use and further development. Two interesting conclusions at the current stage of development are as follows: First, the simulation code is fast enough to emulate real-time simulation for small and medium-size grids with a time step of 5 ms, and allows for interactive feedback from the user during the simulation. Second, the simulation code can be uploaded to an online Python interpreter, edited, run and shared with anyone with a compatible internet browser. Based on this, we believe that the presented simulation code could be a valuable tool, both for researchers in the early stages of prototyping real-time applications, and in the educational setting, for students developing intuition for concepts and phenomena through real-time interaction with a running power system model.
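Note (illustrative): to show the kind of dynamic simulation loop such a tool performs, here is a toy single-machine swing-equation integration with a fixed 5 ms step. The parameters and the forward-Euler scheme are illustrative; the actual package models full multi-machine networks.

```python
import numpy as np

# Toy swing equation: M * d(omega)/dt = Pm - Pe - D*omega,  d(delta)/dt = omega
M, D, Pm = 5.0, 1.0, 0.8          # inertia, damping, mechanical power (per unit, illustrative)
E, V, X = 1.05, 1.0, 0.5          # internal EMF, bus voltage, line reactance
dt, t_end = 0.005, 10.0           # 5 ms step, as mentioned in the abstract

delta, omega = 0.1, 0.0
history = []
for step in range(int(t_end / dt)):
    Pe = E * V / X * np.sin(delta)            # electrical power transferred over the line
    domega = (Pm - Pe - D * omega) / M
    omega += dt * domega                      # forward-Euler integration of speed deviation
    delta += dt * omega                       # and of rotor angle
    history.append((step * dt, delta, omega))

print(history[-1])                             # final (time, angle, speed deviation)
```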