Biblio

Found 116 results

Filters: First Letter Of Title is V
V
Bruno Korbar, Dartmouth College, Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Vijay Kothari, Dartmouth College, Sean Smith, Dartmouth College.  2016.  Validating an Agent-Based Model of Human Password Behavior. AAAI-16 Workshop on Artificial Intelligence for Cyber Security.

Effective reasoning about the impact of security policy decisions requires understanding how human users actually behave, rather than assuming desirable but incorrect behavior. Simulation could help with this reasoning, but it requires building computational models of the relevant human behavior and validating that these models match what humans actually do. In this paper we describe our progress on building agent-based models of human behavior with passwords, and we demonstrate how these models reproduce phenomena shown in the empirical literature.
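
As a rough illustration of the approach, the sketch below simulates agents that reuse passwords once a memory budget is exhausted; the budget and reuse rule are illustrative assumptions, not the authors' validated model.

```python
import random

class PasswordAgent:
    """Toy agent: creates or reuses passwords under a memory budget."""
    def __init__(self, memory_limit=4):
        self.memory_limit = memory_limit  # assumed cognitive budget
        self.passwords = []               # distinct passwords in memory
        self.accounts = []                # password used for each account

    def create_account(self):
        # Once the memory budget is exhausted, the agent reuses an
        # existing password instead of memorizing a new one.
        if len(self.passwords) < self.memory_limit:
            pw = "pw%d" % len(self.passwords)
            self.passwords.append(pw)
        else:
            pw = random.choice(self.passwords)
        self.accounts.append(pw)

agents = [PasswordAgent() for _ in range(1000)]
for agent in agents:
    for _ in range(12):          # each agent opens 12 accounts
        agent.create_account()

# Rough reuse metric: accounts beyond the first per distinct password.
reuse = sum(len(a.accounts) - len(set(a.accounts)) for a in agents)
total = sum(len(a.accounts) for a in agents)
print("share of accounts reusing a password: %.2f" % (reuse / total))
```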
 

Hadj, M. A. El, Erradi, M., Khoumsi, A., Benkaouz, Y..  2018.  Validation and Correction of Large Security Policies: A Clustering and Access Log Based Approach. 2018 IEEE International Conference on Big Data (Big Data). :5330-5332.

In big data environments with a large number of users and a high volume of data, we need to manage a correspondingly huge number of security policies. Due to the distributed management of these policies, they may contain several anomalies, such as conflicts and redundancies, which may lead to both safety and availability problems. The distributed systems guided by such security policies produce a huge number of access logs. Due to potential security breaches, the access logs may show the presence of non-allowed accesses, which may also be a consequence of conflicting rules in the security policies. In this paper, we present ongoing work on developing an environment for verifying and correcting security policies. To make the approach efficient, an access log is used as input to determine the suspicious parts of the policy that should be considered. The approach is also made efficient by clustering the policy and the access log and considering the obtained clusters separately. The clustering technique and the use of the access log significantly reduce the complexity of the suggested approach, making it scalable for large amounts of data.
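
To illustrate the idea, the sketch below partitions a toy rule set by resource (a crude stand-in for the paper's clustering) and flags conflicts, redundancies, and logged accesses with no matching rule, looking only inside the relevant cluster; the rule format and criteria are assumptions.

```python
from collections import defaultdict

# Toy rules: (subject, action, resource, effect)
rules = [
    ("alice", "read",  "db1", "permit"),
    ("alice", "read",  "db1", "deny"),    # conflicts with the rule above
    ("bob",   "write", "db2", "permit"),
    ("bob",   "write", "db2", "permit"),  # redundant duplicate
]

# "Clustering" here is a simple partition by resource.
clusters = defaultdict(list)
for r in rules:
    clusters[r[2]].append(r)

for resource, cluster in clusters.items():
    seen = {}
    for subj, act, res, effect in cluster:
        key = (subj, act, res)
        if key in seen and seen[key] != effect:
            print("conflict in cluster", resource, ":", key)
        elif key in seen:
            print("redundancy in cluster", resource, ":", key)
        seen[key] = effect

# Check each log entry only against its resource's cluster.
log = [("carol", "read", "db1")]
for subj, act, res in log:
    allowed = any(s == subj and a == act and e == "permit"
                  for s, a, _, e in clusters.get(res, []))
    if not allowed:
        print("non-allowed access in log:", (subj, act, res))
```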

Shu, H., Shen, X., Xu, L., Guo, Q., Sun, H..  2018.  A Validity Test Method for Transmission Betweenness and Transmission Sections Based on Chain Attack Analysis and Line Outage Distribution Factors. 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2). :1-6.

The identification of transmission sections is used to improve the efficiency of monitoring power grid operation. To test the validity of the identified transmission sections, an assessment process is necessary. In addition, transmission betweenness, an index for finding the key transmission lines in the power grid, should also be verified. In this paper, a chain attack is assumed in order to find the weak links in the grid, thus verifying the transmission betweenness computed for the system. Moreover, line outage distribution factors (LODFs) are used to quantify the change in power flow when the leading line in a transmission section breaks down, so that the validity of the transmission sections can be proved. Case studies based on the IEEE 39-bus and IEEE 118-bus systems prove the effectiveness of the proposed method.
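
For reference, the standard LODF relation such methods build on: after an outage of line k, the flow on a monitored line l becomes f_l' = f_l + LODF_{l,k} * f_k, with LODF_{l,k} = PTDF_{l,k} / (1 - PTDF_{k,k}). A minimal sketch with made-up numbers (not the paper's IEEE 39/118 cases):

```python
def lodf(ptdf_lk, ptdf_kk):
    """Line outage distribution factor of monitored line l
    for an outage of line k, from the standard PTDF relation."""
    return ptdf_lk / (1.0 - ptdf_kk)

# Illustrative values only:
f_l, f_k = 80.0, 120.0         # pre-outage MW flows on lines l and k
ptdf_lk, ptdf_kk = 0.30, 0.40  # assumed PTDF entries

d = lodf(ptdf_lk, ptdf_kk)
f_l_post = f_l + d * f_k       # flow on l after line k is outaged
print("LODF = %.3f, post-outage flow on l = %.1f MW" % (d, f_l_post))
```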

Wang, Weina, Ying, Lei, Zhang, Junshan.  2016.  The Value of Privacy: Strategic Data Subjects, Incentive Mechanisms and Fundamental Limits. Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science. :249–260.

We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. Instead, an individual takes full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report her data with a privacy level of ε. The higher ε is, the less private the reported data is. We derive lower and upper bounds on the value of privacy which are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound assures that it is impossible to buy ε units of privacy with a smaller payment, and the upper bound is given by an achievable payment mechanism that we design. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.

Heindorf, Stefan, Potthast, Martin, Stein, Benno, Engels, Gregor.  2016.  Vandalism Detection in Wikidata. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :327–336.

Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. Its knowledge is increasingly used within Wikipedia itself and various other kinds of information systems, imposing high demands on its integrity. Wikidata can be edited by anyone and, unfortunately, it frequently gets vandalized, exposing all information systems using it to the risk of spreading vandalized and falsified information. In this paper, we present a new machine learning-based approach to detect vandalism in Wikidata. We propose a set of 47 features that exploit both content and context information, and we report on 4 classifiers of increasing effectiveness tailored to this learning task. Our approach is evaluated on the recently published Wikidata Vandalism Corpus WDVC-2015 and it achieves an area under curve value of the receiver operating characteristic, ROC-AUC, of 0.991. It significantly outperforms the state of the art represented by the rule-based Wikidata Abuse Filter (0.865 ROC-AUC) and a prototypical vandalism detector recently introduced by Wikimedia within the Objective Revision Evaluation Service (0.859 ROC-AUC).
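
As a sketch of this kind of pipeline, the snippet below trains a classifier on synthetic stand-in features and reports ROC-AUC with scikit-learn; the paper's 47 content/context features and its tuned classifiers are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Stand-ins for content/context features (e.g., comment length,
# whether the editor is anonymous, edits in the last hour, ...).
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]     # vandalism probability per edit
print("ROC-AUC: %.3f" % roc_auc_score(y_te, scores))
```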

Tomandl, A., Herrmann, D., Fuchs, K.-P., Federrath, H., Scheuer, F..  2014.  VANETsim: An open source simulator for security and privacy concepts in VANETs. High Performance Computing & Simulation (HPCS), 2014 International Conference on. :543-550.

Aside from massive advantages in safety and convenience on the road, Vehicular Ad Hoc Networks (VANETs) introduce security risks to their users. Proposals of new security concepts to counter these risks are challenging to verify because of missing real-world implementations of VANETs. To fill this gap, we introduce VANETsim, an event-driven simulation platform specifically designed to investigate application-level privacy and security implications in vehicular communications. VANETsim focuses on realistic vehicular movement on real road networks and communication between the moving nodes. A powerful graphical user interface and an experimentation environment support the user when setting up or carrying out experiments.

Nadi, Sarah, Krüger, Stefan.  2016.  Variability Modeling of Cryptographic Components: Clafer Experience Report. Proceedings of the Tenth International Workshop on Variability Modelling of Software-intensive Systems. :105–112.

Software systems need to use cryptography to protect any sensitive data they collect. However, there are various classes of cryptographic components (e.g., ciphers, digests, etc.), each suitable for a specific purpose. Additionally, each class of such components comes with various algorithms and configurations. Finding the right combination of algorithms and correct settings to use is often difficult. We believe that using variability modeling to model these algorithms, their relationships, and restrictions can help non-experts navigate this complex domain. In this paper, we report on our experience modeling cryptographic components in Clafer, a modeling language that combines feature modeling and meta-modeling. We discuss the design decisions we took as well as the challenges we ran into. Our work helps expand variability modeling into new domains and sheds light on modeling requirements that appear in practice.

Jian Wang, Lin Mei, Yi Li, Jian-Ye Li, Kun Zhao, Yuan Yao.  2014.  Variable Window for Outlier Detection and Impulsive Noise Recognition in Range Images. Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on. :857-864.

To improve the overall performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Founded on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbors searching process and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed denoising method, the sizes of the two windows for outlier IN detection and IN denoising are investigated. Derived from a theoretical model of invader occlusion, a variable window is presented that adapts the window size to the dynamic environment of each point, together with practical criteria for determining the adaptive variable window size. Experiments on real range images of a multi-line surface are conducted, with evaluations in terms of computational complexity and quality assessment, and with comparison against several other popular methods. The results indicate that the proposed method can detect impulsive noise with high accuracy and denoise it with strong adaptability with the help of the variable window.
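
A minimal sketch of the general idea, with an assumed deviation threshold and Manhattan distance standing in for the paper's index distance; the paper's detection criteria and window-growth rules are more elaborate.

```python
import numpy as np
from scipy.ndimage import median_filter

def idw_mean(img, noisy, y, x, max_half=3, min_valid=4):
    # Grow the window until enough non-noisy neighbors are available.
    for half in range(1, max_half + 1):
        vals, wts = [], []
        for yy in range(max(0, y - half), min(img.shape[0], y + half + 1)):
            for xx in range(max(0, x - half), min(img.shape[1], x + half + 1)):
                if (yy, xx) == (y, x) or noisy[yy, xx]:
                    continue
                d = abs(yy - y) + abs(xx - x)   # index (Manhattan) distance
                vals.append(img[yy, xx]); wts.append(1.0 / d)
        if len(vals) >= min_valid:
            return np.average(vals, weights=wts)
    return img[y, x]  # not enough support; leave unchanged

def denoise(img, thresh=0.5):
    # Flag impulsive noise as a large deviation from the 3x3 local median.
    med = median_filter(img, size=3)
    noisy = np.abs(img - med) > thresh
    out = img.astype(float).copy()
    for y, x in zip(*np.nonzero(noisy)):
        out[y, x] = idw_mean(img, noisy, y, x)
    return out

img = np.full((20, 20), 5.0)
img[10, 10] = 50.0                 # outlier spike
print(denoise(img)[10, 10])        # back near 5.0
```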
 

Zhang, Hao, Zhang, Tao, Chen, Huajin.  2017.  Variance Analysis of Pixel-Value Differencing Steganography. Proceedings of the 2017 International Conference on Cryptography, Security and Privacy. :28–32.

Because adaptive steganography selects edge and texture areas for payload, theoretical analysis is limited by modeling difficulty. This paper introduces a novel method to study the pixel-value differencing (PVD) embedding scheme. First, the difference histogram values of the cover image are used as parameters, and a variance formula for PVD stego noise is obtained. The accuracy of this formula has been verified through analysis with standard pictures. Second, the stego noise is divided into six kinds of pixel regions, and the regional noise variances are used to compare the security of PVD and least significant bit matching (LSBM) steganography. A mathematical conclusion is presented that, with an embedding capacity of less than 2.75 bits per pixel, PVD is never safer than LSBM at the same embedding rate, regardless of region selection. Finally, 10000 image samples are used to check the validity of the mathematical conclusion. For most images and regions, the data are consistent with this prior judgment. The exceptional cases are analyzed carefully and found to be caused by the randomness of pixel selection and the abandoned blocks in the PVD scheme. In summary, the agreement of theory and practice demonstrates the effectiveness of our new method.
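
For context, a toy version of the classic Wu-Tsai PVD embedding that such analyses model: the difference of a pixel pair is quantized into a range, and the payload bits select a new difference within that range (overflow handling and block abandonment are omitted for brevity).

```python
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bits):
    """Embed leading bits of a bit-string into one pixel pair."""
    d = abs(p2 - p1)
    lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
    t = (hi - lo + 1).bit_length() - 1      # capacity of this range
    b = int(bits[:t], 2)
    d_new = lo + b                           # new difference carries payload
    m = d_new - d
    # Spread the change over both pixels, preserving the sign of d.
    if p1 >= p2:
        p1n, p2n = p1 + (m + 1) // 2, p2 - m // 2
    else:
        p1n, p2n = p1 - m // 2, p2 + (m + 1) // 2
    return p1n, p2n, bits[t:]

p1, p2, rest = embed_pair(100, 120, "10110")
print(p1, p2, abs(p2 - p1))   # the new difference encodes "1011"
```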

Xin, Wei, Wang, M., Shao, Shuai, Wang, Z., Zhang, Tao.  2015.  A Variant of Schnorr Signature Scheme for Path-Checking in RFID-Based Supply Chains. 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). :2608–2613.

RFID technology has attracted considerable attention in recent years and brings convenience to supply chain management. In this paper, we concentrate on designing path-checking protocols to check for valid paths in supply chains. By entering a valid path, the check reader can distinguish whether the tags have gone through the path or not. Based on a modified Schnorr signature scheme, we provide a path-checking method that achieves multi-signatures and final verification. Finally, we conduct a security and privacy analysis of the scheme.
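
For reference, a toy version of the plain Schnorr signature on which such variants build (tiny parameters chosen for readability; real deployments use large standardized groups, and the paper's multi-signature path-checking construction is not shown). Requires Python 3.8+ for modular inverse via pow.

```python
import hashlib, random

# Toy parameters: q divides p-1 and g generates the order-q subgroup mod p.
p, q, g = 607, 101, 64

def H(*parts):
    h = hashlib.sha256("|".join(str(x) for x in parts).encode())
    return int.from_bytes(h.digest(), "big") % q

x = random.randrange(1, q)        # private key
X = pow(g, x, p)                  # public key

def sign(m):
    k = random.randrange(1, q)    # per-signature nonce
    r = pow(g, k, p)              # commitment
    e = H(r, m)                   # challenge
    s = (k + e * x) % q           # response
    return e, s

def verify(m, e, s):
    r = (pow(g, s, p) * pow(X, -e, p)) % p   # g^s * X^{-e} = g^k
    return e == H(r, m)

e, s = sign("tag passed checkpoint A")
print(verify("tag passed checkpoint A", e, s))   # True
```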

Hossain, F. S., Shintani, M., Inoue, M., Orailoglu, A..  2018.  Variation-Aware Hardware Trojan Detection through Power Side-Channel. 2018 IEEE International Test Conference (ITC). :1-10.

A hardware Trojan (HT) denotes a malicious addition or modification of circuit elements. The purpose of this work is to improve HT detection sensitivity in ICs using power side-channel analysis. This paper presents three detection techniques for power-based side-channel analysis that increase Trojan-to-circuit power consumption and reduce the variation effect on the detection threshold. Incorporating the three proposed methods demonstrates that realistic fine-grained circuit partitioning and an improved pattern set that increases HT activation chances can magnify Trojan detectability.

Koch, S., John, M., Wörner, M., Müller, A., Ertl, T..  2014.  VarifocalReader — In-Depth Visual Analysis of Large Text Documents. Visualization and Computer Graphics, IEEE Transactions on. 20:1723-1732.

Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.

Lira, Wallace, Gama, Fernando, Barbosa, Hivana, Alves, Ronnie, de Souza, Cleidson.  2016.  VCloud: Adding Interactiveness to Word Clouds for Knowledge Exploration in Large Unstructured Texts. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :193–198.

The identification of relevant information in large text databases is a challenging task. One of the reasons is human beings' limitations in handling large volumes of data. A common solution for scavenging data from texts is the word cloud. A word cloud illustrates word usage in a document by resizing individual words proportionally to how frequently they appear. Even though word clouds are easy to understand, they are not particularly efficient, because they are static. In addition, the presented information lacks context, i.e., words are not explained, and they may lead to radically erroneous interpretations. To tackle these problems we developed VCloud, a tool that allows the user to interact with word clouds, therefore allowing informative and interactive data exploration. Furthermore, our tool also allows one to compare two data sets presented as word clouds. We evaluated VCloud using real data about the evolution of gastritis research through the years. Papers indexed by PubMed related to this medical context were selected for visualization and data analysis using VCloud. A domain expert explored these visualizations and was able to extract useful information from them. This illustrates how VCloud can be a valuable tool for visual text analytics.
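
For illustration, the frequency-to-size mapping is all a static word cloud does; a minimal sketch (the length filter is a crude stand-in for real stopword removal):

```python
import re
from collections import Counter

def cloud_sizes(text, min_px=12, max_px=48, top=20):
    """Map word frequencies to font sizes, as a static word cloud does."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if len(w) > 3).most_common(top)
    if not counts:
        return {}
    hi, lo = counts[0][1], counts[-1][1]
    span = max(hi - lo, 1)
    return {w: min_px + (c - lo) * (max_px - min_px) // span
            for w, c in counts}

text = ("gastritis research links helicobacter pylori to gastritis; "
        "pylori eradication changed gastritis treatment over the years")
print(cloud_sizes(text))
```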

Khan, J..  2017.  Vehicle Network Security Testing. 2017 Third International Conference on Sensing, Signal Processing and Security (ICSSS). :119–123.

In-vehicle networks like Controller Area Network, FlexRay, and Ethernet are now subject to huge security threats, where unauthorized entities can take control of the whole vehicle. This can pose very serious threats, including accidents. Security features like encryption and message authentication are being implemented in vehicle networks to counteract these issues. This paper proposes a set of novel validation techniques to ensure that vehicle network security is foolproof. Security validation against requirements and security validation using white-box, black-box, and grey-box approaches are put forward. Test system architecture, validation of message authentication, decoding patterns from vehicle network data, using diagnostics as a security loophole, V2V/V2X loopholes, and gateway module security testing are considered in detail. The aim of this paper is to put forward a set of tools and methods for finding and reporting any security loopholes in the in-vehicle network security implementation.
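
As an illustration of what a message-authentication test might exercise, the sketch below attaches a truncated HMAC with a freshness counter to a CAN-style frame and checks that tampering is rejected; the key, truncation length, and frame layout are assumptions, not a described implementation.

```python
import hmac, hashlib, struct

KEY = b"shared-ecu-key"          # assumed pre-shared key between ECUs

def mac_frame(can_id, payload, counter):
    """Truncated HMAC over ID, payload and a freshness counter
    (CAN frames leave only a few bytes for a MAC)."""
    msg = struct.pack(">I", can_id) + payload + struct.pack(">Q", counter)
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:4]

frame = (0x123, b"\x01\x02\x03\x04", 42)
tag = mac_frame(*frame)

# A validation test: tamper with the payload and check rejection.
tampered = (0x123, b"\xff\x02\x03\x04", 42)
print(hmac.compare_digest(tag, mac_frame(*frame)))     # True: accepted
print(hmac.compare_digest(tag, mac_frame(*tampered)))  # False: rejected
```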

Markwood, Ian D., Liu, Yao.  2016.  Vehicle Self-Surveillance: Sensor-Enabled Automatic Driver Recognition. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :425–436.

Motor vehicles are widely used, quite valuable, and often targeted for theft. Preventive measures include car alarms, proximity control, and physical locks, which can be bypassed if the car is left unlocked or if the thief obtains the keys. Reactive strategies like cameras, motion detectors, human patrolling, and GPS tracking can monitor a vehicle, but may not detect car thefts in a timely manner. We propose a fast automatic driver recognition system that identifies unauthorized drivers while overcoming the drawbacks of previous approaches. We factor drivers' trips into elemental driving events, from which we extract driving preference features that cannot be exactly reproduced by a thief driving away in the stolen car. We performed a real-world evaluation using driving data collected from 31 volunteers. Experimental results show we can distinguish the current driver as the owner with 97% accuracy, while preventing impersonation 91% of the time.

Lacroix, Jesse, El-Khatib, Khalil, Akalu, Rajen.  2016.  Vehicular Digital Forensics: What Does My Vehicle Know About Me? Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :59–66.

A major component of modern vehicles is the infotainment system, which interfaces with drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general content stored in an infotainment system. Based on the extracted data, this would help determine what stored information is relevant to law enforcement agencies and what information is non-essential when it comes to solving criminal activities relating to the vehicle itself. Overall, this would solidify the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will allow forensic analysts to know whether they can determine anything about the end user directly and/or indirectly.

Setyono, R. Puji, Sarno, R..  2018.  Vendor Track Record Selection Using Best Worst Method. 2018 International Seminar on Application for Technology of Information and Communication. :41–48.

Every company largely depends on other companies, which helps unite a large business process. Risks that arise from other companies will affect the business performance of a company. Because of this, the right choice of suppliers is crucial. Each vendor has different characteristics, and not every vendor is suitable; the selection process is quite complex and risky. This has led to a problem, studied for years by researchers, known as the Supplier Selection Problem. Vendor selection with multi-criteria decision making has been widely studied for years. The Best Worst Method is a recent development in Multi-Criteria Decision Making (MCDM). This research takes as a case study company XYZ in Indonesia, which is engaged in mining and industry. The research utilized transaction data recorded by company XYZ and analyzed vendor valuations. The Best Worst Method weights are calculated based on the vendor assessment results. The results show that company XYZ still focuses on price as its key criterion.
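
The linear Best Worst Method reduces to a small linear program: choose weights minimizing the largest violation ξ of the stated pairwise ratios. A sketch with illustrative best-to-others and others-to-worst vectors (not the company's data):

```python
import numpy as np
from scipy.optimize import linprog

# Criteria: Price (best), Quality, Delivery, Service (worst).
a_B = np.array([1, 2, 4, 8])   # best-to-others comparisons (1-9 scale)
a_W = np.array([8, 4, 2, 1])   # others-to-worst comparisons
n, B, W = 4, 0, 3

# Variables: w_0..w_{n-1}, xi.  Minimize xi subject to
# |w_B - a_Bj * w_j| <= xi, |w_j - a_jW * w_W| <= xi, sum(w) = 1.
c = np.zeros(n + 1); c[-1] = 1
A, b = [], []
for j in range(n):
    for sign in (1, -1):
        row = np.zeros(n + 1)
        row[B] += sign; row[j] -= sign * a_B[j]; row[-1] = -1
        A.append(row); b.append(0)
        row = np.zeros(n + 1)
        row[j] += sign; row[W] -= sign * a_W[j]; row[-1] = -1
        A.append(row); b.append(0)
A_eq = [np.append(np.ones(n), 0)]
res = linprog(c, A_ub=A, b_ub=b, A_eq=A_eq, b_eq=[1],
              bounds=[(0, None)] * (n + 1))
print("weights:", res.x[:n].round(3), "consistency xi:", res.x[-1].round(3))
```

With these perfectly consistent vectors the weights come out proportional to 8:4:2:1 and ξ is near zero; ξ grows as the decision maker's comparisons become less consistent.
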
Johnson, Claiborne, MacGahan, Thomas, Heaps, John, Baldor, Kevin, von Ronne, Jeffery, Niu, Jianwei.  2017.  Verifiable Assume-Guarantee Privacy Specifications for Actor Component Architectures. Proceedings of the 22Nd ACM on Symposium on Access Control Models and Technologies. :167–178.

Many organizations process personal information in the course of normal operations. Improper disclosure of this information can be damaging, so organizations must obey privacy laws and regulations that impose restrictions on its release or risk penalties. Since electronic management of personal information must be held in strict compliance with the law, software systems designed for such purposes must have some guarantee of compliance. To support this, we develop a general methodology for designing and implementing verifiable information systems. This paper develops the design of the History Aware Programming Language into a framework for creating systems that can be mechanically checked against privacy specifications. We apply this framework to create and verify a prototypical Electronic Medical Record System (EMRS) expressed as a set of actor components and first-order linear temporal logic specifications in assume-guarantee form. We then show that the implementation of the EMRS provably enforces a formalized Health Insurance Portability and Accountability Act (HIPAA) policy using a combination of model checking and static analysis techniques.

Ferretti, L., Marchetti, M., Colajanni, M..  2017.  Verifiable Delegated Authorization for User-Centric Architectures and an OAuth2 Implementation. 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC). 2:718–723.

Delegated authorization protocols have become widespread in implementing Web applications and services, where popular providers managing people's identity information and personal data allow their users to delegate third-party Web services to access their data. In this paper, we analyze the risks related to untrusted providers not behaving correctly, and we solve this problem by proposing the first verifiable delegated authorization protocol that allows third-party services to verify the correctness of users' data returned by the provider. The contribution of the paper is twofold: we show how delegated authorization can be cryptographically enforced through authenticated data structure protocols, and we extend the standard OAuth2 protocol to support efficient and verifiable delegated authorization, including database updates and privilege revocation.
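
One common authenticated data structure is a Merkle hash tree; the sketch below shows how a third party could verify provider-returned data against an owner-endorsed root. This only illustrates the verifiability building block, not the paper's protocol or its OAuth2 integration.

```python
import hashlib

def h(b): return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2: level.append(level[-1])   # duplicate odd tail
        level = [h(level[i] + level[i+1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2: level.append(level[-1])
        proof.append((level[idx ^ 1], idx % 2 == 0))  # sibling, am-I-left
        level = [h(level[i] + level[i+1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, node_is_left in proof:
        node = h(node + sib) if node_is_left else h(sib + node)
    return node == root

records = [b"email=a@x.org", b"dob=1990-01-01", b"country=IT", b"plan=pro"]
root = merkle_root(records)             # endorsed by the data owner
proof = merkle_proof(records, 2)        # provider returns record + proof
print(verify(records[2], proof, root))  # third party checks correctness
```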

Eltayesh, Faryed, Bentahar, Jamal.  2017.  Verifiable Outsourced Database in the Cloud Using Game Theory. Proceedings of the Symposium on Applied Computing. :370–377.

In the verifiable database (VDB) model, a computationally weak client (the database owner) delegates his database management to a database service provider in the cloud, which is considered an untrusted third party, while users can query the data and verify the integrity of query results. Since the process can be computationally costly and has limited support for sophisticated query types such as aggregated queries, we propose in this paper a framework that helps bridge the gap between security and practicality trade-offs. The proposed framework remodels the verifiable database problem as a Stackelberg security game. In the new model, the database owner creates and uploads to the database service provider the database and its authentication structure (AS). Next, the game is played between the defender (verifier), who is a party trusted by the database owner and runs scheduled randomized verifications using a Stackelberg mixed strategy, and the database service provider. The idea is to randomize the verification schedule in an optimized way that grants the optimal payoff for the verifier while making it extremely hard for the database service provider or any attacker to figure out which part of the database is being verified next. We have implemented the proposed model and compared its performance with a uniform randomization model. Simulation results show that the proposed model outperforms the uniform randomization model. Furthermore, we have evaluated the efficiency of the proposed model against different cost metrics.
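
A sketch of the deployment side only: given an optimized mixed strategy (assumed here, not computed from the game), each verification round samples a partition, so the schedule stays unpredictable even if the strategy itself is public.

```python
import random

# Assumed output of the Stackelberg optimization: the probability of
# verifying each database partition in a given round.
strategy = {"partition_A": 0.5, "partition_B": 0.3, "partition_C": 0.2}

def next_verification():
    """Sample the next partition to verify under the mixed strategy."""
    return random.choices(list(strategy), weights=strategy.values(), k=1)[0]

schedule = [next_verification() for _ in range(8)]
print(schedule)   # randomized, distribution-optimal verification order
```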

Ren, Yanli, Ding, Ning, Zhang, Xinpeng, Lu, Haining, Gu, Dawu.  2016.  Verifiable Outsourcing Algorithms for Modular Exponentiations with Improved Checkability. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :293–303.

The problem of securely outsourcing computation has received widespread attention due to the development of cloud computing and mobile devices. In this paper, we first propose a secure verifiable outsourcing algorithm for single modular exponentiation based on the one-malicious model of two untrusted servers. The outsourcer can detect any failure with probability 1 if one of the servers misbehaves. We also present another verifiable outsourcing algorithm for multiple modular exponentiations based on the same model. Compared with the state-of-the-art algorithms, the proposed algorithms improve both checkability and efficiency for the outsourcer. Finally, we utilize the proposed algorithms as two subroutines to achieve outsource-secure polynomial evaluation and a ciphertext-policy attribute-based encryption (CP-ABE) scheme with verifiable outsourced encryption and decryption.
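
A toy illustration of the cross-checking idea in the one-malicious two-server model: split the exponent, query both servers with the same shares, and compare their answers. Real schemes additionally blind the base and exponent so the servers learn nothing, and use subtler query duplication; that is omitted here.

```python
import random

p = 1000003                      # small prime for illustration
u, a = 123456, 789012            # base and exponent to outsource

def server(base, exp):
    """Untrusted helper; a malicious one could return garbage."""
    return pow(base, exp, p)

# Split the exponent and send EACH share to BOTH servers, so their
# answers can be cross-checked (at most one server lies, by assumption).
a1 = random.randrange(1, a); a2 = a - a1
r1A, r2A = server(u, a1), server(u, a2)   # answers from server A
r1B, r2B = server(u, a1), server(u, a2)   # answers from server B

if (r1A, r2A) != (r1B, r2B):
    raise RuntimeError("server misbehavior detected")
result = (r1A * r2A) % p                  # u^a1 * u^a2 = u^a (mod p)
assert result == pow(u, a, p)
```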

Dai, H., Zhu, X., Yang, G., Yi, X..  2017.  A Verifiable Single Keyword Top-k Search Scheme against Insider Attacks over Cloud Data. 2017 3rd International Conference on Big Data Computing and Communications (BIGCOM). :111–116.

With the development of cloud computing and its economic benefits, more and more companies and individuals outsource their data and computation to clouds. Meanwhile, this way of outsourcing resources takes the data out of its owner's control and results in many security issues. Existing secure keyword search methods assume that cloud servers are curious-but-honest or partially honest, which makes them powerless against the deliberately falsified or fabricated results of insider attacks. In this paper, we propose a verifiable single keyword top-k search scheme against insider attacks which can verify the integrity of search results. Data owners generate verification codes (VCs) for the corresponding files, which embed the ordered sequence information of the relevance scores between files and keywords. The files and their corresponding VCs are then outsourced to cloud servers. When a data user performs a keyword search, the qualified result files are determined according to the relevance scores between the files and the queried keyword and are returned to the data user together with a VC. The data user verifies the integrity of the result files by reconstructing a new VC over the received files and comparing it with the received one. Performance evaluations have been conducted to demonstrate the efficiency and result redundancy of the proposed scheme.
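
A much-simplified sketch of the verification-code idea: bind the keyword to the relevance-ordered result sequence with a hash, so dropped or reordered results change the recomputed code. The paper's VC construction and top-k handling are richer than this.

```python
import hashlib

def vc(keyword, ranked_ids):
    """Verification code binding a keyword to the relevance-ordered
    sequence of its result files."""
    msg = keyword + "|" + ",".join(ranked_ids)
    return hashlib.sha256(msg.encode()).hexdigest()

# Data owner: rank files by relevance score and store the VC.
scores = {"f1": 0.92, "f7": 0.81, "f3": 0.66}
ranked = sorted(scores, key=scores.get, reverse=True)
stored_vc = vc("privacy", ranked)

# Data user: recompute the VC over the returned top-k and compare.
returned = ["f1", "f7", "f3"]               # as claimed by the cloud
ok = vc("privacy", returned) == stored_vc
print("result integrity verified:", ok)     # False if dropped/reordered
```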

Kang, E. Y., Mu, D., Huang, L., Lan, Q..  2017.  Verification and Validation of a Cyber-Physical System in the Automotive Domain. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :326–333.

Software development for Cyber-Physical Systems (CPS), e.g., autonomous vehicles, requires both functional and non-functional quality assurance to guarantee that the CPS operates safely and effectively. EAST-ADL is a domain-specific architectural language dedicated to safety-critical automotive embedded system design. We have previously modified EAST-ADL to include energy constraints and transformed energy-aware real-time (ERT) behaviors modeled in EAST-ADL/Stateflow into UPPAAL models amenable to formal verification. Previous work is extended in this paper by including support for Simulink and an integration of Simulink/Stateflow (S/S) within the same toolchain. S/S models are transformed, based on the extended ERT constraints with probability parameters, into verifiable UPPAAL-SMC models, and the translation is integrated with formal statistical analysis techniques: the probabilistic extension of EAST-ADL constraints is defined as a semantic denotation, and a set of mapping rules is proposed to facilitate the guarantee of translation. Formal analysis of both functional and non-functional properties is performed using Simulink Design Verifier and UPPAAL-SMC. Our approach is demonstrated on an autonomous traffic sign recognition vehicle case study.

Zhang, T., Zheng, H., Zhang, L..  2018.  Verification CAPTCHA Based on Deep Learning. 2018 37th Chinese Control Conference (CCC). :9056–9060.

At present, CAPTCHAs are widely used on the Internet. This paper introduces a method of CAPTCHA recognition using convolutional neural networks. For CAPTCHAs that are easy to segment, a simply trained convolutional neural network, with a structure imitating the VGGNet model, recognizes the segmented characters with a correct rate of more than 90%. For CAPTCHAs that are more difficult to segment, end-to-end training on the CAPTCHA as a whole achieves a recognition rate of about 85%.
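
A sketch of a small VGG-style network for fixed-length CAPTCHAs in Keras, with one softmax head per character position for the end-to-end case; the image size, depth, and class count are assumptions, not the paper's exact architecture.

```python
from tensorflow.keras import layers, models

NUM_CHARS, NUM_CLASSES = 4, 36              # 4 positions, digits + letters

inp = layers.Input(shape=(60, 160, 1))      # assumed CAPTCHA size
x = inp
for filters in (32, 64, 128):               # VGG-like conv/conv/pool blocks
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
outs = [layers.Dense(NUM_CLASSES, activation="softmax", name=f"c{i}")(x)
        for i in range(NUM_CHARS)]           # one head per character

model = models.Model(inp, outs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
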
Long, T., Yao, G..  2015.  Verification for Security-Relevant Properties and Hyperproperties. 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom). :490-497.

Privacy analysis is essential in society. Data privacy preservation for access control and guaranteed service in wireless sensor networks are important parts of it. In program verification, we consider not only these kinds of safety and liveness properties but also security policies like noninterference and observational determinism, which have been proposed as hyperproperties. Fairness is widely applied in verification for concurrent systems, wireless sensor networks, and embedded systems. This paper studies verification and analysis for proving security-relevant properties and hyperproperties by proposing deductive proof rules under fairness requirements (constraints).