Biblio

Filters: First Letter Of Title is W
W
Michalevsky, Yan, Winetraub, Yonatan.  2017.  WaC: SpaceTEE - Secure and Tamper-Proof Computing in Space Using CubeSats. Proceedings of the 2017 Workshop on Attacks and Solutions in Hardware Security. :27–32.
Sensitive computation often has to be performed in a trusted execution environment (TEE), which, in turn, requires tamper-proof hardware. If the computational fabric can be tampered with, we may no longer be able to trust the correctness of the computation. We study the (wild and crazy) idea of using computational platforms in space as a means to protect data from adversarial physical access. In this paper, we propose SpaceTEE - a practical implementation of this approach using low-cost nano-satellites called CubeSats. We study the constraints of such a platform, the cost of deployment, and discuss possible applications under those constraints. As a case study, we design a hardware security module solution (called SpaceHSM) and describe how it can be used to implement a root-of-trust for a certificate authority (CA).
Sethi, Ricky J., Buell, Catherine A., Seeley, William P..  2018.  WAIVS: An Intelligent Interface for Visual Stylometry Using Semantic Workflows. Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. :54:1-54:2.

In this paper, we present initial work towards creating an intelligent interface that can act as an open access laboratory for visual stylometry called WAIVS, Workflows for Analysis of Images and Visual Stylometry. WAIVS allows scholars, students, and other interested parties to explore the nature of artistic style using cutting-edge research methods in visual stylometry. We create semantic workflows for this interface using various computer vision algorithms that not only facilitate artistically significant analyses but also impose intelligent semantic constraints on complex analyses. In the interface, we combine these workflows with a manually-curated dataset for analysis of artistic style based on either the school of art or the medium.

Z. Zhu, M. B. Wakin.  2015.  "Wall clutter mitigation and target detection using Discrete Prolate Spheroidal Sequences". 2015 3rd International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa). :41-45.

We present a new method for mitigating wall return and a new greedy algorithm for detecting stationary targets after wall clutter has been cancelled. Given limited measurements of a stepped-frequency radar signal consisting of both wall and target return, our objective is to detect and localize the potential targets. Modulated Discrete Prolate Spheroidal Sequences (DPSS's) form an efficient basis for sampled bandpass signals. We mitigate the wall clutter efficiently within the compressive measurements through the use of a bandpass modulated DPSS basis. Then, in each step of an iterative algorithm for detecting the target positions, we use a modulated DPSS basis to cancel nearly all of the target return corresponding to previously selected targets. With this basis, we improve upon the target detection sensitivity of a Fourier-based technique.

M. Grottke, A. Avritzer, D. S. Menasché, J. Alonso, L. Aguiar, S. G. Alvarez.  2015.  "WAP: Models and metrics for the assessment of critical-infrastructure-targeted malware campaigns". 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE). :330-335.

Ensuring system survivability in the wake of advanced persistent threats is a big challenge that the security community is facing to ensure critical infrastructure protection. In this paper, we define metrics and models for the assessment of coordinated massive malware campaigns targeting critical infrastructure sectors. First, we develop an analytical model that allows us to capture the effect of neighborhood on different metrics (infection probability and contagion probability). Then, we assess the impact of putting operational but possibly infected nodes into quarantine. Finally, we study the implications of scanning nodes for early detection of malware (e.g., worms), accounting for false positives and false negatives. Evaluating our methodology using a small four-node topology, we find that malware infections can be effectively contained by using quarantine and appropriate rates of scanning for soft impacts.
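The quarantine-and-scanning trade-off described in this abstract can be illustrated with a toy Monte Carlo simulation. This is not the authors' analytical model; the topology, infection rate, scan rate, and all other parameter values below are invented for illustration.

```python
import random

def simulate(n_nodes=4, beta=0.3, scan_tpr=0.8, quarantine=True,
             steps=20, trials=5000, seed=1):
    """Monte Carlo estimate of the average per-node infection probability
    on a small fully connected topology, with optional quarantine of
    nodes flagged by scanning (false positives/negatives folded into
    scan_tpr for simplicity)."""
    rng = random.Random(seed)
    infected_total = 0
    for _ in range(trials):
        infected = {0}          # node 0 starts the malware campaign
        quarantined = set()
        for _ in range(steps):
            if quarantine:
                # scanning flags each infected node with prob scan_tpr
                for v in list(infected - quarantined):
                    if rng.random() < scan_tpr:
                        quarantined.add(v)
            # contagion: every active infected node attacks its neighbours
            for v in list(infected - quarantined):
                for u in range(n_nodes):
                    if u != v and u not in infected and rng.random() < beta:
                        infected.add(u)
        infected_total += len(infected)
    return infected_total / (trials * n_nodes)

p_no_q = simulate(quarantine=False)
p_q = simulate(quarantine=True)
print(f"infection probability without quarantine: {p_no_q:.3f}")
print(f"infection probability with quarantine:    {p_q:.3f}")
```

Even this crude sketch reproduces the qualitative finding: quarantining scanned nodes sharply lowers the fraction of the topology that ends up infected.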

Querel, Louis-Philippe, Rigby, Peter C..  2018.  WarningsGuru: Integrating Statistical Bug Models with Static Analysis to Provide Timely and Specific Bug Warnings. Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. :892-895.

The detection of bugs in software systems has been divided into two research areas: static code analysis and statistical modeling of historical data. Static analysis indicates precise problems on line numbers but has the disadvantage of suggesting many warnings which are often false positives. In contrast, statistical models use the history of the system to suggest which files or commits are likely to contain bugs. These coarse-grained predictions do not indicate to the developer the precise reasons for the bug prediction. We combine static analysis with statistical bug models to limit the number of warnings and provide specific warning information at the line level. Previous research was able to process only a limited number of releases; our tool, WarningsGuru, can analyze all commits in a source code repository, and we have currently processed thousands of commits and warnings. Since we process every commit, we present developers with more precise information about when a warning is introduced, allowing us to show recent warnings that are introduced in statistically risky commits. Results from two OSS projects show that CommitGuru's statistical model flags 25% and 29% of all commits as risky. When we combine this with static analysis in WarningsGuru, the number of risky commits with warnings is 20% for both projects, and the number of commits with new warnings is only 3% and 6%. We can drastically reduce the number of commits and warnings developers have to examine. The tool, source code, and demo are available at https://github.com/louisq/warningsguru.

Hoyle, Roberto, Das, Srijita, Kapadia, Apu, Lee, Adam J., Vaniea, Kami.  2017.  Was My Message Read?: Privacy and Signaling on Facebook Messenger. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. :3838–3842.

Major online messaging services such as Facebook Messenger and WhatsApp are starting to provide users with real-time information about when people read their messages. While useful, the feature has the potential to negatively impact privacy as well as cause concern over access to self. We report on two surveys using Mechanical Turk which looked at senders' (N=402) use of and reactions to the `message seen' feature, and recipients' (N=316) privacy and signaling behaviors in the face of such visibility. Our findings indicate that senders experience a range of emotions when their message is not read, or is read but not answered immediately. Recipients also engage in various signaling behaviors in the face of visibility, by either replying or not replying immediately.

Han, Yi, Etigowni, Sriharsha, Liu, Hua, Zonouz, Saman, Petropulu, Athina.  2017.  Watch Me, but Don't Touch Me! Contactless Control Flow Monitoring via Electromagnetic Emanations. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1095–1108.

Trustworthy operation of industrial control systems depends on secure and real-time code execution on the embedded programmable logic controllers (PLCs). The controllers monitor and control the critical infrastructures, such as electric power grids and healthcare platforms, and continuously report back the system status to human operators. We present Zeus, a contactless embedded controller security monitor to ensure its execution control flow integrity. Zeus leverages the electromagnetic emission by the PLC circuitry during the execution of the controller programs. Zeus's contactless execution tracking enables non-intrusive monitoring of security-critical controllers with tight real-time constraints. Those devices often cannot tolerate the cost and performance overhead that comes with additional traditional hardware or software monitoring modules. Furthermore, Zeus provides an air-gap between the monitor (trusted computing base) and the target (potentially compromised) PLC. This eliminates the possibility of the monitor being infected by the same attack vectors. Zeus monitors for control flow integrity of the PLC program execution. Zeus monitors the communications between the human machine interface and the PLC, and captures the control logic binary uploads to the PLC. Zeus exercises the feasible execution paths, and fingerprints their emissions using an external electromagnetic sensor. Zeus trains a neural network for legitimate PLC executions, and uses it at runtime to identify the control flow based on the PLC's electromagnetic emissions. We implemented Zeus on a commercial Allen Bradley PLC, which is widely used in industry, and evaluated it on real-world control program executions. Zeus was able to distinguish between different legitimate and malicious executions with 98.9% accuracy and with zero overhead on PLC execution by design.

Saifuddin, K. M., Ali, A. J. B., Ahmed, A. S., Alam, S. S., Ahmad, A. S..  2018.  Watchdog and Pathrater based Intrusion Detection System for MANET. 2018 4th International Conference on Electrical Engineering and Information Communication Technology (iCEEiCT). :168–173.

Mobile Ad Hoc Network (MANET) is highly vulnerable to attacks because of its broad distribution and open nodes. Hence, an effective Intrusion Detection System (IDS) is vital in MANET to deter unwanted malicious attacks. An IDS based on the watchdog and pathrater method has been proposed in this paper, and its performance has been evaluated using the Dynamic Source Routing (DSR) and Ad-hoc On-demand Distance Vector (AODV) routing protocols, with and without considering the effect of the sinkhole attack. The results obtained justify that the proposed IDS is capable of detecting suspicious activities and identifying the malicious nodes. Moreover, it replaces the fake route with a real one in the routing table in order to mitigate the security risks. The performance appraisal also suggests that the AODV protocol has a capacity of sending more packets than DSR and yields more throughput.

Gao, Yang, Li, Borui, Wang, Wei, Xu, Wenyao, Zhou, Chi, Jin, Zhanpeng.  2018.  Watching and Safeguarding Your 3D Printer: Online Process Monitoring Against Cyber-Physical Attacks. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2:108:1–108:27.

The increasing adoption of 3D printing in many safety and mission critical applications exposes 3D printers to a variety of cyber attacks that may result in catastrophic consequences if the printing process is compromised. For example, the mechanical properties (e.g., physical strength, thermal resistance, dimensional stability) of 3D printed objects could be significantly affected and degraded if a simple printing setting is maliciously changed. To address this challenge, this study proposes a model-free real-time online process monitoring approach that is capable of detecting and defending against cyber-physical attacks on the firmware of 3D printers. Specifically, we explore the potential attacks and consequences of four key printing attributes (including infill path, printing speed, layer thickness, and fan speed) and then formulate the attack models. Based on the intrinsic relation between the printing attributes and the physical observations, our defense model is established by systematically analyzing the multi-faceted, real-time measurements collected from the accelerometer, magnetometer and camera. The Kalman filter and Canny filter are used to map and estimate the three aforementioned critical toolpath attributes that might affect the printing quality. Mel-frequency Cepstrum Coefficients are used to extract features for fan speed estimation. Experimental results show that, for a complex 3D printed design, our method can achieve 4% Hausdorff distance compared with the model dimension for infill path estimate, 6.07% Mean Absolute Percentage Error (MAPE) for speed estimate, 9.57% MAPE for layer thickness estimate, and 96.8% accuracy for fan speed identification. Our study demonstrates that this new approach can effectively defend against cyber-physical attacks on 3D printers and the 3D printing process.

Shalev, Noam, Keidar, Idit, Moatti, Yosef, Weinsberg, Yaron.  2016.  WatchIT: Who Watches Your IT Guy? Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :93–96.

System administrators have unlimited access to system resources. As the Snowden case shows, these permissions can be exploited to steal valuable personal, classified, or commercial data. In this work we propose a strategy that increases organizational information security by constraining IT personnel's view of the system and monitoring their actions. To this end, we introduce the abstraction of perforated containers: while regular Linux containers are too restrictive to be used by system administrators, by "punching holes" in them we strike a balance between information security and required administrative needs. Our system predicts which system resources should be accessible for handling each IT issue, creates a perforated container with the corresponding isolation, and deploys it in the corresponding machines as needed for fixing the problem. Under this approach, the system administrator retains superuser privileges, while being able to operate only within the container limits. We further provide means for the administrator to bypass the isolation and perform operations beyond these boundaries; however, such operations are monitored and logged for later analysis and anomaly detection. We provide a proof-of-concept implementation of our strategy, along with a case study on the IT database of IBM Research in Israel.

K. Cavalleri, B. Brinkman.  2015.  "Water treatment in context: resources and African religion". 2015 Systems and Information Engineering Design Symposium. :19-23.

Drinking water availability is a crucial problem that must be addressed in order to improve the quality of life of individuals living in developing nations. Improving water supply availability is important for public health, as it is the third highest risk factor for poor health in developing nations with high mortality rates. This project researched drinking water filtration for areas of Sub-Saharan Africa near existing bodies of water, where the populations are completely reliant on collecting from surface water sources: the most contaminated water source type. Water filtration methods that can be created entirely by the consumer would alleviate dependence on aid organizations in developing nations, put the consumers in control, and improve public health. Filtration processes pass water through a medium that catches contaminants through physical entrapment or absorption and thus yields a cleaner effluent. When exploring different materials for filtration, removal of contaminants and hydraulic conductivity are the two most important considerations. Not only does the method have to treat the water, but it also has to do so quickly enough to produce potable water at a rate that keeps up with everyday needs. Cement is easily accessible in Sub-Saharan regions. Most concrete mixtures are not meant to be pervious, as concrete is a construction material used for its compressive strength; however, reduced water content in a cement mixture gives it higher permeability. Several concrete samples of varying thicknesses and water concentrations were created. Bacterial count tests were performed on both pre-filtered and filtered water samples. Concrete filtration does remove bacteria from drinking water; however, the method can still be improved upon.

Nasr, Milad, Zolfaghari, Hadi, Houmansadr, Amir.  2017.  The Waterfall of Liberty: Decoy Routing Circumvention That Resists Routing Attacks. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2037–2052.

Decoy routing is an emerging approach for censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. Recent studies on decoy routing consider all decoy routing systems to be susceptible to a fundamental attack, regardless of their specific designs, in which the censors re-route traffic around decoy ASes, thereby preventing censored users from using such systems. In this paper, we propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; therefore we call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. Designing a downstream-only decoy routing system is a challenging engineering problem, since decoy routers do not intercept the upstream traffic of censored users. We design the first downstream-only decoy routing system, called Waterfall, by devising unique covert communication mechanisms. We also use various techniques to make our Waterfall implementation resistant to traffic analysis attacks. We believe that downstream-only decoy routing is a significant step towards making decoy routing systems practical, because a downstream-only decoy routing system can be deployed using a significantly smaller number of volunteer ASes for a given target resistance to rerouting attacks. For instance, we show that a Waterfall implementation with only a single decoy AS is as resistant to routing attacks (against China) as a traditional decoy system (e.g., Telex) with 53 decoy ASes.

Feng, W., Yan, W., Wu, S., Liu, N..  2017.  Wavelet transform and unsupervised machine learning to detect insider threat on cloud file-sharing. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :155–157.

As increasingly more enterprises are deploying cloud file-sharing services, this adds a new channel for potential insider threats to company data and IPs. In this paper, we introduce a two-stage machine learning system to detect anomalies. In the first stage, we project the access logs of cloud file-sharing services onto relationship graphs and use three complementary graph-based unsupervised learning methods: OddBall, PageRank and Local Outlier Factor (LOF) to generate outlier indicators. In the second stage, we ensemble the outlier indicators and introduce the discrete wavelet transform (DWT) method, and propose a procedure to use wavelet coefficients with the Haar wavelet function to identify outliers for insider threat. The proposed system has been deployed in a real business environment, and demonstrated effectiveness by selected case studies.
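The second-stage idea, applying a Haar discrete wavelet transform to a series of outlier indicators and flagging large detail coefficients, can be sketched in a few lines of Python. The score series and threshold below are hypothetical, not taken from the deployed system.

```python
import math

def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform:
    returns (approximation, detail) coefficients per sample pair."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

# Hypothetical daily outlier-indicator series with a sudden burst of
# anomalous file-sharing activity on day 8.
scores = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 6.0, 1.0, 1.0, 0.9]
approx, detail = haar_dwt(scores)

# Large detail coefficients mark abrupt changes; flag them as candidate
# insider-threat windows with a simple threshold on |detail|.
threshold = 1.0
flagged = [i for i, d in enumerate(detail) if abs(d) > threshold]
print("detail coefficients:", [round(d, 2) for d in detail])
print("flagged windows:", flagged)
```

The burst on day 8 produces one detail coefficient far above the baseline, so window 4 (days 8-9) is the only one flagged.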

Goel, N., Sharma, A., Goswami, S..  2017.  A way to secure a QR code: SQR. 2017 International Conference on Computing, Communication and Automation (ICCCA). :494–497.

Nowadays, the need for fast access to data is increasing along with the exponential growth of the security field. QR codes have served as a useful tool for fast and convenient sharing of data. But with increased usage, QR codes have become vulnerable to attacks such as phishing, pharming, manipulation and exploitation. These security flaws could pose a danger to an average user. In this paper we propose a way, called Secured QR (SQR), to fix these issues. In this approach we secure a QR code with the help of a key on the generator side, and the same key is used to recover the original information on the scanner side. We use the AES algorithm for this purpose. The SQR approach is applicable when we want to share or use sensitive information within an organization, such as sharing of profile details, exchange of payment information, business cards, generation of electronic tickets, etc.

Hastings, Marcella, Fried, Joshua, Heninger, Nadia.  2016.  Weak Keys Remain Widespread in Network Devices. Proceedings of the 2016 Internet Measurement Conference. :49–63.

In 2012, two academic groups reported having computed the RSA private keys for 0.5% of HTTPS hosts on the internet, and traced the underlying issue to widespread random number generation failures on networked devices. The vulnerability was reported to dozens of vendors, several of whom responded with security advisories, and the Linux kernel was patched to fix a boot-time entropy hole that contributed to the failures. In this paper, we measure the actions taken by vendors and end users over time in response to the original disclosure. We analyzed public internet-wide TLS scans performed between July 2010 and May 2016 and extracted 81 million distinct RSA keys. We then computed the pairwise common divisors for the entire set in order to factor over 313,000 keys vulnerable to the flaw, and fingerprinted implementations to study patching behavior over time across vendors. We find that many vendors appear to have never produced a patch, and observed little to no patching behavior by end users of affected devices. The number of vulnerable hosts increased in the years after notification and public disclosure, and several newly vulnerable implementations have appeared since 2012. Vendor notification, positive vendor responses, and even vendor-produced public security advisories appear to have little correlation with end-user security.
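The factoring step the authors describe, computing common divisors across a corpus of RSA moduli, can be illustrated on toy numbers. Real studies use a batch-GCD product tree over millions of full-size keys; the primes below are tiny stand-ins chosen for this sketch.

```python
from math import gcd

# Toy RSA moduli (far smaller than real keys) built so that two of them
# share a prime factor, mimicking the RNG failures described above.
p, q1, q2, r, s = 10007, 10009, 10037, 10039, 10061
moduli = [p * q1, p * q2, r * s]   # moduli[0] and moduli[1] share p

def find_shared_factors(mods):
    """Pairwise-GCD scan: any gcd > 1 between two distinct RSA moduli
    immediately factors both of them."""
    broken = {}
    for i in range(len(mods)):
        for j in range(i + 1, len(mods)):
            g = gcd(mods[i], mods[j])
            if g > 1:
                broken[i] = (g, mods[i] // g)
                broken[j] = (g, mods[j] // g)
    return broken

factored = find_shared_factors(moduli)
print(factored)   # keys 0 and 1 are fully factored; key 2 survives
```

The quadratic pairwise scan is fine for a handful of keys; at the 81-million-key scale of the study, the same computation is done with a product tree in quasi-linear time.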

Wang, C., Jiang, Y., Zhao, X., Song, X., Gu, M., Sun, J..  2018.  Weak-Assert: A Weakness-Oriented Assertion Recommendation Toolkit for Program Analysis. 2018 IEEE/ACM 40th International Conference on Software Engineering: Companion (ICSE-Companion). :69–72.

Assertions are helpful in program analysis, such as software testing and verification. The most challenging part of automatically recommending assertions is to design the assertion patterns and to insert assertions in proper locations. In this paper, we develop Weak-Assert, a weakness-oriented assertion recommendation toolkit for program analysis of C code. A weakness-oriented assertion is an assertion which can help to find potential program weaknesses. Weak-Assert uses well-designed patterns to match the abstract syntax trees of source code automatically. It collects significant messages from trees and inserts assertions into proper locations of programs. These assertions can be checked by using program analysis techniques. The experiments are set up on Juliet test suite and several actual projects in Github. Experimental results show that Weak-Assert helps to find 125 program weaknesses in 26 actual projects. These weaknesses are confirmed manually to be triggered by some test cases.
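The core idea, matching weakness patterns against an abstract syntax tree and recommending a guarding assertion at the matched location, can be sketched with Python's `ast` module. Weak-Assert itself targets C code; the pattern (division by a plain variable) and the recommended assertion below are illustrative, not the toolkit's actual patterns.

```python
import ast

SOURCE = """
def average(total, count):
    return total / count
"""

def recommend_assertions(source):
    """Walk the abstract syntax tree and, for every division whose
    denominator is a plain variable, recommend a guarding assertion
    at that line (a toy weakness-oriented pattern)."""
    tree = ast.parse(source)
    recs = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
                and isinstance(node.right, ast.Name)):
            recs.append((node.lineno, f"assert {node.right.id} != 0"))
    return recs

for line, assertion in recommend_assertions(SOURCE):
    print(f"line {line}: recommend `{assertion}`")
```

A real pattern library would cover many weakness classes (null dereference, buffer bounds, integer overflow) and use richer context than a single AST node.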

Shen, Shiyu, Gao, Jianlin, Wu, Aitian.  2018.  Weakness Identification and Flow Analysis Based on Tor Network. Proceedings of the 8th International Conference on Communication and Network Security. :90–94.

As Internet technology develops rapidly, attacks against Tor networks become more and more frequent, making it increasingly difficult for the Tor network to meet people's demand to protect their private information. A method to improve the anonymity of Tor is therefore urgently needed. In this paper, we mainly discuss the principle of Tor, the largest anonymous communication system in the world, analyze the reasons for its limited efficiency, and discuss the vulnerability of link fingerprints and node selection. After that, a node recognition model based on SVM is established, which verifies that traffic characteristics expose node attributes, thus revealing the link and destroying anonymity. Based on the above, some measures are put forward to improve the Tor protocol to make it more anonymous.

Vhaduri, S., Poellabauer, C..  2017.  Wearable Device User Authentication Using Physiological and Behavioral Metrics. 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). :1–6.

Wearables, such as Fitbit, Apple Watch, and Microsoft Band, with their rich collection of sensors, facilitate the tracking of healthcare- and wellness-related metrics. However, the assessment of the physiological metrics collected by these devices could also be useful in identifying the user of the wearable, e.g., to detect unauthorized use or to correctly associate the data to a user if wearables are shared among multiple users. Further, researchers and healthcare providers often rely on these smart wearables to monitor research subjects and patients in their natural environments over extended periods of time. Here, it is important to associate the sensed data with the corresponding user and to detect if a device is being used by an unauthorized individual, to ensure study compliance. Existing one-time authentication approaches using credentials (e.g., passwords, certificates) or trait-based biometrics (e.g., face, fingerprints, iris, voice) might fail, since such credentials can easily be shared among users. In this paper, we present a continuous and reliable wearable-user authentication mechanism using coarse-grain minute-level physical activity (step counts) and physiological data (heart rate, calorie burn, and metabolic equivalent of task). From our analysis of 421 Fitbit users from a two-year long health study, we are able to statistically distinguish nearly 100% of the subject-pairs and to identify subjects with an average accuracy of 92.97%.
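As a rough illustration of authenticating a wearable's user from coarse activity and physiological features, here is a minimal nearest-centroid sketch. The study uses proper classifiers over two years of Fitbit data; the users, features, and distributions below are entirely synthetic.

```python
import math
import random

random.seed(7)

def make_user(step_mu, hr_mu, n=60):
    """Hypothetical minute-level (step count, heart rate) samples."""
    return [(random.gauss(step_mu, 8), random.gauss(hr_mu, 4))
            for _ in range(n)]

# Two enrolled users with distinct activity/physiology profiles.
profiles = {"alice": make_user(90, 68), "bob": make_user(40, 82)}
centroids = {u: tuple(sum(c) / len(s) for c in zip(*s))
             for u, s in profiles.items()}

def authenticate(samples):
    """Nearest-centroid check: attribute a window of sensor readings to
    the enrolled user whose feature centroid is closest. Far simpler
    than the study's models, but it shows why shared physiological
    data is hard to fake, unlike a password."""
    mean = tuple(sum(c) / len(samples) for c in zip(*samples))
    return min(centroids, key=lambda u: math.dist(mean, centroids[u]))

probe = make_user(88, 70, n=15)       # new window from Alice's device
print("attributed to:", authenticate(probe))
```

Because the check runs continuously on fresh sensor windows, an unauthorized wearer drifts away from the enrolled centroid and is flagged, which one-time credential checks cannot do.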

Chalkley, Joe D., Ranji, Thomas T., Westling, Carina E. I., Chockalingam, Nachiappan, Witchel, Harry J..  2017.  Wearable Sensor Metric for Fidgeting: Screen Engagement Rather Than Interest Causes NIMI of Wrists and Ankles. Proceedings of the European Conference on Cognitive Ergonomics 2017. :158–161.

Measuring fidgeting is an important goal for the psychology of mind-wandering and for human computer interaction (HCI). Previous work measuring the movement of the head, torso and thigh during HCI has shown that engaging screen content leads to non-instrumental movement inhibition (NIMI). Camera-based methods for measuring wrist movements are limited by occlusions. Here we used a high pass filtered magnitude of wearable tri-axial accelerometer recordings during 2-minute passive HCI stimuli as a surrogate for movement of the wrists and ankles. With 24 seated, healthy volunteers experiencing HCI, this metric showed that wrists moved significantly more than ankles. We found that NIMI could be detected in the wrists and ankles; it distinguished extremes of interest and boredom via restlessness. We conclude that both free-willed and forced screen engagement can elicit NIMI of the wrists and ankles.
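The surrogate metric, a high-pass filtered magnitude of tri-axial accelerometer recordings, can be sketched as follows. The moving-average high-pass and the synthetic traces are chosen for illustration and are not the authors' exact filter or data.

```python
import math

def fidget_metric(samples, window=5):
    """High-pass filter the magnitude of tri-axial accelerometer samples
    by subtracting a trailing moving average (removing the static
    gravity/posture component), then sum the absolute residuals as a
    movement score."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    score = 0.0
    for i in range(len(mags)):
        lo = max(0, i - window + 1)
        baseline = sum(mags[lo:i + 1]) / (i + 1 - lo)
        score += abs(mags[i] - baseline)
    return score

# Hypothetical ankle vs wrist traces: the ankle is still (pure gravity),
# the wrist jitters slightly every other sample.
still  = [(0.0, 0.0, 1.0)] * 50
fidget = [(0.05 * (i % 2), 0.0, 1.0) for i in range(50)]
print(f"ankle score: {fidget_metric(still):.4f}")
print(f"wrist score: {fidget_metric(fidget):.4f}")
```

The still trace scores exactly zero because its magnitude is constant, while the jittering trace accumulates residual movement, mirroring the paper's finding that wrists move more than ankles.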

Chen, D., Irwin, D..  2017.  Weatherman: Exposing Weather-Based Privacy Threats in Big Energy Data. 2017 IEEE International Conference on Big Data (Big Data). :1079–1086.

Smart energy meters record electricity consumption and generation at fine-grained intervals, and are among the most widely deployed sensors in the world. Energy data embeds detailed information about a building's energy-efficiency, as well as the behavior of its occupants, which academia and industry are actively working to extract. In many cases, either inadvertently or by design, these third-parties only have access to anonymous energy data without an associated location. The location of energy data is highly useful and highly sensitive information: it can provide important contextual information to improve big data analytics or interpret their results, but it can also enable third-parties to link private behavior derived from energy data with a particular location. In this paper, we present Weatherman, which leverages a suite of analytics techniques to localize the source of anonymous energy data. Our key insight is that energy consumption data, as well as wind and solar generation data, largely correlates with weather, e.g., temperature, wind speed, and cloud cover, and that every location on Earth has a distinct weather signature that uniquely identifies it. Weatherman represents a serious privacy threat, but also a potentially useful tool for researchers working with anonymous smart meter data. We evaluate Weatherman's potential in both areas by localizing data from over one hundred smart meters using a weather database that includes data from over 35,000 locations. Our results show that Weatherman localizes coarse (one-hour resolution) energy consumption, wind, and solar data to within 16.68km, 9.84km, and 5.12km, respectively, on average, which is more accurate using much coarser resolution data than prior work on localizing only anonymous solar data using solar signatures.
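The localization idea, matching an anonymous energy trace against per-location weather signatures, can be sketched with a simple correlation search. All series below are invented; the real system uses a 35,000-location weather database and far richer analytics.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hourly temperatures for three candidate locations (a toy "weather
# database"); each location has a distinct signature.
weather = {
    "city_a": [18, 16, 13, 10, 11, 13, 16, 18],   # cools at midday
    "city_b": [22, 23, 25, 27, 26, 25, 23, 22],   # peaks at midday
    "city_c": [5, 5, 5, 6, 5, 5, 5, 5],           # nearly flat
}

# Anonymous consumption trace: cooling load tracks city_b's temperature.
consumption = [3.1, 3.3, 3.8, 4.4, 4.2, 3.8, 3.3, 3.0]

guess = max(weather, key=lambda loc: pearson(consumption, weather[loc]))
print("most likely location:", guess)
```

The attack works because temperature-driven loads inherit the weather's shape: the trace correlates near +1 with its true location and weakly or negatively with the others.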

Tran, Manh Cong, Nakamura, Yasuhiro.  2016.  Web Access Behaviour Model for Filtering Out HTTP Automated Software Accessed Domain. Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. :67:1–67:4.

Over many decades, due to the fast growth of the World Wide Web, HTTP automated software/applications (auto-ware) have bloomed for multiple purposes. Unfortunately, besides normal applications such as virus definition or operating system updating, auto-ware can also act as abnormal processes such as botnets, worms, viruses, spyware, and advertising software (adware). Therefore, auto-ware consumes network bandwidth, might become an internal security threat, and the domains/servers accessed by auto-ware might themselves be malicious. Understanding the behaviour of HTTP auto-ware is beneficial for anomaly/malware detection, network management, traffic engineering and security. In this paper, HTTP auto-ware communication behaviour is analysed and modeled, from which a method for filtering out its domains/servers is proposed. The filtered results can be used as a good resource for other security purposes such as malicious domain/URL detection/filtering or investigation of HTTP malware from internal threats.
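One simple behavioural cue in this spirit, assumed here for illustration rather than taken from the paper's model, is the near-constant polling interval of automated clients versus the burstiness of human browsing.

```python
import statistics

def regularity(timestamps):
    """Coefficient of variation of inter-request intervals: automated
    software tends to poll a domain at near-constant intervals (CV near
    0), while human browsing is bursty (CV well above 0)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# Hypothetical request timestamps (seconds) to a single domain.
bot_times = [0, 60, 120, 181, 240, 300, 361, 420]     # ~60 s polling
human_times = [0, 4, 5, 9, 62, 300, 305, 1200]        # bursty browsing

for name, ts in [("bot-like", bot_times), ("human-like", human_times)]:
    print(f"{name}: CV = {regularity(ts):.2f}")
```

Thresholding such a regularity score per accessed domain gives a crude first-pass filter for auto-ware traffic, which is the kind of behavioural separation the paper's model formalizes.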

Shar, L., Briand, L., Tan, H..  2014.  Web Application Vulnerability Prediction using Hybrid Program Analysis and Machine Learning. Dependable and Secure Computing, IEEE Transactions on. PP:1-1.

Due to limited time and resources, web software engineers need support in identifying vulnerable code. A practical approach to predicting vulnerable code would enable them to prioritize security auditing efforts. In this paper, we propose using a set of hybrid (static+dynamic) code attributes that characterize input validation and input sanitization code patterns and are expected to be significant indicators of web application vulnerabilities. Because static and dynamic program analyses complement each other, both techniques are used to extract the proposed attributes in an accurate and scalable way. Current vulnerability prediction techniques rely on the availability of data labeled with vulnerability information for training. For many real world applications, past vulnerability data is often not available or at least not complete. Hence, to address both situations where labeled past data is fully available or not, we apply both supervised and semi-supervised learning when building vulnerability predictors based on hybrid code attributes. Given that semi-supervised learning is entirely unexplored in this domain, we describe how to use this learning scheme effectively for vulnerability prediction. We performed empirical case studies on seven open source projects where we built and evaluated supervised and semi-supervised models. When cross validated with fully available labeled data, the supervised models achieve an average of 77 percent recall and 5 percent probability of false alarm for predicting SQL injection, cross site scripting, remote code execution and file inclusion vulnerabilities. With a low amount of labeled data, when compared to the supervised model, the semi-supervised model showed an average improvement of 24 percent higher recall and 3 percent lower probability of false alarm, thus suggesting semi-supervised learning may be a preferable solution for many real world applications where vulnerability data is missing.
 

Hasslinger, G., Kunbaz, M., Hasslinger, F., Bauschert, T..  2017.  Web Caching Evaluation from Wikipedia Request Statistics. 2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). :1–6.

Wikipedia is one of the most popular information platforms on the Internet. The user access pattern to Wikipedia pages depends on their relevance in the current worldwide social discourse. We use publicly available statistics about the top-1000 most popular pages on each day to estimate the efficiency of caches in support of the platform. While the data volumes are moderate, the main goal of Wikipedia caches is to reduce access times for page views and edits. We study the impact of the most popular pages on the achievable cache hit rate in comparison to Zipf request distributions, and we include daily dynamics in popularity.
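The achievable hit rate of a cache holding the most popular pages under a Zipf request distribution, one of the baselines such studies compare against, can be computed directly. The parameters below are illustrative, not Wikipedia's measured traffic figures.

```python
def zipf_hit_rate(n_pages, cache_size, alpha=1.0):
    """Hit rate of an ideal cache storing the cache_size most popular
    pages when requests follow a Zipf(alpha) distribution over n_pages:
    the summed probability mass of the cached ranks."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_pages + 1)]
    total = sum(weights)
    return sum(weights[:cache_size]) / total

for c in (10, 100, 1000):
    print(f"top-{c:>4} cache over 1M pages: "
          f"hit rate = {zipf_hit_rate(1_000_000, c):.2%}")
```

The heavy Zipf head is why a cache of the top-1000 pages already captures a large share of requests, even though it holds a vanishing fraction of all pages.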

Nasseralfoghara, M., Hamidi, H..  2019.  Web Covert Timing Channels Detection Based on Entropy. 2019 5th International Conference on Web Research (ICWR). :12-15.
Today, analyzing web weaknesses and vulnerabilities in order to find security attacks has become more urgent. Whenever communication occurs contrary to the system security policies, a covert channel has been created. The attacker can easily disclose information from the victim's system with just one public access permission. Covert timing channels, unlike covert storage channels, do not rely on memory storage and they draw less attention. Different methods have been proposed for their identification, which generally benefit from the shape of traffic and the channel's regularity. In this article, an entropy-based detection method is designed and implemented. The attacker can adjust the amount of channel entropy by controlling measures such as changing the channel's level or creating noise on the channel to evade the analyst's detection. As a result, the entropy threshold is not always constant for detection. By comparing the entropy from different levels of the channel and the analyst, we conclude that the analyst must investigate traffic at all possible levels.
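The entropy-based detection idea can be sketched directly: bin the inter-packet delays and compute their Shannon entropy, which drops when a covert timing channel signals bits with a few distinct delays. The traffic traces below are synthetic, and the binning scheme is a simplifying assumption, not the article's exact method.

```python
import math
import random
from collections import Counter

def delay_entropy(delays, n_bins=8):
    """Shannon entropy (bits) of inter-packet delays bucketed into
    n_bins equal-width bins over the observed range. A channel that
    encodes bits with two distinct delays concentrates the mass in
    few bins, lowering entropy versus normal traffic."""
    lo, hi = min(delays), max(delays)
    width = (hi - lo) / n_bins or 1.0
    counts = Counter(min(int((d - lo) / width), n_bins - 1)
                     for d in delays)
    n = len(delays)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(3)
# Synthetic traces: normal traffic with spread-out delays, and a covert
# channel signalling 0/1 with two fixed delays (0.1 s and 0.5 s).
normal = [random.uniform(0.01, 0.8) for _ in range(400)]
covert = [0.1 if random.random() < 0.5 else 0.5 for _ in range(400)]

print(f"normal traffic entropy: {delay_entropy(normal):.2f} bits")
print(f"covert channel entropy: {delay_entropy(covert):.2f} bits")
```

As the abstract notes, an attacker can add noise to push the covert trace's entropy back up, which is why a single fixed threshold does not suffice and traffic must be examined at multiple levels.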