Biblio

Found 12055 results

Ma, Congjun, Wang, Haipeng, Zhao, Tao, Dian, Songyi.  2019.  Weighted LS-SVMR-Based System Identification with Outliers. Proceedings of the 2019 4th International Conference on Automation, Control and Robotics Engineering. :1–6.
Plenty of methods are applied in system identification, and data-driven approaches are increasingly popular. Identification usually assumes the absence of outliers in the system to be modeled, but this assumption rarely holds in reality. To improve the precision of identification for systems with outliers, robust approaches are needed. This study analyzes the superiority of weighted Least Squares Support Vector Machine Regression (LS-SVMR) in the field of system identification under random outliers and compares it mainly with standard LS-SVMR.
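
As a rough illustration of the weighted LS-SVMR idea (not the authors' implementation), the sketch below solves the standard weighted LS-SVM regression dual system in NumPy, where per-sample weights v_i shrink the influence of suspected outliers. The RBF kernel, the values of gamma and sigma, and the toy sine data with one injected outlier are all assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=0.1):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def weighted_ls_svmr(X, y, v, gamma=10.0, sigma=0.1):
    # Dual system of weighted LS-SVM regression:
    # [ 0   1^T                 ] [  b  ]   [ 0 ]
    # [ 1   K + diag(1/(g*v_i)) ] [alpha] = [ y ]
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * v))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, bias

def predict(Xtr, alpha, b, Xte, sigma=0.1):
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b

# Toy data: a sine with one gross outlier injected at index 10.
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(40)
y[10] += 5.0                        # the outlier
v = np.ones(40)
v[10] = 1e-3                        # weight it (almost) out
alpha, b = weighted_ls_svmr(X, y, v)
yhat = predict(X, alpha, b, X)
```

Because the outlier's weight inflates only its own diagonal regularization term, the fitted curve stays close to the underlying sine instead of being dragged toward the corrupted sample.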
Rehman, Ateeq Ur, Jiang, Aimin, Rehman, Abdul, Paul, Anand.  2019.  Weighted Based Trustworthiness Ranking in Social Internet of Things by using Soft Set Theory. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1644–1648.

The Internet of Things (IoT) has been an evolving research area for the last two decades. The integration of the IoT with social networking concepts results in an interdisciplinary research area called the Social Internet of Things (SIoT). The SIoT is dominant over the traditional IoT because of its structure, implementation, and operational manageability. In the SIoT, devices interact with each other independently to establish social relationships for collective goals. Establishing trustworthy relationships among the devices significantly improves interaction in the SIoT and mitigates risk. The problem is to choose a trustworthy node that is most suitable according to the choice parameters of the requesting node. The node judged best by one node is not necessarily the most suitable for other nodes, as trustworthiness is evaluated independently by each node. We employ theoretical characterizations of soft set theory to deal with this kind of decision-making problem. In this paper, we develop a weighted trustworthiness ranking model based on soft set theory to evaluate trustworthiness in the SIoT. The purpose of the proposed research is to reduce the risk of fraudulent transactions by identifying the most trusted nodes.
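
A minimal sketch of the soft-set machinery the abstract alludes to (the node names, choice parameters, and weights below are invented for illustration): each requester scores candidate nodes by a weighted choice value over its own parameters, so different requesters can rank the same nodes differently.

```python
# Soft set over candidate nodes: 0/1 membership in each choice parameter.
nodes = {
    "n1": {"honest_history": 1, "responsive": 1, "long_uptime": 0},
    "n2": {"honest_history": 1, "responsive": 0, "long_uptime": 1},
    "n3": {"honest_history": 0, "responsive": 1, "long_uptime": 1},
}
# Requester-specific weights: trust is subjective, so another requester
# may use a different weight vector and obtain a different ranking.
weights = {"honest_history": 0.6, "responsive": 0.3, "long_uptime": 0.1}

def weighted_choice_values(nodes, weights):
    # Weighted choice value c_i = sum_j w_j * f(e_j)(n_i)
    return {n: sum(weights[p] * v for p, v in params.items())
            for n, params in nodes.items()}

scores = weighted_choice_values(nodes, weights)
best = max(scores, key=scores.get)
```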

Shaghaghi, Arash, Kaafar, Mohamed Ali, Jha, Sanjay.  2017.  WedgeTail: An Intrusion Prevention System for the Data Plane of Software Defined Networks. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :849–861.
Networks are vulnerable to disruptions caused by malicious forwarding devices. The situation is likely to worsen in Software Defined Networks (SDNs) with the incompatibility of existing solutions, the use of programmable soft switches, and the potential of bringing down an entire network through compromised forwarding devices. In this paper, we present WedgeTail, an Intrusion Prevention System (IPS) designed to secure the SDN data plane. WedgeTail regards forwarding devices as points within a geometric space and stores the paths packets take when traversing the network as trajectories. To be efficient, it prioritizes forwarding devices before inspection using an unsupervised trajectory-based sampling mechanism. For each forwarding device, WedgeTail computes the expected and actual trajectories of packets and 'hunts' for any forwarding device not processing packets as expected. Compared to related work, WedgeTail is also capable of distinguishing between malicious actions such as packet drop and generation. Moreover, WedgeTail employs a radically different methodology that enables detecting threats autonomously. In fact, it has no reliance on pre-defined rules set by an administrator and may be easily imported to protect SDN networks with different setups, forwarding devices, and controllers. We have evaluated WedgeTail in simulated environments, and it has been capable of detecting and responding to all implanted malicious forwarding devices within a reasonable time-frame. We report on the design, implementation, and evaluation of WedgeTail in this manuscript.
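
The "expected vs. actual trajectory" comparison can be caricatured as follows (a deliberately simplified sketch, not WedgeTail's actual geometric algorithm; the flow endpoints and switch names are made up): any flow whose observed path diverges from the policy-expected path implicates the last device that still matched, since that device forwarded the packet somewhere the policy does not allow.

```python
# Expected paths come from the controller's policy; observed paths from
# the data plane. Keys are (src, dst) flows, values are switch sequences.
expected = {("10.0.0.1", "10.0.0.9"): ["s1", "s4", "s7"]}
observed = {("10.0.0.1", "10.0.0.9"): ["s1", "s4", "s5", "s7"]}  # detour via s5

def suspicious_devices(expected, observed):
    flagged = set()
    for flow, exp_path in expected.items():
        obs_path = observed.get(flow, [])
        if obs_path == exp_path:
            continue
        for i, (e, o) in enumerate(zip(exp_path, obs_path)):
            if e != o:
                # The device before the first mismatch mis-forwarded.
                flagged.add(exp_path[i - 1] if i else "ingress")
                break
        else:
            # Paths agree on their common prefix but one is truncated:
            # blame the last device that should have kept forwarding.
            flagged.add(exp_path[min(len(exp_path), len(obs_path)) - 1])
    return flagged

flags = suspicious_devices(expected, observed)
```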
Xun Gong, University of Illinois at Urbana-Champaign, Nikita Borisov, University of Illinois at Urbana-Champaign, Negar Kiyavash, University of Illinois at Urbana-Champaign, Nabil Schear, University of Illinois at Urbana-Champaign.  2012.  Website Detection Using Remote Traffic Analysis. 12th International Symposium on Privacy Enhancing Technologies (PETS 2012).

Recent work in traffic analysis has shown that traffic patterns leaked through side channels can be used to recover important semantic information. For instance, attackers can find out which website, or which page on a website, a user is accessing simply by monitoring the packet size distribution. We show that traffic analysis is an even greater threat to privacy than previously thought by introducing a new attack that can be carried out remotely. In particular, we show that, to perform traffic analysis, adversaries do not need to directly observe the traffic patterns. Instead, they can gain sufficient information by sending probes from a far-off vantage point, exploiting a queuing side channel in routers.

To demonstrate the threat of such remote traffic analysis, we study a remote website detection attack that works against home broadband users. Because the remotely observed traffic patterns are noisier than those obtained using previous schemes based on direct local traffic monitoring, we take a dynamic time warping (DTW) based approach to detecting fingerprints from the same website. As a new twist on website fingerprinting, we consider a website detection attack, where the attacker aims to find out whether a user browses a particular website, and its privacy implications. We show experimentally that, although the success of the attack is highly variable depending on the target site, for some sites it achieves very low error rates. We also show how such website detection can be used to deanonymize message board users.
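
A minimal DTW distance in the spirit of the paper's fingerprint matching (the traces below are toy packet-size sequences, not real measurements): a slightly warped revisit of the same site stays close under DTW even when samples shift in time, while a different site's trace does not.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(n*m) dynamic time warping with absolute-difference cost.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy packet-size traces: a time-stretched revisit of "site A" vs. another site.
site_a1 = [300, 1500, 1500, 400, 1500]
site_a2 = [305, 305, 1500, 1495, 400, 1500]   # same shape, slightly warped
site_b  = [60, 60, 80, 60, 60]
```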

Wu, Siyan, Tong, Xiaojun, Wang, Wei, Xin, Guodong, Wang, Bailing, Zhou, Qi.  2018.  Website Defacements Detection Based on Support Vector Machine Classification Method. Proceedings of the 2018 International Conference on Computing and Data Engineering. :62–66.
Website defacements can inflict significant harm on the website owner through the loss of reputation, the loss of money, or the leakage of information. Due to the complexity and diversity of web application systems, and especially a lack of necessary security maintenance, website defacements have increased year by year. In this paper, we focus on detecting whether a website has been defaced by extracting website features and website embedded trojan features. We use three classification learning algorithms, Gradient Boosting Decision Tree (GBDT), Random Forest (RF), and Support Vector Machine (SVM), in our classification experiments, and the experimental results show that the Support Vector Machine classifier performed better than the other two classifiers. It can achieve an overall accuracy of 95%-96% in detecting website defacements.
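
A hedged sketch of the SVM classification step (the feature set and both synthetic classes are invented stand-ins; the paper's actual website and embedded-trojan features are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-page feature vectors, e.g. [external_script_count,
# hidden_iframe_count, title_similarity, text_change_ratio].
rng = np.random.default_rng(1)
benign = rng.normal([1, 0, 0.9, 0.05], 0.1, size=(200, 4))
defaced = rng.normal([6, 2, 0.2, 0.8], 0.1, size=(200, 4))
X = np.vstack([benign, defaced])
y = np.array([0] * 200 + [1] * 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
acc = accuracy_score(yte, clf.predict(Xte))
```

With real crawled pages the features would be extracted from the HTML and compared against a known-good baseline; the RBF-kernel SVM itself is used exactly as above.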
Nursetyo, Arif, Ignatius Moses Setiadi, De Rosal, Rachmawanto, Eko Hari, Sari, Christy Atika.  2019.  Website and Network Security Techniques against Brute Force Attacks using Honeypot. 2019 Fourth International Conference on Informatics and Computing (ICIC). :1—6.
The development of the internet and the web makes human activities more practical, comfortable, and inexpensive, so the use of the internet and websites keeps increasing in various ways. Public networks make the security of websites vulnerable to attack. This research proposes a honeypot for server security against attackers who want to steal data by carrying out a brute force attack. In this research, the honeypot is integrated on the server to protect it by creating a shadow server. This shadow server is responsible for tricking the attacker so that he cannot enter the original server. Brute force attacks were tested using the Medusa tool. With the application of the honeypot on the server, it is proven that the server can be secured from the attacker. The activities carried out by the attacker in the shadow server are also stored in the Kippo activity log.
Zheng, Wenbo, Yan, Lan, Gou, Chao, Wang, Fei-Yue.  2020.  Webly Supervised Knowledge Embedding Model for Visual Reasoning. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :12442–12451.
Visual reasoning between a visual image and a natural language description is a long-standing challenge in computer vision. While recent approaches offer great promise through compositionality or relational computing, most of them are hampered by the challenge of training with datasets containing only a limited number of images with ground-truth texts. Besides, it is extremely time-consuming and difficult to build a larger dataset by annotating millions of images with text descriptions, which may very likely lead to a biased model. Inspired by the broad success of webly supervised learning, we utilize readily available web images with their noisy annotations to learn a robust representation. Our key idea is to rely on web images and corresponding tags, along with fully annotated datasets, in learning with knowledge embedding. We present a two-stage approach for the task that can augment knowledge through an effective embedding model with weakly supervised web data. This approach learns not only knowledge-based embeddings derived from key-value memory networks to make joint and full use of textual and visual information but also exploits the knowledge to improve performance with knowledge-based representation learning applicable to other general reasoning tasks. Experimental results on two benchmarks show that the proposed approach significantly improves performance compared with the state-of-the-art methods and guarantees the robustness of our model on visual reasoning tasks and other reasoning tasks.
Jia, Yaoqi, Chua, Zheng Leong, Hu, Hong, Chen, Shuo, Saxena, Prateek, Liang, Zhenkai.  2016.  "The Web/Local" Boundary Is Fuzzy: A Security Study of Chrome's Process-based Sandboxing. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :791–804.

Process-based isolation, suggested by several research prototypes, is a cornerstone of modern browser security architectures. Google Chrome is the first commercial browser that adopts this architecture. Unlike several research prototypes, Chrome's process-based design does not isolate different web origins, but primarily promises to protect "the local system" from "the web". However, as billions of users now use web-based cloud services (e.g., Dropbox and Google Drive), which are integrated into the local system, the premise that browsers can effectively isolate the web from the local system has become questionable. In this paper, we argue that, if the process-based isolation disregards the same-origin policy as one of its goals, then its promise of maintaining the "web/local system (local)" separation is doubtful. Specifically, we show that existing memory vulnerabilities in Chrome's renderer can be used as a stepping-stone to drop executables/scripts in the local file system, install unwanted applications and misuse system sensors. These attacks are purely data-oriented and do not alter any control flow or import foreign code. Thus, such attacks bypass binary-level protection mechanisms, including ASLR and in-memory partitioning. Finally, we discuss various full defenses and present a possible way to mitigate the attacks presented.

Li, Zhijun, He, Tian.  2017.  WEBee: Physical-Layer Cross-Technology Communication via Emulation. Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking. :2–14.
Recent advances in Cross-Technology Communication (CTC) have improved efficient coexistence and cooperation among heterogeneous wireless devices (e.g., WiFi, ZigBee, and Bluetooth) operating in the same ISM band. However, until now the effectiveness of existing CTCs, which rely on packet-level modulation, is limited due to their low throughput (e.g., tens of bps). Our work, named WEBee, opens a promising direction for high-throughput CTC via physical-level emulation. WEBee uses a high-speed wireless radio (e.g., WiFi OFDM) to emulate the desired signals of a low-speed radio (e.g., ZigBee). Our unique emulation technique manipulates only the payload of WiFi packets, requiring neither hardware nor firmware changes in commodity technologies – a feature allowing zero-cost fast deployment on existing WiFi infrastructure. We designed and implemented WEBee with commodity devices (Atheros AR2425 WiFi card and MicaZ CC2420) and the USRP-N210 platform (for PHY layer evaluation). Our comprehensive evaluation reveals that WEBee can achieve a more than 99% reliable parallel CTC between WiFi and ZigBee with 126 Kbps in noisy environments, a throughput about 16,000x faster than current state-of-the-art CTCs.
Neema, Himanshu, Vardhan, Harsh, Barreto, Carlos, Koutsoukos, Xenofon.  2019.  Web-Based Platform for Evaluation of Resilient and Transactive Smart-Grids. 2019 7th Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES). :1–6.
Today's smart-grids have seen a clear rise in new ways of energy generation, transmission, and storage. This has not only introduced a huge degree of variability, but also a continual shift away from traditionally centralized generation and storage to distributed energy resources (DERs). In addition, the distributed sensors, energy generators and storage devices, and networking have led to a huge increase in attack vectors that make the grid vulnerable to a variety of attacks. The interconnection between computational and physical components through a largely open, IP-based communication network enables an attacker to cause physical damage through remote cyber-attacks or attack on software-controlled grid operations via physical- or cyber-attacks. Transactive Energy (TE) is an emerging approach for managing increasing DERs in the smart-grids through economic and control techniques. Transactive Smart-Grids use the TE approach to improve grid reliability and efficiency. However, skepticism remains in their full-scale viability for ensuring grid reliability. In addition, different TE approaches, in specific situations, can lead to very different outcomes in grid operations. In this paper, we present a comprehensive web-based platform for evaluating resilience of smart-grids against a variety of cyber- and physical-attacks and evaluating impact of various TE approaches on grid performance. We also provide several case-studies demonstrating evaluation of TE approaches as well as grid resilience against cyber and physical attacks.
Acar, Gunes, Huang, Danny Yuxing, Li, Frank, Narayanan, Arvind, Feamster, Nick.  2018.  Web-Based Attacks to Discover and Control Local IoT Devices. Proceedings of the 2018 Workshop on IoT Security and Privacy. :29-35.
In this paper, we present two web-based attacks against local IoT devices that any malicious web page or third-party script can perform, even when the devices are behind NATs. In our attack scenario, a victim visits the attacker's website, which contains a malicious script that communicates with IoT devices on the local network that have open HTTP servers. We show how the malicious script can circumvent the same-origin policy by exploiting error messages on the HTML5 MediaError interface or by carrying out DNS rebinding attacks. We demonstrate that the attacker can gather sensitive information from the devices (e.g., unique device identifiers and precise geolocation), track and profile the owners to serve ads, or control the devices by playing arbitrary videos and rebooting. We propose potential countermeasures to our attacks that users, browsers, DNS providers, and IoT vendors can implement.
Ray, K., Banerjee, A., Mohalik, S. K..  2019.  Web Service Selection with Correlations: A Feature-Based Abstraction Refinement Approach. 2019 IEEE 12th Conference on Service-Oriented Computing and Applications (SOCA). :33–40.
In this paper, we address the web service selection problem for linear workflows. Given a linear workflow specifying a set of ordered tasks and a set of candidate services providing different features for each task, the selection problem deals with the objective of selecting the most eligible service for each task, given the ordering specified. A number of approaches to solving the selection problem have been proposed in the literature. With web services growing at an incredible pace, service selection at the Internet scale has resurfaced as a problem of recent research interest. In this work, we present our approach to the selection problem using an abstraction refinement technique to address the scalability limitations of contemporary approaches. Experiments on web service benchmarks show that our approach can add substantial performance benefits in terms of space when compared to an approach without our optimization.
Wu, K., Gao, X., Liu, Y..  2018.  Web server security evaluation method based on multi-source data. 2018 International Conference on Cloud Computing, Big Data and Blockchain (ICCBB). :1–6.
Traditional web security assessments are evaluated using a single data source, and the results calculated from different data sources differ. Based on multi-source data, this paper uses the Analytic Hierarchy Process (AHP) to construct an evaluation model, calculates the weight of each level of indicators in the web security evaluation model, analyzes and processes the data, computes the host security threat assessment values at each level, and visualizes the evaluation results through ECharts+WebGL technology.
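
The AHP weight-calculation step can be sketched as follows (the three criteria and the pairwise-comparison judgments are hypothetical; the paper's actual indicator hierarchy is not reproduced). Criteria weights are the normalized principal eigenvector of the pairwise-comparison matrix:

```python
import numpy as np

# Pairwise-comparison matrix on Saaty's 1-9 scale for three hypothetical
# web-server criteria (e.g. patch level vs. open ports vs. TLS config).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

def ahp_weights(A):
    # Weights = normalized principal eigenvector of A.
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

w = ahp_weights(A)
```

In a full AHP model the same computation is repeated at every level of the hierarchy, and a consistency ratio is checked before the weights are accepted.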
Nasseralfoghara, M., Hamidi, H..  2019.  Web Covert Timing Channels Detection Based on Entropy. 2019 5th International Conference on Web Research (ICWR). :12-15.

Today, analyzing web weaknesses and vulnerabilities in order to find security attacks has become more urgent. Whenever there is communication contrary to the system security policies, a covert channel has been created. The attacker can easily disclose information from the victim's system with just one public access permission. Covert timing channels, unlike covert storage channels, do not use memory storage and so draw less attention. Different methods have been proposed for their identification, which generally rely on the shape of traffic and the channel's regularity. In this article, an entropy-based detection method is designed and implemented. The attacker can adjust the amount of channel entropy through measures such as changing the channel's level or creating noise on the channel to evade the analyst's detection. As a result, the entropy threshold for detection is not always constant. By comparing the entropy at different levels between the channel and the analyst, we conclude that the analyst must investigate traffic at all possible levels.
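
A minimal sketch of the entropy test (the delay values and bin count are invented for illustration, not taken from the paper): a binary covert timing channel concentrates inter-packet delays on a few levels, so the Shannon entropy of the binned delays is markedly lower than for legitimate traffic.

```python
import math
from collections import Counter

def binned_entropy(delays, n_bins=5):
    # Shannon entropy (bits) of inter-packet delays after equal-width binning.
    lo, hi = min(delays), max(delays)
    width = (hi - lo) / n_bins or 1.0
    bins = Counter(min(int((d - lo) / width), n_bins - 1) for d in delays)
    n = len(delays)
    return -sum(c / n * math.log2(c / n) for c in bins.values())

# A two-level covert channel (short delay = 0, long delay = 1) vs. traffic
# whose delays spread over many values.
covert = [0.01, 0.01, 0.10, 0.01, 0.10, 0.10, 0.01, 0.10] * 8
legit  = [0.013, 0.042, 0.075, 0.021, 0.088, 0.056, 0.034, 0.097] * 8
```

An attacker can raise the covert channel's entropy by adding noise delays, which is exactly why a single fixed entropy threshold is insufficient and multiple channel levels must be examined.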

Hasslinger, G., Kunbaz, M., Hasslinger, F., Bauschert, T..  2017.  Web Caching Evaluation from Wikipedia Request Statistics. 2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). :1–6.

Wikipedia is one of the most popular information platforms on the Internet. The user access pattern to Wikipedia pages depends on their relevance in the current worldwide social discourse. We use publicly available statistics about the top-1000 most popular pages on each day to estimate the efficiency of caches in supporting the platform. While the data volumes are moderate, the main goal of Wikipedia caches is to reduce access times for page views and edits. We study the impact of the most popular pages on the achievable cache hit rate in comparison to Zipf request distributions, and we include daily dynamics in popularity.
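
For intuition on why caching the most popular pages pays off, the sketch below estimates the hit rate of a cache holding the top-k pages under an independent-request Zipf popularity model (the catalogue size and Zipf exponent are assumptions, not Wikipedia's measured values):

```python
import numpy as np

def zipf_cache_hit_rate(n_pages, cache_size, alpha=1.0):
    # Hit rate of a cache pinned to the cache_size most popular pages
    # when request probabilities follow a Zipf(alpha) law over n_pages.
    ranks = np.arange(1, n_pages + 1, dtype=float)
    p = ranks ** -alpha
    p /= p.sum()
    return p[:cache_size].sum()

# e.g. a top-1000 cache over a hypothetical one-million-page catalogue
hit_top1000 = zipf_cache_hit_rate(n_pages=1_000_000, cache_size=1000)
```

Under these assumptions a top-1000 cache already captures roughly half of all requests, illustrating how heavily skewed popularity drives cache efficiency.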

Obaidat, M., Brown, J., Hayajneh, A. A..  2020.  Web Browser Extension User-Script XSS Vulnerabilities. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :316—321.

Browser extensions have by and large become a normal and accepted omnipresent feature within modern browsers. However, since their inception, browser extensions have remained under scrutiny for opening vulnerabilities for users. While a large amount of effort has been dedicated to patching such issues as they arise, including the implementation of extension sandboxes and explicit permissions, issues remain within the browser extension ecosystem through user-scripts. User-scripts, or micro-script extensions hosted by a top-level extension, are largely unregulated but inherit the permissions of the top-level application manager, which popularly includes extensions such as Greasemonkey, Tampermonkey, or xStyle. While most user-scripts are docile and serve a specific beneficial functionality, due to their inherently open nature and the unregulated ecosystem, they are easy for malicious parties to exploit. Common attacks through this method involve hijacking of DOM elements to execute malicious javascript and/or XSS attacks, although other more advanced attacks can be deployed as well. User-scripts have not received much attention, and this vulnerability has persisted despite attempts to make browser extensions more secure. This ongoing vulnerability remains an unknown threat to many users who employ user-scripts, and circumvents security mechanisms otherwise put in place by browsers. This paper discusses this extension derivative vulnerability as it pertains to current browser security paradigms.

SHAR, L., Briand, L., Tan, H..  2014.  Web Application Vulnerability Prediction using Hybrid Program Analysis and Machine Learning. Dependable and Secure Computing, IEEE Transactions on. PP:1-1.

Due to limited time and resources, web software engineers need support in identifying vulnerable code. A practical approach to predicting vulnerable code would enable them to prioritize security auditing efforts. In this paper, we propose using a set of hybrid (static+dynamic) code attributes that characterize input validation and input sanitization code patterns and are expected to be significant indicators of web application vulnerabilities. Because static and dynamic program analyses complement each other, both techniques are used to extract the proposed attributes in an accurate and scalable way. Current vulnerability prediction techniques rely on the availability of data labeled with vulnerability information for training. For many real world applications, past vulnerability data is often not available or at least not complete. Hence, to address both situations where labeled past data is fully available or not, we apply both supervised and semi-supervised learning when building vulnerability predictors based on hybrid code attributes. Given that semi-supervised learning is entirely unexplored in this domain, we describe how to use this learning scheme effectively for vulnerability prediction. We performed empirical case studies on seven open source projects where we built and evaluated supervised and semi-supervised models. When cross validated with fully available labeled data, the supervised models achieve an average of 77 percent recall and 5 percent probability of false alarm for predicting SQL injection, cross site scripting, remote code execution and file inclusion vulnerabilities. With a low amount of labeled data, when compared to the supervised model, the semi-supervised model showed an average improvement of 24 percent higher recall and 3 percent lower probability of false alarm, thus suggesting semi-supervised learning may be a preferable solution for many real world applications where vulnerability data is missing.
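
The semi-supervised setting can be sketched with scikit-learn's self-training wrapper as a stand-in (the paper's specific attributes and learner are not reproduced; the two synthetic "hybrid attribute" clusters and the 10% labeling rate are invented): most samples carry the unlabeled marker -1, mimicking projects where past vulnerability data is incomplete.

```python
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical hybrid code attributes per sink, e.g. counts of
# sanitization patterns vs. un-sanitized input-propagation patterns.
rng = np.random.default_rng(2)
safe = rng.normal([5.0, 0.5], 0.8, size=(150, 2))
vuln = rng.normal([0.5, 5.0], 0.8, size=(150, 2))
X = np.vstack([safe, vuln])
y = np.array([0] * 150 + [1] * 150)

# Simulate scarce vulnerability labels: keep ~10%, mark the rest -1.
y_partial = y.copy()
y_partial[rng.random(300) > 0.1] = -1

clf = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
acc = (clf.predict(X) == y).mean()
```

Self-training iteratively pseudo-labels the high-confidence unlabeled samples, which is one concrete way to realize the semi-supervised scheme the abstract describes.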

Gadient, P., Ghafari, M., Tarnutzer, M., Nierstrasz, O..  2020.  Web APIs in Android through the Lens of Security. 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER). :13—22.

Web communication has become an indispensable characteristic of mobile apps. However, it is not clear what data the apps transmit, to whom, and what consequences such transmissions have. We analyzed the web communications found in mobile apps from the perspective of security. We first manually studied 160 Android apps to identify the commonly-used communication libraries, and to understand how they are used in these apps. We then developed a tool to statically identify web API URLs used in the apps, and restore the JSON data schemas including the type and value of each parameter. We extracted 9,714 distinct web API URLs that were used in 3,376 apps. We found that developers often use the package for network communication, however, third-party libraries like OkHttp are also used in many apps. We discovered that insecure HTTP connections are seven times more prevalent in closed-source than in open-source apps, and that embedded SQL and JavaScript code is used in web communication in more than 500 different apps. This finding is devastating; it leaves billions of users and API service providers vulnerable to attack.

Tran, Manh Cong, Nakamura, Yasuhiro.  2016.  Web Access Behaviour Model for Filtering Out HTTP Automated Software Accessed Domain. Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. :67:1–67:4.

Over recent decades, due to the fast growth of the World Wide Web, HTTP automated software/applications (auto-ware) have bloomed for multiple purposes. Unfortunately, besides normal applications such as virus-definition or operating-system updating, auto-ware can also act as abnormal processes such as botnets, worms, viruses, spyware, and advertising software (adware). Auto-ware therefore consumes network bandwidth and might become an internal security threat, and an auto-ware accessed domain/server might itself be malicious. Understanding the behaviour of HTTP auto-ware is beneficial for anomaly/malicious detection, network management, traffic engineering, and security. In this paper, HTTP auto-ware communication behaviour is analysed and modeled, from which a method for filtering out its domains/servers is proposed. The filtered results can be used as a good resource for other security purposes such as malicious domain/URL detection and filtering or investigation of HTTP malware from internal threats.

Chen, D., Irwin, D..  2017.  Weatherman: Exposing Weather-Based Privacy Threats in Big Energy Data. 2017 IEEE International Conference on Big Data (Big Data). :1079–1086.

Smart energy meters record electricity consumption and generation at fine-grained intervals, and are among the most widely deployed sensors in the world. Energy data embeds detailed information about a building's energy-efficiency, as well as the behavior of its occupants, which academia and industry are actively working to extract. In many cases, either inadvertently or by design, these third-parties only have access to anonymous energy data without an associated location. The location of energy data is highly useful and highly sensitive information: it can provide important contextual information to improve big data analytics or interpret their results, but it can also enable third-parties to link private behavior derived from energy data with a particular location. In this paper, we present Weatherman, which leverages a suite of analytics techniques to localize the source of anonymous energy data. Our key insight is that energy consumption data, as well as wind and solar generation data, largely correlates with weather, e.g., temperature, wind speed, and cloud cover, and that every location on Earth has a distinct weather signature that uniquely identifies it. Weatherman represents a serious privacy threat, but also a potentially useful tool for researchers working with anonymous smart meter data. We evaluate Weatherman's potential in both areas by localizing data from over one hundred smart meters using a weather database that includes data from over 35,000 locations. Our results show that Weatherman localizes coarse (one-hour resolution) energy consumption, wind, and solar data to within 16.68km, 9.84km, and 5.12km, respectively, on average, which is more accurate using much coarser resolution data than prior work on localizing only anonymous solar data using solar signatures.
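
The core correlation idea can be sketched in a few lines (everything below is synthetic: two invented candidate locations with sinusoidal temperature series, and a consumption trace that tracks one of them, standing in for Weatherman's real weather database): rank candidate locations by the Pearson correlation between the anonymous trace and each location's weather signature.

```python
import numpy as np

def localize(trace, candidates):
    # Score each candidate location by correlation between the anonymous
    # hourly trace and that location's temperature series.
    scores = {loc: float(np.corrcoef(trace, temps)[0, 1])
              for loc, temps in candidates.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(3)
hours = np.arange(24 * 7)
temp_a = 10 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, len(hours))
temp_b = 25 + 3 * np.sin(2 * np.pi * hours / 24 + 2.0) + rng.normal(0, 1, len(hours))

# Cooling load roughly tracks the local temperature (plus noise).
trace = 0.2 * temp_a + rng.normal(0, 0.3, len(hours))
best, scores = localize(trace, {"A": temp_a, "B": temp_b})
```

The real system combines several such analytics over temperature, wind speed, and cloud cover against tens of thousands of locations, but the ranking principle is the same.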

Eamsa-ard, T., Seesaard, T., Kerdcharoen, T..  2018.  Wearable Sensor of Humanoid Robot-Based Textile Chemical Sensors for Odor Detection and Tracking. 2018 International Conference on Engineering, Applied Sciences, and Technology (ICEAST). :1—4.

This paper reveals the development and implementation of wearable sensors, based on the transient responses of textile chemical sensors, for an odorant detection system in a wearable-sensor humanoid robot. The textile chemical sensors consist of nine polymer/CNTs nano-composite gas sensors, deployed in three different prototypes of the wearable humanoid robot: (i) human axillary odor monitoring, (ii) human foot odor tracking, and (iii) wearable personal gas-leakage detection. These prototypes can be integrated into high-performance wearable wellness platforms such as smart clothes, smart shoes, and a wearable pocket toxic-gas detector. The operating mode uses ZigBee wireless communication technology for data acquisition and monitoring. The wearable humanoid robot offers several platforms that can be applied to investigate the role of individual scents produced by different parts of the human body, such as axillary odor and foot odor, which have potential health implications when the body odor is abnormal or offensive. Moreover, the wearable personal safety and security component in the robot is also effective for detecting NH3 leakage in the environment. Preliminary results with the nine textile chemical sensors for odor biomarkers and NH3 detection demonstrate the feasibility of using the wearable humanoid robot to distinguish the unpleasant odors released during physical activity. It also showed excellent performance in detecting a hazardous gas like ammonia (NH3), with sensitivity as low as 5 ppm.

Chalkley, Joe D., Ranji, Thomas T., Westling, Carina E. I., Chockalingam, Nachiappan, Witchel, Harry J..  2017.  Wearable Sensor Metric for Fidgeting: Screen Engagement Rather Than Interest Causes NIMI of Wrists and Ankles. Proceedings of the European Conference on Cognitive Ergonomics 2017. :158–161.

Measuring fidgeting is an important goal for the psychology of mind-wandering and for human computer interaction (HCI). Previous work measuring the movement of the head, torso and thigh during HCI has shown that engaging screen content leads to non-instrumental movement inhibition (NIMI). Camera-based methods for measuring wrist movements are limited by occlusions. Here we used a high pass filtered magnitude of wearable tri-axial accelerometer recordings during 2-minute passive HCI stimuli as a surrogate for movement of the wrists and ankles. With 24 seated, healthy volunteers experiencing HCI, this metric showed that wrists moved significantly more than ankles. We found that NIMI could be detected in the wrists and ankles; it distinguished extremes of interest and boredom via restlessness. We conclude that both free-willed and forced screen engagement can elicit NIMI of the wrists and ankles.
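
The "high pass filtered magnitude" metric can be sketched as follows (the sampling rate, window length, and both synthetic recordings are assumptions for illustration): take the magnitude of the tri-axial accelerometer signal, remove the slowly varying gravity/posture component by subtracting a sliding mean, and average the absolute residual as a restlessness score.

```python
import numpy as np

def fidget_metric(ax, ay, az, fs=50, win_s=1.0):
    # Magnitude of the tri-axial signal, high-pass filtered by subtracting
    # a sliding mean (removes gravity/posture, keeps fidgeting motion).
    mag = np.sqrt(ax ** 2 + ay ** 2 + az ** 2)
    w = int(fs * win_s)
    padded = np.pad(mag, (w // 2, w - 1 - w // 2), mode="edge")
    baseline = np.convolve(padded, np.ones(w) / w, mode="valid")
    return float(np.mean(np.abs(mag - baseline)))

# Two synthetic 10 s recordings at 50 Hz: sitting still vs. wrist fidgeting.
t = np.arange(0, 10, 1 / 50)
rng = np.random.default_rng(4)
still = (np.zeros_like(t), np.zeros_like(t),
         9.81 + 0.01 * rng.standard_normal(len(t)))
fidgety = (3.0 * np.sin(2 * np.pi * 3 * t), np.zeros_like(t),
           9.81 * np.ones_like(t))
```

A drop in this metric during engaging screen content is exactly the NIMI effect the study measures at the wrists and ankles.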

Vhaduri, S., Poellabauer, C..  2017.  Wearable Device User Authentication Using Physiological and Behavioral Metrics. 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). :1–6.

Wearables, such as Fitbit, Apple Watch, and Microsoft Band, with their rich collection of sensors, facilitate the tracking of healthcare- and wellness-related metrics. However, the assessment of the physiological metrics collected by these devices could also be useful in identifying the user of the wearable, e.g., to detect unauthorized use or to correctly associate the data to a user if wearables are shared among multiple users. Further, researchers and healthcare providers often rely on these smart wearables to monitor research subjects and patients in their natural environments over extended periods of time. Here, it is important to associate the sensed data with the corresponding user and to detect if a device is being used by an unauthorized individual, to ensure study compliance. Existing one-time authentication approaches using credentials (e.g., passwords, certificates) or trait-based biometrics (e.g., face, fingerprints, iris, voice) might fail, since such credentials can easily be shared among users. In this paper, we present a continuous and reliable wearable-user authentication mechanism using coarse-grain minute-level physical activity (step counts) and physiological data (heart rate, calorie burn, and metabolic equivalent of task). From our analysis of 421 Fitbit users from a two-year long health study, we are able to statistically distinguish nearly 100% of the subject-pairs and to identify subjects with an average accuracy of 92.97%.

Jain, Harsh, Vikram, Aditya, Mohana, Kashyap, Ankit, Jain, Ayush.  2020.  Weapon Detection using Artificial Intelligence and Deep Learning for Security Applications. 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). :193—198.
Security is always a main concern in every domain, due to rising crime rates in crowded events and suspicious isolated areas. Anomaly detection and monitoring are major applications of computer vision for tackling various problems. Due to the growing demand for the protection of safety, security, and personal property, video surveillance systems that can recognize and interpret scenes and anomalous events play a vital role in intelligent monitoring. This paper implements automatic gun (or weapon) detection using convolutional neural network (CNN) based SSD and Faster RCNN algorithms. The proposed implementation uses two types of datasets: one dataset with pre-labelled images, and another set of images that were labelled manually. Results are tabulated; both algorithms achieve good accuracy, but their application in real situations depends on the trade-off between speed and accuracy.
Shen, Shiyu, Gao, Jianlin, Wu, Aitian.  2018.  Weakness Identification and Flow Analysis Based on Tor Network. Proceedings of the 8th International Conference on Communication and Network Security. :90–94.

As internet technology develops rapidly, attacks against Tor networks become more and more frequent, so it is increasingly difficult for the Tor network to meet people's demand to protect their private information. A method to improve the anonymity of Tor is urgently needed. In this paper, we discuss the principle of Tor, the largest anonymous communication system in the world, analyze the reasons for its limited efficiency, and discuss the vulnerability of link fingerprints and node selection. After that, a node recognition model based on SVM is established, which verifies that traffic characteristics expose node attributes, thus revealing the link and destroying anonymity. Based on the above, some measures are put forward to improve the Tor protocol and make it more anonymous.