Biblio

Filters: Keyword is human computer interaction
Gupta, Avinash, Cecil, J., Tapia, Oscar, Sweet-Darter, Mary.  2019.  Design of Cyber-Human Frameworks for Immersive Learning. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :1563–1568.

This paper focuses on the creation of information-centric Cyber-Human Learning Frameworks involving Virtual Reality based mediums. A generalized framework is proposed and adapted for two educational domains: one supporting the education and training of residents in orthopedic surgery, and the other focusing on science learning for children with autism. Users, experts and technology-based mediums play a key role in the design of such a Cyber-Human framework. Virtual Reality based immersive and haptic mediums were two of the technologies explored in implementing the framework for these learning domains. The proposed framework emphasizes Information-Centric Systems Engineering (ICSE) principles, which stress a user-centric approach along with formalizing the understanding of the target subjects or processes for which the learning environments are being created.

Almeida, L., Lopes, E., Yalçinkaya, B., Martins, R., Lopes, A., Menezes, P., Pires, G..  2019.  Towards natural interaction in immersive reality with a cyber-glove. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2653–2658.

Over the past few years, virtual and mixed reality systems have evolved significantly, yielding highly immersive experiences. Most of the metaphors used for interaction with the virtual environment do not provide the meaningful feedback that users are used to in the real world. This paper proposes a cyber-glove to improve the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. In particular, we propose a cyber-glove system that tracks wrist movements, hand orientation and finger movements. It provides a decoupled position of the wrist and hand, which can contribute to a better embodiment in interaction and manipulation tasks. Additionally, the detection of finger curvature aims to make the proprioceptive perception of grasping/releasing gestures more consistent with visual feedback. The cyber-glove system is being developed for VR applications related to real estate promotion, where users walk through the divisions of a house and interact with objects and furniture. This work aims to assess whether glove-based systems can contribute to a higher sense of immersion, embodiment and usability when compared to standard VR hand controller devices (typically button-based). Twenty-two participants tested the cyber-glove system against the HTC Vive controller in a 3D manipulation task, specifically the opening of a virtual door. Metric results showed that 83% of the users performed faster door pushes and described shorter hand paths when wearing the cyber-glove. Subjective results showed that all participants rated the cyber-glove-based interactions as equally or more natural, and 90% of users experienced an equal or significantly increased sense of embodiment.

Baruwal Chhetri, Mohan, Uzunov, Anton, Vo, Bao, Nepal, Surya, Kowalczyk, Ryszard.  2019.  Self-Improving Autonomic Systems for Antifragile Cyber Defence: Challenges and Opportunities. 2019 IEEE International Conference on Autonomic Computing (ICAC). :18–23.
Antifragile systems enhance their capabilities and become stronger when exposed to adverse conditions, stresses or attacks, making antifragility a desirable property for cyber defence systems that operate in contested military environments. Self-improvement in autonomic systems refers to the improvement of their self-* capabilities, so that they are able to (a) better handle previously known (anticipated) situations, and (b) deal with previously unknown (unanticipated) situations. In this position paper, we present a vision of using self-improvement through learning to achieve antifragility in autonomic cyber defence systems. We first enumerate some of the major challenges associated with realizing distributed self-improvement. We then propose a reference model for middleware frameworks for self-improving autonomic systems and a set of desirable features of such frameworks.
Kosmyna, Nataliya.  2019.  Brain-Computer Interfaces in the Wild: Lessons Learned from a Large-Scale Deployment. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :4161–4168.
We present data from detailed observations of a “controlled in-the-wild” study of a Brain-Computer Interface (BCI) system. During 10 days of demonstration at seven non-specialized public events, 1563 people learned about the system in various social configurations. Observations of audience behavior revealed recurring behavioral patterns, from which a framework of interaction with BCI systems was deduced. It describes the phases of passing by an installation, viewing and reacting, passive and active interaction, group interactions, and follow-up actions. We also conducted semi-structured interviews with the people who interacted with the system. The interviews revealed the barriers to, and several directions for, further research on BCIs. Our findings can be useful for designing BCIs for everyday adoption by a wide range of people.
Almousa, May, Anwar, Mohd.  2019.  Detecting Exploit Websites Using Browser-based Predictive Analytics. 2019 17th International Conference on Privacy, Security and Trust (PST). :1–3.
The popularity of Web-based computing has given rise to browser-based cyberattacks, which use websites that exploit various web browser vulnerabilities. To help regular users avoid exploit websites and engage in safe online activities, we propose a methodology for building a machine-learning-powered predictive analytical model that measures the risk of attacks and privacy breaches associated with visiting different websites and performing online activities in a web browser. The model learns risk levels from historical data and metadata scraped from web browsers.
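The abstract describes a model that learns risk levels from labelled browsing history. As a purely illustrative sketch (the feature names and the scoring rule below are assumptions, not the authors' model), such a predictor could be trained and queried like this:

```python
# Illustrative sketch, not the authors' model: learn a per-feature attack
# frequency from labelled history, then score a site visit by combining the
# risks of the features it exhibits.
from collections import defaultdict

def train_feature_risks(history):
    """history: list of (features, was_attack) pairs, where features is a set
    of browser-observable signals, e.g. {"no-https", "obfuscated-js"}."""
    seen = defaultdict(int)
    bad = defaultdict(int)
    for features, was_attack in history:
        for f in features:
            seen[f] += 1
            bad[f] += was_attack
    return {f: bad[f] / seen[f] for f in seen}

def risk_score(features, risks):
    # 1 minus the product of per-feature "safe" probabilities;
    # unseen features contribute no risk.
    p_safe = 1.0
    for f in features:
        p_safe *= 1.0 - risks.get(f, 0.0)
    return 1.0 - p_safe
```

A real deployment would use a proper classifier over many more signals; the point here is only the train-then-score shape of the pipeline.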
Carneiro, Lucas R., Delgado, Carla A.D.M., da Silva, João C.P..  2019.  Social Analysis of Game Agents: How Trust and Reputation can Improve Player Experience. 2019 8th Brazilian Conference on Intelligent Systems (BRACIS). :485–490.
Video games normally use Artificial Intelligence techniques to improve Non-Player Character (NPC) behavior, creating a more realistic experience for their players. However, rational behavior in general does not consider social interactions between players and bots. A new framework for NPCs was therefore proposed, which uses a social bias to mix the default strategy of finding the best possible plays to win with an analysis that decides whether other players should be categorized as allies or foes. Trust and reputation models were used together to implement this demeanor. In this paper we discuss an implementation of this framework inside the game Settlers of Catan; new NPC agents were created for this implementation. We also analyze the results obtained from simulations among agents and players to assess how the use of trust and reputation in NPCs can create a better gaming experience.
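As a hypothetical illustration of the ally/foe decision described above (the update rule, weights and threshold are invented for the example, not taken from the paper), an NPC might blend its own trust in a player with the group's reputation score:

```python
# Toy trust/reputation mechanics for an NPC; all constants are illustrative.
def update_trust(trust, cooperated, rate=0.2):
    """Move trust toward 1 after a cooperative play, toward 0 after a hostile one."""
    target = 1.0 if cooperated else 0.0
    return trust + rate * (target - trust)

def classify(trust, reputation, w_trust=0.7, ally_threshold=0.5):
    """Blend the NPC's own trust with the shared reputation score and
    categorize the other player as ally or foe."""
    score = w_trust * trust + (1 - w_trust) * reputation
    return "ally" if score >= ally_threshold else "foe"
```

The NPC would call `update_trust` after each observed play and `classify` before choosing between cooperative and adversarial strategies.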
Bardia, Vivek, Kumar, C.R.S..  2017.  Process trees & service chains can serve us to mitigate zero day attacks better. 2017 International Conference on Data Management, Analytics and Innovation (ICDMAI). :280–284.
With technology at our fingertips waiting to be exploited, the past decade saw revolutionary changes in Human Computer Interaction. The ease with which a user could interact became the Unique Selling Proposition (USP) of sales teams. Human Computer Interaction involves many underlying parameters, such as data visualization and presentation. With the race on for better and faster presentations, many frameworks evolved that are now widely used by software developers. As the need grew for user-friendly applications, more and more software professionals were drawn into the front-end sophistication domain. Application frameworks have evolved to such an extent that, with just a few clicks and by feeding in values as required, we can produce a commercially usable application in a few minutes. These frameworks generate large volumes of code in minutes, which leaves a contrail of bugs to be discovered in the future. We have also succumbed to benchmarking in software quality metrics and have made ourselves comfortable with buggy software to be rectified later. The exponential evolution of the cyber domain has attracted attackers equally. Average human awareness and knowledge of the cyber domain has also improved due to prolonged exposure to technology over three decades. As attack sophistication grows and zero day attacks become more popular than ever, end users receive only remedial measures despite the latest antivirus, intrusion detection and protection systems installed. We designed software to display the complete set of services and applications running in a user's operating system in the most easily perceivable manner, aided by computer graphics and data visualization techniques. We further designed a study that empowers fence-sitting users with tools to actively participate in protecting themselves from threats.
The designed threats drew on the complete threat canvas in some form or other, restricted to the system's own functioning. Network threats and any packet transfer to or from the system were kept out of the scope of this experiment. We discovered that end users have a good idea of their working environment, which can be used to greatly enhance machine learning for zero day threats and to segment the unmarked, vast threat landscape faster for a more reliable output.
Lian, Zheng, Li, Ya, Tao, Jianhua, Huang, Jian, Niu, Mingyue.  2018.  Region Based Robust Facial Expression Analysis. 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). :1–5.
Facial emotion recognition is an essential aspect of human-machine interaction. In real-world conditions it faces many challenges, i.e., illumination changes, large pose variations and partial or full occlusions, which leave different facial areas with different sharpness and completeness. Inspired by this fact, we focus in this paper on facial expression recognition based on partial faces. We compare the contribution of seven facial areas of low-resolution images: nose areas, mouth areas, eyes areas, nose-to-mouth areas, nose-to-eyes areas, mouth-to-eyes areas and whole-face areas. Through analysis of the confusion matrix and the class activation map, we find that mouth regions contain much more emotional information than nose areas and eyes areas. At the same time, considering larger facial areas helps to judge the expression more precisely. To sum up, the contributions of this paper are two-fold: (1) we identify the facial areas that matter most for emotion recognition; (2) we quantify the contribution of different facial parts.
Park, Chan Mi, Lee, Jung Yeon, Baek, Hyoung Woo, Lee, Hae-Sung, Lee, JeeHang, Kim, Jinwoo.  2019.  Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :646–647.
Direct human supervision of a robot's erroneous behavior is crucial to enhancing robot intelligence for 'flawless' human-robot interaction. Motivating humans to engage more actively for this purpose is, however, difficult. To alleviate such strain, this research proposes a novel approach: a growth-and-regression metaphoric interaction design inspired by the communicative, intellectual and social competence aspects of human developmental stages. We implemented the interaction design principle in a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages online labeling activity in response to the faulty behavior of robots as a supervision process. A field study will be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect to provide a more effective and practical means of supervising robots through a real-time data labeling process for long-term usage in human-robot interaction.
Barrere, M., Hankin, C., Barboni, A., Zizzo, G., Boem, F., Maffeis, S., Parisini, T..  2018.  CPS-MT: A Real-Time Cyber-Physical System Monitoring Tool for Security Research. 2018 IEEE 24th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA). :240–241.

Monitoring systems are essential to understand and control the behaviour of systems and networks. Cyber-physical systems (CPS) are particularly delicate from that perspective, since they involve real-time constraints and physical phenomena that are not usually considered in common IT solutions. Therefore, there is a need for publicly available monitoring tools that take these aspects into account. In this poster/demo, we present our initiative, called CPS-MT, towards a versatile, real-time CPS monitoring tool, with a particular focus on security research. We first present its architecture and main components, followed by a MiniCPS-based case study. We also describe a performance analysis and preliminary results. During the demo, we will discuss CPS-MT's capabilities and limitations for security applications.

Yu, Z., Du, H., Xiao, D., Wang, Z., Han, Q., Guo, B..  2018.  Recognition of Human Computer Operations Based on Keystroke Sensing by Smartphone Microphone. IEEE Internet of Things Journal. 5:1156–1168.

Human computer operations such as writing documents and playing games have become popular in our daily lives. These activities (especially if identified in a non-intrusive manner) can be used to facilitate context-aware services. In this paper, we propose to recognize human computer operations through keystroke sensing with a smartphone. Specifically, we first utilize the microphone embedded in a smartphone to sense the input audio from a computer keyboard. We then identify keystrokes using fingerprint identification techniques. The determined keystrokes are then corrected with a word recognition procedure, which utilizes the relations of adjacent letters in a word. Finally, by fusing both semantic and acoustic features, a classification model is constructed to recognize four typical human computer operations: 1) chatting; 2) coding; 3) writing documents; and 4) playing games. We recruited 15 volunteers to complete these operations, and evaluated the proposed approach from multiple aspects in realistic environments. Experimental results validated the effectiveness of our approach.

Shrestha, P., Shrestha, B., Saxena, N..  2018.  Home Alone: The Insider Threat of Unattended Wearables and A Defense using Audio Proximity. 2018 IEEE Conference on Communications and Network Security (CNS). :1–9.

In this paper, we highlight and study the threat arising from unattended wearable devices pre-paired with a smartphone over a wireless communication medium. Most users may not lock their wearables due to their small form factor, and may often take these devices off, leaving or forgetting them unattended while away from home (or shared office spaces). An “insider” attacker (potentially a disgruntled friend, roommate, colleague, or even a spouse) can therefore get hold of the wearable, take it near the user's phone (i.e., within radio communication range) at another location (e.g., the user's office), and surreptitiously use it across physical barriers for various nefarious purposes, including pulling and learning sensitive information from the phone (such as messages, photos or emails), and pushing sensitive commands to the phone (such as making phone calls, sending text messages and taking pictures). The attacker can then safely restore the wearable, wait for it to be left unattended again, and may repeat the process for maximum impact, while the victim remains completely oblivious to the ongoing attack activity. This malicious behavior is in sharp contrast to the threat of stolen wearables, where the victim would unpair the wearable as soon as the theft is detected. Considering the severity of this threat, we also respond by building a defense based on audio proximity, which limits the wearable to interfacing with the phone only when it can pick up on an active audio challenge produced by the phone.

Zhu, J., Liapis, A., Risi, S., Bidarra, R., Youngblood, G. M..  2018.  Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation. 2018 IEEE Conference on Computational Intelligence and Games (CIG). :1–8.
Growing interest in eXplainable Artificial Intelligence (XAI) aims to make AI and machine learning more understandable to human users. However, most existing work focuses on new algorithms, and not on usability, practical interpretability and efficacy on real users. In this vision paper, we propose a new research area of eXplainable AI for Designers (XAID), specifically for game designers. By focusing on a specific user group, their needs and tasks, we propose a human-centered approach for facilitating game designers to co-create with AI/ML techniques through XAID. We illustrate our initial XAID framework through three use cases, which require an understanding both of the innate properties of the AI techniques and users' needs, and we identify key open challenges.
Shao, Y., Liu, B., Li, G., Yan, R..  2017.  A Fault Diagnosis Expert System for Flight Control Software Based on SFMEA and SFTA. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :626–627.
Accidents occur frequently in aerospace applications, and traditional software reliability analysis methods are no longer sufficient for modern flight control software. Developing a comprehensive, effective and intelligent method for software fault diagnosis is urgent in airborne software engineering. Against this background, we constructed a fault diagnosis expert system for flight control software that combines software failure mode and effect analysis (SFMEA) with software fault tree analysis (SFTA). To simplify the analysis, the software fault knowledge of four modules is acquired through reliability analysis methods. Then, by taking full advantage of the CLIPS shell, knowledge representation and an inference engine can be realized smoothly. Finally, we integrated CLIPS into VC++ to achieve visualization, so that fault diagnosis and inference for flight control software can be performed through a human-computer interaction interface. The results illustrate that the system is able to diagnose software faults, analyze their causes and present reasonable solutions like a human expert.
Turnley, J., Wachtel, A., Muñoz-Ramos, K., Hoffman, M., Gauthier, J., Speed, A., Kittinger, R..  2017.  Modeling human-technology interaction as a sociotechnical system of systems. 2017 12th System of Systems Engineering Conference (SoSE). :1–6.
As system of systems (SoS) models become increasingly complex and interconnected, a new approach is needed to capture the effects of humans within the SoS. Many real-life events have shown the detrimental outcomes of failing to account for humans in the loop. This research introduces a novel and cross-disciplinary methodology for modeling humans interacting with technologies to perform tasks within an SoS, specifically within a layered physical security system use case. Metrics and formulations developed for this new way of looking at SoS, termed sociotechnical SoS, allow for the quantification of the interplay of effectiveness and efficiency seen in detection theory, measuring the ability of a physical security system to detect and respond to threats. This methodology has been applied to a notional representation of a small military Forward Operating Base (FOB) as a proof of concept.
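Detection theory, which the abstract draws on to quantify detect-and-respond effectiveness, has a standard sensitivity index d′ computed from hit and false-alarm rates. The sketch below is the generic textbook computation, not the authors' specific formulation:

```python
# Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
# using the inverse normal CDF from the standard library.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Higher d' means the system separates threats from non-threats better;
    d' = 0 means detection is at chance."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

In a layered-security model, each layer (or the system as a whole) could be scored this way from simulated detection outcomes.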
Pandey, S. B., Rawat, M. D., Rathod, H. B., Chauhan, J. M..  2017.  Security throwbot. 2017 International Conference on Inventive Systems and Control (ICISC). :1–6.

We are all well aware of the Internet of Things (IoT), an emerging technology in today's world. New and advanced fields of technology and invention make use of IoT for better facilities. The Internet of Things is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. Our project is based on IoT and other supporting techniques that can produce the required output. Security issues are everywhere nowadays, and our project aims to deal with them. Our security throwbot (a throwable device) is tossed into a room after being activated and captures 360 degree panoramic video from a single IP camera; two-end connectivity, with one end at the robot and the other at the user, brings further features to the project. The robot is spherical in shape so that it can easily be retrieved. Being easy to use and cheap to buy is one of our goals, which will be helpful to police and soldiers who get stuck in situations where they must hesitate before entering a dangerous condition/room. Our project will help them check and verify any area before entering, simply by throwing the robot and obtaining the results.

Kwon, H., Harris, W., Esmaeilzadeh, H..  2017.  Proving Flow Security of Sequential Logic via Automatically-Synthesized Relational Invariants. 2017 IEEE 30th Computer Security Foundations Symposium (CSF). :420–435.

Due to the proliferation of reprogrammable hardware, core designs built from modules drawn from a variety of sources execute with direct access to critical system resources. Expressing guarantees that such modules satisfy, in particular the dynamic conditions under which they release information about their unbounded streams of inputs, and automatically proving that they satisfy such guarantees, is an open and critical problem. To address these challenges, we propose a domain-specific language, named STREAMS, for expressing information-flow policies with declassification over unbounded input streams. We also introduce a novel algorithm, named SIMAREL, that, given a core design C and STREAMS policy P, automatically proves or falsifies that C satisfies P. The key technical insight behind the design of SIMAREL is a novel algorithm for efficiently synthesizing relational invariants over pairs of circuit executions. We expressed the expected behavior of cores designed independently for research and production as STREAMS policies and used SIMAREL to check whether each core satisfies its policy. SIMAREL proved that half of the cores satisfied expected behavior, but found unexpected information leaks in six open-source designs: an Ethernet controller, a flash memory controller, an SD-card storage manager, a robotics controller, a digital-signal processing (DSP) module, and a debugging interface.

Kulyk, O., Reinheimer, B. M., Gerber, P., Volk, F., Volkamer, M., Mühlhäuser, M..  2017.  Advancing Trust Visualisations for Wider Applicability and User Acceptance. 2017 IEEE Trustcom/BigDataSE/ICESS. :562–569.
There are only a few visualisations targeting the communication of trust statements. Even though there are some advanced and scientifically founded visualisations, such as the opinion triangle, the human trust interface, and T-Viz, the stars interface known from e-commerce platforms is by far the most common one. In this paper, we propose two trust visualisations based on T-Viz, which was recently proposed and successfully evaluated in large user studies. Despite being the most promising proposal, its design is not primarily based on findings from human-computer interaction or cognitive psychology. Our visualisations aim to integrate such findings and thereby to improve decision making in terms of correctness and efficiency. A large user study reveals that our proposed visualisations outperform T-Viz on these factors.
Farinholt, B., Rezaeirad, M., Pearce, P., Dharmdasani, H., Yin, H., Blond, S. L., McCoy, D., Levchenko, K..  2017.  To Catch a Ratter: Monitoring the Behavior of Amateur DarkComet RAT Operators in the Wild. 2017 IEEE Symposium on Security and Privacy (SP). :770–787.

Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large-scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. The versatility of RATs makes them attractive to actors of all levels of sophistication: they've been used for espionage, information theft, voyeurism and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective. In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild, and in the course of two several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample's behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, this is the first large-scale systematic study of RAT use.

Çeker, H., Upadhyaya, S..  2015.  Enhanced recognition of keystroke dynamics using Gaussian mixture models. MILCOM 2015 - 2015 IEEE Military Communications Conference. :1305–1310.

Keystroke dynamics is a form of behavioral biometrics that can be used for continuous authentication of computer users. Many classifiers have been proposed for the analysis of acquired user patterns and verification of users at computer terminals. The underlying machine learning methods that use Gaussian density estimator for outlier detection typically assume that the digraph patterns in keystroke data are generated from a single Gaussian distribution. In this paper, we relax this assumption by allowing digraphs to fit more than one distribution via the Gaussian Mixture Model (GMM). We have conducted an experiment with a public data set collected in a controlled environment. Out of 30 users with dynamic text, we obtain 0.08% Equal Error Rate (EER) with 2 components by using GMM, while pure Gaussian yields 1.3% EER for the same data set (an improvement of EER by 93.8%). Our results show that GMM can recognize keystroke dynamics more precisely and authenticate users with higher confidence level.
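To make the single-Gaussian-versus-mixture contrast concrete, here is a minimal 1-D EM fit of a two-component mixture to made-up digraph latencies (the data, initialisation and component count are illustrative assumptions, not the paper's setup):

```python
# A compact EM fit of a two-component 1-D Gaussian mixture; means are
# initialised at the data extremes to keep the sketch deterministic.
import math

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_two_component_gmm(data, iters=100):
    n = len(data)
    mean = sum(data) / n
    var0 = sum((x - mean) ** 2 for x in data) / n
    mus = [min(data), max(data)]
    vars_ = [var0, var0]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each latency
        resp = []
        for x in data:
            p = [w * gauss_pdf(x, m, v) for w, m, v in zip(weights, mus, vars_)]
            s = sum(p)
            resp.append([q / s for q in p])
        # M-step: re-estimate weights, means and variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            vars_[j] = max(
                sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj,
                1e-9,
            )
            weights[j] = nj / n
    return weights, mus, vars_
```

A single Gaussian fit to bimodal latencies would smear both modes into one broad distribution; the mixture keeps them separate, which is the intuition behind the EER improvement the abstract reports.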

Vizer, L. M., Sears, A..  2015.  Classifying Text-Based Computer Interactions for Health Monitoring. IEEE Pervasive Computing. 14:64–71.

Detecting early trends indicating cognitive decline can allow older adults to better manage their health, but current assessments present barriers precluding the use of such continuous monitoring by consumers. To explore the effects of cognitive status on computer interaction patterns, the authors collected typed text samples from older adults with and without pre-mild cognitive impairment (PreMCI) and constructed statistical models from keystroke and linguistic features for differentiating between the two groups. Using both feature sets, they obtained a 77.1 percent correct classification rate with 70.6 percent sensitivity, 83.3 percent specificity, and a 0.808 area under curve (AUC). These results are in line with current assessments for MCI, a more advanced condition, but use an unobtrusive method. This research contributes a combination of features for text and keystroke analysis and enhances understanding of how clinicians or older adults themselves might monitor for PreMCI through patterns in typed text. It has implications for embedded systems that can enable healthcare providers and consumers to proactively and continuously monitor changes in cognitive function.
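The reported figures are standard confusion-matrix metrics; the helper below recomputes them from raw counts (the example counts are chosen to be consistent with the reported rates, not taken from the paper):

```python
# Sensitivity, specificity and accuracy from binary confusion counts.
def binary_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

For instance, 12 true positives with 5 misses and 15 true negatives with 3 false alarms reproduce the 70.6/83.3/77.1 percent pattern.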

Honauer, K., Maier-Hein, L., Kondermann, D..  2015.  The HCI Stereo Metrics: Geometry-Aware Performance Analysis of Stereo Algorithms. 2015 IEEE International Conference on Computer Vision (ICCV). :2120–2128.

Performance characterization of stereo methods is mandatory to decide which algorithm is useful for which application. Prevalent benchmarks mainly use the root mean squared error (RMS) with respect to ground truth disparity maps to quantify algorithm performance. We show that the RMS is of limited expressiveness for algorithm selection and introduce the HCI Stereo Metrics. These metrics assess stereo results by harnessing three semantic cues: depth discontinuities, planar surfaces, and fine geometric structures. For each cue, we extract the relevant set of pixels from existing ground truth. We then apply our evaluation functions to quantify characteristics such as edge fattening and surface smoothness. We demonstrate that our approach supports practitioners in selecting the most suitable algorithm for their application. Using the new Middlebury dataset, we show that rankings based on our metrics reveal specific algorithm strengths and weaknesses which are not quantified by existing metrics. We finally show how stacked bar charts and radar charts visually support multidimensional performance evaluation. An interactive stereo benchmark based on the proposed metrics and visualizations is available at:
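To illustrate why a cue-restricted error can diverge from the global RMS the abstract criticises, here is a toy disparity example (the data and mask are invented for the illustration, not the HCI metrics themselves):

```python
# Global RMS over disparity errors, and the same error restricted to a pixel
# mask (e.g. pixels at depth discontinuities extracted from ground truth).
import math

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def masked_rms(estimate, truth, mask):
    errs = [est - gt for est, gt, m in zip(estimate, truth, mask) if m]
    return rms(errs)
```

An algorithm can score a low global RMS while performing badly exactly where it matters, which is what cue-specific metrics expose.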

Pinsenschaum, Richard, Neff, Flaithri.  2016.  Evaluating Gesture Characteristics When Using a Bluetooth Handheld Music Controller. Proceedings of the Audio Mostly 2016. :209–214.

This paper describes a study that investigates tilt-gesture depth on a Bluetooth handheld music controller for activating and deactivating music loops. Making use of a Wii Remote's 3-axis ADXL330 accelerometer, a Max patch was programmed to receive, handle, and store incoming accelerometer data. Each loop corresponded to the front, back, left and right tilt-gesture direction, with each gesture motion triggering a loop 'On' or 'Off' depending on its playback status. The study comprised 40 undergraduate students interacting with the prototype controller for a duration of 5 minutes per person. Each participant performed three full cycles beginning with the front gesture direction and moving clockwise. This corresponded to a total of 24 trigger motions per participant. Raw data associated with tilt-gesture motion depth was scaled, analyzed and graphed. Results show significant differences between each gesture direction in terms of tilt-gesture depth, as well as issues with noise for left/right gesture motion due to dependency on Roll and Yaw values. Front and Left tilt-gesture depths displayed significantly higher threshold levels compared to the Back and Right axes. Front and Left tilt-gesture thresholds therefore allow the device to easily differentiate between intentional sample triggering and general device handling, while this is more difficult for the Back and Right directions. Future work will include finding an alternative method for evaluating intentional tilt-gesture triggering on the Back and Right axes, as well as utilizing two 2-axis accelerometers to garner clean data from the Left and Right axes.
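The direction-specific thresholds the study motivates could be applied as a simple per-direction gate; the threshold values below are made up for the sketch, not the study's calibrated values:

```python
# Illustrative per-direction trigger gate: a tilt toggles a loop only when
# its depth exceeds the threshold calibrated for that direction.
THRESHOLDS = {"front": 0.6, "left": 0.6, "back": 0.35, "right": 0.35}

def trigger(direction, depth, thresholds=THRESHOLDS):
    """Return True when a tilt in `direction` is deep enough to count as an
    intentional loop toggle rather than general device handling."""
    return depth >= thresholds[direction]
```

Directions whose typical intentional depth sits close to the handling noise floor (lower thresholds here) are exactly the ones where such a gate misfires, matching the study's observation.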

Ghatak, S., Lodh, A., Saha, E., Goyal, A., Das, A., Dutta, S..  2014.  Development of a keyboardless social networking website for visually impaired: SocialWeb. Global Humanitarian Technology Conference - South Asia Satellite (GHTC-SAS), 2014 IEEE. :232–236.

Over the past decade, we have witnessed a huge upsurge in social networking, which continues to touch and transform our lives to the present day. Social networks help us communicate with acquaintances and friends with whom we share similar interests on a common platform. Globally, there are more than 200 million visually impaired people. Visual impairment has many issues associated with it, but the one that stands out is the lack of accessibility to content for entertainment and for socializing safely. This paper deals with the development of a keyboardless social networking website for the visually impaired. The term 'keyboardless' signifies minimal use of the keyboard: the user explores the contents of the website using assistive technologies like screen readers and speech-to-text (STT) conversion, which provides a user-friendly experience for the target audience. As soon as a user with minimal computer proficiency opens the website, the screen reader identifies the username and password fields. The user speaks out his username and, with the help of STT conversion (using the Web Speech API), the username is entered. The control then moves to the password field and, similarly, the password is obtained and matched with the one saved in the website database. The concept of acoustic fingerprinting has been implemented to successfully validate the passwords of registered users and foil the intentions of malicious attackers. On a successful match of the passwords, the user is able to enjoy the services of the website without further hassle. Once the access obstacles associated with social networking sites are resolved and proper technologies are put in place, social networking sites can be a rewarding, fulfilling, and enjoyable experience for visually impaired people.

Saoud, Z., Faci, N., Maamar, Z., Benslimane, D..  2014.  A Fuzzy Clustering-Based Credibility Model for Trust Assessment in a Service-Oriented Architecture. WETICE Conference (WETICE), 2014 IEEE 23rd International. :56–61.

This paper presents a credibility model to assess the trust of Web services. The model relies on consumers' ratings, whose accuracy can be questioned due to different biases. A category of consumers known as strict consumers are usually excluded from the process of reaching a majority consensus. We demonstrate that this exclusion is unwarranted. The proposed model reduces the gap between these consumers' ratings and the current majority rating. Fuzzy clustering is used to compute consumers' credibility. To validate the model, a set of experiments was carried out.
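As a rough stand-in for the paper's fuzzy-clustering step (the distance-based credibility function below is an assumption made for illustration), credibility-weighted aggregation of ratings might look like this:

```python
# Illustrative credibility-weighted majority rating on a 0-5 scale:
# each rating counts in proportion to a credibility derived from its
# distance to the current majority estimate.
def credibility(rating, majority, scale=5.0):
    """Credibility in [0, 1]: 1 at the majority, falling off with distance."""
    return max(0.0, 1.0 - abs(rating - majority) / scale)

def weighted_majority(ratings, iters=10):
    """Iteratively re-estimate the majority rating with credibility weights."""
    majority = sum(ratings) / len(ratings)
    for _ in range(iters):
        weights = [credibility(r, majority) for r in ratings]
        majority = sum(w * r for w, r in zip(weights, ratings)) / sum(weights)
    return majority
```

Compared with a plain mean, the weighted estimate is pulled less by an isolated outlier rating, which is the gap-reduction effect the abstract describes, here achieved with a distance heuristic rather than fuzzy clustering.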