Biblio

Filters: Keyword is immersive systems
2021-02-03
Velaora, M., Roy, R. van, Guéna, F..  2020.  ARtect, an augmented reality educational prototype for architectural design. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :110—115.

ARtect is an Augmented Reality application developed with Unity 3D, which envisions an educational, interactive, and immersive tool for architects, designers, researchers, and artists. This digital instrument provides the ability to visualize custom-made 3D models and 2D graphics in interior and exterior environments. The user-friendly interface offers an accurate insight before the materialization of any architectural project, enabling evaluation of the design proposal. This practice could be integrated into the architectural design learning process, saving the resources spent on printed drawings and 3D cardboard models during several stages of spatial conception.

Clark, D. J., Turnbull, B..  2020.  Experiment Design for Complex Immersive Visualisation. 2020 Military Communications and Information Systems Conference (MilCIS). :1—5.

Experimentation focused on assessing the value of complex visualisation approaches, compared with alternative methods for data analysis, is challenging. The interaction between participant prior knowledge and experience, a diverse range of experimental or real-world data sets, and dynamic interaction with the display system presents challenges when seeking timely, affordable, and statistically relevant experimentation results. This paper outlines a hybrid approach proposed for experimentation with complex interactive data analysis tools, specifically for computer network traffic analysis. The approach involves a structured survey completed after free engagement with the software platform by expert participants. The survey captures objective and subjective data points relating to the experience, with the goal of assessing software performance in a way that is supported by statistically significant experimental results. This work is particularly applicable to the field of network analysis for cyber security, as well as to military cyber operations and intelligence data analysis.

Aliman, N.-M., Kester, L..  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130—137.

Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. In doing so, the focus is often set on ethical and safe design that forestalls unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice – including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is prudent to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily, from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.

Lee, J..  2020.  CanvasMirror: Secure Integration of Third-Party Libraries in a WebVR Environment. 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). :75—76.

Web technology has evolved to offer 360-degree immersive browsing experiences. This new technology, called WebVR, enables virtual reality by rendering a three-dimensional world on an HTML canvas. Unfortunately, there exists no browser-supported way of sharing this canvas between different parties. As a result, third-party library providers with ill intent (e.g., stealing sensitive information from end-users) can easily distort the entire WebVR site. To mitigate the new threats posed in WebVR, we propose CanvasMirror, which allows publishers to specify the behaviors of third-party libraries and enforce this specification. We show that CanvasMirror effectively separates the third-party context from the host origin by leveraging the privilege separation technique, and safely integrates VR content on a shared canvas.

Bahaei, S. Sheikh.  2020.  A Framework for Risk Assessment in Augmented Reality-Equipped Socio-Technical Systems. 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). :77—78.

New technologies such as augmented reality (AR) are used to enhance human capabilities and extend human functioning; nevertheless, they may also cause distraction and incorrect human functioning. Systems comprising social entities (such as humans) and technical entities (such as augmented reality) are called socio-technical systems. In order to perform risk assessment in such systems, it is essential to consider the new dependability threats introduced by augmented reality; for example, failure of an extended human function is a new type of dependability threat introduced to the system by the new technology. In particular, these new dependability threats must be identified, and modeling and analysis techniques must be extended to uncover their potential impacts. This research aims to provide a framework for risk assessment in AR-equipped socio-technical systems by identifying AR-extended human failures and AR-caused faults leading to human failures. Our work also extends the modeling elements of an existing metamodel for socio-technical systems, to enable modeling of AR-relevant dependability threats. This extended metamodel is expected to be used to extend analysis techniques for analyzing AR-equipped socio-technical systems.

Sabu, R., Yasuda, K., Kato, R., Kawaguchi, S., Iwata, H..  2020.  Does visual search by neck motion improve hemispatial neglect?: An experimental study using an immersive virtual reality system. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :262—267.

Unilateral spatial neglect (USN) is a higher cognitive dysfunction that can occur after a stroke. It is defined as an impairment in finding, reporting, reacting to, and orienting toward stimuli on the side opposite the damaged side of the brain. We have proposed a system to identify neglected regions in USN patients in three dimensions using three-dimensional virtual reality. The objectives of this study are twofold: first, to propose a system for numerically identifying the neglected regions using an object detection task in a virtual space, and second, to compare the neglected regions during object detection when the patient's neck is immobilized (‘fixed-neck’ condition) versus when the neck can be moved freely to search (‘free-neck’ condition). We performed the test using an immersive virtual reality system, once with the patient's neck fixed and once with the patient's neck free to move. Comparing the results of the study in two patients, we found that the neglected areas were similar in the fixed-neck condition. However, in the free-neck condition, one patient's neglect improved while the other patient's neglect worsened. These results suggest that exploratory ability affects the symptoms of USN and is crucial for the clinical evaluation of USN patients.

Powley, B. T..  2020.  Exploring Immersive and Non-Immersive Techniques for Geographic Data Visualization. 2020 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). :1—2.

Analyzing multi-dimensional geospatial data is difficult, and immersive analytics systems are used to visualize geospatial data and models. There is little previous work evaluating when immersive and non-immersive visualizations are most suitable for data analysis, and more research is needed.

Kennard, M., Zhang, H., Akimoto, Y., Hirokawa, M., Suzuki, K..  2020.  Effects of Visual Biofeedback on Competition Performance Using an Immersive Mixed Reality System. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :3793—3798.

This paper investigates the effects of real-time visual biofeedback on sports performance, using a large-scale immersive mixed reality system in which users play a simulated game of curling. Users slide custom curling stones across the floor toward a projected target whose size is dictated by a stress-related physiological measure: the user's heart rate (HR). The higher the player's HR, the smaller the target becomes, and vice versa. In the experiment, participants competed under three conditions: baseline, with the proposed biofeedback, and without it. The results show that providing a visual representation of the player's HR, i.e., of "choking" in competition, helped players understand their condition and improved competition performance (p = 0.0391).
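As a rough illustration of the biofeedback mapping described above, the inverse relation between heart rate and target size could be sketched as follows; the HR bounds and radius range here are illustrative assumptions, not values from the paper:

```python
def target_radius(hr_bpm, hr_rest=60.0, hr_max=180.0,
                  r_min=0.2, r_max=1.0):
    """Map heart rate to a projected target radius: higher HR -> smaller target.

    hr_rest/hr_max and the radius bounds are illustrative parameters,
    not values from the paper.
    """
    # Normalize HR into [0, 1], clamping values outside the expected range.
    t = (hr_bpm - hr_rest) / (hr_max - hr_rest)
    t = max(0.0, min(1.0, t))
    # Linearly interpolate: t = 0 (calm) -> r_max, t = 1 (stressed) -> r_min.
    return r_max - t * (r_max - r_min)
```

A calm player (60 bpm) gets the full-size target, while a stressed player (180 bpm) gets the smallest one.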

Illing, B., Westhoven, M., Gaspers, B., Smets, N., Brüggemann, B., Mathew, T..  2020.  Evaluation of Immersive Teleoperation Systems using Standardized Tasks and Measurements. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :278—285.

Despite advances in autonomous functionality for robots, teleoperation remains a means for performing delicate tasks in safety-critical contexts, like explosive ordnance disposal (EOD), and in ambiguous environments. Immersive stereoscopic displays have been proposed and developed in this regard, but bring about their own specific problems, e.g., simulator sickness. This work builds upon standardized test environments to yield reproducible comparisons between different robotic platforms. The focus was placed on testing three optronic systems with differing degrees of immersion: (1) a laptop display showing multiple monoscopic camera views, (2) an off-the-shelf virtual reality headset coupled with a pan-tilt-based stereoscopic camera, and (3) a so-called Telepresence Unit providing fast pan, tilt, and yaw rotation, stereoscopic view, and spatial audio. The stereoscopic systems yielded significantly faster task completion only for the maneuvering task. As expected, they also induced simulator sickness, among other effects; however, the amount of simulator sickness varied between the two stereoscopic systems. The collected data suggest that a higher degree of immersion, combined with careful system design, can reduce the expected increase in simulator sickness relative to the monoscopic camera baseline, while making the interface subjectively more effective for certain tasks.

Cecotti, H., Richard, Q., Gravellier, J., Callaghan, M..  2020.  Magnetic Resonance Imaging Visualization in Fully Immersive Virtual Reality. 2020 6th International Conference of the Immersive Learning Research Network (iLRN). :205—209.

The availability of commercial fully immersive virtual reality systems allows the proposal and development of new applications that offer novel ways to visualize and interact with multidimensional neuroimaging data. We propose a system for the visualization of, and interaction with, Magnetic Resonance Imaging (MRI) scans in a fully immersive learning environment in virtual reality. The system extracts the different slices from a DICOM file and presents them in a 3D environment where the user can display and rotate the MRI scan, and select the clipping plane in all possible orientations. The 3D environment includes two parts: 1) a cube that displays the MRI scan in 3D, and 2) three panels showing the axial, sagittal, and coronal views, where a desired slice can be accessed directly. In addition, the environment includes a representation of the brain through which the slices can be accessed and browsed directly with the controller. This application can be used both for educational purposes as an immersive learning tool, and by neuroscience researchers as a more convenient way to browse through an MRI scan and better analyze 3D data.
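The three orthogonal panels described above correspond to indexing a 3D voxel volume along each of its axes. A minimal sketch, assuming the DICOM slices have already been stacked into a NumPy array (e.g., via pydicom's `pixel_array`, loading code omitted); note that which axis maps to which anatomical view depends on the scan's orientation metadata:

```python
import numpy as np

def orthogonal_slices(volume, i, j, k):
    """Return the three orthogonal slices of a 3D MRI volume.

    `volume` is a (slices, rows, cols) voxel array; the anatomical
    naming (axial/coronal/sagittal) assumes a standard axial series.
    """
    axial = volume[i, :, :]     # fixed slice index
    coronal = volume[:, j, :]   # fixed row
    sagittal = volume[:, :, k]  # fixed column
    return axial, coronal, sagittal

# Synthetic voxel volume standing in for a loaded MRI scan.
vol = np.arange(4 * 5 * 6).reshape(4, 5, 6)
ax, co, sa = orthogonal_slices(vol, 2, 1, 3)
```

Each panel in the VR environment would then texture one of these 2D arrays onto its quad.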

Martin, S., Parra, G., Cubillo, J., Quintana, B., Gil, R., Perez, C., Castro, M..  2020.  Design of an Augmented Reality System for Immersive Learning of Digital Electronic. 2020 XIV Technologies Applied to Electronics Teaching Conference (TAEE). :1—6.

This article describes the development of two mobile applications for learning digital electronics. The first is an interactive iOS app for studying different digital circuits, which serves as the basis for the second: an augmented reality quiz game.

2020-06-04
Gupta, Avinash, Cecil, J., Tapia, Oscar, Sweet-Darter, Mary.  2019.  Design of Cyber-Human Frameworks for Immersive Learning. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :1563—1568.

This paper focuses on the creation of information-centric Cyber-Human Learning Frameworks involving Virtual Reality based mediums. A generalized framework is proposed and adapted for two educational domains: one supporting the education and training of residents in orthopedic surgery, and the other focusing on science learning for children with autism. Users, experts, and technology-based mediums play a key role in the design of such a Cyber-Human framework. Virtual Reality based immersive and haptic mediums were two of the technologies explored in implementing the framework for these learning domains. The proposed framework emphasizes Information-Centric Systems Engineering (ICSE) principles, which stress a user-centric approach along with formalizing the understanding of the target subjects or processes for which the learning environments are created.

Cao, Lizhou, Peng, Chao, Hansberger, Jeffery T..  2019.  A Large Curved Display System in Virtual Reality for Immersive Data Interaction. 2019 IEEE Games, Entertainment, Media Conference (GEM). :1—4.

This work presents the design and implementation of a large curved display system in a virtual reality (VR) environment that supports visualization of 2D datasets (e.g., images, buttons, and text). Using this system, users can interact with data in front of a wide field of view and gain a high level of perceived immersion. We exhibit two use cases of this system: (1) a virtual image wall as the display component of a 3D user interface, and (2) an inventory interface for a VR-based educational game. The use cases demonstrate the capability and flexibility of curved displays in supporting varied purposes of data interaction within virtual environments.

Almeida, L., Lopes, E., Yalçinkaya, B., Martins, R., Lopes, A., Menezes, P., Pires, G..  2019.  Towards natural interaction in immersive reality with a cyber-glove. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2653—2658.

Over the past few years, virtual and mixed reality systems have evolved significantly, yielding highly immersive experiences. Most of the metaphors used for interaction with the virtual environment do not provide the meaningful feedback that users are accustomed to in the real world. This paper proposes a cyber-glove to improve the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. In particular, we propose a cyber-glove system that tracks wrist movements, hand orientation, and finger movements. It provides decoupled positions of the wrist and hand, which can contribute to better embodiment in interaction and manipulation tasks. Additionally, detecting the curvature of the fingers aims to make the proprioceptive perception of grasping/releasing gestures more consistent with visual feedback. The cyber-glove system is being developed for VR applications related to real estate promotion, where users walk through the rooms of a house and interact with objects and furniture. This work aims to assess whether glove-based systems can contribute to a higher sense of immersion, embodiment, and usability when compared to standard, typically button-based, VR hand controllers. Twenty-two participants tested the cyber-glove system against the HTC Vive controller in a 3D manipulation task, specifically the opening of a virtual door. Metric results showed that 83% of the users performed faster door pushes and described shorter paths with their hands when wearing the cyber-glove. Subjective results showed that all participants rated the cyber-glove-based interactions as equally or more natural, and 90% of users experienced an equal or significantly increased sense of embodiment.

Tsiota, Anastasia, Xenakis, Dionysis, Passas, Nikos, Merakos, Lazaros.  2019.  Multi-Tier and Multi-Band Heterogeneous Wireless Networks with Black Hole Attacks. 2019 IEEE Global Communications Conference (GLOBECOM). :1—6.

Wireless networks are currently proliferated by multiple tiers and heterogeneous networking equipment that aims to support multifarious services, ranging from distant monitoring and control of wireless sensors to immersive virtual reality services. The vast collection of heterogeneous network equipment with divergent radio capabilities (e.g., multi-GHz operation) is vulnerable to wireless network attacks, raising questions about the service availability and coverage performance of future multi-tier wireless networks. In this paper, we study the impact of black hole attacks on the service coverage of multi-tier heterogeneous wireless networks and derive closed-form expressions for the case where network nodes are unable to identify and avoid black hole nodes. Assuming access to multiple bands, the derived expressions can readily be used to assess the performance gains from different association policies and the impact of black hole attacks in multi-tier wireless networks.
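The paper's closed-form expressions are not reproduced in the abstract, but the qualitative effect of black hole nodes on coverage can be illustrated with a toy Monte Carlo model; uniform node placement, nearest-node association, and a fixed coverage radius are all simplifying assumptions of this sketch, not the paper's model:

```python
import random

def coverage_probability(n_nodes=50, p_blackhole=0.1, r_cov=0.15,
                         trials=5000, seed=1):
    """Monte Carlo sketch of service coverage under black hole attacks.

    Illustrative model only: nodes are uniform in the unit square, the
    user associates with the nearest node, and service succeeds only if
    that node is benign and within the coverage radius r_cov.
    """
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        ux, uy = rng.random(), rng.random()
        # Each node is independently a black hole with probability p_blackhole.
        nodes = [(rng.random(), rng.random(),
                  rng.random() < p_blackhole) for _ in range(n_nodes)]
        x, y, is_blackhole = min(
            nodes, key=lambda n: (n[0] - ux) ** 2 + (n[1] - uy) ** 2)
        dist2 = (x - ux) ** 2 + (y - uy) ** 2
        if not is_blackhole and dist2 <= r_cov ** 2:
            covered += 1
    return covered / trials
```

Raising the black hole fraction visibly degrades the estimated coverage, mirroring the trend the closed-form analysis quantifies.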

Bang, Junseong, Lee, Youngho, Lee, Yong-Tae, Park, Wonjoo.  2019.  AR/VR Based Smart Policing For Fast Response to Crimes in Safe City. 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). :470—475.

With advances in information and communication technologies, cities are getting smarter, enhancing the quality of human life. In smart cities, safety (including security) is an essential issue. In this paper, smart city facilities for safety are presented through a review of several safe city projects. Based on these facilities, a design for a crime intelligence system is introduced. The paper then explains smart policing with augmented reality (AR) and virtual reality (VR), concentrating on how immersive technologies can support police activities (i.e., emergency call reporting and reception, patrol, investigation, and arrest) in order to reduce the crime rate and respond quickly to emergencies in the safe city.

Asiri, Somayah, Alzahrani, Ahmad A..  2019.  The Effectiveness of Mixed Reality Environment-Based Hand Gestures in Distributed Collaboration. 2019 2nd International Conference on Computer Applications Information Security (ICCAIS). :1—6.

Mixed reality (MR) technologies are widely used in distributed collaborative learning scenarios and have made learning and training more flexible and intuitive. However, there are many challenges in the use of MR due to the difficulty of creating a sense of physical presence, particularly when a physical task is performed collaboratively. We therefore developed a novel MR system to overcome these limitations and enhance the distributed collaboration user experience. The primary objective of this paper is to explore the potential of an MR-based hand gesture system to enhance the conceptual architecture of MR in terms of both visualization and interaction in distributed collaboration. We propose a synchronous prototype named MRCollab, an immersive collaborative approach that allows two or more users to communicate with a peer through the integration of several technologies, such as video, audio, and hand gestures.

Shang, Jiacheng, Wu, Jie.  2019.  Enabling Secure Voice Input on Augmented Reality Headsets using Internal Body Voice. 2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). :1—9.

Voice-based input is usually the primary input method for augmented reality (AR) headsets, due to the immersive AR experience and good recognition performance. However, recent research has shown that an attacker can inject inaudible voice commands into devices that lack voice verification. Even if voice input is secured with voice verification techniques, an attacker can easily steal the victim's voice using a low-cost handheld recorder and replay it to voice-based applications. To defend against voice-spoofing attacks, AR headsets should be able to determine whether the voice comes from the person wearing the headset. Existing voice-spoofing defense systems are designed for smartphone platforms; due to the special locations of microphones and loudspeakers on AR headsets, these solutions are difficult to implement on headsets. To address this challenge, we propose a voice-spoofing defense system for AR headsets that leverages both the internal body propagation and the air propagation of human voices. Experimental results show that our system can successfully accept normal users with an average accuracy of 97% and defend against two types of attacks with an average accuracy of at least 98%.
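The core intuition, that a loudspeaker replay excites the air channel but not body conduction, can be sketched as a simple two-channel consistency check; the zero-lag correlation measure and the threshold are illustrative assumptions, not the authors' actual classifier:

```python
import math

def normalized_correlation(a, b):
    """Zero-lag normalized cross-correlation of two equal-length signals."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def is_live_speaker(air_mic, body_mic, threshold=0.5):
    """Accept the voice only if the body-conduction channel tracks the
    air channel (illustrative liveness check; threshold is an assumption)."""
    return normalized_correlation(air_mic, body_mic) >= threshold

# Genuine speech appears on both channels; a loudspeaker replay leaves
# the body-conduction channel essentially silent.
speech = [math.sin(2 * math.pi * 3 * i / 100) for i in range(200)]
body = [0.3 * s for s in speech]
silent_body = [0.0] * len(speech)
```

With these toy signals, the genuine pairing is accepted and the replay pairing rejected.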

Cong, Huy Phi, Tran, Ha Huu, Trinh, Anh Vu, Vu, Thang X..  2019.  Modeling a Virtual Reality System with Caching and Computing Capabilities at Mobile User’ Device. 2019 6th NAFOSTED Conference on Information and Computer Science (NICS). :393—397.

Virtual reality (VR) has recently become a promising technology in both industry and academia, due to its potential applications in immersive experiences including websites, games, tourism, and museums. VR provides striking three-dimensional (3D) experiences but requires a very large amount of data, such as images, textures, depth, and focal length. However, applying VR on diverse devices, especially mobile devices, demands ultra-high transmission rates and extremely low latency, which is a major challenge. Considering this problem, this paper proposes a novel combined model that transforms the computing capability of the VR device into an equivalent caching amount while retaining low latency and fast transmission. In addition, classic caching models are applied to the combined caching and computing capabilities, which extends easily to multi-user models.

Gulhane, Aniket, Vyas, Akhil, Mitra, Reshmi, Oruche, Roland, Hoefer, Gabriela, Valluripally, Samaikya, Calyam, Prasad, Hoque, Khaza Anuarul.  2019.  Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications. 2019 16th IEEE Annual Consumer Communications Networking Conference (CCNC). :1—9.

Social Virtual Reality based Learning Environments (VRLEs), such as vSocial, render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. There are limited prior works that explore attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify the risks corresponding to security, privacy, and safety (SPS) threats. SPS threats can adversely impact the educational user experience and hinder the delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with the rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with that of an ad hoc attack tree to study the trade-offs between the overhead of managing attack trees and the cost of risk mitigation once vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving, and more secure VRLE system.
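One way such an attack tree evaluation could look, with threat rate and duration as leaf inputs, is sketched below; the Poisson leaf model and the AND/OR gate algebra are common attack tree conventions assumed here, not necessarily the paper's exact formulas, and the example tree is hypothetical:

```python
import math

def leaf_prob(rate, duration):
    """Probability that a threat with arrival `rate` (events/hour) occurs
    at least once over `duration` hours, assuming a Poisson process
    (a modelling assumption, not necessarily the paper's)."""
    return 1.0 - math.exp(-rate * duration)

def tree_prob(node):
    """Evaluate an attack tree given as nested dicts:
    {'OR': [...]}, {'AND': [...]}, or {'leaf': (rate, duration)}."""
    if 'leaf' in node:
        return leaf_prob(*node['leaf'])
    if 'AND' in node:
        p = 1.0
        for child in node['AND']:
            p *= tree_prob(child)  # all sub-attacks must succeed
        return p
    p_fail = 1.0
    for child in node['OR']:
        p_fail *= 1.0 - tree_prob(child)  # OR succeeds if any child does
    return 1.0 - p_fail

# Hypothetical VRLE attack tree: disrupt a 1-hour lesson by packet
# flooding, OR by combining credential theft with impersonation.
tree = {'OR': [
    {'leaf': (0.2, 1.0)},
    {'AND': [{'leaf': (0.05, 1.0)}, {'leaf': (0.5, 1.0)}]},
]}
risk = tree_prob(tree)
```

The resulting root probability can then be weighted by impact to yield a risk score.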

Briggs, Shannon, Perrone, Michael, Peveler, Matthew, Drozdal, Jaimie, Balagyozyan, Lilit, Su, Hui.  2019.  Multimodal, Multiuser Immersive Brainstorming and Scenario Planning for Intelligence Analysis. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1—4.

This paper discusses two pieces of software designed for intelligence analysis, the brainstorming tool and the Scenario Planning Advisor. These tools were developed in the Cognitive Immersive Systems Lab (CISL) in conjunction with IBM. We discuss the immersive environment the tools are situated in, and the proposed benefit for intelligence analysis.

2018-01-10
Bönsch, Andrea, Trisnadi, Robert, Wendt, Jonathan, Vierjahn, Tom, Kuhlen, Torsten W..  2017.  Score-based Recommendation for Efficiently Selecting Individual Virtual Agents in Multi-agent Systems. Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. :74:1–74:2.
Controlling user-agent interactions by means of an external operator includes selecting the virtual interaction partners quickly and faultlessly. However, especially in immersive scenes with a large number of potential partners, this task is non-trivial. Thus, we present a score-based recommendation system supporting an operator in the selection task. Agents are recommended as potential partners based on two parameters: the user's distance to the agents and the user's gazing direction. An additional graphical user interface (GUI) provides elements for configuring the system and for applying actions to those agents which the operator has confirmed as interaction partners.
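A minimal sketch of such a score, combining the two parameters mentioned (distance and gazing direction); the weights and functional form are illustrative assumptions, not the authors' formula:

```python
import math

def agent_score(user_pos, gaze_dir, agent_pos, w_dist=0.5, w_gaze=0.5):
    """Score an agent as a candidate interaction partner from the user's
    distance and gazing direction (2D for brevity). Higher is better."""
    dx = agent_pos[0] - user_pos[0]
    dy = agent_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    # Proximity term decays smoothly with distance.
    proximity = 1.0 / (1.0 + dist)
    # Gaze term: cosine between the gaze direction and direction to agent,
    # clipped at zero so agents behind the user score nothing for gaze.
    g = math.hypot(*gaze_dir)
    alignment = 0.0
    if dist > 0 and g > 0:
        alignment = max(0.0, (dx * gaze_dir[0] + dy * gaze_dir[1]) / (dist * g))
    return w_dist * proximity + w_gaze * alignment

def recommend(user_pos, gaze_dir, agents):
    """Rank agents by score, best candidate first."""
    return sorted(agents, key=lambda a: agent_score(user_pos, gaze_dir, a),
                  reverse=True)
```

An agent directly in the user's line of sight outranks a nearer agent behind the user.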
Vellingiri, Shanthi, Balakrishnan, Prabhakaran.  2017.  Modeling User Quality of Experience (QoE) through Position Discrepancy in Multi-Sensorial, Immersive, Collaborative Environments. MMSys'17: Proceedings of the 8th ACM Multimedia Systems Conference. :296—307.

Users' QoE (Quality of Experience) in Multi-Sensorial, Immersive, Collaborative Environments (MICE) applications is mostly measured by psychometric studies. These studies provide a subjective insight into the performance of such applications. In this paper, we hypothesize that the spatial coherence, or lack thereof, of the embedded virtual objects among users correlates with the QoE in MICE. We use Position Discrepancy (PD) to model this lack of spatial coherence in MICE. Based on that, we propose a Hierarchical Position Discrepancy Model (HPDM) that computes PD at multiple levels to derive the application/system-level PD as a measure of performance. Experimental results on an example task in MICE show that HPDM can objectively quantify the application performance and correlates with the psychometric study-based QoE measurements. We envisage that HPDM can provide more insight into MICE applications without the need for extensive user studies.
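One simple reading of a hierarchical PD computation, with object-level discrepancies aggregated into an application-level score, might look as follows; the unweighted-mean aggregation is an assumption for illustration, and the paper's HPDM may weight levels differently:

```python
import itertools
import math

def object_pd(positions):
    """Object-level PD: mean pairwise Euclidean distance between the
    positions at which each user's site renders the same virtual object."""
    pairs = list(itertools.combinations(positions, 2))
    if not pairs:
        return 0.0
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def application_pd(objects):
    """Application-level PD: mean of the object-level PDs over all
    embedded virtual objects (one possible hierarchical aggregation)."""
    return sum(object_pd(p) for p in objects) / len(objects)
```

For example, an object rendered 1 m apart at two users' sites and another rendered coherently yields an application-level PD of 0.5 m.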

Schaefer, Gerald, Budnik, Mateusz, Krawczyk, Bartosz.  2017.  Immersive Browsing in an Image Sphere. Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication. :26:1–26:4.
In this paper, we present an immersive image database navigation system. Images are visualised in a spherical visualisation space and arranged on a grid by colour, so that images of similar colour are located close to each other, while access to large image sets is possible through a hierarchical browsing structure. The user wears a 3-D head-mounted display (HMD) and is immersed inside the image sphere. Navigation is performed by head movement, using a 6-degree-of-freedom tracker integrated into the HMD in conjunction with a Wiimote remote control.
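The colour-based arrangement could be sketched as ordering images by mean hue and laying them out row-major on a grid; this flat layout is a simplification of the actual spherical, hierarchical arrangement, and the image ids and mean-colour inputs below are hypothetical:

```python
import colorsys

def arrange_by_colour(images, cols):
    """Place images on a grid so similar colours end up adjacent.

    `images` maps an image id to its mean (r, g, b) in [0, 1]; sorting
    by hue is one simple heuristic for colour similarity.
    """
    def hue(item):
        r, g, b = item[1]
        return colorsys.rgb_to_hsv(r, g, b)[0]
    ordered = sorted(images.items(), key=hue)
    # Row-major grid coordinates: image id -> (row, col).
    return {img_id: (i // cols, i % cols)
            for i, (img_id, _) in enumerate(ordered)}
```

Neighbouring hues (e.g., red and yellow) land in adjacent grid cells, which is the property the browsing interface relies on.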
Hosseini, S., Swash, M. R., Sadka, A..  2017.  Immersive 360 Holoscopic 3D system design. 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN). :325–329.
3D imaging has recently been a hot research topic due to high demand from various applications in security, health, autonomous vehicles, and robotics. Yet stereoscopic 3D imaging is limited by its principle of mimicking the human eyes: the camera separation baseline defines the amount of 3D depth that can be captured. Holoscopic 3D (H3D) imaging is based on the “Fly's eye” technique, which uses coherent replication of light to record a spatial image of a real scene using a microlens array (MLA), giving the complete 3D parallax. H3D imaging has been considered a promising 3D imaging technique because it pursues a simple form of 3D acquisition using a single-aperture camera, making it the most suitable for scalable digitization, security, and autonomous applications. This paper proposes a 360-degree holoscopic 3D imaging system design for immersive 3D acquisition and stitching.