Biblio

Filters: Keyword is virtual reality
2021-07-08
Lu, Yujun, Gao, BoYu, Long, Jinyi, Weng, Jian.  2020.  Hand Motion with Eyes-free Interaction for Authentication in Virtual Reality. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). :714—715.
Designing an authentication method is a crucial component of securing privacy in information systems. Virtual Reality (VR) is a new interaction platform in which users can interact through natural behaviours (e.g. hand, gaze, head, etc.). In this work, we propose a novel authentication method in which the user performs hand motions in an eyes-free manner. We evaluate the usability and security of eyes-engaged and eyes-free input in a pilot study. The initial results revealed that our proposed method can achieve a trade-off between usability and security, showing a new approach to behaviour-based authentication in VR.
2021-07-02
Haque, Shaheryar Ehsan I, Saleem, Shahzad.  2020.  Augmented reality based criminal investigation system (ARCRIME). 2020 8th International Symposium on Digital Forensics and Security (ISDFS). :1—6.
Crime scene investigation and preservation are fundamentally the pillars of forensics. Numerous cases are discussed in this paper where mishandling of evidence or improper investigation led to lengthy trials and, even worse, incorrect verdicts. Whether the problem is a lack of training of first responders or some other scenario, it is essential for police officers to properly preserve the evidence. The second problem is criminal profiling, where each district department has its own method of storing information about criminals. ARCRIME intends to digitally transform the way police combat crime. It will allow police officers to create a copy of the scene of the crime so that it can be presented in courts or in forensics labs. It will take the form of wearable glasses for officers on site, whereas officers in training will wear a headset. Trainee officers will be provided with simulations of cases which have already been resolved. Officers on scene will be provided with intelligence about the crime and the suspect they are interviewing. They will be able to create a case file with audio recordings and images which can be digitally sent to a prosecution lawyer. This paper also explores the risks involved with ARCRIME and weighs their impact and likelihood. Contingency plans for responding to emergency situations are highlighted in the same section as well.
2021-06-01
Plager, Trenton, Zhu, Ying, Blackmon, Douglas A..  2020.  Creating a VR Experience of Solitary Confinement. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). :692—693.
The goal of this project is to create a realistic VR experience of solitary confinement and study its impact on users. Although there have been active debates and studies on this subject, very few people have personal experience of solitary confinement. Our first aim is to create such an experience in VR to raise awareness of solitary confinement. We also want to conduct user studies to compare the VR solitary confinement experience with other types of media experiences, such as films or personal narrations. Finally, we want to study people’s sense of time in such a VR environment.
2021-03-01
Tao, J., Xiong, Y., Zhao, S., Xu, Y., Lin, J., Wu, R., Fan, C..  2020.  XAI-Driven Explainable Multi-view Game Cheating Detection. 2020 IEEE Conference on Games (CoG). :144–151.
Online gaming is one of the most successful applications, with a large number of players interacting in an online persistent virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which has brought great harm to game health and player enjoyment. The game industry has devoted much effort to cheating detection with multi-view data sources and has achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views remains a challenging task. To respond to the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework driven by explainable AI (XAI). It combines cheating explainers with cheating classifiers from different views to generate individual, local and global explanations, which contribute to evidence generation, reason generation, model debugging and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. Our framework can also easily generalize to other related tasks in online games, such as explainable recommender systems, explainable churn prediction, etc.
2021-02-03
Velaora, M., Roy, R. van, Guéna, F..  2020.  ARtect, an augmented reality educational prototype for architectural design. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :110—115.

ARtect is an Augmented Reality application developed with Unity 3D, which envisions an educational, interactive and immersive tool for architects, designers, researchers, and artists. This digital instrument provides the capability to visualize custom-made 3D models and 2D graphics in interior and exterior environments. The user-friendly interface offers accurate insight before the materialization of any architectural project, enabling evaluation of the design proposal. This practice could be integrated into the architectural design learning process, saving the resources of printed drawings and 3D cardboard models during several stages of spatial conception.

Lee, J..  2020.  CanvasMirror: Secure Integration of Third-Party Libraries in a WebVR Environment. 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). :75—76.

Web technology has evolved to offer 360-degree immersive browsing experiences. This new technology, called WebVR, enables virtual reality by rendering a three-dimensional world on an HTML canvas. Unfortunately, there exists no browser-supported way of sharing this canvas between different parties. As a result, third-party library providers with ill intent (e.g., stealing sensitive information from end-users) can easily distort the entire WebVR site. To mitigate the new threats posed in WebVR, we propose CanvasMirror, which allows publishers to specify the behaviors of third-party libraries and enforce this specification. We show that CanvasMirror effectively separates the third-party context from the host origin by leveraging the privilege separation technique and safely integrates VR contents on a shared canvas.

Sabu, R., Yasuda, K., Kato, R., Kawaguchi, S., Iwata, H..  2020.  Does visual search by neck motion improve hemispatial neglect?: An experimental study using an immersive virtual reality system. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :262—267.

Unilateral spatial neglect (USN) is a higher cognitive dysfunction that can occur after a stroke. It is defined as an impairment in finding, reporting, reacting to, and orienting toward stimuli on the side opposite the damaged side of the brain. We have proposed a system to identify neglected regions in USN patients in three dimensions using three-dimensional virtual reality. The objectives of this study are twofold: first, to propose a system for numerically identifying the neglected regions using an object detection task in a virtual space, and second, to compare the neglected regions during object detection when the patient's neck is immobilized (‘fixed-neck’ condition) versus when the neck can be freely moved to search (‘free-neck’ condition). We performed the test using an immersive virtual reality system, once with the patient's neck fixed and once with the patient's neck free to move. Comparing the results for the two patients in the study, we found that the neglected areas were similar under the fixed-neck condition. However, under the free-neck condition, one patient's neglect improved while the other patient’s neglect worsened. These results suggest that exploratory ability affects the symptoms of USN and is crucial for the clinical evaluation of USN patients.

Illing, B., Westhoven, M., Gaspers, B., Smets, N., Brüggemann, B., Mathew, T..  2020.  Evaluation of Immersive Teleoperation Systems using Standardized Tasks and Measurements. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :278—285.

Despite advances in autonomous functionality for robots, teleoperation remains a means of performing delicate tasks in safety-critical contexts like explosive ordnance disposal (EOD) and in ambiguous environments. Immersive stereoscopic displays have been proposed and developed in this regard, but bring about their own specific problems, e.g., simulator sickness. This work builds upon standardized test environments to yield reproducible comparisons between different robotic platforms. The focus was placed on testing three optronic systems of differing degrees of immersion: (1) a laptop display showing multiple monoscopic camera views, (2) an off-the-shelf virtual reality headset coupled with a pan-tilt-based stereoscopic camera, and (3) a so-called Telepresence Unit, providing fast pan, tilt, yaw rotation, stereoscopic view, and spatial audio. Stereoscopic systems yielded significantly faster task completion only for the maneuvering task. As expected, they also induced simulator sickness, among other effects. However, the amount of simulator sickness varied between the two stereoscopic systems. The collected data suggest that a higher degree of immersion combined with careful system design can reduce the expected increase in simulator sickness compared to the monoscopic camera baseline, while making the interface subjectively more effective for certain tasks.

Cecotti, H., Richard, Q., Gravellier, J., Callaghan, M..  2020.  Magnetic Resonance Imaging Visualization in Fully Immersive Virtual Reality. 2020 6th International Conference of the Immersive Learning Research Network (iLRN). :205—209.

The availability of commercial fully immersive virtual reality systems allows the proposal and development of new applications that offer novel ways to visualize and interact with multidimensional neuroimaging data. We propose a system for the visualization and interaction with Magnetic Resonance Imaging (MRI) scans in a fully immersive learning environment in virtual reality. The system extracts the different slices from a DICOM file and presents the slices in a 3D environment where the user can display and rotate the MRI scan, and select the clipping plane in all the possible orientations. The 3D environment includes two parts: 1) a cube that displays the MRI scan in 3D and 2) three panels that include the axial, sagittal, and coronal views, where it is possible to directly access a desired slice. In addition, the environment includes a representation of the brain where it is possible to access and browse directly through the slices with the controller. This application can be used both for educational purposes as an immersive learning tool, and by neuroscience researchers as a more convenient way to browse through an MRI scan to better analyze 3D data.
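As a rough illustration of the slice-handling step described in this abstract (not the authors' implementation), the following Python sketch stacks a DICOM series into a volume and extracts a middle axial, coronal, and sagittal slice; the pydicom/numpy stack, the directory name, and sorting by InstanceNumber are assumptions.

import glob
import numpy as np
import pydicom

def load_volume(dicom_dir):
    """Read a DICOM series and stack it into a 3D volume (slices, rows, columns)."""
    slices = [pydicom.dcmread(f) for f in glob.glob(dicom_dir + "/*.dcm")]
    slices.sort(key=lambda s: int(s.InstanceNumber))   # order along the scan axis (assumption)
    return np.stack([s.pixel_array for s in slices], axis=0)

volume = load_volume("mri_series")                     # hypothetical directory of .dcm files
z, y, x = (d // 2 for d in volume.shape)               # middle indices along each axis
axial, coronal, sagittal = volume[z, :, :], volume[:, y, :], volume[:, :, x]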

2021-02-01
Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., Billinghurst, M..  2020.  Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). :756–765.
With the advancement of Artificial Intelligence technology in smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. Through our research, we report on a novel methodology to investigate users' trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV); subjective data through questionnaires such as the System Trust Scale (STS), the Subjective Mental Effort Questionnaire (SMEQ) and NASA-TLX; and a behavioral measure of trust (the congruency of users' head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these metrics, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
2020-11-04
Flores, P..  2019.  Digital Simulation in the Virtual World: Its Effect in the Knowledge and Attitude of Students Towards Cybersecurity. 2019 Sixth HCT Information Technology Trends (ITT). :1—5.

The search for alternative delivery modes of teaching has been one of the pressing concerns of numerous educational institutions. One key innovation to improve teaching and learning is e-learning, which has undergone enormous improvements. From its focus on text-based environments, it has evolved into Virtual Learning Environments (VLEs), which provide more stimulating and immersive experiences for learners and educators. An example of a VLE is the virtual world, an emerging educational platform among universities worldwide. One very interesting topic that can be taught using the virtual world is cybersecurity. Simulating cybersecurity in the virtual world may give students a realistic experience that can hardly be achieved by classroom teaching. To date, there are quite a number of studies focused on cybersecurity awareness and cybersecurity behavior, but none has looked into the effect of digital simulation in the virtual world, as a new educational platform, on the cybersecurity attitude of students. It is in this regard that this study was conducted, by designing virtual-world simulation lessons that teach five aspects of cybersecurity, namely malware, phishing, social engineering, password usage and online scams, which are the most common cybersecurity issues. The study sought to examine the effect of this digital simulation design on the cybersecurity knowledge and attitude of the students. The results of the study ascertain that students exposed to simulation in the virtual world show a greater positive change in cybersecurity knowledge and attitude than their counterparts.

2020-08-28
Kolomeets, Maxim, Chechulin, Andrey, Zhernova, Ksenia, Kotenko, Igor, Gaifulina, Diana.  2020.  Augmented reality for visualizing security data for cybernetic and cyberphysical systems. 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). :421—428.
The paper discusses the use of virtual (VR) and augmented (AR) reality for visual analytics in information security. The paper answers two questions: “In which areas of information security visualization can VR/AR be useful?” and “How does VR/AR differ from similar visualization methods at the level of information perception?”. The first answer is based on an investigation of information security areas and visualization models that can be used in VR/AR security visualization. The second answer is based on experiments that evaluate the perception of visual components in VR.
2020-07-16
McNeely-White, David G., Ortega, Francisco R., Beveridge, J. Ross, Draper, Bruce A., Bangar, Rahul, Patil, Dhruva, Pustejovsky, James, Krishnaswamy, Nikhil, Rim, Kyeongmin, Ruiz, Jaime, et al.  2019.  User-Aware Shared Perception for Embodied Agents. 2019 IEEE International Conference on Humanized Computing and Communication (HCC). :46—51.

We present Diana, an embodied agent who is aware of her own virtual space and the physical space around her. Using video and depth sensors, Diana attends to the user's gestures, body language, gaze and (soon) facial expressions as well as their words. Diana also gestures and emotes in addition to speaking, and exists in a 3D virtual world that the user can see. This produces symmetric and shared perception, in the sense that Diana can see the user, the user can see Diana, and both can see the virtual world. The result is an embodied agent that begins to develop the conceit that the user is interacting with a peer rather than a program.

2020-06-26
Ahmad, Jawad, Tahir, Ahsen, Khan, Jan Sher, Khan, Muazzam A, Khan, Fadia Ali, Arshad, Habib, Zeeshan.  2019.  A Partial Ligt-weight Image Encryption Scheme. 2019 UK/ China Emerging Technologies (UCET). :1—3.

Due to greater network capacity and faster data speeds, fifth generation (5G) technology is expected to provide a huge improvement in Internet of Things (IoT) applications, Augmented & Virtual Reality (AR/VR) technologies, and Machine Type Communications (MTC). Consumers will be able to send and receive high-quality multimedia data. For the protection of sensitive multimedia data, a large number of encryption algorithms are available; however, these encryption schemes do not provide a light-weight encryption solution for real-time application requirements. This paper proposes a new multi-chaos, computationally efficient encryption scheme for digital images. In the proposed scheme, the plaintext image is transformed using the Lifting Wavelet Transform (LWT) and only one-fourth of the transformed image is encrypted using light-weight Chebyshev and Intertwining maps. Both chaotic maps are chaotically coupled for the confusion and diffusion processes, which further enhances the image security. Encryption/decryption speed and other security measures such as the correlation coefficient, entropy, Number of Pixels Change Rate (NPCR), contrast, energy, and homogeneity confirm the superiority of the proposed light-weight encryption scheme.
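A minimal sketch of the partial-encryption idea summarized above, assuming PyWavelets' standard 2D DWT stands in for the paper's lifting wavelet transform and a single Chebyshev map stands in for the coupled Chebyshev/Intertwining maps; the parameters and keystream quantization are illustrative, not the authors' scheme.

import numpy as np
import pywt

def chebyshev_keystream(x0, k, n):
    """Generate n keystream bytes from the Chebyshev map x_{i+1} = cos(k * arccos(x_i))."""
    x, out = x0, []
    for _ in range(n):
        x = np.cos(k * np.arccos(x))
        out.append(int(abs(x) * 1e6) % 256)
    return np.array(out, dtype=np.int64)

def partial_encrypt(image, x0=0.7, k=4):
    """One-level 2D DWT, then encrypt only the approximation subband (one quarter of the data)."""
    cA, detail = pywt.dwt2(image.astype(float), "haar")
    coeffs = np.round(cA).astype(np.int64)
    keystream = chebyshev_keystream(x0, k, coeffs.size)
    encrypted = (coeffs.ravel() ^ keystream).reshape(cA.shape)
    return encrypted, detail          # encrypted approximation subband plus untouched detail subbands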

2020-06-19
Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184—1189.

Automatic emotion recognition using computer vision is significant for many real-world applications like photojournalism, virtual reality, sign language recognition, and Human Robot Interaction (HRI). Psychological research findings advocate that humans depend on the collective visual conduits of face and body to comprehend human emotional behaviour. A plethora of studies have analysed human emotions using facial expressions, EEG signals, speech, etc., but most of this work was based on a single modality. Our objective is to efficiently integrate emotions recognized from facial expressions and the upper body pose of humans using images. Our work on bimodal emotion recognition provides the benefits of the accuracy of both modalities.

2020-06-04
Gupta, Avinash, Cecil, J., Tapia, Oscar, Sweet-Darter, Mary.  2019.  Design of Cyber-Human Frameworks for Immersive Learning. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :1563—1568.

This paper focuses on the creation of information-centric Cyber-Human Learning Frameworks involving Virtual Reality based mediums. A generalized framework is proposed and adapted for two educational domains: one supporting the education and training of residents in orthopedic surgery, and the other focusing on science learning for children with autism. Users, experts and technology-based mediums play a key role in the design of such a Cyber-Human framework. Virtual Reality based immersive and haptic mediums were two of the technologies explored in the implementation of the framework for these learning domains. The proposed framework highlights the importance of Information-Centric Systems Engineering (ICSE) principles, which emphasize a user-centric approach along with formalizing the understanding of the target subjects or processes for which the learning environments are being created.

Cao, Lizhou, Peng, Chao, Hansberger, Jeffery T..  2019.  A Large Curved Display System in Virtual Reality for Immersive Data Interaction. 2019 IEEE Games, Entertainment, Media Conference (GEM). :1—4.

This work presents the design and implementation of a large curved display system in a virtual reality (VR) environment that supports visualization of 2D datasets (e.g., images, buttons and text). By using this system, users are allowed to interact with data in front of a wide field of view and gain a high level of perceived immersion. We exhibit two use cases of this system, including (1) a virtual image wall as the display component of a 3D user interface, and (2) an inventory interface for a VR-based educational game. The use cases demonstrate capability and flexibility of curved displays in supporting varied purposes of data interaction within virtual environments.

Almeida, L., Lopes, E., Yalçinkaya, B., Martins, R., Lopes, A., Menezes, P., Pires, G..  2019.  Towards natural interaction in immersive reality with a cyber-glove. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2653—2658.

Over the past few years, virtual and mixed reality systems have evolved significantly, yielding highly immersive experiences. Most of the metaphors used for interaction with the virtual environment do not provide the meaningful feedback that users are used to in the real world. This paper proposes a cyber-glove to improve the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. In particular, we propose a cyber-glove system that tracks wrist movements, hand orientation and finger movements. It provides a decoupled position of the wrist and hand, which can contribute to a better embodiment in interaction and manipulation tasks. Additionally, the detection of the curvature of the fingers aims to make the proprioceptive perception of grasping/releasing gestures more consistent with visual feedback. The cyber-glove system is being developed for VR applications related to real estate promotion, where users have to move through the rooms of a house and interact with objects and furniture. This work aims to assess whether glove-based systems can contribute to a higher sense of immersion, embodiment and usability when compared to standard VR hand controller devices (typically button-based). Twenty-two participants tested the cyber-glove system against the HTC Vive controller in a 3D manipulation task, specifically the opening of a virtual door. Metric results showed that 83% of the users performed faster door pushes and described shorter paths with their hands when wearing the cyber-glove. Subjective results showed that all participants rated the cyber-glove based interactions as equally or more natural, and 90% of users experienced an equal or significant increase in the sense of embodiment.

Bang, Junseong, Lee, Youngho, Lee, Yong-Tae, Park, Wonjoo.  2019.  AR/VR Based Smart Policing For Fast Response to Crimes in Safe City. 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). :470—475.

With advances in information and communication technologies, cities are getting smarter to enhance the quality of human life. In smart cities, safety (including security) is an essential issue. In this paper, by reviewing several safe city projects, smart city facilities for safety are presented. With these facilities in mind, a design for a crime intelligence system is introduced. Then, concentrating on how to support police activities (i.e., emergency call reporting reception, patrol activity, investigation activity, and arrest activity) with immersive technologies in order to reduce the crime rate and to quickly respond to emergencies in the safe city, smart policing with augmented reality (AR) and virtual reality (VR) is explained.

Asiri, Somayah, Alzahrani, Ahmad A..  2019.  The Effectiveness of Mixed Reality Environment-Based Hand Gestures in Distributed Collaboration. 2019 2nd International Conference on Computer Applications Information Security (ICCAIS). :1—6.

Mixed reality (MR) technologies are widely used in distributed collaborative learning scenarios and have made learning and training more flexible and intuitive. However, there are many challenges in the use of MR due to the difficulty of creating a physical presence, particularly when a physical task is being performed collaboratively. We therefore developed a novel MR system to overcome these limitations and enhance the distributed-collaboration user experience. The primary objective of this paper is to explore the potential of an MR-based hand gesture system to enhance the conceptual architecture of MR in terms of both visualization and interaction in distributed collaboration. We propose a synchronous prototype named MRCollab as an immersive collaborative approach that allows two or more users to communicate with their peers based on the integration of several technologies such as video, audio, and hand gestures.

Cong, Huy Phi, Tran, Ha Huu, Trinh, Anh Vu, Vu, Thang X..  2019.  Modeling a Virtual Reality System with Caching and Computing Capabilities at Mobile User’ Device. 2019 6th NAFOSTED Conference on Information and Computer Science (NICS). :393—397.

Virtual reality (VR) has recently become a promising technique in both industry and academia due to its potential applications in immersive experiences, including websites, games, tourism, and museums. The VR technique provides amazing 3-Dimensional (3D) experiences but requires a very large number of elements such as images, texture, depth, focal length, etc. However, in order to apply the VR technique to various devices, especially mobile devices, the required ultra-high transmission rate and extremely low latency are a really big challenge. Considering this problem, this paper proposes a novel combination model that transforms the computing capability of the VR device into an equivalent caching amount while maintaining low latency and fast transmission. In addition, classic caching models are applied to the computing and caching capabilities, which easily extends to multi-user models.

Gulhane, Aniket, Vyas, Akhil, Mitra, Reshmi, Oruche, Roland, Hoefer, Gabriela, Valluripally, Samaikya, Calyam, Prasad, Hoque, Khaza Anuarul.  2019.  Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications. 2019 16th IEEE Annual Consumer Communications Networking Conference (CCNC). :1—9.

Social Virtual Reality based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. There are limited prior works that have explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with the rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an ad hoc attack tree to study the trade-offs between the overhead of managing attack trees and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving and secure VRLE system.
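As a generic illustration of attack-tree risk scoring of the kind described above (not the paper's actual formulas), the sketch below derives leaf likelihoods from threat rate and duration under a Poisson-arrival assumption, combines them through AND/OR gates, and multiplies by an analyst-assigned impact; the example tree and all numbers are hypothetical.

import math

def leaf(rate, duration):
    """Probability that at least one attack arrives within the window (Poisson assumption)."""
    return 1.0 - math.exp(-rate * duration)

def or_gate(children):
    """Attack succeeds if any child succeeds."""
    p_fail = 1.0
    for p in children:
        p_fail *= (1.0 - p)
    return 1.0 - p_fail

def and_gate(children):
    """Attack succeeds only if every child succeeds."""
    p = 1.0
    for child in children:
        p *= child
    return p

# Hypothetical VRLE attack tree: the lesson is disrupted if either a network
# threat succeeds, or a combined tampering-plus-data-leak threat succeeds.
likelihood = or_gate([
    leaf(rate=0.5, duration=2.0),
    and_gate([leaf(0.2, 2.0), leaf(0.1, 2.0)]),
])
impact = 8.0                          # analyst-assigned impact score (assumption)
print("risk score:", likelihood * impact)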

2020-06-01
Halba, Khalid, Griffor, Edward, Kamongi, Patrick, Roth, Thomas.  2019.  Using Statistical Methods and Co-Simulation to Evaluate ADS-Equipped Vehicle Trustworthiness. 2019 Electric Vehicles International Conference (EV). :1–5.

With the increasing interest in studying Automated Driving System (ADS)-equipped vehicles through simulation, there is a growing need for comprehensive and agile middleware to provide novel Virtual Analysis (VA) functions for ADS-equipped vehicles, enabling a reliable representation for pre-deployment testing. The National Institute of Standards and Technology (NIST) Universal Cyber-physical systems Environment for Federation (UCEF) is such a VA environment. It provides Application Programming Interfaces (APIs) capable of ensuring synchronized interactions across multiple simulation platforms such as LabVIEW, OMNeT++, Ricardo IGNITE, and Internet of Things (IoT) platforms. UCEF can aid engineers and researchers in understanding the impact of different constraints associated with complex cyber-physical systems (CPS). In this work UCEF is used to produce a simulated Operational Design Domain (ODD) for ADS-equipped vehicles in which control (drive cycle/speed pattern), sensing (obstacle detection, traffic signs and lights), and threats (unusual signals, hacked sources) are represented as UCEF federates to simulate a drive cycle and to feed it to vehicle dynamics simulators (e.g. OpenModelica or Ricardo IGNITE) through the Functional Mock-up Interface (FMI). In this way we can subject the vehicle to a wide range of scenarios, collect data on the resulting interactions, and analyze those interactions using metrics to understand the impact on trustworthiness. Trustworthiness is defined here as in the NIST Framework for Cyber-Physical Systems, and comprises system reliability, resiliency, safety, security, and privacy. The goal of this work is to provide an example of an experimental design strategy using Fractional Factorial Design for statistically assessing the most important safety metrics in ADS-equipped vehicles.
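For illustration only, here is a minimal sketch of a two-level fractional factorial design of the kind mentioned in the last sentence: three base factors are run as a full factorial and two further factors are aliased through generators (D = A*B, E = A*C), reducing 2**5 = 32 runs to 8. The factor names and generators are assumptions, not the paper's actual design.

from itertools import product

base_factors = ["speed_pattern", "obstacle_density", "signal_noise"]   # hypothetical factors
runs = []
for a, b, c in product([-1, 1], repeat=3):          # full factorial on the base factors
    d = a * b                                       # generator D = A*B
    e = a * c                                       # generator E = A*C
    runs.append({"speed_pattern": a, "obstacle_density": b,
                 "signal_noise": c, "hacked_source": d, "sensor_fault": e})

for run in runs:                                    # 8 coded runs instead of 32
    print(run)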

2020-02-10
Zojaji, Sahba, Peters, Christopher.  2019.  Towards Virtual Agents for Supporting Appropriate Small Group Behaviors in Educational Contexts. 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games). :1–2.
Verbal and non-verbal behaviors that we use in order to effectively communicate with other people are vital for our success in our daily lives. Despite the importance of social skills, creating standardized methods for training them and supporting their training is challenging. Information and Communications Technology (ICT) may have a good potential to support social and emotional learning (SEL) through virtual social demonstration games. This paper presents initial work involving the design of a pedagogical scenario to facilitate teaching of socially appropriate and inappropriate behaviors when entering and standing in a small group of people, a common occurrence in collaborative social situations. This is achieved through the use of virtual characters and, initially, virtual reality (VR) environments for supporting situated learning in multiple contexts. We describe work done thus far on the demonstrator scenario and anticipated potentials, pitfalls and challenges involved in the approach.
Ruchkin, Vladimir, Fulin, Vladimir, Pikulin, Dmitry, Taganov, Aleksandr, Kolesenkov, Aleksandr, Ruchkina, Ekaterina.  2019.  Heterogenic Multi-Core System on Chip for Virtual Based Security. 2019 8th Mediterranean Conference on Embedded Computing (MECO). :1–5.
The paper describes the process of coding information in a heterogenic multi-core system on chip for virtual-based security, designed for image processing, signal processing and neural network emulation. The coding of information is carried out in assembly language according to the GOST standard. This is an implementation of GOST, a standard symmetric-key block cipher with a 64-bit block size and a 256-bit key size.
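For context, a minimal Python sketch of the GOST 28147-89 (Magma) structure referenced here: a 256-bit key split into eight 32-bit subkeys and 32 Feistel rounds of modular addition, 4-bit S-box substitution, and an 11-bit left rotation. The S-boxes below are placeholders (the standard defines specific sets), the output half ordering is simplified, and this is not the paper's assembly-language implementation.

MASK32 = 0xFFFFFFFF
SBOX = [list(range(16))] * 8                 # placeholder identity S-boxes (assumption)

def round_function(half, subkey):
    x = (half + subkey) & MASK32             # addition modulo 2**32
    y = 0
    for i in range(8):                       # eight 4-bit S-box substitutions
        y |= SBOX[i][(x >> (4 * i)) & 0xF] << (4 * i)
    return ((y << 11) | (y >> 21)) & MASK32  # rotate left by 11 bits

def encrypt_block(block, key):
    """block: 64-bit integer, key: 256-bit integer -> 64-bit ciphertext integer."""
    subkeys = [(key >> (32 * i)) & MASK32 for i in range(8)]
    schedule = subkeys * 3 + subkeys[::-1]   # rounds 1-24 use K0..K7 three times, 25-32 reversed
    n1, n2 = block & MASK32, (block >> 32) & MASK32
    for i, k in enumerate(schedule):
        if i < 31:
            n1, n2 = n2 ^ round_function(n1, k), n1   # Feistel swap on rounds 1-31
        else:
            n2 ^= round_function(n1, k)               # no swap on the final round
    return (n2 << 32) | n1                   # half ordering simplified in this sketch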