Virtual Worlds
Number of Followers: 33 | ISSN (Online): 2813-2084 | Published by MDPI
- Virtual Worlds, Vol. 3, Pages 256-269: A Virtual Reality Game-Based
Intervention to Enhance Stress Mindset and Performance among Firefighting
Trainees from the Singapore Civil Defence Force (SCDF)
Authors: Muhammad Akid Durrani Bin Imran, Cherie Shu Yun Goh, Nisha V, Meyammai Shanmugham, Hasan Kuddoos, Chen Huei Leo, Bina Rai
First page: 256
Abstract: This research paper investigates the effectiveness of a virtual reality (VR) game-based intervention using real-time biofeedback for stress management and performance among firefighting trainees from the Singapore Civil Defence Force (SCDF). Forty-seven trainees were enrolled in this study and randomly assigned to three groups: control, placebo, and intervention. The participants’ physiological responses, psychological responses, and training performances were evaluated at specific times over the standard 22-week training regimen. Participants from the control and placebo groups showed a similar overall perceived stress profile, with an initial increase in the early stages that was subsequently maintained over the remaining training period. Participants from the intervention group had a significantly lower level of perceived stress compared to the control and placebo groups, and their stress-is-enhancing mindset was significantly increased before the game in week 12 compared to week 3. Cortisol levels remained comparable between pre-game and post-game for the placebo group at week 12, but for the intervention group, cortisol levels were significantly lower post-game than pre-game. The biofeedback data, measured as the root mean square of successive differences (RMSSD) during gameplay, also increased significantly at week 12 compared to week 3. Notably, the intervention group showed a significant improvement over the control group in the final exercise assessment based on the participants’ role as duty officers. In conclusion, a VR game-based intervention with real-time biofeedback shows promise as an engaging and effective way of training firefighting trainees to enhance their stress mindset and reduce their perceived stress, which may enable them to perform better in the daily emergencies that they respond to.
Citation: Virtual Worlds
PubDate: 2024-07-01
DOI: 10.3390/virtualworlds3030013
Issue No: Vol. 3, No. 3 (2024)
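For readers unfamiliar with the biofeedback metric named above, RMSSD is computed from successive inter-beat (RR) intervals; higher values are generally associated with greater parasympathetic (relaxation) activity. The following minimal Python sketch shows the standard calculation; it is illustrative only and is not the study's analysis code.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (in ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                  # successive differences between beats
    return np.sqrt(np.mean(diffs ** 2))

# Example: RR intervals (ms) recorded during a gameplay window (made-up values)
print(rmssd([812, 790, 805, 821, 798, 810]))
```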
- Virtual Worlds, Vol. 3, Pages 270-282: Geometric Fidelity Requirements for
Meshes in Automotive Lidar Simulation
Authors: Christopher Goodin, Marc N. Moore, Daniel W. Carruth, Zachary Aspin, John Kaniarz
First page: 270
Abstract: The perception of vegetation is a critical aspect of off-road autonomous navigation, and consequently a critical aspect of the simulation of autonomous ground vehicles (AGVs). Representing vegetation with triangular meshes requires detailed geometric modeling that captures the intricacies of small branches and leaves. In this work, we propose to answer the question, “What degree of geometric fidelity is required to realistically simulate lidar in AGV simulations?” To answer it, we present an analysis that determines the required geometric fidelity of digital scenes and assets used in the simulation of AGVs. Focusing on vegetation, we use a comparison of the real and simulated perceived distribution of leaf orientation angles in lidar point clouds to determine the number of triangles required to reliably reproduce realistic results. By comparing real lidar scans of vegetation to simulated lidar scans of vegetation with a variety of geometric fidelities, we find that digital tree models (meshes) need to have a minimum triangle density of >1600 triangles per cubic meter in order to accurately reproduce the geometric properties of lidar scans of real vegetation, with a recommended triangle density of 11,000 triangles per cubic meter for best performance. Furthermore, by comparing these experiments to past work investigating the same question for cameras, we develop a general “rule-of-thumb” for vegetation mesh fidelity in AGV sensor simulation.
Citation: Virtual Worlds
PubDate: 2024-07-03
DOI: 10.3390/virtualworlds3030014
Issue No: Vol. 3, No. 3 (2024)
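The density thresholds reported above translate directly into a simple mesh check: count the triangles of a digital tree model and divide by the volume it occupies. The sketch below uses the axis-aligned bounding box as a stand-in for that volume and a randomly generated placeholder mesh; both are assumptions for illustration, not the paper's method of computing density.

```python
import numpy as np

def triangle_density(vertices, faces):
    """Triangles per cubic meter of the mesh's axis-aligned bounding box.

    vertices: (N, 3) array of vertex positions in meters
    faces:    (M, 3) array of triangle vertex indices
    """
    extents = vertices.max(axis=0) - vertices.min(axis=0)
    volume = float(np.prod(extents))       # bounding-box volume in m^3
    return len(faces) / volume

MIN_DENSITY = 1_600    # triangles/m^3, minimum reported for realistic lidar returns
RECOMMENDED = 11_000   # triangles/m^3, recommended for best performance

# Placeholder mesh standing in for a real digital tree model
rng = np.random.default_rng(0)
tree_vertices = rng.random((3_000, 3)) * 2.0            # vertices inside a 2 m cube
tree_faces = rng.integers(0, 3_000, size=(5_000, 3))    # 5,000 random triangles

density = triangle_density(tree_vertices, tree_faces)
verdict = "OK" if density >= MIN_DENSITY else "too coarse for lidar simulation"
print(f"{density:.0f} tri/m^3 -> {verdict}")
```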
- Virtual Worlds, Vol. 3, Pages 283-302: Challenges and Opportunities of
Using Metaverse Tools for Participatory Architectural Design Processes
Authors: Provides Ng, Sara Eloy, Micaela Raposo, Alberto Fernández González, Nuno Pereira da Silva, Marcos Figueiredo, Hira Zuberi
First page: 283
Abstract: Participatory design emerges as a proactive approach involving different stakeholders in design and decision-making processes, addressing diverse values and ensuring outcomes align with users’ needs. However, the inadequacy of engaging stakeholders with a spatial experience can result in uninformed and, consequently, unsuccessful design solutions in a built environment. This paper explores how metaverse tools can help enhance participatory design by providing new collaborative opportunities via networked 3D environments. A hybrid format (online and in situ) co-creation process was documented and analysed, targeting public space design in London, Hong Kong, and Lisbon. The participants collaborated to address a set of design requirements via a tailored metaverse space, following a six-step methodology (Tour, Discuss, Rate, Define, Action, and Show and Tell). The preliminary results indicated that non-immersive metaverse tools help strengthen spatial collaboration through user perspective simulations, introducing novel interaction possibilities within design processes. The technology’s remaining technical limitations may be tackled with careful engagement design, iterative reviews, and participants’ feedback. The experience documented prompts a reflection on the role of architects in process design and mediating multi-stakeholder collaboration, contributing to more inclusive, intuitive, and informed co-creation.
Citation: Virtual Worlds
PubDate: 2024-07-10
DOI: 10.3390/virtualworlds3030015
Issue No: Vol. 3, No. 3 (2024)
- Virtual Worlds, Vol. 3, Pages 303-318: Leveraging Virtual Reality for the
Visualization of Non-Observable Electrical Circuit Principles in
Engineering Education
Authors: Elliott Wolbach, Michael Hempel, Hamid Sharif
First page: 303
Abstract: As technology advances, the field of electrical and computer engineering continuously demands innovative tools and methodologies to facilitate the effective learning and comprehension of fundamental concepts. This research addresses an identified gap in technology-augmented education capabilities and investigates the integration of virtual reality (VR) technology with real-time electronic circuit simulation to enable and enhance the visualization of non-observable concepts such as voltage distribution and current flow within these circuits. In this paper, we describe the development of our immersive educational platform, which makes understanding these abstract concepts intuitive and engaging. This research also involves the design and development of a VR-based circuit simulation environment. By leveraging VR’s immersive capabilities, our system enables users to physically interact with electronic components, observe the flow of electrical signals, and manipulate circuit parameters in real time. Through this immersive experience, learners can gain a deeper understanding of fundamental electronic principles, transcending the limitations of traditional two-dimensional diagrams and equations. Furthermore, this research focuses on the implementation of advanced and novel visualization techniques within the VR environment for non-observable electrical and electromagnetic properties, providing users with a clearer and more intuitive understanding of electrical circuit concepts. Examples include color-coded pathways for current flow and dynamic voltage gradient visualization. Additionally, real-time data representation and graphical overlays are researched and integrated to offer users insights into the dynamic behavior of circuits, allowing for better analysis and troubleshooting.
Citation: Virtual Worlds
PubDate: 2024-08-02
DOI: 10.3390/virtualworlds3030016
Issue No: Vol. 3, No. 3 (2024)
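As a rough illustration of the "color-coded pathways for current flow" mentioned above, the sketch below maps a wire's current magnitude onto a blue-to-red gradient that could drive the tint of a wire mesh in the VR scene. The mapping and the maximum-current parameter are illustrative assumptions, not the authors' implementation.

```python
def current_to_color(current_a, max_current_a=1.0):
    """Map a current magnitude (amps) to an (R, G, B) tuple in [0, 1].

    0 A renders blue (low flow); max_current_a or above renders red (high flow).
    """
    t = min(abs(current_a) / max_current_a, 1.0)   # normalized magnitude
    return (t, 0.0, 1.0 - t)

# Example: tint three wires carrying different currents
for i_wire in (0.05, 0.4, 0.95):
    print(f"{i_wire:.2f} A -> {current_to_color(i_wire)}")
```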
- Virtual Worlds, Vol. 3, Pages 157-170: APIs in the Metaverse—A
Systematic Evaluation
Authors: Marius Traub, Markus Weinberger
First page: 157
Abstract: One of the most critical challenges for the success of the Metaverse is interoperability amongst its virtual platforms and worlds. In this context, application programming interfaces (APIs) are essential. This study analyzes a sample of 15 Metaverse platforms. In the first step, the availability of publicly accessible APIs was examined. For those platforms offering an API, i.e., Decentraland, Second Life, Voxels, Roblox, Axie Infinity, Upland, and VRChat, the available API contents were collected, analyzed, and presented in the paper. The results show that only a few Metaverse platforms offer APIs at all. In addition, the available APIs are very diverse and heterogeneous. Information is somewhat fragmented, requiring access to several APIs to compile a comprehensive data set. Thus, standardized APIs will enable better interoperability and foster a more seamless and immersive user experience in the Metaverse.
Citation: Virtual Worlds
PubDate: 2024-04-08
DOI: 10.3390/virtualworlds3020008
Issue No: Vol. 3, No. 2 (2024)
- Virtual Worlds, Vol. 3, Pages 171-183: Story Starter: A Tool for
Controlling Multiple Virtual Reality Headsets with No Active Internet
Connection
Authors: Andy T. Woods, Laryssa Whittaker, Neil Smith, Robert Ispas, Jackson Moore, Roderick D. Morgan, James Bennett
First page: 171
Abstract: Immersive events are becoming increasingly popular, allowing multiple people to experience a range of VR content simultaneously. Onboarders assist attendees with these VR experiences. Controlling VR headsets for others without physically having to put them on first is an important requirement here, as it streamlines the onboarding process and maximizes the number of viewers. Current off-the-shelf solutions require headsets to be connected to a cloud-based app via an active internet connection, which can be problematic in some locations. To address this challenge, we present Story Starter, a solution that enables the control of VR headsets without an active internet connection. Story Starter can start, stop, and install VR experiences, adjust device volume, and display information such as remaining battery life. We developed Story Starter in response to the UK-wide StoryTrails tour in the summer of 2022, which was held across 15 locations and attracted thousands of attendees who experienced a range of immersive content, including six VR experiences. Story Starter helped streamline the onboarding process by allowing onboarders to avoid putting the headset on themselves to complete routine tasks such as selecting and starting experiences, thereby minimizing COVID risks. Another benefit of not needing an active internet connection was that our headsets did not automatically update at inconvenient times, which we have found can sometimes break experiences. Converging evidence suggests that Story Starter was well received and reliable. However, we also acknowledge some limitations of the solution and discuss several next steps we are considering.
Citation: Virtual Worlds
PubDate: 2024-04-08
DOI: 10.3390/virtualworlds3020009
Issue No: Vol. 3, No. 2 (2024)
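The abstract does not say how Story Starter delivers commands to the headsets; one plausible way to control devices without any internet connection is to broadcast commands over the local network. The sketch below illustrates only that general idea, with a hypothetical JSON command format that is not Story Starter's actual protocol.

```python
import json
import socket

# Hypothetical command format -- not Story Starter's real message schema.
command = {"action": "start_experience", "experience_id": "story-03", "volume": 70}

# Broadcast on the local network; no internet connection is required.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(json.dumps(command).encode("utf-8"), ("255.255.255.255", 5005))
sock.close()
```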
- Virtual Worlds, Vol. 3, Pages 184-207: Tactile Speech Communication:
Reception of Words and Two-Way Messages through a Phoneme-Based Display
Authors: Jaehong Jung, Charlotte M. Reed, Juan S. Martinez, Hong Z. Tan
First page: 184
Abstract: The long-term goal of this research is the development of a stand-alone tactile device for the communication of speech for persons with profound sensory deficits, as well as for applications for persons with intact hearing and vision. Studies were conducted with a phoneme-based tactile display of speech consisting of a 4-by-6 array of tactors worn on the dorsal and ventral surfaces of the forearm. Unique tactile signals were assigned to the 39 English phonemes. Study I consisted of training and testing on the identification of 4-phoneme words. Performance on a trained set of 100 words averaged 87% across the three participants and generalized well to a novel set of words (77%). Study II consisted of two-way messaging between two users of TAPS (TActile Phonemic Sleeve) for 13 h over 45 days. The participants conversed with each other by inputting text that was translated into tactile phonemes sent over the device. Messages were identified with an accuracy of 73%, and 82% of the words were received correctly. Although rates of communication were slow (roughly 1 message per minute), the results obtained with this ecologically valid procedure represent progress toward the goal of a stand-alone tactile device for speech communication.
Citation: Virtual Worlds
PubDate: 2024-05-07
DOI: 10.3390/virtualworlds3020010
Issue No: Vol. 3, No. 2 (2024)
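A phoneme-based display like the one above needs a lookup from each English phoneme to an activation pattern on the 4-by-6 tactor array. The toy sketch below shows the shape of such a lookup; the patterns themselves are invented placeholders and are not the actual TAPS phoneme codes.

```python
# 4 rows x 6 columns of tactors on the forearm; a pattern is a set of (row, col) indices.
# These example patterns are placeholders -- the real TAPS codes differ.
PHONEME_PATTERNS = {
    "K":  {(1, 3)},
    "AE": {(0, 0), (0, 1)},
    "T":  {(3, 5)},
}

def word_to_tactor_sequence(phonemes, frame_ms=200):
    """Translate a phoneme sequence into timed tactor activation frames."""
    return [(PHONEME_PATTERNS[p], frame_ms) for p in phonemes]

print(word_to_tactor_sequence(["K", "AE", "T"]))   # the word "cat"
```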
- Virtual Worlds, Vol. 3, Pages 208-229: An Augmented Reality Application
for Wound Management: Enhancing Nurses’ Autonomy, Competence and
Connectedness
Authors: Carina Albrecht-Gansohr, Lara Timm, Sabrina C. Eimler, Stefan Geisler
First page: 208
Abstract: The use of Augmented Reality glasses opens up many possibilities in hospital care, as they facilitate treatments and their documentation. In this paper, we present a prototype for the HoloLens 2 supporting wound care and documentation. It was developed in a participatory process with nurses using the positive computing paradigm, with a focus on improving the working conditions of nursing staff. In a qualitative study with 14 participants, the factors of autonomy, competence, and connectedness were examined in particular. It was shown that good individual adaptability and flexibility of the system with respect to the work task and personal preferences lead to a high degree of autonomy. The availability of the right information at the right time strengthens the feeling of competence. On the one hand, the connection to patients is increased by the additional information in the glasses, but on the other hand, it is hindered by the unusual appearance of the device and the lack of eye contact. In summary, the potential of Augmented Reality glasses in care was confirmed and approaches for a well-being-centered system design were identified, but a number of future research questions, including the effects on patients, remain open.
Citation: Virtual Worlds
PubDate: 2024-06-03
DOI: 10.3390/virtualworlds3020011
Issue No: Vol. 3, No. 2 (2024)
- Virtual Worlds, Vol. 3, Pages 230-255: Exploring Dynamic Difficulty
Adjustment Methods for Video Games
Authors: Nicholas Fisher, Arun K. Kulshreshth
First page: 230
Abstract: Maintaining player engagement is pivotal for video game success, yet achieving the optimal difficulty level that adapts to diverse player skills remains a significant challenge. Initial difficulty settings in games often fail to accommodate the evolving abilities of players, necessitating adaptive difficulty mechanisms to keep the gaming experience engaging. This study introduces a custom first-person-shooter (FPS) game to explore Dynamic Difficulty Adjustment (DDA) techniques, leveraging both performance metrics and emotional responses gathered from physiological sensors. Through a within-subjects experiment involving casual and experienced gamers, we scrutinized the effects of various DDA methods on player performance and self-reported game perceptions. Contrary to expectations, our research did not identify a singular, most effective DDA strategy. Instead, findings suggest a complex landscape where no one approach—be it performance-based, emotion-based, or a hybrid—demonstrably surpasses static difficulty settings in enhancing player engagement or game experience. Noteworthy is the data’s alignment with Flow Theory, suggesting potential for the Emotion DDA technique to foster engagement by matching challenges to player skill levels. However, the overall modest impact of DDA on performance metrics and emotional responses highlights the intricate challenge of designing adaptive difficulty that resonates with both the mechanical and emotional facets of gameplay. Our investigation contributes to the broader dialogue on adaptive game design, emphasizing the need for further research to refine DDA approaches. By advancing our understanding and methodologies, especially in emotion recognition, we aim to develop more sophisticated DDA strategies. These strategies aspire to dynamically align game challenges with individual player states, making games more accessible, engaging, and enjoyable for a wider audience.
Citation: Virtual Worlds
PubDate: 2024-06-07
DOI: 10.3390/virtualworlds3020012
Issue No: Vol. 3, No. 2 (2024)
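For context on the techniques compared above, a performance-based DDA loop typically nudges a difficulty parameter whenever recent player performance drifts outside a target band. The sketch below is a generic illustration of that idea with made-up parameter values; it is not the study's implementation and omits the emotion-based signal entirely.

```python
def adjust_difficulty(difficulty, hit_ratio, target=0.5, band=0.1, step=0.05):
    """Performance-based DDA: keep the player's recent hit ratio near a target.

    difficulty is clamped to [0, 1]; hit_ratio could be shots landed / shots fired
    over a sliding window in an FPS game.
    """
    if hit_ratio > target + band:        # player doing too well -> make it harder
        difficulty += step
    elif hit_ratio < target - band:      # player struggling -> make it easier
        difficulty -= step
    return max(0.0, min(1.0, difficulty))

# Example: one update per evaluation window
difficulty = 0.5
for window_hit_ratio in (0.72, 0.65, 0.48, 0.31):
    difficulty = adjust_difficulty(difficulty, window_hit_ratio)
    print(round(difficulty, 2))
```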
- Virtual Worlds, Vol. 3, Pages 21-39: Evaluating the Effect of Outfit on
Personality Perception in Virtual Characters
Authors: Yanbo Cheng, Yingying Wang
First page: 21
Abstract: Designing virtual characters that are capable of reflecting a sense of personality is a key goal in research and applications in virtual reality and computer graphics. More and more research efforts are dedicated to investigating approaches to construct a diverse, equitable, and inclusive metaverse by infusing expressive personalities and styles into virtual avatars. While most previous work focused on exploring variations in virtual characters’ dynamic behaviors, characters’ visual appearance plays a crucial role in affecting their perceived personalities. This paper presents a series of experiments evaluating the effect of virtual characters’ outfits on their perceived personality. Based on the related psychology research conducted in the real world, we determined a set of outfit factors likely to reflect personality in virtual characters: color, design, and type. As a framework for our study, we used the “Big Five” personality model for evaluating personality traits. To test our hypothesis, we conducted three perceptual experiments to evaluate the outfit parameters’ contributions to the characters’ personality. In our first experiment, we studied the color factor by varying color hue, saturation, and value; in the second experiment, we evaluated the impact of different neckline, waistline, and sleeve designs; and in our third experiment, we examined the personality perception of five outfit types: professional, casual, fashionable, outdoor, and indoor. Significant results offer guidance to avatar designers on how to create virtual characters with specific personality profiles. We further conducted a verification test to extend the application of our findings to animated virtual characters in augmented reality (AR) and virtual reality (VR) settings. Results confirmed that our findings can be broadly applied to both static and animated virtual characters in VR and AR environments that are commonly used in games, entertainment, and social networking scenarios.
Citation: Virtual Worlds
PubDate: 2024-01-04
DOI: 10.3390/virtualworlds3010002
Issue No: Vol. 3, No. 1 (2024)
- Virtual Worlds, Vol. 3, Pages 40-61: Speech Intelligibility versus
Congruency: User Preferences of the Acoustics of Virtual Reality Game
Spaces
Authors: Constantin Popp, Damian T. Murphy
First page: 40
Abstract: 3D audio spatializers for Virtual Reality (VR) can use the acoustic properties of the surfaces of a visualised game space to calculate a matching reverb. However, this approach could lead to reverbs that impair the tasks performed in such a space, such as listening to speech-based audio. Sound designers would then have to alter the room’s acoustic properties independently of its visualisation to improve speech intelligibility, causing audio-visual incongruency. As user expectation of simulated room acoustics regarding speech intelligibility in VR has not been studied, this study asked participants to rate the congruency of reverbs and their visualisations in 6-DoF VR while listening to speech-based audio. The participants compared unaltered, matching reverbs with sound-designed, mismatching reverbs. The latter featured improved D50s and reduced RT60s at the cost of lower audio-visual congruency. Results suggest participants preferred the improved reverbs only when the unaltered reverbs had comparatively low D50s or excessive ringing. Otherwise, too-dry or too-reverberant reverbs were disliked. The range of expected RT60s depended on the surface visualisation. Differences in timbre between the reverbs may not affect preferences as strongly as shorter RT60s. Therefore, sound designers can intervene and prioritise speech intelligibility over audio-visual congruency in acoustically challenging game spaces.
Citation: Virtual Worlds
PubDate: 2024-01-19
DOI: 10.3390/virtualworlds3010003
Issue No: Vol. 3, No. 1 (2024)
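As background for the two metrics above: D50 is the fraction of impulse-response energy arriving within the first 50 ms, and RT60 is commonly estimated by fitting the Schroeder backward-integrated decay curve and extrapolating to -60 dB. The NumPy sketch below shows these standard calculations on a synthetic room impulse response; it is not the authors' analysis code.

```python
import numpy as np

def d50(ir, fs):
    """Definition D50: early (0-50 ms) energy over total energy of the impulse response."""
    n50 = int(0.050 * fs)
    energy = ir.astype(float) ** 2
    return energy[:n50].sum() / energy.sum()

def rt60_from_schroeder(ir, fs):
    """T20-style RT60 estimate: fit the -5 dB to -25 dB span of the Schroeder
    decay curve and extrapolate the slope to -60 dB."""
    energy = ir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]              # backward integration
    edc_db = 10.0 * np.log10(edc / edc.max())
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # decay rate in dB/s
    return -60.0 / slope

# Synthetic exponentially decaying impulse response (~0.6 s reverberation)
fs = 48_000
t = np.arange(int(1.5 * fs)) / fs
ir = np.exp(-6.9 * t / 0.6) * np.random.default_rng(0).standard_normal(t.size)
print(f"D50 = {d50(ir, fs):.2f}, RT60 = {rt60_from_schroeder(ir, fs):.2f} s")
```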
- Virtual Worlds, Vol. 3, Pages 62-93: Cybersickness in Virtual Reality: The
Role of Individual Differences, Its Effects on Cognitive Functions and
Motor Skills, and Intensity Differences during and after Immersion
Authors: Panagiotis Kourtesis, Agapi Papadopoulou, Petros Roussos
First page: 62
Abstract: Background: Given that VR is used in multiple domains, understanding the effects of cybersickness on human cognition and motor skills, and the factors contributing to cybersickness, is becoming increasingly important. This study aimed to explore the predictors of cybersickness and its interplay with cognitive and motor skills. Methods: 30 participants, 20–45 years old, completed the MSSQ and the CSQ-VR, and were immersed in VR. During immersion, they were exposed to a roller coaster ride. Before and after the ride, participants responded to the CSQ-VR and performed VR-based cognitive and psychomotor tasks. After the VR session, participants completed the CSQ-VR again. Results: Motion sickness susceptibility during adulthood was the most prominent predictor of cybersickness. Pupil dilation emerged as a significant predictor of cybersickness. Experience with videogaming was a significant predictor of cybersickness and cognitive/motor functions. Cybersickness negatively affected visuospatial working memory and psychomotor skills. Overall, the intensity of cybersickness’s nausea and vestibular symptoms significantly decreased after removing the VR headset. Conclusions: In order of importance, motion sickness susceptibility and gaming experience are significant predictors of cybersickness. Pupil dilation appears to be a cybersickness biomarker. Cybersickness affects visuospatial working memory and psychomotor skills. Concerning user experience, cybersickness and its effects on performance should be examined during, and not after, immersion.
Citation: Virtual Worlds
PubDate: 2024-02-02
DOI: 10.3390/virtualworlds3010004
Issue No: Vol. 3, No. 1 (2024)
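To make the "predictors of cybersickness" analysis concrete, a minimal version is an ordinary least-squares regression of a cybersickness score on susceptibility, gaming experience, and pupil dilation. The sketch below uses invented placeholder values and column choices; it is not the study's dataset or statistical model.

```python
import numpy as np

# Placeholder predictors per participant: [MSSQ score, weekly gaming hours,
# pupil dilation change (mm)] -- all values invented for illustration.
X = np.array([
    [12.0,  2.0, 0.35],
    [30.5,  0.5, 0.60],
    [ 8.0, 10.0, 0.20],
    [22.0,  4.0, 0.50],
    [18.0,  6.0, 0.40],
])
y = np.array([9.0, 21.0, 5.0, 15.0, 12.0])   # placeholder CSQ-VR scores

# Ordinary least squares with an intercept column
X1 = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)
intercept, b_mssq, b_gaming, b_pupil = coeffs
print(f"MSSQ: {b_mssq:.2f}, gaming: {b_gaming:.2f}, pupil: {b_pupil:.2f}")
```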
- Virtual Worlds, Vol. 3, Pages 94-114: Comparing and Contrasting
Near-Field, Object Space, and a Novel Hybrid Interaction Technique for
Distant Object Manipulation in VR
Authors: Wei-An Hsieh, Hsin-Yi Chien, David Brickler, Sabarish V. Babu, Jung-Hong Chuang
First page: 94
Abstract: In this contribution, we propose a hybrid interaction technique that integrates near-field and object-space interaction techniques for manipulating objects at a distance in virtual reality (VR). The objective of the hybrid interaction technique was to seamlessly leverage the strengths of both the near-field and object-space manipulation techniques. We employed the bimanual near-field metaphor with scaled replica (BMSR) as our near-field interaction technique, which enabled us to perform multilevel degrees-of-freedom (DoF) separation transformations, such as 1–3 DoF translation, 1–3 DoF uniform and anchored scaling, 1 DoF and 3 DoF rotation, and 6 DoF simultaneous translation and rotation, with the enhanced depth perception and fine motor control provided by near-field manipulation techniques. The object-space interaction technique we utilized was the classic Scaled HOMER, which is known to be effective and appropriate for coarse transformations in distant object manipulation. In a repeated measures within-subjects evaluation, we empirically evaluated the three interaction techniques for their accuracy, efficiency, and economy of movement in pick-and-place, docking, and tunneling tasks in VR. Our findings revealed that the near-field BMSR technique outperformed the object-space Scaled HOMER technique in terms of accuracy and economy of movement, but the participants performed more slowly overall with BMSR. Additionally, our results revealed that the participants preferred to use the hybrid interaction technique, as it allowed them to switch and transition seamlessly between the constituent BMSR and Scaled HOMER interaction techniques, depending on the level of accuracy, precision, and efficiency required.
Citation: Virtual Worlds
PubDate: 2024-02-21
DOI: 10.3390/virtualworlds3010005
Issue No: Vol. 3, No. 1 (2024)
- Virtual Worlds, Vol. 3, Pages 115-134: Real-Time Diminished Reality
Application Specifying Target Based on 3D Region
Authors: Kaito Kobayashi, Masanobu Takahashi
First page: 115
Abstract: Diminished reality (DR) is a technology in which a background image is overwritten on a real object to make it appear as if the object has been removed from real space. This paper presents a real-time DR application that employs deep learning. The application can remove objects inside a 3D region defined by the user in images captured using a smartphone. By specifying the 3D region containing the target object to be removed, DR can be realized for targets with various shapes and sizes, and the specified target can be removed even if the viewpoint changes. To achieve fast and accurate DR, a suitable network was selected based on experimental results. Additionally, the loss function used during training was improved to enhance completion accuracy. Finally, the operation of the DR application at 10 fps was verified using a smartphone and a laptop computer.
Citation: Virtual Worlds
PubDate: 2024-03-04
DOI: 10.3390/virtualworlds3010006
Issue No: Vol. 3, No. 1 (2024)
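The abstract does not detail how the user-specified 3D region becomes an image mask for the completion network; a standard approach is to project the region's corners through the camera intrinsics and mask their 2D extent. The sketch below illustrates that projection step only, with an assumed pinhole camera and an illustrative box, not the paper's implementation.

```python
import numpy as np

def box_to_mask(box_corners_cam, K, image_shape):
    """Project a 3D box (8 corners in camera coordinates, meters) into the image
    and return a binary mask over its 2D bounding rectangle. The masked pixels
    would then be filled in by an inpainting (completion) network."""
    pts = box_corners_cam @ K.T               # homogeneous image coordinates
    uv = pts[:, :2] / pts[:, 2:3]             # perspective divide
    h, w = image_shape
    u_min, v_min = np.clip(uv.min(axis=0), 0, [w - 1, h - 1]).astype(int)
    u_max, v_max = np.clip(uv.max(axis=0), 0, [w - 1, h - 1]).astype(int)
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[v_min:v_max + 1, u_min:u_max + 1] = 1
    return mask

# Example: a 0.5 m cube centered 2 m in front of a camera with 800 px focal length
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
corners = np.array([[x, y, z] for x in (-0.25, 0.25)
                    for y in (-0.25, 0.25) for z in (1.75, 2.25)])
print(box_to_mask(corners, K, (480, 640)).sum(), "masked pixels")
```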
- Virtual Worlds, Vol. 3, Pages 135-156: Motion Capture in Mixed-Reality
Applications: A Deep Denoising Approach
Authors: André Correia Gonçalves, Rui Jesus, Pedro Mendes Jorge
First page: 135
Abstract: Motion capture is a fundamental technique in the development of video games and in film production to animate a virtual character based on the movements of an actor, creating more realistic animations in a short amount of time. One way to obtain this movement is to capture the motion of the player through an optical sensor, allowing them to interact with the virtual world. However, during movement, some parts of the human body can be occluded by others, and noise can arise from difficulties in sensor capture, reducing the user experience. This work presents a solution that corrects motion capture errors from the Microsoft Kinect sensor (or similar) using a deep neural network (DNN) trained with a pre-processed dataset of poses offered by the Carnegie Mellon University (CMU) Graphics Lab. A temporal filter is implemented to smooth the movement, given by a set of poses returned by the deep neural network. The system is implemented in Python with the TensorFlow application programming interface (API), which supports the machine learning techniques, and uses the Unity game engine to visualize and interact with the obtained skeletons. The results are evaluated using the mean absolute error (MAE) metric where ground truth is available and with the feedback of 12 participants through a questionnaire for the Kinect data.
Citation: Virtual Worlds
PubDate: 2024-03-11
DOI: 10.3390/virtualworlds3010007
Issue No: Vol. 3, No. 1 (2024)
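Since the abstract does not specify the temporal filter, the sketch below assumes a simple exponential moving average over the per-joint positions returned by the network, together with the MAE metric used for evaluation. Array shapes and parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def smooth_poses(poses, alpha=0.3):
    """Exponential moving average over time; poses has shape (frames, joints, 3)."""
    smoothed = np.empty_like(poses, dtype=float)
    smoothed[0] = poses[0]
    for t in range(1, len(poses)):
        smoothed[t] = alpha * poses[t] + (1.0 - alpha) * smoothed[t - 1]
    return smoothed

def mae(predicted, ground_truth):
    """Mean absolute error over all frames, joints, and coordinates."""
    return float(np.mean(np.abs(predicted - ground_truth)))

# Example with synthetic noisy joint trajectories (30 frames, 20 joints)
rng = np.random.default_rng(0)
clean = np.cumsum(rng.normal(0, 0.01, (30, 20, 3)), axis=0)
noisy = clean + rng.normal(0, 0.05, clean.shape)
print(mae(noisy, clean), mae(smooth_poses(noisy), clean))
```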