Journal Description
Multimodal Technologies and Interaction
Multimodal Technologies and Interaction is an international, peer-reviewed, open access journal on multimodal technologies and interaction published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Science Applications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14 days after submission; the time from acceptance to publication is 3.8 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.5 (2022)
Latest Articles
Show-and-Tell: An Interface for Delivering Rich Feedback upon Creative Media Artefacts
Multimodal Technol. Interact. 2024, 8(3), 23; https://doi.org/10.3390/mti8030023 - 14 Mar 2024
Abstract
In this paper, we explore an approach to feedback which could allow those learning creative digital media practices in remote and asynchronous environments to receive rich, multi-modal, and interactive feedback upon their creative artefacts. We propose the show-and-tell feedback interface, which couples graphical user interface changes (the show) to text-based explanations (the tell). We describe the rationale behind the design and offer a tentative set of design criteria. We report the implementation and deployment into a real-world educational setting using a prototype interface developed to allow either traditional text-only feedback or our proposed show-and-tell feedback across four sessions. The prototype was used to provide formative feedback upon music students’ coursework, resulting in a total of 103 pieces of feedback. Thematic analysis was used to analyse the data obtained through interviews and focus groups with both educators and students (i.e., feedback givers and receivers). Recipients considered show-and-tell feedback to possess greater clarity and detail in comparison with the single-modality text-only feedback they are used to receiving. We also report interesting emergent issues around control and artistic vision, and we discuss how these issues could be mitigated in future iterations of the interface.
Full article
Open Access Article
Do Not Freak Me Out! The Impact of Lip Movement and Appearance on Knowledge Gain and Confidence
by Amal Abdulrahman, Katherine Hopman and Deborah Richards
Multimodal Technol. Interact. 2024, 8(3), 22; https://doi.org/10.3390/mti8030022 - 05 Mar 2024
Abstract
Virtual agents (VAs) have been used effectively for psychoeducation. However, getting the VA’s design right is critical to ensure the user experience does not become a barrier to receiving and responding to the intended message. The study reported in this paper seeks to help first-year psychology students to develop knowledge and confidence to recommend emotion regulation strategies. In previous work, we received negative feedback concerning the VA’s lip-syncing, including creepiness and visual overload, in the case of stroke patients. We seek to test the impact of the removal of lip-syncing on the perception of the VA and its ability to achieve its intended outcomes, also considering the influence of the visual features of the avatar. We used a 2 (lip-sync/no lip-sync) × 2 (human-like/cartoon-like) experimental design and measured participants’ perception of the VA in terms of eeriness, user experience, knowledge gain and their confidence to practice their knowledge. While participants showed a tendency to prefer the cartoon look over the human look and the absence of lip-syncing over its presence, all groups reported no significant increase in knowledge but significant increases in confidence in their knowledge and ability to recommend the learnt strategies to others. We conclude that realism and lip-syncing did not influence the intended outcomes. Thus, in future designs, we will allow the user to switch off the lip-sync function if they prefer. Further, our findings suggest that lip-syncing should not be a standard animation included with VAs, as is currently the case.
Full article
Open Access Article
Accessible Metaverse: A Theoretical Framework for Accessibility and Inclusion in the Metaverse
by Achraf Othman, Khansa Chemnad, Aboul Ella Hassanien, Ahmed Tlili, Christina Yan Zhang, Dena Al-Thani, Fahriye Altınay, Hajer Chalghoumi, Hend S. Al-Khalifa, Maisa Obeid, Mohamed Jemni, Tawfik Al-Hadhrami and Zehra Altınay
Multimodal Technol. Interact. 2024, 8(3), 21; https://doi.org/10.3390/mti8030021 - 01 Mar 2024
Abstract
The following article investigates the Metaverse and its potential to bolster digital accessibility for persons with disabilities. Through qualitative analysis, we examine responses from eleven experts in digital accessibility, Metaverse development, disability advocacy, and policy formulation. This exploration uncovers key insights into the Metaverse’s current state, its inherent principles, and the challenges and opportunities it presents in terms of accessibility. The findings reveal a mixed state of inclusivity within the Metaverse, highlighting significant advancements along with notable gaps, especially in integrating assistive technologies and ensuring interoperability across different virtual environments. This study emphasizes the Metaverse’s potential to revolutionize experiences for individuals with disabilities, provided that accessibility is embedded in its foundational design. Ethical and legal considerations, such as privacy, non-discrimination, and evolving legal frameworks, are identified as critical factors that shape an inclusive Metaverse. We propose a comprehensive framework that emphasizes technological adaptation and innovation, user-centric design, universal access, social and economic considerations, and global standards. This framework aims to guide future research and policy interventions to foster an inclusive digital environment in the Metaverse. This paper contributes to the emerging discourse on the Metaverse and digital accessibility, offering a nuanced understanding of its complexities and a roadmap for future exploration and development. This underscores the necessity of a multi-faceted approach that incorporates technological innovation, user-centered design, ethical considerations, legal compliance, and continuous research to create an inclusive and accessible Metaverse.
Full article
(This article belongs to the Special Issue Designing an Inclusive and Accessible Metaverse)
Open Access Article
Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System
by Setareh Zafari, Jesse de Pagter, Guglielmo Papagni, Alischa Rosenstein, Michael Filzmoser and Sabine T. Koeszegi
Multimodal Technol. Interact. 2024, 8(3), 20; https://doi.org/10.3390/mti8030020 - 01 Mar 2024
Abstract
This article reports on a longitudinal experiment in which the influence of an assistive system’s malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system’s personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2 × 2 mixed design, the system’s malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measure variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.
Full article
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving - 2nd Edition)
Open Access Article
Enhancing Calculus Learning through Interactive VR and AR Technologies: A Study on Immersive Educational Tools
by Logan Pinter and Mohammad Faridul Haque Siddiqui
Multimodal Technol. Interact. 2024, 8(3), 19; https://doi.org/10.3390/mti8030019 - 01 Mar 2024
Abstract
In the realm of collegiate education, calculus can be quite challenging for students. Many students struggle to visualize abstract concepts, as mathematics often moves into strict arithmetic rather than geometric understanding. Our study presents an innovative solution to this problem: an immersive, interactive VR graphing tool capable of standard 2D graphs, solids of revolution, and a series of visualizations deemed potentially useful to struggling students. This tool was developed within the Unity 3D engine, and while interaction and expression parsing rely on existing libraries, core functionalities were developed independently. As a pilot study, it includes qualitative information from a survey of students currently or previously enrolled in Calculus II/III courses, revealing its potential effectiveness. This survey primarily aims to determine the tool’s viability in future endeavors. The positive response suggests the tool’s immediate usefulness and its promising future in educational settings, prompting further exploration and consideration for adaptation into an Augmented Reality (AR) environment.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
Open Access Perspective
Keep the Human in the Loop: Arguments for Human Assistance in the Synthesis of Simulation Data for Robot Training
by Carina Liebers, Pranav Megarajan, Jonas Auda, Tim C. Stratmann, Max Pfingsthorn, Uwe Gruenefeld and Stefan Schneegass
Multimodal Technol. Interact. 2024, 8(3), 18; https://doi.org/10.3390/mti8030018 - 01 Mar 2024
Abstract
Robot training often takes place in simulated environments, particularly with reinforcement learning. Therefore, multiple training environments are generated using domain randomization to ensure transferability to real-world applications and compensate for unknown real-world states. We propose improving domain randomization by involving human application experts in various stages of the training process. Experts can provide valuable judgments on simulation realism, identify missing properties, and verify robot execution. Our human-in-the-loop workflow describes how they can enhance the process in five stages: validating and improving real-world scans, correcting virtual representations, specifying application-specific object properties, verifying and influencing simulation environment generation, and verifying robot training. We outline examples and highlight research opportunities. Furthermore, we present a case study in which we implemented different prototypes, demonstrating the potential of human experts in the given stages. Our early insights indicate that human input can benefit robot training at different stages.
Full article
(This article belongs to the Special Issue Challenges in Human-Centered Robotics)
Open Access Feature Paper Article
The FlexiBoard: Tangible and Tactile Graphics for People with Vision Impairments
by Mathieu Raynal, Julie Ducasse, Marc J.-M. Macé, Bernard Oriola and Christophe Jouffrais
Multimodal Technol. Interact. 2024, 8(3), 17; https://doi.org/10.3390/mti8030017 - 27 Feb 2024
Abstract
Over the last decade, several projects have demonstrated how interactive tactile graphics and tangible interfaces can improve and enrich access to information for people with vision impairments. While the former can be used to display a relatively large amount of information, they cannot be physically updated, which constrains the type of tasks that they can support. On the other hand, tangible interfaces are particularly suited for the (re)construction and manipulation of graphics, but the use of physical objects also restricts the type and amount of information that they can convey. We propose to bridge the gap between these two approaches by investigating the potential of tactile and tangible graphics for people with vision impairments. Working closely with special education teachers, we designed and developed the FlexiBoard, an affordable and portable system that enhances traditional tactile graphics with tangible interaction. In this paper, we report on the successive design steps that enabled us to identify and consider technical and design requirements. We thereafter explore two domains of application for the FlexiBoard: education and board games. Firstly, we report on one brainstorming session that we organized with four teachers in order to explore the application space of tangible and tactile graphics for educational activities. Secondly, we describe how the FlexiBoard enabled the successful adaptation of one visual board game into a multimodal accessible game that supports collaboration between sighted, low-vision and blind players.
Full article
Open Access Article
How to Design Human-Vehicle Cooperation for Automated Driving: A Review of Use Cases, Concepts, and Interfaces
by Jakob Peintner, Bengt Escher, Henrik Detjen, Carina Manger and Andreas Riener
Multimodal Technol. Interact. 2024, 8(3), 16; https://doi.org/10.3390/mti8030016 - 26 Feb 2024
Abstract
Currently, a significant gap exists between academic and industrial research in automated driving development. Despite this, there is broad agreement that cooperative control approaches in automated vehicles will surpass the previously favored takeover paradigm in most driving situations due to enhanced driving performance and user experience. Yet, the application of these concepts in real driving situations remains unclear, and a holistic approach to driving cooperation is missing. Existing research has primarily focused on testing specific interaction scenarios and implementations. To address this gap and offer a contemporary perspective on designing human–vehicle cooperation in automated driving, we have developed a three-part taxonomy with the help of an extensive literature review. The taxonomy broadens the notion of driving cooperation towards a holistic and application-oriented view by encompassing (1) the “Cooperation Use Case”, (2) the “Cooperation Frame”, and (3) the “Human–Machine Interface”. We validate the taxonomy by categorizing related literature and providing a detailed analysis of an exemplar paper. The proposed taxonomy offers designers and researchers a concise overview of the current state of driver cooperation and insights for future work. Further, the taxonomy can guide automotive HMI designers in ideation, communication, comparison, and reflection on cooperative driving interfaces.
Full article
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving - 2nd Edition)
Open Access Article
Substitute Buttons: Exploring Tactile Perception of Physical Buttons for Use as Haptic Proxies
by Bram van Deurzen, Gustavo Alberto Rovelo Ruiz, Daniël M. Bot, Davy Vanacken and Kris Luyten
Multimodal Technol. Interact. 2024, 8(3), 15; https://doi.org/10.3390/mti8030015 - 20 Feb 2024
Abstract
Buttons are everywhere and are one of the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and their transferability to virtual counterparts. This research investigates tactile perception concerning button attributes such as shape, size, and roundness and their potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and indicated which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating tactile experiences across various buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting their potential use as haptic proxies for applications such as encountered-type haptics.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
Open Access Article
Contact Resistance Sensing for Touch and Squeeze Interactions
by Nianmei Zhou, Steven Devleminck and Luc Geurts
Multimodal Technol. Interact. 2024, 8(2), 14; https://doi.org/10.3390/mti8020014 - 17 Feb 2024
Abstract
This study investigates accessible and sensitive electrode solutions for detecting touches and squeezes on soft interfaces based on commercially available conductive polyurethane foam. Various electrode materials and configurations are explored, and for electrodes made of conductive threads, the static and dynamic electrical behaviors are studied in depth. In contrast to existing approaches that aim to minimize or stabilize contact resistance, we propose leveraging contact resistance to significantly enhance sensing sensitivity. Suggestions for future researchers and developers when building squeeze sensors based on this material are provided. Our findings offer insights for DIY enthusiasts and researchers, enabling them to develop sensitive soft interfaces for touch and squeeze interactions in an affordable and accessible manner and provide a completely soft user experience.
Full article
Open Access Article
HoberUI: An Exploration of Kinematic Structures as Interactive Input Devices
by Gvidas Razevicius, Anne Roudaut and Abhijit Karnik
Multimodal Technol. Interact. 2024, 8(2), 13; https://doi.org/10.3390/mti8020013 - 13 Feb 2024
Abstract
Deployable kinematic structures can transform themselves from a small closed configuration to a large deployed one. These structures are widely used in many engineering fields including aerospace, architecture, robotics and to some extent within HCI. In this paper, we investigate the use of a symmetric spherical deployable structure and its application to interface control. We present HoberUI, a bimanual symmetric tangible interface with 7 degrees of freedom and explore its use for manipulating 3D environments. We base this on the toy version of the deployable structure called the Hoberman sphere, which consists of pantographic scissor mechanisms and is capable of homogeneous shrinkage and expansion. We first explore the space for designing and implementing interactions through such kinematic structures and apply this to 3D object manipulation. We then explore HoberUI’s usability through a user evaluation that shows the intuitiveness and potential of using instrumented kinematic structures as input devices for bespoke applications.
Full article
Open Access Article
Asymmetric VR Game Subgenres: Implications for Analysis and Design
by Miah Dawes, Katherine Rackliffe, Amanda Lee Hughes and Derek L. Hansen
Multimodal Technol. Interact. 2024, 8(2), 12; https://doi.org/10.3390/mti8020012 - 11 Feb 2024
Abstract
This paper identifies subgenres of asymmetric virtual reality (AVR) games and proposes the AVR Game Genre (AVRGG) framework for developing AVR games. We examined 66 games “in the wild” to develop the AVRGG and used it to identify five subgenres of AVR games: David(s) vs. Goliath, Hide and Seek, Perspective Puzzle, Order Simulation, and Lifeline. We describe these genres, which account for nearly half of the 66 games reviewed, in terms of the AVRGG framework that highlights salient asymmetries in the mechanics, dynamics, and aesthetics categories. To evaluate the usefulness of the AVRGG framework, we conducted four workshops (two with the AVRGG framework and two without) with novice game designers who generated 16 original AVR game concepts. Comparisons between the workshop groups, observations of the design sessions, focus groups, and surveys showed the promise and limitations of the AVRGG framework as a design tool. We found that novice designers were able to understand and apply the AVRGG framework after only a brief introduction. The observations indicated two primary challenges that AVR designers face: balancing the game between VR and non-VR player(s) and generating original game concepts. The AVRGG framework helped overcome the balancing concerns due to its ability to inspire novice game designers with example subgenres and draw attention to the asymmetric mechanics and competitive/cooperative nature of games. While half of the designers who used the AVRGG framework created games that fit directly into existing subgenres, the other half viewed the subgenres as “creative constraints” useful in jumpstarting novel game designs that combined, modified, or purposefully avoided existing subgenres. Additional benefits and limitations of the AVRGG framework are outlined in the paper.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
Open Access Article
Assessing the Efficacy of an Accessible Computing Curriculum for Students with Autism Spectrum Disorders
by Abdu Arslanyilmaz, Margaret L. Briley, Gregory V. Boerio, Katie Petridis and Ramlah Ilyas
Multimodal Technol. Interact. 2024, 8(2), 11; https://doi.org/10.3390/mti8020011 - 09 Feb 2024
Abstract
There is a limited amount of research dedicated to designing and developing computing curricula specifically tailored for students with autism spectrum disorder (ASD), and thus far, no study has examined the effectiveness of an accessible computing curriculum designed specifically for students with ASD. The goal of this study is to evaluate the effectiveness of an accessible curriculum in improving the learning of computational thinking concepts (CTCs) such as sequences, loops, parallelism, conditionals, operators, and data, as well as the development of proficiency in computational thinking practices (CTPs) including experimenting and iterating, testing and debugging, reusing and remixing, and abstracting and modularizing. The study involved two groups, each comprising twenty-four students. One group received instruction using the accessible curriculum, while the other was taught with the original curriculum. Evaluation of students’ CTCs included the analysis of pretest and posttest scores for both groups, and their CTPs were assessed through artifact-based interview scores. The results indicated improvement in both groups concerning the learning of CTCs, with no significant difference between the two curricula. However, the accessible computing curriculum demonstrated significant enhancements in students’ proficiency in debugging and testing, iterating and experimenting, modularizing and abstracting, as well as remixing and reusing. The findings suggest the effectiveness of accessible computing curricula for students with ASD.
Full article
Open Access Article
A Comparison of One- and Two-Handed Gesture User Interfaces in Virtual Reality—A Task-Based Approach
by Taneli Nyyssönen, Seppo Helle, Teijo Lehtonen and Jouni Smed
Multimodal Technol. Interact. 2024, 8(2), 10; https://doi.org/10.3390/mti8020010 - 02 Feb 2024
Abstract
This paper presents two gesture-based user interfaces which were designed for a 3D design review in virtual reality (VR) with inspiration drawn from the shipbuilding industry’s need to streamline and make their processes more sustainable. The user interfaces, one focusing on single-hand (unimanual) gestures and the other focusing on dual-handed (bimanual) usage, are tested as a case study using 13 tasks. The unimanual approach attempts to provide a higher degree of flexibility, while the bimanual approach seeks to provide more control over the interaction. The interfaces were developed for the Meta Quest 2 VR headset using the Unity game engine. Hand-tracking (HT) is utilized due to potential usability benefits in comparison to standard controller-based user interfaces, which lack intuitiveness regarding the controls and can cause more strain. The user interfaces were tested with 25 test users, and the results indicate a preference toward the one-handed user interface with little variation in test user categories. Additionally, the testing order, which was counterbalanced, had a statistically significant impact on the preference and performance, indicating that learning novel interaction mechanisms requires an adjustment period for reliable results. VR sickness was also strongly experienced by a few users, and there were no signs that gesture controls would significantly alleviate it.
Full article
(This article belongs to the Special Issue 3D User Interfaces and Virtual Reality)
Open Access Article
Technology and Meditation: Exploring the Challenges and Benefits of a Physical Device to Support Meditation Routine
by Tjaša Kermavnar and Pieter M. A. Desmet
Multimodal Technol. Interact. 2024, 8(2), 9; https://doi.org/10.3390/mti8020009 - 29 Jan 2024
Abstract
Existing studies of technology supporting meditation habit formation mainly focus on mobile applications which support users via reminders. A potentially more effective source of motivation could be contextual cues provided by meaningful objects in meaningful locations. This longitudinal mixed-methods 8-week study explored the effectiveness of such an object, Prana, in supporting meditation habit formation among seven novice meditators. First, the Meditation Intentions Questionnaire-24 and the Determinants of Meditation Practice Inventory-Revised were administered. The self-report habit index (SrHI) was administered before and after the study. Prana recorded meditation session times, while daily diaries captured subjective experiences. At the end of the study, the system usability scale, the ten-item personality inventory, and the brief self-control scale were completed, followed by individual semi-structured interviews. We expected to find an increase in meditation frequency and temporal consistency, but the results failed to confirm this. Participants meditated for between 16% and 84% of the study. Meditation frequency decreased over time for four participants, decreased and then increased for two, and remained stable for one. Daily meditation experiences were positive, and the perceived difficulty of starting to meditate was low. No relevant correlation was found between the perceived difficulty in starting to meditate and meditation experience overall; the latter was only weakly associated with the likelihood of meditating the next day. While meditation became more habitual for six participants, positive scores on the SrHI were rare. Despite the inconclusive results, this study provides valuable insights into the challenges and benefits of using a meditation device, as well as potential methodological difficulties in studying habit formation with physical devices.
Full article
(This article belongs to the Special Issue Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives)
Open Access Article
Impact of Industrial Noise on Speech Interaction Performance and User Acceptance when Using the MS HoloLens 2
by Maximilian Rosilius, Martin Spiertz, Benedikt Wirsing, Manuel Geuen, Volker Bräutigam and Bernd Ludwig
Multimodal Technol. Interact. 2024, 8(2), 8; https://doi.org/10.3390/mti8020008 - 27 Jan 2024
Abstract
Even though assistance systems offer more potential due to the increasing maturity of the inherent technologies, Automatic Speech Recognition faces distinctive challenges in the industrial context. Speech recognition enables immersive assistance systems to handle inputs and commands hands-free during two-handed operative jobs. The results of the conducted study (with n = 22 participants) based on the counterbalanced within-subject design demonstrated the performance (word error rate and information transfer rate) of the HMD HoloLens 2 as a function of the sound pressure level of industrial noise. The negative influence of industrial noise was higher on the word error rate of dictation than on the information transfer rate of the speech command. Contrary to expectations, no statistically significant difference in performance was found between the stationary and non-stationary noise. Furthermore, this study confirmed the hypothesis that user acceptance was negatively influenced by erroneous speech interactions. Furthermore, the erroneous speech interaction had no statistically significant influence on the workload or physiological parameters (skin conductance level and heart rate). It can be summarized that Automatic Speech Recognition is not yet a capable interaction paradigm in an industrial context.
Full article
Open Access Systematic Review
Electrical Muscle Stimulation for Kinesthetic Feedback in AR/VR: A Systematic Literature Review
by Apostolos Vrontos, Verena Nitsch and Christopher Brandl
Multimodal Technol. Interact. 2024, 8(2), 7; https://doi.org/10.3390/mti8020007 - 25 Jan 2024
Abstract
This paper presents a thorough review of electrical muscle stimulation (EMS) in the context of augmented reality (AR) and virtual reality (VR), specifically focusing on its application in providing kinesthetic feedback. Our systematic review of 17 studies reveals the growing interest and potential of EMS in this domain, as evidenced by the growing body of literature and citations. The key elements presented in our review encompass a catalog of the applications developed to date, the specifics of the stimulation parameters used, the participant demographics of the studies, and the types of measures used in these research efforts. We discovered that EMS offers a versatile range of applications in AR/VR, from simulating physical interactions like touching virtual walls or objects to replicating the sensation of weight and impact. Notably, EMS has shown effectiveness in areas such as object handling and musical rhythm learning, indicating its broader potential beyond conventional haptic feedback mechanisms. However, our review also highlights major challenges in the research, such as inconsistent reporting of EMS parameters and a lack of diversity in study participants. These issues underscore the need for improved reporting standards and more inclusive research approaches to ensure wider applicability and reproducibility of results.
Full article
Open Access Review
Optimal Stimulus Properties for Steady-State Visually Evoked Potential Brain–Computer Interfaces: A Scoping Review
by Clemens Reitelbach and Kiemute Oyibo
Multimodal Technol. Interact. 2024, 8(2), 6; https://doi.org/10.3390/mti8020006 - 24 Jan 2024
Abstract
Brain–computer interfaces (BCIs) based on steady-state visually evoked potentials (SSVEPs) have been well researched due to their easy system configuration, little or no user training and high information transfer rates. To elicit an SSVEP, a repetitive visual stimulus (RVS) is presented to the user. The properties of this RVS (e.g., frequency, luminance) have a significant influence on the BCI performance and user comfort. Several studies in this area over the last one and a half decades have focused on evaluating different stimulus parameters (i.e., properties). However, there is little research on the synthesis of the existing studies, as the last review on the subject was published in 2010. Consequently, we conducted a scoping review of related studies on the influence of stimulus parameters on SSVEP response and user comfort, analyzed them and summarized the findings considering the physiological and neurological processes associated with BCI performance. In the review, we found that stimulus type, frequency, color contrast, luminance contrast and size/shape of the retinal image are the most important stimulus properties that influence SSVEP response. Regarding stimulus type, frequency and luminance, there is a trade-off between the best SSVEP response quality and visual comfort. Finally, since there is no unified measuring method for visual comfort and a lack of differentiation in the high-frequency band, we proposed a measuring method and a division of the band. In summary, the review highlights which stimulus properties are important to consider when designing SSVEP BCIs. It can be used as a reference point for future research in BCI, as it will help researchers to optimize the design of their SSVEP stimuli.
Full article
(This article belongs to the Topic Interactive Artificial Intelligence and Man-Machine Communication)
Open Access Article
Validation of a Web App Enabling Children with Dyslexia to Identify Personalized Visual and Auditory Parameters Facilitating Online Text Reading
by Maria Luisa Lorusso, Francesca Borasio, Paola Panetto, Mariangela Curioni, Giada Brotto, Giulio Pons, Alex Carsetti and Massimo Molteni
Multimodal Technol. Interact. 2024, 8(1), 5; https://doi.org/10.3390/mti8010005 - 15 Jan 2024
Cited by 1
Abstract
Previous research has shown the importance of font type, size, and spacing to facilitate text reading in dyslexia. Great heterogeneity in the population of readers with specific learning disorders suggests that personalized parameters should be preferable to one-size-fits-all ones. A special automated procedure was designed to select the most favorable parameters for both text visualization and text-to-speech conversion. A total of 78 primary and middle school students (29 typical readers, 49 children with atypical reading skills, either diagnosed with a specific reading disorder or identified as having special learning needs) took part in this study, which included the application of the procedure and a validation of its outcomes through a systematic comparison of the use of the personalized versus standard fonts and voices in reading and writing tests. The results show a significant advantage for the personalized parameters. Moreover, in the case of text-to-speech personalization, the advantage is significantly larger for dyslexic readers than for typical readers. These results confirm the usefulness of a personalization approach in providing support to facilitate learning in dyslexic students.
Full article
Open Access Article
Optical Rules to Mitigate the Parallax-Related Registration Error in See-Through Head-Mounted Displays for the Guidance of Manual Tasks
by Vincenzo Ferrari, Nadia Cattari, Sara Condino and Fabrizio Cutolo
Multimodal Technol. Interact. 2024, 8(1), 4; https://doi.org/10.3390/mti8010004 - 04 Jan 2024
Abstract
Head-mounted displays (HMDs) are hands-free devices particularly useful for guiding near-field tasks such as manual surgical procedures. See-through HMDs do not significantly alter the user’s direct view of the world, but the optical merging of real and virtual information can hinder their coherent and simultaneous perception. In particular, the coherence between the real and virtual content is affected by a viewpoint parallax-related misalignment, which is due to the inaccessibility of the user-perceived reality through the semi-transparent optical combiner of the Optical See-Through (OST) display. Recent works demonstrated that a proper selection of the collimation optics of the HMD significantly mitigates the parallax-related registration error without the need for any eye-tracking cameras and/or for any error-prone alignment-based display calibration procedures. These solutions are either based on HMDs that project the virtual imaging plane directly at arm’s distance, or they require the integration of additional lenses on the HMD to optically move the image of the observed scene to the virtual projection plane of the HMD. This paper describes and evaluates the pros and cons of both suggested solutions by providing an analytical estimation of the residual registration error achieved with each and discussing the perceptual issues generated by the simultaneous focalization of real and virtual information.
Full article
Topics
Topic in Information, Mathematics, MTI, Symmetry
Youth Engagement in Social Media in the Post COVID-19 Era
Topic Editors: Naseer Abbas Khan, Shahid Kalim Khan, Abdul Qayyum
Deadline: 30 September 2024
Special Issues
Special Issue in MTI
3D User Interfaces and Virtual Reality
Guest Editors: Arun K. Kulshreshth, Kevin Pfeil
Deadline: 20 April 2024
Special Issue in MTI
Designing an Inclusive and Accessible Metaverse
Guest Editors: Joel Fredericks, Youngho Lee, Youngjun Cho, Mark Billinghurst, Callum Parker, Soojeong Yoo
Deadline: 20 June 2024
Special Issue in MTI
Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives
Guest Editors: Takumi Ohashi, Di Zhu, Kuo-Hsiang Chen, Wei Liu, Jan Auernhammer
Deadline: 30 June 2024
Special Issue in MTI
Innovative Theories and Practices for Designing and Evaluating Inclusive Educational Technology and Online Learning
Guest Editor: Julius Nganji
Deadline: 12 July 2024