Journal on Multimodal User Interfaces
Published by Springer Nature
ISSN: 1783-7677 · eISSN: 1783-8738
Abbreviation: J. Multimodal User Interfaces
Aims & Scope
The Journal on Multimodal User Interfaces publishes work on the design, implementation, and evaluation of multimodal interfaces.
Research in multimodal interaction is by its very nature multidisciplinary, involving several fields including signal processing, human-machine interaction, computer science, cognitive science, and ergonomics.
This journal focuses on multimodal interfaces involving advanced modalities, several modalities and their fusion, user-centric design, usability and architectural considerations.
Use cases and descriptions of specific application areas are welcome, including, for example, e-learning, assistance, serious games, affective and social computing, and interaction with avatars and robots.
Metrics & Ranking
Impact Factor
| Year | Value |
| --- | --- |
| 2025 | 2.1 |
| 2024 | 2.20 |
Journal Rank
| Year | Value |
| --- | --- |
| 2024 | 11158 |
Journal Citation Indicator
| Year | Value |
| --- | --- |
| 2024 | 224 |
SJR (SCImago Journal Rank)
| Year | Value |
| --- | --- |
| 2024 | 0.521 |
Quartile
| Year | Value |
| --- | --- |
| 2024 | Q2 |
h-index
| Year | Value |
| --- | --- |
| 2024 | 36 |
Abstracting & Indexing
The journal is indexed in leading academic databases, ensuring global visibility and accessibility of its peer-reviewed research.
Subjects & Keywords
The journal's research areas cover key disciplines and specialized sub-topics in Computer Science, supporting cutting-edge academic discovery.
Most Cited Articles
The Most Cited Articles section features the journal's most impactful research, based on citation counts. These articles have been referenced frequently by other researchers, indicating their significant contribution to their respective fields.
- EmoNets: Multimodal deep learning approaches for emotion recognition in video
  Citations: 275
  Authors: Samira Ebrahimi, Xavier, Pascal, Caglar, Vincent, Kishore, Sébastien, Pierre, Yann, Nicolas, Raul, Mehdi, David, Aaron, Pascal, Roland, Christopher, Yoshua
- An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends
  Citations: 203
  Authors: Alexy, Shyamanta M.
- Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis
  Citations: 192
  Authors: Loic, Ginevra, George
- Hierarchical committee of deep convolutional neural networks for robust facial expression recognition
  Citations: 176
  Authors: Bo-Kyeong, Jihyeon, Suh-Yeon, Soo-Young
- A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research
  Citations: 139
  Authors: Ádám, György, Hunor, Tony
- Multimodal assistive technologies for depression diagnosis and monitoring
  Citations: 136
  Authors: Jyoti, Roland, Sharifa, Abhinav, Michael, Julien, Gordon, Michael
- When my robot smiles at me: Enabling human-robot rapport via real-time head gesture mimicry
  Citations: 130
  Authors: Laurel D., Philip C., Peter
- Model-based adaptive user interface based on context and user experience evaluation
  Citations: 112
  Authors: Jamil, Anees, Hafiz Syed, Rahman, Muhammad, Shujaat, Jaehun, Oresti, Sungyoung
- On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues
  Citations: 94
  Authors: Florian, Martin, Alex, Björn, Ellen, Roddy
- "Let me explain!": exploring the potential of virtual agents in explainable AI interaction design
  Citations: 88
  Authors: Katharina, Dominik, Ruben, Tobias, Elisabeth