
Multi-Modal Data Fusion in Enhancing Human-Machine Interaction for Robotic Applications: A Survey

Tauheed Khan Mohd, Nicole Nguyen, Ahmad Y. Javaid
Abstract
Human-machine interaction has been around for several decades, with new applications emerging every day. One major goal that remains to be achieved is designing interaction that resembles the way humans interact with one another. There is therefore a need for interactive systems that replicate a more realistic and natural human-machine interaction, and developers and researchers need to be aware of the state-of-the-art methodologies used to pursue this goal. This survey provides researchers with an overview of state-of-the-art data fusion technologies that combine multiple inputs to accomplish a task in the robotic application domain. Input data modalities are broadly classified into uni-modal and multi-modal systems, and their applications across a range of industries are discussed, including health care, where such systems can help professionals examine patients using different modalities. Multi-modal systems are differentiated by the combination of inputs treated as a single input, e.g., gestures, voice, sensors, and haptic feedback. These inputs may or may not be fused, which provides a further classification of multi-modal systems. The survey concludes with a summary of technologies in use for multi-modal systems.
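To illustrate the fused vs. non-fused distinction the abstract draws, here is a minimal sketch contrasting decision-level (late) fusion, which combines the outputs of separate voice and gesture classifiers into one command, with feature-level (early) fusion, which concatenates raw features before a single classifier. All function names, the fusion weight, and the three-command vocabulary are assumptions made for this example, not details taken from the survey.

```python
import numpy as np

def late_fusion(voice_probs: np.ndarray, gesture_probs: np.ndarray,
                voice_weight: float = 0.6) -> int:
    """Decision-level (late) fusion: each modality is classified
    separately, and the per-modality class probabilities are combined
    by a weighted average; the most likely command wins."""
    fused = voice_weight * voice_probs + (1.0 - voice_weight) * gesture_probs
    return int(np.argmax(fused))

def early_fusion_features(voice_feat: np.ndarray,
                          gesture_feat: np.ndarray) -> np.ndarray:
    """Feature-level (early) fusion: raw feature vectors from both
    modalities are concatenated and fed to one downstream classifier."""
    return np.concatenate([voice_feat, gesture_feat])

# Example: a hypothetical 3-command vocabulary ("stop", "move", "grasp").
# The voice channel strongly suggests command 0; the gesture channel is
# ambiguous, so the fused decision follows the more confident modality.
voice = np.array([0.7, 0.2, 0.1])
gesture = np.array([0.4, 0.4, 0.2])
print(late_fusion(voice, gesture))  # -> 0
```

In a non-fused multi-modal system, by contrast, each input channel would drive its own action independently rather than being combined into a single decision as above.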