
StepNet: Spatial-temporal Part-aware Network for Sign Language Recognition

Xiaolong Shen, Zhedong Zheng, Yi Yang
Dec 2022
Abstract
Sign language recognition (SLR) aims to overcome the communication barrier for people with deafness or people who are hard of hearing. Most existing approaches can typically be divided into two lines, i.e., Skeleton-based and RGB-based methods, but both lines of methods have their limitations. RGB-based approaches usually overlook the fine-grained hand structure, while Skeleton-based methods do not take the facial expression into account. In an attempt to address both limitations, we propose a new framework named Spatial-temporal Part-aware network (StepNet), based on RGB parts. As the name implies, StepNet consists of two modules: Part-level Spatial Modeling and Part-level Temporal Modeling. Particularly, without using any keypoint-level annotations, Part-level Spatial Modeling implicitly captures the appearance-based properties, such as hands and faces, in the feature space. On the other hand, Part-level Temporal Modeling captures the pertinent properties over time by implicitly mining the long-short term context. Extensive experiments show that our StepNet, thanks to the Spatial-temporal modules, achieves competitive Top-1 Per-instance accuracy on three widely-used SLR benchmarks, i.e., 56.89% on WLASL, 77.2% on NMFs-CSL, and 77.1% on BOBSL. Moreover, the proposed method is compatible with the optical flow input and can yield higher performance if fused. We hope that this work can serve as a preliminary step for people with deafness.
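As a rough illustration of the two modules named in the abstract, the sketch below splits per-frame CNN features into horizontal stripes (one possible, assumed realization of "part-level" spatial modeling, capturing face and hand regions without keypoints) and mines long- and short-term context per part with two temporal convolutions of different kernel sizes. Every class name, dimension, the ResNet-18 backbone, the stripe-based part split, and the kernel sizes are assumptions made for illustration only; this is not the authors' released implementation.

```python
# Minimal sketch of the StepNet idea described in the abstract (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torchvision.models as models


class PartLevelSpatialModeling(nn.Module):
    """Split the frame-level feature map into horizontal stripes so that
    different stripes can focus on face vs. hand regions without any
    keypoint annotations (one assumed reading of 'part-level')."""

    def __init__(self, num_parts=3, feat_dim=512, out_dim=256):
        super().__init__()
        self.num_parts = num_parts
        self.proj = nn.ModuleList(
            [nn.Linear(feat_dim, out_dim) for _ in range(num_parts)]
        )

    def forward(self, fmap):                           # fmap: (B*T, C, H, W)
        stripes = fmap.chunk(self.num_parts, dim=2)    # split along height
        feats = []
        for stripe, proj in zip(stripes, self.proj):
            pooled = stripe.mean(dim=(2, 3))           # global pool per stripe
            feats.append(proj(pooled))
        return torch.stack(feats, dim=1)               # (B*T, P, out_dim)


class PartLevelTemporalModeling(nn.Module):
    """Mine short- and long-term context per part with two 1D temporal
    convolutions of different kernel sizes, then fuse and pool over time."""

    def __init__(self, dim=256, short_k=3, long_k=7):
        super().__init__()
        self.short = nn.Conv1d(dim, dim, short_k, padding=short_k // 2)
        self.long = nn.Conv1d(dim, dim, long_k, padding=long_k // 2)

    def forward(self, x):                              # x: (B, T, P, D)
        B, T, P, D = x.shape
        x = x.permute(0, 2, 3, 1).reshape(B * P, D, T)
        ctx = torch.relu(self.short(x)) + torch.relu(self.long(x))
        return ctx.reshape(B, P, D, T).mean(dim=-1)    # temporal pooling -> (B, P, D)


class StepNetSketch(nn.Module):
    def __init__(self, num_classes=2000, num_parts=3, dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)       # placeholder backbone
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.spatial = PartLevelSpatialModeling(num_parts, 512, dim)
        self.temporal = PartLevelTemporalModeling(dim)
        self.classifier = nn.Linear(num_parts * dim, num_classes)

    def forward(self, clip):                           # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        fmap = self.backbone(clip.flatten(0, 1))       # (B*T, 512, h, w)
        parts = self.spatial(fmap)                     # (B*T, P, dim)
        parts = parts.reshape(B, T, *parts.shape[1:])  # (B, T, P, dim)
        ctx = self.temporal(parts)                     # (B, P, dim)
        return self.classifier(ctx.flatten(1))         # (B, num_classes)


# Usage: two 16-frame RGB clips at 224x224 resolution.
model = StepNetSketch(num_classes=2000)
logits = model(torch.randn(2, 16, 3, 224, 224))
```

A second, optical-flow copy of such a network could be trained in parallel and its logits averaged with the RGB logits, which is one common way to realize the late fusion mentioned in the abstract.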