
Rethinking Cooking State Recognition with Vision Transformers

Akib Mohammed Khan, Alif Ashrafee, Reeshoon Sayera, Shahriar Ivan, Sabbir Ahmed
Dec 2022
Abstract
To ensure proper knowledge representation of the kitchen environment, it is vital for kitchen robots to recognize the states of the food items that are being cooked. Although the domain of object detection and recognition has been extensively studied, the task of object state classification has remained relatively unexplored. The high intra-class similarity of ingredients during different states of cooking makes the task even more challenging. Researchers have recently proposed Deep Learning-based strategies; however, these have yet to achieve high performance. In this study, we utilized the self-attention mechanism of the Vision Transformer (ViT) architecture for the cooking state recognition task. The proposed approach encapsulates the globally salient features from images while also exploiting the weights learned from a larger dataset. This global attention allows the model to withstand the similarities between samples of different cooking objects, while the use of transfer learning helps to overcome the lack of inductive bias by utilizing pretrained weights. Several augmentation techniques have also been employed to improve recognition accuracy. Evaluation of our proposed framework on the 'Cooking State Recognition Challenge Dataset' achieved an accuracy of 94.3%, which significantly outperforms the state of the art.
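
The pipeline described in the abstract (a pretrained ViT fine-tuned via transfer learning, with image augmentations applied during training) could be sketched roughly as follows. This is a minimal illustration, not the authors' released implementation: the PyTorch/timm stack, the ViT-Base/16 variant, the specific augmentations, the number of cooking-state classes, and the dataset path are all assumptions made for the example.

# Minimal sketch: fine-tune an ImageNet-pretrained ViT on cooking-state images.
# Assumes an ImageFolder-style copy of the dataset; NUM_STATES and DATA_DIR are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import timm

NUM_STATES = 7                        # hypothetical: set to the dataset's actual number of cooking states
DATA_DIR = "cooking_states/train"     # hypothetical path to an ImageFolder layout

# Illustrative augmentations, in the spirit of the "several augmentation techniques" mentioned above
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),    # ViT-Base/16 expects 224x224 inputs
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=train_tf)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

# Transfer learning: load ImageNet-pretrained weights and replace the classification head
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=NUM_STATES)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

model.train()
for epoch in range(10):               # epoch count is illustrative
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Starting from pretrained weights supplies the inductive bias that a ViT trained from scratch on a small dataset lacks, while the global self-attention over image patches is what the abstract credits with separating visually similar cooking states.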