MRTNet: Multi-Resolution Temporal Network for Video Sentence Grounding

Wei Ji, Long Chen, Yinwei Wei, Yiming Wu, Tat-Seng Chua
Dec 2022
Abstract
Given an untrimmed video and a natural language query, video sentence grounding aims to localize the target temporal moment in the video. Existing methods mainly tackle this task by matching and aligning the semantics of the descriptive sentence and video segments at a single temporal resolution, while neglecting the temporal consistency of video content across different resolutions. In this work, we propose a novel multi-resolution temporal video sentence grounding network, MRTNet, which consists of a multi-modal feature encoder, a Multi-Resolution Temporal (MRT) module, and a predictor module. The MRT module is an encoder-decoder network, and output features from its decoder are combined with Transformers to predict the final start and end timestamps. Notably, the MRT module is hot-pluggable: it can be seamlessly incorporated into any anchor-free model. In addition, we use a hybrid loss to supervise cross-modal features in the MRT module for more accurate grounding at three scales: frame-level, clip-level, and sequence-level. Extensive experiments on three prevalent datasets demonstrate the effectiveness of MRTNet.
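The multi-resolution idea in the MRT module can be illustrated with a minimal sketch: pool frame features at several temporal resolutions (the encoder side), upsample each back to the original frame rate (the decoder side), and fuse the results. This is a simplified illustration under assumed shapes, not the authors' implementation; the function name, scale choices, and mean-based fusion are all hypothetical.

```python
import numpy as np

def multi_resolution_features(video_feats, scales=(1, 2, 4)):
    """Hypothetical MRT-style sketch (not the paper's code): pool a
    (T, D) frame-feature matrix at several temporal resolutions and
    upsample each back to T frames, then fuse across resolutions."""
    T, D = video_feats.shape
    outputs = []
    for s in scales:
        # Encoder side: average-pool over non-overlapping windows of length s.
        n = T // s
        pooled = video_feats[:n * s].reshape(n, s, D).mean(axis=1)
        # Decoder side: nearest-neighbour upsample back to ~T frames.
        upsampled = np.repeat(pooled, s, axis=0)
        if upsampled.shape[0] < T:
            # Pad the tail when T is not divisible by s.
            pad = np.repeat(upsampled[-1:], T - upsampled.shape[0], axis=0)
            upsampled = np.vstack([upsampled, pad])
        outputs.append(upsampled)
    # Fuse resolutions by simple averaging (one arbitrary choice).
    return np.mean(outputs, axis=0)

feats = np.random.rand(10, 16)   # 10 frames, 16-dim features (assumed shapes)
fused = multi_resolution_features(feats)
print(fused.shape)  # (10, 16)
```

In the full model, the fused multi-resolution features would then feed a Transformer-based predictor that regresses the start and end timestamps; here the fusion step alone conveys the temporal-consistency intuition.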