DOI: 10.48550/arXiv.2207.12020

Domain-invariant Feature Exploration for Domain Generalization

Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie
Jul 2022
Abstract
Deep learning has achieved great success in the past few years. However, the performance of deep learning is likely to degrade when faced with non-IID situations. Domain generalization (DG) enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from both internal and mutual sides. Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., a property within a domain that is agnostic to other domains. Mutual invariance means that the features can be learned across multiple domains (cross-domain) and contain common information, i.e., features that are transferable w.r.t. other domains. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as the internally-invariant features and learns cross-domain correlation alignment as the mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
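The abstract names three concrete ingredients: distilling the Fourier phase spectrum as the internally-invariant features, aligning second-order feature statistics across domains (correlation alignment) as the mutually-invariant features, and an exploration loss that pushes the two feature parts apart to increase diversity. Below is a minimal PyTorch sketch of what these components could look like; the function names and the exact form of the exploration loss are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def fourier_phase(x: torch.Tensor) -> torch.Tensor:
    """Phase spectrum of an image batch (B, C, H, W), discarding amplitude.

    The phase is the distillation target for the internally-invariant
    features in the abstract's description.
    """
    freq = torch.fft.fft2(x, dim=(-2, -1))
    return torch.angle(freq)


def phase_distillation_loss(student_feat: torch.Tensor,
                            teacher_feat: torch.Tensor) -> torch.Tensor:
    """Knowledge-distillation term: match the student's internal features
    to features from a teacher trained to reproduce the Fourier phase."""
    return F.mse_loss(student_feat, teacher_feat.detach())


def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Correlation alignment (CORAL): match the covariance matrices of
    two domains' feature batches (B, D)."""
    d = source.size(1)

    def covariance(f: torch.Tensor) -> torch.Tensor:
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)

    return ((covariance(source) - covariance(target)) ** 2).sum() / (4 * d * d)


def exploration_loss(feat_internal: torch.Tensor,
                     feat_mutual: torch.Tensor) -> torch.Tensor:
    """Diversity term (an assumed form): push the internally-invariant and
    mutually-invariant feature halves apart. The negation means minimizing
    this loss maximizes their difference."""
    return -F.mse_loss(feat_internal, feat_mutual)
```

In a training loop these terms would typically be weighted and added to the usual classification loss, e.g. loss = ce + l1 * phase_distill + l2 * coral + l3 * explore, with the weights tuned per benchmark (this weighting scheme is an assumption, not a detail given in the abstract).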