
DSI2I: Dense Style for Unpaired Image-to-Image Translation

Baran Ozaydin, Tong Zhang, Sabine Susstrunk, Mathieu Salzmann
Dec 2022
Abstract
Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar, without ground-truth input-translation pairs. Existing UEI2I methods represent style using either a global, image-level feature vector, or one vector per object instance/class but requiring knowledge of the scene semantics. Here, by contrast, we propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information. We then rely on perceptual and adversarial losses to disentangle our dense style and content representations, and exploit unsupervised cross-domain semantic correspondences to warp the exemplar style to the source content. We demonstrate the effectiveness of our method on two datasets using standard metrics together with a new localized style metric measuring style similarity in a class-wise manner. Our results evidence that the translations produced by our approach are more diverse and closer to the exemplars than those of the state-of-the-art methods while nonetheless preserving the source content.
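The correspondence-based warping step described above can be illustrated with a minimal sketch. This is not the paper's actual module; it is a generic attention-style warp (hypothetical function and shapes, NumPy instead of a learned network): source and exemplar content features are matched by cosine similarity, and each source location receives a softmax-weighted average of the exemplar's dense style vectors.

```python
import numpy as np

def warp_style(src_content, ex_content, ex_style, tau=0.1):
    """Warp a dense exemplar style map onto the source layout.

    src_content: (Hs*Ws, C) source content features, one row per location
    ex_content:  (He*We, C) exemplar content features
    ex_style:    (He*We, D) exemplar dense style features
    Returns (Hs*Ws, D): one style vector per source location.
    """
    # L2-normalize so dot products become cosine similarities
    s = src_content / np.linalg.norm(src_content, axis=1, keepdims=True)
    e = ex_content / np.linalg.norm(ex_content, axis=1, keepdims=True)
    sim = s @ e.T  # (Hs*Ws, He*We) cross-domain correspondence scores
    # Soft attention over exemplar locations; tau controls sharpness
    w = np.exp(sim / tau)
    w /= w.sum(axis=1, keepdims=True)
    # Each source location gets a convex combination of exemplar styles
    return w @ ex_style

# Toy example: 4 source locations, 3 exemplar locations
rng = np.random.default_rng(0)
warped = warp_style(rng.normal(size=(4, 8)),
                    rng.normal(size=(3, 8)),
                    rng.normal(size=(3, 16)))
print(warped.shape)  # (4, 16)
```

A lower `tau` makes the warp approach a hard nearest-neighbor assignment, while a higher one blends styles from many exemplar locations; the paper's learned correspondences play the role of `sim` here.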