DSI2I: Dense Style for Unpaired Image-to-Image Translation

Baran Ozaydin, Tong Zhang, Sabine Süsstrunk, Mathieu Salzmann
Dec 2022
Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar, without ground-truth input-translation pairs. Existing UEI2I methods represent style using either a global, image-level feature vector or one vector per object instance/class, but the latter requires knowledge of the scene semantics. Here, by contrast, we propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information. We then rely on perceptual and adversarial losses to disentangle our dense style and content representations, and exploit unsupervised cross-domain semantic correspondences to warp the exemplar style to the source content. We demonstrate the effectiveness of our method on two datasets using standard metrics, together with a new localized style metric that measures style similarity in a class-wise manner. Our results evidence that the translations produced by our approach are more diverse and closer to the exemplars than those of state-of-the-art methods, while nonetheless preserving the source content.
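The core idea of warping a dense exemplar style map to the source content via semantic correspondences can be illustrated with a minimal sketch. The snippet below is an assumption-laden simplification, not the paper's actual model: it uses plain cosine similarity between precomputed content features as the correspondence, whereas the paper learns these cross-domain correspondences without supervision. The function name and tensor shapes are hypothetical.

```python
import numpy as np

def warp_exemplar_style(src_feat, exm_feat, exm_style, tau=0.1):
    """Warp a dense exemplar style map to the source layout via soft
    feature correspondences (simplified sketch, not the learned module).

    src_feat:  (Hs*Ws, C) content features of the source image
    exm_feat:  (He*We, C) content features of the exemplar
    exm_style: (He*We, D) dense style vectors of the exemplar
    returns:   (Hs*Ws, D) style map aligned with the source content
    """
    # L2-normalize so dot products become cosine similarities.
    s = src_feat / np.linalg.norm(src_feat, axis=1, keepdims=True)
    e = exm_feat / np.linalg.norm(exm_feat, axis=1, keepdims=True)
    sim = s @ e.T  # (Hs*Ws, He*We) pairwise similarity

    # Temperature-scaled softmax over exemplar positions yields soft
    # correspondence weights for each source location.
    w = np.exp((sim - sim.max(axis=1, keepdims=True)) / tau)
    w /= w.sum(axis=1, keepdims=True)

    # Each source location receives a weighted mixture of exemplar styles.
    return w @ exm_style
```

Because each output row is a convex combination of exemplar style vectors, semantically similar regions of the source end up with the style of the matching exemplar regions, which is the intuition behind the dense, per-location transfer described above.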