Publications

Contrastive Learning for Unpaired Image-to-Image Translation

European Conference on Computer Vision (ECCV)

Publication date: August 23, 2020

Taesung Park, Alexei A. Efros, Richard Zhang, Jun-Yan Zhu

In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain. We propose a straightforward method for doing so: maximizing mutual information between the two, using a framework based on contrastive learning. The method encourages two elements (corresponding patches) to map to a similar point in a learned feature space, relative to other elements (other patches) in the dataset, referred to as negatives. We explore several critical design choices for making contrastive learning effective in the image synthesis setting. Notably, we use a multilayer, patch-based approach, rather than operating on entire images. Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset. We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time. In addition, our method can even be extended to the training setting where each "domain" is only a single image.
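
As a rough illustration of the patch-level contrastive objective the abstract describes, the sketch below implements an InfoNCE-style loss in PyTorch: each output-image patch feature is matched to the feature of the corresponding input patch (the positive), against features of other patches drawn from the same input image (the negatives). The function name, tensor shapes, and temperature value are illustrative assumptions, not taken from the paper's released code.

    # Minimal sketch of a patch-level InfoNCE loss (illustrative; not the
    # authors' released implementation). Each row of feat_out is the feature
    # of an output-image patch; the matching row of feat_in is the feature of
    # the corresponding input-image patch (the positive). All other rows of
    # feat_in, patches from the same input image, serve as negatives.
    import torch
    import torch.nn.functional as F

    def patch_contrastive_loss(feat_out, feat_in, tau=0.07):
        # feat_out, feat_in: (num_patches, dim) features taken at
        # corresponding spatial locations of the output and input images.
        feat_out = F.normalize(feat_out, dim=1)
        feat_in = F.normalize(feat_in, dim=1)
        logits = feat_out @ feat_in.t() / tau                # (N, N) similarities
        labels = torch.arange(logits.size(0),                # positives lie on
                              device=logits.device)          # the diagonal
        return F.cross_entropy(logits, labels)

In the multilayer variant the abstract mentions, a loss of this form would be computed on patch features sampled from several encoder layers and the per-layer terms summed.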