Publications

ASSET: Autoregressive Semantic Scene Editing with Transformers at High Resolutions

ACM Transactions on Graphics (TOG)

Publication date: August 1, 2022

Difan Liu, Sandesh Shetty, Tobias Hinz, Matt Fisher, Richard Zhang, Taesung Park, Evangelos Kalogerakis


We present ASSET, a neural architecture for automatically modifying an input high-resolution image according to a user's edits on its semantic segmentation map. Our architecture is based on a transformer with a novel attention mechanism. Our key idea is to sparsify the transformer's attention matrix at high resolutions, guided by dense attention extracted at lower image resolutions. While previous attention mechanisms are either computationally too expensive for high-resolution images or are overly constrained within specific image regions, hampering long-range interactions, our novel attention mechanism is both computationally efficient and effective. Our sparsified attention mechanism captures long-range interactions and context, enabling the synthesis of interesting phenomena in scenes, such as reflections of landscapes onto water or flora consistent with the rest of the landscape, that previous convnet- and transformer-based approaches could not generate reliably. We present qualitative and quantitative results, along with user studies, demonstrating the effectiveness of our method.
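To make the key idea concrete, the sketch below illustrates one way low-resolution dense attention could guide high-resolution sparse attention. This is a minimal, hypothetical example, not the authors' implementation: the function name, tensor shapes, the per-query top-k selection, and the explicit materialization of the upsampled guide map are all assumptions made for clarity.

```python
# Hypothetical sketch: dense attention computed at low resolution selects which
# key locations each high-resolution query attends to. Illustrative only; a real
# implementation would avoid materializing the full upsampled attention map.
import torch


def guided_sparse_attention(q_hi, k_hi, v_hi, q_lo, k_lo, hi_res, lo_res, top_k=16):
    """q_hi, k_hi, v_hi: (N_hi, d) tokens of the high-resolution feature map.
    q_lo, k_lo:          (N_lo, d) tokens of the low-resolution feature map.
    hi_res, lo_res:      side lengths of the (square) maps; hi_res % lo_res == 0.
    """
    d = q_hi.shape[-1]

    # 1) Dense attention at low resolution (cheap: N_lo x N_lo entries).
    attn_lo = torch.softmax(q_lo @ k_lo.t() / d ** 0.5, dim=-1)          # (N_lo, N_lo)

    # 2) Upsample the low-res attention pattern to high resolution
    #    (query rows/cols and key rows/cols are each repeated `scale` times).
    scale = hi_res // lo_res
    guide = attn_lo.view(lo_res, lo_res, lo_res, lo_res)
    guide = guide.repeat_interleave(scale, 0).repeat_interleave(scale, 1)
    guide = guide.repeat_interleave(scale, 2).repeat_interleave(scale, 3)
    guide = guide.reshape(hi_res * hi_res, hi_res * hi_res)              # (N_hi, N_hi)

    # 3) Sparsify: keep only the top-k key locations per high-res query.
    idx = guide.topk(top_k, dim=-1).indices                              # (N_hi, top_k)
    k_sel = k_hi[idx]                                                    # (N_hi, top_k, d)
    v_sel = v_hi[idx]                                                    # (N_hi, top_k, d)

    # 4) Sparse attention at high resolution: each query sees only top_k keys,
    #    so cost scales with N_hi * top_k instead of N_hi * N_hi.
    logits = (q_hi.unsqueeze(1) * k_sel).sum(-1) / d ** 0.5              # (N_hi, top_k)
    weights = torch.softmax(logits, dim=-1)
    return (weights.unsqueeze(-1) * v_sel).sum(1)                        # (N_hi, d)
```

The point of the sketch is the asymmetry of cost: the dense quadratic attention is only paid at the coarse resolution, while each high-resolution query attends to a small, guided subset of keys that may lie anywhere in the image, preserving long-range context.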
