Title: DeepGIN: Deep generative inpainting network for extreme image inpainting
Author(s): Siu, Wan Chi
Liu, Zhisong
Li, C.-T.
Wang, L.-W.
Lun, D. P.-K.
Issue Date: 2020
Publisher: Springer
Related Publication(s): Computer Vision – ECCV 2020 Workshops Proceedings, Part IV
Start page: 5
End page: 22
The degree of difficulty in image inpainting depends on the types and sizes of the missing parts. Existing image inpainting approaches usually struggle to complete missing parts in the wild with visually and contextually pleasing results, as they are trained either to deal with one specific type of missing pattern (mask) or under unilateral assumptions about the shapes and/or sizes of the masked areas. We propose a deep generative inpainting network, named DeepGIN, to handle various types of masked images. We design a Spatial Pyramid Dilation (SPD) ResNet block to enable the use of distant features for reconstruction. We also employ a Multi-Scale Self-Attention (MSSA) mechanism and a Back Projection (BP) technique to enhance our inpainting results. Our DeepGIN generally outperforms state-of-the-art approaches on two publicly available datasets (FFHQ and Oxford Buildings), both quantitatively and qualitatively. We also demonstrate that our model is capable of completing masked images in the wild.
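The abstract's Spatial Pyramid Dilation idea, i.e. aggregating features at several dilation rates so that distant pixels can inform a masked region, can be sketched as a residual block with parallel dilated convolutions. This is only an illustrative PyTorch sketch: the class name, channel split, and dilation rates are assumptions for demonstration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SPDResBlock(nn.Module):
    """Illustrative SPD-style ResNet block (hypothetical layer sizes):
    parallel 3x3 convolutions with increasing dilation rates widen the
    receptive field, letting distant context reach the inpainted area."""

    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        assert channels % len(rates) == 0, "channels must split evenly"
        branch_ch = channels // len(rates)
        # One branch per dilation rate; padding == dilation keeps the
        # spatial size unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, branch_ch, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(channels, channels, 1)  # mix branch outputs

    def forward(self, x):
        # Concatenate multi-dilation features, fuse, then add the residual.
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(y)
```

Because each branch preserves spatial resolution, the block is a drop-in replacement for a plain ResNet block in an encoder-decoder inpainting generator.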
DOI: 10.1007/978-3-030-66823-5_1
CIHE Affiliated Publication: No
Appears in Collections:CIS Publication

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.