Please use this identifier to cite or link to this item:
|Title:||DeepGIN: Deep generative inpainting network for extreme image inpainting|
|Author(s):||Siu, Wan Chi; Lun, D. P.-K.|
|Issue Date:||2020|
|Publisher:||Springer|
|Related Publication(s):||Computer Vision – ECCV 2020 Workshops Proceedings, Part IV|
|Start page:||5|
|End page:||22|
|Abstract:||
The degree of difficulty in image inpainting depends on the types and sizes of the missing parts. Existing image inpainting approaches usually encounter difficulties in completing missing parts in the wild with visually and contextually pleasing results, as they are trained either to deal with one specific type of missing pattern (mask) or under unilateral assumptions about the shapes and/or sizes of the masked areas. We propose a deep generative inpainting network, named DeepGIN, to handle various types of masked images. We design a Spatial Pyramid Dilation (SPD) ResNet block to enable the use of distant features for reconstruction. We also employ a Multi-Scale Self-Attention (MSSA) mechanism and a Back Projection (BP) technique to enhance our inpainting results. Our DeepGIN generally outperforms the state-of-the-art approaches on two publicly available datasets (FFHQ and Oxford Buildings), both quantitatively and qualitatively. We also demonstrate that our model is capable of completing masked images in the wild.
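The core idea behind the SPD block is to run several convolution branches with different dilation rates in parallel, so that distant (unmasked) features can contribute to reconstructing the masked region. The sketch below is only an illustration of that idea, not the paper's architecture: the number of branches, the dilation rates (1, 2, 4, 8), the toy smoothing kernel, and the averaging fusion are all assumptions made for a minimal, dependency-light example.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Naive 'same'-padded 2-D convolution with a dilated 3x3 kernel.

    A dilation rate d spaces the kernel taps d pixels apart, so the
    effective receptive field grows to d*(k-1)+1 without extra weights.
    """
    k = kernel.shape[0]
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * dilation : i * dilation + h,
                                     j * dilation : j * dilation + w]
    return out

def spd_block(x, dilations=(1, 2, 4, 8)):
    """Illustrative SPD-style residual block (hypothetical parameters).

    Parallel branches with increasing dilation see progressively more
    distant context; their outputs are fused (here: averaged, as a
    stand-in for a learned 1x1 fusion) and added back residually.
    """
    kernel = np.full((3, 3), 1.0 / 9.0)  # toy smoothing kernel
    branches = [dilated_conv2d(x, kernel, d) for d in dilations]
    return x + np.mean(branches, axis=0)

x = np.random.rand(32, 32)   # stand-in feature map
y = spd_block(x)             # same spatial size as the input
```

In the actual network the branches would be learned convolutions fused by a trainable layer; the point of the sketch is only how parallel dilation rates let one block mix near and far features at the same resolution.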
|URI:||https://repository.cihe.edu.hk/jspui/handle/cihe/1253|
|DOI:||10.1007/978-3-030-66823-5_1|
|CIHE Affiliated Publication:||No|
|Appears in Collections:||CIS Publication|