Title: Seamless manga inpainting with semantics awareness
Author(s): Liu, Xueting
Issue Date: 2021
Publisher: Association for Computing Machinery
Journal: ACM Transactions on Graphics
Volume: 40
Issue: 4
Pages: 1–11
Abstract:
Manga inpainting fills in the pixels disoccluded by the removal of dialogue balloons or "sound effect" text. The industry has long needed this process for language localization and for the conversion to animated manga. It is mostly done manually, as existing methods (designed mostly for natural image inpainting) cannot produce satisfactory results. Manga inpainting is trickier than natural image inpainting because of manga's highly abstract illustration style, built from structural lines and screentone patterns, which confuses both semantic interpretation and visual content synthesis. In this paper, we present the first manga inpainting method, a deep learning model, that generates high-quality results. Instead of inpainting directly, we propose separating the complicated inpainting task into two major phases: semantic inpainting and appearance synthesis. This separation eases feature understanding and hence the training of the learning model. A key idea is to disentangle the structural lines and screentones, which helps the network better distinguish structural-line features from screentone features during semantic interpretation. Both visual comparisons and quantitative experiments demonstrate the effectiveness of our method and justify its superiority over existing state-of-the-art methods for manga inpainting.
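The two-phase decomposition described in the abstract can be illustrated with a deliberately simplified, non-learned sketch. This is not the paper's actual architecture; both functions below (`semantic_inpaint`, `appearance_synthesis`) and the nearest-neighbour label propagation are hypothetical stand-ins that only mirror the idea of first completing a semantic (screentone-label) map inside the hole, then rendering appearance from the completed semantics.

```python
import numpy as np

def semantic_inpaint(labels, mask):
    """Phase 1 (illustrative stand-in): fill masked semantic labels by
    propagating the nearest known label into the hole, one ring at a time."""
    labels = labels.copy()
    unknown = mask.copy()
    while unknown.any():
        progressed = False
        for y, x in zip(*np.where(unknown)):
            # Copy the label of any 4-neighbour that is already known.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1]
                        and not unknown[ny, nx]):
                    labels[y, x] = labels[ny, nx]
                    unknown[y, x] = False
                    progressed = True
                    break
        if not progressed:  # hole has no known boundary; give up
            break
    return labels

def appearance_synthesis(labels, tone_patterns):
    """Phase 2 (illustrative stand-in): render each semantic label by tiling
    its screentone pattern over the pixels carrying that label."""
    h, w = labels.shape
    out = np.zeros((h, w), dtype=float)
    for lab, pattern in tone_patterns.items():
        ph, pw = pattern.shape
        tiled = np.tile(pattern, (h // ph + 1, w // pw + 1))[:h, :w]
        out[labels == lab] = tiled[labels == lab]
    return out
```

A usage example: corrupt a two-region label map, restore it with phase 1, then render a checkerboard screentone for region 0 and solid white for region 1 with phase 2. The real method replaces both hand-written steps with learned networks operating on disentangled line and screentone features.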
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/1613
DOI: 10.1145/3450626.3459822
CIHE Affiliated Publication: Yes
Appears in Collections: CIS Publication