|Title:||Deep extraction of manga structural lines|
|Author(s):||Liu, Xueting; Wong, T.-T.|
|Issue Date:||2017|
|Publisher:||Association for Computing Machinery|
|Journal:||ACM Transactions on Graphics|
|Volume:||36|
|Issue:||4|
|Abstract:|
Extraction of structural lines from pattern-rich manga is a crucial step in migrating legacy manga to the digital domain. Unfortunately, it is very challenging to distinguish structural lines from arbitrary, highly structured, black-and-white screen patterns. In this paper, we present a novel data-driven approach to identify structural lines in pattern-rich manga, with no assumption on the patterns. The method is based on convolutional neural networks. To suit our purpose, we propose a deep network model that handles the large variety of screen patterns and improves output accuracy. We also develop an efficient and effective way to generate a rich set of training data pairs. Our method suppresses arbitrary screen patterns regardless of whether these patterns are regular, irregular, tone-varying, or even pictorial, and regardless of their scales. It outputs clear and smooth structural lines even when these lines are contaminated by and immersed in complex patterns. We have evaluated our method on a large number of manga of various drawing styles. Our method substantially outperforms state-of-the-art methods in terms of visual quality. We also demonstrate its potential in various manga applications, including manga colorization, manga retargeting, and 2.5D manga generation.
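The abstract mentions generating training data pairs for the network. One plausible way to build such pairs is to composite screen patterns into clean line art, so that the screened image serves as the network input and the original line art as the target. The sketch below illustrates this idea in NumPy; the function `make_training_pair` and its compositing rule (darkest-pixel blending inside a mask) are hypothetical illustrations, not the paper's actual procedure.

```python
import numpy as np

def make_training_pair(line_art, pattern, mask):
    """Composite a screen pattern into clean line art to form a
    (screened input, clean target) training pair.

    line_art, pattern, mask: float arrays in [0, 1], same shape,
    where 0 is black and 1 is white. Hypothetical sketch only --
    the paper's real data-generation pipeline is not shown here.
    """
    # Inside the masked region, blend by taking the darker of the
    # line art and the pattern; outside it, keep the line art as-is.
    screened = np.minimum(line_art, np.where(mask > 0.5, pattern, 1.0))
    # The network would learn to map `screened` back to `line_art`.
    return screened, line_art
```

In this toy formulation, dark strokes survive the compositing (the minimum keeps them black), while white regions of the line art are filled with the pattern, which is the situation the network must learn to undo.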
|URI:||https://repository.cihe.edu.hk/jspui/handle/cihe/438|
|DOI:||10.1145/3072959.3073675|
|CIHE Affiliated Publication:||No|
|Appears in Collections:||CIS Publication|