Please use this identifier to cite or link to this item: https://repository.cihe.edu.hk/jspui/handle/cihe/4707
DC Field | Value | Language
dc.contributor.author | Liu, Xueting | en_US
dc.contributor.other | Cao, Y. | -
dc.contributor.other | Meng, X. | -
dc.contributor.other | Mok, P. Y. | -
dc.contributor.other | Lee, T.-Y. | -
dc.contributor.other | Li, P. | -
dc.date.accessioned | 2025-04-30T09:49:49Z | -
dc.date.available | 2025-04-30T09:49:49Z | -
dc.date.issued | 2024 | -
dc.identifier.uri | https://repository.cihe.edu.hk/jspui/handle/cihe/4707 | -
dc.description.abstract | Being essential in animation creation, colorizing anime line drawings is usually a tedious and time-consuming manual task. Reference-based line drawing colorization offers an intuitive way to colorize target line drawings automatically using reference images. The prevailing approaches are based on generative adversarial networks (GANs), yet these methods still cannot generate results of a quality comparable to manually colored ones. In this article, a new approach, AnimeDiffusion, is proposed that uses hybrid diffusion to automatically colorize anime face line drawings. This is the first attempt to apply a diffusion model to reference-based colorization, which demands a high level of control over the image synthesis process. To this end, a hybrid end-to-end training strategy is designed: in phase 1, the diffusion model is trained with classifier-free guidance, and in phase 2, the color tone is efficiently updated with a target reference color image. The model learns denoising and structure capture in phase 1 and more accurate color information in phase 2. This hybrid training strategy accelerates network convergence and improves colorization performance. AnimeDiffusion generates colorization results with semantic correspondence and color consistency, and it generalizes to line drawings of different line styles. To train and evaluate colorization methods, an anime face line drawing colorization benchmark dataset, containing 31,696 training samples and 579 test samples, is introduced and shared. Extensive experiments and user studies demonstrate that AnimeDiffusion outperforms state-of-the-art GAN-based methods and another diffusion-based model, both quantitatively and qualitatively. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartof | IEEE Transactions on Visualization and Computer Graphics | en_US
dc.title | AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models | en_US
dc.type | journal article | en_US
dc.identifier.doi | 10.1109/TVCG.2024.3357568 | -
dc.contributor.affiliation | Yam Pak Charitable Foundation School of Computing and Information Sciences | en_US
dc.relation.issn | 1941-0506 | en_US
dc.description.volume | 30 | en_US
dc.description.issue | 10 | en_US
dc.description.startpage | 6956 | en_US
dc.description.endpage | 6969 | en_US
dc.cihe.affiliated | Yes | -
item.grantfulltext | none | -
item.languageiso639-1 | en | -
item.openairetype | journal article | -
item.fulltext | No Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.cerifentitytype | Publications | -
crisitem.author.dept | Yam Pak Charitable Foundation School of Computing and Information Sciences | -
Appears in Collections: CIS Publication
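
The abstract above mentions that phase 1 trains the diffusion model with classifier-free guidance. For orientation, below is a minimal, self-contained sketch of how classifier-free guidance typically enters a DDPM-style sampling loop conditioned on a line drawing and a reference color image. Everything here (the TinyDenoiser module, the zeroed-condition convention for the unconditional branch, the noise schedule, and the guidance weight) is an illustrative assumption, not the authors' implementation.

```python
# A minimal sketch of classifier-free guidance (CFG) in a DDPM sampling loop,
# loosely following the two-signal conditioning described in the abstract
# (target line drawing + reference color image). Module names, shapes, and
# hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Placeholder noise-prediction network: concatenates the noisy image,
    line drawing, and reference along channels. A stand-in for the paper's
    actual architecture, which is not specified in this record."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Conv2d(channels * 3, channels, kernel_size=3, padding=1)

    def forward(self, x_t, t, line, ref):
        # t is unused in this toy model; a real denoiser embeds the timestep.
        return self.net(torch.cat([x_t, line, ref], dim=1))

@torch.no_grad()
def sample_cfg(model, line, ref, steps=50, guidance=3.0):
    """DDPM-style ancestral sampling with classifier-free guidance. The
    unconditional branch drops the conditions by zeroing them, one common
    convention (an assumption here)."""
    betas = torch.linspace(1e-4, 0.02, steps)          # linear noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(ref)                          # start from pure noise
    for i in reversed(range(steps)):
        t = torch.full((x.shape[0],), i)
        eps_cond = model(x, t, line, ref)
        eps_uncond = model(x, t, torch.zeros_like(line), torch.zeros_like(ref))
        # Classifier-free guidance: extrapolate toward the conditional score.
        eps = eps_uncond + guidance * (eps_cond - eps_uncond)
        # DDPM posterior mean, then add noise except at the final step.
        coef = betas[i] / torch.sqrt(1.0 - alpha_bar[i])
        x = (x - coef * eps) / torch.sqrt(alphas[i])
        if i > 0:
            x = x + torch.sqrt(betas[i]) * torch.randn_like(x)
    return x

# Usage: colorize a 64x64 stand-in line drawing guided by a reference image.
model = TinyDenoiser()
line = torch.randn(1, 3, 64, 64)   # stand-in for a line drawing tensor
ref = torch.randn(1, 3, 64, 64)    # stand-in for the reference color image
out = sample_cfg(model, line, ref)
print(out.shape)                    # torch.Size([1, 3, 64, 64])
```

In practice the guidance weight trades fidelity to the conditions against sample diversity; the value used in the paper, like its phase-2 color-tone update, is not given in this record.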