Please use this identifier to cite or link to this item: https://repository.cihe.edu.hk/jspui/handle/cihe/4707
Title: AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models
Author(s): Liu, Xueting
Cao, Y.
Meng, X.
Mok, P. Y.
Lee, T.-Y.
Li, P.
Issue Date: 2024
Publisher: IEEE
Journal: IEEE Transactions on Visualization and Computer Graphics 
Volume: 30
Issue: 10
Start page: 6956
End page: 6969
Abstract: 
Colorizing anime line drawings is an essential step in animation production, yet it remains a tedious, time-consuming manual task. Reference-based line drawing colorization offers an intuitive way to colorize a target line drawing automatically using a reference image. Prevailing approaches are based on generative adversarial networks (GANs), but they still cannot generate results of a quality comparable to manually colored images. In this article, a new approach, AnimeDiffusion, is proposed for the automatic colorization of anime face line drawings via hybrid diffusion training. This is the first attempt to apply a diffusion model to reference-based colorization, a task that demands fine-grained control over the image synthesis process. To this end, a hybrid end-to-end training strategy is designed: in phase 1 the diffusion model is trained with classifier-free guidance, and in phase 2 the color tone is efficiently updated with a target reference colored image. The model acquires denoising and structure-capturing ability in phase 1 and learns more accurate color information in phase 2. This hybrid training strategy accelerates network convergence and improves colorization performance. AnimeDiffusion generates colorization results with semantic correspondence and color consistency, and it also generalizes to line drawings of different line styles. To train and evaluate colorization methods, an anime face line drawing colorization benchmark dataset, containing 31,696 training images and 579 test images, is introduced and shared. Extensive experiments and user studies demonstrate that the proposed AnimeDiffusion outperforms state-of-the-art GAN-based methods and another diffusion-based model, both quantitatively and qualitatively.
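The two-phase hybrid strategy described in the abstract can be sketched in PyTorch. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the TinyDenoiser module, the drop_prob parameter, the noise schedule, and the exact form of the phase-2 color-tone loss are all assumptions standing in for the paper's conditional U-Net and training details.

```python
# Sketch of hybrid two-phase training for reference-based colorization with
# a diffusion model. Names and loss forms are illustrative assumptions.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a conditional U-Net: predicts the noise from the noisy
    image concatenated with the line-drawing and reference conditions."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels * 3, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x_t, line, ref):
        return self.net(torch.cat([x_t, line, ref], dim=1))

T = 1000                                     # assumed number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def phase1_loss(model, x0, line, ref, drop_prob=0.1):
    """Phase 1: standard denoising objective trained with classifier-free
    guidance, i.e. the reference condition is randomly dropped (here for the
    whole batch, for brevity) so the model also learns an unconditional branch."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    ab = alpha_bars[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise
    if torch.rand(()) < drop_prob:           # drop the reference condition
        ref = torch.zeros_like(ref)
    return nn.functional.mse_loss(model(x_t, line, ref), noise)

def phase2_loss(model, x0, line, ref):
    """Phase 2 (assumed form): efficiently update the color tone toward the
    reference. Approximated here as a one-step x0 reconstruction from mild
    noise, penalized against the reference colored image."""
    b = x0.shape[0]
    t = torch.full((b,), 50)                 # small t: mostly-clean input
    noise = torch.randn_like(x0)
    ab = alpha_bars[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise
    eps = model(x_t, line, ref)
    x0_hat = (x_t - (1 - ab).sqrt() * eps) / ab.sqrt()   # implied clean image
    return nn.functional.l1_loss(x0_hat, ref)            # pull tone to reference

model = TinyDenoiser()
x0 = torch.randn(2, 3, 32, 32)    # colored ground-truth image
line = torch.randn(2, 3, 32, 32)  # line drawing condition
ref = torch.randn(2, 3, 32, 32)   # reference colored image
loss = phase1_loss(model, x0, line, ref) + phase2_loss(model, x0, line, ref)
loss.backward()
```

In this reading, phase 1 gives the model its denoising and structure-capturing ability, while the cheap phase-2 update injects the reference's color information without retraining from scratch, which is consistent with the abstract's claim of faster convergence.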
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/4707
DOI: 10.1109/TVCG.2024.3357568
CIHE Affiliated Publication: Yes
Appears in Collections: CIS Publication

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.