Title: Appearance-preserved portrait-to-anime translation via proxy-guided domain adaptation
Author(s): Li, Chengze
Liu, Xueting
Xiao, W.
Xu, C.
Mai, J.
Xu, X.
Li, Y.
He, S.
Issue Date: 2022
Publisher: IEEE
Journal: IEEE Transactions on Visualization and Computer Graphics 
Abstract: Converting a human portrait to anime style is a desirable but challenging problem. Existing methods fail to resolve it because the large inherent gap between the two domains cannot be overcome by a simple direct mapping; as a result, they struggle to preserve the appearance features of the original photo. In this paper, we discover an intermediate domain, the coser portrait (portraits of humans costumed as anime characters), that helps bridge this gap. It alleviates the learning ambiguity and eases the mapping difficulty in a progressive manner. Specifically, we start by learning the mapping between coser and anime portraits, and present a proxy-guided domain adaptation learning scheme with three progressive adaptation stages to shift the initial model to the human portrait domain. In this way, our model can generate visually pleasant anime portraits with well-preserved appearances given a human portrait. Our model adopts a disentangled design, breaking the translation problem down into two subtasks: face deformation and portrait stylization. This further elevates the generation quality. Extensive experimental results show that our model achieves visually compelling translation with better appearance preservation and performs favorably against existing methods both qualitatively and quantitatively.
DOI: 10.1109/TVCG.2022.3228707
CIHE Affiliated Publication: Yes
Appears in Collections:CIS Publication

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.