Please use this identifier to cite or link to this item:
https://repository.cihe.edu.hk/jspui/handle/cihe/4212
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Chengze | en_US |
dc.contributor.author | Liu, Xueting | en_US |
dc.contributor.other | Xiao, W. | - |
dc.contributor.other | Xu, C. | - |
dc.contributor.other | Mai, J. | - |
dc.contributor.other | Xu, X. | - |
dc.contributor.other | Li, Y. | - |
dc.contributor.other | He, S. | - |
dc.date.accessioned | 2023-07-07T01:51:58Z | - |
dc.date.available | 2023-07-07T01:51:58Z | - |
dc.date.issued | 2022 | - |
dc.identifier.uri | https://repository.cihe.edu.hk/jspui/handle/cihe/4212 | - |
dc.description.abstract | Converting a human portrait to anime style is a desirable but challenging problem. Existing methods fail to resolve this problem due to the large inherent gap between the two domains, which cannot be overcome by a simple direct mapping. For this reason, these methods struggle to preserve the appearance features of the original photo. In this paper, we discover an intermediate domain, the coser portrait (portraits of humans cosplaying as anime characters), that helps bridge this gap. It alleviates the learning ambiguity and eases the mapping difficulty in a progressive manner. Specifically, we start by learning the mapping between coser and anime portraits, and present a proxy-guided domain adaptation learning scheme with three progressive adaptation stages to shift the initial model to the human portrait domain. In this way, our model can generate visually pleasant anime portraits with well-preserved appearances given a human portrait. Our model adopts a disentangled design, breaking the translation problem down into two specific subtasks: face deformation and portrait stylization. This further elevates the generation quality. Extensive experimental results show that our model achieves visually compelling translation with better appearance preservation and performs favorably against existing methods both qualitatively and quantitatively. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartof | IEEE Transactions on Visualization and Computer Graphics | en_US |
dc.title | Appearance-preserved portrait-to-anime translation via proxy-guided domain adaptation | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.1109/TVCG.2022.3228707 | - |
dc.contributor.affiliation | School of Computing and Information Sciences | en_US |
dc.contributor.affiliation | School of Computing and Information Sciences | en_US |
dc.relation.issn | 1941-0506 | en_US |
dc.cihe.affiliated | Yes | - |
item.languageiso639-1 | en | - |
item.fulltext | No Fulltext | - |
item.openairetype | journal article | - |
item.grantfulltext | none | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.cerifentitytype | Publications | - |
crisitem.author.dept | Yam Pak Charitable Foundation School of Computing and Information Sciences | - |
crisitem.author.dept | Yam Pak Charitable Foundation School of Computing and Information Sciences | - |
Appears in Collections: CIS Publication
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.