Title: Learn to Sketch: A fast approach for universal photo sketch
Author(s): Siu, Wan Chi; Chan, Anthony Hing-Hung
Issue Date: 2021
Publisher: IEEE
Related Publication(s): 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) Proceedings
Start page: 1450
End page: 1457
Abstract:
Rendering real photos as abstract sketches is an interesting application that helps us understand the key features of an image. In this paper, we propose a universal photo-sketching model based on a deep convolutional neural network. Prior works often cast this problem as edge or contour detection. However, edges or contours may not exactly reflect the boundaries of the contents of a photo, and they fail to reveal the occlusions that separate objects from one another. We resolve this problem by proposing Photo2Sketch and Sketch2Photo networks that form a loop to bridge the gap between photos and sketches. We introduce relevant sketch references as indicators to supervise sketch generation. We also introduce an adaptive sketching process that generates drawings with confidence estimates, so that multiple sketches can be obtained. Experimental results show that our proposed method surpasses other state-of-the-art methods in both qualitative and quantitative measures.
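The Photo2Sketch / Sketch2Photo loop described in the abstract resembles a cycle-consistency objective: a photo is mapped to a sketch and back, and the reconstruction error supervises both directions. The following is a minimal illustrative sketch, not the authors' implementation: the two networks are replaced by hypothetical linear maps, and an L1 cycle-reconstruction loss closes the photo → sketch → photo loop.

```python
import numpy as np

# Hypothetical stand-ins for the paper's Photo2Sketch and Sketch2Photo
# networks: fixed linear maps so the cycle idea runs end to end.
rng = np.random.default_rng(0)
W_p2s = rng.standard_normal((16, 16)) * 0.1
W_s2p = np.linalg.pinv(W_p2s)  # rough inverse, so the loop nearly closes

def photo2sketch(photo):
    # Photo -> sketch direction (placeholder for the real CNN).
    return photo @ W_p2s

def sketch2photo(sketch):
    # Sketch -> photo direction (placeholder for the real CNN).
    return sketch @ W_s2p

def cycle_loss(photo):
    # photo -> sketch -> reconstructed photo; L1 cycle-consistency loss.
    sketch = photo2sketch(photo)
    recon = sketch2photo(sketch)
    return float(np.mean(np.abs(recon - photo)))

# A batch of 4 flattened "photos" of 16 features each.
photo = rng.standard_normal((4, 16))
loss = cycle_loss(photo)
```

In the actual method the two mappings are deep networks trained jointly, and the loop is supplemented by sketch-reference supervision; the linear maps here only demonstrate how the cycle term is computed.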
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/3812
CIHE Affiliated Publication: Yes
Appears in Collections: CIS Publication