Title: Deep visual sharing with colorblind
Author(s): Liu, Xueting; Hu, X.
Issue Date: 2019
Publisher: IEEE
Journal: IEEE Transactions on Computational Imaging
Volume: 5
Issue: 4
Start page: 649
End page: 659
Abstract:
Visual sharing between color vision deficiency (CVD) and normal-vision audiences is challenging because multiple binocular visual requirements must be satisfied simultaneously: the result must offer a color-distinguishable and binocularly fusible visual experience to CVD audiences without degrading the experience of normal-vision audiences. Existing methods demonstrate the feasibility of visual sharing but are ill-suited to practical use due to their unstable and time-consuming optimization. In this paper, we propose the first deep-learning-based solution to this visual sharing problem; it outperforms the existing solution on all evaluation metrics. To achieve this, we formulate the binocular image generation problem as the generation of a difference image, which effectively enforces the binocular constraints. We also retain only high-quality training data and enrich its variety by intentionally synthesizing various confusing color combinations. With these, we train a high-quality neural network model. Through multiple quantitative measurements and a user study, we demonstrate that this learning-based approach significantly improves the quality of the generated results while running fast.
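To make the difference-image formulation concrete, the following is a minimal sketch (not the paper's actual network or code): a model predicts a difference image D for an input image I, and the second view of the binocular pair is formed as I + D, clipped to the valid intensity range. The function name `binocular_pair` and the placeholder predictor are hypothetical illustrations.

```python
import numpy as np

def binocular_pair(image, predict_difference):
    """Form a binocular pair (I, I + D) from a predicted difference image D.

    `predict_difference` is a hypothetical stand-in for a trained network:
    any callable mapping an HxWx3 float image in [0, 1] to a difference
    image of the same shape.
    """
    diff = predict_difference(image)
    # Generating only a small difference and adding it back to the original
    # keeps the two views close, which is what the binocular-fusibility
    # constraint requires.
    second_view = np.clip(image + diff, 0.0, 1.0)
    return image, second_view

# Toy usage: a zero "network" yields an identical (trivially fusible) pair.
img = np.random.rand(4, 4, 3)
left, right = binocular_pair(img, lambda x: np.zeros_like(x))
```

Predicting the difference rather than the second view directly means the network's output magnitude itself bounds how far the two views can drift apart.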
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/828
DOI: 10.1109/TCI.2019.2908291
CIHE Affiliated Publication: Yes
Appears in Collections: CIS Publication
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.