Title: Deep binocular tone mapping
Author(s): Liu, Xueting; Zhang, Z.
Issue Date: 2019
Publisher: Springer
Journal: The Visual Computer
Volume: 35
Start page: 997
End page: 1011
Abstract:
Binocular tone mapping has been studied in previous works to generate a fusible pair of LDR images that conveys more visual content than a single LDR image. However, existing methods all build on monocular tone mapping operators, which greatly restricts the preservation of local details and global contrast in a binocular LDR pair. In this paper, we propose the first deep binocular tone mapping operator, which distributes visual content to an LDR pair more effectively by leveraging the representability and interpretability of deep convolutional neural networks. Based on existing binocular perception models, we also propose novel loss functions that optimize the output pairs in terms of local details, global contrast, content distribution, and binocular fusibility. Our method is validated with qualitative and quantitative evaluations as well as a user study. Statistics show that it outperforms state-of-the-art binocular tone mapping frameworks in both visual quality and time performance.
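For context on the monocular operators the abstract contrasts against, the following is a minimal sketch of a classic global tone mapping operator (the Reinhard photographic operator) that maps an HDR radiance map to a displayable LDR image. This is an illustrative baseline only, not the paper's deep binocular method; the function name and parameters are assumptions.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard tone mapping: HDR (H, W, 3) -> LDR in [0, 1].

    Illustrative monocular baseline only -- NOT the deep binocular
    operator described in the paper.
    """
    # Per-pixel luminance (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Log-average luminance of the scene.
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    # Scale luminance to the target "key", then compress to [0, 1).
    scaled = (key / log_avg) * lum
    ldr_lum = scaled / (1.0 + scaled)
    # Reapply color by scaling each channel with the luminance ratio.
    ratio = ldr_lum / (lum + eps)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)

# Toy HDR input with a wide dynamic range.
hdr = np.random.rand(8, 8, 3) * 100.0
ldr = reinhard_tonemap(hdr)
```

A binocular framework would instead produce two such LDR images (one per eye) with complementary detail, subject to a fusibility constraint; the deep operator in this paper learns that distribution end-to-end.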
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/111
DOI: 10.1007/s00371-019-01669-8
CIHE Affiliated Publication: Yes
Appears in Collections: CIS Publication