Please use this identifier to cite or link to this item:
https://repository.cihe.edu.hk/jspui/handle/cihe/1619
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Siu, Wan Chi | - |
dc.contributor.author | Liu, Zhisong | - |
dc.contributor.other | Chan, Y. L. | - |
dc.date.accessioned | 2021-11-01T05:51:50Z | - |
dc.date.available | 2021-11-01T05:51:50Z | - |
dc.date.issued | 2021 | - |
dc.identifier.uri | https://repository.cihe.edu.hk/jspui/handle/cihe/1619 | - |
dc.description.abstract | Super-resolving (SR) video is more challenging than image super-resolution because of the demanding computation time. To enlarge a low-resolution video, the temporal relationship among frames must be fully exploited. We can model video SR as a multi-frame SR problem and use deep learning methods to estimate the spatial and temporal information. This paper proposes a lighter residual network, based on multi-stage back projection, for multi-frame SR. We improve the back-projection-based residual block by adding weights for adaptive feature tuning, and add global and local connections to explore deeper feature representations. We jointly learn spatial-temporal feature maps by using the proposed Spatial Convolution Packing scheme as an attention mechanism to extract more information from both the spatial and temporal domains. Unlike other methods, our proposed network can take multiple low-resolution frames as input and produce multiple super-resolved frames simultaneously. We can then further improve the video SR quality by self-ensemble enhancement, to handle videos with different motions and distortions. Extensive experimental results show that our proposed approaches give a large improvement over other state-of-the-art video SR methods. Compared to recent CNN-based video SR works, our approaches can save up to 60% of the computation time and achieve a 0.6 dB PSNR improvement. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartof | IEEE Access | en_US |
dc.title | Efficient video super-resolution via hierarchical temporal residual networks | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.1109/ACCESS.2021.3098326 | - |
dc.contributor.affiliation | School of Computing and Information Sciences | en_US |
dc.relation.issn | 2169-3536 | en_US |
dc.description.volume | 9 | en_US |
dc.description.startpage | 106049 | en_US |
dc.description.endpage | 106064 | en_US |
dc.cihe.affiliated | Yes | - |
item.languageiso639-1 | en | - |
item.fulltext | With Fulltext | - |
item.openairetype | journal article | - |
item.grantfulltext | open | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.cerifentitytype | Publications | - |
crisitem.author.dept | Yam Pak Charitable Foundation School of Computing and Information Sciences | - |
crisitem.author.dept | Yam Pak Charitable Foundation School of Computing and Information Sciences | - |
crisitem.author.orcid | 0000-0001-8280-0367 | - |
crisitem.author.orcid | 0000-0003-4507-3097 | - |
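The abstract mentions self-ensemble enhancement as a final refinement step. A minimal sketch of the standard geometric self-ensemble idea is shown below, assuming the common flip/rotation formulation: the input frame is transformed into its 8 flip/rotation variants, each is super-resolved, the transforms are inverted on the outputs, and the results are averaged. The `upscale_nn` function is a hypothetical stand-in for the trained network, not the paper's model.

```python
import numpy as np

def upscale_nn(frame, scale=2):
    """Stand-in SR model: nearest-neighbour upsampling.
    (Placeholder for the trained network; assumption for illustration.)"""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def self_ensemble_sr(frame, model, scale=2):
    """Geometric self-ensemble: average the model's outputs over the
    8 rotation/flip variants of the input frame."""
    outputs = []
    for k in range(4):                      # the 4 right-angle rotations
        for flip in (False, True):          # with/without horizontal flip
            x = np.rot90(frame, k)
            if flip:
                x = np.fliplr(x)
            y = model(x, scale)
            # invert the transform on the output (flip first, then rotate back)
            if flip:
                y = np.fliplr(y)
            y = np.rot90(y, -k)
            outputs.append(y)
    return np.mean(outputs, axis=0)

lr = np.arange(16.0).reshape(4, 4)
sr = self_ensemble_sr(lr, upscale_nn)       # 8x8 ensembled output
```

Because nearest-neighbour upsampling commutes with rotations and flips, the ensembled output here equals the direct output; with a real network the eight variants differ slightly, and averaging them typically improves PSNR on frames with varied motions and distortions.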
Appears in Collections: | CIS Publication |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
View Online | | 130 B | HTML | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.