Please use this identifier to cite or link to this item: https://repository.cihe.edu.hk/jspui/handle/cihe/4427
DC Field | Value | Language
dc.contributor.author | Li, Chengze | en_US
dc.contributor.author | Liu, Xueting | en_US
dc.contributor.other | Xu, C. | -
dc.contributor.other | Xu, X. | -
dc.contributor.other | Zhao, N. | -
dc.contributor.other | Cai, W. | -
dc.contributor.other | Zhang, H. | -
dc.date.accessioned | 2024-03-26T08:13:00Z | -
dc.date.available | 2024-03-26T08:13:00Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | https://repository.cihe.edu.hk/jspui/handle/cihe/4427 | -
dc.description.abstract | Using a sequence of discrete still images to tell a story or introduce a process has become a tradition in the field of digital visual media. With the surge in these media and the requirements of downstream tasks, acquiring their main topics or genres in a very short time is urgently needed. As a representative form of such media, comics have enjoyed a huge boom since going digital. However, unlike natural images, comic images are divided into panels, and the images are not visually consistent from page to page. Therefore, existing works tailored for natural images perform poorly in analyzing comics. Since the identification of comic genres is tied to the overall story plotting, a long-term understanding that fully exploits the semantic interactions between multi-level comic fragments is needed. In this paper, we propose P²Comic, a Panel-Page-aware Comic genre classification model, which takes page sequences of comics as input and produces class-wise probabilities. P²Comic utilizes detected panel boxes to extract panel representations and deploys self-attention to construct panel-page understanding, assisted with interdependent classifiers to model label correlation. We develop the first comic dataset for the task of comic genre classification with multi-genre labels. Experiments show that our approach outperforms state-of-the-art methods on related tasks. We also validate that our network extends to the multi-modal scenario. Finally, we demonstrate the practicability of our approach by giving effective genre prediction results for whole comic books. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartof | IEEE Transactions on Image Processing | en_US
dc.title | Panel-page-aware comic genre understanding | en_US
dc.type | journal article | en_US
dc.identifier.doi | 10.1109/TIP.2023.3270105 | -
dc.contributor.affiliation | School of Computing and Information Sciences | en_US
dc.contributor.affiliation | School of Computing and Information Sciences | en_US
dc.relation.issn | 1941-0042 | en_US
dc.description.volume | 32 | en_US
dc.description.startpage | 2636 | en_US
dc.description.endpage | 2648 | en_US
dc.cihe.affiliated | Yes | -
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
item.languageiso639-1 | en | -
item.openairetype | journal article | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.cerifentitytype | Publications | -
crisitem.author.dept | School of Computing and Information Sciences | -
crisitem.author.dept | School of Computing and Information Sciences | -
Appears in Collections: CIS Publication
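
The abstract above describes P²Comic's pipeline at a high level: panel boxes are detected on each page, panel features are aggregated with self-attention into page- and book-level representations, and a multi-label head yields class-wise genre probabilities. The PyTorch sketch below is a minimal illustration of that kind of panel-page pipeline under stated assumptions; the module names, feature dimensions, mean-pooling choices, and the plain linear multi-label head (standing in for the paper's interdependent classifiers) are all hypothetical and do not reproduce the authors' implementation.

```python
# Illustrative sketch only: a minimal panel-page-aware multi-label genre classifier.
# All names and dimensions are hypothetical; this is NOT the authors' P²Comic code.
import torch
import torch.nn as nn


class PanelPageGenreClassifier(nn.Module):
    def __init__(self, panel_feat_dim=2048, d_model=512, num_genres=10,
                 num_heads=8, num_layers=2):
        super().__init__()
        # Project per-panel features (e.g. ROI-pooled CNN features from
        # detected panel boxes) into a shared embedding space.
        self.panel_proj = nn.Linear(panel_feat_dim, d_model)
        # Self-attention over the panels of one page.
        panel_layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.panel_encoder = nn.TransformerEncoder(panel_layer, num_layers)
        # Self-attention over the page sequence of one comic.
        page_layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.page_encoder = nn.TransformerEncoder(page_layer, num_layers)
        # Simple multi-label head: one logit per genre (sigmoid, not softmax);
        # the paper models label correlation with interdependent classifiers instead.
        self.classifier = nn.Linear(d_model, num_genres)

    def forward(self, panel_feats):
        # panel_feats: (batch, pages, panels_per_page, panel_feat_dim)
        b, p, k, _ = panel_feats.shape
        x = self.panel_proj(panel_feats)                  # (b, p, k, d_model)
        x = self.panel_encoder(x.view(b * p, k, -1))      # attend within each page
        page_repr = x.mean(dim=1).view(b, p, -1)          # pool panels -> page tokens
        pages = self.page_encoder(page_repr)              # attend across pages
        book_repr = pages.mean(dim=1)                     # pool pages -> comic
        return self.classifier(book_repr)                 # class-wise logits


# Usage: 2 comics, 16 pages each, 6 panel boxes per page, 2048-d panel features.
logits = PanelPageGenreClassifier()(torch.randn(2, 16, 6, 2048))
probs = torch.sigmoid(logits)  # independent per-genre probabilities
```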
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.