Please use this identifier to cite or link to this item:
https://repository.cihe.edu.hk/jspui/handle/cihe/4430
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Hui | en_US |
dc.contributor.other | Peng, Z. | - |
dc.contributor.other | Jia, Y. | - |
dc.contributor.other | Hou, J. | - |
dc.date.accessioned | 2024-03-26T08:39:49Z | - |
dc.date.available | 2024-03-26T08:39:49Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | https://repository.cihe.edu.hk/jspui/handle/cihe/4430 | - |
dc.description.abstract | Existing graph clustering networks rely heavily on a predefined, fixed graph and can fail when that initial graph does not accurately capture the topological structure of the data in the embedding space. To address this issue, we propose a novel clustering network, the Embedding-Induced Graph Refinement Clustering Network (EGRC-Net), which uses the learned embedding to adaptively refine the initial graph and enhance clustering performance. To begin, we leverage both semantic and topological information by employing a vanilla auto-encoder and a graph convolution network, respectively, to learn a latent feature representation. Subsequently, we use the local geometric structure within the feature embedding space to construct an adjacency matrix for the graph. This adjacency matrix is dynamically fused with the initial one using our proposed fusion architecture. To train the network in an unsupervised manner, we minimize the Jeffreys divergence between multiple derived distributions. Additionally, we introduce an improved approximate personalized propagation of neural predictions to replace the standard graph convolution network, enabling EGRC-Net to scale effectively. In extensive experiments on nine widely used benchmark datasets, our proposed methods consistently outperform several state-of-the-art approaches. Notably, EGRC-Net achieves an improvement of more than 11.99% in Adjusted Rand Index (ARI) over the best baseline on the DBLP dataset. Furthermore, our scalable approach exhibits a 10.73% gain in ARI while reducing memory usage by 33.73% and decreasing running time by 19.71%. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartof | IEEE Transactions on Image Processing | en_US |
dc.title | EGRC-Net: Embedding-induced graph refinement clustering network | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.1109/TIP.2023.3333557 | - |
dc.contributor.affiliation | School of Computing and Information Sciences | en_US |
dc.relation.issn | 1941-0042 | en_US |
dc.description.volume | 32 | en_US |
dc.description.startpage | 6457 | en_US |
dc.description.endpage | 6468 | en_US |
dc.cihe.affiliated | Yes | - |
item.languageiso639-1 | en | - |
item.fulltext | No Fulltext | - |
item.openairetype | journal article | - |
item.grantfulltext | none | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.cerifentitytype | Publications | - |
crisitem.author.dept | Yam Pak Charitable Foundation School of Computing and Information Sciences | - |
Appears in Collections: CIS Publication
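
The abstract above describes the method at a high level: an adjacency matrix is induced from the learned embedding, fused with the predefined initial graph, and the network is trained by minimizing a Jeffreys divergence between derived distributions. The following Python sketch illustrates only these two ingredients under simplifying assumptions; it is not the authors' implementation, and the names `knn_adjacency`, `fuse_graphs`, `jeffreys_divergence`, and the fixed mixing weight `alpha` are illustrative stand-ins for the paper's learned fusion architecture and distributions.

```python
# Minimal sketch (not the EGRC-Net code) of (1) an embedding-induced kNN adjacency
# fused with an initial graph, and (2) the Jeffreys (symmetric KL) divergence.
import numpy as np

def knn_adjacency(z: np.ndarray, k: int = 10) -> np.ndarray:
    """Symmetric k-nearest-neighbour adjacency built from embeddings z (n x d)."""
    sq = (z ** 2).sum(axis=1)
    dist = sq[:, None] + sq[None, :] - 2.0 * z @ z.T   # pairwise squared distances
    np.fill_diagonal(dist, np.inf)                     # exclude self-loops
    n = z.shape[0]
    adj = np.zeros((n, n))
    idx = np.argsort(dist, axis=1)[:, :k]              # k nearest neighbours per node
    adj[np.repeat(np.arange(n), k), idx.ravel()] = 1.0
    return np.maximum(adj, adj.T)                      # symmetrise

def fuse_graphs(a_init: np.ndarray, a_emb: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Convex combination of the initial and embedding-induced adjacencies.
    EGRC-Net learns the mixing; a fixed alpha is an illustrative assumption."""
    return alpha * a_init + (1.0 - alpha) * a_emb

def jeffreys_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """J(P, Q) = KL(P||Q) + KL(Q||P), averaged over rows of soft assignments."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    kl_pq = (p * np.log(p / q)).sum(axis=1)
    kl_qp = (q * np.log(q / p)).sum(axis=1)
    return float((kl_pq + kl_qp).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.normal(size=(100, 16))                         # stand-in learned embeddings
    a_init = knn_adjacency(rng.normal(size=(100, 16)))     # stand-in initial graph
    a_refined = fuse_graphs(a_init, knn_adjacency(z), alpha=0.5)
    p = rng.dirichlet(np.ones(5), size=100)                # stand-in derived distributions
    q = rng.dirichlet(np.ones(5), size=100)
    print(a_refined.shape, jeffreys_divergence(p, q))
```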