Please use this identifier to cite or link to this item: https://repository.cihe.edu.hk/jspui/handle/cihe/4705
Title: Superpixel graph contrastive clustering with semantic-invariant augmentations for hyperspectral images
Author(s): Liu, Hui; Qi, J.; Jia, Y.; Hou, J.
Issue Date: 2024
Publisher: IEEE
Journal: IEEE Transactions on Circuits and Systems for Video Technology 
Volume: 34
Issue: 11
Start page: 11360
End page: 11372
Abstract: 
Hyperspectral image (HSI) clustering is an important but challenging task. State-of-the-art (SOTA) methods usually rely on superpixels; however, they do not fully exploit the spatial and spectral information in the 3-D structure of HSI, and their optimization targets are not clustering-oriented. In this work, we first use hybrid 3-D and 2-D convolutional neural networks to extract high-order spatial and spectral features of HSI through pre-training, and then design a superpixel graph contrastive clustering (SPGCC) model to learn discriminative superpixel representations. Reasonable augmented views are crucial for contrastive clustering, and conventional contrastive learning may hurt the cluster structure, since different samples are pushed apart in the embedding space even when they belong to the same class. In SPGCC, we design two semantic-invariant data augmentations for HSI superpixels: pixel sampling augmentation and model weight augmentation. Sample-level alignment and clustering-center-level contrast are then performed for better intra-class similarity and inter-class dissimilarity of superpixel embeddings. We alternate between clustering and network optimization. Experimental results on several HSI datasets verify the advantages of the proposed SPGCC over SOTA methods.
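The two semantic-invariant augmentations and the sample-level alignment described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: function names, the fraction of sampled pixels, and the linear embedding standing in for the 3-D/2-D CNN encoder are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_sampling_augmentation(superpixel_pixels, frac=0.8):
    """Randomly sample a fraction of a superpixel's pixels and average
    their spectra; two independent draws give two views of the same
    superpixel with unchanged semantics (illustrative fraction)."""
    n = superpixel_pixels.shape[0]
    k = max(1, int(frac * n))
    idx = rng.choice(n, size=k, replace=False)
    return superpixel_pixels[idx].mean(axis=0)

def model_weight_augmentation(W, sigma=0.01):
    """Perturb encoder weights slightly, so the same input yields a
    second, semantically equivalent embedding (toy linear encoder)."""
    return W + rng.normal(scale=sigma, size=W.shape)

def sample_level_alignment(z1, z2, eps=1e-8):
    """Mean cosine distance between paired views only (no negatives
    at the sample level), so same-class samples are not pushed apart."""
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + eps)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + eps)
    return 1.0 - (z1 * z2).sum(axis=1).mean()

# Toy superpixel: 50 pixels x 30 spectral bands.
pixels = rng.normal(size=(50, 30))
v1 = pixel_sampling_augmentation(pixels)
v2 = pixel_sampling_augmentation(pixels)

# Toy linear "encoder" and its weight-augmented copy.
W = rng.normal(size=(30, 16))
z1 = np.stack([v1 @ W])
z2 = np.stack([v2 @ model_weight_augmentation(W)])

loss = sample_level_alignment(z1, z2)
```

In the paper, negatives enter only at the clustering-center level, which is why the sample-level term here is a pure alignment (pull-together) loss rather than a full InfoNCE objective.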
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/4705
DOI: 10.1109/TCSVT.2024.3418610
CIHE Affiliated Publication: Yes
Appears in Collections: CIS Publication



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.