ZHENG Zongsheng, WANG Zhenghan, WANG Zhenhua, LU Peng, GAO Meng, HUO Zhijun. 2024. An improved 3D Octave convolution-based method for hyperspectral image classification. Remote Sensing for Natural Resources, 36(4): 82-91. doi: 10.6046/zrzyyg.2023171
An improved 3D Octave convolution-based method for hyperspectral image classification
Corresponding author: WANG Zhenghan
Abstract
Hyperspectral image data are characterized by high dimensionality, sparse data, and rich spatial and spectral information. In spatial-spectral joint classification models, convolution operations over large regions of same-category pixels introduce computational spatial redundancy. Furthermore, 3D convolution fails to sufficiently extract deep spatial texture features, and serial attention mechanisms cannot fully account for spatial-spectral correlations. This study proposed an improved 3D Octave convolution-based model for hyperspectral image classification. First, the input hyperspectral images were divided into high- and low-frequency feature maps using an improved 3D Octave convolution module, reducing spatial redundancy and extracting multi-scale spatial-spectral features; concurrently, a cross-layer fusion strategy was introduced to enhance the extraction of shallow spatial texture features and spectral features. Subsequently, 2D convolution was used to extract deep spatial texture features and fuse spectral features. Finally, a 3D attention mechanism focused on and activated effective features through interactions across dimensions, enhancing the performance and robustness of the network model. The results indicate that, owing to the adequate extraction of effective joint spatial-spectral features, the overall accuracy (OA), Kappa coefficient, and average accuracy (AA) reached 99.32%, 99.13%, and 99.15%, respectively, when 10% of the Indian Pines (IP) dataset was used for training, and 99.61%, 99.44%, and 99.08%, respectively, when 3% of the Pavia University (PU) dataset was used for training. Compared with five mainstream classification models, the proposed method achieves higher classification accuracy.
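The high-/low-frequency split described in the abstract can be illustrated with a minimal 3D Octave convolution layer in PyTorch. This is a hedged sketch of the general Octave-convolution idea, not the authors' exact module: the channel-split ratio `alpha`, the pooling/upsampling choices, and the decision to halve only the spatial (not spectral) resolution of the low-frequency branch are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Octave3d(nn.Module):
    """Sketch of a 3D Octave convolution (illustrative, not the paper's exact layer).

    Channels are split into a high-frequency branch at full spatial resolution
    and a low-frequency branch at half spatial resolution, which reduces the
    spatial redundancy incurred over large regions of same-category pixels.
    """

    def __init__(self, in_ch, out_ch, alpha=0.5, k=3):
        super().__init__()
        self.in_lo = int(in_ch * alpha)          # low-frequency input channels
        self.in_hi = in_ch - self.in_lo          # high-frequency input channels
        out_lo = int(out_ch * alpha)
        out_hi = out_ch - out_lo
        p = k // 2
        # Four intra-/inter-frequency convolution paths.
        self.h2h = nn.Conv3d(self.in_hi, out_hi, k, padding=p)
        self.h2l = nn.Conv3d(self.in_hi, out_lo, k, padding=p)
        self.l2l = nn.Conv3d(self.in_lo, out_lo, k, padding=p)
        self.l2h = nn.Conv3d(self.in_lo, out_hi, k, padding=p)

    def forward(self, x_hi, x_lo):
        # x_hi: (B, C_hi, D, H, W); x_lo: (B, C_lo, D, H/2, W/2).
        # D is the spectral axis; only H and W differ between branches.
        hi = self.h2h(x_hi) + F.interpolate(
            self.l2h(x_lo), size=x_hi.shape[2:], mode="nearest"
        )
        # Pool only spatially (kernel (1, 2, 2)) before the high-to-low path.
        lo = self.l2l(x_lo) + self.h2l(F.avg_pool3d(x_hi, (1, 2, 2)))
        return hi, lo


# Usage: a small hyperspectral cube, 20 spectral bands, 16x16 spatial patch.
layer = Octave3d(in_ch=16, out_ch=32, alpha=0.5)
x_hi = torch.randn(2, 8, 20, 16, 16)
x_lo = torch.randn(2, 8, 20, 8, 8)
hi, lo = layer(x_hi, x_lo)
# hi keeps full spatial resolution; lo stays at half resolution.
```

Keeping the low-frequency branch at half spatial resolution is what saves computation on large homogeneous regions; the two inter-frequency paths (`h2l`, `l2h`) let the branches exchange information at every layer.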