Mixed spectral responses from different ground materials often create confusion in complex remote sensing scenes and restrict classification performance. In this regard, unmixing approaches have been successfully applied to decompose mixed pixels into collections of spectral signatures. In this paper, we propose a method that integrates unmixing into a deep feature learning model in order to classify hyperspectral data. We propose to generate superpixels from the abundance estimates of the underlying materials of the image, obtained with an unsupervised endmember extraction algorithm called vertex component analysis (VCA). The mean abundances of the superpixels are then used as features for a deep classifier. Our proposed deep model, formulated as a joint convolutional neural network and recurrent neural network, exploits rich spectral-spatial information in the data to produce more discriminative features and achieves better classification performance than several alternative methods.
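The following is a minimal Python sketch of the superpixel mean-abundance feature step described above, assuming the VCA abundance maps have already been estimated. The names (`abundances`, `n_superpixels`) and the use of SLIC from scikit-image are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.segmentation import slic  # requires scikit-image >= 0.19 for channel_axis

def superpixel_mean_abundances(abundances, n_superpixels=500):
    """Over-segment the abundance cube (H x W x P) and average abundances per superpixel."""
    # SLIC treats the P abundance maps as channels of a multi-channel image.
    labels = slic(abundances, n_segments=n_superpixels,
                  compactness=0.1, channel_axis=-1)
    n_labels = labels.max() + 1
    feats = np.zeros((n_labels, abundances.shape[-1]))
    for s in range(n_labels):
        feats[s] = abundances[labels == s].mean(axis=0)
    # feats: one mean-abundance feature vector per superpixel, fed to the deep classifier
    return labels, feats

if __name__ == "__main__":
    # random data standing in for VCA abundances of a 145 x 145 scene with 16 endmembers
    dummy = np.random.rand(145, 145, 16)
    labels, feats = superpixel_mean_abundances(dummy, n_superpixels=200)
    print(labels.shape, feats.shape)
```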
Convolutional neural networks (CNNs) have demonstrated impressive performance in various visual recognition problems in recent years. Recent research has shown that training multilayer neural networks can substantially improve the performance of hyperspectral image (HSI) classification. In this paper, we apply a triplet constraint to a 3D CNN. This method directly learns a mapping from images to a Euclidean space in which distances correspond to a measure of spectral-spatial similarity. Once this embedding has been established, classification can be performed using the embeddings as feature vectors. Moreover, we also augment the training samples in different band groups. This produces different yet useful estimates of the spectral-spatial characteristics of HSI data and contributes considerably to accurate classification. This method is evaluated on a new dataset and compared with several state-of-the-art models, and the results show the promising potential of our method.
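Below is a minimal PyTorch sketch of a triplet-constrained 3D CNN embedding of the kind described above. The layer sizes, patch size, embedding dimension, and margin are illustrative assumptions, not the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class Embed3DCNN(nn.Module):
    """Maps a spectral-spatial patch (1 x B x 7 x 7) to a unit-norm embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, embed_dim)

    def forward(self, x):                       # x: (N, 1, bands, 7, 7)
        z = self.fc(self.features(x).flatten(1))
        # L2-normalise so Euclidean distances reflect spectral-spatial similarity
        return nn.functional.normalize(z, dim=1)

if __name__ == "__main__":
    net = Embed3DCNN()
    loss_fn = nn.TripletMarginLoss(margin=0.5)  # triplet constraint on the embedding
    anchor = torch.randn(4, 1, 103, 7, 7)
    positive = torch.randn(4, 1, 103, 7, 7)
    negative = torch.randn(4, 1, 103, 7, 7)
    loss = loss_fn(net(anchor), net(positive), net(negative))
    loss.backward()
```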
Image classification is considered to be one of the critical tasks in hyperspectral remote sensing image processing. Recently, the convolutional neural network (CNN) has established itself as a powerful classification model by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) contributes further to capturing contextual information and thus improving classification performance. In this paper, we propose a method to classify hyperspectral images by considering both spectral and spatial information via a combined framework consisting of a CNN and a CRF. We use multiple spectral band groups to learn deep features with the CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively capture the semantic correlations between patches consisting of 3-D data cubes. Furthermore, we introduce a deep deconvolution network that improves the final classification performance. We also introduce a new data set and evaluate our proposed method on it, along with several widely adopted benchmark data sets, to assess the effectiveness of our method. By comparing our results with those from several state-of-the-art models, we show the promising potential of our method.
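The sketch below illustrates the band-group deep feature idea described above: the spectral bands are split into groups and a small shared CNN extracts features from each group's patch, which are then combined into per-patch unary potentials. The group count, patch size, channel widths, and class count are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class BandGroupCNN(nn.Module):
    def __init__(self, bands=100, groups=5, n_classes=9):
        super().__init__()
        self.groups = groups
        self.group_size = bands // groups
        # one small CNN branch shared across all band groups
        self.branch = nn.Sequential(
            nn.Conv2d(self.group_size, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # unary potentials (class scores) from the concatenated group features
        self.unary = nn.Linear(32 * groups, n_classes)

    def forward(self, x):                      # x: (N, bands, H, W) patch
        feats = []
        for g in range(self.groups):
            chunk = x[:, g * self.group_size:(g + 1) * self.group_size]
            feats.append(self.branch(chunk).flatten(1))
        return self.unary(torch.cat(feats, dim=1))

if __name__ == "__main__":
    net = BandGroupCNN(bands=100, groups=5, n_classes=9)
    scores = net(torch.randn(2, 100, 27, 27))
    print(scores.shape)  # (2, 9) unary potentials, later refined by the deep CRF
```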
Image classification is one of the critical tasks in hyperspectral remote sensing. In recent years, significant improvements have been achieved by various classification methods. However, mixed spectral responses from different ground materials still create confusion in complex scenes. In this regard, unmixing approaches have been successfully applied to decompose mixed pixels into collections of spectral signatures. Considering the usefulness of these techniques, we propose to utilize the unmixing results as an input to classifiers for better classification accuracy. We propose a novel band-group-based structure-preserving nonnegative matrix factorization (NMF) method to estimate the individual spectral responses of different materials within different ranges of wavelengths. We then train a convolutional neural network (CNN) on the unmixing results to generate powerful features and eventually classify the data. This method is evaluated on a new dataset and compared with several state-of-the-art models, and the results show the promising potential of our method.
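The following is a minimal sketch of the band-group unmixing idea described above, using scikit-learn's plain NMF as a stand-in for the structure-preserving NMF of the paper. The band-group count and number of endmembers per group are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def band_group_abundances(hsi, n_groups=4, n_endmembers=6):
    """hsi: (H, W, B) non-negative reflectance cube -> (H, W, n_groups * n_endmembers)."""
    H, W, B = hsi.shape
    pixels = hsi.reshape(-1, B)
    group_size = B // n_groups
    abundance_stack = []
    for g in range(n_groups):
        # unmix each wavelength range separately
        X = pixels[:, g * group_size:(g + 1) * group_size]
        model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=300)
        A = model.fit_transform(X)             # per-pixel abundances for this band group
        abundance_stack.append(A)
    A_all = np.concatenate(abundance_stack, axis=1)
    return A_all.reshape(H, W, -1)             # stacked abundance maps, fed to the CNN

if __name__ == "__main__":
    cube = np.abs(np.random.rand(64, 64, 80))  # random stand-in for a reflectance cube
    abund = band_group_abundances(cube, n_groups=4, n_endmembers=6)
    print(abund.shape)  # (64, 64, 24)
```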
This paper proposes a method that uses both spectral and spatial information to segment remote sensing hyperspectral images. After a hyperspectral image is over-segmented into superpixels, a deep Convolutional Neural Network (CNN) is used to perform superpixel-level labelling. To further delineate objects in a hyperspectral scene, this paper combines the properties of the CNN and a Conditional Random Field (CRF). A mean-field approximation algorithm for CRF inference, with Gaussian pairwise potentials, is formulated as a Recurrent Neural Network. This network is then plugged into the CNN, leading to a deep network that combines the strengths of both the CNN and the CRF. Preliminary results suggest that this framework is promising.
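Below is a simplified sketch of one mean-field update in the CRF-as-RNN formulation referred to above: unary scores from the CNN are refined by message passing with a Gaussian spatial kernel and a Potts-style compatibility transform, and the update is unrolled for a fixed number of iterations like an RNN. This is an illustrative approximation (no bilateral term, fixed learnable-free kernel), not the paper's full formulation.

```python
import torch
import torch.nn.functional as F

def mean_field_step(unary, kernel_size=5, sigma=1.5, weight=1.0):
    """unary: (N, C, H, W) class scores; returns refined scores after one mean-field iteration."""
    N, C, H, W = unary.shape
    q = F.softmax(unary, dim=1)                         # current label marginals
    # normalised Gaussian spatial kernel, applied independently to each class channel
    ax = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g = torch.exp(-(ax[None, :] ** 2 + ax[:, None] ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).expand(C, 1, kernel_size, kernel_size)
    message = F.conv2d(q, g, padding=kernel_size // 2, groups=C)
    # Potts compatibility: penalise a label when neighbours support the other labels
    pairwise = weight * (message.sum(dim=1, keepdim=True) - message)
    return unary - pairwise                             # refined scores

if __name__ == "__main__":
    scores = torch.randn(1, 9, 32, 32)                  # CNN unary scores for 9 classes
    for _ in range(5):                                  # unrolled iterations, as in an RNN
        scores = mean_field_step(scores)
    print(scores.shape)
```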