Abstract
Hyperspectral Imagery (HSI) provides rich spectral information essential for precise material characterization, yet the high dimensionality of these data cubes often leads to significant computational overhead and the “curse of dimensionality,” which can degrade classifier performance. While various dimensionality reduction techniques exist, there remains a lack of comparative research on how unsupervised versus supervised linear compression methods interact with diverse classification architectures under varying training constraints. This paper investigates the effectiveness of two linear spectral compression techniques—unsupervised Principal Component Analysis (PCA) and supervised Linear Discriminant Analysis (LDA)—to address these challenges. Using a 96-band Shortwave Infrared (SWIR) HSI dataset, we evaluate the impact of spectral compression on seven distinct classifiers: Logistic Regression (LR), Support Vector Machines (SVM), Decision Trees (DT), Random Forests (RF), K-Nearest Neighbors (KNN), Naïve Bayes (NB), and Multilayer Perceptrons (MLP). Our analysis systematically examines how the choice of compression method, the number of retained synthetic features, and the training sample size influence pixel-level classification accuracy. Experimental results demonstrate that the HSI data cube can be significantly compressed into a small subset of synthesized bands without substantial loss in accuracy, highlighting the efficiency of linear feature extraction for HSI analysis. Although this study utilizes a dataset collected under static conditions, the findings offer scalable insights for optimizing classification workflows in broader remote sensing applications.
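The compression-then-classification pipeline described above can be sketched as follows. This is an illustrative example, not the authors' implementation: the synthetic 96-band data, the five-class setup, the component counts, and the choice of Logistic Regression as the downstream classifier are all placeholder assumptions.

```python
# Hedged sketch: compress synthetic 96-band spectra with unsupervised PCA
# and supervised LDA, then classify the compressed pixels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for pixel spectra: 5000 "pixels", 96 bands, 5 material classes.
X, y = make_classification(n_samples=5000, n_features=96, n_informative=20,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

reducers = [("PCA", PCA(n_components=10)),            # unsupervised projection
            ("LDA", LinearDiscriminantAnalysis(n_components=4))]  # <= classes-1

for name, reducer in reducers:
    # LDA fits its projection using the labels; PCA ignores them.
    Z_tr = reducer.fit_transform(X_tr, y_tr)
    Z_te = reducer.transform(X_te)
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(Z_te))
    print(f"{name}: {Z_tr.shape[1]} synthetic bands, accuracy {acc:.3f}")
```

Note that LDA can retain at most `n_classes - 1` discriminant components, whereas PCA's component count is bounded only by the original band count; this asymmetry is one reason the two methods behave differently as the number of retained features varies.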
