Accurate classification of wetland vegetation is essential for biodiversity conservation and carbon cycle monitoring, yet traditional methods have struggled with the complex vegetation composition and similar canopy spectra found in karst wetlands. A new study published in the Journal of Remote Sensing on October 16, 2025, introduces an adaptive ensemble learning stacking (AEL-Stacking) framework that combines hyperspectral imagery and light detection and ranging (LiDAR) data captured by unmanned aerial vehicles (UAVs) to precisely identify vegetation species in these fragile ecosystems.
The research, conducted in China's Huixian Karst Wetland, achieved overall classification accuracies of 87.91% to 92.77%, outperforming traditional models by up to 9.5%. The approach integrates Random Forest, LightGBM, and CatBoost classifiers within a grid-search-optimized adaptive stacking framework, using 70% of the data for training and 30% for testing, with 10-fold cross-validation. The study's findings, published with DOI 10.34133/remotesensing.0452, demonstrate how spectral and structural features jointly improve ecosystem mapping and inform restoration strategies.
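The study's setup can be illustrated with a minimal stacking sketch. Note the assumptions: the study stacks Random Forest, LightGBM, and CatBoost, but to keep this sketch dependent only on scikit-learn, LightGBM and CatBoost are stood in by GradientBoosting and ExtraTrees; the data are synthetic, all hyperparameter values are illustrative, and the internal stacking folds are reduced to 5 for speed (the paper reports 10-fold cross-validation).

```python
# Illustrative stacking ensemble in the spirit of the study's AEL-Stacking setup.
# NOT the authors' code: base learners, data, and parameters are stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for per-pixel features (e.g. spectral bands + LiDAR metrics).
X, y = make_classification(n_samples=300, n_features=40, n_informative=20,
                           n_classes=3, random_state=0)

# 70% training / 30% testing, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingClassifier(n_estimators=30, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions feed the meta-learner
)

# Grid search over a deliberately tiny illustrative hyperparameter grid.
search = GridSearchCV(stack, {"rf__max_depth": [5, None]}, cv=3)
search.fit(X_tr, y_tr)
print(f"test accuracy: {search.score(X_te, y_te):.3f}")
```

The key design point of stacking is that the meta-learner (here a logistic regression) is trained on out-of-fold predictions of the base classifiers, so it learns when to trust each model rather than simply averaging them.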
Karst wetlands are globally significant ecosystems that regulate water, store carbon, and harbor rich biodiversity, but their intricate vegetation composition has made accurate remote sensing classification challenging. Traditional field surveys are costly and spatially limited, while multispectral imaging lacks sufficient spectral resolution for species-level mapping. LiDAR provides 3D structural data but struggles with water-surface reflectance and weak signals. The integration of complementary optical and structural data through the AEL-Stacking model addresses these limitations.
The research team from Guilin University of Technology and collaborators collected over 4,500 hyperspectral images and dense point clouds (208 points/m²) using UAVs equipped with Headwall Nano-Hyperspec and DJI Zenmuse L1 LiDAR sensors. The integrated dataset covered 13 vegetation types, including lotus, miscanthus, and camphor trees. Through recursive feature elimination and correlation analysis, 40 optimal features were selected from more than 600 variables, with LiDAR-derived digital surface model (DSM) variables proving particularly important for distinguishing species with distinct vertical structures.
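Recursive feature elimination, one of the two selection steps the study used, can be sketched as follows. The data, dimensions, and the choice of a Random Forest as the ranking model are illustrative assumptions; only the target of 40 retained features mirrors the study.

```python
# Sketch of recursive feature elimination (RFE): rank features with a model,
# repeatedly drop the weakest, and keep the best 40 (the study's final count).
# Synthetic data stands in for the 600+ candidate hyperspectral/LiDAR variables.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=300, n_features=60, n_informative=15,
                           random_state=0)

selector = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
               n_features_to_select=40, step=5)  # drop 5 features per round
selector.fit(X, y)

X_reduced = selector.transform(X)
print(X_reduced.shape)  # (300, 40)
```

In practice this step is often paired, as in the study, with a correlation analysis that first removes near-duplicate variables, since highly correlated features can split importance scores and confuse the ranking.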
"Our approach bridges the gap between spectral and structural sensing," said Dr. Bolin Fu, corresponding author of the study. "By combining UAV hyperspectral and LiDAR data through adaptive ensemble learning, we achieved both precision and interpretability in vegetation mapping. The framework not only improves species recognition in complex karst environments but also provides a generalizable tool for ecological monitoring and habitat restoration worldwide."
The study used local interpretable model-agnostic explanations (LIME) to visualize how each feature contributes to the decision-making process, revealing the DSM and blue spectral bands as the most influential features. Lotus and miscanthus achieved classification F1-scores above 0.9, and the model significantly reduced misclassification between morphologically similar species, offering detailed vegetation maps critical for ecosystem monitoring.
This integrative framework demonstrates a scalable and explainable approach for high-resolution wetland mapping, potentially applicable to forest, grassland, and coastal ecosystems. Future work will focus on integrating multi-temporal UAV observations and satellite data fusion to monitor seasonal vegetation dynamics and climate-driven changes in wetland health. By enhancing the transparency and accuracy of AI-driven ecological models, this research supports the global agenda for biodiversity conservation and carbon neutrality. The original research is available at https://spj.science.org/doi/10.34133/remotesensing.0452.