TerraMosaic Daily Digest: Mar 1, 2026
Daily Summary
This March 1, 2026 digest compiles 140 selected papers from 910 deduplicated studies (2,599 raw records). The strongest evidence centers on operationally useful hazard science: multiscale forecasting of rainfall-induced landslide displacement, field-to-numerical reconstruction of a giant landslide dam in the Jinsha catchment, long-term renewal dynamics of glacial lakes in the northern Tien Shan, and coupled seismic-infrasonic constraints on debris-flow source processes.
A parallel stream focuses on infrastructure-facing risk quantification, including flood exposure of wastewater-treatment systems, tunnel failure prediction in heterogeneous clay, and mining-support reinforcement under extreme stress conditions. The raw pool remains heavily method-dominated; generic AI papers and publication notices were retained only at low relevance ratings where direct geohazard transfer is weak.
Key Trends
The main trajectory is from static description to trigger-aware, uncertainty-explicit, decision-oriented geohazard analytics.
- Displacement forecasting is becoming process-aware: top landslide studies now combine lag-sensitive rainfall signals, multiscale decomposition, and nonlinear temporal dynamics instead of relying on single-index correlations.
- Multi-sensor hazard diagnostics are improving source attribution: joint use of seismic, infrasound, and satellite records is resolving where and when hazardous processes are generated, not just where impacts are observed.
- Cryosphere risk work is shifting to lifecycle monitoring: new glacial-lake and ice-shelf datasets emphasize persistence, renewal, and uncertainty propagation, which are essential for credible long-horizon warning.
- Critical infrastructure is treated as a first-class hazard receptor: facility-scale flood modeling, underground stability analysis, and mining geomechanics are increasingly evaluated with actionable engineering thresholds.
- Event reconstruction is becoming quantitatively constrained: field evidence, numerical back-analysis, and seismic-infrasonic observations are increasingly combined to resolve stage-wise failure evolution and derive parameters usable for emergency planning.
Selected Papers
This digest features 140 selected papers drawn from the 910 deduplicated papers analyzed across multiple journals. Each paper has been evaluated for its relevance to landslide and broader geohazard research and includes links to the original publications.
1. A hybrid multiscale forecasting framework for rainfall-induced landslide displacement: case studies from Guangxi, China
Core Problem: Accurate forecasting of rainfall-induced landslide displacement, which exhibits step-like, nonlinear, and delayed patterns, is challenging, especially in data-scarce environments with limited subsurface monitoring.
Key Innovation: This study proposes the TL-LF-HLDP framework, integrating multiscale decomposition (GWO–ICEEMDAN), lag-aware feature selection (TLCC, DLFS), and deep time-varying regression (DDS-TVAR) for landslide cumulative displacement prediction. Validated on two real-world landslides, it significantly reduces MAE and RMSE compared to baselines, demonstrating potential for early landslide warnings in data-scarce settings.
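The lag-aware step can be illustrated with a bare-bones time-lagged correlation scan (a generic sketch, not the paper's TLCC/DLFS implementation; the synthetic series and 5-step delay are assumed for illustration):

```python
import numpy as np

def lagged_cross_correlation(rainfall, displacement, max_lag):
    """Pearson correlation of displacement against rainfall shifted by 0..max_lag steps."""
    corrs = []
    for lag in range(max_lag + 1):
        n = len(rainfall) - lag
        corrs.append(np.corrcoef(rainfall[:n], displacement[lag:])[0, 1])
    return np.array(corrs)

# Synthetic series in which displacement responds to rainfall with a 5-step delay.
rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 1.0, size=500)
disp = np.concatenate([np.zeros(5), rain[:-5]]) + 0.1 * rng.normal(size=500)

corrs = lagged_cross_correlation(rain, disp, max_lag=10)
best_lag = int(np.argmax(corrs))  # dominant rainfall-response lag
```

A scan like this is the usual starting point for selecting which lagged rainfall features enter downstream decomposition and regression stages.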
2. Field and numerical investigations of the highest natural dam in the Jinsha River catchment formed by the Wangdalong landslide
Core Problem: The formation and evolution mechanisms of large paleo-landslide dams, particularly the Wangdalong landslide dam, which significantly impacted the geomorphological evolution of the Jinsha River, were not fully understood, limiting insights for future disaster response.
Key Innovation: Comprehensive field and numerical investigations (photogrammetry, drilling, kinematic analysis, DEM modeling) characterize the Wangdalong landslide dam and elucidate its wedge-failure formation and five-stage evolution, providing reliable constraints on its geological cause and emplacement mechanisms that can guide rapid decision-making in future events.
3. Constantly renewing glacial lakes in the Kyrgyz Range, northern Tien Shan
Core Problem: The need to understand the historical evolution and formation mechanisms of glacial lakes in the Kyrgyz Range, particularly in the context of increasing concern over Glacial Lake Outburst Floods (GLOFs) and glacier recession.
Key Innovation: Quantified the historical evolution of glacial lakes (number and area) from 1968 to 2021 using multi-source satellite imagery, revealing a continuous renewal process driven by glacier retreat, glacier-moraine complex expansion, and buried-ice melt, which forms new thermokarst features that fill with water.
4. Investigating the relationship between seismic and infrasonic source mechanisms in debris flows
Core Problem: The radiation processes of seismic waves and infrasound from debris flows, and their interrelationship, are still poorly understood.
Key Innovation: This study analyzes seismic and infrasonic signals from a debris-flow event, confirming independent but near-field correlated processes. It suggests infrasonic source mechanisms develop effectively above a discharge threshold and at topographic steps, estimates flow velocity using infrasound, and links seismic radiation to sediment discharge variations, providing constraints on source mechanisms.
5. Selective Denoising Diffusion Model for Time Series Anomaly Detection
Core Problem: Existing diffusion-based methods for Time Series Anomaly Detection (TSAD) struggle to accurately reconstruct normal parts, leading to suboptimal detection performance, as they rely on conditional strategies that reconstruct entire instances from white noise.
Key Innovation: AnomalyFilter, a novel diffusion-based method for TSAD that acts as a selective filter, denoising only anomaly parts in an instance while retaining normal parts by masking Gaussian noise during training and conducting denoising without adding noise to normal instances, significantly enhancing performance.
6. ICEland-1: A geochronological database for reconstructing Late Quaternary glacier, relative sea level, and paleoclimate patterns in Iceland
Core Problem: Significant spatiotemporal gaps and biases in the understanding of Late Quaternary ice sheet, relative sea level, and paleoclimate variability in and around Iceland, hindering accurate ice sheet model calibrations and future projections.
Key Innovation: Presents ICEland-1, a comprehensive and quality-controlled geochronological database of 1744 data points for Late Quaternary glacier, relative sea level, and paleoclimate changes in Iceland, with a three-tier reliability ranking system, highlighting research avenues to minimize uncertainties.
7. Advancing ecohydrological modelling: coupling LPJ-GUESS with ParFlow for integrated vegetation and surface-subsurface hydrology simulations
Core Problem: Existing Earth system models often neglect complex topography-driven vegetation–surface–groundwater interactions, leading to inaccurate climate-hydrological responses.
Key Innovation: Integration of the 3D surface-subsurface hydrological model ParFlow with the dynamic global vegetation model LPJ-GUESS (PF-LPJG), substantially improving simulations of streamflow, surface soil moisture, and water table depth, providing a mechanistic framework for analyzing climate-induced modifications on vegetation-water-carbon interactions.
8. Overlooked bedload transport in Himalayan rivers threatens regional security
Core Problem: Bedload transport in high Himalayan rivers has been overlooked, hindering the development of morphodynamic models that explicitly couple river hydraulics, sediment transport, and channel morphology; this gap heightens risks to regional hydropower, ecosystems, and food security under extreme floods and global warming.
Key Innovation: Identifies and highlights a critical knowledge gap regarding bedload transport in Himalayan rivers and its significant implications for regional security, emphasizing the urgent need for improved morphodynamic models to predict responses to extreme floods.
9. Research and application of grouting reinforcement technology for small coal pillar roadways along gob in extra-thick coal seams
Core Problem: Small coal pillars in extra-thick seams lead to fractured, low-capacity surrounding rock and large asymmetric deformation, threatening roadway stability in mining operations.
Key Innovation: Developed and validated a grouting-reinforcement technology coupling hollow high-pressure grouting cable bolts with a nano-modified microfine grout, demonstrating significant reductions in floor heave and rib convergence, thereby substantially enhancing roadway reliability.
10. Flood hazard assessment on wastewater treatment plants: a case study of the Metropolitan Area of Barcelona (Spain)
Core Problem: Wastewater Treatment Plants (WWTPs) are often located in flood-prone areas, yet flood hazard assessments frequently overlook the significant role of streams and ephemeral watercourses at the facility scale, leading to incomplete risk characterization.
Key Innovation: This study assesses flood hazard for six WWTPs in Barcelona using 2D hydrodynamic models for design flood events, explicitly incorporating streams and ephemeral watercourses. It identifies significant inundation and high-hazard conditions within plants from these smaller watercourses, highlighting their critical role often missed by assessments focusing only on major rivers, thus supporting more robust adaptation planning.
11. An innovative UBRTME–ANN hybrid approach for failure characteristic prediction of circular tunnels in heterogeneous undrained clay
Core Problem: Efficiently and accurately predicting critical load factors and failure characteristics of circular tunnels in heterogeneous undrained clay, while maintaining physical interpretability of the failure mechanisms.
Key Innovation: Developed an innovative hybrid approach combining the Upper-Bound finite-element method with Rigid Translatory Moving Elements (UBRTME) and Artificial Neural Networks (ANN) to predict tunnel failure characteristics and reconstruct critical failure surfaces, offering an explainable and practical framework for geotechnical failure prediction.
12. Shear wear effect on cross-fracture: coupled seepage and heat transfer evolution and prediction of unstable slip risk in geothermal reservoirs
Core Problem: Complex coupling mechanism of cross-fracture seepage and heat transfer in Enhanced Geothermal Systems (EGS) and the need to predict unstable slip risk.
Key Innovation: Elucidated the three-stage shear process in cross-fractured granite, proposed a permeability inversion model, defined a heat transfer enhancement coefficient, and provided a theoretical framework for EGS design to optimize heat extraction while mitigating shear slip risk.
13. Phase variance as a seismic quality-control attribute
Core Problem: Seismic wavefields are strongly distorted by near-surface heterogeneity, introducing localized, non-surface-consistent phase perturbations that conventional processing struggles to correct, and there is no direct quantitative measure of phase reliability.
Key Innovation: Introducing phase variance as a seismic quality-control attribute, which quantifies localized phase dispersion using circular statistics on local trace ensembles, providing an automatic, frequency-by-frequency classification of phase reliability for improved seismic data analysis and phase-sensitive workflows.
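The underlying circular-statistics computation is compact enough to sketch (a generic illustration on synthetic traces, not the paper's processing chain):

```python
import numpy as np

def phase_variance(phases):
    """Circular variance of phase angles in radians: 0 = fully coherent, 1 = fully dispersed."""
    return 1.0 - np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)

# Per-frequency phase variance across a local ensemble of traces.
traces = rng.normal(size=(30, 128))                  # 30 noisy traces, 128 samples each
phases = np.angle(np.fft.rfft(traces, axis=1))       # phase spectrum per trace
pv = np.array([phase_variance(phases[:, k]) for k in range(phases.shape[1])])

coherent = 0.1 * rng.normal(size=200)                # tightly clustered phases
dispersed = rng.uniform(-np.pi, np.pi, size=200)     # uniformly scattered phases
```

Low pv at a frequency bin flags phases reliable enough for phase-sensitive workflows; values near 1 flag bins to down-weight.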
14. Disentangled Mode-Specific Representations for Tensor Time Series via Contrastive Learning
Core Problem: Learning rich representations from complex multi-mode tensor time series (TTS) is challenging, hindering applications like classification and forecasting, especially in domains like environmental monitoring.
Key Innovation: MoST, a novel representation learning method for TTS that uses tensor slicing and a contrastive learning framework to disentangle mode-specific features (relationships within a mode) and mode-invariant features (common across modes), outperforming state-of-the-art methods in classification and forecasting accuracy on real-world datasets.
15. Physics-Informed Time-Integrated DeepONet: Temporal Tangent Space Operator Learning for High-Accuracy Inference
Core Problem: Traditional full rollout (FR) and autoregressive (AR) methods for solving time-dependent Partial Differential Equations (PDEs) struggle with capturing causal dependencies, generalizing beyond training horizons, and accumulating errors, limiting long-term accuracy.
Key Innovation: Introduces PITI-DeepONet, a dual-output physics-informed deep operator network that learns the time-derivative operator from the current state and integrates it using classical time-stepping schemes, ensuring stable and accurate long-term evolution of complex time-dependent PDEs.
16. FRIEDA: Benchmarking Multi-Step Cartographic Reasoning in Vision-Language Models
Core Problem: Cartographic reasoning, essential for critical tasks like disaster response and urban planning, remains largely unevaluated in Large Vision-Language Models (LVLMs); existing map VQA benchmarks often treat maps as simple charts and fail to capture complex spatial relations.
Key Innovation: Introduces FRIEDA, a benchmark for testing complex open-ended multi-step cartographic reasoning in LVLMs, sourcing real map images and targeting all three categories of spatial relations, revealing a persistent gap in current models' spatial intelligence.
17. Improved bathymetry estimates beneath Amundsen Sea ice shelves using a Markov Chain Monte Carlo gravity inversion (GravMCMC, version 1)
Core Problem: Previous bathymetry inversions beneath ice shelves have not robustly quantified uncertainty due to inherent assumptions and non-uniqueness, making it difficult to propagate uncertainty into ocean and ice-sheet models.
Key Innovation: Development of GravMCMC, a Markov Chain Monte Carlo gravity inversion method to generate ensembles of bathymetry models beneath ice shelves, robustly quantifying uncertainty due to background density and geological variability, improving bounds on sub-ice-shelf melting and grounding line retreat.
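The ensemble idea can be sketched with a toy Metropolis-Hastings inversion of a single slab thickness (the Bouguer-slab forward model, density contrast, and noise level here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

G = 6.674e-11       # gravitational constant (m^3 kg^-1 s^-2)
DRHO = 1700.0       # assumed rock-water density contrast (kg m^-3), illustrative

def forward(h):
    """Bouguer-slab gravity anomaly (m s^-2) for a water column of thickness h (m)."""
    return 2.0 * np.pi * G * DRHO * h

rng = np.random.default_rng(2)
h_true = 600.0
sigma = 5e-6                                # ~0.5 mGal observational noise
obs = forward(h_true) + sigma * rng.normal()

def log_post(h):
    if h <= 0.0:
        return -np.inf                      # flat prior restricted to positive depths
    return -0.5 * ((obs - forward(h)) / sigma) ** 2

# Random-walk Metropolis: the sample ensemble, not a single model, carries the uncertainty.
samples, h = [], 300.0
for _ in range(20000):
    prop = h + 20.0 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(h):
        h = prop
    samples.append(h)
post = np.array(samples[5000:])             # discard burn-in
```

The spread of the posterior ensemble is what can then be propagated into ocean and ice-sheet models, rather than a single best-fit bathymetry.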
18. Numerical simulation of hydraulic fracture propagation and stimulation effectiveness under in-Situ stress conditions
Core Problem: Optimizing hydraulic fracturing parameters for safe and efficient mining in deep water-rich fractured thick coal seams, considering in-situ stress conditions and rock heterogeneity.
Key Innovation: Combined true triaxial physical experiments and numerical simulations to analyze the control effects of in-situ stress, fracture pressure, and interlayer interface on crack propagation, determining optimal fracturing pressure and water injection rates for complex fracture network formation and surrounding rock stability.
19. Experimental investigation of temperature effects on granite rockbursts behavior under biaxial compression
Core Problem: The mechanism of rockburst development and disaster in deep rock masses, subjected to high in situ stress, ground temperature, and engineering disturbance, is extremely complicated and not fully understood, especially regarding temperature effects.
Key Innovation: This study conducts biaxial loading tests on heat-treated granite with circular holes to induce and analyze multiple rockbursts. It reveals that temperature treatment significantly shortens rockburst duration, intensifies rockbursts, increases debris fragmentation, and elevates acoustic emission energy, providing insights into the spatial evolution and fracture types, useful for deep tunnel stability analysis and support optimization.
20. Correction: ISRM Suggested Method for Achieving Full Water Saturation of Rock
Core Problem: This item is a journal correction notice that updates previously published material rather than presenting new primary geohazard evidence.
Key Innovation: Improves record accuracy and reproducibility; it does not add new experimental or modeling results.
21. Retraction Note: A Study on the Effective Tracking of Hydraulic Fracturing Fracture Development Based on Microseismic Data
Core Problem: This record is a formal retraction notice; the underlying technical evidence is no longer reliable for hazard inference or operational use.
Key Innovation: No scientific innovation is introduced; the contribution is correction of the published record and removal of invalidated evidence.
22. Long‐term growth and persistence of granitic inselbergs in a semi‐arid cratonic landscape
Core Problem: The mechanisms and timescales of granitic inselberg persistence and their long-term evolution in cratonic landscapes are debated, with existing models potentially oversimplifying the processes.
Key Innovation: A multinuclide cosmogenic dataset (10Be, 26Al, 21Ne) is used to quantify denudation rates and reconstruct long-term dynamics, revealing systematic vertical differential denudation, cumulative relief growth over millions of years, and the interplay of lithological resistance, structural inheritance, and sediment transfer processes. It suggests hybrid mechanisms for inselberg development.
23. SDMixer: Sparse Dual-Mixer for Time Series Forecasting
Core Problem: Multivariate time series commonly exhibit multi-scale structure, weak cross-variable correlations, and noise interference, which limit the predictive performance of existing forecasting models.
Key Innovation: Proposing SDMixer, a dual-stream sparse Mixer prediction framework that extracts global trends and local dynamic features from sequences in both frequency and time domains, employing a sparsity mechanism to filter invalid information and enhance cross-variable dependency modeling for improved forecasting accuracy.
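A minimal frequency-domain trend/local split conveys the dual-stream intuition (a generic FFT low-pass sketch, not SDMixer's architecture; the signal and bin count are arbitrary):

```python
import numpy as np

def split_trend_local(x, keep_bins=4):
    """Split a series into a global trend (lowest FFT bins) and local dynamics (residual)."""
    spec = np.fft.rfft(x)
    low = np.zeros_like(spec)
    low[:keep_bins] = spec[:keep_bins]
    trend = np.fft.irfft(low, n=len(x))
    return trend, x - trend

# A slow 1-cycle trend plus a fast 30-cycle local oscillation.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = 3.0 * np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
trend, local = split_trend_local(x, keep_bins=4)
```

Each stream can then be modeled separately before recombination, which is the basic rationale behind dual-stream mixers.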
24. Any Model, Any Place, Any Time: Get Remote Sensing Foundation Model Embeddings On Demand
Core Problem: Practical adoption and fair comparison of remote sensing foundation models are challenging due to substantial heterogeneity in model release formats, platforms, interfaces, and input data specifications, increasing the cost of using and benchmarking embeddings.
Key Innovation: rs-embed, a Python library that offers a unified, region of interest (ROI) centric interface to retrieve embeddings from any supported remote sensing foundation model for any location and time range with a single line of code, enabling efficient batch processing and large-scale embedding generation and evaluation.
25. Fourier Angle Alignment for Oriented Object Detection in Remote Sensing
Core Problem: Mainstream methods for rotated object detection in remote sensing suffer from directional incoherence at the detector neck and task conflict at the detecting head, leading to suboptimal performance.
Key Innovation: Fourier Angle Alignment (FAA), a novel approach that uses Fourier rotation equivariance to analyze angle information and align the main direction of features. It introduces two plug-and-play modules, FAAFusion and FAA Head, which significantly improve performance and achieve state-of-the-art results in oriented object detection on remote sensing datasets.
26. Open-Vocabulary Semantic Segmentation in Remote Sensing via Hierarchical Attention Masking and Model Composition
Core Problem: Applying vision-language models like CLIP directly to remote sensing data for open-vocabulary semantic segmentation suffers from inappropriate interactions within self-attention layers and requires adaptation for RS-specific contexts.
Key Innovation: Proposes ReSeg-CLIP, a training-free method that uses hierarchical attention masking with SAM-generated masks to constrain interactions at multiple scales, and a model composition approach that averages parameters of multiple RS-specific CLIP variants with a new weighting scheme for improved representational quality.
27. Steering and Rectifying Latent Representation Manifolds in Frozen Multi-modal LLMs for Video Anomaly Detection
Core Problem: Existing tuning-free video anomaly detection (VAD) methods using frozen multi-modal large language models (MLLMs) are limited by inherited pre-training biases and inability to adapt internal representations to specific video contexts, hindering their performance on subtle anomalies.
Key Innovation: Proposes SteerVAD, an intervention framework that actively steers and rectifies MLLM internal representations for VAD by identifying latent anomaly experts (LAEs) via representational separability analysis and using a hierarchical meta-controller to generate dynamic rectification signals that amplify anomaly-relevant dimensions.
28. BLISSNet: Deep Operator Learning for Fast and Accurate Flow Reconstruction from Sparse Sensor Measurements
Core Problem: The persistent tradeoff between accuracy and computational efficiency in reconstructing complex, multiscale fluid flows from sparse sensor measurements, especially for large-scale real-time applications.
Key Innovation: Introduces BLISSNet, a DeepONet-like model that achieves a strong balance of high accuracy and computational efficiency for fluid flow reconstruction and data assimilation, offering zero-shot inference on arbitrary domains and faster inference than classical methods after initial setup.
29. TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders
Core Problem: Existing self-supervised methods for time series often operate at the point level and rely on unidirectional encoding, leading to low semantic density and a mismatch between pre-training and downstream optimization, especially in data-scarce scenarios.
Key Innovation: Proposes TimeMAE, a self-supervised framework that reformulates masked modeling for time series by segmenting into semantically enriched sub-series, using a decoupled masked autoencoder, and introducing complementary objectives (masked codeword classification and representation regression) to learn transferable representations.
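Window-level masking, the coarser-than-pointwise unit this reformulation relies on, can be sketched as follows (a generic illustration; the window size and mask ratio are assumptions, not the paper's settings):

```python
import numpy as np

def segment_and_mask(x, window, mask_ratio, rng):
    """Slice a series into fixed-length sub-series 'words' and mask whole windows,
    rather than masking individual points."""
    n_win = len(x) // window
    words = x[: n_win * window].reshape(n_win, window)
    n_mask = int(round(mask_ratio * n_win))
    visible = np.ones(n_win, dtype=bool)
    visible[rng.choice(n_win, size=n_mask, replace=False)] = False
    return words, visible

rng = np.random.default_rng(3)
x = rng.normal(size=120)
words, visible = segment_and_mask(x, window=8, mask_ratio=0.6, rng=rng)
encoder_input = words[visible]      # only visible windows reach the encoder
```

Masking whole sub-series raises the semantic density of each token, which is the stated motivation for moving away from point-level masking.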
30. TREND: Unsupervised 3D Representation Learning via Temporal Forecasting for LiDAR Perception
Core Problem: Existing unsupervised 3D representation learning methods for LiDAR perception primarily focus on single frames, neglecting the valuable temporal information in LiDAR sequences that accounts for object motion and semantics, leading to a heavy reliance on costly labeled data.
Key Innovation: Proposes TREND (Temporal REndering with Neural fielD), the first unsupervised 3D representation learning method that leverages temporal forecasting of future LiDAR observations through a Recurrent Embedding scheme and a Temporal Neural Field, significantly improving downstream 3D object detection tasks.
31. CLAP: Unsupervised 3D Representation Learning for Fusion 3D Perception via Curvature Sampling and Prototype Learning
Core Problem: Existing differentiable-rendering-based unsupervised 3D representation learning methods for fusion perception pre-train modalities separately due to computational costs, failing to exploit the mutual benefits of high-level image semantics and 3D point cloud structure.
Key Innovation: Proposes CLAP (Curvature sampLing and leArnable Prototype), a joint unsupervised differentiable-rendering-based pre-training method for images and point clouds that overcomes computational hurdles via Curvature Sampling and uses learnable prototypes with an Expectation-Maximization scheme to exploit inter-modality complementarity, significantly improving fusion 3D perception.
32. Probabilistic Neural Networks (PNNs) with t-Distributed Outputs: Adaptive Prediction Intervals Beyond Gaussian Assumptions
Core Problem: Traditional neural networks only provide point estimates, and existing probabilistic neural networks often assume Gaussian output distributions, leading to overly wide and less adaptive prediction intervals, especially with non-Gaussian data or outliers.
Key Innovation: Proposes t-Distributed Neural Networks (TDistNNs) that generate t-distributed outputs, parameterized by location, scale, and degrees of freedom, allowing for adaptive modeling of heavy-tailed predictive distributions, improved robustness to non-Gaussian data, and narrower, yet properly covered, prediction intervals.
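The benefit of a t-distributed head shows up directly in the loss: for a large outlier the Student-t negative log-likelihood grows logarithmically rather than quadratically (standard textbook density formulas, not the paper's network code):

```python
import numpy as np
from math import lgamma, log, pi

def gaussian_nll(resid, scale):
    """Negative log-likelihood of N(0, scale^2) at a residual."""
    return 0.5 * (resid / scale) ** 2 + log(scale) + 0.5 * log(2.0 * pi)

def student_t_nll(resid, scale, df):
    """Negative log-likelihood of a location-scale Student-t; low df gives heavy tails."""
    norm = lgamma((df + 1.0) / 2.0) - lgamma(df / 2.0) - 0.5 * log(df * pi) - log(scale)
    return -(norm - (df + 1.0) / 2.0 * np.log1p((resid / scale) ** 2 / df))

# An 8-sigma outlier: quadratic Gaussian penalty vs. logarithmic t penalty.
g = gaussian_nll(8.0, 1.0)
t3 = student_t_nll(8.0, 1.0, df=3.0)
```

In a TDistNN, the location, scale, and degrees-of-freedom parameters would be network outputs trained by minimizing this NLL, letting the tails adapt to the data.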
33. On the use of Graphs for Satellite Image Time Series
Core Problem: Effectively analyzing the large volume and complexity of Satellite Image Time Series (SITS) to monitor dynamic Earth surface processes and capture crucial spatio-temporal interactions.
Key Innovation: Presents a versatile graph-based pipeline for spatio-temporal SITS analysis that abandons the regular Euclidean structure to model spatial and temporal interactions between identified objects, demonstrating its potential for tasks like land cover mapping and water resource forecasting.
34. Uncertainty-aware data assimilation through variational inference
Core Problem: Data assimilation, which combines dynamical models with noisy and incomplete observations, often struggles to explicitly account for and quantify uncertainty in the inferred system state over time.
Key Innovation: Proposes a variational inference-based extension to a deterministic machine learning data assimilation approach, enabling the predicted state to follow a multivariate Gaussian distribution, which yields nearly perfectly calibrated predictions and improves benefits from longer assimilation windows.
35. Position: Beyond Model-Centric Prediction -- Agentic Time Series Forecasting
Core Problem: Traditional model-centric, static, single-pass time series forecasting is insufficient for adaptive, multi-turn settings requiring informative feature extraction, reasoning, iterative refinement, and continual adaptation.
Key Innovation: Proposes Agentic Time Series Forecasting (ATSF), reframing forecasting as an agentic process (perception, planning, action, reflection, memory) that interacts with tools, incorporates feedback, and evolves through experience, establishing a new foundation for future research.
36. Upscaling the Navier-Stokes-Cahn-Hilliard model for incompressible multiphase flow in inhomogeneous porous media
Core Problem: The need for a robust macroscopic model to describe the complex behavior of two immiscible, incompressible fluids flowing through inhomogeneous porous media, accurately capturing pore-scale physics like phase interface evolution and wetting behavior at the Darcy scale.
Key Innovation: A rigorously derived upscaled model for two-phase flow in inhomogeneous porous media, based on volume averaging of Navier-Stokes and Cahn-Hilliard equations, which formally incorporates wetting behavior into the averaged chemical potential and provides a theoretical distinction from standard empirical Darcy models.
37. Structure tensor Reynolds-averaged Navier-Stokes turbulence models with equivariant neural networks
Core Problem: Accurate and generalizable Reynolds-averaged Navier-Stokes (RANS) models for turbulent flows are hindered by notoriously unreliable closures, hypothesized to be due to an insufficient description of the turbulence's statistical state, particularly for the rapid pressure-strain term.
Key Innovation: Introduces tensor-based, symmetry-aware closures for the rapid pressure-strain term using equivariant neural networks (ENNs) and structure tensors, along with an algorithm for enforcing algebraic contraction relations. The approach yields models that are orders of magnitude more accurate than existing ones, validating the hypothesis that structure tensors provide a richer statistical description and enabling physically consistent, end-to-end learning for RANS and other tensor modeling domains.
38. Global high-resolution forest disturbance type dataset
Core Problem: The lack of a high-resolution global dataset classifying diverse forest disturbance types to better understand their impact on carbon cycling and biodiversity, and to inform conservation strategies.
Key Innovation: Developed the first high-resolution (30 m) global forest disturbance dataset (GFD) for 2000–2020, classifying 11 disturbance types by integrating Landsat-based CCDC time-series analysis with spatial metrics and machine learning, achieving high accuracy and revealing regional differences in disturbance drivers.
39. Shelf-Bench: A benchmark dataset for Antarctic ice shelf front and coastline delineation from multi-sensor radar satellite data
Core Problem: Lack of suitable training data for deep learning models to automate continuous delineation of Antarctic ice shelf fronts.
Key Innovation: Development of Shelf-Bench, a comprehensive benchmark dataset of 161 manually annotated SAR scenes for Antarctic ice shelf front and coastline delineation, enabling accelerated development of deep learning methodologies.
40. Climate change effects on river droughts in Bavaria using a hydrological large ensemble
Core Problem: Understanding and robustly assessing the impact of climate change on rare and extreme river droughts, including changes in seasonality and return periods, is crucial for water resource management, given projected intensification and increased frequency of meteorological drivers.
Key Innovation: Investigates climate change effects on river droughts in Bavaria using a unique physically-based hydrological large ensemble (WaSiM driven by 50 members of CRCM5 under RCP8.5), providing robust assessment of very rare events and bivariate design values, highlighting shifts in low-flow regimes and the increasing importance of lagged effects due to hotter and drier summers.
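Empirical return periods from a pooled large ensemble can be sketched with Weibull plotting positions (a generic illustration on synthetic low flows, not the WaSiM/CRCM5 analysis; pooling members assumes they are exchangeable):

```python
import numpy as np

def empirical_return_periods(pooled_minima):
    """Weibull plotting positions: the rank-r driest year in a pool of n years
    gets empirical return period T = (n + 1) / r."""
    x = np.sort(pooled_minima)                 # driest years first
    ranks = np.arange(1, len(x) + 1)
    return x, (len(x) + 1) / ranks

rng = np.random.default_rng(4)
# Hypothetical pool: 50 ensemble members x 30 years of annual 7-day low flows (m^3/s).
pool = rng.gamma(shape=4.0, scale=2.5, size=50 * 30)
flows, T = empirical_return_periods(pool)
```

The pooling is what makes very rare events (here up to a nominal 1,500-year return period) estimable without extrapolating a fitted distribution far beyond the data.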
41. Characterization of Fractures and Veins from Images of Drill Core: A review of imaging technologies
Core Problem: Traditional rock mass classification systems (RMR, Q index) have limitations in characterizing fracture and vein morphology, spatial variability, and mineralogical composition, leading to subjectivity and inefficiency.
Key Innovation: This review examines how advanced imaging technologies (hyperspectral imaging) and AI (YOLO, Mask R-CNN) optimize fracture and vein characterization from drill core images. It highlights their ability to provide high-resolution mineralogical mapping and automated detection/segmentation, overcoming limitations of traditional methods, improving geotechnical property evaluation, and promoting safer mining operations.
42. Correction: Dynamic Optimization of Powder Factor in Extreme-cold Region Bench Blasting Considering Temperature Effects on Single-hole Blasting
Core Problem: This item is a journal correction notice that updates previously published material rather than presenting new primary geohazard evidence.
Key Innovation: Improves record accuracy and reproducibility; it does not add new experimental or modeling results.
43. Probabilistic analysis of roadbed consolidation considering spatial variability of soft clay parameters
Core Problem: Existing studies on soft clay reinforcement often neglect the spatial variability of soil parameters and its time-dependent impact on foundation settlement, leading to potential underestimation of risks and unsafe designs.
Key Innovation: Developed a three-dimensional stochastic finite element model based on the Modified Cam-Clay model to simulate settlement evolution, quantifying the impact of spatial variability (scale of fluctuation, compression, and permeability coefficients) on settlement response and highlighting the risks of neglecting such variability.
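The spatial-variability ingredient can be sketched as a 1D random-field sample with exponential autocorrelation and scale of fluctuation theta (a generic illustration; the compression-index statistics here are hypothetical, not the paper's values):

```python
import numpy as np

def gaussian_random_field_1d(z, mean, cov, theta, rng):
    """Sample a 1D soil-parameter profile with exponential autocorrelation
    rho(d) = exp(-2 d / theta), where theta is the scale of fluctuation."""
    d = np.abs(z[:, None] - z[None, :])
    C = np.exp(-2.0 * d / theta)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))   # jitter for stability
    return mean * (1.0 + cov * (L @ rng.normal(size=len(z))))

rng = np.random.default_rng(5)
depth = np.linspace(0.0, 10.0, 51)        # 10 m soft-clay profile, 0.2 m spacing
# Hypothetical compression-index field: mean 0.4, coefficient of variation 20%, theta = 2 m.
Cc = gaussian_random_field_1d(depth, mean=0.4, cov=0.2, theta=2.0, rng=rng)
```

Each realization like Cc would feed one deterministic consolidation run in a Monte Carlo stochastic-FEM loop, and the spread of the resulting settlements quantifies the risk of neglecting variability.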
44. Asymmetric Changes in the Cooling Capacity of China's Lakes
Core Problem: A systematic assessment of the cooling effect of lakes across diverse regions and the spatiotemporal patterns of Lake Cooling Capacity (LCC) remains limited, despite their significant influence on local climate and role in mitigating heat extremes.
Key Innovation: Develops a multi-metric framework to evaluate LCC for 265 major Chinese lakes, revealing substantial cooling effects, spatially asymmetric trends (intensifying on the Tibetan Plateau, weakening in eastern plains), and identifying albedo, depth, and topography as dominant drivers, underscoring lakes' role in mitigating regional heat extremes.
45. Impact of Hydrological Regime and Temperature on Vegetation Growth Patterns in Floodplain Lakes Under Extreme Drought
Core Problem: The nonlinear interactive effects of hydrological regime (flooding duration and depth) and temperature on vegetation growth patterns in floodplain lakes under extreme drought conditions remain poorly understood.
Key Innovation: Developed a robust framework integrating a hydrodynamic model, interpretable machine learning, and a geographical detector to analyze individual and interactive impacts, finding that temperature factors can exert stronger effects than hydrological factors during extreme droughts, and identifying synergistic interactions and critical threshold shifts, providing insights for wetland conservation.
46. Few-Shot Continual Learning for 3D Brain MRI with Frozen Foundation Models
Core Problem: Foundation models pretrained on large-scale 3D medical imaging data face challenges when adapted to multiple downstream tasks under continual learning with limited labeled data, often suffering from catastrophic forgetting.
Key Innovation: Proposes a few-shot continual learning approach for 3D brain MRI that combines a frozen pretrained backbone with task-specific Low-Rank Adaptation (LoRA) modules. This design eliminates catastrophic forgetting by training only the adapter and task-specific head, achieving balanced performance across sequential tasks (tumor segmentation and brain age estimation) with zero forgetting and minimal trainable parameters.
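Illustrative Sketch: The frozen-backbone-plus-LoRA design above reduces to one line of algebra, y = Wx + (alpha/r)·BAx with W frozen. A minimal numpy sketch (not the paper's 3D architecture; shapes and ranks are illustrative) showing why backbone forgetting cannot occur — only the small A, B adapter would ever be trained:

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer plus a trainable low-rank update.
    W never changes (so nothing stored in it can be forgotten); only the
    small adapter matrices A and B would receive gradients."""
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                    # frozen pretrained weight
        self.A = 0.01 * rng.standard_normal((r, d_in))
        self.B = np.zeros((d_out, r))                 # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def __call__(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        return self.A.size + self.B.size

W = np.random.default_rng(1).standard_normal((64, 128))
layer = LoRALinear(W, r=4)
x = np.ones(128)
y = layer(x)   # with B = 0 this exactly reproduces the frozen backbone
```

One such adapter pair per task, plus a task-specific head, is what yields the paper's "zero forgetting with minimal trainable parameters" property.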
47. Evidential Neural Radiance Fields
Core Problem: Existing uncertainty quantification methods for Neural Radiance Fields (NeRFs) fail to capture both aleatoric and epistemic uncertainty, or they compromise rendering quality and incur significant computational overhead.
Key Innovation: Introduces Evidential Neural Radiance Fields, a probabilistic approach that seamlessly integrates with the NeRF rendering process to directly quantify both aleatoric and epistemic uncertainty from a single forward pass, achieving state-of-the-art scene reconstruction fidelity and uncertainty estimation.
48. Hybrid Quantum Temporal Convolutional Networks
Core Problem: Quantum machine learning models for sequential data face scalability challenges when dealing with complex multivariate signals.
Key Innovation: Introduces the Hybrid Quantum Temporal Convolutional Network (HQTCN), which combines classical temporal windowing with a quantum convolutional neural network core, achieving significant parameter reduction and competitive performance on multivariate time-series analysis, especially under data-limited conditions.
49. BuildAnyPoint: 3D Building Structured Abstraction from Diverse Point Clouds
Core Problem: Recovering artist-created building abstraction from diverse and underconstrained point clouds (e.g., noisy, sparse LiDAR/SfM data) is challenging.
Key Innovation: BuildAnyPoint, a novel generative framework using a Loosely Cascaded Diffusion Transformer (Loca-DiT) that first recovers the underlying distribution from point clouds via conditional latent diffusion, then autoregressively encapsulates the result into compact meshes, improving 3D building reconstruction accuracy and uniformity.
50. Learning Accurate Segmentation Purely from Self-Supervision
Core Problem: Accurately segmenting foreground objects from raw images without any manual annotations, pre-trained models, or post-processing remains a core challenge in computer vision.
Key Innovation: Selfment, a fully self-supervised framework that constructs patch-level affinity graphs, applies NCut for initial separation, and uses Iterative Patch Optimization (IPO) for refinement, achieving state-of-the-art results in unsupervised saliency detection and zero-shot generalization.
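Illustrative Sketch: The NCut step in Selfment's pipeline is the classical normalized-cut relaxation. A minimal sketch (toy affinity matrix; the paper's patch-affinity construction and IPO refinement are not reproduced) of bipartitioning an affinity graph via the second eigenvector of the symmetric normalized Laplacian:

```python
import numpy as np

def ncut_bipartition(W):
    """Split a graph in two with the normalized-cut relaxation:
    threshold the eigenvector of the second-smallest eigenvalue of
    L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # second-smallest eigenvector
    return fiedler > np.median(fiedler)  # boolean group labels

# Two tight clusters joined by weak links: NCut should separate them.
A = np.full((6, 6), 0.01)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
labels = ncut_bipartition(A)
```

In Selfment the nodes would be image patches and W their feature-affinity graph; the principle is identical.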
51. Altitude-Aware Visual Place Recognition in Top-Down View
Core Problem: Aerial visual place recognition (VPR) is challenging under significant altitude variations, requiring robust localization for airborne platforms without additional hardware.
Key Innovation: Proposes an altitude-adaptive VPR approach that estimates relative altitude by analyzing ground feature density, then uses this for canonical image cropping and classification-based VPR, achieving high accuracy and robustness without extra hardware.
52. DACESR: Degradation-Aware Conditional Embedding for Real-World Image Super-Resolution
Core Problem: Multimodal large models struggle with real-world image super-resolution, particularly for degraded images, as directly fine-tuning recognition models like RAM in degraded spaces is difficult.
Key Innovation: Introduces a Degradation Selection Strategy and Real Embedding Extractor (REE) for improved recognition on degraded images, combined with a Conditional Feature Modulator (CFM) and a Mamba-based network to effectively restore image textures and balance fidelity/perceptual quality in super-resolution.
53. PointCoT: A Multi-modal Benchmark for Explicit 3D Geometric Reasoning
Core Problem: Multimodal Large Language Models (MLLMs) struggle with 3D point cloud understanding and explicit geometric reasoning, often suffering from 'geometric hallucinations' because they treat reasoning as an implicit mapping process, bypassing intermediate logical steps.
Key Innovation: Presents PointCoT, a novel framework that empowers MLLMs with explicit Chain-of-Thought (CoT) reasoning for 3D data using a 'Look, Think, then Answer' paradigm, and introduces Point-Reason-Instruct, a large-scale benchmark for training geometry-grounded rationales.
54. Thinking with Images as Continuous Actions: Numerical Visual Chain-of-Thought
Core Problem: Existing multimodal large language models (MLLMs) for region-grounded reasoning use textified coordinates or fixed-granularity patches, leading to modality mismatch, semantic fragmentation, or limited precision in region selection.
Key Innovation: Proposes Numerical Visual Chain-of-Thought (NV-CoT), a framework that enables MLLMs to reason over images using continuous numerical coordinates by expanding the MLLM action space to directly generate bounding-box coordinates, significantly improving localization precision and final answer accuracy.
55. Foundation World Models for Agents that Learn, Verify, and Adapt Reliably Beyond Static Environments
Core Problem: Current autonomous agents and their world models are limited by assumptions of fixed tasks and static environments, hindering their ability to learn efficiently, act reliably, and adapt their policies in open, novel conditions.
Key Innovation: Outlines a vision for 'foundation world models' as persistent, compositional representations unifying RL, program synthesis, and abstraction, built around learnable reward models, adaptive formal verification, online abstraction calibration, and test-time synthesis, enabling agents to learn, verify, and adapt reliably in open worlds.
56. SR3R: Rethinking Super-Resolution 3D Reconstruction With Feed-Forward Gaussian Splatting
Core Problem: Existing 3D super-resolution (3DSR) methods for reconstructing high-resolution 3D scenes from low-resolution multi-view images rely on dense inputs and per-scene optimization, limiting reconstruction fidelity, generalization, and real-time usability by restricting high-frequency priors to those from 2DSR models.
Key Innovation: Proposes SR3R, a feed-forward framework that directly maps sparse low-resolution views to high-resolution 3D Gaussian Splatting (3DGS) representations, enabling autonomous learning of 3D-specific high-frequency geometry and appearance from large-scale data, improving generalization and reconstruction fidelity.
57. EvalMVX: A Unified Benchmarking for Neural 3D Reconstruction under Diverse Multiview Setups
Core Problem: Current real-world datasets for neural 3D reconstruction primarily benchmark multiview stereo (MVS) with RGB inputs, neglecting quantitative assessment of other crucial techniques like multiview photometric stereo (MVPS) and multiview shape from polarization (MVSfP) together.
Key Innovation: Proposes EvalMVX, a unified real-world dataset and benchmarking framework for neural 3D reconstruction, containing 25 objects captured under diverse multiview and lighting conditions with aligned ground-truth 3D meshes, enabling simultaneous quantitative assessment of MVS, MVPS, and MVSfP methods.
58. Neural Diffusion Intensity Models for Point Process Data
Core Problem: Nonparametric estimation of intensity models and posterior inference over intensity paths for Cox processes (modeling overdispersed point process data) are typically intractable, relying on expensive MCMC methods.
Key Innovation: Introduces Neural Diffusion Intensity Models, a variational framework for Cox processes driven by neural SDEs, which, through a key theoretical result on enlargement of filtrations, guarantees the variational family contains the true posterior, enabling accurate recovery of latent intensity dynamics and posterior paths with orders-of-magnitude speedups over MCMC-based methods via an amortized encoder.
59. Manifold-Preserving Superpixel Hierarchies and Embeddings for the Exploration of High-Dimensional Images
Core Problem: Existing hierarchical embedding techniques for exploring large high-dimensional images (e.g., remote sensing data) construct hierarchies purely based on attribute information, ignoring spatial layout and impeding consistent exploration of regions of interest in both image and attribute space.
Key Innovation: Presents a superpixel hierarchy for high-dimensional images that explicitly takes the high-dimensional attribute manifold into account during construction, enabling consistent exploration of data in both image and attribute space, which is beneficial for analyzing remote sensing data.
60. A Mixed Diet Makes DINO An Omnivorous Vision Encoder
Core Problem: Pre-trained vision encoders like DINOv2 exhibit poor feature alignment across different modalities (e.g., RGB, depth), hindering robust cross-modal understanding of scenes.
Key Innovation: Proposes the Omnivorous Vision Encoder, a novel framework that learns a modality-agnostic feature space using a dual objective: maximizing feature alignment between modalities and distilling representations from a frozen teacher. This enables consistent, powerful embeddings regardless of input modality.
61. An Efficient Unsupervised Federated Learning Approach for Anomaly Detection in Heterogeneous IoT Networks
Core Problem: The heterogeneous nature of IoT data (device capabilities, data formats, communication constraints) poses significant challenges to maintaining global model performance and privacy in unsupervised federated learning for anomaly detection.
Key Innovation: Proposes an efficient unsupervised Federated Learning framework that enhances anomaly detection by leveraging shared features from complementary IoT datasets while preserving dataset-specific features. It also employs explainable AI (SHAP) for interpretability, significantly outperforming conventional FL approaches.
62. MuViT: Multi-Resolution Vision Transformers for Learning Across Scales in Microscopy
Core Problem: Modern microscopy images contain structures across multiple spatial scales, but most vision models operate at a single resolution or derive multi-scale features from one view, limiting their ability to exploit the inherently multi-resolution nature of the data.
Key Innovation: Introduces MuViT, a transformer architecture that fuses true multi-resolution observations by embedding patches into a shared world-coordinate system and extending rotary positional embeddings. This enables attention to integrate wide-field context with high-resolution detail, delivering consistent improvements across microscopy tasks.
63. Neural ensemble Kalman filter: Data assimilation for compressible flows with shocks
Core Problem: Standard ensemble Kalman filters (EnKF) perform poorly in data assimilation for compressible flows with shocks due to non-Gaussian, bimodal forecast distributions near uncertain shock locations, leading to spurious oscillations.
Key Innovation: Introduces the neural EnKF, which embeds neural function approximations within ensemble DA by mapping forecast ensembles to the parameter space of a deep neural network, and uses physics-informed transfer learning to enforce smooth parameter variation, thereby avoiding spurious oscillations and nonphysical features.
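Illustrative Sketch: For reference, the classical stochastic EnKF analysis step that neural variants build on, in a scalar toy problem (the paper's neural-network parameter-space mapping and transfer learning are not reproduced):

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; R: obs error covariance."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    Pf = A @ A.T / (n - 1)                             # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = 2.0 + rng.standard_normal((1, 500))                # prior ensemble ~ N(2, 1)
Xa = enkf_analysis(X, y=np.array([0.0]), H=np.eye(1), R=np.eye(1), rng=rng)
# Observing 0 with unit noise should pull the posterior toward mean 1, variance 0.5.
```

The Gaussian update above is exactly what breaks down near shocks (bimodal forecasts), motivating the neural reparameterization the paper proposes.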
64. Exploring Robust Intrusion Detection: A Benchmark Study of Feature Transferability in IoT Botnet Attack Detection
Core Problem: Cross-domain intrusion detection faces significant challenges due to variability in network traffic characteristics and feature distributions across different IoT/IIoT environments, leading to performance degradation of models trained on one domain when applied to another.
Key Innovation: Conducts a benchmark study evaluating the transferability of three flow-based feature sets across four heterogeneous IoT/IIoT datasets, demonstrating significant performance degradation under domain shifts and providing practical guidelines for feature engineering and algorithm selection to improve robustness and transferability in intrusion detection systems.
65. CO^3: Cooperative Unsupervised 3D Representation Learning for Autonomous Driving
Core Problem: Unsupervised 3D representation learning for outdoor scene point clouds is challenging due to moving objects and the infeasibility of reconstructing whole scenes or capturing partial views for contrastive objectives.
Key Innovation: Introduces CO^3, a Cooperative Contrastive Learning and Contextual Shape Prediction framework that utilizes LiDAR point clouds from both vehicle and infrastructure sides to build robust views, improving 3D representation learning for autonomous driving.
66. DRL-ORA: Distributional Reinforcement Learning with Online Risk Adaption
Core Problem: Achieving reliable policies in safety-critical reinforcement learning settings with incomplete environmental knowledge is difficult; dynamically adjusting epistemic risk is crucial for better efficiency.
Key Innovation: Proposes DRL-ORA, a Distributional Reinforcement Learning framework that unifies epistemic and implicit aleatory uncertainty quantification and dynamically adjusts epistemic risk levels online via total variation minimization, outperforming methods with fixed or manually designed risk levels.
67. Towards Generating Realistic 3D Semantic Training Data for Autonomous Driving
Core Problem: The high cost and complexity of collecting and annotating real 3D data for semantic scene understanding in autonomous driving, coupled with domain gaps and quality issues in existing synthetic data generation methods (e.g., reliance on image projection or decoupled models).
Key Innovation: Proposes a novel approach to generate realistic 3D semantic scene-scale data for autonomous driving without relying on image projection or decoupled multi-resolution models, leading to higher quality synthetic data that, when used for training, improves semantic segmentation network performance and reduces annotation effort.
68. Knowledge-Guided Machine Learning: Illustrating the use of Explainable Boosting Machines to Identify Overshooting Tops in Satellite Imagery
Core Problem: Machine learning algorithms struggle to extrapolate beyond training data and are opaque, making failures unpredictable in high-stakes meteorological applications like severe weather forecasting.
Key Innovation: Illustrates the use of Explainable Boosting Machines (EBMs) with human-guided strategies and knowledge-guided feature extraction (e.g., Gray-Level Co-occurrence Matrices) to develop interpretable ML for identifying overshooting tops in satellite imagery.
69. In-Context Learning of Temporal Point Processes with Foundation Inference Models
Core Problem: Current neural network approaches for Marked Temporal Point Process (MTPP) inference require training separate, specialized models for each target system, limiting generalizability and efficiency.
Key Innovation: FIM-PP, a Foundation Inference Model for Point Processes, which is a pretrained deep neural network capable of inferring MTPPs from real-world data in-context without additional training, matching specialized models' performance.
70. Provably Safe Generative Sampling with Constricting Barrier Functions
Core Problem: Flow-based generative models lack formal guarantees that generated samples will satisfy hard constraints, critical for safety-critical domains.
Key Innovation: A safety filtering framework using constricting Control Barrier Functions (CBFs) to synthesize feedback control, guaranteeing safe sampling while minimizing distributional shift from the original model, applicable to any pre-trained flow-based generative scheme.
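Illustrative Sketch: The barrier-function idea is easiest to see in one dimension: for dynamics x' = u and safe set h(x) = x_max − x ≥ 0, the CBF condition h' ≥ −α·h clamps the input to u ≤ α·h, so trajectories approach the boundary but never cross it. A minimal single-integrator sketch (not the paper's flow-based generative setting; values are illustrative):

```python
def cbf_filter(x, u_nominal, x_max=1.0, alpha=5.0):
    """Minimal 1D control barrier function filter for dynamics x' = u with
    safe set h(x) = x_max - x >= 0: the CBF condition h' >= -alpha * h
    gives u <= alpha * h, and the closest safe input is a simple clamp."""
    h = x_max - x
    return min(u_nominal, alpha * h)

# A nominal input of 2.0 would overshoot x_max; the filter prevents that.
x, dt, traj = 0.0, 0.01, []
for _ in range(1000):
    x += dt * cbf_filter(x, u_nominal=2.0)
    traj.append(x)
```

In the paper's setting the "input" is the velocity field of the generative sampler, filtered in the same spirit to keep samples inside the constraint set.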
71. From Statics to Dynamics: Physics-Aware Image Editing with Latent Transition Priors
Core Problem: State-of-the-art instruction-based image editing models frequently fail to render physically plausible results when editing involves complex causal dynamics like refraction or material deformation.
Key Innovation: Reformulates physics-aware editing as predictive physical state transitions, introduces the PhysicTran38K dataset, and proposes PhysicEdit, an end-to-end framework with a textual-visual dual-thinking mechanism for physically grounded and dynamic image editing.
72. Stationary Kernels and Gaussian Processes on Lie Groups and their Homogeneous Spaces I: the compact case
Core Problem: The need for constructive and practical techniques to build stationary Gaussian processes (GPs) on non-Euclidean spaces like Lie Groups and their Homogeneous Spaces to encode prior information about function invariance to symmetries, crucial in fields like geostatistics.
Key Innovation: Develops constructive and practical techniques for building stationary Gaussian processes on compact Lie Groups and their Homogeneous Spaces, enabling calculation of covariance kernels and sampling from prior/posterior GPs in a manner compatible with standard GP software, thereby generalizing stationarity to these non-Euclidean spaces.
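Illustrative Sketch: The simplest compact Lie group is the circle SO(2), where every stationary kernel is a character (Fourier) expansion with nonnegative weights, k(θ, θ′) = Σₙ aₙ cos(n(θ − θ′)). A minimal sketch with squared-exponential-like spectral weights (an assumed choice for illustration; the paper's general construction on arbitrary compact groups is not reproduced):

```python
import numpy as np

def circle_kernel(theta1, theta2, lengthscale=0.5, n_terms=30):
    """Stationary kernel on the circle S^1 (the compact Lie group SO(2)):
    a truncated character expansion sum_n a_n cos(n * (t1 - t2)) with
    spectral weights a_n = exp(-(lengthscale * n)^2 / 2) >= 0, which
    guarantees positive semi-definiteness."""
    d = theta1[:, None] - theta2[None, :]
    n = np.arange(n_terms)
    a = np.exp(-0.5 * (lengthscale * n) ** 2)
    return np.tensordot(a, np.cos(n[:, None, None] * d[None, :, :]), axes=1)

theta = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
K = circle_kernel(theta, theta)   # symmetric, stationary, PSD Gram matrix
```

On the circle, stationarity means k depends only on θ − θ′ (group invariance); the paper's contribution is doing the analogous construction constructively on general compact Lie groups and homogeneous spaces.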
73. Assessment of Spatio-Temporal Predictors in the Presence of Missing and Heterogeneous Data
Core Problem: Assessing the quality and identifying underperforming regions of deep learning models for complex, spatio-temporal data is challenging, especially with missing and heterogeneous observations, as classical statistical assumptions may not apply.
Key Innovation: A residual correlation analysis framework for spatio-temporal relational-enabled neural predictive models, which identifies and localizes regions of poor predictive performance under minimal assumptions, even with missing and heterogeneous data.
74. CLEAR-IR: Clarity-Enhanced Active Reconstruction of Infrared Imagery
Core Problem: Robust robotic perception in dark environments is hindered by active emitter patterns in infrared (IR) streams, which, despite being less susceptible to noise than RGB, degrade high-level tasks like object detection, tracking, and localization.
Key Innovation: CLEAR-IR, a Deep Multi-scale Aware Overcomplete (DeepMAO) inspired architecture that reconstructs clean IR images from emitter-populated input. This improves image quality and downstream robotic performance, enabling reliable vision-driven robotic systems in extreme low-light conditions and allowing tasks trained on RGB images to operate effectively in IR.
75. Continuous meteorological surface and soil records (2004–2024) at the Met Office surface site of Cardington, UK
Core Problem: The need for a continuous, high-quality, and comprehensive meteorological and hydrological observational record to improve process-based physics representation in atmospheric models and support environmental research.
Key Innovation: Describes a 20-year continuous meteorological surface and soil observational record (2004–2024) from the Met Office Cardington site, providing detailed data on boundary layer, fog, air-surface exchange, and soil properties (temperature, moisture, water table depth), with high data availability and quality control.
76. Seasonal patterns and diagnostic values of δ2H, δ18O, d-excess, and Δ′17O in precipitation over Seoul, South Korea (2016–2020)
Core Problem: Scarcity of long-term isotope records in mid-latitude regions like South Korea to understand climate variability and the hydrological cycle.
Key Innovation: Comprehensive analysis of stable isotopes (δ2H, δ18O, d-excess, and Δ′17O) in precipitation over Seoul (2016–2020), providing insights into source humidity, transport dynamics, and seasonal precipitation processes in East Asia.
77. The Effect of Triaxial Stress on Borehole Geometry and Cuttings Particle Characteristics in Coal Using Self-Rotating Multi-jet
Core Problem: The lack of understanding regarding how true triaxial stress and intrinsic coal properties affect borehole geometry and cuttings characteristics during self-rotating multi-jet (SRMJ) drilling in coal.
Key Innovation: Experimentally demonstrated that borehole morphology, diameter, depth, volume, and coal cuttings particle size are significantly influenced by triaxial stress and by the angle between bedding planes and the drilling direction, providing a foundational understanding for future research into borehole formation in coal.
78. Comparing objective and subjective measures of household resilience: Evidence from Ethiopia
Core Problem: The lack of clarity and potential discrepancies when using different objective and subjective measures of household resilience for targeting and evaluating development interventions, leading to different classifications of vulnerable households.
Key Innovation: Systematically compares objective (FAO’s RIMA, TANGO) and subjective measures of household resilience, revealing significant statistical differences in their distributions and classifications, and highlighting that the choice of measure can substantially affect policy targeting and impact assessment outcomes.
79. Modeling the dynamic interaction between hospital beds and surrounding pedestrians during emergencies
Core Problem: Existing dynamic models for hospital emergencies largely overlook the complex motion mechanisms of bedridden patient transportation, its interaction with surrounding crowds, and its impact on overall collective dynamics.
Key Innovation: Proposes an enhanced social force model (VG-SFM) that explicitly accounts for dynamic interactions between bedridden patients and pedestrians, integrates a volunteer-based game-theoretic module for altruistic yielding, and demonstrates its effectiveness in simulating hospital evacuations and patient transfers during emergencies.
80. A methodological comparison of interaction neighborhoods in the social force model of panic evacuation
Core Problem: Understanding how different hypotheses for defining interaction neighborhoods (metric, topological, visual networks) affect crowd dynamics and collective outcomes in the social force model of panic evacuation, which remains unclear.
Key Innovation: Systematically compares the impact of metric, topological, and visual interaction neighborhoods within a social force model of panic evacuation, demonstrating that all three reproduce the individualistic-to-herding transition but shape outcomes differently, and identifying visual neighborhoods as the best performing due to their robust adaptivity.
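Illustrative Sketch: Both crowd studies above build on the social force model: each agent relaxes toward a desired velocity and is repelled by its neighbors. A minimal two-agent sketch with a metric (all-pairs) neighborhood (not the papers' VG-SFM or visual/topological variants; all parameter values are illustrative):

```python
import numpy as np

def social_force_step(pos, vel, goal, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a minimal Helbing-style social force model:
    a relaxation force toward each agent's desired velocity plus an
    exponential pairwise repulsion A * exp(-d / B) between agents."""
    n = len(pos)
    to_goal = goal - pos
    desired = v0 * to_goal / np.linalg.norm(to_goal, axis=1, keepdims=True)
    force = (desired - vel) / tau                 # relaxation toward goal
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dvec = pos[i] - pos[j]
            d = np.linalg.norm(dvec)
            force[i] += A * np.exp(-d / B) * dvec / d   # pairwise repulsion
    vel = vel + dt * force
    return pos + dt * vel, vel

pos = np.array([[0.0, 0.0], [0.0, 0.6]])
vel = np.zeros((2, 2))
goal = np.array([[10.0, 0.0], [10.0, 0.6]])
for _ in range(100):
    pos, vel = social_force_step(pos, vel, goal)
```

The neighborhood studies compared replace the all-pairs loop here with metric cutoffs, k-nearest (topological) sets, or visibility-based sets; the underlying force balance is unchanged.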
81. Hysteresis during storm events impacts stream sediment and nutrient load calculations
Core Problem: Substantial errors in stream sediment and nutrient load estimates may occur when hysteresis effects during storm events are overlooked, hindering effective catchment management strategies.
Key Innovation: Tested a non-linear Hysteresis Area, Residual, and Peak (HARP) analysis tool to improve Q-C relationships and nutrient load estimates during storm events, revealing distinct site- and constituent-specific hysteresis characteristics influenced by land use and groundwater, and demonstrating that overlooking hysteresis leads to inaccuracies in load calculations.
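Illustrative Sketch: A common simpler precursor to HARP-style analysis is a loop-wise hysteresis index: compare normalized concentration on the rising versus falling limb at matched discharge levels. A minimal sketch on a synthetic storm event (not the HARP tool itself; the level grid and synthetic hydrograph are illustrative):

```python
import numpy as np

def hysteresis_index(Q, C):
    """Loop-wise hysteresis index for one storm event: normalize Q and C to
    [0, 1], then average (C_rising - C_falling) at matched normalized-discharge
    levels. Positive values indicate a clockwise loop (concentration higher on
    the rising limb); negative values an anticlockwise loop."""
    qn = (Q - Q.min()) / (Q.max() - Q.min())
    cn = (C - C.min()) / (C.max() - C.min())
    ipeak = int(np.argmax(Q))
    levels = np.linspace(0.05, 0.95, 19)
    c_rise = np.interp(levels, qn[: ipeak + 1], cn[: ipeak + 1])
    # falling limb runs back down from the peak: reverse so xp is increasing
    c_fall = np.interp(levels, qn[ipeak:][::-1], cn[ipeak:][::-1])
    return float(np.mean(c_rise - c_fall))

# Synthetic clockwise event: concentration peaks before discharge does.
t = np.linspace(0.0, 1.0, 101)
Q = np.exp(-((t - 0.5) ** 2) / 0.02)
C = np.exp(-((t - 0.4) ** 2) / 0.02)
hi = hysteresis_index(Q, C)
```

Ignoring this limb asymmetry and fitting one Q-C rating curve to both limbs is exactly the practice the study shows biases load estimates.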
82. Quantifying size effects on particle breakage strength and energy of red-bed soft and hard rock waste materials: Experiments, simulations, and model correction
Core Problem: Accurately predicting the breakage strength and energy of red-bed soft-hard rock waste materials across different particle sizes, especially very large ones, is challenging with conventional models, hindering effective evaluation of gradation evolution and control of breakage-induced risks in road engineering fills.
Key Innovation: Conducted extensive single-particle breakage tests and discrete element method simulations to quantify particle-size scale effects on breakage strength and energy, and proposed corrected models applicable to very large particle sizes, providing a quantitative tool for assessing gradation evolution and breakage in mixed granular fill systems.
83. Enabling High‐Fidelity Wave‐Particle Interaction Studies: A Novel Filtering for Isolating Whistlers From Spacecraft Noise
Core Problem: Resolving the mixture of natural plasma waves and persistent spacecraft interference is a fundamental challenge in space physics, as traditional signal decomposition methods often fail due to time-varying frequencies and overlapping spectra.
Key Innovation: Proposes instantaneous-bandwidth Vold-Kalman filtering (IB-VKF), which defines component-specific bandwidth weighting functions for independent and precise dynamic tracking of disparate signal features. It successfully isolates persistent reaction-wheel interference and separates transient whistler waves from background platform noise, enhancing the fidelity of space magnetic data.
84. Improving Aerosol Absorption Estimates Via Size‐Resolved Constraints Based on AERONET and In Situ Measurements
Core Problem: Accurate aerosol particle size distribution, essential for estimating radiative forcing, is often hindered by oversimplified assumptions about aerosol mixing state and size.
Key Innovation: A single-site observational-closure study combines AERONET multi-wavelength extinction and absorption retrievals, in situ particle size observations, and Mie modeling, treating the BC-sulfate core-shell scheme as a mass- and number-conserved, radiatively closed set of probabilistic solutions. This approach yields more physically consistent PSDs and systematically modifies aerosol optical properties, leading to enhanced atmospheric heating and reduced top-of-atmosphere cooling in radiative transfer simulations.
85. SegReg: Latent Space Regularization for Improved Medical Image Segmentation
Core Problem: Medical image segmentation models, optimized with voxel-wise losses, leave latent feature representations largely unconstrained, potentially limiting generalisation and continual learning performance.
Key Innovation: SegReg, a latent-space regularisation framework, operates on U-Net feature maps to encourage structured embeddings. This improves domain generalisation and continual learning by reducing task drift and enhancing forward transfer across sequential tasks without adding memory or parameters.
86. No Calibration, No Depth, No Problem: Cross-Sensor View Synthesis with 3D Consistency
Core Problem: The significant engineering effort required for calibration to obtain aligned RGB-X data, which acts as a bottleneck for large-scale cross-sensor learning and data collection.
Key Innovation: A match-densify-consolidate method that enables cross-sensor view synthesis without explicit calibration or 3D priors for the X-sensor, using guided point densification and 3D Gaussian Splatting to create a scalable solution for RGB-X data.
87. Incremental dimension reduction for efficient and accurate visual anomaly detection
Core Problem: The high dimensionality of features extracted by deep neural networks in visual anomaly detection makes it difficult to apply these algorithms efficiently to large datasets with thousands of images.
Key Innovation: An incremental dimension reduction algorithm that computes truncated singular value decomposition in batches, updating singular values and vectors iteratively. This reduces memory overhead and accelerates the training of state-of-the-art anomaly detection algorithms with comparable accuracy.
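Illustrative Sketch: Batched truncated-SVD updating of the kind described above typically projects each new batch onto the current basis, orthogonalizes the residual, and re-diagonalizes a small augmented matrix rather than redecomposing all data. A minimal numpy sketch (the paper's exact algorithm and truncation policy are not reproduced):

```python
import numpy as np

def truncated_svd_update(U, S, batch, k):
    """Fold a new batch of columns into an existing rank-k SVD (columns are
    samples). Project the batch, orthogonalize the residual, then take the
    SVD of a small augmented matrix instead of the full data matrix."""
    proj = U.T @ batch                      # components explained by current basis
    resid = batch - U @ proj                # components orthogonal to it
    Q, R = np.linalg.qr(resid)
    K = np.block([[np.diag(S), proj],
                  [np.zeros((Q.shape[1], len(S))), R]])
    Uk, Sk, _ = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    return U_new[:, :k], Sk[:k]

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 6)) @ rng.standard_normal((6, 200))  # rank-6 data
k = 6
U, S, _ = np.linalg.svd(X[:, :50], full_matrices=False)
U, S = U[:, :k], S[:k]
for start in range(50, 200, 50):
    U, S = truncated_svd_update(U, S, X[:, start:start + 50], k)
```

Memory scales with the batch and rank rather than the full dataset, which is the point of applying this to feature banks of thousands of images.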
88. Vision-Language Semantic Grounding for Multi-Domain Crop-Weed Segmentation
Core Problem: Existing deep learning models for fine-grained crop-weed segmentation struggle to generalize across heterogeneous agricultural environments due to reliance on dataset-specific visual features.
Key Innovation: Vision-Language Weed Segmentation (VL-WS), a novel framework that grounds pixel-level segmentation in semantically aligned, domain-invariant representations by fusing frozen CLIP embeddings and task-specific spatial features via FiLM layers conditioned on natural language captions, achieving improved generalization and data efficiency across diverse agricultural domains.
89. Breaking the Data Barrier: Robust Few-Shot 3D Vessel Segmentation using Foundation Models
Core Problem: State-of-the-art 3D segmentation methods require large-scale annotated datasets and suffer severe performance degradation under domain shifts, making them impractical for clinical settings with limited data.
Key Innovation: A novel framework leveraging a pre-trained Vision Foundation Model (DINOv3) adapted for robust few-shot 3D volumetric vessel segmentation, using a lightweight 3D Adapter, multi-scale 3D Aggregator, and Z-channel embedding to achieve significant improvements with limited data and across domain shifts.
90. Provable Subspace Identification of Nonlinear Multi-view CCA
Core Problem: Identifying and separating shared latent signals from view-private variations in nonlinear multi-view data is challenging, especially when exact unmixing is ill-posed.
Key Innovation: A theoretical framework that reframes nonlinear multi-view Canonical Correlation Analysis (CCA) as a basis-invariant subspace identification problem, proving that it can provably recover pairwise correlated signal subspaces (and jointly correlated subspaces for N>=3 views) and providing finite-sample consistency guarantees.
91. UPath: Universal Planner Across Topological Heterogeneity For Grid-Based Pathfinding
Core Problem: Existing learning-based heuristic functions for grid-based pathfinding (e.g., A*) perform poorly on out-of-distribution grid maps, limiting their practical application where a universal solver is needed.
Key Innovation: UPath, a universal heuristic predictor that is trained once but capable of generalizing across a full spectrum of unseen and topologically heterogeneous tasks, significantly reducing the computational effort of A* while maintaining near-optimal solution quality.
92. See, Act, Adapt: Active Perception for Unsupervised Cross-Domain Visual Adaptation via Personalized VLM-Guided Agent
Core Problem: Pre-trained perception models degrade significantly in novel environments, and conventional fine-tuning incurs catastrophic forgetting and demands costly, scene-specific annotations.
Key Innovation: Proposes Sea^2, an active perception paradigm where an intelligent pose-control agent adapts how perception modules are deployed (rather than adapting the modules themselves) by navigating to informative viewpoints using a VLM-guided unsupervised reinforcement learning approach, without requiring downstream labels or retraining perception models.
93. Denoising-Enhanced YOLO for Robust SAR Ship Detection
Core Problem: Robust ship detection in SAR imagery is challenging due to clutter, speckle noise, and difficulty in detecting small targets in complex scenes.
Key Innovation: Proposes CPN-YOLO, a YOLOv8-based framework with a learnable large-kernel denoising module for cleaner representations, a PPA attention mechanism for multi-scale feature extraction, and a Gaussian similarity loss (NWD) for improved bounding-box similarity measurement.
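Illustrative Sketch: The NWD term models each bounding box (cx, cy, w, h) as a 2D Gaussian N([cx, cy], diag(w²/4, h²/4)); the 2-Wasserstein distance between two such Gaussians has a simple closed form and is mapped to a similarity with exp(−W₂/C). A minimal sketch following the original NWD formulation (C is a dataset-dependent constant; the value here is an assumed example):

```python
import numpy as np

def nwd(box1, box2, c=12.8):
    """Normalized Gaussian Wasserstein distance between (cx, cy, w, h) boxes.
    For axis-aligned Gaussians N([cx, cy], diag(w^2/4, h^2/4)), W2^2 reduces
    to squared center differences plus squared half-size differences;
    exp(-W2 / c) maps it to a similarity in (0, 1]."""
    cx1, cy1, w1, h1 = box1
    cx2, cy2, w2, h2 = box2
    w2sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
            + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
    return float(np.exp(-np.sqrt(w2sq) / c))

same = nwd((10, 10, 4, 4), (10, 10, 4, 4))   # identical boxes
near = nwd((10, 10, 4, 4), (12, 10, 4, 4))   # shifted by 2 px
far = nwd((10, 10, 4, 4), (40, 40, 4, 4))    # distant box
```

Unlike IoU, this similarity stays smooth and nonzero for non-overlapping boxes, which is why it helps with the small SAR ship targets the entry highlights.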
94. Hierarchical Concept-based Interpretable Models
Core Problem: Modern deep neural networks are challenging to interpret due to the opacity of their latent representations, and existing Concept Embedding Models (CEMs) fail to represent inter-concept relationships and require extensive concept annotations at different granularities.
Key Innovation: Introduces Hierarchical Concept Embedding Models (HiCEMs) that explicitly model concept relationships through hierarchical structures, and proposes Concept Splitting to automatically discover finer-grained sub-concepts from pretrained CEMs without additional annotations, enabling more granular explanations and improved task accuracy.
95. SpikeTrack: A Spike-driven Framework for Efficient Visual Tracking
Core Problem: Applying Spiking Neural Networks (SNNs) to RGB visual tracking faces a trade-off between energy efficiency and accuracy, as existing SNN frameworks either do not fully align with spike-driven computation or do not fully exploit spatiotemporal dynamics.
Key Innovation: Introduces SpikeTrack, an energy-efficient spike-driven framework for RGB object tracking, employing an asymmetric design with timestep expansion and unidirectional information flow, and a memory-retrieval module, achieving state-of-the-art SNN tracking performance with significantly reduced energy consumption.
96. FocusTrack: One-Stage Focus-and-Suppress Framework for 3D Point Cloud Object Tracking
Core Problem: Existing two-stage motion-centric methods for 3D point cloud object tracking suffer from error accumulation due to decoupled optimization (explicit foreground segmentation prior to motion estimation) and computational bottlenecks from sequential processing.
Key Innovation: Proposes FocusTrack, a novel one-stage framework that unifies motion-semantics co-modeling through Inter-frame Motion Modeling (IMM) and Focus-and-Suppress Attention, enabling end-to-end training. This achieves new state-of-the-art performance on prominent 3D tracking benchmarks (KITTI, nuScenes, Waymo) at a high speed (105 FPS) by enhancing foreground semantics and suppressing background noise without explicit segmentation.
97. Time Series Foundation Models as Strong Baselines in Transportation Forecasting: A Large-Scale Benchmark Analysis
Core Problem: Traditional deep learning models for transportation forecasting require extensive dataset-specific training, architecture design, and hyper-parameter tuning.
Key Innovation: Demonstrates that general-purpose time-series foundation models (Chronos-2) can serve as strong zero-shot baselines for transportation forecasting, often outperforming specialized models and providing useful uncertainty quantification without task-specific fine-tuning.
98. Joint Geometric and Trajectory Consistency Learning for One-Step Real-World Super-Resolution
Core Problem: Diffusion-based super-resolution is computationally expensive, and existing one-step distillation methods suffer from high parameter counts and teacher model limitations. Consistency models, while efficient, struggle with consistency drift and 'Geometric Decoupling' (lack of structural coherence).
Key Innovation: Introduction of GTASR, a consistency training paradigm for Real-ISR that uses a Trajectory Alignment (TA) strategy to rectify the tangent vector field and a Dual-Reference Structural Rectification (DRSR) mechanism to enforce structural constraints, achieving superior performance with minimal latency.
99. UFO-4D: Unposed Feedforward 4D Reconstruction from Two Images
Core Problem: Dense 4D reconstruction from unposed images remains challenging, with existing methods being slow or fragmented.
Key Innovation: UFO-4D, a unified feedforward framework that directly estimates dynamic 3D Gaussian Splats from two unposed images, enabling joint and consistent estimation of 3D geometry, 3D motion, and camera pose in a feedforward manner, outperforming prior work in joint estimation.
100. VaSST: Variational Inference for Symbolic Regression using Soft Symbolic Trees
Core Problem: Existing symbolic regression methods are often dominated by heuristic search or data-intensive approaches, struggle to efficiently explore the highly multimodal combinatorial space of symbolic expressions, and lack principled uncertainty quantification.
Key Innovation: Introduces VaSST, a scalable probabilistic framework for symbolic regression based on variational inference, employing a continuous relaxation of symbolic expression trees (soft symbolic trees) to transform combinatorial search into efficient gradient-based optimization, enabling principled uncertainty quantification and achieving superior performance.
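The "soft symbolic tree" relaxation can be illustrated with a single relaxed operator node; the tiny operator set and softmax mixing below are hypothetical simplifications, not the paper's parameterization.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# Candidate binary primitives (a hypothetical, tiny operator library).
OPS = [np.add, np.subtract, np.multiply]

def soft_node(x, y, logits):
    """A relaxed operator node: a softmax-weighted mixture of primitives.

    As the logits sharpen, the node collapses to a single discrete
    operator, which is how a combinatorial search over expression trees
    becomes gradient-friendly optimization over continuous logits.
    """
    w = softmax(logits)
    return sum(wk * op(x, y) for wk, op in zip(w, OPS))
```

With near-one-hot logits the node behaves like the selected primitive; a full tree composes such nodes, and variational inference then places a distribution over the logits.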
101. Moment Matters: Mean and Variance Causal Graph Discovery from Heteroscedastic Observational Data
Core Problem: Standard causal discovery methods return a single moment-agnostic graph, failing to reveal which causes act on the mean versus the variance, limiting interpretability and intervention design in heteroscedastic data.
Key Innovation: Proposes a Bayesian framework to infer separate mean and variance causal graphs from heteroscedastic observational data, deriving identification results, developing a variational inference method for posterior distribution over graphs, and incorporating curvature-aware optimization and prior knowledge for improved accuracy and sample efficiency.
102. Fairness under Graph Uncertainty: Achieving Interventional Fairness with Partially Known Causal Graphs over Clusters of Variables
Core Problem: Achieving causal notions of fairness in algorithmic predictions often assumes access to detailed knowledge of the underlying causal graph, which is a demanding and often impractical assumption.
Key Innovation: Proposes a learning framework that achieves interventional fairness by leveraging a causal graph over clusters of variables (which is easier to estimate), training a prediction model by reducing the worst-case discrepancy between interventional distributions across identified adjustment cluster sets, and developing an efficient barycenter kernel maximum mean discrepancy (MMD).
103. Multivariate Spatio-Temporal Neural Hawkes Processes
Core Problem: Existing temporal neural Hawkes processes fail to adequately capture complex spatio-temporal intensity structures in multivariate event data, especially beyond likelihood-based performance.
Key Innovation: Proposes a Multivariate Spatio-Temporal Neural Hawkes Process that integrates spatial information into latent state evolution with learned temporal and spatial decay dynamics, enabling flexible modeling of excitation and inhibition without predefined kernels.
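For reference, the classical fixed-kernel spatio-temporal Hawkes intensity that the neural model generalizes can be sketched as follows; parameter values are illustrative, and the paper replaces the exponential kernels with learned decay dynamics.

```python
import numpy as np

def intensity(t, s, events, mu=0.1, alpha=0.5, beta=1.0, gamma=2.0):
    """Conditional intensity of a classical spatio-temporal Hawkes process:

        lambda(t, s) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
                                                * exp(-gamma * ||s - s_i||^2)

    Each past event excites the process, with influence decaying in both
    time and space. Shown only as the fixed-kernel baseline; the paper's
    model learns these decay dynamics in a latent state instead.
    """
    s = np.asarray(s, dtype=float)
    lam = mu
    for t_i, s_i in events:
        if t_i < t:  # only strictly past events contribute
            d2 = np.sum((s - np.asarray(s_i, dtype=float)) ** 2)
            lam += alpha * np.exp(-beta * (t - t_i)) * np.exp(-gamma * d2)
    return float(lam)
```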
104. Predictive Hotspot Mapping for Data-driven Crime Prediction
Core Problem: Effective crime prediction and control require accurate predictive hotspot mapping to optimally allocate resources, but traditional methods may lack data-driven decision-making and automation capabilities.
Key Innovation: Develops a non-parametric model using a spatio-temporal kernel density formulation for data-driven crime prediction and hotspot mapping, capable of incorporating expert inputs, and demonstrates its effectiveness in a real-world collaboration with the Delhi police department for assigning patrol vehicles.
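The spatio-temporal kernel density idea admits a compact sketch: each past crime contributes a spatial Gaussian kernel, discounted by its age. Bandwidths below are illustrative, not the paper's fitted values, and the expert-input mechanism is omitted.

```python
import numpy as np

def hotspot_score(grid_xy, events, now, h_s=1.0, h_t=7.0):
    """Spatio-temporal kernel density hotspot score at query points.

    Each past event (x, y, t) adds a Gaussian spatial kernel of
    bandwidth h_s, weighted by an exponential temporal discount with
    timescale h_t (e.g. days), so recent nearby crimes dominate.
    """
    grid_xy = np.asarray(grid_xy, dtype=float)   # (n_query, 2) locations
    scores = np.zeros(len(grid_xy))
    for x, y, t in events:
        d2 = np.sum((grid_xy - np.array([x, y])) ** 2, axis=1)
        scores += np.exp(-d2 / (2 * h_s ** 2)) * np.exp(-(now - t) / h_t)
    return scores
```

Ranking grid cells by this score and sending patrol vehicles to the top-k cells is the basic deployment loop such a model supports.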
105. Operationalizing Longitudinal Causal Discovery Under Real-World Workflow Constraints
Core Problem: Causal discovery in large-scale longitudinal systems is hindered by unformalized institutional workflows, leading to an enlarged and inconsistent admissible graph space and structural ambiguity.
Key Innovation: Characterizes and explicitly encodes workflow-induced partial order constraints (structural masks, timeline-aligned indexing) into longitudinal causal discovery, reducing structural ambiguity and improving interpretability, demonstrated with LiNGAM on a health screening cohort.
106. ReasonX: Declarative Reasoning on Explanations
Core Problem: Existing eXplanation in AI (XAI) methods for opaque ML models suffer from insufficient abstraction, limited user interactivity, and inadequate integration of symbolic knowledge.
Key Innovation: Proposes ReasonX, an explanation tool that enables declarative and interactive reasoning on explanations for decision trees (or surrogate models) using a closed algebra of operators over linear constraints, leveraging Mixed-Integer Linear Programming (MILP) to integrate background knowledge and reason at multiple abstraction levels.
107. Enhancing Continual Learning for Software Vulnerability Prediction: Addressing Catastrophic Forgetting via Hybrid-Confidence-Aware Selective Replay for Temporal LLM Fine-Tuning
Core Problem: Large Language Models (LLMs) applied to software vulnerability detection suffer from catastrophic forgetting and performance degradation when deployed on evolving codebases under temporal distribution shift, as traditional training methods ignore time.
Key Innovation: Proposes Hybrid Class-Aware Selective Replay (Hybrid-CASR), a confidence-aware replay method for continual learning that prioritizes uncertain samples and maintains a balanced ratio of vulnerable/fixed functions in the replay buffer, significantly improving Macro-F1 and backward retention for LLM-based temporal vulnerability detection while reducing training time.
108. RF-Agent: Automated Reward Function Design via Language Agent Tree Search
Core Problem: Designing efficient reward functions for low-level control tasks is challenging, and existing LLM-based methods for automated reward design suffer from poor utilization of historical feedback and inefficient search, limiting improvements in complex tasks.
Key Innovation: Proposes RF-Agent, a framework that treats LLMs as language agents and frames reward function design as a sequential decision-making process, integrating Monte Carlo Tree Search (MCTS) to manage the design and optimization, and leveraging the LLM's multi-stage contextual reasoning to better utilize historical information and improve search efficiency for identifying promising reward functions.
109. Uncertainty Quantification for Multimodal Large Language Models with Incoherence-adjusted Semantic Volume
Core Problem: The challenge of accurately quantifying uncertainty in Multimodal Large Language Models (MLLMs) to identify erroneous outputs, given the limitations of existing metrics (modality-specific, external tool reliance, computational expense).
Key Innovation: Introduces UMPIRE, a training-free, efficient uncertainty quantification framework for MLLMs that works across various input/output modalities by computing the incoherence-adjusted semantic volume of sampled responses, effectively capturing both semantic diversity and local model confidence.
110. A Variational Estimator for $L_p$ Calibration Errors
Core Problem: The challenge of accurately estimating calibration error in multiclass machine learning settings, particularly for $L_p$ divergences, where traditional methods can lead to overestimation and cannot separate over- from under-confidence.
Key Innovation: Extends a variational framework to estimate a broad class of $L_p$ calibration errors, enabling accurate assessment that avoids overestimation and can differentiate between over- and under-confidence, integrated into an open-source package.
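The binning estimators the variational approach improves upon can be made concrete. Below is the standard equal-width binned L1 (ECE-style) estimator, whose bias and sign-blindness are exactly the limitations named above; it is a baseline illustration, not the paper's method.

```python
import numpy as np

def binned_l1_calibration_error(conf, correct, n_bins=10):
    """Classic binned estimator of the L1 (top-label) calibration error.

    Partitions predictions by confidence, then averages |accuracy -
    confidence| over bins, weighted by bin mass. The absolute value
    discards the sign of (confidence - accuracy), so this estimator
    cannot tell over- from under-confidence, and binning is known to
    bias it upward -- the gaps the variational estimator addresses.
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi) if lo > 0 else (conf >= lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)
```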
111. Less is more -- the Dispatcher/ Executor principle for multi-task Reinforcement Learning
Core Problem: Multi-task Reinforcement Learning controllers generalize poorly and use data inefficiently when data is not abundant, in part because they fail to abstract away task-irrelevant details.
Key Innovation: Introduces the 'dispatcher/executor principle' for multi-task Reinforcement Learning controllers, partitioning the controller into a task-understanding dispatcher and a device-specific executor connected by a strongly regularizing communication channel to boost generalization and data-efficiency.
112. DirMixE: Harnessing Test Agnostic Long-tail Recognition with Hierarchical Label Variations
Core Problem: Effectively performing test-agnostic long-tail recognition where test label distributions are unknown and arbitrarily imbalanced, especially considering both global and local variations in these distributions.
Key Innovation: Proposes DirMixE, a Mixture-of-Expert strategy that assigns experts to different Dirichlet meta-distributions to hierarchically capture both global and local variations in label distributions, leading to a more stable objective and improved performance in long-tail recognition, also introducing a Latent Skill Finetuning (LSF) framework.
113. Shuffle Mamba: State Space Models with Random Shuffle for Multi-Modal Image Fusion
Core Problem: Existing Mamba-based multi-modal image fusion methods use fixed scanning strategies, introducing biased prior information and limiting robust information interaction.
Key Innovation: Proposes Shuffle Mamba, a framework using a Bayesian-inspired Random Shuffle scanning strategy with an inverse shuffle to eliminate biases, enabling robust modality-aware and cross-modality information interaction for multi-modal image fusion.
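The shuffle/inverse-shuffle mechanics are simple to state in code: permute the token sequence, run the order-sensitive model, then invert the permutation exactly via argsort. The `seq_fn` placeholder stands in for the Mamba block, which is not reproduced here.

```python
import numpy as np

def shuffled_scan(tokens, seq_fn, rng):
    """Apply a sequence model under a random token order, then undo it.

    A random permutation removes the bias of any fixed scanning order;
    argsort of the permutation is its exact inverse, so the output is
    restored to the original spatial layout after processing.
    """
    perm = rng.permutation(len(tokens))
    out = seq_fn(tokens[perm])   # process tokens in shuffled order
    inv = np.argsort(perm)       # inverse permutation
    return out[inv]              # back to the original layout
```

Averaging over several random permutations (at training or test time) is what gives the scan its Bayesian-inspired, order-agnostic character.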
114. JiSAM: Alleviate Labeling Burden and Corner Case Problems in Autonomous Driving via Minimal Real-World Data
Core Problem: Deep-learning-based autonomous driving perception is limited by the high cost of annotating real 3D LiDAR data and the lack of corner cases, while synthetic data suffers from sample inefficiency and simulation-to-real gaps.
Key Innovation: Proposes JiSAM, a plug-and-play method combining jittering augmentation, a domain-aware backbone, and memory-based sectorized alignment, enabling the use of minimal real-world data (2.5%) and synthetic data to achieve comparable performance to models trained on full real data, and improving corner case detection.
115. Operator Learning with Domain Decomposition for Geometry Generalization in PDE Solving
Core Problem: Neural operators, while effective for PDEs, struggle with geometry generalization and data efficiency when applied to new, complex domains.
Key Innovation: Proposes Schwarz Neural Inference (SNI), a local-to-global framework using domain decomposition to solve PDEs on arbitrary geometries, improving generalization and data efficiency by stitching local neural operator solutions.
116. What Makes Good Synthetic Training Data for Zero-Shot Stereo Matching?
Core Problem: The design principles for effective synthetic datasets for training stereo matching networks, particularly for zero-shot performance, remain underexplored.
Key Innovation: Systematically investigates the design parameters of procedural synthetic dataset generators for stereo matching, identifies optimal settings, and creates a large-scale dataset (InfinigenStereo) that achieves state-of-the-art zero-shot performance.
117. Continuous Optimization for Feature Selection with Permutation-Invariant Embedding and Policy-Guided Search
Core Problem: Existing feature selection methods struggle with capturing complex feature interactions, permutation sensitivity in embedding feature subsets, and effective exploration of non-convex embedding spaces.
Key Innovation: Proposes a new framework for continuous optimization feature selection that uses an encoder-decoder paradigm with an inducing point mechanism for permutation-invariant continuous embeddings and employs a policy-based reinforcement learning agent for robust, assumption-free exploration of the embedding space.
118. SelvaBox: A high-resolution dataset for tropical tree crown detection
Core Problem: Detecting individual tree crowns in complex, overlapping tropical forests from high-resolution imagery is challenging, and annotated datasets for robust model development are scarce.
Key Innovation: Introduces SelvaBox, the largest open-access dataset for tropical tree crown detection (83,000+ manually labeled crowns from high-resolution drone imagery). It demonstrates that higher-resolution inputs boost accuracy and models trained on SelvaBox achieve competitive zero-shot performance on unseen datasets, especially when combined in a multi-resolution pipeline.
119. Less is More: AMBER-AFNO -- a New Benchmark for Lightweight 3D Medical Image Segmentation
Core Problem: The computational bottleneck of volumetric transformers in 3D medical image segmentation, requiring efficient architectures for global context modeling.
Key Innovation: Proposes AMBER-AFNO, an architecture that replaces multi-head self-attention with Adaptive Fourier Neural Operators (AFNO) for global token mixing in the frequency domain, achieving quasi-linear computational complexity and linear memory scaling for lightweight 3D medical image segmentation.
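The core swap, frequency-domain token mixing in place of self-attention, can be sketched in a few lines. This is a simplified elementwise filter; the actual AFNO operator applies block-wise MLPs with sparsity in the frequency domain, which is not reproduced here.

```python
import numpy as np

def fourier_token_mix(x, weights):
    """Token mixing in the frequency domain, the idea behind AFNO mixing.

    FFT along the token axis, a per-frequency filter, then inverse FFT:
    every token influences every other at O(N log N) cost in the token
    count, versus O(N^2) for self-attention. `weights` plays the role
    of the learned filter; here it is simply supplied by the caller.
    """
    xf = np.fft.rfft(x, axis=0)   # (n_freq, channels)
    xf = xf * weights             # per-frequency, per-channel scaling
    return np.fft.irfft(xf, n=x.shape[0], axis=0)
```

With an all-ones filter the operator is the identity, which makes the round-trip easy to verify before swapping in learned weights.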
120. Beyond Naïve Prompting: Strategies for Improved Context-aided Forecasting with LLMs
Core Problem: Large language models (LLMs) for context-aided forecasting lack diagnostic tools, underperform their potential, and incur high computational costs, limiting practical deployment in real-world scenarios.
Key Innovation: Introduces a unified framework of four strategies addressing model diagnostics (revealing the 'Execution Gap'), accuracy (achieving 25-50% improvements), and efficiency (reducing inference costs via adaptive routing). These strategies provide a comprehensive toolkit for practical LLM-based context-aided forecasting.
121. ProtoTS: Learning Hierarchical Prototypes for Explainable Time Series Forecasting
Core Problem: Existing interpretable time series forecasting models provide only local/partial explanations and struggle to reveal how heterogeneous input variables jointly shape overall temporal patterns in forecasts.
Key Innovation: ProtoTS, a novel interpretable forecasting framework, achieves high accuracy and transparent decision-making by modeling hierarchical prototypical temporal patterns, enabling expert-steerable multi-level interpretability.
122. Less is More: Lean yet Powerful Vision-Language Model for Autonomous Driving
Core Problem: Developing an efficient and effective end-to-end autonomous driving system that can perform trajectory planning directly from front-view camera input, leveraging advanced AI models.
Key Innovation: Max-V1, a novel one-stage end-to-end autonomous driving framework that reconceptualizes trajectory planning as a generalized language problem, using a Vision-Language Model (VLM) for single-pass prediction from camera input, achieving state-of-the-art performance and strong generalization.
123. ColaVLA: Leveraging Cognitive Latent Reasoning for Hierarchical Parallel Trajectory Planning in Autonomous Driving
Core Problem: Current VLM-based planners for autonomous driving face challenges including a mismatch between discrete text reasoning and continuous control, high latency from autoregressive decoding, and inefficient or non-causal planners, limiting real-time deployment.
Key Innovation: Proposes ColaVLA, a unified vision-language-action framework that transfers reasoning from text to a unified latent space and couples it with a hierarchical, parallel trajectory decoder, enabling efficient, accurate, and safe trajectory generation by compressing scene understanding into meta-action embeddings and generating multi-scale trajectories in a single pass.
124. Convex Loss Functions for Support Vector Machines (SVMs) and Neural Networks
Core Problem: Standard loss functions for Support Vector Machines (SVMs) and neural networks may not fully leverage pattern correlations, potentially limiting generalization performance.
Key Innovation: Proposes a new convex loss function for SVMs (both classification and regression) that incorporates pattern correlations, demonstrating comparable or superior generalization performance compared to standard losses, and suggests its application with neural networks.
125. FedVG: Gradient-Guided Aggregation for Enhanced Federated Learning
Core Problem: Data heterogeneity across clients in Federated Learning (FL) leads to client drift and degrades generalization performance, compounded by overemphasis on poorly performing clients.
Key Innovation: Proposes FedVG, a gradient-based federated aggregation framework that uses a global validation set to guide optimization, assessing each client model's generalization ability via layerwise validation gradient norms to improve performance in heterogeneous settings.
126. Forecasting Local Ionospheric Parameters Using Transformers
Core Problem: The need for accurate forecasting and uncertainty quantification of local ionospheric parameters (foF2, hmF2, TEC) using advanced methods that can generalize across geographic locations and time periods.
Key Innovation: A novel transformer-based neural network method, LIFT (Local Ionospheric Forecast Transformer), which provides accurate 24-hour forecasts and nonparametric uncertainty bounds for local ionospheric parameters by training in a data assimilation-like fashion with exogenous variables and climatology predictions.
127. Conformal Prediction for Long-Tailed Classification
Core Problem: In long-tailed classification, existing conformal prediction methods struggle to provide prediction sets that simultaneously offer good class-conditional coverage for rare classes and a reasonable size, forcing a binary choice between these two desirable properties.
Key Innovation: New methods are proposed to smoothly trade off set size and class-conditional coverage: a prevalence-adjusted softmax conformal score function optimizes for macro-coverage, and a procedure interpolates between marginal and class-conditional conformal prediction by linearly interpolating their score thresholds.
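The threshold-interpolation idea can be sketched directly. The code below is a hedged simplification: it uses plain quantiles and omits the finite-sample conformal correction (the ceil((n+1)(1-alpha))/n adjustment), and the fallback for empty classes is an assumption.

```python
import numpy as np

def interpolated_thresholds(scores, labels, n_classes, alpha=0.1, lam=0.5):
    """Interpolate marginal and class-conditional conformal thresholds.

    lam = 0 recovers marginal conformal prediction (small sets, weak
    per-class coverage); lam = 1 recovers class-conditional prediction
    (per-class coverage, but large sets for rare classes). `scores` are
    nonconformity scores on a held-out calibration set.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    q_marg = np.quantile(scores, 1 - alpha)
    q = np.empty(n_classes)
    for c in range(n_classes):
        sc = scores[labels == c]
        q_cond = np.quantile(sc, 1 - alpha) if len(sc) else q_marg
        q[c] = lam * q_cond + (1 - lam) * q_marg
    return q  # include class c in the prediction set when score(x, c) <= q[c]
```

Sweeping lam from 0 to 1 then traces out the size/coverage trade-off that the entry describes, instead of forcing a binary choice.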
128. Embracing Discrete Search: A Reasonable Approach to Causal Structure Learning
Core Problem: Existing score-based causal discovery algorithms for linear models are often computationally intensive, making it difficult to fully leverage discrete search to find globally optimal causal graphs, which can lead to less accurate structural recovery.
Key Innovation: FLOP (Fast Learning of Order and Parents), a score-based causal discovery algorithm that pairs fast parent selection with iterative Cholesky-based score updates. This significantly reduces run-times, enabling iterated local search with principled order initialization to find highly accurate causal graphs with scores at or close to the global optimum.
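The local scores such a search evaluates can be sketched with a Gaussian BIC for one child given a candidate parent set; the incremental Cholesky-based updating that makes FLOP fast is not shown, and this least-squares form is only the textbook score the search optimizes.

```python
import numpy as np

def local_bic(X, child, parents):
    """Gaussian BIC local score of `child` given a candidate parent set.

    Fits child ~ linear(parents) + intercept by least squares, then
    scores log-likelihood minus a complexity penalty (higher is better).
    Order-based search sums these local scores over all nodes; FLOP
    accelerates the many repeated fits with Cholesky rank updates.
    """
    n = X.shape[0]
    y = X[:, child]
    if parents:
        A = np.column_stack([X[:, list(parents)], np.ones(n)])
    else:
        A = np.ones((n, 1))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    sigma2 = max(resid @ resid / n, 1e-12)           # MLE residual variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return loglik - 0.5 * A.shape[1] * np.log(n)     # BIC penalty
```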
129. A consistent co-rotational method for fluid-structure interaction dynamic analysis of 2D flexible beams
Core Problem: Accurately and efficiently performing dynamic analysis of slender flexible structures in marine environments, considering arbitrarily distributed loads and fluid-structure interaction (FSI) within a consistent framework.
Key Innovation: A consistent co-rotational method is presented for 2D flexible beams, deriving consistent tangent stiffness and mass matrices, a consistent load equivalence formulation for concentrated and distributed loads, and integrating the Morison equation for fluid loads, enabling accurate and computationally efficient FSI dynamic analysis.
130. AI agents are ‘aeroplanes for the mind’: five ways to ensure that scientists are responsible pilots
Core Problem: As artificial intelligence systems increasingly integrate into scientific workflows, there is a need to ensure their responsible deployment to preserve human creativity, responsibility, and the element of surprise, rather than aiming for complete automation.
Key Innovation: Proposes five ways to guide scientists in responsibly piloting AI agents, framing AI as a tool to augment human intellect rather than replace it, thereby preserving core scientific values in the scientific workflow.
131. Sea-urchin spines generate electrical signals in flowing water
Core Problem: The need for novel and effective underwater flow sensors.
Key Innovation: Discovery that sea-urchin spines can generate a voltage when water moves around them, a phenomenon that could be utilized to design new underwater flow sensors.
132. Enhancing Coalbed Methane Extraction via Ultrasonic Cavitation: Mechanisms and Applications
Core Problem: The permeability bottleneck and kinetic lag in the desorption of adsorbed gas hinder efficient coalbed methane (CBM) development, particularly in complex geological environments.
Key Innovation: Demonstrated that water-based ultrasonic cavitation (WUC-ECBM) significantly enhances CBM extraction by driving fissure expansion (up to 61.17%), improving coal pore structures, and increasing gas desorption efficiency (up to 2.11 times), leading to the development of intelligent focused ultrasound cavitation equipment.
133. Hyperspectral retrieval of phytoplankton absorption and community composition from NASA’s PACE-OCI in estuarine–coastal waters using a hybrid framework combining mixture-of-experts and Variational Autoencoder
Core Problem: Challenging retrieval of the phytoplankton absorption coefficient and subsequent estimation of phytoplankton community composition in optically complex coastal waters using remote sensing, particularly with new hyperspectral missions like PACE.
Key Innovation: Introduction of Hyper-MoE-VAE, a deep-learning architecture integrating a Mixture-of-Experts with a Variational Autoencoder, for accurate retrieval of high-dimensional phytoplankton absorption and estimation of community composition from PACE-OCI hyperspectral remote sensing reflectance in estuarine-coastal waters.
134. A framework to detect tillage practices from space: A demonstration in the US Midwest
Core Problem: The need for accurate spatial and temporal maps of tillage practices to monitor conservation tillage adoption and assess its impacts, while effectively considering confounding local soil, vegetation, and environmental effects on crop residues.
Key Innovation: Development of a dynamic feature threshold framework utilizing satellite data, environmental variables, and machine learning to accurately map tillage practices (no-till, reduced-till, conventional-till) for corn and soybean fields in the US Midwest, demonstrating improved performance over fixed threshold methods.
135. Global mapping of tidal wetlands and adjacent environments using tidal analysis and multi-source Earth observations
Core Problem: Inconsistencies across existing global tidal wetland datasets due to tidal fluctuations and limited data availability, and the oversight of adjacent terrestrial environments, hindering understanding of coastal dynamics and conservation efforts.
Key Innovation: A novel global coastal mapping framework that uses tidal analysis (EOT20 model) to adaptively select low-tide Sentinel imagery, integrates multi-source global datasets for training, and employs iterative random forest models to produce a 10-m resolution global coastal dataset with 11 distinct land cover types, including tidal wetlands.
136. Development of the satellite bio-optical algorithm for the shelf waters along the southern Kamchatka Peninsula: effect of optically active components variability on the spectral remote sensing reflectance
Core Problem: Adjusting existing semi-analytical algorithms for optically complex shelf waters to accurately retrieve in-water optically active components, particularly phytoplankton and colored detrital matter absorption, given their high variability.
Key Innovation: Regional parameterization of in-water optically active components absorption and development of a semi-analytical algorithm that separates phytoplankton and colored detrital matter absorption using specific spectral sites, demonstrating its applicability for ecological monitoring in the southern Kamchatka Peninsula.
137. Evaluation of the daily sea surface net radiation from nine satellite, reanalysis and reconstructed products and uncertainty estimates
Core Problem: The largely unknown performance and significant discrepancies among various global sea surface net radiation (Rn) products from satellite, reanalysis, and reconstructed sources, making product selection challenging for users.
Key Innovation: A comprehensive comparison and evaluation of nine daily mean long-term sea surface Rn products using observations from 55 moored buoys, highlighting large discrepancies, identifying J-OFURO3 as the best performing satellite product, and providing guidance for users regarding product selection.
138. Future impacts of climate change and land-use dynamics on streamflow and nutrient export in the Changbai Mountains of Northeast China under multi-scenario SWAT modeling
Core Problem: Predicting the future impacts of climate change and land-use dynamics on streamflow and nutrient export in cold-region alpine catchments.
Key Innovation: Developed an integrated modeling framework coupling SWAT with multi-scenario land-use projections (SSPs) and CMIP6 climate inputs to project streamflow and nutrient loads, revealing pronounced sub-basin heterogeneity and highlighting the need for integrated watershed management strategies.
139. Hydro-environmental shifts in Laolike peatland of Northeast China: Evidence from grain size distributions and end-member modeling analysis
Core Problem: Hydrological conditions within peatlands exhibit significant spatial heterogeneity influenced by local environmental factors (paleotopography, autogenic processes), which are often not sufficiently addressed in paleoclimate reconstructions.
Key Innovation: Demonstrated evident spatial heterogeneity in peatland hydrological evolution using grain size distributions, showing that while climate was the primary control on millennial scales, local paleotopography and autogenic self-organization also modulated internal hydrological conditions, highlighting limitations of single-core reconstructions.
140. Towards precision in segment assembly: A particle swarm optimization-based ellipticity correction method
Core Problem: Maintaining segment assembly quality and correcting ellipticity during shield tunnel construction is pivotal for structural safety and resilience, but ellipticity often worsens progressively across multiple rings.
Key Innovation: Proposed a Particle Swarm Optimization (PSO)-based ellipticity correction method for shield tunnel segment assembly, which effectively improves the ellipticity of newly assembled rings (e.g., from 5.0‰ to 3.1‰) and ensures successful key segment assembly.
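The optimizer itself is the generic PSO algorithm; a minimal sketch follows, with the segment-posture ellipticity objective replaced by whatever function the caller supplies, since the paper's objective is not reproduced here.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer over a box [lo, hi]^dim.

    Each particle's velocity blends inertia (w), attraction to its own
    best position (c1), and attraction to the swarm's best (c2); the
    paper applies this search pattern to select segment assembly
    postures that minimize ring ellipticity.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))
```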