Abstract
Generative deep learning models are increasingly adopted for climate downscaling because they can efficiently produce high-resolution outputs from coarse-resolution boundary conditions. However, unlike dynamical downscaling models, which explicitly represent the physical mechanisms governing regional climate behavior, generative models remain largely opaque as to whether and how they reflect known or plausible regional physical processes. This disconnect from domain knowledge limits scientific trust and hinders the use of generative downscaling in risk-sensitive and decision-making contexts. In this work, we present a visual analytics approach that enables domain experts to explore the regional physical processes reflected in trained generative climate downscaling models by examining how spatially localized, multivariable input patterns relate to model outputs across cohorts of similar predictions.
