15th International Meeting on Statistical Climatology
Centre International de Conférences - Météo-France - Toulouse - France
June 24-28, 2024
1. Climate records: dataset creation, homogenization, gridding and uncertainty quantification, including observationally constrained analyzed and reanalysed products - Nathan Lenssen, Xiaolan Wang
2. Meeting the challenge of analyzing very large datasets - Dorit Hammerling, David Huard, Mark Risser
3. Space-time statistical methods for modelling and analyzing climate variability - Denis Allard, Bo Li
4. Weather/climate forecasting, predictability and forecast evaluation - Kamoru Lawal, Damien Specq
5. Statistics for climate models, ensemble design, uncertainty quantification, model tuning - Tim DelSole, James Salter, Laurent Terray
6. Statistical and machine learning in climate science - Blanka Balogh, Seung-Ki Min
7. Long-term detection and attribution and emergent constraints on future climate projections - Chao Li, Dáithí Stone
8. Attribution and analysis of single weather events - Erich Fischer, Megan Kirchmeier-Young
9. Extreme value analysis methods and theory for climate applications - Whitney Huang, Glwadys Toulemonde
10. Changes in extremes including temperature, hydrologic, and multi-variate compound events - Qiaohong Sun, Xuebin Zhang, Jakob Zschleischler
11. From global change to regional impacts, downscaling and bias correction - Bastien François, Soulivanh Thao
12. Impact attribution: from source to suffering - Gabi Hegerl, Jana Sillmann, Wim Thiery
1. Climate records: dataset creation, homogenization, gridding and uncertainty quantification, including observationally constrained analyzed and reanalysed products. Conveners: Nathan Lenssen, Xiaolan Wang
Observationally based climate data sets are indispensable for many aspects of climate research. In an ideal world, climate data sets would be accurate, free from artificial trends
in time or space and globally complete. Unfortunately, in the real world, there are severe challenges to realising this dream - measurements are noisy, stations move, instruments
change or age, satellite orbits drift and the density of the observing network constantly changes - all of which can lead to artificial, non-climatic changes if untreated or treated
improperly. Such artificial effects could be detrimental to climate analyses that use these data, especially analyses of trends and extremes. Making the data sets that climate science
needs requires us to address these problems, and to understand and effectively communicate reliable and comprehensive information about uncertainty in those data sets.
This session calls for contributions that are related to the problem of creating climate data sets and quantifying their uncertainties, for example: bias correction and
homogenization of in situ or satellite data, quality control of observations, infilling of spatially incomplete data, modelling complex error structures, combining multiple
observational products. It also calls for contributions that use homogeneous climate data to assess climate trends, variability and extremes and their uncertainties.
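As a toy sketch of the homogenization problem (Python/NumPy; the function name, the single-changepoint assumption, and the synthetic series are illustrative choices, not a method endorsed by the session), an artificial mean shift in a candidate-minus-reference difference series can be screened for by maximizing a two-sample t statistic over all split points:

```python
import numpy as np

def detect_mean_shift(diff, min_seg=5):
    """Locate the most likely single mean-shift changepoint in a
    candidate-minus-reference difference series by scanning all split
    points and maximizing the absolute two-sample (Welch) t statistic."""
    n = len(diff)
    best_t, best_k = 0.0, None
    for k in range(min_seg, n - min_seg):
        a, b = diff[:k], diff[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se
        if t > best_t:
            best_t, best_k = t, k
    return best_k, best_t

# Synthetic example: a 0.8-unit shift introduced at index 60
rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.3, 100)
series[60:] += 0.8
k, t = detect_mean_shift(series)
```

Operational homogenization methods additionally handle multiple breaks, autocorrelation, and station metadata, all of which this sketch deliberately ignores.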
2. Meeting the challenge of analyzing very large datasets. Conveners: Dorit Hammerling, David Huard, Mark Risser
We are experiencing a very rapid increase in the size of geophysical datasets, including remotely sensed observations, large climate modelling experiments such as large ensemble
simulations, and ambitious compilations of historical archives. This increase challenges our traditional approaches to data discovery, acquisition, and statistical data analysis.
These very large datasets are often difficult and time consuming to acquire, explore and evaluate, and ultimately analyze with statistical models using standard approaches.
This session invites papers that address all aspects of this challenge, including a) the development of systems that allow the analysis of data where it resides, thereby avoiding
the time consuming task of acquiring and organizing data on local storage prior to analysis; b) statistical data analysis approaches that are well adapted to large datasets, including
large archives of instrumental data, data obtained from climate modelling experiments, remotely sensed datasets with very high spatial and temporal resolution but limited temporal
coverage and reanalysed datasets; and c) alternative analysis approaches based on machine learning methods.
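One minimal illustration of analyzing data too large to hold in memory is accumulating summary statistics in a single pass over chunks. The sketch below (Python/NumPy; the class and variable names are illustrative assumptions) uses Welford's online algorithm, which updates the mean and variance chunk by chunk:

```python
import numpy as np

class RunningStats:
    """Welford's online algorithm: accumulates mean and variance over
    successive data chunks, so an arbitrarily large dataset never has
    to be held in memory at once."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, chunk):
        for x in np.asarray(chunk, dtype=float).ravel():
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1)   # sample variance

# Toy example: process 10,000 values in 10 chunks
rng = np.random.default_rng(1)
data = rng.normal(5.0, 2.0, 10_000)
rs = RunningStats()
for chunk in np.array_split(data, 10):
    rs.update(chunk)
```

The same pattern extends to covariances and higher moments, and underlies many "analyze the data where it resides" systems.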
3. Space-time statistical methods for modelling and analyzing climate variability. Conveners: Denis Allard, Bo Li
The climate system is a complex set of interacting processes over the land, sea, and air. Understanding the natural variations of these processes leads to the analysis of climate
variability. As we observe more diverse and complex climate datasets and computer model output, we may be able to better model and understand these variations. The analysis of
fluctuations over seasonal and multi-seasonal spatial and temporal scales requires the development of advanced statistical techniques, especially when climate processes
are nonstationary and/or non-Gaussian.
We welcome contributions of time series, spatial, and spatiotemporal analyses of climate variability. This could include, for example, the development of methods that allow for the
investigation and understanding of interactions and variation between multiple climate processes, possibly using hierarchical statistical models. Methods that can capture long range
variations such as climate teleconnections or can be used to derive multivariate indices of climate variability are of interest. We also encourage contributions on the application of
multivariate methods in a spatio-temporal context such as, but not limited to, Canonical Correlation Analysis, Principal and Predictable Component Analysis, and Discriminant Analysis
to climate analyses, especially involving regularization methods for dealing with ill-posed problems. Applications involving dynamically constrained space-time filtering, such as
projecting on theoretical eigenmodes, are also welcome. Methods for the analysis of climate variability in a changing climate system are also encouraged.
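Canonical Correlation Analysis, one of the multivariate methods named above, can be sketched in a few lines of NumPy via the standard QR-plus-SVD formulation (the function name and the synthetic "fields", which share one common signal, are illustrative assumptions):

```python
import numpy as np

def cca(X, Y):
    """Canonical Correlation Analysis via QR + SVD: the singular values
    of Qx' Qy, where Qx and Qy are orthonormal bases of the centred
    data matrices, are the canonical correlations."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

# Toy example: two "fields" sharing a single common signal
rng = np.random.default_rng(2)
common = rng.normal(size=(500, 1))
X = np.hstack([common + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])
Y = np.hstack([common + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 3))])
r = cca(X, Y)   # leading canonical correlation should be near 1
```

In ill-posed climate settings (few samples, many grid points), this step is typically preceded by dimension reduction or regularization, as the session description notes.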
4. Weather/climate forecasting, predictability and forecast evaluation. Conveners: Kamoru Lawal, Damien Specq
Weather and climate prediction systems provide forecasts at lead times ranging from nowcasting (zero lead), through short- and medium-range weather forecasts (hours to a week or
more) and subseasonal-to-seasonal (S2S) predictions, up to interannual and decadal timescales. Forecasts at all of these time scales potentially have utility in managing and anticipating hazards and risks that
are being induced by externally forced climate change. Confidently using these products, however, requires information from reliable evidence-based approaches for a) assessing the
performance of previous forecasts (verification) and b) assessing potential future forecast skill (predictability). These activities invariably involve the statistical comparison
of forecasts with observations, which can be issued in a wide variety of different data formats, e.g., univariate, multivariate, spatial gridded fields, etc. Furthermore, the
forecasts can be either single deterministic ones, multiple ensembles of forecasts, or probability estimates.
This session welcomes contributions on dynamical, statistical and machine learning weather and climate forecasting systems, the assessment of potential and realised predictability,
forecast verification and other related topics that demonstrate either interesting new statistical approaches or novel applications of existing methodologies to data. In the
interests of a wide and lively discussion, we are happy to receive talks and posters that address any forecast lead time up to decadal climate. We also welcome contributions that
range from operational applications to research that is focussed on more fundamental issues in predictability and verification.
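As one concrete verification example (an illustration, not any particular operational practice), the Continuous Ranked Probability Score of an ensemble forecast can be estimated with its kernel form CRPS = E|X - y| - 0.5 E|X - X'|:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Continuous Ranked Probability Score for a single ensemble
    forecast against one observation, using the kernel (energy-score)
    form CRPS = E|X - y| - 0.5 E|X - X'|; lower is better."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Toy example: a two-member ensemble straddling the observation
score = crps_ensemble([0.0, 2.0], 1.0)   # 0.5 for this case
```

For a deterministic forecast (a one-member "ensemble") the score reduces to the absolute error, which makes CRPS convenient for comparing ensemble and deterministic systems on one scale.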
5. Statistics for climate models, ensemble design, uncertainty quantification, model tuning. Conveners: Tim DelSole, James Salter, Laurent Terray
Ensembles of climate model simulations remain the primary source for the assessment of future climate change, yet they pose challenges for formal analysis in terms of the
quantification of model skill and interdependency. In this session, we consider novel approaches for integrating results from model ensembles, including both 'ensembles of
opportunity' such as the CMIP archive, and designed ensembles formed from parameter or structural perturbations within a common framework, including perturbed physics ensembles
(PPEs).
We welcome contributions on model parameter perturbation and model tuning and optimization, uncertainty analysis applied to ensemble output for climate impact risk assessment
and novel approaches for addressing structural error and interdependency in multi-model archives. In addition, we also welcome contributions on the design of model experiments
intended to be used for purposes such as detection and attribution, extreme event attribution, near-term ensemble forecasting and long-term ensemble projection.
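One widely used tuning idea within this session's scope is history matching, in which parameter settings are ruled out by an implausibility measure; the sketch below (Python/NumPy; the toy quadratic model, the variances, and the conventional cutoff I = 3 are illustrative assumptions, not the session's prescribed method) shows the core calculation:

```python
import numpy as np

def implausibility(emulator_mean, emulator_var, obs, obs_var, disc_var=0.0):
    """History-matching implausibility: standardized distance between an
    emulated model output and an observation, with the denominator
    inflated by emulator, observation, and (optional) model-discrepancy
    variance. Settings with I > 3 are conventionally ruled out."""
    return np.abs(emulator_mean - obs) / np.sqrt(emulator_var + obs_var + disc_var)

# Toy 1-D tuning problem: model f(theta) = theta**2, observation z = 4
theta = np.linspace(0, 4, 101)
I = implausibility(theta**2, 0.01, 4.0, 0.04)
not_ruled_out = theta[I <= 3.0]   # surviving parameter settings
```

In practice the emulator mean and variance come from a statistical surrogate (e.g., a Gaussian process) fitted to a designed ensemble, and the space is refocused over several waves.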
6. Statistical and machine learning in climate science. Conveners: Blanka Balogh, Seung-Ki Min
Advances in high performance computing have enabled the production of much larger climate model datasets that are of higher spatial and temporal resolution and/or encompassing more
realizations than ever before possible. The volume of data from space-based or remote-sensing platforms, as well as ground-based or in situ sensors, also continues to increase.
These very large datasets can be difficult to explore using standard statistical techniques.
Statistical and machine learning offer the promise both of extracting information that we know is contained in these datasets and of discovering otherwise unknown
information. Feature detection, for instance, can be used to identify known weather or climate patterns, such as storms, in a supervised but non-heuristic manner. Causal conditions
leading to such features may also be identified. On the other hand, a quickly growing literature is now available on using machine learning to replace physical parameterizations or,
more generally, all or part of climate models.
We invite contributions on all aspects of the application of machine learning in climate science, including contributions that connect machine learning to statistical and physically
based approaches to the analysis of climate variability, or to model any aspect of the climate system. Examples of potential topics include tailored developments based on nonlinear
approaches for prediction (deep learning, physics-informed neural networks, random forests, etc.), statistical learning approaches (e.g., analog/nearest neighbour, supervised or
unsupervised techniques), causal discovery algorithms, probability representations of geophysical processes (e.g., via residual or recurrent neural networks, generative models,
etc.), and dimension reduction (e.g., variable selection).
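As a minimal instance of the analog/nearest-neighbour approaches listed above (Python/NumPy; the function name and the toy sine-map data are illustrative assumptions):

```python
import numpy as np

def analog_forecast(library_states, library_next, query, k=5):
    """Analog (nearest-neighbour) forecasting: find the k historical
    states closest to the query state and average their observed
    successors to form the forecast."""
    d = np.linalg.norm(library_states - query, axis=1)
    nearest = np.argsort(d)[:k]
    return library_next[nearest].mean(axis=0)

# Toy example: learn the map x -> sin(x) from noisy historical pairs
rng = np.random.default_rng(3)
x = rng.uniform(0, 2 * np.pi, (2000, 1))
y = np.sin(x) + 0.05 * rng.normal(size=(2000, 1))
pred = analog_forecast(x, y, np.array([np.pi / 2]), k=20)
```

Despite its simplicity, the same library-lookup structure underlies analog downscaling and some empirical S2S forecasting schemes.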
7. Long-term detection and attribution and emergent constraints on future climate projections. Conveners: Chao Li, Dáithí Stone
Detection and attribution and the identification and application of emergent constraints are important areas in the study of climate change and have recently also attracted the
attention of the statistics community. Climate change detection and attribution refers to a set of statistical tools to relate observed changes to external forcings, specifically
to anthropogenic influence. Emergent constraints refer to statistical tools used to produce observationally constrained projections of future change based on relationships between
historical and future simulated change. While both issues can be viewed in different ways, most studies use linear regression frameworks. The problem formulation per se seems
straightforward in both cases, but the challenges lie in the high dimensionality of the problem and the relatively large number of unknown quantities that need to be estimated
in the context of limited observations.
Current methods differ in the complexity of the problem formulation and what assumptions are being made to reduce the dimensionality of the problem. Most detection and attribution
methods implemented so far are of frequentist nature, while emergent constraints are more often obtained via Bayesian implementations. We invite presentations on new methodological
and computational developments, software implementations, and comparisons between methods. We further invite presentations describing and/or applying detection and attribution or
emergent constraint methods in any area of scientific study.
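The linear regression framework mentioned above can be illustrated with an ordinary-least-squares scaling-factor estimate (a deliberately simplified stand-in for full optimal fingerprinting; the function name and synthetic signals are assumptions):

```python
import numpy as np

def scaling_factors(obs, fingerprints):
    """OLS attribution regression y = X beta + eps: regress observations
    onto model-derived fingerprints. A beta_j consistent with 1 means the
    observed response to forcing j matches the simulated one; a beta_j
    inconsistent with 0 means the signal is detected."""
    X = np.asarray(fingerprints)
    beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return beta

# Toy example: obs = 1.0 * anthropogenic + 0.0 * natural + noise
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 120)
anth = t**2                  # smooth forced trend fingerprint
nat = np.sin(12 * t)         # oscillatory natural fingerprint
obs = anth + 0.05 * rng.normal(size=t.size)
beta = scaling_factors(obs, np.column_stack([anth, nat]))
```

Real implementations additionally whiten by an internal-variability covariance estimated from control runs and account for noise in the fingerprints themselves (errors-in-variables), which plain OLS ignores.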
8. Attribution and analysis of single weather events. Conveners: Erich Fischer, Megan Kirchmeier-Young
The attribution of extreme weather events to anthropogenic influence is rapidly evolving into a quasi-operational activity, which has important implications for the continued
scientific development of this important area of weather and climate science. Traditionally, the magnitude of an extreme event is defined in the observational record via arbitrary
spatial and temporal bounds, and its characteristics are compared between model simulations that respectively omit and include historical greenhouse gas emissions. The results indicate
how the nature of the event of interest or related class of events has changed due to anthropogenic influence on the global climate. Recent developments in event attribution have
moved beyond historical assessments based on the event magnitude and span several key areas. Examples include, but are not limited to: improved estimation and statistical methods
(e.g., combining models and observations, tackling record-shattering events); the inclusion of key physical processes that initiated and/or sustained the event, via multivariate
or conditional analyses; the inclusion of other coincident events or physical mechanisms (i.e., compound events); the application of event attribution methods to new types of weather
and climate events; moving towards operationalisation and making event attribution faster; future attribution assessments under prescribed global warming thresholds; the attribution
of impacts of a specific event on human, biophysical or physiological systems to anthropogenic influence on the climate; and exploring the uncertainty in attribution assessments
depending on the methods, models, and datasets employed. Such developments are working towards improving the robustness of attribution assessments.
We invite contributions covering all aspects of extreme event attribution and particularly encourage contributions that take a systematic approach to analyzing anthropogenic
influence on the likelihood of extreme events and thus inform the operational development and improvement of extreme event attribution as a service. We encourage presentations
that aim to advance the field of event attribution, such as innovative statistical techniques, the inclusion of compound events and/or key physical mechanisms, consideration of
selection bias from analyzing only events that have occurred, assessment of model fitness for purpose, and multi-method approaches.
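A stripped-down probability-ratio calculation, one common quantity in event attribution, can be sketched with SciPy's GEV tools (the ensemble setup, the uniform 1-unit shift, and the event threshold are synthetic assumptions for illustration only):

```python
import numpy as np
from scipy.stats import genextreme

def probability_ratio(factual, counterfactual, threshold):
    """Event-attribution probability ratio PR = p1/p0: exceedance
    probabilities of a fixed event threshold, estimated by fitting a
    GEV to annual maxima from factual and counterfactual ensembles."""
    p = []
    for sample in (factual, counterfactual):
        c, loc, scale = genextreme.fit(sample)
        p.append(genextreme.sf(threshold, c, loc=loc, scale=scale))
    return p[0] / p[1]

# Toy example: 1 K of warming shifts the annual-maximum distribution
rng = np.random.default_rng(5)
counter = genextreme.rvs(-0.1, loc=30.0, scale=1.5, size=500,
                         random_state=rng)
fact = counter + 1.0
pr = probability_ratio(fact, counter, threshold=34.0)   # PR > 1 expected
```

Published analyses add confidence intervals (e.g., by bootstrap), model evaluation, and multi-method synthesis on top of this core ratio.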
9. Extreme value analysis methods and theory for climate applications. Conveners: Whitney Huang, Glwadys Toulemonde
Interest in climate extremes remains high, both from the point of view of how climate change affects extremes and characterizations of extreme events themselves. The basic methods of
univariate extreme value theory (e.g., Generalised Extreme Value distribution, Generalised Pareto distribution, and numerous extensions or variants) are by now well established in the
climate literature and have been incorporated into a number of software packages. Methods beyond univariate extreme value theory remain relatively unused in the climate context despite
rapid methodological development. Examples include multivariate extreme value theory (including the dichotomy between asymptotic dependence and asymptotic independence of two variables,
and methods appropriate for higher dimensions) and models for spatial extremes including max-stable and max-infinitely divisible processes. In addition, there are problems involving
combinations of multiple variables that do not necessarily require all of the individual variables to be extreme (e.g., the combined role of temperature and humidity in heatwave deaths)
and may require new developments in statistical theory.
Our primary objective in this session is to identify interesting problems concerning climate extremes that are not well handled by currently well-established methods, and solutions that
require the development of more advanced methods, including but not limited to multivariate extreme value distributions, max-stable processes, and related concepts. We also welcome
expository talks that will introduce these concepts to climate scientists.
This session particularly targets research on statistical methods suitable to extreme events, while session 10 targets studies that demonstrate the application of such methods.
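To make the well-established univariate toolkit concrete, the sketch below (Python/SciPy; the function name, threshold, and synthetic exponential data are illustrative assumptions) estimates a 100-year return level via the peaks-over-threshold approach with a Generalised Pareto fit:

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_level(data, threshold, return_period_years, obs_per_year):
    """Peaks-over-threshold return level: fit a Generalised Pareto
    distribution to threshold exceedances and invert it for the level
    exceeded on average once per return_period_years."""
    exc = data[data > threshold] - threshold
    zeta = len(exc) / len(data)                  # exceedance rate
    c, loc, scale = genpareto.fit(exc, floc=0)   # fix GPD location at 0
    m = return_period_years * obs_per_year
    return threshold + genpareto.ppf(1 - 1.0 / (m * zeta), c,
                                     loc=0, scale=scale)

# Toy example: 50 years of daily data with an exponential tail
rng = np.random.default_rng(6)
daily = rng.exponential(2.0, size=50 * 365)
rl100 = pot_return_level(daily, threshold=8.0,
                         return_period_years=100, obs_per_year=365)
```

Threshold choice and the independence of exceedances (declustering) are the usual practical caveats this sketch leaves out.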
10. Changes in extremes including temperature, hydrologic, and multi-variate compound events. Conveners: Qiaohong Sun, Xuebin Zhang, Jakob Zschleischler
Climate extremes and their changes are relevant to society and ecosystems due to their potentially severe impacts. Correspondingly, the demand for consistent and robust projections
of future changes in climate extremes continues to grow rapidly. Changes in extremes such as heat waves, meteorological droughts or convective rainstorms can be expressed and analysed
in terms of single variables, such as temperature and precipitation. Many high impact extreme events are, however, the result of a combination of variables, such as the extreme wind
speed and heavy rainfall often experienced during tropical cyclone events. The resulting interaction of different physical processes that occurs during these compound events can lead
to severe impacts, such as the flooding that occurs because of extreme rainfall, combined with the failure of storm water pumping systems because of loss of power caused by damage
to the electrical distribution system by extreme winds. Understanding the observed changes in univariate extreme events is challenging, particularly for unprecedented events, because
our understanding of the odds of reoccurrence, and whether they are changing, depends on both an understanding of the physical process involved and the statistical characteristics of
very rare extreme events in a non-stationary climate framework. Compound events constitute an even bigger statistical challenge with higher dimensionality, even sparser sampling and
a more complex understanding of the physical factors that played a role in the event. In both cases, advanced physics-friendly statistical methods are required to account for extreme
values and climate variability in observational datasets and climate model simulations.
This session invites contributions analyzing changes in univariate and multivariate extreme events, using statistical methods for challenging questions such as bias-correction of
climate model data as input for impact models, evaluation of dynamical processes in climate models with respect to their performance in terms of climate extremes, as well as analysis
and detection of changes in climate extremes and compound events under future climate change.
This session targets analyses of changes in climate extremes, while session 9 targets improved statistical methods for extremes.
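A minimal version of the non-stationary framework mentioned above fits a GEV whose location parameter drifts linearly in time by direct likelihood maximization (Python/SciPy; the parameterization, optimizer settings, and synthetic warming trend are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def fit_nonstationary_gev(annmax, t):
    """Fit a GEV with time-varying location mu(t) = mu0 + mu1 * t by
    minimizing the negative log-likelihood; shape and scale are held
    constant. Returns (mu0, mu1, scale, shape)."""
    def nll(params):
        mu0, mu1, log_scale, c = params
        return -genextreme.logpdf(annmax, c, loc=mu0 + mu1 * t,
                                  scale=np.exp(log_scale)).sum()
    start = np.array([annmax.mean(), 0.0, np.log(annmax.std()), 0.0])
    res = minimize(nll, start, method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
    mu0, mu1, log_scale, c = res.x
    return mu0, mu1, np.exp(log_scale), c

# Toy example: annual maxima warming by 0.03 units per year
rng = np.random.default_rng(7)
years = np.arange(100)
maxima = genextreme.rvs(-0.1, loc=30 + 0.03 * years, scale=1.0,
                        size=100, random_state=rng)
mu0, mu1, scale, c = fit_nonstationary_gev(maxima, years)
```

The same likelihood machinery extends to covariate-dependent scale or shape and to physically motivated covariates (e.g., global mean temperature instead of time).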
11. From global change to regional impacts, downscaling and bias correction. Conveners: Bastien François, Soulivanh Thao
Climate projections are often based on rather coarse resolution global and regional climate models. Many users interested in climate projections (such as impact modellers), however,
act on regional to local scales and often desire high-resolution model output. This demand is especially evident for assessing the occurrence of climate and weather extremes. One way
to bridge this scale-gap is by means of statistical downscaling, either by so-called perfect prognosis approaches or model output statistics, including statistical bias correction
methods of global and regional climate models.
This session seeks to present and discuss recent methodological and conceptual developments in statistical downscaling and bias correction. We especially welcome contributions
addressing spatial-temporal and multi-variable variability (in particular of extreme events); non-stationary methods; the development of statistical models for sub-daily variability
such as convective events; the integration of process understanding into downscaling and bias correction methods; the selection of predictors to capture climate change; the
performance and added value of downscaling methods on seasonal to centennial scales (including the ability to extrapolate beyond observed values); the development of process-based
validation diagnostics for statistical downscaling; the assessment of advantages and inherent limitations of different approaches; and the application of these methods in various
impact studies.
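An empirical quantile-mapping bias correction, one of the model-output-statistics approaches referred to above, can be sketched as follows (Python/NumPy; the function name and the synthetic warm/over-dispersed model biases are illustrative assumptions):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: each future model value is mapped to
    the observed value at the same quantile of the historical model
    distribution, correcting both mean and variance biases."""
    q = np.interp(model_fut,
                  np.sort(model_hist),
                  np.linspace(0, 1, len(model_hist)))  # empirical CDF
    return np.quantile(obs_hist, q)

# Toy example: model runs 2 units too warm with inflated variance
rng = np.random.default_rng(8)
obs = rng.normal(15.0, 3.0, 5000)       # historical observations
model = rng.normal(17.0, 4.0, 5000)     # historical model output
future = rng.normal(19.0, 4.0, 1000)    # same biases plus 2 K warming
corrected = quantile_map(model, obs, future)
```

Note the well-known caveats this sketch inherits: stationarity of the bias is assumed, values beyond the calibration range are clipped to the extreme quantiles, and inter-variable and spatial dependence are not corrected, which is exactly where the multivariate methods solicited above come in.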
12. Impact attribution: from source to suffering. Conveners: Gabi Hegerl, Jana Sillmann, Wim Thiery
The attribution of climate change impacts seeks to extend the attribution question to non-climatic variables that quantify climate change impacts across sectors such as health,
agriculture, ecosystems, and socioeconomics. It is particularly complex due to the combined influence of climatic and potentially confounding non-climatic human factors. This session
invites recent studies from the broad spectrum of attribution research that address some or all steps of the climate-impact chain, from emissions to climate variables to impacts in
natural, managed, and human systems, and aims to explore the diversity of methods employed across disciplines and schools of thought.
This session covers research on new methodologies to investigate impact variables, including differences and commonalities in climate and impact attribution. It also covers a broad
range of applications, case studies, current challenges of the field, and avenues for expanding the attribution community to impact variables. We particularly welcome studies that
push the limits by attributing impacts further downstream from climatic phenomena by attributing changes and events in ecosystems, agriculture, health, economics, and other natural,
managed, and/or human systems. Contributions that compare approaches, develop or explore the influence of different counterfactual data for attribution studies, or account for
changes in exposure and vulnerability are as welcome as applications of existing approaches to novel terrain.