Technical note: Diagnostic efficiency – specific evaluation of model performance

A better understanding of the reasons why hydrological model performance is unsatisfactory represents a crucial part of meaningful model evaluation. However, current evaluation efforts are mostly based on aggregated efficiency measures such as Kling–Gupta efficiency (KGE) or Nash–Sutcliffe efficiency (NSE). These aggregated measures provide a relative gradation of model performance. Especially in the case of weak model performance, it is important to identify the different errors which may have caused such unsatisfactory predictions. These errors may originate from the model parameters, the model structure, and/or the input data. In order to provide more insight, we define three types of errors which may be related to their source: constant error (e.g. caused by a consistent input data error such as precipitation), dynamic error (e.g. structural model errors such as a deficient storage routine) and timing error (e.g. caused by input data errors or deficient model routines/parameters). Based on these types of errors, we propose the novel diagnostic efficiency (DE) measure, which accounts for these three error types. The disaggregation of DE into its three metric terms can be visualized in a plain radial space using diagnostic polar plots. A major advantage of this visualization technique is that error contributions can be clearly differentiated. In order to provide a proof of concept, we first generated time series artificially with the three different error types (i.e. simulations are surrogated by manipulating observations). By computing DE and the related diagnostic polar plots for the reproduced errors, we could then supply evidence for the concept. Finally, we tested the applicability of our approach for a modelling example. For a particular catchment, we compared streamflow simulations realized with different parameter sets to the observed streamflow. For this modelling example, the diagnostic polar plot suggests that dynamic errors explain the overall error to a large extent. The proposed evaluation approach provides a diagnostic tool for model developers and model users, and the diagnostic polar plot facilitates interpretation of the proposed performance measure, as well as a relative gradation of model performance similar to the well-established efficiency measures in hydrology.
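
The abstract describes DE only conceptually. As a rough illustration, and not the formulas of Schwemmle et al. (2021), which are given in the linked article, the following Python sketch contrasts the well-known aggregated NSE and KGE scores with a hypothetical three-term split into constant, dynamic and timing errors, and mimics the proof-of-concept idea of building surrogate "simulations" by manipulating observations. It assumes NumPy is available; the helper names (`error_terms`, `diagnostic_score`) and the term definitions are placeholders chosen for illustration only.

```python
# Hypothetical sketch: splitting model error into constant, dynamic and timing
# components and aggregating them into one score, in the spirit of the
# diagnostic efficiency (DE) described in the abstract. The term definitions
# below are placeholders, NOT the formulas of Schwemmle et al. (2021).
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (optimum 1)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency, 2009 formulation (optimum 1)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]      # linear correlation (timing/shape)
    alpha = sim.std() / obs.std()        # variability ratio
    beta = sim.mean() / obs.mean()       # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def error_terms(obs, sim):
    """Placeholder decomposition into constant, dynamic and timing errors."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    const_err = sim.mean() / obs.mean() - 1.0        # overall (constant) bias
    dyn_err = sim.std() / obs.std() - 1.0            # variability mismatch
    timing_err = 1.0 - np.corrcoef(obs, sim)[0, 1]   # imperfect timing
    return const_err, dyn_err, timing_err

def diagnostic_score(obs, sim):
    """Aggregate the three terms into one value (optimum 0, larger = worse).
    A polar-plot view could map radius to this score and angle to the balance
    of the terms; the paper's own mapping differs and is defined there."""
    c, d, t = error_terms(obs, sim)
    return np.sqrt(c ** 2 + d ** 2 + t ** 2)

# Surrogate "simulations" built by manipulating observations, mimicking the
# proof-of-concept idea in the abstract.
rng = np.random.default_rng(0)
q_obs = rng.gamma(2.0, 1.5, size=365)                 # synthetic daily streamflow
cases = {
    "constant error": q_obs + 1.0,                    # consistent positive offset
    "dynamic error": np.maximum(q_obs.mean() + 1.8 * (q_obs - q_obs.mean()), 0.0),
    "timing error": np.roll(q_obs, 3),                # 3-day shift
}
for name, q_sim in cases.items():
    c, d, t = error_terms(q_obs, q_sim)
    print(f"{name:15s} NSE={nse(q_obs, q_sim):6.2f} KGE={kge(q_obs, q_sim):6.2f} "
          f"const={c:5.2f} dyn={d:5.2f} timing={t:5.2f} "
          f"score={diagnostic_score(q_obs, q_sim):5.2f}")
```

Under these placeholder definitions, each manipulated series loads mainly on one of the three terms, whereas NSE and KGE blend all three error sources into a single number; the actual DE measure and the diagnostic polar plots are defined in the article linked below.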

Bibliographic Details
Main Authors: R. Schwemmle, D. Demand, M. Weiler
Format: Article
Language: English
Published: Copernicus Publications, 2021-04-01
Series: Hydrology and Earth System Sciences, 25, 2187–2198
ISSN: 1027-5606; 1607-7938
DOI: 10.5194/hess-25-2187-2021
Online Access: https://hess.copernicus.org/articles/25/2187/2021/hess-25-2187-2021.pdf