Compute metrics for given backtest results
Usage
backtest_metrics(
...,
metrics = c("mae", "rmse", "ape", "quantile"),
horizons = 0
)
Arguments
- ...
Results of calls to backtest.
- metrics
Vector of metrics to calculate. Currently supported: 'mae' (mean absolute error), 'rmse' (root mean squared error), 'ape' (absolute percent error), and 'quantile' for all quantile-based metrics; specific quantile-based metrics can be selected via
get_quantile_metrics
.
- horizons
Vector of horizons for which the metrics are calculated. The default calculates metrics only for horizon 0 (the nowcast).
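As a sketch of how the metrics and horizons arguments can be combined (assuming btest1 and btest2 are backtest results as in the Examples section; the sign convention for non-zero horizons is an assumption, not confirmed by this page):

```r
# Restrict the report to point-forecast metrics only, for the nowcast
# and one additional horizon (assumed convention for earlier horizons):
backtest_metrics(btest1, btest2,
                 metrics  = c("mae", "rmse"),  # skip 'ape' and quantile-based metrics
                 horizons = c(0, 1))
```

Selecting only the metrics of interest keeps the output table narrow, which is convenient when comparing many models at once.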
Examples
# These examples require the `scoringutils` package
if (requireNamespace("scoringutils", quietly = TRUE)){
# Load the data
data(denguedat)
# In this example we will test two models
now <- as.Date("1990-10-01")
ncast1 <- nowcast(denguedat, "onset_week", "report_week", now = now,
method = "optimization", seed = 2495624, iter = 10)
ncast2 <- nowcast(denguedat, "onset_week", "report_week", now = now,
method = "optimization", seed = 2495624, iter = 10, dist = "Poisson")
# Run a backtest for each of the models
btest1 <- backtest(ncast1, dates_to_test = as.Date("1990-06-11"), model_name = "Classic")
btest2 <- backtest(ncast2, dates_to_test = as.Date("1990-06-11"), model_name = "Poisson")
# Compare the models to select the best model
backtest_metrics(btest1, btest2)
}
#> model now horizon Strata_unified mae rmse ape wis
#> 1 Classic 1990-06-11 0 No strata 10.865 10.865 1.810833 4.221429
#> 2 Poisson 1990-06-11 0 No strata 7.418 7.418 1.236333 3.914286
#> overprediction underprediction dispersion bias interval_coverage_50
#> 1 2.857143 0 1.3642857 0.9 0
#> 2 3.285714 0 0.6285714 1.0 0
#> interval_coverage_90 ae_median
#> 1 1 10
#> 2 0 7