Utilities for Scoring and Assessing Predictions


Documentation for package ‘scoringutils’ version 2.1.1

Help Pages


-- A --

add_relative_skill Add relative skill scores based on pairwise comparisons
ae_median_quantile Absolute error of the median (quantile-based version)
ae_median_sample Absolute error of the median (sample-based version)
assert_dims_ok_point Assert inputs have matching dimensions
assert_forecast Assert that input is a forecast object and passes validations
assert_forecast.default Assert that input is a forecast object and passes validations
assert_forecast.forecast_binary Assert that input is a forecast object and passes validations
assert_forecast.forecast_point Assert that input is a forecast object and passes validations
assert_forecast.forecast_quantile Assert that input is a forecast object and passes validations
assert_forecast.forecast_sample Assert that input is a forecast object and passes validations
assert_forecast_generic Validation common to all forecast types
assert_forecast_type Assert that forecast type is as expected
assert_input_binary Assert that inputs are correct for binary forecast
assert_input_categorical Assert that inputs are correct for categorical forecasts
assert_input_interval Assert that inputs are correct for interval-based forecast
assert_input_nominal Assert that inputs are correct for nominal forecasts
assert_input_ordinal Assert that inputs are correct for ordinal forecasts
assert_input_point Assert that inputs are correct for point forecast
assert_input_quantile Assert that inputs are correct for quantile-based forecast
assert_input_sample Assert that inputs are correct for sample-based forecast
as_forecast_binary Create a 'forecast' object for binary forecasts
as_forecast_binary.default Create a 'forecast' object for binary forecasts
as_forecast_doc_template General information on creating a 'forecast' object
as_forecast_generic Common functionality for as_forecast_<type> functions
as_forecast_nominal Create a 'forecast' object for nominal forecasts
as_forecast_nominal.default Create a 'forecast' object for nominal forecasts
as_forecast_ordinal Create a 'forecast' object for ordinal forecasts
as_forecast_ordinal.default Create a 'forecast' object for ordinal forecasts
as_forecast_point Create a 'forecast' object for point forecasts
as_forecast_point.default Create a 'forecast' object for point forecasts
as_forecast_point.forecast_quantile Create a 'forecast' object for point forecasts
as_forecast_quantile Create a 'forecast' object for quantile-based forecasts
as_forecast_quantile.default Create a 'forecast' object for quantile-based forecasts
as_forecast_quantile.forecast_sample Create a 'forecast' object for quantile-based forecasts
as_forecast_sample Create a 'forecast' object for sample-based forecasts
as_forecast_sample.default Create a 'forecast' object for sample-based forecasts
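The as_forecast_<type>() constructors above validate the input data and attach the matching forecast class. A minimal sketch using the bundled example_quantile data (listed under E below), which already uses the column names the constructor expects:

```r
library(scoringutils)

# example_quantile ships with the expected column names, so the constructor
# can be called directly; it messages about the forecast unit it assumes.
forecast <- as_forecast_quantile(example_quantile)

# print.forecast() summarises forecast type and forecast unit;
# is_forecast_quantile() tests the class without erroring.
print(forecast)
is_forecast_quantile(forecast)
```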

-- B --

bias_quantile Determine bias of quantile-based forecasts
bias_sample Determine bias of sample-based forecasts
brier_score Metrics for binary outcomes
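These metric functions operate on vectors and matrices directly, without a forecast object. A sketch (following the package's documented convention that, for binary outcomes, predicted is the probability of the highest factor level):

```r
library(scoringutils)

# Binary outcome: observed is a factor with exactly two levels; predicted is
# the probability that the outcome equals the highest factor level.
observed <- factor(c(1, 0, 1), levels = c(0, 1))
predicted <- c(0.8, 0.3, 0.5)
brier_score(observed, predicted)  # per-forecast squared probability error

# Sample-based bias: predicted is an n x N matrix of predictive samples.
# Values near 0 indicate unbiased forecasts; -1 strong underprediction,
# +1 strong overprediction.
observed_cont <- c(2.5, 1.0)
samples <- matrix(rnorm(2000, mean = c(2.5, 1.0)), nrow = 2)
bias_sample(observed_cont, samples)
```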

-- C --

check_columns_present Check column names are present in a data.frame
check_dims_ok_point Check inputs have matching dimensions
check_duplicates Check that there are no duplicate forecasts
check_input_binary Check that inputs are correct for binary forecast
check_input_interval Check that inputs are correct for interval-based forecast
check_input_point Check that inputs are correct for point forecast
check_input_quantile Check that inputs are correct for quantile-based forecast
check_input_sample Check that inputs are correct for sample-based forecast
check_number_per_forecast Check that all forecasts have the same number of rows
check_numeric_vector Check whether an input is an atomic vector of mode 'numeric'
check_try Helper function to convert assert statements into checks
crps_sample (Continuous) ranked probability score
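crps_sample() takes one observation per row of a sample matrix and returns one score per row; the check_* helpers in this section return TRUE or a message string rather than erroring, which makes them usable in control flow. A sketch:

```r
library(scoringutils)

observed <- c(1.5, -0.3)
predicted <- matrix(rnorm(2 * 500), nrow = 2)  # 500 predictive samples each
crps_sample(observed, predicted)               # one CRPS value per observation

# Returns TRUE on valid input, otherwise a string describing the problem.
check_input_sample(observed, predicted)
```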

-- D --

dispersion_quantile Weighted interval score (WIS)
dispersion_sample (Continuous) ranked probability score
dss_sample Dawid-Sebastiani score

-- E --

example_binary Binary forecast example data
example_nominal Nominal example data
example_ordinal Ordinal example data
example_point Point forecast example data
example_quantile Quantile example data
example_sample_continuous Continuous forecast example data
example_sample_discrete Discrete forecast example data

-- G --

get_correlations Calculate correlation between metrics
get_coverage Get quantile and interval coverage values for quantile-based forecasts
get_duplicate_forecasts Find duplicate forecasts
get_forecast_counts Count number of available forecasts
get_forecast_type Get forecast type from forecast object
get_forecast_unit Get unit of a single forecast
get_metrics Get metrics
get_metrics.forecast_binary Get default metrics for binary forecasts
get_metrics.forecast_nominal Get default metrics for nominal forecasts
get_metrics.forecast_ordinal Get default metrics for ordinal forecasts
get_metrics.forecast_point Get default metrics for point forecasts
get_metrics.forecast_quantile Get default metrics for quantile-based forecasts
get_metrics.forecast_sample Get default metrics for sample-based forecasts
get_metrics.scores Get names of the metrics that were used for scoring
get_pairwise_comparisons Obtain pairwise comparisons between models
get_pit_histogram Probability integral transformation histogram
get_pit_histogram.default Probability integral transformation histogram
get_pit_histogram.forecast_quantile Probability integral transformation histogram
get_pit_histogram.forecast_sample Probability integral transformation histogram
get_type Get type of a vector or matrix of observed values or predictions
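The get_* accessors work on validated forecast objects. A sketch of the most common diagnostics, using the bundled example_quantile data:

```r
library(scoringutils)

forecast <- as_forecast_quantile(example_quantile)

# Columns that jointly identify a single forecast
get_forecast_unit(forecast)

# Number of available forecasts, stratified as requested
get_forecast_counts(forecast, by = c("model", "target_type"))

# Empirical quantile and interval coverage per model
get_coverage(forecast, by = "model")
```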

-- I --

interval_coverage Interval coverage (for quantile-based forecasts)
interval_score Interval score
is_forecast Test whether an object is a forecast object
is_forecast_binary Test whether an object is a forecast object
is_forecast_nominal Test whether an object is a forecast object
is_forecast_ordinal Test whether an object is a forecast object
is_forecast_point Test whether an object is a forecast object
is_forecast_quantile Test whether an object is a forecast object
is_forecast_sample Test whether an object is a forecast object

-- L --

logs_binary Metrics for binary outcomes
logs_categorical Log score for categorical outcomes
logs_sample Logarithmic score (sample-based version)
log_shift Log transformation with an additive shift
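log_shift() is a small convenience wrapper around log() that first adds an offset, keeping zero counts finite:

```r
library(scoringutils)

# log(x + offset); an offset of 1 is a common choice for count data with zeros.
log_shift(c(0, 1, 9), offset = 1)
#> log(1), log(2), log(10) = 0.0000000 0.6931472 2.3025851
```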

-- M --

mad_sample Determine dispersion of a probabilistic forecast

-- O --

overprediction_quantile Weighted interval score (WIS)
overprediction_sample (Continuous) ranked probability score

-- P --

pit_histogram_sample Probability integral transformation for counts
plot_correlations Plot correlation between metrics
plot_forecast_counts Visualise the number of available forecasts
plot_heatmap Create a heatmap of a scoring metric
plot_interval_coverage Plot interval coverage
plot_pairwise_comparisons Plot heatmap of pairwise comparisons
plot_quantile_coverage Plot quantile coverage
plot_wis Plot contributions to the weighted interval score
print.forecast Print information about a forecast object
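The plot_* helpers return ggplot2 objects, so the usual layering applies. A sketch (assuming ggplot2 is available, as the plotting functions depend on it):

```r
library(scoringutils)
library(ggplot2)

forecast <- as_forecast_quantile(example_quantile)
counts <- get_forecast_counts(forecast, by = c("model", "target_type"))

plot_forecast_counts(counts, x = "model") +
  facet_wrap(~target_type) +
  theme_scoringutils()
```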

-- Q --

quantile_score Quantile score

-- R --

rps_ordinal Ranked probability score for ordinal outcomes

-- S --

score Evaluate forecasts
score.forecast_binary Evaluate forecasts
score.forecast_nominal Evaluate forecasts
score.forecast_ordinal Evaluate forecasts
score.forecast_point Evaluate forecasts
score.forecast_quantile Evaluate forecasts
score.forecast_sample Evaluate forecasts
scoring-functions-binary Metrics for binary outcomes
select_metrics Select metrics from a list of functions
set_forecast_unit Set unit of a single forecast manually
se_mean_sample Squared error of the mean (sample-based version)
summarise_scores Summarise scores as produced by 'score()'
summarize_scores Summarise scores as produced by 'score()'
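score() dispatches on the forecast class and returns one row of metric values per individual forecast; summarise_scores() then aggregates. A minimal end-to-end sketch:

```r
library(scoringutils)

forecast <- as_forecast_quantile(example_quantile)
scores <- score(forecast)                # one row per individual forecast
summarise_scores(scores, by = "model")   # mean score per model

# Restrict scoring to a subset of the default metrics
wis_only <- select_metrics(get_metrics(forecast), select = "wis")
score(forecast, metrics = wis_only)
```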

-- T --

test_columns_not_present Test whether column names are NOT present in a data.frame
test_columns_present Test whether all column names are present in a data.frame
theme_scoringutils Scoringutils ggplot2 theme
transform_forecasts Transform forecasts and observed values
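transform_forecasts() applies a transformation to observed and predicted values and, by default, appends the result so scores can be compared on the natural and transformed scales. A sketch using the bundled sample-based example data:

```r
library(scoringutils)

forecast <- as_forecast_sample(example_sample_continuous)

# Append log-transformed copies of observed and predicted values; a 'scale'
# column ("natural" / "log") distinguishes the two versions when scoring.
transformed <- transform_forecasts(forecast, fun = log_shift, offset = 1)
summarise_scores(score(transformed), by = c("model", "scale"))
```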

-- U --

underprediction_quantile Weighted interval score (WIS)
underprediction_sample (Continuous) ranked probability score

-- V --

validate_metrics Validate metrics

-- W --

wis Weighted interval score (WIS)
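wis() evaluates quantile forecasts supplied as a matrix with one row per observation and one column per quantile level; the dispersion, overprediction and underprediction entries listed under D, O and U are its decomposition. A sketch:

```r
library(scoringutils)

observed <- c(10, 25)
quantile_level <- c(0.05, 0.25, 0.5, 0.75, 0.95)
predicted <- rbind(
  c(5, 8, 10, 12, 15),
  c(10, 15, 20, 26, 35)
)
wis(observed, predicted, quantile_level)

# Return the dispersion / overprediction / underprediction components
wis(observed, predicted, quantile_level, separate_results = TRUE)
```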