java.lang.Object
es.upm.fi.cig.multictbnc.performance.Metrics
Computes different metrics for the evaluation of multi-dimensional classifications.
-
Constructor Summary
-
Method Summary
evaluate(Prediction[] predicted, Dataset actualDataset)
Uses different performance metrics to evaluate how good the given predictions are.
static double f1score(Map cm)
Computes the F1 score from a Map containing a confusion matrix.
static double globalAccuracy(Prediction[] predicted, Dataset actualDataset)
Computes the global accuracy, which is the ratio between the number of instances that were correctly classified for all the class variables and the total number of instances.
static double globalBrierScore(Prediction[] predicted, Dataset actualDataset)
The Brier score measures the performance of probabilistic predictions.
static double macroAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
Computes the value of a given evaluation metric for a multi-dimensional classification problem using macro-averaging (Gil-Begue et al., 2021).
static double meanAccuracy(Prediction[] predicted, Dataset actualDataset)
Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
static double meanAccuracy(Prediction[] predicted, Dataset actualDataset, Map<String, Double> results)
Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
static double microAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
Computes the value of a given evaluation metric for a multi-dimensional classification problem using micro-averaging (Gil-Begue et al., 2021).
static double precision(Map cm)
Computes the precision evaluation metric from a Map containing a confusion matrix.
static double recall(Map cm)
Computes the recall evaluation metric from a Map containing a confusion matrix.
static void showPredictions(Prediction[] predicted, Dataset actualDataset)
Displays the predictions along with the actual values.
-
Constructor Details
-
Metrics
public Metrics()
-
-
Method Details
-
evaluate
Uses different performance metrics to evaluate how good the given predictions are.
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
Map with the name of the evaluation metrics and their values
-
f1score
Computes the F1 score from a Map containing a confusion matrix. The Map should contain, at least, the keys "tp" (true positive), "fp" (false positive) and "fn" (false negative). If there are no positive examples in the test dataset (tp = 0 and fn = 0) and no false positives (fp = 0), a division by 0 would occur. In those cases, the F1 score is ill-defined and set to 0.
Parameters:
cm - Map representing a confusion matrix
Returns:
F1 score
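The F1 computation described above can be sketched as follows. This is an illustrative re-implementation, not the library's code; the key names follow the documented confusion-matrix Map, and any missing key is treated as a count of 0.

```java
import java.util.Map;

public class F1Example {
    // F1 = 2tp / (2tp + fp + fn); the denominator is 0 only when tp, fp
    // and fn are all 0, the ill-defined case documented above.
    static double f1(Map<String, Double> cm) {
        double tp = cm.getOrDefault("tp", 0.0);
        double fp = cm.getOrDefault("fp", 0.0);
        double fn = cm.getOrDefault("fn", 0.0);
        if (2 * tp + fp + fn == 0)
            return 0.0; // ill-defined: no positives and no false positives
        return 2 * tp / (2 * tp + fp + fn);
    }

    public static void main(String[] args) {
        // 2*8 / (2*8 + 2 + 4) = 16/22
        System.out.println(f1(Map.of("tp", 8.0, "fp", 2.0, "fn", 4.0)));
    }
}
```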
-
globalAccuracy
Computes the global accuracy, which is the ratio between the number of instances that were correctly classified for all the class variables and the total number of instances. A partially correct classification is considered an error (Bielza et al., 2011).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
0/1 subset accuracy
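As an illustration of the 0/1 subset accuracy, the following hedged sketch represents each instance's predicted and actual class values as plain String arrays (the library itself works on Prediction and Dataset objects):

```java
import java.util.Arrays;

public class GlobalAccuracyExample {
    // An instance counts as correct only if EVERY class variable matches;
    // a partially correct classification counts as an error.
    static double globalAccuracy(String[][] predicted, String[][] actual) {
        int correct = 0;
        for (int i = 0; i < predicted.length; i++)
            if (Arrays.equals(predicted[i], actual[i]))
                correct++;
        return (double) correct / predicted.length;
    }

    public static void main(String[] args) {
        String[][] pred = {{"a", "x"}, {"a", "y"}, {"b", "x"}};
        String[][] act  = {{"a", "x"}, {"a", "x"}, {"b", "x"}};
        // The second instance is only partially correct, so accuracy is 2/3
        System.out.println(globalAccuracy(pred, act));
    }
}
```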
-
globalBrierScore
The Brier score measures the performance of probabilistic predictions. Models that assign a higher probability to correct predictions will have a lower Brier score (0 is the best). This method implements a generalised version for multi-dimensional problems, which rewards only the probability of the class configuration where all classes are correct (Fernandes et al., 2013).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
global Brier score
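A possible reading of the generalised Brier score is sketched below: for each instance, sum the squared differences between the predicted probability of every class configuration and an indicator that is 1 only for the configuration where all class variables are correct. The String encoding of configurations (e.g. "a|x") is a hypothetical stand-in for the library's internal representation.

```java
import java.util.List;
import java.util.Map;

public class BrierScoreExample {
    // Lower is better; a model that puts probability 1 on the fully correct
    // configuration of every instance scores 0.
    static double globalBrierScore(List<Map<String, Double>> predictedProbs,
                                   List<String> actualConfigs) {
        double sum = 0.0;
        for (int i = 0; i < actualConfigs.size(); i++) {
            for (Map.Entry<String, Double> e : predictedProbs.get(i).entrySet()) {
                double indicator = e.getKey().equals(actualConfigs.get(i)) ? 1.0 : 0.0;
                sum += Math.pow(e.getValue() - indicator, 2);
            }
        }
        return sum / actualConfigs.size();
    }

    public static void main(String[] args) {
        // One instance, two class configurations "a|x" and "a|y" (hypothetical)
        List<Map<String, Double>> probs = List.of(Map.of("a|x", 0.8, "a|y", 0.2));
        // (0.8 - 1)^2 + (0.2 - 0)^2 = 0.08
        System.out.println(globalBrierScore(probs, List.of("a|x")));
    }
}
```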
-
macroAveraging
Computes the value of a given evaluation metric for a multi-dimensional classification problem using macro-averaging (Gil-Begue et al., 2021).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
metric - evaluation metric
Returns:
result of the evaluation metric
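Macro-averaging computes the metric separately for each class variable and then averages the results, so every class variable carries the same weight. A minimal sketch, assuming per-class-variable confusion matrices and using a lambda in place of the library's Metric interface:

```java
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class MacroAveragingExample {
    // Apply the metric to each class variable's confusion matrix, then average.
    static double macroAverage(List<Map<String, Double>> cmPerClassVariable,
                               ToDoubleFunction<Map<String, Double>> metric) {
        return cmPerClassVariable.stream().mapToDouble(metric).average().orElse(0.0);
    }

    public static void main(String[] args) {
        // Two class variables with hypothetical "tp"/"fp" counts
        List<Map<String, Double>> cms = List.of(
                Map.of("tp", 8.0, "fp", 2.0),
                Map.of("tp", 6.0, "fp", 6.0));
        ToDoubleFunction<Map<String, Double>> precision =
                cm -> cm.get("tp") / (cm.get("tp") + cm.get("fp"));
        // (0.8 + 0.5) / 2 = 0.65
        System.out.println(macroAverage(cms, precision));
    }
}
```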
-
meanAccuracy
Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
mean accuracy
-
meanAccuracy
public static double meanAccuracy(Prediction[] predicted, Dataset actualDataset, Map<String, Double> results)
Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
results - a Map to store the accuracies of each class variable
Returns:
mean accuracy
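The mean accuracy can be sketched as follows: accuracy is computed independently for each class variable and averaged, and the per-variable accuracies are also stored in a results Map, mirroring the overload above. The String-array representation and class-variable names are hypothetical stand-ins for the library's Prediction and Dataset types.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MeanAccuracyExample {
    static double meanAccuracy(String[][] predicted, String[][] actual,
                               String[] classVariables, Map<String, Double> results) {
        double sum = 0.0;
        for (int j = 0; j < classVariables.length; j++) {
            // Accuracy of class variable j, counted over all instances
            int correct = 0;
            for (int i = 0; i < predicted.length; i++)
                if (predicted[i][j].equals(actual[i][j]))
                    correct++;
            double acc = (double) correct / predicted.length;
            results.put(classVariables[j], acc);
            sum += acc;
        }
        return sum / classVariables.length;
    }

    public static void main(String[] args) {
        String[][] pred = {{"a", "x"}, {"a", "y"}, {"b", "x"}};
        String[][] act  = {{"a", "x"}, {"a", "x"}, {"b", "x"}};
        Map<String, Double> results = new LinkedHashMap<>();
        // C1 accuracy is 1.0, C2 accuracy is 2/3, so the mean is 5/6
        System.out.println(meanAccuracy(pred, act, new String[]{"C1", "C2"}, results));
        System.out.println(results);
    }
}
```

Note how the second instance, which the 0/1 subset accuracy would count as a plain error, still contributes to the accuracy of the correctly predicted class variable.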
-
microAveraging
Computes the value of a given evaluation metric for a multi-dimensional classification problem using micro-averaging (Gil-Begue et al., 2021).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
metric - evaluation metric
Returns:
result of the evaluation metric
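In contrast to macro-averaging, micro-averaging first pools the confusion matrices of all class variables (entry-wise sum) and then computes the metric once on the pooled counts. A minimal sketch under the same hypothetical per-class-variable confusion matrices:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class MicroAveragingExample {
    // Sum the "tp"/"fp"/... entries across class variables, then apply the metric.
    static double microAverage(List<Map<String, Double>> cmPerClassVariable,
                               ToDoubleFunction<Map<String, Double>> metric) {
        Map<String, Double> pooled = new HashMap<>();
        for (Map<String, Double> cm : cmPerClassVariable)
            cm.forEach((k, v) -> pooled.merge(k, v, Double::sum));
        return metric.applyAsDouble(pooled);
    }

    public static void main(String[] args) {
        List<Map<String, Double>> cms = List.of(
                Map.of("tp", 8.0, "fp", 2.0),
                Map.of("tp", 6.0, "fp", 6.0));
        ToDoubleFunction<Map<String, Double>> precision =
                cm -> cm.get("tp") / (cm.get("tp") + cm.get("fp"));
        // Pooled counts: tp = 14, fp = 8, so precision = 14/22
        System.out.println(microAverage(cms, precision));
    }
}
```

With these numbers the micro-averaged precision (14/22 ≈ 0.636) differs from the macro-averaged one (0.65), since micro-averaging weights class variables by their counts rather than equally.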
-
precision
Computes the precision evaluation metric from a Map containing a confusion matrix. The Map should contain at least the keys "tp" (true positive) and "fp" (false positive). If there are no cases predicted as positive (tp = 0 and fp = 0), a division by 0 would occur. In those cases, the precision is ill-defined and set to 0.
Parameters:
cm - Map representing a confusion matrix
Returns:
precision
-
recall
Computes the recall evaluation metric from a Map containing a confusion matrix. The Map should contain at least the keys "tp" (true positive) and "fn" (false negative). If there are no positive examples in the test dataset (tp = 0 and fn = 0), a division by 0 would occur. In those cases, the recall is ill-defined and set to 0.
Parameters:
cm - Map representing a confusion matrix
Returns:
recall
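The precision and recall computations described above can be sketched together; as with the F1 example, this is an illustration from the documented confusion-matrix keys, not the library's code, and missing keys are treated as 0.

```java
import java.util.Map;

public class PrecisionRecallExample {
    // precision = tp / (tp + fp); ill-defined when nothing is predicted positive
    static double precision(Map<String, Double> cm) {
        double tp = cm.getOrDefault("tp", 0.0), fp = cm.getOrDefault("fp", 0.0);
        return (tp + fp == 0) ? 0.0 : tp / (tp + fp);
    }

    // recall = tp / (tp + fn); ill-defined when the test set has no positives
    static double recall(Map<String, Double> cm) {
        double tp = cm.getOrDefault("tp", 0.0), fn = cm.getOrDefault("fn", 0.0);
        return (tp + fn == 0) ? 0.0 : tp / (tp + fn);
    }

    public static void main(String[] args) {
        Map<String, Double> cm = Map.of("tp", 8.0, "fp", 2.0, "fn", 4.0);
        System.out.println(precision(cm)); // 8/10
        System.out.println(recall(cm));    // 8/12
    }
}
```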
-
showPredictions
Displays the predictions along with the actual values.
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
-