java.lang.Object
es.upm.fi.cig.multictbnc.performance.Metrics
Computes different metrics for the evaluation of multi-dimensional classifications.
-
Constructor Summary
Metrics()
Method Summary
Modifier and Type | Method | Description
static Map<String,Double> | evaluate(Prediction[] predicted, Dataset actualDataset) | Uses different performance metrics to evaluate how good the given predictions are.
static double | f1score(Map<String,Double> cm) | Computes the F1 score from a Map containing a confusion matrix.
static double | globalAccuracy(Prediction[] predicted, Dataset actualDataset) | Computes the global accuracy, the ratio between the number of instances whose class variables were all correctly classified and the total number of instances.
static double | globalBrierScore(Prediction[] predicted, Dataset actualDataset) | Measures the performance of probabilistic predictions with a global Brier score.
static double | macroAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric) | Computes the value of a given evaluation metric for a multi-dimensional classification problem using macro-averaging (Gil-Begue et al., 2021).
static double | meanAccuracy(Prediction[] predicted, Dataset actualDataset) | Computes the mean of the accuracies of each class variable (Bielza et al., 2011).
static double | meanAccuracy(Prediction[] predicted, Dataset actualDataset, Map<String,Double> results) | Computes the mean of the accuracies of each class variable, storing the per-variable accuracies in the given Map (Bielza et al., 2011).
static double | microAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric) | Computes the value of a given evaluation metric for a multi-dimensional classification problem using micro-averaging (Gil-Begue et al., 2021).
static double | precision(Map<String,Double> cm) | Computes the precision evaluation metric from a Map containing a confusion matrix.
static double | recall(Map<String,Double> cm) | Computes the recall evaluation metric from a Map containing a confusion matrix.
static void | showPredictions(Prediction[] predicted, Dataset actualDataset) | Displays the predictions along with the actual values.
-
Constructor Details
-
Metrics
public Metrics()
-
-
Method Details
-
evaluate
public static Map<String, Double> evaluate(Prediction[] predicted, Dataset actualDataset)
Uses different performance metrics to evaluate how good the given predictions are.
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
Map with the names of the evaluation metrics and their values
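A minimal usage sketch, assuming the MultiCTBNC library is on the classpath and a Prediction array has already been obtained for a test Dataset. The packages of Prediction and Dataset are not listed on this page, so their imports are omitted, and the Map<String, Double> return type is inferred from the Returns description above.

import java.util.Map;

import es.upm.fi.cig.multictbnc.performance.Metrics;

public class EvaluateSketch {
    // Prediction and Dataset are classes of the same library; their packages
    // are not shown on this page, so their imports are left to the reader.
    static void report(Prediction[] predicted, Dataset testDataset) {
        // evaluate(...) bundles several metrics into a single Map
        Map<String, Double> results = Metrics.evaluate(predicted, testDataset);
        // keys are the metric names, values their scores
        results.forEach((name, score) -> System.out.printf("%s: %.4f%n", name, score));
    }
}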
-
f1score
public static double f1score(Map<String, Double> cm)
Computes the F1 score from a Map containing a confusion matrix. The Map should contain, at least, the keys "tp" (true positives), "fp" (false positives) and "fn" (false negatives). If there are no positive examples in the test dataset (tp = 0 and fn = 0) and no false positives (fp = 0), a division by zero would occur. In those cases, the F1 score is ill-defined and set to 0.
Parameters:
cm - Map representing a confusion matrix
Returns:
F1 score
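Since the description fixes the formula and its edge case, the arithmetic can be sketched independently of the library. The following is a re-implementation for illustration, not the library's source; it uses the same "tp", "fp" and "fn" keys and returns 0 in the ill-defined case.

import java.util.Map;

public class F1FromConfusionMatrix {
    // F1 = 2*tp / (2*tp + fp + fn); 0 when ill-defined (tp = fp = fn = 0)
    static double f1score(Map<String, Double> cm) {
        double tp = cm.get("tp"), fp = cm.get("fp"), fn = cm.get("fn");
        double denominator = 2 * tp + fp + fn;
        return denominator == 0 ? 0 : 2 * tp / denominator;
    }

    public static void main(String[] args) {
        System.out.println(f1score(Map.of("tp", 8.0, "fp", 2.0, "fn", 4.0))); // 16/22 = 0.7272...
        System.out.println(f1score(Map.of("tp", 0.0, "fp", 0.0, "fn", 0.0))); // ill-defined -> 0
    }
}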
-
globalAccuracy
public static double globalAccuracy(Prediction[] predicted, Dataset actualDataset)
Computes the global accuracy, which is the ratio between the number of instances whose class variables were all correctly classified and the total number of instances. A partially correct classification is considered an error (Bielza et al., 2011).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
0/1 subset accuracy
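To make the 0/1 subset criterion concrete, the sketch below computes it over plain String arrays standing in for the library's Prediction and Dataset types: a row counts as correct only if every class variable matches.

public class GlobalAccuracy {
    // an instance is a hit only if all of its class variables are correct
    static double globalAccuracy(String[][] predicted, String[][] actual) {
        int hits = 0;
        for (int i = 0; i < actual.length; i++) {
            boolean allCorrect = true;
            for (int j = 0; j < actual[i].length; j++)
                if (!predicted[i][j].equals(actual[i][j])) { allCorrect = false; break; }
            if (allCorrect) hits++;
        }
        return (double) hits / actual.length;
    }

    public static void main(String[] args) {
        String[][] pred = {{"a", "x"}, {"a", "y"}, {"b", "y"}};
        String[][] actual = {{"a", "x"}, {"a", "x"}, {"b", "y"}};
        System.out.println(globalAccuracy(pred, actual)); // 2 of 3 fully correct -> 0.666...
    }
}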
-
globalBrierScore
public static double globalBrierScore(Prediction[] predicted, Dataset actualDataset)
The Brier score measures the performance of probabilistic predictions. Models that assign a higher probability to correct predictions obtain a lower Brier score (0 is the best). This method implements a generalised version for multi-dimensional problems, which rewards only the probability given to the class configuration where all classes are correct (Fernandes et al., 2013).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
global Brier score
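A sketch of one common reading of this generalisation, in which predictions are distributions over whole class configurations and only the configuration with all classes correct has target 1. This illustrates the formula under that assumption; it is not the library's code.

import java.util.List;
import java.util.Map;

public class GlobalBrierScore {
    // p: per instance, the predicted probability of every class configuration
    // (e.g. "a,x"); actual: the true configuration of each instance
    static double globalBrierScore(List<Map<String, Double>> p, List<String> actual) {
        double sum = 0;
        for (int i = 0; i < actual.size(); i++)
            for (Map.Entry<String, Double> e : p.get(i).entrySet()) {
                double truth = e.getKey().equals(actual.get(i)) ? 1 : 0;
                sum += Math.pow(e.getValue() - truth, 2);
            }
        return sum / actual.size();
    }

    public static void main(String[] args) {
        // one instance, two possible configurations, 0.8 on the correct one:
        // (0.8 - 1)^2 + (0.2 - 0)^2 = 0.08
        System.out.println(globalBrierScore(
                List.of(Map.of("a,x", 0.8, "a,y", 0.2)), List.of("a,x")));
    }
}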
-
macroAveraging
public static double macroAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
Computes the value of a given evaluation metric for a multi-dimensional classification problem using macro-averaging (Gil-Begue et al., 2021).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
metric - evaluation metric
Returns:
result of the evaluation metric
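Macro-averaging evaluates the metric once per class variable and averages the results with equal weights. The sketch below illustrates this scheme with per-variable confusion matrices; the library's Metric type is not detailed on this page, so a plain ToDoubleFunction stands in for it.

import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class MacroAveraging {
    // cms: one confusion matrix per class variable; the metric is evaluated
    // on each matrix and the results are averaged with equal weights
    static double macroAverage(List<Map<String, Double>> cms,
                               ToDoubleFunction<Map<String, Double>> metric) {
        return cms.stream().mapToDouble(metric).average().orElse(0);
    }

    public static void main(String[] args) {
        ToDoubleFunction<Map<String, Double>> precision =
                cm -> cm.get("tp") + cm.get("fp") == 0
                        ? 0 : cm.get("tp") / (cm.get("tp") + cm.get("fp"));
        System.out.println(macroAverage(
                List.of(Map.of("tp", 9.0, "fp", 1.0), Map.of("tp", 5.0, "fp", 5.0)),
                precision)); // (0.9 + 0.5) / 2 = 0.7
    }
}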
-
meanAccuracy
public static double meanAccuracy(Prediction[] predicted, Dataset actualDataset)
Computes the mean of the accuracies of each class variable (Bielza et al., 2011).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
Returns:
mean accuracy
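In contrast to the global accuracy, each class variable contributes its own accuracy, so partially correct instances still score on the variables they got right. A standalone sketch over the same toy arrays used in the globalAccuracy example:

public class MeanAccuracy {
    // average, over class variables, of the per-variable accuracy
    static double meanAccuracy(String[][] predicted, String[][] actual) {
        int numInstances = actual.length, numClassVariables = actual[0].length;
        double sum = 0;
        for (int j = 0; j < numClassVariables; j++) {
            int hits = 0;
            for (int i = 0; i < numInstances; i++)
                if (predicted[i][j].equals(actual[i][j])) hits++;
            sum += (double) hits / numInstances;
        }
        return sum / numClassVariables;
    }

    public static void main(String[] args) {
        String[][] pred = {{"a", "x"}, {"a", "y"}, {"b", "y"}};
        String[][] actual = {{"a", "x"}, {"a", "x"}, {"b", "y"}};
        // variable 1: 3/3 correct, variable 2: 2/3 correct -> (1 + 0.666...) / 2
        System.out.println(meanAccuracy(pred, actual)); // 0.833... (global accuracy was 0.666...)
    }
}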
-
meanAccuracy
public static double meanAccuracy(Prediction[] predicted, Dataset actualDataset, Map<String, Double> results)
Computes the mean of the accuracies of each class variable (Bielza et al., 2011).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
results - a Map in which the accuracy of each class variable is stored
Returns:
mean accuracy
-
microAveraging
public static double microAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
Computes the value of a given evaluation metric for a multi-dimensional classification problem using micro-averaging (Gil-Begue et al., 2021).
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
metric - evaluation metric
Returns:
result of the evaluation metric
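Micro-averaging pools the confusion matrices of all class variables into one and applies the metric a single time, so class variables with more counts weigh more than under macro-averaging. A sketch with the same stand-in function type as above:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class MicroAveraging {
    // pool the per-class-variable confusion matrices, then apply the metric once
    static double microAverage(List<Map<String, Double>> cms,
                               ToDoubleFunction<Map<String, Double>> metric) {
        Map<String, Double> pooled = new HashMap<>();
        for (Map<String, Double> cm : cms)
            cm.forEach((key, count) -> pooled.merge(key, count, Double::sum));
        return metric.applyAsDouble(pooled);
    }

    public static void main(String[] args) {
        ToDoubleFunction<Map<String, Double>> recall =
                cm -> cm.get("tp") + cm.get("fn") == 0
                        ? 0 : cm.get("tp") / (cm.get("tp") + cm.get("fn"));
        System.out.println(microAverage(
                List.of(Map.of("tp", 90.0, "fn", 10.0), Map.of("tp", 1.0, "fn", 9.0)),
                recall)); // pooled: 91 / 110 = 0.827...; macro would give (0.9 + 0.1) / 2 = 0.5
    }
}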
-
precision
public static double precision(Map<String, Double> cm)
Computes the precision evaluation metric from a Map containing a confusion matrix. The Map should contain at least the keys "tp" (true positives) and "fp" (false positives). If there are no cases predicted as positive (tp = 0 and fp = 0), a division by zero would occur. In those cases, the precision is ill-defined and set to 0.
Parameters:
cm - Map representing a confusion matrix
Returns:
precision
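The guard described above is a one-liner; a standalone sketch for illustration, not the library's source:

import java.util.Map;

public class Precision {
    // precision = tp / (tp + fp); 0 when nothing was predicted as positive
    static double precision(Map<String, Double> cm) {
        double tp = cm.get("tp"), fp = cm.get("fp");
        return tp + fp == 0 ? 0 : tp / (tp + fp);
    }

    public static void main(String[] args) {
        System.out.println(precision(Map.of("tp", 8.0, "fp", 2.0))); // 0.8
        System.out.println(precision(Map.of("tp", 0.0, "fp", 0.0))); // ill-defined -> 0
    }
}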
-
recall
public static double recall(Map<String, Double> cm)
Computes the recall evaluation metric from a Map containing a confusion matrix. The Map should contain at least the keys "tp" (true positives) and "fn" (false negatives). If there are no positive examples in the test dataset (tp = 0 and fn = 0), a division by zero would occur. In those cases, the recall is ill-defined and set to 0.
Parameters:
cm - Map representing a confusion matrix
Returns:
recall
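The analogous sketch for recall, again an illustration rather than the library's source:

import java.util.Map;

public class Recall {
    // recall = tp / (tp + fn); 0 when the test data has no positive examples
    static double recall(Map<String, Double> cm) {
        double tp = cm.get("tp"), fn = cm.get("fn");
        return tp + fn == 0 ? 0 : tp / (tp + fn);
    }

    public static void main(String[] args) {
        System.out.println(recall(Map.of("tp", 8.0, "fn", 4.0))); // 0.666...
        System.out.println(recall(Map.of("tp", 0.0, "fn", 0.0))); // ill-defined -> 0
    }
}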
-
showPredictions
public static void showPredictions(Prediction[] predicted, Dataset actualDataset)
Displays the predictions along with the actual values.
Parameters:
predicted - Prediction array
actualDataset - dataset with actual classes
-