java.lang.Object
es.upm.fi.cig.multictbnc.performance.Metrics

public class Metrics extends Object
Computes different metrics for the evaluation of multi-dimensional classifications.
  • Constructor Summary

    Constructors
    Constructor
    Description
    Metrics()
  • Method Summary

    Modifier and Type
    Method
    Description
    static Map<String,Double>
    evaluate(Prediction[] predicted, Dataset actualDataset)
    Uses different performance metrics to evaluate how good the given predictions are.
    static double
    f1score(Map<String,Double> cm)
    Computes the F1 score from a Map containing a confusion matrix.
    static double
    globalAccuracy(Prediction[] predicted, Dataset actualDataset)
    Computes the global accuracy, which is the ratio between the number of instances that were correctly classified for all the class variables and the total number of instances.
    static double
    globalBrierScore(Prediction[] predicted, Dataset actualDataset)
    The Brier score measures the performance of probabilistic predictions.
    static double
    macroAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
    Computes the value of a given evaluation metric for a multi-dimensional classification problem using macro-averaging (Gil-Begue et al., 2021).
    static double
    meanAccuracy(Prediction[] predicted, Dataset actualDataset)
    Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
    static double
    meanAccuracy(Prediction[] predicted, Dataset actualDataset, Map<String,Double> results)
    Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
    static double
    microAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
    Computes the value of a given evaluation metric for a multi-dimensional classification problem using micro-averaging (Gil-Begue et al., 2021).
    static double
    precision(Map<String,Double> cm)
    Computes the precision evaluation metric from a Map containing a confusion matrix.
    static double
    recall(Map<String,Double> cm)
    Computes the recall evaluation metric from a Map containing a confusion matrix.
    static void
    showPredictions(Prediction[] predicted, Dataset actualDataset)
    Displays the predictions along with the actual values.

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Constructor Details

    • Metrics

      public Metrics()
  • Method Details

    • evaluate

      public static Map<String,Double> evaluate(Prediction[] predicted, Dataset actualDataset)
      Uses different performance metrics to evaluate how good the given predictions are.
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes
      Returns:
      Map with the name of the evaluation metrics and their values
    • f1score

      public static double f1score(Map<String,Double> cm)
      Computes the F1 score from a Map containing a confusion matrix. The Map should contain, at least, the keys "tp" (true positive), "fp" (false positive) and "fn" (false negative). If there are no positive examples in the test dataset (tp = 0 and fn = 0) and no false positives (fp = 0), a division by 0 would occur. In those cases, the F1 score is ill-defined and set to 0.
      Parameters:
      cm - Map representing a confusion matrix
      Returns:
      F1 score
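      The documented behaviour reduces to F1 = 2·tp / (2·tp + fp + fn), with the ill-defined 0/0 case clamped to 0. The following is an illustrative reimplementation, not the library's source; only the map keys "tp", "fp" and "fn" come from the documentation, and the class name F1Sketch is hypothetical:

```java
import java.util.Map;

public class F1Sketch {
    // F1 = 2*tp / (2*tp + fp + fn); when tp = fp = fn = 0, the score is
    // ill-defined and set to 0, as the documentation specifies.
    public static double f1score(Map<String, Double> cm) {
        double tp = cm.getOrDefault("tp", 0.0);
        double fp = cm.getOrDefault("fp", 0.0);
        double fn = cm.getOrDefault("fn", 0.0);
        double denominator = 2 * tp + fp + fn;
        if (denominator == 0)
            return 0; // no positives anywhere in predictions or dataset
        return 2 * tp / denominator;
    }
}
```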
    • globalAccuracy

      public static double globalAccuracy(Prediction[] predicted, Dataset actualDataset)
      Computes the global accuracy, which is the ratio between the number of instances that were correctly classified for all the class variables and the total number of instances. A partially correct classification is considered an error (Bielza et al., 2011).
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes
      Returns:
      0/1 subset accuracy
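      The idea can be sketched with a simplified representation, where each instance's class values are a String array instead of the library's Prediction and Dataset types (an assumption of this sketch):

```java
import java.util.Arrays;

public class GlobalAccuracySketch {
    // 0/1 subset accuracy: an instance only counts as correct when every
    // class variable is predicted correctly; partial matches count as errors.
    public static double globalAccuracy(String[][] predicted, String[][] actual) {
        int correct = 0;
        for (int i = 0; i < predicted.length; i++)
            if (Arrays.equals(predicted[i], actual[i]))
                correct++;
        return (double) correct / predicted.length;
    }
}
```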
    • globalBrierScore

      public static double globalBrierScore(Prediction[] predicted, Dataset actualDataset)
      The Brier score measures the performance of probabilistic predictions. Models that assign a higher probability to correct predictions will have a lower Brier score (0 is the best). This method implements a generalised version for multi-dimensional problems, which rewards only the probability of the class configuration where all classes are correct (Fernandes et al., 2013).
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes
      Returns:
      global Brier score
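      A sketch under stated assumptions: joint class configurations are indexed by integers, each row of probs is the predicted distribution over configurations for one instance, and the squared errors are averaged over instances (the averaging convention is an assumption of this sketch, not taken from the documentation):

```java
public class BrierSketch {
    // probs[i][c]: predicted probability of joint class configuration c for
    // instance i; actual[i]: index of the true configuration of instance i.
    // Only the configuration where all classes are correct has target 1.
    public static double globalBrierScore(double[][] probs, int[] actual) {
        double sum = 0;
        for (int i = 0; i < probs.length; i++)
            for (int c = 0; c < probs[i].length; c++) {
                double truth = (c == actual[i]) ? 1.0 : 0.0;
                double diff = probs[i][c] - truth;
                sum += diff * diff;
            }
        return sum / probs.length;
    }
}
```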
    • macroAveraging

      public static double macroAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
      Computes the value of a given evaluation metric for a multi-dimensional classification problem using macro-averaging (Gil-Begue et al., 2021).
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes
      metric - evaluation metric
      Returns:
      result of the evaluation metric
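      Macro-averaging applies the metric to each class variable separately and averages the results. A minimal sketch, assuming each class variable is summarised by its own confusion-matrix Map (the class name MacroSketch and this calling convention are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class MacroSketch {
    // Macro-averaging: evaluate the metric on each class variable's own
    // confusion matrix, then average the per-variable results.
    public static double macroAverage(List<Map<String, Double>> cms,
                                      ToDoubleFunction<Map<String, Double>> metric) {
        return cms.stream().mapToDouble(metric).average().orElse(0);
    }
}
```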
    • meanAccuracy

      public static double meanAccuracy(Prediction[] predicted, Dataset actualDataset)
      Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes
      Returns:
      mean accuracy
    • meanAccuracy

      public static double meanAccuracy(Prediction[] predicted, Dataset actualDataset, Map<String,Double> results)
      Computes the mean of the accuracies for each class variable (Bielza et al., 2011).
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes
      results - a Map in which to store the accuracy of each class variable
      Returns:
      mean accuracy
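      Using the same simplified String-array representation as above (an assumption of this sketch; the library works on Prediction and Dataset objects), the mean accuracy averages the per-variable accuracies:

```java
public class MeanAccuracySketch {
    // Mean accuracy: for each class variable, compute the fraction of
    // instances whose value for that variable was predicted correctly,
    // then average these accuracies over the class variables.
    public static double meanAccuracy(String[][] predicted, String[][] actual) {
        int numClassVariables = predicted[0].length;
        double sum = 0;
        for (int j = 0; j < numClassVariables; j++) {
            int correct = 0;
            for (int i = 0; i < predicted.length; i++)
                if (predicted[i][j].equals(actual[i][j]))
                    correct++;
            sum += (double) correct / predicted.length;
        }
        return sum / numClassVariables;
    }
}
```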
    • microAveraging

      public static double microAveraging(Prediction[] predicted, Dataset actualDataset, Metric metric)
      Computes the value of a given evaluation metric for a multi-dimensional classification problem using micro-averaging (Gil-Begue et al., 2021).
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes
      metric - evaluation metric
      Returns:
      result of the evaluation metric
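      In contrast to macro-averaging, micro-averaging pools the confusion matrices of all class variables into a single one and applies the metric once to the pooled counts. A sketch under the same illustrative convention as the macro-averaging example (names and calling convention are assumptions):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class MicroSketch {
    // Micro-averaging: sum the per-variable confusion matrices entry by
    // entry, then evaluate the metric once on the pooled counts.
    public static double microAverage(List<Map<String, Double>> cms,
                                      ToDoubleFunction<Map<String, Double>> metric) {
        Map<String, Double> pooled = new HashMap<>();
        for (Map<String, Double> cm : cms)
            cm.forEach((key, count) -> pooled.merge(key, count, Double::sum));
        return metric.applyAsDouble(pooled);
    }
}
```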
    • precision

      public static double precision(Map<String,Double> cm)
      Computes the precision evaluation metric from a Map containing a confusion matrix. The Map should contain at least the keys "tp" (true positive) and "fp" (false positive). If there are no cases predicted as positive (tp = 0 and fp = 0), a division by 0 will occur. In those cases, the precision is ill-defined and set to 0.
      Parameters:
      cm - Map representing a confusion matrix
      Returns:
      precision
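      The documented behaviour reduces to tp / (tp + fp) with the 0/0 case clamped to 0. An illustrative reimplementation (only the map keys come from the documentation):

```java
import java.util.Map;

public class PrecisionSketch {
    // precision = tp / (tp + fp); when nothing was predicted positive
    // (tp = fp = 0), the metric is ill-defined and set to 0.
    public static double precision(Map<String, Double> cm) {
        double tp = cm.getOrDefault("tp", 0.0);
        double fp = cm.getOrDefault("fp", 0.0);
        if (tp + fp == 0)
            return 0;
        return tp / (tp + fp);
    }
}
```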
    • recall

      public static double recall(Map<String,Double> cm)
      Computes the recall evaluation metric from a Map containing a confusion matrix. The Map should contain at least the keys "tp" (true positive) and "fn" (false negative). If there are no positive examples in the test dataset (tp = 0 and fn = 0), a division by 0 will occur. In those cases, the recall is ill-defined and set to 0.
      Parameters:
      cm - Map representing a confusion matrix
      Returns:
      recall
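      Analogously, recall reduces to tp / (tp + fn) with the 0/0 case clamped to 0. An illustrative reimplementation (only the map keys come from the documentation):

```java
import java.util.Map;

public class RecallSketch {
    // recall = tp / (tp + fn); when there are no positive examples in the
    // test dataset (tp = fn = 0), the metric is ill-defined and set to 0.
    public static double recall(Map<String, Double> cm) {
        double tp = cm.getOrDefault("tp", 0.0);
        double fn = cm.getOrDefault("fn", 0.0);
        if (tp + fn == 0)
            return 0;
        return tp / (tp + fn);
    }
}
```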
    • showPredictions

      public static void showPredictions(Prediction[] predicted, Dataset actualDataset)
      Displays the predictions along with the actual values.
      Parameters:
      predicted - Prediction array
      actualDataset - dataset with actual classes