fatf.utils.metrics.metrics.multiclass_true_negative_rate

fatf.utils.metrics.metrics.multiclass_true_negative_rate(confusion_matrix: numpy.ndarray, label_index: int, strict: bool = False)

Calculates the “true negative rate” for a multi-class confusion matrix.

There are two possible ways of calculating it:

strict

The true negatives are all non-positive ground truth instances predicted with their exact (correct) label.

relaxed

The true negatives are defined as all non-positive ground truth instances predicted as any non-positive label (see the sketch below).

See the documentation of fatf.utils.metrics.tools.validate_confusion_matrix for all the possible errors and exceptions.
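The two definitions translate directly into arithmetic on the confusion matrix. Below is a minimal sketch of both calculations (not the library's implementation); the hypothetical helper _tnr_sketch assumes a square confusion matrix with the ground truth along the rows and the predictions along the columns.

import numpy as np

def _tnr_sketch(confusion_matrix: np.ndarray, label_index: int,
                strict: bool = False) -> float:
    # Condition negative: every instance whose ground truth label is not
    # the positive one, i.e. everything outside of the positive row.
    condition_negative = (confusion_matrix.sum()
                          - confusion_matrix[label_index, :].sum())
    if strict:
        # Strict: non-positive instances predicted with their exact label,
        # i.e. the diagonal without the positive cell.
        true_negative = (confusion_matrix.diagonal().sum()
                         - confusion_matrix[label_index, label_index])
    else:
        # Relaxed: non-positive instances predicted as any non-positive
        # label, i.e. every cell outside of the positive row and column.
        true_negative = (confusion_matrix.sum()
                         - confusion_matrix[label_index, :].sum()
                         - confusion_matrix[:, label_index].sum()
                         + confusion_matrix[label_index, label_index])
    return true_negative / condition_negative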

Parameters
confusion_matrix : numpy.ndarray

A confusion matrix based on which the metric will be computed.

label_index : integer

The index of a label that should be treated as “positive”. All the other labels will be treated as “negative”.

strict : boolean, optional (default=False)

If True, the “true negatives” are calculated using the strict definition above, otherwise the relaxed definition is used.

Returns
metric : number

The “true negative rate”.

Raises
TypeError

The strict parameter is not a boolean.
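Examples

A hypothetical usage example; the hand-computed values in the comments follow the strict and relaxed definitions given above, and the small confusion matrix is made up for illustration.

import numpy as np
from fatf.utils.metrics.metrics import multiclass_true_negative_rate

# A made-up 3x3 confusion matrix for labels 0, 1 and 2 (25 instances).
cm = np.array([[5, 1, 2],
               [1, 6, 1],
               [2, 0, 7]])

# Treat label 0 as "positive"; the remaining 17 instances are negative.

# Relaxed: true negatives are negatives predicted as any negative label,
# 6 + 1 + 0 + 7 = 14, so the rate is 14 / 17 (about 0.82).
relaxed_tnr = multiclass_true_negative_rate(cm, 0)

# Strict: true negatives are only negatives predicted with their exact
# label, 6 + 7 = 13, so the rate is 13 / 17 (about 0.76).
strict_tnr = multiclass_true_negative_rate(cm, 0, strict=True)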