fatf.utils.metrics.metrics.multiclass_negative_predictive_value

fatf.utils.metrics.metrics.multiclass_negative_predictive_value(confusion_matrix: numpy.ndarray, label_index: int, strict: bool = False) → float

Computes the “negative predictive value”, NPV = TN / (TN + FN), for a multi-class confusion matrix.

There are two possible ways of calculating the true negatives (TN):

strict

The true negatives are all non-positive ground truth instances that are classified as their exact (non-positive) class.

relaxed

The true negatives are all non-positive ground truth instances that are classified as any non-positive class.

See the documentation of fatf.utils.metrics.tools.validate_confusion_matrix for all the possible errors and exceptions.
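A minimal usage sketch is given below. The confusion matrix values and the hand-computed numbers are illustrative and follow the strict/relaxed definitions above; the matrix is deliberately symmetric, so the example does not depend on whether its rows or columns correspond to the ground truth.

    import numpy as np

    from fatf.utils.metrics.metrics import multiclass_negative_predictive_value

    # An illustrative 3-class confusion matrix. It is symmetric on purpose,
    # so the arithmetic below holds regardless of whether rows or columns
    # hold the ground truth.
    cm = np.array([[3, 1, 2],
                   [1, 5, 1],
                   [2, 1, 4]])

    # Class 0 is "positive"; classes 1 and 2 are "negative".
    # False negatives (positives classified as any negative): 1 + 2 = 3.

    # Relaxed (default): true negatives are all negatives classified as any
    # negative class: 5 + 1 + 1 + 4 = 11, so NPV = 11 / (11 + 3) ≈ 0.786.
    npv_relaxed = multiclass_negative_predictive_value(cm, 0)

    # Strict: true negatives are only negatives classified as their exact
    # class: 5 + 4 = 9, so NPV = 9 / (9 + 3) = 0.75.
    npv_strict = multiclass_negative_predictive_value(cm, 0, strict=True)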

Parameters
confusion_matrix : numpy.ndarray

A confusion matrix based on which the metric will be computed.

label_index : integer

The index of a label that should be treated as “positive”. All the other labels will be treated as “negative”.

strict : boolean, optional (default=False)

If True, the “true negatives” are calculated “strictly”, otherwise the relaxed (generalised) approach to “true negatives” is used.

Returns
metric : number

The “negative predictive value”.

Raises
TypeError

The strict parameter is not a boolean.