fatf.fairness.models.measures.equal_accuracy

fatf.fairness.models.measures.equal_accuracy(confusion_matrix_list: List[numpy.ndarray], tolerance: float = 0.2, label_index: int = 0) → numpy.ndarray

Checks whether the accuracy difference of all grouping pairs is within the tolerance.

Note

This function expects a list of confusion matrices, one per sub-group of the tested data. To get this list please use either the fatf.utils.metrics.tools.confusion_matrix_per_subgroup or the fatf.utils.metrics.tools.confusion_matrix_per_subgroup_indexed function.

Alternatively, you can call either the fatf.fairness.models.measures.disparate_impact or the fatf.fairness.models.measures.disparate_impact_indexed function, which handles the grouping and calculates the desired group-fairness criterion in one step.
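
For illustration, below is a minimal sketch of calling this function directly with hand-built confusion matrices. The matrices and their values are made up for this example, and the output shown is what the documented behaviour implies for a 0.2 tolerance; in practice the list would come from one of the grouping functions mentioned above.

>>> import numpy as np
>>> from fatf.fairness.models.measures import equal_accuracy
>>> # Hypothetical confusion matrices for two sub-populations.
>>> cm_group_a = np.array([[40, 10],
...                        [5, 45]])
>>> cm_group_b = np.array([[20, 25],
...                        [20, 35]])
>>> # The accuracies are 0.85 and 0.55; they differ by 0.30, which exceeds
>>> # the 0.2 tolerance, so the two groups are flagged as disparate.
>>> equal_accuracy([cm_group_a, cm_group_b], tolerance=0.2)
array([[False,  True],
       [ True, False]])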

Parameters
confusion_matrix_list : List[numpy.ndarray]

A list of confusion matrices, one for each sub-population.

tolerance : number, optional (default=0.2)

A number between 0 and 1 that indicates the maximum amount by which any two accuracies can differ while still being considered “equal”.

label_index : integer, optional (default=0)

The index of the “positive” class in the confusion matrix. (Not required for binary problems.)

Returns
disparity : numpy.ndarray

A square, diagonally symmetric numpy array of boolean values. An entry is True if the accuracy difference between the corresponding pair of sub-populations is above the tolerance level, and False otherwise.
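
For intuition, the pairwise check that this matrix encodes can be reproduced with plain numpy as in the sketch below. This mirrors the documented semantics (overall accuracy taken as the trace of a confusion matrix over its total count), not necessarily the library’s exact implementation; in particular it ignores the label_index parameter, which only matters for multi-class matrices.

>>> import numpy as np
>>> confusion_matrix_list = [np.array([[40, 10], [5, 45]]),
...                          np.array([[20, 25], [20, 35]])]
>>> tolerance = 0.2
>>> # Overall accuracy per sub-population: correct predictions (the trace)
>>> # over the total number of predictions.
>>> accuracies = np.array(
...     [np.trace(cm) / cm.sum() for cm in confusion_matrix_list])
>>> accuracies
array([0.85, 0.55])
>>> # Pairwise check: True wherever two accuracies differ by more than
>>> # the tolerance.
>>> np.abs(accuracies[:, None] - accuracies[None, :]) > tolerance
array([[False,  True],
       [ True, False]])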

Raises
TypeError

The tolerance parameter is not a number.

ValueError

The tolerance parameter is outside of the [0, 1] range.
