fatf.accountability.models.measures.systematic_performance_bias_grid

fatf.accountability.models.measures.systematic_performance_bias_grid(metrics_list: List[float], threshold: float = 0.8) → numpy.ndarray

Checks for pairwise systematic bias in group-wise predictive performance.

If the disparity in performance between a given pair of sub-populations is found to be above the specified threshold, that pair of performance metrics is considered biased.
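For illustration only, a minimal usage sketch (the metric values are hypothetical and the exact disparity measure is defined by the implementation):

    import fatf.accountability.models.measures as fatf_measures

    # Hypothetical predictive performance (e.g. accuracy) for three
    # sub-populations of interest.
    metrics_list = [0.93, 0.67, 0.91]

    # Flag every pair of sub-populations whose performance disparity is
    # above the 0.8 threshold.
    grid_check = fatf_measures.systematic_performance_bias_grid(
        metrics_list, threshold=0.8)

    print(grid_check)  # a 3x3 symmetric boolean array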

Parameters
metrics_list : List[Number]

A list of predictive performance measurements for each sub-population.

threshold : number, optional (default=0.8)

A threshold (between 0 and 1) that defines how large a performance disparity must be to be considered bias.

Returns
grid_check : numpy.ndarray

A square, symmetric, boolean numpy array indicating (with True) which pairs of sub-populations have significantly different predictive performance; see the sketch after the Raises section for one way to inspect this grid.

Raises
TypeError

The metrics_list is not a list. One of the metric values in the metrics_list is not a number. The threshold is not a number.

ValueError

The metrics_list is an empty list. The threshold is outside of the 0 to 1 range.
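As a sketch of how the returned grid might be inspected (the metric values are the same hypothetical ones used above), the indices of flagged pairs can be recovered with numpy, keeping only the upper triangle so that each symmetric pair is reported once:

    import numpy as np
    import fatf.accountability.models.measures as fatf_measures

    # grid_check as computed in the earlier sketch.
    grid_check = fatf_measures.systematic_performance_bias_grid(
        [0.93, 0.67, 0.91], threshold=0.8)

    # The upper triangle (k=1 skips the diagonal) avoids listing each
    # biased pair of sub-populations twice.
    for i, j in np.argwhere(np.triu(grid_check, k=1)):
        print('Sub-populations {} and {} have disparate performance.'.format(i, j))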
