fatf.accountability.models.measures.systematic_performance_bias_grid
fatf.accountability.models.measures.systematic_performance_bias_grid(metrics_list: List[float], threshold: float = 0.8) → numpy.ndarray

Checks for pairwise systematic bias in group-wise predictive performance.
If a disparity in performance between a given pair of sub-populations is found to be above the specified threshold, that pair of performance metrics is considered biased.

Note
This function expects a list of predictive performance values, one per sub-group, computed on the tested data. To get this list, please use either of the following functions:
- fatf.utils.metrics.subgroup_metrics.performance_per_subgroup or fatf.utils.metrics.subgroup_metrics.performance_per_subgroup_indexed; or
- fatf.utils.metrics.tools.confusion_matrix_per_subgroup or fatf.utils.metrics.tools.confusion_matrix_per_subgroup_indexed in conjunction with fatf.utils.metrics.subgroup_metrics.apply_metric_function or fatf.utils.metrics.subgroup_metrics.apply_metric.

- Parameters
- metrics_list : List[Number]
  A list of predictive performance measurements, one for each sub-population.
- threshold : number, optional (default=0.8)
  A threshold (between 0 and 1) that defines performance disparity.
- Returns
- grid_check : numpy.ndarray
  A symmetric and square boolean numpy array that indicates (True) which pairs of sub-populations have significantly different predictive performance.
- Raises
- TypeError
  The metrics_list is not a list. One of the metric values in the metrics_list is not a number. The threshold is not a number.
- ValueError
  The metrics_list is an empty list. The threshold is out of the 0 to 1 range.
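Examples

A minimal usage sketch, not part of the original documentation: the accuracies below are hypothetical per-sub-group values that would, in practice, come from one of the helper functions listed in the note above. Only the call signature and the documented return type are illustrated here; the exact pairwise comparison rule is defined by the library.

>>> import numpy as np
>>> from fatf.accountability.models.measures import (
...     systematic_performance_bias_grid)
>>> # Hypothetical accuracy of one model on three sub-populations.
>>> subgroup_accuracies = [0.95, 0.70, 0.90]
>>> grid = systematic_performance_bias_grid(subgroup_accuracies,
...                                         threshold=0.8)
>>> isinstance(grid, np.ndarray)  # a boolean numpy array is returned
True
>>> grid.shape  # symmetric and square: one row/column per sub-population
(3, 3)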