fatf.fairness.predictions.measures.counterfactual_fairness

fatf.fairness.predictions.measures.counterfactual_fairness(instance: Union[numpy.ndarray, numpy.void], protected_feature_indices: List[Union[str, int]], counterfactual_class: Union[int, str, None] = None, model: object = None, predictive_function: Optional[Callable] = None, dataset: Optional[numpy.ndarray] = None, categorical_indices: Optional[List[Union[str, int]]] = None, numerical_indices: Optional[List[Union[str, int]]] = None, max_counterfactual_length: int = 2, feature_ranges: Optional[Dict[Union[int, str], Union[Tuple[float, float], List[Union[float, str]]]]] = None, distance_functions: Optional[Dict[Union[int, str], Callable]] = None, step_sizes: Optional[Dict[Union[int, str], float]] = None, default_numerical_step_size: float = 1.0, normalise_distance: bool = False) → Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray]

Checks counterfactual fairness of a prediction given a model.

This is an example of the disparate treatment approach to fairness, i.e. individual fairness: it checks whether there are two “similar” individuals, who differ only in their protected attributes, that are treated differently, i.e. receive a different prediction.
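The idea can be illustrated with a minimal sketch (a hypothetical toy model, not part of fatf): flip only the protected feature of an instance and check whether the prediction changes.

```python
import numpy as np

def toy_model(x):
    # Hypothetical classifier that (unfairly) uses feature 0, the protected
    # attribute, in its decision: predicts 1 when feature 0 + feature 1 > 1.
    return int(x[0] + x[1] > 1)

instance = np.array([0, 1])   # protected feature (index 0) is 0 -> prediction 0
counterfactual = instance.copy()
counterfactual[0] = 1         # flip only the protected feature

unfair = toy_model(instance) != toy_model(counterfactual)
print(unfair)  # True: the prediction changed, so the treatment is unfair
```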

This counterfactual fairness function is based on the fatf.transparency.predictions.counterfactuals.CounterfactualExplainer object. For all the errors, warnings and exceptions, please see the documentation of the fatf.transparency.predictions.counterfactuals.CounterfactualExplainer object and its methods.

Parameters
instance, counterfactual_class, and normalise_distance

For the description of these parameters please see the documentation of fatf.transparency.predictions.counterfactuals.CounterfactualExplainer.explain_instance method.

protected_feature_indices, model, predictive_function, dataset, categorical_indices, numerical_indices, max_counterfactual_length, feature_ranges, distance_functions, step_sizes, and default_numerical_step_size

For the description of these parameters, please see the documentation of the fatf.transparency.predictions.counterfactuals.CounterfactualExplainer object. The only difference is that the counterfactual_feature_indices parameter is renamed to protected_feature_indices and is required by this function.
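The expected shapes of the dictionary parameters can be inferred from the type hints in the signature above. The keys and values in the following sketch are made-up examples for illustration, not taken from the fatf documentation:

```python
# Keys are feature indices (int for plain arrays, str for structured arrays).
feature_ranges = {
    0: (0.0, 100.0),       # numerical feature: a (minimum, maximum) tuple
    'gender': ['f', 'm'],  # categorical feature: a list of allowed values
}
step_sizes = {0: 0.5}      # per-feature step used when searching numerical features
distance_functions = {0: lambda a, b: abs(a - b)}  # per-feature distance metric

print(distance_functions[0](3.0, 1.0))  # 2.0
```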

Returns
counterfactuals : numpy.ndarray

A 2-dimensional numpy array with counterfactually unfair data points.

distances : numpy.ndarray

A 1-dimensional numpy array with distances from the input instance to every counterfactual data point.

predictions : numpy.ndarray

A 1-dimensional numpy array with predictions for every counterfactual data point.
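The three returned arrays share their first dimension, so their rows correspond one-to-one and can be processed together. A sketch with made-up values (these numbers are assumptions for illustration, not fatf output):

```python
import numpy as np

counterfactuals = np.array([[1.0, 0.5],
                            [0.0, 2.5]])  # 2-D: one unfair data point per row
distances = np.array([1.0, 1.5])          # 1-D: distance of each row to the instance
predictions = np.array([1, 0])            # 1-D: model prediction for each row

# Since the rows line up, e.g. the nearest counterfactually unfair data point
# can be recovered by indexing with the smallest distance.
nearest = counterfactuals[np.argmin(distances)]
print(nearest)  # [1.  0.5]
```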
