Changelog

0.1.2 (04/09/2022)

The following bugs are fixed in this release:

  • Segmentation:

    • A Segmentation object holds an incorrect segment count after manipulation (#39).

    • Slic segmentation fails quietly by not starting the segment count at 1; see the sketch after this list. (This issue appears to have been fixed in scikit-image 0.19.2 and higher.)

  • Occlusion:

    • occlude_segments_vectorised (Occlusion) returns an occlusion of incorrect shape if the input array is 2D with just one row (#40).
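A minimal sketch of the segment-labelling behaviour behind the Slic fix above, assuming scikit-image's skimage.segmentation.slic is the underlying segmenter (the image and parameter values are illustrative only):

    import numpy as np
    from skimage.segmentation import slic

    image = np.random.rand(32, 32, 3)  # dummy RGB image

    # Explicitly request 1-based segment labels.
    segments = slic(image, n_segments=10, start_label=1)

    # Older scikit-image versions may still return 0-based labels,
    # so shift them defensively before counting segments.
    if segments.min() == 0:
        segments += 1
    segment_count = int(segments.max())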

0.1.1 (10/04/2022)

The following functionality is made available with this release:

Fairness

Accountability

Transparency

Data/Features

Models

  • Submodular Pick

Predictions

  • Image bLIMEy (LIME-equivalent)

This update focuses on surrogate image explainers for predictions of crisp and probabilistic black-box classifiers. In particular, it implements:

  • Segmentation:

  • Occlusion:

  • Sampling:

  • Incremental model processing:

    • Batch-processing and -transforming data for predicting it with a model – batch_data (see the sketch after this list).

  • Surrogate image explainability:

  • Aggregation-based model explainability:
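As a rough illustration of the batch-processing idea above – a generic sketch only, not the fatf batch_data API, whose exact signature is not reproduced here:

    import numpy as np

    def batch_predict(data, predict_fn, batch_size=50, transform_fn=None):
        """Hypothetical helper: predict `data` in fixed-size batches,
        optionally transforming each batch first."""
        predictions = []
        for start in range(0, data.shape[0], batch_size):
            batch = data[start:start + batch_size]
            if transform_fn is not None:
                batch = transform_fn(batch)
            predictions.append(predict_fn(batch))
        return np.concatenate(predictions)

Here predict_fn would typically be a model's predict or predict_proba method, and transform_fn any per-batch transformation such as occluding image segments.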

Additionally, this release moves away from Travis CI in favour of GitHub Actions.

0.1.0 (18/05/2020)

The following functionality is made available with this release:

Fairness

Accountability

Transparency

Data/Features

Models

Predictions

  • Tabular bLIMEy for regression

This is an incremental update focused on surrogate explainers for black-box regression.

This release coincides with the publication of a paper describing FAT Forensics in The Journal of Open Source Software (JOSS).

0.0.2 (04/11/2019)

The following functionality is made available with this release:

Fairness

Accountability

Transparency

Data/Features

Models

Predictions

  • Tabular bLIMEy

Included tutorials:

Included how-to guides:

Included code examples:

bLIMEy

This release adds support for custom surrogate explainers of tabular data called bLIMEy. The two pre-made classes are available as part of the fatf.transparency.predictions.surrogate_explainers module:

Since the latter class implements LIME from components available in FAT Forensics, the LIME wrapper available under fatf.transparency.lime.Lime will be retired in release 0.0.3.

To facilitate building custom tabular surrogate explainers, a range of functionality has been implemented, including: data discretisation, data transformation, data augmentation, data point augmentation, distance kernelisation, scikit-learn model tools, feature selection and surrogate model evaluation.
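As a rough sketch of how some of these components fit together in a LIME-style tabular surrogate – the function below is purely illustrative and uses scikit-learn and NumPy rather than any fatf API:

    import numpy as np
    from sklearn.linear_model import Ridge

    def explain_instance(instance, predict_proba, n_samples=1000, kernel_width=1.0):
        """Hypothetical, simplified surrogate pipeline: augment data around an
        instance, kernelise distances into weights and fit a linear surrogate."""
        # Data augmentation: sample points around the explained instance.
        samples = instance + np.random.normal(
            scale=0.5, size=(n_samples, instance.shape[0]))

        # Distance kernelisation: closer samples receive larger weights.
        distances = np.linalg.norm(samples - instance, axis=1)
        weights = np.exp(-(distances ** 2) / kernel_width ** 2)

        # Surrogate model: a weighted linear fit to the black-box probability
        # of the class predicted for the explained instance.
        target_class = int(np.argmax(predict_proba(instance[np.newaxis, :])))
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(samples, predict_proba(samples)[:, target_class],
                      sample_weight=weights)

        # The surrogate's coefficients act as feature-importance estimates.
        return surrogate.coef_

A full bLIMEy pipeline would additionally discretise or transform the data, select a sparse subset of features and evaluate the fidelity of the surrogate model.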

Other Functionality

Seeding of the random number generators via the fatf.setup_random_seed function can now be done by passing a parameter to this function (in addition to using the FATF_SEED system variable).
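A minimal sketch of the two seeding routes described above – the seed parameter name is an assumption, so consult the fatf API documentation for the exact signature:

    import os
    import fatf

    # Route 1: pass the seed directly to the function (new in this release).
    fatf.setup_random_seed(seed=42)

    # Route 2: set the FATF_SEED system variable before calling the
    # function without arguments.
    os.environ['FATF_SEED'] = '42'
    fatf.setup_random_seed()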

0.0.1 (01/08/2019)

This is the initial release of the package. The following functionality is made available with this release:

Fairness

Accountability

Transparency

Data/Features

  • Systemic Bias (disparate treatment labelling)

  • Sample size disparity (e.g., class imbalance)

  • Sampling bias

  • Data Density Checker

  • Data description

Models

  • Group-based fairness (disparate impact)

  • Systematic performance bias

  • Partial dependence

  • Individual conditional expectation

Predictions

  • Counterfactual fairness (disparate treatment)

  • Counterfactuals

  • Tabular LIME (wrapper)