class fatf.transparency.lime.Lime(data: numpy.ndarray, local_explanation: bool = True, model: object = None, **kwargs)

Wraps LIME package’s lime.lime_tabular.LimeTabularExplainer class.


In contrast to the LIME tabular explainer, this wrapper sets the sample_around_instance parameter to True, meaning that by default it provides a local rather than a global explanation. This can be changed by providing either a sample_around_instance or a local_explanation parameter, with the former taking precedence.

This LIME wrapper can be initialised with any of the lime.lime_tabular.LimeTabularExplainer named parameters. Additionally, one can also pass in any of the lime.lime_tabular.LimeTabularExplainer.explain_instance method named parameters, which will be saved within the object and used when the local explain_instance method is called. In case the same parameters are provided both when initialising the object and when explaining an instance the values passed to the latter take precedence.
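The precedence rule above amounts to a dictionary merge where call-time values win. A minimal sketch (illustrative only, not the fatf implementation; num_samples and top_labels are example LIME parameter names):

```python
# Illustrative sketch of the parameter precedence described above:
# values supplied when explaining an instance override values saved
# when the object was initialised.

def merge_explain_params(init_params, call_params):
    """Merge saved and call-time parameters; call-time values win."""
    merged = dict(init_params)  # parameters saved at initialisation
    merged.update(call_params)  # call-time parameters take precedence
    return merged

# num_samples is overridden at call time; top_labels is kept.
init_time = {'num_samples': 5000, 'top_labels': 2}
call_time = {'num_samples': 1000}
merged = merge_explain_params(init_time, call_time)
print(merged)  # {'num_samples': 1000, 'top_labels': 2}
```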

In addition to all the named parameters, one may specify a model to be used with the explainer (the model requires a predict method for the regressor mode and a predict_proba method for the classification mode) or a predictive function (predict_fn), which is accessed directly by the explainer. If both are given, the latter takes precedence.

For all of the available parameters please consult the LIME API documentation.


Since LIME does not support structured arrays, the predictive function and the model have to operate on unstructured types by default. If a structured data array or a structured data point is passed in, it is converted to an unstructured type, which is then used inside LIME. If your data array is structured, consider using the fatf.utils.array.tools.as_unstructured function to convert it to an unstructured array before training a model.
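For example, a purely numerical structured array can be flattened with numpy's own helper (shown here instead of fatf.utils.array.tools.as_unstructured so the snippet is self-contained; the fatf function serves the same purpose):

```python
import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured

# A structured array with two numerical fields...
structured = np.array([(1.0, 2.0), (3.0, 4.0)],
                      dtype=[('a', '<f8'), ('b', '<f8')])

# ...becomes a plain 2-dimensional array that LIME can consume.
unstructured = structured_to_unstructured(structured)
print(unstructured.shape)  # (2, 2)
```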

This class logs a warning if the model does not have a predict_proba method.


data : numpy.ndarray

A 2-dimensional numerical numpy array with a dataset to be used.

local_explanation : boolean, optional (default=True)

If True, the LIME explainer will sample data from the neighbourhood of the selected instance (a local explanation); otherwise, the data will be sampled from the whole data distribution (a global explanation). This parameter controls LIME's sample_around_instance parameter. If both local_explanation and sample_around_instance are provided, the latter takes precedence.

model : object, optional (default=None)

An object that contains a predict method (outputs predictions) and/or a predict_proba method (outputs probability vectors corresponding to the probability of an instance belonging to each class). The former is used when LIME operates in a regressor mode, while the latter is used in a LIME classification mode.

predict_fn : function, optional (default=None)

As an alternative to a whole model, LIME can use a python function that outputs either regression results or classification probabilities. In case both model and predict_fn are provided, the latter takes precedence.


**kwargs

LIME optional parameters.


A list of names of the LimeTabularExplainer named parameters.


A list of names of the LimeTabularExplainer.explain_instance function named parameters.


An initialised LimeTabularExplainer object.


LIME mode of operation; 'classification' or 'regression'.


A model to be used for LIME explanations.


An indicator of whether the model is a probabilistic model, i.e. has a predict_proba method.

explain_instance_params : Dictionary[string, Any]

A dictionary that holds named parameters for the future calls of the local explain_instance method.


One of the named parameters is invalid for the LIME tabular explainer.


The input data is not a 2-dimensional array. The categorical indices list/array (categorical_features) is not 1-dimensional.


The model does not have fit and predict methods.


The categorical features index parameter (categorical_features) is none of the following: a list, a numpy array or None. The predict_fn parameter is not a callable object, i.e. a function.


The input data is not purely numerical. For a structured data array, some of the categorical feature indices (categorical_features) are not strings or are not valid indices. The mode parameter is neither 'classification' nor 'regression'.


This class will be deprecated in FAT Forensics version 0.0.3.


The user is warned when both a model and a predict_fn are provided. In such a case the predict_fn takes precedence.


explain_instance(instance: numpy.ndarray, **kwargs) → Union[Dict[str, Tuple[str, float]], List[Tuple[str, float]]]

Explains an instance with the LIME tabular explainer.

This method wraps around the explain_instance method of the LIME tabular explainer object.


In contrast to the LIME tabular explainer, this wrapper produces explanations for all of the classes for a classification task by default.

If any of the named parameters for this function were specified when initialising this object, they will be used unless they are also defined when calling this method, in which case the latter take precedence.

If a class-wide model, a class-wide prediction function and a local prediction function (given as a named parameter to this method) are all specified, they are used in the following order of precedence:

  • local prediction function,

  • global prediction function, and finally

  • the model.
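The order above can be sketched as follows; the helper and class names are illustrative, not the fatf internals:

```python
# A sketch of the documented precedence order for choosing a predictive
# function; names here are hypothetical, not fatf code.

def resolve_predictor(model=None, global_predict_fn=None,
                      local_predict_fn=None, mode='classification'):
    """Return the callable LIME should use, honouring the precedence order."""
    if local_predict_fn is not None:   # 1. function passed to explain_instance
        return local_predict_fn
    if global_predict_fn is not None:  # 2. predict_fn passed at initialisation
        return global_predict_fn
    if model is not None:              # 3. fall back to the model object:
        # classification uses predict_proba, regression uses predict
        return model.predict_proba if mode == 'classification' else model.predict
    raise RuntimeError('A predictive function is not available.')

class DummyModel:  # stands in for any object with both methods
    def predict(self, data):
        return 'predict'
    def predict_proba(self, data):
        return 'predict_proba'

chosen = resolve_predictor(model=DummyModel())
print(chosen(None))  # predict_proba -- classification uses predict_proba
```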

Based on whether the task at hand is classification or regression, either the predict (regression) or the predict_proba (classification) method of the model is used.


instance : numpy.ndarray

A 1-dimensional data point (numpy array) to be explained.


**kwargs

LIME tabular explainer’s explain_instance optional parameters.

explanation : Dictionary[string, Tuple[string, float]] or List[Tuple[string, float]]

For classification a dictionary where the keys correspond to class names and the values are tuples (string and float), which represent an explanation in terms of one of the features and the importance of this explanation. For regression a list of tuples (string and float) with the same meaning.
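An illustrative example of the two return shapes (the class names, feature conditions and importance values below are made up):

```python
# Classification: a dictionary mapping class names to
# (feature condition, importance) tuples, as described above.
classification_explanation = {
    'class_0': ('feature_1 <= 0.50', 0.42),
    'class_1': ('feature_1 <= 0.50', -0.42),
}

# Regression: a flat list of (feature condition, importance) tuples.
regression_explanation = [
    ('feature_1 <= 0.50', 0.42),
    ('feature_2 > 1.00', -0.17),
]
```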


One of the named parameters is invalid for the explain_instance method of the LIME tabular explainer.


The input instance is not a 1-dimensional numpy array.


A predictive function is not available (neither as a model attribute of this class, nor as a predict_fn parameter).


The input instance is not purely numerical.