Low-level building blocks (visualime.lime)

visualime.lime.create_segments(image: ndarray, segmentation_method: Literal['felzenszwalb', 'slic', 'quickshift', 'watershed'], segmentation_settings: Dict[str, Any] | None = None) ndarray[source]

Divide the image into segments (superpixels).

Proper segmentation of the images is key to producing meaningful explanations with LIME. Which method and settings are appropriate is highly use-case specific.

For an introduction to image segmentation and a comparison of the different methods, see the segmentation tutorial in the scikit-image documentation.

Parameters:
image : np.ndarray

The image to segment as a three-dimensional array of shape (image_width, image_height, 3) where the last dimension contains the RGB channels.

segmentation_method : str

The method used to segment the image into superpixels. Available options are “felzenszwalb”, “slic”, “quickshift”, and “watershed”.

See the scikit-image documentation for details.

segmentation_settings : dict, optional

Keyword arguments to pass to the segmentation method.

See the scikit-image documentation for details.

Returns:
np.ndarray

An array of shape (image_width, image_height) where each entry is an integer that corresponds to the segment number.

Segment numbers start at 0 and are consecutive. The number of segments is therefore the maximum value in the array plus 1.
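A small hand-made mask illustrates the returned format and the segment count described above (the mask values here are made up for illustration, not produced by a real segmentation):

```python
import numpy as np

# A toy segment mask in the format create_segments() returns:
# one integer segment number per pixel, starting at 0 and consecutive.
segment_mask = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [3, 3, 2, 1],
])

# The number of segments is the maximum value plus 1.
num_of_segments = int(segment_mask.max()) + 1
print(num_of_segments)  # 4
```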

visualime.lime.generate_images(image: ndarray, segment_mask: ndarray, samples: ndarray, background: ndarray | int | float | None = None) ndarray[source]

Generate images from a list of samples.

Parameters:
image : np.ndarray

The image to explain: An array of shape (image_width, image_height, 3).

segment_mask : np.ndarray

The mask generated by visualime.lime.create_segments(): An array of shape (image_width, image_height).

samples : np.ndarray

The samples generated by visualime.lime.generate_samples(): An array of shape (num_of_samples, num_of_segments).

background : {np.ndarray, int, float}, optional

The background to replace the excluded segments with. Can be a single number or an array of the same shape as the image. If not given, excluded segments are replaced with 0.

Returns:
np.ndarray

An array of shape (num_of_samples, image_width, image_height, 3).
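The masking semantics can be sketched in plain NumPy. This is a simplified reimplementation for illustration, not the library's actual code:

```python
import numpy as np

def generate_images_sketch(image, segment_mask, samples, background=0):
    """Replace the segments excluded in each sample with the background."""
    out = np.empty((samples.shape[0],) + image.shape, dtype=image.dtype)
    for i, sample in enumerate(samples):
        # keep is True for every pixel whose segment is included in this sample
        keep = sample.astype(bool)[segment_mask]
        out[i] = np.where(keep[..., None], image, background)
    return out

image = np.full((2, 2, 3), 255, dtype=np.uint8)
segment_mask = np.array([[0, 0], [1, 1]])
samples = np.array([[1, 0]])  # keep segment 0, drop segment 1

result = generate_images_sketch(image, segment_mask, samples)
print(result.shape)  # (1, 2, 2, 3)
```

The top row (segment 0) keeps its original pixel values, while the bottom row (segment 1) is replaced with the background value 0.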

visualime.lime.generate_samples(segment_mask: ndarray, num_of_samples: int = 64, p: float = 0.5) ndarray[source]

Generate samples by randomly selecting a subset of the segments.

Parameters:
segment_mask : np.ndarray

The mask generated by visualime.lime.create_segments(): An array of shape (image_width, image_height).

num_of_samples : int

The number of samples to generate.

p : float

The probability for each segment to be removed from a sample.

Returns:
np.ndarray

A two-dimensional array of shape (num_of_samples, num_of_segments).
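Conceptually, each sample is a row of independent Bernoulli draws over the segments. A minimal sketch of that idea (seeded here for reproducibility, unlike the library function):

```python
import numpy as np

def generate_samples_sketch(segment_mask, num_of_samples=64, p=0.5):
    """Draw binary samples: 1 keeps a segment, 0 removes it (removal prob. p)."""
    num_of_segments = int(segment_mask.max()) + 1
    rng = np.random.default_rng(0)  # seeded for reproducibility
    return (rng.random((num_of_samples, num_of_segments)) > p).astype(int)

segment_mask = np.array([[0, 0, 1], [2, 2, 1]])
samples = generate_samples_sketch(segment_mask, num_of_samples=8, p=0.5)
print(samples.shape)  # (8, 3)
```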

visualime.lime.compute_distances(image: ndarray, images: ndarray, norm: Literal['fro', 'nuc'] | int | None = None, select: str = 'sum') ndarray[source]

Calculate the distances between the original image and the generated images.

Parameters:
image : np.ndarray

The original image.

images : np.ndarray

The sample images.

norm : {non-zero int, np.inf, -np.inf, "fro", "nuc"}, optional

The norm used to compute the distance between two images. It is calculated separately for each color channel of the difference between the two images.

Defaults to the Frobenius norm if not given.

For all available options, see the documentation for numpy.linalg.norm.

select : {“sum”, “max”}, default “sum”

The method used to combine the channel-wise distances into the final distance.

There are two options:

  • “sum” (the default): Sum the channel-wise distances

  • “max”: Take the maximum of the channel-wise distances

Returns:
np.ndarray

Array of length images.shape[0] containing the distances of each image to the original image.
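The default behavior (Frobenius norm per channel, combined by summation) can be sketched as follows; this is an illustrative simplification, not the library's implementation:

```python
import numpy as np

def compute_distances_sketch(image, images, select="sum"):
    """Frobenius distance per color channel, combined by sum or max."""
    diff = images.astype(float) - image.astype(float)  # (n, w, h, 3)
    # norm over the two spatial axes, computed per sample and per channel
    channel_dist = np.linalg.norm(diff, ord="fro", axis=(1, 2))  # (n, 3)
    return channel_dist.sum(axis=1) if select == "sum" else channel_dist.max(axis=1)

image = np.ones((4, 4, 3))
images = np.stack([image, np.zeros((4, 4, 3))])  # an identical copy and an all-zero image
print(compute_distances_sketch(image, images))  # [ 0. 12.]
```

The identical copy has distance 0; the all-zero image differs by 1 at every pixel, giving a per-channel Frobenius norm of sqrt(16) = 4 and a summed distance of 12.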

visualime.lime.predict_images(images: ndarray, predict_fn: Callable[[ndarray], ndarray]) ndarray[source]

Obtain model predictions for all images.

Parameters:
images : np.ndarray

Images as an array of shape (num_of_samples, image_width, image_height, 3).

predict_fn : callable

A function that takes an input of shape (num_of_samples, image_width, image_height, 3) and returns an array of shape (num_of_samples, num_of_classes), where num_of_classes is the number of output classes (labels) assigned by the model.

Commonly, predict_fn() feeds the images to the image classification model to be explained and takes care of any preprocessing and batching. When building explanation pipelines, it is generally preferable to replace predict_images() entirely.

Returns:
np.ndarray

An array of shape (num_of_samples, num_of_classes).
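The shape contract for predict_fn can be illustrated with a dummy classifier; the brightness-based "model" below is made up purely for illustration:

```python
import numpy as np

def predict_fn(images):
    """Dummy model: two-class scores derived from mean image brightness."""
    brightness = images.mean(axis=(1, 2, 3)) / 255.0  # one score per image
    return np.stack([brightness, 1.0 - brightness], axis=1)  # (n, num_of_classes)

def predict_images_sketch(images, predict_fn):
    # trivial sketch: in practice, predict_fn also handles any
    # preprocessing and batching the real model requires
    return predict_fn(images)

images = np.zeros((5, 8, 8, 3), dtype=np.uint8)
predictions = predict_images_sketch(images, predict_fn)
print(predictions.shape)  # (5, 2)
```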

visualime.lime.weigh_segments(samples: ndarray, predictions: ndarray, label_idx: int, model_type: Literal['linear_regression', 'lasso', 'ridge', 'bayesian_ridge', 'bayesian_ridge_fixed_lambda', 'bayesian_ridge_fixed_alpha_lambda'] = 'bayesian_ridge', model_params: Dict[str, Any] | None = None, distances: ndarray | None = None, kernel: Callable[[ndarray], ndarray] = <function exponential_kernel>, segment_subset: List[int] | None = None) ndarray[source]

Generate the list of coefficients to weigh segments.

Parameters:
samples : np.ndarray

The samples generated by visualime.lime.generate_samples(): An array of shape (num_of_samples, num_of_segments).

predictions : np.ndarray

The predictions produced by visualime.lime.predict_images(): An array of shape (num_of_samples, num_of_classes).

label_idx : int

The index of the label to explain in the output of predict_fn(). Can be the class predicted by the model, or a different class.

model_type : str

The type of linear model to fit. Available options are: “linear_regression”, “lasso”, “ridge”, “bayesian_ridge”, “bayesian_ridge_fixed_lambda”, and “bayesian_ridge_fixed_alpha_lambda”.

See the scikit-learn documentation for details on each of the models.

model_params : dict, optional

Parameters to pass to the model during instantiation.

See the scikit-learn documentation for details on each of the models.

distances : np.ndarray, optional

The distances between the generated images and the original image, used as sample weights when fitting the linear model.

If not given, the cosine distance between a sample and the original image is used. Note that this is only a rough approximation and not a reliable measure if the image contains a lot of variation or if the segments differ greatly in size.

kernel : callable, default exponential_kernel

Kernel function to weigh the samples based on the distances.

Operates on the distances and returns an array of the same shape: kernel(distances: np.ndarray) -> np.ndarray

Defaults to an exponential kernel with width 0.25, as in the original LIME implementation.

segment_subset : list of int, optional

List of the indices of the segments to consider when fitting the linear model. Note that the resulting array will nevertheless have length num_of_segments. The weights of segments not in segment_subset will be 0.0.

If not given, all segments will be used.

Returns:
np.ndarray

Array of length num_of_segments where each entry corresponds to the segment’s coefficient in the fitted linear model.
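The core of this step is a distance-weighted linear fit over the binary samples. The sketch below uses plain weighted least squares without an intercept instead of the configurable scikit-learn models, and a simplified exponential kernel; it illustrates the idea, not the library's implementation:

```python
import numpy as np

def weigh_segments_sketch(samples, predictions, label_idx,
                          distances=None, kernel_width=0.25):
    """Distance-weighted least squares over binary samples (simplified sketch)."""
    y = predictions[:, label_idx]
    if distances is None:
        distances = np.zeros(samples.shape[0])
    # exponential kernel: samples closer to the original image get higher weight
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    sw = np.sqrt(weights)
    # solve the weighted problem by rescaling rows, then ordinary lstsq
    coef, *_ = np.linalg.lstsq(samples * sw[:, None], y * sw, rcond=None)
    return coef

samples = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)
# predictions constructed to be exactly linear in the segments:
# score for class 0 = 2 * segment_0 + 3 * segment_1
predictions = np.array([[2.0], [3.0], [5.0]])
print(weigh_segments_sketch(samples, predictions, label_idx=0))  # [2. 3.]
```

With predictions that are exactly linear in the segments, the fit recovers the coefficients 2 and 3, which would then serve as the segment weights.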