wildboar.explain.counterfactual#
Counterfactual explanations.
The wildboar.explain.counterfactual module includes numerous methods for generating counterfactual explanations.
Classes#
- KNeighborsCounterfactual: Fit a counterfactual explainer to a k-nearest neighbors classifier.
- NativeGuideCounterfactual: Native guide counterfactual explanations.
- NiceCounterfactual: An algorithm designed to generate counterfactual explanations.
- PrototypeCounterfactual: Model agnostic approach for constructing counterfactual explanations.
- ShapeletForestCounterfactual: Counterfactual explanations for shapelet forest classifiers.
Functions#
- counterfactuals: Compute a single counterfactual example for each sample.
- proximity: Compute the proximity of the counterfactuals.
- class wildboar.explain.counterfactual.KNeighborsCounterfactual(method='auto', random_state=None)[source]#
Fit a counterfactual explainer to a k-nearest neighbors classifier.
- Parameters:
- method{“auto”, “mean”, “medoid”}, optional
The method for generating counterfactuals. If ‘auto’, counterfactuals are generated using k-means if possible and k-medoids otherwise. If ‘mean’, counterfactuals are always generated using k-means, which fails if the estimator is fitted with a metric other than ‘euclidean’, ‘dtw’ or ‘wdtw’. If ‘medoid’, counterfactuals are generated using k-medoids.
Added in version 1.2.
- random_stateint or RandomState, optional
If int, random_state is the seed used by the random number generator.
If RandomState instance, random_state is the random number generator.
If None, the random number generator is the RandomState instance used by np.random.
- Attributes:
- explainer_dict
The explainer for each label
References
- Karlsson, I., Rebane, J., Papapetrou, P., & Gionis, A. (2020).
Locally and globally explainable time series tweaking. Knowledge and Information Systems, 62(5), 1671-1700.
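A minimal usage sketch, not taken from the official documentation: it assumes the GunPoint dataset and the fit/explain pattern used by the other explainers in this module, and that only the fitted classifier needs to be passed to fit because a k-nearest neighbors model stores its training data.
from wildboar.datasets import load_gun_point
from wildboar.distance import KNeighborsClassifier
from wildboar.explain.counterfactual import KNeighborsCounterfactual

X_train, X_test, y_train, y_test = load_gun_point(merge_train_test=False)

# Fit the k-nearest neighbors classifier that we want to explain.
clf = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
clf.fit(X_train, y_train)

# Fit the explainer to the classifier; we assume the training data is
# recovered from the fitted estimator itself.
explainer = KNeighborsCounterfactual(method="auto", random_state=1)
explainer.fit(clf)

# Ask for counterfactuals of the first two test samples with desired label 1.
x_counterfactual = explainer.explain(X_test[:2], [1, 1])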
- fit_explain(estimator, x=None, y=None, **kwargs)[source]#
Fit and return the explanation.
- Parameters:
- estimatorEstimator
The estimator to explain.
- xtime-series, optional
The input time series.
- yarray-like of shape (n_samples, ), optional
The labels.
- **kwargsdict, optional
Optional extra arguments.
- Returns:
- ndarray
The explanation.
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- plot(x=None, y=None, ax=None)[source]#
Plot the explanation.
- Parameters:
- xarray-like, optional
Optional input samples.
- yarray-like, optional
Optional target labels.
- axAxes, optional
Optional axes to plot to.
- Returns:
- Axes
The axes object.
- score(x, y)[source]#
Score the counterfactual explainer in terms of closeness of fit.
- Parameters:
- xarray-like of shape (n_samples, n_timestep)
The samples.
- yarray-like of shape (n_samples, )
The desired counterfactual label.
- Returns:
- float
The proximity.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as
Pipeline
). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
- class wildboar.explain.counterfactual.NativeGuideCounterfactual(*, metric='euclidean', metric_params=None, importance='interval', target='predict', window=2, max_iter=100, random_state=None, n_jobs=None)[source]#
Native guide counterfactual explanations.
Counterfactual explanations are constructed by replacing parts of the explained sample with the most important region from the closest sample of the desired class.
- Parameters:
- metricstr or callable, optional
The distance metric. See _METRICS.keys() for a list of supported metrics.
- metric_paramsdict, optional
Parameters to the metric.
Read more about the parameters in the User guide.
- importance{“interval”}, array-like or callable, optional
The importance assigned to the time steps.
If “interval”, use IntervalImportance to assign the importance of the time steps.
If array-like, an array of shape (n_timestep, ).
If callable, a function f(x, y), where x and y are the time series and label being explained. The return value is a ndarray of shape (n_timestep, ).
- target{“predict”} or float, optional
The target evaluation of counterfactuals:
if ‘predict’, the counterfactual prediction must return the correct label.
if float, the counterfactual prediction probability must exceed the target value.
- windowint, optional
The window parameter. Only used if importance=”interval”.
- max_iterint, optional
The maximum number of iterations.
- random_stateRandomState or int, optional
Pseudo-random number for consistency between different runs.
- n_jobsint, optional
The number of parallel jobs.
- Attributes:
- target_TargetEvaluator
The target evaluator.
- importance_Importance
The importance.
- estimator_Estimator
The estimator.
- classes_ndarray
The classes known to the explainer.
Notes
The current implementation uses IntervalImportance as the default method for assigning importances and selecting the time points at which to grow the replacement. Unfortunately, this method assigns the same score to every sample, that is, it provides a model-level interpretation of the importance of each time step. To exactly replicate the work by Delaney (2021), you have to supply your own importance function (see the sketch after the example below); the original authors recommend GradCAM.
References
- Delaney, E., Greene, D., Keane, M.T. (2021)
Instance-Based Counterfactual Explanations for Time Series Classification. Case-Based Reasoning Research and Development, vol. 12877, pp. 32–47. Springer International Publishing, Cham.
Examples
>>> from wildboar.datasets import load_gun_point
>>> from wildboar.distance import KNeighborsClassifier
>>> from wildboar.explain.counterfactual import NativeGuideCounterfactual
>>> X_train, X_test, y_train, y_test = load_gun_point(merge_train_test=False)
>>> clf = KNeighborsClassifier(n_neighbors=1)
>>> clf.fit(X_train, y_train)
>>> ngc = NativeGuideCounterfactual(window=1, target=0.51)
>>> ngc.fit(clf, X_train, y_train)
>>> y_test[1:3]
array([2., 2.], dtype=float32)
>>> cf = ngc.explain(X_test[1:3], [1, 1])  # Desired label is [1, 1]
>>> clf.predict(cf)
array([1., 1.], dtype=float32)
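As noted above, replicating Delaney (2021) requires a sample-specific importance. A hypothetical sketch of such a callable: the function name and the deviation-based scoring are illustrative only, chosen to show the documented f(x, y) signature and the (n_timestep, ) return shape; a real importance measure such as GradCAM scores would be substituted.
import numpy as np
from wildboar.explain.counterfactual import NativeGuideCounterfactual

def deviation_importance(x, y):
    # Hypothetical per-sample importance: score each time step by its
    # absolute deviation from the series mean, returning shape (n_timestep, ).
    return np.abs(x - x.mean())

ngc = NativeGuideCounterfactual(importance=deviation_importance, target=0.51)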
- fit_explain(estimator, x=None, y=None, **kwargs)[source]#
Fit and return the explanation.
- Parameters:
- estimatorEstimator
The estimator to explain.
- xtime-series, optional
The input time series.
- yarray-like of shape (n_samples, ), optional
The labels.
- **kwargsdict, optional
Optional extra arguments.
- Returns:
- ndarray
The explanation.
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- plot(x=None, y=None, ax=None)[source]#
Plot the explanation.
- Parameters:
- xarray-like, optional
Optional input samples.
- yarray-like, optional
Optional target labels.
- axAxes, optional
Optional axes to plot to.
- Returns:
- Axes
The axes object.
- score(x, y)[source]#
Score the counterfactual explainer in terms of closeness of fit.
- Parameters:
- xarray-like of shape (n_samples, n_timestep)
The samples.
- yarray-like of shape (n_samples, )
The desired counterfactual label.
- Returns:
- float
The proximity.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as
Pipeline
). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
- class wildboar.explain.counterfactual.NiceCounterfactual(n_neighbors=1, *, reward='compactness', metric='euclidean', metric_params=None, verbose=0)[source]#
An algorithm designed to generate counterfactual explanations.
As described by Brughmans (2024), it is designed for tabular data, addressing key requirements for real-life deployments:
- Provides explanations for all predictions.
- Compatible with any classification model, including non-differentiable ones.
- Efficient in runtime.
- Offers multiple counterfactual explanations with varying characteristics.
The algorithm leverages information from a nearest unlike neighbor, iteratively incorporating time steps from this neighbor into the instance being explained.
- Parameters:
- n_neighborsint, optional
The number of neighbors.
- rewardstr or callable, optional
The reward function to optimize the counterfactual explanations. Can be a string specifying one of the predefined reward functions or a custom callable. The callable is a function f(original, current, current_pred, candidates, candidate_preds) that returns a ndarray of scores for each candidate (a sketch of such a callable follows this parameter list).
- metricstr, optional
The distance metric to use for calculating proximity between instances. Must be one of the supported metrics.
- metric_paramsdict, optional
Additional parameters to pass to the distance metric function.
- verboseint, optional
Controls the amount of feedback: no feedback (0) or some feedback (1).
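A hypothetical sketch of a custom reward callable with the signature described above. The function name and the array shapes (original as (n_timestep, ), candidates as (n_candidates, n_timestep)) are assumptions made for illustration; the reward simply favors candidates that stay close to the original sample.
import numpy as np
from wildboar.explain.counterfactual import NiceCounterfactual

def closeness_reward(original, current, current_pred, candidates, candidate_preds):
    # Hypothetical reward: prefer candidates with a small mean absolute
    # change relative to the original sample (one score per candidate).
    return -np.abs(candidates - original).mean(axis=-1)

explainer = NiceCounterfactual(n_neighbors=1, reward=closeness_reward)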
- explain(X, y)[source]#
Explain the predictions for the given data.
- Parameters:
- Xarray-like
The input data for which explanations are to be generated.
- yarray-like
The target values corresponding to the input data.
- fit(estimator, X, y)[source]#
Fit the counterfactual explanation model.
- Parameters:
- estimatorobject
The estimator object to be validated and used for fitting.
- Xarray-like of shape (n_samples, n_features)
The input samples.
- yarray-like of shape (n_samples,)
The target values.
- Returns:
- self
Returns the instance itself.
- fit_explain(estimator, x=None, y=None, **kwargs)[source]#
Fit and return the explanation.
- Parameters:
- estimatorEstimator
The estimator to explain.
- xtime-series, optional
The input time series.
- yarray-like of shape (n_samples, ), optional
The labels.
- **kwargsdict, optional
Optional extra arguments.
- Returns:
- ndarray
The explanation.
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- plot(x=None, y=None, ax=None)[source]#
Plot the explanation.
- Parameters:
- xarray-like, optional
Optional input samples.
- yarray-like, optional
Optional target labels.
- axAxes, optional
Optional axes to plot to.
- Returns:
- Axes
The axes object.
- score(x, y)[source]#
Score the counterfactual explainer in terms of closeness of fit.
- Parameters:
- xarray-like of shape (n_samples, n_timestep)
The samples.
- yarray-like of shape (n_samples, )
The desired counterfactual label.
- Returns:
- float
The proximity.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as
Pipeline
). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
- class wildboar.explain.counterfactual.PrototypeCounterfactual(metric='euclidean', *, r=1.0, g=0.05, max_iter=100, step_size=0.1, n_prototypes='auto', target='predict', method='sample', min_shapelet_size=0.0, max_shapelet_size=1.0, random_state=None, verbose=False)[source]#
Model agnostic approach for constructing counterfactual explanations.
- Attributes:
- estimator_object
The estimator for which counterfactuals are computed
- classes_ndarray
The classes
- partitions_dict
Dictionary of classes and PrototypeSampler
- target_TargetEvaluator
The target evaluator
References
- Samsten, Isak (2020).
Model agnostic time series counterfactuals
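A minimal usage sketch, not taken from the official documentation: it assumes the fit(estimator, x, y)/explain(x, y) pattern used elsewhere in this module and that training data must be supplied so the explainer can sample prototypes from it; the classifier choice is purely illustrative since the method is model agnostic.
from wildboar.datasets import load_gun_point
from wildboar.ensemble import ShapeletForestClassifier
from wildboar.explain.counterfactual import PrototypeCounterfactual

X_train, X_test, y_train, y_test = load_gun_point(merge_train_test=False)

# Any classifier can be explained; a shapelet forest is used for illustration.
clf = ShapeletForestClassifier(random_state=1)
clf.fit(X_train, y_train)

pc = PrototypeCounterfactual(metric="euclidean", method="sample", random_state=1)
pc.fit(clf, X_train, y_train)  # training data assumed required for sampling prototypes
x_counterfactual = pc.explain(X_test[:2], [1, 1])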
- fit_explain(estimator, x=None, y=None, **kwargs)[source]#
Fit and return the explanation.
- Parameters:
- estimatorEstimator
The estimator to explain.
- xtime-series, optional
The input time series.
- yarray-like of shape (n_samples, ), optional
The labels.
- **kwargsdict, optional
Optional extra arguments.
- Returns:
- ndarray
The explanation.
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- plot(x=None, y=None, ax=None)[source]#
Plot the explanation.
- Parameters:
- xarray-like, optional
Optional input samples.
- yarray-like, optional
Optional target labels.
- axAxes, optional
Optional axes to plot to.
- Returns:
- Axes
The axes object.
- score(x, y)[source]#
Score the counterfactual explainer in terms of closeness of fit.
- Parameters:
- xarray-like of shape (n_samples, n_timestep)
The samples.
- yarray-like of shape (n_samples, )
The desired counterfactual label.
- Returns:
- float
The proximity.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as
Pipeline
). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
- class wildboar.explain.counterfactual.ShapeletForestCounterfactual(*, cost='euclidean', aggregation='mean', epsilon=1.0, batch_size=0.1, max_paths=1.0, verbose=False, random_state=None)[source]#
Counterfactual explanations for shapelet forest classifiers.
- Parameters:
- cost{“euclidean”, “cosine”, “manhattan”} or callable, optional
The cost function to determine the goodness of counterfactual.
- aggregationcallable, optional
The aggregation function for the cost of multivariate counterfactuals.
- epsilonfloat, optional
Control the degree of change from the decision threshold.
- batch_sizefloat, optional
Batch size when evaluating the cost and predictions of counterfactual candidates. The default setting is to evaluate all counterfactual samples.
Changed in version 1.1: The default value changed to 0.1
- max_pathsfloat, optional
Sample a fraction of the positive prediction paths.
Added in version 1.1: Add support for subsampling prediction paths.
- verbosebool, optional
Print information to stdout during execution.
- random_stateRandomState or int, optional
Pseudo-random number for consistency between different runs.
- Attributes:
- paths_dict
A dictionary of prediction paths per label
Warning
Only shapelet forests fitted with the Euclidean distance are supported, i.e., metric="euclidean".
Notes
This implementation only supports the reversible algorithm described by Karlsson (2020)
References
- Karlsson, I., Rebane, J., Papapetrou, P., & Gionis, A. (2020).
Locally and globally explainable time series tweaking. Knowledge and Information Systems, 62(5), 1671-1700.
- Karlsson, I., Rebane, J., Papapetrou, P., & Gionis, A. (2018).
Explainable time series tweaking via irreversible and reversible temporal transformations. In 2018 IEEE International Conference on Data Mining (ICDM)
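A minimal usage sketch, not taken from the official documentation: it assumes the fit(estimator)/explain(x, y) pattern used elsewhere in this module and that only the fitted forest is needed by fit, since the prediction paths are extracted from it; per the warning above, the forest is fitted with the Euclidean metric.
from wildboar.datasets import load_gun_point
from wildboar.ensemble import ShapeletForestClassifier
from wildboar.explain.counterfactual import ShapeletForestCounterfactual

X_train, X_test, y_train, y_test = load_gun_point(merge_train_test=False)

# The forest must use metric="euclidean" (see the warning above).
clf = ShapeletForestClassifier(metric="euclidean", random_state=1)
clf.fit(X_train, y_train)

sfc = ShapeletForestCounterfactual(random_state=1)
sfc.fit(clf)  # assumed: the prediction paths are taken from the fitted forest
x_counterfactual = sfc.explain(X_test[:2], [1, 1])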
- fit_explain(estimator, x=None, y=None, **kwargs)[source]#
Fit and return the explanation.
- Parameters:
- estimatorEstimator
The estimator to explain.
- xtime-series, optional
The input time series.
- yarray-like of shape (n_samples, ), optional
The labels.
- **kwargsdict, optional
Optional extra arguments.
- Returns:
- ndarray
The explanation.
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- plot(x=None, y=None, ax=None)[source]#
Plot the explanation.
- Parameters:
- xarray-like, optional
Optional input samples.
- yarray-like, optional
Optional target labels.
- axAxes, optional
Optional axes to plot to.
- Returns:
- Axes
The axes object.
- score(x, y)[source]#
Score the counterfactual explainer in terms of closeness of fit.
- Parameters:
- xarray-like of shape (n_samples, n_timestep)
The samples.
- yarray-like of shape (n_samples, )
The desired counterfactual label.
- Returns:
- float
The proximity.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as
Pipeline
). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
- wildboar.explain.counterfactual.counterfactuals(estimator, x, y, *, train_x=None, train_y=None, method='best', proximity=None, random_state=None, method_args=None)[source]#
Compute a single counterfactual example for each sample.
- Parameters:
- estimatorobject
The estimator used to compute the counterfactual example.
- xarray-like of shape (n_samples, n_timestep)
The data samples to fit counterfactuals to.
- yarray-like broadcast to shape (n_samples,)
The desired label of the counterfactual.
- train_xarray-like of shape (n_samples, n_timestep), optional
Training samples if required by the explainer.
- train_yarray-like of shape (n_samples, ), optional
Training labels if required by the explainer.
- methodstr or BaseCounterfactual, optional
The method to generate counterfactual explanations.
if ‘best’, infer the most appropriate counterfactual explanation method based on the estimator.
Changed in version 1.1.0.
if str, select a counterfactual explainer from the named collection. See _COUNTERFACTUALS.keys() for a list of valid values.
if BaseCounterfactual, use the supplied counterfactual explainer.
- proximitystr, callable, list or dict, optional
The scoring function to determine the similarity between the counterfactual sample and the original sample.
- random_stateRandomState or int, optional
The pseudo random number generator to ensure stable result.
- method_argsdict, optional
Optional arguments to the counterfactual explainer.
Added in version 1.1.0.
- Returns:
- x_counterfactualsndarray of shape (n_samples, n_timestep)
The counterfactual example.
- validndarray of shape (n_samples,)
Indicator matrix for valid counterfactuals.
- scorendarray of shape (n_samples,) or dict, optional
The score of the counterfactual transform, returned if proximity is not None.
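A minimal usage sketch, not taken from the official documentation: it assumes the GunPoint dataset and that the string passed to proximity is a distance metric name accepted by the proximity scorer.
from wildboar.datasets import load_gun_point
from wildboar.distance import KNeighborsClassifier
from wildboar.explain.counterfactual import counterfactuals

X_train, X_test, y_train, y_test = load_gun_point(merge_train_test=False)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

# method="best" infers the explainer from the estimator; because a proximity
# measure is given, a score per counterfactual is also returned.
x_cf, valid, score = counterfactuals(
    clf, X_test[:2], [1, 1], proximity="euclidean", random_state=1
)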
- wildboar.explain.counterfactual.proximity(x_true, x_counterfactuals, metric='normalized_euclidean', metric_params=None)[source]#
Compute the proximity of the counterfactuals.
- Parameters:
- x_truearray-like of shape (n_samples, n_timestep)
The true samples.
- x_counterfactualsarray-like of shape (n_samples, n_timestep)
The counterfactual samples.
- metricstr or callable, optional
The distance metric. See _METRICS.keys() for a list of supported metrics.
- metric_paramsdict, optional
Parameters to the metric.
Read more about the parameters in the User guide.
- Returns:
- ndarray
The scores.
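A minimal sketch with synthetic data, illustrating the call with the default normalized Euclidean metric; the arrays stand in for original samples and their generated counterfactuals.
import numpy as np
from wildboar.explain.counterfactual import proximity

rng = np.random.RandomState(1)
x_true = rng.randn(2, 50)                # two "original" samples
x_cf = x_true + rng.randn(2, 50) * 0.1   # stand-ins for generated counterfactuals

scores = proximity(x_true, x_cf)  # one proximity score per sample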