***************************************** :py:mod:`wildboar.explain.counterfactual` ***************************************** .. py:module:: wildboar.explain.counterfactual .. autoapi-nested-parse:: Counterfactual explanations. The :mod:`wildboar.explain.counterfactual` module includes numerous methods for generating counterfactual explanations. .. !! processed by numpydoc !! Classes ------- .. autoapisummary:: wildboar.explain.counterfactual.KNeighborsCounterfactual wildboar.explain.counterfactual.NativeGuideCounterfactual wildboar.explain.counterfactual.NiceCounterfactual wildboar.explain.counterfactual.PrototypeCounterfactual wildboar.explain.counterfactual.ShapeletForestCounterfactual Functions --------- .. autoapisummary:: wildboar.explain.counterfactual.counterfactuals wildboar.explain.counterfactual.proximity
.. py:class:: KNeighborsCounterfactual(method='auto', random_state=None) Fit a counterfactual explainer to a k-nearest neighbors classifier. :Parameters: **method** : {"auto", "mean", "medoid"}, optional The method for generating counterfactuals. If 'auto', counterfactuals are generated using k-means if possible and k-medoids otherwise. If 'mean', counterfactuals are always generated using k-means, which fails if the estimator is fitted with a metric other than 'euclidean', 'dtw' or 'wdtw'. If 'medoid', counterfactuals are generated using k-medoids. .. versionadded:: 1.2 **random_state** : int or RandomState, optional - If `int`, `random_state` is the seed used by the random number generator. - If `RandomState` instance, `random_state` is the random number generator. - If `None`, the random number generator is the `RandomState` instance used by `np.random`. :Attributes: **explainer_** : dict The explainer for each label. .. rubric:: References Karlsson, I., Rebane, J., Papapetrou, P., & Gionis, A. (2020). Locally and globally explainable time series tweaking. Knowledge and Information Systems, 62(5), 1671-1700. .. only:: latex .. !! processed by numpydoc !! .. py:method:: fit_explain(estimator, x=None, y=None, **kwargs) Fit and return the explanation. :Parameters: **estimator** : Estimator The estimator to explain. **x** : time-series, optional The input time series. **y** : array-like of shape (n_samples, ), optional The labels. **\*\*kwargs** : dict, optional Optional extra arguments. :Returns: ndarray The explanation. .. !! processed by numpydoc !! .. py:method:: get_metadata_routing() Get metadata routing of this object. Please check :ref:`User Guide ` on how the routing mechanism works. :Returns: **routing** : MetadataRequest A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information. .. !! processed by numpydoc !! .. py:method:: get_params(deep=True) Get parameters for this estimator. 
:Parameters: **deep** : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. :Returns: **params** : dict Parameter names mapped to their values. .. !! processed by numpydoc !! .. py:method:: plot(x=None, y=None, ax=None) Plot the explanation. :Parameters: **x** : array-like, optional Optional input samples. **y** : array-like, optional Optional target labels. **ax** : Axes, optional Optional axes to plot to. :Returns: Axes The axes object. .. !! processed by numpydoc !! .. py:method:: score(x, y) Score the counterfactual explainer in terms of closeness of fit. :Parameters: **x** : array-like of shape (n_samples, n_timestep) The samples. **y** : array-like of shape (n_samples, ) The desired counterfactual label. :Returns: float The proximity. .. !! processed by numpydoc !! .. py:method:: set_params(**params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as :class:`~sklearn.pipeline.Pipeline`). The latter have parameters of the form ``__`` so that it's possible to update each component of a nested object. :Parameters: **\*\*params** : dict Estimator parameters. :Returns: **self** : estimator instance Estimator instance. .. !! processed by numpydoc !! .. py:class:: NativeGuideCounterfactual(*, metric='euclidean', metric_params=None, importance='interval', target='predict', window=2, max_iter=100, random_state=None, n_jobs=None) Native guide counterfactual explanations. Counterfactual explanations are constructed by replacing parts of the explained sample with the most important region from the closest sample of the desired class. :Parameters: **metric** : str or callable, optional The distance metric. See ``_METRICS.keys()`` for a list of supported metrics. **metric_params** : dict, optional Parameters to the metric. Read more about the parameters in the :ref:`User guide `. 
**importance** : {"interval"}, array-like or callable, optional The importance assigned to the time steps. - If "interval", use :class:`~wildboar.explain.IntervalImportance` to assign the importance of the time steps. - If array-like, an array of shape (n_timestep, ). - If callable, a function ``f(x, y)``, where `x` and `y` are the time series and label being explained. The return value is a ndarray of shape (n_timestep, ). **target** : {"predict"} or float, optional The target evaluation of counterfactuals: - if 'predict', the counterfactual prediction must return the correct label. - if float, the counterfactual prediction probability must exceed the target value. **window** : int, optional The `window` parameter. Only used if `importance="interval"`. **max_iter** : int, optional The maximum number of iterations. **random_state** : RandomState or int, optional Pseudo-random number for consistency between different runs. **n_jobs** : int, optional The number of parallel jobs. :Attributes: **target_** : TargetEvaluator The target evaluator. **importance_** : Importance The importance. **estimator_** : Estimator The estimator. **classes_** : ndarray The classes known to the explainer. .. rubric:: Notes The current implementation uses :class:`~wildboar.explain.IntervalImportance` as the default method for assigning importances and selecting the time points where to grow the replacement. Unfortunately, this method assigns the same score to each sample, that is, it provides a model-level interpretation of the importance of each time step. To exactly replicate the work by Delaney (2021), you have to supply your own importance function. The default recommendation by the original authors is to use GradCAM. .. rubric:: References Delaney, E., Greene, D., Keane, M.T. (2021) Instance-Based Counterfactual Explanations for Time Series Classification. Case-Based Reasoning Research and Development. Lecture Notes in Computer Science, vol. 12877, pp. 32–47. Springer, Cham. .. 
only:: latex .. rubric:: Examples >>> from wildboar.datasets import load_gun_point >>> from wildboar.distance import KNeighborsClassifier >>> from wildboar.explain.counterfactual import NativeGuideCounterfactual >>> X_train, X_test, y_train, y_test = load_gun_point(merge_train_test=False) >>> clf = KNeighborsClassifier(n_neighbors=1) >>> clf.fit(X_train, y_train) >>> ngc = NativeGuideCounterfactual(window=1, target=0.51) >>> ngc.fit(clf, X_train, y_train) >>> clf.predict(X_test[1:3]) array([2., 2.], dtype=float32) >>> cf = ngc.explain(X_test[1:3], [1, 1]) # Desired label is [1, 1] >>> clf.predict(cf) array([1., 1.], dtype=float32) .. !! processed by numpydoc !! .. py:method:: fit_explain(estimator, x=None, y=None, **kwargs) Fit and return the explanation. :Parameters: **estimator** : Estimator The estimator to explain. **x** : time-series, optional The input time series. **y** : array-like of shape (n_samples, ), optional The labels. **\*\*kwargs** : dict, optional Optional extra arguments. :Returns: ndarray The explanation. .. !! processed by numpydoc !! .. py:method:: get_metadata_routing() Get metadata routing of this object. Please check :ref:`User Guide ` on how the routing mechanism works. :Returns: **routing** : MetadataRequest A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information. .. !! processed by numpydoc !! .. py:method:: get_params(deep=True) Get parameters for this estimator. :Parameters: **deep** : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. :Returns: **params** : dict Parameter names mapped to their values. .. !! processed by numpydoc !! .. py:method:: plot(x=None, y=None, ax=None) Plot the explanation. :Parameters: **x** : array-like, optional Optional input samples. **y** : array-like, optional Optional target labels. **ax** : Axes, optional Optional axes to plot to. :Returns: Axes The axes object. .. !! processed by numpydoc !! .. 
py:method:: score(x, y) Score the counterfactual explainer in terms of closeness of fit. :Parameters: **x** : array-like of shape (n_samples, n_timestep) The samples. **y** : array-like of shape (n_samples, ) The desired counterfactual label. :Returns: float The proximity. .. !! processed by numpydoc !! .. py:method:: set_params(**params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as :class:`~sklearn.pipeline.Pipeline`). The latter have parameters of the form ``__`` so that it's possible to update each component of a nested object. :Parameters: **\*\*params** : dict Estimator parameters. :Returns: **self** : estimator instance Estimator instance. .. !! processed by numpydoc !! .. py:class:: NiceCounterfactual(n_neighbors=1, *, reward='compactness', metric='euclidean', metric_params=None, verbose=0) An algorithm designed to generate counterfactual explanations. As described by Brughmans (2024), it is designed for tabular data, addressing key requirements for real-life deployments: 1. Provides explanations for all predictions. 2. Compatible with any classification model, including non-differentiable ones. 3. Efficient in runtime. 4. Offers multiple counterfactual explanations with varying characteristics. The algorithm leverages information from a nearest unlike neighbor by iteratively incorporating timesteps from this neighbor into the instance being explained. :Parameters: **n_neighbors** : int, optional The number of neighbors. **reward** : str or callable, optional The reward function to optimize the counterfactual explanations. Can be a string specifying one of the predefined reward functions or a custom callable. The callable is a function `f(original, current, current_pred, candidates, candidate_preds)` that returns a ndarray of scores for each candidate. **metric** : str, optional The distance metric to use for calculating proximity between instances. Must be one of the supported metrics. 
**metric_params** : dict, optional Additional parameters to pass to the distance metric function. **verbose** : int, optional The verbosity level: no feedback (0) or some feedback (1). .. !! processed by numpydoc !! .. py:method:: explain(X, y) Explain the predictions for the given data. :Parameters: **X** : array-like The input data for which explanations are to be generated. **y** : array-like The target values corresponding to the input data. .. !! processed by numpydoc !! .. py:method:: fit(estimator, X, y) Fit the counterfactual explanation model. :Parameters: **estimator** : object The estimator object to be validated and used for fitting. **X** : array-like of shape (n_samples, n_features) The input samples. **y** : array-like of shape (n_samples,) The target values. :Returns: self Returns the instance itself. .. !! processed by numpydoc !! .. py:method:: fit_explain(estimator, x=None, y=None, **kwargs) Fit and return the explanation. :Parameters: **estimator** : Estimator The estimator to explain. **x** : time-series, optional The input time series. **y** : array-like of shape (n_samples, ), optional The labels. **\*\*kwargs** : dict, optional Optional extra arguments. :Returns: ndarray The explanation. .. !! processed by numpydoc !! .. py:method:: get_metadata_routing() Get metadata routing of this object. Please check :ref:`User Guide ` on how the routing mechanism works. :Returns: **routing** : MetadataRequest A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information. .. !! processed by numpydoc !! .. py:method:: get_params(deep=True) Get parameters for this estimator. :Parameters: **deep** : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. :Returns: **params** : dict Parameter names mapped to their values. .. !! processed by numpydoc !! .. py:method:: plot(x=None, y=None, ax=None) Plot the explanation. 
:Parameters: **x** : array-like, optional Optional input samples. **y** : array-like, optional Optional target labels. **ax** : Axes, optional Optional axes to plot to. :Returns: Axes The axes object. .. !! processed by numpydoc !! .. py:method:: score(x, y) Score the counterfactual explainer in terms of closeness of fit. :Parameters: **x** : array-like of shape (n_samples, n_timestep) The samples. **y** : array-like of shape (n_samples, ) The desired counterfactual label. :Returns: float The proximity. .. !! processed by numpydoc !! .. py:method:: set_params(**params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as :class:`~sklearn.pipeline.Pipeline`). The latter have parameters of the form ``__`` so that it's possible to update each component of a nested object. :Parameters: **\*\*params** : dict Estimator parameters. :Returns: **self** : estimator instance Estimator instance. .. !! processed by numpydoc !! .. py:class:: PrototypeCounterfactual(metric='euclidean', *, r=1.0, g=0.05, max_iter=100, step_size=0.1, n_prototypes='auto', target='predict', method='sample', min_shapelet_size=0.0, max_shapelet_size=1.0, random_state=None, verbose=False) Model agnostic approach for constructing counterfactual explanations. :Attributes: **estimator_** : object The estimator for which counterfactuals are computed. **classes_** : ndarray The classes. **partitions_** : dict Dictionary of classes and PrototypeSampler. **target_** : TargetEvaluator The target evaluator. .. rubric:: References Samsten, Isak (2020). Model agnostic time series counterfactuals. .. only:: latex .. !! processed by numpydoc !! .. py:method:: fit_explain(estimator, x=None, y=None, **kwargs) Fit and return the explanation. :Parameters: **estimator** : Estimator The estimator to explain. **x** : time-series, optional The input time series. **y** : array-like of shape (n_samples, ), optional The labels. 
**\*\*kwargs** : dict, optional Optional extra arguments. :Returns: ndarray The explanation. .. !! processed by numpydoc !! .. py:method:: get_metadata_routing() Get metadata routing of this object. Please check :ref:`User Guide ` on how the routing mechanism works. :Returns: **routing** : MetadataRequest A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information. .. !! processed by numpydoc !! .. py:method:: get_params(deep=True) Get parameters for this estimator. :Parameters: **deep** : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. :Returns: **params** : dict Parameter names mapped to their values. .. !! processed by numpydoc !! .. py:method:: plot(x=None, y=None, ax=None) Plot the explanation. :Parameters: **x** : array-like, optional Optional input samples. **y** : array-like, optional Optional target labels. **ax** : Axes, optional Optional axes to plot to. :Returns: Axes The axes object. .. !! processed by numpydoc !! .. py:method:: score(x, y) Score the counterfactual explainer in terms of closeness of fit. :Parameters: **x** : array-like of shape (n_samples, n_timestep) The samples. **y** : array-like of shape (n_samples, ) The desired counterfactual label. :Returns: float The proximity. .. !! processed by numpydoc !! .. py:method:: set_params(**params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as :class:`~sklearn.pipeline.Pipeline`). The latter have parameters of the form ``__`` so that it's possible to update each component of a nested object. :Parameters: **\*\*params** : dict Estimator parameters. :Returns: **self** : estimator instance Estimator instance. .. !! processed by numpydoc !! .. 
py:class:: ShapeletForestCounterfactual(*, cost='euclidean', aggregation='mean', epsilon=1.0, batch_size=0.1, max_paths=1.0, verbose=False, random_state=None) Counterfactual explanations for shapelet forest classifiers. :Parameters: **cost** : {"euclidean", "cosine", "manhattan"} or callable, optional The cost function to determine the goodness of a counterfactual. **aggregation** : callable, optional The aggregation function for the cost of multivariate counterfactuals. **epsilon** : float, optional Control the degree of change from the decision threshold. **batch_size** : float, optional Batch size when evaluating the cost and predictions of counterfactual candidates. .. versionchanged:: 1.1 The default value changed to 0.1 **max_paths** : float, optional Sample a fraction of the positive prediction paths. .. versionadded:: 1.1 Add support for subsampling prediction paths. **verbose** : bool, optional Print information to stdout during execution. **random_state** : RandomState or int, optional Pseudo-random number for consistency between different runs. :Attributes: **paths_** : dict A dictionary of prediction paths per label. .. warning:: Only shapelet forests fitted with the Euclidean distance are supported, i.e., ``metric="euclidean"``. .. rubric:: Notes This implementation only supports the reversible algorithm described by Karlsson (2020). .. rubric:: References Karlsson, I., Rebane, J., Papapetrou, P., & Gionis, A. (2020). Locally and globally explainable time series tweaking. Knowledge and Information Systems, 62(5), 1671-1700. Karlsson, I., Rebane, J., Papapetrou, P., & Gionis, A. (2018). Explainable time series tweaking via irreversible and reversible temporal transformations. In 2018 IEEE International Conference on Data Mining (ICDM) .. only:: latex .. !! processed by numpydoc !! .. py:method:: fit_explain(estimator, x=None, y=None, **kwargs) Fit and return the explanation. 
:Parameters: **estimator** : Estimator The estimator to explain. **x** : time-series, optional The input time series. **y** : array-like of shape (n_samples, ), optional The labels. **\*\*kwargs** : dict, optional Optional extra arguments. :Returns: ndarray The explanation. .. !! processed by numpydoc !! .. py:method:: get_metadata_routing() Get metadata routing of this object. Please check :ref:`User Guide ` on how the routing mechanism works. :Returns: **routing** : MetadataRequest A :class:`~sklearn.utils.metadata_routing.MetadataRequest` encapsulating routing information. .. !! processed by numpydoc !! .. py:method:: get_params(deep=True) Get parameters for this estimator. :Parameters: **deep** : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. :Returns: **params** : dict Parameter names mapped to their values. .. !! processed by numpydoc !! .. py:method:: plot(x=None, y=None, ax=None) Plot the explanation. :Parameters: **x** : array-like, optional Optional input samples. **y** : array-like, optional Optional target labels. **ax** : Axes, optional Optional axes to plot to. :Returns: Axes The axes object. .. !! processed by numpydoc !! .. py:method:: score(x, y) Score the counterfactual explainer in terms of closeness of fit. :Parameters: **x** : array-like of shape (n_samples, n_timestep) The samples. **y** : array-like of shape (n_samples, ) The desired counterfactual label. :Returns: float The proximity. .. !! processed by numpydoc !! .. py:method:: set_params(**params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as :class:`~sklearn.pipeline.Pipeline`). The latter have parameters of the form ``__`` so that it's possible to update each component of a nested object. :Parameters: **\*\*params** : dict Estimator parameters. :Returns: **self** : estimator instance Estimator instance. .. !! processed by numpydoc !! .. 
py:function:: counterfactuals(estimator, x, y, *, train_x=None, train_y=None, method='best', proximity=None, random_state=None, method_args=None) Compute a single counterfactual example for each sample. :Parameters: **estimator** : object The estimator used to compute the counterfactual example. **x** : array-like of shape (n_samples, n_timestep) The data samples to fit counterfactuals to. **y** : array-like broadcast to shape (n_samples,) The desired label of the counterfactual. **train_x** : array-like of shape (n_samples, n_timestep), optional Training samples if required by the explainer. **train_y** : array-like of shape (n_samples, ), optional Training labels if required by the explainer. **method** : str or BaseCounterfactual, optional The method to generate counterfactual explanations: - if 'best', infer the most appropriate counterfactual explanation method based on the estimator. .. versionchanged:: 1.1.0 - if str, select counterfactual explainer from named collection. See ``_COUNTERFACTUALS.keys()`` for a list of valid values. - if BaseCounterfactual, use the supplied counterfactual. **proximity** : str, callable, list or dict, optional The scoring function to determine the similarity between the counterfactual sample and the original sample. **random_state** : RandomState or int, optional The pseudo random number generator to ensure stable results. **method_args** : dict, optional Optional arguments to the counterfactual explainer. .. versionadded:: 1.1.0 :Returns: **x_counterfactuals** : ndarray of shape (n_samples, n_timestep) The counterfactual examples. **valid** : ndarray of shape (n_samples,) Indicator matrix for valid counterfactuals. **score** : ndarray of shape (n_samples,) or dict, optional Return score of the counterfactual transform, if ``proximity`` is not None. .. !! processed by numpydoc !! .. py:function:: proximity(x_true, x_counterfactuals, metric='normalized_euclidean', metric_params=None) Compute the proximity of the counterfactuals. 
:Parameters: **x_true** : array-like of shape (n_samples, n_timestep) The true samples. **x_counterfactuals** : array-like of shape (n_samples, n_timestep) The counterfactual samples. **metric** : str or callable, optional The distance metric. See ``_METRICS.keys()`` for a list of supported metrics. **metric_params** : dict, optional Parameters to the metric. Read more about the parameters in the :ref:`User guide `. :Returns: ndarray The scores. .. !! processed by numpydoc !!