Explanations
mercury.explainability.explanations
anchors
AnchorsWithImportanceExplanation(explain_data, explanations, categorical={})
Bases: object
Extended Anchors Explanations
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `explain_data` | `DataFrame` | A pandas DataFrame containing the observations for which an explanation has to be found. | *required* |
| `explanations` | `List` | A list containing the results of computing the explanations for `explain_data`. | *required* |
| `categorical` | `dict` | A dictionary mapping each categorical feature to its possible values. | `{}` |
Source code in mercury/explainability/explanations/anchors.py
interpret_explanations(n_important_features)
Prints a report of the most important features obtained.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `n_important_features` | `int` | The number of important features that will appear in the report. | *required* |
Source code in mercury/explainability/explanations/anchors.py
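A minimal usage sketch (the DataFrame and the explanations list are placeholders; in practice `explanations` comes from running an anchors explainer over `explain_data`):

```python
import pandas as pd
from mercury.explainability.explanations.anchors import (
    AnchorsWithImportanceExplanation,
)

# Placeholder inputs: `explanations` would normally be produced by an
# anchors explainer run over `explain_data`.
explain_data = pd.DataFrame({"age": [34, 51], "sex": ["male", "female"]})
explanations = []  # per-observation anchor results go here

explanation = AnchorsWithImportanceExplanation(
    explain_data=explain_data,
    explanations=explanations,
    categorical={"sex": ["male", "female"]},
)

# Print a report with the 3 most important features found.
explanation.interpret_explanations(n_important_features=3)
```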
clustering_tree_explanation
ClusteringTreeExplanation(tree, feature_names=None)
Explanation for ClusteringTreeExplainer. Represents a decision tree that explains a clustering algorithm. Calling the plot method generates a visualization of the decision tree (requires the graphviz package).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `tree` | `Node` | The fitted decision tree. | *required* |
| `feature_names` | `List` | The feature names used in the decision tree. | `None` |
Source code in mercury/explainability/explanations/clustering_tree_explanation.py
plot(filename='tree_explanation', feature_names=None, scalers=None)
Generates a graphviz.Source object representing the decision tree, which can be visualized in a notebook or saved in a file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `filename` | `str` | Filename used if the `render()` method is called on the returned object. | `'tree_explanation'` |
| `feature_names` | `List` | The feature names to use. If not specified, the feature names given in the constructor are used. | `None` |
| `scalers` | `dict` | Dictionary of scalers. If passed, the tree shows the denormalized value in each split instead of the normalized value. Each key is a feature name, and its scaler must implement the `inverse_transform` method. | `None` |
Returns:
| Type | Description |
|---|---|
| `Source` | A `graphviz.Source` object representing the decision tree. |
Source code in mercury/explainability/explanations/clustering_tree_explanation.py
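A sketch of the plotting flow, assuming `explanation` is a ClusteringTreeExplanation returned by a fitted ClusteringTreeExplainer (graphviz must be installed; how `explanation` is obtained is omitted here):

```python
# `explanation` is a ClusteringTreeExplanation obtained from a fitted
# ClusteringTreeExplainer (construction omitted in this sketch).
graph = explanation.plot(
    filename="tree_explanation",
    feature_names=["f0", "f1", "f2"],  # optional override of constructor names
)

# `graph` is a graphviz.Source: display it inline in a notebook, or
# write it to disk ("tree_explanation" plus the rendered format).
graph.render()
```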
counter_factual
CounterfactualBasicExplanation(from_, to_, p, path, path_ps, bounds, explored=np.array([]), explored_ps=np.array([]), labels=[])
Bases: object
A basic counterfactual explanation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `from_` | `ndarray` | Starting point. | *required* |
| `to_` | `ndarray` | Found solution. | *required* |
| `p` | `float` | Probability of the found solution. | *required* |
| `path` | `ndarray` | Path followed to reach the found solution. | *required* |
| `path_ps` | `ndarray` | Probabilities of each path step. | *required* |
| `bounds` | `ndarray` | Feature bounds used when exploring the probability space. | *required* |
| `explored` | `ndarray` | Points explored but not visited (available only when the backtracking strategy is used; empty for simulated annealing). | `np.array([])` |
| `explored_ps` | `ndarray` | Probabilities of the explored points (available only when the backtracking strategy is used; empty for simulated annealing). | `np.array([])` |
| `labels` | `Optional[List[str]]` | Labels to use for each point dimension (used when plotting). | `[]` |
|
Raises:
| Type | Description |
|---|---|
| `AssertionError` | If `from_.shape != to_.shape`. |
| `AssertionError` | If `dim(from_) != 1`. |
| `AssertionError` | If not `0 <= p <= 1`. |
| `AssertionError` | If `path.shape[0] != path_ps.shape[0]`. |
| `AssertionError` | If `bounds.shape[0] != from_.shape[0]`. |
| `AssertionError` | If `explored.shape[0] != explored_ps.shape[0]`. |
| `AssertionError` | If `len(labels) > 0 and len(labels) != bounds.shape[0]`. |
Source code in mercury/explainability/explanations/counter_factual.py
__verbose()
Internal debug information.
Source code in mercury/explainability/explanations/counter_factual.py
get_changes(relative=True)
Returns the relative or absolute changes between the initial and final points.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `relative` | `bool` | `True` for relative changes, `False` for absolute changes. | `True` |
|
Returns:
| Type | Description |
|---|---|
| `ndarray` | Relative or absolute changes for each feature. |
Source code in mercury/explainability/explanations/counter_factual.py
show(figsize=(12, 6), debug=False, path=None, backend='matplotlib')
Creates a plot with the explanation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `figsize` | `tuple` | Width and height of the figure (inches for the matplotlib backend, pixels for the bokeh backend). | `(12, 6)` |
| `debug` | `bool` | Display verbose information (debug mode). | `False` |
Source code in mercury/explainability/explanations/counter_factual.py
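Illustrative usage once an explanation has been obtained from a counterfactual explainer; `explanation` is a placeholder, and the attribute names assume the constructor arguments are stored as same-named attributes:

```python
# `explanation` is a CounterfactualBasicExplanation produced by a
# counterfactual explainer (placeholder; attribute names assumed to
# mirror the constructor arguments).
print("start point:", explanation.from_)
print("solution:   ", explanation.to_)
print("probability:", explanation.p)

# Per-feature deltas between the initial and final points.
rel_changes = explanation.get_changes(relative=True)
abs_changes = explanation.get_changes(relative=False)

# Plot the path followed through the probability space.
explanation.show(figsize=(12, 6), debug=False, backend="matplotlib")
```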
CounterfactualWithImportanceExplanation(explain_data, counterfactuals, importances, count_diffs, count_diffs_norm)
Bases: object
Extended Counterfactual Explanations
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `explain_data` | `DataFrame` | A pandas DataFrame containing the observations for which an explanation has to be found. | *required* |
| `explanations` | | A list containing the results of computing the explanations for `explain_data`. | *required* |
| `categorical` | | A dictionary mapping each categorical feature to its possible values. | *required* |
Source code in mercury/explainability/explanations/counter_factual.py
interpret_explanations(n_important_features=3)
Prints a report of the most important features obtained.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `n_important_features` | `int` | The number of important features that will appear in the report. Defaults to 3. | `3` |
|
Source code in mercury/explainability/explanations/counter_factual.py
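As with the anchors variant, the report is printed directly from the instance (sketch; `extended_explanation` is a placeholder for an object built by a counterfactual importance workflow):

```python
# `extended_explanation` is a CounterfactualWithImportanceExplanation
# built upstream (placeholder name).
extended_explanation.interpret_explanations(n_important_features=3)
```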
partial_dependence
PartialDependenceExplanation(data)
This class holds the result of a Partial Dependence explanation and provides functionality for plotting those results via Partial Dependence Plots.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `dict` | Contains the result of the PartialDependenceExplainer. It must be in the form: `{'feature_name': {'values': [...], 'preds': [...], 'lower_quantile': [...], 'upper_quantile': [...]}, 'feature_name2': {...}, ...}` | *required* |
Source code in mercury/explainability/explanations/partial_dependence.py
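A hand-built `data` payload in the documented shape (all numbers invented) is enough to construct the explanation:

```python
from mercury.explainability.explanations.partial_dependence import (
    PartialDependenceExplanation,
)

# Hand-built payload following the documented structure (values invented).
data = {
    "age": {
        "values": [20, 40, 60],
        "preds": [0.21, 0.48, 0.73],
        "lower_quantile": [0.15, 0.40, 0.65],
        "upper_quantile": [0.30, 0.55, 0.80],
    },
}

explanation = PartialDependenceExplanation(data)
```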
__getitem__(key)
Gets the dependence data of the desired feature.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str` | Name of the feature. | *required* |
Source code in mercury/explainability/explanations/partial_dependence.py
plot(ncols=1, figsize=(15, 15), quantiles=False, filter_classes=None, **kwargs)
Plots a summary of all the partial dependences.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `ncols` | `int` | Number of columns of the summary. 1 by default. | `1` |
| `quantiles` | `bool or list` | Whether to also plot the quantiles and a shaded area between them. Useful to check whether the predictions have high or low dispersion. If this is a list of booleans, quantiles are plotted filtered by class (one boolean per class). | `False` |
| `filter_classes` | `list` | List of bool with the classes to plot. If `None`, all classes are plotted. Ignored if the target variable is not categorical. | `None` |
| `figsize` | `tuple` | Size of the plotted figure. | `(15, 15)` |
Source code in mercury/explainability/explanations/partial_dependence.py
plot_single(var_name, ax=None, quantiles=False, filter_classes=None, **kwargs)
Plots the partial dependence of a single variable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `var_name` | `str` | Name of the desired variable to plot. | *required* |
| `quantiles` | `bool or list[bool]` | Whether to also plot the quantiles and a shaded area between them. Useful to check whether the predictions have high or low dispersion. If the data doesn't contain the quantiles, this parameter is ignored. | `False` |
| `filter_classes` | `list` | List of bool with the classes to plot. If `None`, all classes are plotted. Ignored if the target variable is not categorical. | `None` |
| `ax` | `AxesSubplot` | Axes object on which the data will be plotted. | `None` |
|
Source code in mercury/explainability/explanations/partial_dependence.py
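Continuing the sketch above, a single feature or the full summary can then be plotted:

```python
import matplotlib.pyplot as plt

# Dependence data for one feature (see __getitem__ above).
age_curve = explanation["age"]

# Single-feature plot with the quantile band on a given axes object.
fig, ax = plt.subplots(figsize=(6, 4))
explanation.plot_single("age", ax=ax, quantiles=True)

# Full summary across all features, one column wide.
explanation.plot(ncols=1, figsize=(15, 15), quantiles=True)
plt.show()
```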
shuffle_importance
FeatureImportanceExplanation(data, reverse=False)
This class holds the data related to the importance a given feature has for a model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `dict` | Contains the computed feature importances (e.g. the result of the ShuffleImportanceExplainer). It must be in the form: `{'feature_name': 1.0, 'feature_name2': 2.3, ...}` | *required* |
| `reverse` | `bool` | Whether to sort the features in increasing order (worst performance last = smallest value). Default `False` (decreasing order). | `False` |
|
Source code in mercury/explainability/explanations/shuffle_importance.py
__getitem__(key)
Gets the feature importance of the desired feature.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str` | Name of the feature. | *required* |
Source code in mercury/explainability/explanations/shuffle_importance.py
get_importances()
Returns a list of (feature, importance) tuples sorted by importance.
Source code in mercury/explainability/explanations/shuffle_importance.py
plot(ax=None, figsize=(15, 15), limit_axis_x=False, **kwargs)
Plots a summary of the importances for each feature.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `figsize` | `tuple` | Size of the plotted figure. | `(15, 15)` |
| `limit_axis_x` | `bool` | Whether to restrict the x axis to the range between the minimum and maximum feature values. | `False` |
|
Source code in mercury/explainability/explanations/shuffle_importance.py
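End-to-end sketch: the importances dict would normally come from an explainer, but a hand-built one (values invented) exercises the whole API:

```python
from mercury.explainability.explanations.shuffle_importance import (
    FeatureImportanceExplanation,
)

# Hand-built importances in the documented form (values invented).
explanation = FeatureImportanceExplanation(
    data={"age": 1.0, "income": 2.3, "tenure": 0.4},
    reverse=False,  # decreasing order (the default)
)

print(explanation["income"])           # importance of a single feature: 2.3
pairs = explanation.get_importances()  # (feature, importance) tuples, sorted

# Summary plot of the importances.
explanation.plot(figsize=(15, 15), limit_axis_x=True)
```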