Counterfactual Explanations for Cautious Random Forests
Haifei Zhang, Benjamin Quost, Marie-Hélène Masson
Published in ACM Transactions on Knowledge Discovery from Data, 2026
https://dl.acm.org/doi/full/10.1145/3794856
Traditional machine learning models provide a single-class prediction for a given input instance. This may be inadequate in some scenarios, especially when the cost of erroneous predictions is high. Cautious random forests are classification models that may output sets of possible classes as predictions when uncertainty is high, thus reducing the risk of making incorrect decisions. However, making such indeterminate predictions carries a cost, as resolving indeterminacy typically requires further analysis and manual intervention. This work focuses on explaining why an indeterminate prediction has been made and how the indeterminacy can be resolved. To this end, we use counterfactual examples associated with determinate predictions. We propose a branch-and-bound algorithm that can efficiently generate proximal, plausible, and actionable counterfactual examples. Experimental results demonstrate the advantages of the proposed method.
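To make the notion of an indeterminate prediction concrete, here is a minimal sketch of set-valued prediction by vote thresholding over an ensemble. This is an illustrative assumption, not the paper's method (cautious random forests rely on more principled uncertainty machinery); the function name, voting rule, and threshold are hypothetical:

```python
# Hypothetical sketch: a cautious ensemble returns the set of all
# classes whose vote share exceeds a threshold, so a split vote
# yields an indeterminate (multi-class) prediction.

def cautious_predict(tree_votes, threshold=0.4):
    """Return the set of plausible classes given per-tree votes.

    tree_votes: list of class labels, one vote per tree.
    threshold: minimum vote share for a class to remain in the set.
    """
    counts = {}
    for label in tree_votes:
        counts[label] = counts.get(label, 0) + 1
    n = len(tree_votes)
    # A set of size 1 is a determinate prediction; a larger set
    # signals that the ensemble is too uncertain to commit.
    return {label for label, c in counts.items() if c / n >= threshold}

# Confident votes give a determinate (singleton) prediction ...
print(cautious_predict(["A"] * 9 + ["B"]))      # {'A'}
# ... while split votes give an indeterminate set.
print(cautious_predict(["A"] * 5 + ["B"] * 5))  # {'A', 'B'}
```

A counterfactual explanation in this setting would then be a nearby modified instance for which the returned set collapses to a single class.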
