Artificial intelligence (AI) and machine learning applications are exploding across nearly every industry and slice of life.
But this growth does not come without irony. While AI exists to simplify or accelerate decision-making and workflows, the methodology for doing so is often extremely complex. Indeed, some “black box” machine learning algorithms are so intricate and multifaceted that they can defy simple explanation, even by the computer scientists who created them.
That can be quite problematic for use cases, such as in finance and medicine, that are governed by industry best practices or government regulations requiring transparent explanations of the inner workings of AI solutions. And if these applications cannot meet such explainability requirements, they may be rendered useless regardless of their overall efficacy.
To address this conundrum, our team at the Fidelity Center for Applied Technology (FCAT), in collaboration with the Amazon Quantum Solutions Lab, has proposed and implemented an interpretable machine learning model for Explainable AI (XAI) based on expressive Boolean formulas.
You may read the full paper for comprehensive details on this project.
Our hypothesis was that, since models such as decision trees can become deep and difficult to interpret, the key challenge was an intractable optimization problem: finding an expressive rule with low complexity but high accuracy. Further, by simplifying the model through this advanced XAI approach, we could achieve additional benefits, such as exposing biases that are important in the context of ethical and responsible use of ML, while also making the model easier to maintain and improve.
We proposed an approach based on expressive Boolean formulas because they define rules with tunable complexity (or interpretability) according to which input data are classified. Such a formula can include any operator that can be applied to one or more Boolean variables (such as And or AtLeast), thus providing higher expressivity than more rigid rule-based and tree-based methodologies.
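To make this concrete, here is a minimal sketch, in Python, of how such a formula could be represented and evaluated. The operator names And and AtLeast follow the terminology above, but the helper functions and the example feature names are our own illustration, not the paper's implementation:

```python
# Minimal sketch of an expressive Boolean formula over named Boolean features.
# Each operator builds a function that maps a feature record to True/False.

def Var(name):
    # A literal: reads one Boolean feature from the input record.
    return lambda x: bool(x[name])

def And(*clauses):
    # True when every sub-clause holds.
    return lambda x: all(c(x) for c in clauses)

def Or(*clauses):
    # True when any sub-clause holds.
    return lambda x: any(c(x) for c in clauses)

def AtLeast(k, *clauses):
    # True when at least k of the sub-clauses hold.
    return lambda x: sum(c(x) for c in clauses) >= k

# Example rule: fever AND at least 2 of {cough, fatigue, headache}.
rule = And(Var("fever"),
           AtLeast(2, Var("cough"), Var("fatigue"), Var("headache")))

patient = {"fever": 1, "cough": 1, "fatigue": 0, "headache": 1}
print(rule(patient))  # True
```

The rule itself reads almost like a sentence, which is the interpretability payoff of this representation.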
In this problem, we have two competing objectives: maximizing the performance of the algorithm while minimizing its complexity. Rather than taking the typical approach of applying one of two optimization methods (combining multiple objectives into one, or constraining one of the objectives), we chose to include both objectives in our formulation. Without loss of generality, we mainly use balanced accuracy as our overarching performance metric.
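For reference, balanced accuracy averages the true positive rate and the true negative rate, so a classifier cannot score well by simply favoring the majority class. A small self-contained sketch, with invented data:

```python
# Balanced accuracy = (true positive rate + true negative rate) / 2.

def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)            # number of actual positives
    neg = len(y_true) - pos      # number of actual negatives
    return 0.5 * (tp / pos + tn / neg)

y_true = [1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 1]
print(balanced_accuracy(y_true, y_pred))  # 0.5 * (2/3 + 2/3) ≈ 0.667
```

Complexity, the second objective, can be measured by counting the literals and operators in a formula; the exact definition used in the optimization is detailed in the paper.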
We were also motivated to include operators like AtLeast by the need for highly interpretable checklists, such as a list of medical symptoms that signify a particular condition. It is conceivable that a decision would be made using such a checklist of symptoms, with a minimum number required to be present for a positive diagnosis. Similarly, in finance, a bank may decide whether or not to extend credit to a customer based on the presence of a certain number of factors from a larger list.
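As a hypothetical illustration of the checklist idea (the factor names below are invented, not drawn from any real credit policy), an AtLeast rule makes the decision criterion explicit:

```python
# Hypothetical credit checklist: approve when at least 3 of 5 indicative
# factors are present. Factor names are invented for illustration only.

def at_least(k, *flags):
    # True when at least k of the given Boolean flags are set.
    return sum(bool(f) for f in flags) >= k

applicant = {
    "stable_income": 1,
    "low_debt_ratio": 1,
    "long_credit_history": 0,
    "no_recent_defaults": 1,
    "owns_home": 0,
}

approve = at_least(3, *applicant.values())
print(approve)  # True: 3 of the 5 factors are present
```

A loan officer, a regulator, or the customer can read this rule directly, with no model internals to unpack.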
We successfully implemented our XAI model and benchmarked it on public datasets for credit, customer behavior, and medical conditions. We found that our model is generally competitive with other well-known alternatives. We also found that our XAI model can potentially be powered by special-purpose hardware or quantum devices that rapidly solve Integer Linear Programming (ILP) or Quadratic Unconstrained Binary Optimization (QUBO) problems. The addition of QUBO solvers reduces the number of iterations, leading to a speedup through the fast proposal of non-local moves.
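For intuition on the QUBO form mentioned above: a QUBO instance asks for the binary vector x that minimizes x^T Q x for a given matrix Q. How the rule-search subproblem is mapped onto Q is beyond this sketch; the brute-force solver below (with an arbitrary example matrix) only illustrates the problem shape that specialized classical or quantum hardware is built to solve quickly:

```python
# Brute-force QUBO solver: minimize x^T Q x over binary vectors x.
# Exhaustive search is exponential in n, which is exactly why specialized
# or quantum solvers are attractive for larger instances.
import itertools

def solve_qubo_brute_force(Q):
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        energy = sum(Q[i][j] * bits[i] * bits[j]
                     for i in range(n) for j in range(n))
        if energy < best_e:
            best_x, best_e = bits, energy
    return best_x, best_e

Q = [[-1, 2, 0],   # arbitrary upper-triangular example matrix
     [0, -1, 2],
     [0, 0, -1]]
print(solve_qubo_brute_force(Q))  # ((1, 0, 1), -2)
```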
As noted, explainable AI models using Boolean formulas can have many applications in healthcare and in Fidelity’s field of finance, such as credit scoring or assessing why some customers selected a product while others did not. By creating these interpretable rules, we can attain deeper insights that can guide future product development and refinement, as well as the optimization of marketing campaigns.
Based on our findings, we have determined that Explainable AI using expressive Boolean formulas is both appropriate and desirable for use cases that mandate further explainability. And as quantum computing continues to develop, we foresee opportunities to gain potential speedups from it and other special-purpose hardware accelerators.
Future work may center on applying these classifiers to other datasets, introducing new operators, or applying these concepts to other use cases.