Today, most life sciences experts agree that artificial intelligence (AI) is poised to revolutionize drug discovery. However, for AI to achieve its full, game-changing potential, significant changes will need to be made to the R&D status quo. For heavily regulated industries like biopharma, that will mean satisfying a high bar of explainability – especially as regulatory bodies require evidence of repeatability, and as clinicians and scientists want to understand the inner workings of the complex AI models challenging their established methods. [Further reading: "50 shades of AI in regulatory science" by Dr. Szczepan Baran and Dr. Weida Tong of the FDA, Drug Discovery Today, June 2024]
Enter: explainable AI (XAI). XAI is emerging as a methodology that can build trust and confidence in AI-driven approaches and support their successful adoption – especially in the context of drug discovery. Let’s take a closer look.
Explainable artificial intelligence (XAI), sometimes also called interpretable AI, is a set of frameworks and methods designed to help human users understand and trust the results or outputs of machine learning (ML) algorithms.
Overall, the goal of XAI is to make a model’s behavior transparent and interpretable, so that the people who build, use, and regulate it can understand and trust its outputs.
An organization’s XAI toolkit might include videos, tutorials, example-based explanations, model analysis, feature attributions, explainability algorithms, and more. These toolkits allow users and regulators to explore an organization’s AI technology in more detail in order to build trust in its outcomes (e.g., accuracy, fairness, absence of bias) and transparency in its processes.
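To make "feature attributions" concrete, here is a minimal sketch using scikit-learn's permutation importance; the synthetic dataset and random-forest model are illustrative assumptions rather than any particular organization's toolkit:

```python
# Feature-attribution sketch: permutation importance with scikit-learn.
# The synthetic dataset and random-forest model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in range(X.shape[1]):
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Permutation importance is model-agnostic – it needs no access to model internals – which is why attribution methods like it are often among the first entries in an explainability toolkit.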
XAI helps organizations adopt a responsible and ethical approach to AI development through an enhanced understanding of model data, inputs, outputs, and algorithms. By explaining how AI systems work and make decisions, developers and ML scientists can better ensure that project requirements are satisfied, and non-technical audiences and stakeholders can have their concerns about model behavior addressed. Not only does this increase transparency and help build trust, it also mitigates many of the compliance, legal, security, and reputational risks inherent in AI-based approaches.
Overall, the benefits of XAI can be summarized as greater transparency, stronger trust among users and regulators, more responsible and ethical AI development, and reduced compliance, legal, security, and reputational risk.
Although typical AI and ML techniques tend to achieve high accuracy, the resulting models can be very difficult to interpret, especially in deep learning, where complex neural networks are often opaque. In such non-explainable, or "black box," models, users (and possibly even the humans designing them) cannot articulate precisely how the model reached its conclusions. This naturally undermines trust in the outputs, as users and regulators want to be certain these models are operating without bias or other irregularities. In addition, when a typical AI model behaves unexpectedly or fails, developers and end users may struggle to identify the root cause and find suitable solutions to address the issue.
By contrast, XAI techniques and methods can be applied across the ML lifecycle: analyzing the data used to develop models (sometimes called pre-modeling), building interpretability directly into a system’s architecture (explainable modeling), and producing post-modeling explanations of system behavior. Interpretable models may be deliberately constrained in order to provide this level of transparency and clarity, whereas most standard machine learning models are not designed with such interpretability constraints.
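As a simple sketch of what "deliberately constrained" can mean in practice, the example below caps a decision tree at three levels so that its complete decision logic can be printed and audited. The dataset is a standard scikit-learn toy set, used purely for illustration:

```python
# Interpretable-by-design sketch: a depth-constrained decision tree whose
# complete decision logic can be read out as rules. The dataset is a standard
# scikit-learn toy set, used purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# max_depth=3 is the interpretability constraint: it caps model complexity so
# that every prediction path stays short enough for a human to follow.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Print the full set of learned rules -- here, the model is its own explanation.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The constraint trades some predictive power for a model whose every prediction path can be traced by hand – the essence of explainable modeling.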
In drug discovery, most researchers now agree that explainability is a requirement for AI-based clinical decision support systems, as transparency and interpretability are central to decision-making between medical professionals and patients. In addition, demand for explainable deep learning methods is strong in the molecular sciences, where AI-based deep learning methods are used for image analysis, molecular structure and function prediction, and other critical research applications.
XAI techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), along with causal-inference methods such as Bayesian networks, can give valuable insight into the particular drivers of a treatment decision and help clarify the drug-targeting phase. XAI methods are also showing the ability to create time and cost efficiencies in computational drug discovery studies.
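For illustration, here is a minimal SHAP sketch. The synthetic regression target stands in for a hypothetical compound-property endpoint – an assumption made for the example, not a real pipeline:

```python
# Minimal SHAP sketch (requires the open-source `shap` package).
# The synthetic regression target stands in for a hypothetical
# compound-property endpoint; it is an assumption for the example.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=6, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Each row attributes one prediction to the six input features; the row sum
# plus the explainer's expected value recovers that sample's prediction.
print(shap_values[0])
print(shap_values[0].sum() + explainer.expected_value)
```

LIME takes a complementary approach, fitting a simple local surrogate model around an individual prediction in order to explain it.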
XAI is also helping increase collaboration between medicinal chemists, chemoinformaticians and data scientists – supporting shared analysis and interpretation of complex chemical data.
Because of the pharma industry’s highly regulated nature, we should expect increasingly advanced development of XAI models for healthcare diagnostics, drug design, and treatment. While the field of XAI is still in its relative infancy, progress is coming quickly and its relevance is continually increasing.
VeriSIM Life has developed its own sophisticated computational platform that leverages advanced AI and ML techniques to improve drug discovery and development by greatly reducing the time and money it takes to bring a drug to market. The BIOiSIM® platform’s primary output, the Translational Index™ score, is an explainable metric for predicting drug translatability. Contact us to learn more about BIOiSIM® and how our AI-enabled platform helps de-risk R&D decisions.