
Is Explainability In AI Always Necessary?


“AI models do not need to be interpretable to be useful.”

Nigam Shah, Stanford

Interpretability in machine learning dates back to the 1990s, when it was referred to as neither “interpretability” nor “explainability”. Interpretable and explainable machine learning techniques emerged from the need to design intelligible systems and to understand and explain the predictions of opaque models such as deep neural networks.

In general, the ML community has yet to agree on a definition of explainability or interpretability; sometimes it is even called understandability. Some define interpretability as “the ability to explain or to present in understandable terms to a human”. According to experts, interpretability depends on the domain of application and the target audience, so a one-size-fits-all definition may be infeasible or unnecessary. With the terms used interchangeably, is it wise to sacrifice a usable model simply because it cannot be fully understood? Where does one draw the line?

Despite deep learning’s popularity, many organisations are still comfortable using logistic regression, support vector machines and other conventional methods. Model-agnostic techniques can be applied to these traditional models, but they are often considered overkill for explaining kernel-based ML models: they can be computationally expensive and can produce poorly approximated explanations.
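
As a rough illustration of what a model-agnostic explanation looks like for such a traditional model, the sketch below applies permutation importance to a kernel SVM with scikit-learn. The dataset, hyperparameters and number of repeats are illustrative assumptions, not a prescription.

from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative data and model: an RBF-kernel SVM offers no native
# feature attributions, so a model-agnostic method is one option.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Permutation importance shuffles one feature at a time and measures the
# drop in test accuracy; the cost scales with n_features * n_repeats,
# which is why model-agnostic methods can become expensive.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.4f}")

The same call works unchanged for a logistic regression or a gradient-boosted model, which is what makes the method model-agnostic, and also what makes its explanations approximations rather than exact accounts of the model.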

Stanford’s Nigam Shah, in a recent interview, touched on why explainability may not always be necessary. “We don’t fully know how most of them really work. But we still use them because we have convinced ourselves via randomized control trials that they are beneficial,” said Shah.

Explainability In Its Many Forms

Image credits: Stanford HAI blog

For any organisation, explainability becomes an issue when clients or other stakeholders come into the picture. Explanations typically serve stakeholders in one of two ways:

  • As a one-off sanity check, or as the reasoning shown to other stakeholders for a particular prediction.
  • As a way to gather feedback from stakeholders on how the model ought to be updated to better align with their intuition.

It is generally believed that explainable methodologies have broader advantages, since their insights can be communicated to a wider audience and not just the immediate stakeholders. They help share insights across the organisation without the need for a specialist in every scenario.

According to Shah, there are three main types of AI interpretability: 

  1. Explainability that focuses on how a model works.
  2. Causal explainability, which deals with the “whys and hows” of the model’s inputs and outputs.
  3. Trust-inducing explainability, which provides the information required to trust a model and deploy it with confidence.

So, it is important to know what type of explainability a data science team is targeting. That said, there is a chance that a use case might be a mix of all three. Such trade-offs and overlaps present a bundle of paradoxes to a decision-maker.

With increasing sophistication and completeness, a system becomes less understandable. “As a model grows more realistic, it becomes more difficult to understand,” said David Hauser at the recently concluded machine learning developers conference. Yet, according to Hauser, clients want the model to be both understandable and realistic. This is another paradox a data scientist has to live with. He also stressed that understandable solutions give up accuracy; network pruning, for instance, is one such technique that takes a hit on accuracy. The moment non-linearities or interactions are introduced, the answers become less intuitive.
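
The trade-off Hauser describes can be made concrete with a toy comparison between an interpretable linear model and a less transparent non-linear one on data containing interactions. The dataset, models and any accuracy gap below are illustrative assumptions, not figures from the talk.

from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy non-linear problem: two interleaving half-moons with noise.
X, y = make_moons(n_samples=2000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: a handful of coefficients a human can read directly.
linear = LogisticRegression().fit(X_train, y_train)

# More "realistic": captures the non-linearity, but its decision logic
# is spread across hundreds of trees and is far harder to explain.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(linear.score(X_test, y_test), 3))
print("random forest accuracy:", round(forest.score(X_test, y_test), 3))
print("logistic regression coefficients:", linear.coef_)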

“Do you, as a user, care how the weather is predicted, and what the causal explanation is, as long as you know a day ahead if it is going to rain and the forecast is correct?”

We live in a world with an abundance of tools and services, and making the right choice leads to another paradox: Fredkin’s paradox, which states that the more similar two alternatives seem, the harder it is to choose between them and the more time and effort the decision takes, even though the choice matters correspondingly less.

Stanford professor Shah has also emphasised the trust paradox. According to him, explanations aren’t always necessary; worse, they can sometimes lead people to rely on a model even when it is wrong. What engineers need from interpretability may not coincide with the needs of model users, whose focus is on causality and trust. Furthermore, explanations can also obscure what one really needs from a model.

Key Takeaways

In his interview with Stanford HAI, Shah shared:

  • AI models do not need to be interpretable to be useful.
  • Doctors at Stanford prescribe drugs on a routine basis, without fully knowing how most of them really work.
  • In health care, where AI models rarely lead to fully automated decision-making, an explanation may or may not be useful.
  • If it is too late for the clinician to intervene, what good are the explanations?
  • AI used for job interviews, bail, loans, health care programs or housing, however, absolutely requires a causal explanation.

One of the vital purposes of explanations is to improve ML engineers’ understanding of their models so they can refine them and improve performance. But since machine learning models are “dual-use”, explanations and other tools could also enable malicious users to increase the capabilities and performance of undesirable systems.

There is no denying that explanations allow model refinement. Going forward, beyond debugging and auditing models, organisations are also looking at data privacy through the lens of explainability. Whether in medical diagnosis or credit card risk estimation, making models more explainable cannot come at the cost of privacy. Sensitive information is thus another hurdle for explainability.


Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.