
How Causal Inference Can Lead To Real Intelligence In Machines


Last year, the machine learning community was thrown into disarray when some of its top minds, Yann LeCun, Ali Rahimi and Judea Pearl, had a public face-off over the state of artificial intelligence and machine learning.

While Rahimi and Pearl tried to tone down the hype around AI, LeCun was aghast at the scepticism about the intelligence and causal reasoning of today's models.

Pearl also went on record to say that deep learning is stuck at curve fitting, a claim he admitted may sound like "sacrilege". From the point of view of the mathematical hierarchy, Pearl said, no matter how cleverly the data is manipulated, it is still a curve-fitting exercise.

“There are no predictions without assumptions.” 

– Max Welling, VP Qualcomm

This is a controversial accusation, especially coming from Pearl, who was awarded the ACM Turing Award for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.

“I think a lot of people from outside the field criticise the current status while ignoring that people actively work on fixing the very aspects they criticise. This includes causality, self-sup learning, reasoning, memory,” LeCun fired back in a recent Twitter post, echoing the views of Max Welling, another noted AI researcher.

To get a sense of what the critics of AI are suggesting, consider a reinforcement learning system that interacts with its environment and intervenes in it. Such a system lets one infer the consequences of its interventions, but only of the interventions it has actually tried. For a model to be causal, it has to go beyond them, to actions that were never used during training.

However, in recent work published by OpenAI, agents in a hide-and-seek game did something astounding and broke the rules of the game. The researchers came across new strategies the agents had devised to win, strategies that had never been thought of before. This is indeed a step in the right direction: emergent intelligence.

Contrary to what the critics say, deep learning is not just about curve fitting; rather, the research that goes into causality simply does not get the attention it deserves. In the next section, we list a few interesting works in this field.

What Is Hot In Causal Inference 

Before we go any further, let us first define what causality is and why there is so much talk about it in machine learning circles:

Causality is the degree to which one can rule out plausible alternative explanations. The ability to rule out competing explanations comes from the design of the study (random assignment, sampling methods, etc.).

So, by defining causality in a system, one gets to ask, and even answer, why a certain feature is or is not needed in a model. Enabling a machine to reason in terms of causality leads to a form of intelligence much closer to the way humans think, a step towards AGI. The toy example below makes the distinction between merely observing a variable and intervening on it concrete.
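The following is a minimal sketch in Python (a toy structural causal model of my own, not taken from any of the works discussed here). Regressing Y on X from observational data mixes the true effect of X with confounding through Z, while simulating the intervention do(X = x) recovers the causal effect.

```python
# Toy structural causal model: Z -> X, Z -> Y, X -> Y (true effect of X on Y is 2).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_x=None):
    """Sample from the SCM; if do_x is given, force X to that value (the do-operator)."""
    z = rng.normal(size=n)                                   # hidden confounder
    x = z + rng.normal(scale=0.1, size=n) if do_x is None else np.full(n, do_x)
    y = 2.0 * x + 3.0 * z + rng.normal(scale=0.1, size=n)
    return x, y

# Observational: the regression slope of Y on X mixes the causal effect (2)
# with the confounding path through Z, so it comes out near 5.
x_obs, y_obs = simulate()
slope_obs = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Interventional: setting X by fiat cuts the Z -> X edge, so comparing
# do(X=1) with do(X=0) recovers the true effect of 2.
_, y_do1 = simulate(do_x=1.0)
_, y_do0 = simulate(do_x=0.0)
effect_do = y_do1.mean() - y_do0.mean()

print(f"observational slope   ~ {slope_obs:.2f}")   # about 5, biased by Z
print(f"interventional effect ~ {effect_do:.2f}")   # about 2, the causal effect
```

The gap between the two numbers is exactly what Pearl means when he says that associations alone cannot answer interventional questions.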

Despite claims of a lack of research in this field, there are quite a few significant works that prove otherwise.

Here are a few interesting research efforts that address causal inference in machine learning:

Causality for Machine Learning by Bernhard Schölkopf

This paper argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them.

DeepMind’s Causal Bayesian Networks

Researchers at DeepMind released two papers that demonstrate the use of causal Bayesian networks (CBNs) to visualise and quantify the degree of unfairness; a toy illustration of the idea follows.
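Here is a rough, hand-rolled illustration of that idea, with a graph and probabilities of my own choosing (not DeepMind's code or data): a three-node CBN with a sensitive attribute A, a mediator M and a decision Y, where the path-specific effect of intervening on A along the direct edge A -> Y gives one simple number for the unfairness in the graph.

```python
# Toy CBN with binary nodes: A (sensitive attribute) -> M (mediator) -> Y (decision),
# plus a direct edge A -> Y. All probabilities below are made up for illustration.
p_m_given_a = {0: 0.5, 1: 0.7}                      # P(M=1 | A)
p_y_given_am = {(0, 0): 0.2, (0, 1): 0.5,           # P(Y=1 | A, M)
                (1, 0): 0.4, (1, 1): 0.7}

def p_y_do(a, mediator_a):
    """P(Y=1) when A is set to `a` while M follows its distribution under A=mediator_a."""
    pm1 = p_m_given_a[mediator_a]
    return (1 - pm1) * p_y_given_am[(a, 0)] + pm1 * p_y_given_am[(a, 1)]

# Direct (path-specific) effect: flip A along the direct edge only,
# holding the mediator's distribution fixed at its A=0 behaviour.
direct_effect = p_y_do(1, mediator_a=0) - p_y_do(0, mediator_a=0)

# Total effect: flip A and let the change propagate through M as well.
total_effect = p_y_do(1, mediator_a=1) - p_y_do(0, mediator_a=0)

print(f"direct (path-specific) effect of A on Y: {direct_effect:.2f}")  # 0.20
print(f"total effect of A on Y:                  {total_effect:.2f}")   # 0.26
```

A non-zero effect along the direct edge flags a path from the sensitive attribute to the decision that one would typically want to call out as unfair.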

Adversarial Learning of Causal Graphs

This paper presents a new causal discovery method, Structural Agnostic Modeling (SAM), which aims at recovering full causal models from continuous observational data in a multivariate non-parametric setting.

Causal Regularization

This work proposes a causal regularizer that steers predictive models towards causally interpretable solutions, and theoretically studies its properties. The causal regularizer can be used together with neural representation learning algorithms to detect multivariate causation, a situation common in healthcare.

Neural Granger Causality

This paper introduces a class of nonlinear Granger causality methods that apply structured multilayer perceptrons (MLPs) or recurrent neural networks (RNNs) combined with sparsity-inducing penalties on the weights; a minimal sketch of the idea follows.
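Below is a hedged PyTorch sketch of the general recipe, with network sizes, lag counts and penalty weights chosen arbitrarily (this is not the authors' code): one small MLP predicts a target series from the lagged values of all series, and a group-lasso penalty on the first-layer weights can push entire input series to zero, which is then read as "no Granger-causal influence" from that series.

```python
# Sketch of neural Granger causality: a per-target MLP plus a group-sparsity
# penalty that groups first-layer weights by input series.
import torch
import torch.nn as nn

class ComponentMLP(nn.Module):
    def __init__(self, n_series: int, n_lags: int, hidden: int = 16):
        super().__init__()
        self.n_series, self.n_lags = n_series, n_lags
        self.net = nn.Sequential(
            nn.Linear(n_series * n_lags, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_lagged):                 # x_lagged: (batch, n_series * n_lags)
        return self.net(x_lagged).squeeze(-1)

    def group_penalty(self):
        # Group the first-layer columns by input series (inputs assumed ordered
        # series-major) and sum the L2 norms of the groups: a group lasso.
        w = self.net[0].weight                   # (hidden, n_series * n_lags)
        groups = w.view(w.shape[0], self.n_series, self.n_lags)
        return groups.norm(dim=(0, 2)).sum()

# Toy usage: predict one target series from lagged values of 3 series.
torch.manual_seed(0)
n_series, n_lags, batch = 3, 5, 64
model = ComponentMLP(n_series, n_lags)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

x = torch.randn(batch, n_series * n_lags)        # stand-in for lagged history
y = torch.randn(batch)                           # stand-in for the target series

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) + 0.1 * model.group_penalty()
    loss.backward()
    opt.step()

# Series whose weight group has (near-)zero norm are declared non-causes of the target.
print(model.net[0].weight.view(-1, n_series, n_lags).norm(dim=(0, 2)))
```

In practice, exact zeros usually require a proximal update rather than the plain Adam step used above, which only shrinks the groups towards zero.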

Future Direction

Researchers believe that by understanding cause-effect relationships, machines can mimic human-like intelligence: not just playing chess or recommending movies, but also engaging in genuine intellectual thought processes.

If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough—and this is a mathematical fact, not opinion.

Judea Pearl

Researchers like Pearl insist that machine learning has enjoyed unexpected success without paying attention to fundamental theoretical impediments. Though there is active research into new tools and methods to make AI more explainable and intelligent, scepticism from top researchers will only strengthen these systems, not the other way around.

However, since innovations like AI are still in their infancy, they need support from all quarters, both financial and moral. If researchers play down these innovations too vociferously, there is a risk of another AI winter in the coming decades.

Not taking either side of the argument, Yoshua Bengio, another Turing Award-winning researcher, expresses his concern about getting lost in championing deep learning and winning small battles. He insists on respecting current state-of-the-art algorithms for what they have accomplished while moving on to building systems that deal with reasoning and causality, to attain artificial general intelligence in its truest sense.
