by John McDowall

An achievable view of artificial intelligence

Opinion
Oct 04, 2019
Artificial Intelligence, Enterprise, Technology Industry

Artificial intelligence has been the holy grail of computing for half a century. And like the mythical cup, it always remains just out of reach. But there are ways to deploy a measure of real artificial intelligence that yield tangible benefits.

Credit: Zapp2Photo / Getty Images

Artificial intelligence (AI) has been just over the horizon for decades now. From the cautionary tale of AI run amok in Stanley Kubrick’s 2001: A Space Odyssey, to the benign computerized assistant that helped Captain Kirk “boldly go where no man had gone before” in Star Trek, the 1960s were filled with visions of an AI-enhanced future that still hasn’t materialized a half-century later. But today we are assured that, despite the slow progress of the early years, we are truly on the cusp of realizing the vision of practical AI. It seems that every product that includes software advertises itself as leveraging the power of AI. With so much hype, a sober consideration of reality is in order.

The ultimate next big thing

We are in the midst of another wave of excitement about artificial intelligence. In some quarters, artificial intelligence is being touted as finally on the verge of reality thanks to recent advances in processing power. Once again, the public is being regaled with visions of how our lives will be completely upended and millions of workers will be replaced by intelligent machines. The ready availability of inexpensive Graphics Processing Units has made Convolutional Neural Networks commercially practical for some uses, such as image recognition. And almost every product on the market that includes software is advertised as “powered by AI.” But before you bet your technical future on the brave new world of AI, a clear-eyed look at some facts would be wise.

Practical AI has been “the next big thing” for a long time now. It has promised to relieve us of many mundane day-to-day tasks while simultaneously helping us achieve feats of science and engineering we can barely imagine. There have also been more dystopian visions of AI displacing wide swaths of the human workforce, leaving millions of people whose jobs have been taken over by AI-powered machines, or even an ultimate war fueled by our AI progeny’s conclusion that humans are superfluous and inefficient, as envisioned in the Terminator movies.

Neither of these visions is likely for a long time to come. Almost twenty years ago, a colleague offered the opinion that AI stood for “Ain’t Invented.” He was right then, and he would be just as right saying it today. There are practical applications of AI that are coming of age. However, those applications are limited in noteworthy ways that make general-purpose AI just as remote as ever.

Defining artificial intelligence

When most of us hear the term “artificial intelligence,” we tend to think of the kind of science fiction AI that can respond to somewhat ambiguous voice commands and perform complex calculations and feats of logic. These impressive machines draw conclusions that we cannot reach ourselves because of our limited memory and slower reasoning abilities. This is a slippery and imprecise definition for the simple reason that we have a very hard time defining “intelligence.”

Set aside the purported measures of intelligence such as the Intelligence Quotient (IQ) test or academic achievement tests such as those used for college admissions. Most of us know people that we consider very intelligent who do not score well on those tests for a variety of reasons. Instead, for purposes of this discussion, let us consider intelligence the combination of the ability to store and recall basic facts, to relate and reason about those facts, and to apply creative solutions to new situations.

This definition is admittedly limited and imprecise, but so is our understanding of this thing we call the human mind. Indeed, the human mind is so far beyond our current understanding that we cannot even agree on what it means to think. But this definition of intelligence will serve the needs of the discussion of AI capabilities that follows.

To really appreciate just how far we are from understanding what intelligence truly is, I recommend reading Douglas Hofstadter’s seminal work on the topic, Gödel, Escher, Bach: An Eternal Golden Braid. Written forty years ago, this book breaks down what it means to think at the lowest level, and delves into the mind-boggling layers of abstraction that exist between even simple mathematical concepts and how we think about them when we use arithmetic in our daily lives. I found it a very enlightening read, and it convinced me that general-purpose AI is much further away than I had previously believed. Consider this: to program a computer to perform mathematical calculations correctly, we must understand every aspect of those calculations with astonishing precision. If we do not even understand what intelligence is, how can we possibly program a computer to be truly intelligent?

Achievable artificial intelligence

We cannot program a computer to be truly intelligent, but we can program a computer to have limited intelligence, particularly in a specialized field. IBM’s Watson is probably the most famous example of such a machine, but even Watson has some significant limitations. Regardless, most enterprises do not have the resources needed to mount a Watson-scale AI project.

But there is another avenue to achieving some of the benefits of AI within the limited techniques available to us. The most basic first step is to improve your data modeling. As I described in an earlier missive, defining your data model in the form of an ontology is always a good idea because it helps you define both the syntax and the semantics of the data. But it has additional benefits in the form of basic AI capabilities.

Data modeled using the Web Ontology Language (OWL) is documented in a format that enables machine reasoning, a simple but powerful form of rudimentary artificial intelligence. Because OWL is grounded in a branch of formal logic called Description Logics, it lends itself to a number of logic-based reasoning processes that are both powerful and explainable. The ability to explain how an AI process arrived at its result is increasingly important as AI-powered applications are deployed in domains such as medicine and military operations. Before making any important decision on the basis of an AI-assisted recommendation, users rightly want to understand how the AI arrived at that conclusion. This is driving the demand for “explainable AI.”

A Convolutional Neural Network (CNN) can perform impressive feats of image recognition, but it is difficult to trace exactly how it arrives at its decisions. There are many layers of classification and comparison, and the end result is both remarkably accurate and consistent. But that does not mean we can truly explain how the CNN arrived at its decision for each image.

In contrast, when a data model is formalized using OWL or another formal logic representation (e.g., Common Logic), we can write inference rules and apply them using the rules of formal logic. Consider a simple example: We create a simple data model that contains one class, “Person,” with two attributes, “name” and “sex.” We establish two relationships that can exist among instances of the Person class: “has_parent” and “has_sibling.” With this simple model we can store data such as PersonA has_sibling PersonB and PersonB has_parent PersonC.
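
To make that concrete, here is a minimal sketch in plain Python, not in OWL itself, of what those stored facts look like as subject-predicate-object triples. The names follow the example above; the triple list is just an illustrative stand-in for a real ontology store, and I have added a sex value for PersonB so the rule in the next paragraph has something to fire on.

```python
# Illustrative only: a plain-Python stand-in for facts that would normally be
# stored in an OWL ontology. Each entry is a (subject, predicate, object) triple.
facts = [
    ("PersonA", "has_sibling", "PersonB"),
    ("PersonB", "has_parent",  "PersonC"),
    ("PersonB", "sex",         "Female"),   # attribute values are triples too
]
```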

A simple rule such as “if person1 has_sibling person2 and person2 sex=Female, then person1 has_sister person2” lets us infer new knowledge about every person in the database (remember, the original data model did not include the concept of a sister). We can use similar rules to infer relationships such as grandparent, brother, cousin, and many others. And because this is all built on formal logic, the result is fully explainable. In fact, the result is more than explainable—it is provably correct.
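
Continuing the plain-Python sketch (again, an illustration of the logic rather than an actual OWL reasoner), the sister rule can be applied mechanically over those triples, and each inferred fact can carry the premises that justify it, which is exactly what makes the conclusion explainable:

```python
# The has_sister rule from the text, applied over the triples from the previous
# sketch (repeated here so this example runs on its own).
facts = [
    ("PersonA", "has_sibling", "PersonB"),
    ("PersonB", "has_parent",  "PersonC"),
    ("PersonB", "sex",         "Female"),
]

def infer_sisters(triples):
    """If person1 has_sibling person2 and person2 sex=Female,
    infer person1 has_sister person2."""
    inferred = []
    for (s1, p1, o1) in triples:
        if p1 != "has_sibling":
            continue
        for (s2, p2, o2) in triples:
            if s2 == o1 and p2 == "sex" and o2 == "Female":
                premises = [(s1, p1, o1), (s2, p2, o2)]
                inferred.append(((s1, "has_sister", o1), premises))
    return inferred

for conclusion, premises in infer_sisters(facts):
    # prints the inferred ('PersonA', 'has_sister', 'PersonB') fact together
    # with the two premises that justify it
    print(conclusion, "because", premises)
```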

This kind of inference can be done using off-the-shelf reasoners, both commercial and open source. Ontology editing tools such as Protégé can employ a number of reasoning engines, such as HermiT and Pellet. There are many database and analysis products that support such logic-based reasoning, and when they are properly configured their performance is comparable to that of other database technologies.
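
As one illustration of how such a reasoner can be driven programmatically, here is a rough sketch using the open source owlready2 Python package, which bundles the HermiT reasoner. The family.owl IRI, the Woman class, and the property names are my own illustrative choices rather than anything from the example above, and the exact API details should be checked against the owlready2 documentation.

```python
# A rough sketch (assumed owlready2 API; verify against its documentation)
# showing an off-the-shelf reasoner classifying individuals in a small ontology.
from owlready2 import get_ontology, Thing, DataProperty, ObjectProperty, sync_reasoner

onto = get_ontology("http://example.org/family.owl")  # illustrative IRI

with onto:
    class Person(Thing):
        pass

    class sex(DataProperty):
        domain = [Person]
        range  = [str]

    class has_sibling(ObjectProperty):
        domain = [Person]
        range  = [Person]

    # A "defined" class: any Person whose sex is "Female" should be
    # classified as a Woman by the reasoner.
    class Woman(Person):
        equivalent_to = [Person & sex.value("Female")]

    person_a = Person("PersonA")
    person_b = Person("PersonB", sex=["Female"])
    person_a.has_sibling = [person_b]

with onto:
    sync_reasoner()  # invokes the bundled HermiT reasoner

print(list(Woman.instances()))  # expect PersonB to be classified as a Woman
```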

Logic-based reasoning will not give you an AI system that can discuss the finer points of Hegelian philosophy or write piano sonatas. But it can give you an AI system that can perform many routine data processing tasks. And more importantly, you will have an AI system whose workings can be explained to skeptics.