XAI: Moving from Artificial Intelligence to Explainable Artificial Intelligence

Artificial Intelligence and Machine Learning are trending terms nowadays. They may seem closely related, and they are in fact connected: Artificial Intelligence (AI) is intelligence demonstrated by machines, while Machine Learning is one specific way of implementing AI.
We can think of an AI task as an input-and-output problem. For example: tell me what the object in this image is. This is a typical image recognition/classification problem, which requires assigning the input image to one of several classes. Supervised Machine Learning methods can be used to solve such problems. Supervised machine learning involves providing the machine with multiple input-output pairs completed by humans; the AI can then infer a function from this data and apply that function to new inputs. This is just one type of machine learning, and there are many other Machine Learning methods, but we are not going to go through them here. Today, we will focus on what AI is and the connection between AI and explainable artificial intelligence (XAI).
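To make this concrete, here is a minimal sketch of supervised learning, assuming scikit-learn is available; the tiny “fruit” dataset and its feature values are made up purely for illustration.

```python
# A minimal sketch of supervised learning using scikit-learn purely as an
# illustration. The toy "fruit" data below is made up: each example is a
# (weight in grams, colour score) pair labelled by a human.
from sklearn.linear_model import LogisticRegression

# Human-labelled input-output pairs: features -> label.
X_train = [[150, 0.8], [170, 0.9], [140, 0.7],   # apples
           [120, 0.2], [110, 0.1], [130, 0.3]]   # lemons
y_train = ["apple", "apple", "apple", "lemon", "lemon", "lemon"]

# The model infers a function from the labelled data...
model = LogisticRegression()
model.fit(X_train, y_train)

# ...and applies that function to a new, unlabelled input.
print(model.predict([[145, 0.75]]))        # e.g. ['apple']
print(model.predict_proba([[145, 0.75]]))  # class probabilities
```

The model is never given an explicit rule for telling apples from lemons; it infers one from the labelled examples and then applies it to the new input.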
Why is an image recognition problem considered an AI task, and why does it require machine learning? While the task is easy for a human, it is difficult for a machine. These types of tasks are called ‘knowledge tasks’, and their answers are not well-defined. This is the opposite of the problems machines usually solve, which are concrete tasks; an example of a concrete task is 1 + 1, which has the well-defined answer ‘2’. Typically, a knowledge task is a problem that may have more than one correct output or multiple arbitrary routes to a correct output. A typical example of a knowledge task is translating a document written in Japanese into English: it’s difficult to define a correct solution path, and there’s no single exactly correct answer.
An example of machine learning applied to image recognition would be classifying an image of a panda. A human can see the image and immediately provide the correct classification. To make a machine do this, we provide the panda image as input to a machine learning image recognition model such as Inception-v3. The model has been trained through supervised learning, in which the machine learns to identify specific features from a large database of labeled images. When asked to identify a new image, the trained Inception-v3 model will, in turn, generate a sequence of labels, each with an associated number, e.g. [Panda: 0.92, Cat: 0.05, Dog: 0.01, …]. The number in each label-number pair is the machine-determined probability that the image belongs to that specific class.
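As a rough sketch of what this looks like in practice, the snippet below runs a pretrained Inception-v3 model from Keras on an image file; the filename panda.jpg is hypothetical and the printed scores are only illustrative.

```python
# A minimal sketch of image classification with a pretrained Inception-v3,
# assuming TensorFlow/Keras is installed. "panda.jpg" is a hypothetical file.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")   # weights learned via supervised training

# Load the image at Inception-v3's expected input size and preprocess it.
img = image.load_img("panda.jpg", target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# The output assigns a probability to every class; decode the top three.
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])
# Illustrative output: [('n02510455', 'giant_panda', 0.92), ...]
```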
So how does the AI determine that a certain object is a panda with a probability of 0.92? The exact mechanics behind Machine Learning models such as Inception-v3 are highly complex. Simply put, the input is broken down into separate parts by a series of complicated mathematical functions; the results are collated, and the probabilities are then calculated. Until now, the machine has been able to classify objects into a specific class with associated probabilities, but the exact machine logic has remained unknown.
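The very last of those steps, turning the collated scores into probabilities, is one piece we can show directly. Below is a minimal sketch of the softmax function of the kind used by networks such as Inception-v3, with made-up scores for three classes.

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores (logits) into probabilities that sum to 1."""
    exps = np.exp(logits - np.max(logits))  # subtract the max for numerical stability
    return exps / exps.sum()

# Made-up scores for three classes, as if produced by the network's final layer.
logits = np.array([4.2, 1.3, -0.3])
probs = softmax(logits)
print(dict(zip(["panda", "cat", "dog"], probs.round(2))))
# e.g. {'panda': 0.94, 'cat': 0.05, 'dog': 0.01}
```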
XAI will have the ability to tell you the reason for the machine’s decision. Ordinary AI gives you a number, e.g. “Panda: 0.92”. XAI instead provides reasons, e.g. “Panda: [‘It has fur.’, ‘It is black and white.’, …]”, to support its decision. With XAI, the user can understand how and why the machine reached a given output based on its explanations. This is a revolutionary concept and will be applicable to a multitude of knowledge tasks. Most importantly, XAI will improve AI usability and human understanding of AI and the machine learning process. XAI will be discussed in greater detail in later posts.
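To make the contrast concrete, here is a purely illustrative sketch of the two output shapes described above; it is not a real XAI algorithm, and the label, probability, and reasons simply mirror the made-up example in the text.

```python
# A purely illustrative sketch of the two output shapes, not a real XAI method.
# The label, probability and reasons mirror the made-up example in the text.

# Ordinary AI: a bare label-probability pair.
plain_output = {"label": "Panda", "probability": 0.92}

# XAI: the same decision plus human-readable evidence supporting it.
explained_output = {
    "label": "Panda",
    "probability": 0.92,
    "reasons": ["It has fur.", "It is black and white."],
}

print(plain_output)
for reason in explained_output["reasons"]:
    print("-", reason)
```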