How Far Are We From Explainable Artificial Intelligence?

Artificial intelligence (AI) is heralding a revolution in how we interact with technology. Its capabilities have changed how we work, travel, play and live. But this is just the beginning.

The next step is explainable AI (XAI), a form of AI whose actions are more easily understood by humans. So how does it work? Why do we need it? How will it forever change the way industries – especially in marketing – function?

The Mystery of the Black Box: The Problem With Current AI

No one would deny that artificial intelligence produces amazing results. Computers that can not only process vast amounts of data in seconds but also learn, decide and act on their own have turned many industries on their heads – PricewaterhouseCoopers estimates that AI could contribute up to US$15.7 trillion to the global economy by 2030. However, in its current form, AI does have one major weakness: explanation.

Namely, it can’t explain its decisions and actions to humans. This is often referred to as the “black box” problem in machine learning: the calculations and decisions are carried out behind the scenes, with no rationale given as to why the AI arrived at a particular conclusion.
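
To make the black box concrete, here is a minimal sketch of the problem. The article names no tools, so scikit-learn and a small neural network are purely illustrative assumptions; any opaque model behaves the same way, handing back an answer but no rationale:

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Train an opaque model on synthetic data (a toy stand-in for a real pipeline)
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X, y)

# The model returns a prediction and a confidence score, but nothing about *why*
print(model.predict(X[:1]))        # e.g. [1]
print(model.predict_proba(X[:1]))  # e.g. [[0.02 0.98]]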

Why is this a problem? A system that cannot justify its output doesn’t engender trust, which in turn casts doubt on its actions. Explainable AI is expected to solve that.

How XAI Works

XAI is much more transparent. The humans interacting with the AI are informed not only of what decisions it reached and what actions it will take, but also of how it came to those conclusions based on the available data. It aims to do this while maintaining a high level of learning performance.

Current AI takes data into its machine learning process and produces a learned function, leaving the user with a number of questions: Why did it do that? Why didn’t it do something else? When will it succeed, and when will it fail? How can I trust it? And how do I correct an error?

By contrast, XAI uses a new machine learning process to produce an explainable model with an explainable interface. This should answer all the questions above.
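
As a contrast to the black-box sketch above, here is a minimal illustration of what an explainable interface could look like. It assumes scikit-learn and a decision tree – the simplest kind of interpretable model, and our own illustrative choice rather than anything the article prescribes – and replays the rules behind a single prediction:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

sample = data.data[:1]
path = tree.decision_path(sample).indices  # the nodes this sample passed through

# Replay the path as plain-language rules: a crude "explainable interface"
for node in path:
    left, right = tree.tree_.children_left[node], tree.tree_.children_right[node]
    if left == right:  # leaf node: the final decision
        print("-> predicted:", data.target_names[tree.predict(sample)[0]])
    else:
        feature = data.feature_names[tree.tree_.feature[node]]
        threshold = tree.tree_.threshold[node]
        went_left = sample[0, tree.tree_.feature[node]] <= threshold
        print(f"{feature} {'<=' if went_left else '>'} {threshold:.2f}")

Instead of a bare class label, the user sees the sequence of threshold checks that led to it – a simple, human-readable answer to “Why did it do that?”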

The approach carries its own risks, however. Any decision made by an AI is only as good as the data used to make it. While XAI increases trust in the decision made, that trust could be misplaced if the underlying data is unreliable.

Another problem is how well the AI explains its decisions. If the explanation is not comprehensible to the user – who could be a lay person with no technical background – it will be worthless. Solving this will involve scientists working with user interface experts, along with complex work on the psychology of explanation.

Risk, Trust and Regulation: Why We Need XAI

In so-called “big ticket” decisions – in the military, in finance, in safety-critical systems in autonomous vehicles and in diagnostic decisions in healthcare – the risk factor is high. Hence it is crucial that the AI explains its decisions in order to boost trust and confidence in its ability. However, there are also a host of benefits for businesses in other industries.

XAI can address pressures like regulation, as it will enable full transparency in case of an audit. It will encourage best practice and ethics by explaining why each decision is the right one morally, socially and financially. It will also reinforce confidence in the business, which will reassure shareholders.

It will also put businesses in a stronger position to foster innovation: the more advanced the AI, the more capable it is of supporting innovative uses and new abilities. Interacting with AIs will soon be standard business practice in many industries, including marketing, so it is vital that users can do so comfortably and with confidence.

Experts think this will empower marketers, effectively turning AI into a co-worker rather than a tool.

“In order to trust AI, people need to know what the AI is doing,” says Hsuan-Tien Lin, Chief Data Scientist, Appier. “Much like how AlphaGo is showing us new insights on how to play the board game Go, explainable AI could show marketers new insights on how to conduct marketing. For instance, AI can reach the right audience at the right time now, but if future XAI can explain this decision to humans, it would help marketers understand their audience more deeply and plan for better marketing strategies.”

It could also usher in a new way of working, with marketers accepting or rejecting XAI’s explainable suggestions with reasons in order to help the AI learn. “Today, it is likely that many great suggestions are rejected because they are not explained, and so humans overlook their power,” says Min Sun, Chief AI Scientist, Appier. However, those days could soon be over.

The Defense Advanced Research Projects Agency (DARPA) is currently running an XAI program that is due to conclude in 2021. The program is expected to enable “third-wave AI systems”, in which machines build underlying explanatory models that describe real-world phenomena based on their understanding of the context and operating environment. Other experts also predict XAI will become a reality within three to five years.

XAI is no doubt the next step for AI, improving trust, confidence and transparency. Businesses would be wise not to overlook its potential.
