How Far Are We From Explainable Artificial Intelligence?
Artificial intelligence (AI) is heralding a revolution in how we interact with technology. Its capabilities have changed how we work, travel, play and live. But this is just the beginning.
The next step is explainable AI (XAI), a form of AI whose actions are more easily understood by humans. So how does it work? Why do we need it? How will it forever change the way industries – especially in marketing – function?
The Mystery of the Black Box: The Problem With Current AI
No one would deny that artificial intelligence produces amazing results. Computers that can not only process vast amounts of data in seconds but also learn, decide and act on their own have turned many industries on their heads – according to PricewaterhouseCoopers, AI could contribute up to US$15.7 trillion to the global economy by 2030. However, in its current form, AI has one major weakness: explanation.
Namely, it cannot explain its decisions and actions to humans. This is often referred to as the “black box” of machine learning: the calculations and decisions are carried out behind the scenes, with no rationale given as to why the AI arrived at a particular conclusion.
Why is this a problem? It doesn’t engender trust in the AI, which in turn raises doubt about its actions. Explainable AI is expected to solve that.
How XAI Works
XAI is much more transparent. The human actors interacting with the AI are informed not only of what decisions it reached and actions it will take, but how it came to those conclusions based on the available data. It aims to do this while maintaining a high level of learning performance.
Current AI takes data into its machine learning process and produces a learned function, leaving the user with a number of questions: Why did it do that? Why didn’t it do something else? When will it succeed, and when will it fail? How can I trust it? And how do I correct an error?
By contrast, XAI uses a new machine learning process to produce an explainable model with an explainable interface. This should answer all the questions above.
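The contrast can be sketched in a few lines of code. The example below is a toy illustration only – the feature names, weights and scoring model are hypothetical, not any real XAI system. A black-box scorer returns a bare number, while an explainable scorer returns the same number together with each feature’s contribution, which is the kind of rationale an explainable interface would surface to the user.

```python
# Hypothetical feature weights for a toy audience-scoring model.
# Purely illustrative -- not a real marketing model.
WEIGHTS = {"recency": 0.5, "frequency": 0.3, "spend": 0.2}

def black_box_score(user):
    # Current AI: a score with no rationale attached.
    return sum(WEIGHTS[f] * user[f] for f in WEIGHTS)

def explainable_score(user):
    # XAI-style: the score plus each feature's contribution,
    # so the user can see *why* this score was produced.
    contributions = {f: WEIGHTS[f] * user[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

user = {"recency": 0.9, "frequency": 0.4, "spend": 0.1}
score, why = explainable_score(user)
print(f"score: {score:.2f}")
for feature, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {value:.2f}")
```

In this additive sketch the explanation is exact; for real models (deep networks, ensembles), producing faithful explanations like this is precisely the hard research problem XAI aims to solve.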
This carries its own risks. Any decision made by an AI is only as good as the data used to make it. While XAI increases trust in the decision made, that trust could be misplaced if the data is unreliable.
Another problem is how well the AI explains its decisions. If it is not comprehensible to the user – who could be a lay person with no technical background – the explanation will be worthless. Solving this will involve scientists working with UI experts, along with complex work on the psychology of explanation.
Risk, Trust and Regulation: Why We Need XAI
In so-called “big ticket” domains – military operations, finance, safety-critical systems in autonomous vehicles, diagnostic decisions in healthcare – the risk factor is high, so it is crucial that the AI explains its decisions in order to build trust and confidence in its ability. But there are also a host of benefits for businesses in other industries.
XAI can address pressures like regulation, as it will enable full transparency in case of an audit. It will encourage best practice and ethics by explaining why each decision is the right one morally, socially and financially. It will also reinforce confidence in the business, which will reassure shareholders.
It will also put businesses in a stronger position to foster innovation: the more advanced the AI, the greater its potential for innovative uses and new capabilities. Interacting with AIs will soon be standard business practice in many industries, including marketing, so it is vital that users can do so comfortably and with confidence.
Experts think this will empower marketers, effectively turning AI into a co-worker rather than a tool.
“In order to trust AI, people need to know what the AI is doing,” says Hsuan-Tien Lin, Chief Data Scientist, Appier. “Much like how AlphaGo is showing us new insights on how to play the board game Go, explainable AI could show marketers new insights on how to conduct marketing. For instance, AI can reach the right audience at the right time now, but if future XAI can explain this decision to humans, it would help marketers understand their audience more deeply and plan for better marketing strategies.”
It could also usher in a new way of working, with marketers accepting or rejecting XAI’s explainable suggestions, with reasons, in order to help the AI learn. “Today, it is likely that many great suggestions are rejected because they are not explained, and so humans overlook their power,” says Min Sun, Chief AI Scientist, Appier. However, those days could soon be over.
The Defense Advanced Research Projects Agency (DARPA) is currently running an XAI program through 2021. The program is expected to enable “third-wave AI systems”, in which machines build underlying explanatory models that describe real-world phenomena based on their understanding of the context and operating environment. Other experts predict XAI will become a reality within three to five years.
XAI is without doubt the next step for AI, improving trust, confidence and transparency. Businesses would be wise not to overlook its potential.