
Explainable AI: Making the Black Box of AI Into a Glass Box

For many people, artificial intelligence (AI) is an unexplainable, uninterpretable black box that takes millions, or even billions, of inputs and delivers an answer we are supposed to trust and act on. However, because the impact of those outputs can be wide-reaching, there is a growing movement towards explainable AI, or XAI.

AI models deliver their outputs by using training data and algorithms. When new information is fed into an AI model, it uses that information to infer a response. For example, when a customer visits an online store, data about that customer, such as previous purchases, browsing history, age, location and other demographic information, will be used to make recommendations.

“If marketers want to segment different groups of customers in order to target them with different offers, they can use AI,” explains Dr Shou-De Lin, Appier’s Chief Machine Learning Scientist. “By understanding how and why an AI model categorizes customers, they can design and implement better marketing strategies on each of those different groups.”

For example, a marketer can use AI to segment customers into three groups – guaranteed buyers, hesitant buyers and window shoppers – and then decide on a different action for each. Guaranteed buyers could be targeted with upselling, while hesitant buyers might be sent a discount code or voucher to increase the likelihood of purchase. For those who are definitely not buying, the marketer will probably do nothing.
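As a minimal illustration of this segmentation-to-action mapping, here is a hypothetical Python sketch. The probability thresholds, segment names and actions are illustrative assumptions, not Appier's actual model.

```python
# Hypothetical sketch: map a model's predicted purchase probability to one
# of the three segments described above, then look up a follow-up action.
# Thresholds (0.8, 0.3) are made up for illustration.

def segment_customer(purchase_probability: float) -> str:
    """Assign a customer to a segment based on predicted purchase probability."""
    if purchase_probability >= 0.8:
        return "guaranteed buyer"
    if purchase_probability >= 0.3:
        return "hesitant buyer"
    return "window shopper"

# One possible action per segment, matching the text above
ACTIONS = {
    "guaranteed buyer": "show upsell recommendations",
    "hesitant buyer": "send discount code",
    "window shopper": "no action",
}

for p in (0.95, 0.5, 0.1):
    seg = segment_customer(p)
    print(f"p={p}: {seg} -> {ACTIONS[seg]}")
```

In a real system the probability would come from a trained model; the point of the sketch is that the segment boundaries and the segment-to-action mapping are themselves simple, inspectable rules a marketer can reason about.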

By understanding how the AI performs the market segmentation, marketers can devise appropriate strategies for each group.


Why Explainable AI Matters to Marketers

The level of explainability an AI model needs depends on what marketers want to understand. While they might not be concerned with the mechanics of the algorithms used, they may want to understand which features, or inputs to the system, influence the model's suggestions, so they can plan follow-up actions.

For example, AI can predict that a customer is hesitant based on different signals. It may be because the mouse has moved across an item many times, or because the customer has left an item in the shopping cart for a long time without checking out. The appropriate actions for those two scenarios may differ. For the former, marketers can simply recommend a set of items similar to the one receiving attention; for the latter, marketers might offer limited-time free shipping to trigger the final purchase.
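This signal-to-action logic can be sketched in a few lines of Python. The signal names and thresholds below are purely hypothetical; they simply show that once the model's reason is known, choosing the follow-up action is straightforward.

```python
# Hypothetical sketch: pick a follow-up action based on which signal drove
# the "hesitant" prediction. Signal names and thresholds are assumptions.

def follow_up_action(signals: dict) -> str:
    """Choose an action for a hesitant customer, given the driving signal."""
    if signals.get("hover_count", 0) > 5:
        # repeated mouse attention on an item -> show similar items
        return "recommend similar items"
    if signals.get("hours_in_cart", 0) > 24:
        # item sitting in cart without checkout -> nudge toward purchase
        return "offer limited-time free shipping"
    return "no action"

print(follow_up_action({"hover_count": 8}))     # repeated hovering
print(follow_up_action({"hours_in_cart": 48}))  # stale shopping cart
```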

“Marketers need to know the key factors driving the model’s decision. Understanding the algorithms may be very challenging, but knowing which factors are driving decisions makes the model more interpretable,” says Dr Lin.

When we talk about explainable AI, it does not have to be about understanding the intricacies of the entire model, but understanding what factors can influence the output of that model. There is a significant difference between understanding how a model works and understanding why it gives a particular result.

XAI allows the owner or user of a system to explain the AI model’s decision-making process, understand the strengths and weaknesses of the process, and give an indication of how the system will continue to behave.

In image recognition, telling the AI model to focus on specific areas of a photo can drive different results. By understanding what parts of the image are most likely to drive the model to deliver a particular outcome or decision, users can better explain and interpret the actions of the AI model.

As well as aiding decision making around strategies, XAI allows marketers and other users of AI models to explain results to management and other stakeholders. This can be useful when justifying the outputs of a model and why a particular strategy is being used.

It is important to understand that some AI models are easier to explain than others. Researchers have noted that algorithms such as decision trees and Bayesian classifiers are more interpretable than deep learning models such as those used in image recognition and natural language processing. There is a trade-off between accuracy and explainability: as models become more complex, it becomes harder for non-experts to explain how they work, though more complex models usually achieve better performance.


Explainable AI and Bias in AI Models

“Bias exists in all AI models,” says Dr Lin. “This is because the training data can contain bias. And the algorithms can also be designed with bias, either intentionally or accidentally. However, not all AI bias is negative.”

Bias can be leveraged to make accurate predictions, but it needs to be used carefully where it applies to sensitive areas such as race and gender.

“Explainable AI can help us to distinguish whether a model is using good bias or bad bias to make a decision,” he explains. “It also tells us which factors are more important when the model makes the decision. XAI doesn’t detect bias, but it allows us to understand why the model makes that particular decision.”

Explainable AI also lets us understand whether bias comes from the data the AI model is trained on, or from how the model weights different labels, he adds.


A Matter of Trust

For many people, AI appears to be a black box where data enters, and an output or action appears as the result of an opaque collection of algorithms. That can lead to distrust when the model delivers a result that may, at first, seem counter-intuitive or even wrong.

“XAI makes these models more understandable and reasonable to humans, and so everybody can look into the result and determine whether they want to use it or not,” explains Dr Lin. “XAI brings humans into the decision-making loop and allows people to be the last step before a final decision is made. It makes the entire process more trustworthy.”

He adds that, in the near future, we can expect AI models to provide explanations of how they came to their decisions. Those decisions can then be judged, increasing the accountability of the developers creating the models, and making the models traceable rather than opaque black boxes.


Creating More Explainable AI Models

Academia has proposed a number of methods to make AI models easier to explain.

“Some models are easier to explain than others. Deep learning models, for example, can be very hard to explain. To address this, some research proposes using proxy models to mimic the behavior of these deep learning models. These proxy models are more explainable,” says Dr Lin.
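The proxy-model idea can be sketched as follows: query the opaque model, fit a simple surrogate to its input-output behavior, and read the surrogate's coefficients as the explanation. Both the black box and the one-variable least-squares fit below are deliberately minimal assumptions, not any specific published method.

```python
# Surrogate (proxy) model sketch: approximate an opaque model with a
# simple linear model whose coefficients are easy to interpret.

def black_box(x: float) -> float:
    # stand-in for a deep model we cannot inspect (assumed for illustration)
    return 2.0 * x + 1.0 + 0.001 * x * x

# Query the black box over a range of inputs
xs = [float(i) for i in range(10)]
ys = [black_box(x) for x in xs]

# Fit the surrogate y ~ a*x + b by ordinary least squares
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x
print(f"surrogate: y = {a:.2f}*x + {b:.2f}")
```

The surrogate's slope and intercept give a human-readable summary of how the black box responds to its input, at the cost of ignoring the small nonlinear term, which is exactly the accuracy-for-explainability trade-off described earlier.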

Another way, he adds, is to build models that are more explainable by design. For example, using fewer parameters in a neural network may deliver similar accuracy with less complexity, making the model more explainable.

With more and more businesses deploying AI, it is critical to understand how these models work so that decisions can be understood, any unwanted bias can be recognized, and systems can be trusted. XAI takes the black box of AI and machine learning and makes it into a glass box.


* Are you looking to find out more about how artificial intelligence can help elevate your marketing efforts? Get in touch with our team today for an exclusive consultation. In the meantime, check out our other blog posts and white papers for more thought-leadership content and best practices. 



