Fighting Fire With Fire: Why AI Is the Best Security Defense for Your AI System
Author | Min Sun, Chief AI Scientist, Appier
Breakthroughs such as deep learning for visual recognition and natural language processing underpin much of the excitement in artificial intelligence (AI) today. However, like any new technology, AI comes with its share of security concerns. While breakthrough technologies can revolutionize business and the way we work, they must be handled carefully to avoid errors, misuse or worse.
Thankfully, that very same technology could hold the key to making AI more robust.
A Double-Edged Sword: Why AI’s Biggest Strength Is Also Its Biggest Risk
Remember that any kind of software system has its security concerns – it is not just AI. However, AI has two unique properties that make security more pressing.
The first is its power. AI systems are typically built to increase human productivity – they are much more efficient than humans, especially at performing repetitive tasks. So, if malicious actors were to take control of such a system, their productivity would also greatly increase. This is a double-edged sword – AI’s immense power is its biggest strength, but this also makes it more dangerous if it falls into the wrong hands.
This danger is magnified as AI becomes more common. AI systems will soon be widespread across all kinds of industries, and if those tools fall under the control of malicious actors, the potential for harm grows with them.
The second property is AI’s reliance on data. Most AI systems are data-driven – they need data in order to reach their decisions. That means malicious actors don’t need to take control of an AI system in order to compromise it – they can just manipulate the data instead. If they pollute, alter or compromise the data source, the AI system will become much less effective. So it is not just the AI system that needs protecting, but the source data too.
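To make the data-poisoning risk concrete, here is a minimal sketch. Everything in it is an illustrative assumption rather than a real attack: a one-dimensional toy dataset, a brute-force threshold "model", and an attacker who simply relabels a couple of training points.

```python
# Toy 1-D training set: the true rule is "label 1 when the feature is positive".
X = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]
y = [0, 0, 0, 1, 1, 1]

def fit_threshold(xs, ys):
    """Brute-force the threshold t that best fits 'predict 1 when x > t'."""
    best_t, best_acc = None, -1.0
    for t in sorted(xs) + [min(xs) - 1.0]:
        acc = sum(int(x > t) == label for x, label in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

clean_t = fit_threshold(X, y)              # learns a boundary near zero

# Poisoning: an attacker relabels the two largest positives as negative.
y_poisoned = [0, 0, 0, 1, 0, 0]
poisoned_t = fit_threshold(X, y_poisoned)  # boundary dragged far to the right
```

With the poisoned labels, the learned threshold moves from -1 to 3, so the retrained model misclassifies every genuine positive – and the attacker never touched the system itself, only its data.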
A New Era of Security Threats: Two Types of Attack
So how do malicious actors manipulate the data to attack AI systems? Broadly speaking, there are two types of attack: black box and white box.
In a black box attack, the attacker has no visibility into the AI system’s internals, so they first need to collect data on it. By observing roughly 1,000 examples of the input-output relationship, they can infer what is likely inside the system and use that inference to craft an attack. The more data they collect from your AI system, the more likely it is that the attack will succeed. A black box attack is therefore more likely against a system that has been running for a long time, because the attacker has more examples to draw on.
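A hedged sketch of this idea: the snippet below plays both roles with toy stand-ins – a hidden linear "victim" model the attacker can only query, and a surrogate perceptron trained purely on harvested input-output pairs. The victim, the perceptron and all the numbers are illustrative assumptions, not a real attack tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "victim" model: the attacker can query it but not inspect it.
SECRET_W = np.array([2.0, -1.0])

def victim_predict(x):
    return int(x @ SECRET_W > 0)

# Step 1: harvest roughly 1,000 input-output pairs by querying the black box.
queries = rng.normal(size=(1000, 2))
labels = np.array([victim_predict(x) for x in queries])

# Step 2: train a surrogate (a simple perceptron) on the harvested pairs.
w = np.zeros(2)
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = int(x @ w > 0)
        w += (y - pred) * x  # standard perceptron update

# Step 3: measure how closely the surrogate mimics the victim on fresh inputs.
test_points = rng.normal(size=(500, 2))
agreement = np.mean([int(x @ w > 0) == victim_predict(x) for x in test_points])
```

Once the surrogate agrees with the victim most of the time, the attacker can craft attacks against the surrogate offline, and such attacks often transfer back to the real system.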
In a white box attack, the attacker already knows what is inside the system, including its architecture, its parameters and so on. They use this knowledge to change the data just enough to throw the system off. This has a much higher success rate than a black box attack, but it isn’t easy either: it requires the attacker to compromise the system in order to fully understand how it works. Only then can they start manipulating the data. This might seem counterintuitive – once you have hacked into a system, why not just control it directly? The answer is that a white box attack allows for sustained, long-term malicious use, which can prove more damaging in the long run.
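As a minimal white box sketch, consider a toy linear classifier whose weights the attacker knows. The model, the weights and the step size are invented for illustration; real attacks such as the fast gradient sign method (FGSM) apply the same idea to deep networks.

```python
import numpy as np

# White-box assumption: the attacker knows the model's parameters exactly.
W = np.array([1.5, -2.0])

def predict(x):
    return int(x @ W > 0)  # toy linear classifier

x = np.array([0.4, 0.1])   # a legitimate input, classified as 1

# Because the score is x @ W, its gradient with respect to x is just W.
# Stepping each feature against sign(W) lowers the score fastest,
# in the spirit of FGSM.
eps = 0.5
x_adv = x - eps * np.sign(W)  # small, targeted perturbation flips the output
```

The perturbation is small and spread across the features, which is one reason such manipulated inputs can be hard to spot by eye.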
Attackers can also breach a system very quickly and copy it in its entirety. They don’t control the original directly, but they now have an identical replica to experiment with – enough to craft a white box attack.
AI to the Rescue
This all sounds very negative, but there is a silver lining to this particular cloud: AI itself can help protect AI systems from attack.
By studying past attacks using machine learning, you can learn how the system’s behavior changes when an attack is imminent. You then build a model that warns you, or shuts the system down, when certain warning signs are detected. This is far more efficient than having humans watch for those signs – you just need to collect sufficient training data.
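A very simple version of this idea is an anomaly detector fitted to past, benign inputs. The profile below – Gaussian features and a z-score threshold of 4 – is an illustrative assumption, not a production detector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Benign traffic observed during normal operation (the training data).
benign = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))
mean, std = benign.mean(axis=0), benign.std(axis=0)

def alarm(x, threshold=4.0):
    """Fire a warning when an input drifts far from the benign profile."""
    z = np.abs((x - mean) / std)
    return bool(z.max() > threshold)
```

A typical input passes quietly, while a heavily manipulated outlier trips the alarm – at which point the system can alert an operator or shut itself down.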
The problem is, new types of attacks are always being created. In this instance, the machine learning approach won’t work, because the system won’t know what to look out for. However, this could soon change. Research is under way on how to train AI to probe your system to see where the vulnerabilities lie. This is a much more proactive approach than recording training data and teaching the system what to look out for.
Currently, a human will define the AI’s action space in order for it to test for vulnerabilities. It is much harder to do that than just collecting training data – that action space can be pretty large, which complicates things significantly. However, in the future this could be fully automated using AI. Then you have all the advantages of AI – like greater efficiency and productivity – with only a minimal increase in cost.
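A sketch of what such probing might look like with a human-defined action space. The toy model, the three per-feature nudges and the cost measure are all invented for illustration.

```python
import numpy as np
from itertools import product

# Toy stand-in for the deployed model under test.
W = np.array([1.0, 1.0])

def predict(x):
    return int(x @ W > 0)

# Human-defined action space: the nudges the prober may apply to each feature.
ACTIONS = [-0.5, 0.0, 0.5]

def probe(x):
    """Exhaustively search the action space for the cheapest
    perturbation that flips the model's prediction."""
    base = predict(x)
    best = None
    for deltas in product(ACTIONS, repeat=len(x)):
        d = np.array(deltas)
        if predict(x + d) != base:
            cost = np.abs(d).sum()
            if best is None or cost < best[0]:
                best = (cost, d)
    return best  # None means no action in the space flips the output

weakness = probe(np.array([0.3, 0.1]))
```

Even in this two-feature toy the search space already has 3^2 combinations; with realistic inputs it explodes combinatorially, which is exactly why automating the search with AI is so attractive.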
When people think about the dangers associated with AI, they think about movies like The Terminator. Well, don’t worry – we are a long way from that, and AI’s considerable benefits far outweigh the risks.
People and businesses using AI just need to be aware of the security concerns. As with all software, it is good practice to keep your AI system up to date to fix any potential vulnerabilities. You should also probe your system’s weaknesses to see how much the data has to be altered before the system fails. Ideally, you want your system to detect tampering with its data so you can proactively shut it down or switch to a back-up system.
As computer systems become more complicated, it becomes harder for humans to find security vulnerabilities within them. The best human hacker in the world can’t hack a very complex system, but that doesn’t mean the system is flawless. Instead, we should leverage AI to actively probe for vulnerabilities and in turn create more robust systems that better serve our needs.