
Fighting Fire With Fire: Why AI Is the Best Security Defense for Your AI System

Jul 22, 2019

Author | Min Sun, Chief AI Scientist, Appier

Breakthroughs such as deep learning for visual recognition and natural language processing underpin much of the excitement around artificial intelligence (AI) today. But like any breakthrough technology, AI comes with its share of security concerns. It is always the way: while a breakthrough technology can revolutionize business and the way we work, it has to be handled carefully to avoid errors, misuse or worse.

Thankfully, that very same technology could hold the key to making AI more robust.

A Double-Edged Sword: Why AI’s Biggest Strength Is Also Its Biggest Risk

Remember that any kind of software system has its security concerns – it is not just AI. However, AI has two properties that make its security concerns especially pressing.

The first is its power. AI systems are typically built to increase human productivity – they are much more efficient than humans, especially at performing repetitive tasks. So, if malicious actors were to take control of such a system, their productivity would also greatly increase. This is a double-edged sword – AI’s immense power is its biggest strength, but this also makes it more dangerous if it falls into the wrong hands.

This danger is magnified as AI becomes more common. AI systems are set to spread across all kinds of industries, and if those tools fall under malicious control, the damage will spread with them.

The second property is AI’s reliance on data. Most AI systems are data-driven – they need data in order to reach their decisions. That means malicious actors don’t need to take control of an AI system in order to compromise it – they can just manipulate the data instead. If they pollute, alter or compromise the data source, the AI system will become much less effective. So it is not just the AI system that needs protecting, but the source data too.
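To make the data poisoning idea concrete, here is a minimal sketch showing how flipping the labels of a small fraction of a training set degrades the resulting model. The dataset, the model choice and the 10 percent poisoning rate are illustrative assumptions, not a description of a real attack on any particular system.

```python
# A minimal data-poisoning sketch: corrupt 10% of the training labels and
# compare the model trained on clean data against the one trained on
# poisoned data. All numbers here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

poisoned_y = y_train.copy()
idx = np.random.default_rng(0).choice(len(poisoned_y), size=150, replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]            # flip 10% of the labels
dirty = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", dirty.score(X_test, y_test))
```

The attacker never touches the model itself – only the data it learns from – which is exactly why the data source needs protecting too.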

A New Era of Security Threats: Two Types of Attack

So how do malicious actors manipulate the data to attack AI systems? Broadly speaking, there are two types of attack: black box and white box.

In a black box attack, the attacker has no idea what is inside the AI system, so they must first collect data on it. By observing approximately 1,000 examples of the input-output relationship, they can infer what is inside the system and use that to craft an attack. The more data they collect from your AI system, the more likely the attack is to succeed. A black box attack is therefore more likely against a system that has been running for a long time, because the attacker has had more examples to observe.
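Here is a minimal sketch of that black box approach: the attacker records input/output pairs from the victim system and fits a local "surrogate" model that mimics it. The `victim_predict` function is a hypothetical stand-in for the real system's API, and the numbers are illustrative.

```python
# Black box sketch: query the victim, record its answers, and train a
# surrogate model that approximates what is "inside" the system.
import numpy as np
from sklearn.neural_network import MLPClassifier

def victim_predict(x):
    # Placeholder for the real system: the attacker only sees its outputs.
    return (x.sum(axis=1) > 0).astype(int)

rng = np.random.default_rng(0)
queries = rng.normal(size=(1000, 20))   # ~1,000 observed inputs...
labels = victim_predict(queries)        # ...and the victim's responses

surrogate = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
surrogate.fit(queries, labels)          # a local copy of the decision boundary
```

Attacks crafted against the surrogate frequently transfer to the victim, which is why even read-only access to a long-running system is a risk.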

In a white box attack, the attacker already knows what is inside the system, including its architecture, its parameters and so on. They use this knowledge to change the data just enough to throw the system off. This has a much higher success rate than a black box attack, but it isn’t easy either: the attacker must first compromise the system in order to fully understand how it works. Only then can they start manipulating the data. This might seem counterintuitive – once you have hacked into a system, why not just control it directly? The answer is that a white box attack allows for sustained, long-term malicious use, which can prove more damaging in the long run.
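As an illustration of how parameter knowledge is exploited, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known white box technique, written against PyTorch. The model, inputs and perturbation size `epsilon` are assumptions for the example.

```python
# White box sketch (FGSM): because the attacker knows the parameters, they can
# use the model's own gradients to find the most damaging small perturbation.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Nudge each input feature in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# x_adv = fgsm_perturb(trained_model, inputs, labels)
# The perturbed inputs look almost unchanged to a human, but can flip the
# model's predictions.
```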

Hackers can also breach a system just long enough to copy it in its entirety. They don’t control the original directly, but they now have an identical replica to study – and that is enough to craft a white box attack.

AI to the Rescue

This all sounds very negative, but there is a silver lining to this particular cloud: AI itself can help protect AI systems from attack.

By studying past attacks using machine learning, you can learn how the system’s behavior changes when an attack is imminent. You then create a model that warns you, or shuts the system down, when those warning signs are detected. This is far more efficient than having humans watch for the same signals – you just need to collect sufficient training data.
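A minimal sketch of that monitoring idea, assuming you have logged telemetry from both normal operation and past attacks; the feature schema, file names and alert threshold here are all illustrative assumptions:

```python
# Monitoring sketch: fit a detector on historical telemetry labeled
# normal vs. attack, then alert when live behavior looks like an attack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: e.g. [query_rate, error_rate, output_entropy, input_drift_score]
X_train = np.load("telemetry_history.npy")   # hypothetical logged telemetry
y_train = np.load("attack_labels.npy")       # 1 = recorded attack, 0 = normal

detector = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

def looks_like_attack(live_window, threshold=0.9):
    """Flag a window of recent telemetry that resembles a past attack."""
    p_attack = detector.predict_proba(live_window)[:, 1].mean()
    return p_attack > threshold   # True -> warn, or fail over to a backup
```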

The problem is that new types of attacks are always being created, and against these the machine learning approach won’t work, because the system doesn’t know what to look for. However, this could soon change. Research is under way on training AI to probe your system and find where the vulnerabilities lie. This is a far more proactive approach than recording training data and teaching the system what to watch for.

Currently, a human has to define the AI’s action space before it can test for vulnerabilities. That is much harder than simply collecting training data – the action space can be very large, which complicates things significantly. In the future, however, this could be fully automated using AI, giving you all of AI’s advantages – greater efficiency and productivity – at only a minimal increase in cost.
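To show what a human-defined action space might look like, here is a sketch that randomly searches short sequences of allowed perturbations for one that flips the system's decision. The action names and the `model_predict` interface are assumptions for illustration.

```python
# Probing sketch: a human lists the perturbations the probe may apply; the
# probe then searches for action sequences that change the system's output.
import random

ACTIONS = {
    "jitter": lambda v: [x + random.gauss(0, 0.05) for x in v],
    "scale":  lambda v: [x * random.uniform(0.9, 1.1) for x in v],
    "zero":   lambda v: [0.0] + v[1:],   # knock out a single feature
}

def probe(model_predict, sample, max_steps=3, trials=1000):
    """Randomly search the action space for a perturbation that flips the output."""
    baseline = model_predict(sample)
    for _ in range(trials):
        candidate = sample
        steps = [random.choice(list(ACTIONS)) for _ in range(max_steps)]
        for name in steps:
            candidate = ACTIONS[name](candidate)
        if model_predict(candidate) != baseline:
            return steps, candidate      # a vulnerability was found
    return None                          # nothing found within the budget
```

Even this toy version shows why the action space matters: with three action types and three steps there are already dozens of combinations, and real systems have far richer spaces to search.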

Best Practice

When people think about the dangers associated with AI, they think about movies like The Terminator. Well, don’t worry – we are a long way from that, and AI’s considerable benefits far outweigh the risks.

People and businesses using AI just need to be aware of the security concerns. As with all software, it is good practice to keep your AI system up to date in order to patch potential vulnerabilities. You should also test your system to see how much the data has to be altered before it fails. Ideally, you want the system to detect any suspicious change to its data so you can proactively shut it down or switch to a back-up system.
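One simple way to run such a test is to add progressively larger amounts of noise to held-out data and record the point at which accuracy collapses. A minimal sketch, assuming a `model_predict` function and a 50 percent accuracy bar for "failure" – both of which are illustrative choices:

```python
# Robustness sketch: find the smallest tested corruption level at which the
# system's accuracy drops below an acceptable bar.
import numpy as np

def failure_threshold(model_predict, X, y, max_eps=1.0, steps=20):
    rng = np.random.default_rng(0)
    for eps in np.linspace(0.0, max_eps, steps):
        noisy = X + rng.normal(scale=eps, size=X.shape)
        accuracy = (model_predict(noisy) == y).mean()
        if accuracy < 0.5:    # system considered "failed" below this bar
            return eps        # smallest tested corruption that breaks it
    return None               # robust across the whole tested range
```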

As computer systems grow more complicated, it becomes harder for humans to find the security vulnerabilities within them. Even the best human hacker in the world may fail to break a highly complex system, but that does not mean the system is flawless. Instead, we should leverage AI to actively probe for vulnerabilities, and in turn build more robust systems that better serve our needs.