Min Sun, Chief AI Scientist, Appier

How to Ride the Third Wave of AI

We are at a very exciting juncture in the development of artificial intelligence (AI): implementations of the third wave of the technology are starting to appear. In this wave, machines far surpass human capabilities in various application domains, and that creates all kinds of opportunities for businesses. To leverage it to its full potential, companies need to rethink how they operate and put AI at the heart of everything they do.

Making Waves: How AI Is Changing the Way We Do Business

The first AI wave started with statistics-based systems – the best-known use is probably the information retrieval algorithms used by big internet companies such as Google in the early years (think of the PageRank algorithm behind Google's search engine).

The second wave brought a much wider range of machine learning techniques, such as logistic regression and support vector machines. These are used in all kinds of businesses, from banking to digital marketing tools.

The third wave is deep learning, whose use is most visible in so-called perception AI – AI that mirrors the human perception system: sight, hearing, touch and so on. Think of speech recognition and image recognition. It’s used in smart speakers to recognize what you say, in email programs that predict what you want to write next, in mobile phones that unlock via facial recognition, in digital marketing and advertising tools that predict customer behavior, and in many other use cases.

The third wave has emerged in the last five or so years, and has far surpassed human capabilities in these areas.

In terms of applying this technology to products in the real world, we are at different stages depending on the application. For example, smart speakers are very good at deciphering speech in perfect conditions such as speaking loudly directly into the microphone, but less so in real-world use (if other people are talking in the same room, say). Similarly with facial recognition – your mobile phone will recognize you when you look directly at it, but surveillance cameras in public spaces are less accurate when faced with big crowds of people, some of whose faces are partially obscured.

Object recognition is the same. Vehicles are now pretty good at recognizing other vehicles and pedestrians as part of their advanced driving assistance systems. However, effectiveness depends on the conditions: rain, darkness or glare can all reduce accuracy.

Objects in our homes (cups, TV remotes, chairs, etc.) are even harder to recognize. That’s why we don’t have robots helping us around the house. At least not yet!

The Importance of High-Quality Data

The way you improve a deep learning system is with data. The rule is simple: the more data you feed it, the better the system will perform – provided that data is of the highest possible quality.

The way to achieve this is by making your training data as similar as possible to real-world use. The best way to get data is to get your product into your customers’ hands and – with their consent – start collecting data from their usage in their day-to-day lives. Then you will get training data in the exact environment where people are using your product. 

Tesla is a great example. Because it has a sizable and devoted user base using its electric cars, it can collect masses of data which it then uses to retrain its model using deep learning. It then uses this information to continually send out OTA (over-the-air) updates to the software in its cars. Tesla has created a positive feedback loop: the more data it collects, the more accurate the model becomes and the better it’s able to serve its customers. By using deep learning it can continually make driving safer, improving its offering and in the process continue to grow its customer base.

Of course, the converse is also true: the fewer units you sell, the less data you collect and the more slowly the model’s accuracy improves. Your offering is therefore less compelling to customers. It’s a chicken-and-egg problem. People don’t buy many robots, and as a result, consumer robots don’t advance as quickly as electric cars. The data that is collected comes mostly from contrived scenarios rather than real-world usage. If there is no initial user base, you’re not going to get a decent amount of realistic data, and in that case deep learning might not help improve the product or service.

In the last five years, a lot of application domains have tried to use deep learning, but many have failed because they couldn’t solve this chicken and egg problem. AI alone isn’t enough; you need to offer something plus AI to hook customers in. It’s the AI that will eventually give you the long-term advantages – once you crack it, the quality of what you offer will improve, which will in turn grow your customer base further. That’s how you build a monopoly.

Barriers to Adoption, and How Deep Learning Is Clearing These Hurdles

There are a few obstacles to the third wave of AI.

Firstly, there is the cost of collecting data. Traditionally, learning needs to be ‘supervised’, meaning human operators provide the correct output for each input. For example, in an automotive early-warning system, you need to label what’s a car, what’s a pedestrian, what’s a cyclist, what’s a stop sign, and so on. The labelling costs are high. If your application domain is not big enough to support those costs, then deep learning won’t be cost-effective for you.

The good news is that deep learning has become so advanced that unsupervised learning is now possible. That means you just collect data and forget about the labels – the machine figures them out itself. If unsupervised learning can achieve the same performance as supervised learning, then as long as you have a user base generating raw data, you can still use AI to improve performance. You will get the same end result, but with a greater profit margin, as you won’t need a sizable labelling budget. It also lowers the barrier to entry, meaning more and more application domains can leverage deep learning.
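To make the idea concrete, here is a minimal, hypothetical sketch of unsupervised learning in Python: a tiny k-means clustering routine that groups raw, unlabelled data points into natural clusters with no human-provided labels. (k-means is just one of the simplest unsupervised techniques, far removed from deep learning itself, and the data points here are invented purely for illustration.)

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group unlabeled 2D points without any labels."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = (
                    sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c),
                )
    return centroids, clusters

# Unlabelled "usage" data: two natural groups, but no labels attached.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(data, k=2)
```

The point of the sketch is the workflow, not the algorithm: structure is recovered from raw data alone, so no labelling budget is needed.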

Certain types of data have also been very difficult or costly to collect, like CT/MRI images from medical scans, but a method called transfer learning can help. This means you transfer the knowledge learned from other types of data that are more readily available (for example, x-rays) and apply it to your category of data. Again, this solves the cost issue.
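As an illustration only – not a real medical pipeline – the sketch below mimics the shape of transfer learning in plain Python. A frozen “pretrained” feature extractor stands in for a network trained on plentiful source data, and only a small classification head is trained on a handful of scarce target-domain samples. The feature function and all data values are invented for the example.

```python
import math

def pretrained_features(x):
    """Stand-in for a frozen backbone trained on plentiful source data.

    In real transfer learning this would be a neural network whose
    weights are reused; here it is a fixed, hand-picked transform.
    """
    return [x, x * x]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_head(samples, labels, lr=0.3, epochs=200):
    """Fit a small logistic-regression 'head' on the frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)  # backbone stays frozen
            p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
            err = p - y
            # Gradient step updates the head only.
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# A handful of scarce, labelled target-domain samples (hypothetical values).
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_head(xs, ys)
```

The design choice this illustrates is the split itself: the expensive part (the backbone) is paid for once on abundant data, while the cheap part (the head) is all that the scarce target data has to support.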

So what about the human factor? A lack of AI talent has been one barrier to adoption, but that soon won’t be an issue – AI is such a hot topic that we won’t be short of experts able to apply existing AI techniques. The bigger obstacle is at the management level.

Managers need to truly understand this technology in order to plan a roadmap that can solve the chicken and egg problem. However, it’s not just a question of technical proficiency: you need deep knowledge of the application domain as well so you can leverage the power of more users, more data and more powerful AI.

If you can combine those skills, there is no limit to what you can do. The third wave will take you far – enjoy the ride!


* This article was originally published on ITProPortal.

Is your organization ready to ride the third wave of AI? Or are you still not sure about how to leverage AI tools to make data-driven decisions that can drive marketing efficiencies and increase ROI? Let us help you! Get in touch with our AI consultants today for an exclusive discussion!