AI is not just a tool for scientists developing a new medicine, or for researchers beating humans at chess; it is something that affects all of us, every day. Throughout Covid-19, AI has been optimising the schedules of the vans that deliver your groceries and suggesting what you should watch on Netflix. But what actually is it? And what else can it do for us?
Artificial intelligence emerged as an umbrella term in the 1950s and 60s, as various scientific communities began studying the first artificial neural networks. The now-famous Turing test was devised as a way of determining whether the responses of a program could be distinguished from those of a human. However, the technologies of the time were limited by both computing power and the small amounts of quality data they had access to, and so no major breakthroughs were achieved.
As the concept of big data boomed in the early 2010s, so too did the popularity of AI. Increasing computing power, driven by Moore’s law, and the vast amounts of data gathered through our online activities enabled AI to flourish. Its time had finally come.
But what actually is AI? Even in our organisation, an enterprise AI company, we dispute the definition of intelligence and the importance of our role in systematising it. Some have argued that a mushroom, which can adapt and thrive in changing environments, is more intelligent than a machine learning algorithm that can recognise the shapes of roads or people but cannot adapt itself. Definitions aside, we’re constantly considering how we apply our expertise to solve complex business problems.
We define human intelligence as goal-directed adaptive behaviour, and we apply the same criteria to artificial intelligence. For a system to be intelligent, it must be able to adapt its actions in real-time, in production environments, and without the aid of a human. In other words, it’s able to make a decision, learn whether that decision was good or bad, and then, presented with the same data, make a different decision. It must be able to adapt its decisions based on real-time data, not just the data it was trained on. Then, and only then, would we define it as true AI.
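To illustrate that "decide, learn, adapt" loop in the simplest possible terms, here is a minimal sketch in Python. It is a toy example, not a description of Satalia's technology: the AdaptiveDecisionMaker class, the action names and the feedback signal are all invented for illustration.

```python
import random

# Hypothetical sketch: an agent that picks between two actions, observes
# whether the outcome was good or bad, and adapts its next decision
# based on that live feedback.
class AdaptiveDecisionMaker:
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon                      # how often to explore alternatives
        self.value = {a: 0.0 for a in actions}      # learned value of each action
        self.count = {a: 0 for a in actions}

    def decide(self):
        # Mostly pick the action that has worked best so far,
        # but occasionally try something else.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # Update the running estimate from real-time feedback, so the same
        # situation can lead to a different decision next time.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]


agent = AdaptiveDecisionMaker(actions=["route_a", "route_b"])
for _ in range(1000):
    choice = agent.decide()
    feedback = 1.0 if choice == "route_b" else 0.0  # pretend route_b is the better option
    agent.learn(choice, feedback)

print(agent.value)  # the agent has shifted towards the better route on its own
```

The point is the loop itself: the system keeps deciding, keeps receiving feedback from the live environment, and keeps updating, without a human stepping in to retrain it.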
When achieved, this results in huge efficiencies for the organisations that adopt it, both in the quality of their decision-making and in the time saved by not having to constantly adapt and improve their models by hand.
Imagine you show a child thousands of images of kittens and puppies, and tell them which animal is which. Eventually, the child begins to recognise kittens and puppies in images they haven’t seen before, in the park, or on TV. But show them a cow, and they won’t recognise it. Or show them a lion, and they might see legs, fur and eyes, think it’s a kitten, and go to pat it. Many of the systems we wrongly call AI work similarly and make the same kinds of mistakes. Recently, a facial recognition feature on a phone failed to identify black women: the data it had been trained on simply wasn’t diverse enough. Such systems are, in the words of the US Defense Advanced Research Projects Agency (DARPA), “statistically impressive, but individually unreliable”. And in my opinion, most of the systems marketed today as AI are just automation. They don’t self-learn. They can’t process new datasets in real-time. They’re not intelligent.
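As a concrete illustration of the same failure mode, here is a toy Python sketch. The numbers, features and labels are all invented; it is not any real vision system, only the behaviour the analogy describes: a model trained on two classes must answer with one of them, however unfamiliar the input.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented two-dimensional "features" for kittens and puppies.
rng = np.random.default_rng(0)
kittens = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(100, 2))
puppies = rng.normal(loc=[3.0, 1.5], scale=0.2, size=(100, 2))

X = np.vstack([kittens, puppies])
y = np.array(["kitten"] * 100 + ["puppy"] * 100)

model = LogisticRegression().fit(X, y)

# A "lion": legs, fur, eyes -- but far outside anything the model has seen.
lion = np.array([[1.2, 5.0]])
print(model.predict(lion), model.predict_proba(lion).max())
# The model still answers "kitten" or "puppy", often with high confidence:
# statistically impressive on the data it saw, individually unreliable beyond it.
```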
Definitions of AI are far from uniform. Different people use different definitions to describe different technologies, each with different capabilities. Some refer to narrow, or applied, AI: a tool capable of performing one very specific task, such as playing Go, driving a car or recognising faces. Others use AGI, or artificial general intelligence, to describe systems that can outperform humans on a much broader range of tasks, such as transportation. And then there’s super-intelligence, which will supposedly outperform humans at every cognitive task. I hope, for the sake of clarity, that the industry will agree on both the definitions and their naming in the near future.
As an AI company, filled with engineers, data scientists and mathematicians, we think AI has become overly romanticised and shrouded in mystery. Much of the media wrongly portrays it as Terminator-style robots that will end life as we know it. This isn’t true. Others label anything that touches data, or does some basic analysis or automation, as AI. This isn’t true either.
I don’t want to be misunderstood. I believe that AI-based technologies will be a core component of software in the future, and in some areas they are already having an impact. Neurotechnology, a Lithuanian company, is one of the strongest defence and security companies in the world and has been developing AI-based fingerprint recognition systems for over three decades. Oxipit, a small health-tech startup, uses AI to detect lung diseases and to improve queuing and resourcing in hospitals. Satalia uses AI to optimise the routes of Tesco’s home delivery vans and has been scheduling a million orders a week throughout Covid-19. And for PwC, we optimise the allocation of people to their work. These systems drive enormous value for our clients, but they can also be used to solve wider problems for society and the public sector.
Despite the noise surrounding AI, I have no doubt that real AI, systems that can adapt to new data in real-time without human intervention, will form a crucial component of all software in the future. The AI market is projected to reach $190BN over the next five years. And as with any technology, some will win and some will lose. After the 2008 financial crisis, it was the companies that invested quickly in automation and process optimisation that grew the fastest over the following decade. Similarly, we think early investment in AI will separate the winners from the losers.
This opinion piece was originally published on LRT, the Lithuanian public broadcaster’s news site.