What causes AI boom and bust: a personal view of Artificial Intelligence history

Artificial Intelligence (AI) is ubiquitous. Everything from toiletries to phones to cow-tracking systems claims to use ‘cutting-edge AI technology’ to improve the efficiency, usefulness or accuracy of a particular product or service.

But the cutting edge might not be as sharp as many believe. AI has a nearly 70-year history, marked by some surprising backtracks and re-emergences along the way.

The aging of machine learning

Much of the trending vocabulary surrounding AI has a 40- to 60-year history. Machine Learning (ML) theory began to be expounded in the early 1960s, only to fall by the wayside during subsequent slowdowns in the AI field, the so-called ‘AI winters’. These were brought about by fundamental problems, such as a lack of computing power to fulfil AI’s lofty promises, and by the withdrawal of funding from big-money backers such as the USA’s DARPA.

Following the end of the first AI winter in 1982, neural network theory saw a rise in popularity and adoption among researchers. As before the first AI winter, it was accompanied by wild claims about what such systems could do. These claims were quashed by a second AI winter in the late 1980s and early 1990s, brought on by over-hype and by the rise of office and home computers that could efficiently carry out the ‘AI’ tasks touted by researchers.

Despite the chilly conditions, work continued in academic and other research environments. However, it wasn’t until 1997, when IBM’s Deep Blue beat the reigning World Chess Champion Garry Kasparov, that AI injected itself back into the zeitgeist. This marked the beginning of the infatuation with AI that continues today. Seemingly out of nowhere, a computer could defeat a human at what many consider one of the toughest games in the world. Yet this had long been a possibility: as far back as 1952, a simple AI program was beating amateur players at checkers (or draughts).

New millennium, the resurgence of the old new

In the in-vogue field of Deep Learning (DL) the picture is much the same. The theory was arguably first developed by Marvin Minsky in the late 1960s, building on his 1951 SNARC maze solver, an artificial neural network (ANN) that could simulate a rat finding its way through a maze. Minsky’s work and theory went largely unnoticed by the public and were rejected by most AI researchers until computing power had reached a level where results could be clearly demonstrated. One of the first consumer products to trade on such techniques was 1998’s must-have toy for millennial children, Furby, which appeared to learn behaviour and language over time.

Despite the lack of public awareness, DL-based AI was quietly integrated during the early 2000s into the backend systems of large companies involved in banking, data mining, logistics, speech recognition, medical diagnostics and Internet search. The mid-2000s saw the introduction of voice recognition apps on smartphones, and from 2011 the heavy-hitters Apple, Google and Microsoft got involved with the releases of Siri, Google Now and Cortana. The year 2012 brought another breakthrough in DL-based image recognition, with Google announcing that its Google Brain system had learnt to recognise a cat in an image. This ability to recognise objects in a visual field would become a cornerstone of AI-related fields such as autonomous navigation for vehicles and robots.
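
For readers curious what this looks like in practice today, here is a minimal, illustrative sketch of image classification with a pretrained network in Python. It is not Google Brain’s system; the choice of MobileNetV2 and the file name cat.jpg are assumptions made purely for the example.

```python
# Illustrative sketch only: classify a local image with a pretrained network.
# Assumes TensorFlow is installed and 'cat.jpg' exists (hypothetical file name).
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")          # downloads pretrained weights

img = image.load_img("cat.jpg", target_size=(224, 224))   # resize to model input
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")               # e.g. 'tabby', 'tiger_cat', ...
```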

After this the field went into hyperdrive, with Google buying DeepMind and many other companies investing heavily in an AI future. Advances followed rapidly as AI became far better at natural language processing (NLP), leading to vastly more nuanced translations and digital assistant/chatbot interactions than earlier attempts such as the Babel Fish translator launched by AltaVista in 1997 (later run by Yahoo) or the mid-1960s chatbot ELIZA.
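
To give a feel for how limited those earlier attempts were, here is a toy sketch in the spirit of ELIZA’s rule-based pattern matching; the patterns and canned replies are invented for illustration and are not taken from Weizenbaum’s original script.

```python
# A toy, ELIZA-style chatbot: simple regex patterns with canned responses.
# Purely illustrative of rule-based pattern matching, not the original program.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)\?", "Why do you ask that?"),
]

def respond(text: str) -> str:
    text = text.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I am feeling stuck"))   # -> How long have you been feeling stuck?
    print(respond("Why is that?"))         # -> Why do you ask that?
```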

In 2017, AI began to move from centralised resources in the cloud onto mobile devices, helped by frameworks such as TensorFlow Lite. More publicity-capturing AI-beats-human demonstrations, such as AlphaGo defeating the world’s top Go players and an AI beating professional poker players, helped cement the image of AI as something truly to be reckoned with.
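
For the technically curious, here is a minimal sketch of that cloud-to-device shift: converting a Keras model into TensorFlow Lite’s on-device format. The tiny model defined here is a placeholder assumed for illustration; a real app would convert a trained network.

```python
# Minimal sketch: convert a (placeholder) Keras model to TensorFlow Lite
# so it can run on a mobile or embedded device.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional size/latency optimisation
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)   # this file is bundled into the mobile app
```

The resulting .tflite file is what gets shipped inside an app and executed by the TensorFlow Lite interpreter on the device itself, rather than in the cloud.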

Beyond good and evil

In 2018 we find ourselves in a state of enchanted fearfulness towards AI, reflected in a split between AI pessimists and AI optimists. The optimists have gone almost as far as forming a religion that worships AI, and can be heard piping harmoniously about AI’s potential to reduce the amount of work humans need to do and about the emergent digital socialist utopia of unfettered human creativity that will follow.

Those leaning towards fear, including influential figures in the AI world, can be seen influencing government by demanding regulation, ethical standards and the development of AI with human-centric goals, to stop such systems going beyond good and evil.

This latter position raises the issue I have looked at throughout this article: the cyclical nature of the field. If regulation is too restrictive, could it cause another AI winter? Or is AI protected by already being too integral to our daily lives to be quietly sidelined again? There is another question, pertinent with an AI race looming between China and the USA: if the developed world’s AI regulation is too strict, could that tilt power towards less-concerned developing countries such as China or India, and what would that mean?

In future articles, we’ll be looking at some of these big issues, predicting the future, and digging down into more detail about AI technologies, applications and markets.
