AI’s place in the IoT infrastructure

“We are witnessing one of the greatest revolutions in all of human history – a revolution driven by artificial intelligence and the Internet of Things. It will generate prosperity, jobs and promote human understanding.” – Dr. Michio Kaku, Professor of Theoretical Physics, City College of New York (at an NEC presentation)

Artificial Intelligence, or AI, is a term coined in 1955 by John McCarthy, a cognitive scientist who became known as the father of Artificial Intelligence, to describe machines that could reason like a human. Since then, fed by the growing power of enabling hardware and software technologies and of the Internet, both AI and the IoT have had an increasing impact on our daily lives. In the opinion of many, this impact could become far more profound in the not-too-distant future.

This article starts by looking at what AI is, and what it could become. It then returns to the present, and considers the current relationship between AI and the IoT, and how this is bringing benefits at all levels. We finish with a roadmap that defines the possible future stages of AI development.

What is artificial intelligence?

At its simplest, Artificial Intelligence (AI) is about the development of computers that can do things normally done by people. The Turing Test, proposed by Alan Turing back in 1950, addressed this question: it asked whether a machine could exhibit intelligent behaviour indistinguishable from that of a human.

Fig.1: Artificial intelligence – machines that can think like humans

Since then, machine processing capability has grown steadily, fulfilling the Moore’s Law prediction that computing power would double roughly every two years. AI is now about computers’ ever-increasing ability to search for and recognise patterns in continuously growing stores of data. It also addresses two further questions: how an AI system can continuously learn from its incoming data and improve accordingly, and how it can act on its own conclusions without reference to humans.

AI systems, whether we realise it or not, have become part of our daily lives. An article in Wired highlights the Google search engine as a prime example: each of the billions of searches performed every day further tutors the search engine’s deep-learning AI.

Another example is IBM’s Watson – a set of cloud servers that offer AI services to subscribers such as doctors and hospitals. In one user’s opinion, ‘Watson will soon be the world’s best diagnostician – whether machine or human’.

However, if we allow AI to take over from humans in real-world as well as information applications, designers will have to recognise that with greater power comes greater responsibility. Imagine an AI system controlling a large-scale chemical plant, for example. If something goes wrong, the system may decide to destroy part of the plant to prevent further damage and risk to life. Bearing in mind that the system will be making such decisions based on a breadth of data that we may not be able to assimilate, will we be prepared to grant it that autonomy?

AI’s growth potential

AI will pose plenty of big questions like these, while offering enormous and even unimaginable benefits in the future. In the University of Cambridge’s view, AI is likely to significantly affect the very functioning of society, posing practical, ethical, legal and security challenges, many of which are not yet fully appreciated or understood. Yet reversing human ageing, curing disease and hunger, overcoming mortality and even reprogramming the weather to protect the future of life on Earth are just some of the outcomes that could become possible. The previously referenced Wired article describes how this potential for startling growth is being created by the convergence of several factors:

Cheap parallel computing

Thinking is an inherently parallel process: billions of neurons fire simultaneously to create synchronous waves of cortical computation. Building a neural network – the primary architecture of AI software – also requires many different processes to take place simultaneously. Each node of a neural network loosely imitates a neuron in the brain, interacting with its neighbours to make sense of the signals it receives. To recognise a spoken word, a program must be able to hear all the phonemes in relation to one another; to identify an image, it needs to see every pixel in the context of the pixels around it – both deeply parallel tasks. Until recently, however, the typical computer processor could only ping one thing at a time.

This changed with the appearance of graphics processing units (GPUs), chips specialised in the parallel processing needed to handle the millions of pixel calculations required every second by the intensely visual demands of video games. The real breakthrough came when Andrew Ng and a team at Stanford realised that GPUs could run neural networks in parallel. Today neural nets running on GPUs are routinely used by cloud-enabled companies such as Facebook to identify your friends in photos.
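To make the parallelism concrete, the minimal Python/NumPy sketch below (with invented sizes and random weights) shows that a single neural-network layer is essentially one large matrix operation – exactly the kind of workload that frameworks such as TensorFlow or PyTorch dispatch to GPU hardware.

```python
import numpy as np

# One neural-network layer: every output "neuron" is computed from every
# input at once via a matrix multiplication plus a non-linearity. Because
# all of these multiply-adds are independent, the work maps naturally onto
# the thousands of parallel cores in a GPU.
def dense_layer(inputs, weights, biases):
    return np.maximum(0.0, inputs @ weights + biases)  # ReLU activation

rng = np.random.default_rng(0)
pixels = rng.random((1, 784))                    # e.g. a flattened 28x28 image
w = rng.standard_normal((784, 128)) * 0.01       # 784 inputs -> 128 hidden units
hidden = dense_layer(pixels, w, np.zeros(128))
print(hidden.shape)                              # (1, 128)
```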

Fig.2: Neural network – Image via Wikimedia

Big Data

Every intelligence has to be taught. A human brain still needs to see a dozen examples before it can distinguish between cats and dogs, and that is even more true for artificial minds: even the best-programmed computer has to play at least a thousand games of chess before it gets good. However, AI systems can now school themselves using the vast repositories of data across the Net: massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results and more. Not least is the data that organisations are now capturing about their own operations, whether from machines on a production floor or from service personnel recording the results of their site visits.

Imagine, for example, a service organisation that keeps its technicians’ reports of incidents attended, with notes about both successful and failed repair attempts. Over time these build up into a database of many thousands of repair scenarios; AI analytics software could continuously process this data, spot correlations, learn, and generate ever better and more insightful recommendations for ongoing technician support.
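A highly simplified sketch of that idea is shown below. The incident records, field names and frequency-based “recommendation” are all invented for illustration; a real system would use far richer data and proper machine-learning models, but the principle of learning which actions historically succeeded for a given symptom is the same.

```python
from collections import Counter, defaultdict

# Hypothetical repair records: (reported symptom, action taken, outcome).
records = [
    ("motor overheating", "replace bearing", "success"),
    ("motor overheating", "clean air filter", "failure"),
    ("motor overheating", "replace bearing", "success"),
    ("intermittent power", "reseat connector", "success"),
]

# Learn, per symptom, which actions have historically succeeded.
success_counts = defaultdict(Counter)
for symptom, action, outcome in records:
    if outcome == "success":
        success_counts[symptom][action] += 1

def recommend(symptom):
    """Return the historically most successful action for a symptom, if any."""
    ranked = success_counts.get(symptom)
    return ranked.most_common(1)[0][0] if ranked else None

print(recommend("motor overheating"))   # replace bearing
```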

Better algorithms

Digital neural networks generate huge volumes of data, and computer scientists have had to learn how to manage it in order to extract useful information. The answer has been a multi-layer approach: if a group of bits is recognised as a pattern, such as a human eye, the result is passed up to the next level in the neural network for further parsing; at that level, two eyes might be grouped together. Iterated through fifteen levels, the process can arrive at recognition of a human face.

In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning”. He was able to mathematically optimise the results from each layer so that learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs. The code of deep learning alone is insufficient to generate complex logical thinking, but it is an essential component of all current AIs, including IBM’s Watson, Google’s search engine and Facebook’s algorithms.
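The layered idea can be sketched in a few lines. The untrained, NumPy-only example below (sizes and weights are arbitrary) shows only the structure: each layer’s output becomes the next layer’s input, so patterns recognised lower down are combined into larger ones higher up. What Hinton’s deep-learning methods added, and what real frameworks implement, is an effective way of training such a stack.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

# A "deep" network is several layers stacked on top of one another. Each layer
# takes the patterns found by the layer below and combines them into larger
# ones (edges -> eyes -> faces, in the image-recognition example above).
layer_sizes = [784, 256, 128, 64, 10]                 # input ... class scores
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)        # pass each layer's result up the stack
    return x @ weights[-1]     # final layer produces the class scores

scores = forward(rng.random((1, 784)))
print(scores.shape)            # (1, 10)
```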

The result – possibly explosive growth

This perfect storm of parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI. And this convergence suggests that as long as these technological trends continue—and there’s no reason to think they won’t—AI will keep improving.

In fact, another article, by Tim Urban, suggests that growth could be explosive due to a concept he calls recursive self-improvement. He explains it as follows:

‘An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion – a term coined by Irving John Good in 1965 - and it’s the ultimate example of The Law of Accelerating Returns.’

NB. AGI means ‘Artificial General Intelligence’ and ASI means ‘Artificial Super Intelligence’, as explained in the later Roadmap section.
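The runaway dynamic Urban describes can be caricatured in a few lines of code. In the toy loop below, each improvement step is simply made proportional to the system’s current capability; the numbers are invented and meaningless in themselves, but the output shows why growth of this kind is exponential rather than linear.

```python
# Toy illustration of recursive self-improvement: the smarter the system is,
# the bigger its next self-improvement step, so capability grows exponentially.
capability = 1.0                          # arbitrary starting level
for generation in range(1, 11):
    capability += 0.5 * capability        # improvement proportional to capability
    print(f"generation {generation:2d}: capability {capability:8.1f}")
```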

Back to the present

We have looked at the possible future of AI, but, naturally, alternative scenarios may unfold – only time will tell. Meanwhile, let’s see how AI, especially together with the IoT, is already influencing our world of today.

A powerful example of where AI would have been useful: in March 1979, a cooling malfunction destroyed a pressurised water reactor at the Three Mile Island nuclear power plant near Harrisburg, Pennsylvania, in the USA. The World Nuclear Association named ‘deficient control room instrumentation and inadequate emergency response training’ as the root causes. Another observer reported that the operators had only a single audible and visual alarm, accompanied by a printout of several hundred fault messages.

An AI system that could have analysed these messages, informed the operators of the root cause, and recommended actions to take might have made a critical difference to the speed and effectiveness of the operators’ response.
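As a purely hypothetical sketch of the kind of triage such a system might perform, the snippet below groups invented alarm messages by subsystem and surfaces the earliest alarm in the most active group as a root-cause candidate. Real alarm analytics would use causal models and plant knowledge rather than this deliberately naive heuristic.

```python
from collections import Counter

# Hypothetical alarm log: (seconds since start of incident, subsystem, message).
alarms = [
    (0.0, "feedwater", "condensate pump trip"),
    (0.8, "feedwater", "feedwater flow low"),
    (2.1, "primary",   "pressuriser relief valve open"),
    (2.5, "primary",   "coolant pressure falling"),
    (3.0, "primary",   "coolant temperature high"),
    # ...hundreds more would follow in a real incident
]

# Naive triage: find the busiest subsystem, then report its earliest alarm
# as the most likely root-cause area for the operators to investigate first.
busiest = Counter(subsystem for _, subsystem, _ in alarms).most_common(1)[0][0]
first = min(a for a in alarms if a[1] == busiest)
print(f"Investigate '{busiest}' first: earliest alarm was \"{first[2]}\" at t={first[0]}s")
```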

Fig.3: Three Mile Island – Image via Wikimedia Commons

Factory AI and IoT systems

Within factories, it’s possible to trace an evolutionary path that ends with the IoT and AI feeding off one another to improve our ability to understand and manage our industrial processes:

  • In an industrial plant, data generated by machines on the factory floor can be used to generate simple alerts, such as a warning to check a motor if a temperature sensor reading exceeds a pre-set level (see the sketch after this list).
  • A more sophisticated IoT installation will have large numbers of sensors monitoring various operational aspects: temperature, vibration, current consumption and maybe more. This becomes a Big Data approach, in which all the data is sent to the cloud for a higher level of analysis, together with reporting of historical data and other trends. It also relates to Industry 4.0 architectures, which move factory machine control out of local, siloed systems and into remote, centralised cloud-based facilities.
  • As this model evolves and grows, the data processing and some other cloud functions are moved to the edge, as close to the data sources as possible.
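The first rung of that ladder needs nothing more than a fixed threshold check. The sketch below is purely illustrative: the sensor ID, temperature limit and “alert” (a simple print) are invented, and a real plant would raise the alarm through its SCADA or MES system instead.

```python
# Minimal fixed-threshold alert: warn if a motor's temperature exceeds a limit.
TEMP_LIMIT_C = 85.0                       # hypothetical pre-set level

def check_motor(sensor_id: str, temperature_c: float) -> None:
    if temperature_c > TEMP_LIMIT_C:
        # In a real installation this would raise an alarm in the plant's
        # SCADA/MES system rather than printing to the console.
        print(f"ALERT: check motor {sensor_id}: "
              f"{temperature_c:.1f} °C exceeds the {TEMP_LIMIT_C:.1f} °C limit")

check_motor("motor-07", 91.3)
```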

This can usher in a true AI solution. Based on real-time analysis of a machine’s performance, and an aggregated history of performance across large numbers of machines on the factory floor, or across many factories, the AI system can learn the patterns that lead to failures.

Such a predictive maintenance system can forecast that in, say, three months a part will fail if not serviced, and recommend actions to pre-empt the failure. The AI system may also recommend ways to operate the machines to maximise their useful life, offering trade-offs between performance and longevity. Machine learning algorithms make the analytics system smarter as time goes on and more data sets and patterns become available. The AI system is only as good as the data it receives, so the more data on device operation, failure and maintenance you feed into it, the more accurate its predictions become.
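A much-simplified sketch of the forecasting step appears below. It fits a straight-line trend to an invented vibration history for one machine and projects when the trend crosses a hypothetical failure threshold; production systems learn non-linear patterns across whole fleets of machines, but the idea of turning history into a time-to-failure estimate is the same.

```python
import numpy as np

# Hypothetical monthly vibration readings (mm/s RMS) for one machine.
days       = np.array([0, 30, 60, 90, 120], dtype=float)
vibration  = np.array([2.1, 2.4, 2.9, 3.3, 3.8])
FAIL_LIMIT = 6.0                          # hypothetical threshold for failure

# Fit a linear trend and project when it reaches the failure threshold.
slope, intercept = np.polyfit(days, vibration, 1)
day_of_failure = (FAIL_LIMIT - intercept) / slope
days_remaining = day_of_failure - days[-1]
print(f"Projected failure in roughly {days_remaining:.0f} days; "
      f"schedule service before then.")
```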

Besides the machines on the line, data is available from many other sources in a factory:

  • Programmable controllers

  • Manufacturing execution systems

  • Building management systems

  • Manual data from human inspection

  • Static data such as manufacturer’s service recommendations

  • External data such as weather conditions

  • Geographical data

  • Equipment usage history

  • Parts composition

Conversely, without such AI capability, users simply would not be able to fully realise their IoT installation’s potential. The large numbers of sensors characteristic of the IoT would be generating data in volumes that would soon overwhelm human operators or technicians. They would need many hours to correlate and analyse data that could be handled by AI systems in or near real time, if they could do it at all. The AI system could also factor in other variables, such as machine operators’ performance, seasonal changes or locations of different factories. If enough data on these is collected over time, the AI analytics can learn further correlations that can convert into actionable information.

Maintenance management using predictive techniques like these will save users money compared with earlier preventive maintenance approaches, which relied on replacing parts sometimes well before the end of their operational life. While this prevents failures, it inevitably wastes some of the parts’ value.

Further savings can be made by using second-order analytics. For example, deeper analysis can determine the best time to remove the suspect part from service, and which additional repairs, if any, should be performed at the same time.

IBM’s Prescriptive Maintenance on Cloud is a software solution that allows reliability engineers to identify and manage asset reliability risks that could adversely affect plant or business operations. It prescribes actions to take based on predictive scoring and identification of factors that positively and negatively influence asset health, and provides a detailed comparison of historical factors affecting asset performance.

Read our article ‘Smart Factories: Handling complexity with flexibility’ for more information on smart factory environments where IoT and AI implementations are being deployed.

Other AI applications and their benefits

AI’s efficiency in distilling information from data translates into significant money-saving benefits. For example, chatbots are expected to save global businesses up to £6bn a year by 2022, according to Juniper Research. Marc Lien, director of digital development and applied science at Lloyds Bank, believes that chatbot technology significantly augments the service provided by customer support staff. He sees it as an extremely powerful way of bringing the entirety of the institution’s knowledge base to a colleague in a conversational manner, where and when they want it.

Wide adoption of chatbots is helped by the fact that data from AI-powered backends can be exposed through APIs for any authorised front-end to consume, making integration into apps, websites and other front-ends straightforward.
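As a sketch of what that integration can look like from a front-end’s point of view, the snippet below posts a user’s question to a hypothetical chatbot endpoint and reads back the reply. The URL, JSON fields and session handling are placeholders; each vendor’s API defines its own.

```python
import requests

def ask_bot(question: str, session_id: str) -> str:
    """Send a user message to a (hypothetical) chatbot backend and return its reply."""
    response = requests.post(
        "https://example.com/api/chatbot/v1/message",    # placeholder endpoint
        json={"session": session_id, "text": question},  # placeholder payload schema
        timeout=5,
    )
    response.raise_for_status()
    return response.json().get("reply", "")

# Any authorised front-end (web page, mobile app, branch terminal) could make
# the same call and render the reply in its own user interface.
print(ask_bot("What were my last three transactions?", session_id="demo-123"))
```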

Using AI to extract information from IoT sensor data can bring benefits to city dwellers – and provide a platform for further, more holistic improvements to follow. This is being demonstrated by Redwood City’s response to a common complaint: insufficient parking availability, despite the proximity of many parking garages that were only about half-full. The Californian city has set up a pilot project to install vehicle sensors in two of these garages. These not only reveal the available parking spaces and their locations, but also measure traffic flow patterns and help indicate when the garages are busiest.
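A toy version of the analysis behind such a scheme is sketched below, using invented occupancy events from bay sensors. It derives two of the outputs mentioned above: which bays are currently free, and which hour of the day the garage is busiest.

```python
from collections import Counter

# Hypothetical occupancy events from parking-bay sensors: (hour, bay, occupied?).
events = [
    (8, "A1", True), (8, "A2", True), (9, "A1", False),
    (12, "A1", True), (12, "A2", True), (12, "B1", True),
    (17, "A1", False), (17, "B1", True),
]

# Current availability: the latest reading seen for each bay.
latest = {}
for hour, bay, occupied in events:           # events assumed to be in time order
    latest[bay] = occupied
free_bays = sorted(bay for bay, occupied in latest.items() if not occupied)

# Busiest hour: when the most bays were reported occupied.
occupied_per_hour = Counter(hour for hour, _, occupied in events if occupied)
busiest_hour = occupied_per_hour.most_common(1)[0][0]

print("Currently free bays:", free_bays)     # ['A1']
print("Busiest hour of day:", busiest_hour)  # 12
```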

This scheme is part of a more ambitious infrastructure plan. The city has launched another pilot project to test autonomous delivery robots, and parking data will also be related to smart lighting systems and to connected water meters that can sense impending ruptures. The city council believes that this holistic approach, with its use of IoT and AI technology, is the future for city infrastructures. Additionally, having AI make data-driven decisions on infrastructure and city planning is key to a future in which self-driving vehicles can function.

AI is also influencing our lives in more direct, personal ways. We see it at work every day on shopping sites, supplying the information behind features like ‘Customers who viewed this also viewed…’.

Speech recognition is another major beneficiary of AI. As this technology improves, we can interact with our computers and smart devices faster and more effortlessly. IBM has recently announced that it has achieved a new industry record in conversational speech recognition, with a word error rate of just 5.5%. The software uses deep learning technologies and takes advantage of negative examples as well as learning from positive ones, so it becomes smarter over time and performs better where similar speech patterns are repeated.

While this performance is not yet on a par with humans’ at 5.1%, it is becoming an increasingly useful contender.
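For reference, the word error rate quoted above is a simple metric: the minimum number of word substitutions, deletions and insertions needed to turn the recogniser’s output into the reference transcript, divided by the number of reference words. A minimal implementation, with an invented example utterance, looks like this:

```python
# Word error rate (WER), the speech-recognition metric quoted above:
# (substitutions + deletions + insertions) / number of reference words,
# computed here with a word-level edit distance.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

# One substitution in five reference words -> WER of 0.2 (20%).
print(word_error_rate("switch the hall lights off", "switch the whole lights off"))
```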

AI now and in the future

Let’s look at a (slightly) futuristic example – the hospital visit:

  • Imagine that you’re visiting your niece, who’s recovering in hospital from a hockey accident. On arrival, you’re greeted by a digital receptionist like the one in Fig.4.
  • “Hello Mr Smith, your niece is in bed 5 on Ward 6 – but the orthopaedist will be assessing her in a few minutes. I recommend waiting for a while, so you can discuss the doctor’s report with your niece. Meanwhile, why don’t you visit our café? It’s hot today, and they’re serving plenty of cool drinks. I’ll send a map to your smartphone, so you can find it easily.
  • “The map also shows the gift shop location, which may be useful, as I can see that you’re empty-handed. I’m aware of the traffic congestion on the ring road today, which may have left you no time to buy a gift elsewhere. One last thing: I see it’s nearly a month since you collected your last blood pressure prescription – you should collect your next pack as soon as possible, but don’t forget that the pharmacy closes at 4:00 pm today.”
  • When you arrive at your niece’s bedside, you can discuss the doctor’s report with her, as well as the feedback from her fitness device. This has been tracking how well she has been following her therapy programme, comparing the results with historical norms, and highlighting any divergence.
Fig.4: Robot receptionist – Image via Wikipedia

AI Roadmap

Opinions vary, but most experts accept three categories of AI development:

  • ANI: Artificial Narrow Intelligence

    First intelligence calibre, sometimes referred to as Weak AI. “AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does.”

  • AGI: Artificial General Intelligence

    Second intelligence calibre, sometimes referred to as Strong AI, or Human-level AI. AGI reaches and then passes the intelligence level of a human, meaning it has the ability to “reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience” as easily as a human.

  • ASI: Artificial Super Intelligence

    Third intelligence calibre: Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter — across the board.

Currently, the lowest level – ANI – exists in many implementations everywhere: in cars, factories, cities, shops and homes, and on our smartphones. AGI development projects are in progress, while ASI remains in the future.

One example of an AGI project is DeepMind, a UK company purchased by Google. DeepMind is attempting to mimic the biological structure of the human brain in software, to build machines that can learn ‘organically’ – that is, without human involvement.

DeepMind’s research includes systems that they claim are having a major environmental impact by learning how to use vastly less energy in Google’s data centres. The company is also collaborating with clinicians in the National Health Service on delivering better care for conditions that affect millions of people worldwide.

Conclusions

In this article, we have seen how AI, long seen as ‘something of the future’, is now impacting our daily lives. This is happening within many areas, especially where AI combines with IoT implementations in synergistic partnerships.

We have also looked at how the factors that have been giving AI its growing traction – rapidly increasing computer power, Big Data and improved algorithms – are expected to contribute to further, possibly explosive growth in AI and its impact on our society.

References

http://www.nec.com/en/global/onlinetv/en/vision.html

https://www.wired.com/2014/10/future-of-artificial-intelligence

https://www.jbs.cam.ac.uk/faculty-research/centres/risk/news-events/events/2017/risks-and-benefits-of-artificial-intelligence-and-robotics

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

http://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/three-mile-island-accident.aspx

http://www.cringely.com/2009/03/31/three-mile-island-memories

https://aibusiness.com/ai-and-iot-two-sides-of-the-same-coin

https://www.salesforce.com/uk/blog/2016/09/what-is-industry-4-0.html

http://www.information-age.com/preparing-ai-predictive-maintenance-123467203

https://www.themanufacturer.com/articles/preparing-ai-future-predictive-maintenance

https://www.ibm.com/us-en/marketplace/prescriptive-maintenance-on-cloud

http://www.cityam.com/264384/chatbots-could-save-business-8bn-year-2020-banks-and

http://www.infoworld.com/article/3124751/paas/from-chatbots-to-einstein-artificial-intelligence-as-a-service.html

http://www.govtech.com/Artificial-Intelligence-Helps-Cities-Get-Smarter-About-Infrastructure-Planning.html?flipboard=yes

http://www.techrepublic.com/article/why-ibms-speech-recognition-breakthrough-matters-for-ai-and-iot

https://www.inverse.com/article/7852-ai-revolution-road-superintelligence

https://deepmind.com

Date published: 29th August 2017 by Farnell