Will IoT Help Robots Take Over Humankind Anytime Soon?
The Internet of Things (IoT) began by embedding sensors and internet connectivity into the ordinary objects on us and around us, turning them into smart applications. It has since grown into the main enabler behind autonomous machines such as self-driving cars and autonomous tractors, and behind algorithms that manage schedules, hiring, and travel plans.1 IoT devices now produce large volumes of data that have quickly become instrumental to automation across many different application domains.
In such a sensor-rich world, machine learning and other forms of artificial intelligence (AI), such as computer vision and speech-to-text, are evolving in research environments, giving rise to the promise of an IoT that can efficiently support humans. Machine learning is the first step towards the much-hyped AI that leads to machine intelligence and the building of intelligent robots. Machine learning gets computers to derive their own rules from data to reach a desired outcome, rather than having a programmer write logic based on a set of hand-crafted rules. Does this mean machines can write their own programs, self-learn, and take over humankind? Not so soon!
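The contrast between programmed rules and learned rules can be made concrete with a toy sketch. The sensor readings, labels, and the 25.0 threshold below are all hypothetical; the "learning" step is deliberately simplistic (a midpoint between class means), just to show the rule coming out of the data rather than out of the programmer's head:

```python
# Hypothetical labeled training data: (sensor reading, label)
training_data = [(12.0, "normal"), (14.5, "normal"), (13.2, "normal"),
                 (31.0, "anomaly"), (28.7, "anomaly"), (33.4, "anomaly")]

# Traditional programming: a human writes the rule explicitly.
def rule_based(reading):
    return "anomaly" if reading > 25.0 else "normal"

# Machine learning: the threshold is *derived* from the data --
# here, the midpoint between the two class means (a toy 1-D classifier).
def learn_threshold(data):
    normals = [x for x, label in data if label == "normal"]
    anomalies = [x for x, label in data if label == "anomaly"]
    return (sum(normals) / len(normals) + sum(anomalies) / len(anomalies)) / 2

threshold = learn_threshold(training_data)

def learned(reading):
    return "anomaly" if reading > threshold else "normal"
```

With different training data, `learned` would classify differently with no change to its code; `rule_based` changes only when a programmer edits it.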
We humans will remain at the current stage of AI for a very long time, far from building intelligent robots that could take over humankind, because of three challenges that we explore in this article.
Where are we today in building intelligent self-learning robots?
Today we see an explosion of machine learning: robo-trading algorithms, computer vision and speech-to-text running on IoT devices, and many more useful use cases in industrial and healthcare settings. Machine learning combined with IoT makes many promises. Swarms of devices inside a human body will be able to spot and kill cancer cells. Computers will identify pathogens in blood samples. IoT devices will track quality and compliance errors in manufacturing processes, and machines will predict with high accuracy when bridges, dams, or airport components will fail and need intervention. The list goes on to cover almost every industry, promising a utopian world. This has led to 952 AI startups funded today, mostly from the US, UK, and Canada.2 IBM Watson, Google, and Microsoft all offer machine intelligence APIs for facial recognition, speech-to-text, and machine learning for classification, segmentation, anomaly detection, and much more.
However, computers have not evolved far from their origins as devices built to take ones and zeros as input. Three main challenges hinder the evolution of AI in machines: (i) getting adequate training data; (ii) human dependency on tribal knowledge; and (iii) the need to develop models for many unique use cases.
1. What is training data and why is it hard to build?
The foundation of machine learning is training data: the examples fed to the computer so that it can learn the rules needed to achieve the desired outcome. These training data have to be built by humans, who must understand all possible scenarios to teach the computer how to react to inputs coming from numerous IoT and non-IoT data sources. For example, to train a self-driving car, Google had to expose the car to a wide variety of situations to capture all the rules needed for it to drive autonomously. Civil Maps3 is an AI startup that uses computer vision to map the fixed parts of a terrain, creating training data for autonomous vehicles.
When we learn as human beings, even with little data we can use our imagination to create new scenarios and test our hypotheses. Machines lack imagination, so to be trained they need a very large volume of data covering every possible scenario. IoT can certainly help by producing raw data at scale.
If we are asking a machine to look at blood samples to spot pathogens, we can provide large volumes of data. But for robots, it becomes a challenge to provide data that characterizes every potential scenario of interaction with the environment.
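A toy sketch shows why scenario coverage matters. Below, a 1-nearest-neighbour classifier is "trained" only on hypothetical clear-weather driving examples (the visibility/speed numbers and action labels are invented for illustration). Asked about heavy fog, a scenario absent from its training data, it still confidently returns the closest clear-weather answer; it has no way to know the situation is outside its experience:

```python
# Hypothetical training data, all collected in clear weather:
# (visibility in metres, speed in km/h) -> recommended action
training_data = {
    (500.0, 100.0): "maintain_speed",
    (500.0, 130.0): "slow_down",
    (450.0, 60.0): "maintain_speed",
}

def predict(sample):
    """Classify by the nearest training example (1-nearest neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training_data, key=lambda point: dist(point, sample))
    return training_data[nearest]

# Heavy fog: visibility of only 30 m. No fog scenario was ever seen,
# so the model just picks the geometrically closest clear-weather case.
print(predict((30.0, 100.0)))  # -> "maintain_speed" -- a dangerous answer
```

The fix is not a smarter distance function but more training data, one of each scenario the machine must handle, which is exactly what is hard to provide for robots in open-ended environments.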
2. Human dependency on tribal knowledge
"Internet of things implies minimal human intervention and IoT analytics algorithms need to cater for this requirement."4 Another challenge in providing accurate training data relates to the fact that humans have biases. The biases of the programmer building the training data could hinder the machine's learning. On the other hand the tribal knowledge of the programmer in particular industry domains is necessary for the machines to interpret the data correctly. This is another factor that will hinder the process of moving from IoT raw data to intelligent knowledge and getting robots to eventually become independent of humans. The thousands of years of tribal knowledge stored in human experience is not necessarily all documented in books and systems to feed the robots.
3. Need to develop models for many unique use cases, not generic uses
Today many systems build machine learning, speech-to-text, computer vision, and facial recognition applications that are very generic. Facebook, Apple, and Google Photos all offer facial recognition using machine learning; the use case is grouping faces as belonging to one particular person. Similarly, there are computer vision systems that learn to scan video footage from IoT sensors for anomalies, for instance to summon help when a senior falls while alone at home. However, all these systems are built for wide applicability and eventual market appeal, and are not appropriate for specific uses.
The data models behind these facial recognition and computer vision systems cannot be repurposed to match the face of an intruder in security video footage. A new machine learning model would have to be built afresh for that task.
I have a recording of a guest speaker from my Stanford class that I want to transcribe as a Q&A for my students. The Google machine intelligence library TensorFlow5 and the IBM Watson cognitive APIs6 offer speech-to-text algorithms that are general purpose and do not meet my need: I would have to find a lot of training data and build a new model to transcribe my Q&A video. Similarly, a restaurant in Guangzhou, China fired its robot staff because the robots were good at repeatable tasks but unable to adapt to myriad human interactions.7
It remains a huge challenge to evolve from today's general-purpose research to systems that can properly support a variety of use cases without building endless unique algorithms each time.
Until that happens we can be sure that, while IoT is helping produce large amounts of data to refine these unique algorithms, robots are nowhere close to taking over humankind.
1. Sudha Jamthe, Breaking the Barriers of Humans and Machines.
4. Sibanjan Das (Analytics Consultant) and Ajit Jaokar (FutureText), Deep Learning for Internet of Things Using H2O.
Sudha Jamthe is the Stanford instructor of the first IoT Business course and the author of IoT Disruptions 2020, which focuses on innovations at the junction of IoT and AI. Sudha is a member of the IEEE Internet of Things Community. She shares her ongoing IoT research with case studies at iotdisruptions.com.