IoT as a Key Enabler to Singapore’s Smart Nation Vision

Low Teck Seng
March 14, 2018

 

Singapore has set its ambitions high to be the world’s first Smart Nation. IoT plays a critical role in helping us realise this vision. IoT is not just about being connected to multiple devices – it is about designing connected systems to deliver real value to individuals and organisations in a secure and equitable manner. IoT is a key enabler in improving our lives, helping to deliver more responsive public services, efficient work processes and better living experiences.

Responsive Public Services

At the national level, one of Singapore’s strategic national projects under our Smart Nation drive is the Smart Nation Sensor Platform, which aims to accelerate the deployment of sensors and other IoT devices to make our city more liveable and secure. This would serve a variety of purposes in the areas of security, transport or municipal services. For example, under the PolCam initiative, the Singapore Police Force has installed police cameras in high traffic public areas to further enhance the safety and security of our neighbourhoods and public spaces.

The use of technology is also helping us to create a reliable, convenient and cost-effective public transport system that can quickly adapt to meet public demand. We are currently exploring the roll-out of autonomous public buses in some of our newer housing estates, to supplement manned buses and to enhance first- and last-mile connectivity for commuters.

Efficient Work Processes

At the industry level, IoT is a key frontier technology identified in Singapore’s Infocomm Media Industry Transformation Map. In a bid to create more intelligent data centres in Singapore, the Infocomm Media Development Authority (IMDA) is working with the National Supercomputing Centre of Singapore (NSCC) to optimise data centre energy consumption through the use of IoT, Big Data and Machine Learning technologies. Through this collaboration, we have developed intelligent sensing technology to profile the health of a data centre’s operations, detect anomalies and monitor its energy consumption. This can help to reduce the cost of data centre operations and enhance its workforce productivity.

Better Living Experiences

At the individual level, IoT is a powerful tool for improving the lives of Singaporeans. For example, the use of IoT goes a long way in helping our seniors age gracefully, while giving caregivers peace of mind when they are away from home. The number of elderly citizens in Singapore is expected to triple to 900,000 by 2030. We are currently test-bedding Smart Elderly Alert Systems in our newer residential estates. Sensors in the flat can monitor the movements of the elderly and alert caregivers to irregularities, such as when no movement is detected for a prolonged period of time. Wireless panic buttons also allow the elderly to alert their loved ones quickly in times of need.

In time, we will also launch a new telehealth initiative. The Vital Signs Monitoring system will enable the remote monitoring of vital signs such as the blood glucose level, blood pressure or weight of patients with conditions such as diabetes, hypertension, or heart disease. Patients will be able to receive more timely interventions to manage their health conditions without having to visit the hospital. This system will enable patients to receive more regular care in convenient locations of their choice, and improve the productivity of our healthcare professionals and providers.

Innovation

Singapore has been continually innovating to find new ways of harnessing IoT technologies and encouraging their adoption. We have been working toward developing cheaper and longer-lasting sensors for a wide variety of applications, from logistics to energy and water usage. Such sensors are driven by cutting-edge Low Power Wide Area Network (LPWAN) technologies such as NarrowBand IoT (NB-IoT) and LTE Cat-M1. These would enable sensors to be deployed in places without power access, and reduce the cost of maintaining them.

Besides a vibrant cluster of companies, Singapore has built peaks of research excellence as well as many synergistic public-private partnerships. The Agency for Science, Technology & Research or A*STAR’s Industrial IoT Research Programme is one such example. It harnesses multidisciplinary capabilities from A*STAR research institutes as well as our Institutes of Higher Learning, such as NUS, NTU and SUTD, to break new ground on technologies for cognitive and secure industrial IoT systems. These research outcomes are then piloted at A*STAR’s Singapore Institute of Manufacturing Technology (SIMTech) and Advanced Remanufacturing and Technology Centre (ARTC), allowing SMEs to test new technologies before their adoption. To further develop Singapore’s next-generation manufacturing sector, A*STAR has established an Industrial IoT consortium with 13 companies, as well as a Smart Manufacturing Joint Lab with Rolls Royce and Singapore Aero Engineering.

Singapore strongly believes in the importance of being connected to a global network of international partners – scientists, academics, experts in this field and industry players. In hosting the 2018 IEEE World Forum on Internet of Things, we saw a huge opportunity to be connected to the community from all over the world, and learn from the many exciting exchanges during the forum. We invite you to experience Singapore for yourself, and share with us your ideas on how IoT can be applied to make Singapore the world’s first Smart Nation.

 


 

Low Teck Seng is the Chief Executive Officer of the National Research Foundation, Prime Minister’s Office, Singapore. He was previously the Managing Director of A*STAR (2010 – 2012). Professor Low was also the founding principal of Republic Polytechnic (2002 – 2008), as well as the Dean of the Faculty of Engineering in the National University of Singapore (1998 – 2000). He was the founding Executive Director of A*STAR’s Data Storage Institute (1992 – 1998).

Professor Low was awarded the National Science and Technology Medal in 2004 – the highest honour bestowed on an individual who has played a strategic role in the development of Singapore through the promotion and management of R&D. He was also awarded the Public Administration Medal (Gold) in 2007 for his merit and service to Singapore. Professor Low is a Fellow of the Institute of Electrical and Electronics Engineering (IEEE) and an International Fellow of the Royal Academy of Engineering, UK.

 

 

The Internet of Nothing and The Internet of Things

Nahum Gershon
March 14, 2018

 

As I was walking through the huge display halls at the last Consumer Electronics Show, CES 2018, I happened to see a display of a bathroom mirror. It was not a simple bathroom mirror - it was also a touchscreen.

Disregarding How People Conduct Their Lives

If you would like to know the weather or your own weight while looking at the bathroom mirror, touch it and it will let you know. When Achim Ebert, a friend, commented that this mirror will quickly become dirty from all the touches, I went back to the exhibit and asked whether the mirror was self-cleaning. The answer was no.

So I thought to myself: if you are so anxious to know the current weather while in the bathroom, using a voice assistant like Alexa might be more straightforward and, most importantly, would leave no fingerprints on the mirror... This mirror is one example of an implementation of technology that does not fully consider how people really like to conduct their lives or what their needs are. This “nothingness” is what I tend to call the Internet of Nothing (see footnote).

Beep, Beep, Beep…Or Too Much Distractive and/or Irrelevant Data

Unfortunately, this “Mirror, Mirror, on the Wall…” example is not the only manifestation of the Internet of Things neglecting important human needs and expectations. For instance, I recently installed a set of cameras inside and outside my house. I do enjoy being able to see the inside and outside of my house at will from a distance, but there are some difficulties. First, some of my cameras interpret any change in the captured image as motion. So, if a light is turned on or off automatically, the change in illumination is interpreted by the camera as motion and generates an unnecessary alert. In addition, the cameras outside the house do not distinguish between irrelevant and relevant motion, which was quite apparent during a recent wind storm.

Another difficulty is making the cameras detect motion only in specified areas. According to the instructions for two of my cameras, you are able to set up regions where motion is to be detected. But a single-lens camera cannot determine the distance of an object from the camera without additional intelligence. One of the cameras, for example, is set to detect motion up to 5 feet away, yet I occasionally get warnings for cars passing by on the road 40 feet from the camera. As a result, I get many false alarms during a typical day. These “beeps” come on top of the many interruptions I already get from my cell phone (emails, text messages, etc.). This can be a real problem, as research has shown that frequent interruptions can decrease concentration and thus reduce the effectiveness of work. They might even cause some unwelcome changes in the brain (see [2]).

In spite of these difficulties and distractions, I do sometimes get important reports on the area surrounding my house from these connected cameras. Last month, for example, one video showed how USPS delivered an Amazon package. Instead of following the instructions to ring the doorbell and, if there is no answer, place the package in front of the door, the USPS worker is seen in the video throwing the package through the air from 8 feet away. Luckily, the contents were not damaged.

The Accuracy of Sensors Might Be Low

Another aspect of Nothingness is poor accuracy. Experiments have shown that fitness trackers can be inaccurate ([3] & [4]). For example, the same tracker will report different activity values depending on whether it is attached to the arm or to the torso. Two otherwise identical trackers worn at the same place on my body reported an elevation change although the path walked was flat; moreover, they reported vastly different values for that change. These experiments illustrate that not all sensors are alike, and that one needs to make sure the sensors used are accurate enough for the purpose for which they are used.

For fitness, on the other hand, the exact number of steps might not be so important, as long as the deviation from reality is not too large and the tracker motivates the person to be physically active throughout the day.

A Single Point of Failure is Worse Than Two or Three

So far, we have been discussing properties of single devices. But besides these single-device aspects of the Internet of Nothing, there are also aspects related to systems of devices. I started to think about this as a result of what initially seemed to be an unfortunate incident.

While I was traveling about a year ago, one of the Internet of Things hubs at home stopped functioning. I called the manufacturer, which has a responsive phone service, and was told that the hub needed to be restarted. When I noted that I was out of the country, I was told that this could only be done from home. I commented that a hub designed to help people operate things from outside their home needs to be designed so that it can be rebooted remotely, either by the owner or by the manufacturer.

The main point for me, though, was how lucky I was to have a number of different IoT systems running at home, so that if one fails the others still function. This is a simple principle of systems design: avoiding a single point of failure through redundancy of sensors & detectors and non-homogeneity of the system (e.g., various manufacturers, hubs, sensors, power sources and internet providers).

From the Internet of Things to Also the Internet of Nothing

These considerations have led me to propose evaluating Internet of Things systems using two scores (like pros and cons):

  • The good & effective characteristics - The Internet of Things.
  • The difficulties that prevent the system from functioning as intended - The Internet of Nothing.

Focusing on both scores would lead to a more realistic & balanced view of these systems, pointing out directions for improvement for the benefit of users and helping to reduce unrealistic hype. Yes, technology is not above all. People & their needs are.

Footnote

This disregard for human and social needs does not seem to be limited to the Internet of Things, or even to technology. Recently, for example, Sidewalk Labs emerged as a top contender for the Toronto waterfront project, which intends to use a tech-focused approach to redevelopment. Sidewalk’s slogan, “We’re reimagining cities from the internet up,” does not mention people. “Even when addressing issues like affordable housing, urban congestion, and health, solutions based on predictive algorithms rather than human experience can engender healthy skepticism” [1].

References

  1. Jackson Rollings, "Google's Sidewalk Labs emerges as top contender for Toronto waterfront project”, The Architect’s Newspaper, October 5, 2017: http://bit.ly/2FkfwUi
  2. Adam Gazzaley and Larry D. Rosen, "The Distracted Mind: Ancient Brains in a High-Tech World”, MIT Press, 2016.
  3. N. Gershon, "Wearables, Humans, and Things as a Single Ecosystem!”, IEEE Internet of Things, November 2015: http://bit.ly/1PVIQgu
  4. N. Gershon, "The Internet of Things (Everything!) and Health” , The IEEE Life Sciences eNewsletter, February 2016: http://lifesciences.ieee.org/lifesciences-newsletter/2016/february-2016/the-internet-of-things-everything-and-health/

 


 

Nahum Gershon focuses on social media, the Internet of Things, strategic planning, visualization, combining creative expression with technology, and real-time information delivery, presentation & interaction (including storytelling) on mobile, wearable and traditional devices, including how they could improve both organizational environments and our personal lives. He likes to play with ideas, words, and real devices. Nahum Gershon has served in many capacities at the IEEE over the years and in schmooz.org, and is a Senior Principal Scientist at the MITRE Corporation.

Nahum is a well-known community organizer, mentor, and communicator and is quite socially oriented. He has a significant international & multicultural background (citizen of the world, speaking a number of languages) and is right and left brain enabled. He enjoys life!

 

 

Event-Driven Cloud Architecture Considerations for an Interactive Internet of Things

Javier Moreno Molina
March 14, 2018

 

The Internet of Things represents a vision of a world where computer systems are connected and completely integrated with the physical world. Communications, sensing, and actuation interfaces are present now in more and more objects, not only in industry but also in our daily life.

However, the lifecycle of data has grown in a very uneven way. Huge amounts of data are now being generated by millions of embedded sensors, and that data is kept in storage systems and data warehouses all over the world. Unfortunately, the use of this data is far from reaching its full potential. Most data is meaningless at collection time and only acquires value during offline analytics, when actionable insights are finally obtained [2].

Making data actionable already at collection time still has a long way to go. IoT applications must match every new data input with current and previously collected inputs in order to identify relevant events and provide the best context-aware responses. Moreover, they must do it within a reasonable latency. Only then will a seamless interactive Internet of Things actually come true.

The challenges, and the paradigm shift, from the cloud architecture perspective, are not small.

Event-Driven Services

Unlike Internet applications so far, which have been driven by request-response schemas, IoT applications excel with event-driven services. Instead of delivering a service upon a well-specified request, proactively made by a human end-user who directly supervises most of the process, an IoT application can identify a service based on heterogeneous data inputs from different sources, and deliver it at the very moment the user notices the need for it [1].

This seamless interaction with users, anticipating their demands, and delivering what they need at the very moment they need it, is the cornerstone of the interactive Internet of Things. To achieve it, IoT applications need to introduce the following changes:

  • Databases to Streams: There is no specific trigger that lets you occasionally query exactly the information you need. Instead, triggers must be recognized by data-driven IoT services that continuously receive new data inputs. While databases were optimized for the former case, the latter requires continuous queries, which are more effective over data streams.
  • Immutable Data Sources: To enable the evolution of IoT applications, the data sources used to produce application events must be preserved. They need to survive as the source of truth for the applications that deliver service based on them. Hence, an improved application with corrected bugs, enhanced algorithms, or an expanded spectrum of input data sources can reprocess all data with its current implementation, without being restricted or burdened by anachronistic decisions from the past.
  • Non-Blocking Interfaces: The throughput of incoming events increases significantly. At the same time, in most cases there will not be a customer sitting in front of a screen waiting for a response to every request. An immediate response is not always required, and a blocked input may pass unnoticed. Just as happened with voice communication, maintaining a sufficient Quality of Service allows for satisfactory service delivery while achieving much higher throughput.
  • Asynchronous Downward Channels: Whenever an action results from event processing, IoT cloud services need to be able to quickly communicate this action to the device or devices that will carry it out. In most cases, these devices will not even be aware of the events being processed. Therefore, they need to communicate asynchronously and obtain the necessary information to perform their corresponding actions in time.
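The principles above can be sketched in a few lines of code. The snippet below is only an illustrative toy, not a production design: a producer coroutine plays the role of a sensor appending readings to a stream, and a consumer runs a continuous query over that stream, emitting an action on an asynchronous downward channel. The threshold, the `cooling-unit` name, and the end-of-stream marker are all invented for the example.

```python
import asyncio

async def sensor(stream: asyncio.Queue) -> None:
    # Producer: a device appends readings to the stream instead of
    # waiting to be polled by a request-response cycle.
    for value in (18.5, 19.0, 30.2):
        await stream.put({"sensor": "temp-1", "value": value})
    await stream.put(None)  # end-of-stream marker, for this sketch only

async def processor(stream: asyncio.Queue, actions: list) -> None:
    # Consumer: a continuous query over the stream; when a relevant
    # event is recognized, an action goes out on the downward channel.
    while (event := await stream.get()) is not None:
        if event["value"] > 25.0:  # hypothetical trigger condition
            actions.append(("cooling-unit", "on", event["value"]))

async def main() -> list:
    stream: asyncio.Queue = asyncio.Queue()
    actions: list = []
    await asyncio.gather(sensor(stream), processor(stream, actions))
    return actions

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Note that neither side blocks on the other: the producer never waits for a response, and the consumer reacts only when an event arrives.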

The result of applying these design principles is completely asynchronous communication, which can be difficult to assimilate at first, but which ends up being decisive in providing the best responses to customers who are continuously supplying data through different sources at the same time.

Figure 1: IoT Event-Driven Application Architecture.


Low-Latency Complex Event Processing

In order to provide context-aware responses, IoT applications must be capable of building their own events [4] from the broad space of input data streams. This means determining which data inputs, or combinations of them, are relevant to their service delivery. This has been the field of study of Complex Event Processing for decades.

The critical challenge for interactive IoT services is to perform complex event processing and deliver in “real-time”. There is frequently no time to perform database queries at processing time. The way to reduce latency is to provision event processors with as much context as possible, so that when a new event occurs, all the data required to take an action is quickly accessible.

There are two main, non-exclusive ways to achieve this:

  • Edge Computing: Assuming most contextual data has geographical coherence, every data input can be enriched with additional contextual data at the edge location. In the ideal case, there is no need to access any other remote data, as the incoming event contains all the information necessary to make it actionable. Cloud services then only need to host the business logic that decides which action to execute, and could even be implemented as lambda functions.
  • Stateful Stream Processors: In this approach, cloud services consume all the data they consider relevant to their decisions and store the resulting state in quickly accessible cache memory that they can look up while processing new events. There are already tools, such as Apache Samza and Kafka Streams, that maintain key-value tables of the relevant state in order to perform stateful real-time event processing. Both keep state in embedded databases (LevelDB, RocksDB) so that no remote database needs to be accessed.
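As a rough illustration of the second approach, the sketch below keeps a local key-value state table in memory, in the spirit of the embedded stores used by Samza or Kafka Streams. The device name and the delta computation are invented for the example, and a real processor would also persist and replicate this state.

```python
from collections import defaultdict

class StatefulProcessor:
    """Keeps a local key-value state table so that processing an
    event never blocks on a remote database lookup."""

    def __init__(self):
        self.state = defaultdict(lambda: {"count": 0, "last": None})

    def process(self, event):
        # Update the locally cached state for this device key...
        entry = self.state[event["device"]]
        entry["count"] += 1
        prev, entry["last"] = entry["last"], event["value"]
        # ...and emit the event enriched with local context.
        delta = None if prev is None else event["value"] - prev
        return {**event, "delta": delta}

proc = StatefulProcessor()
proc.process({"device": "pump-7", "value": 10.0})
enriched = proc.process({"device": "pump-7", "value": 12.5})
print(enriched)  # the delta comes from local state, no remote query
```

The key design point is that all context needed to act on an event is already co-located with the processor when the event arrives.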

In practice, while edge computing is crucial for seamlessly providing contextual data that may be difficult to infer otherwise, applications are still very likely to need additional information from external data sources.

Conclusions

Event streams are present in some well-known architectures, such as the lambda architecture, to provide real-time data analytics. However, IoT applications, especially those focused on closing the cyber-physical loop, rely on providing a “real-time” response based on all the information available, a concept that matches very well the so-called Kappa architecture [3].

Serverless architectures may work well for self-contained events, but as soon as additional data needs to be fetched remotely, performance will drop dramatically.

The interactive IoT application can be seen as a set of memory-hungry micro-applications that subscribe to events of interest and provision themselves with all the additional data they require to execute their business logic (see Figure 1). Ideally, all data persists in immutable data streams that allow the micro-applications to rebuild their own state tables whenever they need to.
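The rebuild idea in the last sentence can be shown with a toy replay. Because the event log is immutable, a fixed or improved micro-application can simply fold the entire log into a fresh state table instead of migrating a mutable database; the log contents and the aggregation below are, of course, made up for the example.

```python
# An immutable event log is the source of truth; it is appended to,
# never mutated, so any application version can replay it from the start.
LOG = [
    {"device": "valve-3", "flow": 1},
    {"device": "valve-3", "flow": 3},
    {"device": "fan-1", "flow": 2},
]

def rebuild_state(log):
    # Fold the entire log into a fresh state table. An improved or
    # bug-fixed micro-application reprocesses the same history rather
    # than carrying forward decisions baked into an old state store.
    state = {}
    for event in log:
        state[event["device"]] = state.get(event["device"], 0) + event["flow"]
    return state

print(rebuild_state(LOG))
```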

References

  1. Bonér, J., Farley, D., Kuhn, R., & Thompson, M. (2014). The reactive manifesto.
  2. Reinsel, D., Gantz, J., Rydning, J. (2017). Data Age 2025: The Evolution of Data to Life-Critical. IDC White Paper.
  3. Kreps, J. (2014). Questioning the Lambda Architecture. O’Reilly.
  4. Fowler, M. (2006). Focusing on Events.

 


 

Javier Moreno Molina (IEEE Member) received his master’s degree in Telecommunications Engineering from the Technical University of Madrid, and his Ph.D. in Electrical Engineering from the Vienna University of Technology. He worked as a researcher in Cyber-Physical Systems at the University of Kaiserslautern before joining BEEVA (BBVA) as an Internet of Things Solutions Architect. He has participated in several international projects in the IoT field, such as GEODES, FP7 SmartCoDe and H2020 VICINITY, and has numerous publications on networked and distributed embedded systems.

 

 

 

(I)IoT Protocols for Beginners

Vivart Kapoor
March 14, 2018

 

We all know HTTP (Hypertext Transfer Protocol). Those are the first four characters you see in the URL of any website you open in your browser. In simple terms, it is a list of rules that defines the do’s and don’ts of communication between a web browser and a web server.

It is like you (the web browser) going to an ATM (the web server) to get some cash (the request). Here, HTTP describes the complete procedure: enter PIN, amount, etc. You get your cash (the result) once you follow the prescribed steps. Quite simple.

The World Wide Web (WWW) works on HTTP, which is essentially the only protocol used there for data transfer. However, this is not the case in the Industrial (I)IoT world. Here we have a bunch of protocols to choose from, depending on the type of application or so-called “use case”. The most common among them are MQTT, CoAP and, of course, HTTP. Before we discuss them, let us first look at some networking terminology and definitions.

Transport Layer Protocols (TCP, UDP)

A transport layer protocol, as the name implies, is responsible for transporting a message or information from one computer to another. This transport can be done in two ways:

  • Connectionless Protocol (UDP): this kind of protocol is preferred where speed and efficiency are more important than reliability. Data is sent without establishing or waiting for a connection, which means that a bit or segment of data can get lost in transit. A typical example is live video streaming, where a bad connection sometimes results in fragmented video. Imagine bringing a bunch of letters to the postbox and dropping them inside: you drop the letters in without knowing whether they will ever reach their recipients. This is the case with connectionless protocols. Bringing those letters to the post office and ordering a return receipt, thus ensuring their delivery, is instead comparable to a connection-oriented protocol.
  • Connection-Oriented Protocol (TCP): here the protocol ensures receipt of the message at the other end without any data loss on the way, providing reliable transport. A connection-oriented protocol needs extra overhead (discussed later) compared to a connectionless one, just as it takes extra resources (time, money) to send a registered letter with a return receipt.
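The "drop the letter in the postbox" behaviour is easy to see with Python's standard socket API. The UDP sender below transmits a datagram without establishing any connection first; on the loopback interface the datagram happens to arrive, but nothing in the protocol confirms or guarantees delivery. The payload string is just an example.

```python
import socket

# Connectionless UDP on the loopback interface: the sender just
# "drops the letter in the postbox" with no handshake beforehand.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"temp=21.5", addr)    # no connection, no delivery receipt

data, _ = receiver.recvfrom(1024)    # on loopback this happens to arrive
sender.close()
receiver.close()
print(data)
```

A TCP version of the same exchange would first require `connect()` and `accept()` calls (the handshake), which is exactly the extra overhead the registered-letter analogy describes.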

Packet and Packet Size: a packet contains data (the payload) along with information (the header) such as source, destination and size, just like a DHL parcel contains the goods to be shipped along with information such as address, weight and dimensions. Packet size, in networking, is the amount of data (in bytes) carried over the transport layer protocol.

Overhead: the extra information (in bytes) or features associated with a packet that ensure reliable delivery of the data. In other words, it is the bubble wrap around your shipment: not strictly necessary, but an extra layer of safety and reliability for your parcel. The amount of overhead depends on the type of transport protocol used; UDP has a smaller overhead than TCP.
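A quick back-of-the-envelope calculation shows why overhead matters for tiny IoT payloads. Using typical minimum header sizes (20 bytes for an IPv4 header, 20 for TCP, 8 for UDP; options can add more), a 2-byte sensor reading is dwarfed by the headers that carry it:

```python
# Typical minimum header sizes in bytes; real packets may carry options.
IP_HEADER, TCP_HEADER, UDP_HEADER = 20, 20, 8

def payload_fraction(payload_bytes, transport_header):
    """Fraction of each packet that is actual payload."""
    total = payload_bytes + IP_HEADER + transport_header
    return payload_bytes / total

payload = 2  # a tiny sensor reading
print(f"UDP: {payload_fraction(payload, UDP_HEADER):.1%}")  # ~6.7% payload
print(f"TCP: {payload_fraction(payload, TCP_HEADER):.1%}")  # ~4.8% payload
```

In both cases the headers dominate, which is why constrained-device protocols work hard to keep their own per-message overhead to a few bytes.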

Bandwidth: the rate (bits/MB/GB per second) at which data transfer takes place. The larger the bandwidth, the more data can be transferred in a given time.

So much for the crash course in networking. Now let us try to understand the aforementioned IIoT protocols using this terminology.

Message Queue Telemetry Transport

Message Queue Telemetry Transport, or simply MQTT, is a lightweight messaging protocol for industrial and mobile applications. It is best suited for applications where network bandwidth and power usage are limited, for example small sensors, remote locations, and machine-to-machine communication. MQTT communicates with a server over TCP and, unlike HTTP, works on a publish-subscribe model (see Figure 1).

Figure 1: Example of a publish-subscribe model used in MQTT.


To understand the concept behind MQTT, one should understand its underlying architecture, the publish-subscribe model. Here a client publishes a message on a topic (temperature, humidity) to a broker, which in turn sends those messages out to the clients that have subscribed to that topic.

The publish-subscribe model used in MQTT offers several advantages over the standard client-server model used in HTTP; multicast, scalability and low power consumption are the top three. These advantages stem from the fact that the publish-subscribe model overcomes some structural drawbacks of the traditional client-server model (one-to-one communication, tight coupling, fault sensitivity).

Let’s have a look at an analogy to understand the difference. Assume that MQTT and HTTP are two publishing companies. MQTT publishes magazines on various topics (sports, politics, cars, etc.) and provides them to a broker, who in turn distributes them to subscribers interested in one or more topics. This way MQTT can serve many subscribers at a given time (multicast), so it is scalable. And since MQTT only has to deal with the broker, whom it contacts once a day, its investment (power consumption) in maintaining the business is low.

HTTP, the other publisher, likes to deal with one customer at a time. It relies heavily on its customers and on its value chain (server to server). This, however, comes at the cost of a relatively high business investment (power consumption), since it has to visit each customer every time for a handshake.

MQTT, in contrast to HTTP, is best suited for applications where bandwidth, packet size and power are at a premium. An industrial generator with a battery-powered temperature and humidity sensor cannot afford to maintain a connection with a server every time it has to push measured values (events or messages) into the cloud. MQTT is designed precisely for such constraints: the connection is maintained using very little power, and commands and events can be received with as little as 2 bytes of overhead.
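The publish-subscribe pattern itself can be sketched in a few lines. The toy broker below is only an in-process illustration of the model, not the MQTT wire protocol; the topic names and payload are invented for the example.

```python
from collections import defaultdict

class Broker:
    """Toy in-process broker illustrating the publish-subscribe model."""

    def __init__(self):
        # topic -> list of subscriber callbacks
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # The publisher talks only to the broker; the broker fans the
        # message out to every subscriber of that topic (multicast).
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
received = []
broker.subscribe("plant/generator/temperature", lambda t, p: received.append((t, p)))
broker.subscribe("plant/generator/humidity", lambda t, p: received.append((t, p)))
broker.publish("plant/generator/temperature", 73.2)
print(received)  # only the temperature subscriber fires
```

Note the decoupling: the publisher never knows who, or how many, the subscribers are, which is exactly what makes the model scale to many clients.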

Constrained Application Protocol 

Constrained Application Protocol, or simply CoAP, is a UDP-based protocol that is often described as a light version of HTTP (except that HTTP works over TCP). It is specially designed to work in constrained environments with limited bandwidth and power, where communication has to be fast and ongoing. Unlike HTTP, CoAP can support one-to-many (multicast) requirements and is faster than TCP-based protocols, which makes it a good choice for M2M.

It is quite common to see device-to-device (D2D) or device-to-gateway (D2G) communication done over CoAP, while communication between the gateway and the cloud is HTTP’s job. This works because there is a well-defined mapping between the two protocols.

So, if both MQTT and CoAP are good for constrained environments, what makes one better than the other? The answer lies in the underlying transport layer they use. MQTT is better suited for event-based communication in a constrained environment where data needs to be sent in batches (for instance, temperature and humidity values) at regular intervals over a reliable channel.

CoAP is a better choice for continuous condition-monitoring scenarios in a constrained environment. Since it runs over UDP, CoAP offers faster communication among devices, which makes it a better option for M2M/D2D/D2G communication. CoAP is also well suited for web-based IIoT applications where it has to work alongside HTTP. In such a setup, CoAP runs on the sensor side and HTTP runs between the proxy/gateway and the cloud.

What about HTTP?

HTTP is in demand whenever you want to push a big chunk of data from a gateway, industrial modem or computer into the cloud or a web-based application without compromising on security. Regardless of how the data was collected and sent to the gateway (CoAP vs MQTT), when it comes to reliable delivery of big packages, HTTP takes the front seat. Moreover, HTTP is still used as the standard protocol by devices that do not support any other protocol. MQTT, CoAP or HTTP: it is a matter of speed vs reliability vs security, whichever suits your use case best.

 


 

Vivart Kapoor is a project manager at KSB AG in Germany, where he is involved in various IoT/digital transformation projects. As a hobby blogger, he regularly contributes his expertise and knowledge in the field of the Internet of Things to several IoT expert panels and blogging sites. Vivart obtained his Bachelor’s in Biomedical Engineering from the Manipal Institute of Technology in India and his Master’s in Technical Management from the University of Applied Sciences in Emden, Germany.