Article 1

IoT Cybersecurity: Research Challenges and Opportunities Ahead

Urban Sedlar, Leon Štefanič Južnič, Matej Kren, Matej Rabzelj, Andrej Kos, and Mojca Volk

In the past decade, we have witnessed an unprecedented expansion of the "world-sized robot," as various security researchers call the Internet of Things (IoT). The term is quite apt, as the IoT is no longer a mere sensor: it actuates, it moves, and it drives critical (and expensive) decisions, such as when to unlock your car or your apartment door, when to turn on the heat, and when to stop the conveyor belt.

 


Article 2

Challenges Monitoring the Arctic Tundra: The Distributed Arctic Observatory Cyber-Physical System

Issam Raïs, Otto Anshus, Phuong H. Ha, and John Markus Bjørndalen

Climate change is deteriorating terrestrial biomes, and the arctic tundra is the most challenged among them. The extent of projected warming is so extreme that the tundra ecosystems will likely transform into novel states within a few decades, potentially leading to losses of important biodiversity and of functions providing irreplaceable services to humanity.

 


Article 3

Digitalization of COVID-19 Pandemic Management and Cyber Risk from Connected Systems

Petar Radanliev, David De Roure, Max Van Kleek

What makes cyber risks arising from connected systems challenging during the management of a pandemic? Assuming that a variety of cyber-physical systems are already operational, collecting, analyzing, and acting on data autonomously, what risks might arise in their application to pandemic management? How would a pandemic monitoring app be different, or riskier?

 


Article 4

How oneM2M Unlocks the Potential of IoT to Enable Digital Transformation

Ken Figueredo

The topic of digital transformation has risen up the agenda for many organizations over the past few years. The sudden shock of Covid-19 has accelerated its importance. Many person-to-person activities have migrated online. Consumers are ordering food supplies and banking remotely. Academia, businesses, and governments are transitioning their operational activities into online formats. Moreover, their workforces are becoming distributed.

 

 

EVENTS & ANNOUNCEMENTS


Article 5

IEEE Internet of Things Initiative - Upcoming Events

The IEEE 6th World Forum on Internet of Things has gone virtual. Join us beginning on 2 June 2020.
View more details

IEEE IoT Vertical and Topical Summit on Tourism - 2020 has been postponed.
View more details

 


Article 6

IEEE Internet of Things Magazine

The Internet of Things Magazine (IoTM) publishes high-quality articles on IoT technology and end-to-end IoT solutions. IoTM articles are written by and for practitioners and researchers interested in practice and applications, and selected to represent the depth and breadth of the state of the art. The technical focus of IoTM is the multi-disciplinary, systems nature of IoT solutions.

Become an author - Submit an article today!
Never miss a copy - Subscribe today!

 


Article 7

IEEE Xplore®

Stay Connected to IEEE Xplore When Working Remotely
If your organization has an institutional subscription to IEEE Xplore® and you need to work remotely due to school and workplace closures, you can still access IEEE Xplore and continue your work and research while offsite. Try these tips for remote access or contact IEEE for help. IEEE is here to support you, making certain that your IEEE subscription continues to be accessible to all users so they can continue to work regardless of location.

 

 

This Month's Contributors

Urban Sedlar is an assistant professor and senior researcher at the Laboratory for Telecommunications, Faculty of Electrical Engineering, University of Ljubljana.
Read More >>

Leon Štefanič Južnič is a research member of the Laboratory for Telecommunications at the Faculty of Electrical Engineering, University of Ljubljana.
Read More >>

Matej Kren is a research assistant at the Laboratory for Telecommunications.
Read More >>

Matej Rabzelj is a master's degree student of Information and Communications Technology at the Faculty of Electrical Engineering, University of Ljubljana.
Read More >>

Andrej Kos is a full professor at the University of Ljubljana, Faculty of Electrical Engineering as well as the Head of the Laboratory for Telecommunications.
Read More >>

Mojca Volk is an Assistant Professor and Scientific Associate at the Laboratory for Telecommunications, Faculty of Electrical Engineering, University of Ljubljana.
Read More >>

Issam Raïs received his Ph.D. from the Algorithms and Software Architectures for Distributed and HPC Platforms (Avalon) team at the LIP laboratory of the École Normale Supérieure (ENS) de Lyon, France, in September 2018.
Read More >>

Otto Anshus is a Professor at the Department of Computer Science, University of Tromsø (UiT) The Arctic University of Norway, and a part-time Professor at the Department of Informatics, University of Oslo, Norway.
Read More >>

Phuong H. Ha received the Ph.D. degree from Chalmers University of Technology, Sweden.
Read More >>

John Markus Bjørndalen is a Professor at the Department of Computer Science, University of Tromsø (UiT) The Arctic University of Norway.
Read More >>

Petar Radanliev is a Post-Doctoral Research Associate at the University of Oxford.
Read More >>

David De Roure is a Professor of e-Research at the University of Oxford.
Read More >>

Max Van Kleek is an Associate Professor of Human-Computer Interaction with the Department of Computer Science, at the University of Oxford.
Read More >>

Ken Figueredo has a background spanning business, management and technology consultancy.
Read More >>

 

Contributions Welcomed
Click Here for Author's Guidelines >>

 

Would you like more information? Have any questions? Please contact:

Raffaele Giaffreda, Editor-in-Chief
rgiaffreda@fbk.eu

Massimo Vecchio, Managing Editor
massimo.vecchio@uniecampus.it

 

About the IoT eNewsletter

The IEEE Internet of Things (IoT) eNewsletter is a bi-monthly online publication that features practical and timely technical information and forward-looking commentary on IoT developments and deployments around the world. Designed to bring clarity to global IoT-related activities and developments and foster greater understanding and collaboration between diverse stakeholders, the IEEE IoT eNewsletter provides a broad view by bringing together diverse experts, thought leaders, and decision-makers to exchange information and discuss IoT-related issues.

IoT Cybersecurity: Research Challenges and Opportunities Ahead

Urban Sedlar, Leon Štefanič Južnič, Matej Kren, Matej Rabzelj, Andrej Kos, and Mojca Volk
May 14, 2020

 

 

In the past decade, we have witnessed an unprecedented expansion of the "world-sized robot," as various security researchers call the Internet of Things (IoT). The term is quite apt, as the IoT is no longer a mere sensor: it actuates, it moves, and it drives critical (and expensive) decisions, such as when to unlock your car or your apartment door, when to turn on the heat, and when to stop the conveyor belt.

On one hand, IoT promises to solve numerous inefficiencies across whole industries; on the other hand, our collective approach to it has been rather negligent [1]. The problem started long ago, in the consumer IoT sector, with low-cost devices that were affordable precisely because of various compromises, including neglected security. With quickly shrinking margins and cutthroat competition, the only features that can be left out are the invisible ones, and security is a prime candidate [2]. Unfortunately, the consequent lack of security provisions cannot be easily tacked on after the fact. The term "security by design" implies that such work needs to start at the beginning, with a robust architecture, and must continue from the ground up across the whole system: from the device and communication stack to cloud backends and user-facing interfaces.

This, however, is hardly the usual course of action; several studies have found a lack of long-term support and software upgrades across the industry, leading to large deployments of so-called IoT abandonware. This means there already exist large numbers of publicly exposed and poorly protected devices, many of which are a security liability with direct consequences for the safety, privacy, and security of citizens. Furthermore, numerous devices are built on unproven designs, rely on hardcoded secrets, attempt to roll their own security schemes (something one should never do), or rely on security by obscurity. The latter seems especially tempting when you are building 100 devices on Kickstarter; however, it is not uncommon for such projects to eventually succeed and become household names without much change to the source code.

This is where IoT cybersecurity comes into play. Several prominent cyberattacks in recent years have gained access to such poorly protected devices and used them to perform distributed denial of service (DDoS) attacks, expose users' private data, steal identities, or simply cause inconvenience.

So What Can Be Done About This?

In our view, there are three approaches. First, if built-in security is neither mandatory nor visible, end-users can hardly be expected to know what to choose until it is too late. To solve this, regulatory bodies should step in and require companies producing or selling connected devices to take the necessary steps. We have seen a similar approach with GDPR, which has already caused a major shift in the industry with regard to data management and privacy. Even though there is a large body of ongoing effort in this respect, including within prominent SDOs such as the ITU, IEEE, and IETF, the domain is fragmented, and there is currently no single owner committed to delivering complete IoT security standardization to address the overall problem space [3].

Next, if the device itself cannot be trusted (either because the software can never be guaranteed to be bug-free, or because the manufacturer is purposefully deceptive), someone else needs to enforce the rules. That someone should be the network; after all, the network is the one thing every connected device needs. This is a rehashed idea of intrusion detection and intrusion prevention systems, applied to heterogeneous networks of IoT devices [4]. For example, it would be quite feasible to determine that a smart scale should only connect to its cloud backend once a day, usually at 8 in the morning, should never connect to Facebook, should never accept an incoming connection, and should never open ports on the home router using Universal Plug and Play (UPnP). In this way, a network connectivity profile could be established for each device, and any anomaly could trigger a block and an alert that something is going on. Such profile templates could someday ship with the device, but in the meantime they can be discovered with current machine learning (ML) techniques. After all, credit card companies have perfected similar anomaly detection with much less data (only dozens of transactions per user per month), while in IoT we are talking about packets per second, which yields enormous datasets for training the models.
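The per-device connectivity profile described above can be sketched as a simple rule check. This is a minimal sketch, assuming hard-coded profiles; the device name, backend hostname, and permitted hours are illustrative, and a deployed system would learn such fields from traffic with ML rather than hard-code them.

```python
# Hypothetical per-device connectivity profile: allowed destinations and
# permitted hours of day, here hard-coded instead of learned from traffic.
PROFILES = {
    "smart-scale": {
        "allowed_hosts": {"api.scale-vendor.example"},   # assumed backend
        "allowed_hours": {8},                            # connects ~8 AM
        "accepts_inbound": False,
    },
}

def check_flow(device, dst_host, hour, inbound):
    """Return a list of policy violations for one observed flow."""
    profile = PROFILES.get(device)
    if profile is None:
        return ["unknown device"]
    violations = []
    if inbound and not profile["accepts_inbound"]:
        violations.append("unexpected inbound connection")
    if dst_host not in profile["allowed_hosts"]:
        violations.append(f"unexpected destination {dst_host}")
    if hour not in profile["allowed_hours"]:
        violations.append(f"unexpected hour {hour}")
    return violations

# An outbound flow to Facebook at 3 AM from the scale should raise alerts:
alerts = check_flow("smart-scale", "facebook.com", 3, inbound=False)
```

Each non-empty violation list would trigger a block and an alert, as described above.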

Finally, real-world attack data would be of immense help in training such models, and for that we need a way to see and capture what attackers are doing. A lot of research has been done to classify attacker types (from script kiddies to state actors) and their motivations. Different classes of attackers have different available resources and different skills. A sad trend in this regard is how quickly bleeding-edge cybersecurity expertise becomes commonplace and ends up in publicly available GitHub repositories. What is today in the domain of state actors and top researchers might tomorrow be in the arsenal of every script kiddie with too much time on their hands. There is ample evidence from cybersecurity research firms proving this point [5], [6].

Figure 1: Network telescope and statistics showing the exponential growth of probing events (available live at http://telescope.ltfe.org/en/ ).


Therefore, we need fresh insight into ongoing reconnaissance, probing, attack, and exploitation methods; having this kind of cyber-threat prediction would be akin to checking the weather forecast before leaving home. There are several established ways to do that. A simple one is a network telescope, or black hole, used to passively monitor traffic arriving at a completely dark IP range; most such traffic is indeed port scanning attempts, revealing which services are the most popular targets. In our lab, we have been operating such a telescope for almost a decade, and the most worrying statistic is the trend in probing events, which has been growing exponentially for the entire decade. We now receive as many probing events per day as we did in an average month in 2011.
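The per-service statistics a telescope produces boil down to counting probes by destination port. A minimal sketch of that aggregation step follows; the addresses and ports are made-up examples, not data from our telescope.

```python
from collections import Counter

# Toy probe log: (source IP, destination port) tuples, as a telescope might
# record them after parsing captured packets. Values are illustrative only.
probes = [
    ("203.0.113.5", 23), ("203.0.113.5", 2323), ("198.51.100.7", 23),
    ("198.51.100.7", 445), ("192.0.2.9", 23), ("192.0.2.9", 80),
]

# Count probes per destination port to see which services attackers favor.
port_counts = Counter(port for _, port in probes)
top_ports = port_counts.most_common(2)   # e.g. Telnet (23) dominates here
```

In a real telescope, the `probes` list would be fed continuously from packet captures on the dark IP range.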

Honeypots present the next level of interaction; since they are at least somewhat interactive, they are a much better tool for studying cyberattacks in progress, but they need to be convincing enough to keep the attacker occupied. The simplest kind are called "low interaction" and present only a weak facade that disintegrates after modest engagement; imagine a simple answering machine with prepared answers to common questions. There is a handful of low-interaction IoT honeypots out there, including Cowrie (Telnet/SSH), Dionaea (HTTP, MQTT, FTP, TFTP, UPnP), HoneyPy (CoAP, TFTP, TR-069), and TelnetIoT (Telnet) [7], [8], [9]. High-interaction honeypots, on the other hand, are perfectly faithful representations of a target system, but this can usually be achieved only by using a real system as the target. Both low- and high-interaction approaches are readily available for typical server infrastructure, where there is a limited number of extremely popular and regularly maintained services (such as Secure Shell, Telnet, Microsoft Remote Desktop, etc.). In contrast, the IoT landscape comprises thousands of different device types, with possibly dozens of exposed services and dozens of software versions in the wild [10], [11]. Such a long tail makes it extremely hard to study and mimic a useful subset of devices and services.
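The "answering machine" behavior of a low-interaction honeypot can be sketched in a few lines. The prompts and canned replies below are illustrative and not taken from any of the honeypots listed above; a real honeypot would also speak the actual protocol (Telnet option negotiation, etc.).

```python
# Canned responses to common probe inputs; anything unrecognized gets a
# generic error, which is exactly where the weak facade disintegrates.
CANNED = {
    "": "login: ",
    "root": "Password: ",
    "enable": "% Unknown command",
    "sh": "$ ",
}

log = []  # every attacker input is recorded for later analysis

def respond(line):
    """Return the canned reply for an attacker's input line, logging it."""
    line = line.strip()
    log.append(line)
    return CANNED.get(line, "command not found")

banner = respond("")        # attacker connects and sees a login prompt
reply = respond("root")     # tries the most common username
```

The captured `log` is the actual research output: it shows which credentials and commands attackers try first.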

Figure 2: Cybersecurity observation portal with live statistics based on an SSH and Telnet distributed honeynet (available live at http://cyber.ltfe.org/ ).


For example, for the last year we have been running a 50-node SSH and Telnet honeynet, which we have upgraded from low to high interaction. Although Telnet is one of the common protocols on more powerful IoT devices, we have found very few features that could be used to distinguish IoT from non-IoT attacks.

On the other hand, HTTP (in server mode) is also a very common protocol for devices such as cameras, modems, and routers. In a single-node experiment running since February 2020, we set up a simple HTTP honeypot listening on all TCP ports of a machine and capturing all probing requests. We were able to identify several IoT devices based on URL structure and keywords, using regular and IoT-specialized search engines such as Shodan.io. As a proof of concept, we tested an iterative approach, where we learned about probing requests from attackers and then learned about device responses by scraping a real device found through Shodan.io. In doing so, we refined models of several devices that are now collecting data as publicly exposed honeypots. Other researchers have gone beyond that and automated the procedure further. We believe this is a promising step toward intelligence gathering, yielding unique data that could in the future power advanced ML algorithms to detect and prevent IoT intrusions.
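The URL-keyword identification step can be sketched as pattern matching over captured request paths. The signatures below are illustrative examples of commonly probed device paths, not our actual ruleset, and a real classifier would combine many more features than the path alone.

```python
import re

# Hypothetical URL signatures: request paths that particular device
# families are commonly probed on (patterns chosen for illustration).
SIGNATURES = {
    re.compile(r"/HNAP1"): "home router (HNAP)",
    re.compile(r"/cgi-bin/viewer"): "IP camera",
    re.compile(r"/GponForm/"): "GPON modem",
}

def classify_probe(path):
    """Map a captured HTTP request path to the suspected target device."""
    for pattern, device in SIGNATURES.items():
        if pattern.search(path):
            return device
    return "unknown"

# A captured probe aimed at GPON modems is recognized by its path:
target = classify_probe("/GponForm/diag_Form?images/")
```

Paths classified as "unknown" are the interesting ones: they point at device families the honeypot does not yet mimic.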

References

  1. Alladi, Tejasvi, Vinay Chamola, Biplab Sikdar, and Kim-Kwang Raymond Choo. "Consumer IoT: Security vulnerability case studies and solutions." IEEE Consumer Electronics Magazine 9, no. 2 (2020): 17-25.
  2. Neshenko, Nataliia, Elias Bou-Harb, Jorge Crichigno, Georges Kaddoum, and Nasir Ghani. "Demystifying IoT security: An exhaustive survey on IoT vulnerabilities and a first empirical look on Internet-scale IoT exploitations." IEEE Communications Surveys & Tutorials 21, no. 3 (2019): 2702-2733.
  3. I. Brass, L. Tanczer, M. Carr, M. Elsden, and J. Blackstock. "Standardising a moving target: The development and evolution of IoT security standards." Living in the Internet of Things: Cybersecurity of the IoT - 2018, London, 2018, pp. 1-9.
  4. A. Amouri, V. T. Alaparthy, and S. D. Morgera. "Cross layer-based intrusion detection based on network behavior for IoT." 2018 IEEE 19th Wireless and Microwave Technology Conference (WAMICON), Sand Key, FL, 2018, pp. 1-4.
  5. Kaspersky. "DDoS attacks in Q3 2019." November 11, 2019. Available at https://securelist.com/ddos-report-q3-2019/94958/. Cited April 30, 2020.
  6. Flashpoint. "An After-Action Analysis of the Mirai Botnet Attacks on Dyn." October 25, 2016. Available at https://bit.ly/3bkngkM.
  7. Sethia, Vasu, and A. Jeyasekar. "Malware Capturing and Analysis using Dionaea Honeypot." 2019 International Carnahan Conference on Security Technology (ICCST), pp. 1-4. IEEE, 2019.
  8. Shrivastava, Rajesh Kumar, Bazila Bashir, and Chittaranjan Hota. "Attack detection and forensics using honeypot in IoT environment." International Conference on Distributed Computing and Internet Technology, pp. 402-409. Springer, Cham, 2019.
  9. Banerjee, Mahesh, and S. D. Samantaray. "Network Traffic Analysis Based IoT Botnet Detection Using Honeynet Data Applying Classification Techniques." International Journal of Computer Science and Information Security (IJCSIS) 17, no. 8 (2019).
  10. Pa, Yin Minn Pa, Shogo Suzuki, Katsunari Yoshioka, Tsutomu Matsumoto, Takahiro Kasama, and Christian Rossow. "IoTPOT: A novel honeypot for revealing current IoT threats." Journal of Information Processing 24, no. 3 (2016): 522-533.
  11. Luo, Tongbo, Zhaoyan Xu, Xing Jin, Yanhui Jia, and Xin Ouyang. "IoTCandyJar: Towards an intelligent-interaction honeypot for IoT devices." Black Hat (2017).

 


 

Urban Sedlar is an assistant professor and senior researcher at the Laboratory for Telecommunications, Faculty of Electrical Engineering, University of Ljubljana. His recent work focuses on cybersecurity threat assessment using large-scale honeypots. He has also been involved in several EC and national research and development projects on the topics of emergency response systems, cloud computing, and the Internet of Things.

 

Leon Štefanič Južnič is a research member of the Laboratory for Telecommunications at the Faculty of Electrical Engineering, University of Ljubljana. His main research interests are cybersecurity, cloud architectures, and data analysis. He received his B.Sc. from the University of Ljubljana in the field of telecommunications and is now working towards his M.Sc. degree.

 

Matej Kren is a research assistant at the Laboratory for Telecommunications. He has extensive expertise and experience in designing and building systems for storing and mining massive amounts of data. His main research interests include data visualization and pattern mining in massive datasets. He is involved in several research and development projects, including cybersecurity and smart metering solutions in the energy industry.

 

Matej Rabzelj is a master's degree student of Information and Communications Technology at the Faculty of Electrical Engineering, University of Ljubljana. His areas of particular interest include cybersecurity, full-stack software development, and information-technology operations, as well as the design and development of custom-built IT solutions. He holds a bachelor's degree in electronics engineering and complements his knowledge with several Cisco networking certifications.

 

Andrej Kos is a full professor at the Faculty of Electrical Engineering, University of Ljubljana, and the Head of the Laboratory for Telecommunications. He received his Ph.D. from the University of Ljubljana in the field of telecommunications. His current work centers on 5G systems and services and on applications of cyber-physical systems, including the Internet of Things.

 

Mojca Volk is an Assistant Professor and Scientific Associate at the Laboratory for Telecommunications, Faculty of Electrical Engineering, University of Ljubljana. Her main areas of work are 5G, IoT, and cybersecurity in applied areas of technology development, prototyping, and trials for security, critical infrastructures, and public protection and disaster relief (PPDR). She holds a Ph.D. in Telecommunications from the University of Ljubljana.

 

 

Challenges Monitoring the Arctic Tundra: The Distributed Arctic Observatory Cyber-Physical System

Issam Raïs, Otto Anshus, Phuong H. Ha, and John Markus Bjørndalen
May 14, 2020

 

Climate change is deteriorating terrestrial biomes, and the arctic tundra is the most challenged among them. The extent of projected warming is so extreme that the tundra ecosystems will likely transform into novel states within a few decades, potentially leading to losses of important biodiversity and of functions providing irreplaceable services to humanity.

In-situ scientific observatories [4, 3, 2, 7, 6, 1] are located in the environment they measure and observe. The corresponding data is made available to researchers. The data is pre-processed into a form suitable as input to models of the environment, and the models are then used to explore and understand the dynamics of the environment.

The data collected increasingly exhibits Big Data characteristics. How much observational data can be gathered, processed, and reported depends on the availability of critical resources, including energy, data networks, computing, and storage.

The characteristics of typical IoT devices include quite good energy efficiency, but also small batteries, few and simple sensors, and very limited network, computing, and storage resources.

For some environments, these resources are either available or not needed for the necessary observations. However, we are interested in observing environments where these resources are very limited yet needed. The challenges that arise are even more demanding because of the emerging Big Data characteristics of the data collected by the deployed IoT systems.

One approach to building a scientific observatory is to use a cyber-physical system (CPS) composed of multiple devices with a set of sensors and actuators. The CPS is deployed into an environment and used to observe its flora, fauna, atmospheric conditions, and many other ecological parameters. By increasing the number of CPS devices, increasingly larger areas of the environment can be observed and manipulated. By increasing the resources of the devices, more sophisticated functionality can be achieved, providing more and better measurements, more robust reporting of data, and more flexible and adaptable functionalities and experiments.

To increase the benefit from the devices, they are organized into an infrastructure forming a distributed system. The Distributed Arctic Observatory (DAO) is an ongoing project at the University of Tromsø (UiT), the Arctic University of Norway. The project develops a CPS of devices for the arctic tundra. The DAO system observes the tundra and reports the observations, in some cases with low delay between when measurements are taken and when they become available elsewhere.

The DAO-CPS faces an ecosystem with several hard-to-handle characteristics that have a direct impact on how the CPS is built, deployed, and used (see A, B, and C below).

A: Unique Harsh Weather

Operating on the Arctic Tundra (AT) is challenging and demanding for both humans and machines. There is not a lot of precipitation. The summers are windy, cold, and wet, and the winters are windy, cold, and snowy. For long periods (weeks to months) the sun is either always above the horizon, or not at all. There are no proper trees on the arctic tundra. In some areas, dangerous animals like polar bears roam.

For a CPS deployed on the arctic tundra (i.e., the DAO-CPS), these characteristics present several challenges:

  • Because of the low temperatures, (i) node hardware must be able to tolerate the low temperatures (average below -30°C); (ii) batteries have lower capacities and are harder to recharge.
  • Because of the rain, in combination with frozen ground not draining water away, nodes must be placed inside waterproof enclosures.
  • Because of the snow and lack of trees, (i) nodes will be on the ground and covered by several meters of snow (in some cases, sensors can be mounted on poles, but installations are often not allowed in interesting areas); (ii) already borderline radio network signal strength and quality are further reduced under snow; (iii) energy harvesting is harder to do under the snow.
  • Because of large wild animals, heavy winds, melting snow, and breaking ice, nodes will be disturbed, moved around, or damaged.

As a result of the above conditions, even if nodes are tested in the laboratory, they will experience unexpected failures after deployment.

Because of the general conditions on the arctic tundra, it is in many cases impractical and costly for humans to go there to deploy nodes and to maintain them with energy refills and repairs. One actual deployment we are participating in allows a visit by humans only once a year. Most scenarios will be even more restrictive.

Consequently, to operate within these limitations, nodes must:

  • be robust, small, lightweight, and practical to carry and deploy;
  • be energy-efficient, rely on small batteries, and have an operational lifetime of at least 12 months;
  • have multiple data network options, using multiple small antennas mounted on the outside or inside of the waterproof enclosure;
  • have a way to establish their location (GPS);
  • have sensors that can tolerate water and low temperatures.

The ideal node should also have actuators to allow for the reorientation of the sensors in case of physical reorientation of the node after deployment.

In addition to the primary observation functionalities, the software must allow for failure monitoring, for establishing connectivity between nodes and a back-haul network, and for updates over the air.

B: Very Large

The arctic tundra is very large. It includes the North Pole and the northernmost parts of Norway, Greenland, Canada, and Siberia.

Much less than 1% of the arctic tundra is observed through ground-based installations; Big Data from the arctic tundra is therefore lacking. The COAT program (coat.no) was created to, among other tasks, collect Big Data about the arctic tundra. The data is produced in several ways, including in-situ observations of the state of the flora, fauna, weather, and other conditions on the tundra.

However, present approaches rely on manual deployment and on humans going to the arctic tundra to fetch data. While this approach is needed for installations demanding careful placement and selection of measuring instrument locations, it does not scale to observing larger areas of the arctic tundra. To do in-situ observations of large areas, many more nodes need to be deployed, and the deployment methods must enable efficient and rapid installation of the nodes, including by drones and aircraft dropping the nodes onto the tundra [5].

As the number of nodes and the data collected by each node increase, a large volume of data of diverse types is produced. Individual nodes, and the CPS as a distributed system, must handle several challenges at scale.

Nodes must decide what to do with the data they collect. Data may need edge pre-processing including analytics and compression [8]. Data must also be kept safe and reported with a decided frequency to wherever it is needed. The reporting implies that nodes must establish connectivity with a back-end receiving the data. Alternatively, nodes must connect with each other to safe-keep data and possibly find a route to a node with connectivity to the back-end system.

Consequently, the need arises to probe for networks and to build a history of when networks are available and what their characteristics are. Edge processing is used to determine which network to use and when to use it. Nodes running out of storage, because of the volume of data accumulated without being able to report it, must decide what to do with that data, including whether data can be deleted and which data to delete.
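The network-history policy described above can be sketched as a simple lookup over past success rates. This is a minimal sketch: the network names, hours, and success rates are assumed for illustration, and a deployed node would update the history from its own probing attempts.

```python
# Assumed availability history: for each network, the fraction of past
# reporting attempts at a given hour of day that succeeded.
history = {
    "lora":      {6: 0.9, 12: 0.4},
    "satellite": {6: 0.2, 12: 0.8},
}

def pick_network(hour, minimum=0.5):
    """Pick the historically most reliable network for this hour.

    Returns None when no network clears the minimum success rate, in
    which case the node keeps the data stored and waits for a better slot.
    """
    best, best_rate = None, 0.0
    for net, by_hour in history.items():
        rate = by_hour.get(hour, 0.0)
        if rate > best_rate:
            best, best_rate = net, rate
    return best if best_rate >= minimum else None

choice_morning = pick_network(6)    # the short-range link has been reliable
choice_noon = pick_network(12)      # at noon, the satellite link wins
```

A `None` result is the edge-processing decision to defer reporting and conserve energy.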

C: Isolated and Hard to Reach

The number of human settlements on the arctic tundra is very low. The tundra is in practice largely unavailable to humans in time and space, implying that humans will not be around to maintain systems and replace batteries. Nodes on the arctic tundra can in principle be visited by humans all year round; in practice, however, deployments are done during summer. The rest of the year, deployed nodes are left alone, with no possible assistance (because of weather conditions, and because some nodes are meant to be buried under snow). There are also regulations limiting visits by humans and the impact of nodes on the environment. The accumulated number of nodes that can in practice be deployed over time is therefore limited.

Energy and network resources are limited on the arctic tundra. Thus, nodes must be very cautious in using energy. Back-haul networks will have varying availability, bandwidth, and latency. To increase the possibility of being able to report data and receive updates, nodes should have on-node network interfaces enabling ad-hoc connectivity with other nodes within local network reach. Consequently, nodes located on the arctic tundra have more energy, network, and maintenance challenges to handle than an IoT deployment inside a building or in a city.

When deployment and configuration are done by humans on the ground, errors happen; these can lead to inoperable nodes, or to bugs resulting in later problems. Because nodes are primarily asleep between measurements (to save energy and because the networks are limited), there is only a small window of time when nodes can be reached for remote maintenance and updates. Consequently, nodes will need to be updated several times after deployment to repair bugs, change observational parameters, and modify functionalities. Updates must be done remotely.

Nodes need to be autonomous and resilient. The most important tasks are to (1) execute observations, (2) gather the observations as data, (3) keep the observations safe, and (4) stay alive for the complete duration of the experiment (or until the next expected energy refill). Autonomy needs to be achieved both by individual units and by groups of nodes forming a distributed system.
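The four tasks above form a duty cycle: observe, store the observation safely, report it when a link is available, and sleep to stay alive. A minimal sketch, with the sensor, radio, and sleep step simulated as plain inputs, might look as follows:

```python
# Simulated duty cycle of an autonomous node. Sensor readings and radio
# availability are supplied as lists standing in for real hardware.

def duty_cycle(readings, radio_up):
    """One simulated deployment: returns (stored, reported) observations."""
    stored, reported = [], []
    for value, link in zip(readings, radio_up):
        stored.append(value)            # (1)+(2): observe and keep safe
        if link:                        # (3): report only when reachable
            reported.extend(stored)
            stored = []
        # (4): the node would sleep here to conserve energy (elided)
    return stored, reported

# Three wake-ups; the back-haul network is reachable only on the second:
stored, reported = duty_cycle([1.5, 1.7, 2.0], [False, True, False])
```

Observations taken while the link is down stay in local storage and are flushed in a batch on the next successful contact, which is exactly the safe-keeping behavior required of isolated nodes.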

Conclusion

The characteristics of the arctic tundra, and other similar challenging environments, impact the architecture, design, and implementation of a scalable CPS of autonomous nodes.

The first characteristic is the harsh weather. It impacts battery capacities, recharging characteristics, energy harvesting, packaging, mounting of nodes, and how practical it is for humans to visit the nodes.

The second characteristic is the large size of the arctic tundra. It impacts the number of nodes needed to do observations at many locations. This increases the volume of data collected, and the number of humans needed to do the deployment and maintain the nodes.

The third characteristic, remote and hard to reach areas, implies isolation of nodes for short and long periods. This increases the complexity of the data networking approaches needed to get reports from the nodes and to do remote updates of them.

The unique characteristics of the arctic tundra imply that deployed nodes will need complex software, networking, energy solutions, and packaging. Complex solutions imply both more bugs and more complicated bugs. However, even simpler architectures, designs, and implementations will, after deployment, experience unexpected failures and behavior.

To simplify developing and maintaining nodes, a platform providing the software and hardware of the nodes is needed. We call the developed platform an Observation Unit (OU). Its implementation is a run-time environment with a set of software mechanisms and policies to which ecologists add software for specific observations and controlled experiments.
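
The shape of such a run-time can be illustrated with a small sketch: ecologists register observation tasks, and a policy decides which of them actually run in a given cycle. The class name, the registration API, and the policy interface are all assumptions for illustration, not the OU's real interface.

```python
class ObservationUnit:
    """Minimal sketch of an OU-style run-time environment.

    Ecologists register observation tasks by name; a pluggable policy
    (e.g. "skip the camera when the battery is low") decides which
    tasks run in each cycle.
    """

    def __init__(self, policy=None):
        self.tasks = {}
        self.policy = policy or (lambda name: True)  # default: run everything

    def register(self, name, fn):
        """Add an observation task supplied by the ecologist."""
        self.tasks[name] = fn

    def run_cycle(self):
        """Run every task the current policy allows; return their results."""
        results = {}
        for name, fn in self.tasks.items():
            if self.policy(name):
                results[name] = fn()
        return results
```

Separating mechanisms (registration, execution) from policies (which tasks to run, when) is what lets the same platform serve different experiments without changing the node software itself.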

References

  1. The National Ecological Observatory Network (NEON). http://www.neonscience.org, March 2019.
  2. L Evans and P Bryant. LHC machine. Journal of Instrumentation, 3, 2008.
  3. W. Gressler, J. DeVries, E. Hileman, D. R. Neill, et al. LSST Telescope and site status. Ground-based and Airborne Telescopes V, 91451A, 2014.
  4. Laser Interferometer Gravitational-wave Observatory (LIGO). https://www.ligo.caltech.edu, March 2019.
  5. Issam Raïs, John Markus Bjørndalen, Phuong Hoai Ha, Ken-Arne Jensen, Lukasz Sergiusz Michalik, Håvard Mjøen, Øystein Tveito, and Otto Anshus. UAVs as a leverage to provide energy and network for cyber-physical observation units on the arctic tundra. In 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), pages 625–632. IEEE, 2019.
  6. L. M. Smith et al. The Ocean Observatories Initiative. Oceanography, 31(1), 2018.
  7. B Wang, X Zhu, C Gao, Y Bai, et al. Square Kilometre Array telescope - Precision reference frequency synchronisation via 1f-2f dissemination. Scientific reports, 5, 2015.
  8. Issam Raïs, Otto Anshus, John Markus Bjørndalen, Daniel Balouek-Thomert, and Manish Parashar. Trading data size and CNN confidence score for energy efficient CPS node communications. In CCGrid 2020: The 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing. IEEE, 2020.

 

Issam Raïs received his Ph.D. in the Algorithms and Software Architectures for Distributed and HPC Platforms (Avalon) team of the LIP laboratory at École Normale Supérieure (ENS) Lyon, France, in September 2018. His interests include energy efficiency for high-performance computing, cloud, edge, IoT, and distributed systems. Since January 2019, he has been a post-doctoral researcher at UiT The Arctic University of Norway, Tromsø, Norway, where he contributes to the DAO-CPS (Distributed Arctic Observatory CPS) project, providing scalable, robust, configurable, and energy-efficient approaches to monitoring extreme ecosystems such as the arctic tundra. (www.cs.uit.no/~issam/)

 

Otto Anshus is a Professor at the Department of Computer Science, University of Tromsø (UiT) The Arctic University of Norway, and a part-time Professor at the Department of Informatics, University of Oslo, Norway. His current primary research interests are distributed and parallel cyber-physical systems.

 

Phuong H. Ha received the Ph.D. degree from Chalmers University of Technology, Sweden. His current research interests include energy-efficient computing, parallel programming, and distributed computing systems. Currently, he is a professor at the Department of Computer Science, UiT The Arctic University of Norway. (www.cs.uit.no/~phuong)

 

John Markus Bjørndalen is a Professor at the Department of Computer Science, University of Tromsø (UiT) The Arctic University of Norway. His current research interests include concurrent programming, parallel computing, and cyber-physical systems.

 

 

 

Digitalization of COVID-19 Pandemic Management and Cyber Risk from Connected Systems

Petar Radanliev, David De Roure, Max Van Kleek
May 14, 2020

 

What makes cyber risks arising from connected systems challenging during the management of a pandemic? Assuming that a variety of cyber-physical systems are already operational, collecting, analyzing, and acting on data autonomously, what risks might arise in their application to pandemic management? In other words, given that such systems already operate autonomously, how would a pandemic monitoring app be different, or riskier?

Perhaps unsurprisingly, the answer to these questions depends on specific aspects of the design and deployment of the connected systems in question. If established security design rules are followed, and focus is placed on what is required for pandemic management, this risk will be minimized. If, however, due to time and resource pressure such as from an unfolding pandemic, security is ignored for practicality and speed, new systems could be more easily compromised, leading to the potential for later system failures at crucial times. If such systems are designed and operated in the first wave of a global pandemic such as COVID-19, then during the second and subsequent waves system failures could lead to unnecessary loss of lives. We outline the security design principles that would minimize the potential risk scenarios.

Since the COVID-19 pandemic, the focus on stringent personal data protection for preserving individual privacy has been made more complicated by the need to support public health efforts that necessitate some degree of global surveillance assisted by new digital technologies. This draws attention to Internet of Things (IoT) technologies for their ability to operate autonomously to collect, analyze, and share data about the physical environment, which makes them essential elements in the digitalization of pandemic management. The IoT represents technologies that can use sensors for detection, gather and analyze information, create meaningful insights, and act. The action usually represents a tailored product or service, or improves the efficiency of operational processes. One difference between the IoT and the Internet is that the IoT can be completely automated and autonomous; in contrast, the Internet is fundamentally designed for applications that connect people. With IoT technologies, the role of humans as actors in the network is arguably diminished. Automated and autonomous IoT technologies trigger questions on the potential cyber risk arising from the integration of artificial intelligence and machine learning in the IoT network. We review how artificial intelligence and machine learning can enable pandemic management, while ethically assessing the cyber risk from increased deregulation of data standards in IoT devices and networks.

What Is IoT Risk?

The emergence of the Internet of Things (IoT) in the late 1990s presented a new and fast-evolving technology, characterized by low cost and high value per unit, to a world that was unprepared to assess the associated risks. Since its emergence, IoT risks have been measured with traditional risk assessment methodologies and frameworks. One problem with such assessments during global pandemics is that IoT presents very different types of risks than the traditional Internet. Thus, an essential step to make sense of IoT risks is to understand IoT risk vectors [1]. One risk vector is that IoT, unlike the Internet, does not necessarily require human intervention [2]. IoT products can use connected sensors to collect and analyze data, then aggregate data or present data in various formats and trigger further actions based on data interpretation processes. The main question is, if interested parties acquire access to big data collected from individuals during a pandemic, what further actions could be triggered based on the aggregated new data, or even just from interpretations of the new data?

A second risk vector is represented by the multiple and diverse connection points used to access any given IoT ecosystem, e.g., door locks connected to home security systems or smart home hubs that collect personal and sensitive data via the devices they are connected to. Since the rapid spread of IoT devices is partially triggered by the low cost of materials, production, installation, and automated data collection and analysis, one could argue that one of the main strengths of IoT is its low cost and the relative ease of deploying the technology. However, securing IoT devices and assuring a level of maintenance on a par with other critical systems, such as is typical of the telecom industry, would require an increase in the initial cost of production and deployment, and possibly a significant increase in operational costs. This increase in cost could diminish the main competitive edge of IoT devices. Also, in complex IoT ecosystems, it can be difficult to find truly independent risks and to establish an actual correlation between pertinent risks [3]. Given the low cost of IoT devices, managing the risk of complex, coupled IoT systems had proven challenging even before the pandemic. Therefore, the focus must be on enabling data collection for pandemic risk management, while simultaneously limiting the aggregation of personal data that could be used for alternative interpretations.

Value of IoT in Pandemic Management

Despite the emerging IoT risks and the incomplete understanding of the impact created by these new risks, it is the value of IoT infrastructure in pandemic management, and the promise of optimizing existing pandemic monitoring costs, that drives governments to accept unknown risks and technological challenges. Given virtually no global IoT risk standards and policies, we anticipate further fragmentation of the IoT ecosystems, in which risk assessment models are likely to be volatile, vendor dependent, and less transparent [4]. Some of the advantages of the foreseen increase in heterogeneity consist of opportunities through competitive innovation. However, innovation in the IoT space implies important variations and unknowns concerning critical aspects such as security, adoption, and the implementation of different sorts of rights, such as the right to privacy.

These issues can be partly solved through insurance and reinsurance. There are already insurance companies that cover not only specific IoT risks at different operational levels but, more broadly, the cascading effects generated by the cyber-physical nature of IoT infrastructure, which can span vast geographical areas and interact with the physical environment and various logical functions. In terms of pandemic management, there is also value for medical business models. Some insurance companies cover the business aspects because they can distribute the risks across their portfolio in ways that allow a further decrease in average loss values. But given the speed of the COVID-19 pandemic, it is unclear how fast insurers can adapt to the new risks. Many insurers simply back away, citing concerns about significant unpredicted and unmanageable losses.

Why is the Risk of Connected Devices Difficult to Assess?

The risk from coupled and connected systems in pandemic management presents challenges in autonomous medical data collection, storing, processing, and analysis. Here, we outline some of the inherent challenges in pandemic monitoring through coupled and connected systems.

  • Difficulties in assessing cyber risk: The installation of new low-cost connected devices and sensors is, in some instances, not considered an IT or medical function. Connected devices and sensors serve a very diverse set of functions, ranging from simple automation, such as room-occupancy motion sensors for switching on the lights, to vastly more complex automation, such as thermally regulating a building or ensuring its security. Installation of such sensors can be an operational task performed by building and maintenance teams. Hence, in some instances, cyber risk from connected devices is invisible to cyber risk managers. Another example is retrofitting, where IoT solutions are implemented on top of existing legacy systems. The justification for retrofitting is to reduce the cost of implementing new technologies; since IoT evolves so fast, it is also justified as a way to reduce the cost of new IoT technologies that quickly become obsolete. But this creates security problems. Old legacy systems are based on older security programs, sometimes working with simple and often shared passwords and system accounts that are easy to breach. In this context, IoT technology, and digitalization in general, provides solutions that extend well beyond existing legacy systems to manufacturing floors, other production activities, and consumers. The logical assumption is that connected devices dealing with bio-sensed or medical data for public health during a pandemic would face similar, if not greater, difficulties. Yet the potential benefits are immense; there are many examples from around the world of how existing medical information systems can be extended and enhanced through connected devices, including automatic diagnosis[1] for conditions unrelated to COVID-19; monitoring and supervision with live tracking systems[2]; or even virtual clinics[3]. 
One could argue that the real value of digitalization in medical systems depends largely on a strong correlation between cybersecurity and innovation. Global data governance is struggling to keep up with the fast evolution, especially in IoT cyber risks from non-traditional data, such as facial recognition data, facilities access data, and industrial control system data. However, such systems are already in place and operational in many countries. Currently, such systems are used for security reasons, and it is hard to see how it would be any different if the same system is used for pandemic management. The main reasonable concern seems to be the new information that could emerge from analyzing medical data collected for pandemic management, with deep learning and artificial intelligence algorithms.
  • Difficulties in assessing cyber risk from feeding medical data to deep learning and artificial intelligence algorithms: The difficulties in identifying the risk from deep learning and artificial intelligence (AI) emerge from the limitations of such assessments on existing non-medical systems. The proposed digitalization in medical systems can also be seen as a dynamic automated predictive cognitive system supported by real-time intelligence for cyber medical analytics. This requires dynamic analytics of cyber-attack threat event frequencies to predict the cyber risk magnitudes of medical data loss, and/or alternative interpretations of the medical data. Despite the requirements for a predictive model built upon mathematical and statistical methods, there are currently no mathematical models that enable the quantitative assessment of cyber risk in any sector, including the medical sector. In our recent publication on this topic [5], we discovered that the lack of probabilistic data leads to qualitative cyber risk assessment approaches, where the outcome represents a speculative assumption [6]. Emerging quantitative models are effectively designed with ranges and confidence intervals based on expert opinions, and not probabilistic data [7]. However, quantitative risk impact estimation is needed for making decisions on topics such as estimating cybersecurity, cyber risk, and cyber insurance. Without a dynamic, real-time probabilistic risk data and cyber risk analytics enhanced with AI, these estimations can be outdated and imprecise. Hence, the impact of cyber risk on digital medical systems could be costly, and cybersecurity not necessarily effective. The value of cyber risk real-time data in non-medical systems can be explained in economic terms, where the level of cybersecurity is based on economic value. In medical systems, economic value is not the primary concern; instead, the focus is on the patient’s safety and privacy. 
Therefore, even in times of pandemic, the integration of AI into the communications network and the relevant cybersecurity technology must evolve in an ethical way that humans can understand, while maintaining the maximum trust and privacy of users. The coordination of cyber protection and AI analytics of personal data from connected devices must be reliable, to prevent abuse by insider threats, organized crime, terror organizations, or state-sponsored aggressors. Given the lack of assessment of data privacy in existing medical systems for pandemic management, we can draw a comparison with the private sector. Data risk has been encouraging the private sector to take steps to improve the management of confidential and proprietary information (i.e., customer or financial data), intellectual property, and PII (Personally Identifiable Information). Companies that are interested in obtaining new revenue streams from data have pursued innovative and cost-effective ways to protect such data. Therefore, the digitalization of medical systems needs to be designed similarly to what the private sector has done in recent times. One additional level of security that such a new system should anticipate is the analysis of threat event frequency with a dynamic and self-adapting AI. This would empower the design of a cognition-engine mechanism for predicting the data-loss magnitude through the control, analysis, distribution, and management of probabilistic data. While this would enhance the security of digital medical systems, it would also help future efforts by governments and the private sector to improve the management of confidential and proprietary information.
  • Cyber risk standards on data risk from connected devices: Despite many efforts, there are no uniform standards governing data risk from connected devices, and it is unlikely that such standards can be developed in time for COVID-19 pandemic management. There are discussions on this topic, and we can certainly expect such standards in the future; however, that could be years ahead, while the current IoT operating model is based on shared responsibility. Given the rapid growth of the IoT shared ecosystem and the lack of guidance, businesses have already started building and applying their own standards and protocols. The digitalization of medical systems for pandemic management could follow existing standards and protocols developed by the business community. The potential negative implication of such developments is that they could hinder the value of autonomous data sharing by connected devices. One of the main strengths of IoT in global pandemic management is the ability to aggregate data from different sources and in different formats, and to connect to various networks using different protocols. The lack of common, unified, and global standards governing the IoT creates significant barriers to the interoperability of connected devices in sharing medical data.
  • Security paradigms for connected devices in pandemic management: Traditionally, cybersecurity has been separated into three paradigms: secure, vigilant, and resilient. The secure paradigm is focused on preventing certain risks from occurring. In terms of digital pandemic management, prevention must include multi-layered protection in the form of coupled systems, which increase protection from invisible weaknesses such as different interpretations of collected data. The vigilance paradigm refers to securing a digital pandemic management system in a manner that can resist attacks over time. With IoT technologies continuously evolving, it is not enough to simply have a security strategy for digital pandemic management; since the threats are changing, the strategy should also be changing. Thirdly, resiliency refers to how quickly the recovery process can restore normal operation of the system. This capability is essential when designing digital systems for pandemic management, given that the first two paradigms, security and vigilance, cannot guarantee that some form of failure will not occur.
  • Risk management of connected devices in pandemic management: With connected devices speeding into the medical system during pandemic management, there is an urgent need for digital security officers who would oversee the increasingly connected critical infrastructure, data production processes, and smart data analytics. When considering the risk assessment approaches discussed so far, it becomes clear that the increase in medical data connectivity implies higher risks. Reducing open connections reduces cyber risk, but the IoT is a solution based on multiple connections. Although the open connections are anonymized, they still increase the attack surface. This creates a conflict between value and risk. However, there are methods to mitigate this situation, such as building security into the design process [8], known as Secure by Design principles[4], or considering how security is handled by the device across its lifetime and possible varying contexts of use, known as Security Ergonomics by Design principles [9]. Security considerations need to be applied early in the design and development process. Securing IoT devices early in design and development requires an increase in the cost of manufacturing (e.g. secure by default), production (e.g. secure by design), installation (e.g. secure by resilience), and maintenance (e.g. security updates). Such costs, unless applied through some form of unified and global standards, can hinder the competitiveness of the IoT in a digital medical system at the national level. Many non-medical industries require that expensive risk engineering assessments be done throughout the entire lifecycle of the product. The digitalization of pandemic management needs to follow a similar approach. Since time is of the essence during pandemics, one solution would be to only use connected devices that are secure by default, design, and resilience, and enabled for security updates. 
This would, however, limit the value of connected devices in pandemic management. Hence, the solutions could be found in the two-level system, where data from less secured devices is anonymized, and only used for a limited function.
  • The simple solution for data from less secured connected devices: To safeguard against the ‘invisible’ IoT cyber risks, an integrated approach to cybersecurity is required in the digitalization of pandemic management. Decentralized security fails to assess how IoT connects operations in unexpected ways. The scale and scope of the IoT data collected are often underestimated, as are the risks of such data being accessed by third parties, which is often the case. In contrast, an integrated approach to security (e.g. ISA 3000) would assure that most IoT risks could be prevented before they even occur. Another relatively simple solution is to integrate operational capabilities with multi-layered cyber risk management. This could take the form of loosely coupled pandemic management systems; such an approach can reduce the risk of widespread failure triggered by a single device. As for the risk from retrofitting: with the rise of new technologies and new threats, legacy systems will soon become impossible to upgrade with the newest security against new threats. It seems that the innovation process will resolve this category of cyber risk, because it will simplify compliance and address specific operational technology needs. In terms of resilience, a simple solution would be to create a fail-safe system; without one, malicious artificial intelligence acting through IoT devices could create a denial of service by itself[5], and a failure in one element could disrupt the entire pandemic management system.
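
The discussion above notes that emerging quantitative cyber risk models rest on expert-supplied ranges rather than probabilistic data. A minimal, FAIR-style sketch of such a range-based estimate looks like the following; the ranges, units, and function name are purely illustrative assumptions, not a validated model.

```python
import random

def annualized_loss(freq_range, magnitude_range, trials=10000, seed=1):
    """Estimate expected annualized cyber loss by Monte Carlo sampling.

    Threat event frequency (events/year) and loss magnitude (cost/event)
    are sampled uniformly from expert-supplied ranges, reflecting the
    text's point that such models are built on ranges and confidence
    intervals based on expert opinion, not on probabilistic data.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    total = 0.0
    for _ in range(trials):
        freq = rng.uniform(*freq_range)            # events per year
        magnitude = rng.uniform(*magnitude_range)  # cost per event
        total += freq * magnitude
    return total / trials                          # expected annual loss
```

With independent uniform ranges the result converges to the product of the two midpoints, which makes clear how sensitive the "quantitative" answer is to the expert-chosen ranges themselves.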

Final Remarks

The COVID-19 pandemic has resulted in different countries developing digital surveillance approaches for pandemic management, some of which operate autonomously to collect, analyze, and share data. Currently, the digitalization of COVID-19 pandemic management is occurring at a fast rate across the globe, with the integration of automated and autonomous connected devices feeding real-time data to artificial intelligence algorithms. While the digitalization of medical systems presents strong value for pandemic management, efforts need to be focused on solutions that deliver value for pandemic management, not on designing systems that expose personal data to cyber risk and could make things worse. There are safe and ethical ways of using artificial intelligence algorithms and data from connected devices for pandemic management. But the design of digital systems for pandemic management needs to anticipate that connected devices create new, unpredictable, and often invisible cyber risks that are currently unregulated and frequently ignored. Considering that these are new technologies evolving at a very fast rate, almost every new design of a digital pandemic monitoring system can be classified as high risk. Many such digitalization approaches seem to be treading in uncertain waters (e.g. China), taking risks without fully understanding the impact, and operating with a rather hopeful strategy. There is also a lack of appropriate cyber insurance policies to transfer risk, and a lack of standards and regulations to govern these new medical systems. Given these conditions, the separation of risks becomes urgent. With the appropriate separation of systems according to potential risks, medical professionals could at least keep parts of the pandemic management system operational during cyber-attacks or losses of data privacy.
As things stand at present, many digital pandemic management systems are driven by monitoring opportunities and treatment potential. This leads to a scenario where developers of such systems can ignore threats, continue to chase opportunities, and hope for the best; but that won't stop hackers from exploiting their own opportunities and interfering along the way.

 

[1] https://bit.ly/3csJi6i

[2] https://coronaboard.kr/en/

[3] https://bit.ly/3fE1xYN

[4] https://bit.ly/3dHnqVk

[5] https://bit.ly/3fAPpaK

References

  1. Nurse, Jason RC., Radanliev, Petar., Creese, Sadie., and De Roure, David, “Realities of Risk: ‘If you can’t understand it, you can’t properly assess it!’: The reality of assessing security risks in Internet of Things systems,” in Institution of Engineering and Technology, Living in the Internet of Things: Cybersecurity of the IoT - 2018, 2018, pp. 1–9.
  2. De Roure, D., Page, K. R., Radanliev, P., and Van Kleek, M., “Complex coupling in cyber-physical systems and the threats of fake data,” in Living in the Internet of Things (IoT 2019), 2019, p. 11 (6 pp.).
  3. Radanliev, Petar., Roure, David De., Page, Kevin., Nurse, Jason R.C., Montalvo, Rafael Mantilla., Santos, Omar., Maddox, La’Treall., and Burnap, Pete, “Cyber risk at the edge: current and future trends on cyber risk analytics and artificial intelligence in the industrial internet of things and industry 4.0 supply chains,” Cybersecurity, Springer Nat., 2020.
  4. Nicolescu, Razvan., Huth, Michael., Radanliev, Petar., and De Roure, David, “Mapping the values of IoT,” J. Inf. Technol., vol. 33, no. 4, pp. 345–360, Mar. 2018.
  5. Radanliev, Petar., De Roure, David., Nicolescu, Razvan., Huth, Michael., Montalvo, Rafael Mantilla., Cannady, Stacy., and Burnap, Peter, “Future developments in cyber risk assessment for the internet of things,” Comput. Ind., vol. 102, pp. 14–22, Nov. 2018.
  6. Radanliev, Petar., De Roure, David., Cannady, Stacy., Mantilla Montalvo, Rafael., Nicolescu, Razvan., and Huth, Michael, “Economic impact of IoT cyber risk - analysing past and present to predict the future developments in IoT risk analysis and IoT cyber insurance,” in Institution of Engineering and Technology, Living in the Internet of Things: Cybersecurity of the IoT - 2018, 2018, no. CP740, p. 3 (9 pp.).
  7. Radanliev, Petar., De Roure, David., Nurse, Jason R.C., Nicolescu, Razvan., Huth, Michael., Cannady, Stacy., and Mantilla Montalvo, Rafael, “Integration of Cyber Security Frameworks, Models and Approaches for Building Design Principles for the Internet-of-things in Industry 4.0,” in Institution of Engineering and Technology, Living in the Internet of Things: Cybersecurity of the IoT, 2018, p. 41 (6 pp.).
  8. Radanliev, Petar., De Roure, David., Nurse, Jason R. C., Mantilla Montalvo, Rafael., Cannady, Stacy., Santos, Omar., Maddox, La’Treall., … Maple, Carsten, “Future developments in standardisation of cyber risk in the Internet of Things (IoT),” SN Appl. Sci., vol. 2, no. 2, pp. 1–16, Feb. 2020.
  9. Craggs, Barnaby., and Rashid, Awais, “Smart Cyber-Physical Systems: Beyond Usable Security to Security Ergonomics by Design,” in 2017 IEEE/ACM 3rd International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS), 2017, pp. 22–25.

 

Petar Radanliev is a Post-Doctoral Research Associate at the University of Oxford. He obtained his Ph.D. at the University of Wales in 2014 and continued with postdoctoral research at Imperial College London, Massachusetts Institute of Technology, and the University of Oxford. His current research focuses on artificial intelligence, the Internet of Things, cyber risk analytics, and the value/impact of cyber risk.

 

David De Roure is a Professor of e-Research at the University of Oxford. He obtained his Ph.D. at the University of Southampton in 1990 and went on to hold the post of Professor of Computer Science, later directing the UK Digital Social Research programme. His current research focuses on social machines, the Internet of Things, and cybersecurity. He is a Fellow of the British Computer Society and the Institute of Mathematics and its Applications.

 

Max Van Kleek is an Associate Professor of Human-Computer Interaction in the Department of Computer Science at the University of Oxford. He works in the Software Engineering Programme, delivering course material related to interaction design, the design of secure systems, and usability. His current project is designing new Web architectures to help people regain control of information held about them "in the cloud", from fitness data to medical records. He received his Ph.D. from MIT CSAIL in 2011.

 

 

How oneM2M Unlocks the Potential of IoT to Enable Digital Transformation

Ken Figueredo
May 14, 2020

 

The topic of digital transformation has risen up the agenda for many organizations over the past few years. The sudden shock of Covid-19 has accelerated its importance. Many person-to-person activities have migrated online. Consumers are ordering food supplies and banking remotely. Academia, businesses, and governments are transitioning their operational activities into online formats. Moreover, their workforces are becoming distributed.

Their capital resources were already distributed. Examples include environmental sensors, road-transport sensors, vehicle fleets, and vending machines. Faced with the need to speed up change, many organizations can learn from the IoT market. Some of the key lessons relate to data management, cross-silo interoperability, and standardization.

IoT Key Enablers

IoT technologies contribute significant volumes of data and complement the personal data that many organizations use to optimize their business operations. Machine-type devices significantly outnumber personal connected devices. Many include actuation capabilities to enable remote control. This creates many new opportunities to implement closed-loop solutions. By automating and speeding up decision-making cycles, organizations can quickly evaluate and make investment decisions on IoT deployments and digital transformation initiatives.

A second critical capability in enabling digital transformation is the management and sharing of data across operational boundaries. An example is the process of scheduling nursing visits to home-bound patients. Adjustments are often required in practice due to last-minute changes in staffing availability, disruptions to travel plans, and requirements to control time on-duty for individual members of staff. One way to create a more patient-friendly and responsive scheduling process is to overlay the home-nursing roster with real-time traffic and local weather data. This relies on cooperation across local government departments and private-sector partners that may supply traffic data or manage on-demand taxi services.

IoT Standardization

The third key enabler is IoT standardization through a framework for deploying and connecting huge numbers of IoT devices and applications. Many IoT standards exist to address specific technology challenges at different layers of the IoT stack. These range from connectivity technologies (e.g. 3GPP, Bluetooth, Wi-Fi, etc.) to transport protocols (e.g. CoAP, HTTPS, MQTT, etc.) and service capabilities (e.g. OCF, LWM2M, etc.). However, oneM2M is different. It is a middleware standard for interoperable and scalable IoT solutions that reuses existing and established industry standards. oneM2M defines a horizontal middleware layer between a lower tier of devices and communications technologies and an upper tier of IoT applications. Its API abstraction capabilities assist the developer community by masking the complexities and permutations of different IoT technologies. oneM2M standardization allows IoT service providers to deploy devices and sensors from multiple suppliers, avoiding the risk of lock-in to a single vendor or a proprietary system.
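To make the API abstraction concrete, the sketch below assembles (without sending) a oneM2M CREATE primitive in the standard's HTTP binding, posting a new data sample as a <contentInstance> resource. The CSE address, originator credential, and container path are invented for illustration; the headers and `m2m:cin` payload shape follow the oneM2M HTTP binding:

```python
import json

# Hypothetical CSE endpoint and application credential, for illustration only.
CSE_BASE = "http://cse.example.com:8080/~/in-cse/in-name"
ORIGINATOR = "C-smartbin-001"  # assumed Application Entity identifier

def build_create_content_instance(container_path, value, request_id):
    """Build the pieces of a oneM2M CREATE request (HTTP binding) that
    posts a new <contentInstance> under an existing <container>.

    Returns (url, headers, body) without performing any network I/O."""
    url = f"{CSE_BASE}/{container_path}"
    headers = {
        # ty=4 marks the resource type being created as <contentInstance>
        "Content-Type": "application/json;ty=4",
        "X-M2M-Origin": ORIGINATOR,  # identifies the requesting entity
        "X-M2M-RI": request_id,      # unique request identifier
    }
    body = json.dumps({"m2m:cin": {"con": value}})
    return url, headers, body

url, headers, body = build_create_content_instance(
    "smart-bins/bin-42/fill-level", "87", "req-0001")
```

Because every compliant device and platform exchanges primitives of this uniform shape, a developer's application code stays the same regardless of which vendor supplied the sensor underneath.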

In France, the city of Bordeaux's smart city platform illustrates how these features enable digital transformation[1]. The city’s oneM2M IoT platform is a single system for data collection and remote management of waste bins, streetlights, and other forms of smart city assets. This improves the way that the municipality manages its remote assets and workforce resources. There is an added benefit in managing procurement and technology risks. The city can source smart waste bins in multiple procurement cycles from different vendors and stage its deployments. This works because the different waste bins all adhere to the oneM2M standard for communication with the city’s platform. In another example, South Korea's second-largest city, Busan, uses a oneM2M platform to deliver over 25 smart city services[2].

IoT Interoperability Drives Digital Innovation

The recent availability of IoT data from many different sources is driving innovation in areas where organizations see benefits in collaboration. A novel illustration of this is in smart city and intelligent-transport settings. One of oneM2M’s founding partners, ATIS[3], launched an initiative with US Ignite, a manager of public-private partnerships, to work with North American municipalities on the concept of a smart city data exchange. This initiative aims to facilitate data sharing from IoT devices and other sources.

In the UK and South Korea, local organizations are pioneering innovative ideas around the concept of a data marketplace. One example concerns a two-year trial with four English counties and several private sector partners. This group used a common IoT platform to manage about 300 types of city and transport network data[4]. Through the oneTRANSPORT data marketplace, each entity could control how data was shared with third-party application developers and analytics specialists. One city improved safety on its ring road by monitoring and adjusting traffic signals at a critical junction. It also used roadside display units to guide city center visitors to the car parks with available capacity. A different city used IoT data to encourage visitors to use its park-and-ride system.

Another UK agency, Transport for West Midlands (TfWM), provides other examples of the innovative potential from data sharing. TfWM uses a oneM2M platform to share city data across a footprint of eight local municipalities and a group of private-sector partners. Controlled data sharing enables application and business-model innovation related to on-demand mobility services and the testing of connected and autonomous vehicles (CAVs).

Collaboration is Key for Success

An important factor in driving the adoption of open-standard solutions is collaboration between organizations and industry bodies. One example of such an initiative is the liaison between the Industrial Internet Consortium (IIC)[5] and oneM2M[6]. The two organizations recently issued a joint White Paper[7] on “Advancing the Industrial Internet of Things”. It highlights that IIC and oneM2M are advancing digital transformation in the industrial IoT (IIoT) sector through closely aligned frameworks and future standardization goals.

A separate IIC publication entitled “Best Practices for Developing and Deploying IIoT solutions”[8] provides practical evidence of standardization for interoperability. In this publication, two member companies describe the process of linking their respective IoT platforms to create an open-standard, interoperable IoT platforms (OSP) testbed. This testbed served to validate several device and application interoperability test cases. One test case demonstrated how an IoT application connected to one platform could successfully access data from a sensor connected to the second platform.
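The cross-platform test case can be sketched in oneM2M terms: the application addresses its own platform, which retargets the request to the platform hosting the sensor. All identifiers and paths below are hypothetical, and nothing is sent on the wire; the sketch only builds a RETRIEVE request in the HTTP binding, using the "la" (latest) virtual resource to fetch the newest sample:

```python
# Hypothetical endpoints for the two linked platforms in the test case.
LOCAL_CSE = "http://platform-a.example.com:8080"      # where the app registered
REMOTE_PATH = "cse-b/cse-name/sensors/temp/la"        # resource hosted on platform B

def build_cross_platform_retrieve(request_id):
    """Build a oneM2M RETRIEVE request (HTTP binding) addressed to the
    local platform, which forwards it to the remote platform hosting
    the target sensor resource. Returns (url, headers) without sending."""
    # "/~/" introduces a service-provider-relative resource address,
    # letting platform A recognize that the target lives on platform B.
    url = f"{LOCAL_CSE}/~/{REMOTE_PATH}"
    headers = {
        "X-M2M-Origin": "C-monitor-app",  # assumed app credential on platform A
        "X-M2M-RI": request_id,           # unique request identifier
        "Accept": "application/json",
    }
    return url, headers
```

The application never needs to know platform B's network address or vendor: the addressing scheme and the retargeting between platforms are what the standard contributes.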

As the market for IoT solutions and technologies continues to evolve, new requirements and new candidates for standardization will emerge. Since its launch in 2012 and the first release of its standards in 2015, oneM2M and its members have continued to evolve the standard; this work will shortly culminate in the publication of Release 4, which covers topics such as Fog/Edge computing, 3GPP interworking, and semantic reasoning for IoT solutions. The continued evolution of oneM2M standards brings together the collaborative efforts of more than 200 members, all of whom are committed to creating a global IoT standard.

 

[1] https://bit.ly/3biTJIc

[2] https://bit.ly/3dzzYhn

[3] https://www.atis.org/scde/

[4] https://bit.ly/3dw43y6

[5] https://www.iiconsortium.org/

[6] https://www.onem2m.org/

[7] https://bit.ly/2xO3UXH

[8] https://bit.ly/2SUyyG8


 

Ken Figueredo has a background spanning business, management and technology consultancy. He currently focuses on the Industrial Internet, intelligent transport systems, and the smart city sector. Figueredo is an expert in the field of IoT innovation and the formation of partner ecosystems to deliver interoperable IoT applications. He actively contributes to leading-edge IoT industry bodies including the oneM2M Global Standards initiative and the Industrial Internet Consortium (IIC).