In IoT We Trust?

Jeff Voas, Rick Kuhn, and Phil Laplante
July 13, 2018

 

DISCLAIMER: Products may be identified in this document, but such identification does not imply recommendation by the US National Institute of Standards and Technology or the US Government, nor that the products identified are necessarily the best available for the purpose.

 

No! IoT is an acronym composed of three letters: (I), (o), and (T). But the Internet (I) has never been strongly associated with the term ‘trust.’

Identity theft, false information, a breakdown in personal privacy, and so many other negative features of (I) cause some people to avoid the Internet altogether. But for most people, avoidance is not an option.  The other letter of importance here is the one associated with ‘things’ (T).  Similar trust concerns occur for (T). Why? Because the ‘things’ carry their own baggage of trust concerns and the interactions between ‘things’ exacerbate these concerns.  In short, and from a trust standpoint, the IoT is an untrustworthy backbone with untrustworthy things attached -- a perfect storm [1].

In this short article, we’ll review an abbreviated list of trust challenges that we foresee as increasing adoption transforms the IoT into a ubiquitous technology, just as the Internet has become. These challenges are in no specific order and are by no means a complete set.

To begin, what do we mean by ‘trust?’ We will not use a formal definition, but rather a variation on the classical definition of reliability. Hence, we consider trust to be the probability that the intended behavior and the actual behavior are equivalent, given a fixed context and environment. For example, we can expect a trusted set of behaviors for a car operating on the roadway (we cannot, however, expect such a set of behaviors for a car operating in a lake). This informal definition works well for both ‘things’ and systems of ‘things’.
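Put another way, trust here is just reliability measured against intent. The sketch below estimates it empirically for a single ‘thing’: run many trials in a fixed context and count how often the actual output matches the intended output. Both functions are hypothetical stand-ins for a real device and its specification.

    # Minimal sketch: estimating trust as the empirical probability that
    # actual behavior matches intended behavior in a fixed environment.
    # 'thing_under_test' and 'intended_behavior' are hypothetical stand-ins.
    import random

    def intended_behavior(stimulus):
        # The specified response for a given stimulus.
        return stimulus * 2

    def thing_under_test(stimulus):
        # The actual device/software, which occasionally misbehaves.
        return stimulus * 2 if random.random() > 0.01 else 0

    def estimate_trust(trials=10_000):
        matches = 0
        for _ in range(trials):
            stimulus = random.randint(0, 100)
            if thing_under_test(stimulus) == intended_behavior(stimulus):
                matches += 1
        return matches / trials  # ~0.99 for this example

    print(f"Estimated trust: {estimate_trust():.3f}")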

While subtle, we have just listed three key applications of trust: (1) trust in a ‘thing’, (2) trust in a system of ‘things’, and (3) trust that we are dealing with an appropriate environment and context. This brings us to another key ingredient related to trust -- the ability for the set of behaviors to be bounded. Bounding does not apply to (1), but it does apply to (2) and (3). For (2), NIST offered a Special Publication (NIST SP 800-183), titled ‘Networks of Things’ [5], that discusses one way to bound a specific system of ‘things’ such that various metrics and measures of security and reliability can be assessed. The approach in NIST SP 800-183 is simple: define classes of ‘things’, essentially as building blocks. For trust application (3), bounding an environment is a difficult but, unfortunately, necessary challenge. Rarely will it make sense to claim that “this system of ‘things’ works perfectly for any environment, any context, and any anomalous event that the system can experience”.
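To make the building-block idea concrete, here is a loose sketch of how a bounded network of ‘things’ might be modeled in code. It covers three of the five primitives SP 800-183 describes (sensor, aggregator, decision trigger; communication channels and eUtilities are elided), and the fields and logic are our own simplifications, not the publication's formal definitions.

    # Loose illustration of SP 800-183-style building blocks; the class
    # names follow the publication's primitives, but the details here
    # are simplified for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Sensor:
        name: str
        reading: float = 0.0

    @dataclass
    class Aggregator:
        # Combines raw sensor readings into a single derived value.
        sensors: list = field(default_factory=list)
        def aggregate(self) -> float:
            return sum(s.reading for s in self.sensors) / max(len(self.sensors), 1)

    @dataclass
    class DecisionTrigger:
        # Fires an action when the aggregated value crosses a threshold.
        threshold: float
        def decide(self, value: float) -> bool:
            return value > self.threshold

    temp_sensors = [Sensor("t1", 21.0), Sensor("t2", 24.5)]
    agg = Aggregator(temp_sensors)
    trigger = DecisionTrigger(threshold=22.0)
    print(trigger.decide(agg.aggregate()))  # True: average 22.75 exceeds 22.0

Bounding the system then amounts to enumerating exactly which instances of these primitives participate, so that security and reliability metrics have a well-defined scope.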

So, let’s look at a handful of trust-related issues that the reader may not yet have considered.

  • There is no universally accepted definition for IoT: does any noun preceded by ‘smart’ (e.g. ‘smart toy’ or ‘smart house’ or ‘smart city’) define IoT? Clearly not, but you would not know it from the way many people talk about IoT. Assessing and measuring trust for an entity that is not defined is problematic.
  • Heterogeneity is a trust issue: Getting ‘things’ to connect and interoperate with other ‘things’ from other vendors is non-trivial. Heterogeneity of products and services from thousands of different vendors is terrific from a competition standpoint, but connecting a diverse set of components is rarely easy.
  • Getting the intended behavior is a trust issue: Even if we have no difficulty gluing ‘things’ to ‘things’, this only solves the architecture problem. It does not guarantee that these composed ‘things’ will exhibit the intended composite behavior. Hardware and software components may or may not work well when integrated, depending on whether they were the right components to select, whether they had the proper security and reliability built in, and whether the architecture/specification was correct. (Note that there are subtle differences among integration, interoperability, compatibility, and composability.) Consider the following scenario: A hacked refrigerator's software interacts with an app on a person’s smartphone, installing a security exploit that can be propagated to other applications with which the phone interacts. The user enters their automobile, and their phone interacts with the vehicle’s operator interface software, which downloads the new software, including the defect. Unfortunately, the software defect causes an interaction problem (e.g., a deadlock) that leads to a failure in the software-controlled safety system during a crash, leading to injury. The potential for this chain of events demonstrates why interoperability is so challenging when it comes to identifying and mitigating risks and assigning blame when something goes wrong [4].
  • Certification of a product (not process or people) is a ‘grand challenge’ for just about anything, whether hardware, software, or system; and IoT is no exception [2]. IoT certification is nearly impossible unless the environment of a system of ‘things’ is bounded. Also, consider the cost to certify a ‘thing’ relative to the value of that ‘thing’. Is certification an option for IoT-enabled systems? If so, who does it? Who certifies the certifier? What criteria are used? What does it cost? Are the benefits worth it with respect to time-to-market and cost-to-vet? What is the lifespan of a ‘thing’? Also, you must consider composability. Are the other ‘things’ in the system certified? If not, why not? Even if all ‘things’ are certified, that does not mean they will interoperate well (correctly) in a given environment. Certifying ‘things’ as standalone entities does not solve the fundamental problem of trusting a system that resides in a specific environment. And what about third-party limited warranties – do they still apply when components are interconnected?
  • IoT testing is a concern: You can test ‘things’, systems of ‘things’, and subsystems of ‘things’ [3]. You can test them in artificial environments or operational environments. In operational environments, systems of ‘things’ may be bound-able only for mere instants in time, so testing is problematic. Testing systems at rest is easier than testing systems that are reorganizing themselves in real time and at massive scale. If you are testing a system of ‘things’ that relies on Internet connectivity, realize that the Internet at any given time is different than the Internet even a millisecond later. This property of constantly changing configurations may, of course, also hold for relatively small networks of things isolated from the full Internet. Furthermore, one of the biggest problems for the reliability of a system of ‘things’ during operational usage is data anomalies propagating through the system. This eventuality suggests that some form of off-nominal or fault injection testing should be considered, which is expensive (a small sketch of the idea follows this list).
  • IoT quality, security, reliability, etc. are unfortunately left mostly as consumer concerns, since regulated systems (e.g., commercial aircraft) already have rules for how to specify and test. Will IoT ever reach that level of maturity? And which ‘ility’ is most important: reliability, security, privacy, performance, resilience? Other trust considerations here include: (1) did you use a faulty or subpar architecture? (2) are you able to mitigate defective third-party ‘things’ over which you have little or no control? (3) were the highest-quality ‘things’ used, and if so, did you over-engineer and spend too much; or were the lowest-quality ‘things’ employed simply to save money?
  • External leased data is a concern: it may come from sensors owned and controlled by vendors, and this data may be received by your system of ‘things’ at a time of the vendors’ choosing and with an integrity level of their choosing. Will SLAs protect you? Are you able to mitigate faulty interfaces and communication protocols? Are you confident in your wireless service providers? And what about data tampering and data integrity? How secure is your data from accidental problems or malicious tampering, delay, or theft?
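On the fault injection point above, one inexpensive way to explore off-nominal behavior is to corrupt a fraction of the readings in a data stream and watch whether the anomalies propagate downstream. A minimal sketch, with a naive moving-average consumer standing in for a real system of ‘things’ (both functions are hypothetical):

    # Minimal fault-injection sketch: corrupt a fraction of sensor readings
    # and check whether the downstream consumer (a moving average here)
    # stays within its expected operating envelope.
    import random

    def inject_faults(readings, fault_rate=0.05):
        out = []
        for r in readings:
            if random.random() < fault_rate:
                out.append(random.choice([float("nan"), -1e9, 1e9]))  # anomaly
            else:
                out.append(r)
        return out

    def moving_average(readings, window=5):
        return [sum(readings[i:i + window]) / window
                for i in range(len(readings) - window + 1)]

    clean = [20.0 + random.random() for _ in range(100)]
    faulty = inject_faults(clean)
    # A robust system would filter the injected anomalies; this naive
    # consumer does not, so out-of-envelope averages reveal the propagation.
    suspect = [a for a in moving_average(faulty) if not (15.0 < a < 25.0)]
    print(f"{len(suspect)} corrupted windows leaked through")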

In summary, there are many IoT trust issues, of which we mentioned only a handful. But a failure on even one of these issues can destroy overall trust in the system. We are working on a more comprehensive listing to be incorporated in a new NIST publication later in 2018.

References

[1] I. Bojanova and J. Voas, “Trusting the Internet of Things,” guest editors’ introduction, IEEE IT Professional, October 2017.

[2] J. Voas and P. Laplante, “IoT’s Certification Quagmire,” IEEE Computer, April 2018.

[3] J. Voas, R. Kuhn, and P. Laplante, “Testing IoT-Based Systems,” 12th IEEE International Symposium on Service-Oriented System Engineering, Bamberg, Germany, March 26-29, 2018.

[4] J. Voas and P. Laplante, “The IoT Blame Game,” IEEE Computer, vol. 50, no. 6, 2017.

[5] J. Voas, “Networks of ‘Things’,” NIST Special Publication 800-183, 2016.


 

Jeff Voas is currently a computer scientist at the US National Institute of Standards and Technology (NIST) in Gaithersburg, MD. Before joining NIST, Voas was an entrepreneur and co-founded Cigital. Voas received his M.S. and Ph.D. in computer science from the College of William and Mary (1986 and 1990, respectively). Voas’s current research interests include the Internet of Things (IoT) and blockchain. He is a Fellow of the IEEE. Contact him at jeff.voas@nist.gov.

 

Rick Kuhn is a computer scientist in the Computer Security Division at NIST. His research interests include combinatorial methods in software testing/assurance, access control, and empirical studies of software failure. Kuhn received his MS in computer science from the University of Maryland College Park. He is a Fellow of IEEE. Contact him at kuhn@nist.gov.

 

Phil Laplante is a professor of software and systems engineering at Penn State and a visiting computer scientist at NIST. Laplante received his BS, M.Eng. and PhD (computer science) from Stevens Institute of Technology. His research interests include the Internet of Things, requirements engineering and safety-critical systems. He is a Fellow of the IEEE. Contact him at plaplante@psu.edu.

 

 


OrganiCity: Using IoT to Enable Experimentation-as-a-Service in Smart Cities

Georgios Mylonas, Dimitrios Amaxilatis, Luis Diez, Lasse Vestergaard, Etienne Gandrille and Martin Brynskov
July 13, 2018

 

 

Smart cities are evolving into an exciting application test field for a number of scientific disciplines. As such, there is growing interest from service providers and stakeholders in integrating additional dimensions into current smart city implementations, in order to form the smart city solutions of the future.

However, new approaches are needed to facilitate such attempts, considering the complexity of city ecosystems and the fact that cities are multi-sectorial by nature. Much is still to be desired, both in utilizing existing infrastructure as an experimentation testbed and in tools and systems that simplify and/or automate the use of that infrastructure. In essence, the idea is that stakeholders such as companies and SMEs should benefit greatly from leveraging recent technological developments, while local governments and citizens can try out solutions to challenges present in cities, creating opportunities for improving the quality of life.

OrganiCity [1-3] places users and communities, such as citizens, activists, decision-makers, researchers, and entrepreneurs, right at the heart of urban development. What we call Experimentation-as-a-Service (EaaS) is now available through OrganiCity, made possible via a set of tools that allow users to develop, deploy, and evaluate smart city solutions inside a multi-city IoT deployment in Europe. This is the first time such an integrated toolset has been offered to the community, placing co-creation at its core. By “co-creation”, we refer to the process of involving citizens and stakeholders in designing and developing smart city solutions, using local know-how to respond to existing challenges in modern cities and their communities that have not been solved by more conventional top-down approaches. OrganiCity federates IoT data and resources from three cities: Aarhus (Denmark), London (UK) and Santander (Spain). The platform is built on the principles of the Open & Agile Smart Cities (OASC) initiative [4] and unifies existing infrastructures comprising thousands of IoT devices, heterogeneous urban data streams, and services. At the same time, citizens are actively involved in the project by shaping application use-cases and contributing with data.

Regarding other experimentation testbeds, SmartSantander [5] built probably the largest single-city-scale IoT research facility, pioneering smart city experimentation, services, and applications. A number of related projects, such as SynchroniCity [6] and CPaaS.io [7], deal with more generic smart city scenarios or focus more on open data aspects.

Overview of OrganiCity’s Architecture and Available Toolset

Running publicly since October 2016, OrganiCity offers a set of tools that aim to provide multiple opportunities for co-creation within the project, either providing readily available functionality or serving as a foundation for implementing novel solutions on top of them. These tools are intended for both advanced users (developers) and less technically savvy users who wish to create and validate new services and applications in the real world.

We divide these tools into three general categories. In the first category are tools for the development of new applications, via which users can implement mobile applications with limited functionality; e.g., experimenters can present assets within the facility on a map or contribute data annotations. Experimenters can use smartphone sensors and invite OrganiCitizens to gather information during their everyday commute, or create virtual sensors over the data federated within OrganiCity. In the second category, additional tools allow the integration of new hardware devices into OrganiCity. Finally, we have developed tools for targeted purposes within experiments, such as allowing experimenters to subscribe to OrganiCity data streams from web applications, administer communities of users participating in experiments, and implement incentivization mechanisms (a sketch of such a subscription appears after the figures below). Figures 1-3 showcase aspects of the toolset.

Figure 1: Screenshot from OrganiCity’s Urban Data Observatory, the main web interface for interacting with OrganiCity’s data.

Figure 2: Screenshot from OrganiCity’s Experimentation Management tool, designating active experimentation areas and other options.

Figure 3: OrganiCity “Sensing on the Go” app, available on Google Play.
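To give a flavor of the third category, the snippet below sketches how a web application might fetch federated assets over a REST interface. The base URL, entity type, and response fields shown are simplified placeholders, not the exact OrganiCity API.

    # Hedged sketch of polling OrganiCity-style federated assets over
    # HTTP. The URL, entity type, and fields are illustrative placeholders.
    import requests

    BASE = "https://example.organicity.eu/v1/assets"   # placeholder URL

    def fetch_assets(asset_type="urn:oc:entitytype:airquality", token="..."):
        resp = requests.get(
            BASE,
            params={"type": asset_type},
            headers={"Authorization": f"Bearer {token}"},  # OAuth-style auth
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    for asset in fetch_assets():
        print(asset.get("id"), asset.get("value"))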

Experiments through OrganiCity, and the Future of EaaS

OrganiCity has made its infrastructure available via open calls for experimentation, offering funding and mentoring for teams interested in implementing their ideas using OrganiCity’s infrastructure and toolset. Experimenters were invited to submit their ideas based on a 6-month implementation schedule. These open calls were supported by total funding of 1.8 million euros.

During these open calls, a total of 43 experiments were approved for funding and were implemented in the three cities of the project (Santander in Spain, London in the UK, and Aarhus in Denmark), as well as in 10 other cities in Europe and Latin America, where experimenters utilized additional IoT infrastructure, e.g., mobile devices. So far, the feedback from these teams suggests that the EaaS approach can produce tangible results as a means of prototyping new smart city applications, especially when there is a focus on involving citizens in the process, and that it is both feasible and reasonable in certain cases to use the OrganiCity model in this application domain. As the project nears its completion (July 2018), we will continue with the evaluation and presentation of its results to the research community.

Acknowledgment

This work was supported by the OrganiCity project funded by the European Union, under grant agreement No. 645198 of the Horizon 2020 research and innovation program.

References

[1] OrganiCity website, http://organicity.eu
[2] V. Gutiérrez, D. Amaxilatis, G. Mylonas and L. Muñoz, "Empowering Citizens Toward the Co-Creation of Sustainable Cities," in IEEE Internet of Things Journal, vol. 5, no. 2, pp. 668-676, April 2018.
[3] V. Gutiérrez, E. Theodoridis, G. Mylonas, et al., “Co-Creating the Cities of the Future,” MDPI Sensors, vol. 16, no. 11, p. 1971, 2016.
[4] Open & Agile Smart Cities (OASC) website, http://oascities.org
[5] SmartSantander project website, http://smartsantander.eu
[6] SynchroniCity project website, http://synchronicity-iot.eu/
[7] CPaaS.io – City Platform as a Service, https://cpaas.bfh.ch/

 


 

Georgios Mylonas is a senior researcher at Computer Technology Institute and Press “Diophantus”, Patras, Greece. He received his Ph.D. from the University of Patras. His research interests include IoT, distributed and pervasive computing and smart cities. He has been involved in the AEOLUS, WISEBED, SmartSantander and OrganiCity projects, focusing on algorithmic and software issues of wireless sensor networks. He is currently the coordinator of the Green Awareness in Action (GAIA) H2020 project.

 

Dimitrios Amaxilatis received his DEng in Computer Engineering and MSc in Computer Science from the University of Patras, and currently pursues his Ph.D. at the same university. Since 2010, he has been with the Computer Technology Institute in Patras, Greece. He was also a member of the founding teams of two technological startups in the fields of microprocessor programming (codebender.cc) and home automation (Sensorflare), as well as the Patras hackerspace (P-Space). His research interests include distributed algorithms, wireless sensor networks, home and building automation, smart city, and participatory sensing applications.

 

Luis Diez received his M.Sc. and Ph.D. from University of Cantabria, Spain, in 2013 and 2018 respectively. He has been involved in different international and industrial research projects. He is currently a Senior Researcher at the Network Planning and Mobile Communications Laboratory, University of Cantabria. His research interests are resource management in wireless heterogeneous networks and IoT service provisioning.

 

Lasse Steenbock Vestergaard received his M.Sc. in Information Science from University of Aarhus, Denmark, and is currently pursuing a Ph.D. focusing on IoT prototyping within Smart Cities at the Alexandra Institute. He has been involved in several national and international research and innovation projects including OUTSMART (FP7), SmartSantander (FP7), CityPulse (FP7), OrganiCity (Horizon2020) and SynchroniCity (Horizon2020). His research interests are creative coding, API usability, and IoT prototyping.

 

Etienne Gandrille received his Ph.D. from the University of Grenoble. As a researcher at Commissariat à l'Énergie atomique et aux Énergies alternatives (CEA), his research interests focus on the Internet of Things, Smart Cities, and Open Data. He has been involved in several international research and innovation projects including SocIoTal (FP7), OrganiCity (Horizon2020), BigClouT (Horizon2020) and Brain-IoT (Horizon2020).

 

Martin Brynskov is an Associate Professor in Interaction Technologies at Aarhus University, Denmark; chair of Open & Agile Smart Cities; research director of AU Smart Cities; director of the Digital Design Lab; coordinator of SynchroniCity and OrganiCity; vice-chair of the UN ITU-T Focus Group on Data Processing and Management to support IoT and Smart Cities & Communities; and chair of the Danish Standards Committee on Smart Cities.

 

 

Divide-and-Conquer: How Edge Processing Will Open a New Door for IoT Applications

Sergio Flores
July 13, 2018

 

It comes as no surprise to anyone in the IoT industry that we are reaching a point where continued innovation will not be possible without making fundamental changes to the approach we use to process information.

According to dozens of studies from independent consulting firms, the number of IoT-connected devices is growing at an unprecedented rate. Indeed, as predicted by Gartner, Inc. in 2017, the number of connected things will reach 20.4 billion by 2020 [1], with the majority of their use driven regionally by Greater China, North America and Western Europe.

 

Table 1: IoT Units Installed Base by Category (Millions of Units)

Category                       2016       2017       2018       2020
Consumer                    3,963.0    5,244.3    7,036.3   12,863.0
Business: Cross-Industry    1,102.1    1,501.0    2,132.6    4,381.4
Business: Vertical-Specific 1,316.6    1,635.4    2,027.7    3,171.0
Grand Total                 6,381.8    8,380.6   11,196.6   20,415.4

Source: Gartner (January 2017)

 

Putting the situation into full perspective, since the ongoing growth of connected devices is accompanied by substantial improvements in technology, the overall volume of data that will need to be processed by cloud systems in the upcoming years is also projected to grow considerably. To take an example, in the recently booming autonomous car industry where activities like vehicle operation and in-vehicle content are essential to delivering value to users, it is expected that cars will generate 1TB of data per day [2] which will, in many cases, require real-time processing and immediate feedback to the user. The same trend can be seen in the smart home surveillance industry where the latest devices are already adopting very high-resolution 4K image sensors with the purpose of both delivering better image quality to users and enabling software to perform advanced computer vision tasks with greater accuracy. Will we be able to send all this information to the cloud and get it processed on a real-time basis? Probably not.
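Some back-of-the-envelope arithmetic makes the scale concrete: moving 1 TB per day off a single vehicle requires roughly 93 Mbit/s of sustained uplink, before any protocol overhead or burstiness is considered. A quick check:

    # Rough sustained-bandwidth estimate for 1 TB/day of vehicle data
    # (decimal units; no protocol overhead or burstiness considered).
    bytes_per_day = 1e12                      # 1 TB
    bits_per_day = bytes_per_day * 8
    seconds_per_day = 24 * 60 * 60            # 86,400 s
    mbit_per_s = bits_per_day / seconds_per_day / 1e6
    print(f"Sustained uplink needed: {mbit_per_s:.1f} Mbit/s")  # ~92.6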

The biggest challenge for the IoT industry, then, comes from the fact that the evolution of sensor technology and hardware does not match the speed of improvements in widely available data transportation technology. In fact, as Botta [3] highlights, over the last 20 years processor power has increased by a factor of 10^15, while data bandwidth capacity has only increased by a factor of 10^4. This not only restricts the IoT applications that can be developed but also poses risks to those technologies where real-time feedback is a requirement – imagine an autonomous car that takes seconds to make a maneuver. Other concerns arise from the fact that, in the face of new data protection regulations and vigorous enforcement of data privacy, it might not be in the interest of IoT product manufacturers to send even higher volumes of data directly to the cloud. Indeed, a recent survey from Gemalto, focused on confirming that consumers lack confidence in IoT device security, found that the most common concern, shared by about 65% of IoT consumers, is a hacker controlling their IoT devices. Nevertheless, it would be inappropriate to blindly blame the cloud computing model for the challenges facing the expansion of IoT.

Looking back, cloud computing has been a crucial component in allowing the IoT industry to take hold and rapidly evolve ever since the idea of connecting devices to the internet took off two decades ago. Owing to the high availability of affordable cloud infrastructure, IoT devices in general have become more accessible to the public, thanks to the reduction in processing power needed on the device and the simplification of the tasks it handles – in many cases, merely uploading data to the cloud. Higher volumes of data, together with the heavyweight processing resources available in these cloud infrastructures, have opened up possibilities for developing ever more complex data analytics tools and broader compatibility across hardware and software vendors. Ideal or not, this has led most IoT solutions to be based on highly monolithic backend cloud solutions and strongly centralized processing and storage of data – something that has worked well in most cases, so far.

However, the previously mentioned limitations urgently call for newer and better ideas on how to reduce the distance between the source of data and the place where it is processed. Edge computing, a concept motivated by the idea of decentralizing cloud computing to deal with the current data explosion and network traffic challenges, offers a path to take the IoT industry to the next level: the data processing load is divided into smaller chunks that can be processed closer to the data sources, opening up hundreds of new possibilities for IoT applications requiring low latencies. Moreover, since adopting this concept also reduces the need to stream high volumes of raw sensor data to the cloud, it can be expected to have a positive impact on both consumer and industrial IoT use cases where reducing connectivity costs and increasing data security are critical. In practice, IoT applications that explicitly require low latency, such as smart home cameras, are already implementing this approach by offloading heavy processing tasks like object/face detection directly to the edge, consequently reducing the time needed to notify users when something relevant is happening. Similar use cases can be seen from more privacy-conscious manufacturers, who are implementing mechanisms that preprocess raw video directly in the camera and blur all faces present in the video so that it can be safely uploaded to the cloud.
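As a concrete illustration of that last use case, the sketch below blurs detected faces in a frame before it ever leaves the device, using OpenCV's bundled Haar cascade detector. A production camera would likely use a faster, hardware-accelerated detector, but the privacy-preserving structure is the same.

    # Illustrative edge-side preprocessing: blur faces in a frame before
    # uploading it. Uses OpenCV's bundled Haar cascade face detector.
    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def blur_faces(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame  # safer to upload: faces are no longer recognizable

    # Example: process one frame from the camera (device index 0).
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("frame_to_upload.jpg", blur_faces(frame))
    cap.release()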

Overall, by stepping away from a centralized cloud approach and redirecting towards a distributed computational load approach, other additional advantages can also be expected:

  • Increased data privacy in edge device applications: for example, by reducing the amount of private information that IoT wearables send to the cloud in the form of fitness and heart monitoring data, or by guaranteeing that electricity or water usage patterns from smart homes are not at risk of being exposed to potential parties interested in knowing if there is someone at home or not.
  • Energy consumption savings in edge devices: by allowing the internal hardware to process data without the need to incur the cost of high energy-consuming communication modules, and therefore simplifying tasks for products relying on batteries – e.g. smart outdoor cameras.
  • Location-aware data processing on edge devices: by enabling location-aware devices to make data processing decisions based on location and therefore distributing data processing loads across other dimensions.

Despite the numerous benefits that a well-established edge computing concept might bring to the IoT industry, the reality is that the potential benefits of decentralizing data processing might not be achievable without first resolving several unknowns that still need to be researched, such as the appropriate distribution of infrastructure resources and the management of virtual machines and containers under an edge-type model. In order to distribute resources and data processing across different infrastructures or network nodes, it is mandatory to develop standards that specify how the various computing components should collaborate and allocate resources even when different infrastructure suppliers are involved. In particular, when it comes to the management of virtual machines across decentralized IoT infrastructures, work in the area of fog computing will become extremely important, since it will lead the discussion of how resources can best be distributed. Likewise, as seen in existing research [3], to implement edge computing successfully, situations in which virtual machines might want to delegate specific tasks entirely to other network nodes (because of a lack of resources or location-based decisions) will require VMs to understand the limits of their own containers and choose the right delegation strategies accordingly. Finally, once these standards and related challenges are resolved, tough decisions will have to be made for already established IoT applications as to how best to break down their currently deployed monolithic backend solutions and decompose centralized data processing into smaller blocks on the edge.
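A first approximation of that delegation decision can be expressed as a simple policy: a node offloads a task when its own spare capacity is insufficient, picking the closest capable neighbor. The sketch below is purely illustrative; real fog orchestrators weigh many more factors (trust, pricing, mobility, data locality).

    # Toy delegation policy for an edge node deciding whether to run a
    # task locally or hand it to a neighboring node. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        free_cpu: float      # fraction of CPU currently idle (0..1)
        latency_ms: float    # network round-trip to this node

    def choose_executor(task_cpu, local, neighbors):
        if local.free_cpu >= task_cpu:
            return local  # enough local headroom: avoid network cost
        # Otherwise pick the capable neighbor with the lowest latency.
        capable = [n for n in neighbors if n.free_cpu >= task_cpu]
        return min(capable, key=lambda n: n.latency_ms) if capable else local

    me = Node("camera-gw", free_cpu=0.10, latency_ms=0.0)
    peers = [Node("street-cabinet", 0.60, 4.0), Node("cloud", 0.99, 45.0)]
    print(choose_executor(task_cpu=0.35, local=me, neighbors=peers).name)
    # -> street-cabinet: it can host the task with far lower latency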

Although it is true that there are still multiple issues that need consideration before we start building more solutions based on edge computing, it is clear that with the most recent substantial improvements in SoC, CPU, and DSP [4] technology, performing more complex data processing tasks at previously unseen efficiencies right at the edge device will become commonplace for many more IoT devices and will give rise to an entirely new era of IoT solutions. This will undoubtedly be a great opportunity to dive deep into researching distributed computing paradigms and to be part of a new, game-changing industry.

References

[1] Gartner, Inc., "Gartner Says 8.4 Billion Connected "Things" Will Be in Use in 2017, Up 31 Percent From 2016," 7 February 2017. [Online]. Available: https://www.gartner.com/newsroom/id/3598917. [Accessed 5 July 2018].
[2] V. Madel, "Connected Vehicles and IoT Technology: Are You Ready?," Samsung, 2018. [Online]. Available: https://insights.samsung.com/2017/10/26/connected-vehicles-and-iot-technology-are-you-ready/. [Accessed 17 May 2018].
[3] A. Botta, "Integration of Cloud computing and Internet of Things: A Survey," Future Generation Computer Systems, vol. 56, pp. 684-700, 2016.
[4] R. Roman, J. Lopez, M. Mambo, "Mobile edge computing, Fog et al.: A survey and analysis of security threats and challenges," Future Generation Computer Systems, vol. 78, pp. 680-698, 2018.
[5] Qualcomm, "We are making on-device AI ubiquitous," 2017. [Online]. Available: https://www.qualcomm.com/news/onq/2017/08/16/we-are-making-device-ai-ubiquitous. [Accessed 17 May 2018].

 


 

Sergio Flores received his degree in Electrical and Computer Engineering from Seoul National University in South Korea, one of the most prestigious universities in the country. He joined Samsung Electronics in 2014 as an integrated circuit (IC) hardware engineer, where he gained the critical hardware-level experience needed to begin his career in IoT product management. Later that year, he was invited to join Samsung’s IoT R&D team, where he worked as a technical product manager contributing to and leading two important Smart Home and Drone projects. He is currently an IoT Product Manager at Smartfrog, a $32M-funded IoT start-up headquartered in Berlin that offers a home security solution under a software-as-a-service (SaaS) model.

 

 

Rentable Internet of Things Infrastructure for Sensing as a Service (S2aaS)

Charith Perera
July 13, 2018

 

The Sensing as a Service (S2aaS) model [1][2] is inspired by traditional Everything as a Service (XaaS) approaches [3]. It aims to better utilize existing Internet of Things (IoT) infrastructure. The S2aaS vision is to create ‘rentable infrastructure’ where interested parties can gather IoT data by paying a fee to the infrastructure owners.

The S2aaS model primarily utilizes existing IoT infrastructure that was deployed to achieve a primary objective. For example:

  • a shop may deploy a security camera system in order to provide security for its premises (the primary objective). However, such cameras (or the data they capture) can be reutilized (or reanalyzed) to understand consumer patterns (e.g., analyzing demographics such as the age and gender of people who pass by).
  • a garbage bin may be fitted with sensors in order to monitor and track garbage levels and to support resource management (e.g., truck allocation, recycling facility demand monitoring). The same sensing infrastructure can also be reutilized to understand the crowd on a given day (e.g., based on what they throw away).

Let us consider the following scenario, as illustrated in Figure 1. There is a game in the stadium on the weekend. A marketing company, BestBrands, wants to understand the attending crowds better in order to develop promotional campaigns specifically targeting the spectators (its market segment). It may therefore be interested in collecting data such as demographics (age ranges, gender, sentiments, etc.), movement, and buying behaviors. Through a broker, BestBrands aims to rent the infrastructure over a certain period of time (the game day) so it can gather the data needed to understand the crowd better. BestBrands may be interested in gathering a variety of data from the streets, and different sensors may be used to gather and infer different types of knowledge: video cameras (demographics); motion sensors (number counting, crowd movement identification); environmental sensors such as temperature, wind, and humidity (identifying influencing factors, buying behaviors, etc.).

Figure 1: Cloud Initiated Sensing as a Service.
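In code, BestBrands' interaction with the broker might look like the sketch below. The endpoint, payload fields, and pricing model are all invented for illustration; no real broker API is implied.

    # Hypothetical S2aaS broker interaction: request sensing capacity
    # around the stadium for game day. All endpoints and fields are
    # invented placeholders for illustration.
    import requests

    BROKER = "https://broker.example.com/api/rentals"   # placeholder

    rental_request = {
        "tenant": "BestBrands",
        "area": {"lat": 43.4609, "lon": -3.8105, "radius_m": 800},
        "window": {"start": "2018-07-21T12:00Z", "end": "2018-07-21T22:00Z"},
        "capabilities": ["camera.demographics", "motion.count",
                         "environment.temperature"],
        "max_price_eur": 500.0,
    }

    resp = requests.post(BROKER, json=rental_request, timeout=10)
    resp.raise_for_status()
    print("Rental id:", resp.json().get("rental_id"))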

Major Research Challenges

There are many research challenges that need to be addressed in order to realize this vision. Optimal orchestration of IoT resources to meet users’ requirements is one of the major challenges. Such orchestration should respect user preferences while also managing the overall efficiency of the network. In the above context, BestBrands may be interested in gathering data either in real time (e.g., to enrich its promotions in real time) or in a deferred manner (e.g., to enrich future promotional campaigns). The orchestration needs to be performed accordingly to support the two types of sensing requirements. To support real-time sensing as a service, orchestration will be required to bring more computational nodes together in order to process data at a higher rate and reduce latency. Due to the higher resource consumption (both computation and network), BestBrands will be required to pay a higher price. Knowledge engineering techniques (e.g., semantic web technologies) can be used to enable the optimal IoT resource orchestration process in conjunction with AI planning techniques [4].

One of the major challenges in edge computing is to reduce network communication and latency. Knowledge engineering techniques can be used to enrich edge nodes with intelligence (knowledge) so they can make decisions by themselves, reducing communication with the cloud. Orchestration also requires discovering IoT resources (e.g., computational nodes, services, sensing capabilities) efficiently in order to develop an optimal plan at runtime. Knowledge engineering techniques are also useful for performing ad-hoc resource discovery.

In the above use case, the orchestration is triggered via a cloud broker, where BestBrands makes its initial request. However, another type of scenario can occur, in which the request is initiated by one of the edge nodes. Let us consider the scenario presented in Figure 2.

Bob is visiting a tourist attraction, and he is interested in using his augmented reality (AR) device (mobile phone, glasses, etc.) to enrich his experience. He wants a rich experience, so he would like to rent nearby IoT infrastructure to support it. His AR device would discover the nearby infrastructure to share the computational load (computation offloading), so Bob’s own AR device can reduce its energy consumption. As a result, Bob can have a longer experience. Bob’s AR device will orchestrate the different computational tasks to different nodes (e.g., downloading and processing maps, weather information, audio narration, translation). Such distribution of tasks will reduce latency and improve Bob’s experience. Bob is happy to pay for this rich experience.

On the other hand, Alice is a university student with a limited budget. She is less concerned about the experience, but she needs her mobile phone’s battery to last until she returns to the hotel. Based on her priorities, the orchestration that Alice’s AR device needs to perform would be significantly different from Bob’s. Alice may pay less than Bob, but her experience may not be as rich as Bob’s (e.g., latency, feature limitations).

Figure 2: Edge Initiated Sensing as a Service.
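One way to capture the difference between Bob's and Alice's orchestrations is a weighted cost over candidate plans, where the weights encode each user's priorities. A toy sketch with invented plans and weights:

    # Toy preference-weighted plan selection. Bob prioritizes experience
    # (low latency); Alice prioritizes battery life and price. All plan
    # numbers below are invented for illustration.
    plans = [
        # (name, latency_ms, battery_drain_fraction, price_eur)
        ("offload-everything", 20, 0.10, 4.00),
        ("offload-maps-only",  60, 0.40, 1.50),
        ("all-local",         120, 0.90, 0.00),
    ]

    def cost(plan, w_latency, w_battery, w_price):
        _, latency, battery, price = plan
        # Lower is better for every term; scale terms to comparable ranges.
        return w_latency * latency + w_battery * battery * 100 + w_price * price * 10

    bob   = dict(w_latency=1.0, w_battery=0.1, w_price=0.1)
    alice = dict(w_latency=0.1, w_battery=1.0, w_price=2.0)

    print("Bob:",   min(plans, key=lambda p: cost(p, **bob))[0])    # offload-everything
    print("Alice:", min(plans, key=lambda p: cost(p, **alice))[0])  # offload-maps-only

The same infrastructure serves both users; only the weights, and hence the chosen orchestration plan, differ.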

In this scenario, the request is initiated by Alice’s and Bob’s AR devices (edge devices). As in the previous scenario, orchestration may need to consider contextual information. Candidate compute nodes may not only have different computational and sensing capabilities; they may also already hold other relevant resources. For example, the garbage bin may already have the map in its local cache (which both Alice and Bob need). Therefore, it is much more efficient to assign map processing to the garbage bin node. Similarly, there are many considerations that the orchestration algorithms need to take into account (in addition to user preferences). Knowledge engineering techniques (interoperability, semantics) can play a significant role in edge orchestration activities. Even though service composition for the ubiquitous domain is well researched (though mostly in simulations), existing approaches all assume that nodes and services are inseparable and static [5].

In contrast, one of the main assumptions in S2aaS is that infrastructure and associated resources are rentable and that services are separable from nodes. This means that the assignment of services to rented compute nodes happens dynamically. Such separability allows orchestration to be performed in a much more fine-grained and optimal manner. However, it also makes discovery and orchestration algorithms much more complex (due to the increased possibilities) than typical service composition. Therefore, new algorithms will be required to tackle this challenge efficiently.
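A greedy baseline for this dynamic service-to-node assignment might look like the following sketch. A real orchestrator would optimize jointly (the decoupling makes this a bin-packing-like problem), but the greedy version illustrates the enlarged search space that separability creates; all service and node names are invented.

    # Greedy baseline for assigning separable services onto rented nodes,
    # respecting per-node capacity. Illustrative only; an optimal
    # orchestrator would solve the assignment jointly.
    services = {"map-render": 0.5, "translation": 0.3, "narration": 0.2}
    nodes = {"garbage-bin": 0.6, "lamp-post": 0.5, "ar-device": 0.4}

    def greedy_assign(services, nodes):
        free = dict(nodes)
        plan = {}
        # Place the heaviest services first onto the node with most room.
        for svc, need in sorted(services.items(), key=lambda kv: -kv[1]):
            best = max(free, key=free.get)
            if free[best] >= need:
                free[best] -= need
                plan[svc] = best
            else:
                plan[svc] = None  # no capacity: renegotiate or drop
        return plan

    print(greedy_assign(services, nodes))
    # e.g. {'map-render': 'garbage-bin', 'translation': 'lamp-post',
    #       'narration': 'ar-device'}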

In addition to the rentable infrastructure already deployed across cities, we envision that some service providers may deploy purpose-built devices (e.g., drones augmented with rentable infrastructure) in high-demand areas. It is also interesting to explore how such services can be commissioned in real-world scenarios.

References

[1] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, “Sensing as a service model for smart cities supported by Internet of Things,” Eur. Trans. Telecommun., vol. 25, no. 1, pp. 81–93, 2014.
[2] C. Perera, Sensing as a Service for Internet of Things: A Roadmap. Leanpub, 2017.
[3] P. Banerjee et al., “Everything as a Service: Powering the New Information Economy,” Computer, vol. 44, no. 3, pp. 36-43, Mar. 2011.
[4] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Pearson, 2016.
[5] N. Chen, N. Cardozo, and S. Clarke, “Goal-Driven Service Composition in Mobile and Pervasive Computing,” IEEE Trans. Serv. Comput., vol. 11, no. 1, pp. 49–62, 2018.

 


 

Charith Perera is a Research Associate at Newcastle University, UK. He received his BSc (Hons) in Computer Science from Staffordshire University, UK, his MBA in Business Administration from the University of Wales, Cardiff, UK, and his Ph.D. in Computer Science from The Australian National University, Canberra, Australia. Previously, he worked at the Information Engineering Laboratory, ICT Centre, CSIRO. His research interests are the Internet of Things, Sensing as a Service, Privacy, Middleware Platforms, and Sensing Infrastructure. He is a member of both the IEEE and the ACM. Contact him at www.charithperera.net or charith.perera@ieee.org.