IoT and 5G Convergence

Ahmed Banafa
July 22, 2021

 

The convergence of 5G and the Internet of Things (IoT) is the natural next step for two advanced technologies built to make users' lives easier, more convenient, and more productive. But before discussing how they will converge, we need to understand each technology on its own.

Simply defined, 5G is the next-generation cellular network. Compared to 4G, the current standard, which offers upload speeds of 7 to 17 Mbps and download speeds of 12 to 36 Mbps, 5G transmission speeds may be as high as 20 Gbps. Latency will also drop to roughly 10% of 4G's, and the number of devices that can be connected scales up significantly, which motivated the convergence with IoT [1].

The Internet of Things (IoT) is an ecosystem of ever-increasing complexity: a universe of connected things providing key physical data, with further processing of that data in the cloud to deliver business insights. It presents a huge opportunity for players in all businesses and industries, and many companies are organizing themselves to focus on IoT and the connectivity of their future products and services. IoT can be better understood through its four components: sensors, networks, cloud/AI, and applications, as shown in Fig. 1 [2,3,9].

Figure 1: Components of IoT.

When the two technologies combine, 5G will affect every component of IoT, directly or indirectly: sensors will have more bandwidth to report their readings, the network will deliver more information faster, real-time data will become a reality for cloud and AI, and applications will gain features and options thanks to the wide bandwidth 5G provides.

Benefits of Using 5G in IoT

Higher transmission speed: with transmission speeds that can reach 15 to 20 Gbps, we can access data, files, and programs on remote applications much faster. By increasing use of the cloud and making devices depend less on internal memory, it won't be necessary to install numerous processors on a device, because computing can be done in the cloud. This will increase the longevity of sensors and open the door for more types of sensors with different types of data, including high-definition images and real-time motion, to list a few [4].

More connected devices: another 5G impact on IoT is the increased number of devices that can be connected to the network. All connected devices can communicate with each other in real time and exchange information. For example, smart homes will have hundreds of devices connected in every possible way to make our lives more convenient and enjoyable, with smart appliances, energy, security, and entertainment devices. In industrial plants, we are talking about thousands of connected devices streamlining the manufacturing process and providing safety and security. Add to that the concept of building a smart city, which will become possible and manageable on a large scale [4].

Lower latency: in simple terms, latency is the time that passes between an order given to a smart device and the moment the action occurs. Thanks to 5G, this time will be one tenth of what it was with 4G. For example, lower latency allows greater use of sensors in industrial plants: control of machinery, control of logistics, and remote transport all become possible. As another example, lower latency lets healthcare professionals intervene in surgical operations from remote locations with the help of precision instrumentation that can be managed remotely [4].

Challenges Facing 5G and IoT Convergence

Operating across multiple spectrum bands: 5G will not replace all existing cellular technologies anytime soon; it will be an option alongside what we have now, and new hardware is needed to take full advantage of 5G's power. IoT's second component, networks, will have more options and can operate across a wide spectrum of frequencies as needed, instead of being limited to a few bands [5].

A gradual upgrade from 4G to 5G: the plan is to replace 4G gradually, building on the infrastructure available now, and this must be done in multiple phases and on multiple levels: software, hardware, and access points. This requires big investments from both users and businesses. Different parts of the country will replace 4G on different timelines, which will create challenges for services built on 5G. In addition, users' ability and willingness to upgrade to a 5G-compatible device is still a big unknown; considerable incentives and education will be needed to convince individuals and businesses to make the move [5].

Data interoperability: this is an issue on the IoT side. As the industry evolves, the need for a standard model to perform common IoT backend tasks, such as processing, storage, and firmware updates, is becoming more pressing. In that sought-after model, different IoT solutions would work with common backend services, guaranteeing levels of interoperability, portability, and manageability that are almost impossible to achieve with the current generation of IoT solutions. Creating that model will not be easy by any stretch of the imagination: many hurdles face the standardization and implementation of IoT solutions, and the model must overcome all of them, with interoperability among the biggest [6].

Establishing 5G business models: the bottom line is a major motivation for starting, investing in, and operating any business; without sound and solid business models for 5G-IoT convergence, we will have another bubble. Such a model must satisfy the requirements of all kinds of e-commerce: vertical markets, horizontal markets, and consumer markets. But this category is always subject to regulatory and legal scrutiny [6].

Examples of Applications of 5G in IoT

Automotive: one of the primary use cases of 5G is the connected car, with enhanced vehicular communications services that include both direct communication (vehicle to vehicle, vehicle to pedestrian, and vehicle to infrastructure) and network-facilitated communication for autonomous driving. Additional supported use cases will focus on vehicle convenience and safety, including intent sharing, path planning, coordinated driving, and real-time local updates. This brings us to edge computing, a promising derivative of cloud computing in which computing, decision-making, and action-taking happen on IoT devices, and only relevant data is pushed to the cloud. These devices, called edge nodes, can be deployed anywhere with a network connection: on a factory floor, on top of a power pole, alongside a railway track, in a vehicle, or on an oil rig. Any device with computing, storage, and network connectivity can be an edge node; examples include industrial controllers, switches, routers, embedded servers, and video surveillance cameras. 5G will make communications between edge devices and the cloud a breeze [5,7].
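
To make the edge-node pattern above concrete, here is a minimal Python sketch of local decision-making on an edge device: the node keeps a rolling window of readings and pushes only statistically unusual events to the cloud. The sensor driver and uplink are stand-ins (a real deployment would use an actual sensor API and a transport such as MQTT over a 5G modem), and the window size and threshold are illustrative assumptions.

```python
import random
import statistics
import time

WINDOW = 60             # rolling window of recent readings kept locally
THRESHOLD_SIGMA = 3.0   # push only readings that deviate strongly

def read_sensor():
    # Stand-in for a real sensor driver; returns a simulated temperature.
    return random.gauss(72.0, 1.5)

def push_to_cloud(payload):
    # Stand-in for the uplink (e.g., MQTT or HTTPS over a 5G modem).
    print("uplink:", payload)

window = []
for _ in range(600):    # bounded loop, for the sketch's sake
    value = read_sensor()
    window = (window + [value])[-WINDOW:]
    if len(window) >= 10:
        mean = statistics.mean(window)
        stdev = statistics.stdev(window) or 1e-9
        # Local decision-making: only anomalous readings leave the edge node.
        if abs(value - mean) > THRESHOLD_SIGMA * stdev:
            push_to_cloud({"event": "anomaly", "value": round(value, 2)})
    time.sleep(0.01)
```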

Industrial: the Industrial Internet of Things (IIoT) is a network of physical objects, systems, platforms, and applications that contain embedded technology to communicate and share intelligence with each other, the external environment, and people. Adoption of the IIoT is being enabled by the improved availability and affordability of sensors, processors, and other technologies that facilitate the capture of, and access to, real-time information. 5G will not only offer a more reliable network but will also deliver an extremely secure network for industrial IoT by integrating security into the core network architecture. Industrial facilities will be among the major users of private 5G networks [5,8].

Healthcare: the requirement for real-time networks will be met by 5G, which will significantly transform the healthcare industry. Use cases include live transmission of high-definition surgery videos that can be monitored remotely. Telemedicine with real-time, high-bandwidth links will become a reality, and IoT sensors will become sophisticated enough to provide in-depth medical information about patients on the fly; for example, a doctor can examine and diagnose patients while they are in the ambulance on the way to the hospital, saving minutes that can mean the difference between life and death. The 2020 pandemic taught us the importance of alternative channels for seeing our doctors besides in person, and many startups built telemedicine apps during that period; 5G will propel the use of such apps and make doctor visits more efficient, with less waiting [5].

References

  1. https://davra.com/5g-internet-of-things/
  2. https://www.linkedin.com/pulse/iot-blockchain-challenges-risks-ahmed-banafa/
  3. https://www.linkedin.com/pulse/three-major-challenges-facing-iot-ahmed-banafa/
  4. https://appinventiv.com/blog/5g-and-iot-technology-use-cases/
  5. https://www.geospatialworld.net/blogs/how-5g-plays-important-role-in-internet-of-things/
  6. https://www.linkedin.com/pulse/iot-standardization-implementation-challenges-ahmed-banafa/
  7. https://www.linkedin.com/pulse/why-iot-needs-fog-computing-ahmed-banafa/
  8. https://www.linkedin.com/pulse/industrial-internet-things-iiot-challenges-benefits-ahmed-banafa/
  9. https://www.amazon.com/Secure-Smart-Internet-Things-IoT/dp/8770220301/

 

Ahmed Banafa has extensive experience in research, operations, and management, with a focus on IoT, blockchain, cybersecurity, and AI. He is the recipient of the Certificate of Honor from the City and County of San Francisco and the 2019 Author & Artist Award of San Jose State University. He was named the No. 1 tech voice to follow, a technology fortune teller, and an influencer by LinkedIn in 2018; his research has been featured by Forbes, IEEE, and MIT Technology Review; and he has been interviewed by ABC, CBS, NBC, CNN, BBC, NPR, and Fox. He is a member of the MIT Technology Review Global Panel. He is the author of the book "Secure and Smart Internet of Things (IoT) Using Blockchain and Artificial Intelligence (AI)," which won three awards: the San Jose State University Author and Artist Award, One of the Best Technology Books of All Time, and One of the Best AI Models Books of All Time. His second book, "Blockchain Technology and Applications," was named one of the Best New Private Blockchain Books and is used at Stanford University and other prestigious schools in the USA. He studied electrical engineering at Lehigh University, cybersecurity at Harvard University, and digital transformation at the Massachusetts Institute of Technology (MIT).

 

 

Wireless Time-Sensitive Networking (WTSN)

Dave Cavalcanti
July 22, 2021

 

Time is a precious and scarce resource, not only for people but also for most machines, computers, and IoT devices. Precise time and computing/communications with strictly bounded, usually low, latency are the foundations for emerging IoT applications and new user experiences. Future flexible factories, mobile and collaborative robots, autonomous systems, and virtual/immersive reality are a few examples of the next wave of applications that rely on accurate time and bounded (low) latency computing and communications.

Time-Sensitive Networking (TSN) is emerging as a toolbox of standards-based technologies, developed under the IEEE 802.1 TSN task group, that provide accurate time distribution and deterministic data delivery, capabilities that will enable the next wave of time- and safety-critical IoT applications and experiences. With TSN capabilities, the same network can be shared by time-sensitive and other (best-effort) types of applications, such as typical IT applications. Advances in recent wireless technologies, such as Wi-Fi 6/6E and 5G, have made Wireless TSN (WTSN) possible, unlocking scalable and highly flexible use cases. Resting on the foundation of open standards as defined in IEEE 802.1, a combination of wired and wireless TSN-capable networks will be able not only to manage automated operations but also to digest and react to data from multiple devices and systems in real time. In this renaissance of highly transparent, digitally observable production pipelines, discrete systems, factories, and suppliers will be able to share real-time information across all segments of the network, both wired and wireless. This flow of time-sensitive data will fuel enhanced AI decision-making, greater efficiency, and expanding customization capabilities, among a whole host of other benefits.

Given the reliability, stability, and interference-related issues associated with wireless communications, achieving wired-grade performance over wireless brings many challenges. We have discussed the main use cases, wireless technologies, and time-aware protocols for enabling WTSN in [1].

From Wired to Wireless TSN

TSN applications require careful reservation of resources (e.g., bandwidth) to ensure guaranteed data delivery. In practice, time-sensitive applications will reside within a managed TSN domain, a protected subset of the network wherein all devices are TSN-capable, which will likely include a mix of wired and wireless TSN devices. We envision WTSN as an extension of wired TSN, forming a hybrid wired/wireless TSN infrastructure as shown in the figure below.

Figure 1: Wired-Wireless TSN network architecture[1].

Network configuration and management are key to enabling TSN. The IEEE 802.1Qcc specification defines the network configuration and management models for TSN. Industrial IoT applications will primarily use a centralized model, in which all endpoints are managed by a Centralized User Configuration (CUC) device, while a Centralized Network Configuration (CNC) device handles resource scheduling and path management, configuring the TSN bridges in the wired and wireless segments. Of course, the wireless side will require careful planning and management to handle the intrinsic variations of the wireless channel and ensure deterministic data delivery across the network. Scheduling capabilities in Wi-Fi 6/6E and URLLC capabilities in 5G will be key to supporting the bounded low-latency requirements of TSN applications.
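
As a rough illustration of the kind of computation a CNC performs, the Python sketch below packs two hypothetical time-sensitive streams into exclusive transmission windows within a repeating cycle, leaving the remainder open for best-effort traffic, in the spirit of 802.1Qbv time-aware shaping. The stream parameters, cycle length, and greedy packing strategy are illustrative assumptions, not the 802.1Qcc configuration procedure itself.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    period_us: int     # transmission period requested by the endpoint
    tx_time_us: int    # worst-case airtime for one frame

# Illustrative streams; real parameters come from the CUC/CNC exchange.
streams = [
    Stream("robot_control", period_us=1000, tx_time_us=120),
    Stream("sensor_feed",   period_us=1000, tx_time_us=80),
]

def build_gate_schedule(streams, cycle_us=1000):
    """Pack each time-sensitive stream into an exclusive window of the
    cycle; whatever is left stays open for best-effort traffic."""
    offset = 0
    schedule = []
    for s in streams:
        assert cycle_us % s.period_us == 0, "periods must divide the cycle"
        schedule.append((s.name, offset, offset + s.tx_time_us))
        offset += s.tx_time_us
    schedule.append(("best_effort", offset, cycle_us))
    return schedule

for name, start, end in build_gate_schedule(streams):
    print(f"{name:>13}: gate open {start:4d}-{end:4d} us")
```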

WTSN Applications

Demonstrating WTSN capabilities with real applications is critical to understanding the practical challenges and preparing the technology for market adoption. We are seeing increasing research and industry efforts in this direction. For instance, the National Institute of Standards and Technology (NIST) and our team at Intel Labs have demonstrated Wi-Fi TSN enabling a collaborative robotic work cell. TSN time synchronization (802.1AS) and traffic shaping (802.1Qbv) implemented over Wi-Fi not only supported the control tasks with bounded low latency but also enabled the robots to operate faster, with less idle time compared to a non-TSN network. This work was recently presented at the 17th IEEE International Conference on Factory Communication Systems (WFCS 2021), where it received the best paper award [2]. Demonstrations of industrial control applications have also been done over 5G systems. Autonomous Mobile Robots (AMRs) and collaborative robots are further applications that can benefit from wireless connectivity and the determinism provided by TSN capabilities.
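
For readers unfamiliar with how 802.1AS-style synchronization establishes a shared clock, the sketch below computes the link delay and clock offset from the four timestamps of a two-way message exchange, the standard two-way time-transfer arithmetic used by PTP-family protocols. The timestamp values are made up for illustration.

```python
# Two-way time transfer, as used by PTP/802.1AS-style protocols.
# t1: request sent (initiator clock), t2: request received (responder clock),
# t3: response sent (responder clock), t4: response received (initiator clock).
def delay_and_offset(t1, t2, t3, t4):
    # Assumes a symmetric link: same propagation delay in both directions.
    delay = ((t4 - t1) - (t3 - t2)) / 2
    offset = ((t2 - t1) + (t3 - t4)) / 2
    return delay, offset

# Illustrative timestamps in ns (responder clock runs 500 ns ahead,
# propagation delay is 1000 ns).
t1, t2, t3, t4 = 1_000, 2_500, 3_500, 4_000
d, o = delay_and_offset(t1, t2, t3, t4)
print(f"link delay = {d:.0f} ns, clock offset = {o:.0f} ns")
```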

Research Challenges and Next Performance Frontiers

Delivering wire-equivalent reliability and determinism over wireless is a hard problem, especially when operating in unlicensed or shared spectrum. Although there has been significant progress in demonstrating bounded low latency over wireless, WTSN solutions will need to address many other challenges, including resilience, security, mobility, and efficiency. Wireless networks will need to be resilient to a range of disturbances typical of wireless communications. They will need to show high degrees of fault tolerance against interference and security threats while meeting strict timing requirements. Enabling redundancy, interference detection, localization, and real-time network management to ensure seamless mobility are important research areas. Ultimately, distributed IoT applications will require determinism across multiple computing and network nodes along end-to-end paths that may include wired and wireless links. Typically, application and networking layers are designed in isolation, but co-optimized orchestration of computing and network resources will be needed to meet ultra-low latency and reach the next performance frontiers for industrial IoT systems. Related work on control-communication co-design has demonstrated promising results in increasing the efficiency of wireless control systems [3].

Developing the WTSN Industry Ecosystem

To make WTSN a reality, we will need partnerships across the industry. For instance, the Avnu Alliance, an industry group developing an interoperable TSN ecosystem, has formed a Wireless TSN working group to guide the industry on market requirements, product certification, and testing. The group recently published a WTSN white paper[1] laying out the use cases, architecture, wireless technologies, and standards roadmap for WTSN market adoption. The WBA and 5G-ACIA have also been active in connecting end users and technology providers in trials involving Wi-Fi 6 and 5G technologies, respectively.

For the moment, wireless TSN over Wi-Fi and 5G is in its early stages, but analyses of current TSN, Wi-Fi, and 5G capabilities, as well as use cases and market requirements, by industry groups such as IEEE 802.1, IEEE 802.11, 3GPP, the Avnu Alliance, 5G-ACIA, and the WBA reveal an encouraging portrait of how the first WTSN applications will manifest. We have a clear picture of the TSN capabilities (time synchronization, traffic shaping) and applications that will be supported by wireless systems. It is also clear that hybrid wired/wireless TSN-capable networks will be required to support end-to-end applications and leverage emerging edge computing capabilities as well as virtualization of industrial applications.

References

  1. Cavalcanti, D., Perez-Ramirez, J., Rashid, M. M., Fang, J., Galeev, M., & Stanton, K. B. (2019). Extending accurate time distribution and timeliness capabilities over the air to enable future wireless industrial automation systems. Proceedings of the IEEE, 107(6), 1132-1152.
  2. Sudhakaran, S., Montgomery, K., Kashef, M., Cavalcanti, D., & Candell, R. (2021). Wireless time-sensitive networking for industrial collaborative robotic workcells. In 2021 17th IEEE International Conference on Factory Communication Systems (WFCS). IEEE.
  3. Eisen, M., Rashid, M. M., Cavalcanti, D., & Ribeiro, A. (2020). Control-aware scheduling for low latency wireless systems with deep learning. In 2020 IEEE International Conference on Communications Workshops (ICC Workshops) (pp. 1-7). IEEE.

 

[1] https://avnu.org/wireless-tsn-paper/


 

Dave Cavalcanti received his Ph.D. in computer science and engineering in 2006 from the University of Cincinnati. He is currently Principal Engineer at Intel Corporation where he develops next-generation wireless connectivity and networking technologies and their applications in autonomous, time-sensitive systems. He leads Intel Lab’s research on Wireless Time-Sensitive Networking (TSN) and industry activities to enable determinism in future wireless technologies, including next-generation Wi-Fi and beyond 5G systems. He is a Senior Member of the IEEE and serves as the chair of the Wireless TSN working group in the Avnu Alliance.

 

 

Practical Artificial Intelligence for the IoT

Danilo Pietro Pau
July 22, 2021

 

This article proposes a practical methodology and associated tools for next-generation IoT developers who aim to productively conceive and deploy IoT applications with interoperable artificial neural networks (ANN) into resource-constrained microcontrollers (MCU).

Educating future computer-science engineers on artificial intelligence (AI) and ANNs for embedded systems requires a consistent study plan. The embedded industry is struggling to find well-prepared B.Sc.-, M.Sc.-, and Ph.D.-degreed engineers who are fluent in Python and C/C++ programming. Thus, there is a talent gap that will take several years to fill.

Academia has recognized this disparity and is taking steps to compensate. One example is an initiative between Harvard University and edX with the courses Fundamentals of TinyML[1], Applications of TinyML[2], and Deploying TinyML[3], which focus on the basics of machine learning and embedded systems by teaching how to program with TensorFlow Lite for MCUs. The program teaches IoT practitioners to write machine learning (ML) models for resource-constrained MCUs and covers a range of student-designed IoT applications.

And industry? What help can it provide? STMicroelectronics, in cooperation with the Università di Catania, has developed the course Programming in Embedded Systems: from the Basics of the Microcontroller to Artificial Intelligence[4], a clear demonstration of the industry's interest in facilitating teaching for a large community of IoT practitioners. Programs like this will help graduates find ML jobs more easily, in an environment rich with opportunities to drive their future professional growth.

When developing IoT applications based on ANN that can be mapped onto tiny MCUs, the steps must be simple and easy to learn. We are proposing a simple, easy, inexpensive, and productive 5-step methodology, as shown in Figure 1:

  1. Define an IoT application: first, define the problem to be solved and prepare, accordingly, to capture enough representative data about the physical phenomena that will be subject to ANN processing for that data to be meaningful. This usually involves placing sensors at, on, or near the physical object to be monitored to record its state and associated changes over time. Examples of physical parameters include acceleration, temperature, sound, pressure, vision, thermal imaging, and battery charge, among others, depending on the target IoT application.
  2. Create an ANN, which requires labeled data acquired from sensors and pre-processed. In "supervised learning," the designer must associate elements of the data set with semantically defined labels so that an input-output relation is uniquely set by construction. This classified set is the "ground truth" that will be used to train the ANN and then validate it, using a proper partition of the set. The designer must decide what type of topology the ANN should feature to learn from the data and achieve high accuracy for the target IoT application. Usually, this step involves one of the popular off-the-shelf deep-learning frameworks to architect, train, and test the ANN topologies.
  3. Train the ANN, which involves passing the data sets through the ANN iteratively so that the ANN's outputs minimize the adopted error criteria. ANN definition, training, and testing are typically performed using an off-the-shelf deep-learning framework, as noted in the previous step (a minimal sketch follows this list). Training is usually done on a powerful computing platform (such as a server with GPUs) featuring virtually unlimited memory and computational power, allowing many epoch-based iterations until the output converges to satisfactory accuracy. Training produces a pre-trained ANN stored in a file in the format of the adopted deep-learning framework (e.g., Keras .h5, TensorFlow Lite .tflite, PyTorch/MXNet/PaddlePaddle .onnx). These are interoperable file formats, either de facto industry standards (Google) or specified by a large community (ONNX).
  4. Convert the ANN into optimized code for the MCU: to avoid unproductive, repetitive, error-prone, hand-crafted C-code development, STMicroelectronics designed a technology that allows fast, accurate, automatic conversion of pre-trained ANNs into optimized C code. This C code can run on a tiny MCU with full validation and built-in performance-characterization facilities, without any need to hand-craft them. The technology, embodied in freely available tools[5],[6], guides IoT practitioners and developers through the selection of the MCU (from ST's STM32 and SPC58 families) and provides rapid, detailed feedback on the implementation implications of the ANN on the selected MCU, for both IoT (STM32) and automotive (SPC58) applications. Validation of the ANN against the deep-learning runtime can run both on the PC and on the target MCU. The technology does not impose any specific constraints on the IoT application, and it facilitates integration into the design flow through a well-defined set of public application program interfaces (APIs). It also offers simple and efficient interoperability with the popular deep-learning training tools widely used by the AI developer community.
  5. Finally, embed the ANN into an MCU integrated into the IoT application for field trials and in-field validation.
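
A minimal Keras sketch of steps 2 and 3 follows. The input shape, layer sizes, class count, and file names are illustrative assumptions, and the training call is shown commented out because it presumes labeled arrays x_train and y_train prepared as described in step 2.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Step 2: a small separable-convolution CNN. Input shape, layer sizes, and
# the four-class output are illustrative assumptions.
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.SeparableConv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.SeparableConv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Step 3: iterate over the labeled data set until accuracy converges.
# x_train / y_train are assumed to exist from step 2's labeling work.
# model.fit(x_train, y_train, validation_split=0.2, epochs=30)

# Export to interoperable pre-trained formats.
model.save("fill_level.h5")  # Keras format
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("fill_level.tflite", "wb") as f:
    f.write(tflite_model)
```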

Figure 1: The 5 steps needed to deploy an IoT application based on an ANN onto STM32 and SPC58 MCUs.

To prove this methodology in practice, let us walk through a case study that shows how easy and productive the methodology is, even for IoT practitioners who need to quickly prototype their ideas without wrestling with complex software packages or development boards.

Step 1: Let’s consider that we need to classify the fill level of sodium chloride sterile liquid in bottles for intravenous administration. One goal is to reduce or eliminate continuous human visual monitoring, which can be an onerous, time-consuming, and error-prone task; automating it can increase productivity and save time. Under normal circumstances, human visual monitoring of the saline level in the bottle is required from time to time, without any real-time criticality. If the saline in the bottle is fully consumed and the bottle is not replaced or the infusion is not stopped immediately, the difference between the patient's blood pressure and the empty saline bottle could cause an outward rush of blood into the bottle.

Step 2: The dataset with pictures of the saline bottles is openly available in [1], free of charge, properly labeled, documented, and organized in folders.

Step 3: Various ANNs were designed based on convolutional (separable and depth-wise) feed-forward topologies (CNNs) using the Keras deep-learning framework. The topologies are composed of a mix of Conv2D, ReLU, MaxPooling, Flatten, and Dense layers. By setting the kernel sizes of the filters and their numbers, and by interleaving SeparableConv2D layers, model sizes can be reduced dramatically. This matters because it lowers the model complexity, measured in multiply-and-accumulate operations (MACCs), the occupancy of non-volatile MCU memory (flash), and the dynamic memory occupancy (RAM). The ANNs were first hand-crafted using 32-bit floating-point precision and then quantized to 8-bit integers. 8-bit quantization is performed by first converting the pre-trained network from Keras to the TensorFlow Lite file format and then applying the post-training quantization procedure, which also requires a calibration dataset. This procedure usually reduces accuracy only marginally, or at least did so in our case.
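
The sketch below shows the post-training quantization flow described above, using the standard TensorFlow Lite converter API. Here, model and calibration_images are assumed to come from the training step, and the sample count is an illustrative choice.

```python
import numpy as np
import tensorflow as tf

# model: the trained Keras network from step 3; calibration_images: a small
# NumPy array of representative training images (both assumed to exist).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # A few hundred samples are typically enough to calibrate value ranges.
    for img in calibration_images[:200]:
        yield [np.expand_dims(img.astype(np.float32), axis=0)]

converter.representative_dataset = representative_dataset
# Force full int8 kernels so the MCU never falls back to float ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("fill_level_int8.tflite", "wb") as f:
    f.write(converter.convert())
```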

Step 4: Those ANNs were automatically converted into C code for STM32 MCUs, which are embedded in the STM32 Nucleo family[7]. STM32 Nucleo MCU boards are priced affordably, so they make it easy to try out new IoT ideas and quickly create prototypes or proofs of concept with any Arm® Cortex®-M4- or M7-based STM32 MCU. Compiler and debugger tools are free of charge[8], too. Figure 2 lists the various ANN versions developed in step 3, their complexity, and the validation accuracy achieved using the ST AI tools[9] that automatically deploy the ANN on the STM32 Nucleo board.

Figure 2: Complexity of the various ANN topologies automatically mapped on an STM32H743ZI2 Nucleo board (480 MHz, 2 Mbytes flash, 1 Mbyte RAM). Any other STM32 Nucleo M4 or M7 board can be used. DW stands for depth-wise convolutions.

Step 5: Since the STM32 Nucleo board does not feature an integrated image sensor or any other sensor, we designed the system depicted in Figure 3a. The demonstrator uses a PC connected to an STM32 Nucleo MCU board via USB: (a) the STM32 MCU runs any of the ANN models in Figure 2, generated with the "validation on target" program built into X-CUBE-AI; (b) a webcam is attached to the PC; and (c) a Python script using the OpenCV library, running on the PC in a properly configured Conda environment, captures image frames from the sensor in real time and sends them to the STM32 Nucleo board via USB, which processes the data and sends the ANN classification results, along with STM32 execution times, back to the GUI. Bi-directional PC <--> STM32 Nucleo communication happens through a serial port emulated over USB. Images are encoded using a dedicated binary protocol (documentation is openly available by installing X-CUBE-AI) and decoded on the MCU.
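
A simplified sketch of the PC side of such a demonstrator is shown below, using OpenCV for frame capture and pySerial for the USB link. The port name, input resolution, and framing are illustrative assumptions; the real demonstrator uses the dedicated binary protocol documented with X-CUBE-AI rather than the raw byte exchange shown here.

```python
import cv2       # pip install opencv-python
import serial    # pip install pyserial

PORT = "/dev/ttyACM0"    # Nucleo virtual COM port (assumption)
INPUT_SIZE = (128, 128)  # must match the deployed ANN input (assumption)

link = serial.Serial(PORT, baudrate=115200, timeout=1.0)
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    # Pre-process on the PC: resize to the network's input resolution.
    small = cv2.resize(frame, INPUT_SIZE)
    link.write(small.tobytes())   # send raw pixels to the MCU
    reply = link.read(8)          # read back class id and timing bytes
    if reply:
        print("MCU reply:", reply.hex())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cam.release()
link.close()
```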

Figure 3a: Demonstrator of the concept using a PC and an STM32 Nucleo MCU board.

Figure 3b: Some visual results.

The lessons that IoT practitioners can learn from this case study are:

  • Do not stop at the first ANN topology you conceive or reuse from related work. Further exploration must be done to shrink the model to be as tiny as possible while keeping accuracy at the expected level. Every ANN layer has complexity and storage costs, measurable through X-CUBE-AI, that impact model parameter size and memory footprint. Be aware of those costs even before embarking on time-consuming model training such as K-fold validation.
  • Consider using ST’s AI tools to automatically explore ANN deployability on an MCU. As noted, you can do this even before the training phase, using the automatic analysis that reports the computational complexity of the ANN as well as its impact on the MCU's embedded flash and RAM, so you can be aware of, and apply, optimizations to the ANN as early as possible in the design process. Take advantage of the automatic deployment, including on-target validation and performance characterization, that the tool offers via built-in programs to program the MCU with your favorite ANN. All of this will have a dramatic impact on your productivity.
  • Consider using post-training quantization to convert full-precision models to 8-bit integers. This cuts storage costs by a factor of four and accelerates execution on the MCU by, on average, two to three times. Check carefully whether and to what extent accuracy is compromised; typically, a reduction of less than 1% can be tolerated.
  • Connect any sensor to the PC and consider using the X-CUBE-AI pre-defined data format to convert and route sensor data to the STM32 Nucleo MCU via USB. This will help you quickly assemble a proof of concept (POC) to show the idea to stakeholders. Write a Python script to support the purpose, as we did.
  • Unleash your fresh-minded creativity. This is perhaps the most important and valuable contribution any early practitioner can bring to the ML field, since they are not yet biased by experience. Address new problems and challenge the AI tools.

How will the methodology evolve to become even easier and more productive? All the topologies presented in Figure 2 were hand-crafted; the associated hyperparameters were changed manually, with values picked based on personal insight, experience, and knowledge of the inner properties of the ANN layers used. Undoubtedly, this is the most challenging task for any early practitioner, especially one with an embedded-programming background and limited or no pre-existing ML knowledge. So we are back to a new gap: how to shape an ANN topology. Is there any development likely to provide a better way forward? AutoML tools represent an interesting evolution: they automatically design an ML algorithm and simultaneously set its hyperparameters to optimize its empirical performance on a given dataset. When the ML algorithm to optimize is an ANN, AutoML specializes into Neural Architecture Search (NAS). For a given ANN topology, hyperparameter optimization (HPO) supports the automatic choice of a set of optimal hyperparameters to maximize accuracy. Unfortunately, the resulting highly accurate ANN is typically run on powerful targets and is not deployable on resource-constrained MCUs. Fortunately, the research community is very active, and interesting technologies have been proposed to fill this gap and help the mapping process. Early examples are AutotinyML [2], tinyNAS [3], μNAS [4], and [5]. These tools offer NAS/HPO features while addressing the challenge of ANN implementability on MCUs in the earliest phases of the ANN design process.
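
As a toy illustration of deployability-constrained search (not an implementation of AutotinyML or tinyNAS), the sketch below randomly samples hyperparameters and discards any candidate whose estimated int8 weight footprint exceeds a hypothetical flash budget, before any training time is spent on it.

```python
import random
from tensorflow.keras import layers, models

FLASH_BUDGET_BYTES = 512 * 1024  # hypothetical MCU flash budget for weights

def build_candidate(f1, f2, dense):
    # Candidate topology family; shapes and class count are illustrative.
    return models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.SeparableConv2D(f1, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.SeparableConv2D(f2, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(dense, activation="relu"),
        layers.Dense(4, activation="softmax"),
    ])

# Random search: sample hyperparameters, reject candidates whose int8
# weight footprint would not fit the budget, before any training is done.
deployable = []
for _ in range(20):
    cfg = (random.choice([8, 16, 32]),
           random.choice([16, 32, 64]),
           random.choice([16, 32, 64]))
    model = build_candidate(*cfg)
    flash_bytes = model.count_params()  # ~1 byte/parameter after int8 quantization
    if flash_bytes <= FLASH_BUDGET_BYTES:
        deployable.append((cfg, flash_bytes))

for cfg, size in deployable:
    print(f"filters/dense {cfg}: ~{size / 1024:.0f} KiB of weights")
```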

The technology for mapping ANNs onto MCUs is moving incredibly fast these days. However, it must not leave the education of next-generation IoT practitioners behind, and every effort must be made to help them. The IEEE Region 8 Action for Industry sub-committee has also established the Internship Initiative as a contribution to better connect industries with IEEE students as early as possible during their studies.

References

  1. Pau, D., Kumar, B. P., Namekar, P., Dhande, G., & Simonetta, L. (2020). Dataset of sodium chloride sterile liquid in bottles for intravenous administration and fill level monitoring. Data in Brief, 33, 106472.
  2. Perego, R., Candelieri, A., Archetti, F., & Pau, D. (2020). Tuning Deep Neural Network’s Hyperparameters Constrained to Deployability on Tiny Systems. In International Conference on Artificial Neural Networks (pp. 92-103). Springer, Cham.
  3. Lin, J., Chen, W. M., Lin, Y., Cohn, J., Gan, C., & Han, S. (2020). MCUNet: Tiny deep learning on IoT devices. arXiv preprint arXiv:2007.10319.
  4. Liberis, E., Dudziak, Ł., & Lane, N. D. (2021). μNAS: Constrained Neural Architecture Search for Microcontrollers. In Proceedings of the 1st Workshop on Machine Learning and Systems (pp. 70-79).
  5. Xiong, Y., Mehta, R., & Singh, V. (2019). Resource constrained neural network architecture search: Will a submodularity assumption help?. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 1901-1910).

 

[1] https://www.edx.org/course/fundamentals-of-tinyml

[2] https://www.edx.org/course/applications-of-tinyml

[3] https://www.edx.org/course/deploying-tinyml

[4] http://web.dmi.unict.it/it/corsi/l-31/la-programmazione-nei-sistemi-embedded-dalle-basi-del-microcontrollore-all%E2%80%99intelligenza

[5] https://www.st.com/content/st_com/en/ecosystems/stm32-ann.html (X-CUBE-AI)

[6] https://www.st.com/en/development-tools/spc5-studio-ai.html

[7] https://www.st.com/en/evaluation-tools/stm32-nucleo-boards.html

[8] https://www.st.com/en/development-tools/stm32cubeide.html

[9] https://www.st.com/en/embedded-software/x-cube-ai.html


 

Danilo Pau graduated from Politecnico di Milano in 1992. One year earlier, he had joined STMicroelectronics, where he worked first on HDMAC decoder design and then on MPEG2 video memory reduction, video coding, embedded graphics, and computer vision. Today, his work focuses on developing solutions for deep-learning tools and applications. Danilo was elevated to IEEE Fellow in 2019 and has served as Industry Ambassador coordinator for IEEE Region 8 South Europe and as a member of the Machine Learning, Deep Learning and AI in the CE (MDA) Technical Stream Committee of the IEEE Consumer Electronics Society (CESoc). With over 80 patents, 100 publications, 113 authored MPEG documents, and 40 invited talks/seminars at universities, Ph.D. schools, and conferences worldwide, Danilo's favourite activity remains mentoring undergraduate students, M.Sc. engineers, and Ph.D. students. If you are interested in discussing further, please contact Danilo Pietro Pau, Technical Director, IEEE and ST Fellow, STMicroelectronics, Agrate Brianza (Italy), at danilo.pau@st.com.

 

 

Global IoT Spending in the COVID-19 Era

Philipp Wegner
July 22, 2021

 

Key Insights: i) overall enterprise Internet of Things (IoT) spending grew 12.1% in 2020, to $128.9 billion; ii) Asia-Pacific (APAC) saw the fastest growth (17%), followed by North America (14.9%) and Europe (9.7%); iii) enterprise IoT spending is expected to grow 24% in 2021, led by investments in IoT software and IoT security; iv) beyond 2021, IoT spending is expected to grow at 26.7% annually.

Technology markets are re-accelerating in 2021 as COVID-19 fades. IoT remains a high-growth market with opportunities across the entire technology stack.

Spending on enterprise IoT solutions grew 12.1% in 2020 to $128.9 billion, according to IoT Analytics’ latest update on the overall enterprise IoT market[1]. The COVID-19 pandemic had vastly different impacts on different segments of the IoT market. For example, spending on IoT hardware grew 5.4% in 2020, while spending on IoT cloud/infrastructure services grew 34.7% in the same timeframe. Many hardware installations were postponed as travel came to a standstill and capital expenditure budgets were frozen. At the same time, software tools, especially those that could serve as responses to the pandemic (e.g., IoT-based remote asset monitoring) and those allocated to operational expenditures, saw a smaller negative effect and in some rare cases even a pandemic-led boost.

China managed to limit pandemic effects by acting quickly and decisively. As a result, enterprise IoT spending grew by 23.5%, nearly twice the global average.

Despite the continuing negative impact of the pandemic on IoT budgets, IoT Analytics expects 2021 IoT spending to increase 24.0%, with the overall market reaching $159.8 billion by the end of 2021.

The post-COVID-19 digitization push that many forecasted can already be felt, and it is the view of the IoT Analytics team that the increasing use of digital technologies will lead to a compound annual growth rate (CAGR) for IoT spending of 26.7% between 2022 and 2025. At the same time, the number of global IoT connections is expected to reach 31 billion, tenfold what it was a decade ago, as IoT Analytics reported last year[2].
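
As a back-of-the-envelope check on what those figures imply, the snippet below compounds the projected 2021 market size at the stated CAGR; the resulting year-by-year values are illustrative arithmetic, not figures from the IoT Analytics report.

```python
spending_2021_busd = 159.8  # projected 2021 enterprise IoT spending ($B)
cagr = 0.267                # expected CAGR for 2022-2025

# Compound the article's own figures year by year.
value = spending_2021_busd
for year in range(2022, 2026):
    value *= 1 + cagr
    print(f"{year}: ${value:,.1f}B")
```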

Figure 1: Interactive data dashboard.

Segment View

Due to low oil prices, IoT spending in the oil and gas industry was hit hard in 2020, declining much more than the global average. On the other hand, driven by the boom in e-commerce and online shopping, IoT spending by warehousing companies rose 22.3% in 2020. IoT spending in the automotive industry, in comparison, was very mixed: 2020 IoT spending by Chinese car manufacturers grew nearly 20%, while Europe and South America grew at below-average rates.

The 2021 IoT spending outlook shows that the pharmaceuticals, metals, and energy segments are among the fastest-growing segments.

Regional View

Spending on IoT enterprise solutions in Europe grew by 9.7% in 2020, compared to 14.9% in North America and 17% in APAC.

The regional differences can be explained by the evolution of the pandemic and related enterprise spending behavior. An immediate and strong reaction to the COVID-19 outbreak helped APAC bounce back quickly. The enterprise IoT market in China, for example, grew 23.5% in 2020.

2021 IoT spending outlook: In 2021, the lasting push for digitization is expected to lead to higher growth in all major world regions, with APAC leading, followed by North America and Europe.

Technology View

Companies increased spending on IoT security by 40.3% in 2020, as a surge in high-profile attacks pushed them to invest in cyber- and IoT security. IoT cybersecurity incidents that were visible in the media, such as the hacks of Amazon’s Ring cameras in late 2019[3], increased awareness of the need for better protection of IoT devices.

Correspondingly, a recent survey by IoT Analytics found that an overwhelming 83% of information technology professionals implemented stronger cyber hygiene among employees during the pandemic and plan to continue prioritizing the subject after COVID-19.

Other areas that saw significant increases in spending include cloud infrastructure for IoT deployments and IoT software applications. Growth for IoT software applications is expected to pick up in the coming years. For example, in just a few years, predictive maintenance has moved from an uncertain, standalone niche application to a fast-growing, high return on investment (ROI) application that delivers measurable value to users[4]. IoT Analytics expects that IoT applications with very strong ROI profiles will grow at rates above the market average in the coming years.

Spending on IoT hardware grew more slowly than spending on IoT software in 2020: companies spent 5.4% more on computers, gateways, sensors, chipsets, and other hardware as part of their IoT solutions, while spending on specific subsets of the market, e.g., cellular IoT modules, declined by 8% in the same timeframe[5].

One area to watch in 2021 is spending on IoT chipsets (part of IoT hardware). IoT Analytics is forecasting strong growth in 2021; however, ongoing supply issues might mean that the demand will not be met, even by the end of the year.

 

[1] https://iot-analytics.com/iot-market-data/global-iot-enterprise-spending/

[2] https://iot-analytics.com/state-of-the-iot-2020-12-billion-iot-connections-surpassing-non-iot-for-the-first-time/

[3] https://www.theguardian.com/technology/2020/dec/23/amazon-ring-camera-hack-lawsuit-threats

[4] https://iot-analytics.com/predictive-maintenance-market-evolution-from-niche-topic-to-high-roi-application/

[5] https://iot-analytics.com/cellular-iot-module-market/

 

 

Philipp Wegner is a senior analyst focusing on quantitative analysis, surveys, and market models. He has a background in economics and market research. More information at https://iot-analytics.com/request-a-demo/.