Bringing Machine Learning to the Deepest IoT Edge with TinyML as-a-Service

Hiroshi Doyu, Roberto Morabito, and Jan Höller
March 11, 2020


The power of machine learning can have a remarkable technological impact at the core of the constrained and embedded Internet of Things (IoT). Yet various technological barriers have so far made it challenging to realize the full value of ML-driven IoT at the edge. TinyML holds promise for a solution. At Ericsson Research, we are currently exploring the potential and challenges of TinyML, and introducing the concept of TinyML as-a-Service (TinyMLaaS) to address some of those challenges.

Machine Learning (ML) has profoundly revolutionized computer technologies over the last decade. By extension, it has impacted several application domains and industries, ranging across medical, automotive, smart cities, smart factories, business, finance, and more. Remarkable research efforts are still ongoing today, across both industry and academia, to bring out the full advantage of the ever-growing number of ML algorithms. The aim is to make computing machines, of every form factor, smarter and able to deliver sophisticated and reliable services.

ML applied in the context of the IoT is, without doubt, an application domain that has attracted a large amount of interest from across the enterprise, industrial and research communities. Today, researchers and industry experts are working extensively to advance existing ML-driven IoT, to boost the quality of experience for users of smart devices and to improve industrial processes.

It is worth noting that the use of ML in IoT has multiple opportunities and interpretations. In our view, taking advantage of intelligent algorithms in the IoT context also includes equipping small IoT end-devices running on micro-controllers with the capability to benefit from ML algorithms. This extends the use of ML in IoT beyond the cloud and beyond more capable devices running e.g. Linux.

Figure 1: Example usages of TinyML in industrial IoT.

Applicability examples of TinyML in industrial IoT are plentiful; let us mention a few. In discrete manufacturing, production up-time, quality, safety, and yield are priorities. By embedding ML-trained real-time inferencing in sensors inside machines, more accurate and timely predictive maintenance can be achieved (Figure 1). This inferencing is possible both on the individual sensor level and using sensor fusion at aggregate levels in the machine itself. Further, by embedding inferencing deeply inside a complex production line, the quality of produced parts and assemblies can be controlled in-process rather than post-process, i.e. making it possible to take corrective actions when needed rather than relying on post-manufacture quality inspections. Moreover, by employing multi-modal sensory monitoring of the entire factory environment, e.g. in the ceiling and floor, safety can be ensured by detecting people's movement in real-time, when and where it happens, using e.g. infrared, temperature, or vibrations. Likewise, the environmental properties needed for production quality, like humidity, gases, and air particles, can be kept under control based on inferencing in sensors. Another example from a very different scenario is to deploy several microphone arrays in the turbine hall of a power station and analyze the sound in real-time to detect impending turbine failure. As can be imagined, embedding inferencing using ML algorithms deeply into machines and processes can have a very significant impact.

Using ML in deeply embedded processes, like the application examples above, also entails technical constraints. Many consider ML at the IoT edge to mean inferencing on devices like the single-board computer Raspberry Pi. But the question goes deeper: how can we make ML algorithms fit on "constrained IoT devices", typically based on 32-bit microcontroller units that are not capable of running an operating system like Linux? Usually, those microcontrollers feature 256 KB of SRAM and a few MB of flash memory.

To provide an answer, we need a clear understanding of what can be defined as a "constrained IoT device". Over the last decades of IoT research, there have been attempts to converge towards a common and coherent definition. For our purposes, we adopt the definition and characterization given by the Internet Engineering Task Force (IETF) in RFC 7228 [4]. We believe that we must consider and operate within the world of embedded systems to be able to talk about IoT devices at the very deep edge. Embedded can be considered a synonym of hardware and software constraints and, in turn, an antonym of Cloud and Edge, which in this discussion stand for big and somewhat "unlimited" resources. Embedded can also be viewed as embedding the computing, sensing, and actuation in everyday objects and environments, like a soil sensor in agriculture or a vibration sensor in a manufacturing machine.

What is TinyML?

Using the above definition of "constrained IoT device" as a starting point, it is crucial to characterize the distinction between "serving" ML to IoT devices, and "processing" ML within IoT devices.

In the "serving" case, all the ML-related tasks like training are “outsourced” to the Edge and Cloud, meaning that an IoT device is somehow "passively" waiting to receive the resulting ML model. In the "processing" case, an IoT device effectively uses the ML model for local inferencing on sensor data.

Figure 2 illustrates the overlaps of different technology areas in this context and where our research focus is. One can note several overlapping areas representing the common grounds of interest. As an example, the world of embedded Linux can be considered a rallying point between "Linux" technologies and "constrained IoT", thus also acknowledging that IoT capabilities stretch across the device-edge-cloud realms. "TinyML" represents the intersection between "Constrained IoT" and "ML" that is disjoint from "Linux"; the latter distinction is a crucial aspect of our research focus [1].


Figure 2: Intersections between Constrained IoT, ML, and Linux.


Here we define TinyML as the technology area concerned with running ML inference ("processing") on Ultra-Low-Power (ULP, ~1mW) micro-controllers found on IoT devices. TinyML is not only a general technical concept; it also has an emerging community of researchers and industry experts. The tinyML Summit is held annually, and a tinyML meet-up is held monthly in Silicon Valley [6, 7].
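To make the "processing" side concrete, the following is a minimal sketch, in Python for readability, of the kind of integer-only computation a TinyML runtime performs on a microcontroller: a single dense layer evaluated with int8-quantized weights and a final rescale. All values and the function name are hypothetical, for illustration only.

```python
def dense_int8(inputs_q, weights_q, bias_q, scale):
    """Compute one fully-connected layer using int8-quantized operands."""
    outputs = []
    for row in weights_q:
        acc = bias_q[len(outputs)]
        for x, w in zip(inputs_q, row):
            acc += x * w             # integer multiply-accumulate, as on an MCU
        outputs.append(acc * scale)  # rescale back to a real-valued activation
    return outputs

# A toy 2-neuron layer over a 3-element input of quantized sensor features.
inputs_q = [12, -5, 30]
weights_q = [[3, 1, -2], [0, 4, 1]]   # int8 weight matrix (2 x 3)
bias_q = [10, -20]
print(dense_int8(inputs_q, weights_q, bias_q, scale=0.01))
```

The point of the sketch is that the inner loop needs only integer arithmetic and a few hundred bytes of state, which is what makes inference feasible within the SRAM and power budgets described above.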

The Challenges of TinyML

Now we elaborate a little on two key challenges of TinyML, the first related to development, the second related to the applicability of ML frameworks.

  • The gap between general software development and embedded development: general software development and execution usually target a fleet of Linux machines with Gigabytes of RAM, Terabytes of storage (HDD/SSD), GHz of processing on multicore 64-bit processors, and Linux container orchestration. Embedded development and execution, on the other hand, target a variety of micro-controllers and Real Time Operating Systems (RTOS), with hundreds of kB of SRAM, a few Megabytes of flash memory, and no standard orchestration. These two target environments, as illustrated in Figure 3, are totally different. We cannot simply migrate cloud-native software onto constrained IoT devices.

Figure 3: Web vs Embedded software environments.


  • Applicability of ML frameworks: as mentioned, ML typically has two phases, one for training and another for inferencing. ML training is usually done in the cloud with popular Python-based ML frameworks, e.g. TensorFlow, PyTorch, etc., and the produced model is stored and archived in repositories called model zoos. Thanks to the recent introduction of ONNX (Open Neural Network eXchange), each ML framework can easily make use of a model trained in another framework. But this cannot be applied to embedded IoT: these frameworks and models are all too big to run on constrained IoT devices (Figure 4).

Figure 4: ML, software and hardware specifics across cloud, web and embedded domains.

The above limitations are further explained in one of our Ericsson Blog articles [2]. In summary, we propose to build a higher-level abstraction of TinyML software that is as hardware and software agnostic as possible to hide the heterogeneity of ML-enabled chips and compilers, and further to support this in an "as a Service" fashion. This is what we call TinyML as-a-Service.

What is TinyML as-a-Service?

So, what is our TinyML as-a-Service concept (TinyMLaaS) and how can it solve TinyML problems?

A typical, traditionally pre-trained ML inference model cannot run on constrained IoT devices as it is, because the computing resources of those devices are insufficient. Such a model must be converted to fit the target device's resources. An ML compiler can convert a pre-trained model into one appropriate for the target IoT device platform, using techniques to squeeze the model size: for example, "quantizing" with fewer computing bits, "pruning" less important parameters, or "fusing" multiple computational operators into one. Since popular ML frameworks cannot run on the targeted IoT devices, an ML compiler also needs to generate a specialized small runtime, optimized for that specific model and for the embedded hardware accelerators that the device features. The latter are typically chip vendor-specific, and we consider these steps a customization service per device's features.
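The "quantizing" step mentioned above can be sketched as affine quantization: mapping float32 weights onto int8 values via a scale and zero-point. Real compilers (for example TensorFlow Lite's converter) do this per-tensor or per-channel; the code below is a simplified stand-alone illustration, not any particular compiler's implementation.

```python
def quantize(weights, num_bits=8):
    """Map a list of float weights onto signed num_bits integers."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized values."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored value stays within one quantization step of the original,
# while storage drops from 32 bits to 8 bits per weight.
```

This is exactly the size-versus-precision trade-off that lets a model fit in a few hundred kB of SRAM; pruning and operator fusion then shrink the model and runtime further.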

TinyML as-a-Service is proposed as an on-demand customization service in the cloud. It can host multiple ML compilers as its backends. First, it gathers device information from a device, e.g. using LwM2M [8]. Second, it generates an appropriate ML inference model from a model zoo, and then installs it onto devices on-the-fly, e.g. again using an LwM2M Software Over-The-Air update (Figure 5).
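The workflow just described can be sketched as follows. All class, function, and backend names here are hypothetical (the article defines no API); device capabilities, which in practice would come from LwM2M object reads, are represented as a plain dict, and the compiler backends are stubs.

```python
COMPILER_BACKENDS = {
    # chip family -> function that turns a model into a device-specific image
    "cortex-m4": lambda model: f"{model}-compiled-for-cortex-m4",
    "riscv":     lambda model: f"{model}-compiled-for-riscv",
}

def compile_for_device(model_name, device_info):
    """Pick a compiler backend matching the reported device and build an image."""
    backend = COMPILER_BACKENDS.get(device_info["cpu"])
    if backend is None:
        raise ValueError(f"no compiler backend for {device_info['cpu']}")
    if device_info["sram_kb"] < 64:
        raise ValueError("model too large for device SRAM")  # crude resource check
    return backend(model_name)

# 1) gather device info (in practice via LwM2M device-object reads)
device = {"cpu": "cortex-m4", "sram_kb": 256, "flash_mb": 1}
# 2) generate an appropriate inference image from a model-zoo entry
image = compile_for_device("keyword-spotting-v1", device)
# 3) install on-the-fly, e.g. via an LwM2M software update (not shown)
print(image)
```

The key design point is the dispatch table: hosting multiple vendor-specific compilers behind one service interface is what hides chip and compiler heterogeneity from the developer.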

Usually, embedded developers and ML developers have different and often complementary development skills. This means that introducing ML into the embedded world can represent a challenge for embedded developers. However, with the use of TinyMLaaS, embedded developers can easily introduce ML capabilities onto their devices and, vice versa, ML developers can also target constrained IoT devices when designing their algorithms and models. Looking at the high-level picture, TinyMLaaS can potentially enable any service providers to start their AI business with devices more easily. To learn more about the TinyMLaaS approach and the impact it can generate, please refer to our Ericsson blog article [3].

Figure 5: TinyML as a Service overview.


The TinyML community has rapidly evolved during the last year. TinyMLaaS aims to build an ecosystem around TinyML. Other ecosystem players, like chip vendors, compiler companies, service providers, etc. have an opportunity to both influence and accelerate the development of such an ecosystem. Here at Ericsson, we very much encourage and invite this level of cross-industry collaboration to make ML at the deepest IoT Edge possible. Hiroshi Doyu is presenting a talk about TinyML as-a-Service at Linaro Tech Days 2020 (live stream) on 24th March. Please watch if you are interested, or contact him on LinkedIn.


  1. H. Doyu, R. Morabito. “TinyML as-a-Service: What is it and what does it mean for the IoT Edge?” [Online]. Available:
  2. H. Doyu, R. Morabito. “TinyML as a Service and the challenges of machine learning at the edge.” [Online]. Available:
  3. H. Doyu, R. Morabito. “How can we democratize machine learning on IoT devices?” [Online]. Available:
  4. C. Bormann, M. Ersue, A. Keranen. “Terminology for Constrained-Node Networks”. Internet Requests for Comments (RFC), No. 7228, 2014.
  5. P. Warden, D. Situnayake. “TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers”. O'Reilly Media, 2019.
  6. tinyML Summit 2020. [Online]. Available:
  7. tinyML - Enabling ultra-low Power ML at the Edge. [Online]. Available:
  8. Open Mobile Alliance (OMA). “LightweightM2M Technical Specification v1.0”. [Online]. Available:



Hiroshi Doyu is part of the Ericsson Research IoT technologies team. He has spent more than 20 years in product development and has contributed to the upstream Linux kernel development for more than a decade, including Nvidia Tegra SoC. He received his M.Sc. in Aerospace engineering from the Osaka Prefecture University, Japan. Hiroshi is passionate about technology but also loves to play floorball and ice hockey.


Roberto Morabito is an Experienced Researcher in IoT Technologies at Ericsson Research, and a Research Associate at Princeton University, USA. His research interests include the design and development of research projects in the context of the Internet of Things, Edge Computing, and Distributed Artificial Intelligence. Roberto holds a Ph.D. in Networking Technology from Aalto University, Finland.


Jan Höller is a Research Fellow at Ericsson Research, where he is responsible for defining and driving the IoT technology and research strategies and for contributing to the corresponding corporate strategies. He established Ericsson’s research activities on IoT over a decade ago. Jan is a co-author of the book "Internet of Things: Technologies and Applications for a New Age of Intelligence" that was recently released in its 2nd edition. He has held various positions in Strategic Product Management and Technology Management and has, since he joined Ericsson Research in 1999, led different research activities and research groups. For several years he served on the Board of Directors of the IPSO Alliance, the first IoT alliance, formed back in 2008. Jan currently serves on the Board of Directors of the Open Mobile Alliance and chairs the Networking Task Group in the Industrial Internet Consortium. He is a frequent speaker at industrial and academic conferences and events.




For a Multi-Stakeholder Discussion on 5G in Agriculture and Rural Area Development

Saverio Romeo
March 11, 2020


In agriculture, 5G is seen as the enabler for empowering the smart farming vision, taking it towards autonomous and predictive farming. That view, very technology-centric, has overshadowed other considerations on agriculture and rural areas, creating a foggy view of 5G among rural and agricultural communities. This article explores this initial journey of 5G in agriculture and rural areas and highlights future directions.

A Brief Overview of Smart Farming and Smart Rural Areas

The common belief sees agriculture as an old sector, not highly technological and based on traditional practices. The reality is quite the opposite. Agriculture has embraced digital technologies in a profound way, proving exceptional innovation capabilities. The term “precision agriculture” was embraced by the agricultural sector a long time ago, when terms like “ubiquitous computing”, “pervasive computing” and “Internet of Things” were ideas of technological pioneers and visionaries. Agricultural machinery manufacturers such as John Deere, CNH-Global, CLAAS, AGCO, and others have been working on precision agriculture for some time. Their combines and agricultural vehicles have used positioning technologies and sensors to gather data about the fields and the crops. They move the data to information management systems, also known as farm management information systems, in order to optimize agricultural operations. This era of “precision agriculture” was based on M2M connections, quite often over satellite technology. As technologies have evolved and the Internet of Things has become more mainstream, agriculture has embraced the IoT vision, driving the rise of “smart farming” or “smart agriculture”. In small-sized fields, such as vineyards, mesh networks of sensors monitor the grapes and send data to farm management information systems, and actions are taken to optimize production and increase quality. The same is happening for livestock and for fish farming. Farming, in all its sizes and forms, is embracing the IoT vision in which the farm is a sensed space, where data is used – at the edge or in the cloud – for optimizing farming processes and creating new services.

The smart agriculture vision embraces all the phases of the farming value chain. In “A Case for Rural Broadband” published in April 2019, the United States Department of Agriculture offers a useful framework on how to look at farming activities and the impact of smart farming vision. The report highlights three main phases:

  • Planning. Using data for decision support in order to make better decisions about what to produce, when and how.
  • Production. Monitoring the farming cycle and optimizing, accordingly, the entire process.
  • Market Coordination. Creating access to new customers and market channels, through the understanding of customers’ preferences and tendencies.

The common element of the three phases is the data. Data is gathered from different sources: sensor-based data, external data (for example, weather data), and IT system data. The data is then analyzed, and various decisions are taken. As shown in Figure 1, smart farming is not a one-off project, but a cyclic process.

Figure 1: The lifecycle of smart farming.

The objective is then to enhance the understanding of the farming process through data in order to lower costs by reducing inefficiencies and risks. In turn, that means better margins and better capacity to meet market and customer needs.

Several sources, such as the EU-funded Smart AKIS Smart Farming Thematic Network project, can show the positive impact of smart farming, but it is very difficult to quantify the overall impact of smart farming on the agricultural sector. Some studies have designed models for describing that. One of those has been done by the US Department of Agriculture, as shown in Figure 2.

Figure 2: Potential Annual Gross Benefit of Smart Farming1.

Figure 2 shows an extract of the analysis. The analysis looks at different types of farming activities and different smart farming technologies used, and quantifies the impact in revenue for the US agricultural sector.

The benefits are potentially very lucrative and impactful, but smart farming does not come without challenges. Some of those are technological, deriving directly from the Internet of Things as the conceptual root of smart farming. There are also important business and market challenges.

The main technological challenges are:

  1. Smart farming, like all IoT solutions, is an integration of components: devices, connectivity forms, IoT platforms, and farm management software. Selecting the right components and integrating them is not an easy exercise.
  2. Technology incompatibility exacerbates the previous point. Compatibility between the hardware and software of different suppliers of sensors, data, and implementations is not always there.
  3. Securing smart farming solutions is essential to ensure data security and protect the entire solution. There are several guidelines and best practices that can help in pursuing that, but it requires the necessary skill sets and collaboration.
  4. The lack of wireless and wired connectivity in rural areas is a strong impediment for designing and deploying a smart farming solution.

The main non-technological challenges are:

  1. For most farmers, the investment in a smart farming solution is not affordable. The margins are too low to spend resources on innovation. The problem is exacerbated when the return on investment of smart farming projects is difficult to prove.
  2. There is a shortage of workforce and skills in agriculture. Agriculture does not strongly attract the younger generations. That delays the adoption of smart farming solutions.
  3. The debate on climate change is calling the agricultural sector to embrace sustainable ways of production. The response is not easy and immediate.

Addressing these challenges requires different actions and different tools. Emerging technologies are examples of those tools, but also an additional element of disruption. It is also important to highlight that those challenges can be better addressed if the convergence of technologies is considered. The convergence of the IoT with AI, blockchain, and 5G can solve some of those challenges and bring the smart farming vision to a different level of sophistication, towards automated and predictive agriculture. 5G can play an important role, as the next section explores.

How 5G Can Support the Smart Farming Vision

Solving the “Rural Connectivity” problem: the lack of reliable connectivity in rural areas has been a historical hurdle for the development of telecommunications in rural areas and the use of digital technologies in agriculture. The issue of the “digital divide”, an unbalanced distribution of broadband connectivity between urban areas and peripheral areas, has characterized telecommunications policy in the new century. But the political objective of “broadband for all” has not been fulfilled yet. According to the Rural, Mountainous, Remote Areas and Smart Village EU Parliament Intergroup, 25% of the EU rural population does not have access to the Internet. Yet broadband connectivity is the building block of smart rural areas and smart farming. Continuing the analysis proposed in Figure 2, the US Department of Agriculture also argued that those benefits are possible only if broadband is more widely available in the rural US. Figure 3 shows the benefits due to the presence of rural connectivity.

Figure 3: Potential Annual Gross Benefit of Smart Farming and the contribution of broadband1.

Unlike the move from 2G to 3G, which was completely city-centric, the move from 4G to 5G can be designed more uniformly, giving the same priority to cities and rural areas. That can solve the “rural connectivity” problem without recreating the digital divide between urban and rural areas. That would be a profound incentive for the economic development of rural areas, the adoption of smart farming practices, and driving innovation in agriculture.

Moving the Smart Farming Vision towards Automation and Prediction: 5G can not only contribute to solving the “rural connectivity” problem, but can also enable a variety of applications in smart farming, from those which require a small amount of data to the data-rich ones. That is because 5G should not be considered simply as cellular ultra-broadband technology, but as a cellular connectivity framework, as shown in Figure 4.

Figure 4: The 5G Technology Framework.

The two areas of Massive MTC (Machine-Type Communication) and the tactile Internet are particularly relevant for smart farming. Massive MTC can serve all those low-data, low-power, long-battery-life applications such as specialty crop monitoring, precision livestock monitoring, irrigation system monitoring, and similar. Those applications are currently served either via 2G (even if 2G is slowly sunsetting), LPWAN (Low Power Wide Area Network) solutions, or other forms of mesh networks. The tactile Internet, instead, will enable self-driving agricultural vehicles and various forms of robotics, from drones to strawberry-picking robots. The Enhanced Mobile Broadband area will also be relevant, particularly for video analytics on crops and livestock, but also for market coordination applications.

It is important to highlight that 5G is an important enabler of smart farming applications, but it is not the only one. The role of AI is important for the automation of processes and for the adoption of a predictive approach to farming. Those require the convergence of IoT, 5G, and AI, while the use of blockchain and DLT (Distributed Ledger Technologies) can enable quality tracking from the farm to the table, creating new approaches to the market coordination phase described in the first part of this article.

Driving innovation in rural areas and agriculture: in this intersection of emerging technologies (IoT, AI, 5G, DLT, and others), there is also a promising growth of innovative companies trying to bring the benefits of those technologies to farmers. In India, Trringo offers rental models and support services for farming equipment using the IoT. In France, Karnott offers software and hardware solutions to transform legacy agricultural systems into smart farming systems. In Italy, Agricolus and Agri Open Data offer farm management system solutions that bring together smart farming data, AI, and blockchain. In the USA, Iron Ox offers plant-growing solutions completely based on robots, and Taranis offers a platform that combines aerial and satellite imagery with AI tools for optimizing the farm management system. In the UK, Hectare Agritech has developed a blockchain-based farm trading platform, and Hands-Free Hectare is testing automated machines growing crops autonomously using 5G.

Conclusions: The Need for a Multi-Stakeholder 5G Rural and Agriculture Strategy

This article has highlighted three important contributions that 5G can bring to smart rural areas and smart farming.

  1. The planning and deployment of 5G networks is an opportunity for solving the “rural connectivity” problem. 5G cannot solve it entirely, but giving 5G a prominent role within a combination of connectivity forms (satellite, fixed, PLC, LPWAN, other wireless forms) for rural areas could be the answer to more than 20 years of lacking connectivity in rural areas.
  2. 5G should be a technology framework enabler for smart farming solutions. In convergence with other emerging technologies (AI, DLT, and others), 5G can bring smart farming to the era of autonomy and predictive and prescriptive maintenance.
  3. 5G can become the innovation enabler in agriculture because it provides fundamental building blocks for exploring applications of emerging technologies in farming. That means driving the establishment of a new flow of agri-tech start-ups, but also expanding digital culture in rural areas.

However, this cannot be done through a top-down approach to 5G deployments. Among rural communities, there are doubts about the sustainability of 5G and its impact on the environment and the health of the communities at large. Additionally, the investment required for 5G deployment can be misread if it does not come with strong engagement with rural community stakeholders, who expect support and investment for their farming activities. An exogenous push will not reach the results discussed and will not gain the collaboration of the rural communities. Operators in charge of the 5G rollouts should not think of rural areas as urban areas. If in urban areas the effect of 5G can be more evident to citizens, in rural areas this is not necessarily the case. The deployment of 5G should be done in continuous collaboration with an informed rural community. Discussing 5G with communities is essential for bringing clarity to the value of 5G and to its impact on the environment and health.

5G for rural areas is a fascinating and complicated issue. It can have enormous benefits for rural communities and the agricultural sector. But that can fully happen only if the rural communities are informed and strongly involved in the process. 5G deployment in rural areas is not an easy exercise, and it should not be treated in the same way as in cities. The smart farming industry, national and regional governments, and communities need to come together to design a 5G Rural and Agriculture Strategy that can fully capture the benefits of 5G and its convergence with other emerging technologies.




Saverio Romeo is an associate lecturer at Birkbeck College on emerging technologies (IoT, blockchain, and AI) and their impact on innovation and policy. He runs modules for postgraduates and undergraduates on emerging technologies and contributes to research activities on the impact of technologies in business and society. He also runs the Emerging Technology Observatory (ETO), a consultancy outlet working with different organizations (XSure, STL Partners, Augmented Reality Enterprise Alliance, IntentHQ, WoW, Technopolis Group, CSIL Milano, VAA, Club Demeter, IoT Analytics and IoTNow) on the use of emerging technologies such as blockchain, AI, immersive technologies, 5G and IoT. He was also Lead Expert for the EU Digital Cities Challenge Project, supporting the city of L’Aquila in defining its digital transformation strategy.




Potable Water Management with Integrated Fog Computing and LoRaWAN Technologies

Hamidreza Arkian, Dimitrios Giouroukis, Paulo Souza Junior, and Guillaume Pierre
March 11, 2020


Potable water is a precious resource in many regions of the world, and particularly in semi-arid areas such as the south of Spain [1]. Over the previous decade, the EMIVASA company, which is in charge of water management in the city of València (Spain), has been active at the forefront of IoT technologies [2].

EMIVASA has deployed more than 420,000 smart water meters across households, which periodically report their readings through wireless networking technologies. The data are transferred to a data center and processed in various ways, such as detecting water leaks or other incidents leading to abnormal water usage, which in turn brings considerable efficiency improvements to the management of precious water resources. However, this system relies on 15-year-old technologies. The wireless networking infrastructure carries only a limited number of meter readings per day, which delays any incident detection. Different vendors impose the usage of different proprietary and mutually incompatible standards. Finally, deploying new functionality is not trivial. It is time to transition to more modern and open technologies.

The FogGuru European project aims to develop innovative Fog computing technologies and to train the next generation of European Fog computing experts. For us, this constitutes an excellent opportunity to create a “Living Lab” with our partner Las Naves in València, where we experiment with Fog computing technologies in real settings and demonstrate their usefulness in addressing real-world problems [3]. Fog computing extends cloud computing deployments with additional compute, storage and networking resources located in the proximity of users and IoT devices [4]. Processing incoming data very close to their source allows one to reduce the pressure on the backhaul networking infrastructure, and may even allow disconnected operation in case of a failure of the device-to-cloud connection [5]. In the specific case of water management, we expect Fog computing technologies to bring faster detection of abnormal water consumption patterns, independence from the smart water meter manufacturers, and improved resilience and flexibility of the overall smart metering platform.

Integrating LoRaWAN with Fog Computing Technologies

We base our proposed solution on the LoRa low-power wide-area network technology. LoRa supports wireless communications over very long distances (10 km+), yet at the expense of an extremely limited available bandwidth per node. This fits the requirements of our use case because each water meter reports only dozens of bytes of data at a modest frequency, such as once every hour. On the other hand, the long range implies that we can cover an entire city with a small number of gateway nodes.
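To give a feel for how small such uplinks can be, here is a sketch assuming a hypothetical frame layout (the project's actual payload format is not specified here): a 4-byte meter ID, a 4-byte cumulative reading in litres, and a 1-byte status flag, i.e. nine bytes per reading.

```python
import struct

# Big-endian frame: uint32 meter_id, uint32 cumulative litres, uint8 status.
FRAME_FMT = ">IIB"

def encode_reading(meter_id, litres, status=0):
    """Pack one meter reading into a compact LoRa-sized payload."""
    return struct.pack(FRAME_FMT, meter_id, litres, status)

def decode_reading(payload):
    """Unpack a payload back into its fields, e.g. on the Fog server."""
    meter_id, litres, status = struct.unpack(FRAME_FMT, payload)
    return {"meter_id": meter_id, "litres": litres, "status": status}

frame = encode_reading(meter_id=420001, litres=183456)
assert len(frame) == 9  # well within LoRa's small per-message budget
print(decode_reading(frame))
```

Even at one such frame per hour per meter, the aggregate traffic from hundreds of thousands of meters remains modest, which is what makes city-wide coverage with few gateways plausible.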

Figure 1: Two potential system architectures. (a) Classical cloud-based architecture; (b) proposed Fog-based architecture.

LoRa technology addresses only low-level device-to-gateway communication. To design an entire infrastructure with multiple gateways, the LoRaWAN standard [6] defines additional elements depicted in Figure 1a. The network server is in charge of detecting and removing duplicate messages received by multiple gateways from the same IoT device. The application server(s) are in charge of processing incoming data. The recent LoRaWAN v1.1 specification introduces additional components for device authentication and registration.

However, as useful and sensible as it is, this architecture addresses our requirements only partially. In particular, the reliance on a centralized network server implies that the messages originating from the water meters must traverse long network links before being processed in a single location. Our goal, on the contrary, is to process incoming messages as close as possible to the water meters, in the same location as the gateways that have received them.

<p">As shown in Figure 1b, we propose to equip each LoRa gateway node with its dedicated network server. This allows the introduction of Fog computing servers in the same location, where incoming messages are processed immediately after having been received. The architecture still contains backend servers located in the cloud, where a final aggregation of pre-processed results by different Fog servers takes place, and where the operators control and visualize the status of the system. Figure2 depicts parts of the experimental testbed.

Figure 2: The experimental testbed.


Building a prototype of an integrated Fog+LoRaWAN platform forces us to address some difficult research challenges.

LoRa vendor independence: every LoRa gateway vendor integrates additional software, such as a network server and an application server, inside their gateways. However, these components often turn out to be proprietary and incompatible with one another. Since vendor independence is an important goal for this project, we prefer to rely only on the simplest possible functionality of a LoRa gateway (namely packet forwarding between two networking technologies) and to use open-source software such as the ChirpStack network server [7], running in the Fog servers, for all the other system components.

Packet deduplication: one obvious drawback of a system architecture with multiple network servers is that we cannot rely anymore on a single centralized network server to detect and filter duplicate messages received by multiple gateways. We are developing innovative techniques to allow multiple network servers to collaborate to realize the same function in a distributed way.
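To make the deduplication problem concrete, here is a minimal single-process sketch of the idea. A LoRaWAN uplink can be identified by its device address and frame counter, and a network server forwards a message only if that key has not been recorded yet. The device address and the shared set are illustrative; the hard part, which this sketch deliberately abstracts away, is precisely how the collaborating network servers replicate the "seen" state among themselves.

```python
class DedupFilter:
    """Drop LoRaWAN uplinks already processed by another network server.

    `seen` stands in for whatever shared or replicated store the
    collaborating network servers agree on; here it is a plain set.
    """
    def __init__(self, seen=None):
        self.seen = seen if seen is not None else set()

    def accept(self, dev_addr, fcnt):
        key = (dev_addr, fcnt)      # device address + frame counter
        if key in self.seen:
            return False            # duplicate: another gateway got it first
        self.seen.add(key)
        return True

shared = set()                      # stand-in for a replicated store
gw_a, gw_b = DedupFilter(shared), DedupFilter(shared)
gw_a.accept("26011F5A", 42)         # first copy is processed
gw_b.accept("26011F5A", 42)         # duplicate copy is dropped
```

In a real distributed deployment the shared set would be replaced by the collaborative filtering protocol between network servers that we are developing.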

System configuration: deploying multiple nodes in different locations immediately creates practical difficulties, even at a modest scale, for maintaining the correct software versions and configurations across the whole system. We are thus designing mechanisms to centralize this information and distribute it on demand to the respective nodes.

Application development: the data produced by the smart water meters can be seen as a set of unbounded data streams. We expect the same property to hold in numerous other IoT scenarios. We therefore plan to use data stream processing engines in the Fog nodes as the standard application development middleware in our system. We expect to use Node-RED for simple scenarios, and Apache Flink for more complex ones.
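To illustrate the kind of computation such a stream processing engine would express, here is a minimal Python sketch of a sliding-window anomaly check on a meter stream: flag a reading that exceeds the window mean by some margin. The window size, threshold and field names are illustrative assumptions, not the actual FogGuru application logic.

```python
from collections import defaultdict, deque

WINDOW = 24        # hourly readings kept per meter (one day)
THRESHOLD = 3.0    # liters/hour above the window mean considered abnormal

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def on_reading(meter_id, liters_per_hour):
    """Process one reading; return True if consumption looks abnormal."""
    w = windows[meter_id]
    abnormal = (len(w) == WINDOW and
                liters_per_hour > sum(w) / len(w) + THRESHOLD)
    w.append(liters_per_hour)   # slide the window forward
    return abnormal
```

In Node-RED or Flink the same per-key windowed aggregation would be expressed declaratively, with the engine handling state, parallelism and fault tolerance.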

Application deployment: we believe that a Fog computing platform should not be seen as a static environment dedicated to a single application. Instead, we envisage that many independent applications will be developed to process the incoming data in a variety of ways and that such applications will be frequently deployed, re-deployed and eventually retracted from part or all of the Fog platform. To facilitate application deployment, we rely on a modified version of Kubernetes designed for Fog computing scenarios [8].


The development of our experimental Fog+LoRaWAN platform is underway, and we expect to be able to start experimenting in real settings in the coming weeks. If everything goes according to plan, we will have demonstrated how Fog computing technologies may bring tangible benefits to the citizens of València thanks to improved management of their precious water resources.


This work is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 765452. The information and views set out in this publication are those of the author(s) and do not necessarily reflect the official opinion of the European Union. Neither the European Union institutions and bodies nor any person acting on their behalf may be held responsible for the use which may be made of the information contained therein.


  1. UNESCO Intangible Cultural Heritage, “Irrigators’ tribunals of the Spanish Mediterranean coast: The Council of Wise Men of the plain of Murcia and the Water Tribunal of the plain of Valencia,” 2009.
  2. SmartH2O European Project, “Validation report,” Deliverable F7.2, Jul. 2016.
  3. FogGuru Project, “The FogGuru Living Lab for improved water management,” YouTube video, Feb. 2020.
  4. IEEE Standards Association, “IEEE standard for adoption of OpenFog reference architecture for fog computing,” 2018.
  5. S. Noghabi, L. Cox, S. Agarwal, and G. Ananthanarayanan, “The emerging landscape of edge-computing,” ACM SIGMOBILE GetMobile, Mar. 2020.
  6. LoRa Alliance, “About LoRaWAN®.”
  7. ChirpStack, “Open-source LoRaWAN® Network Server stack.”
  8. A. Fahs and G. Pierre, “Proximity-aware traffic routing in distributed fog computing platforms,” in Proc. CCGrid, May 2019.


Hamidreza Arkian is a Ph.D. student at the University of Rennes 1 (France) and a student member of IEEE. His research interests are data stream processing, performance modeling, and elasticity in geo-distributed systems such as Fog computing.


Dimitrios Giouroukis is a Ph.D. student at TU Berlin (Germany), working in the DIMA research team. His research interest lies at the intersection of stream processing, resource management, sensor data analysis, and Fog computing.


Paulo Souza Jr is a Ph.D. student at the University of Rennes 1 (France) and a student member of IEEE. His research interests are Fog computing, resource management, and data stream processing.


Guillaume Pierre is a Professor at the University of Rennes 1 (France). His main interests are Fog computing, Cloud computing, and all other forms of large-scale distributed systems. He is the coordinator of the H2020 FogGuru Marie Skłodowska-Curie project, the leader of the Myriads research team at Inria/IRISA, the academic coordinator of the international “cloud and networking infrastructures” master program at the University of Rennes 1, and a member of IEEE. He can be contacted at



Antennas for Compact IoT Devices: Challenges and Perspectives

Leonardo Lizzi and Fabien Ferrero
March 11, 2020


The Internet of Things (IoT) has raised our expectations of benefiting from a smart and cognitive world in the near future. We eagerly wait to live in green smart cities, work in efficient smart buildings, and come back at night to our sweet smart homes, driven there by a high-tech autonomous car. Whether this becomes a reality depends mainly on our capacity to build a reliable massive IoT infrastructure in which billions of objects are connected to the internet.

One of the key issues is the wireless connectivity of such objects. On the one hand, we want to connect smaller and smaller objects, such as implantable e-health devices, key-chain trackers and miniaturized environmental sensors. On the other hand, most of the wireless standards used for IoT work in sub-GHz frequency bands (e.g., 868 MHz) where long-distance propagation and building penetration are much less challenging. These two aspects make the design of antennas for IoT devices extremely challenging.

The laws of physics tell us that a “good” antenna must have dimensions comparable to the wavelength of the signal it transmits. A classical quarter-wave monopole antenna is therefore almost 9 cm long at 868 MHz, which clearly does not allow the integration level required by modern IoT applications and devices.
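The 9 cm figure follows directly from the wavelength, as this quick calculation shows:

```python
c = 299_792_458            # speed of light, m/s
f = 868e6                  # carrier frequency, Hz
wavelength = c / f         # ≈ 0.345 m at 868 MHz
monopole = wavelength / 4  # quarter-wave monopole ≈ 8.6 cm
```

By contrast, at 2.4 GHz (Wi-Fi, Bluetooth) the same calculation gives a quarter-wave length of about 3.1 cm, which is why sub-GHz antennas are the harder miniaturization problem.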

Designing antennas for the IoT therefore means designing electrically small antennas. What does this mean?

As the antenna gets smaller, its performance suffers from two main effects. First, the antenna becomes less efficient. The efficiency of an antenna is defined as the ratio between the power it radiates and the power injected into it. When the antenna is small, most of the electromagnetic energy gets trapped in the antenna structure instead of being radiated into the far field. Most importantly, the efficiency reduction is not linear with the antenna size. If the antenna becomes too small, the efficiency suddenly falls to levels that turn the antenna into a heating resistor, making it totally useless. Furthermore, IoT devices are very cost-sensitive, and antennas for IoT must be produced using cheap materials. Many of them are directly printed on dielectric substrates suitable for circuit board mass-production, such as epoxy FR4. The higher losses introduced by these materials further worsen the efficiency drop.

The second effect of miniaturization is the reduction of the antenna operating bandwidth. The latter is defined as the frequency range in which the antenna is well matched, i.e. where there is a good transfer of energy from the source to the antenna. For compact-terminal antennas, usually at least 75% of the energy is required to be transferred to the antenna (or, equivalently, at most 25% mismatch loss is accepted). As the antenna gets smaller, the operating bandwidth becomes narrower. Given the small amount of data that IoT devices need to transmit, this is not, in and of itself, a problem. The problem lies in the consequent increase of the antenna's sensitivity to its close environment.
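The 75% criterion can be restated in terms of the reflection coefficient |S11| that antenna engineers actually measure: 25% reflected power corresponds to |S11| ≈ −6 dB. A small conversion sketch (the −6 dB and −10 dB values are the common rules of thumb for compact and larger antennas, respectively):

```python
def transferred_fraction(s11_db):
    """Fraction of source power accepted by the antenna for a given |S11| in dB."""
    reflected = 10 ** (s11_db / 10)   # reflected power fraction |Gamma|^2
    return 1 - reflected

transferred_fraction(-6)    # ≈ 0.75: the 75% criterion corresponds to |S11| ≤ -6 dB
transferred_fraction(-10)   # ≈ 0.90: the stricter criterion often used for larger antennas
```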

In general, the radiation behavior of an antenna is always perturbed by anything entering its near field. In particular, what happens is that the antenna resonance shifts in frequency. This is usually not a problem, as the antenna bandwidth is larger than the bandwidth of the signal to be transmitted. Imagine the antenna as a target and the signal as an arrow. If the target is large, even if you move it a little while the arrow is flying, the arrow will still hit the target, though probably not at the center. Unfortunately, this does not apply to IoT antennas which, because of miniaturization, exhibit narrow bandwidths, sometimes of a few MHz. A small shift in the antenna resonance would make the antenna no longer capable of transmitting the desired signal. It is like suddenly playing with a very small target.

OK, so designing small antennas for compact IoT devices is a challenging task. Then what? Here are some perspectives:

  1. Miniaturization is important only if it comes with efficiency. Antenna efficiency is the key parameter to care about. It is far more important than impedance matching (which is the first performance criterion for larger antennas). An antenna can always be matched to 50 Ohms whatever its dimensions, but that does not mean it will radiate. A 50 Ohm resistor has a perfect match (over a huge bandwidth), but you do not want to use it as an antenna.
  2. The device and the application must be carefully considered from the beginning of the antenna design phase. The more details you know about the device (e.g., shape and material of the casing, presence of other electronic components, position of the battery, etc.) or the application (e.g., will the device be placed on the body?), the more likely it is that the antenna will efficiently work when integrated into the device.
  3. Measurements in real operating conditions are required. If the antenna is to be integrated into a compact IoT device, directly connected to the RF transceiver, it makes no sense to measure it using bulky vector network analyzer cables. New measurement procedures must be developed.
  4. The antenna must be designed in such a way that its resonant frequency can be re-tuned once deployed. This can be done by introducing reconfigurable solutions or simply by designing antenna geometries that can be easily modified to increase or decrease their electrical length.

The unique characteristics of IoT devices and applications can give a hard time to antenna researchers and engineers. However, they also oblige them to totally rethink the antenna design problem, opening the way to innovative and disruptive solutions.

Figure 1: 868 MHz miniature antenna, shaped as the Université Côte d’Azur acronym “UCA” and integrated in compact IoT terminals1.

1 For more info:



Leonardo Lizzi is currently an Associate Professor at the Université Côte d’Azur, France. He received a master’s degree in Telecommunication Engineering and the Ph.D. degree in Information and Communication Technology from the University of Trento, Italy, in 2007 and 2011, respectively. At the moment, his research focuses on reconfigurable, miniature, multi-standard antennas for Internet of Things applications, wearable devices, and 5G terminals. He is the coordinator of the European School of Antennas (ESoA) Ph.D. course on “Antennas and Rectennas for IoT Applications”. He is co-author of more than 110 papers in international journals and conference proceedings.

Fabien Ferrero was born in Nice, France, in 1980. Since 2018, he has been a Professor with the Laboratory of Electronics, Antennas and Telecommunications at Université Côte d’Azur. His research concerns the design and measurement of millimeter-wave antennas, IoT systems, and reconfigurable antennas. Since 2013, he has been co-director of the CREMANT, a joint lab between UCA, CNRS, and Orange.