Challenges Monitoring the Arctic Tundra: The Distributed Arctic Observatory Cyber-Physical System
Climate change is degrading terrestrial biomes, and the arctic tundra is the most affected. The projected warming is so extreme that tundra ecosystems will likely transform into novel states within a few decades, potentially losing important biodiversity and functions that provide irreplaceable services to humanity.
In-situ scientific observatories [4, 3, 2, 7, 6, 1] are located in the environment they measure and observe. The collected data is made available to researchers and pre-processed into data suitable as input to models of the environment. The models are then used to explore and understand the dynamics of the environment.
The data collected increasingly exhibit Big Data characteristics. How much observational data can be gathered, processed, and reported depends on the availability of critical resources, including energy, data networks, computing, and storage.
Typical IoT devices are fairly energy efficient but have small batteries, few and simple sensors, and very limited network, computing, and storage resources.
For some environments, these resources are either readily available or not needed for the necessary observations. We are, however, interested in observing environments where these resources are needed but very limited. The challenges become even more demanding because of the emerging Big Data characteristics of the data collected by the deployed IoT systems.
One approach to building a scientific observatory is to use a cyber-physical system (CPS) composed of multiple devices with a set of sensors and actuators. The CPS is deployed into an environment and used to observe its flora, fauna, atmospheric conditions, and many other ecological parameters. Increasing the number of CPS devices lets increasingly larger areas of the environment be observed and manipulated. Increasing the resources of the devices enables more sophisticated functionality: more and better measurements, more robust reporting of data, and more flexible and adaptable functionalities and experiments.
To increase the benefit from the devices, they are organized into an infrastructure forming a distributed system. The Distributed Arctic Observatory (DAO) is an ongoing project at the University of Tromsø (UiT), the Arctic University of Norway. The project develops a CPS of devices for the arctic tundra. The DAO system observes the tundra and reports the observations, in some cases with a low delay between a measurement being taken and it becoming available elsewhere.
The DAO-CPS faces an ecosystem with several characteristics that are hard to deal with and that directly impact how the CPS is built, deployed, and used (see A, B, and C below).
A: Unique Harsh Weather
Operating on the arctic tundra is challenging and demanding for both humans and machines. Precipitation is low. The summers are windy, cold, and wet; the winters are windy, cold, and snowy. For long periods (weeks to months) the sun is either always above the horizon or never rises. There are no proper trees on the arctic tundra. In some areas, dangerous animals like polar bears roam.
For a CPS deployed on the arctic tundra (i.e., the DAO-CPS), these characteristics present several challenges:
- Because of the low temperatures, (i) node hardware must tolerate averages below -30 °C; (ii) batteries have lower capacities and are harder to recharge.
- Because rain combines with frozen ground that does not drain water away, nodes must be placed inside waterproof enclosures.
- Because of the snow and the lack of trees, (i) nodes will be on the ground, covered by several meters of snow; in some cases sensors can be mounted on poles, but installations are often not allowed in interesting areas; (ii) already borderline radio signal strength and quality are further reduced under snow; (iii) energy harvesting is harder to do under the snow.
- Because of large wild animals, heavy winds, melting snow, and breaking ice, nodes will be disturbed, moved around, or damaged.
As a result of the above conditions, nodes will experience unexpected failures after deployment, even if they have been tested in the laboratory.
Because of the general conditions on the arctic tundra, it is in many cases impractical and costly for humans to go there to deploy nodes and to maintain them with energy refills and repairs. One actual deployment we are participating in allows a human visit only once a year. Most scenarios will be even more restrictive.
Consequently, to operate within these limitations, nodes must be robust; small, lightweight, and practical to carry and deploy; energy-efficient, relying on small batteries while reaching an operational lifetime of at least 12 months; equipped with multiple data network options and multiple small antennas mounted on the outside or inside of the waterproof enclosure; able to establish their own location (GPS); and fitted with sensors that tolerate water and low temperatures. The ideal node should also have actuators allowing reorientation of the sensors in case the node itself is physically reoriented after deployment.
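The combination of small batteries and a 12-month unattended lifetime makes an explicit energy budget essential. A minimal sketch of the arithmetic, where all numbers (battery size, currents, cold-weather derating) are illustrative assumptions and not values from the project:

```python
def max_avg_current_ma(battery_mah: float, lifetime_days: float,
                       derating: float = 0.5) -> float:
    """Largest sustainable average current (mA) for the target lifetime.

    `derating` models capacity lost to low temperatures and
    self-discharge (assumed value, not from the project).
    """
    usable_mah = battery_mah * derating
    return usable_mah / (lifetime_days * 24.0)


def awake_fraction(active_ma: float, sleep_ma: float, budget_ma: float) -> float:
    """Fraction of time the node may be awake under the current budget."""
    if budget_ma <= sleep_ma:
        return 0.0  # even sleeping exceeds the budget
    return min(1.0, (budget_ma - sleep_ma) / (active_ma - sleep_ma))
```

With an assumed 3000 mAh battery over 365 days, the budget comes out to roughly 0.17 mA on average; at an assumed 80 mA awake and 0.05 mA asleep, the node can be active only about 0.15% of the time. This kind of arithmetic is why nodes must be asleep almost always.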
The functionality provided by software must, in addition to the primary observation functionality, support failure monitoring, the establishment of connectivity between nodes and a back-haul network, and updates over the air.
B: Very Large
The arctic tundra is very large. It surrounds the North Pole, spanning the northernmost parts of Norway, Greenland, Canada, and Siberia.
Much less than 1% of it is observed through ground-based installations; Big Data from the arctic tundra is therefore lacking. The COAT program (coat.no) was created to, among other tasks, collect Big Data about the arctic tundra. The data is produced in several ways, including in-situ observations of the state of the flora, fauna, weather, and other conditions on the tundra.
However, present approaches rely on manual deployment and on humans going to the arctic tundra to fetch data. While this approach is needed for installations demanding careful installation and selection of where to locate measuring instruments, it does not scale to observing larger areas of the arctic tundra. In-situ observation of large areas requires many more nodes to be deployed. The deployment methods must enable efficient and rapid installation of the nodes, including drones and aircraft dropping the nodes onto the tundra.
When the number of nodes and the data collected by each node increase, a large volume of diverse data is produced. Individual nodes, and the CPS as a distributed system, must handle several challenges at scale.
Nodes must decide what to do with the data they collect. Data may need edge pre-processing, including analytics and compression. Data must also be kept safe and reported at a decided frequency to wherever it is needed. Reporting implies that nodes must establish connectivity with a back-end receiving the data. Alternatively, nodes must connect with each other to safe-keep data and possibly find a route to a node with connectivity to the back-end system.
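Whether edge compression pays off can be framed as a simple energy trade, in the spirit of [8]: spend CPU energy shrinking the payload only if the radio energy saved exceeds it. A sketch, where all per-byte costs and ratios are assumed, illustrative numbers:

```python
def should_compress(raw_bytes: int, compressed_ratio: float,
                    tx_uj_per_byte: float, cpu_uj_per_byte: float) -> bool:
    """Return True if compressing before transmission saves energy.

    `compressed_ratio` is compressed size / raw size; the per-byte
    energy costs (microjoules) are deployment-specific assumptions.
    """
    radio_saved_uj = raw_bytes * (1.0 - compressed_ratio) * tx_uj_per_byte
    cpu_cost_uj = raw_bytes * cpu_uj_per_byte
    return cpu_cost_uj < radio_saved_uj
```

On an expensive link (e.g. satellite) compression almost always wins; on a cheap local link, or for data that barely compresses, the CPU cost can dominate and the raw data should be sent as-is.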
Consequently, the need arises to probe for networks and to build a history of when they are available and with what characteristics. Edge processing is used to determine which network to use and when to use it. Nodes running out of storage, because data accumulates faster than it can be reported, must decide what to do with it, including whether data can be deleted and which data to delete.
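The probing-and-history idea can be sketched as a small bookkeeping structure that records probe outcomes per interface and prefers the interface with the best observed availability. The interface names and the selection rule are illustrative assumptions, not the DAO implementation:

```python
from collections import defaultdict


class NetworkHistory:
    """Per-interface probe bookkeeping for choosing an uplink."""

    def __init__(self):
        self.attempts = defaultdict(int)
        self.successes = defaultdict(int)

    def record(self, iface, reachable):
        """Record the outcome of one probe of `iface`."""
        self.attempts[iface] += 1
        if reachable:
            self.successes[iface] += 1

    def success_rate(self, iface):
        """Observed availability of `iface`, 0.0 if never probed."""
        if self.attempts[iface] == 0:
            return 0.0
        return self.successes[iface] / self.attempts[iface]

    def best(self):
        """Interface with the highest observed availability, or None."""
        if not self.attempts:
            return None
        return max(self.attempts, key=self.success_rate)
```

A real policy would also weigh bandwidth, latency, and energy cost per byte, and would age out old observations so the history tracks seasonal changes in connectivity.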
C: Isolated and Hard to Reach
The number of human settlements on the arctic tundra is very low. In practice, it is largely unavailable to humans in both time and space, implying that humans will not be around to maintain systems and replace batteries. Nodes on the arctic tundra can in principle be visited by humans all year round. In practice, however, deployments are done during summer. The rest of the year, deployed nodes are left alone with no possible assistance (because of weather conditions, and because some nodes are meant to be buried under snow). There are also regulations limiting visits by humans and the impact of nodes on the environment. These constraints limit the accumulated number of nodes that can in practice be deployed over time.
Energy and network resources are limited on the arctic tundra. Thus, nodes must be very cautious when using energy. Back-haul networks will have varying availability, bandwidth, and latency. To increase the chance of being able to report data and receive updates, nodes should have on-node network interfaces enabling ad-hoc connectivity with other nodes within local network reach. Consequently, nodes on the arctic tundra face more energy, network, and maintenance challenges than an IoT deployment inside a building or in a city.
When deployment and configuration are done by humans in the field, errors happen, leading to inoperable nodes or bugs that cause later problems. Because nodes are primarily asleep between measurements (to save energy and because the networks are limited), there is only a small window of time during which nodes can be reached for remote maintenance and updates. Consequently, nodes will need to be updated several times after deployment to fix bugs, change observational parameters, and modify functionality. Updates must be done remotely.
Nodes need to be autonomous and resilient. The most important tasks are to (1) execute observations, (2) gather observations as data, (3) keep the observations safe, and (4) stay alive for the complete duration of the experiment (or until the next expected refill of energy). Autonomy is needed both for individual units and for groups of nodes forming a distributed system.
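One wake cycle covering the four tasks might be sketched as below; all callables are placeholders supplied by a concrete deployment, not part of the DAO implementation:

```python
def run_wake_cycle(sense, store, try_uplink, report, energy_ok):
    """One wake cycle of an autonomous node.

    (1)-(2): execute the observation and gather it as data;
    (3): persist it locally before any reporting is attempted;
    (4): skip everything optional when energy is low, so the node
    survives until the next expected energy refill.
    """
    if not energy_ok():
        return "slept"            # task 4: survival beats observation
    sample = sense()              # tasks 1-2: measure and capture data
    store(sample)                 # task 3: keep the data safe locally
    if try_uplink():              # report opportunistically if a network is up
        report(sample)
    return "observed"
```

Storing before reporting matters: the back-haul may be unreachable for months, so local persistence is the only guarantee that an observation survives until connectivity returns.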
The characteristics of the arctic tundra, and other similar challenging environments, impact the architecture, design, and implementation of a scalable CPS of autonomous nodes.
The first characteristic is the harsh weather. It impacts battery capacities, recharging characteristics, energy harvesting, packaging, mounting of nodes, and how practical it is for humans to visit the nodes.
The second characteristic is the large size of the arctic tundra. It dictates the number of nodes needed to do observations at many locations, which increases the volume of data collected and the number of humans needed to deploy and maintain the nodes.
The third characteristic, remote and hard to reach areas, implies isolation of nodes for short and long periods. This increases the complexity of the data networking approaches needed to get reports from the nodes and to do remote updates of them.
The unique characteristics of the arctic tundra imply that, to be deployed, nodes will have complicated software, network, energy solutions, and packaging. Complex solutions certainly imply both more bugs and more complicated bugs. However, even simpler architectures, designs, and implementations will (after deployment) experience unexpected failures and behavior.
To simplify developing and maintaining nodes, a platform providing the software and hardware of the nodes is needed. We call the developed platform an Observation Unit (OU). Its implementation is a run-time environment with a set of software mechanisms and policies, to which ecologists add software for specific observations and controlled experiments.
[1] The National Ecological Observatory Network (NEON). http://www.neonscience.org, March 2019.
[2] L. Evans and P. Bryant. LHC machine. Journal of Instrumentation, 3, 2008.
[3] W. Gressler, J. DeVries, E. Hileman, D. R. Neill, et al. LSST telescope and site status. Ground-based and Airborne Telescopes V, 91451A, 2014.
[4] Laser Interferometer Gravitational-wave Observatory (LIGO). https://www.ligo.caltech.edu, March 2019.
[5] Issam Raïs, John Markus Bjørndalen, Phuong Hoai Ha, Ken-Arne Jensen, Lukasz Sergiusz Michalik, Håvard Mjøen, Øystein Tveito, and Otto Anshus. UAVs as a leverage to provide energy and network for cyber-physical observation units on the arctic tundra. In 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), pages 625-632. IEEE, 2019.
[6] L. M. Smith et al. The Ocean Observatories Initiative. Oceanography, 31(1), 2018.
[7] B. Wang, X. Zhu, C. Gao, Y. Bai, et al. Square Kilometre Array telescope - precision reference frequency synchronisation via 1f-2f dissemination. Scientific Reports, 5, 2015.
[8] Issam Raïs, Otto Anshus, John Markus Bjørndalen, Daniel Balouek-Thomert, and Manish Parashar. Trading data size and CNN confidence score for energy efficient CPS node communications. In CCGrid 2020: The 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing. IEEE, 2020.
Issam Raïs received his Ph.D. in the Algorithms and Software Architectures for Distributed and HPC Platforms (Avalon) team of the LIP laboratory at École Normale Supérieure (ENS) Lyon, France, in September 2018. His interests include energy efficiency for high-performance computing, cloud, edge, IoT, and distributed systems. Since January 2019, he has been a post-doctoral researcher at UiT The Arctic University of Norway, Tromsø, Norway, where he contributes to the DAO-CPS (Distributed Arctic Observatory CPS) project to provide scalable, robust, configurable, and energy-efficient approaches to monitoring extreme ecosystems such as the arctic tundra. (www.cs.uit.no/~issam/)
Otto Anshus is a Professor at the Department of Computer Science, University of Tromsø (UiT) The Arctic University of Norway, and a part-time Professor at the Department of Informatics, University of Oslo, Norway. His current primary research interests are distributed and parallel cyber-physical systems.
Phuong H. Ha received the Ph.D. degree from Chalmers University of Technology, Sweden. His current research interests include energy-efficient computing, parallel programming, and distributed computing systems. Currently, he is a professor at the Department of Computer Science, UiT The Arctic University of Norway. (www.cs.uit.no/~phuong)
John Markus Bjørndalen is a Professor at the Department of Computer Science, University of Tromsø (UiT) The Arctic University of Norway. His current research interests include concurrent programming, parallel computing, and cyber-physical systems.