AIoT: Thoughts on Artificial Intelligence and the Internet of Things

Jeffrey S. Katz
July 23, 2019

 

It’s time for another convergence. IT/OT convergence is happening, initiated by cybersecurity mutual interests, now pressure-fit by fuzzy boundaries with computing at the edge, and in the Cloud. Let’s talk about two other large technology trends forming a junction, Artificial Intelligence, and the Internet of Things. We can even squeeze out a letter, and instead of AI-IoT, we have AIoT. This becomes more interesting when thinking of AIoT at the edge, where the action is, however perhaps more distributed than ‘classic AI’ (a phrase I never thought I’d write).

There is more data from more instrumentation and omnipresent embedded sensors, and this begins the story of IoT and Big Data and the connectedness of things. Not only might we learn more by collecting data, but communication improvements (just watch the 5G advertisements), are making the data velocity, as well as volume, increase. Moreover, every object is now a ‘thing’, including the one you see when you look in a mirror. Think about Fit Bit and Apple Watch. In contrast, a good summer read is Jaron Lanier’s “You Are Not a Gadget” from 2011. It was a national bestseller, very prescient.

Of course, even in basic SCADA, we collect data faster than people can analyze it. If it is unstructured data, such as drone video, an hour’s flight time can take months for analysis. In my area of Smart Grid (which I think of as IoT for Electric Power 1.0), every classic Intelligent Electrical Device is becoming an IoT device, to the point where it is hard to buy a ‘dumb’ electro-mechanical relay.

The thought is sometimes lost that AI means Artificial Intelligence. As with artificial sweeteners, there are good and bad aspects. Making an artificial something, when we don’t completely understand the original workings of intelligence, is ambitious. There are aspects we tend to start with, that represent human intelligence but scale it. From the Smart Grid area:

  • Inspection of images, gathered by drones, to keep manual site visits down, is one case. IoT is a general topic; to me, a drone is IoT that flies; a robot is IoT that senses and interacts with the physical world. Pattern recognition of visual defects (e.g. is there surface rust on transmission line towers, or inside of gas pipelines) may not need a mechanical engineering degree to observe. One could have a cub scout troop walk down a transmission line corridor right of way with binoculars and do a credible survey of tower rust, geotagging photos that looked suspicious. The problem is that it doesn’t scale, so we apply AI, though it is simple for people to do it on a small scale
  • Being so visually focused (which is why most of a TV’s cost is in the picture circuits, not the sound, and why connecting a small stereo system to your TV can add so much to viewing), we forget the field of acoustics. Did you hear that? What does a machine normally sound like? Just because we’re at the top of the food chain does not mean we’re at the top of the sensory chain. For instance, a recent patent was issued for acoustic monitoring of electric transmission lines.

A major utility was years behind in the manual analysis of images of transmission tower rust. Here AI might not replace people, it could help scale a problem that humans can do well, perhaps better, by having AI be a pre-screening intern. However, back to those cub scouts for a minute, they probably didn’t call reddish-brown leaves on the trees seen through the tower structural members rust, even if the leaves were the correct color, because when you give a ten-year-old that assignment, they know you meant rust on the metal of the tower, not rust color near the tower. So, another point, AI may be aided by pre-processing with more classic techniques, such as mapping the tower steel elements in the photo, before the actual AI part begins.

A similar example in the smart grid now is vegetation management – where do distribution power lines and trees intersect, so problems can be detected, and preventive tree-trimming can be optimized? Cameras in trees and poles is not the answer, but IoT does not mean the sensor has to be on the ‘thing’. Here satellite and LIDAR images can tell us enough about trees via remote sensing. This is a $100M annual cost, even at some mid-size utilities. The amount of data is enormous; again, a troop of boy scouts could do it, but how fast? So, in an encore performance, AI is used to correlate satellite imagery (the original high-flying drone perhaps) with GIS information on power lines, using an advanced geospatial data integration platform.

Of course, people operating industries in real-time don’t have the luxury of searching through all of this data, so AI comes in again, via natural language processing, speech to text, and text to speech, so it can listen to, and answer questions from,  field technicians.

There is a so-called aging workforce problem. Of course, it is not a problem that people get older and more experienced. The problem is ‘tribal knowledge’, a polite way of saying it was no one’s job to organize the company’s information learning over decades, only to discover a problem exists at someone’s retirement party. There was a time when computer science and library science were in the same department in universities – now the importance of that relationship is glaring. AI systems need context and defined taxonomies to derive the most value out of IoT streams.

In fact, there is now AI and robotic process automation (a term I’m ambivalent about because it makes me think of the Jetsons having a robot to do workflow) - just to organize the data. Take for example DMS, EMS, AMI, major systems at many utilities. While they are different technologies, they all produce or consume huge volumes of data, and may use different names in different systems for the same data point. This may make upgrades to systems such as DMS absorb thousands of unplanned utility engineering hours. Is it not reasonable for a DMS vendor to expect the information the utility needs to deploy its new system is ready, organized, stored in some XML or systematized taxonomy, perhaps according to some industry standard, as a requirement that customer has to fulfill?

  • Some modern utilities (no names are being used in this article to protect the industry leaders) believe all the data is interesting; little is important. How to find a needle in a haystack, where the focus is on the few nuggets that are of value. Value, of course, is in the eye of the person funding the project – if it is transmission it may be synchro-phasors, if it is distribution it might be smart meter data. Data storage is cheap. When contemplating its first analytics projects, the sponsor should first be sure all SCADA data is kept on-line, even if the SCADA system has limited disk storage itself, because the value of that data will increase.
  • I like to think of AI as a smart intern, not a replacement. It can help filter through masses of data, for the humans to focus on what may be important. I have heard this is done in radiology – computers determine the X-ray is OK, or has an issue, and the issues go to the M.D.
  • With the surge in renewable energy on the transmission system, communication must support data captured at least twice a cycle, over long distances, and this may be problematic. Especially since these lines are sometimes in the middle of nowhere. All is good, good... All is well until something is out of alignment – transmission oscillation due to renewable energy fluctuations perhaps. A potential case for some edge computing.

Technology Changes, People Often Don’t

Just because AI is here, we need to ask the question:

  • Does the problem need AI?
  • Could it be improved by a machine? Much has already

Rapid application of decisions to manage high volumes of data (95% of the time no panic, just observed – then a critical period of time is demanded to make the best decision quickly). AI does not solve everything, it needs to be matched to the application. If there is a closed-form analytical solution, do we need AI to rediscover it? Do not have AI ‘learn’ Fourier analysis of harmonics on the grid from inverters. Instead give AI the harmonic analysis, the grid state, the weather, the load forecast, and let it help with decision support for future operational contingencies.

Can we trust more to automation just because we can’t do it in time? For example, nuclear has always had more automation because a) things could go wrong faster and b) consequences may be more severe.

Most automation projects since the punched card era starts as an off-line helper, then gets trusted by people in control.

With AI, one must train it; before it ships, and after delivery. Here engineers are key to helping the system learn, and people purchasing AI might consider there is a cost after delivery, not just buying a year’s worth of software support.  AI doesn’t have tribal knowledge. It needs engineers to say, ‘you’ve got this wrong!’ Traditional software doesn’t get better, but it doesn’t get worse either, and needs no guidance, just perhaps human patience. AI is not magic out of a box, it is a middle school student who with guidance could become more than its teacher.

Now comes the importance of consistent and available data across large parts of the grid system, but data has to be managed properly, and kept private and anonymized. People are good at filling in the blanks in the face of ambiguity. I’ve reviewed many presentations by US authors and explain to them what won’t carry over to native speakers of another language or culture – by idiom, by abbreviation, or by typos. People are good in their native language at filling in the blanks or discerning new acronyms by context. For AI, this ‘background knowledge’ can be troublesome.

Ai can be an advisor – for decision support, suggestions on how to get operations back on track ASAP. AI can do routine observation – given what is seen from IoT, do experiential learning from operator actions, sift out best practices – which may first go into a simulated training environment. No one wants a pompous system making suggestions that haven’t been tried off-line

AIoT at the Edge can effectively be pushing decisions to the edge. As we consider whether to push decisions and actionability further out, the challenge is also with the control of data and its movement; privacy at the edge; utility becoming disintermediated (humans always want to push the button). Consider what is our comfort level – do we trust it enough to let it run critical infrastructure?  Will we forgo speed for control? What’s our level of comfort to give up control?  Test and regression analysis in classic software becomes test and correct and help AI differentiate the right answer from the wrong answer. There is always the Hollywood aspect though – the teacher has to consider whether an AI system is on to something, not just that it didn’t do what the human would have suggested.

People are the hard part of technological progress. One needs to think - even when I can trust the AI, how do I integrate it operationally?  The grid edge devices are doing things, but I still have a team of people.  Change is hard.  Where things have failed in the past is change management; why perhaps 80% of projects may miss their initial project plan dates.

Security – how do I build a network that is impervious – redundancy and resiliency are especially important if the Edge is off on its own, even if on a short leash. If it has a computer and a network connection, it has a cyber vulnerability that needs to be addressed. This is something I believe, but will an AI system be able to recognize a cyber problem alone at the edge?

AI for decision support – what caused the fault in the grid? There are plenty of FLISR solutions. Can AI bring together more information, besides weather and SCADA, such as municipal repair activity or other utility construction? Fire in the area that the utility is not aware of? AI can read news feeds, especially from hyper-local companies such as Patch, and Twitter.

AI for creating feedback loops – does anyone go back to the maintenance records today (maybe the summer intern) and look for abnormalities in SCADA data – after a failure? We could use AI to check out anomalous faults and attach annotations to field reports.  We could be learning by AI understanding maintenance notes and correlating with the SCADA data from two hours before the fault. What’s not getting done because it takes too many people to do it, even if the data is available?

The Keys Tend to Be

  • Have a platform – extensible by the user with their own analytics and learning, and with some industry standard data models. Otherwise, you’ve created your own vendor lock-in of engineering knowledge, whose efforts aren’t portable
  • Put learning in a process – learning instantiated in a system eventually grows in the right direction – just see most on-line help forums
  • Empower people – they are being assisted, not replaced
  • Last, but not least, remember Dilbert  –  whose biggest problem is (mostly) smart engineers with poor management. Independent AIoT devices will need some clue as to the overall system operating state, because their actions, done independently, while correct for their specific mission as they know it, might be different if the overall scenario is unusual. Think, for example, in the electric power system, there may be protective devices to reduce turbine overspeed driving the generator. An action that causes sudden loss of load, while locally correct, may result in a plant shutdown, though each IoT system was doing its smart mission locally, in a vacuum, the overall result was less than successful. If this was an IEEE Journal article, it would be called emergent behavior, an aspect of systems of systems science.

Author’s Notes

These are the author’s views, not necessarily those of IBM.

There is a  Utility University session on the topics of the article, and more, at DistribuTECH 2020, organized by the author.  Utilities interested in speaking at this tutorial  may contact the author.

 


 

jeffrey katzJeffrey S. Katz is a Senior Member of the IEEE. He is a member of the IBM Academy of Technology and the IBM Industry Academy. He was a co-chair of the IEEE 2030 Standard on Smart Grid Interoperability Guidelines, IT Task Force. He was on the External Advisory Board of the Trustworthy Cyber Infrastructure for the Power Grid and is on the Advisory Board of the Advanced Energy Research and Technology Center. He was on the “Networked Grid 100: The Movers and Shakers of the Smart Grid in 2012” list from Green Tech Media. He was appointed to the IEEE Standards Association Standards Board for 2014. He is an Open Group Distinguished IT Specialist. He co-chaired the first IEEE Power and Energy Society workshop on Big Data in Utilities in September 2017 and co-organized the first PES workshop on Utility Cybersecurity in December 2017. He is a member of the Industry Advisory Committee for the IEEE Intelligent Smart Grid Technologies conference for 2019 and 2020.Prior to IBM he was the Manager of the Computer Science department at the U.S. Corporate Research Center of ABB, and then of ALSTOM (now GE).He can be reached at jskatz@us.ibm.com or Jeffrey.s.katz@aya.yale.edu

 

 

How Smart Cities Can Benefit from Autonomous Cars?

Dalton Oliveira
July 23, 2019

 

Smart cities, the concept in itself, it is more than solely implementing known and new technologies – this is digital cities. When we talk about something smart, we are talking about the capability of connecting technologies, dealing with data, and bringing value – in the case of smart cities, bringing value for the perspective of citizens and governments.

Smart cities start with smart homes. Based on smart homes data about the consumption of basic services by the citizen (i.e.: water, gas, electricity, sewer, internet broadband, etc.), governments and service providers can analyze, make decisions, and take actions by providing the proper load balance of services for the city and for the citizens. When we talk about smart homes and smart cities, what is the first thing that comes to mind than mobility? Mobility is a pain point for big cities around the globe. So, how autonomous cars can support smart cities?

Technology + Cars = Opportunities

We all have been reading about connected and autonomous cars and their benefits – since decreasing car accidents until turning people’s life smarter by the integration with other technologies. It is a vast subject to explore. When mixing cars with emerging technologies such as internet of things (IoT), artificial intelligence (AI), machine learning (ML), deep learning (DL), there is no limitation for the creativity, for the innovation; and I am not talking about user experience only, I am talking about the insights provided by all the data involved too. I am talking about new business models (even for existing brands), opportunities for entrepreneurs to develop new products and new services, fuel governments with data generated to take right and quick decisions, and opportunities for product R&D and application engineers too. As we can see, there are new opportunities (which includes new careers) for all the ecosystem – the ones directly and even indirectly involved. In other words, it is a new lifestyle for users and new businesses coming up as quick as we can imagine. Are you prepared for that?

Connected Cars and the City

Connected cars are around us for a few years. By using the Internet of things (IoT), it is possible to predict, for example, car preventive and even corrective maintenance, and (yes) provide insights and inputs for smart cities – I am going to tell you how.

Figure 1: Google Earth 3D view of a district of Sao Paulo / Brazil (Credits: Wardston Consulting, Map data: Google, DigitalGlobe).

Figure 1: Google Earth 3D view of a district of Sao Paulo / Brazil (Credits: Wardston Consulting, Map data: Google, DigitalGlobe).

Figure 1 shows an area of a city divided into 4 quadrants (with very similar characteristics). A user (U1) of a connected car (CC1) in one quadrant (Q1) goes to maintenance more often than another user (U2) of the same connected car model (CC2) in another quadrant (Q2). There are several hypotheses, but engineers do prefer to be based on data, and connected cars data tell us some stories (based on a known context):

  • the average speed of (1) is higher than (2);
  • the average time from point of origin to point of destination of (1) is higher than (2);
  • the average fuel consumption of (1) is higher than (2);
  • the average gear shift of (1) is higher than (2);
  • some more data.

Based on the few data shown above, it seems that the streets of Q1 and Q2 are not that similar as it was supposed to be. With in-loco analysis, it was detected that they are really not: Q1 has more bumps and potholes than Q2.

Figure 2: Bump on the street.

Figure 2: Bump on the street.

Regarding Figure 2, it is possible to provide some insights and inputs for local government:

  • Create more points of car fuel stations in Q1;
  • Create a type of bump capable to transfer energy from the mechanical impact of cars tires to connected gears, in order to, for example, generate electric energy;
  • Many other insights and inputs.

Bumps are provided by local government, but potholes are not – local government should fix them. But, to fix them, it is necessary to know where they are.

Autonomous Cars Supporting Smart Cities

Autonomous cars can gather, process, send, and receive data in order to make decisions and take actions. A complex ecosystem to deal with big data, mission-critical and fully connected systems running in real-time and mixing internet of things (IoT), artificial intelligence (AI), machine learning (ML), deep learning (DL) that makes everything work together.

Figure 3: Understanding the relationship between artificial intelligence -AI, machine learning -ML, deep learning -DL (Credits: IEEE Communications Society).

Figure 3: Understanding the relationship between artificial intelligence -AI, machine learning -ML, deep learning -DL (Credits: IEEE Communications Society).

Computer vision with trained machine learning (ML) models to detect and recognize things around to avoid (i.e.: potholes) and not to collide (i.e.: with other cars, people, objects, etc.) is part of the complex cognitive block of autonomous cars.

Figure 4: Potholes on streets can be detected and recognized by computer vision -CV.

Figure 4: Potholes on streets can be detected and recognized by computer vision -CV.

Moreover, while autonomous cars are working for themselves to make decisions and to take actions, they can fuel local government with data and who else is interested in working together for a better community:

  • When autonomous cars identify potholes, they can adjust the speed to less damage the car plus they can map the location and the types of potholes – in order to notify the government to fix them;
  • When autonomous cars identify bumps, they can adjust the speed to less damage the car plus they can map the location of bumps – in order to notify the government to analyze and cross this information with several others (for example, areas with high score of car accidents versus the number of bumps, and other analysis);
  • Many other things in order to create, manage, and output (for several purposes) more and more information.

It is possible to expand the analogy of the 4 quadrants mentioned before by dividing the city into small areas and each area has its own 4 quadrants. Autonomous cars driving from point A to point B to point C to point A, we understand that they are moving in and out several quadrants, what it makes them as like remote stations gathering information and sending these data to some cloud, to some database, building a historical record of, for example, real-time weather temperature, rain, snow, air quality, UV, and other climate data building dashboards – I am talking about real-time data, not forecast (read more about one of my IoT+AI/chatbot projects for smart cities called SmartyTempy).

For example, by providing real-time information about air quality to fuel a bike riding service to notify bikers if the area is proper for riding a bike or not. And, with enough data, it is possible to predict some natural disaster and take actions before it happens. If the street or road has snow or flood, by using augmented reality (AR) – with previously mapped data – while human eyes sometimes cannot see the risk, it is possible to show in car or smartphone screen the danger in some meters ahead. As we can see, there are many benefits from autonomous cars to smart cities.

Remember: When we talk about data, big data, cloud, database, and related, the main thing is: data and numbers by themselves do not tell us stories, stories are built based on a known context plus data and numbers.

The US$ 1 Billion Question

With all that said, who owns the data?

 


 

Dalton OliveiraDalton Oliveira is an Electronics Engineer working as a Global Digital Transformation Consultant, Mentor, Speaker (application, product, project, process, engineering) at Wardston Consulting. Awarded Top 3 IoT World Series (competing with Siemens, AT&T, Bayer, others) and Facebook Testathon Best Product Idea. Experienced in global mission-critical projects with US$ Billions budget in global companies (consumer goods, telecom, scientific), governments, universities since 2002. For internet of things (IoT) and artificial intelligence (AI), some of his authoral projects are installed from Silicon Valley to New York. Guest speaker at IoT World Series event in Atlanta/USA, guest quote writer at Freedom IoT Manufacturing in Cincinnati/USA, guest judge for NVC at The George Washington University in Washington DC/USA. Mentioned in academic paper at Saudi Arabia University + MIT/USA, interviewed by Riviera Magazine (exclusive 1/3 page), interviewed by Cultura TV (tv news, prime time, nation wide). Contact him at LinkedIn and at Wardston website.

 

 

Using IoT in the Classroom Towards Energy Savings and Sustainability Awareness

Georgios Mylonas, Federica Paganelli, Pavlos Koulouris, Joerg Hofstaetter, and Nelly Leligou
July 23, 2019

 

 

The Internet of Things (IoT) and smart cities are two of the most popular directions the research community is pursuing very actively. But although we have made great progress in many fields, we are still trying to figure out how we can utilize our smart city and IoT infrastructures, in order to produce reliable, economically sustainable solutions that create public value, and even more so in the field of education.

GAIA1, a Horizon2020 EC-funded project, has developed an IoT infrastructure across school buildings in Europe. Its primary aim has been to raise awareness about energy consumption and sustainability, based on real-world sensor data produced inside the school buildings where students and teachers live and work. Today's students are the citizens of tomorrow, and they should have the skills to understand and respond to challenges like climate change. Currently, 25 educational building sites participate in GAIA, located in Sweden, Italy, and Greece. An IoT infrastructure [1] is installed in these buildings, monitoring in real-time their power consumption, as well as several indoor and outdoor environmental parameters.

However, this infrastructure would not be particularly useful without having a set of tools to allow access to the data produced and provide the functionality to support educational activities. The GAIA Challenge2 is a playful interactive platform aimed at students, designed to serve as an introduction to aspects related to power consumption and energy-saving. In addition, real-time data from sensors in the buildings and participatory sensing help to visualize the real-life impact of the students’ behavior and enable competitive gamification elements among different schools. The GAIA building manager is a web application offering visualization of energy consumption and environmental data. A smartphone app allows end-users to access school building data from the GAIA infrastructure in a more immediate manner.

Figure 1: Examples of the IoT hardware used in the project: a) the IoT node used inside classrooms, b) IoT hardware used for educational lab activities, c) the exterior of some GAIA-enabled schools in Greece.

Figure 1: Examples of the IoT hardware used in the project: a) the IoT node used inside classrooms, b) IoT hardware used for educational lab activities, c) the exterior of some GAIA-enabled schools in Greece.

GAIA Pilot Activities at Schools

In terms of questions that GAIA investigated, a first one is whether the use of real-time IoT data from the end-users' environment can act towards motivating them to participate in energy-saving activities and actually produce some tangible results. A second one is whether IoT-based educational activities can be successfully integrated into the curriculum of schools. In order to answer these questions, we implemented a series of pilot activities inside the schools that participate in the project during school years 2017-18 and 2018-19.

The idea is that GAIA’s software components are used in the context of a set of educational templates and lab activities [2, 3] that are proposed to the teachers of the schools that participate in the project. Schools chose the energy domain on which they focused on (e.g., the use of lights), and used the GAIA methodology as a way to organize their interventions and be able to monitor results in a structured manner.  In the context of the proposed methodology [4], schools followed a series of simple steps, in which students and teachers successively study their environment, monitor the current situation and detect potential issues, devise a strategy to achieve energy savings and act, and then monitor and review the results of their actions. The provided software tools allow for immediate feedback with respect to the effect of the energy-saving strategies they choose to follow and apply inside their schools, which could either belong to the ones proposed by GAIA or be something entirely different, e.g., a strategy proposed by the students themselves.

Figure 2 An example of an interactive installation serving as a "control panel", using touch-enabled cardboard surfaces and GAIA IoT data.

Figure 2: An example of an interactive installation serving as a "control panel", using touch-enabled cardboard surfaces and GAIA IoT data.

Some Promising Results

In terms of participation in the project activities and use of the project tools, one very positive result is the participation of students themselves through their registration and use of the GAIA Challenge, our playful introduction to the project. The Challenge includes a number of “missions”, in which students complete certain “tasks”, by answering questions, making correlations, etc. Overall, 3735 students registered to the Challenge, with a 92% mission completion rate out of those who started playing a mission, i.e., the majority of the users that registered and started playing, actually continued playing through the challenge and were successfully “introduced” to the aims of the project. It also helped to make this introduction without requiring the schools to dedicate additional time for this activity.

In terms of actual energy saving results from combining the tools and project methodology with data produced inside school buildings during related activities in the schools, we have seen results in the range of 15-20% at most instances, while in some cases there were both smaller and bigger energy savings. The key, in this case, is to engage teachers and students and retain this engagement by tying actions with elements in the school curriculum. With respect to engagement, an important factor has been competition: students were intrigued by the prospect of competing with students from other schools and countries and were further motivated to participate in GAIA’s competitions for energy savings and related ideas. An additional remark is that you don't always need complex tools to achieve good results; in many cases, there is a low-hanging fruit in energy savings in public buildings such as schools, where simple interventions based on actual data can have a real impact.

Acknowledgment

This work has been supported by the EU research project “Green Awareness In Action” (GAIA), funded by the European Commission and the EASME under H2020 and contract number 696029.

References

  1. D. Amaxilatis, O. Akribopoulos, G. Mylonas, I. Chatzigiannakis, “An IoT-based solution for monitoring a fleet of educational buildings focusing on energy efficiency”. In MDPI Sensors, Special Issue in Advances in Sensors for Sustainable Smart Cities and Smart Buildings, 17(10): 2296.
  2. G. Mylonas et al., "An Educational IoT Lab Kit and Tools for Energy Awareness in European Schools". In International Journal of Child-Computer Interaction, Volume 20, 2019, pages 43-53, Elsevier.
  3. G. Mylonas, I. Chatzigiannakis, D. Amaxilatis, F. Paganelli, A. Anagnostopoulos, “Enabling Energy Efficiency in Schools based on IoT and Real-World Data”. In IEEE Pervasive Computing, Volume: 17, Issue 4, 2018.
  4. G. Mylonas, D. Amaxilatis, S. Tsampas, L. Pocero, J. Gunneriusson, “A Methodology for Saving Energy in Educational Buildings Using an IoT Infrastructure”. In the 10th International Conference on Information, Intelligence, Systems and Applications (IISA 2019), 2019.

1GAIA Project website, http://gaia-project.eu

2GAIA Challenge website, http://gaia-challenge.com/

 


 

Georgios MylonasGeorgios Mylonas is a senior researcher at Computer Technology Institute and Press “Diophantus”, Patras, Greece. He received his Ph.D., MSc and diploma degree from the Dpt. of Comp. Engineering and Informatics at the University of Patras. His research interests lie in the areas of IoT, wireless sensor networks, distributed systems, and pervasive games. He has been involved in the AEOLUS, WISEBED, SmartSantander, AUDIS and OrganiCity projects, focusing on smart cities and IoT-related aspects. He is the coordinator of the Green Awareness in Action (GAIA) H2020 project.

 

Federica Paganelli received the Ph.D. degree in telematics and information society from the University of Florence, Florence, Italy, in 2004. She is an Assistant Professor in the Computer Science Department of the University of Pisa, Italy. Her research interests include context-aware and Web of Things systems, service-oriented computing and communication, and next-generation networks.


Pavlos Koulouris received his degree in Greek Literature and Linguistics from the University of Athens, Greece. He continued with postgraduate studies and research at the Institute of Education, University of London, in the field of ICT in education. He has worked in diverse areas of educational research and innovation for more than 20 years, including as a senior member of the Research and Development Department of Ellinogermaniki Agogi, a highly innovative school in Athens, Greece.

 

Jörg Hofstätter is the founder and managing partner of ovos, a digital design agency in Vienna. At ovos, Jörg is responsible for business development for online projects and games. He trained as an architect at the Academy of Applied Arts Vienna (Studio Hadid) and has worked with online technologies and games for over 15 years. He is frequently invited to international conferences to speak about serious games and virtual/augmented space.


Helen C. (Nelly) Leligou is currently an assistant professor in the Department of Industrial Design and Production Engineering of the University of West Attica. Her research interests lie in the areas of Information and Communication Technologies, including routing protocols and trust management in sensor networks, control-plane technologies in broadband networks, industrial, embedded, and networked system design and development, and blockchain technologies. She has participated in several EU-funded ACTS, IST, ICT, and H2020 research projects in the above areas.


Adding One More AI Layer on Connected Technologies

Riccardo Petrolo and Teresa Macchia
July 23, 2019


In a workplace world where connected devices, collected data, and applications outnumber people, a call to action is in order to make our future sustainable. The conventional, closed, straight-line business model, which applies to smart industry too, is becoming unsustainable in the long term. New intelligent technologies can therefore play a central role in fostering a regenerative economic cycle by combining and facilitating the (re)use of data and IoT solutions and orchestrating them into new disruptive services.

We are working to create a cognitive platform that empowers people in their workplace by supporting collaboration and enhancing decision processes. Specifically, our research focuses on i) the ingestion of accurate and reliable data from disparate sources via intelligent agents, ii) the extraction of meaningful information from that data via an AI layer, and iii) the orchestration of insights via an AI2 layer that enables a set of new disruptive assistive services.

To connect the digital and the physical worlds (see Figure 1), we are investigating an infrastructure that can orchestrate multiple agents together with AI layers. These investigations focus on different aspects of existing cloud-edge technologies in order to expand the potential and the quality of services such as smart environments, robotics, and natural language processing, whose real-time requirements may differ depending on the context.

Figure 1: Bridging Physical and Digital world through AI.


Specifically, intelligent agents are software modules capable of acquiring information from devices and their surroundings. Once an agent is connected to our platform, its data is routed via cognitive pipelines to AI engines capable of extracting meaningful information; at the same time, the data can be stored, in accordance with the GDPR and other privacy laws, for further analysis and for training purposes. At this layer, insights are provided as the output of the AI engines. For example, an agent connected to a camera could feed an AI activity-recognition engine that provides information about the activities happening in the area covered by that camera.

The AI2 layer collects the outputs of the AI engines, enriches them with context information, and provides services that support collaboration and enhance decision processes. For example, insights from the camera described above, merged with location and other context information, can be used to enhance the working experience: the system can detect a potential risk and notify an employee who is trying to move a heavy box, and at the same time ask a robot deployed nearby to help the employee with the task. This system can be used to empower employees in different workplaces, from offices to hospitals to factories.
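The agent-to-AI-to-AI2 flow above can be sketched in a few lines of Python. This is purely illustrative: all names (the `Insight` type, the activity-recognition stub, the `ai2_orchestrate` function) are hypothetical and stand in for the platform's real components, which are not described in detail here.

```python
from dataclasses import dataclass

# Hypothetical types and functions for illustration only.

@dataclass
class Insight:
    source: str        # which agent produced the underlying data
    label: str         # what the AI engine recognized
    confidence: float  # engine's confidence in the label

def activity_recognition_engine(frame: dict) -> Insight:
    """AI layer: turn raw agent data (a camera frame) into an insight."""
    # A real engine would run a vision model; here the detection is stubbed.
    return Insight(source=frame["camera_id"],
                   label=frame.get("detected", "idle"),
                   confidence=0.9)

def ai2_orchestrate(insight: Insight, context: dict) -> str:
    """AI2 layer: merge an insight with context and choose an assistive action."""
    if insight.label == "lifting_heavy_box" and insight.confidence > 0.8:
        robot = context.get("nearby_robot")
        if robot:
            return f"dispatch {robot} to assist at {context['location']}"
        return f"notify employee at {context['location']} of lifting risk"
    return "no action"

# Example flow: camera agent -> AI engine -> AI2 orchestration
frame = {"camera_id": "cam-7", "detected": "lifting_heavy_box"}
context = {"location": "warehouse aisle 3", "nearby_robot": "amr-2"}
action = ai2_orchestrate(activity_recognition_engine(frame), context)
print(action)  # -> dispatch amr-2 to assist at warehouse aisle 3
```

The point of the sketch is the separation of concerns: the AI layer only maps raw data to insights, while the AI2 layer alone sees context (location, nearby robots) and decides on the assistive action.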

Many technical challenges remain open and need to be addressed, e.g., efficient data processing and resource placement. Moreover, adding an Artificial Intelligence layer on top of existing services has to take a number of aspects into account and must be designed to support human autonomy and promote diversity, e.g., by recognizing different contexts and the role played by humans in a specific action. In this direction, the challenge in designing and developing AI is to question how AI impacts human lives, and to guide actions that accommodate routines and automate tasks without undermining human activities.

Reinventing the wheel is something we can no longer afford, in any aspect of our lives. The opportunity intelligent technology gives us to enrich and reuse existing data and services is something we cannot ignore. Hence, the technology we are designing aims at easing work in the workplace without adding noise to already noisy smart industries. The integration of different components, devices, platforms, and services is rooted in the concept that AI will guide and support every aspect of work life. Integrating AI and IoT solutions into the workplace is expected to regenerate and refresh the use of data and to generate a set of new disruptive assistive services.


Riccardo Petrolo is currently an R&D Engineer at Konica Minolta Laboratory Europe, Rome, Italy. He received B.Sc. and M.Sc. degrees from the University “Mediterranea” of Reggio Calabria, Italy, in 2010 and 2013, respectively. In 2016, he received the Ph.D. degree from Lille 1 University, France. During this period, Dr. Petrolo served as a Research Assistant at Inria Lille – Nord Europe, where he was involved in the development of the EU FP7 VITAL project, one of the first operating systems for smart cities.

In 2015, Dr. Petrolo was Visiting Researcher in the GTA group at UFRJ, Rio de Janeiro, Brazil. From January 2017 to January 2019 he was Postdoc Fellow at Rice University (Houston, Texas, USA) where he led the ASTRO project. His current research interests include the Internet of Things, Robotic Networks, Semantic interoperability, Edge computing, and Smart Cities.

Teresa Macchia is a Design Researcher at Konica Minolta Laboratory Europe. She has ten years' experience designing human technologies for the private and public sectors. As a sociologist with a doctorate in Computer Science, she is passionate about the impact of connected technologies on people's lives, and she works to design technologies that improve quality of life. Before joining Konica Minolta Laboratory Europe, she held the position of Experience Researcher at Digital Catapult (UK) and participated in various EU activities through the Unify IoT project, the AIOTI, and the EIT, to foster a sustainable and innovative adoption of connected technologies. Teresa received her Ph.D. degree in Computer Science from the University of Trento, focusing on Social Informatics and Human-Computer Interaction.