Pilots

Demonstration of scientific progress and applications pilots

The project will implement three different use cases as pilots, which will in turn help draft guidelines and establish best practices for implementers. The resulting demonstrators will ensure effective integration, evaluation and validation of the base algorithms, the infrastructure and the edge hardware.

Three major warehouse management challenges are inventory management, product layout, and order picking and accuracy. This use case addresses these warehouse-related problems by way of drone monitoring and inspection: autonomous drones will be deployed to monitor the outside perimeter of the warehouse to ensure security and, inside the warehouse, to carry out rack inspections and asset tracking. The resulting monitoring information will be displayed in a newly created digital twin (DT) of the warehouse, which will be accessible to workers via a screen or directly in their Head-Mounted Display (HMD).

The outer perimeter of the facility will be monitored for intruders, trespassers and any type of irregular activity. As soon as security cameras are triggered by the detection of movement, drones will be launched automatically. The drones' activity is displayed continuously in a Spatial Mobility Portal; administrators and workers with the appropriate permissions may use the portal to view and track drone activity as well as the live surveillance camera feed. The drones are equipped with speakers and lights to allow for direct communication with potential trespassers. Similarly, to optimize warehouse management, drones will be flown inside the warehouse for rack inspection and asset tracking. The mapping achieved by the drones results in a warehouse DT that is displayed in the Spatial Mobility Portal, where the entire contents of the warehouse are inventoried and can be accessed by workers via the portal or their HMD.
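The trigger chain from security camera to drone to portal can be summarised in a few lines. The sketch below is a minimal, hedged illustration assuming a simple in-process event handler; all names (MotionEvent, Drone, dispatch_on_motion, portal_publish) are hypothetical and are not part of the pilot's actual interfaces.

```python
# Hypothetical sketch of the perimeter-monitoring trigger chain: a motion event
# from a security camera leads to an automatic drone launch and a status update
# in the Spatial Mobility Portal. All names here are illustrative only.
from dataclasses import dataclass

@dataclass
class MotionEvent:
    camera_id: str
    zone: str          # perimeter zone where movement was detected
    timestamp: float

class Drone:
    def __init__(self, drone_id: str):
        self.drone_id = drone_id
        self.busy = False

    def launch_to(self, zone: str) -> None:
        self.busy = True
        print(f"{self.drone_id}: launching to zone {zone}")

def dispatch_on_motion(event: MotionEvent, fleet: list[Drone], portal_publish) -> None:
    """Launch the first idle drone towards the triggered zone and notify the portal."""
    for drone in fleet:
        if not drone.busy:
            drone.launch_to(event.zone)
            portal_publish({"drone": drone.drone_id, "zone": event.zone,
                            "camera": event.camera_id, "time": event.timestamp})
            return
    portal_publish({"warning": "no idle drone available", "zone": event.zone})

# Example: one motion event, two drones, portal updates printed to stdout.
fleet = [Drone("drone-1"), Drone("drone-2")]
dispatch_on_motion(MotionEvent("cam-07", "north-fence", 1700000000.0), fleet, print)
```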

The drones are dynamically routed: the optimal route is continuously recalculated to provide workers with minimal travel time and to avoid congestion in high-traffic areas. In this context, Human-Robot Collaboration (HRC) will also be introduced to address warehouse management challenges. Autonomous Mobile Robots (AMRs) normally lack the ability to make decisions in unexpected or unknown situations, where human competence becomes essential. To this end, HRC between human workers and AMRs will be explored at both the coexistence and the collaboration level, under the scenario of order picking and delivery in a warehouse. Human workers will be able to work safely, without hazards, in a workspace shared with AMRs on (1) a different task, where the human walks into the AMRs' workspace to place products on the rack, and (2) a shared task, where the human places products on the rack, from which the robot picks up the products and delivers them to the shipment station. Edge devices with AI models will be developed in WP5 and integrated with the AMRs to support HRC and to enable DTs for both human workers and AMRs in the warehouse.
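As an illustration of the dynamic routing idea, the sketch below re-plans a route whenever the congestion picture changes, with edge costs taken as base travel times inflated by a live congestion factor. The graph, congestion values and function names are assumptions for illustration only, not the pilot's actual planner.

```python
# Minimal sketch of congestion-aware dynamic routing, assuming the warehouse is
# modelled as a graph whose edge costs are travel times inflated by a live
# congestion factor. The graph and names are illustrative only.
import heapq

def shortest_route(graph, congestion, start, goal):
    """Dijkstra over edge costs = base travel time * (1 + congestion[edge])."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, base_time in graph.get(node, {}).items():
            cost = base_time * (1.0 + congestion.get((node, nxt), 0.0))
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(queue, (nd, nxt))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# Example: rising congestion on aisle B-C shifts the re-planned route via D.
graph = {"A": {"B": 1.0, "D": 1.5}, "B": {"C": 1.0}, "D": {"C": 1.2}, "C": {}}
print(shortest_route(graph, {}, "A", "C"))                  # ['A', 'B', 'C']
print(shortest_route(graph, {("B", "C"): 2.0}, "A", "C"))   # ['A', 'D', 'C']
```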

This use case targets AI-supported triage and prioritisation of satellite Earth Observation data for downlink to Earth. This is of high importance whenever the respective stakeholders must be informed quickly, for example for disaster management (wildfires, oil spills, etc.) and/or environmental monitoring. Images are pre-processed on the edge (onboard the satellite) so that only relevant, time-critical information is passed on to stakeholders/responders on Earth. AI models for edge inference and strategies for continual, efficient edge learning will be explored, as will multi-agent systems spanning multiple edge devices and novel methods for efficient and robust communication. Specific hardware requirements imposed by onboard processing will be addressed. High-resolution Earth Observation satellites are mostly located in Low Earth Orbit (LEO), at about 500-800 km altitude above sea level. In the current operational state of the art, the raw data acquired by the satellite is stored until contact with the next ground station is established, where the data is downlinked and processed. Alternatively, the raw data can be transferred via satellite-to-satellite communication, with a rather limited data rate, and processed thereafter on the ground. Both possibilities lead to a high latency between acquisition and delivery to the end user.

For example, near real-time (NRT) requirements specify delivery within 20 minutes after downlink, so an overall latency of about 30 minutes is reached once average satellite travel and downlink times are taken into account. With onboard processing, results of the H2020 project EO-ALERT showed that a processing latency of one minute is achievable, a significant improvement that is crucial for many time-critical scenarios.

In this use case, an onboard processing scenario will be implemented on a ground-based demonstration system. The implemented data chain will cover all steps necessary in a real system, starting from raw instrument data, running AI inference, and ending with small data packages containing only the necessary amount of data to be transferred. The chosen demonstrator scenario is environmental monitoring, such as sea ice classification. This task is of increasing importance as climate change melts the polar ice sheets and opens the shorter, but risky, Northeast and Northwest Passages to commercial shipping. The data used will be Synthetic Aperture Radar (SAR) data; its benefits include an all-day, all-weather capability, which is especially important during polar nights and cloudy weather and ensures a high reliability of the warning system. AI algorithms for environmental monitoring applied in this use case will be developed in WP5, utilizing the models, concepts and design paradigms conceived therein. Training will be carried out using publicly available datasets, such as the sea-ice dataset generated within the framework of the H2020 ExtremeEarth project, which will be further enhanced using data cleaning and active learning-based annotation tools. Inference workflows will be implemented in the demonstration system. The envisaged time for processing up to data transfer is below one minute. The results of this use case are intended to be representative of a wide range of detection tasks and platforms and will demonstrate the possibilities of AI inference on the edge (of Earth).
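To make the intended data chain concrete, the sketch below tiles a raw scene, applies a stand-in classifier in place of the trained sea-ice model, and packs only the metadata of relevant tiles into a small downlink message. This is a minimal illustration under stated assumptions, not the demonstrator's implementation; the classifier, threshold and message layout are placeholders.

```python
# Illustrative sketch of the onboard data chain: raw SAR tiles are classified on
# the edge and only tiles flagged as relevant are packed into a compact downlink
# message. The "model", thresholds and packet format are assumptions.
import json, zlib
import numpy as np

def classify_tile(tile: np.ndarray) -> float:
    """Stand-in for the onboard AI model: returns a relevance score in [0, 1]."""
    return float(np.clip(tile.mean() / 255.0, 0.0, 1.0))

def triage_scene(scene: np.ndarray, tile_size: int = 256, threshold: float = 0.6) -> bytes:
    """Split a SAR scene into tiles, keep only relevant ones, return a compact packet."""
    detections = []
    h, w = scene.shape
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            score = classify_tile(scene[y:y + tile_size, x:x + tile_size])
            if score >= threshold:
                detections.append({"row": y, "col": x, "score": round(score, 3)})
    # Only metadata for relevant tiles is downlinked, not the raw imagery.
    return zlib.compress(json.dumps({"detections": detections}).encode())

# Example with random data standing in for a raw SAR acquisition.
scene = np.random.randint(0, 256, size=(1024, 1024), dtype=np.uint8)
packet = triage_scene(scene)
print(f"downlink packet: {len(packet)} bytes for a {scene.nbytes} byte scene")
```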

The objective of this use case is to provide reliable edge and fog computing in a generic smart city. Specifically, with this use case dAIEDGE will enable:

The integration of novel AI-based methods on top of cutting-edge heterogeneous edge devices, for instance extreme edge learning algorithms for continual/lifelong training for the identification of road users, or distributed learning. This will allow knowledge to be generated from the data produced by the city.

The implementation of services that are efficient and can be used in a generic network infrastructure. These services can support decentralized AI and will serve as a basis for demonstrating reliable fog-to-edge computation and communication in the smart city; a minimal illustrative sketch of this fog-to-edge pattern is given after this list. This eases the interaction of the edge with vehicles or with other technologies such as VR/AR devices.

The deployment of high-performance, low-power edge devices, which will bring these technologies closer to real-world adoption in a smart city environment.
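As referenced in the list above, the sketch below illustrates one possible fog-to-edge pattern: edge nodes run AI inference locally and forward only compact results to a fog node that aggregates them for city-level services. Class and method names (EdgeNode, FogNode, ingest, city_view) are assumptions for illustration; the pilot's actual services and protocols are defined in the project's technical work packages.

```python
# Minimal sketch of a fog-to-edge pattern: edge nodes keep raw frames local and
# send only per-class counts to a fog node that builds a city-level view.
# All names are illustrative, not project APIs.
from collections import defaultdict

class EdgeNode:
    def __init__(self, node_id: str, fog):
        self.node_id, self.fog = node_id, fog

    def process_frame(self, detections: list[str]) -> None:
        # Raw frames stay on the edge; only a compact summary reaches the fog.
        summary = defaultdict(int)
        for label in detections:
            summary[label] += 1
        self.fog.ingest(self.node_id, dict(summary))

class FogNode:
    def __init__(self):
        self.state = {}

    def ingest(self, node_id: str, summary: dict) -> None:
        self.state[node_id] = summary

    def city_view(self) -> dict:
        totals = defaultdict(int)
        for summary in self.state.values():
            for label, count in summary.items():
                totals[label] += count
        return dict(totals)

fog = FogNode()
EdgeNode("cam-north", fog).process_frame(["car", "car", "pedestrian"])
EdgeNode("cam-south", fog).process_frame(["bike", "car"])
print(fog.city_view())   # {'car': 3, 'pedestrian': 1, 'bike': 1}
```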

Furthermore, the dAIEDGE framework will allow the successful deployment of AI in human-centric applications, putting particular emphasis on strengthening the uptake of these new technologies in society and improving the security of personal data, as well as contributing to road safety. We expect that the achievement of this use case may provide guidelines for the successful deployment of these technologies in a smart city.

The validation of the results will take place in the Modena Automotive Smart Area (MASA), a 3 km² area of urban territory adjacent to a transportation hub (train station, bus stops) and equipped with cameras, sensors and private communication networks (4G, soon 5G). The area is used by the local university and companies as a living lab and testing ground for a number of automotive and smart-community projects, and for experimentation with vehicles featuring Cooperative, Connected and Automated Mobility (CCAM) capabilities. To this end, MASA features cameras around the area that (1) capture road images for the detection and tracking of road users (e.g., cars, bikes, pedestrians) and (2) produce geo-localized information (GPS coordinates) of the tracked objects in the world.
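One common way a fixed roadside camera can geo-localise its detections is via a ground-plane homography obtained from camera calibration; the sketch below illustrates this under that assumption. The matrix values are placeholders (roughly centred on Modena for readability), not MASA calibration data, and the function names are hypothetical.

```python
# Hedged sketch of mapping an image detection to a geo-localised position: a
# ground-plane homography (from camera calibration) projects the bottom-centre
# pixel of a bounding box to world coordinates. Placeholder values only.
import numpy as np

H = np.array([[1.0e-5, 0.0,     44.6455],   # placeholder, effectively affine:
              [0.0,    1.2e-5,  10.9254],   # pixel -> (lat, lon) near Modena
              [0.0,    0.0,      1.0]])

def pixel_to_gps(u: float, v: float, homography: np.ndarray) -> tuple[float, float]:
    """Project an image point (u, v) onto the ground plane and return (lat, lon)."""
    x, y, w = homography @ np.array([u, v, 1.0])
    return (x / w, y / w)

# Bottom-centre of a tracked pedestrian's bounding box, in pixels.
lat, lon = pixel_to_gps(640.0, 512.0, H)
print(f"tracked object at lat={lat:.5f}, lon={lon:.5f}")
```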

MASA administrators are planning the adoption of new technologies to improve traffic safety, security and privacy. Along these lines, the objective is to communicate warning alerts to road users, such as cars, or to other smart technologies, such as VR/AR devices, whenever there is a risk of an accident. Unfortunately, the edge technologies currently featured by MASA are functional but do not provide safe and reliable end-to-end computing guarantees: the processing time taken by the edge and fog cannot satisfy safety-related requirements.
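To illustrate what an end-to-end guarantee means for such a warning path, the sketch below times each stage of an edge/fog alert pipeline and checks the total against a safety deadline. The 100 ms budget, the stage functions and their delays are assumptions for illustration only, not MASA requirements or measurements.

```python
# Illustrative sketch of an end-to-end latency check on an edge/fog alert path:
# each stage is timed and the warning is only considered valid if the total
# latency stays within an assumed safety deadline.
import time

SAFETY_DEADLINE_S = 0.100   # assumed end-to-end budget for a collision warning

def timed(stage_fn, *args):
    start = time.perf_counter()
    result = stage_fn(*args)
    return result, time.perf_counter() - start

def detect_on_edge(frame):           # stand-in for edge inference
    time.sleep(0.02)
    return {"risk": True, "object": "pedestrian"}

def fuse_on_fog(detection):          # stand-in for fog-level risk assessment
    time.sleep(0.03)
    return {"alert": detection["risk"]}

def dispatch_alert(alert):           # stand-in for V2X / AR-headset notification
    time.sleep(0.01)
    return "sent" if alert["alert"] else "none"

def alert_pipeline(frame):
    total = 0.0
    detection, dt = timed(detect_on_edge, frame); total += dt
    alert, dt = timed(fuse_on_fog, detection);    total += dt
    status, dt = timed(dispatch_alert, alert);    total += dt
    return status, total, total <= SAFETY_DEADLINE_S

status, latency, ok = alert_pipeline(frame=None)
print(f"alert {status}, end-to-end latency {latency*1000:.1f} ms, within budget: {ok}")
```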