PROJECT RESULTS
Demos and Videos
DEMOS
This demo showcases the complete data flow within the Urban Digital Twin (UDT). The process starts with data generated by the simulator, which is ingested into an InfluxDB instance. A Producer Service queries this data in windows and publishes it to an Apache Kafka broker. The data is then consumed by two independent, asynchronous services, which model it based on a custom-designed ontology and insert it into a Virtuoso triplestore database. When a new state is required, the State Generator Service queries the RDF data from the triplestore, generates the current state, and stores it in SkyStore. This state file is then distributed to relevant components like the Simulator and Reinforcement Learning module for further processing.
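As an illustration of the producer step, the snippet below is a minimal sketch assuming the Python influxdb-client and kafka-python libraries; the InfluxDB bucket, measurement, and Kafka topic names are hypothetical placeholders, not the ones used in the actual UDT deployment.

    import json

    from influxdb_client import InfluxDBClient
    from kafka import KafkaProducer

    # Connect to the InfluxDB instance holding the simulator data and to the Kafka broker.
    influx = InfluxDBClient(url="http://influxdb:8086", token="my-token", org="udt")
    producer = KafkaProducer(
        bootstrap_servers="kafka:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Query one time window of simulator data (Flux query).
    flux = '''
    from(bucket: "simulator")
      |> range(start: -5m)
      |> filter(fn: (r) => r._measurement == "traffic")
    '''
    tables = influx.query_api().query(flux)

    # Publish every record of the window to the topic consumed by the two modelling services.
    for table in tables:
        for record in table.records:
            producer.send("udt.simulator.window", {
                "time": record.get_time().isoformat(),
                "field": record.get_field(),
                "value": record.get_value(),
            })
    producer.flush()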
This video shows how to operate the workflow orchestration of the TASKA-C MVP on an example dataset. The purpose of this video is to demonstrate how a researcher can set up and trigger the workflow in an “Interactive” mode (without COMPSs). We show how the data (on S3 storage) is converted into a scientific radio image. The complementary video will demonstrate the “Automatic” mode of the same workflow with the help of COMPSs.
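For illustration, the following is a minimal, notebook-style sketch of how such an interactive run could be driven, assuming Lithops for the serverless fan-out; the partition keys and the rebinning function are hypothetical placeholders.

    import lithops

    def rebin_partition(key):
        # Download one measurement-set partition from S3, rebin it,
        # and write the result back; the returned key is illustrative.
        return key + ".rebinned"

    # Hypothetical partition keys of the example dataset on S3.
    partitions = ["taska-c/partition_%02d.ms" % i for i in range(1, 5)]

    # One serverless task per partition; backend and storage come from the Lithops config.
    fexec = lithops.FunctionExecutor()
    fexec.map(rebin_partition, partitions)
    rebinned = fexec.get_result()
    print(rebinned)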
This demo gives a low-level description of how SkyStore is configured to use two different S3 stores: AWS eu-west-1 and eu-central-1. This is explained by identifying the various Kubernetes resources associated with the SkyStore deployment and their respective contents.
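As a rough illustration of this kind of inspection, the sketch below uses the official Python kubernetes client to list the resources of a hypothetical skystore namespace; the namespace name and the role attributed to each resource are assumptions, not the actual deployment layout.

    from kubernetes import client, config

    config.load_kube_config()   # use the current kubeconfig context
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    ns = "skystore"             # hypothetical namespace of the SkyStore deployment
    for d in apps.list_namespaced_deployment(ns).items:
        print("Deployment:", d.metadata.name)
    for cm in core.list_namespaced_config_map(ns).items:
        print("ConfigMap:", cm.metadata.name)   # e.g. per-region S3 endpoint settings
    for s in core.list_namespaced_secret(ns).items:
        print("Secret:", s.metadata.name)       # e.g. credentials for eu-west-1 / eu-central-1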
Cross-Premise Inference Demo
This demo simulates the model transfer and inference in the PER use case. It starts with KServe being deployed on an OVH K8s cluster, using SkyStore as an S3 model repository. In the PER use case, the model trained in the BSC HPC environment is stored in nearby S3 storage, while models served in the edge cluster in Venice use a different S3 cluster that is closer to Venice. This is simulated by storing the model, via the AWS CLI, through a SkyStore S3-proxy connected to one S3 store, and then loading it directly into KServe from a different SkyStore S3-proxy connected to a separate S3 store.
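The cross-proxy transfer can be sketched as follows with boto3 instead of the AWS CLI; the proxy endpoints, credentials, bucket, and object key are hypothetical placeholders.

    import boto3

    # Client pointed at the SkyStore S3-proxy near the training site (BSC side).
    s3_train = boto3.client(
        "s3",
        endpoint_url="http://skystore-proxy-train:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    # Client pointed at the SkyStore S3-proxy near the edge cluster (Venice side).
    s3_edge = boto3.client(
        "s3",
        endpoint_url="http://skystore-proxy-edge:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Store the trained model through the proxy close to the HPC environment.
    s3_train.upload_file("model.joblib", "per-models", "per-model/model.joblib")

    # Verify that the same object is visible through the proxy close to the edge
    # cluster, which is the endpoint KServe's storage initializer would pull from.
    s3_edge.head_object(Bucket="per-models", Key="per-model/model.joblib")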
In this video, we explore the configuration of COMPSs, highlighting the specific parameters required to run it efficiently. The focus here is to demonstrate a larger workflow compared to the TASKA Use Case C demo, specifically a matrix multiplication task. This use case involves a much larger number of tasks and utilizes more computing nodes, showcasing the enhanced parallelism that COMPSs provides. The graphical representation of tasks in the Paraver trace helps us to visualize the execution across multiple nodes, highlighting the parallel tasks and their execution times.
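A minimal PyCOMPSs sketch of the blocked matrix multiplication pattern is given below; the block counts, block sizes, and helper names are illustrative only.

    import numpy as np
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=1)
    def multiply_block(a_block, b_block, c_block):
        # Each call becomes one COMPSs task that can run on any available node.
        return c_block + a_block @ b_block

    def matmul(num_blocks=4, block_size=1024):
        rng = np.random.default_rng()
        A = [[rng.random((block_size, block_size)) for _ in range(num_blocks)]
             for _ in range(num_blocks)]
        B = [[rng.random((block_size, block_size)) for _ in range(num_blocks)]
             for _ in range(num_blocks)]
        C = [[np.zeros((block_size, block_size)) for _ in range(num_blocks)]
             for _ in range(num_blocks)]

        # num_blocks**3 task invocations; COMPSs builds the dependency graph from
        # the reuse of the C blocks and schedules independent tasks in parallel.
        for i in range(num_blocks):
            for j in range(num_blocks):
                for k in range(num_blocks):
                    C[i][j] = multiply_block(A[i][k], B[k][j], C[i][j])
        return compss_wait_on(C)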
In this video, we showcase how we implement a workflow using COMPSs for the TASKA use case, specifically focusing on TASKA Use Case C. This use case involves both an interactive part, managed manually with a Jupyter notebook, and an automatic version, which we focus on here. In the automatic version, the scientist has predefined the steps for rebinning, calibration, and imaging of the datasets, and we use COMPSs to parallelize the execution of these steps efficiently.
Additionally, each task runs a Lithops process, which helps enhance parallelism even further. An important point to note is the use of COMPSs decorators in the code, making it straightforward for developers to adapt and optimize their code for distributed execution by simply adding some decorators.
Finally, by leveraging Paraver, we are able to visualize task dependencies, execution times, and the resources where tasks were executed. This allows for deep insight into the workflow performance and highlights the effectiveness of parallelism in our solution.
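To give a flavour of the decorator-based approach described above, the following is a minimal sketch assuming PyCOMPSs; the step functions and dataset paths are hypothetical, and in the actual workflow each step would additionally fan out its own work with Lithops.

    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=1)
    def rebin(ms_path):
        # Rebin one measurement set (internally parallelised with Lithops).
        return ms_path + ".rebinned"

    @task(returns=1)
    def calibrate(rebinned_path):
        return rebinned_path + ".calibrated"

    @task(returns=1)
    def image(calibrated_path):
        return calibrated_path + ".fits"

    if __name__ == "__main__":
        datasets = ["s3://taska-c/partition_%02d.ms" % i for i in range(1, 5)]
        # Independent rebin -> calibrate -> image chains; COMPSs tracks the data
        # dependencies between the decorated steps and runs the chains in parallel.
        images = [image(calibrate(rebin(ms))) for ms in datasets]
        images = compss_wait_on(images)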
This demo shows how Kubernetes cluster creation, management and deletion is automated with the Ansible tool. Ansible ensures idempotency, meaning that playbooks achieve the same result regardless of how many times they are applied. This allows all partners of the project to create their Kubernetes clusters in the same way.
This demo is based on the original SkyStore demonstrator from D4.2, but using K8s onboarding. In this case, both clients’ S3 operations appear to operate on a single (virtual) common S3 storage, even though the clients are connected to different S3 stores through their accompanying S3-proxies.
The video demonstrates how Binare’s security toolkit, developed specifically for EXTRACT, automatically scans and identifies vulnerabilities and CVEs within Docker images that are pushed to the EXTRACT code and repository bases. This ensures that every image is scanned before deployment, so that remediation is proactive and can be monitored and followed up.
VIDEOS