ACES, Edge-native Cloud Infrastructure for Data-Dense Environments and Applications

The continuous increase of data density in organisational, service, and industrial processes in our living and working environments can draw important and wide-ranging benefits from a powerful edge computing infrastructure designed to process Big Data and run AI locally. This distributed infrastructure supports the interoperability of applications and platforms that increasingly incorporate autonomous intelligent behaviours and functions. Consequently, the cloud-to-edge (North-South) infrastructure orientation that charted the path into the 4th Industrial Revolution is now transforming into an edge-to-edge-to-cloud (East-West) orientation with an ‘edge-first’ emphasis.

The management of such an infrastructure follows the same development trajectory as the applications and platforms it supports: autonomous, ‘edge-first’ East-West interoperability and distributed intelligent, and at times emergent, behaviour.

ACES is building the first generation of an infrastructure management platform for powerful edge as a service (PEaaS). ACES is a performant, low-latency, high-availability platform that leverages cognitive technology to flexibly orchestrate and arrange clusters of hyperconverged servers in a mesh of compact EdgeMicroDataCenters (EMDCs). ACES exhibits a self-orchestrating, self-configuring, self-repairing and self-adjusting intelligence that takes full advantage of Artificial Intelligence, Machine Learning and Swarm Intelligence to run a self-sustaining edge infrastructure. The future-proof design of this intelligence can orchestrate a fully composable data centre, that is, a data centre equipped with bleeding-edge hardware, currently being introduced to the market, that relies on PCIe-CXL technology.
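As a rough illustration of the self-configuring and self-repairing behaviour described above, the sketch below shows a minimal autonomic monitor-analyse-plan-execute loop over edge nodes. All names used here (EMDCNode, probe_health, plan_actions) are hypothetical and do not reflect the ACES implementation.

```python
# Minimal sketch of an autonomic control loop for edge nodes (monitor -> analyse/plan -> execute).
# All names are illustrative assumptions, not the ACES API.
import time
from dataclasses import dataclass

@dataclass
class EMDCNode:
    name: str
    healthy: bool = True
    cpu_load: float = 0.0   # fraction of CPU in use, 0.0 - 1.0

def probe_health(node: EMDCNode) -> EMDCNode:
    """Monitor phase: a real system would query node telemetry here."""
    return node

def plan_actions(nodes: list[EMDCNode]) -> list[str]:
    """Analyse/plan phase: derive repair or rebalance actions from observed state."""
    actions = []
    for node in nodes:
        if not node.healthy:
            actions.append(f"redeploy services away from {node.name}")
        elif node.cpu_load > 0.85:
            actions.append(f"rebalance workloads off {node.name}")
    return actions

def control_loop(nodes: list[EMDCNode], iterations: int = 3) -> None:
    """Execute phase: apply planned actions, then observe again."""
    for _ in range(iterations):
        observed = [probe_health(n) for n in nodes]
        for action in plan_actions(observed):
            print("executing:", action)   # placeholder for real orchestration calls
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop([EMDCNode("emdc-0", cpu_load=0.9), EMDCNode("emdc-1", healthy=False)])
```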

The ACES autopoiesis framework is a future-proof approach, and already the first generation of its platform enables reliable and self-adaptive execution of processing, storage, and networking services at the edge, guaranteeing continuous self-optimising performance, little to no overprovisioning or underutilisation of infrastructure, and reduced data transfer needs. ACES is thus an important element in the realisation of cost-efficient, high-performing edge infrastructures.

ACES integrates the latest container orchestration tooling (Kubernetes), complemented with a mix of beyond-state-of-the-art swarm technology (emergent intelligence) and AI/ML. The swarm is not a typical homogeneous one (a large number of elements of the same ‘species’), nor does it rely on the behaviour of a single species (birds, ants, etc.). ACES has built a heterogeneous custom swarm by combining large numbers of entities from different custom-built species; for each custom-built species, the initial implementation clones and combines characteristics from multiple well-known swarm species. The additional AI/ML is intended to feed the swarm with triggers, tweak the swarm agents, and embed levels of explainability within the swarm behaviour.
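To make the idea of a heterogeneous swarm concrete, the following sketch mixes agent "species" whose behaviours are borrowed from well-known swarm models (ant-style pheromone following, bee-style random scouting). The class and function names (AntLikeAgent, BeeLikeAgent, run_swarm) are illustrative assumptions and do not describe the ACES swarm implementation.

```python
# Minimal sketch of a heterogeneous swarm of scheduling agents placing workloads on edge nodes.
# Each "species" blends behaviours from well-known swarm models; names are illustrative only.
import random
from abc import ABC, abstractmethod

class SwarmAgent(ABC):
    @abstractmethod
    def choose_node(self, pheromone: dict[str, float]) -> str:
        """Pick an edge node for the next workload placement."""

class AntLikeAgent(SwarmAgent):
    """Exploits shared state: prefers the node with the strongest pheromone trail."""
    def choose_node(self, pheromone):
        return max(pheromone, key=pheromone.get)

class BeeLikeAgent(SwarmAgent):
    """Explores: occasionally scouts a random node to avoid local optima."""
    def __init__(self, explore_prob: float = 0.3):
        self.explore_prob = explore_prob
    def choose_node(self, pheromone):
        if random.random() < self.explore_prob:
            return random.choice(list(pheromone))
        return max(pheromone, key=pheromone.get)

def run_swarm(agents: list[SwarmAgent], pheromone: dict[str, float], steps: int = 5):
    """Each step, every agent places one abstract workload and reinforces its choice."""
    for _ in range(steps):
        for agent in agents:
            node = agent.choose_node(pheromone)
            pheromone[node] += 0.1          # reinforcement: good placements attract more traffic
        for node in pheromone:              # evaporation keeps the swarm adaptive
            pheromone[node] *= 0.95
    return pheromone

if __name__ == "__main__":
    swarm = [AntLikeAgent(), BeeLikeAgent(), BeeLikeAgent(0.5)]
    print(run_swarm(swarm, {"emdc-0": 1.0, "emdc-1": 1.0, "emdc-2": 1.0}))
```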

The rationale behind this design is twofold: to overcome the limited responsiveness and flexibility of current orchestration mechanisms, and to prepare for a future in which data centres are built from composable hardware.

Furthermore, to improve the efficiency of the standard Kubernetes orchestrator under highly dynamic workloads, various AI/ML and specific swarm technologies have been deployed. Compared with the standard Kubernetes orchestrator the efficiency improved, yet the studies drew two main conclusions: AI/ML is too slow in grasping the fast-changing dynamics of the workloads, and, although swarm technologies are faster, they do not yet provide enough flexibility in adaptivity, nor has such adaptivity been explainable so far. Swarm technologies also require a large number of entities in order to work efficiently.

The current state of the art for an edge node is a hyperconverged server (CPU, memory) with PCIe-attached storage and accelerators. By implementing a dual high-speed fabric (Ethernet and PCIe), the EMDC can dynamically offer any mix of CPU cores (including memory), accelerators and storage to the orchestrator. This capability is unprecedented: with the arrival of PCIe-CXL, the individual components in the data centre become fully disaggregated, creating pools of CPU cores, accelerator cores, RAM, and storage. The ACES orchestrator can then dynamically create the right hardware configuration for each incoming workload and, after execution, return the individual components to their resource pools. While not all hardware components are currently CXL-capable, the ACES project is already developing the orchestrator for the fully composable data centre. Especially at the edge, where each EMDC has, despite its high density, limited resources, composability will allow very high utilisation levels and energy efficiency, ultimately leading to cost optimisation of the edge infrastructure.
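The "compose from pools, then return" idea behind such a disaggregated, PCIe/CXL-style data centre can be sketched as below. The resource names, quantities, and functions (ResourcePools, compose, release) are hypothetical illustrations under stated assumptions, not the ACES orchestrator API.

```python
# Minimal sketch of composable resource allocation: carve a node-like configuration out of
# disaggregated pools for a workload, then return the components afterwards.
# All names and quantities are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ResourcePools:
    cpu_cores: int = 64
    accelerators: int = 4
    ram_gb: int = 512
    storage_tb: int = 20

@dataclass
class Workload:
    name: str
    cpu_cores: int
    accelerators: int
    ram_gb: int
    storage_tb: int

def compose(pools: ResourcePools, wl: Workload) -> dict | None:
    """Reserve components from the pools for one workload, if enough are free."""
    if (wl.cpu_cores > pools.cpu_cores or wl.accelerators > pools.accelerators
            or wl.ram_gb > pools.ram_gb or wl.storage_tb > pools.storage_tb):
        return None                     # not enough free components in the pools
    pools.cpu_cores -= wl.cpu_cores
    pools.accelerators -= wl.accelerators
    pools.ram_gb -= wl.ram_gb
    pools.storage_tb -= wl.storage_tb
    return {"workload": wl.name, "cpu": wl.cpu_cores, "acc": wl.accelerators,
            "ram_gb": wl.ram_gb, "storage_tb": wl.storage_tb}

def release(pools: ResourcePools, config: dict) -> None:
    """After execution, return the individual components to their pools."""
    pools.cpu_cores += config["cpu"]
    pools.accelerators += config["acc"]
    pools.ram_gb += config["ram_gb"]
    pools.storage_tb += config["storage_tb"]

if __name__ == "__main__":
    pools = ResourcePools()
    cfg = compose(pools, Workload("inference-job", cpu_cores=8, accelerators=1, ram_gb=64, storage_tb=1))
    print("composed:", cfg, "| remaining:", pools)
    release(pools, cfg)
    print("released |  remaining:", pools)
```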
