Introduction
Understand UltraEdge HPC-I architecture for edge AI and mixed workloads
Provision a Google Axion C4A VM for Yocto image builds on Arm
Build and install Yocto images for NXP S32G-VNP-GLDBOX3 with UltraEdge
Install UltraEdge on Debian and Ubuntu for Edge AI workloads
Run and manage UltraEdge HPC-I for AI and mixed workloads on Arm
Next Steps
UltraEdge is an edge-native, high-performance execution fabric designed to run AI and mixed workloads without the overhead of traditional container platforms. While technologies like Docker and Kubernetes were created for general-purpose cloud environments, they introduce latency, resource bloat, and non-deterministic behavior that are poorly suited for edge deployments.
UltraEdge takes a fundamentally different approach. It replaces heavyweight container runtimes with a lean, deterministic execution stack purpose-built for performance-oriented compute. This enables millisecond-level startup times, predictable performance, and a dramatically smaller resource footprint - allowing workloads to start faster, run closer to the hardware, and make full use of available CPU and GPU resources.
At the core of UltraEdge are two specialized execution systems:
· MicroStack, optimized for enterprise and mixed workloads
· NeuroStack, purpose-built for AI inference and accelerated compute
Together, these systems deliver up to 30x faster startup times and 3.8x smaller package sizes compared to conventional container-based approaches. By removing unnecessary abstraction layers, UltraEdge ensures compute cycles are spent on execution - not on managing the runtime itself.
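To make those multipliers concrete, here is a small arithmetic sketch. The baseline figures (a 3-second container cold start and a 380 MB container image) are assumptions chosen for illustration, not measured values from the document:

```python
# Illustrative arithmetic only; the baseline figures are assumptions.
container_startup_s = 3.0    # assumed cold-start time on a container runtime
container_image_mb = 380.0   # assumed container image size

# Apply the claimed improvements from the text.
ultraedge_startup_s = container_startup_s / 30     # "up to 30x faster startup"
ultraedge_package_mb = container_image_mb / 3.8    # "3.8x smaller package sizes"

print(f"startup: {ultraedge_startup_s * 1000:.0f} ms")  # prints "startup: 100 ms"
print(f"package: {ultraedge_package_mb:.0f} MB")        # prints "package: 100 MB"
```

At those baselines, a 30x speedup brings a 3-second cold start down to roughly 100 ms, which is what the text means by millisecond-level startup times.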
This Learning Path introduces the architecture, principles, and components that make UltraEdge a high-performance execution fabric for modern edge infrastructure.
UltraEdge provides an edge-native execution fabric for high-performance compute infrastructure. Key design principles and capabilities include:
Built-for-edge execution stack
A lightweight, adaptive platform for AI and mixed workloads optimized for low latency, high determinism, and minimal footprint.
Dual workload focus
Native support for both traditional enterprise workloads and next-generation AI workloads, without compromising performance.
Full-stack enablement
Delivered through MicroStack and NeuroStack execution systems, each optimized for its workload domain.
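The dual-workload design above implies a routing decision: each workload runs on the execution system built for its domain. The following sketch illustrates that idea; the function name, workload labels, and interface are assumptions for illustration only, since this section does not document the actual UltraEdge API:

```python
# Hypothetical sketch of UltraEdge's dual-stack dispatch model.
# All names and categories below are illustrative assumptions.

def select_stack(workload_type: str) -> str:
    """Route a workload to the execution system suited to its domain."""
    if workload_type in ("ai-inference", "accelerated-compute"):
        return "NeuroStack"   # purpose-built for AI inference and accelerated compute
    if workload_type in ("enterprise", "mixed"):
        return "MicroStack"   # optimized for enterprise and mixed workloads
    raise ValueError(f"unknown workload type: {workload_type}")

print(select_stack("ai-inference"))  # prints "NeuroStack"
print(select_stack("mixed"))         # prints "MicroStack"
```

The point of the sketch is the separation of concerns: neither stack has to compromise for the other's workload domain.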
UltraEdge organizes functionality into five specialized layers, each responsible for a distinct aspect of workload execution and orchestration:

UltraEdge high-level architecture
In this section, you explored the architecture, design principles, and execution systems that make UltraEdge a high-performance execution fabric for edge infrastructure.
Next, you’ll move on to hands-on installation and configuration of UltraEdge on your target Arm platform.