Introduction
Understand Flyte and gRPC ML workflows on Google Axion
Create a Google Axion C4A Arm virtual machine
Install Flyte and gRPC tools on Axion
Build a gRPC feature engineering service
Create an ML training workflow
Execute and validate the ML pipeline
Understand the distributed ML architecture
Next Steps
Flyte is an open-source workflow orchestration platform used to build scalable and reproducible data and machine learning pipelines. Combined with gRPC for efficient distributed service communication, Flyte enables developers to define workflows as Python tasks while delegating specific operations to independent microservices. Running this stack on Google Axion C4A Arm-based processors provides efficient, scalable infrastructure for executing modern ML workflows and distributed data processing tasks.
Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for data-intensive and analytics workloads such as big data processing, in-memory analytics, columnar data processing, and high-throughput data services.
The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability, SIMD acceleration, and memory bandwidth advantages of the Arm architecture in Google Cloud.
These characteristics make Axion C4A instances well-suited for modern analytics stacks that rely on columnar data formats and memory-efficient execution engines.
To learn more, see the Google blog Introducing Google Axion Processors, our new Arm-based CPUs.
Flyte lets developers express complex ML processes, such as data preparation, feature engineering, and model training, as type-annotated Python functions composed into workflows. Flyte then derives the dependency graph between tasks and handles their scheduling and execution.
gRPC provides fast, contract-based communication between the distributed services these pipelines call, so a Flyte task can delegate work, such as feature engineering, to an independent microservice over a strongly typed interface.
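In gRPC, that contract is declared in a Protocol Buffers (`.proto`) file, from which client and server stubs are generated. The following is a hypothetical sketch of the kind of interface a feature engineering service might expose; the service, method, and message names are illustrative, not the ones used later in this Learning Path:

```protobuf
syntax = "proto3";

package features;

// Hypothetical feature engineering service a Flyte task could call.
service FeatureEngineering {
  // Transform raw values into model-ready features.
  rpc Transform (FeatureRequest) returns (FeatureResponse);
}

message FeatureRequest {
  repeated double raw_values = 1;
}

message FeatureResponse {
  repeated double features = 1;
}
```

Running `protoc` (or `python -m grpc_tools.protoc`) against a definition like this generates the message classes and service stubs that the pipeline's client and server code import.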
To learn more, visit the Flyte documentation and explore the gRPC documentation to understand how distributed service communication enables scalable machine learning workflows.
In this section, you learned about Flyte's role as a workflow orchestration platform, how gRPC enables communication between distributed services, and the Google Axion C4A Arm-based virtual machines that will run the pipeline.
Next, you will deploy Flyte tools, create a gRPC-based feature engineering service, and build a distributed ML workflow pipeline that orchestrates data processing and model training tasks on Axion infrastructure.