Execute and validate the ML pipeline
In this section, you execute the distributed machine learning pipeline built using Flyte and gRPC.
The ML workflow will:
- Load and preprocess the training dataset
- Call the gRPC feature engineering service to generate features
- Train and evaluate the model
- Report the overall pipeline result
The feature engineering service runs independently and communicates with the workflow using gRPC remote procedure calls.
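Conceptually, the workflow task acts as a gRPC client: it sends a request message to the service and receives a response message back. The exchange can be illustrated with a plain-Python stand-in (no grpcio; the message and function names are hypothetical, and the ×10 transform is only inferred from the sample output shown later in this section):

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the message classes that gRPC would
# normally generate from the service's .proto definition.
@dataclass
class FeatureRequest:
    value: int

@dataclass
class FeatureResponse:
    feature: int

def generate_feature(request: FeatureRequest) -> FeatureResponse:
    # Stand-in for the remote call: in the real pipeline this work
    # happens inside feature_server.py on the other side of the
    # gRPC channel. The x10 transform is an assumption inferred
    # from the sample output (20 -> 200).
    return FeatureResponse(feature=request.value * 10)

# The workflow task sends a value and receives an engineered feature.
response = generate_feature(FeatureRequest(value=20))
print(response.feature)
```

In the real workflow, `generate_feature` is replaced by a call through a gRPC channel to the service listening on port 50051.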
Make sure the flyte-env virtual environment is active. If you opened a new terminal, reactivate it:
source ~/flyte-env/bin/activate
Start the feature engineering service that was created in the previous section:
python feature_server.py
The output is similar to:
Feature gRPC service running on port 50051
Leave this terminal running because the ML pipeline will send requests to this service.
Open a new terminal session and navigate to the project directory:
cd ~/flyte-ml-pipeline
Run the workflow:
python workflow.py
The output is similar to:
Loading dataset
Preprocessing dataset: 10
Training model with feature: 200
Model accuracy: 10.0
Pipeline result: Model performance good
During pipeline execution, the following steps occur:
Load Dataset
│
▼
Preprocess Data
│
▼
Feature Engineering (gRPC Service)
│
▼
Model Training
│
▼
Model Evaluation
│
▼
Pipeline Result
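The stages above can be sketched as plain Python functions chained the way the Flyte workflow chains its tasks. This is a simplified stand-in, not the tutorial's implementation: it uses neither flytekit nor grpcio, and the transforms and threshold are hypothetical, chosen only to match the sample output:

```python
def load_dataset() -> int:
    print("Loading dataset")
    return 10  # dataset size from the sample output

def preprocess_data(data: int) -> int:
    print(f"Preprocessing dataset: {data}")
    return data * 2  # hypothetical transform (10 -> 20)

def engineer_feature(value: int) -> int:
    # In the real pipeline, this step is a gRPC call to
    # feature_server.py; the x10 transform is an assumption
    # inferred from the sample output (20 -> 200).
    return value * 10

def train_model(feature: int) -> float:
    print(f"Training model with feature: {feature}")
    return feature / 20  # hypothetical "accuracy" (200 -> 10.0)

def evaluate_model(accuracy: float) -> str:
    print(f"Model accuracy: {accuracy}")
    # Hypothetical threshold for the final verdict.
    return "Model performance good" if accuracy >= 5 else "Model performance poor"

def pipeline() -> str:
    data = load_dataset()
    processed = preprocess_data(data)
    feature = engineer_feature(processed)
    accuracy = train_model(feature)
    return evaluate_model(accuracy)

print("Pipeline result:", pipeline())
```

Running this sketch reproduces the shape of the sample output: each stage consumes the previous stage's result, and only the feature engineering step crosses a process boundary in the real pipeline.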
You can observe activity in the terminal running the feature service. When the workflow sends a request, the service prints a message similar to:
Feature gRPC service running on port 50051
Generating feature for: 20
The output confirms that the Flyte workflow successfully communicated with the gRPC service.
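On the service side, each incoming request is logged before the engineered feature is returned, which is what produces the `Generating feature for:` lines. A minimal stand-in for that handler (hypothetical class and method names, no grpcio; the transform is inferred from the sample output):

```python
class FeatureService:
    """Stand-in for the gRPC servicer implemented in feature_server.py."""

    def GenerateFeature(self, value: int) -> int:
        # Log the request, as seen in the service terminal.
        print(f"Generating feature for: {value}")
        # Hypothetical x10 transform, inferred from the sample
        # output (20 -> 200).
        return value * 10

service = FeatureService()
print(service.GenerateFeature(20))
```

In the real service, this handler is registered with a gRPC server bound to port 50051, and the request arrives as a protobuf message rather than a plain integer.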
In this section, you learned how to:
- Start the gRPC feature engineering service
- Run the Flyte ML workflow that calls the service
- Validate the pipeline output and the gRPC communication between the two processes
In the next section, you will explore the architecture of a distributed ML training pipeline implemented with Flyte and gRPC on Axion infrastructure.