In this section, you create a Python environment with PyTorch, TorchAO, and ExecuTorch components needed for quantization and .vgf export.
If you already use Neural Graphics Model Gym, keep that environment and reuse it here.
Create and activate a virtual environment:
python3 -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip
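Optionally, confirm that the interpreter you are about to use comes from the virtual environment. This is a quick sanity check, not part of the official setup; it relies on the fact that inside a venv `sys.prefix` and `sys.base_prefix` differ:

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv while
# sys.base_prefix points at the base interpreter, so they differ.
in_venv = sys.prefix != sys.base_prefix
print("running inside a virtual environment:", in_venv)
```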
With the virtual environment active, clone the ExecuTorch repository and run its installation script:
git clone https://github.com/pytorch/executorch.git
cd executorch
./install_executorch.sh
From the root of the cloned executorch repository, run the Arm backend setup script:
./examples/arm/setup.sh \
--i-agree-to-the-contained-eula \
--disable-ethos-u-deps \
--enable-mlsdk-deps
In the same terminal session, source the generated setup script so the Arm backend tools (including the model converter) are available on your PATH:
source ./examples/arm/arm-scratch/setup_path.sh
Verify the model converter is available:
command -v model-converter || command -v model_converter
In a Python interpreter, verify that the required packages import:
import torch
import torchvision
import torchao
import executorch
import executorch.backends.arm
from executorch.backends.arm.vgf.partitioner import VgfPartitioner
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("torchao:", torchao.__version__)
If executorch.backends.arm is missing, your ExecuTorch build does not include the Arm backend. Reinstall using a build that provides executorch.backends.arm and the VGF partitioner.
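If you want a single diagnostic that lists every missing package at once, the checks above can be sketched as a small script. Note that `missing_modules` is a hypothetical helper written for this check, not part of ExecuTorch:

```python
import importlib.util

def missing_modules(names):
    """Return the module names that cannot be resolved in this environment."""
    missing = []
    for name in names:
        try:
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # find_spec raises this when a parent package (e.g. executorch)
            # is itself absent.
            missing.append(name)
    return missing

required = [
    "torch",
    "torchvision",
    "torchao",
    "executorch",
    "executorch.backends.arm",  # absent in builds without the Arm backend
]
for name in missing_modules(required):
    print("missing:", name)
```

An empty result means all required packages resolve in the active environment.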
If you checked out a specific ExecuTorch branch (for example, release/1.0) and run into version mismatches, switch to the main branch in the cloned repository and reinstall from source:
pip install -e .
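When diagnosing a mismatch, it can help to compare an installed version string against what you expect. The helper below and the version numbers in the example are illustrative placeholders, not ExecuTorch's actual requirements:

```python
def version_tuple(v):
    """Parse a dotted version string like '2.5.1' into a comparable tuple,
    dropping any local-build suffix (e.g. '2.5.1+cu121')."""
    parts = []
    for piece in v.split("+")[0].split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def meets_minimum(installed, minimum):
    """Return True if the installed version is at least the minimum."""
    return version_tuple(installed) >= version_tuple(minimum)

# Placeholder versions for illustration only.
print(meets_minimum("2.5.1+cu121", "2.4.0"))
```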
In this section, you created a Python virtual environment, installed ExecuTorch with the Arm backend, and added the model converter to your PATH.

In the next section, you apply PTQ to a sample model and generate a .vgf artifact.