With .vgf as an intermediate artifact, it helps to inspect the exported graph before you integrate it into your runtime. Install and launch Model Explorer with the VGF adapter:
```bash
pip install vgf-adapter-model-explorer
pip install torch ai-edge-model-explorer
model-explorer --extensions=vgf_adapter_model_explorer
```
Open the .vgf files from ./output/ and ./output_qat/.
When you review the graph, look for:

- Unexpected layout conversions (for example, extra transpose operations)
- Operators that you did not intend to run on your GPU path
- Model I/O shapes that do not match your integration
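One way to catch I/O shape mismatches early is to write down the contract you expect and compare it against what Model Explorer shows. The sketch below is a minimal, hypothetical helper; the tensor names and shapes are placeholders you would replace with your own model's contract:

```python
# Hypothetical expected I/O contract for the exported .vgf model.
# Shapes follow NCHW; adjust names and dimensions to match your export.
EXPECTED_IO = {
    "input": (1, 3, 32, 32),
    "output": (1, 3, 32, 32),
}

def check_io_shapes(actual: dict) -> list:
    """Return a list of human-readable mismatches between the expected
    contract and the shapes you observed in Model Explorer."""
    problems = []
    for name, expected in EXPECTED_IO.items():
        got = actual.get(name)
        if got != expected:
            problems.append(f"{name}: expected {expected}, got {got}")
    return problems
```

Running this against the shapes you read off the graph gives you an explicit list of discrepancies to resolve before integration, instead of discovering them at runtime.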
The fastest way to understand the integration constraints is to start from a known-good sample and then replace the model. Use the Learning Path *Get started with neural graphics using ML Extensions for Vulkan* and focus on how the sample loads and executes .vgf artifacts. This is where you validate assumptions about input and output tensor formats, and where any required color-space or layout conversions happen.
You now have a complete reference workflow for quantizing an image-to-image model with TorchAO and exporting INT8 .vgf artifacts using the ExecuTorch Arm backend. You also have a practical baseline you can use to debug export issues before you switch to your production model and data.
When you move from the CIFAR-10 proxy model to your own model, keep these constraints in mind: