Before you can deploy and test models with ExecuTorch, you need to set up your local development environment. This section walks you through installing system dependencies, creating a virtual environment, and cloning the ExecuTorch repository on Ubuntu or WSL. Once complete, you’ll be ready to run TinyML models on a virtual Arm platform.
These instructions have been tested on Ubuntu, both natively and under Windows Subsystem for Linux (WSL).
Run the following commands to install the dependencies:
sudo apt update
sudo apt install python-is-python3 python3-dev python3-venv gcc g++ make -y
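As a quick sanity check, you can confirm that the toolchain is available; the exact versions reported will vary with your distribution:
python3 --version
gcc --version
make --version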
Create and activate a Python virtual environment:
python3 -m venv $HOME/executorch-venv
source $HOME/executorch-venv/bin/activate
Your shell prompt should now start with (executorch-venv), indicating that the environment is active.
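For example, with the virtual environment created at $HOME/executorch-venv, the prompt looks similar to the following (the username and hostname are placeholders):
(executorch-venv) user@ubuntu:~$
You can also confirm that the environment's Python interpreter is the one in use:
which python
This should print a path inside $HOME/executorch-venv/bin.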
Clone the ExecuTorch repository and install dependencies:
cd $HOME
git clone https://github.com/pytorch/executorch.git
cd executorch
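Optionally, note the commit you have just checked out; the short hash also appears as the suffix of the installed package version later in this section:
git log -1 --oneline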
Initialize and update the git submodules:
git submodule sync
git submodule update --init --recursive
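To verify that the submodules were fetched, list them along with their checked-out commits:
git submodule status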
Then run the installation script to install ExecuTorch and its dependencies into the active virtual environment:
./install_executorch.sh
If you encounter a stale buck environment, reset it using:
ps aux | grep buck
pkill -f buck
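After stopping the processes, confirm that none remain before rerunning the installation script:
pgrep -f buck || echo "No buck processes running"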
Check that ExecuTorch is correctly installed:
pip list | grep executorch
The expected output is similar to:
executorch 0.8.0a0+92fb0cc
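As an additional check, you can confirm that the package imports cleanly from Python (this only verifies that the module loads; it does not exercise any runtime functionality):
python -c "import executorch; print('ExecuTorch import OK')"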
Now that ExecuTorch is installed, you’re ready to simulate your TinyML model on an Arm Fixed Virtual Platform (FVP). In the next section, you’ll configure and launch the FVP.