Build TensorFlow with SVE enabled

Now that you have seen that you can use Eigen with SVE enabled, it's time to build your own SVE-enabled TensorFlow.

TensorFlow is a complex application, and building it requires significant effort. However, by following the instructions below, you should be able to build and run it.

Install Build Requirements for TensorFlow

You are going to follow the TensorFlow instructions for building from source, with some slight modifications.

Before you build TensorFlow, you need to install the build dependencies.

The following packages are required for the recent Debian/Ubuntu distribution used here. You might have to change the package names if you are using a different Linux distribution:


            sudo apt install -y gcc g++ python3-pip golang python3-virtualenv default-jdk-headless patchelf libhdf5-dev

You also need to download bazelisk, a Go-based launcher that you can use in place of bazel. Download the Linux arm64 version, rename it to bazel, and add it to your search path. One way is to put the file in your $HOME/bin directory and add that directory to your $PATH:


            mkdir -p ~/bin
wget -O ~/bin/bazel
chmod +x ~/bin/bazel
export PATH=$PATH:$HOME/bin
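Note that the export above only affects the current shell session. To make the change permanent, you can append it to your shell startup file; a minimal sketch, assuming bash (adjust the file for your shell):

```shell
# Sketch: persist the PATH change across sessions (assumes bash).
mkdir -p "$HOME/bin"
echo 'export PATH="$PATH:$HOME/bin"' >> "$HOME/.bashrc"
export PATH="$PATH:$HOME/bin"

# Confirm the directory is now on the search path.
case ":$PATH:" in
  *":$HOME/bin:"*) echo "PATH OK" ;;        # prints "PATH OK"
  *)               echo "PATH missing" ;;
esac
```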

Some Python packages need to be installed using pip, and it's best to do that in a virtual environment, created with the virtualenv package.

Create the environment and then activate it:


            virtualenv ~/python-venv
. ~/python-venv/bin/activate

Your shell prompt now shows the name of the virtual environment and should look like this:


        (python-venv) $
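If you want to double-check that pip will now install packages into the virtual environment rather than the system Python, a quick standard-library sketch (not part of the official instructions):

```python
# Sketch: inside an activated virtual environment, sys.prefix points at the
# environment, while sys.base_prefix still points at the base installation.
import sys

in_venv = sys.prefix != sys.base_prefix
print("virtual environment active:", in_venv)
```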


Next, clone TensorFlow from its Git repository to your system:


            git clone
cd tensorflow

Now you can configure TensorFlow. Configuration asks you a series of questions, and you can accept the defaults for most of them.

You need to pass the relevant SVE compiler flags, as you did before, to make sure that Eigen selects the SVE backend.
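Before committing to a long build with the SVE flags, it can be worth confirming that the machine's CPU actually reports the SVE feature. A quick Linux-specific sketch (this check is an assumption, not part of the original instructions):

```shell
# Sketch: check whether the kernel reports SVE among the CPU feature flags
# (Linux on Arm; on other machines this simply reports "not found").
if grep -qw sve /proc/cpuinfo 2>/dev/null; then
  echo "SVE reported by this CPU"
else
  echo "SVE not found - the SVE build flags will not help on this machine"
fi
```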

Here is the configuration transcript; only the first line is a command you can copy and run:


        python3 ./
__output__You have bazel 6.5.0 installed.
__output__Please specify the location of python. [Default is /home/markos/python-venv/bin/python3]:
__output__Found possible Python library paths:
__output__  /home/markos/python-venv/lib/python3.11/site-packages
__output__Please input the desired Python library path to use.  Default is [/home/markos/python-venv/lib/python3.11/site-packages]
__output__Do you wish to build TensorFlow with ROCm support? [y/N]:
__output__No ROCm support will be enabled for TensorFlow.
__output__Do you wish to build TensorFlow with CUDA support? [y/N]:
__output__No CUDA support will be enabled for TensorFlow.
__output__Do you want to use Clang to build TensorFlow? [Y/n]: n
__output__GCC will be used to compile TensorFlow.
__output__Please specify the path to clang executable. [Default is /usr/lib/llvm-17/bin/clang]:
__output__You have Clang 17.0.6 installed.
__output__Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: -march=armv9-a -msve-vector-bits=128 -DEIGEN_ARM64_USE_SVE
__output__Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
__output__Not configuring the WORKSPACE for Android builds.
__output__Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
__output__        --config=mkl            # Build with MKL support.
__output__        --config=mkl_aarch64    # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
__output__        --config=monolithic     # Config for mostly static monolithic build.
__output__        --config=numa           # Build with NUMA support.
__output__        --config=dynamic_kernels        # (Experimental) Build kernels into separate shared objects.
__output__        --config=v1             # Build with TensorFlow 1 API instead of TF 2 API.
__output__Preconfigured Bazel build configs to DISABLE default on features:
__output__        --config=nogcp          # Disable GCP support.
__output__        --config=nonccl         # Disable NVIDIA NCCL support.


Run bazel to start the build. Pass --config=opt so that the optimization flags you entered during configuration (including the SVE flags) are actually applied.

You might want to take a break and return later, as the build takes quite a long time, even on fast systems.


            bazel build --config=opt //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu

When the build is complete, you should have the TensorFlow pip package in the directory bazel-bin/tensorflow/tools/pip_package/wheel_house, with a filename similar to this:




You can now install your custom TensorFlow build on your system, using pip install:


            pip install

Installing it will take a while, as pip will also install all the dependencies, but when it finishes you should have TensorFlow ready to use!

You can test it to see if it works by running:


        python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
__output__tf.Tensor(492.89847, shape=(), dtype=float32)


If you see a similar tf.Tensor output (the value is random, so it will differ), your TensorFlow installation was successful.