Get started with the Neoverse Reference Design software stack

Arm has developed a suite of Neoverse Reference Design compute sub-systems. They are supported by free-of-charge Arm Ecosystem FVPs and complete software stacks that illustrate how these systems boot to Linux. This learning path is based on the Neoverse N2 Reference Design (RD-N2).

Before you begin

You can use either an AArch64 or x86_64 host machine running Ubuntu Linux 22.04. At least 64GB of free disk space and 32GB of RAM are required to sync and build the platform software stack; 48GB of RAM is recommended.
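
If you want to confirm that your host meets these requirements before starting, you can use standard Linux tools; this is just a convenience check and not part of the official setup:

    # Free disk space on the filesystem you will build in (look for ~64GB available)
    df -h .
    # Installed RAM in gigabytes (at least 32GB, ideally 48GB)
    free -g
    # Host architecture (aarch64 or x86_64) and OS release
    uname -m
    lsb_release -d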

Follow the instructions to set up your environment using the information found at the Neoverse RD-N2 documentation site.

Install Docker on your machine.
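
The exact Docker installation method is up to you; one possible route on Ubuntu 22.04 is the distribution's docker.io package (Docker's own apt repository works just as well):

    sudo apt-get update
    sudo apt-get install -y docker.io
    # Optional: let your user run docker without sudo (log out and back in to take effect)
    sudo usermod -aG docker $USER
    # Confirm the daemon is reachable
    docker info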

Install the repo tool

Start by obtaining the repo tool to simplify the checkout of source code that spans multiple repositories.

Repo tool

First, refresh the list of available packages and install repo:

    sudo apt-get update
    sudo apt-get install repo
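
If the repo package is not available for your distribution, an alternative is to download the repo launcher directly from the upstream git-repo project and place it on your PATH; the install location below is only an example:

    mkdir -p ~/.local/bin
    curl -o ~/.local/bin/repo https://storage.googleapis.com/git-repo-downloads/repo
    chmod a+rx ~/.local/bin/repo
    export PATH="$HOME/.local/bin:$PATH"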

Verify your installation:

    repo version

The output looks like this (the <repo not installed> line simply means that no repo workspace has been initialized yet):

    <repo not installed>
    repo launcher version 2.17
           (from /usr/bin/repo)
    git 2.34.1
    Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
    OS Linux 6.2.0-1009-aws (#9~22.04.3-Ubuntu SMP Tue Aug  1 21:11:51 UTC 2023)
    CPU x86_64 (x86_64)
    Bug reports: https://bugs.chromium.org/p/gerrit/issues/entry?template=Repo+tool+issue

Fetch source code

Create a new directory into which you can download the source code and build the stack, then obtain the manifest file.

To obtain the manifest, choose a tag of the platform reference firmware. RD-INFRA-2023.12.22 is used here. See the release notes for more information.

Specify the platform you want by selecting the corresponding manifest file. The manifest repository contains a number of platforms; in this case, select pinned-rdn2.xml.

    mkdir rd-infra
    cd rd-infra/
    repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdn2.xml -b refs/tags/RD-INFRA-2023.12.22
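
After repo init completes, the manifest repository is checked out under .repo/manifests/, so you can list it to see which other platform manifests this release provides; the exact file names vary by release:

    # The manifest repository itself is checked out here; other platform
    # manifests for this release live alongside pinned-rdn2.xml
    ls .repo/manifests/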

Now look at what the configured manifest contains:

    cat .repo/manifest.xml

The file content should look like:

    <?xml version="1.0" encoding="UTF-8"?>
    <!--
    DO NOT EDIT THIS FILE!  It is generated by repo and changes will be discarded.
    If you want to use a different manifest, use `repo init -m <file>` instead.

    If you want to customize your checkout by overriding manifest settings, use
    the local_manifests/ directory instead.

    For more information on repo manifests, check out:
    https://gerrit.googlesource.com/git-repo/+/HEAD/docs/manifest-format.md
    -->
    <manifest>
      <include name="pinned-rdn2.xml" />
    </manifest>

The manifest.xml file points to pinned-rdn2.xml, so examine that file next:

    cat .repo/manifests/pinned-rdn2.xml

The contents of pinned-rdn2.xml are shown below:

    <?xml version="1.0" encoding="UTF-8"?>
    <manifest>
      <remote fetch="https://git.gitlab.arm.com/infra-solutions/reference-design/" name="arm"/>
      <remote fetch="https://github.com/" name="github"/>
      <remote fetch="https://git.savannah.gnu.org" name="gnugit"/>
      <remote fetch="https://git.kernel.org" name="kernel"/>
      <remote fetch="https://git.trustedfirmware.org" name="tforg"/>

      <project remote="arm" name="platsw/scp-firmware" path="scp" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="platsw/trusted-firmware-a" path="tf-a" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="platsw/edk2" path="uefi/edk2" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="platsw/edk2-platforms" path="uefi/edk2/edk2-platforms" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="platsw/linux" path="linux" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="scripts/build-scripts" path="build-scripts" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="scripts/model-scripts" path="model-scripts" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="scripts/container-scripts" path="container-scripts" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="valsw/kvm-unit-tests" path="validation/sys-test/kvm-unit-tests" revision="refs/tags/RD-INFRA-2023.12.22"/>
      <project remote="arm" name="platsw/buildroot" path="buildroot" revision="refs/tags/RD-INFRA-2023.12.22"/>

      <project remote="tforg" name="TF-A/tf-a-tests.git" path="validation/comp-test/trusted-firmware-tf" revision="6f9e14a0e3a9e14051cf6235a49b06bae32823d9"/>
      <project remote="github" name="acpica/acpica" path="tools/acpica" revision="refs/tags/R06_28_23"/>
      <project remote="github" name="ARMmbed/mbedtls.git" path="mbedtls" revision="refs/tags/mbedtls-2.28.0"/>
      <project remote="github" name="mirror/busybox" path="busybox" revision="refs/tags/1_36_0"/>
      <project remote="gnugit" name="git/grub.git" path="grub" revision="refs/tags/grub-2.04"/>
      <project remote="kernel" name="pub/scm/linux/kernel/git/jejb/efitools" path="tools/efitools" revision="refs/tags/v1.9.2"/>
      <project remote="kernel" name="pub/scm/linux/kernel/git/will/kvmtool" path="kvmtool" revision="e17d182ad3f797f01947fc234d95c96c050c534b"/>
    </manifest>

The manifest defines repositories for the firmware sources, build and model scripts, Linux, and tooling.
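
You can also ask repo itself to resolve the manifest into the list of projects and their checkout paths, which is a convenient cross-check against the XML above:

    # Print each project and its local checkout path, as resolved from the manifest
    repo list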

Fetch the sources with the repo sync command. This will take a few minutes to complete.

    repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle

The output from running this command looks like:

    ... A new version of repo (2.40) is available.
    ... New version is available at: /home/ubuntu/rd-infra/.repo/repo/repo
    ... The launcher is run from: /usr/bin/repo
    !!! The launcher is not writable.  Please talk to your sysadmin or distro
    !!! to get an update installed.

    Fetching: 100% (17/17), done in 2m23.399s
    Fetching: 100% (16/16), done in 26.300s
    Fetching: 100% (8/8), done in 11.914s
    Fetching: 100% (1/1), done in 0.592s
    Updating files: 100% (79368/79368), done.
    Checking out: 100% (42/42), done in 13.164s
    repo sync has finished successfully.

Now you should have all the code.
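
As a quick sanity check, you can list the workspace; the top-level directories correspond to the path attributes in pinned-rdn2.xml (for example build-scripts, container-scripts, model-scripts, scp, tf-a, uefi, linux, buildroot, validation, and tools):

    # The top-level directories should match the path attributes in the manifest
    ls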

Docker container setup

Set up a Docker container in which to perform the build. A container execution script is provided; see its help output for more information.

    cd container-scripts/
    ./container.sh -h

The output from help should look like:

    Usage: ./container.sh [OPTIONS] [COMMAND]

    If no options are provided the script uses the default values
    defined in the 'Defaults' section.

    Available options are:
      -v  <path> absolute path to mount into the container;
      -f  <file> docker file name;
      -i  <name> docker image name;
      -o  overwrites a previously-built image;
      -h  displays this help message and exits;

    Available commands are:
      build  builds the docker image;
      run    runs the container in interactive mode;
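
For example, combining the options documented above, you could build the image under a custom name and overwrite any previously-built image of that name; the image name here is arbitrary:

    ./container.sh -i my-rdinfra-builder -o build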

Build the default configuration:

    ./container.sh build

Verify that the container image has been built:

    docker image list

The output from this command looks like:

    REPOSITORY        TAG              IMAGE ID       CREATED         SIZE
    rdinfra-builder   latest           8729adb0b96c   8 minutes ago   3.07GB
    ubuntu            jammy-20230624   5a81c4b8502e   6 months ago    77.8MB

The output shows a standard Ubuntu base image (the jammy tag corresponds to Ubuntu 22.04) with the rdinfra-builder image built on top of it. Run and enter the container:

    docker run -it rdinfra-builder:latest /bin/bash

This command puts you in the running container, where you can run ls to list the contents:

    ubuntu@923218f076f5:/$ ls
    bin  boot  dev  etc  home  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

You can exit the container as you would any login shell:

    exit

You can use the container script to run and enter the container. Mount the source checkout into the container so that you can build directly from it, without having to copy the sources in or the build outputs back out:

    ./container.sh -v /home/ubuntu/rd-infra/ run
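
For reference, the run command is broadly equivalent to starting the container with a bind mount of your checkout; the exact mount point, working directory, and user mapping are set by the script's defaults, so treat this as an illustration rather than a substitute for the script:

    docker run --rm -it \
      -v /home/ubuntu/rd-infra:/home/ubuntu/rd-infra \
      -w /home/ubuntu/rd-infra \
      rdinfra-builder:latest /bin/bash
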
Host-based builds

If you choose to build on the host instead, you need to install all the prerequisites that would otherwise be provided inside the container when it is created.

The build system provides a script for this that you must run as root:

    sudo ./build-scripts/rdinfra/install_prerequisites.sh