About this Learning Path

Who is this for?

This is an introductory topic for developers interested in running LLMs on Arm-based servers.

What will you learn?

Upon completion of this Learning Path, you will be able to:

  • Download and build llama.cpp on your Arm server
  • Download a pre-quantized Llama 2 model from Hugging Face
  • Run the pre-quantized Llama 2 model on your Arm CPU (a brief command sketch follows this list)
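
As a preview of the workflow, the three steps above roughly correspond to the commands sketched below. Treat this as a minimal sketch rather than the Learning Path's exact instructions: the build commands follow llama.cpp's standard CMake workflow, while the Hugging Face repository, model file name, local paths, prompt, and binary name are example values used for illustration.

```bash
# Clone and build llama.cpp using the standard CMake workflow
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Download a pre-quantized (GGUF) Llama 2 model from Hugging Face.
# The repository, file name, and quantization level shown here are
# example values, not necessarily the ones used in this Learning Path.
wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf -P models

# Run the model on the Arm CPU with a short prompt.
# Depending on the llama.cpp version, the binary may be named
# llama-cli (newer builds) or main (older builds).
./build/bin/llama-cli -m models/llama-2-7b-chat.Q4_0.gguf -p "Hello, Arm!" -n 64
```

Each of these steps is covered in detail in the sections that follow.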

Prerequisites

Before starting, you will need the following:

  • Access to an Arm-based server, either a cloud instance or an on-premises machine, on which you can install software
