Who is this for?
This is an introductory topic for anyone interested in running the Llama 3 model on a Raspberry Pi 5 and in learning about techniques for running large language models (LLMs) in an embedded environment.
What will you learn?
Upon completion of this learning path, you will be able to:
- Use Docker to run Raspberry Pi OS on an Arm Linux server.
- Compile a large language model (LLM) using ExecuTorch (a minimal export sketch follows this list).
- Deploy the Llama 3 model on an edge device.
- Describe how to run Llama 3 on a Raspberry Pi 5 using ExecuTorch.
- Describe techniques for running large language models in an embedded environment.
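To give a feel for the ExecuTorch compile step, here is a minimal sketch of the standard export flow, assuming `torch` and `executorch` are installed. The `TinyModel` module is a placeholder used only for illustration; exporting Llama 3 itself uses the model-specific scripts covered later in this learning path.

```python
# Minimal ExecuTorch export sketch: trace a PyTorch module, lower it to the
# Edge dialect, and serialize it as a .pte file that the ExecuTorch runtime
# can load on a device such as a Raspberry Pi 5.
# Note: TinyModel is a stand-in for illustration, not the Llama 3 model.
import torch
from torch.export import export
from executorch.exir import to_edge


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.linear(x)


example_args = (torch.randn(1, 10),)

# Capture the module as an ExportedProgram.
exported_program = export(TinyModel(), example_args)

# Lower to the Edge dialect, then to an ExecuTorch program.
edge_program = to_edge(exported_program)
executorch_program = edge_program.to_executorch()

# Write the serialized program; the runtime loads this .pte file on-device.
with open("tiny_model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

The real Llama 3 export adds quantization and backend delegation so the model fits within the memory and compute budget of an embedded device; those steps are described in the learning path itself.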
Prerequisites
Before starting, you will need the following: