In this section, you’ll prepare your Raspberry Pi 5 by installing Python, required libraries, and Ollama, so you can run large language models (LLMs) locally.
This Learning Path assumes you have set up your Raspberry Pi with Raspberry Pi OS and network connectivity. For Raspberry Pi 5 setup support, see Raspberry Pi Getting Started.
The easiest way to work on your Raspberry Pi is by connecting it to an external display through one of the micro‑HDMI ports. This setup also requires a keyboard and mouse.
You can also use SSH to access the terminal. To use this approach, you need to know the IP address of your device. Ensure your Raspberry Pi 5 is on the same network as your host computer. Access your device remotely via SSH using the terminal or any SSH client.
Replace <user> with your Pi's username (typically pi), and <pi-ip> with your Raspberry Pi 5's IP address.
ssh <user>@<pi-ip>
Create a directory called smart-home in your home directory and navigate into it:
mkdir -p "$HOME/smart-home"
cd "$HOME/smart-home"
Raspberry Pi OS includes Python 3 preinstalled, but you need additional packages:
sudo apt update && sudo apt upgrade -y
sudo apt install -y python3 python3-pip python3-venv git curl build-essential gcc python3-lgpio
Create and activate a Python virtual environment to isolate project dependencies:
python3 -m venv venv
source venv/bin/activate
Install the required libraries:
pip install ollama gpiozero lgpio psutil httpx orjson numpy fastapi uvicorn uvloop
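To confirm that all of the libraries installed correctly inside the virtual environment, you can run a quick check. This is a minimal sketch using Python's standard importlib module; the package list mirrors the pip command above (note these are import names, which happen to match the pip names here).

```python
import importlib.util

# Import names of the packages installed with pip above
REQUIRED = ["ollama", "gpiozero", "lgpio", "psutil", "httpx",
            "orjson", "numpy", "fastapi", "uvicorn", "uvloop"]

def missing_packages(names):
    """Return the subset of names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All required packages are installed.")
```

Run it with the virtual environment activated; if anything is reported missing, re-run the pip install command above.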
Install Ollama using the official installation script for Linux:
curl -fsSL https://ollama.com/install.sh | sh
Verify the installation:
ollama --version
If installation was successful, the output should be similar to:
ollama version is 0.11.4
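You can also confirm that the Ollama background service is listening on its default port, 11434. This is a small stdlib-only sketch; it assumes the installer started the service, which it does by default on Linux.

```python
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def ollama_is_running(url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if the local Ollama service answers on its port."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama service reachable:", ollama_is_running())
```

If this prints False, start the service manually with `ollama serve` in another terminal before continuing.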
Ollama supports various models. This guide uses deepseek-r1:7b as an example, but you can also use tinyllama:1.1b, qwen:0.5b, gemma2:2b, or deepseek-coder:1.3b.
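A model's memory footprint must fit in the Pi's RAM, so it can help to check how much memory is available before pulling a large model. The sketch below reads /proc/meminfo (Linux only, stdlib only); the size thresholds are illustrative assumptions, not official requirements.

```python
def total_ram_gib() -> float:
    """Read total system RAM from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024**2  # value is in kB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def recommend_model(ram_gib: float) -> str:
    """Suggest a model tag for the available RAM (illustrative thresholds)."""
    if ram_gib >= 8:
        return "deepseek-r1:7b"
    if ram_gib >= 4:
        return "gemma2:2b"
    return "tinyllama:1.1b"

if __name__ == "__main__":
    ram = total_ram_gib()
    print(f"Total RAM: {ram:.1f} GiB -> suggested model: {recommend_model(ram)}")
```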
The run command downloads the model automatically if it is not already present. You will see download progress in the terminal, followed by an interactive prompt when the model is ready.
ollama run deepseek-r1:7b
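Beyond the interactive prompt, you can call the model from Python using the ollama package installed earlier. This is a minimal single-turn sketch; the live call is guarded so it only runs when the client library is present, and it assumes the Ollama service is running and the model has been pulled.

```python
import importlib.util

MODEL = "deepseek-r1:7b"  # or a smaller model such as tinyllama:1.1b

def build_messages(prompt: str) -> list:
    """Build a single-turn chat history in the format ollama.chat() expects."""
    return [{"role": "user", "content": prompt}]

def main() -> None:
    import ollama  # third-party client installed earlier with pip
    # Requires the Ollama service to be running and the model pulled
    response = ollama.chat(model=MODEL,
                           messages=build_messages("Reply with one short sentence."))
    print(response["message"]["content"])

if __name__ == "__main__" and importlib.util.find_spec("ollama") is not None:
    main()
```

Run this inside the activated virtual environment; the first response may be slow while the model loads into memory.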
If the model download fails or you run into memory issues, try a smaller model such as qwen:0.5b or tinyllama:1.1b. 16 GB of RAM is sufficient for small to medium models; very large models may require more memory or run more slowly.

With the model set up through Ollama, move on to the next section to start configuring the hardware.