What you've learned

You should now know how to:

  • Install the Python bindings for llama.cpp (llama-cpp-python) on your Raspberry Pi 5.
  • Download an LLM from Hugging Face.
  • Assess an LLM's memory footprint and performance.
  • Run the LLM on your Raspberry Pi 5 using Python bindings for llama.cpp.
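The steps above can be sketched as a short setup sequence. The package name is real (llama-cpp-python on PyPI), but the model repository and file names below are illustrative placeholders, not specific to this guide:

```shell
# Install the Python bindings for llama.cpp
# (pip builds the underlying C++ library for you)
pip install llama-cpp-python

# Download a quantized GGUF model from Hugging Face
# (<repo-id> and <model-file> are placeholders; substitute your own choices)
huggingface-cli download <repo-id> <model-file>.gguf --local-dir ./models
```

On a Raspberry Pi 5 the pip install compiles llama.cpp from source, so it can take several minutes.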

Knowledge Check

Is it possible to run LLMs on edge devices such as a Raspberry Pi?

Does using llama.cpp require you to have the skills to build and run C++ applications?

Can you estimate an LLM's memory usage without loading it?
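As a rough rule, a GGUF model's weights occupy about as much RAM as the file takes on disk, plus a KV cache that grows with context length. A minimal sketch of that estimate, assuming illustrative model dimensions (the specific numbers are not from this guide):

```python
def estimate_llm_memory_bytes(file_size_bytes: int, n_layers: int,
                              n_ctx: int, n_embd: int,
                              kv_dtype_bytes: int = 2) -> int:
    """Rough RAM estimate without loading the model.

    Weights: roughly the GGUF file size on disk.
    KV cache: 2 tensors (K and V) per layer, each n_ctx x n_embd values,
    assumed here to be stored as 16-bit floats (2 bytes each).
    """
    kv_cache = 2 * n_layers * n_ctx * n_embd * kv_dtype_bytes
    return file_size_bytes + kv_cache

# Illustrative numbers for a ~4 GB 4-bit quantized 7B-class model
total = estimate_llm_memory_bytes(
    file_size_bytes=4_000_000_000, n_layers=32, n_ctx=2048, n_embd=4096
)
print(f"{total / 2**30:.1f} GiB")  # ~4.7 GiB: fits in a Raspberry Pi 5 with 8 GB RAM
```

This is only a lower bound; the runtime also needs working buffers on top of the weights and KV cache.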