Who is this for?
This is an introductory topic for software developers and ML engineers looking to deploy an optimized AI agent application.
What will you learn?
Upon completion of this learning path, you will be able to:
- Set up llama-cpp-python optimized for Arm servers.
- Run optimized Large Language Models (LLMs).
- Create custom functions for LLMs.
- Deploy optimized AI agents for applications.
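To give a sense of what the first objective involves, below is a minimal sketch of running a local model with llama-cpp-python; the model path is a placeholder and the exact setup (model file, build flags for Arm optimization) is covered later in the learning path.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and a GGUF model file has already been downloaded; the path below is a placeholder.
from llama_cpp import Llama

# Load a local GGUF model.
llm = Llama(model_path="./models/model.gguf")

# Run a single completion and print the generated text.
output = llm("Q: Name the largest planet in the solar system. A:", max_tokens=16)
print(output["choices"][0]["text"])
```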
Prerequisites
Before starting, you will need the following:
- An Arm-based instance from a cloud service provider, or an on-premise Arm server.
- A basic understanding of Python and prompt engineering.
- An understanding of LLM fundamentals.