📄️ First RAG Agent
In this tutorial, we will build and run our first Retrieval-Augmented Generation (RAG) agent using the built-in completion client. This is the simplest LLM-powered agent and a good starting point for understanding how to create more complex agents. The built-in completion client is a thin wrapper around the low-level vendor clients (e.g., OpenAI, Anthropic, Ollama) that translates the agent's main-method inputs into each vendor's native arguments.
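As a rough illustration of the wrapper pattern described above, here is a minimal self-contained sketch. All class and method names (`CompletionClient`, `complete`, the fake vendor SDK) are hypothetical stand-ins for illustration, not the platform's actual API:

```python
class FakeVendorClient:
    """Stand-in for a vendor SDK (e.g., OpenAI-style) with its own argument shape."""

    def create_chat_completion(self, messages):
        # Echo the last user message back, mimicking a chat-completion response.
        return {"choices": [{"message": {"content": f"echo: {messages[-1]['content']}"}}]}


class CompletionClient:
    """Thin wrapper that turns a plain prompt into the vendor's native arguments."""

    def __init__(self, vendor_client):
        self.vendor_client = vendor_client

    def complete(self, prompt: str) -> str:
        # Translate the plain prompt into the vendor's message format.
        response = self.vendor_client.create_chat_completion(
            messages=[{"role": "user", "content": prompt}]
        )
        return response["choices"][0]["message"]["content"]


client = CompletionClient(FakeVendorClient())
print(client.complete("What is RAG?"))  # → echo: What is RAG?
```

The agent only ever calls `complete(prompt)`; swapping vendors means swapping the wrapped client, not rewriting the agent.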
📄️ First LangChain RAG Agent
In this tutorial, we will build and run our first Retrieval-Augmented Generation (RAG) agent using the LangChain framework. LangChain is a popular high-level interface for building language model applications, which simplifies integrating components such as retrievers and language models.
📄️ Deploying Agents
In this tutorial, we will walk through deploying and configuring your agents on the Zeta Alpha platform. By the end, you will have your agent running on the platform and accessible to users.