Passing Custom Context

Prerequisites

Make sure you have completed the Getting Started with the Chat API tutorial.

Chat with custom context

Besides chatting with documents ingested in the Zeta Alpha index, the Chat API can also chat with custom context: use the custom_context field of conversation_context and pass arbitrary content, either as a string or as JSON with any schema of your choice. Keep in mind that all fields of the JSON are passed to the LLM in order to generate the response.

In addition, the document_hit_url of each evidence item in the response has the format custom://{document_id}, where document_id is the one passed as custom context in the request.
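Given that URL scheme, the original document_id can be recovered from an evidence item by stripping the custom:// prefix. A minimal sketch (the evidence dict below is a hypothetical example matching the shape of the API's evidence items):

```python
# Hypothetical evidence item, shaped like the ones returned by the Chat API.
evidence = {
    "document_hit_url": "custom://myID_2",
    "text_extract": "Forecast for Tuesday\nCloudy\n17 degrees Celsius",
    "anchor_text": "<sup>2</sup>",
}

PREFIX = "custom://"
url = evidence["document_hit_url"]
if url.startswith(PREFIX):
    # Strip the scheme prefix to get back the document_id from the request.
    document_id = url[len(PREFIX):]
    print(document_id)  # myID_2
```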

The streaming API is used below to showcase how custom context works; the functionality is the same for the REST endpoint.

import json
import os

import requests
import sseclient

TENANT = "zetaalpha"
CHAT_STREAMING_ENDPOINT = (
    f"https://api.zeta-alpha.com/v0/service/chat/stream?tenant={TENANT}"
)

headers = {
    "accept": "text/event-stream",
    "Content-Type": "application/json",
    "x-auth": os.getenv("ZETA_ALPHA_API_KEY"),
}

response = requests.post(
    CHAT_STREAMING_ENDPOINT,
    headers=headers,
    json={
        "conversation_context": {
            "custom_context": {
                "items": [
                    {
                        "document_id": "myID_1",
                        "content": "The weather on Monday will be sunny",
                    },
                    {
                        "document_id": "myID_2",
                        "content": {
                            "title": "Forecast for Tuesday",
                            "prediction": "Cloudy",
                            "temperature": "17 degrees Celsius",
                        },
                    },
                    {
                        "document_id": "myID_3",
                        "content": "The weather on Wednesday will be windy",
                    },
                ],
            }
        },
        "conversation": [
            {
                "sender": "user",
                "content": "What's the weather on Tuesday?",
            },
        ],
        "agent_identifier": "chat_with_dynamic_retrieval",
    },
    stream=True,
)

response.raise_for_status()
client = sseclient.SSEClient(response)

for event in client.events():
    try:
        streamed_data = json.loads(event.data)
    except Exception:
        print(f"Data stream error: {event.data}")
        streamed_data = None

    if streamed_data:
        print("\n---------------- COMPLETE MESSAGE ----------------")
        print(f"Message:\n{streamed_data['content']}\n")
        print(f"Evidences:\n{streamed_data['evidences']}\n")
        print(f"Function Call:\n{streamed_data['function_call_request']}\n")
        print("--------------------------------------------------")

Sample output:

---------------- COMPLETE MESSAGE ----------------
Message: The weather on Tuesday is expected to be cloudy with 17 degrees Celsius<sup>2</sup>.


Evidences:
[{'document_hit_url': 'custom://myID_2', 'text_extract': 'Forecast for Tuesday\nCloudy\n17 degrees Celsius', 'anchor_text': '<sup>2</sup>'}]

Function Call:
None

--------------------------------------------------

Usage in a Frontend app

If you want to use the above functionality in a frontend app, you might need to display the context and the evidences to the user as a UI component. To do so, include the fields that UI component needs in the content field of each custom_context.items entry.

For example, to display a document card with a title, source, description, etc., pass all these fields as part of the content object inside an item of the custom_context field. The LLM uses those fields to answer the user's question, while each evidence contains a pointer to the document_id of the context item, so the frontend can look up the whole item and display it in the UI component of its choice.
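The lookup described above can be sketched as follows. This is a minimal illustration in Python rather than frontend code; the extra "source" field and the list contents are hypothetical, showing that content can carry UI-only fields alongside what the LLM uses:

```python
# Hypothetical custom-context items as sent in the request; "source" is an
# extra field included purely so the frontend can render a richer card.
custom_items = [
    {
        "document_id": "myID_2",
        "content": {
            "title": "Forecast for Tuesday",
            "prediction": "Cloudy",
            "temperature": "17 degrees Celsius",
            "source": "weather-service",
        },
    },
]
# Index the items by document_id for quick resolution of evidences.
items_by_id = {item["document_id"]: item for item in custom_items}

# Hypothetical evidences list, shaped like the API response.
evidences = [{"document_hit_url": "custom://myID_2", "anchor_text": "<sup>2</sup>"}]

for evidence in evidences:
    # Strip the custom:// scheme to recover the document_id.
    doc_id = evidence["document_hit_url"].removeprefix("custom://")
    item = items_by_id.get(doc_id)
    if item:
        card = item["content"]  # title, source, etc. for the UI component
        print(card["title"], "-", card["source"])
```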