# Chat API

Integrate AlonChat conversations into your applications.

Send messages and receive AI responses programmatically through the AlonChat Chat API.

## Overview
The Chat API allows you to:
- Send messages to your AI agent
- Receive streaming or complete responses
- Maintain conversation context
- Access RAG sources used in responses
## Quick Start

### 1. Get Your API Key

1. Go to your agent dashboard
2. Click Deploy → API
3. Copy your API key

### 2. Make Your First Request
```bash
curl -X POST https://api.alonchat.com/v1/agents/{agent_id}/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello, what are your business hours?"
  }'
```
### 3. Handle the Response

```json
{
  "response": "We're open Monday-Saturday, 9AM to 6PM. Closed on Sundays and holidays!",
  "conversation_id": "conv_abc123",
  "confidence": 0.95,
  "sources": [
    {
      "type": "qa",
      "title": "Business Hours",
      "relevance": 0.98
    }
  ]
}
```
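In application code, these response fields can be consumed like so. (The 0.8 confidence threshold is an arbitrary illustration, not an API requirement, and `data` is hardcoded here to mirror the response above.)

```javascript
// `data` is the parsed JSON body from step 2 (hardcoded for illustration).
const data = {
  response: "We're open Monday-Saturday, 9AM to 6PM. Closed on Sundays and holidays!",
  conversation_id: 'conv_abc123',
  confidence: 0.95,
  sources: [{ type: 'qa', title: 'Business Hours', relevance: 0.98 }],
};

// Show the answer and keep the conversation_id for follow-up messages.
console.log(data.response);
const nextConversationId = data.conversation_id; // send this in the next request

// Surface the top RAG source when the model is confident enough.
if (data.confidence >= 0.8 && data.sources.length > 0) {
  console.log(`Source: ${data.sources[0].title}`);
}
```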
## Authentication

All requests require an API key in the `Authorization` header:

```
Authorization: Bearer YOUR_API_KEY
```
### API Key Types
| Type | Scope | Use Case |
|---|---|---|
| Publishable | Read-only, limited | Frontend widgets |
| Secret | Full access | Backend integration |
⚠️ Never expose secret keys in frontend code!
## Endpoints

### Send Message

`POST /v1/agents/{agent_id}/chat`

Send a message and receive an AI response.
#### Request Body

| Field | Type | Required | Description |
|---|---|---|---|
| `message` | string | ✅ | User message |
| `conversation_id` | string | ❌ | Continue an existing conversation |
| `metadata` | object | ❌ | Custom data to attach |
| `stream` | boolean | ❌ | Enable streaming (default: `false`) |
#### Example Request

```javascript
const response = await fetch(
  `https://api.alonchat.com/v1/agents/${agentId}/chat`,
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      message: "What products do you offer?",
      conversation_id: "conv_abc123"
    })
  }
);

const data = await response.json();
console.log(data.response);
```
#### Response

```json
{
  "response": "We offer three main products...",
  "conversation_id": "conv_abc123",
  "message_id": "msg_xyz789",
  "confidence": 0.87,
  "model": "gpt-5.2",
  "sources": [...],
  "credits_used": 5
}
```
### Streaming Responses

For real-time responses, enable streaming:

```javascript
const response = await fetch(
  `https://api.alonchat.com/v1/agents/${agentId}/chat`,
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      message: "Tell me about your services",
      stream: true
    })
  }
);

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  process.stdout.write(chunk);
}
```
Streaming events:

- `text`: response text chunks
- `sources`: RAG sources used
- `done`: stream complete
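Assuming the stream is delivered with Server-Sent-Events framing (`event:`/`data:` lines; the exact wire format is not specified here, so treat this as a sketch), the raw chunks can be split into the event types above instead of being printed verbatim:

```javascript
// Sketch: parse a streamed chunk into { event, data } pairs, assuming
// SSE framing where events are separated by a blank line.
function parseSSEChunk(chunk) {
  const events = [];
  for (const block of chunk.split('\n\n')) {
    let event = 'message';
    let data = '';
    for (const line of block.split('\n')) {
      if (line.startsWith('event:')) event = line.slice(6).trim();
      else if (line.startsWith('data:')) data += line.slice(5).trim();
    }
    if (data) events.push({ event, data });
  }
  return events;
}

// Example: dispatch on the documented event types.
for (const { event, data } of parseSSEChunk(
  'event: text\ndata: Hello\n\nevent: done\ndata: {}\n\n'
)) {
  if (event === 'text') process.stdout.write(data); // append to the UI
  else if (event === 'sources') console.log(JSON.parse(data)); // RAG sources
  else if (event === 'done') break; // stream finished
}
```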
### Get Conversation

`GET /v1/conversations/{conversation_id}`

Retrieve conversation history.

```javascript
const response = await fetch(
  `https://api.alonchat.com/v1/conversations/${conversationId}`,
  {
    headers: { 'Authorization': `Bearer ${apiKey}` }
  }
);

const conversation = await response.json();
```
#### Response

```json
{
  "id": "conv_abc123",
  "agent_id": "agent_xyz",
  "created_at": "2026-01-23T10:00:00Z",
  "messages": [
    {
      "role": "user",
      "content": "Hello",
      "timestamp": "2026-01-23T10:00:00Z"
    },
    {
      "role": "assistant",
      "content": "Hi! How can I help you today?",
      "timestamp": "2026-01-23T10:00:01Z"
    }
  ]
}
```
## Rate Limits
| Plan | Requests/minute | Requests/day |
|---|---|---|
| Free | 10 | 100 |
| Starter | 60 | 1,000 |
| Pro | 300 | 10,000 |
| Enterprise | Custom | Custom |
Rate limit headers:

- `X-RateLimit-Limit`: max requests per window
- `X-RateLimit-Remaining`: requests left
- `X-RateLimit-Reset`: window reset time
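One way to use these headers is to pause until the window resets once the request budget is exhausted. This sketch assumes `X-RateLimit-Reset` carries a Unix timestamp in seconds, which the list above does not actually specify, so verify against real responses:

```javascript
// How long to wait before the next request, based on rate-limit headers.
// `headers` is a fetch Headers object (lookups are case-insensitive).
function msUntilRateLimitReset(headers, nowMs = Date.now()) {
  const remaining = Number(headers.get('X-RateLimit-Remaining'));
  if (remaining > 0) return 0; // budget left, no need to wait

  // Assumption: reset header is a Unix timestamp in seconds.
  const resetSec = Number(headers.get('X-RateLimit-Reset'));
  return Math.max(0, resetSec * 1000 - nowMs);
}

// Usage with a fetch Response:
// const wait = msUntilRateLimitReset(response.headers);
// if (wait > 0) await new Promise((r) => setTimeout(r, wait));
```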
## Error Handling

### HTTP Status Codes
| Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Bad request (check payload) |
| 401 | Invalid API key |
| 403 | Insufficient permissions |
| 404 | Agent not found |
| 429 | Rate limited |
| 500 | Server error |
### Error Response Format

```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Retry after 60 seconds.",
    "retry_after": 60
  }
}
```
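A retry loop for 429 and 5xx responses might look like the following; `chatOnce` is a placeholder for whatever function performs the actual HTTP call and returns a fetch-style Response:

```javascript
// Retry with exponential backoff, honoring the retry_after hint from the
// error body when the API provides one.
async function chatWithRetry(chatOnce, maxAttempts = 4) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await chatOnce();
    if (response.status !== 429 && response.status < 500) {
      return response; // success, or a non-retryable client error
    }
    // Prefer the server's retry_after (seconds); otherwise back off
    // exponentially: 1s, 2s, 4s, ...
    const body = await response.json().catch(() => ({}));
    const delayMs = (body.error?.retry_after ?? 2 ** attempt) * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Request failed after retries');
}
```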
## SDKs
Official SDKs coming soon:
- JavaScript/TypeScript
- Python
- PHP
For now, use standard HTTP libraries.
## Webhooks

Receive real-time notifications for events:

```json
{
  "event": "message.received",
  "data": {
    "conversation_id": "conv_abc123",
    "message": "..."
  }
}
```
See Webhooks Documentation for details.
## Best Practices
- **Store `conversation_id`** – maintain context across messages
- **Handle errors gracefully** – retry with exponential backoff on 429/5xx
- **Use streaming for UX** – better perceived performance on long answers
- **Cache when appropriate** – reduce redundant API calls
- **Monitor usage** – track credits consumed
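As a sketch of the caching point: identical stateless questions (those sent without a `conversation_id`) can be memoized for a short TTL so repeated FAQ hits do not consume extra credits. The TTL and cache shape here are illustrative, not prescribed by the API:

```javascript
// Simple in-memory memoization keyed by message text.
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // 5 minutes, arbitrary

// `sendFn` performs the real API call and returns the parsed response.
async function cachedChat(message, sendFn) {
  const hit = cache.get(message);
  if (hit && Date.now() - hit.at < TTL_MS) return hit.data; // fresh hit
  const data = await sendFn(message);
  cache.set(message, { data, at: Date.now() });
  return data;
}
```

Only use this for context-free messages; conversations that carry a `conversation_id` depend on per-user state and must not be cached this way.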