Chat API

The streaming chat endpoint that powers conversations and DSL extraction.

POST /api/chat

Send a message and receive a streaming response with DSL extraction.

Request

{
  "messages": [
    {
      "role": "user",
      "content": "I need a task management system"
    }
  ],
  "chatId": "chat_xyz", // Optional, creates new if not provided
  "model": "anthropic/claude-3.5-sonnet", // Optional
  "temperature": 0.3, // Optional
  "enableDSL": true // Optional, default true
}

Response (Streaming)

Server-Sent Events stream:

data: {"type": "text", "content": "I'll help you build"}

data: {"type": "dsl", "content": {"entities": {...}}}

data: {"type": "phase", "content": "clarifying"}

data: {"type": "done", "content": ""}

Example: Creating a Blog System

const response = await fetch('/api/chat', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    messages: [{
      role: 'user',
      content: 'I need a blog with posts, authors, and comments'
    }]
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Chunks can split an SSE line in half, so buffer until a full
  // newline-terminated line arrives.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep the trailing partial line for next chunk

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));

      switch (data.type) {
        case 'text':
          console.log('AI:', data.content);
          break;
        case 'dsl':
          console.log('DSL Update:', data.content);
          break;
        case 'phase':
          console.log('Phase:', data.content);
          break;
      }
    }
  }
}

Example Response

When you send "Create a blog system", the API extracts:

{
  "type": "dsl",
  "content": {
    "entities": {
      "User": {
        "fields": {
          "id": { "type": "uuid", "required": true },
          "email": { "type": "string", "required": true, "unique": true },
          "name": { "type": "string", "required": true },
          "role": { "type": "enum", "values": ["admin", "author", "reader"] }
        }
      },
      "Post": {
        "fields": {
          "id": { "type": "uuid", "required": true },
          "title": { "type": "string", "required": true },
          "slug": { "type": "string", "required": true, "unique": true },
          "content": { "type": "text", "required": true },
          "excerpt": { "type": "text", "required": false },
          "authorId": { "type": "uuid", "required": true },
          "status": { "type": "enum", "values": ["draft", "published", "archived"] },
          "publishedAt": { "type": "datetime", "required": false }
        }
      },
      "Comment": {
        "fields": {
          "id": { "type": "uuid", "required": true },
          "content": { "type": "text", "required": true },
          "postId": { "type": "uuid", "required": true },
          "authorId": { "type": "uuid", "required": true },
          "approved": { "type": "boolean", "required": true, "default": false }
        }
      },
      "Tag": {
        "fields": {
          "id": { "type": "uuid", "required": true },
          "name": { "type": "string", "required": true, "unique": true },
          "slug": { "type": "string", "required": true, "unique": true }
        }
      }
    },
    "relationships": [
      { "from": "Post", "to": "User", "type": "many-to-one", "name": "author" },
      { "from": "Comment", "to": "Post", "type": "many-to-one" },
      { "from": "Comment", "to": "User", "type": "many-to-one", "name": "author" },
      { "from": "Post", "to": "Tag", "type": "many-to-many" }
    ],
    "phase": "proposing",
    "readiness": 65,
    "confidence": 0.85
  }
}
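Clients can walk the extracted DSL directly. A sketch that summarizes a payload shaped like the one above (the summary format itself is an illustrative choice, not part of the API):

```javascript
// Summarize a DSL payload: per entity, count the fields and list
// which ones are required.
function summarizeDsl(dsl) {
  const summary = {};
  for (const [name, entity] of Object.entries(dsl.entities)) {
    const fields = Object.entries(entity.fields);
    summary[name] = {
      fieldCount: fields.length,
      required: fields.filter(([, f]) => f.required).map(([n]) => n),
    };
  }
  return summary;
}
```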

GET /api/chat/[id]/stream

Resume streaming for an existing chat.

Request

GET /api/chat/chat_xyz/stream

Response

Returns the chat history and continues streaming if the conversation is still active.

Error Handling

Common Errors

// Rate Limited
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Too many requests. Please wait 60 seconds.",
    "retryAfter": 60
  }
}

// Invalid Model
{
  "error": {
    "code": "INVALID_MODEL",
    "message": "Model 'gpt-5' is not available",
    "availableModels": ["claude-3.5-sonnet", "gpt-4-turbo"]
  }
}

// OpenRouter API Error
{
  "error": {
    "code": "LLM_ERROR",
    "message": "Failed to connect to LLM service",
    "details": "Check your OPENROUTER_API_KEY"
  }
}

WebSocket Alternative

For bidirectional real-time communication:

const ws = new WebSocket('ws://localhost:3000/api/chat/ws');

ws.onopen = () => {
  ws.send(JSON.stringify({
    type: 'message',
    content: 'Create a task management system'
  }));
};

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Received:', data);
};
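If the socket drops, reconnecting with exponential backoff avoids hammering the server. A minimal sketch of the delay schedule; the base delay and cap here are illustrative assumptions, not documented values:

```javascript
// Exponential backoff with a cap, for WebSocket reconnect attempts:
// 500ms, 1s, 2s, 4s, ... up to 30s.
function reconnectDelayMs(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```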

Rate Limits

  • Standard: 100 messages/hour
  • Pro: 1000 messages/hour
  • Enterprise: Unlimited

Best Practices

  • Reuse chat sessions for context continuity
  • Enable DSL extraction only when needed
  • Use lower temperature (0.3) for consistent results
  • Implement retry logic for network failures
  • Cache responses when appropriate
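For the caching practice above, a simple in-memory sketch keyed by the serialized message list. Illustrative only; a real client would bound the cache size and expire entries:

```javascript
// Cache computed results per message list so identical requests are
// not recomputed. The key is the JSON-serialized messages array.
const chatCache = new Map();

function cachedResult(messages, compute) {
  const key = JSON.stringify(messages);
  if (!chatCache.has(key)) chatCache.set(key, compute());
  return chatCache.get(key);
}
```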

Related Endpoints

  • Generation API - Generate code from DSL
  • Message API - Manage individual messages
  • Conversation API - Manage chat sessions