# Structured Outputs

Structured outputs let you request JSON-conforming responses from any LLM provider. Martha supports two modes:

- JSON Schema — constrained decoding guaranteeing the response matches your schema
- JSON Object — best-effort JSON output (no schema enforcement)

## Configuring on an Agent

In the admin UI, open an agent's LLM config and set the Output Format:

| Setting | Effect |
| --- | --- |
| Text (default) | Free-text responses |
| JSON Object | Best-effort JSON output (no schema enforcement) |
| JSON Schema (strict) | Constrained decoding — LLM returns JSON matching your schema exactly |

When selecting JSON Schema, provide a schema in the textarea:

```json
{
  "type": "object",
  "properties": {
    "answer": { "type": "string" },
    "confidence": { "type": "number" },
    "sources": {
      "type": "array",
      "items": { "type": "string" }
    }
  },
  "required": ["answer", "confidence", "sources"],
  "additionalProperties": false
}
```

The agent's final response will always conform to this schema.
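Even with constrained decoding, a cheap client-side sanity check catches integration mistakes (wrong schema version, truncated responses). A minimal sketch using only the standard library; `check_response` is illustrative, not part of Martha's API:

```python
import json

# The schema from above, as configured on the agent.
SCHEMA = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
        "sources": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["answer", "confidence", "sources"],
    "additionalProperties": False,
}

def check_response(raw: str, schema: dict = SCHEMA) -> dict:
    """Parse a response and spot-check required keys and
    additionalProperties against the configured schema."""
    data = json.loads(raw)
    missing = [k for k in schema["required"] if k not in data]
    if missing:
        raise ValueError(f"missing required keys: {missing}")
    if not schema.get("additionalProperties", True):
        extra = [k for k in data if k not in schema["properties"]]
        if extra:
            raise ValueError(f"unexpected keys: {extra}")
    return data

reply = '{"answer": "42", "confidence": 0.9, "sources": ["doc-1"]}'
parsed = check_response(reply)
```

For full keyword coverage (nested types, formats), a dedicated validator library is a better fit than hand-rolled checks.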

!!! note "Tool calls are unaffected"

    Structured outputs only apply to the agent's final text response. During tool-calling turns, the LLM is free to emit tool calls as normal. The schema constraint applies only after all tools complete.

## Configuring via API

Set `response_format` in the agent's `llm_config`:

```json
{
  "provider": "claude",
  "model": "claude-sonnet-4-5-20250929",
  "temperature": 0.7,
  "max_tokens": 4096,
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "agent_output",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "answer": { "type": "string" },
          "confidence": { "type": "number" }
        },
        "required": ["answer", "confidence"],
        "additionalProperties": false
      }
    }
  }
}
```

## Provider Support

All major LLM providers are supported. Martha translates a single canonical `response_format` into each provider's native structured-output API:

| Provider | Translation |
| --- | --- |
| Claude (Anthropic) | Native structured-output parameter |
| OpenAI | Native `response_format` |
| Any other provider | Routed through a multi-provider connector that handles 100+ providers |

You write the schema once; Martha picks the right format for whichever model the agent ends up using.
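Conceptually, the translation is a dispatch on provider. A sketch of the idea only; the per-provider parameter names below are placeholders, not Martha's actual field names:

```python
def to_provider_format(response_format: dict, provider: str) -> dict:
    """Map the canonical response_format to provider-native kwargs.
    Field names per provider are assumed for illustration."""
    if provider == "openai":
        # OpenAI accepts a json_schema response_format natively.
        return {"response_format": response_format}
    if provider == "claude":
        # Anthropic-style APIs take the bare schema via their own
        # structured-output parameter (name assumed here).
        return {"output_schema": response_format["json_schema"]["schema"]}
    # Everything else: hand the canonical block to the connector unchanged.
    return {"response_format": response_format}

canonical = {
    "type": "json_schema",
    "json_schema": {
        "name": "agent_output",
        "strict": True,
        "schema": {"type": "object"},
    },
}
claude_kwargs = to_provider_format(canonical, "claude")
```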

## Workflow Nodes

### LLM Router

The LLM router node automatically uses structured outputs for classification. When you configure conditions (e.g., "billing", "technical", "sales"), the router builds a JSON Schema with an enum constraint, ensuring the LLM returns exactly one of the configured categories.

No manual configuration needed — this happens automatically.
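The enum-constrained schema the router generates looks roughly like this. An illustrative reconstruction, not the router's actual implementation:

```python
def router_schema(categories: list[str]) -> dict:
    """Build an enum-constrained classification schema: the LLM must
    return exactly one of the configured categories."""
    return {
        "type": "object",
        "properties": {
            "category": {"type": "string", "enum": categories},
        },
        "required": ["category"],
        "additionalProperties": False,
    }

schema = router_schema(["billing", "technical", "sales"])
```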

### LLM Node

In the workflow builder, select an LLM node and scroll to Output Format in the config panel. Choose JSON Object or JSON Schema, same as the agent-level config.

Via API, set `response_format` inside the node's `llm_config`:

```json
{
  "llm_config": {
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "extraction",
        "strict": true,
        "schema": {
          "type": "object",
          "properties": { "key": { "type": "string" } },
          "required": ["key"],
          "additionalProperties": false
        }
      }
    }
  }
}
```

### Agent Loop Node

In the workflow builder, expand Override agent settings for this step on an agent loop node to find the Output Format selector. This overrides the agent's default format for this workflow step only.

Via API, set `response_format` in the node's `llm_config` override, same format as the agent-level config.
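The override behaves like a shallow merge: keys set on the node win for that step, everything else falls through to the agent default. A sketch of that assumed semantics, for intuition only:

```python
def apply_override(agent_llm_config: dict, node_override: dict) -> dict:
    """Node-level keys replace agent defaults for this step only;
    unset keys fall through (merge behavior assumed for illustration)."""
    merged = dict(agent_llm_config)
    merged.update(node_override)
    return merged

step_config = apply_override(
    {"provider": "claude", "temperature": 0.7},    # agent defaults
    {"response_format": {"type": "json_object"}},  # this step only
)
```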

## Limitations

- Streaming: JSON arrives incrementally during streaming; the complete response is only available at the end. There is no automatic retry on parse failure during streaming.
- Model support: Older models (GPT-3.5, Claude 3) may not support `json_schema` mode; the platform falls back gracefully to JSON-object mode.
- Schema complexity: Very complex schemas (recursive, deeply nested) may be rejected by some providers.
- Token limits: Large schemas combined with low `max_tokens` settings can cause truncated, unparseable output.
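Given the streaming and truncation caveats above, the safe pattern is to buffer all chunks and parse once at the end, handling parse failures yourself. A minimal sketch; `parse_streamed_json` is illustrative, not a Martha helper:

```python
import json

def parse_streamed_json(chunks):
    """Accumulate streamed chunks and parse once at the end. Since
    there is no automatic retry, a failed parse (e.g. truncation
    from a low max_tokens) is reported as None for the caller."""
    buffer = "".join(chunks)
    try:
        return json.loads(buffer)
    except json.JSONDecodeError:
        return None

result = parse_streamed_json(['{"answer": "ok", ', '"confidence": 1.0}'])
```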

Martha is built by aiaiai-pt.