# Agent integration
Martha is designed to be operated by both human developers and LLM agents. The CLI is the primary surface for both — it ships with a structured JSON output mode, predictable exit codes, and a bundled skill reference describing every command.
## The skill bundle
`@aiaiai-pt/martha-cli` ships with `skills/martha-cli/SKILL.md`, a self-contained operational reference covering the tenant/agent/function/task primitives, every command, and the JSON contracts the CLI produces. Pull it directly into your agent's prompt context:
```shell
# Print to stdout
martha skill

# Pipe a head into the prompt
npx -y @aiaiai-pt/martha-cli@latest skill | head -200
```

The skill is keyword-tagged for tools like Claude Code that auto-discover skills under `~/.claude/skills/`; drop the file there and it activates automatically.
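A minimal sketch of wiring that up: copy the skill into Claude Code's discovery directory. The helper name and the `martha-cli` subdirectory are our assumptions (the subdirectory mirrors the bundle's own path); adjust to your setup.

```shell
# Sketch: install the bundled skill where Claude Code discovers it.
# install_martha_skill and the "martha-cli" subdirectory name are
# illustrative assumptions, not part of the CLI.
install_martha_skill() {
  local dest="${1:-$HOME/.claude/skills/martha-cli}"
  mkdir -p "$dest"
  martha skill > "$dest/SKILL.md"
}
```
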
## JSON output discipline
Every command supports `--json`. Use it whenever you're piping to anything that parses output:
```shell
martha agents list --json | jq '.[] | {name, status}'
martha tasks list --status open --json | jq '.[] | .id'
```

Without `--json`, output is human-formatted with ANSI color escapes that don't survive plain-text parsing.
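When a downstream step wants plain columns rather than JSON, jq can flatten the array. A sketch, assuming the agents listing is a JSON array whose objects carry the `name` and `status` fields used in the examples above:

```shell
# Sketch: tab-separated name/status pairs for line-oriented tooling.
# Assumes an array of objects with "name" and "status" fields.
martha agents list --json | jq -r '.[] | [.name, .status] | @tsv'
```
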
## Stability
0.x releases may change CLI flags or JSON output between minor versions. Pin a version in agent runtimes:
```shell
npx -y @aiaiai-pt/martha-cli@<pinned-version> ...
```

A `schema_version` envelope in `--json` output is on the roadmap; until then, treat the contract as best-effort and rebuild your parser on each minor bump.
## Exit codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Generic error |
| 2 | Auth failure (token missing, expired, or rejected) |
| 3 | Not found (404) |
| 4 | Validation error (400/422) |
| 5 | Conflict (409) |
Agent runtimes should map 2 → "re-authenticate", 3 → "the named entity doesn't exist", 4 → "your input was malformed" rather than treating non-zero as undifferentiated failure.
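A minimal sketch of that mapping in shell; the remedy strings are illustrative, not CLI output:

```shell
# Sketch: translate martha exit codes into agent-facing remedies.
# The message strings are illustrative assumptions, not CLI output.
remedy_for() {
  case "$1" in
    0) echo "success" ;;
    2) echo "re-authenticate" ;;
    3) echo "the named entity doesn't exist" ;;
    4) echo "your input was malformed" ;;
    5) echo "conflict: refetch state before retrying" ;;
    *) echo "generic failure" ;;
  esac
}

# usage: martha tasks get "$TASK_ID" --json; remedy_for "$?"
```
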
## Auth modes for agent runtimes
The cleanest fit is service-account credentials — no browser flow, env-based, auto-refreshing:
```shell
export MARTHA_CLIENT_ID="martha-tenant-acme-client"
export MARTHA_CLIENT_SECRET="..."
martha auth login --service-account
```

After login, every subsequent CLI invocation uses the cached service-account token, refreshing it automatically before expiry.
For a one-shot agent run that doesn't need persistence, mint a token externally and pass it via `MARTHA_TOKEN`:
```shell
# Get a token from your Keycloak realm via client credentials
TOKEN="$(curl -sf -X POST \
  "$KEYCLOAK_URL/realms/$REALM/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=$MARTHA_CLIENT_ID" \
  -d "client_secret=$MARTHA_CLIENT_SECRET" \
  | jq -r .access_token)"

MARTHA_TOKEN="$TOKEN" \
  npx -y @aiaiai-pt/martha-cli@<pinned-version> agents list --json
```

## Diagnosing problems
`martha doctor` runs five checks (API health, version skew, Keycloak reachability, token validity, authenticated request) and prints a specific remedy for each failure. Run it first when an agent run is misbehaving; it surfaces configuration or auth issues before you spend tokens on the actual task.
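One way to wire that in, sketched below: gate the real work behind a passing doctor run. `preflight` is a hypothetical wrapper name, not a martha command.

```shell
# Sketch: run doctor as a pre-flight gate before the actual agent task.
# "preflight" is an illustrative wrapper, not part of the CLI.
preflight() {
  if ! martha doctor; then
    echo "preflight failed: fix the reported check before retrying" >&2
    return 1
  fi
}

# usage: preflight && martha tasks list --status open --json
```
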
## Tested integrations
The CLI is designed to be harness-agnostic, but here's how it fits into the runtimes we've integrated with:
- Claude Code — drop the bundled `SKILL.md` into `~/.claude/skills/`; it activates automatically via Claude Code's skill discovery.
- CrewAI — shell out from a custom tool: `subprocess.run(["martha", ...], capture_output=True)`. Parse `--json` output.
- Pydantic AI — wrap each command in a `Tool` that returns the parsed JSON.
- ork — direct shell exec; the bundled skill includes a section on ork's task-claim / heartbeat / complete pattern.
- Plain shell agents — `martha skill | head -200` to seed prompt context, then `--json | jq` for everything else.
Working with another harness? Open a discussion at github.com/westeuropeco/martha/issues — we'll add it here.