# Concepts
Martha has a small vocabulary of primitives. They get blurred in everyday conversation — this page is the precise version. Skim it once and the rest of the docs will read clearly.
## The mental model in one paragraph
You build agents that hold prompts and pick functions to call as tools. You compose workflows out of nodes — LLM calls, function calls, branches, loops, human pauses. Workflows can run inside an agent's loop or be called directly. Clients consume your agents from chat, SMS, voice, or embedded widgets. Tasks queue async work for agents to claim and complete. Everything is scoped to a tenant, which is your slice of the platform.
## Primitives
| Concept | What it is | Use it for |
|---|---|---|
| Tenant | Your isolated slice of the platform. Every record — agents, functions, documents, tasks, sessions — is filtered by tenant. You don't pass it explicitly; it's derived from your token. | Multi-customer setups. Separate prod from staging. Whatever needs hard data isolation. |
| Agent | A prompt + LLM config + tool grants + loop behavior. The thing that actually talks to users or executes tasks. | Anything goal-directed: chatbot, research agent, code reviewer, customer support, internal copilot. |
| Function | A callable tool an agent can use. Either an HTTP endpoint (REST/GraphQL) or a built-in platform tool. Defined once, granted to many agents. | Wrap any external API. Expose internal capabilities. Give agents real-world reach. |
| Workflow | A graph of nodes (LLM, function, branch, loop, parallel, agent loop, approval gate) executed deterministically. | Multi-step pipelines, conditional logic, retry policies, human-in-the-loop sequences. Anything that's more than one LLM call. |
| Client | A consumer of your agents. Chat web app, SMS sender, voice line, embedded widget. Has its own credentials, system prompts, and allowlists. | Connect a frontend. Wire up SMS via a messaging provider. Drop a chat widget into a partner's site. |
| Task | A unit of async work with a goal, priority, and lifecycle (open → claimed → running → completed/failed). Optionally linked to an external tracker issue. | Queue work for agents. Coordinate human-in-the-loop reviews. Sync with Linear/GitHub/GitLab. |
| Team | A named group of agents with a routing strategy (round_robin, manual, external). | Spread workload across many similar agents. Dispatch to remote agent harnesses (ork, CrewAI, etc.). |
| Trigger | Event-driven dispatch: when an event fires, start a workflow or call a function. Events come from the platform itself, inbound webhooks, schedules, or absence-of-event timers. | Auto-ingest documents on upload. Sync inbound webhooks. Run periodic jobs. Detect missing heartbeats. |
| Approval | A pause point inside a workflow that waits for a human to OK something before continuing. | Compliance gates, payment confirmations, code-review approvals. |
| Connection | A stored credential for an external integration (tracker, API, OpenAPI service). Secret values live in your secret store, not in Martha's database. | Authenticate to a customer's system without baking credentials into function definitions. |
| Document collection | A named bag of documents agents can search via keyword + semantic queries. | Build retrieval-augmented agents. Surface knowledge without re-uploading docs to every prompt. |
| Knowledge wiki | A tenant-level compiled knowledge base — Martha distills your documents into a navigable wiki agents can search. | Replace ad-hoc "what does our company know about X" prompts with a single source of truth. |
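The task lifecycle in the table (open → claimed → running → completed/failed) can be sketched as a small state machine. The state names come from the table above; the `TaskState` type and `advance` function are illustrative, not part of Martha's API.

```typescript
// Illustrative sketch of the task lifecycle from the table above.
// State names match the docs; the types and functions are hypothetical.
type TaskState = "open" | "claimed" | "running" | "completed" | "failed";

// Legal transitions: open → claimed → running → completed | failed.
const transitions: Record<TaskState, TaskState[]> = {
  open: ["claimed"],
  claimed: ["running"],
  running: ["completed", "failed"],
  completed: [], // terminal
  failed: [],    // terminal
};

function advance(from: TaskState, to: TaskState): TaskState {
  if (!transitions[from].includes(to)) {
    throw new Error(`illegal transition: ${from} -> ${to}`);
  }
  return to;
}
```

The point of the shape: a task can't jump from open straight to running — it has to be claimed first, which is what lets multiple agents safely compete for the same queue.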
## How they fit together
```
                 Tenant
                   │
 ┌─────────────────┼──────────────────┐
 ▼                 ▼                  ▼
Clients         Agents            Workflows
(chat, SMS,  (prompt + tools)  (graph of nodes)
voice, embed)      │                  │
                   ▼                  │
               Functions ◄────────────┘
            (HTTP / platform)
                   │
                   ▼
             External APIs
             Document collections
             Tasks, Teams, …
```

Agents call functions. Functions call into your APIs. Workflows orchestrate sequences of LLM calls + function calls + human pauses. Clients are how end users reach an agent. Tasks queue the work.
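To make the agent–function relationship concrete, here is a hedged sketch of how a function definition and an agent's tool grants might relate. All field names (`name`, `kind`, `tools`, and so on) are illustrative stand-ins, not Martha's actual schema.

```typescript
// Hypothetical shapes — field names are illustrative, not Martha's schema.
interface FunctionDef {
  name: string;
  kind: "http" | "platform";
  method?: "GET" | "POST";
  url?: string; // for kind: "http"
}

interface AgentDef {
  name: string;
  prompt: string;
  tools: string[]; // names of granted functions
}

// A function wraps an HTTP endpoint; defined once, grantable to many agents.
const lookupOrder: FunctionDef = {
  name: "lookup_order",
  kind: "http",
  method: "GET",
  url: "https://api.example.com/orders/{id}",
};

const supportAgent: AgentDef = {
  name: "support",
  prompt: "You help customers track orders.",
  tools: ["lookup_order"], // the grant
};

// An agent may only call tools it has been granted.
function canCall(agent: AgentDef, fn: FunctionDef): boolean {
  return agent.tools.includes(fn.name);
}
```

The grant list is the key design point: functions are defined once at the tenant level, and each agent opts in to exactly the tools it needs.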
## What this gets you
You don't have to build:
- The conversation engine — multi-turn chat with tool calls, streaming, structured outputs, citations.
- The orchestration layer — durable workflow execution with retries, branches, parallel paths, and pauses for human input.
- The integration layer — drop in any OpenAPI spec and every endpoint becomes an agent tool.
- The knowledge layer — upload documents and they're parsed, chunked, embedded, and searchable.
- The delivery layer — embeddable chat widget, SMS/WhatsApp adapters, voice channels, webhook triggers.
- The credential layer — connection store, per-client allowlists, scoped service-account auth.
- The async layer — task queue with claim/heartbeat/complete semantics for remote agent harnesses.
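The claim/heartbeat/complete cycle can be sketched with an in-memory stand-in for the task queue. The class and method names below are illustrative — a real remote harness would make the equivalent calls against Martha's task API over the network.

```typescript
// Sketch of claim/heartbeat/complete semantics against an in-memory queue.
// Names are illustrative; a real harness would call Martha's task API.
class TaskQueue {
  private tasks = new Map<string, { state: string; lastBeat: number }>();

  enqueue(id: string): void {
    this.tasks.set(id, { state: "open", lastBeat: Date.now() });
  }

  // A worker claims the first open task, or gets null if none remain.
  claim(): string | null {
    for (const [id, t] of this.tasks) {
      if (t.state === "open") {
        t.state = "claimed";
        return id;
      }
    }
    return null;
  }

  // Periodic heartbeat proves the worker is still alive; a real queue
  // would reopen tasks whose heartbeat goes stale.
  heartbeat(id: string): void {
    const t = this.tasks.get(id);
    if (t) t.lastBeat = Date.now();
  }

  complete(id: string): void {
    const t = this.tasks.get(id);
    if (t) t.state = "completed";
  }

  state(id: string): string | undefined {
    return this.tasks.get(id)?.state;
  }
}
```

The heartbeat is what makes remote harnesses safe: if a worker dies mid-task, the missing heartbeat tells the queue the claim is stale and the task can be reassigned.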
You build:
- What your product actually does — the prompts, the functions that hit your specific systems, the workflow shapes for your specific business processes.
That's the trade.
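For a feel of what a "workflow shape" means in practice, here is a toy deterministic node graph in the spirit of Martha's workflows (an LLM-style step, a branch, then a function-style step). The node kinds and the `runWorkflow` runner are hypothetical, not the platform's schema.

```typescript
// Illustrative only: a tiny deterministic node graph (classify -> branch ->
// one of two leaf steps). Not Martha's actual workflow format.
type Ctx = Record<string, unknown>;

interface WorkflowNode {
  id: string;
  run(ctx: Ctx): string | null; // id of the next node, or null to stop
}

function runWorkflow(nodes: WorkflowNode[], startId: string, ctx: Ctx): Ctx {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  let next: string | null = startId;
  while (next !== null) {
    const node = byId.get(next);
    if (!node) throw new Error(`unknown node: ${next}`);
    next = node.run(ctx); // each node decides the next edge to follow
  }
  return ctx;
}

// A branch node picks the next step from context — the conditional edge.
const graph: WorkflowNode[] = [
  { id: "classify", run: (ctx) => { ctx.intent = "refund"; return "branch"; } },
  { id: "branch", run: (ctx) => (ctx.intent === "refund" ? "refund" : "faq") },
  { id: "refund", run: (ctx) => { ctx.result = "refund issued"; return null; } },
  { id: "faq", run: (ctx) => { ctx.result = "answered"; return null; } },
];
```

Running `runWorkflow(graph, "classify", {})` walks classify → branch → refund deterministically; swapping the classify step's output reroutes the graph without touching the other nodes.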
## What's next
- Just want to try it? Quickstart — five minutes from `npm i` to your first command.
- Building an agent? Start with the CLI, then Composable workflows.
- Embedding chat? Embeddable chat walks the integration end-to-end.
- Wiring an existing API? OpenAPI integrations imports a whole spec at once.
- Operating Martha programmatically from another LLM agent? Agent integration covers the JSON-contract discipline and skill bundle.