

All endpoints sit under /v1/tasks/:taskId/.... taskId is the workspace task UUID or, for child workflows, a composite parent:step:slug id (the controllers split on : wherever they need the root id). The endpoints fall into four families:
  • Lifecycle — what the UI renders and how the user responds (/ui, /snapshots, /action, /row-action).
  • Events — append-only timeline used for reasoning streams and audit (/events).
  • Files — upload, list, download, and provider-tagged ingest under /files.
  • Data ingest — structured JSON payload submission under /data/ingest.
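The parent:step:slug convention above can be sketched as a small helper (the function name is hypothetical; the split-on-: behaviour is what the text describes):

```typescript
// Hypothetical helper mirroring what the controllers do: a child-workflow id
// like "parent:step:slug" splits on ":" and the first segment is the root
// workspace task UUID; a plain UUID has no ":" and passes through unchanged.
function rootTaskId(taskId: string): string {
  return taskId.split(":")[0];
}
```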

Lifecycle

Combined UI state

GET /v1/tasks/:taskId/ui
Composes getTask + getNextStep from api-workspace into a single UIState payload that ui-tenant can render directly. The container resolves the tenant context from the task’s tenantId, auto-refreshing on cache mismatch — so the dev/e2e wipe-and-reseed flow works without restarting the process.

Response (sketch):
{
  "task": { "id": "…", "title": "…", "status": "IN_PROGRESS", … },
  "step": {
    "id": "await_starting_tb",
    "title": "Upload trial balance",
    "displayHint": { "component": "FILE_UPLOAD", "config": {} },
    "args": {}
  },
  "steps": [{ "id": "period_open", "status": "COMPLETED" }, …]
}
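The sketch above can be written down as TypeScript types plus the dispatch a renderer would typically perform (field names are taken from the response sketch; the optionality and the GENERIC fallback are assumptions):

```typescript
// Shape of the combined UI state, per the response sketch above.
interface UIStep {
  id: string;
  title?: string;
  displayHint?: { component: string; config: Record<string, unknown> };
  args?: Record<string, unknown>;
  status?: string;
}

interface UIState {
  task: { id: string; title: string; status: string };
  step: UIStep | null;
  steps: UIStep[];
}

// A renderer would typically switch on displayHint.component; the fallback
// component name here is hypothetical.
function componentFor(state: UIState): string {
  return state.step?.displayHint?.component ?? "GENERIC";
}
```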

Step snapshots

GET /v1/tasks/:taskId/snapshots
Pure pass-through of api-workspace’s /snapshots endpoint — what was shown + decided at each completed step. Fetched on demand when the user clicks a completed step in the sidebar.

Response: Record<string, StepSnapshot> keyed by step id.

Approve / reject

POST /v1/tasks/:taskId/action
User’s response to an approval gate. Mapped to api-workspace’s /approve or /reject signal endpoint.

Body (TaskActionDto):
{
  "type": "approved",
  "comments": "All extracted fields look correct."
}
  • type — "approved" | "rejected" (required). Mirrors the UI primary / failure CTAs.
  • comments — string (optional). Free-text comment — recorded on the audit log; mapped to reason on the workspace signal.

Response: { "ok": true }.
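The body above can be sketched as a TypeScript type with a minimal client-side check (the validator name is hypothetical; the two-literal constraint is from the field reference):

```typescript
type TaskActionType = "approved" | "rejected";

interface TaskActionDto {
  type: TaskActionType;
  comments?: string;
}

// Minimal pre-flight check mirroring the server-side DTO validation:
// type must be one of the two literals; comments is free-form and optional.
function validateTaskAction(body: unknown): body is TaskActionDto {
  const b = body as Partial<TaskActionDto>;
  return b?.type === "approved" || b?.type === "rejected";
}
```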

DATA_TABLE row action

POST /v1/tasks/:taskId/row-action
Non-terminal row action — edit_cell or reject_row — forwarded to api-workspace as a user_action signal whose kind is row_action. Workflow handlers drain these via on_row_action (see UI and Temporal signals).

Body (TaskRowActionDto):
{
  "ledger": "expenses",
  "action": "edit_cell",
  "rowId": "11111111-…",
  "field": "amount_gross",
  "value": "12.50"
}
  • ledger (required) — Currently "expenses" only (server-validated allow-list).
  • action (required) — "edit_cell" or "reject_row".
  • rowId (required) — Stable subledger row UUID.
  • field (required for edit_cell) — Column key (e.g. amount_gross).
  • value (required for edit_cell) — New scalar / null; coercion + validation happen in the workflow / domain layer.

Response: { "ok": true }.
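The conditional requirements above (allow-listed ledger; field only for edit_cell) can be sketched as a client-side validator. Names are hypothetical; the single-entry allow-list reflects the current server behaviour described above:

```typescript
const ALLOWED_LEDGERS = new Set(["expenses"]); // server-validated allow-list today

interface TaskRowActionDto {
  ledger: string;
  action: "edit_cell" | "reject_row";
  rowId: string;
  field?: string;
  value?: string | number | null;
}

// Collect validation errors instead of throwing, so a UI can surface them all.
function rowActionErrors(dto: TaskRowActionDto): string[] {
  const errors: string[] = [];
  if (!ALLOWED_LEDGERS.has(dto.ledger)) errors.push("ledger not in allow-list");
  if (dto.action === "edit_cell" && dto.field === undefined)
    errors.push("field required for edit_cell");
  return errors;
}
```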

Events

Append-only timeline of task_events rows. Used for reasoning-stream rendering, audit, and the canvas “what happened next” feed.

Ingest a task event

POST /v1/tasks/:taskId/events
Internal producer endpoint. When NTRO_INTERNAL_EVENTS_KEY is set, requests must include x-ntro-events-key: <secret> — typically only ntro-worker emits to this endpoint.

Body (TaskEventDto):
{
  "type": "reasoning.delta",
  "source": "worker",
  "sourceLabel": "April Rent Roll",
  "seq": 42,
  "payload": { "delta": "…" },
  "groupKey": "extract_fields"
}
  • type — string (≤128), required. E.g. reasoning.delta, step.completed, source.submitted.
  • source — string (≤64), required. Producer — typically "worker".
  • sourceLabel — string (≤256), optional. Human label for the source.
  • seq — int (≥0), required. Monotonic per task / event stream.
  • payload — object, required. Event payload (free-form).
  • groupKey — string (≤128), optional. Thread / group id for streaming reasoning into a single panel.

Response: { "ok": true, "event": <persisted record> }.

List events

GET /v1/tasks/:taskId/events
Lists persisted events in seq order. No auth — the x-ntro-events-key requirement applies to writes only.

Response:
{ "events": [ { "type": "…", "seq": 1, "payload": {}, … }, … ] }
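A consumer rendering a reasoning stream would typically fold reasoning.delta events into one string per groupKey, in seq order. A hypothetical sketch (field names from the event DTO above; the "default" group key is an assumption):

```typescript
interface TaskEvent {
  type: string;
  seq: number;
  payload: Record<string, unknown>;
  groupKey?: string;
}

// Concatenate reasoning.delta payloads per groupKey, ordered by seq,
// so each group renders as a single streaming panel.
function assembleReasoning(events: TaskEvent[]): Map<string, string> {
  const out = new Map<string, string>();
  for (const e of [...events].sort((a, b) => a.seq - b.seq)) {
    if (e.type !== "reasoning.delta") continue;
    const key = e.groupKey ?? "default";
    out.set(key, (out.get(key) ?? "") + String(e.payload.delta ?? ""));
  }
  return out;
}
```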

Files

All endpoints sit under /v1/tasks/:taskId/files. Bytes live in tenant Postgres (ingest.submitted_documents.data_bytes).

List files for a task

GET /v1/tasks/:taskId/files?source=<source>
Returns up to 20 documents associated with the task, ordered by upload time descending. Used by the “viewing completed step” mode to find files attached to a previous file-upload step. Filter by source to scope to one runbook source slug. Filtering by taskId keeps the per-source list from leaking duplicate file rows across wipe-and-reseed cycles. Response
[
  {
    "id": "11111111-…",
    "filename": "rent-roll-may.pdf",
    "contentType": "application/pdf",
    "uploadedAt": "2026-05-12T09:00:00Z",
    "downloadUrl": "/v1/tasks/<taskId>/files/<id>/download"
  }
]

Upload a file (browser / script path)

POST /v1/tasks/:taskId/files
Bytes inline in the request body (typically base64-encoded). NOT advertised to agents in next_step — sandboxed agents often can’t reach this URL. Same persist + signal flow as the agent path.

Body (UploadFileDto):
  • source (required) — Stable source identifier; workflow activities query by this.
  • data (required) — Base64 of the file bytes.
  • fileName (optional) — Override; otherwise derived.
  • contentType (optional) — MIME type; guessed from the filename if omitted.
  • period (optional) — Period qualifier ("YYYY-MM").
  • schemaSlug (optional) — Extraction schema slug used downstream.
  • signalTaskId / signalName (optional) — Workflow ID + signal name to fire after persist. From next_step.args.

Response: UploadResult describing the persisted document + any side effects.
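Assembling the upload body can be sketched as follows (the builder is hypothetical; only source and data are required per the field reference, and btoa is available in browsers and modern Node):

```typescript
interface UploadFileDto {
  source: string;
  data: string; // base64 of the file bytes
  fileName?: string;
  contentType?: string;
  period?: string;
  schemaSlug?: string;
  signalTaskId?: string;
  signalName?: string;
}

// Base64-encode text content inline, as the browser/script path expects.
function buildUpload(source: string, text: string, fileName?: string): UploadFileDto {
  return { source, data: btoa(text), fileName };
}
```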

Ingest a file (agent path)

POST /v1/tasks/:taskId/files/ingest
Provider-tagged file reference; api-tenant fetches the bytes server-side. This is what next_step’s submit_file action tells agents to use.

Body (IngestFileDto) — same fields as the upload DTO but with fileRef instead of data:
{
  "source": "xero-trial-balance",
  "fileRef": { "provider": "local", "path": "/data/tb.xlsx" },
  "period": "2026-03",
  "signalTaskId": "…",
  "signalName": "tb_submitted"
}
fileRef.provider is one of:
  • local — { "provider": "local", "path": "/abs/path" }
  • url — { "provider": "url", "url": "https://…", "headers": {…} }
  • inline_base64 — { "provider": "inline_base64", "data": "<base64>" }
The FileFetcher validates per-provider fields. Adding a new provider doesn’t require touching the DTO.

Response: UploadResult (same shape as the upload path).
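The provider table above maps naturally onto a discriminated union; a sketch of the per-provider check (the validator is hypothetical — the real FileFetcher's rules may differ), showing why a new provider extends the union without touching the DTO:

```typescript
type FileRef =
  | { provider: "local"; path: string }
  | { provider: "url"; url: string; headers?: Record<string, string> }
  | { provider: "inline_base64"; data: string };

// Each arm checks only its own required fields; the compiler forces a new
// case here when a provider is added to the union.
function fileRefValid(ref: FileRef): boolean {
  switch (ref.provider) {
    case "local":
      return ref.path.length > 0;
    case "url":
      return ref.url.startsWith("http");
    case "inline_base64":
      return ref.data.length > 0;
  }
}
```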

Download

GET /v1/tasks/:taskId/files/:documentRef/download
Streams the bytes of a previously-uploaded document. Used by the “viewing completed step” mode to let the user re-download a file submitted earlier. taskId is in the route for symmetry with the upload paths but isn’t enforced at the row level — documentRef is already a UUID.

Response: raw bytes with Content-Type from the stored row and Content-Disposition: attachment.

Data ingest

POST /v1/tasks/:taskId/data/ingest
Bulk-oriented endpoint with single-row sugar. Persists rows to ingest.submitted_records and fires a Temporal signal per event after commit.

Body shapes

Two mutually exclusive top-level shapes — exactly one of data / events must be set.

Single-row (back-compat):
{
  "data": { "vendor": "Uber", "amount": 12.5, "currency": "GBP", "date": "2026-04-12" },
  "sourceRef": { "emailId": "<msg-1234@example.com>" },
  "kind": "expense",
  "signalTaskId": "…",
  "signalName": "data_submitted"
}
Bulk:
{
  "events": [
    { "data": {}, "sourceRef": {} },
    { "data": {} }
  ],
  "kind": "expense",
  "signalTaskId": "…",
  "signalName": "data_submitted"
}
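The exactly-one-of rule for the two shapes above can be expressed as a one-line check (the function name is hypothetical):

```typescript
// Exactly one of data / events must be set: XOR of the two presence flags.
function ingestShapeOk(body: { data?: unknown; events?: unknown[] }): boolean {
  return (body.data !== undefined) !== (body.events !== undefined);
}
```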

Field reference

  • data — top-level (single). Either data or events is required. Free-form JSON object.
  • events[].data — per-event (bulk). Same as single-row data.
  • sourceRef — top-level (single) or per-event (bulk), optional. Free-form provenance JSON (e.g. {"emailId": "…"}). Persisted on the event row for audit. Ignored at top level when events is provided.
  • kind — top-level, optional. Discriminator shared across the batch (e.g. "expense", "refund").
  • signalTaskId — top-level, optional. Workflow ID to signal once events are persisted. May be a child workflow id (parent:step:slug).
  • signalName — top-level, optional. Signal name on the running workflow (e.g. "data_submitted").

Limits

  • 256 KB per event on the JSON-stringified data; one over-cap event rejects the whole batch.
  • One DB transaction per batch — bulk persists atomically; one Temporal signal fires per event after commit.
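A client can pre-check the per-event cap before submitting; a sketch under the stated limit (the helper is hypothetical — byte length is measured on the JSON-stringified data, UTF-8 encoded):

```typescript
const MAX_EVENT_BYTES = 256 * 1024;

// One oversized event rejects the whole batch, so check them all up front.
function batchWithinCap(events: Array<{ data: unknown }>): boolean {
  const enc = new TextEncoder();
  return events.every(
    (e) => enc.encode(JSON.stringify(e.data)).length <= MAX_EVENT_BYTES
  );
}
```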

Response

The shape mirrors the input:
  • Single body in → single DataIngestResult out.
  • events array in → DataIngestResult[] out.
Each result carries the persisted row id plus any per-event acceptance status the service emitted.

Errors

  • 400 Bad Request — DTO validation fails; both data and events set; over-cap event in a batch; row-action ledger not in the allow-list; non-UUID rowId.
  • 401 Unauthorized — POST /events without a valid x-ntro-events-key (when NTRO_INTERNAL_EVENTS_KEY is set).
  • 404 Not Found — Task / document not found, or task not under this tenant.

UI and Temporal signals

How /action and /row-action reach the workflow as Temporal signals.

Ingest outcomes & feedback

Workflow-side handling of submit_file / data_submitted results.