
Documentation Index

Fetch the complete documentation index at: https://docs.ntropii.com/llms.txt

Use this file to discover all available pages before exploring further.

Ntropii is built around a counter-intuitive idea: LLMs should generate workflow code, not be the workflow code. Most “AI agent” platforms wire an LLM into the execution loop — every customer interaction, every document processed, every journal entry posted runs through a model that reasons, decides, and acts. That’s expensive, non-reproducible, and a regulator’s nightmare in fund operations, where a single bad NAV calculation carries real regulatory risk.
1. Design the workflow

A coding agent (Claude Code, Cursor, Copilot Studio) talks to Ntropii over MCP, reads your data platform schema, and generates a deterministic Python workflow that automates the operation. You and your team review the generated code in a normal pull request.
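The generated runbook is ordinary, reviewable Python. A minimal sketch of the kind of deterministic function such a workflow contains — the names and shapes here are illustrative, not the real ntro SDK API:

```python
from decimal import Decimal

def compute_nav(positions: dict[str, Decimal],
                prices: dict[str, Decimal],
                liabilities: Decimal) -> Decimal:
    """Deterministic NAV: same inputs always yield the same output.

    No model call happens at run time; the LLM's job ended when it
    generated (and a human approved) this code.
    """
    gross = sum((qty * prices[asset] for asset, qty in positions.items()),
                Decimal("0"))
    return gross - liabilities

nav = compute_nav(
    positions={"BOND_A": Decimal("1000"), "EQ_B": Decimal("250")},
    prices={"BOND_A": Decimal("99.50"), "EQ_B": Decimal("42.00")},
    liabilities=Decimal("12500"),
)
# 1000 × 99.50 + 250 × 42.00 − 12500 = 97500
```

Because the logic is plain code, the PR review in the next step is a normal code review: an accountant can read the arithmetic line by line.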
2. Test workflow locally

A fund accountant or operations engineer corrects anything the model got wrong — a misclassified GL account, a wrong date format, a missing edge case. Once approved, the code merges and ships.
3. Deploy & execute

The merged Python deploys to an Ntropii worker that runs only that code — with an LLM observer in the loop. Every NAV calculation, every journal posting, every document classification is reproducible from the workflow source plus its inputs. An auditor can replay a run from six months ago and get bit-identical output.
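Replayability of this kind can be enforced by fingerprinting each run: a hash over the workflow's commit plus a canonical serialization of its inputs identifies the execution, and a replay must reproduce both the fingerprint and the output. A sketch of the idea (an assumption about how one could implement it, not Ntropii's documented mechanism):

```python
import hashlib
import json

def run_fingerprint(code_commit: str, inputs: dict) -> str:
    """Stable identity for a run: commit SHA + canonical JSON of inputs.

    sort_keys plus compact separators make the serialization canonical,
    so logically identical inputs always hash the same way.
    """
    canonical = json.dumps(inputs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{code_commit}:{canonical}".encode()).hexdigest()

fp_original = run_fingerprint("a1b2c3d", {"fund": "SPV-17", "date": "2024-06-30"})
fp_replay   = run_fingerprint("a1b2c3d", {"date": "2024-06-30", "fund": "SPV-17"})
assert fp_original == fp_replay  # key order doesn't matter; the run is the same
```

An auditor replaying a six-month-old run checks out the recorded commit, feeds in the recorded inputs, and verifies the fingerprint and output match.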
4. Observe

A separate observer process watches running workflows for anomalies (extraction confidence drops, reconciliation breaks, schedule slips). When something looks off, it routes the case back to a human reviewer — and only then does an LLM re-engage to suggest a fix.
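In outline, the observer is a set of deterministic checks over run telemetry; only when a check fires does a human (and only then an LLM) get involved. A simplified sketch — the thresholds and field names are made up for illustration:

```python
def triage(metrics: dict) -> str:
    """Route a run based on anomaly checks.

    Thresholds here are illustrative; a real observer would load them
    from per-workflow configuration.
    """
    if metrics.get("extraction_confidence", 1.0) < 0.85:
        return "human_review"        # document extraction confidence dropped
    if metrics.get("reconciliation_break", False):
        return "human_review"        # ledger vs statement mismatch
    if metrics.get("schedule_slip_minutes", 0) > 30:
        return "human_review"        # run finished too late
    return "auto_approve"

assert triage({"extraction_confidence": 0.97}) == "auto_approve"
assert triage({"extraction_confidence": 0.60}) == "human_review"
```

The point of the design is that the routing itself is deterministic code: the LLM suggests fixes only after a human has seen the flagged case.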

Why this matters for fund operations

Private markets fund admin runs under JFSC, CSSF, FSRA, FCA, SEC, and Big-4 audit pressure. The deployed system has to be:
  • Reproducible. Same inputs produce the same outputs, every time, regardless of model version.
  • Auditable. Every output traces to a specific commit of source code and a specific set of input documents.
  • Explainable. The accountant signing a NAV must be able to defend every line of the trial balance.
  • Cheap to run at scale. Closing 200 SPVs at month-end can’t cost 200 × £40 in LLM tokens.
A workflow generated once and deployed satisfies all four. A workflow that re-reasons on every execution satisfies none.
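The cost point is easy to make concrete. Using the text's own figures (roughly £40 of LLM tokens per in-the-loop run, 200 SPVs per month-end), re-reasoning on every close costs four figures every month, while a generated runbook pays the token cost roughly once, at generation time:

```python
spvs = 200
tokens_per_run_gbp = 40            # figure from the text: ~£40 of tokens per run

llm_in_loop_monthly = spvs * tokens_per_run_gbp   # paid every month-end
generated_once = tokens_per_run_gbp               # assumption: regeneration is
                                                  # rare, so the cost amortizes
print(llm_in_loop_monthly)  # 8000  → £8,000 per close, every close
```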

What you write vs what Ntropii writes

| You write | Ntropii (or a coding agent using Ntropii) writes |
| --- | --- |
| The skill definition — a prompt that teaches the LLM what to generate for a given domain (NAV, capital call, document ingest) | The runbook itself — a Python file using the ntro SDK |
| Capability implementations specific to your tenant (custom GL classifier, jurisdiction-specific tax logic) | The orchestration: how to fetch documents, when to extract, how to call your custom logic, when to ask a human |
| Reviews and corrections in the PR | The correction corpus that improves the next generation |

What you build on

The platform you reach via the CLI, MCP, and SDK has two halves:
  • Ntropii Workspace — the control plane that holds tenants, entities, workflow definitions, deployments, schedules, and notifications. It is stateless with respect to your financial data; this is where you create tenants, push workflow versions, and trigger runs.
  • Ntropii Tenant — the runtime that lives inside your infrastructure (or ours, if you choose hosted), holds your credentials, and executes runbooks against your data platform. No customer financial data ever flows through Workspace.
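The split can be pictured as two record types: Workspace stores only control-plane metadata, while credentials and financial data live exclusively in the Tenant. A structural sketch — the field names are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkspaceRecord:
    """Control plane: metadata only — no credentials, no financial data."""
    tenant_id: str
    workflow_version: str   # e.g. git commit of the deployed runbook
    schedule: str           # e.g. a cron expression for month-end runs

@dataclass
class TenantRuntime:
    """Runs inside your infrastructure; the only place secrets live."""
    credentials: dict = field(default_factory=dict)
    data_platform_url: str = ""

    def execute(self, record: WorkspaceRecord) -> str:
        # The tenant pulls the workflow version named by Workspace and
        # runs it locally against its own data platform.
        return f"ran {record.workflow_version} for {record.tenant_id}"

run = TenantRuntime(credentials={"db": "secret"}).execute(
    WorkspaceRecord("acme-fund", "a1b2c3d", "0 2 1 * *"))
```

Note what the Workspace record does not contain: no credentials field, no document contents — consistent with the claim that no customer financial data flows through Workspace.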
The next page walks through this split in detail.

Tenant architecture

The Workspace ↔ Tenant boundary and what lives where.

Configure environment

Bind a data platform and set up your runbook repo.