Quality checks let a runbook ask a separate model to evaluate whatever the previous step produced — extracted invoice fields, a proposed journal, a calculated NAV — before a human ever sees it. The check result feeds the Tenant UI’s review screen alongside the underlying value, so the accountant reviewing the output sees the AI’s own assessment of “I’m pretty sure I got this right” or “this looks off, please double-check the GL mapping”.
## Install
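Assuming the shared quality-check activities ship with the platform’s Python SDK, installation is a single pip line. The package name below is a placeholder, not confirmed by this page; use whatever your runbook template already pins.

```bash
pip install ntropii-sdk  # placeholder package name, not confirmed
```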
## The API
`run_quality_check` returns a `CheckResult` with:
| Field | Type | What’s in it |
|---|---|---|
| `result.passed` | `bool` | The headline answer |
| `result.severity` | `"info" \| "warn" \| "error"` | How urgent the finding is |
| `result.summary` | `str` | One-line human summary for the UI |
| `result.findings` | `list[Finding]` | Itemised observations, each with field path + explanation |
| `result.suggested_corrections` | `list[Correction]` | Optional suggested edits the human can accept with one click |
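For orientation, the table maps to a shape like the sketch below. This is an illustration assuming dataclass-style types; the SDK’s actual definitions of `Finding` and `Correction` are not shown on this page, so their fields here are assumptions.

```python
from dataclasses import dataclass, field
from typing import Literal


@dataclass
class Finding:
    # Illustrative fields inferred from the table: where the issue is and why.
    field_path: str    # e.g. "lines[3].gl_code" (hypothetical path format)
    explanation: str


@dataclass
class Correction:
    # A one-click edit the reviewer can accept; field names are assumptions.
    field_path: str
    suggested_value: str


@dataclass
class CheckResult:
    passed: bool                                # the headline answer
    severity: Literal["info", "warn", "error"]  # how urgent the finding is
    summary: str                                # one-line summary for the UI
    findings: list[Finding] = field(default_factory=list)
    suggested_corrections: list[Correction] = field(default_factory=list)
```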
## Wiring into a runbook
Quality checks are typically run as a Temporal activity right after the thing they’re checking. Lifted from `nav-monthly-journals`:
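A minimal sketch of the runbook’s `activities.py`, assuming the SDK exposes `run_quality_check` from an importable activities module (the exact import path is an assumption):

```python
# runbook/activities.py: re-export shared SDK activities so the worker
# registers them with this runbook's bundle.
from ntropii_sdk.activities import run_quality_check  # import path assumed

__all__ = [
    "run_quality_check",
    # ...plus any other shared activities this runbook needs
]
```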
**The re-export pattern matters.** `run_quality_check` is implemented in the SDK, but the worker only registers activities it can find via the runbook’s `activities.py`. Re-exporting `run_quality_check` (and the other shared activities you need) in `__all__` is what wires it into your bundle.

With the re-export in place, you can `await run_quality_check(...)` from inside the workflow:
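A sketch of the workflow side, assuming Temporal’s Python SDK. The journal-producing activity, the check slug, the timeouts, and the argument order are illustrative, not the real `nav-monthly-journals` code, and the SDK may wrap `workflow.execute_activity` behind a direct `run_quality_check` helper:

```python
from datetime import timedelta

from temporalio import workflow

# Hypothetical module layout: the runbook's activities.py re-exports these.
from activities import propose_journal, run_quality_check


@workflow.defn
class NavMonthlyJournals:
    @workflow.run
    async def run(self, period: str) -> None:
        # Previous step: produce the thing we're about to check.
        journal = await workflow.execute_activity(
            propose_journal,
            period,
            start_to_close_timeout=timedelta(minutes=10),
        )
        # Run the quality check as an activity right after it.
        check = await workflow.execute_activity(
            run_quality_check,
            args=[journal, "journal-balance-v1"],
            start_to_close_timeout=timedelta(minutes=5),
        )
        # check.passed / check.severity / check.summary feed the review UI.
```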
The Tenant UI’s review screen then shows the `CheckResult` next to the underlying `JournalProposal` — the accountant sees both the proposed journal and the model’s “I checked the balance, debits = credits, looks balanced” alongside it.
## Designing a good check
Quality checks work best when they’re focused. A check that’s “review this entire NAV” is hard to interpret; a check that’s “verify the journal balances and that all GL codes exist in the COA” is actionable. Common shapes (a plain-Python sketch of the first follows the table):

| Check type | Example slug | What it does |
|---|---|---|
| Constraint check | `journal-balance-v1` | Asserts a structural property (debits = credits, all dates in period, etc.) |
| Cross-reference check | `gl-map-coverage-v1` | Confirms every account in the proposal exists in the COA |
| Sanity check | `nav-plausibility-v1` | Compares the result to historical norms, flags outliers |
| Tail check | `audit-anomaly-v1` | Looks for the unusual — duplicate references, suspicious round numbers, weekend-dated entries |
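For intuition, the structural property a constraint check like `journal-balance-v1` asserts is easy to state in plain Python. This is a hypothetical illustration of the property itself, not how the SDK implements the check; the line shape is assumed:

```python
from decimal import Decimal


def journal_balances(lines: list[dict]) -> bool:
    """The property journal-balance-v1 asserts: total debits = total credits."""
    total_debits = sum(Decimal(str(line.get("debit", 0))) for line in lines)
    total_credits = sum(Decimal(str(line.get("credit", 0))) for line in lines)
    return total_debits == total_credits
```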
Under the hood, the same plumbing that powers `ai.extract` backs `run_quality_check`.
## Severity drives routing
The `severity` field affects how the Tenant UI surfaces the result:

- `info` — green check, accountant sees “all clear” and approves quickly
- `warn` — yellow flag, accountant sees the finding inline with the value
- `error` — red block, the workflow can `wait_for_action` with `must_resolve=True` to force a correction before approval
Pair `severity == "error"` with `wait_for_action(must_resolve=True)` to guarantee the human can’t approve over the top of a critical issue.
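Continuing the workflow sketch above, a hedged illustration of that pairing; `wait_for_action`’s exact signature is not documented on this page, so the arguments here are assumptions:

```python
# Inside the workflow, after run_quality_check returns.
if check.severity == "error":
    # Red block: the run pauses until the accountant resolves the issue.
    await wait_for_action(
        summary=check.summary,  # argument name is an assumption
        must_resolve=True,      # approval cannot proceed past this
    )
```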
## Related
- **Workflow capability**: where you wire the check into the run via `@ui_step`.
- **Private AI**: the same provider plumbing backs both extraction and checks.