Private AI

It's called "Private AI" because the LLM call originates from inside the Ntropii Tenant, under the customer's own provider credentials, against the customer's own data, never as a hosted service we run for you. The extract() surface is the same whether the underlying provider is Anthropic, Azure OpenAI, a self-hosted model, or anything else routed through your tenant's provider configuration.

Install

pip install 'ntro[workflow]'
The AI capability is bundled with the workflow extra — every runbook has it.

The API

from ntro.capabilities import ai

result = await ai.extract(
    content="<plain text from a document>",
    schema_slug="invoice-v1",                    # Routes to a prompt template
    structured_context={"cell_grid": cells},     # Optional layout context
)
Returns an ExtractionResult with:
  • result.fields (dict[str, Any]): the extracted typed values, keyed by the schema's field names
  • result.confidence_scores (dict[str, float]): 0.0–1.0 per field, used to drive HITL routing
  • result.line_items (list[dict]): repeating rows (invoice line items, journal lines) when the schema expects them
  • result.summary (str): a human-readable summary the UI surfaces alongside the structured fields
schema_slug is the contract between the runbook and the provider — it routes the call to the right prompt template and output schema. New extraction schemas live in your provider configuration, not in the SDK.
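
For instance, a quick way to inspect what came back (the field names themselves are defined by your schema, so this loop is illustrative):
for name, value in result.fields.items():
    score = result.confidence_scores.get(name, 0.0)
    print(f"{name}: {value!r} (confidence {score:.0%})")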

Canonical example

Lifted from the document-ingest runbook:
from temporalio import activity

from ntro.capabilities import ai


@activity.defn(name="document_ingest.extract_fields")
async def extract_fields(input: ExtractInput) -> ExtractedPayload:
    """Run AI extraction against the configured schema.

    The schema slug routes to the right prompt template inside the
    configured provider (ntro-provider-anthropic, etc.). Same call
    regardless of which schema or provider — the runbook stays clean.
    """
    result = await ai.extract(
        content=input.raw.plain_text,
        schema_slug=input.schema_slug,
        # Cell-grid context helps for tabular sources like rent rolls,
        # where row/column structure matters.
        structured_context={"cell_grid": input.raw.cell_grid},
    )
    return ExtractedPayload(
        schema_slug=input.schema_slug,
        document_ref=input.raw.document_ref,
        fields=result.fields,
        confidence_scores=result.confidence_scores,
        line_items=result.line_items,
        summary=result.summary,
    )
The pattern: feed extract() the plain text from files.parse(), optionally pass the cell grid as structured_context, and stash the typed result in your runbook’s domain model.
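
The same chain outside an activity, as a minimal sketch. The files import path and the parse() return shape (.plain_text, .cell_grid) are assumptions inferred from input.raw in the example above; verify against your SDK version:
from ntro.capabilities import ai, files  # `files` import path is an assumption

async def parse_and_extract(document_ref: str):
    # Assumed: parse() yields an object exposing .plain_text and .cell_grid,
    # mirroring the canonical example's input.raw.
    raw = await files.parse(document_ref)
    return await ai.extract(
        content=raw.plain_text,
        schema_slug="invoice-v1",
        structured_context={"cell_grid": raw.cell_grid},
    )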

Why structured_context matters

The extractor’s prompt template can choose to use or ignore structured_context, but for tabular sources it’s the difference between getting GL line allocations right and getting them wrong. A trial balance flattened to plain text loses column boundaries. The cell grid preserves them, so the model can disambiguate “Account: 200 | Debit: 1,500.00 | Credit: 0.00” from a flattened “200 1,500.00 0.00”. The convention is: always pass cell_grid if you have one. The provider decides whether to use it.
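
This page doesn't pin down the grid format; purely as an illustration, a row-major grid that keeps the trial-balance columns intact might look like:
# Hypothetical shape, for intuition only: one list per row, columns preserved.
cell_grid = [
    ["Account", "Debit", "Credit"],
    ["200", "1,500.00", "0.00"],
]

result = await ai.extract(
    content="Account 200 1,500.00 0.00",          # the flattened text alone is ambiguous
    schema_slug="trial-balance-v1",               # hypothetical slug
    structured_context={"cell_grid": cell_grid},  # columns restored for the model
)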

Confidence scores feed HITL routing

Every field comes back with a confidence score. The downstream HITL step typically routes the document by aggregate confidence:
avg = sum(result.confidence_scores.values()) / len(result.confidence_scores)
if avg < ctx.hitl_threshold:
    # Bounces to human review. `ctx`, `extracted`, and `self` are in scope
    # inside the runbook workflow; hitl_threshold comes from tenant config.
    response = await self.wait_for_action(
        payload=extracted,
        display_hint={"type": "review_extraction", "schema_slug": input.schema_slug},
        reason=f"Confidence {avg:.0%} below threshold {ctx.hitl_threshold:.0%}",
    )
The threshold is tenant config, not SDK config — different funds tolerate different levels of automation.
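
A variant worth noting (a pattern choice, not something the SDK mandates): gate on the weakest field instead of the average, so one bad field forces review even when the rest score well:
weakest = min(result.confidence_scores.values(), default=0.0)
if weakest < ctx.hitl_threshold:
    # Same wait_for_action call as above, different routing criterion.
    response = await self.wait_for_action(
        payload=extracted,
        display_hint={"type": "review_extraction", "schema_slug": input.schema_slug},
        reason=f"Weakest field {weakest:.0%} below threshold {ctx.hitl_threshold:.0%}",
    )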

What the schema_slug routes to

ai.extract() doesn’t know what an invoice is. It knows that schema_slug="invoice-v1" means “use the prompt template registered under that slug in the configured provider”. The provider holds:
  • The prompt template (with your tenant’s specific tone, jurisdiction, accounting framework)
  • The output JSON schema the LLM is constrained to
  • The model selection (Claude Opus for hard cases, Haiku for cheap ones)
  • Any fine-tuning, retrieval, or routing logic
This is what makes the AI “private” — the prompts, the schemas, and the credentials all live inside your tenant. Swapping providers (Anthropic → Azure OpenAI → self-hosted) doesn’t change the runbook code; it changes the provider config.
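
For intuition only (the real registration lives in the provider's own config, whose format this page doesn't specify), the bundle behind a slug amounts to something like:
# Hypothetical sketch of what a provider registers under a schema slug.
# Every key and value here is illustrative, not an actual config format.
SCHEMAS = {
    "invoice-v1": {
        "prompt_template": "prompts/invoice-v1.txt",  # tenant tone, jurisdiction, framework
        "output_schema": "schemas/invoice-v1.json",   # JSON schema the LLM is constrained to
        "model": "large-model-for-hard-cases",        # model selection lives here too
        "retrieval": None,                            # optional fine-tuning / retrieval / routing
    },
}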

Collect files

Produces the input to ai.extract().

Quality checks

The natural follow-up: verify what the AI extracted.