Pain-point demos

The runnable demo suite lives in examples/pain-point-demos. Each numbered folder owns its own Node.js demo project under workspace/, so the files being changed are visible inside the same pain-point folder.

The shell scripts are scenario launchers only; the evidence should come from AIT CLI output: ait query, ait attempt show, ait memory list, ait review status, ait review report, and ait apply.

Prerequisites

  • ait on PATH
  • git
  • Node.js and npm
  • python3
  • Claude Code CLI installed and logged in
  • Codex CLI installed and logged in
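The prerequisite check can be scripted before running setup.sh. A minimal sketch; the binary names for the two agent CLIs (claude, codex) are assumptions, so adjust them for your installation:

```shell
# Sketch: verify the prerequisites above are on PATH.
# claude and codex are assumed binary names for the agent CLIs.
check_tools() {
  local missing=0
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; missing=1; }
  done
  return "$missing"
}

check_tools ait git node npm python3 claude codex || echo "install the missing tools first"
```

The function prints one line per missing tool and returns nonzero if anything is absent.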

Prepare all workspaces

cd examples/pain-point-demos
./setup.sh

This creates or resets each folder's local project:

examples/pain-point-demos/01-blast-radius/workspace/
examples/pain-point-demos/02-provenance/workspace/
...
examples/pain-point-demos/10-prompt-search/workspace/
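The layout above can be sanity-checked after setup.sh. A minimal sketch, assuming the folder layout described in this document; it prints nothing when every numbered folder has its workspace:

```shell
# Sketch: confirm setup.sh produced a workspace/ in every numbered folder.
for dir in examples/pain-point-demos/[0-9]*/; do
  [ -d "$dir" ] || continue                 # glob matched nothing
  [ -d "${dir}workspace" ] || echo "missing workspace in $dir"
done
```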

Run one demo

cd examples/pain-point-demos/01-blast-radius
./run.sh
cd workspace

Then use that folder's AIT verification flow. Do not explain the result from private script state or ad-hoc filesystem checks; explain it from AIT metadata and AIT CLI output.

Run the full suite

cd examples/pain-point-demos
./run-all.sh

run-all.sh resets every workspace and runs every scenario.
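In rough terms, run-all.sh behaves like the sketch below: reset once, then run each numbered scenario in order. This is an illustrative approximation of the script's behavior, not its actual contents; it stops at the first failing scenario:

```shell
# Sketch of run-all.sh: reset workspaces, then launch each scenario in order.
if [ -x ./setup.sh ]; then ./setup.sh; fi
for dir in [0-9]*/; do
  [ -x "${dir}run.sh" ] || continue         # skip folders without a launcher
  echo "== $dir"
  ( cd "$dir" && ./run.sh ) || { echo "FAILED: $dir" >&2; exit 1; }
done
```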

Folder map

Folder | Pain point | What it demonstrates
01-blast-radius | Blast radius | Claude Code makes a broad risky edit, but AIT keeps it in an isolated attempt worktree.
02-provenance | Provenance | AIT records the intent, agent, changed files, prompt/trace references, and attempt metadata.
03-failed-run-isolation | Failed-run isolation | Codex breaks a test; the failure is inspectable without polluting the main workspace.
04-memory-reuse | Memory reuse | Claude records an investigation; Codex later receives it through AIT context/memory.
05-parallel-agents | Parallel agents | Claude Code and Codex both edit approach.txt in separate attempt worktrees.
06-explicit-promotion | Explicit apply | Multiple candidate attempts exist; only the selected result is accepted into the current branch.
07-cross-agent-handoff | Agent-to-agent communication | An accepted Claude decision becomes repo memory that Codex can consume later.
08-local-only-provenance | Local-only provenance | AIT metadata is inspectable locally through AIT commands, without a hosted dashboard.
09-verification-evidence | Adversarial review | A risky Claude result is challenged by an AIT adversarial review and recorded as blocked.
09-1-codex-reviewer | Claude implementation, Codex review | Claude Code implements unsafe divide; Codex reviews it; the review gate holds ait apply.

AIT verification flows

Each case README contains the exact commands. The common pattern is:

ait query --on attempt '<selector>' --format table
ait attempt show <attempt-id>

For memory cases:

ait memory list --format table
ait attempt show <claude-attempt-id>
ait attempt show <codex-attempt-id>

For adversarial review cases:

ait query --on attempt 'review.mode="adversarial"' --format table
ait query --on attempt 'review.status="blocked"' --format table
ait review finding list --severity high --format text
ait review report --attempt <attempt-id> --format json
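The JSON review report can be post-processed with standard tools. A minimal sketch below filters a hypothetical report for high-severity findings; the report shape and field names (findings, severity, title) are assumptions, so check the actual output of ait review report for your AIT version before relying on them:

```shell
# Hypothetical report shape; real field names depend on your ait version.
# In practice you would pipe: ait review report --attempt <id> --format json
report='{"attempt":"a1","findings":[{"severity":"high","title":"unsafe divide"},{"severity":"low","title":"style nit"}]}'

# Print only the high-severity finding titles.
echo "$report" | python3 -c '
import json, sys
report = json.load(sys.stdin)
for finding in report["findings"]:
    if finding["severity"] == "high":
        print(finding["title"])
'
```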

For the 09-1-codex-reviewer apply-gate evidence:

ait config show --format json
ait apply <attempt-id> --mode current

Expected result:

AIT held the result because this repo requires review before apply.
Status: held
Reason: review gate: required review is blocked

Talk track

Use the scripts to create the scenario, then switch to AIT commands for the explanation. The audience should leave with one idea: AIT turns agent work into isolated, queryable, reviewable Git attempts instead of asking people to trust terminal scrollback.