Source: docs/api/playbooks/ai-channel-playbooks-v1.md

# AI Channel Playbooks (v1)

Status: Active
Version: `v1.0.0`
Last updated: 2026-02-24

This document defines execution playbooks for high-reach AI assistants. Each playbook includes:

- prompt template
- expected API call sequence
- expected terminal response shape
- failure troubleshooting path

Shared prerequisites:

- integration key (`sk_int_...`)
- base URL
- API contract: `docs/api/public-api-contract-v1.md`
- quickstart kit: `docs/api/agent-integration-kit-v1.md`

## Common Terminal Shape

All channels should terminate with a compact operator summary:

```json
{
  "token": "issued",
  "jobId": "<uuid>",
  "finalStatus": "COMPLETED",
  "resultsCount": 20,
  "nextAction": "review /api/sima/results"
}
```

---

## Claude Playbook

### Prompt template

```text
You are integrating with the RGL8R API. Execute this flow exactly:
1) Exchange x-api-key for bearer via POST /api/auth/token/integration.
2) Enqueue SIMA batch via POST /api/sima/batch with screeningAuthority=US.
3) Poll GET /api/jobs/:id until COMPLETED or FAILED.
4) Fetch GET /api/sima/results?limit=20.
Return only JSON with token status, jobId, finalStatus, resultsCount,
and any error envelope.
```

### Expected API calls

1. `POST /api/auth/token/integration`
2. `POST /api/sima/batch`
3. `GET /api/jobs/:id` (loop)
4. `GET /api/sima/results?limit=20`

### Troubleshooting path

- `401 INVALID_API_KEY`: rotate/reissue integration key
- `401 INVALID_TOKEN`: rerun token exchange
- `400 INVALID_REQUEST`: validate payload against OpenAPI
- `500 INTERNAL_ERROR`: stop retries, capture response body, escalate

---

## ChatGPT Playbook

### Prompt template

```text
Use the RGL8R public API contract v1.
Run token -> enqueue -> poll -> fetch with strict error-envelope handling.
If any call fails, return {code,message,details} and stop.
If successful, return {token,jobId,finalStatus,resultsCount}.
```

### Expected API calls

Same 4-call sequence as common flow.
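The common token -> enqueue -> poll -> fetch sequence can be sketched as a small driver. This is a minimal sketch, not the official client: the endpoint paths and `screeningAuthority` field come from the playbooks above, but the injected `call` transport and the response field names (`accessToken`, `jobId`, `status`, `items`) are assumptions, not documented contract fields.

```python
# Sketch of the common RGL8R flow: token -> enqueue -> poll -> fetch.
# The call(method, path, headers, body) transport is injected so the flow
# can be exercised without a live API. Response field names ("accessToken",
# "status", "items") are assumptions, not part of the documented contract.
import time
from typing import Callable, Optional

Transport = Callable[[str, str, dict, Optional[dict]], dict]

def run_flow(call: Transport, api_key: str, poll_interval: float = 0.0) -> dict:
    # 1) Exchange the integration key for a bearer token.
    token = call("POST", "/api/auth/token/integration",
                 {"x-api-key": api_key}, None)["accessToken"]
    auth = {"Authorization": f"Bearer {token}"}

    # 2) Enqueue the SIMA batch.
    job_id = call("POST", "/api/sima/batch", auth,
                  {"screeningAuthority": "US"})["jobId"]

    # 3) Poll the job until it reaches a terminal state.
    while True:
        status = call("GET", f"/api/jobs/{job_id}", auth, None)["status"]
        if status in ("COMPLETED", "FAILED"):
            break
        time.sleep(poll_interval)

    # 4) Fetch results only on success; emit the compact operator summary.
    results_count = 0
    if status == "COMPLETED":
        results_count = len(call("GET", "/api/sima/results?limit=20",
                                 auth, None)["items"])
    return {"token": "issued", "jobId": job_id,
            "finalStatus": status, "resultsCount": results_count}
```

Against a real deployment, `call` would wrap an HTTP client request to the base URL; keeping it injected also makes the flow testable with a fake transport.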
### Expected terminal response shape

```json
{
  "token": "issued",
  "jobId": "<uuid>",
  "finalStatus": "COMPLETED",
  "resultsCount": 20
}
```

### Troubleshooting path

- `429 RATE_LIMITED`: exponential backoff with jitter, max 3 attempts
- non-terminal poll timeout: return a timeout error with the last known status

---

## GitHub Copilot Chat Playbook

### Prompt template

```text
Generate and run a minimal script that:
- exchanges integration key for JWT,
- enqueues SIMA batch,
- polls job to terminal,
- fetches SIMA results.
Use only documented endpoints and canonical error envelope handling.
```

### Expected API calls

Same 4-call sequence as common flow.

### Expected terminal response shape

```json
{
  "token": "issued",
  "jobId": "<uuid>",
  "finalStatus": "COMPLETED|FAILED",
  "error": null
}
```

### Troubleshooting path

- if the script emits non-envelope auth errors, align to `docs/api/public-api-contract-v1.md`
- if an upload-based flow is attempted, switch to `/api/sima/batch` for the first-value run

---

## Gemini Playbook

### Prompt template

```text
Act as an API integration operator for RGL8R.
Execute token -> enqueue -> poll -> fetch using the v1 OpenAPI contract.
Do not invent endpoints.
Surface canonical error envelope fields when failures occur.
```

### Expected API calls

Same 4-call sequence as common flow.

### Expected terminal response shape

```json
{
  "token": "issued",
  "jobId": "<uuid>",
  "finalStatus": "COMPLETED",
  "resultsCount": 20,
  "error": null
}
```

### Troubleshooting path

- ensure the `x-api-key` header is set on the token exchange
- ensure the bearer token is passed on tenant routes
- if `SCOPE_DENIED` appears, confirm the endpoint requires a scope and the key's scopes include it

---

## Evidence Linking

Channel dry-run evidence must be recorded in:

- `docs/api/playbooks/p11-c-channel-dry-run-evidence.md`

Required channels:

- Claude
- ChatGPT
- GitHub Copilot Chat
- Gemini
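The `429 RATE_LIMITED` guidance in the ChatGPT troubleshooting path above (exponential backoff with jitter, max 3 attempts) can be sketched as a small retry helper. This is a sketch under assumptions: the base delay and the full-jitter strategy are illustrative choices; the playbook only specifies jittered backoff with at most 3 attempts.

```python
# Sketch of the retry policy from the ChatGPT troubleshooting path:
# exponential backoff with jitter, capped at 3 attempts. The 1s base delay
# and full-jitter strategy are assumptions; the playbook only specifies
# "exponential backoff with jitter, max 3 attempts".
import random
import time

def with_backoff(call, max_attempts: int = 3, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Invoke call(); on a RATE_LIMITED error envelope, back off and retry."""
    for attempt in range(max_attempts):
        result = call()
        if result.get("code") != "RATE_LIMITED":
            return result
        if attempt < max_attempts - 1:
            # Full jitter: sleep a random amount up to base * 2^attempt.
            sleep(random.uniform(0, base_delay * (2 ** attempt)))
    # Out of attempts: surface the canonical error envelope and stop.
    return result
```

The `sleep` parameter is injected so tests can observe delays without waiting; production callers can leave it at the default `time.sleep`.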