# LLM Connectors — overview
Claresia is platform-portable. The same Skill IR ships into all four enterprise LLM platforms via the Distribution Plane. You pick whichever your organization already standardized on; Claresia publishes the catalog and ingests the audit log.
## Supported platforms

| Platform | Roadmap | Distribution mechanism | Telemetry source |
|---|---|---|---|
| Anthropic Claude Enterprise | cc-063 | Anthropic Admin API → publishes as Claude Skills | Anthropic Audit API |
| Microsoft Copilot M365 | cc-065 | Power Platform Admin API → Copilot Studio agents + Power Platform actions | Microsoft Graph audit log |
| OpenAI ChatGPT Enterprise | cc-070 | OpenAI Compliance API → Custom GPTs + tools (or manual upload) | OpenAI Compliance API |
| Google Gemini for Workspace | cc-070+ | Google Workspace Admin API → Gemini Gems | Workspace Admin Audit API |
You can connect multiple platforms simultaneously — a common pattern is “Claude for engineering, Copilot for everyone else.”
## Common requirements (all platforms)

- An enterprise tier of the LLM (not the consumer / team tier — Claresia needs the admin, compliance, and audit surfaces)
- A service account or admin API key with scopes documented per platform
- Zero-retention enabled in the platform (Claresia only routes through zero-retention modes; we will refuse to publish to a tenant where retention is on)
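The zero-retention requirement is a hard precondition, not a recommendation. A minimal sketch of such a pre-publish guard — the `TenantConfig` shape and `RetentionPolicyError` are illustrative names, not Claresia's actual connector API:

```python
from dataclasses import dataclass

# Illustrative shapes only; the real connector configuration differs.
@dataclass
class TenantConfig:
    platform: str         # e.g. "anthropic-claude-enterprise"
    tier: str             # "enterprise", "team", or "consumer"
    zero_retention: bool  # platform-side data-retention setting

class RetentionPolicyError(Exception):
    """Raised when a tenant fails the common publish preconditions."""

def assert_publishable(cfg: TenantConfig) -> None:
    # Both common requirements must hold before any skill is published.
    if cfg.tier != "enterprise":
        raise RetentionPolicyError(
            f"{cfg.platform}: enterprise tier required, got {cfg.tier!r}")
    if not cfg.zero_retention:
        raise RetentionPolicyError(
            f"{cfg.platform}: zero-retention must be enabled before publish")

# Passes silently when both preconditions hold.
assert_publishable(TenantConfig("anthropic-claude-enterprise", "enterprise", True))
```

The check runs per tenant, per platform; a failure blocks the publish rather than degrading it.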
## What the Distribution Plane does

For each connected platform, Claresia:
- Reads your tenant’s Skill IR catalog (versioned, RBAC-filtered per archetype)
- Transpiles each skill into the platform’s native shape (Claude Skill JSON, Copilot Studio agent YAML, OpenAI Custom GPT bundle, Gemini Gem)
- Publishes via the platform’s admin API under the `claresia-{tenant_slug}-*` namespace
- Sets RBAC to match your group → archetype mapping (where the platform supports per-user / per-group skill scoping)
- Re-publishes on change — if you toggle a skill in the Onboarding Portal, the publish lag is <60s p99
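The steps above can be sketched as a publish loop. Everything here is hypothetical — the transpiler, the artifact shape, and the commented-out admin-API call stand in for the real per-platform connectors:

```python
# Hypothetical sketch of the Distribution Plane publish loop.

def skill_namespace(tenant_slug: str, skill_name: str) -> str:
    # All published artifacts live under claresia-{tenant_slug}-*.
    return f"claresia-{tenant_slug}-{skill_name}"

def transpile(skill_ir: dict, platform: str) -> dict:
    # Stand-in for the real per-platform transpilers (Claude Skill JSON,
    # Copilot Studio agent YAML, Custom GPT bundle, Gemini Gem).
    return {"platform": platform, "name": skill_ir["name"], "body": skill_ir}

def publish_catalog(catalog: list[dict], platform: str, tenant_slug: str) -> list[str]:
    """Transpile each skill and record the namespaced id it publishes under."""
    published = []
    for skill_ir in catalog:
        artifact = transpile(skill_ir, platform)
        artifact["id"] = skill_namespace(tenant_slug, skill_ir["name"])
        # A real connector would call the platform's admin API here and
        # apply group → archetype RBAC where the platform supports scoping.
        published.append(artifact["id"])
    return published
```

Re-publish on change is then just re-running this loop for the skills whose IR version moved.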
## What the Telemetry Pipeline does

For each connected platform:
- Pulls the audit log on a 60s polling cadence (or webhook where the platform supports it)
- Filters for events tagged `claresia-*`
- Normalizes into the `fn_telemetry_event` shape (skill_id, ts, success, latency_ms, tokens_in, tokens_out, cost_usd_estimate, …)
- Writes to the Hub `telemetry_event` table
- Surfaces in Command Center within 5 min p95 of the LLM invocation
## Per-platform deep dives

### When the platform doesn’t have an admin API

Some platforms (notably ChatGPT Enterprise on certain SKUs) do not yet expose a fully programmatic publish API. Claresia falls back to:
- Manual-upload UX in Command Center: you download a per-skill ZIP bundle and a CSV manifest, then upload them via the platform’s admin UI in one batch
- Per-skill instruction copy-paste for the Custom GPT builder
- CSV import for ChatGPT Enterprise’s bulk Custom GPT manager (if enabled)
This fallback is documented per-platform.
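Using only the Python standard library, the bundle-and-manifest half of that fallback might look like the sketch below. The file layout inside the ZIP and the manifest columns are assumptions, not Claresia's documented format:

```python
# Illustrative sketch of the manual-upload fallback: one ZIP per skill
# plus a CSV manifest for a single-batch upload in the platform admin UI.
import csv
import io
import json
import zipfile

def build_bundle(skill: dict) -> bytes:
    """Pack one skill into an in-memory ZIP (layout is an assumption)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("skill.json", json.dumps(skill))
        zf.writestr("instructions.txt", skill.get("instructions", ""))
    return buf.getvalue()

def build_manifest(skills: list[dict]) -> str:
    """Emit a CSV manifest listing every bundle in the batch."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["skill_id", "version", "bundle_file"])
    for s in skills:
        writer.writerow([s["id"], s["version"], f"{s['id']}.zip"])
    return out.getvalue()

skills = [{"id": "claresia-acme-triage", "version": "1.2.0",
           "instructions": "Triage inbound tickets."}]
manifest = build_manifest(skills)
bundle = build_bundle(skills[0])
```

The per-skill instruction copy-paste path simply surfaces `instructions.txt` in the UI instead of zipping it.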
## Skills lifecycle (across platforms)

```mermaid
graph LR
  A[Author Skill IR] --> B[Validate against schema]
  B --> C{Distribution Plane}
  C -->|cc-063| D[Claude Enterprise]
  C -->|cc-065| E[Microsoft Copilot]
  C -->|cc-070| F[ChatGPT Enterprise]
  C -->|cc-070+| G[Google Gemini]
  D --> H[End-user invokes @claresia.X]
  E --> H
  F --> H
  G --> H
  H --> I[Output + telemetry → Hub]
  I --> J[Command Center surfaces]
```

## What the end user sees

Regardless of which LLM your org uses, the user types `@claresia.<skill-name>` in
their LLM, gets a parameter prompt, gets the response wrapped in a Claresia
Adaptive Card with the Hub deep-link footer. The experience is intentionally
near-identical across all four platforms — that’s the point of the IR.
See End-user guides for archetype-specific walkthroughs.