LLM Connectors — overview

Claresia is platform-portable. The same Skill IR ships into all four enterprise LLM platforms via the Distribution Plane. You pick whichever your organization already standardized on; Claresia publishes the catalog and ingests the audit log.

| Platform | Roadmap | Distribution mechanism | Telemetry source |
| --- | --- | --- | --- |
| Anthropic Claude Enterprise | cc-063 | Anthropic Admin API → publishes as Claude Skills | Anthropic Audit API |
| Microsoft Copilot M365 | cc-065 | Power Platform Admin API → Copilot Studio agents + Power Platform actions | Microsoft Graph audit log |
| OpenAI ChatGPT Enterprise | cc-070 | OpenAI Compliance API → Custom GPTs + tools (or manual upload) | OpenAI Compliance API |
| Google Gemini for Workspace | cc-070+ | Google Workspace Admin API → Gemini Gems | Workspace Admin Audit API |

You can connect multiple platforms simultaneously; a common pattern is “Claude for engineering, Copilot for everyone else.”

Every connector has the same prerequisites:

  • An enterprise tier of the LLM platform (not the consumer or team tier — Claresia needs the admin, compliance, and audit surfaces)
  • A service account or admin API key with scopes documented per platform
  • Zero-retention enabled in the platform (Claresia only routes through zero-retention modes; we will refuse to publish to a tenant where retention is on)
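The zero-retention and tier requirements above are enforced as a hard gate before anything is published. A minimal sketch of such a gate, with all names hypothetical (this is not Claresia's actual API):

```python
from dataclasses import dataclass


@dataclass
class TenantConfig:
    """Hypothetical per-tenant connector settings."""
    platform: str          # e.g. "claude-enterprise"
    tier: str              # must be an enterprise tier
    zero_retention: bool   # platform-side retention toggle


def assert_publishable(cfg: TenantConfig) -> None:
    """Refuse to publish unless the tenant meets the connector prerequisites."""
    if cfg.tier != "enterprise":
        raise ValueError(f"{cfg.platform}: enterprise tier required, got {cfg.tier!r}")
    if not cfg.zero_retention:
        raise ValueError(f"{cfg.platform}: zero-retention must be enabled before publishing")
```

The important property is that the check fails closed: a tenant with retention enabled never reaches the publish step.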

For each connected platform, Claresia:

  1. Reads your tenant’s Skill IR catalog (versioned, RBAC-filtered per archetype)
  2. Transpiles each skill into the platform’s native shape (Claude Skill JSON, Copilot Studio agent YAML, OpenAI Custom GPT bundle, Gemini Gem)
  3. Publishes via the platform’s admin API under the claresia-{tenant_slug}-* namespace
  4. Sets RBAC to match your group → archetype mapping (where the platform supports per-user / per-group skill scoping)
  5. Re-publishes on change — if you toggle a skill in the Onboarding Portal, the publish lag is <60s p99
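The publish loop above (read catalog → transpile → publish under the tenant namespace) can be sketched as follows. The transpiler table and output shapes are illustrative stand-ins, not Claresia's real transpilers:

```python
# Hypothetical per-platform transpilers: each maps a Skill IR dict to a
# platform-native shape (the real ones emit Claude Skill JSON, Copilot
# Studio agent YAML, etc.).
TRANSPILERS = {
    "claude-enterprise": lambda skill: {"type": "claude_skill", **skill},
    "copilot-m365": lambda skill: {"type": "copilot_agent", **skill},
}


def publish_catalog(platform: str, tenant_slug: str, catalog: list[dict]) -> list[str]:
    """Transpile each Skill IR entry and return the namespaced IDs to publish."""
    transpile = TRANSPILERS[platform]
    published = []
    for skill in catalog:
        native = transpile(skill)
        # Every published artifact lives under the claresia-{tenant_slug}-* namespace.
        native["id"] = f"claresia-{tenant_slug}-{skill['name']}"
        published.append(native["id"])
    return published
```

Namespacing at publish time is what makes the telemetry side cheap: the audit-log filter only has to match the `claresia-*` prefix.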

For telemetry, on each connected platform, Claresia:

  1. Pulls the audit log on a 60s polling cadence (or webhook where the platform supports it)
  2. Filters for events tagged claresia-*
  3. Normalizes into fn_telemetry_event shape (skill_id, ts, success, latency_ms, tokens_in, tokens_out, cost_usd_estimate, …)
  4. Writes to the Hub telemetry_event table
  5. Surfaces in Command Center within 5 min p95 of the LLM invocation

When the platform doesn’t have an admin API

Some platforms (notably ChatGPT Enterprise on certain SKUs) do not yet expose a fully programmatic publish API. Claresia falls back to:

  • Manual-upload UX in Command Center: you download a per-skill ZIP bundle and a CSV manifest, then upload them via the platform’s admin UI in one batch
  • Per-skill instruction copy-paste for the Custom GPT builder
  • CSV import for ChatGPT Enterprise’s bulk Custom GPT manager (if enabled)

This fallback is documented per-platform.
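The ZIP-plus-manifest fallback can be sketched with the standard library. The bundle layout (one `instructions.txt` per skill) and manifest columns are assumptions for illustration:

```python
import csv
import io
import zipfile


def build_fallback_bundle(skills: list[dict]) -> tuple[bytes, str]:
    """Return (zip_bytes, csv_manifest) for a batch of skills to upload manually."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for skill in skills:
            # One folder per skill inside the ZIP (assumed layout).
            zf.writestr(f"{skill['id']}/instructions.txt", skill["instructions"])
    manifest = io.StringIO()
    writer = csv.writer(manifest)
    writer.writerow(["skill_id", "path"])
    for skill in skills:
        writer.writerow([skill["id"], f"{skill['id']}/instructions.txt"])
    return buf.getvalue(), manifest.getvalue()
```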

```mermaid
graph LR
A[Author Skill IR] --> B[Validate against schema]
B --> C{Distribution Plane}
C -->|cc-063| D[Claude Enterprise]
C -->|cc-065| E[Microsoft Copilot]
C -->|cc-070| F[ChatGPT Enterprise]
C -->|cc-070+| G[Google Gemini]
D --> H[End-user invokes @claresia.X]
E --> H
F --> H
G --> H
H --> I[Output + telemetry → Hub]
I --> J[Command Center surfaces]
```

Regardless of which LLM platform your org uses, the user types @claresia.<skill-name>, is prompted for parameters, and receives the response wrapped in a Claresia Adaptive Card with the Hub deep-link footer. The experience is intentionally near-identical across all four platforms; that is the point of the IR.

See End-user guides for archetype-specific walkthroughs.