
# What is Claresia?

Claresia is the Agent Operations Platform — the layer between your enterprise LLM (Claude Enterprise, Microsoft Copilot M365, ChatGPT Enterprise, Google Gemini for Workspace) and your organization’s employees, data, and processes.

It does three things no LLM platform does on its own:

  1. Distributes versioned, governed skills to every employee inside the LLM surface they already use — so a Firmware Engineer in Microsoft Copilot, an Account Executive in Claude, and an FP&A analyst in ChatGPT all see the same claresia.* skill catalog, scoped by RBAC.
  2. Captures every invocation as a canonical Hub record (output, decision, governance event, artifact, employee profile, telemetry event) with SHA-256 provenance and a 7-year audit chain.
  3. Tracks individual + organizational maturity along a 3-level ladder (Baseline → AI-Adopted → Agent Operator) so the CIO can prove ROI in dollars and hours, not in vibes.
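The audit chain in point 2 can be pictured as a simple SHA-256 hash chain, where each canonical Hub record's digest folds in the digest of the record before it. This is a minimal sketch, not the actual Hub schema — the field names and the genesis-digest convention below are assumptions for illustration only.

```python
import hashlib
import json

def record_digest(record: dict, prev_digest: str) -> str:
    """Hash a canonical Hub record together with the previous record's
    digest, making the chain tamper-evident: altering any earlier record
    changes every digest after it."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_digest + payload).encode("utf-8")).hexdigest()

# Hypothetical records; the real Hub record types carry many more fields.
chain = "0" * 64  # assumed genesis digest
for record in [
    {"kind": "output", "skill": "claresia.forge.review", "actor": "emp-042"},
    {"kind": "governance_event", "action": "publish", "actor": "admin-007"},
]:
    chain = record_digest(record, chain)

print(len(chain))  # 64 hex characters
```

Because each digest depends on the full history before it, a 7-year retention window only needs the records plus their digests to be independently re-verifiable.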
| Role | What you get |
| --- | --- |
| CIO / IT buyer | A self-serve, audit-ready way to deploy AI to 200–5000 seats in 5 days, across the LLM platform you already chose, with SOC 2 evidence and a Trust Center. |
| CISO / Security | Per-tenant isolation by default, customer-managed keys (Mode B/C), a governance_event chain for every privileged action, sub-processor list, DPA template, and a pen test summary on request. |
| Head of Digital Transformation | A tracked maturity ladder per employee and per archetype, a quarterly readout to the board, and a Synthetic Twin model that shows the L0→L2 trajectory before you sign. |
| VP Engineering / VP Sales / VP Marketing / VP Finance | A Cowork pack per archetype: ~5 skills your team uses daily, in their LLM, with branded responses and a Hub deep-link. |
| End user | Type @claresia in your LLM. Pick a skill. Get a branded answer. The output is logged to your org's Hub with provenance. Done. |
  • Not another LLM. Claresia runs on top of the LLM you already chose.
  • Not another chat UI. End users live inside Copilot / Claude / ChatGPT — the Adaptive Card + Hub deep-link footer is the only Claresia surface they touch.
  • Not another vector RAG product. Claresia’s Hub is a canonical record store for outputs and decisions, not a search index over documents.
  • Not a black box. Every skill is shipped as Skill IR — a versioned JSON contract that can be audited, pinned, and re-published independently of any one LLM platform.
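A "versioned JSON contract that can be audited and pinned" can be sketched as follows. The field names here are illustrative assumptions, not the published Skill IR specification; the point is that a canonical serialization gives every skill version a stable digest to pin against.

```python
import hashlib
import json

# Illustrative only: these fields are assumptions, not the real Skill IR spec.
skill_ir = {
    "id": "claresia.ledger.variance_brief",
    "version": "1.4.0",
    "inputs": {"period": "string", "cost_center": "string"},
    "output": {"format": "adaptive_card", "hub_record": "output"},
    "rbac": ["fpa_analyst", "finance_manager"],
}

def pin_digest(ir: dict) -> str:
    """Canonical-JSON SHA-256 digest: two byte-identical contracts always
    pin to the same digest, so a pinned version is platform-independent."""
    canon = json.dumps(ir, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

print(pin_digest(skill_ir)[:12])
```

Pinning by digest rather than by version string is what lets the same contract be audited and re-published independently of any one LLM platform's skill format.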
| Line | Purpose | Roadmap home |
| --- | --- | --- |
| PowerLens | Transform people: adoption, certification, AI-fluency progression, manager dashboards | cc-019 |
| DeepLens | Transform systems: SaaS rationalization, function consolidation, cost-out via the Synthetic Twin | cc-022 |

Together with the Command Center (IT's single pane of glass), these are the three investor-facing tools.

  • 56 skills across 9 functions (Sailford, Forge, Boss, Ledger, Gatespic, Takecare, Steve, Clawshield, Zottos)
  • 83 coworks — pre-bundled skill packs per archetype
  • 14 archetype runtime specs (cc-051) — what each role does week by week
  • Synthetic Twin simulator (cc-052) + Maturity Engine (cc-053) — used to forecast the L0→L2 trajectory for any prospect
  • Distribution Plane — per-LLM publishers (cc-063 Anthropic, cc-065 Microsoft, cc-070 OpenAI, cc-071 Slack)
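One way to picture the Distribution Plane's per-LLM publishers — a hypothetical sketch, not the actual cc-063/065/070/071 implementations — is a registry of renderers that fan a single Skill IR out into each platform's native manifest format:

```python
# Hypothetical publishers and manifest shapes, for illustration only.
def to_anthropic(ir: dict) -> dict:
    return {"name": ir["id"], "description": ir["purpose"]}

def to_microsoft(ir: dict) -> dict:
    return {"id": ir["id"], "displayName": ir["purpose"]}

PUBLISHERS = {"anthropic": to_anthropic, "microsoft": to_microsoft}

def publish(ir: dict) -> dict:
    """Render one Skill IR through every registered platform publisher,
    so a new target platform is just one more entry in the registry."""
    return {platform: render(ir) for platform, render in PUBLISHERS.items()}

manifests = publish({
    "id": "claresia.sailford.call_prep",
    "purpose": "Prep a sales call brief",
})
print(sorted(manifests))  # ['anthropic', 'microsoft']
```

This is the structural reason the platform-portability claim below holds: the Skill IR is the single source of truth, and each publisher is a thin, replaceable adapter.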
  • vs Glean / Sana / Writer: Claresia does not replace your LLM. It rides inside it. Procurement is faster, surface area is smaller, RBAC is delegated to your IdP, and your data plane can stay in customer cloud (Mode C).
  • vs Microsoft Copilot Studio / OpenAI Custom GPTs / Anthropic Claude Skills: Claresia is platform-portable. The same Skill IR ships to all four LLM platforms via the Distribution Plane. You are not locked into one vendor’s skill format.
  • vs building it yourself: Claresia ships ~$15M of foundational engineering (Skill IR, Hub schema, Distribution Plane, Maturity Engine, Synthetic Twin) for a fraction of the in-house cost. The 12-month build the customer would otherwise own becomes a 24-hour Mode A onboarding.