Make shadow AI auditable.
Your engineers are using Claude Code, Codex, Copilot, and Cursor every week. You did not approve a stack; they approved themselves. The risk is not that they're using AI. The risk is that you cannot answer simple questions about what the AI did.
- Different tools, different teams, no shared record of what ran.
- Different audit trails, all of them chat transcripts.
- No way to answer "did the AI change this file" with a structured record.
- No way to prove to security or compliance that AI-driven changes follow your policy.
- No way to point an external audit at a single source of truth.
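To make the gap concrete, here is a minimal sketch of what "did the AI change this file" looks like as a query over structured records instead of a transcript search. The field names and `ChangeRecord` type are illustrative assumptions, not Codencer's actual schema:

```python
from dataclasses import dataclass

# Hypothetical structured record of one AI-driven change.
# Field names are illustrative, not Codencer's actual schema.
@dataclass
class ChangeRecord:
    run_id: str       # which agent run produced the change
    tool: str         # e.g. "claude-code", "cursor"
    file_path: str    # file the agent touched
    action: str       # "create" | "modify" | "delete"
    commit_sha: str   # where the change landed

def who_changed(records: list[ChangeRecord], path: str) -> list[ChangeRecord]:
    """Answer 'did the AI change this file' with data, not a chat log."""
    return [r for r in records if r.file_path == path]

log = [
    ChangeRecord("run-17", "claude-code", "src/billing.py", "modify", "a1b2c3d"),
    ChangeRecord("run-18", "cursor", "README.md", "modify", "e4f5a6b"),
]
hits = who_changed(log, "src/billing.py")
print([(r.run_id, r.tool, r.commit_sha) for r in hits])
# → [('run-17', 'claude-code', 'a1b2c3d')]
```

With chat transcripts, answering the same question means grepping prose; with a record like this, it is a filter you can hand to an auditor.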
- 01: Durable execution record across the planner-executor-validator loop. Today: runs, steps, attempts, artifacts, validations, gates, across five executor adapters with conformance tests. Tomorrow: validator-as-actor loops and cross-session planner memory on the same substrate.
- 02: Planner-neutral. When OpenAI ships hosted Codex agents and Anthropic ships Claude Code Teams, you still want one neutral plane across them. Codencer gets more valuable as vendors collapse, not less.
- 03: Local-first execution. The trust model that makes remote planning safe without exposing a raw remote shell. Code never leaves your network.
- 04: Open-source-first. Apache 2.0. Your legal team can read every line. A moat against lock-in that managed-SaaS competitors structurally cannot match.
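The run/step/attempt hierarchy behind the durable execution record can be sketched as plain data types. This is an assumption-laden illustration of the shape described above (the names `Attempt`, `Step`, `Run` and their fields are hypothetical, not Codencer's actual data model):

```python
from dataclasses import dataclass, field

# Illustrative sketch of the run/step/attempt hierarchy.
# Names and shapes are assumptions, not Codencer's actual data model.
@dataclass
class Attempt:
    executor: str                                  # which executor adapter ran it
    exit_code: int
    artifacts: list[str] = field(default_factory=list)

@dataclass
class Step:
    name: str
    attempts: list[Attempt] = field(default_factory=list)
    validations: list[str] = field(default_factory=list)  # validator verdicts
    gate_passed: bool = False                             # human/policy gate

@dataclass
class Run:
    run_id: str
    planner: str                                   # planner stays swappable
    steps: list[Step] = field(default_factory=list)

    def passed_all_gates(self) -> bool:
        """One structured answer for 'did every step clear its gate?'"""
        return all(s.gate_passed for s in self.steps)

run = Run("run-01", "codex", steps=[
    Step("write-tests", gate_passed=True),
    Step("apply-patch", gate_passed=True),
])
print(run.passed_all_gates())
# → True
```

The point of the sketch: because the record is the substrate, swapping the planner or executor changes a field value, not the audit story.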
- Apache 2.0 license — your legal team can audit every line
- Self-host: relay, runtime, cloud control plane all available today
- Local-first execution: code stays on your engineers' machines
- No inference inside Codencer — we record, surface, structure; we do not train, do not infer
- Audit query API on the v0.6 roadmap (Aug 2026)
- SSO, SCIM, RBAC on the v1.0 roadmap (Jan 2027)
- SLA tier and private deploy at v1.0
We say no to the things we don't ship yet, on purpose. The roadmap is concrete.
- v0.2.0-beta does not yet ship the audit query API. v0.6 does.
- v0.2.0-beta does not yet ship SSO, SCIM, RBAC. v1.0 does.
- v0.2.0-beta does not run inference. It will not run inference, ever.
- Codencer does not host the LLM. It will not host the LLM.
If your pilot needs one of the deferred features today, we say so. We don't pretend the roadmap is shipped.
- 01: Read the security & audit chapter. Most procurement reviews start there.
- 02: Run the self-host smoke test in a sandbox: `PLANNER_TOKEN=<planner-token> make self-host-smoke-mcp`
- 03: Email me directly. I do every design-partner conversation personally. Pilots get priority on roadmap items that block them.