Cortex unifies documentation, product knowledge, and operational context across your entire organization — then layers AI agents on top to resolve any query, for any team, instantly.
Every organization has the same problem: critical knowledge lives in scattered docs, Slack threads, wikis, Notion pages, Confluence spaces, and people's heads. When someone needs an answer, they interrupt three people and search four tools.
Product docs, API references, runbooks, and guides scattered across 5+ tools. No single source of truth.
Engineering doesn't know what Sales promised. Support doesn't know what Product shipped. Everyone's guessing.
Customer questions take hours because support escalates to engineering, which escalates to the one person who knows.
When people leave, their knowledge leaves with them. Onboarding takes months because context isn't captured.
Cortex is a git-native knowledge platform where every team contributes their knowledge — documentation, product specs, support playbooks, sales collateral, engineering runbooks — into a unified, versioned, structured system. Then AI agents layer on top to serve any query to any team.
Cortex isn't just a concept. We've already built, and use daily, a full CMS with git-style workflows, a CLI for developer-native content sync, a rich web editor, and multi-project publishing. This is the foundation the AI layer builds on.
Create branches, stage changes, commit with messages, diff, merge, rebase — the full git workflow applied to documentation.
Engineers use the CLI to push/pull docs from their repos. PMs and writers use the rich web editor. Both stay in sync.
Protected branches, merge request workflows, conflict detection, rebase — enterprise-grade content governance.
Each product, team, or partner gets their own documentation space with independent versioning and access control.
Tenant → Organization → Project → Branch level permissions. Partners see only what they should.
Import OpenAPI specs, render interactive API docs with try-it panels, keep specs versioned alongside guides.
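To make the git-native model concrete, here is a minimal Python sketch of how branches, commits, diffs, and three-way merges with conflict detection could work over documentation content. All names (`Branch`, `Commit`, `merge`) are hypothetical illustrations, not the actual Cortex data model.

```python
from dataclasses import dataclass

@dataclass
class Commit:
    message: str
    snapshot: dict                     # path -> content at commit time
    parent: "Commit | None" = None

@dataclass
class Branch:
    name: str
    head: "Commit | None" = None

    def commit(self, message: str, changes: dict):
        # Each commit stores a full snapshot derived from its parent.
        base = dict(self.head.snapshot) if self.head else {}
        base.update(changes)
        self.head = Commit(message, base, self.head)

def diff(a: Branch, b: Branch) -> dict:
    """Paths whose content differs between two branch heads."""
    sa = a.head.snapshot if a.head else {}
    sb = b.head.snapshot if b.head else {}
    return {p: (sa.get(p), sb.get(p))
            for p in set(sa) | set(sb) if sa.get(p) != sb.get(p)}

def merge(target: Branch, source: Branch, base_snapshot: dict) -> list:
    """Three-way merge; returns conflicting paths instead of merging them."""
    conflicts = []
    merged = dict(target.head.snapshot) if target.head else {}
    source_snap = source.head.snapshot if source.head else {}
    for path, content in source_snap.items():
        if content == base_snapshot.get(path):
            continue                              # source didn't touch this doc
        if merged.get(path) in (content, base_snapshot.get(path)):
            merged[path] = content                # only one side changed it
        else:
            conflicts.append(path)                # both sides changed the same doc
    if not conflicts:
        target.head = Commit(f"merge {source.name}", merged, target.head)
    return conflicts
```

The same mechanics — snapshot commits, diffs between heads, and conflict detection against a common ancestor — are what let a merge request on docs behave like a merge request on code.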
Knowledge shouldn't be bottlenecked by who knows how to use which tool. Cortex meets every person where they already work — engineers in their IDE, PMs in the browser, support in their templates — and merges it all into one versioned source of truth.
cortex pull → edit in IDE → cortex push
Write docs alongside code in your editor. The CLI syncs everything — API specs, architecture decisions, runbooks — as structured markdown. Your IDE indexes it all as context for AI-assisted coding.
No CLI needed. The web editor gives you a full writing experience with live preview, drag-and-drop media, and one-click publishing. Create a branch, write your spec, submit for review — all from the browser.
Solved a tricky customer issue? Turn it into a knowledge article in minutes. Templates guide the structure, and the review workflow ensures quality before it goes live for the whole team.
It doesn't matter how knowledge enters Cortex — CLI push, web editor, or template. It all lands in the same git-native system with the same versioning, the same review process, and the same AI layer on top. No more "which wiki did you put that in?"
The CLI bridges Cortex and your local environment. Once knowledge is synced locally, your AI-powered IDE treats docs like code — indexing product specs, API contracts, and team conventions as generation context. The result: AI that understands intent, not just syntax.
Run cortex pull and your entire knowledge base — docs, API specs, runbooks, architecture decisions — lands in your local workspace as structured markdown files.
AI coding assistants (Cursor, Copilot, Kiro, Windsurf) automatically index these files. Your product specs, API contracts, and team conventions become part of the generation context — no manual copy-pasting.
When the IDE knows your API schema, your naming conventions, your architecture patterns, and your product requirements — it stops guessing and starts generating code that actually fits. Less hallucination, fewer corrections.
Updated a doc while coding? Run cortex push and your changes go back to Cortex — versioned, reviewed, and available to every team. The knowledge loop stays closed.
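The push side of that loop can be sketched as simple content hashing: compare what's in the workspace against the hashes recorded at pull time, and upload only what changed. This is an illustrative sketch, not the real CLI's implementation — `changed_files` and the dict-based workspace are stand-ins for walking the local filesystem.

```python
import hashlib

def digest(text: str) -> str:
    """Content hash used to detect local edits since the last pull."""
    return hashlib.sha256(text.encode()).hexdigest()

def changed_files(workspace: dict, remote_index: dict) -> dict:
    """workspace: path -> current local content.
    remote_index: path -> hash recorded at pull time.
    Returns what a push would need to upload: edited and new files."""
    return {
        path: content
        for path, content in workspace.items()
        if digest(content) != remote_index.get(path)
    }
```

Hashing makes the sync cheap: unchanged docs cost one comparison, and only real edits travel back to be versioned and reviewed.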
Most teams treat documentation and code as separate worlds. Cortex collapses them into one. Your docs live alongside your code, your IDE understands both, and AI generation gets dramatically better because it has the full picture — not just syntax, but intent, context, and constraints.
The real power isn't just storing knowledge — it's structuring it in cascading layers so AI agents can resolve any query by traversing the right context at the right depth.
L1 — Public: customer-facing docs, API references, guides, tutorials. Served to end users and support agents first.
L2 — Engineering: runbooks, architecture decisions, deployment guides, incident playbooks. For internal teams.
L3 — Business: product specs, roadmaps, competitive analysis, sales playbooks, customer success workflows.
L4 — Organizational: team structures, processes, vendor docs, compliance requirements, onboarding materials.
AI agents traverse layers based on the query context. A support question starts at L1, escalates to L2 if needed. A sales question hits L3 first. Agents cascade through layers until the answer is found — no human escalation needed.
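The cascade can be sketched in a few lines of Python. Everything here is a toy stand-in — keyword matching instead of semantic retrieval, hard-coded layers instead of the real knowledge base — but the traversal logic is the point: each team gets its own entry layer and falls through until something answers.

```python
# Toy knowledge layers: layer -> {topic: answer}. Stand-in for real retrieval.
LAYERS = {
    "L1": {"sso setup": "Public SSO guide"},
    "L2": {"sso edge cases": "Internal SSO runbook"},
    "L3": {"competitive pricing": "Sales battlecard"},
}

# Each team enters the cascade at a different layer, then falls through.
ENTRY_ORDER = {
    "support": ["L1", "L2", "L3"],
    "sales": ["L3", "L1", "L2"],
}

def resolve(query: str, team: str):
    """Cascade through layers in the asking team's order; return the
    first layer that can answer, or (None, None) if nothing matches."""
    for layer in ENTRY_ORDER[team]:
        for topic, answer in LAYERS[layer].items():
            if topic in query.lower():
                return layer, answer
    return None, None
```

A support question about SSO setup resolves at L1; the same agent would fall through to the L2 runbook for an edge case, with no human escalation in between.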
Agent finds the public SSO guide, enriches with internal configuration notes and known edge cases.
Agent pulls competitive positioning from sales layer, backs it up with actual feature documentation.
Agent surfaces the architecture decision record, links to the API spec, and shows the retry configuration.
Agent cascades through troubleshooting docs, checks known issues from recent releases, suggests resolution.
Agent aggregates shipped features from product specs, cross-references with adoption data.
Agent walks through onboarding checklist, links to environment setup docs, explains CI/CD pipeline.
Cortex runs entirely on AWS infrastructure, designed to scale from day one to serving thousands of organizations.
Amazon Bedrock — foundation models for the agentic query layer: Claude for answer generation, Titan embeddings for knowledge retrieval.
Amazon OpenSearch — vector search across cascading knowledge layers. Semantic retrieval with hybrid keyword + embedding search.
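Hybrid retrieval blends two signals: lexical overlap with the query and vector similarity of embeddings. A minimal self-contained sketch, with toy vectors in place of Titan embeddings and `hybrid_rank`/`alpha` as hypothetical names:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc_text):
    """Fraction of query terms that appear in the document."""
    q, d = set(query.lower().split()), set(doc_text.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.5):
    """docs: list of (text, embedding). Blend keyword and vector scores,
    weighted by alpha, and return texts best-first."""
    scored = [
        (alpha * keyword_score(query, text)
         + (1 - alpha) * cosine(query_vec, vec), text)
        for text, vec in docs
    ]
    return [text for _, text in sorted(scored, reverse=True)]
```

The keyword term keeps exact identifiers (API names, error codes) from being drowned out by fuzzy semantic matches, while the embedding term catches paraphrased questions.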
Amazon RDS — core data store for the CMS: branches, commits, content nodes, merge requests, access control.
Amazon S3 — content object store for committed snapshots, media attachments, API spec files, and static site hosting.
Serverless API layer for the agentic network. Each agent function scales independently based on query volume.
ECS/Fargate — container hosting for the core platform API and frontend. Auto-scaling based on active users and content operations.
Amazon CloudFront — global CDN for published documentation sites. Sub-100ms page loads for end users worldwide.
Event-driven indexing pipeline. Content changes trigger re-embedding and knowledge base updates asynchronously.
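The shape of that pipeline — publish fast, index later — can be sketched with an in-memory queue. `IndexPipeline` and the one-line `embed` stub are hypothetical stand-ins for the real queue service and Titan embeddings call; the point is that `on_content_changed` only enqueues, while the slow re-embedding happens when a worker drains the queue.

```python
from collections import deque

def embed(text: str) -> list:
    """Stand-in for an embeddings API call (toy: vector of the text length)."""
    return [float(len(text))]

class IndexPipeline:
    """Content changes enqueue events; a worker re-embeds asynchronously,
    so publishing a doc never blocks on indexing."""

    def __init__(self):
        self.queue = deque()
        self.index = {}                           # doc path -> embedding

    def on_content_changed(self, path: str, new_text: str):
        self.queue.append((path, new_text))       # fast, non-blocking

    def drain(self):
        while self.queue:                          # worker loop
            path, text = self.queue.popleft()
            self.index[path] = embed(text)        # slow work happens here
```

Decoupling the write path from the indexing path is what keeps commits and merges snappy even when an edit fans out into many re-embeddings.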
Bedrock model invocations, OpenSearch vector indexing, embedding generation for the agentic layer
RDS, ECS/Fargate, S3 for the CMS platform serving multiple organizations
CloudFront, ElastiCache, load testing infrastructure for production readiness
WAF, GuardDuty, CloudTrail, KMS for enterprise-grade security posture
Full git-native content management system with branching, merging, rebasing, conflict resolution
Developer CLI (Go) for push/pull/fetch + rich web editor for non-technical contributors
Hierarchical access control, multi-project support, partner-specific documentation spaces
Comprehensive test suite — 92 backend + 140 frontend tests covering all core workflows
Every piece of knowledge in your organization flows into Cortex.
AI agents organize it into cascading layers.
Anyone can ask anything and get the right answer — instantly.
No more Slack-searching. No more "who knows about X?"
No more knowledge walking out the door when people leave.
Just ask Cortex.