
Maxy vs Claude. Surface by surface.

Companion to maxy.md. Source of truth for every comparison claim made on the landing pages, in customer conversations, and inside the Maxy Institute curriculum.

Why this document exists

Maxy is built on Claude. Every Maxy customer is a Claude Max 20X customer. The honest question every prospect asks - and that the team must answer in one breath - is: "Why not run Claude on its own?"

This document is the answer. It is not a takedown of Anthropic's products; they are the substrate Maxy depends on. It is a precise mapping from each Anthropic surface to the operator-side cost the substrate leaves on the table, and from each cost to the Maxy capability that absorbs it.

The mapping rests on a single structural observation, named in Module 27 of the Maxy Institute curriculum and corroborated by Anthropic's own documentation and three long-running open issues on github.com/anthropics/claude-code:

Anthropic ships horizontal surfaces. The operator builds the integration layer.

Remove the integration burden and you close the gap between what Claude promises (the 100:1 multiplier doctrine taught in the Institute's Module 00) and what an operator actually experiences.

The substrate-wide pattern: read-mostly, ringfenced, operator-as-glue

Every limitation in the table that follows is an instance of the same pattern. State this clearly and the rest of the document writes itself.

Anthropic's products are read-mostly across every surface. Project files are read-only (open issue #16550 since January 2026). The RAG threshold for project knowledge kicks in at around 13 files regardless of total size and degrades retrieval quality long before the documented capacity limit (open issue #25759 since February 2026). Design can read linked repositories but cannot write back. Enterprise connectors like the M365 MCP server are also read-only - enterprise customers have been asking for write-back since at least February 2026. The pattern is substrate-wide, not a single product oversight.

The operator's downloads folder becomes the bridge between products that should compose and don't. Every cross-surface workflow requires manual file shuffling - the kind of executable drudgery the substrate's own doctrine promises to eliminate. The substrate, as currently shipped, does not deliver on that promise for multi-surface work; the operator builds the data plumbing the substrate omits.

That is the gap Maxy occupies.

Surface by surface

Read this table left to right. Every row is structurally complete on the Maxy side because the integration layer is what we ship.

| Anthropic surface | What it does well | Operator-side cost (cited) | Maxy equivalent |
| --- | --- | --- | --- |
| Claude (web) | Frontier model. Cross-device sync for chats, memory, Projects. Universal access. | The chat is the only state. Nothing composes across chats without Project files; Project files are read-only and ringfenced. | Same model, reached through a chatbox tuned for the operator's archive. The graph composes across every session, every public agent, every workflow. |
| Claude Mobile | Best-in-class capture surface. Voice, share sheet, ideation in motion. | The capture goes nowhere structured. It lands in chat history, not in an archive that knows the operator. | Same capture surfaces (WhatsApp, Telegram, share-sheet to web chat). Every capture lands as a node in the operator's graph, ontology-resolved, edge-linked. |
| Claude Projects (cloud) | Persistent context. Cross-surface (web, mobile, desktop). | Ringfenced per Project. Files are read-only (issue #16550). No deduplication; duplicate uploads pay full token cost in both direct-load and RAG modes (Module 27). RAG threshold cuts in at ~13 files (issue #25759). The five-step manual sync loop on every edit. | One graph. Account-scoped. Ontology-bounded. Read-write everywhere. No duplicates by construction. Retrieval is graph traversal plus semantic similarity, not 13-file RAG. |
| Cowork | Local read-write filesystem. Scheduled tasks. Computer use. Dispatch. The tool of choice for autonomous local work. | Desktop-only. Machine must be awake at the scheduled time or the run is skipped. Audit-trail gap: Anthropic's own Cowork documentation warns "do not use Cowork for regulated workloads". No multi-tenant agent surfaces. Scheduler is local, not cloud. | Background admin agents on a private device that does not sleep. Six roles working in parallel. Workflow engine with a durable audit trail that regulated practice can run through. |
| Cowork Projects (desktop) | Read-write file access on local disk. Schedule travels with the project. | Parallel feature to Claude Projects, same name, separate store, no sync between them (Module 23). Two desktops mean two separate Cowork Projects with no reconciliation. The bridge-summary discipline is the operator's manual workaround on every import. The cache-to-workspace copy is a fix the operator applies on every first open. | One graph. One device. One operator identity. No bridge summaries to write because nothing is ringfenced. Cross-surface continuity is structural, not a per-import rite. |
| Dispatch | Mobile-initiated standalone tasks. Beta. | Mobile-only initiation. Audit-trail gap inherited from Cowork. Local execution; no always-on cloud server. | The same mobile capture, but the device on the operator's network does the work. No "machine awake" constraint. The agent runs on the appliance the operator already paid for. |
| Design | Visual workspace for mockups, decks, prototypes. | Read-only sources. Static-mirror trap: edits land in the project, not the source-of-truth file (Module 27). The downloads folder is the integration layer between Design and the repo. Every iteration cycle taxes the operator's time on file shuffling. | Maxy does not ship a Design substitute. Design remains useful for one-shot visual work. For cross-surface work, the graph is the integration layer, not the downloads folder. |
| Claude Code | Best-in-class developer tool. Routines run on Anthropic's cloud and persist whether the operator's machine is on or off. | Developer surface, terminal-first. Not the surface a non-technical operator opens. | Maxy is Claude Code wrapped for the non-technical operator. Same agent framework underneath, conversational on top. The operator never opens a terminal; the agents inside the appliance do. |
| Memory | Shallow personalisation across chats. | Not a data layer. Not queryable. Not ontology-bounded. Module 27 names this as "the data pipeline gap": the substrate names memory but ships no usable data pipeline. | The graph is the memory. Queryable, ontology-bounded, observable, exportable in conversation under GDPR Article 15, deletable under Article 17 with a deletion receipt. |
| MCP / Connectors | The protocol that wires external systems into Claude. Strong ecosystem. Built-in Connectors for Google, GitHub, Slack. | Most enterprise connectors are read-only (M365 - issue #25363; Salesforce documented similarly). Wires, not stores. The operator builds the store. | Maxy is the store. MCP servers ingest into the graph; the graph is the source of truth; Maxy's own MCP servers expose the graph back to Claude with schema-bounded write paths. |
| Public-facing agents (24/7 customer chat) | Anthropic does not ship this. | The operator either points clients at claude.ai (no good) or builds the customer-facing surface themselves. | Maxy ships multi-tenant public agents on the same hardware as the admin agent. Permission separation is structural, not policy. Each public agent has its own knowledge, boundaries, and personality, all configured in conversation. |
| Sovereignty / on-premises | Anthropic does not ship this. | The data lives on Anthropic's servers (Free, Pro, Max plans train on data by default with five-year retention; Team and Enterprise do not, but the data still lives in their cloud). | The graph, the embeddings, the public-agent surface, the workflow engine - all on a Raspberry Pi 5 (from £250) or a 16 GB Linux machine the operator already owns. Only LLM prompts leave, sent under zero-retention API terms. |
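The "no duplicates by construction" claim in the Projects row comes down to content-addressed ingestion: if a node's identity is derived from its content, a second upload of the same material cannot create a second copy. The sketch below is a minimal, hypothetical illustration of that property - the class and field names are invented here, not Maxy's shipped ingestion pipeline.

```python
import hashlib

class GraphStore:
    """Toy content-addressed node store: identical content occupies exactly one node."""

    def __init__(self):
        self.nodes = {}  # content hash -> node

    def ingest(self, content: str, source: str) -> str:
        # The node's key is the hash of its content, so re-uploading the same
        # text resolves to the existing node instead of paying for a copy.
        key = hashlib.sha256(content.encode("utf-8")).hexdigest()
        node = self.nodes.setdefault(key, {"content": content, "sources": set()})
        node["sources"].add(source)  # provenance accumulates; content does not duplicate
        return key

store = GraphStore()
a = store.ingest("Q3 pricing sheet", source="whatsapp")
b = store.ingest("Q3 pricing sheet", source="web-chat")
# a == b: both captures land on the same node, with both sources recorded.
```

Contrast this with the Projects row, where a duplicate upload pays full token cost twice because nothing in the store knows the two files are the same.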

Six things on the Maxy side are not derivable from the substrate at any price. Each one fills the absence of a feature Anthropic could ship but has chosen not to:

  1. The graph. One operator-owned ontology across every surface, every capture, every conversation.
  2. Read-write everywhere. No five-step sync loop, no static-mirror trap, no parallel-feature confusion.
  3. Multi-tenant public agents. Customer-facing chat surfaces with their own personas, knowledge, and boundaries.
  4. Sovereignty. The data lives on hardware the operator owns.
  5. A workflow engine with audit trail. Regulated practice can run through it; Cowork cannot.
  6. The conversational interface as the only interface. No menus. No dashboards. No surface to learn.

The three Anthropic structurally cannot match

Pull the table together and three structural impossibilities remain. Not "things Anthropic has not got round to" - things Anthropic, being Anthropic, cannot ship at the price an operator can pay for them.

1. The graph as source of truth

Anthropic's primary cross-chat persistence layer is Project files. Every Project ringfences its own files; nothing composes across Projects. There is no ontology and no graph schema. Retrieval is keyword matching, or whatever the underlying RAG layer does once the fragile threshold trips (~13 files, regardless of content size). The data layer, in Module 27's framing, is structurally absent.

Maxy ships a Neo4j graph with ontology-bounded entities and relationships, vector embeddings computed on the device, and hybrid retrieval (graph traversal plus semantic similarity) on every turn. The schema is injected directly into the relevant agent's system prompt: the LLM keeps its semantic judgement but cannot generate out-of-schema entities or edges. Deterministic boundaries with intelligent judgement inside them.
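The "deterministic boundaries with intelligent judgement inside them" idea - the LLM proposes entities and edges, the schema disposes - can be shown with a small validator. Everything below is a hypothetical sketch: the ontology labels, the tuple format, and the rejection behaviour are illustrative inventions, not Maxy's shipped schema.

```python
# Hypothetical ontology: allowed node labels, and allowed edge types
# expressed as (source label, relationship, destination label) triples.
ONTOLOGY = {
    "labels": {"Person", "Organisation", "Document"},
    "edges": {
        ("Person", "WORKS_AT", "Organisation"),
        ("Document", "AUTHORED_BY", "Person"),
    },
}

def validate_proposal(nodes, edges):
    """Accept an LLM-proposed subgraph only if every element is in-schema."""
    label_of = {}
    for name, label in nodes:
        if label not in ONTOLOGY["labels"]:
            raise ValueError(f"out-of-schema label: {label}")
        label_of[name] = label
    for src, rel, dst in edges:
        triple = (label_of[src], rel, label_of[dst])
        if triple not in ONTOLOGY["edges"]:
            raise ValueError(f"out-of-schema edge: {triple}")
    return True

# An in-schema proposal passes; an invented label or edge type raises
# before it ever reaches the graph.
ok = validate_proposal(
    nodes=[("ada", "Person"), ("acme", "Organisation")],
    edges=[("ada", "WORKS_AT", "acme")],
)
```

The design point is that the boundary is enforced in deterministic code, after the model call: the model's semantic judgement picks the entities, but nothing out-of-schema can be written.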

This is the single most important difference. Most other Maxy advantages flow from it.
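Hybrid retrieval - graph traversal plus semantic similarity - can be sketched as a score fusion over candidate nodes. The toy graph, embeddings, and the 0.7/0.3 blend below are assumptions for illustration only; Maxy's actual retrieval weights and traversal depth are not specified here.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy graph: an adjacency list plus a tiny embedding per node.
NODES = {
    "invoice-17":  {"embedding": [0.9, 0.1], "neighbours": ["client-acme"]},
    "client-acme": {"embedding": [0.2, 0.8], "neighbours": ["invoice-17", "meeting-3"]},
    "meeting-3":   {"embedding": [0.6, 0.6], "neighbours": ["client-acme"]},
}

def hybrid_retrieve(query_embedding, anchor, alpha=0.7):
    """Blend semantic similarity with graph proximity to a known anchor node."""
    # The anchor's one-hop neighbourhood counts as 'graph-close'.
    close = set(NODES[anchor]["neighbours"]) | {anchor}
    scored = []
    for name, node in NODES.items():
        semantic = cosine(query_embedding, node["embedding"])
        proximity = 1.0 if name in close else 0.0
        scored.append((alpha * semantic + (1 - alpha) * proximity, name))
    return [name for _, name in sorted(scored, reverse=True)]

ranking = hybrid_retrieve([1.0, 0.0], anchor="client-acme")
# → ['invoice-17', 'meeting-3', 'client-acme']
```

Pure keyword or pure vector search has no notion of the anchor's neighbourhood; the graph term is what lets a query about a client surface the invoice linked to that client even when the embeddings alone would not.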

2. Multi-tenant public agents

Anthropic ships claude.ai for the operator. There is no consumer-facing surface where the operator's customers can talk to a tuned agent that knows the operator's pricing, availability, scope, and tone of voice. The operator's options on the substrate are to draft replies in Claude and paste them, or to build the public-facing surface themselves with MCP servers, custom auth, and their own infrastructure.

Maxy ships public agents on the same device as the admin agent. Each public agent has its own knowledge, boundaries, and personality, all configured in conversation by the operator. A two-person business operates with the customer-facing surface of a twenty-person operation. Permission separation between the admin context and the public context is a structural property of the system - not a policy that can be misconfigured.
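"Structural, not policy" permission separation is the capability pattern: each public agent is constructed with a handle that can only reach its own slice of the store, so there is no per-request check to misconfigure. The sketch below is a minimal illustration under that assumption - the class and tenant names are invented, not Maxy's implementation.

```python
class TenantView:
    """A capability handle scoped to one tenant's slice of the store.

    A public agent is constructed with this handle and nothing else;
    no method on it can reach another tenant, so separation is a
    property of the object graph rather than a runtime policy check.
    """

    def __init__(self, store, tenant):
        self._rows = store.setdefault(tenant, [])

    def read(self):
        return list(self._rows)

    def write(self, item):
        self._rows.append(item)

store = {}
admin_view = TenantView(store, "admin")
bookings_view = TenantView(store, "salon-bookings")

admin_view.write("supplier price list")   # private admin knowledge
bookings_view.write("opening hours")      # public-agent knowledge

public_agent = TenantView(store, "salon-bookings")
visible = public_agent.read()  # only this tenant's slice is reachable
```

A policy-based design would hand every agent the whole store and filter on each read; the capability design never hands the public agent a reference it must be protected from.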

3. Sovereignty

Anthropic is in the business of shipping cloud-hosted horizontals. That is the right business for Anthropic. It is not the business of shipping a private appliance for one operator at a time. The operator who needs the data on their premises - for confidentiality reasons, for regulatory reasons, for the simple reason that an asset they own should live on hardware they own - has nowhere to land on Anthropic's product surface.

Maxy lands there. The graph, the embeddings, the public-agent surface, the workflow engine - all on a Raspberry Pi 5 (from £250) or a 16 GB Linux machine the operator already owns. Only LLM prompts leave the device, under zero-retention API terms. Embeddings powering memory search are computed locally; Anthropic does not retain or train on what reaches them. Cancel the subscription and the data stays. There is no export fee, no lock-in, no hostage situation.

This is the property an operator buys with the price tag. Not features (Claude has more). Not capability (Claude is the capability). Sovereignty.

Where Anthropic is genuinely the better surface

The honest section. Three places where the substrate, on its own, is the right answer:

Naming these honestly is what lets the rest of the comparison hold weight. We are not in competition with Anthropic; we make our customers better Anthropic customers.

Why the comparison is not a feature list

A feature-list comparison would lose. Anthropic ships more features per quarter than any vendor in the category. The comparison Maxy wins is not "more features" - it is "the right assembly".

The Module 00 doctrine names the goal: AI does the executable drudgery so the operator's human-touch time multiplies up to a hundredfold. The drudgery includes manual file shuffling between Claude products. The drudgery includes maintaining a knowledge graph by hand in Obsidian. The drudgery includes pasting Claude's drafts into the customer-facing channel. The drudgery includes building one's own integration layer over MCP servers, Connectors, and the downloads folder.

Maxy absorbs the drudgery - end to end, on the operator's own device - so the operator can spend the freed time on the part of the work only they can do.

That is the comparison. Anthropic's products are excellent and we depend on them. The integration layer is the missing half of the value, and it is the half Maxy ships.

Citations