The privacy architecture of a personal intelligence

Dr Claude Delorme
Head of Research, moccet

Six structural commitments make deep access to a user’s data safe. They are not features of a privacy policy. They are properties of a system designed to be trusted with the whole of a person’s life.

The privacy architecture of a personal intelligence is the set of structural commitments that make deep access to a user's data safe. The architecture has six components. Encryption in transit and at rest. Prohibition on training. Data sovereignty. Sandboxed action. User confirmation. Auditability. Each addresses a specific class of risk, and the components together break what the security researcher Simon Willison has called the lethal trifecta of agent vulnerabilities. moccet is being built with these components from the beginning rather than retrofitted as compliance.

This essay explains what each component does, why the older privacy framework is no longer sufficient for systems that act on the user's behalf, and what to look for when evaluating a personal intelligence.

What is the lethal trifecta and why does it change AI privacy?

In June 2025, a security researcher demonstrated something that should have changed how the AI industry talks about agent privacy. The researcher sent a single crafted email to a corporate inbox running Microsoft 365 Copilot. The email contained no attachment, no link, and no instruction visible to the human recipient. What the email did contain was a hidden prompt, a passage of text that Copilot, during its routine summarisation of the user's inbox, would parse as an instruction. Within seconds of the email being summarised, the agent had pulled sensitive data from the user's OneDrive, SharePoint, and Microsoft Teams accounts and exfiltrated it through a domain Microsoft itself trusted. The user clicked nothing. The user opened nothing. The agent did the harm on its own, having been silently instructed by an attacker the user had never met.

The vulnerability, catalogued as CVE-2025-32711, earned a CVSS score of 9.3 out of 10. Microsoft patched the specific issue. The deeper architectural problem the attack exposed remained, and by January 2026 it had been repeated in production exploits against four major AI agent products in five days, using variations of the same attack pattern.

The pattern has a name. Simon Willison, the British programmer who first popularised the term prompt injection in 2022, has called it the lethal trifecta. An AI agent is exploitable by design when the system has three properties simultaneously. Access to private data. Exposure to untrusted content. The ability to externally communicate. Most consumer AI agents in 2026 have all three. The architectural fact is that large language models cannot reliably distinguish between instructions and data. Models treat everything in their input as text that may contain instructions, and any document, email, or web page they ingest can therefore become a vector for taking control of the agent.
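Willison's framing can be expressed as a simple structural check. This is an illustrative sketch, not a real API; the function and argument names are invented for the example:

```python
def exploitable_by_design(
    has_private_data: bool,
    sees_untrusted_content: bool,
    can_communicate_externally: bool,
) -> bool:
    """An agent is structurally exploitable only when all three
    legs of the lethal trifecta are present at the same time."""
    return (
        has_private_data
        and sees_untrusted_content
        and can_communicate_externally
    )

# Removing any single leg breaks the trifecta.
print(exploitable_by_design(True, True, True))   # typical 2026 consumer agent
print(exploitable_by_design(True, True, False))  # external communication gated
```

The point of the sketch is that the defence does not require solving prompt injection itself; it requires making any one of the three conjuncts false by construction.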

The conversation about privacy in personal AI tends to focus on the wrong layer. The traditional privacy questions still matter. Does the company sell my data? Do they use it for training? Who has access? The deeper concern in agent systems is that the data does not need to leave through traditional exfiltration channels for it to be compromised. The agent itself can be turned into the exfiltration channel by an external attacker with no access to the user's account and no knowledge of the user's password.

The architectural answer to this is not a privacy policy. The architectural answer is a system design that breaks the trifecta.

What are the six components of a privacy architecture?

The privacy architecture of a personal intelligence has six components. Each addresses a different class of risk. The first three are the layers that have always mattered for handling personal data. The last three are specific to systems that act on the user's behalf.

The first component is encryption in transit and at rest. The user's data is encrypted on the way to the system and encrypted while it sits in storage. Encryption does not protect against an insider with access to keys. Encryption protects against external actors who do not have them, against accidental exposure of storage volumes, and against the routine forms of breach that account for most security incidents in commercial software. Every credible product in the category has it. A product that does not is not viable.

The second component is the prohibition on training. The user's personal data is not used to train models. The reason this matters more in this category than in older ones is that AI training is irreversible. Once a model has learned from a piece of data, the data is part of the model in a way that survives the user's account being deleted. If a model has been trained on a user's emails and the user later cancels their service, the emails continue to influence the model's outputs to other users in ways that cannot be undone.

The third component is data sovereignty. The user's right to revoke access to any source at any time, the right to delete all their data, the right to export it, the right to know what the system has stored. These rights are increasingly mandated by regulation. GDPR in Europe, CCPA in California, similar frameworks in other jurisdictions. The rights also convert the user's relationship with the system from captivity to delegation. A user who cannot leave is not a user. A system whose retention is built on lock-in rather than usefulness is a system whose privacy guarantees are not worth much.
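The sovereignty rights read naturally as operations a system must support. The following is a minimal sketch under invented names, not moccet's actual API:

```python
class SovereigntyStore:
    """Hypothetical sketch of data-sovereignty rights as operations:
    revoke a source, delete everything, export, and inspect what is stored."""

    def __init__(self):
        self.sources = {}  # source name -> list of stored records

    def connect(self, source, records):
        self.sources[source] = list(records)

    def revoke(self, source):
        # Revoking a source removes its stored data, not just future access.
        self.sources.pop(source, None)

    def export_all(self):
        # The user can always take a complete copy with them.
        return {s: list(r) for s, r in self.sources.items()}

    def delete_all(self):
        self.sources.clear()

    def inventory(self):
        # The right to know what the system has stored, per source.
        return {s: len(r) for s, r in self.sources.items()}
```

The design choice worth noting is that revocation deletes rather than merely disconnects: a source the user has revoked leaves nothing behind to inventory or export.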

The fourth component is sandboxed action. When the system takes an action on the user's behalf, the action runs in a constrained environment. The system cannot do arbitrary things. The system can only do things from a defined list of allowed actions. The list is visible to the user. New actions are added carefully and explicitly, with documentation about what each action allows and what it does not. The sandbox is the layer that prevents an action loop, especially one influenced by a prompt-injection attack, from doing damage at scale.
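In code, a sandbox of this kind reduces to an explicit, user-visible allow-list that the executor consults before doing anything. A minimal sketch, with invented action names:

```python
# The allow-list is data, not code: visible, enumerable, documentable.
ALLOWED_ACTIONS = {
    "draft_email": "Create an email draft; never sends on its own.",
    "create_calendar_event": "Add an event to the user's calendar.",
    "export_data": "Package the user's data for download.",
}

class SandboxViolation(Exception):
    """Raised when a requested action is outside the allow-list."""

def run_action(name: str, **params):
    if name not in ALLOWED_ACTIONS:
        # Refused outright, even if the model asked for it. A
        # prompt-injected request for an unlisted action dies here.
        raise SandboxViolation(f"action {name!r} is not in the allow-list")
    return {"action": name, "params": params, "status": "executed"}
```

The important property is that the list is closed: the agent cannot invent a new capability at runtime, so the blast radius of a compromised action loop is bounded by what the list already permits.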

The fifth component is user confirmation. Every action the system takes is confirmed by the user before it executes. The system drafts. The user approves. The action runs. Confirmation is structural, not optional, and it is the design feature that addresses Willison's third leg of the lethal trifecta, the ability to externally communicate. An agent that cannot send an email, transfer a file, or call an API without an explicit user confirmation has had the exfiltration leg of the trifecta surgically removed. Even if the agent is successfully prompt-injected by malicious content, the attack cannot complete without the user noticing. A fuller account of the engineering of confirmation is in the essay on what it means for an AI to take action.
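The draft-approve-execute sequence can be sketched as a gate that holds every action in a pending state until the user explicitly confirms it. Names here are illustrative, not a real implementation:

```python
import uuid

class ConfirmationGate:
    """Sketch of structural confirmation: every action is drafted
    first and executes only after an explicit user confirmation."""

    def __init__(self):
        self.pending = {}

    def draft(self, action, **params):
        # The action is recorded but NOT run. The token is surfaced
        # to the user alongside a description of the drafted action.
        token = str(uuid.uuid4())
        self.pending[token] = (action, params)
        return token

    def confirm(self, token):
        # Only a confirmed draft ever executes.
        action, params = self.pending.pop(token)
        return action(**params)
```

Under this shape, a successful prompt injection can at most place a draft in the queue; nothing leaves the system until the user sees the draft and confirms it, which is exactly the trifecta leg the attack needs.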

The sixth component is auditability. The system maintains a log of every action it has taken, every piece of data it has read in deciding to take the action, every confirmation the user has given. The log is visible to the user in a form they can actually parse, not buried in a developer console. The audit trail is the receipt for the relationship.
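An audit trail of this kind is, structurally, an append-only record of action, inputs, and consent, rendered in a form the user can read. A hedged sketch with invented field names:

```python
import time

class AuditLog:
    """Sketch of an append-only audit trail: what was done, what was
    read to decide it, and whether the user confirmed it."""

    def __init__(self):
        self.entries = []

    def record(self, action, data_read, confirmed_by_user):
        entry = {
            "ts": time.time(),
            "action": action,
            "data_read": data_read,           # sources consulted
            "confirmed": confirmed_by_user,   # explicit user approval
        }
        self.entries.append(entry)
        return entry

    def as_readable(self):
        # The user-facing form: plain sentences, not a developer console.
        return "\n".join(
            f"{e['action']} (read: {', '.join(e['data_read'])}, "
            f"confirmed: {e['confirmed']})"
            for e in self.entries
        )
```

The `as_readable` view is the part that makes the component meaningful: a log the user cannot parse is a log for the company, not a receipt for the relationship.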

What regulatory frameworks should a personal intelligence meet?

Around the six technical components is a layer of regulatory and organisational architecture that makes the technical guarantees meaningful.

SOC 2 Type II is an audit framework that confirms a company has the security controls it claims to have, with continuous review. HIPAA is the regulatory framework for health data in the United States. GDPR is the framework for personal data of European Union residents and has become the practical global standard. CCPA is the California analogue.

Each framework adds requirements that a personal intelligence should meet. Not because the law strictly requires them in every jurisdiction, but because meeting them is observable evidence that the company has taken the work seriously. moccet is being built to meet all four, and the engineering choices behind the continuous structured model of the user are designed to satisfy the requirements rather than work around them.

Should a personal intelligence exist at all?

The question is fair to ask, and the right answer is more nuanced than either a flat yes or a flat no.

The objection is that any system that knows a person this completely is, by definition, a privileged-access system. A well-meaning company today can be a hostile company in five years, a compromised company in ten, or a defunct company whose data has been sold to its creditors. The data outlasts the intentions of its first custodians.

The response is twofold. First, a personal intelligence built on the six components above is a different kind of system than a personal intelligence built without them. The architectural choices matter more than whether the technology exists at all. A product that ships the depth of access without the corresponding privacy architecture is exactly the surveillance product the skeptics fear. A product that ships the architecture is genuinely something different, a system the user has chosen, can leave at any time, and can verify is doing what it said it would do.

Second, the alternative to a well-built personal intelligence is not the absence of surveillance. The alternative is surveillance distributed across two dozen systems that were not designed to be safe. Most knowledge workers in 2026 have their data scattered across Google, Apple, Microsoft, Meta, their bank, their health provider, their insurance carrier, their wearable manufacturer, their employer, and a hundred apps they have forgotten they signed up for. Each system holds a slice of the user's life. None treats that slice the way a personal intelligence must treat the whole. Consolidating into a system designed for the work, with the privacy architecture built in from the start, is in many cases a privacy improvement over the current default, not a degradation of it.

The consolidation only works if the architecture is real. Without the six components and the regulatory layer around them, a personal intelligence is exactly the surveillance dystopia the skeptics fear. With them, the system is something genuinely different. The line between the two is in the engineering, not in the marketing.

How should a user evaluate the privacy of a personal intelligence?

Three questions sort the products that have done the work from those that have merely written about it.

Has the system been audited? SOC 2 Type II requires continuous review by an independent auditor. A product that claims security without an audit is making a claim the user cannot verify.

Is the action sandbox visible? A user evaluating a personal intelligence should be able to read the list of allowed actions. A product that does not publish this list is asking the user to trust the company instead of the system.

Is the audit log readable? The audit trail is the receipt for the relationship. A product where the log is buried, summarised, or unavailable to the user is a product whose retention is not built for the user.

The current generation of personal intelligence products is small, and the privacy architectures vary widely. The companies that have built the architecture into the substrate from the start will produce systems users live with for years. The companies that have not will produce the next generation of security incidents.

Try moccet

moccet is a personal intelligence built around a continuous model of one person’s life. The product is in early access. The founders run a live twenty-minute session daily at 1pm Pacific that walks through how it works on a real week.

Claim your seat

Common questions.

What is the lethal trifecta?

The lethal trifecta is the term coined by security researcher Simon Willison for the three properties that make an AI agent exploitable by design. Access to private data, exposure to untrusted content, and the ability to externally communicate. Most consumer AI agents in 2026 have all three. The architectural answer is to break the trifecta by removing one of the three legs.

What are the six components of a privacy architecture?

Encryption in transit and at rest. Prohibition on training on personal data. Data sovereignty including the right to revoke and delete. Sandboxed action with a defined list of allowed actions. User confirmation before any action executes. Auditability with a readable log of every action and data access.

What is prompt injection?

Prompt injection is the technique of hiding instructions inside content that an AI agent will process, causing the agent to take actions on behalf of the attacker rather than the user. The Microsoft 365 Copilot vulnerability of June 2025, catalogued as CVE-2025-32711 with a CVSS score of 9.3, was a prompt-injection attack that exfiltrated user data without any user interaction.

What regulatory frameworks should a personal intelligence meet?

SOC 2 Type II for security controls audit. HIPAA for health data in the United States. GDPR for personal data of European Union residents, which has become the practical global standard. CCPA for California residents. Meeting all four is observable evidence that the company has taken the work seriously.

Is a personal intelligence safer than the current default?

A personal intelligence built with the six components of privacy architecture is, in many cases, safer than the current default of scattered data across two dozen systems that were not designed to handle personal data well. The consolidation only works if the architecture is real. Without the architecture, a personal intelligence is the surveillance product the skeptics fear.
Live, daily at 1pm Pacific.

See moccet on a real week of yours.

Twenty minutes with the founders. They’ll show you how moccet works on a week like yours, what it’s good at, what it can do for you. Ten minutes for your questions.

Claim your seat