From Context to Intent: Personal Superintelligence is Here
- Decasonic
- 4 days ago
- 12 min read
AI is meeting users where they need intelligence
--Justin Patel, Venture Investor at Decasonic
Introduction
Imagine an AI system that already knows what you're working on when you open your laptop. It has context from your files, memory from past conversations, and enough understanding to help you move toward your goal before you even ask.
Personal superintelligence is easy to talk about in basic terms: smarter models = better reasoning = faster multimodality = more capable agents. But that framing misses what's actually happening in the market right now. The reason personal superintelligence feels near isn't because models simply got better. It's because AI systems are finally getting closer to the user at the point where they need AI most: meeting them at their workflows, files, communication surfaces, devices, daily routines, and eventually their real-world actions. This is a product shift.
Over the last three weeks, the shift became much clearer. OpenAI expanded Projects so users can pull in context from multiple sources, reorganizing the whole interface around separate Chats and Sources tabs. Anthropic made memory from chat history available to all Claude users, including free accounts, and expanded how Claude works across connected surfaces through Google Workspace connectors and Slack integration. Google previewed Gemini handling multi-step tasks like booking rides and reordering DoorDash meals, while Google Workspace rolled out Gemini features that synthesize across emails, chats, files, and the web. On the execution side, Mastercard introduced Verifiable Intent as a trust layer for delegated AI transactions, Stripe expanded agentic payments across Mastercard, Visa, Affirm, and Klarna, and ERC-8183 proposed an open standard for escrowed agent work on Ethereum. There was even legal action: a federal judge temporarily blocked Perplexity's Comet shopping agent from accessing Amazon without permission. The takeaway: AI is moving from intelligence generation toward systems that actually understand user intent and can increasingly support trusted action.
This is why personal superintelligence should be understood not just as a model problem but as a systems problem. As we have shared in prior blog posts, the stack still starts with model, context, and memory: the model reasons, context interprets the moment, and memory compounds the relationship over time. This is the framework by which we are building our AI OS, AI applications, and AI Teammates.

What’s changing is what that stack now enables. As model, context, and memory reinforce each other, AI moves closer to the user’s intent and the ability to support increasingly useful action on their behalf. Personal superintelligence isn’t just smarter and better output. It’s AI that understands the user deeply because the surrounding system has become far more personal.
Context is becoming ambient and always on
The context layer used to be whatever you typed into a chat prompt. Now it's becoming the surface where personal AI actually lives. The products pushing this forward look nothing like chatbots. OpenClaw runs scheduled background work through cron jobs and heartbeat checks, functioning as a persistent agent across your apps and local machine. It doesn't wait for you to open a chat window. It monitors, acts, and reaches back out when something needs your attention. That's a fundamentally different product shape than “typical AI” and it hints at where the category is heading.
The mainstream platforms are converging on a softer version of the same idea. OpenAI's Projects now ingest Slack channels, Drive folders, and prior ChatGPT outputs as reusable source material, turning the project into a living reference layer. Anthropic's Cowork update on February 24 shipped 13 new MCP connectors spanning Google Workspace, DocuSign, FactSet, and others, while Claude in Slack shows up in DMs, threads, and the assistant panel with access to all enabled integrations. Google's Workspace release on March 10, 2026 may be the most aggressive move: Gemini in Docs now generates fully formatted drafts by pulling from Drive, Gmail, and Chat simultaneously, and Gemini in Sheets scored 70.48% on SpreadsheetBench. None of these are just "AI features." They're context ingestion systems designed to make AI continuously aware of the user's working environment.
The 256k-token context window that OpenAI shipped for Thinking mode matters less than what's happening around it. Bigger windows help, but they still require the user to manually assemble the right inputs. The structural shift is that AI is starting to pull context on its own, from multiple external sources, without being asked. That's the difference between a tool you consult and a system that already knows what you're working on when you show up.
For startups, this is where a real wedge can emerge. The opportunity is not to outbuild the large model platforms on the horizontal layer. It is to own a high-value context surface they do not natively control. That could be a workflow, a communication layer, a vertical system of record, or an always-on environment where user intent becomes legible faster. As context becomes ambient, the best startups will look like products that sit adjacent to a user’s real workflow, capture the right signals by default, and make AI feel natively embedded.
Memory is becoming working state, not just recall
A few years ago, memory in AI meant "it remembers your name." What's happening now is different. Memory is becoming working state: a persistent layer that carries forward not just facts about you, but an evolving understanding of what you're trying to accomplish.
The clearest signal came on March 2, 2026, when Anthropic made memory from chat history available to every Claude user, including free accounts, and shipped a memory import tool that lets you bring context over from ChatGPT or Gemini. That's significant not because of portability, but because of what it implies about how Anthropic sees the competitive moat. They're treating accumulated user understanding as something that belongs to the person, not the platform, and making the bet that the quality of memory synthesis matters more than locking users in. Claude's documentation says it now synthesizes key insights across chat history and refreshes that synthesis every 24 hours, which means the system is actively maintaining a compressed model of who you are and what you care about, even between conversations.
OpenAI is approaching the same problem from a different angle. Project-level memory in ChatGPT is scoped and bounded: it draws only from conversations within the same project, creating self-contained workspaces where context doesn't bleed across unrelated work. That's a design choice that trades breadth for focus, and it matters for anyone running multiple workstreams who doesn't want one workflow contaminating another.
The gap between these two approaches, Anthropic's global synthesis versus OpenAI's project-scoped containment, is one of the more interesting design divergences in personal AI right now. Both are moving memory from passive recall toward active working state. The question is whether users want an AI that builds a single deep picture of them across everything, or one that maintains sharp separation between contexts. The answer is probably both, depending on the task. Either way, both represent a shift in the right direction.
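The design divergence is easy to see in miniature. The sketch below is a toy contrast, not either company's actual architecture: the same store answers a recall query either from one project's history only (scoped) or by pooling across everything (global synthesis). All names are hypothetical.

```python
from collections import defaultdict

class MemoryStore:
    """Toy contrast of two memory designs: project-scoped recall
    (each project sees only its own history) vs global synthesis
    (one pool of understanding across all work)."""

    def __init__(self, scoped: bool):
        self.scoped = scoped
        self._facts = defaultdict(list)   # project -> remembered facts

    def remember(self, project: str, fact: str):
        self._facts[project].append(fact)

    def recall(self, project: str) -> list[str]:
        if self.scoped:
            # Scoped design: memory bounded to the project, no bleed
            return list(self._facts[project])
        # Global design: draw on everything learned about the user
        return [f for facts in self._facts.values() for f in facts]

scoped = MemoryStore(scoped=True)
global_ = MemoryStore(scoped=False)
for store in (scoped, global_):
    store.remember("fundraise", "prefers bullet-point memos")
    store.remember("hiring", "timezone is CST")

print(scoped.recall("fundraise"))   # one fact: nothing leaks from "hiring"
print(global_.recall("fundraise"))  # both facts: a single picture of the user
```

The trade-off in the prose shows up directly: the scoped store never contaminates one workstream with another, while the global store can answer with context the current project never saw.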
For startups, this is where the opportunity gets much more interesting than simply "better memory features." The opportunity is to build around memory ownership, memory permissioning, and memory interoperability. This is especially relevant at the Web3 x AI intersection. If memory becomes part of the operating layer for personal AI, then questions like who controls it, where it lives, how it is shared, and how it is audited become very important. That is where open identity, user-controlled data rails, agent permissioning, on-chain attestations, and portable trust layers can matter.

What China is doing makes this even more important. Over the last week, Chinese local governments have moved to build an industry around OpenClaw, with draft measures offering up to 10 million yuan in subsidies plus free compute and office support, while Reuters also reported that China's broader "AI plus" push is encouraging society-wide adoption and treating tools like OpenClaw as engines for "one-person companies" and new forms of work. The deeper point is that China is increasingly treating open and agentic AI as ecosystem infrastructure, not just software. That raises the stakes for startups building open-source and decentralized alternatives: the next wedge may be trusted memory and user-owned working state that can persist across agents, surfaces, and economic systems.
Personal workflows are becoming the proving ground
The place where all of this gets tested is inside actual workflows. That is where the stack either holds together or falls apart. A one-off question can be answered with a strong model and almost no context. A real workflow is different. Drafting a report that pulls from last month’s email threads, two spreadsheets, meeting notes, and a decision buried in chat requires much more than generation. It requires the system to assemble the right context, maintain continuity across work, and move closer to what the user is actually trying to produce.
That is why the latest workflow products matter. The significance is not any single benchmark score. It is that the system is starting to do real work inside the user's environment rather than simply respond to isolated prompts.
This is also where Notion is pushing aggressively. Notion now frames itself as an AI workspace where agents can capture knowledge, search across apps, automate projects, and take action directly inside the operating surface where teams already work. Its February 24, 2026 release of Notion 3.3 with Custom Agents is especially notable: agents can be given a job, triggered on a schedule, and run 24/7 without manual prompting. Combined with AI Meeting Notes and Enterprise Search, that means the workflow layer is no longer just about drafting faster. It is about building a system that can continuously absorb context, retain it, and act on it inside the same environment.
The same pattern is showing up in transcript recorders and meeting-memory products. These tools are no longer just creating transcripts. They are becoming personalization infrastructure for workflows. Granola’s recent Microsoft Teams integration is a good example. Granola combines the rough notes a user takes during a meeting with the full transcript to produce structured notes that reflect the user’s priorities rather than a generic summary. The meeting is no longer just being recorded. It is being transformed into workflow memory shaped around what matters to the individual. That same pattern is visible in Otter’s Meeting Agent, which goes beyond transcription by answering questions in meetings, scheduling follow-ups, and generating drafts based on a growing database of prior conversations. The transcript recorder is becoming a system that can carry forward organizational and personal working state.
What makes workflows the real proving ground is that they expose whether the stack actually compounds. Once AI can ingest the relevant inputs, preserve continuity from prior work, and improve the next action based on what it has already learned, the question is no longer whether it can be useful. The question becomes how fast the feedback loop tightens between what the system learns about the user and how effectively it can support the next task, meeting, or decision.
For startups, this is where the opportunity gets most interesting. The wedge is to own a workflow where context is high-value, memory compounds naturally, and the path from understanding to action is narrow enough to be trusted. That could be meeting intelligence, project operations, personal finance, recruiting, legal workflows, or vertical systems of record where the model platforms do not have native distribution or deep proprietary context. In Web3 x AI, the opportunity gets even sharper: open identity, portable reputation, user-owned work history, and programmable permissioning can turn workflow memory into an asset the user controls rather than something locked inside a single SaaS tool.
Intent is what model, context, and memory make possible
Intent is the missing bridge in most discussions of personal superintelligence. It’s not just what the user literally typed. It’s what the system understands the user is trying to achieve. And it matters a lot more once AI gets closer to action. In Mastercard’s March 5 launch of Verifiable Intent, the company describes the challenge head-on: as AI agents begin to buy on our behalf, consumers need clarity about what was authorized, confidence that instructions were followed, and protection if something goes wrong.
Mastercard’s line is worth borrowing: trust cannot be implied. It must be proven.
This is why personal superintelligence should be evaluated through what the model, context, and memory stack now enables, not just through output quality. A personal system becomes valuable when it can combine live context with accumulated memory and form a better read on what the user wants next. That could mean reordering within a budget, preparing a draft in a preferred style, surfacing the next step in a recurring workflow, or narrowing choices based on prior behavior and present constraints. The leap here is not moving from a weak model to a strong model. It’s from getting an isolated answer to getting a goal-aware system. That’s the threshold that matters.
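A toy sketch makes the "goal-aware" threshold concrete: combine what the system has accumulated (memory) with what is true right now (context) to pick a next step, such as reordering within a budget or drafting in a preferred style. The function, field names, and rules below are entirely hypothetical, a minimal illustration of the pattern rather than any product's logic.

```python
def next_action(memory: dict, context: dict) -> str:
    """Toy intent inference: fuse accumulated memory with live
    context to suggest a goal-aware next step."""
    if context.get("task") == "reorder":
        item = memory.get("usual_order", "last purchase")
        # Scoped authority: act only within the user's known budget
        if context.get("price", 0) <= memory.get("budget", float("inf")):
            return f"reorder {item}"
        return f"flag {item}: over the user's usual budget"
    if context.get("task") == "draft":
        style = memory.get("preferred_style", "neutral")
        return f"prepare draft in {style} style"
    return "ask the user"   # intent unclear: fall back to a question

memory = {"usual_order": "pad thai", "budget": 20,
          "preferred_style": "bulleted"}
print(next_action(memory, {"task": "reorder", "price": 18}))  # within budget
print(next_action(memory, {"task": "reorder", "price": 35}))  # over budget
print(next_action(memory, {"task": "draft"}))
```

Even this crude version shows the shift: the same literal request ("reorder") produces different actions depending on what the system already knows, which is exactly what an isolated, memoryless answer cannot do.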
One useful example is what this looks like when applied inside commerce. Abdul recently built a commerce-intents app on top of Podium, one of Decasonic’s portfolio companies. The product learns about the user over time and suggests cosmetics based on their preferences, behaviors, and likely needs. That is a much more interesting direction than generic search or static product recommendations. The system is not simply responding to a query. It is building a better understanding of the user and using that understanding to shape what gets surfaced next. In a category like beauty, where preferences are personal, contextual, and often repeated over time, intent becomes far more valuable than raw retrieval.
That is the broader point. Once model, context, and memory start compounding together, intent becomes much easier to infer. And when intent becomes clearer, the interface starts to change shape. The product no longer feels like a tool waiting for a command. It starts to feel like a system that can anticipate, guide, and increasingly support decisions in ways that are actually personal.
Trust becomes infrastructure, and blockchain is built for it
As soon as AI starts acting with money, trust stops being implicit. Mastercard’s Verifiable Intent announcement says the standard links identity, intent, and action into a privacy-preserving record and is being developed with Google as an open, standards-based trust layer for agentic commerce. The company says it’s designed to work across protocols, devices, wallets, platforms, and payment networks. Launch partners include Google, Fiserv, IBM, Checkout.com, Basis Theory, and Getnet. This is not a side note. It’s the trust stack for delegated action becoming explicit.

Stripe’s March 3 update expands Shared Payment Tokens to support Mastercard Agent Pay, Visa Intelligent Commerce, Affirm, and Klarna, making Stripe the first provider supporting both agentic network tokens and BNPL tokens through a single primitive. The stack is getting reusable execution primitives.
Crypto and blockchain infrastructure is a natural fit for this trust layer because the core requirements (cryptographic proof, tamper-resistant records, programmable authorization, and auditable state transitions) are exactly what on-chain systems were built to do. Verifiable Intent needs a shared, neutral ledger where identity, authorization, and transaction outcome can be linked without relying on any single platform to maintain the record. Public blockchains provide that by default. Smart contracts can enforce scoped spending authority, time-bound permissions, and automatic refund conditions without a centralized intermediary. Token-based escrow systems like ERC-8183 already formalize this pattern: funds are locked programmatically, released only on evaluator attestation, and reclaimable on expiry. All of this is on-chain, all auditable, all composable with identity and reputation layers like ERC-8004. As AI agents become autonomous economic actors, the trust infrastructure underneath them needs to be as programmable and verifiable as the agents themselves. Traditional payment rails were designed for human-initiated, human-confirmed transactions. Agentic commerce requires trust that is machine-readable, cryptographically enforced, and portable across platforms. That's not a theoretical case for blockchain. It's the operational requirement that Mastercard, Stripe, and Ethereum builders are all converging on from different directions.
Crypto-native systems are already building toward this. ERC-8183, developed by Virtuals Protocol and the Ethereum Foundation’s dAI team, defines agentic commerce as a job with an escrowed budget, defined roles, an evaluator, and a compact state machine. Google’s Universal Commerce Protocol is described as an open standard designed for the future of commerce, enabling direct purchases across AI surfaces like AI Mode in Search and the Gemini app, while Google’s FAQ says AP2 serves as the specialized payment layer for secure, agent-led transactions within the broader UCP lifecycle. The point isn’t that every personal-AI product will use a commerce protocol. It’s that mainstream and crypto-native stacks are converging on the same needs: scoped authority, bounded execution, proof of completion, and portable trust. That’s exactly what personal superintelligence requires once it starts to act.
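The escrow pattern described above can be sketched as a compact state machine. This is a toy, off-chain Python model of the idea (budget locked at creation, released only on evaluator attestation, reclaimable by the buyer after expiry), not the ERC-8183 interface itself; all class, method, and field names are illustrative.

```python
from enum import Enum, auto

class JobState(Enum):
    FUNDED = auto()      # budget locked in escrow
    RELEASED = auto()    # evaluator attested; agent paid
    RECLAIMED = auto()   # expired unattested; buyer refunded

class EscrowJob:
    """Toy model of an escrowed agent job with defined roles
    (buyer, agent, evaluator) and a compact state machine."""

    def __init__(self, buyer, agent, evaluator, budget, expiry):
        self.buyer, self.agent, self.evaluator = buyer, agent, evaluator
        self.budget, self.expiry = budget, expiry
        self.state = JobState.FUNDED        # funds locked up front
        self.payouts = []                   # (recipient, amount) log

    def attest(self, caller, now):
        """Evaluator signs off: release the escrowed budget to the agent."""
        assert caller == self.evaluator, "only the evaluator can attest"
        assert self.state is JobState.FUNDED and now < self.expiry
        self.state = JobState.RELEASED
        self.payouts.append((self.agent, self.budget))

    def reclaim(self, caller, now):
        """After expiry with no attestation, the buyer recovers the funds."""
        assert caller == self.buyer, "only the buyer can reclaim"
        assert self.state is JobState.FUNDED and now >= self.expiry
        self.state = JobState.RECLAIMED
        self.payouts.append((self.buyer, self.budget))

job = EscrowJob("alice", "shopbot", "oracle", budget=100, expiry=1000)
job.attest("oracle", now=500)   # evaluator confirms the work was done
print(job.state, job.payouts)
```

The properties the mainstream and crypto-native stacks are converging on fall out of the structure: authority is scoped (only the evaluator can release, only the buyer can reclaim), execution is bounded (the expiry), and every transition leaves an auditable record.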
What founders should build toward
For founders, the implication is pretty straightforward. The next durable products probably won’t win if they only sit on top of the model layer. They’ll win because they own a high-value context surface, compound memory in a way that improves product performance over time, and get structurally closer to user intent before execution. The key questions become: (1) what unique context do we capture, (2) what memory compounds with repeated usage, (3) what intent becomes clearer over time, and (4) what actions can be taken once that understanding is strong enough?
The best personal-AI products will probably look less like generalized assistants and more like systems that sit inside valuable workflows, continuously absorb context, remember what matters, and earn the right to take increasingly useful actions. The companies that matter most in this next phase may be the ones that can turn model, context, and memory into a real understanding of the user and then make that understanding useful in the world.
The content of these blog posts is strictly for informational and educational purposes and is not intended as investment advice, or as a recommendation or solicitation to buy or sell any asset. Nothing herein should be considered legal or tax advice. You should consult your own professional advisor before making any financial decision. Decasonic makes no warranties regarding the accuracy, completeness, or reliability of the content in these blog posts. The opinions expressed are those of the authors and do not necessarily reflect the views of Decasonic. Decasonic disclaims liability for any errors or omissions in these blog posts and for any actions taken based on the information provided.
