The Spectrum of Trust in an AI-Native Internet
Generative AI redefines trust; blockchain records reality
– Paul Hsu, Founder and CEO, and Justin Patel, Venture Investor at Decasonic
AI is redefining truth. Truth is no longer black and white. Generative models now produce text, images, audio, and video that often feel indistinguishable from what we once believed to be “real.” At the same time, blockchains have matured into credible infrastructure for recording provenance, coordinating rights, validating identities, and settling value between counterparties. Together, these technologies are forcing a reset in how we think about trust on the internet, how licensing and ownership will evolve, and where, as investors, durable value will be created.
We all feel this shift in technology. It’s the split-second hesitation before believing whether a viral video is actually real. It’s the “CEO” of a company calling a finance director to request a wire transfer in a voice indistinguishable from the real one. We are moving from the “seeing is believing” era to one where “verifying is surviving.” This is becoming the default state of the consumer internet. When your eyes and ears can be deceived by a single sentence prompt, trust ceases to be a feeling and becomes a necessary utility, one that we must ingrain into the web itself.
For years, the dominant narrative has framed this world as “deepfake versus real,” as if trust could be reduced to a single switch. In practice, that framing breaks down almost immediately. As investors and operators, we’ve seen situations where content looked real, sounded real, and even behaved real, yet still failed under scrutiny. The problem is understanding where something comes from, how it has been transformed, and whether it can be relied on in high-stakes decisions. None of these questions are binary, and in a world saturated with AI-generated content, the old binary collapses. What we need instead is a spectrum of trust that can become shared infrastructure for platforms, creators, regulators, AI agents, and investors.
The numbers confirm that the old binary has already been broken. Deepfake fraud incidents in North America surged over 1,700% between 2022 and 2023. By 2027, losses from generative AI-driven fraud are projected to hit $40 billion a year. Meanwhile, a flood of synthetic content is hitting every platform, with projections of millions of deepfake files shared annually by 2025. This exponential rise in synthetic volume doesn't just create noise; it destroys the signal for businesses, insurers, and markets, creating a massive opening for builders looking to restore it.
A more productive way to anchor this spectrum is along a primary axis of origin: AI-composed to reality-captured. This distinction matters because most failures of trust we see today don’t come from obvious fabrications, but from ambiguity around where something came from and how far it has drifted from its source. On one end, AI-composed content is created when models assemble words, pixels, or sounds from learned patterns rather than from a specific real-world event. On the other end, reality-captured content originates from cameras, microphones, sensors, system logs, physical supply chains, or human eyewitness accounts at a particular point in time and space. Most content in the future will sit somewhere between these poles. Even what we currently call “raw” will increasingly be adjusted, enhanced, summarized, or translated by models. The spectrum of trust starts with recognizing this gradient rather than pretending that everything must neatly fit into “deepfake” or “real.”
Layers on Top of the Spectrum

The AI-composed to reality-captured axis describes origin, not morality. This distinction is often missed. We frequently see debates collapse into whether AI-generated content is “good” or “bad,” when the more important question is whether it faithfully represents reality, respects rights, and can be verified. Two pieces of AI-composed content can sit at opposite ends of the trust spectrum: one might be a malicious deepfake designed to impersonate a public figure, while another is a clearly labeled fictional scene in an entertainment product. Similarly, two reality-captured clips can differ meaningfully in trustworthiness depending on whether their metadata and provenance are intact, whether they have been selectively edited to mislead, and who is presenting them. To make the spectrum useful, we need additional layers.
The first layer is authenticity versus counterfeiting. This dimension asks whether a piece of content represents its underlying reality faithfully or whether it impersonates and fabricates it.
The second layer concerns licensing: whether the creator and subjects have rights and consent, or whether the content is effectively unlicensed and non-compliant. Licensing itself will evolve from static legal PDFs into machine-readable permissions, programmable on-chain and enforceable both by markets and by AI agents.
The third layer is verification, which addresses whether we can prove origin and edit history with reliable evidence or whether we are forced to rely on heuristics and trust in centralized intermediaries.
The fourth layer is derivation, distinguishing between works that are broadly inspired by diffuse influences and those that are explicitly remixed from identifiable originals. Social norms around “dupes,” remixes, and “inspired by” will keep shifting, especially in culture and consumer brands; the spectrum gives us tools to encode and price that nuance rather than treating all derivation as theft or all duplication as harmless.
Once these layers are applied, the spectrum becomes much richer. At the far AI-composed end, we find counterfeit fabrications and unlicensed clones: the classic “deepfake” style media that is AI-composed, counterfeit in intent, often unlicensed in its source material, and unverified in provenance. Moving inward, we encounter synthetic simulations that are grounded in real data but do not correspond to specific events, or reconstructions that recreate past moments from transcripts and partial records. In the middle, we see AI-assisted hybrids: summaries, translations, stylistic rewrites, accessibility transformations, and restorative enhancements where reality-captured inputs are transformed by models but retain their underlying truth. At the far reality-captured end, we find provenance-verified, rights-cleared media and materials whose origin in the physical world can be proven and whose edits or transformations are documented.
In this framing, “deepfake” is no longer the spectrum; it is a particular corner of it. It describes content that is AI-composed, counterfeit in its claims, often unlicensed in its use of likenesses or brands, and unverified in origin. “Real” also stops being synonymous with “I saw it on video.” It becomes a stricter concept: reality-captured, verified, authentic, and appropriately licensed, often tied back to human relationships grounded in the physical world. Those human relationships (who you have worked with, who you have seen deliver in reality, who you would trust with capital or your reputation) will matter even more as the internet becomes more synthetic.
Twenty Points Along the Spectrum

To make this framework operational, we break the spectrum into twenty reference points. These are not rigid categories, but practical markers we use to evaluate risk, opportunity, and monetization. They provide a shared language for builders, regulators, and investors navigating an AI-native internet.
Counterfeit Fabrication: Entirely AI-invented people, scenes, or claims with no real-world referent.
Identity Deepfake: AI-generated likeness or voice of a real person placed into a fabricated context.
Scenario Deepfake: Real people or brands, but the depicted actions or events never occurred.
Unlicensed Persona or Brand Clone: AI-generated use of a known celebrity, character, or mark without rights, implying endorsement.
Narrative Counterfeit Report: AI-written “article” or “transcript” framed as factual with no grounding in actual events or records.
Labeled Fictional Synthetic: Fully AI-generated scenes or narratives clearly presented as fiction or role-play.
Inspired Style Remix: AI-composed content “in the style of” given references without copying specific scenes; socially closer to a “dupe.”
Data-Driven Simulation: Synthetic outputs driven by real datasets or models (markets, traffic, weather) but not tied to a specific event.
Record-Based Reconstruction: AI-composed reenactments drawn from transcripts, logs, or partial records, labeled as reconstructions.
Conceptual Illustration: AI-generated diagrams, visualizations, or explainers for real concepts or events, not presented as raw footage.
AI-Drafted, Human-Verified Narrative: AI writes a first draft from notes or interviews; humans verify facts and make final edits.
AI-Summarized Capture: AI summaries of reality-captured documents, transcripts, or recordings where meaning is preserved.
AI-Translated or Accessibility Remix: Reality-captured content transformed across language, modality, or reading level while preserving truth.
AI Restorative Enhancement: Denoising, upscaling, colorizing, or minor inpainting applied to reality-captured media without changing events.
Licensed AI Adaptation: Authorized AI-driven transformations of real works into new styles or formats, with clear on-chain licensing.
Lightly Edited Capture: Cropping, color correction, or minor textual edits applied to raw media with no change to substance.
Edited but Documented Capture: Reality-captured media rearranged or shortened, with an edit log available for audit.
Licensed Authentic Capture: Reality-captured content with clear rights and consents recorded, and only non-deceptive edits.
Provenance-Verified Capture: Content signed at the device or system level, anchored onchain, with chain-of-custody metadata.
Raw Ground-Truth Capture: Direct, unedited outputs from trusted devices or systems, with intact metadata tying them to physical reality.
For investors and operators, the value of this spectrum is that it turns a noisy conversation about “deepfakes” into a concrete underwriting tool. Different segments will monetize in different ways, as the sketch following this list makes concrete:
Points 1-5 (counterfeit and deceptive synthetic content) represent risk and cost: fraud losses, reputational damage, regulatory exposure. Companies that help shrink or reprice this zone monetize as security, fraud, and insurance infrastructure.
Points 6-15 (synthetic, simulated, and AI-assisted hybrids) represent productivity and creativity upside. Here, the winning businesses will be tools and platforms that safely harness AI while staying on the right side of licensing and provenance.
Points 16-20 (verified, licensed, reality-captured content) represent monetizable trust capital. Data cooperatives, high-integrity sensor networks, and provenance-rich brands can command premium pricing because they sit at the “gold” end of the spectrum.
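To make the banding concrete, the sketch below (our own Python illustration, not any formal standard) encodes the twenty reference points and maps each to its monetization band. The point names come straight from the list above; the band labels are hypothetical.

```python
# Illustrative only: the twenty reference points from the list above,
# mapped to the three monetization bands described in this section.

SPECTRUM_POINTS = {
    1: "Counterfeit Fabrication",
    2: "Identity Deepfake",
    3: "Scenario Deepfake",
    4: "Unlicensed Persona or Brand Clone",
    5: "Narrative Counterfeit Report",
    6: "Labeled Fictional Synthetic",
    7: "Inspired Style Remix",
    8: "Data-Driven Simulation",
    9: "Record-Based Reconstruction",
    10: "Conceptual Illustration",
    11: "AI-Drafted, Human-Verified Narrative",
    12: "AI-Summarized Capture",
    13: "AI-Translated or Accessibility Remix",
    14: "AI Restorative Enhancement",
    15: "Licensed AI Adaptation",
    16: "Lightly Edited Capture",
    17: "Edited but Documented Capture",
    18: "Licensed Authentic Capture",
    19: "Provenance-Verified Capture",
    20: "Raw Ground-Truth Capture",
}

def monetization_band(point: int) -> str:
    """Map a spectrum point (1-20) to its monetization band."""
    if not 1 <= point <= 20:
        raise ValueError("spectrum points run from 1 to 20")
    if point <= 5:
        return "risk-and-cost"        # security, fraud, insurance infrastructure
    if point <= 15:
        return "productivity-upside"  # AI tooling with licensing and provenance intact
    return "trust-capital"            # verified, licensed, reality-captured content

assert monetization_band(19) == "trust-capital"  # Provenance-Verified Capture
```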
When we evaluate companies in this space, we ask:
Which band of the spectrum are they compressing, enhancing, or monetizing?
Who pays for that shift, and how durable is that willingness to pay?
That turns a philosophical discussion about “truth” into a concrete thesis about revenue, margins, and market structure.
Where Blockchain Fits
Against this backdrop, Web3 becomes structurally important. Blockchains are not truth oracles; they cannot independently confirm that an event happened in the physical world. What they do provide is a shared, append-only ledger that multiple parties can write to and verify against without trusting one another. This is exactly what is needed to harden the verification, licensing, identity, and transaction layers of the spectrum.
On verification, blockchains can anchor provenance for both digital content and real-world materials. Capture devices, industrial sensors, and even supply-chain scanners can sign outputs at the point of creation and anchor hashes on-chain. Editors, platforms, and AI systems that later transform the content or the materials can add their own signatures and references, producing an evolving chain of attestations. Over time, this creates an open provenance graph linking media, datasets, and physical goods back to their origins and documenting each transformation: who touched it, what was done, and under what license. For investors, this is a substrate for entirely new trust-driven markets in both digital and physical goods.
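To make the chain-of-attestations idea concrete, here is a minimal Python sketch. The `Attestation` fields, the `attest` helper, and the actor names are our own assumptions, and Ed25519 keys from the `cryptography` package stand in for device and editor keys; a production system would anchor each attestation hash on-chain rather than hold it in memory.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

@dataclass
class Attestation:
    content_hash: str        # SHA-256 of the media bytes after this step
    parent_hash: str | None  # hash of the previous attestation; None at capture
    actor: str               # who performed the step: device, editor, or model
    action: str              # e.g. "capture", "crop", "translate"
    license_ref: str         # pointer to the governing license
    signature: str           # hex signature over the fields above

def attest(content: bytes, parent: Attestation | None, actor: str,
           action: str, license_ref: str, key: Ed25519PrivateKey) -> Attestation:
    content_hash = hashlib.sha256(content).hexdigest()
    parent_hash = (hashlib.sha256(json.dumps(asdict(parent), sort_keys=True)
                                  .encode()).hexdigest() if parent else None)
    payload = json.dumps({"content_hash": content_hash, "parent_hash": parent_hash,
                          "actor": actor, "action": action,
                          "license_ref": license_ref}, sort_keys=True).encode()
    return Attestation(content_hash, parent_hash, actor, action, license_ref,
                       key.sign(payload).hex())

# Capture, then a documented edit; each step could also anchor its hash on-chain.
device_key, editor_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
raw = attest(b"<sensor bytes>", None, "camera:serial-001", "capture",
             "license:example", device_key)
edited = attest(b"<cropped bytes>", raw, "editor:studio", "crop",
                "license:example", editor_key)
```

Because each record signs a hash of its parent, anyone holding the final attestation can walk the chain back to the original capture and check every signature along the way.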
On licensing, blockchains are the substrate for the next generation of rights. Ownership and usage rights become on-chain primitives that can be queried directly by AI agents and applications. Licenses can evolve from static legal text into composable, machine-readable contracts: specifying whether a work or dataset can be used to train models, whether derivatives are allowed, how revenue should be split between upstream and downstream contributors, and what happens when rights are revoked. As norms around “dupes,” remixes, and “inspired by” content evolve, these licensing structures can flex: distinguishing between acceptable homage and economically meaningful copying in a way that both markets and courts can understand.
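As an illustration of how a license becomes queryable by software, consider the minimal sketch below. The field names (`training_allowed`, `derivatives_allowed`, `revenue_split`) are hypothetical placeholders for whatever schema an on-chain registry would standardize, not an existing standard.

```python
from dataclasses import dataclass

@dataclass
class MachineLicense:
    work_id: str                     # identifier for the work or dataset
    training_allowed: bool           # may the work be used to train models?
    derivatives_allowed: bool        # may downstream remixes be produced?
    revenue_split: dict[str, float]  # contributor address -> share of revenue
    revoked: bool = False            # rights holders may revoke going forward

def may_train_on(lic: MachineLicense) -> bool:
    """What an AI agent would check before ingesting a work into training."""
    return lic.training_allowed and not lic.revoked

lic = MachineLicense(
    work_id="work:example",  # placeholder identifier, illustrative only
    training_allowed=True,
    derivatives_allowed=False,
    revenue_split={"creator.eth": 0.8, "platform.eth": 0.2},
)
assert may_train_on(lic)
```

The point of the exercise: an agent deciding whether a dataset is safe to train on reduces a legal question to a single function call, which is precisely what makes the license enforceable by machines.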
Identity and reputation form the third pillar. Blockchain-validated identities can represent humans, organizations, and AI agents. Humans can anchor their professional histories, contributions, and attestations on-chain. AI agents can hold keys, sign their outputs, and transact autonomously. Over time, reputational context accrues to these identities: which licenses they respect, which transactions they honor, which claims they sign that later prove accurate or fraudulent. This becomes a behavioral layer on top of static provenance, giving platforms and counterparties a reason to trust some agents and discount others.
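A deliberately naive sketch of that behavioral layer: each identity accrues a score as its signed claims are later resolved as accurate or fraudulent. The scoring rule here (honored claims over total claims) is our own simplification; real systems would weight stake, recency, and dispute severity.

```python
from collections import defaultdict

class ReputationLedger:
    """Tracks how an identity's signed claims resolve over time."""

    def __init__(self) -> None:
        self.record = defaultdict(lambda: {"honored": 0, "disputed": 0})

    def resolve_claim(self, identity: str, accurate: bool) -> None:
        key = "honored" if accurate else "disputed"
        self.record[identity][key] += 1

    def score(self, identity: str) -> float:
        r = self.record[identity]
        total = r["honored"] + r["disputed"]
        return r["honored"] / total if total else 0.5  # unknowns start neutral

ledger = ReputationLedger()
ledger.resolve_claim("agent:0xabc", accurate=True)
ledger.resolve_claim("agent:0xabc", accurate=True)
ledger.resolve_claim("agent:0xdef", accurate=False)
# Platforms can now discount agent:0xdef relative to agent:0xabc.
```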
Finally, blockchains validate trust for transactions themselves. Monetary and contractual flows between humans, agents, and hybrid arrangements can be recorded, enforced, and settled on-chain. A human may delegate a budget and a set of constraints to an AI agent; that agent may negotiate with other agents, trigger payments, and update on-chain state as it executes. Each step leaves a cryptographic trail. In this environment, content on the spectrum of trust is not just information; it is tied directly into economic commitments. Who you choose to transact with, and on what terms, becomes a function of where they and their outputs sit on the spectrum and how they have behaved historically.
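In code, a delegation is simply a budget plus constraints that every agent-initiated payment is checked against. The sketch below keeps the checks in application logic for readability; the `Delegation` fields and `execute_payment` helper are hypothetical, and on-chain the same rules would live in a smart contract so that each step leaves the cryptographic trail described above.

```python
from dataclasses import dataclass, field

@dataclass
class Delegation:
    principal: str                    # human wallet granting authority
    agent: str                        # agent identity receiving it
    budget: float                     # maximum total spend
    allowed_counterparties: set[str]  # who the agent may transact with
    spent: float = field(default=0.0)

def execute_payment(d: Delegation, counterparty: str, amount: float) -> None:
    """Each successful call would also append a signed, on-chain record."""
    if counterparty not in d.allowed_counterparties:
        raise PermissionError("counterparty outside delegated constraints")
    if d.spent + amount > d.budget:
        raise PermissionError("payment would exceed delegated budget")
    d.spent += amount  # settle on-chain, leaving a cryptographic trail

grant = Delegation("human.eth", "agent:0xabc", budget=100.0,
                   allowed_counterparties={"vendor.eth"})
execute_payment(grant, "vendor.eth", 40.0)   # succeeds, 60.0 remains
# execute_payment(grant, "other.eth", 10.0)  # would raise PermissionError
```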
Through all of this, human relationships grounded in reality remain the anchor. No matter how sophisticated AI agents or blockchain infrastructure become, capital allocators and founders still form trust primarily through repeated interactions, execution in the real world, and shared experience. Those human relationships generate the high-value, reality-captured data and reputations that the rest of the system composes from. Web3 and AI merely make more of that trust machine-readable, composable, and economically aligned.
Real-World Application & Portfolio
Where Durable Adoption Can Emerge
Not every AI plus blockchain idea will matter. Durable adoption will emerge where AI is creating real pain or risk, where blockchain-based infrastructure is uniquely suited to mitigate that risk, and where economic incentives exist for participation. Viewed through the spectrum of trust, several zones stand out.
First, content provenance rails for AI-native media will become necessary infrastructure. As AI-composed content saturates feeds, collaboration tools, and marketplaces, platforms will need reliable ways to label content along the spectrum, from fully synthetic narratives at points 1-5 to provenance-verified captures at points 18-20. Enterprises will need these rails for compliance and risk management. Investors should expect protocol and middleware businesses that serve capture devices, content tools, and AI models to emerge here, with network effects and standard-like dynamics.
Second, machine-readable licensing for AI training and generation will move from theory to necessity. Creators will increasingly insist on explicit choices between “no training,” “train but no derivatives,” “train and share revenue,” and other modes. AI developers and enterprises will look for clean, compliant datasets whose licensing status is unambiguous. Systems that encode licenses on-chain, track derivations across the spectrum, and route value accordingly will become the default for high-stakes, high-value models.
Third, identity and wallets for AI agents expose a new layer of infrastructure. As agents operate across the stack (writing code, managing content pipelines, transacting on behalf of users), their identities, reputations, and economic incentives will matter. Blockchain-validated identities and transaction histories will allow markets to discriminate between trustworthy and untrustworthy agents. This is directly tied to the spectrum: if an agent consistently signs outputs that are later validated at points 16-20, its content and transactions will command a premium over those from agents whose outputs are frequently disputed at the counterfeit end.
Fourth, data collectives and cooperatives will crystallize around reality-captured, provenance-rich datasets. Contributors of high-integrity data (sensor networks, specialized professionals, communities with unique access) will use Web3 structures to pool data, govern access, and share revenue from models that depend on that data. The better the provenance and licensing (the closer to points 18-20), the more bargaining power these collectives will have.
Portfolio in Practice: Building the Spectrum of Trust
We are seeing these adoption zones materialize across our portfolio, demonstrating how the Spectrum of Trust moves from theory to tangible value creation. In each case, what stood out early was how clearly these teams understood where trust breaks down and where it can be rebuilt as a product, a network, or a market.
1. Authenticity vs. Counterfeiting
This layer ensures content faithfully represents its underlying reality, securing inputs and actions against fabrication or impersonation.
Prisma X is tackling the "Physical AI" gap. By using token incentives to crowdsource high-quality, reality-captured data for robot teleoperation, they are building a verified "ground truth" for robotics that purely synthetic data cannot match. This drives content toward Provenance-Verified Capture (Point 19).
Paragon ensures that enterprise AI agents are grounded in actual business data rather than hallucinating. Their infrastructure connects AI to the messy reality of third-party SaaS integrations, ensuring the “context” an agent acts on is Licensed Authentic Capture (Point 18).
TestMachine is the immune system for smart contracts. They use AI to aggressively attack and audit code, finding vulnerabilities that could otherwise allow a malicious agent to create a Counterfeit Fabrication (Point 1). It is AI securing the rails that other AI agents will use to transact.
2. Licensing
This layer moves rights and consent from static documents into machine-readable, on-chain permissions that are enforceable by AI agents.
Scenario allows game developers to train custom AI models on their own artistic style. This turns Licensed AI Adaptation (Point 15) into a product, where creators can generate infinite assets that remain true to their specific, IP-protected aesthetic.
Kiki World flips the model from “consume” to “co-create.” By using on-chain voting to let the community decide on physical beauty products before they are made, they anchor the manufacturing supply chain in verified human preference, establishing a foundational input close to Raw Ground-Truth Capture (Point 20) for product design based on explicit consent.
Forum3 is helping major brands bridge this gap, using AI to transform workforce productivity and customer loyalty. They provide the integration layer for secure, AI-Drafted, Human-Verified Narrative (Point 11) in enterprise workflows, respecting customer and brand IP rights.
3. Verification
This layer addresses how we prove origin and edit history with reliable, cryptographic evidence rather than relying on centralized trust.
Hyperspace is building a peer-to-peer network for distributed AI inference. By verifying the computation across a decentralized grid, they ensure that the thinking behind an AI agent is transparent and accountable, supporting integrity around AI-Summarized Capture (Point 12) and above.
Opinion Labs focuses on the social consensus layer. They use prediction markets and decentralized opinion protocols to determine what a community actually believes is true, acting as a "verification oracle" for sentiment that pushes consensus toward Provenance-Verified Capture (Point 19).
Prisma X (also key to Authenticity) uses tokenization to mandate a verifiable origin for its data, ensuring the on-chain record serves as the immutable evidence of its provenance.
4. Derivation
This layer tracks and prices the nuance between works broadly inspired by influences and those explicitly remixed from identifiable originals.
Scenario (also key to Licensing) inherently manages derivation: assets created are derived from the original IP but are clearly tracked and owned by the creator, enabling monetization of derivative works.
Anomaly is pushing the boundaries of Labeled Fictional Synthetic (Point 6) and beyond. By building Layer 3 infrastructure for AI-driven gaming, they create complex, rapid derivation and remixing of game assets and narratives, which requires on-chain primitives to track and value that creative flow.
Kiki World (also key to Licensing) tracks derivation from Raw Ground-Truth Capture (Point 20) inputs. The final physical product design is derived from the community’s initial on-chain verified input, proving the source of the inspiration.
The Spectrum of Trust as the Guiding Force
The next internet will be AI-native. The question is whether it will also be trust-native. That outcome will depend less on the models themselves and more on the choices made by builders, platforms, and capital allocators designing the rails beneath them. The spectrum of trust (AI-composed to reality-captured, layered with authenticity, licensing, verification, and derivation) should act as a guiding force for how we design systems, set policy, and allocate capital in this environment.
For builders, the spectrum clarifies where to anchor products: provenance protocols for media and materials, evolving licensing systems that can be understood and enforced by AI agents, blockchain-validated identities for humans and agents, transaction rails that bind content and commitments, and data collectives that monetize reality-captured signals. For platforms and regulators, it offers a precise language to describe obligations and enforcement across the full range of content we will see, including the gray areas of “dupes,” remixes, and “inspired by” culture. For investors, it highlights where durable moats can form as AI melts the boundary between synthetic and real, and as blockchain hardens the rails for provenance, identity, and value distribution.
We are not going back to a pre-AI world where visual evidence is self-authenticating and text can be assumed to have a human author. The systems that will matter most from here are those that make visible where on the spectrum a piece of content or a transaction sits, who stands behind it (human or agent), and how value should flow as it is inspired, remixed, and recomposed across the network. That is where Web3 and AI move beyond buzzwords and compound together into the infrastructure of digital trust and long-term value creation.
The content of these blog posts is strictly for informational and educational purposes and is not intended as investment advice, or as a recommendation or solicitation to buy or sell any asset. Nothing herein should be considered legal or tax advice. You should consult your own professional advisor before making any financial decision. Decasonic makes no warranties regarding the accuracy, completeness, or reliability of the content in these blog posts. The opinions expressed are those of the authors and do not necessarily reflect the views of Decasonic. Decasonic disclaims liability for any errors or omissions in these blog posts and for any actions taken based on the information provided.
