
The Web3 x Physical AI Market Map

  • Writer: Decasonic
  • Sep 19

Updated: Sep 25

The Early Innings of a $100B Market – Abdul Al Ali, Venture Investor at Decasonic


We also published a live interactive version of our market map here. If you are building at this intersection, reach out to us at Decasonic: link


Introduction


We are excited to introduce the Web3 x Physical AI Market Map. Decasonic is proud to be the first crypto-native venture fund to publish a market map of this intersection. We believe the integration of blockchain with Web3 tokenization and coordination will enhance the trajectory of AI adoption in the physical world. This article follows a significant recent announcement from Google: its Cloud A2A protocol now lets AI agents discover and communicate with one another, and the recent addition of x402 integration enables agent-to-agent commerce. We believe this announcement not only expands the market opportunity at the intersection of Web3 x AI, but also signals the scale of machine-to-machine payments that the intersection of Web3 and physical AI will enable. New market leaders are emerging at this intersection, and we believe that, over time, those building here will accrue and sustain long-term value. 
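As a hedged illustration of the agent-to-agent commerce pattern x402 enables, the sketch below simulates the HTTP 402 (Payment Required) handshake in plain Python: a resource server quotes a price, and a buying agent attaches a payment payload and retries. The server logic, field names, addresses, and amounts are hypothetical, not the actual x402 wire format.

```python
# Illustrative simulation of an x402-style agent-to-agent payment handshake.
# The pattern reuses HTTP status 402 (Payment Required): the server replies
# with payment requirements, the paying agent attaches a payment payload,
# and retries. All names, fields, and amounts here are hypothetical.

def server_handle(request: dict) -> dict:
    """A resource server that charges 0.01 USDC per call."""
    payment = request.get("x_payment")
    if payment is None:
        return {
            "status": 402,
            "accepts": [{"asset": "USDC", "amount": "0.01", "pay_to": "0xSERVER"}],
        }
    # In a real deployment the payment would be verified and settled on-chain;
    # here we only check the fields match what was quoted.
    if payment["asset"] == "USDC" and payment["amount"] == "0.01":
        return {"status": 200, "body": "sensor-data"}
    return {"status": 402, "error": "invalid payment"}

def agent_fetch(url: str) -> dict:
    """A buying agent: request, read the 402 quote, pay, retry."""
    first = server_handle({"url": url})
    if first["status"] != 402:
        return first
    quote = first["accepts"][0]
    paid_request = {
        "url": url,
        "x_payment": {"asset": quote["asset"], "amount": quote["amount"],
                      "payer": "0xAGENT", "signature": "<signed-payload>"},
    }
    return server_handle(paid_request)

result = agent_fetch("/telemetry")
print(result)  # {'status': 200, 'body': 'sensor-data'}
```

The same quote-pay-retry loop generalizes to machine-to-machine commerce: a robot requesting map data, compute, or a charging slot can complete the purchase without human intervention.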


Our latest deep dive into the Web3 x AI supercycles focuses on Web3 x Physical AI, a landscape that includes AI robotics, wearables, devices, and other vehicles as interfaces. It represents the market's expansion from software and agentic digital systems to agentic physical ones, among the other opportunities actively emerging at the intersection. 


Market Overview


In 2024, robotics startups collectively raised ~$7B, a ~19% increase over 2023. In the first half of 2025 alone, over $6B was invested in robotics startups, with Q2 funding up nearly ~263% YoY from 2024, underscoring the growing appetite for robotics. The surge in funding stems from the combination of increasingly rapid advances in AI models, including foundation models dedicated to robotics, and falling hardware costs. It also underscores America's leadership in frontier AI, where US founders are shaping embodied intelligence through world-class hardware innovation, software integration, and policy environments that enable responsible scaling.


Among the recent announcements gaining both excitement and momentum is Meta's Ray-Ban line, which continues to push the boundaries of AI wearables. On September 17, 2025, Meta introduced a display edition of its AI glasses with a built-in heads-up display (HUD) and a wristband for gesture control. These glasses allow users to view notifications and AI-generated summaries, and to interact with AI agents directly from their wearable devices. 


Another device we use internally at Decasonic is Limitless, a pendant with on-device AI that transcribes, summarizes, and organizes meetings into chats, surfacing actionable insights and next steps for users. We are also increasingly seeing physical AI embedded in everyday devices, including phones. 


Google’s Pixel 10 lineup exemplifies an “AI-first” device with real-time screening and Gemini AI integration for contextual assistance. The transition and emergence of physical AI in Web3 is happening today, with the introduction of AI-powered phones, AI-wearables, and recently the Solana Seeker phone, which embeds AI-native applications on Solana. 


At Decasonic, we have been actively investing at the intersection of Web3 and Physical AI, recently backing PrismaX, a company focused on scaling decentralized teleoperations for robotics. Over the past quarter, we have observed a surge in high-quality opportunities driven by ambitious founders aiming to grow this emerging market. This is matched by a growing wave of interest and discussion from the crypto community on X, with many seeking exposure to the next evolution of AI from agentic software to physical AI, and robotics recently emerging as a key area of interest. 


The ChatGPT Moment for Physical AI Applications and Interfaces


During his CES and GTC 2025 keynotes, Jensen Huang highlighted the major expansion of agentic AI, from AI systems that plan and execute digital tasks autonomously to physical AI, which brings “intelligence into the realm of robotics and embodied interaction with the real world.” Huang describes the evolution as moving through four phases: 


  1. Perception: Systems able to understand images, sound, and text. This is primarily in the form of recognition and sensory awareness for devices. 

  2. Generative AI: AI that can create content, including text, images, and code. 

  3. Agentic AI: AI that can reason, plan, and execute digital tasks autonomously. 

  4. Physical AI: Next frontier, where intelligence is embedded in machines (robots, vehicles, drones, and industrial devices). These machines are able to reason with and interact with the real world, by understanding spatial layout, environmental context, and physics in order to operate in unstructured, real-world settings. 


In his keynote, Huang predicted that general robotics is close to its “ChatGPT moment,” where autonomous robots become capable of interacting with unstructured, real-world environments, powered primarily by advances in foundation models designed for physical, robotics use cases. These robots will be capable of real-world navigation, manipulation, and coordination.


The Web3 x Physical AI
Source: Link

Innovations at the Interface Layer for Physical AI


There have been significant recent innovations in the interfaces across Physical AI. Notably, social media attention has centered on physical AI, a category capturing attention through the human-like movements of humanoids (Unitree, Figure, Optimus, Atlas). We consider robotics one of many market categories falling under Physical AI interfaces. Other categories include:


  1. Humanoids: Robots with human-like limbs and bodies. 

    1. Unitree’s R1, the full-size humanoid available for $5.9k has captured a significant amount of attention over the past few quarters. 

  2. Industrial/Cobots: Robotic arms with pick-and-place functionality, limited to specific functions rather than general-purpose like humanoids. AI makes these robots superior in performance to conventional internet-enabled robotic arms. 

    1. In medicine, these enable use-cases where doctors can tele-operate surgeries from completely different geographical locations. 

  3. Service Robots: Used in hospitality, healthcare, cleaning, and delivery. These robots can be wheeled, tracked, or stationary, with AI governing their responsiveness to the environment.

  4. Consumer Robots: Primarily fueled by the integration of AI software; this category includes personal assistants, vacuums, and entertainment robots. 

  5. Drones/UAVs: Robots capable of flight or ground navigation, often with agentic AI integrations. 

  6. Medical Robots: Robots often used in telemedicine, surgery, and rehabilitation.

  7. AI Wearables: Devices equipped with sensors and artificial intelligence that monitor, analyze, and assist users in real time, often used for health tracking, productivity, communication, and personalized experiences.


A key driver of growth in the physical AI sector, and of robotics hardware in particular, is the fall in hardware costs. A significant bottleneck in the advancement of robotics, however, remains software. Humanoid robots formerly cost buyers ~$150K. With the introduction of the Unitree R1, Tesla Optimus, and Figure, the availability and affordability of these devices has significantly increased, enabling greater scale of adoption and mainstream use-cases. Both Tesla Optimus and Figure 02 are targeting a range of $20-50K, with a long-term goal of sub-$20K prices for their respective devices. 


Crypto’s primary interaction with robotics today occurs at the software layer, where use-cases are beginning to emerge that address long-standing bottlenecks in robotics software development, from project financing to validation of co-work. This primarily includes enabling coordination, machine-to-machine transactions, and open networks for data and resource sharing, the latter being especially relevant for collaboration between labs and companies. 


Web3 x Robotics Tokens


CoinGecko recently introduced a “Robotics” category, with the current market cap of the identified tokens at $443M. To underscore how nascent this market is, the top three tokens by market cap collectively account for nearly ~55% of the category's total. Peaq has been building since 2017, while Auki and Geodnet were both founded in 2021. These liquid token opportunities for sector exposure do not include the valuation of the private venture opportunities we have been seeing emerge at Decasonic. 


Collectively, the valuation of this market across both liquid and private opportunities has crossed $1B+ as of this writing. Taken as a subset of the wider market cap of AI tokens identified by CoinGecko, the robotics sector represents ~3% of the $31.1B market cap of AI tokens on CG. We expect this to grow into a $100B market, a 3.2x increase over the current size of the Web3 AI market. 
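The ratios above can be sanity-checked directly from the figures quoted in this section:

```python
# Back-of-the-envelope check of the market-size figures quoted above
# (all values in $B, taken from the article's CoinGecko references).
robotics_liquid_mcap = 0.443   # $443M, CoinGecko "Robotics" category
web3_ai_mcap = 31.1            # $31.1B, CoinGecko AI-token market cap
robotics_total = 1.0           # $1B+ across liquid and private opportunities
target = 100.0                 # projected Web3 x Physical AI market

share = robotics_total / web3_ai_mcap    # robotics as a share of Web3 AI
multiple = target / web3_ai_mcap         # growth vs. today's Web3 AI market

print(f"{share:.1%}")      # 3.2%
print(f"{multiple:.1f}x")  # 3.2x
```

The ~3% share and the 3.2x multiple in the text are consistent with the same $31.1B denominator.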


Highlighting Projects at the Intersection of Web3 x Physical AI


PrismaX: PrismaX is a decentralized tele-operations platform. It bridges human oversight with robotic autonomy in real-world applications, and is developing a standardized tele-operations SDK designed for seamless interaction with robotics across diverse embodied AI devices. It enables remote control and data collection for complex tasks, with PrismaX offering a marketplace in order to build more efficient foundation models for robotics using training data collected from their tele-operators. 

  1. We also had Bayley, one of the founders of PrismaX, on our biweekly Web3 x AI space on September 18, 2025. You can view the recorded conversation here: link


NRN: Formerly known as AI Arena, the Neuron team is leveraging sim-to-real reinforcement learning pipelines to enable AI agents trained in virtual environments to transfer “skills” to embodied AI devices, including arms, humanoids, and drones. 


Neuron World: Neuron Innovations, which currently operates as Neuron World, is focused on developing a platform to power the machine-to-machine economy. It enables autonomous AI agents, sensors, IoT devices, robots, and vehicles to discover each other, connect, and transact with little to no human intervention. 


Auki Labs: Auki is building a decentralized machine perception network that enables robots to share spatial awareness and collaborate in real time. Auki, in a sense, ‘democratizes’ robotics by turning perception into a shared resource for network participants. It rewards participants for contributing data and computational resources through the $AUKI token, and aims to develop an ecosystem where robots can navigate and interact with the unstructured physical environment. 


Geodnet: Operates a GNSS network providing centimeter-level precision for robotics navigation and autonomy. Miners set up Geodnet's station devices to contribute real-time location data, which empowers robotics in sectors like agriculture, logistics, and beyond. In return, miners earn $GEOD tokens for their contributions. This model aims to outpace traditional GPS systems and build a global, community-driven infrastructure for precision data collection and sharing for robotics. One of their more impactful announcements to date has been the introduction of GEO-PULSE, which the Geodnet team claims is the most accurate car GPS navigation device. 


Peaq: L1 network for the machine economy. It enables the tokenization of robots, vehicles, and devices through its infrastructure. It assigns an on-chain identity to each robot, enabling secure, autonomous, machine-to-machine interactions (including transactions) on its network. 


FrodoBots: Robotics platform that turns sidewalk robots into playable assets. It allows users to remotely control sidewalk robots for urban exploration, and earn rewards through data crowdsourcing. The team behind FrodoBots is also building BitRobot, which is a network of subnets with the goal of fostering global collaboration in AI research by rewarding contributions to robotic policies and simulations. 


Codec Flow: AI automation platform that utilizes Vision-Language-Action (VLA) models to enable robots and devices to perceive, reason, and act autonomously. RoboMove, their upcoming platform built on Codec Flow's infrastructure, aims to enable users to control simulated or real-world robots through tweets. 


OpenMind: OpenMind is developing the “decentralized operating system for robots,” with its open-source OM1 and FABRIC protocol. Its goal is to enable coordination, identity verification, and machine-to-machine communication and interaction in the robotics ecosystem. 


Reborn: Reborn is a protocol enabling an open ecosystem for AGI robots, transforming human motion into digital data to train embodied AI and robotic devices through its RFMs (Robotic Foundation Models). It aims to address key bottlenecks in data scarcity, model generalization, and embodiment diversity by enabling community contributions through first-person videos, captured primarily by wearable motion devices. 


Mawari: AI-driven immersive compute network leveraging decentralized spatial computing and cloud rendering to deliver real-time, low-latency streaming of photorealistic AI agents (including industrial twins) and avatars in XR environments, enabling adoption of embodied AI in AR/VR. Recent collaborations include one with Nankai Electric Railway to overlay interactive AI avatars onto real-world landmarks via smartphones or AR glasses, creating personalized AI experiences. 


Web3 Ecosystems Driving Adoption


One ecosystem driving innovation at the intersection of blockchain, AI, and Physical AI is Sui. We previously published an article on the market map of the Sui AI ecosystem, which has rapidly evolved since. Since publishing that article, Mysten Labs, the company behind Sui, has been advancing and building in Physical AI, with the goal of enabling autonomous systems, collective machine learning, and on-chain interactions for physical, embodied AI devices. This aligns with their “everything of things” concept, which aims to integrate Sui into applications like robots, agents, gaming, and payments. 


Kostas Chalkias, one of the co-founders and core contributors to Mysten Labs, recently announced the successful development of “small chipsets that can communicate directly with Sui.” This enables machine-to-machine transactions that are settled directly on Sui. Further, with the utilization of Walrus Protocol, the Sui data layer, other robots and devices in the network can leverage the data stored on Walrus and share knowledge amongst the registered network devices. 


Web3 x Physical AI Market Map (2025)

Similar to our other market maps focused on Web3 x AI, we map the Web3 x Physical AI ecosystem across two axes:

  • X-Axis – AI standard layers: Compute → Data → Model → Interface → Application.

  • Y-Axis – Target audiences: Web3 Natives, Developers, Users, and Mainstream Audiences.


Note - as with our other market maps, this is a continued iteration of a rapidly evolving, dynamic landscape. The current state of the ecosystem is unlikely to remain the same over the coming quarters and years. The labels we have attached to projects are evolving, subjective, and reflect our own internal frameworks; projects may disagree with the labels or mapping they are identified by. To date, we have identified 82+ projects building in the rapidly evolving landscape of Physical AI. 


The Three Pillars of Web3 x Physical AI 

The Web3 x Physical AI

Infrastructure 


Model Layer

  1. Model Framework: SDKs, APIs, and toolkits for training or fine-tuning robotics-specific AI models. These models can be broken down into Foundation Models, Control Models, Simulation-Trained Models and Policy Networks. 

    1. A key project operating at this intersection is ElizaOS. The team behind Eliza is working on their ‘Eliza Wakes Up’ project, which aims to create lifelike companions that blend their agentic framework with a humanoid, personal companion. 


Data Layer

  1. Telemetry: Data infrastructure for capturing real-time robotic signals (movement, performance, diagnostics), crucial for monitoring, control, and predictive maintenance.

    1. An example of a project operating at this intersection is Dimo. Dimo enables users to connect an OBD2 adapter to their vehicle and collect, analyze, and monetize its data in real time. 

  2. Mapping: Systems enabling spatial awareness and mapping, allowing robots and other physical AI devices to navigate environments through SLAM (simultaneous localization and mapping) and geospatial intelligence. 

    1. Two projects operating at this intersection include FrodoBots and Natix Network. FrodoBots’ sidewalk robots, called Earth Rovers, are designed to crowdsource real-world data through gamification. Natix Network is primarily DePIN focused, but provides a platform for crowdsourcing geospatial data by using everyday devices, including smartphones, dashcams, and other devices. 

  3. Data Collection: Pipelines and infrastructure for harvesting robotics-related datasets, whether through sensors, user interactions, interface devices, or simulation environments, supporting training and optimization.

    1. Reborn facilitates the collection of large-scale, community-driven data collection of human motion and interaction data. These data-sets are then used to train their RFMs, or the Robotic Foundation Models. 

  4. Data Labelling: Platforms and tools for annotating robotics datasets (vision, LiDAR, positional data), ensuring data is structured and usable for machine learning model training.

    1. OpenGraph, the first place winner of the Sui Basecamp 2025 Pitch Day and the Typhoon Hackathon, is building a decentralized data labeling and annotation system for Physical AI and robotics. 

  5. Reinforcement Learning: Frameworks for applying RL techniques to robotic control, enabling agents to learn through trial and error in simulated or real environments. 

    1. NRN, formerly AI Arena, is utilizing Reinforcement Learning through sim-to-real transfer, enabling models to be trained in virtual environments to be adapted to real-world robots through high-fidelity pipelines, continual learning loops, and crowdsourced data collection. 

  6. Positioning: Technologies for precise localization and positioning of robots in dynamic environments, often leveraging GPS, RTK, or sensor-fusion methods.

    1. Geodnet provides reliable verification data essential for navigation, coordination, and real-world autonomy for robotics and physical AI devices. 

  7. Perception: Systems that allow robots to interpret their surroundings through vision, audio, tactile, or multimodal sensors, bridging the gap between raw sensor data and actionable intelligence.

    1. Nubila Network serves as the physical perception layer for AI, capturing real world, hyperlocal, and real-time environmental signals including weather data. 

  8. Training: Infrastructure for training robotics models at scale, enabling continuous improvement of robotic policies, perception systems, and task execution.

    1. Cherry AI recently launched a Robotics Data Parsing Tool in September 2025, enabling monetization of datasets collected from its AI products to create domain-specific datasets used to power neural networks for robots. 
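The sim-to-real reinforcement-learning pipeline described in the Data Layer items above (data collection, RL, training) can be sketched with a toy example: a policy, here reduced to a single controller gain, is tuned in a simulator whose physical parameters are randomized, then evaluated against a "real" device with a parameter never seen in training. The dynamics, the random-search trainer, and all constants are purely illustrative, not any project's actual pipeline.

```python
import random

# Toy sketch of sim-to-real transfer with domain randomization. A 1-D "robot"
# must reach a target position; the simulator randomizes the (unknown) mass so
# the learned controller gain transfers to a real mass never seen in training.

def rollout(gain: float, mass: float, steps: int = 50) -> float:
    """Run a proportional controller toward target=1.0; return final error."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = gain * (1.0 - pos)      # P-controller on position error
        vel += force / mass * 0.1       # dt = 0.1
        vel *= 0.8                      # velocity damping
        pos += vel * 0.1
    return abs(1.0 - pos)

def train_in_sim(trials: int = 200, seed: int = 0) -> float:
    """Random search over the gain, scored across randomized masses."""
    rng = random.Random(seed)
    best_gain, best_err = 1.0, float("inf")
    for _ in range(trials):
        gain = rng.uniform(0.1, 10.0)
        # Domain randomization: average final error over several sampled masses.
        err = sum(rollout(gain, rng.uniform(0.5, 2.0)) for _ in range(5)) / 5
        if err < best_err:
            best_gain, best_err = gain, err
    return best_gain

gain = train_in_sim()
real_error = rollout(gain, mass=1.3)    # "real" robot with an unseen mass
print(real_error < 0.1)                 # the sim-trained gain transfers
```

Real pipelines replace the scalar gain with a neural policy and the toy dynamics with high-fidelity physics, but the structure, randomize in sim, deploy on hardware, is the same; crypto-native projects add crowdsourced data collection and on-chain rewards around this loop.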


Compute Layer

  1. Aethir provides decentralized GPU compute for training AI models, running real-time inference, and enabling sim-to-real transfer in simulated environments. 


Interface Layer

  1. Simulation: Virtual environments for designing, training, and testing robotics agents, reducing real-world risk and enabling iterative experimentation at scale for robotics. 

    1. Qace Dynamics, an emerging project, aims to provide a plug-and-play AI layer for enabling rapid prototyping and testing simulations for robotics and physical AI devices. It allows users to simulate devices and test AI workflows instantly via a web app, using a laptop and webcam, without physical hardware.

  2. Identity: Interfaces establishing robotic identity to authenticate, track, and integrate robots into on-chain ecosystems. 

    1. Peaq assigns unique DIDs (decentralized identifier) to machines, devices, vehicles, and robots on their network. This enables interoperable communication and transactions amongst the connected entities. 

  3. Teleoperations: Interfaces allowing human operators to remotely control or supervise robots in real-time. 

    1. PrismaX provides a platform for decentralized tele-operations, with the vision of developing a unified SDK for tele-operations. In the future, PrismaX will also aim to operate Guilds, where users can collectively own physical AI devices, including humanoids and rent them for tasks to earn rewards. 

  4. OS: Robotics operating systems that serve as middleware to unify hardware, software, and AI models, enabling interoperability across robotic ecosystems.

    1. OpenMind is delivering OM1, its open-source, hardware-agnostic OS that empowers robots to perceive, reason, interact, and adapt in real-world environments. 

  5. Coordination: Multi-agent coordination layers that allow robotic swarms or fleets to operate collectively, optimizing efficiency and collaborative behavior.

    1. Fetch AI, now part of the ASI alliance, enables coordination for physical AI, robotics devices through autonomous AI agents and their Cortex model, a neural-inspired robotics framework. The agents on Cortex support multi-agent learning, allowing robots to collaborate in real-time by sharing data, adapting actions via RL, and executing multi-step tasks in dynamic, unstructured environments. 

  6. Hardware: Physical interfaces that connect AI models to robotic bodies, enabling embodied intelligence. 

    1. PlaiPin, a project developed by the Suibotics team, is creating physical embodied AI companions called PlaiPins. These companions serve as personalized AI entities that users can raise and customize. 

  7. Orchestration: Tools and protocols for orchestrating robotic workloads across distributed systems, simulations, or real-world deployments.

    1. DeltaEngine enables orchestration for robotics through their no-code builder that enables coordination of physical devices through a drag-and-drop workflow. Users can assign tasks and define rewards with the goal of enabling autonomous execution. 

  8. Automation: Interfaces designed to automate workflows between AI agents and robotic systems, allowing robots to execute tasks autonomously with minimal human input.

    1. Neuron enables automation for physical AI, robotics, and IoT devices through its no-code AI Node Builder platform, enabling a machine-to-machine economy through an automation platform.   
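The machine-identity pattern described under Identity above can be sketched as follows: a device's decentralized identifier (DID) is derived deterministically and resolved to a document that counterparties check before transacting. The `did:machine` method name, the document fields, and the registry structure are hypothetical illustrations, not Peaq's actual on-chain schema.

```python
import hashlib

# Hypothetical sketch of DID-based machine identity: each device is registered
# under a decentralized identifier so peers can authenticate it before
# transacting machine-to-machine. All fields and names are illustrative.

def register_machine(registry: dict, serial: str, pubkey: str, role: str) -> str:
    """Derive a DID from the device serial and store a minimal DID document."""
    did = "did:machine:" + hashlib.sha256(serial.encode()).hexdigest()[:16]
    registry[did] = {
        "id": did,
        "publicKey": pubkey,
        "role": role,            # e.g. "delivery-robot", "charging-station"
        "reputation": 0,
    }
    return did

def authorize(registry: dict, did: str, presented_key: str) -> bool:
    """A counterparty checks the presented key against the registered document."""
    doc = registry.get(did)
    return doc is not None and doc["publicKey"] == presented_key

registry: dict = {}
robot_did = register_machine(registry, "ROBOT-SERIAL-0042", "pk_robot", "delivery-robot")
print(authorize(registry, robot_did, "pk_robot"))   # True
print(authorize(registry, robot_did, "pk_evil"))    # False
```

On an actual network the registry lives on-chain and the key check is a signature verification, which is what makes the identity usable for reputation and machine-to-machine commerce.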


Application Layer

  1. Marketplace: Platforms facilitating exchange of robotics components, data, services, or AI-driven robotic capabilities, enabling discoverability and monetization of robotics assets.

    1. Homebrew Robotics provides a decentralized marketplace for sharing, monetizing, and trading robotics data, sensor inputs, and software packages with the goal of making physical AI devices more accessible. 

  2. SocialFi: Robotics-integrated platforms that blend social interactions, tokenized economies, and on-chain identity, with physical robotics or embodied AI components as participants in community-driven systems.

    1. Show Robotics is behind the VitaNova robot, an agentic robotic actor and interactive performer. 

  3. DAO: Robotics-focused decentralized autonomous organizations that govern robotics ecosystems, funding decisions, or robotic deployments through community consensus.

    1. XMAQUINA operates a DAO focused on enabling and democratizing access to humanoid robotics and physical AI. DAO members can vote on investments, asset allocations for robotics, and strategic directions for the DAO.  

  4. Gaming: Robotics-augmented gaming experiences or simulations where embodied AI agents and robotics interact within play-to-earn economies, user-generated content, or immersive AR/VR metaverse environments.

    1. ET Fugi, a gaming project developed by FrodoBots, blends physical AI robotics with interactive gameplay, creating a real-world scavenger hunt. It lets players drive sidewalk robots to capture alien NFTs while generating valuable data for training models. 


Opportunities


The intersection of Web3 and Physical AI remains in its early stages and continues to evolve. We have identified 82+ projects across various stages of operation, some in their infancy and some revenue-generating. Of the identified projects, ~51% fall under the Infrastructure layer, which includes Data, Compute, and Models; ~13% fall under Applications, and ~37% under Interfaces. We at Decasonic remain excited for the future of an intersection that is only beginning to emerge, and we believe it will continue to develop at a rapid pace. 


Some of the opportunities we are excited about include:

  1. Robotic Services: Platforms enabling the Robots-as-a-Service (RaaS) model, where human operators can post tasks for robots integrated into the marketplace or interface, enabling an end-user to receive a target output. 

    1. In the future, you might be able to ‘tip’ the tele-operator of a device on-chain utilizing USDC/other stables. 

  2. Guilds: Co-owned robotic devices, operated under a fleet owned by Guilds, or a collective of users. This enables revenue-sharing between the various economic participants of the guild, encouraging them to continue scaling the robots under their guild. 

  3. Social Robotics: Robots and physical AI devices enabling new, unique experiences, operated primarily by robots for entertainment use-cases. 

  4. Unified Tele-operations Platform: Decentralized coordination layer for remote robotic operations, enabling multiple users (potentially guilds) to interact with and supervise robots at scale across differing geographies. 

    1. Operators can be authenticated, rated on-chain, and rewarded in tokens. It is not an unlikely scenario that AI agents will conduct tele-operations as well. 

  5. Decentralized Robot Fleets: Networks of autonomous robots managed on-chain, enabling communities, guilds, and social networks (in the form of DAOs) to own and operate them. 

  6. Companion AI Devices: Companion AI devices, often presented in the form of individual robots. These could be linked on-chain, with their respective AI agent living on a network. 

    1. Agents can live and interact with other agents on-chain, enabling transfer of skills and enhanced knowledge. 

  7. Identity-Enabled Automation: Each device is registered on a network that can validate the robot's reputation and provide avenues for machine-to-machine commerce. 

    1. Machine-to-machine commerce and the respective physical AI device’s reputation will be assigned on-chain, enabling discoverability amongst a shared network of humans and devices. 

  8. Sim-to-real Pipelines: Training policies in sim-to-real environments with proven performance. Tokenization and crypto unlock royalties on downstream usage. 

    1. Coordination through on-chain incentives and primitives will help advance the scale of sim-to-real. 

  9. Model and Skill Platforms: Distribution and monetization of robotic skills with on-device licensing. Licensing extends to downstream use-cases, including on-chain monetization. 

  10. Edge AI: Providing decentralized compute at the edge through small, portable physical AI devices acting as nodes in a decentralized compute or data-inference network. 

  11. Physical AI Applications: Enabling users to connect with, discover, and request tasks/services for physical AI devices through a decentralized, coordinated marketplace. 
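The revenue-sharing mechanics behind the Robotic Services and Guilds opportunities above can be sketched as a simple pro-rata split. The stake weights, the protocol fee, and the token amounts below are hypothetical:

```python
# Illustrative pro-rata revenue split for a co-owned robot guild: task revenue
# (e.g. paid in stablecoins) is distributed to members in proportion to their
# ownership stake, after a protocol fee. All numbers are hypothetical.

def split_revenue(stakes: dict, revenue: float, protocol_fee: float = 0.05) -> dict:
    """Distribute task revenue to guild members in proportion to their stake."""
    distributable = revenue * (1 - protocol_fee)
    total_stake = sum(stakes.values())
    return {member: round(distributable * s / total_stake, 2)
            for member, s in stakes.items()}

guild = {"alice": 50, "bob": 30, "carol": 20}   # co-ownership shares
payouts = split_revenue(guild, revenue=200.0)    # e.g. 200 USDC for a task
print(payouts)  # {'alice': 95.0, 'bob': 57.0, 'carol': 38.0}
```

On-chain, the same logic would run in a smart contract, which is what makes the revenue share trustless across guild members who never meet.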


Conclusion


The Web3 x Physical AI ecosystem is entering its inflection point. Advances in foundation models designed for embodied intelligence, coupled with the falling costs of robotics hardware, are fueling adoption at an unprecedented pace. With the US at the forefront of Physical AI innovation, this is an opportunity to build ecosystems that lead globally. At Decasonic, we believe the next wave of opportunity in Physical AI will be unlocked by teams building at the intersection of Web3 and Physical AI. 


Decasonic is a Web3 x AI venture and digital assets fund focused on advancing and working alongside founders at the frontier of Web3 x AI. If you are a founder building a physical AI project, a potential founder, or a partner investor, reach out to us at Decasonic. Together, we can shape the future of embodied intelligence and its on-chain economy.


The content of these blog posts is strictly for informational and educational purposes and is not intended as investment advice, or as a recommendation or solicitation to buy or sell any asset. Nothing herein should be considered legal or tax advice. You should consult your own professional advisor before making any financial decision. Decasonic makes no warranties regarding the accuracy, completeness, or reliability of the content in these blog posts. The opinions expressed are those of the authors and do not necessarily reflect the views of Decasonic. Decasonic disclaims liability for any errors or omissions in these blog posts and for any actions taken based on the information provided.

