Imagination to Invention: Our AI Operating System
- Decasonic
- Nov 5
The journey of our applications, agents, and workflows
– Paul Hsu, CEO and Founder, Justin Patel, Venture Investor, and Rizza Torres, Marketing Manager, at Decasonic
Imagination, Invention, and Execution will win the AI Age. We’re excited to share our journey in AI, with the goal of collaborating with like-minded investors and founders building on the frontier of AI.
We started where most builders start: initial curiosity about a breakthrough in AI. Early experiments with ChatGPT in late 2022 focused on compressing the time from question to answer to action. Those experiments quickly moved us from writing about and using AI to building with and innovating on AI. Today, Decasonic runs its workflows through an AI Operating System. We’re proud to be actively imagining, inventing, and executing on AI.
Today, an AI-Native human team works alongside a team of embodied AI agents that synthesize signals, analyze markets, draft templates, assess consensus, and keep our institutional knowledge repositories adaptive. Humans add judgment, context, and the non-consensus perspective that ultimately drives investment alpha in venture capital.
We describe the progression from writing about AI → using AI → building AI → innovating AI. These four phases are how we operate with AI day to day. This fits our broader thesis: AI isn’t just a trend we invest in; it’s a capability built into our firm’s DNA. We adopted AI in parallel with market capability (GPT-3/ChatGPT → GPT-4/4o → GPT-5/next-gen reasoning), and we built where that adoption compounded investment alpha.
We’re excited to share more about our journey, our principles around AI, how the AI OS sharpens investing, and what’s next.
Decasonic’s AI Evolution

Our adoption happened in practical steps that compounded on each other. Each milestone built upon the last from early curiosity, to structured experimentation, to running Decasonic on an AI Operating System.
Writing About AI (2021 - 2022). Before the LLM wave, we were deliberate in structuring an internal knowledge base to capture theses, taxonomies, and outcomes, first as a human-curated repository we call the Decasonic Knowledge Repository (DKR). This became our foundation for systematizing learning and decision-making. When public LLMs arrived at the end of 2022, they became the catalyst for systematic experimentation across the team. We began testing GPT-3 not as a novelty, but as a way to compress the time from question to answer to action. These experiments laid the groundwork for how we would embed intelligence into our venture workflows.
Externally, by April 2023, Paul shared his perspectives on the intersection of AI x Web3 with our Limited Partners, signaling our conviction that intelligence automation would reshape venture building. Around the same period, our marketing and investment teams began integrating ChatGPT into daily workflows, from early marketing automation experiments to research summaries and market scanning by the investment team. We have since ramped up our writing, including these blog posts: Web3 x AI Adoption Scenarios 2025 - 2030, AI x Web3 Use Cases, and Build to Belong: 50 Ways to Embrace Web3 x AI in 2025.
Using AI (2023 - early 2024). By mid-2023, this evolved into firm-wide adoption of AI-driven workflows, custom GPTs, and research assistants that made summarization, extraction, and organization reliable. We saw the potential of fine-tuning LLMs and brought both internal and external partners into our AI efforts.
We also implemented NotebookLM in October 2024 to assist with investor letters and knowledge retention, creating a living record of institutional learning. The emphasis throughout this phase was on traceability: if an agent produced it, it needed receipts. We treated this phase as “fine-tune the workflow”: start human-only, test tools against that baseline, then automate the pieces that held up under review.
We continue to test and experiment with leading-edge products, both in fun and productive ways. From the Limitless AI companion to Google’s AI-native phones, Meta’s smart glasses, and even Sora-powered videos, we test how intelligence moves closer to the user, becoming more intuitive, personal, and expressive. Our AI-first marketing shows how creativity scales when human insight meets intelligent tools. Across work and play, we’re testing and building the future of AI.
Building AI (March - August 2025). We formalized the Decasonic Agentic Knowledge Repository (DAKR), an agent-ready knowledge repository, and began shipping in-house agents at pace. This phase also marked a cultural shift: we moved from exploring tools to designing agents as teammates, autonomous contributors with clear jobs, input contracts, and performance metrics. Each success expanded our belief that AI could meaningfully scale human capability inside a venture firm. The centerpiece of this phase is our approach of having AI build AI, covered in this blog post.
Core internal apps followed: AI Product Management (AI PM); AI Product Evaluation (AI PE); AI Due Diligence (AI DD); the AI Reinforcement Learning Expert Network (RLEN); AI Roster; and AI Agent Emma, our Web3 Investor Day AI Concierge. All of these applications are housed within the AI OS. This is where “AI building AI” turned from idea to operating norm. RLEN (provisional patent filed) formalized how expert clones critique and reward other agents, raising quality while lowering escalation.
Innovating AI (Sept - Nov 2025). We crossed 100+ agents, introduced “agent teams,” and added reinforcement loops so expert clones could critique and improve other agents. AI Sourcing became a stand-alone rail. Today, we orchestrate ~140 agents across 16 internal applications spanning marketing, research, operations, and investor workflows.
Decasonic filed a provisional patent for its Reinforcement Learning Expert Network (RLEN), a breakthrough in how AI learns from and enhances human expertise. The system embeds reinforcement learning into a network of AI clones and expert agents that reflect Decasonic’s accumulated investment knowledge and operational experience. Operating within the AI OS, RLEN transforms experience into intelligence, enabling agents to critique, reward, and improve one another’s outputs with human feedback in the loop. This structure compounds insight and consistency across the firm’s workflows, turning collective learning into a scalable advantage. The patent filing marks a significant milestone in Decasonic’s mission to build AI-native venture capital models where AI helps people flourish.
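The filing describes the system only at a high level; as a hedged illustration (RLEN’s internal design is not public, and every name, rubric, and scoring rule below is hypothetical), a critique-and-reward loop between agents, with a human escalation path, might be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An agent's output awaiting review by expert clones."""
    author: str
    text: str
    scores: list = field(default_factory=list)

def expert_critique(draft: Draft, rubric: dict) -> float:
    """Hypothetical expert-clone scorer: reward coverage of rubric topics."""
    hits = sum(1 for topic in rubric if topic in draft.text.lower())
    return hits / len(rubric)

def review_loop(draft: Draft, experts: list, rubric: dict, threshold: float = 0.5):
    """Each expert clone scores the draft; low average scores escalate to a human."""
    for expert in experts:
        draft.scores.append(expert(draft, rubric))
    avg = sum(draft.scores) / len(draft.scores)
    return ("approve", avg) if avg >= threshold else ("escalate_to_human", avg)

# Example: two expert clones reviewing a short market memo
rubric = {"market": 1, "risk": 1, "team": 1, "traction": 1}
memo = Draft("edison", "Large market, strong team, early traction.")
decision, score = review_loop(memo, [expert_critique, expert_critique], rubric)
```

The design choice worth noting is the explicit threshold: rather than letting agents approve each other unconditionally, weak outputs are routed to a human, which mirrors the human-in-the-loop framing above.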
This growth was powered by a cross-functional team that turned prototypes into production systems. As a team, these efforts made the AI OS not just a productivity layer, but the living infrastructure of Decasonic’s innovation engine. Net effect: a flywheel consisting of wider surface area, tighter decision loops, and an OS that improves with every cycle.

Inside Decasonic’s AI OS
Our OS runs on a few principles that keep the system reliable and useful.
Agents own jobs, not prompts. Each agent has a defined job, input contract, and success metrics. Names like Edison (analyst) aren’t for flair; they reinforce role boundaries and accountability. If a job drifts, we split it; if a job stalls, we retire it. Reviews center on purpose, repeatability, and whether performance improves with use.
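As a minimal sketch of what “jobs, not prompts” could look like in code (the field names, the Edison example inputs, and the validation logic are our own illustrative assumptions, not Decasonic’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class AgentJob:
    """One agent = one job, with an explicit input contract and success metrics."""
    name: str            # e.g. "Edison" (analyst role)
    job: str             # the single job this agent owns
    input_contract: set  # required input fields
    runs: int = 0
    successes: int = 0

    def validate(self, payload: dict) -> bool:
        """Reject inputs that break the contract instead of guessing."""
        return self.input_contract <= payload.keys()

    def record(self, success: bool) -> None:
        """Track whether performance improves with use."""
        self.runs += 1
        self.successes += int(success)

    @property
    def success_rate(self) -> float:
        return self.successes / self.runs if self.runs else 0.0

# Hypothetical usage: a defined job, a contract check, and a logged outcome
edison = AgentJob("Edison", "summarize pitch decks", {"deck_url", "stage"})
ok = edison.validate({"deck_url": "https://example.com/deck", "stage": "seed"})
edison.record(success=True)
```

A contract like this makes the review questions in the text measurable: drift shows up as contract violations, and stalling shows up as a flat or falling success rate.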
DAKR is the brain. The Decasonic Agentic Knowledge Repository (DAKR) structures our collective intelligence across memos, market maps, call notes, presentations, and AI product docs. It’s the context layer that lets agents retrieve, reason, and explain. Every output leaves receipts (citations, replayable steps, and a change log in DAKR) so humans can verify and refine quickly.
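As an illustrative sketch only (the record shape and the `dkr://` citation scheme below are our assumptions, not DAKR’s real format), “receipts” might look like an output wrapped with its citations, replayable steps, and a content hash the change log can check for silent edits:

```python
import hashlib
import json
import time

def with_receipts(output_text: str, citations: list, steps: list) -> dict:
    """Wrap an agent output with provenance a human can verify and replay."""
    record = {
        "output": output_text,
        "citations": citations,  # source documents backing each claim
        "steps": steps,          # replayable pipeline steps
        "timestamp": time.time(),
    }
    # Hash the verifiable content so later edits are detectable in the change log.
    payload = json.dumps(
        {"output": output_text, "citations": citations}, sort_keys=True
    )
    record["content_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

# Hypothetical usage: a one-line claim with its source and pipeline trace
receipt = with_receipts(
    "Market growing ~30% YoY.",
    citations=["dkr://memos/market-map-2025"],
    steps=["retrieve", "summarize", "cite"],
)
```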
Outcomes (what changed). We measure the OS by what it makes faster, cheaper, and better evidenced:
AI Due Diligence (AI DD): 6 hours → ~20 minutes (~18x efficiency) for initial diligence baselines. AI DD pulls the research an investor needs to dive deeper into an opportunity and formulate an opinion, with agents assembling the consensus baseline and partners driving the non-consensus view.
AI Sourcing: We moved from manually sourcing ~200 deals per week to a mixed human+agent system that surfaces hundreds of qualified leads overnight, so we engage earlier and faster with teams.
AI Product Evaluation (AI PE): Our product evaluation layer uses the patent-pending Reinforcement Learning Expert Network (RLEN) and a team of agents to run structured, side-by-side tests on internal and third-party products. Product evaluations that typically take hours to complete have been cut down to minutes.
AI Product Management (AI PM): a ~60× step reduction, turning a 6-hour development process into 6 clicks to ship or update internal agents and accelerating iteration across the stack. This is also where AI builds AI: the AI PM layer proposes PRDs, checklists, and routing for “build crews” (a small set of agents plus a human owner), moving “AI building AI” from principle to a weekly shipping cadence.
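As a sketch of the build-crew idea only (the roles, flags, and routing rules here are our assumptions, not the actual AI PM implementation), a router might map a PRD to a small crew plus a human owner:

```python
def route_build_crew(prd: dict) -> dict:
    """Hypothetical router: map a PRD to a small agent crew plus a human owner."""
    crew = {"human_owner": prd["owner"], "agents": []}
    if prd.get("needs_research"):
        crew["agents"].append("research_agent")
    if prd.get("needs_eval"):
        crew["agents"].append("eval_agent")  # e.g. an AI PE-style reviewer
    crew["agents"].append("builder_agent")   # always present: ships the change
    return crew

# Hypothetical usage: a PRD that needs research but no separate evaluation pass
crew = route_build_crew({"owner": "paul", "needs_research": True, "needs_eval": False})
```

Keeping the human owner as a required field, rather than an optional reviewer, matches the principle in the text that human judgment stays at the center.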
Using AI to Seek Alpha
Our AI OS turns judgment into a higher-throughput, higher-fidelity process: it expands the speed and coverage of sourcing while sharpening the depth of research, then converts that insight into 1:1 personalization for outreach and meetings so we engage earlier with high-fit teams. In diligence, we move from static snapshots to living, scalable analysis, a “faster due diligence → higher-conviction decisions → clearer path to liquidity” loop, where agents establish the consensus baseline and investors push for the non-consensus view. In parallel, we use generative-AI marketing to translate research into public signal, reinforce investment-thesis compatibility, and attract the right deal flow. Always-on monitors track tipping points of adoption, while hands-on demonstrations of AI products and technical depth keep our bar grounded in real UX and reliability. For the portfolio, scenario generators help us reason about catalysts and risk, and aligning AI speed with founders improves capital efficiency. Net effect: we separate signal from noise faster and with greater accuracy, and compound that edge across the firm.
Enhancement Capital in the Age of AI
We call this Enhancement Capital because we do more than fund innovation. We enhance it by building and operating like founders ourselves. The rhythm is consistent: automate the consensus baseline, cultivate the non-consensus insight, and keep human judgment at the center. The OS widens our surface area through earlier looks, deeper evidence, and continuous learning, while DAKR compounds our institutional intelligence every cycle.
We’ll keep shipping internal agents where it compounds our edge, adopt the best external tools where they accelerate us, and publish what we learn so the right founders and partners can find us.
At Decasonic, progress is built in community with founders, operators, and believers who create together and grow together. Join us in celebrating this collective spirit at Web3 x AI FrensGiving on November 20th, where we explore the frontier of AI patents and the importance of protecting innovation in a world shaped by Web3 and intelligent systems. Together, we honor the innovators transforming bold ideas into lasting impact.
If you’re building at the frontier of Web3 x AI and want a partner who ships alongside you, we’d love for you to connect with us.
The content of these blog posts is strictly for informational and educational purposes and is not intended as investment advice, or as a recommendation or solicitation to buy or sell any asset. Nothing herein should be considered legal or tax advice. You should consult your own professional advisor before making any financial decision. Decasonic makes no warranties regarding the accuracy, completeness, or reliability of the content in these blog posts. The opinions expressed are those of the authors and do not necessarily reflect the views of Decasonic. Decasonic disclaims liability for any errors or omissions in these blog posts and for any actions taken based on the information provided.
