AI Building AI
- Decasonic
- Jul 3
The Flywheel that Compounds Moats – Paul Hsu, CEO and Founder, and Abdul Al Ali, Venture Investor, Decasonic
Introduction
AI is rapidly accelerating how we live, work, and play. “AI Building AI” is our strategy for the AI era: compounding the rapid advantages in AI models, data, and context to accelerate the pace of growth and innovation, with humans guiding AI toward wisdom, knowledge, and expertise.
Speed is everything in the world of AI. Companies are racing one another, stacking up ‘supersonic milestones’ within weeks. Market conditions are changing rapidly as competition over the future of AI adoption and advancement grows exponentially.
We at Decasonic are on the front lines of this rapid acceleration in AI as proactive AI x Web3 investors, AI builders, and operators of an AI-native venture fund. Our core guiding principle for investing is the belief that future greatness belongs not only to AI-native teams, but to AI-compounding organizations.
We believe “AI Building AI” will separate AI Natives from AI Laggards. Winners in the AI market will compound AI advancements through AI Building AI: scaling, delegating, and letting AI build more of itself.
Our strategy and core guiding principle of investing reflect our aspirations as a world-class AI-native venture firm, with org charts of humans working alongside AI. We started with fine-tuned OpenAI models, mapping custom GPTs to our internal knowledge and investment steps. We then built on top of those models by introducing AI agents. Today, Decasonic is a team of 5 AI-native humans, AI clones (digital twins of each human team member), and 61+ specialized, domain-expert agents. Our agents are trained on our internal knowledge repository, producing value-add agents aligned with investment alpha. All of this is accelerated by building and investing in tools that align with our core guiding principle of “AI Building AI.”

Three forces now shorten the distance between idea and deployment: Reinforcement Learning from Human Feedback (RLHF), AI agent swarms, and the emerging promise of Reinforcement Learning from Human Clone Feedback (RLHCF), a concept which we introduce.
Together, they form a powerful flywheel. Better models generate higher-quality data. That data, looped through feedback systems, improves the next generation of models. The result is more efficient, higher-quality output delivered at a rate that accelerates well beyond what was previously possible.
This is the core promise of AI Building AI: acceleration toward a continuous flywheel of rapid advancement and innovation.
RLHF: Power and Bottlenecks
Reinforcement Learning from Human Feedback was a breakthrough technique for aligning large language models with human preferences. Yet it has clear limitations. Humans remain in the loop, creating latency in feedback and limiting the granularity of signals. Progress is ultimately bounded by the rate at which humans can provide feedback.
Human raters provide feedback after the fact. Depending on how quickly feedback is collected and on the domain expertise of the raters, traditional RLHF deployment can bottleneck the compounding intelligence flywheel. Feedback queues create friction. As demand for AI-generated output grows exponentially, the bottleneck of episodic human feedback becomes untenable.
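To make the bottleneck concrete, here is a minimal, illustrative sketch of an RLHF-style loop in which every candidate output waits on a human score before the policy can update. The function names (generate_candidates, collect_human_rating, update_policy) and the simulated latency are hypothetical placeholders for the purpose of the example, not a real training stack.

```python
import random
import time

# Toy RLHF loop illustrating where the human bottleneck sits.
# All names below are hypothetical stand-ins, not a real training pipeline.

def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Stand-in for sampling n candidate responses from the current model."""
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def collect_human_rating(candidate: str) -> float:
    """Stand-in for a human rater scoring one output.
    The sleep models feedback latency: each label costs human wall-clock time."""
    time.sleep(0.01)  # the episodic human bottleneck described above
    return random.random()

def update_policy(preferences: list[tuple[str, float]]) -> None:
    """Stand-in for fitting a reward model and updating the policy."""
    print(f"updated policy on {len(preferences)} human-labeled examples")

if __name__ == "__main__":
    prompts = ["summarize the memo", "draft the spec", "review the deal"]
    labeled: list[tuple[str, float]] = []
    for prompt in prompts:
        for candidate in generate_candidates(prompt):
            # Every candidate waits in a queue for a human score.
            labeled.append((candidate, collect_human_rating(candidate)))
    update_policy(labeled)
```

The point of the toy loop is the queue: model throughput scales with compute, while labels arrive only as fast as humans can provide them.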
AI Agent Swarms
The early paradigm of a single AI assistant is giving way to the more dynamic architecture of agentic swarms: teams of specialized agents that collaborate, parallelize tasks, and share state through orchestrators. Specialized AI agents acting in coordination are powerful. We view orchestrator agents that manage specialized agents as an effective way to scale frontier AI use cases; generalist agents are neither efficient nor effective for specialized, high-quality deployments.
Today, orchestrators assign tasks to the most qualified agent, managing hierarchies, dependencies, and goals in real time. They function as agentic managers. In practice, a meta-agent might deploy sub-agents to research, write specifications, code, review, and deploy, operating around the clock.
Such an agent swarm can outcompete its human counterparts because it works 24/7. The only limits on its capabilities are the cost of compute and inference and the endpoints available for interaction and for deploying its output.
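As a simplified illustration of this orchestration pattern (not a description of Decasonic’s internal agents or any specific framework), the sketch below shows a meta-agent dispatching a goal to specialized sub-agents in dependency order; the agent names and the routing rule are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal sketch of an orchestrator routing work to specialized agents.

@dataclass
class Task:
    kind: str      # e.g. "research", "spec", "code", "review", "deploy"
    payload: str

def research_agent(task: Task) -> str:
    return f"research notes for: {task.payload}"

def spec_agent(task: Task) -> str:
    return f"specification for: {task.payload}"

def code_agent(task: Task) -> str:
    return f"implementation of: {task.payload}"

def review_agent(task: Task) -> str:
    return f"review comments on: {task.payload}"

def deploy_agent(task: Task) -> str:
    return f"deployed: {task.payload}"

# The orchestrator keeps a registry of specialists and dispatches by task kind.
REGISTRY: dict[str, Callable[[Task], str]] = {
    "research": research_agent,
    "spec": spec_agent,
    "code": code_agent,
    "review": review_agent,
    "deploy": deploy_agent,
}

def orchestrate(goal: str) -> list[str]:
    """Run the specialists in dependency order, passing the shared goal."""
    outputs = []
    for kind in ["research", "spec", "code", "review", "deploy"]:
        outputs.append(REGISTRY[kind](Task(kind=kind, payload=goal)))
    return outputs

if __name__ == "__main__":
    for line in orchestrate("agent-native portfolio dashboard"):
        print(line)
```

In a production swarm the registry entries would be model-backed agents and the orchestrator would manage retries, dependencies, and shared state rather than a fixed pipeline; the structure of the dispatch, however, is the same.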
RLHCF: A New Feedback Paradigm
We believe the next evolution of RLHF is Reinforcement Learning from Human Clone Feedback (RLHCF). This approach removes the latency of RLHF by inserting a digital twin between the human and the swarm. These clones act as real-time proxies for feedback, fine-tuned on personal preferences, tone, and domain expertise.
Clones are trained on their humans’ personalities, workflows, biases, and other relevant information, abstracting away the need for manual human feedback loops. Humans are never entirely removed, but the time they spend per interaction giving feedback to agents or fine-tuning models is greatly reduced.
When deployed, candidate outputs are piped through the clone, which returns reward signals immediately. Periodically, humans audit these decisions, correct for drift, and feed new examples back into the clone. The result is a continuously improving, always-available reinforcement loop that scales at near-zero marginal cost.
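The sketch below illustrates the RLHCF loop as described: a clone returns an instant reward signal for every output, while a human audits a small sample and feeds corrections back into the clone. The CloneReviewer class and its scoring logic are toy stand-ins invented for this example, not a real digital twin.

```python
import random

# Illustrative RLHCF loop: the clone scores everything immediately;
# humans audit a sample and correct drift.

class CloneReviewer:
    """Stand-in digital twin: returns an instant reward signal."""
    def __init__(self) -> None:
        self.bias = 0.0  # drift accumulated relative to the human's preferences

    def score(self, output: str) -> float:
        return min(1.0, max(0.0, random.random() + self.bias))

    def calibrate(self, corrections: list[float]) -> None:
        # Fold the human's corrections back into the clone.
        if corrections:
            self.bias -= sum(corrections) / len(corrections)

def human_audit(output: str, clone_score: float) -> float:
    """Stand-in for the periodic human review; returns the correction to apply."""
    human_score = random.random()
    return clone_score - human_score

if __name__ == "__main__":
    clone = CloneReviewer()
    corrections: list[float] = []
    for step in range(100):
        candidate = f"agent output #{step}"
        reward = clone.score(candidate)   # instant, near-zero marginal cost
        if step % 25 == 0:                # periodic human audit
            corrections.append(human_audit(candidate, reward))
    clone.calibrate(corrections)
    print(f"audited {len(corrections)} of 100 outputs; clone bias now {clone.bias:+.3f}")
```

The key design choice the sketch captures is the split: feedback happens at clone speed, while the scarce human time is spent only on auditing and recalibration.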
v-Infinity
AI enables continuous upgrades, a concept we refer to internally as “v-infinity.” A product innovator, founder, or builder can no longer ship a product once and expect it to meet evolving market needs. Shipping AI products is a non-stop challenge of continuous improvement, with higher expectations, capabilities, and functionality to be met at each iteration. Agents that learn from both RLHF and, prospectively, RLHCF enhance this loop, enabling further product improvement, customization, and personalization.
Moats are Morphing
Execution velocity is becoming the primary moat. In a world where base models are increasingly commoditized, differentiation emerges from systems that can learn faster and adapt in real time.
The next defensibility layer isn’t static IP; it’s applied intelligence: the ability to run a swarm, audit it through clones, and evolve that system continuously. This creates a living software stack, always learning, never shipping a final version.
Conclusion
AI Building AI is the path to continuous improvement. In the AI Age, ideas are abundant and execution speed is everything. Speed to network effects powers durability. It enables continuous learning, compounding output, and context-rich orchestration. We believe the future belongs to teams and platforms that don’t just use AI but build more of it and explore its emerging frontiers. They scale through it. And they delegate to it.
This flywheel is just getting started. The organizations that embrace AI Building AI will not just keep pace; they will define the frontier. If you are building at the intersection of AI x Web3, reach out to us at Decasonic, and we can help transform your vision into a reality.
The content of this material is strictly for informational and educational purposes only. It is not intended to constitute investment advice, nor should it be considered a recommendation or a solicitation to buy, sell, or hold any asset. Decasonic does not endorse investments in any specific tokens, and nothing in these blog posts should be construed as legal, tax, or financial advice. Please consult with a qualified professional advisor before making any financial decisions. Decasonic provides no warranties, whether expressed or implied, on the content provided in these blog posts, including its accuracy, completeness, or correctness. The opinions expressed here are those of the authors and do not necessarily reflect the views of Decasonic. Please note that Decasonic may hold a position in some of the tokens mentioned, including Virtuals. Decasonic is not liable for any errors or omissions in the content of this material or for any actions taken based on the information provided herein.