In our previous post, we explored the History of Application Design. In Part 1 of this second post in our Agentic AI series, we examine the current Web2 AI landscape and its key trends, platforms, and technologies. In Part 2, we explore how blockchain and trustless verification enable the evolution of AI agents into truly agentic systems.
Figure 1. E2B Web2 AI Agent Landscape.
The contemporary AI landscape is predominantly characterized by centralized platforms and services controlled by major technology companies. Companies like OpenAI, Anthropic, Google, and Microsoft provide large language models (LLMs) and maintain crucial cloud infrastructure and API services that power most AI agents.
Recent advancements in AI infrastructure have fundamentally transformed how developers create AI agents. Instead of coding specific interactions, developers can now use natural language to define agent behaviors and goals, leading to more adaptable and sophisticated systems.
Figure 2. AI Agent Infrastructure Segmentation.
Key advancements in the following areas have led to a proliferation in AI agents:
Figure 3. AI Business Models.
Traditional Web2 AI companies primarily employ tiered subscriptions and consulting services as their business models.
Emerging business models for AI agents include:
While current Web2 AI systems have ushered in a new era of technology and efficiency, they face several challenges.
The main constraints of Web2 AI—centralization, data ownership, and transparency—are being addressed with blockchain and tokenization. Web3 offers the following solutions:
Both Web2 and Web3 AI agent stacks share core components like model and resource coordination, tools and other services, and memory systems for context retention. However, Web3's incorporation of blockchain technologies allows for the decentralization of compute resources, tokens to incentivize data sharing and user ownership, trustless execution via smart contracts, and bootstrapped coordination networks.
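The shared core components can be sketched in a few lines. The toy agent below combines a model call, a tool registry, and a memory buffer for context retention; all names and the "CALL tool arg" convention are illustrative, not any specific framework's API.

```python
# Minimal sketch of the core components shared by Web2 and Web3 agent
# stacks: a model call, a tool registry, and a memory buffer.

class Agent:
    def __init__(self, model_fn, tools):
        self.model_fn = model_fn          # callable: prompt -> text
        self.tools = tools                # tool name -> callable
        self.memory = []                  # retained context across turns

    def run(self, user_input):
        # Build the prompt from recent context plus the new input.
        context = "\n".join(self.memory[-5:])
        reply = self.model_fn(f"{context}\n{user_input}")
        # Toy convention: a reply of "CALL <tool> <arg>" dispatches a tool.
        if reply.startswith("CALL "):
            _, name, arg = reply.split(" ", 2)
            reply = str(self.tools[name](arg))
        self.memory.extend([user_input, reply])
        return reply

# Toy model that always requests the "upper" tool.
agent = Agent(lambda p: "CALL upper hello", {"upper": str.upper})
print(agent.run("greet me"))  # HELLO
```

In a real system the model call would hit an LLM API and the tool registry would hold protocol connectors; the shape of the loop stays the same.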
Figure 4. Web3 AI Agent Stack.
The Data layer is the foundation of the Web3 AI agent stack and encompasses all aspects of data. It includes data sources, provenance tracking and authenticity verification, labeling systems, data intelligence tools for analytics and research, and storage solutions for different data retention needs.
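Provenance tracking and authenticity verification can be illustrated with a content hash. The sketch below is a toy, local version of what a Web3 data layer would anchor on-chain; the record fields are illustrative.

```python
import hashlib
import time

# Toy provenance record: hash the dataset bytes so later consumers can
# verify the data has not been altered since it was registered.

def make_record(data: bytes, source: str) -> dict:
    return {
        "source": source,
        "sha256": hashlib.sha256(data).hexdigest(),
        "timestamp": time.time(),
    }

def verify(data: bytes, record: dict) -> bool:
    # Authenticity check: recompute the digest and compare.
    return hashlib.sha256(data).hexdigest() == record["sha256"]

data = b"label,text\n1,hello"
rec = make_record(data, source="sensor-42")
print(verify(data, rec))         # True
print(verify(b"tampered", rec))  # False
```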
The Compute layer provides the processing infrastructure needed to run AI operations. Computing resources can be divided into distinct categories: training infrastructure for model development, inference systems for model execution and agent operations, and edge computing for local decentralized processing.
Distributed computing resources remove reliance on centralized cloud networks, enhance security, reduce single points of failure, and allow smaller AI companies to leverage excess computing resources.
1. Training. Training AI models is computationally intensive and expensive. Decentralized training compute democratizes AI development while increasing privacy and security, as sensitive data can be processed locally without centralized control.
Bittensor and Golem Network are decentralized marketplaces for AI training resources. Akash Network and Phala provide decentralized computing resources with Trusted Execution Environments (TEEs). Render Network repurposed its graphics GPU network to provide computing for AI tasks.
2. Inference. Inference computing refers to the resources needed by models to generate a new output, or by AI applications and agents to operate. Real-time applications that process large volumes of data, or agents that require multiple operations, use larger amounts of inference computing power.
Hyperbolic, Dfinity, and Hyperspace specifically offer inference computing. Inference Labs's Omron is an inference and compute verification marketplace on Bittensor. Decentralized computing networks like Bittensor, Golem Network, Akash Network, Phala, and Render Network offer both training and inference computing resources.
3. Edge Compute. Edge computing involves processing data locally on devices like smartphones, IoT devices, or local servers. Edge computing allows for real-time data processing and reduced latency, since the model and the data run locally on the same machine.
Gradient Network is an edge computing network on Solana. Edge Network, Theta Network, and AIOZ allow for global edge computing.
The Verification and Privacy layer ensures system integrity and data protection. Consensus mechanisms, Zero-Knowledge Proofs (ZKPs), and TEEs are used to verify model training, inference, and outputs. Fully Homomorphic Encryption (FHE) and TEEs are used to ensure data privacy.
1. Verifiable Compute. Verifiable compute covers both model training and inference: third parties can confirm that the claimed computation was actually performed.
Phala and Atoma Network combine TEEs with verifiable compute. Inferium uses a combination of ZKPs and TEEs for verifiable inference.
2. Output Proofs. Output proofs verify that AI model outputs are genuine and have not been tampered with, without revealing the model parameters. Output proofs also offer provenance and are important for trusting AI agent decisions.
zkML and Aztec Network both have ZKP systems that prove computational output integrity. Marlin's Oyster provides verifiable AI inference through a network of TEEs.
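The binding property behind output proofs can be shown with a toy hash commitment. Real systems use ZKPs; the sketch below only illustrates the idea that a published tag binds an output to a previously committed model, so a tampered output no longer verifies, while the commitment itself reveals nothing about the parameters. All function names are illustrative.

```python
import hashlib

# Toy stand-in for an output proof: a hash commitment binds (model, input,
# output) together without revealing the model parameters.

def commit_model(params: bytes) -> str:
    # Published once, before inference; the digest hides the params.
    return hashlib.sha256(params).hexdigest()

def prove_output(commitment: str, inp: str, out: str) -> str:
    return hashlib.sha256(f"{commitment}|{inp}|{out}".encode()).hexdigest()

def check_output(commitment: str, inp: str, out: str, tag: str) -> bool:
    return prove_output(commitment, inp, out) == tag

c = commit_model(b"weights-v1")
tag = prove_output(c, "2+2", "4")
print(check_output(c, "2+2", "4", tag))  # True
print(check_output(c, "2+2", "5", tag))  # False (tampered output)
```

Unlike a real ZKP, this toy scheme cannot prove the output was actually computed by the committed model; it only detects tampering after the fact.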
3. Data and Model Privacy. FHE and other cryptographic techniques allow models to process encrypted data without exposing sensitive information. Data privacy is necessary when handling personal and sensitive information and to preserve anonymity.
Oasis Protocol provides confidential computing via TEEs and data encryption. Partisia Blockchain uses advanced Multi-Party Computation (MPC) to provide AI data privacy.
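The core idea of computing on encrypted data can be demonstrated with a toy additive scheme. This one-time-pad construction is NOT real FHE and offers no security in practice; it only shows that an untrusted server can add ciphertexts without ever seeing the plaintexts.

```python
import random

# Toy demonstration of computing on encrypted data, in the spirit of FHE.
# Each value is masked with a random key modulo N; sums of ciphertexts
# decrypt correctly once all keys are subtracted.

N = 10**9

def encrypt(m: int, key: int) -> int:
    return (m + key) % N

def add_encrypted(c1: int, c2: int) -> int:
    # Runs on the untrusted server: plaintexts never appear here.
    return (c1 + c2) % N

def decrypt_sum(c: int, key1: int, key2: int) -> int:
    return (c - key1 - key2) % N

k1, k2 = random.randrange(N), random.randrange(N)
c = add_encrypted(encrypt(20, k1), encrypt(22, k2))
print(decrypt_sum(c, k1, k2))  # 42
```

Real FHE schemes additionally support multiplication on ciphertexts and keep the keys secret from everyone but the data owner, which is what makes private model inference possible.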
The Coordination layer facilitates interaction between different components of the Web3 AI ecosystem. It includes model marketplaces for distribution, training and fine-tuning infrastructure, and agent networks for inter-agent communication and collaboration.
1. Model Networks. Model networks are designed to share resources for AI model development.
2. Training / Fine-Tuning. Training networks specialize in distributing and managing training datasets. Fine-tuning networks focus on infrastructure that enhances models with external knowledge through Retrieval-Augmented Generation (RAG) and APIs.
Bittensor, Akash Network, and Golem Network offer training and fine-tuning networks.
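The retrieval step at the heart of RAG can be sketched simply. The example below scores stored documents against a query with word overlap (a stand-in for embedding similarity) and prepends the best matches to the prompt; the documents and function names are illustrative.

```python
# Minimal sketch of RAG retrieval: rank documents by relevance to the
# query, then build an augmented prompt for the model.

def score(query: str, doc: str) -> int:
    # Word-overlap relevance; real systems use embedding similarity.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Staking rewards accrue daily.",
    "The bridge supports ERC-20 tokens.",
    "Validators must bond 32 tokens.",
]
print(build_prompt("How do staking rewards work?", docs))
```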
3. Agent Networks. Agent networks provide two main services for AI agents: 1) tools and 2) agent launchpads. Tools include connections with other protocols, standardized user interfaces, and communication with external services. Agent launchpads allow for easy AI agent deployment and management.
Theoriq leverages agent swarms to power DeFi trading solutions. Virtuals is the leading AI agent launchpad on Base. Eliza OS was the first open-source LLM model network. Alpaca Network and Olas Network are community-owned AI agent platforms.
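Inter-agent communication on such networks reduces to agents registering handlers and exchanging messages. The sketch below is a toy in-process message bus; the class and agent names are illustrative and do not correspond to any specific protocol's API.

```python
# Toy sketch of inter-agent communication on an agent network: agents
# register handlers on a shared bus and address each other by name.

class AgentBus:
    def __init__(self):
        self.handlers = {}

    def register(self, name, handler):
        # handler: (sender, payload) -> reply
        self.handlers[name] = handler

    def send(self, sender, recipient, payload):
        return self.handlers[recipient](sender, payload)

bus = AgentBus()
# A "pricer" agent answers quote requests; a "trader" agent delegates to it.
bus.register("pricer", lambda sender, asset: {"asset": asset, "price": 1.23})
bus.register("trader", lambda sender, asset: bus.send("trader", "pricer", asset))

print(bus.send("user", "trader", "ETH"))  # {'asset': 'ETH', 'price': 1.23}
```

On a real agent network the bus would be a decentralized transport with signed messages and on-chain identities, but the register/send pattern is the same.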
The Services layer provides the essential middleware and tooling that AI applications and agents need to function effectively. This layer includes development tools, APIs for external data and application integration, memory systems for agent context retention, Retrieval-Augmented Generation (RAG) for enhanced knowledge access, and testing infrastructure.
The Application layer sits at the top of the AI stack and represents the end-user-facing solutions. This includes agents that solve use cases like wallet management, security, productivity, gaming, prediction markets, governance systems, and DeFAI tools.
Collectively, these applications contribute to secure, transparent, and decentralized AI ecosystems tailored to Web3 needs.
The evolution from Web2 to Web3 AI systems represents a fundamental shift in how we approach artificial intelligence development and deployment. While Web2’s centralized AI infrastructure has driven tremendous innovation, it faces significant challenges around data privacy, transparency, and centralized control. The Web3 AI stack demonstrates how decentralized systems can address these limitations through data DAOs, decentralized computing networks, and trustless verification systems. Perhaps most importantly, token incentives are creating new coordination mechanisms that can help bootstrap and sustain these decentralized networks.
Looking ahead, the rise of AI agents represents the next frontier in this evolution. As we’ll explore in the next article, AI agents – from simple task-specific bots to complex autonomous systems – are becoming increasingly sophisticated and capable. The integration of these agents with Web3 infrastructure, combined with careful consideration of technical architecture, economic incentives, and governance structures, has the potential to create more equitable, transparent, and efficient systems than what was possible in the Web2 era. Understanding how these agents work, their different levels of complexity, and the distinction between AI agents and truly agentic AI will be crucial for anyone working at the intersection of AI and Web3.