Abstract

See also the Chinese version of this document.

This paper explores the evolution from the vision of the Semantic Web to the emerging Agentic Web, and analyzes the necessity of establishing standardized agent network protocols. Although the Semantic Web was a forward-thinking concept when proposed twenty years ago, it was never fully realized, largely because the artificial intelligence of that era could not support it. With the rapid development of modern AI technologies such as Large Language Models (LLMs), agents now possess the ability to autonomously execute tasks, perform complex reasoning, and solve multi-step problems, thus giving rise to the Agentic Web. Through systematic analysis, this paper identifies four core trends of the agent network: agents replacing traditional software as internet infrastructure, universal interconnection between agents, protocol-based native connection patterns, and agents' capacity for autonomous organization and collaboration. It also identifies three major challenges that the current internet architecture poses to the development of the Agentic Web: data silos limiting the quality of agent decision-making, human-oriented interfaces hindering agent interaction efficiency, and the absence of standard protocols impeding agent collaboration. In response, this paper elaborates the design principles and core requirements for agent network protocols, and provides a systematic comparison of the current major agent network protocol initiatives (MCP, A2A, ACP, ANP, etc.). The conclusion emphasizes that establishing standardized agent network protocols is crucial for breaking down data silos, enabling heterogeneous agent collaboration, building AI-native data networks, and ultimately realizing an open and efficient Agentic Web, and calls on all stakeholders to actively participate in the W3C standardization process.

Introduction: From the Unfulfilled Vision of the Semantic Web to the Dawn of the Agentic Web

Twenty years ago, Tim Berners-Lee and his collaborators proposed the visionary concept of the Semantic Web, with the core objective of creating a data-centric, machine-readable web of data that would enable computers and humans to collaborate more efficiently. This concept depicted an intelligent future: daily transactions, administrative affairs, and various life scenarios would be completed automatically by "intelligent agents" through machine-to-machine dialogues. To achieve this goal, the Semantic Web planned to give information on the web clear semantic definitions through technologies such as XML, RDF, and ontologies, enabling software agents to navigate autonomously between web pages and efficiently execute complex tasks on behalf of users.
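The Semantic Web's core mechanism can be illustrated with a toy example. The sketch below models RDF-style statements as (subject, predicate, object) triples and shows the kind of pattern query an agent could run over them; the identifiers are hypothetical, and real systems would use RDF serializations (e.g., Turtle) and SPARQL rather than Python tuples.

```python
# Toy triple store illustrating the Semantic Web idea of machine-readable
# statements. All identifiers (ex:DrHartman, etc.) are hypothetical.

triples = [
    ("ex:DrHartman", "rdf:type", "ex:Physician"),
    ("ex:DrHartman", "ex:hasOpening", "ex:Tue1030"),
    ("ex:Tue1030", "ex:withinMiles", "20"),
]

def query(store, s=None, p=None, o=None):
    """Return all triples matching the given pattern; None acts as a wildcard."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# An agent could ask: which resources are of type ex:Physician?
physicians = [t[0] for t in query(triples, p="rdf:type", o="ex:Physician")]
# physicians == ["ex:DrHartman"]
```

Because the statements carry explicit semantics rather than page layout, a software agent can answer such questions without scraping human-oriented HTML, which is exactly the capability the Semantic Web envisioned.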

TODO: This section needs further development and refinement.

Notably, the original concept of the Semantic Web already embodied a rich notion of "agents": entities that could automatically execute tasks on behalf of users. What was missing at the time was sufficiently capable AI. Today, the technological breakthroughs represented by Large Language Models (LLMs) have enabled agents to act autonomously, perform complex reasoning, and solve multi-step problems. These agents are no longer merely passive tools; they have become active participants in the digital ecosystem. Against this background, the concept of the "Agentic Web" or "Internet of Agents" has emerged. This new network paradigm views agents as primary actors that actively interact with network resources, services, and other agents to accomplish user goals. The Agentic Web inherits the core vision of the Semantic Web and, leveraging advanced AI capabilities, aims to build an ecosystem of autonomous, intelligent, and efficiently collaborating agents, gradually turning the Semantic Web's ideal of machine intelligence that processes information and assists humans effectively into reality.

This transformation heralds a fundamental change in user interaction patterns—from human-centered clicking and browsing through browsers to agent-centered interactions and collaborations driven by agents. In this new model, agents would autonomously interact directly with other agents, automatically complete tasks, and provide personalized experiences based on user preferences and context. This agent-dominated model is not just an incremental update to the existing network, but may trigger profound changes in internet architecture and interaction logic. The way users access information would also change, from actively querying information through interfaces to agents actively executing tasks and delivering results, possibly bypassing traditional website interfaces. This would promote a comprehensive innovation in the design methods, discovery mechanisms, and interaction modes of network services, pushing the internet into a new stage of development.

Challenges of the Agentic Web: Limitations of the Current Internet and the Urgent Need for Standardized Interaction

With the development of AI technology, agents are gradually becoming the new generation of core participants in the internet ecosystem, following websites and applications. However, the accelerated evolution of the Agentic Web also exposes many limitations in the technical foundation and connection paradigms of the current internet. If these issues are not addressed, they will severely constrain the scalability and collaborative efficiency of agent systems. The main challenges include the following three aspects:

1. Data silos: vast amounts of data and services are confined within closed platforms, limiting the quality of information available for agent decision-making.
2. Human-oriented interfaces: websites and applications are designed for human clicking and browsing, which hinders the efficiency of direct agent interaction.
3. Absence of standard protocols: without common standards for discovery, identity, and communication, agents from different platforms and vendors cannot collaborate effectively.

These challenges, especially the lack of standardized agent network protocols, risk fragmenting the agent ecosystem. Numerous heterogeneous agents would become "agent islands," unable to interoperate or collaborate effectively, which would not only limit the overall potential of the Agentic Web but also significantly increase integration costs and complexity.

Faced with this situation, establishing standardized agent network protocols has become an urgent priority for building a truly Agentic Web. Such protocols aim to provide a unified framework for discovery, identification, verification, communication, and collaboration among agents from different platforms and vendors, thereby overcoming interoperability barriers and ensuring secure and efficient interactions. The establishment of the W3C AI Agent Protocol Community Group and its mission is an active response to this need. Standardization is not only a technical requirement but also a strategic cornerstone to prevent the Agentic Web from becoming balkanized and to fully leverage its network effects and realize the vision of "billions of agents" working collaboratively.

Defining the Blueprint: Key Issues and Core Requirements for Agent Network Protocols

To address the challenges outlined above and fully leverage the potential of the Agentic Web, designing and implementing standardized agent network protocols is crucial. These protocols are not merely technical specifications but cornerstones for building an interoperable, trustworthy, and efficient agent ecosystem. A comprehensive agent network protocol framework needs to address a series of key issues and meet specific functional and non-functional requirements.

Key Issues That Agent Network Protocols Aim to Solve

Core Functional Requirements for Agent Network Protocols

A comprehensive agent network protocol should meet the following core functional requirements to support the effective operation of agents in the Agentic Web:

Key Non-Functional Requirements for Agent Network Protocols

In addition to core functionalities, agent network protocols must also meet a series of key non-functional requirements to ensure their security, usability, scalability, and controllability in real-world applications:

By addressing the key issues above and meeting these core requirements, standardized agent network protocols would lay a solid foundation for building a prosperous, collaborative, and trustworthy Agentic Web.

Overview of Typical Agent Protocols

This section aims to provide a neutral overview of some current and emerging agent protocols, highlighting how they address the challenges and requirements discussed earlier. These protocols each target different aspects of interoperability and deployment scenarios, collectively forming the exploratory frontier of current agent communication standardization.

Model Context Protocol (MCP)

Agent-to-Agent Protocol (A2A) (Google)

Agent Connect Protocol (ACP) (Cisco)

Agent Network Protocol (ANP)

Protocol Comparison Analysis

To clearly compare the above major protocols, the following table summarizes some of their key features:

| Feature | Model Context Protocol (MCP) | Agent-to-Agent Protocol (A2A) | Agent Connect Protocol (ACP) | Agent Network Protocol (ANP) |
|---|---|---|---|---|
| Main supporters/initiators | Anthropic | Google with 50+ industry partners | Cisco (AGNTCY initiative) | ANP open-source community |
| Main goals/focus areas | Providing structured external context for LLMs/agents; solving the M×N integration problem | Cross-vendor/framework heterogeneous agent interoperability, task collaboration, and dynamic negotiation | Structured, persistent multi-agent collaboration and workflows in enterprise environments | Agent connection and collaboration on the internet |
| Communication style | Client-server | Client-remote agent (peer-to-peer concept, may have intermediaries), task-oriented | RESTful API, execution-based messaging, supports stateful threads | Peer-to-peer protocol architecture |
| Core technologies used | JSON-RPC, HTTP, SSE | HTTP(S), JSON-RPC 2.0, SSE | RESTful APIs, JSON | W3C DIDs, JSON-LD, W3C VCs, end-to-end encryption |
| Discovery mechanism | Typically application-integrated or managed by the host application | Agent Cards (JSON metadata, typically published at /.well-known/agent.json) | Agent Directory, Agent Manifests (JSON) | Based on RFC 8615, typically published at /.well-known/agent-descriptions |
| Identity management | OAuth 2.1 | Out-of-band authentication schemes | Depends on enterprise integration (e.g., OAuth) | W3C DIDs (Decentralized Identifiers) |
| Emphasized security features | Secure context acquisition (e.g., via TLS), local-first security | TLS, server authentication, client/user authentication | TLS, enterprise-grade security practices | TLS, end-to-end encryption, DID-based authentication |
| State management | Typically stateless or managed by the client/host application, though MCP servers may expose stateful resources | Long-running task state tracking (stateful interactions) | Stateful communication threads | Can support stateful interactions (determined by the application-protocol layer) |
| Key differentiators | Focuses on the "last mile" connection between models and tools/data; complementary to other protocols | Open standard for agent collaboration across different systems and vendors; supports multiple interaction modalities | Deep collaboration in controlled enterprise environments | Designed for agent interaction and collaboration in untrusted internet environments |
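The well-known-URI discovery pattern shared by A2A (/.well-known/agent.json) and ANP (/.well-known/agent-descriptions) can be sketched as follows. This is an illustrative sketch only: the domain, the Agent Card fields (name, url, skills), and the field choices are hypothetical examples, not normative parts of either specification, and no network request is made here.

```python
# Sketch of RFC 8615 well-known-URI discovery for agents. The domain and
# card structure below are hypothetical; consult the A2A/ANP specs for the
# actual Agent Card and agent-description schemas.
import json
from urllib.parse import urlunsplit

def well_known_url(domain: str, path: str = "agent.json") -> str:
    """Build the well-known discovery URL for a domain (RFC 8615 style)."""
    return urlunsplit(("https", domain, f"/.well-known/{path}", "", ""))

def parse_agent_card(raw: str) -> dict:
    """Extract the fields a client might need before contacting an agent."""
    card = json.loads(raw)
    return {
        "name": card.get("name"),
        "endpoint": card.get("url"),
        "skills": [s.get("id") for s in card.get("skills", [])],
    }

# A card a client might fetch from the discovery URL (hypothetical content).
sample_card = '''{
  "name": "example-agent",
  "url": "https://agents.example.com/a2a",
  "skills": [{"id": "translate"}, {"id": "summarize"}]
}'''

url = well_known_url("agents.example.com")
info = parse_agent_card(sample_card)
```

The design choice behind this pattern is that discovery requires nothing beyond a domain name: any client that knows a host can construct the well-known URL and fetch machine-readable metadata, without a central registry.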

Building AI-Native Data Networks Based on Agent Network Protocol

Current internet infrastructure is primarily designed for human interaction through browsers and graphical user interfaces. However, the rise of the Agentic Web requires us to reimagine a network environment more suitable for AI agents' native interactions. This "AI-native data network" would no longer be merely a platform for displaying human information, but an optimized space for agents to efficiently acquire data, invoke services, and collaborate.

The core characteristics of such a network would include:

AI-native data networks would be key infrastructure for the Agentic Web to fully realize its potential, enabling agents to interact with the digital world in their most proficient way (i.e., directly processing information through protocols and APIs), thereby catalyzing higher levels of automation, intelligence, and collaborative efficiency.
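The contrast between human-oriented pages and AI-native data can be made concrete with a small sketch. Below, the same product offer appears as an HTML fragment meant for humans and as a structured JSON-LD-style record meant for agents; the field names follow schema.org conventions, but the record itself is a hypothetical example, not a prescribed format.

```python
# Sketch: human-oriented HTML vs. an agent-oriented structured record.
# The JSON-LD-style record is hypothetical; field names echo schema.org.
import json

# What a browser renders for a human reader.
html_page = "<html><body><h1>Widget</h1><p>Only $19.99!</p></body></html>"

# What an AI-native endpoint could serve to an agent instead.
jsonld_record = '''{
  "@type": "Product",
  "name": "Widget",
  "offers": {"@type": "Offer", "price": "19.99", "priceCurrency": "USD"}
}'''

def agent_read_price(raw_jsonld: str) -> float:
    """An agent reads the price from typed fields: no scraping, no layout
    heuristics, and no breakage when the page design changes."""
    data = json.loads(raw_jsonld)
    return float(data["offers"]["price"])

price = agent_read_price(jsonld_record)  # 19.99
```

The point of the sketch is the asymmetry: extracting "$19.99" from the HTML requires brittle pattern matching, while the structured record gives the agent a typed, stable contract, which is precisely what an AI-native data network would provide at scale.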

Future Outlook: Reshaping the Open Network Through Connection

The evolution of the internet profoundly confirms a core principle: "Connection is Power." In a truly open, interconnected network, free interaction between nodes can maximize innovation potential and create enormous value. However, today's internet ecosystem is increasingly dominated by a few large platforms, with vast amounts of data and services confined within closed "digital islands," concentrating the power of connection in the hands of a few tech giants.

The advent of the Agentic Web era provides us with a historic opportunity to reshape this imbalanced landscape. Our goal is to drive the internet from its current generally closed, fragmented state back to its open, freely connected origins. In the future Agentic Web, each agent would simultaneously play the dual roles of information consumer and service provider. More importantly, every node should be able to discover, connect, and interact with any other node in the network without barriers. This vision of universal interconnection would greatly reduce the barriers to information flow and collaboration, returning the power of connection truly to each user and individual agent.

This marks an important shift: from platform-centric closed ecosystems to protocol-centric open ecosystems. In the latter, value acquisition depends more on the unique capabilities and contributions that participants bring to the network by following open protocols, rather than relying on control over a closed platform. This shift would stimulate more intense application-layer innovation and competition, as the key to success is no longer "locking in" users, but providing superior agent services, similar to the innovation patterns historically promoted by open protocols like TCP/IP and SMTP.

Building the Future of a Collaborative Agentic Web

Standardized agent network protocols are crucial for unleashing the potential of the Agentic Web, realizing certain aspects of the original semantic web vision, and fostering innovation. They are the cornerstone for building a network where machines can process information more intelligently and assist humans more effectively.

We urge all stakeholders to actively participate in the standardization process through the W3C. This is an opportunity to shape the future network—one that is more intelligent, collaborative, and empowering, built on foundations of openness and trust. A well-designed Agentic Web has tremendous transformative potential, and now is the critical moment to lay its solid foundation.

Security Considerations

This section is expected to be expanded, and we warmly welcome contributions from the security community. We are actively following relevant work within the W3C, including AI in the Browser, which will inform our approach to security considerations.

To be added.

References

  1. Tim Berners-Lee, James Hendler and Ora Lassila. The Semantic Web. Scientific American, 2001.
  2. Semantic Web – A Forgotten Wave of Artificial Intelligence?
  3. The Agentic Web: A Paradigm Shift in Web Architecture for an LLM-powered Internet
  4. LLM Agents Architecture: A Survey
  5. Building Multi-Agent Marketing Campaign with AGNTCY
  6. Agent Network Protocol Technical White Paper
  7. Towards a New Internet: Agent Network Protocol for Agentic Web
  8. Agent-to-Agent (A2A) Protocol Specification
  9. A2A: A new era of agent interoperability
  10. MCP & ACP: Decoding the Language of Models and Agents
  11. Model Context Protocol (MCP)