The Internet of AI Agents: From Billions to Trillions
We built the internet for humans. Documents linked to documents. Browsers fetched pages. Search engines indexed text. The protocols we created—DNS, HTTP, TCP/IP—assumed a human would be reading the result, clicking the link, making the decision.
That assumption is breaking.
In our Masters of Automation episode with Professor Ramesh Raskar—MIT Media Lab researcher and director of Project NANDA—a different vision emerges: an internet where AI agents are the primary actors. Not thousands. Not millions. Trillions of autonomous agents discovering each other, negotiating in milliseconds, delegating tasks, and transacting on our behalf.
"Can we upgrade the current internet—or do we need a new one entirely?" — Ramesh Raskar
The answer, it turns out, is both.
Why the current internet won't scale for agents
DNS was designed for humans typing domain names. Certificate authorities were built for websites serving browsers. Our entire trust infrastructure assumes a person will verify the padlock icon, read the certificate details, make a judgment call.
But what happens when your AI agent needs to:
- Find another agent that can book flights in Japanese
- Verify that agent is who it claims to be
- Negotiate terms in 50 milliseconds
- Execute a transaction without human oversight
- Remember the interaction for future reputation scoring
DNS can't do this. Neither can traditional PKI. The bottleneck isn't bandwidth or compute—it's identity and discovery at machine speed.
NANDA: DNS for the agentic era
Project NANDA (Networked Agents and Decentralized AI) has been ten years in development at MIT Media Lab. It introduces the foundational infrastructure for what Raskar calls the "Internet of AI Agents"—a parallel network where agents can discover, authenticate, and collaborate without centralized gatekeepers.
The architecture addresses four critical chokepoints:
Discovery: How does an agent find another agent with specific capabilities? NANDA implements a decentralized registry—think DNS, but for agents. Agents register their capabilities, and other agents query the registry to find matches. Unlike DNS, it's designed for sub-second resolution of newly spawned agents.
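The registry idea above can be sketched in a few lines. This is a minimal in-memory stand-in, not NANDA's actual API: the class, record fields, and capability strings are all hypothetical, and a real decentralized registry would add replication, authentication, and sub-second propagation of new entries.

```python
from dataclasses import dataclass


@dataclass
class AgentRecord:
    agent_id: str
    capabilities: set[str]
    endpoint: str


class CapabilityRegistry:
    """In-memory stand-in for a decentralized agent registry."""

    def __init__(self) -> None:
        # Index records by capability for fast lookup.
        self._by_capability: dict[str, list[AgentRecord]] = {}

    def register(self, record: AgentRecord) -> None:
        for cap in record.capabilities:
            self._by_capability.setdefault(cap, []).append(record)

    def find(self, capability: str) -> list[AgentRecord]:
        return self._by_capability.get(capability, [])


registry = CapabilityRegistry()
registry.register(
    AgentRecord("agent:flight-jp", {"book-flight", "lang:ja"},
                "https://example.invalid/agent")
)
matches = registry.find("book-flight")
```

The key design point is the inversion: agents publish what they can do, and consumers query by capability rather than by name, which is exactly what DNS cannot express.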
Identity: How do you verify an agent is who it claims to be? NANDA uses cryptographically verifiable AgentFacts—structured metadata that includes capabilities, credentials, and behavioral history, all anchored to Decentralized Identifiers (DIDs).
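The verification step can be illustrated with a toy signing scheme. Real AgentFacts would be signed with an asymmetric key bound to a DID; an HMAC over canonical JSON stands in here purely to keep the sketch stdlib-only, and the field names in the facts document are invented for illustration.

```python
import hashlib
import hmac
import json


def sign_agent_facts(facts: dict, key: bytes) -> str:
    """Sign a canonical JSON serialization of the facts.

    Canonicalizing with sort_keys ensures the same facts always
    produce the same bytes, so signatures are reproducible.
    """
    payload = json.dumps(facts, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_agent_facts(facts: dict, key: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign_agent_facts(facts, key), signature)


facts = {
    "id": "did:example:agent123",  # hypothetical DID
    "capabilities": ["book-flight"],
    "issued": "2025-01-01",
}
key = b"shared-secret"
sig = sign_agent_facts(facts, key)

ok = verify_agent_facts(facts, key, sig)
# Any tampering with the claimed capabilities breaks the signature.
tampered = verify_agent_facts({**facts, "capabilities": ["admin"]}, key, sig)
```

The property that matters is the last line: an agent cannot quietly upgrade its own claimed capabilities after the facts were attested.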
Federation: How do agents across different organizations and protocols communicate? NANDA bridges Anthropic's MCP, Google's A2A, Microsoft's NLWeb, and standard HTTPS. The NANDA Adapter handles protocol translation automatically.
Attestation: How do you trust an agent's claimed capabilities? Each capability assertion links to credentialing paths with revocation status management. An agent claiming to handle HIPAA-compliant data must prove it, cryptographically.
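A minimal sketch of that check, assuming hypothetical issuer and credential identifiers: a claimed capability is trusted only if some attestation covers it, comes from an issuer the verifier trusts, and has not been revoked.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Attestation:
    capability: str
    issuer: str
    credential_id: str


def is_trusted(claimed: str, attestations: list[Attestation],
               trusted_issuers: set[str], revoked: set[str]) -> bool:
    """Return True iff an unrevoked attestation from a trusted
    issuer covers the claimed capability."""
    return any(
        a.capability == claimed
        and a.issuer in trusted_issuers
        and a.credential_id not in revoked
        for a in attestations
    )


atts = [Attestation("hipaa-data", "issuer:health-authority", "cred-42")]

ok = is_trusted("hipaa-data", atts, {"issuer:health-authority"}, set())
# Revoking the credential immediately invalidates the claim.
after_revocation = is_trusted("hipaa-data", atts,
                              {"issuer:health-authority"}, {"cred-42"})
```

Revocation status is the part DNS-era infrastructure handles worst: here it is just a set membership test, evaluated on every request rather than cached until a human notices.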
Zero Trust for autonomous agents
Traditional Zero Trust Network Access (ZTNA) assumes a human is authenticating. NANDA extends this to Zero Trust Agentic Access (ZTAA)—security principles designed for autonomous actors.
The threat model is different when agents are involved:
- Capability spoofing: An agent claims abilities it doesn't have
- Impersonation attacks: An agent pretends to be another agent
- Data exfiltration: An agent leaks sensitive information to unauthorized parties
- Prompt injection via agent communication: Malicious agents manipulating other agents
NANDA's response is Agent Visibility and Control (AVC)—mechanisms that let enterprises govern agent behavior while preserving operational autonomy. Every interaction is traceable. Every capability is verifiable. Every agent has a reputation.
The five-phase evolution
Raskar describes the evolution of AI systems in five phases, mirroring computing history:
- Mainframe era: Centralized LLMs owned by a few companies (where we are now)
- Personal computing: On-device models running locally (emerging)
- Networked computing: Agents communicating peer-to-peer (NANDA's target)
- Web era: Open discovery and interoperability across agents
- Internet of Agents: Trillions of specialized agents forming a global intelligence layer
We're somewhere between phases 1 and 2, rapidly approaching phase 3. The infrastructure decisions made now will determine whether phase 5 is decentralized and open—or controlled by whoever builds the best walled garden.
What agents will actually do
The vision isn't abstract. Here's what a mature agent ecosystem enables:
Personal agents that persist across platforms. Your agent knows your preferences, maintains context over months, and represents you in negotiations. It doesn't reset when you switch apps.
Specialized agents that do one thing extraordinarily well. A legal research agent. A travel optimization agent. A medical literature synthesis agent. They register their capabilities in NANDA and wait for requests.
Agent marketplaces where capabilities are priced and traded. Your agent finds the best flight-booking agent, checks its reputation, negotiates a fee, and delegates the task—all without your involvement.
Cross-organizational collaboration where agents from different companies work together on complex tasks. The plumbing of authentication, authorization, and accountability is handled by infrastructure, not custom integration.
The MCP + A2A + NANDA stack
Three protocols are converging to make this possible:
Model Context Protocol (MCP) from Anthropic defines how agents interact with tools and data sources. It's the "how-to"—structured ways for agents to use external capabilities.
Agent-to-Agent (A2A) from Google defines how agents talk to each other. It's the "who-to-talk-to"—JSON-based Agent Cards, discovery mechanisms, and conversation protocols.
NANDA from MIT provides the global layer—discovery across all registries, identity verification, and cross-protocol interoperability. It's the glue that lets an MCP agent find and work with an A2A agent without either knowing the other's native protocol.
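To make the A2A piece concrete, here is an illustrative Agent Card with a minimal validation step before trusting it. The card is simplified and the field names may differ from the current A2A specification; the `REQUIRED` set is this sketch's assumption, not the spec's.

```python
import json

# Simplified, illustrative Agent Card; the real A2A schema has more
# fields and may name them differently.
card_json = json.dumps({
    "name": "flight-booker",
    "description": "Books flights; responds in Japanese.",
    "url": "https://example.invalid/a2a",
    "skills": [{"id": "book-flight", "name": "Flight booking"}],
})

REQUIRED = {"name", "url", "skills"}


def parse_agent_card(raw: str) -> dict:
    """Parse and minimally validate an agent card before trusting it."""
    card = json.loads(raw)
    missing = REQUIRED - card.keys()
    if missing:
        raise ValueError(f"agent card missing fields: {sorted(missing)}")
    return card


card = parse_agent_card(card_json)
```

A bridging layer like the NANDA Adapter would sit one level above this: given a validated card, it decides whether to speak A2A, MCP, or plain HTTPS to the endpoint the card advertises.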
Together, they form a stack that's open, interoperable, and designed for decentralization.
What could go wrong
The same infrastructure that enables beneficial agent collaboration enables less savory applications:
- Agent spam: Millions of low-quality agents flooding registries with fake capabilities
- Manipulation markets: Agents optimized to exploit other agents' decision-making
- Concentration risk: A few registries becoming de facto gatekeepers despite decentralized design
- Accountability gaps: When an agent causes harm, who's responsible?
NANDA addresses some of these through reputation systems, capability attestation, and behavioral tracking. But the governance questions remain open. Who decides what agents can register? Who arbitrates disputes? Who sets the rules for this new internet?
The parallel to early web decisions
In 1995, decisions about DNS, HTTP, and HTML seemed technical and obscure. They determined the shape of the internet for decades—who could publish, who controlled discovery, how trust worked.
We're at a similar inflection point. The protocols and registries being built now will determine whether the Internet of AI Agents is:
- Open or closed
- Decentralized or concentrated
- Privacy-preserving or surveillance-enabling
- Human-aligned or autonomously drifting
Project NANDA represents one vision: open infrastructure, verifiable identity, decentralized discovery. Whether it becomes the foundation or a footnote depends on adoption, governance, and the competitive dynamics of the next few years.
A closing thought
The internet we built assumed humans at both ends. The internet we're building assumes machines—agents that persist, learn, transact, and collaborate at scales and speeds humans can't match.
Raskar's question haunts me: Do we upgrade the current internet, or build a new one? The honest answer is that we're doing both, simultaneously, with incomplete knowledge of where either path leads.
What we can control is the infrastructure. Open protocols. Verifiable identity. Decentralized discovery. The plumbing decisions that seem boring now but determine everything later.
The Internet of AI Agents is coming whether we design it or not. The question is whether we're intentional about the design.
Masters of Automation explores the intersection of artificial intelligence, human autonomy, and the critical choices facing civilization. We believe the future remains unwritten, though it approaches with unprecedented speed.