GoDaddy just announced a “Trusted Identity Naming System for AI Agents.”

At first glance, GoDaddy's blog post sounds good. The promise of an open system is catchy: “New agnostic framework allows anyone to easily find, verify and trust AI agents.” A way to give artificial intelligences unique names, “build confidence,” and let humans know which agents to trust.

But it may quietly reintroduce the oldest form of digital control: deciding who gets to exist online. In practice, it reads like the oldest trick on the internet — turning trust into a service.

A Familiar Pattern

Every decade or so, someone rediscovers that there’s money in “managing trust.” In the 2000s it was Extended Validation certificates. Then came the blue-tick era of “verified” users. Now it’s the AI agent namespace: a new market for digital legitimacy.

GoDaddy isn’t proposing a decentralized identity system; it’s proposing a central ledger of permission. No standards body, no RFC, no hint of open governance. Just a corporate database that decides which AI gets to be called “trusted.”

If you can name it, you can price it. If you can price it, you can control it.

A few weeks ago, I wrote about Cloudflare’s Annual Founders’ Letter, where the company proposed a very different future: one where content and creators earn credibility through transparency and attribution, not certification.

Cloudflare argued that the web’s infrastructure should remain neutral — that the problem isn’t who’s allowed to speak, but how we measure and reward honest contribution. Cloudflare wants an open web of provenance; GoDaddy seems to prefer a registered one.

GoDaddy seems to have “missed” that memo. Its new proposal feels less like a protocol and more like a registry — a cosmetic rebranding of the same old authority model.

Two Philosophies

| Aspect | Cloudflare | GoDaddy |
|---|---|---|
| Trust basis | Provenance and behavior | Authority and registration |
| Governance | Open ecosystem | Proprietary namespace |
| Incentive model | Merit-based recognition | Pay-to-participate legitimacy |
| Risk | Fragmented signals | Centralized gatekeeping |

When “Trusted” Means “Approved”

If systems like this gain traction, the web will quietly fracture again. AI outputs from “unregistered” agents will be filtered, demoted, or simply ignored. Platforms will claim it’s about safety, regulators will nod approvingly, and a few large registrars will quietly own the authentication layer of machine communication.

As “AI ingestion” replaces search engine crawling, creators will fight to catch the eyes of AI just as they once fought to rank first on Google. Only this time, it will come at a price.

An Open Alternative

It doesn’t have to go that way. An AI’s identity could be verifiable through open mechanisms: decentralized identifiers (DIDs), DNSSEC, cryptographic provenance. Anyone could issue or verify trust claims, and the system would evolve through use, not decree.
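To make the open alternative concrete, here is a minimal sketch of what a registrar-free trust claim could look like. This is an illustration, not a real DID implementation: a production system would use asymmetric signatures (e.g. Ed25519) so that anyone can verify a claim with only the issuer's public key; to keep the example dependency-free, an HMAC stands in for the signature, and every identifier and function name below is hypothetical.

```python
import hashlib
import hmac
import json

def make_agent_id(public_material: bytes) -> str:
    """Derive a stable identifier from public key material, in the spirit
    of did:key identifiers -- no central registry needed to mint it."""
    return "did:example:" + hashlib.sha256(public_material).hexdigest()[:16]

def issue_claim(issuer_secret: bytes, agent_id: str, claim: str) -> dict:
    """Any party can issue a claim about an agent; trust accrues from who
    signed what, not from a registrar's approval."""
    body = {"subject": agent_id, "claim": claim}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(issuer_secret, payload, hashlib.sha256).hexdigest()
    return {**body, "sig": sig}

def verify_claim(issuer_secret: bytes, signed: dict) -> bool:
    """Verification recomputes the signature over the claim body, so any
    tampering with the subject or the claim text is detectable."""
    body = {k: v for k, v in signed.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(issuer_secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

secret = b"issuer-demo-secret"
agent = make_agent_id(b"agent-public-key-bytes")
signed = issue_claim(secret, agent, "behaves-honestly")
print(verify_claim(secret, signed))                       # True
tampered = {**signed, "claim": "certified-by-registrar"}
print(verify_claim(secret, tampered))                     # False
```

The point of the sketch is structural: the identifier is derived from public material and the claim is checked cryptographically, so no single company sits between an agent and the parties deciding whether to trust it.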

That’s how the internet used to work, before trust became another product line.


GoDaddy wants to name the machines. Cloudflare wants to prove who they are. Both say they’re protecting the web. Only one still remembers what it’s made of.

The question isn’t whether AI will have names. It’s who gets to write the phonebook.

Some Personal Considerations

I just read Cloudflare’s 2025 Annual Founders’ Letter and found it quite insightful. Much of today’s lobbying, often driven by governments and large media companies, pushes toward a strictly regulated internet, a model profoundly disrespectful of the values the net was originally built upon. These lobbying efforts can be summed up in one word: censorship, with the erosion of privacy not far behind.

Cloudflare’s approach, on the other hand, feels very different: a liberal, agnostic stance that doesn’t deny the need for content monetization. As they put it: “what fundamentally needed to change was not more content moderation at the infrastructure level but instead a healthier incentive system for content creation.”

AI has dramatically changed the rules. Instead of driving users to websites where creators might at least earn some reward in terms of advertising revenue or personal recognition, AI agents now consume the content for us, leaving (at best) only faint traces of attribution.

Sites that built their reputation on answers (like Stack Exchange) have been torn apart by AI. The simplest LLM can now surface correct answers without the hassle of scrolling through hundreds of posts.

Social media platforms, on the other hand, seem more resistant: users still feel the urge to open the original article so they can comment, debate, or rant. But even this model has its dark side: the rise of content farms that thrive on clickbait or ragebait.

Cloudflare’s vision is to create a new business model that incentivizes genuine content creation: rewarding authors when their work is used to train or answer through AI systems. It’s almost like the old “bounty” model of Stack Exchange, but with real money at stake.

If this model works, it could realign the incentives of the internet, giving creators a fair share in the AI-driven future. The real challenge is whether the industry will value people (users and creators) who make the web worth having over the seemingly unlimited funds that governments and disinformation factories are ready to pour into the system.
