Four Protocols and No Consent
Arlo Gilbert
On April 8, Visa launched something called Intelligent Commerce Connect. The name is corporate mush, but what it does is not. It's a platform that lets AI agents browse products, compare prices, authenticate payments, and complete purchases. Not recommend purchases. Complete them. Your agent finds the flight, picks the seat, enters the card, and checks out. You don't touch anything.
Visa built four new protocols to make this work. Trusted Agent Protocol handles the handshake between user and agent. Machine Payments Protocol governs machine-to-machine payment instructions. Agentic Commerce Protocol defines how agents discover and select products. Universal Commerce Protocol is the interoperability layer across platforms. They brought in tokenization so the agent never sees your actual card number. They added scoped spend controls so you can set limits by amount, merchant, or product category. Over 30 partners are building in the sandbox right now. Hundreds of transactions have already gone through.
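The scoped spend controls can be pictured as a simple policy check the agent runs before any payment protocol is invoked. The sketch below is hypothetical — the field names and enforcement logic are my assumptions, not Visa's published schema:

```python
from dataclasses import dataclass, field

@dataclass
class SpendPolicy:
    """Hypothetical scoped spend controls: limits by amount, merchant, and category."""
    max_per_transaction: float                             # cap on any single purchase
    allowed_merchants: set = field(default_factory=set)    # empty set = any merchant allowed
    blocked_categories: set = field(default_factory=set)   # e.g. {"gambling"}

    def permits(self, amount: float, merchant: str, category: str) -> bool:
        if amount > self.max_per_transaction:
            return False
        if self.allowed_merchants and merchant not in self.allowed_merchants:
            return False
        if category in self.blocked_categories:
            return False
        return True
```

In this picture, every checkout passes through a gate like `permits()` before the agent is ever allowed to present a token to a merchant.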
The payments side is real. Tested. Functional. Impressive, even.
Nobody built the consent side.
What the agent does before it pays
Think about what an AI shopping agent actually does on its way to completing a purchase. It browses and searches across vendors. It reads reviews, compares prices, evaluates shipping options. It might check your calendar to estimate when you'll be home. It probably reviews your past purchases to figure out your size, your preferences, how much you're willing to pay before you flinch.
Every one of those steps generates data. Behavioral data. Financial data. Location data. Preference data. The kind of data that privacy laws exist to protect.
When you shop online yourself, you encounter consent banners. You accept terms of service. You decide whether to check the box that says "share my data with partners." The mechanisms are imperfect. Most people click through without reading. But they exist because every major privacy framework assumes a human is present when their data is processed. GDPR, CCPA, Brazil's LGPD. The human has the opportunity to say yes or no.
An AI agent shopping on your behalf doesn't read consent banners. It doesn't click "I agree." Visa's four new protocols specify payment authentication and tokenization in detail, but they have nothing to say about the data trail the agent leaves at every stop before checkout.
We've been here before (sort of)
In 1949, a businessman named Frank McNamara went to dinner at Major's Cabin Grill in Manhattan and realized he'd left his wallet in another suit. His wife had to come pay the bill. The story is probably embellished, but what happened next was real. McNamara and his lawyer Ralph Schneider created Diners Club. On February 8, 1950, they went back to the same restaurant and paid with a small cardboard card. By the end of that year, 20,000 people carried one.
What Diners Club actually changed was structural. Before it existed, every purchase was bilateral. Buyer and seller. Done. The charge card inserted a third party into the transaction: the payment network. The network handled the trust problem. You didn't need to know if the buyer was good for it. Diners Club guaranteed it.
Visa's Intelligent Commerce Connect inserts a fourth party. Buyer, seller, payment network, and now the agent. The agent that browses, decides, and transacts on the buyer's behalf. An autonomous commercial actor that didn't exist in the transaction model twelve months ago.
The difference is that the credit card didn't make decisions. It was a tool. The AI agent is an actor. It has preferences (or at least it optimizes toward them). It has a browsing history. It generates behavioral patterns that, in any regulatory framework that matters, constitute personal data belonging to the human it represents.
The gap I keep thinking about
I spend a lot of my time working on consent. It's what we do at Osano. We build the tools that help companies collect, manage, and honor privacy choices. So I notice when a major new commercial infrastructure launches and the word "consent" appears exactly zero times in the protocol documentation.
Existing consent frameworks assume a few things. The data subject is present. The data subject can read and respond to a disclosure. There's a moment where the human can say no. Agentic commerce breaks all three.
When your AI agent visits a vendor's website to compare prices, does the vendor's privacy policy apply to you? You never saw it. Your agent didn't process it either, because it was optimizing for price and delivery speed, not parsing legal text. When the agent shares your purchase history with a different vendor to negotiate a discount, that's a data transfer. Under GDPR, it almost certainly requires consent. Nobody asked.
Visa's spend controls address the financial risk. You can cap what the agent spends. Good. But there's no equivalent mechanism for data exposure. No toggle that says "don't share my browsing behavior with more than three vendors per transaction." No consent layer sitting between the agent's commercial actions and the privacy implications of those actions.
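If that missing layer existed, it could mirror the spend controls: a data-exposure policy the agent consults before every disclosure, not just before every payment. Everything below is a hypothetical sketch of the mechanism the article says nobody has built — none of it is a real Visa feature:

```python
from dataclasses import dataclass, field

@dataclass
class DataExposurePolicy:
    """Hypothetical consent controls over an agent's data trail."""
    max_vendors_per_transaction: int   # e.g. share browsing behavior with at most 3 vendors
    shareable_fields: set              # fields the user actually consented to disclose
    _vendors_seen: set = field(default_factory=set)

    def may_share(self, vendor: str, fields: set) -> bool:
        # Refuse any field the user never consented to share.
        if not fields <= self.shareable_fields:
            return False
        # Refuse new vendors once the per-transaction cap is reached.
        if (vendor not in self._vendors_seen
                and len(self._vendors_seen) >= self.max_vendors_per_transaction):
            return False
        self._vendors_seen.add(vendor)
        return True
```

The design choice mirrors the payments side: just as the issuer enforces a spend cap the agent cannot exceed, a consent layer would enforce disclosure caps the agent cannot talk its way around while optimizing for price.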
What they got right, and what they skipped
I want to be fair. Visa's protocol work is solid engineering. The tokenization is thoughtful. The delegation token model, where the card issuer confirms the agent has authority, is a smart approach to authentication. The scoped spend controls are a real feature, not a checkbox. They clearly thought hard about fraud prevention and payment security.
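The delegation token idea reduces to the issuer signing a statement of the agent's authority that a merchant can verify. The sketch below is an illustrative stand-in using a shared-key HMAC — the claim names, and the use of a symmetric key rather than the issuer's real signing infrastructure, are my simplifications, not Visa's actual token format:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stand-in for the card issuer's real signing key

def mint_delegation_token(agent_id: str, cardholder: str, scope: str) -> dict:
    """Issuer attests that this agent may transact for this cardholder."""
    claims = {"agent": agent_id, "cardholder": cardholder, "scope": scope}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_delegation_token(token: dict) -> bool:
    """Merchant-side check that the authority claim really came from the issuer."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])
```

A production scheme would use asymmetric signatures so merchants never hold the issuer's key, but the shape is the same: authority is proven by the issuer's signature, not asserted by the agent.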
What they didn't think about, or at least didn't ship, is who's responsible for the data the agent generates along the way. The protocols define how an agent authenticates a payment. They're silent on how an agent handles your personal data at the twelve steps before the payment.
Edgar, Dunn & Company projects the agentic commerce market at $1.7 trillion by 2030. That's a lot of commercial activity conducted by software, generating data about humans, with no governance framework in between.
So who builds it?
The payments industry moved fast. Credit where it's due. Visa shipped real infrastructure while most of the AI industry was still debating what "agentic" means.
But the privacy side of agentic commerce is still a whitepaper problem. Nobody's shipping consent infrastructure for autonomous purchasing agents. Not the payment networks. Not the browser vendors. Not the AI platforms building the agents. Not the regulators, who are still mostly focused on whether chatbots hallucinate.
Someone has to build the consent layer for agentic commerce. Not just how the agent pays, but how it handles your data at every step before it pays. How it discloses. How it honors your preferences. How it declines a vendor's data collection on your behalf when you aren't there to decline it yourself.
If you're building agentic products right now, if you're integrating with Visa's platform, if you're in the room where these protocols get finalized: bring a privacy engineer. The payments team did their job. The consent team hasn't been hired yet.