Insights

Perspectives on AI, payments, and financial infrastructure—from someone who has built and operated these systems.

February 2026

Visa Built the Agent Checkout Layer. Most Banks Don’t Know Yet.

In late 2025, Visa shipped the Trusted Agent Protocol—an open framework that lets merchants distinguish between bots and legitimate AI agents acting on behalf of consumers. Google launched AP2 with 60+ partners including PayPal, Coinbase, and Mastercard. Mastercard shipped Agent Pay with Microsoft and IBM.

None of this is theoretical. Real agent-initiated transactions are already clearing, and Visa projects millions of consumers will use AI agents to complete purchases by the 2026 holiday season.

I spent years at Visa designing the tokenization infrastructure that sits underneath all of this—the Visa Token Service API that powers Apple Pay, Google Pay, and Samsung Pay. The new agent protocols are built on that same identity and token layer. The difference is that now the entity initiating the transaction isn’t a human tapping a phone. It’s an autonomous agent with its own credentials.

Here’s the problem: most banks have no strategy for authenticating, authorizing, or auditing agent-initiated transactions. Their fraud systems are tuned for human behavior patterns. Their compliance frameworks assume a person is on the other end. Their vendor contracts don’t address agent liability.
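To make "authenticating and authorizing an agent" concrete, here is a minimal sketch of what agent-aware authorization could look like. Every name, field, and the scope format below is hypothetical; this is not Visa's Trusted Agent Protocol, just the shape of the problem: verify a signed credential that binds an agent to a consumer account and a spending scope, and reject anything outside it.

```python
import hmac
import hashlib
from dataclasses import dataclass

@dataclass
class AgentCredential:
    agent_id: str       # identifier the agent presents at checkout
    owner_account: str  # consumer the agent acts on behalf of
    scope: str          # e.g. "purchase:groceries:max_100_usd"
    signature: str      # hex HMAC over the fields above

def sign(secret: bytes, agent_id: str, owner: str, scope: str) -> str:
    """Issuer-side signing of the credential fields."""
    msg = f"{agent_id}|{owner}|{scope}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def authorize(cred: AgentCredential, secret: bytes, amount: float) -> bool:
    """Reject unless the credential verifies AND the purchase fits its scope."""
    expected = sign(secret, cred.agent_id, cred.owner_account, cred.scope)
    if not hmac.compare_digest(expected, cred.signature):
        return False  # unknown or tampered credential: treat as a bot
    # Crude scope check: the scope's tail encodes a spend ceiling in USD.
    ceiling = float(cred.scope.rsplit("max_", 1)[-1].removesuffix("_usd"))
    return amount <= ceiling

secret = b"issuer-signing-key"
cred = AgentCredential(
    agent_id="agent-123",
    owner_account="acct-42",
    scope="purchase:groceries:max_100_usd",
    signature=sign(secret, "agent-123", "acct-42",
                   "purchase:groceries:max_100_usd"),
)
print(authorize(cred, secret, 75.0))   # within scope -> True
print(authorize(cred, secret, 250.0))  # exceeds ceiling -> False
```

The point of the sketch is the shape of the decision: credential verification comes first, and the fraud and limit logic runs against the agent's delegated scope, not against human behavioral signals.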

The banks that figure out agent-ready infrastructure in the next 12 months will have a structural advantage. The ones that wait for regulatory guidance will be playing catch-up against institutions that already have live agent transaction flows.

This is exactly why we built KnowYourAgent—to treat agent identity as a first-class financial primitive, the same way KYC treats human identity. The infrastructure layer for autonomous commerce is being laid right now. The question is whether your institution is building on it or will be disrupted by it.

January 2026

The GENIUS Act Passed. Here’s What Banks Actually Need to Do.

The GENIUS Act became law in mid-2025. The FDIC published proposed implementation rules in December. The OCC has conditionally approved digital asset bank charters. For the first time, US banks have a clear legal path to issue payment stablecoins.

I was CTO at a digital asset bank navigating stablecoin compliance before any of this existed—before there was a framework, before there were guidelines, when every decision was a judgment call against ambiguous regulatory signals. That experience is why I can say with confidence: the gap between regulatory permission and operational readiness is enormous.

Having the legal right to issue a stablecoin and having the infrastructure to actually do it are completely different problems. You need token minting and burning workflows tied to your core banking system. You need real-time reserve attestation. You need wallet infrastructure that meets BSA/AML requirements. You need smart contract audit processes. You need a compliance architecture that maps stablecoin-specific risks to your existing examination framework.
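As a sketch of the first two items, here is the core invariant a mint/burn workflow has to preserve: tokens outstanding can never exceed attested reserves. Everything below is hypothetical and in-memory; a real issuer would gate these state transitions on core-banking settlement and publish signed, third-party reserve attestations.

```python
from dataclasses import dataclass

@dataclass
class StablecoinLedger:
    """Toy issuance ledger. The invariant: reserves_usd >= token_supply."""
    reserves_usd: float = 0.0   # fiat held in reserve at the bank
    token_supply: float = 0.0   # stablecoins outstanding on-chain

    def attest(self) -> bool:
        # A real deployment would publish an independent attestation here.
        return self.reserves_usd >= self.token_supply

    def deposit(self, usd: float) -> None:
        """Backing fiat settles into reserves before any mint."""
        self.reserves_usd += usd

    def mint(self, amount: float) -> None:
        """Mint only against reserves not already backing supply."""
        if self.reserves_usd - self.token_supply < amount:
            raise ValueError("mint would break the 1:1 reserve invariant")
        self.token_supply += amount

    def burn_and_redeem(self, amount: float) -> None:
        """Redemption burns tokens first, then releases reserves."""
        if amount > self.token_supply:
            raise ValueError("cannot burn more than outstanding supply")
        self.token_supply -= amount
        self.reserves_usd -= amount

ledger = StablecoinLedger()
ledger.deposit(1_000.0)
ledger.mint(1_000.0)
assert ledger.attest()          # fully reserved after mint
ledger.burn_and_redeem(400.0)
assert ledger.attest()          # invariant survives redemption
```

The hard part in production isn't this arithmetic; it's that "deposit settled" and "tokens minted" live in two different systems (core banking and a blockchain), and the workflow has to make the invariant hold across both.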

Most banks don’t have any of this. The ones exploring stablecoin issuance are discovering that their technology teams have never built on-chain, their compliance teams have never underwritten token-based products, and their boards don’t have a framework for evaluating the risks.

We built StablecoinRoadmap specifically for this moment—working templates for wallets, payment gateways, and remittance platforms, with sandbox simulation so banks can validate payment flows before committing capital, or regulatory standing, to a live deployment.

The regulatory door is open. The question is execution.

December 2025

Your Bank’s AI Vendor Just Turned Off Its Best Feature. Here’s Why.

I keep hearing the same story from bank CTOs: they bought an AI-enabled platform, ran a successful pilot, got excited about the results—and then their compliance team asked the vendor to disable the generative AI features before production rollout.

This isn’t a technology failure. It’s a governance failure. And it’s happening everywhere.

The core issue is that the Model Risk Management guidance most banks operate under—SR 11-7—was written in 2011. It was designed for logistic regression models and credit scorecards, not for large language models that generate novel outputs on every inference. Bank examiners are applying a framework built for deterministic models to probabilistic systems, and the result is paralysis.
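The contrast is easy to state in code. A toy illustration (neither function resembles any bank's actual model): an SR 11-7-era scorecard is a fixed function of its inputs, while a generative model samples, so identical inputs need not produce identical outputs.

```python
import random

# An SR 11-7-era model: deterministic. Same inputs, same score, every run,
# so validation can exhaustively characterize its behavior.
def credit_scorecard(income: float, utilization: float) -> float:
    return 300 + 0.004 * income - 200 * utilization

# A stand-in for a generative model: probabilistic. The output is a sample,
# so two identical calls can legitimately disagree.
def toy_generative_model(prompt: str, rng: random.Random) -> str:
    completions = ["approve", "decline", "refer to an underwriter"]
    return rng.choice(completions)

# Identical inputs, identical output -- the property SR 11-7 assumes.
assert credit_scorecard(80_000, 0.30) == credit_scorecard(80_000, 0.30)

# Identical prompt, multiple distinct outputs across draws.
rng = random.Random(0)
samples = {toy_generative_model("same prompt every time", rng)
           for _ in range(100)}
assert len(samples) > 1
```

Validation techniques built around backtesting a fixed function don't transfer cleanly to a system whose output on any given call is a draw from a distribution, which is the gap examiners and vendors are both stuck on.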

Vendors are responding rationally. Rather than risk a finding in a regulatory exam, they’re shipping products with AI features turned off by default for banking clients. The banks that want the features have to explicitly opt in, accept the model risk, and build their own validation framework—which most don’t have the internal expertise to do.

The Bank Policy Institute has been flagging this for months: regulatory inaction on AI guidance is itself a risk. Banks aren’t rejecting AI because it doesn’t work. They’re rejecting it because the supervisory framework makes adoption irrational for anyone who has to sit in front of an examiner.

This is the exact problem our advisory practice was built for. We’ve been on both sides—building AI systems at Visa that had to survive regulatory scrutiny, and now helping banks build governance frameworks that let them deploy AI without hoping their examiner doesn’t ask hard questions. The answer isn’t waiting for updated guidance. It’s building an internal framework that’s defensible under the current one.