Insights

Perspectives on AI, payments, and financial infrastructure. From someone who has built and operated these systems.

April 2026

The Research Behind Our Compliance Scenario Generator

When we started building a synthetic compliance scenario generator into our stablecoin platform, the hard problem wasn’t the rule packs. MiCA, OFAC, and reserve-composition rules have been stable for a year. The hard problem was producing test data that could walk into a regulator’s office without embarrassing anyone — reproducible, auditable, and diverse enough to actually stress a rule pack rather than happily pass it.

We didn’t invent the answer. We adapted it from Davidson, Seguin, Bacis, Ilharco, and Harkous — Reasoning-Driven Synthetic Data Generation and Evaluation, published in TMLR in March. The paper introduces Simula: a framework that generates synthetic data by first mapping a target domain into explicit taxonomies, then running an agentic generator-and-critic loop against those taxonomies to produce diverse, complex, reproducible examples.

The shape transfers cleanly to compliance. Financial-crime typologies (layered remittance, sanctioned-counterparty proximity, reserve-drift patterns) sit where Simula’s taxonomy nodes sit. The generator produces synthetic counterparty graphs and transaction sequences. The critic rejects scenarios that don’t exhibit the typology they were meant to test.
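To make the loop concrete, here is a minimal sketch of the generator-and-critic pattern described above. Everything in it is illustrative: the typology names, the `Scenario` shape, and the acceptance rule are toy stand-ins, not our production schema or Simula's actual implementation. The one load-bearing idea is that a fixed seed makes the whole refinement loop reproducible.

```python
import random
from dataclasses import dataclass, field

# Hypothetical typology nodes standing in for a full taxonomy tree.
TAXONOMY = {
    "layered_remittance": {"min_hops": 3},
    "sanctioned_counterparty_proximity": {"max_distance": 2},
}

@dataclass
class Scenario:
    typology: str
    transactions: list = field(default_factory=list)

def generate(typology: str, rng: random.Random) -> Scenario:
    """Generator: draft a synthetic transaction sequence for a typology node."""
    hops = rng.randint(1, 6)
    txns = [{"hop": i, "amount": rng.randint(100, 10_000)} for i in range(hops)]
    return Scenario(typology=typology, transactions=txns)

def critic(scenario: Scenario) -> bool:
    """Critic: accept only scenarios that actually exhibit their typology."""
    if scenario.typology == "layered_remittance":
        return len(scenario.transactions) >= TAXONOMY["layered_remittance"]["min_hops"]
    return True

def sample_scenario(typology: str, seed: int, max_refinements: int = 25) -> Scenario:
    """Generator-critic loop: redraft until the critic accepts, or give up."""
    rng = random.Random(seed)  # fixed seed keeps the whole loop reproducible
    for _ in range(max_refinements):
        candidate = generate(typology, rng)
        if critic(candidate):
            return candidate
    raise RuntimeError(f"no accepted scenario for {typology!r} after {max_refinements} tries")

scenario = sample_scenario("layered_remittance", seed=42)
```

Because the generator draws from a seeded RNG, re-running `sample_scenario` with the same seed replays the same drafts and rejections, which is what lets a reviewer reproduce a scenario later.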

The pipeline

Pipeline mapping (the original is a four-stage diagram; rendered here as text). Simula-labeled steps follow Davidson et al. (2026); the adaptations are our compliance-domain changes.

  1. Map taxonomy. Simula: factors expanded into a taxonomy tree. Our adaptation: AML, sanctions, and reserve typologies as nodes.
  2. Sample & plan. Simula: nodes sampled into meta-prompts. Our adaptation: typology nodes into a scenario plan (seed + assumptions).
  3. Generate. Simula: generator drafts sample proposals. Our adaptation: counterparty graph + transaction sequence.
  4. Critique loop. Simula: critic verdict, refine on reject (rejected drafts loop back to generation). Our adaptation: a rule-aware check that the scenario exhibits its typology.

Output: scenario + fingerprint (manifest hash, rule-pack version, assumption bundle). Simula stages shown: taxonomy generation (§2.1), agentic sampling and critic refinement (§2.2). Compliance-specific additions, not in the paper: provenance fingerprint, append-only audit ledger, synthetic-only enforcement at the schema layer, and rule-pack versioning on every dossier export.

What we added

Simula’s contribution is the methodological spine — reasoning-first taxonomy expansion, agentic refinement, critic-gated quality. What we layered on for a regulator-facing deployment:

  1. Provenance fingerprint. Every scenario carries a hash over its manifest, rule-pack version, and assumption bundle. A reviewer can prove today’s scenario is the same one an officer ran last quarter with a string match.
  2. Append-only audit ledger. Dossier exports write a row to an org-scoped Supabase table where UPDATE and DELETE are hard-blocked at the database layer. Reviewers see the full run history.
  3. Synthetic-only enforcement. The schema rejects any attempt to mix synthetic scenarios with live customer records. This is a compliance constraint, not a data-science one.
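The fingerprint in item 1 can be as simple as a hash over a canonical serialization of the provenance bundle. The sketch below assumes JSON-serializable manifests and an illustrative rule-pack version string; our actual field names differ, but the property that matters is shown: identical inputs yield identical digests, and any change to the rule pack or assumptions changes the fingerprint.

```python
import hashlib
import json

def scenario_fingerprint(manifest: dict, rule_pack_version: str, assumptions: dict) -> str:
    """Hash the provenance bundle so two runs can be compared by string equality.

    Canonical JSON (sorted keys, fixed separators) makes the digest stable
    across serializer runs; any change to the manifest, rule-pack version,
    or assumption bundle produces a different digest.
    """
    bundle = {
        "manifest": manifest,
        "rule_pack_version": rule_pack_version,
        "assumptions": assumptions,
    }
    canonical = json.dumps(bundle, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Same inputs: same fingerprint. Bump the rule pack: different fingerprint.
fp_a = scenario_fingerprint({"typology": "layered_remittance", "seed": 42}, "mica-2026.03", {"fx": "fixed"})
fp_b = scenario_fingerprint({"typology": "layered_remittance", "seed": 42}, "mica-2026.03", {"fx": "fixed"})
fp_c = scenario_fingerprint({"typology": "layered_remittance", "seed": 42}, "mica-2026.04", {"fx": "fixed"})
```

This is why the reviewer's check reduces to a string match: if the digest stored last quarter equals the digest computed today, the scenario, rule pack, and assumptions are byte-for-byte the same bundle.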

The honest caveat

Reasoning-first generation only produces defensible data if the underlying reasoning is actually reasonable. A well-structured taxonomy applied to a weak model produces well-structured nonsense. Davidson et al. benchmark this empirically across several datasets. Any platform deploying the approach for a regulated domain has to do the same work in that domain — which is why our engine-disposition scoring sits in a separate eval harness, and why the first live corridor will run synthetic and real pilot data side-by-side rather than swapping one for the other.

February 2026

Visa Built the Agent Checkout Layer. Most Banks Don’t Know Yet.

In late 2025, Visa shipped the Trusted Agent Protocol, an open framework that lets merchants distinguish between bots and legitimate AI agents acting on behalf of consumers. Google launched AP2 with 60+ partners including PayPal, Coinbase, and Mastercard. Mastercard shipped Agent Pay with Microsoft and IBM.

This isn’t a research paper. Real agent-initiated transactions are clearing. Visa is projecting millions of consumers using AI agents to complete purchases by holiday 2026.

I spent years at Visa designing the tokenization infrastructure that sits underneath all of this: the Visa Token Service API that powers Apple Pay, Android Pay, and Samsung Pay. The new agent protocols are built on that same identity and token layer. The difference is that now the entity initiating the transaction isn’t a human tapping a phone. It’s an autonomous agent with its own credentials.

Here’s the problem: most banks have no strategy for authenticating, authorizing, or auditing agent-initiated transactions. Their fraud systems are tuned for human behavior patterns. Their compliance frameworks assume a person is on the other end. Their vendor contracts don’t address agent liability.

The banks that figure out agent-ready infrastructure in the next 12 months will have a structural advantage. The ones that wait for regulatory guidance will be playing catch-up against institutions that already have live agent transaction flows.

This is exactly why we built KnowYourAgent. To treat agent identity as a first-class financial primitive, the same way KYC treats human identity. The infrastructure layer for autonomous commerce is being laid right now. The question is whether your institution is building on it or will be disrupted by it.

January 2026

The GENIUS Act Passed. Here’s What Banks Actually Need to Do.

The GENIUS Act became law in mid-2025. The FDIC published proposed implementation rules in December. The OCC has conditionally approved digital asset bank charters. For the first time, US banks have a clear legal path to issue payment stablecoins.

I was CTO at a digital asset bank navigating stablecoin compliance before any of this existed. Before there was a framework, before there were guidelines, when every decision was a judgment call against ambiguous regulatory signals. That experience is why I can say with confidence: the gap between regulatory permission and operational readiness is enormous.

Having the legal right to issue a stablecoin and having the infrastructure to actually do it are completely different problems. You need token minting and burning workflows tied to your core banking system. You need real-time reserve attestation. You need wallet infrastructure that meets BSA/AML requirements. You need smart contract audit processes. You need a compliance architecture that maps stablecoin-specific risks to your existing examination framework.

Most banks don’t have any of this. The ones exploring stablecoin issuance are discovering that their technology teams have never built on-chain, their compliance teams have never underwritten token-based products, and their boards don’t have a framework for evaluating the risks.

We built StablecoinRoadmap specifically for this moment. Working templates for wallets, payment gateways, and remittance platforms with sandbox simulation, so banks can validate payment flows before committing financial and regulatory capital to live deployment.

The regulatory door is open. The question is execution.

December 2025

Your Bank’s AI Vendor Just Turned Off Its Best Feature. Here’s Why.

I keep hearing the same story from bank CTOs: they bought an AI-enabled platform, ran a successful pilot, got excited about the results. And then their compliance team asked the vendor to disable the generative AI features before production rollout.

This isn’t a technology failure. It’s a governance failure. And it’s happening everywhere.

The core issue is that the Model Risk Management guidance most banks operate under, SR 11-7, was written in 2011. It was designed for logistic regression models and credit scorecards, not for large language models that generate novel outputs on every inference. Bank examiners are applying a framework built for deterministic models to probabilistic systems, and the result is paralysis.

Vendors are responding rationally. Rather than risk a finding in a regulatory exam, they’re shipping products with AI features turned off by default for banking clients. The banks that want the features have to explicitly opt in, accept the model risk, and build their own validation framework, which most don’t have the internal expertise to do.

The Bank Policy Institute has been flagging this for months: regulatory inaction on AI guidance is itself a risk. Banks aren’t rejecting AI because it doesn’t work. They’re rejecting it because the supervisory framework makes adoption irrational for anyone who has to sit in front of an examiner.

This is the exact problem our advisory practice was built for. We’ve been on both sides: building AI systems at Visa that had to survive regulatory scrutiny, and now helping banks build governance frameworks that let them deploy AI without hoping their examiner doesn’t ask hard questions. The answer isn’t waiting for updated guidance. It’s building an internal framework that’s defensible under the current one.