Your Bank’s AI Vendor Just Turned Off Its Best Feature. Here’s Why.
I keep hearing the same story from bank CTOs: they bought an AI-enabled platform, ran a successful pilot, got excited about the results. And then their compliance team asked the vendor to disable the generative AI features before production rollout.
This isn’t a technology failure. It’s a governance failure. And it’s happening everywhere.
The core issue is that the Model Risk Management guidance most banks operate under, SR 11-7, was written in 2011. It was designed for logistic regression models and credit scorecards: systems where validation means testing a fixed input-output mapping against known outcomes. It was not designed for large language models that generate novel outputs on every inference and can answer the same prompt differently twice. Bank examiners are applying a framework built for deterministic models to probabilistic systems, and the result is paralysis.
Vendors are responding rationally. Rather than risk their product triggering a finding in a client’s regulatory exam, they’re shipping with AI features turned off by default for banking customers. Banks that want the features have to explicitly opt in, accept the model risk, and build their own validation framework, which most lack the internal expertise to do.
The Bank Policy Institute has been flagging this for months: regulatory inaction on AI guidance is itself a risk. Banks aren’t rejecting AI because it doesn’t work. They’re rejecting it because the supervisory framework makes adoption irrational for anyone who has to sit in front of an examiner.
This is the exact problem our advisory practice was built for. We’ve been on both sides: building AI systems at Visa that had to survive regulatory scrutiny, and now helping banks build governance frameworks that let them deploy AI without having to hope their examiner doesn’t ask hard questions. The answer isn’t waiting for updated guidance. It’s building an internal framework that’s defensible under the current one.