Enterprise AI Investment Is Sprinting, Governance Is Crawling
Four reports landed in my reading last week. They weren’t written for the same audience, weren’t published by organizations with any relationship to each other, and covered different industries and roles. But by the fourth one, it was clear they were all describing the same room.
Coupa surveyed 500+ CFOs. 85% said AI is central to their financial strategy. 92% said they don’t trust their organization to execute on it. That number was 66% a year ago.
PwC, writing about what CEOs told them at Davos, found that more than half had seen no significant financial returns from AI investment.
The FinOps Foundation updated its framework to formally add “Executive Strategy Alignment” as a capability, an acknowledgment that FinOps teams have been optimizing cloud costs in one room while executives make investment decisions in another.
Forrester VP Mark Moccia argued that “AI economics”, the practice of managing the cost and value of autonomous systems that now do real enterprise work, will be a defining CIO discipline by 2030. The framing suggests the discipline doesn’t fully exist yet.
Different organizations, different vocabularies, but all describing the same gap: organizations choosing to invest in AI tools now and figure out governance later.
Why Now
The first wave of enterprise AI investment was largely experimental. Organizations could evaluate pilots the way they’d always evaluated new technology: Did it work? Did it not work? What’s next? That phase is over. AI is running inside core workflows, informing decisions, and in some cases making them.
Legacy processes are adequate for managing experiments with new technology: loose governance, approximate cost tracking, outcome measurement left to individual teams. It’s turning out they don’t hold for AI. The CFO anxiety that Coupa is measuring isn’t irrational; it’s ad-hoc governance failing in real time. The CEO who has seen no significant financial returns may be running AI programs that produce real value somewhere in the organization; the value just arrives in ways the current financial infrastructure can’t see, measure, or connect back to the investment case leadership signed off on eighteen months ago.
Where the Gap Is
This isn’t necessarily a story of AI failing; the models work and the tools are often fine.
What’s failing is the decision infrastructure, the layer between what the data shows and what the organization actually decides to do next. Most organizations have adequate decision infrastructure for categories of spend they’ve been managing for decades: cost centers, allocation models, reporting cadences, escalation paths for IT, real estate, people. Those structures took years, sometimes decades, to build.
AI spend is now large enough to warrant similar treatment, but it isn’t getting it. In a lot of organizations, AI costs are tracked in fragments: some in cloud cost tools, some in software licensing, some buried in team budgets, some not tracked at all. The cost of an AI-assisted decision is rarely calculated, let alone connected to whether that decision class is producing the right outcomes.
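To see how small the mechanical part of this is, here’s a minimal sketch of the roll-up exercise: pulling cost fragments from wherever they live into one view per use case. Every source name, use case, and figure below is hypothetical; the point is the shape of the exercise, not the numbers.

```python
# Minimal sketch: rolling fragmented AI costs into one view per use case.
# Source names, use cases, and figures are hypothetical stand-ins for
# whatever the organization actually has (billing exports, license
# ledgers, team budgets).
from collections import defaultdict

cost_fragments = [
    # (use_case, source, monthly_dollars)
    ("claims triage", "cloud billing",  9_800.0),
    ("claims triage", "vendor license", 2_400.0),
    ("claims triage", "team budget",    6_000.0),
    ("forecasting",   "cloud billing",  4_100.0),
    ("forecasting",   "untracked",          0.0),  # the fragment nobody logs
]

totals: dict[str, float] = defaultdict(float)
for use_case, _source, dollars in cost_fragments:
    totals[use_case] += dollars

for use_case, total in sorted(totals.items()):
    print(f"{use_case}: ${total:,.0f}/month across all sources")
```

The code is trivial; the hard part is organizational. Getting the “untracked” row to exist at all is the governance work.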
That’s what the FinOps Foundation formalized and what Forrester named. What feels like a technology gap is actually a governance gap.
What Closing the Gap Actually Requires
The instinct is to solve governance problems with software. There’s no shortage of vendors offering to be the platform that closes this gap. Some of those tools are or will be useful. None of them alone is the answer, because the gap isn’t primarily a data problem. The data often exists. What’s missing is the accountability structure that decides what to do with it.
In practice, the organizations making progress on this share three characteristics, none of which involves a new tool.
The first is ownership. Somewhere in the organization, someone is asking three questions:
What are we spending on AI?
What is it producing?
Is the relationship between those two numbers moving in the right direction?
In the organizations where this works, the answer isn’t a dashboard; it’s a person, or a small team, with the mandate and the access to put those questions to the people making resource decisions.
The second is a shared unit of measure. Finance and engineering inside the same organization often can’t talk to each other about AI spend because they’re measuring different things. Engineering is optimizing tokens and latency. Finance is looking at contracts and invoices. Neither maps to the business outcome leadership cares about. What tends to work is a deliberate agreement, usually between a finance leader and a technology leader, on the relevant unit of measure for each major AI use case. Cost per decision. Cost per transaction processed. Cost per outcome achieved. The exact metric matters less than the fact that it’s agreed upon and revisited as the use case matures.
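For illustration, here’s what that agreement can look like once it’s written down, a minimal sketch with hypothetical use cases, units, and figures:

```python
# Minimal sketch: one agreed unit of measure per AI use case.
# Names, units, and figures are hypothetical.
use_cases = [
    # (name, agreed_unit, monthly_cost_dollars, monthly_outcome_count)
    ("invoice triage",   "invoice processed", 23_900.0, 46_000),
    ("support drafting", "ticket resolved",   11_200.0,  8_300),
]

for name, unit, cost, outcomes in use_cases:
    print(f"{name}: ${cost / outcomes:.2f} per {unit}")
```

Once both sides report against the same denominator, the conversation changes from “tokens are up” versus “the invoice went up” to whether $0.52 per invoice processed is moving in the right direction.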
The third is a live connection to the original investment case. Most significant AI investments were approved against a business case with assumptions about cost and return. What we see in practice is that by eighteen months in, no one is actively comparing what’s happening to what was promised — not because anyone is avoiding it, but because the governance structure that would make that comparison routine was never built alongside the investment. The organizations that build it don’t need much: a review cadence, an owner, a handful of agreed metrics.
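A sketch of what making that comparison routine could look like: a handful of agreed metrics, the numbers from the approved business case, and a tolerance that triggers a review. The metrics, figures, and the 15% threshold are all hypothetical.

```python
# Minimal sketch: comparing AI actuals against the original business case.
# Metrics, figures, and the tolerance threshold are hypothetical.
business_case = {"monthly_cost": 18_000.00, "cost_per_outcome": 0.40}
actuals       = {"monthly_cost": 23_900.00, "cost_per_outcome": 0.52}

TOLERANCE = 0.15  # flag anything more than 15% off the approved assumption

for metric, promised in business_case.items():
    actual = actuals[metric]
    drift = (actual - promised) / promised
    flag = "REVIEW" if abs(drift) > TOLERANCE else "ok"
    print(f"{metric}: promised {promised:,.2f}, actual {actual:,.2f}, "
          f"drift {drift:+.0%} [{flag}]")
```

Nothing here requires a platform. It requires someone who owns the comparison and a cadence on which it runs.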
The Implication
Companies that name this problem correctly resist the urge to buy software for a governance problem and focus instead on building the accountability structures that make any investment legible.
The organizations that will close this gap will identify who owns AI financial governance, build the translation layer between technical metrics and business outcomes, and create a regular cadence for comparing AI spend to AI value. Then the tools become useful infrastructure supporting a governance structure that already knows what it’s managing.
The four reports were describing the same room. The question for every organization is whether they’re going to keep furnishing it with tools, or start deciding who’s responsible for what happens inside it.