Ninety-eight percent of PE sponsors have directed their portfolio company CFOs to prioritize AI adoption. Sixty-eight percent of those CFOs haven't acted, primarily because they don't know where to begin.
That's not a technology problem. It's an infrastructure problem masquerading as a technology conversation. And the distinction matters, because PE operating partners who misdiagnose it will spend the next eighteen months watching AI initiatives burn budget without producing measurable value.
The Accordion 2025 survey of 200 PE sponsors and 200 portfolio company CFOs put the gap in sharp relief: 83% of sponsors want their CFOs investing in AI right now, viewing extended hold periods as the opportune moment to build capability. Meanwhile, most CFOs are paralyzed. Not by skepticism. By the honest realization that they're being asked to deploy sophisticated technology on top of a data foundation that can barely support a monthly board deck.
Gartner's 2025 survey of 183 CFOs found that finance AI adoption has effectively flatlined. After jumping from 37% in 2023 to 58% in 2024, it inched to 59% in 2025. The initial enthusiasm has hit a wall. And two-thirds of companies remain stuck in what analysts call "pilot purgatory," running experiments that never translate to production value.
The operating partners who understand why this is happening are the ones who will actually capture AI's value. The ones who don't will keep adding it to value creation plans that never deliver.
The 80/20 Nobody Wants to Hear
Here is the number that should reframe every AI conversation in a PE portfolio review: technology selection accounts for roughly 20% of AI success. The remaining 80% depends on data quality, workflow redesign, and change management.
That ratio, documented across multiple industry analyses, explains why the median reported ROI on AI in finance sits at just 10%, according to BCG's 2025 survey of over 280 finance executives. Nearly a third of finance leaders report only limited gains. The problem isn't the tools. The tools are extraordinary. The problem is what they're being plugged into.
Consider the typical PE-backed mid-market company. PwC found that 54% of portfolio company respondents still use email attachments as their primary method of collecting and sharing data with sponsors; more than a third respond to data requests with plain-text email. The finance team is five to seven people who spend most of their capacity manually consolidating spreadsheets. The chart of accounts may have been inherited from a founder who optimized for tax, not analysis. If bolt-on acquisitions have occurred, there are likely multiple ERP systems with incompatible data structures.
Now layer an AI initiative on top of that. An AI model trained on inconsistent, fragmented, manually assembled data doesn't produce insight. It produces confident-sounding garbage at machine speed. The output looks polished. The underlying logic is built on sand. And the CFO who flags this isn't being resistant to change. They're being honest about physics.
This is the conversation operating partners need to have before the AI conversation. Not "which AI vendor should we evaluate?" but "can our data infrastructure actually support what we're asking AI to do?"
The Prerequisites Nobody Is Building
The companies generating real returns from AI in finance share a pattern that has nothing to do with which model they chose or which vendor they hired. BCG's research found that high-ROI teams focus on value from the start rather than learning for learning's sake. They take a broad transformation view instead of optimizing around single use cases. They invest equally in data infrastructure and organizational change.
That last point is the one PE operating partners consistently underweight. Equal investment in data infrastructure and organizational change means that for every dollar spent on an AI tool, a corresponding dollar goes toward cleaning data, standardizing taxonomies, integrating systems, redesigning workflows, and training people. In a PE context where hold periods are finite and sponsors want to see EBITDA impact, the temptation to skip the foundation and jump to the shiny application is almost irresistible.
It's also why Gartner predicts that at least 30% of generative AI projects will be abandoned after proof of concept due to poor data quality, inadequate risk controls, escalating costs, or unclear business value.
The prerequisites are unglamorous. A unified chart of accounts across entities. Consistent data definitions so that "revenue" means the same thing in every system. Automated data pipelines that eliminate manual consolidation. A reporting infrastructure where the finance team spends its time analyzing data rather than assembling it. These aren't AI projects. They're the conditions under which AI projects succeed rather than generating impressive demos that never reach production.
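To make the prerequisite work concrete, here is a minimal sketch of what chart of accounts harmonization looks like in practice. The entity names, account codes, and unified taxonomy below are all hypothetical; the point is that each acquired entity needs an explicit mapping into a shared structure before any automated consolidation, let alone AI, can run on top of it.

```python
# Hypothetical sketch: rolling two entities' local balances up to a
# unified chart of accounts. All codes and amounts are illustrative.

# Each bolt-on entity reports against its own local account codes.
entity_a = {"4000": 1_200_000, "4100": 300_000}     # founder-era codes
entity_b = {"REV-01": 950_000, "REV-02": 125_000}   # acquired entity's codes

# The prerequisite work: one explicit mapping per entity into a shared taxonomy.
coa_map = {
    "A": {"4000": "revenue.product", "4100": "revenue.services"},
    "B": {"REV-01": "revenue.product", "REV-02": "revenue.services"},
}

def consolidate(books: dict[str, dict[str, float]]) -> dict[str, float]:
    """Roll entity-level balances up to the unified chart of accounts."""
    unified: dict[str, float] = {}
    for entity, balances in books.items():
        for local_code, amount in balances.items():
            # A missing mapping raises a KeyError: unmapped accounts should
            # fail loudly, not silently drop out of the consolidation.
            unified_code = coa_map[entity][local_code]
            unified[unified_code] = unified.get(unified_code, 0.0) + amount
    return unified

print(consolidate({"A": entity_a, "B": entity_b}))
# {'revenue.product': 2150000.0, 'revenue.services': 425000.0}
```

The mapping tables are the unglamorous part: someone has to decide, once, that entity A's "4000" and entity B's "REV-01" mean the same thing. That decision is exactly the "consistent data definitions" work described above, and no model can make it for you.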
For PE-backed companies that have been through multiple bolt-on acquisitions, the prerequisite work is even more fundamental. When a portfolio company is running three or four ERPs with no integration layer, the AI readiness gap isn't a gap. It's a chasm. And no amount of sponsor pressure or vendor promises will bridge it without the structural work of building a coherent data foundation first.
Where AI Actually Works in PE-Backed Finance (and Where It Doesn't)
The honest answer is narrower than the hype suggests, and that's actually good news for operating partners who want to deploy capital efficiently rather than broadly.
FP&A leads finance AI adoption at 78%, and that's no accident. Forecasting, variance analysis, and scenario modeling are repetitive, data-heavy, and benefit from pattern recognition across large datasets. When the underlying data is clean and consistent, AI-assisted FP&A can compress days of analyst work into hours and surface trends that manual analysis misses. Accounts payable automation has emerged as the most production-ready application. Invoice processing, matching, and exception handling are rule-bound enough for AI to handle reliably while generating measurable cost savings.
Tax trails at 45% adoption for a reason. The regulatory complexity, jurisdictional variation, and audit-trail requirements make tax a domain where AI errors carry material financial and legal consequences. The compliance bar is higher, the tolerance for hallucination is zero, and the liability is real.
The pattern is clear: AI works best in finance where the task is repetitive, the data is structured, the error tolerance is quantifiable, and the output can be validated by a human before it matters. It works worst where judgment, regulatory nuance, and contextual interpretation are required, which is precisely where most CFOs spend their most valuable time.
For operating partners evaluating where to deploy AI across a portfolio, this means resisting the temptation to lead with the most complex use case. The companies seeing real ROI are starting with three to five concrete applications that have measurable baselines, clear success metrics, and bounded risk. They're proving value in accounts payable before attempting AI-driven revenue forecasting. They're automating data consolidation before asking AI to generate strategic insights from that data.
The Valuation Signal Sponsors Are Missing
There's a dimension to the finance AI readiness gap that transcends operational efficiency, and it connects directly to exit value.
Accordion's exit readiness survey found that 85% of buyers now consider AI-enabled finance capabilities when assessing valuation. Sponsors report that CFOs who embed AI in planning, forecasting, and reporting are twice as likely to achieve smoother exits and higher perceived valuations. This isn't about AI as a productivity tool. It's about AI as a signal to buyers that the company has the data maturity, process sophistication, and operational discipline that de-risk a transaction.
A buyer evaluating two similar companies at similar multiples will favor the one where financial data flows cleanly from source systems through automated pipelines into dashboards that update in real time. Not because the AI is impressive. Because what the AI implies about the underlying infrastructure is impressive. It means the data is clean. It means the processes are documented. It means the finance function can scale without proportional headcount growth. It means diligence will be fast and clean rather than a months-long excavation.
The irony is that the infrastructure work required to make AI function properly in a finance context is the same infrastructure work required to be genuinely exit-ready. Unified data, consistent definitions, automated reporting, documented processes. These are dual-use investments. They make AI possible and they make exits smoother. Sponsors who frame AI spending as an operational cost are missing the exit-value multiplier embedded in the same investment.
What Operating Partners Should Actually Do
The temptation is to issue a portfolio-wide directive: "Adopt AI in finance." That's what 98% of sponsors have done. It hasn't worked.
What works is sequencing. And the sequence isn't what most operating partners expect.
First, assess the data foundation honestly. Not through a vendor evaluation or an AI readiness scorecard. Through a blunt audit of how financial data actually moves through the organization. How many manual handoffs exist between source systems and board reporting? How many hours per month does the finance team spend assembling data versus analyzing it? How many definitions of "revenue" or "EBITDA" exist across entities? If the answers are ugly, that's the starting point.
Second, invest in the boring infrastructure before the exciting applications. Data integration, chart of accounts harmonization, automated consolidation, consistent KPI definitions. These investments won't show up in a board presentation about AI adoption. They'll show up in faster closes, cleaner reporting, fewer manual errors, and a finance team that finally has bandwidth for strategic work. They'll also create the foundation on which AI applications can actually function.
Third, start AI deployment in bounded, measurable use cases. AP automation. Cash flow forecasting using historical patterns. Automated variance commentary on monthly financials. These aren't transformative. They're practical. They generate measurable time savings. They build organizational comfort with AI-assisted workflows. And they create the proof points that justify broader deployment.
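A bounded use case like automated variance commentary can be sketched in a few lines. This is a hypothetical illustration, not a production pattern: the accounts, threshold, and wording are invented, and in practice the drafting step might call a language model rather than a template. The structural point survives either way: the output is drafted by machine, and anything beyond a set threshold is routed to a human before it matters.

```python
# Hypothetical sketch of automated variance commentary: compare actuals to
# budget, draft one line per account, and flag large variances for human
# review. The ±5% threshold and account names are illustrative.

THRESHOLD = 0.05  # variances beyond ±5% get routed to a reviewer

def variance_commentary(budget: dict[str, float], actual: dict[str, float]) -> list[str]:
    lines = []
    for account, planned in budget.items():
        delta = actual[account] - planned
        pct = delta / planned
        direction = "above" if delta >= 0 else "below"
        flag = " [REVIEW]" if abs(pct) > THRESHOLD else ""
        lines.append(f"{account}: {pct:+.1%} {direction} budget{flag}")
    return lines

budget = {"Revenue": 1_000_000, "Opex": 400_000}
actual = {"Revenue": 1_080_000, "Opex": 404_000}
for line in variance_commentary(budget, actual):
    print(line)
# Revenue: +8.0% above budget [REVIEW]
# Opex: +1.0% above budget
```

Note what makes this bounded: the baseline is measurable (hours currently spent writing commentary), the success metric is clear (drafts accepted without edits), and the risk is capped because a human validates flagged lines before they reach a board deck.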
Fourth, measure impact in terms the sponsor actually cares about. Not "we deployed AI." But "the monthly close is three days faster." Or "the finance team reallocated 40% of consolidation time to analysis." Or "we can now produce cohort-level revenue data that we couldn't before." The metric isn't adoption. It's capability.
The companies that will be genuinely AI-enabled in their finance functions two years from now aren't the ones that moved fastest. They're the ones that built the foundation first and deployed AI on top of something that could actually support it. In a PE context where hold periods are finite and every dollar of investment needs to translate to enterprise value, that distinction is the difference between AI as a line item on the value creation plan and AI as something that actually creates value.
How AI-ready is your finance function?
Assess your readiness across five critical dimensions.