Private equity firms have spent the last decade professionalizing digital diligence. They hired technical advisors, built playbooks, learned to score cloud adoption and legacy modernization risk. By 2026, most mature PE shops have a working protocol: confirm the target runs an ERP, has some cloud spend, maintains a website that doesn't crash, and call it done.

That protocol was never wrong. It just became incomplete.

The underwriting question has shifted. It's no longer "Does this business have a functioning tech infrastructure?" That's table stakes. The real question now—the one that determines 300–500 bps of value creation—is this: Can this business absorb AI operating leverage over a 5–7 year hold?

Those are fundamentally different questions. A company with a 15-year-old legacy ERP but pristine transactional data, well-documented workflows, and a CEO who's already piloting AI tools is a radically better AI-era target than a company with a modern SaaS stack, siloed data, and a leadership team viscerally opposed to anything that feels like automation. Traditional digital diligence would score the latter higher. It would be catastrophically wrong.

This article introduces a 90-day framework: a diagnostic and timeline that PE deal teams can run in parallel with legal and commercial diligence to measure AI readiness and identify the specific value creation wedges worth six to eight figures on day one of ownership.

Why Traditional Digital Diligence Misses the AI-Era Upside

Most PE firms budget $80K–$150K for tech due diligence. A third of that goes to a third-party tech audit (security, cloud, architecture assessment). Another third goes to a few advisor calls. The rest is the operator's seat-of-the-pants judgment.

That spend isn't wasted. But it's optimized for the wrong thing. It answers: "What's the technical debt?" It doesn't answer: "What's the technical opportunity?"

Here's the gap: a business can have zero technical debt and still be worthless as an AI transformation target. Conversely, a business can be loaded with legacy code and still be highly valuable, if the underlying data and processes are clean.

The mistake, in concrete terms: traditional diligence looks at tech stack modernity. It scores Kubernetes and microservices and headless commerce and modern data lakes. These are genuinely useful things. But they're not correlated, not even loosely, with AI readiness. You can have a Kubernetes cluster managing a pipeline of dirty, aggregated, duplicated data. You can have a 2005 system that produces a single, clean source of customer truth.

AI readiness depends on four things. None of them are in the standard tech audit.

Data flow integrity: Can we see what the business actually does? Not kinda. Actually. Not via a reporting layer someone built three years ago and hasn't updated. Can we see the source transactions? Are customer records deduplicated? Can we trace margin from deal to delivery to invoice? If there's a mystery in the P&L, can we find it in the data?

Firms with terrible tech stacks often have this. They compensate for bad software with obsessive spreadsheet discipline. That spreadsheet discipline is gold for AI work.

Workflow definability: Are the core processes codified? Can you write them down? Not as religious artifacts, as actual sequences. If the answer is "it depends" or "the experienced people just know," the business is fragile and unaugmentable.

A business where 80% of order fulfillment is rule-based (if this, do this; if that, do that; otherwise, escalate an exception) is infinitely more valuable for AI leverage than one where it's 40% rule-based and 40% intuition and 20% "the person who knew quit."

Organizational absorbency: Will your operators adopt the tools, or will they reject them? This isn't theoretical. We've watched a $200M manufacturing business reject a $2M warehouse optimization system because the ops VP felt threatened. We've watched a fragmented services firm absorb AI-driven scheduling in four weeks because the branch managers saw immediate relief.

Leadership buy-in is usually binary. Does the CEO believe AI is opportunity or threat? (Skeptical is fine. Hostile is not.) Does the executive team see time savings or job loss? Do frontline operators think you're trying to automate them or augment them?

This is culture, which most PE financial models still treat as unmeasurable. It's not. It's measurable and it's critical.

Economic unit cleanliness: Can you measure unit economics? Not EBITDA. Unit economics. Margin per customer, per order, per employee-hour. Can you see cost of acquisition? Can you see delivery cost? Can you isolate the economics of a customer cohort, a product line, a sales channel?

If you can, you can use AI to systematically improve each vector and measure the lift. If you can't, you're flying blind. You'll implement tools. You'll hope they work. You'll argue about whether things got better.

These four dimensions (data, process, people, measurement) are what AI readiness actually means. A business strong in all four is a 20–30% EBITDA lift candidate. A business weak in all four is a 5–8% candidate, if you're lucky. A business with three strong and one weak has a ceiling you can identify on day one of ownership.

The 90-Day Framework

The framework divides into three phases, aligned with the deal timeline: 30 days of signal gathering pre-LOI, 30 days of confirmatory diligence, and 30 days of post-close substrate install.

Phase 1: Signal Scan (Days 1–30, Pre-LOI)

Objective: Quick signal on AI readiness. Go/no-go on the deal's AI value thesis, plus a rough quantification of upside.

Effort: Eight hours of your team's time, one working day of the target's senior management.

Method:

Call the CFO and COO. Here are the five diagnostic questions. Allocate 10 minutes per question.

  1. Data question: "Walk me through your single source of customer truth. Is it your ERP? A data warehouse? A CRM that talks to your ERP? Or do you have three systems that don't fully talk to each other?" Listen for complexity and gaps. Write down the names and version years. Ask: "If I wanted to know the true margin on customer X's orders in 2025, where would that data live?"
  2. Process question: "Take your biggest revenue-generating process, sales, delivery, support, whatever. What percentage of it is rules-based (you do this because of policy), what percentage is judgment, and what percentage is pattern-matching that you'd be hard-pressed to explain?" A business that says "75% rules, 20% judgment, 5% pattern" is codifiable. A business that says "40% rules, 50% judgment, 10% pattern" will be hard to augment.
  3. People question: "If I told you we were going to implement AI tools to augment your operations team over the next year, not automate people, augment, what would be your team's stance? Excited? Skeptical? Resistant?" Excited is rare. Skeptical is normal and fine. Resistant is a red flag. Write down the name of the person most likely to resist and why.
  4. Measurement question: "Can you tell me the unit economics of your business right now? Not EBITDA. Gross margin per customer? Cost of acquisition by channel? Delivery cost per order? What metrics can you actually see today?" Write down the ones they can see and the ones they can't. The ones they can't are your first 90-day upgrade targets.
  5. Regulatory question: "Are there any regulatory or contractual constraints on how we can use customer or operational data? GDPR, HIPAA, customer agreements that restrict analytics or automation?" Write down every constraint. Some businesses can't do certain kinds of AI work. That's not a dealbreaker, it just changes the scope.

Output: An AI Readiness Score on four dimensions (Data, Process, People, Measurement), a register of regulatory constraints, and a rough thesis on where the upside lives.

Scoring: 1–3 on each dimension, where 1 is "significant constraint," 2 is "acceptable, with work," and 3 is "genuinely clean." A business that scores 3-3-3-3 is a 20–30% EBITDA lift candidate. A business that scores 1-2-2-2 is probably 6–10%.
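The Phase 1 scoring can be sketched as a small helper. The 1–3 scale and the four dimensions come from the framework; the exact band thresholds below are illustrative assumptions, not part of the framework itself:

```python
# Illustrative Phase 1 scoring helper. The four dimensions and the 1-3
# scale come from the framework; the band mapping is a hypothetical
# heuristic aligned with the article's examples (3-3-3-3 vs 1-2-2-2).

DIMENSIONS = ("data", "process", "people", "measurement")

def readiness_band(scores: dict) -> str:
    """Map 1-3 scores on the four dimensions to a rough EBITDA-lift band."""
    for d in DIMENSIONS:
        if scores.get(d) not in (1, 2, 3):
            raise ValueError(f"score for {d!r} must be 1, 2, or 3")
    total = sum(scores[d] for d in DIMENSIONS)  # ranges from 4 to 12
    if total == 12:                             # clean on every axis
        return "20-30% EBITDA lift candidate"
    if min(scores.values()) == 1:               # any hard constraint caps upside
        return "6-10% EBITDA lift candidate"
    return "10-20% EBITDA lift candidate"

print(readiness_band({"data": 3, "process": 3, "people": 3, "measurement": 3}))
# -> 20-30% EBITDA lift candidate
```

The useful property of a helper like this is not precision; it is that every deal memo scores the same four dimensions the same way, so targets become comparable across the pipeline.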

This takes a PE team maybe four hours of analysis and write-up. It's worth the clarity.

Phase 2: Confirmatory Scan (Days 31–60, During Diligence)

Objective: Lock in the value creation thesis with specific leverage points. Quantify AI upside by value lever.

Effort: One week on-site, plus two weeks of remote data analysis.

Method:

This is where Xivic's Value Friction Index (VFI) comes in, but the logic applies regardless of who runs it. You're scoring the five value levers that drive PE returns: margin, growth, capital, risk, and multiple. For each lever, you assess the current state, the friction points, and the AI-specific opportunities.

Margin lever: Where is the business losing margin today? Manual workflows eating hours? Repricing decisions based on incomplete data? Delivery routes run the old way? Use case: an industrial distributor where 30% of orders are repriced after quote because the system missed margin opportunities. AI-driven pricing, fed by your clean data and clean processes, closes that leak. Quantifiable upside on day 90.

Growth lever: What's preventing faster growth? Sales team spending time on non-selling work? Manual RFP response? Lack of visibility into what actually closes? Use case: a B2B services firm where the sales team spends 40% of time on non-selling. Deploy AI-driven workflow augmentation, research, proposal assembly, follow-up sequencing, and redirect 20% back to selling. That's 1–2 points of margin, plus velocity.

Capital lever: What capital is tied up in friction? Accounts receivable stretched because invoicing is manual? Inventory because planning is rule-of-thumb? Use case: a supply chain business that carries 60 days of inventory as a safety buffer. Predictive demand, enabled by AI, cuts that to 45 days. At $50M revenue with 40% COGS, that's roughly $800K of freed working capital.
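A minimal sketch of the inventory arithmetic behind that capital-lever example, assuming a 365-day year and inventory valued at daily COGS times days on hand:

```python
# Worked example for the capital lever: cash freed by cutting inventory
# days on hand. Figures mirror the example in the text.

def freed_capital(revenue: float, cogs_pct: float,
                  days_before: int, days_after: int) -> float:
    """Inventory carried = daily COGS x days on hand; the difference is cash."""
    daily_cogs = revenue * cogs_pct / 365
    return daily_cogs * (days_before - days_after)

released = freed_capital(50_000_000, 0.40, 60, 45)
print(f"${released:,.0f}")   # -> $821,918
```

The same function prices any working-capital play in the diligence memo: receivables days work identically if you swap COGS for revenue as the base.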

Risk lever: What risk is the business exposed to that AI can mitigate? Customer concentration because your data doesn't show it? Operational concentration in key people? Margin compression because you can't see competitive risk early? AI here is protective, not aggressive. But it enables faster adjustment.

Multiple lever: What would make this business worth more in a secondary sale? Recurring revenue visibility? Lower customer concentration? Lower key-person risk? Better unit economics transparency? AI work on the first three translates to multiple expansion.

The week on-site is spent walking the business, interviewing operators, pulling data, and stress-testing the signal-phase thesis. The output is a VFI score (1–10 on each dimension) and a ranked list of the 2–3 highest-leverage AI value plays.

Output: Confirmatory VFI score, 2–3 specific use cases with quantified upside (in dollars and timeline), a 100-day operating plan that addresses the highest-leverage wedges first, and a Compound Value Model (CVM) that shows the expected AI-driven lift across the hold period.

Phase 3: Substrate Install (Days 61–90, Post-Close)

Objective: Build the technical and organizational foundation for sustained AI-driven value creation.

Effort: 15–20% of the COO/CTO's time for 90 days.

Method:

This phase is not "full AI transformation." That's an 18–24 month program. This phase is "substrate install": the infrastructure that makes transformation possible.

The substrate has three components:

Unified data layer: Not a data lake. Not even necessarily a warehouse. The simplest version: a set of automated data pipelines that pull source transactions from your core systems (ERP, CRM, point of sale, payroll) into a single analytical space. Deduplicate customer records. Standardize codes. Create a clean business rules layer so that "revenue" means the same thing in every query.

This isn't pretty. It's not a BI tool. It's a production dependency, and it's foundational. If you build AI tools on top of dirty data, they'll propagate garbage faster.

Timelines: 60–90 days for a 10–15 person, $50M revenue business. Longer for bigger, messier companies.
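The deduplicate-and-standardize step can be sketched with nothing but the standard library. The field names and the normalization rule here are hypothetical illustrations, not the framework's actual schema:

```python
# Minimal sketch of the "clean business rules" idea: normalize keys so the
# same customer pulled from different source systems collapses to one
# record. Field names ("name", "system") are illustrative only.
import re

def normalize_key(name: str) -> str:
    """Collapse casing, punctuation, and whitespace so 'Acme, Inc.' matches 'ACME INC'."""
    return re.sub(r"[^a-z0-9]+", " ", name.lower()).strip()

def dedupe(records: list[dict]) -> list[dict]:
    """Keep the first record seen per normalized customer key."""
    seen, out = set(), []
    for rec in records:
        key = normalize_key(rec["name"])
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out

rows = [
    {"name": "Acme, Inc.", "system": "ERP"},
    {"name": "ACME INC",   "system": "CRM"},   # duplicate of the ERP record
    {"name": "Globex",     "system": "POS"},
]
print(len(dedupe(rows)))   # -> 2
```

Real pipelines layer fuzzier matching on top, but the principle is the same: the match rule lives in one place, so "customer" means the same thing in every downstream query.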

First agentic workflow: Pick the highest-leverage use case from the confirmatory phase and build it. Not a pilot. A production tool used by at least one team, every day, solving a real problem. It's usually something like: AI-assisted RFP response, AI-driven pricing recommendation, or AI-based scheduling.

The point isn't the tool. It's the muscle memory. Your team learns what it feels like to work with AI-augmented workflows. They build confidence. They see lift. They ask for the next one.

Timelines: 45–60 days from design to daily use.

VFI baseline lock: By day 90, run your first post-close VFI. Lock in the baseline. Document the current state of data integrity, workflow definition, organizational absorbency, and economic unit clarity. This becomes your measurement anchor. Everything you do over the next 18 months is scored against this baseline, using the CVM dashboard.
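One way to lock that baseline so later measurements can be scored against it. The four dimension names come from the article; the delta logic is an illustrative assumption about how a CVM dashboard might consume the data:

```python
# Illustrative VFI baseline lock: record the day-90 scores once, then
# express every later measurement as a delta against that anchor.
from datetime import date

DIMENSIONS = ("data_integrity", "workflow_definition",
              "organizational_absorbency", "economic_unit_clarity")

def lock_baseline(scores: dict, as_of: date) -> dict:
    """Freeze a complete set of dimension scores with its measurement date."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"baseline missing dimensions: {missing}")
    return {"as_of": as_of.isoformat(), "scores": dict(scores)}

def delta_vs_baseline(baseline: dict, current: dict) -> dict:
    """Per-dimension change since the day-90 lock (positive = lift)."""
    return {d: current[d] - baseline["scores"][d] for d in DIMENSIONS}

base = lock_baseline({d: 4 for d in DIMENSIONS}, date(2026, 3, 31))
later = {d: 6 for d in DIMENSIONS}
print(delta_vs_baseline(base, later))   # every dimension shows +2 of lift
```

Copying the scores at lock time matters: if later measurements mutated the same dict, the anchor would drift and every delta after month one would be meaningless.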

Output: Operational data layer live. First AI workflow in use. VFI baseline and CVM dashboard ready. You can now see, in real time, where AI is driving lift and where friction persists.

Red Flags That Should Reprice or Kill the Thesis

Some diagnostics are deal-breakers, or at least repricing events:

  1. Hostile leadership. Skeptical is workable; a CEO or executive team viscerally opposed to anything that feels like automation caps the thesis before it starts.
  2. No source of customer truth. Three systems that don't talk to each other, and nobody who can trace margin from deal to delivery to invoice.
  3. Processes that live in people's heads. If the core workflow is mostly intuition, or walked out the door when someone quit, there is nothing to augment.
  4. Invisible unit economics. If nobody can see margin per customer or cost per order, you can't measure lift, and you'll argue about whether things got better.
  5. Hard regulatory or contractual constraints. These rarely kill the deal, but they narrow the scope and should reprice it.

Green Flags Worth Paying Up For

The inverse signals deserve a premium, even when the tech stack looks dated:

  1. Obsessive spreadsheet discipline. Clean, deduplicated transactional data compensates for old software, and it's gold for AI work.
  2. A CEO already piloting AI tools. Leadership that treats AI as opportunity rather than threat removes the biggest adoption risk.
  3. Mostly rule-based core processes. A "75% rules, 20% judgment, 5% pattern" business is codifiable, and therefore augmentable.
  4. Visible unit economics. If the team can already see margin per customer and cost per order, every AI play can be measured from day one.

The Operating Thesis

Diligence is no longer about what the business is. It's about what it can become.

The 90-day framework is a discipline for seeing that future clearly, for quantifying it, and for building the operating substrate that makes it real. It turns AI readiness from a qualitative hunch into a measurable, staged program that you can staff, fund, and track.

It also changes how you price the deal. A business with a 3-3-3-3 AI readiness score is not the same as a business with a 1-2-2-1 score, even if their LTM EBITDA is identical. The upside is different. The execution risk is different. The multiple you should pay is different.

That's the point. This framework makes the difference concrete. And in PE, concrete is where value lives.