The Problem: Eight Pilots, Zero Clarity
The CTO emails you the monthly AI dashboard. Eight pilots. Six months of runway. Three separate tools logging results.
You open the file. Your CFO asks what this means for EBITDA. Your operating partner asks which pilots to fund. You have no answer, not because the data is missing, but because you're measuring inputs (pilot count, budget spent) instead of outputs (where value actually lives).
This is the pattern across most PE portfolios. Companies run AI initiatives as isolated experiments. There's no common language for what's working. There's no diagnostic to separate the pilots that move the needle from the ones that feel good.
Why Your Existing Measures Fall Short
Most portfolio companies rely on ROI-per-project to justify AI spend. The problem is obvious: not every AI initiative produces revenue, and not every dollar saved goes to the bottom line. A 40% reduction in data-storage costs looks good until you ask whether it compounds over time or vanishes with the next infrastructure decision.
Some portfolios try revenue-share-of-AI, the contribution AI initiatives make to top-line growth. But this treats each project as independent. It misses the fact that real value in a mature portfolio comes from compounding: when AI improves operations, which fuels revenue, which attracts better talent, which drives better decisions. Measure the parts and you miss the whole.
Vanity metrics (pilot count, AI headcount, investment dollars) are even worse. They tell the board that you're doing something without telling you whether it matters.
Introducing the Value Friction Index
The Value Friction Index is a diagnostic scorecard that measures the friction between strategy and execution in how a portfolio company deploys AI and data assets. It scores five value levers, each on a scale of 1 to 5, based on observable evidence in your operating model, product, and financials. It's not an audit. It's a compass that points you toward where stuck value lives.
The five levers are:
- Revenue Expansion: are data and AI initiatives directly driving customer acquisition, conversion, or wallet share?
- Margin Improvement: is automation or optimization reducing cost-to-serve without sacrificing quality or customer experience?
- Speed & Throughput: are workflows, sales cycles, or time-to-market measurably faster because of AI or data integration?
- Risk Reduction: is the company better able to predict, avoid, or mitigate operational, financial, or reputational risks?
- Multiple Expansion: are the business model shifts unlocked by these capabilities visible in valuation multiples or customer stickiness?
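The five levers lend themselves to a simple scorecard structure. A minimal sketch in Python, with all names hypothetical, that enforces the 1-to-5 range and surfaces the weakest lever:

```python
from dataclasses import dataclass, fields

# Hypothetical scorecard: the five VFI levers, each scored
# 1 (fragmented or aspirational) to 5 (woven into the operating model).
@dataclass
class VFIScorecard:
    revenue_expansion: int
    margin_improvement: int
    speed_throughput: int
    risk_reduction: int
    multiple_expansion: int

    def __post_init__(self):
        # Reject out-of-range scores at construction time.
        for f in fields(self):
            score = getattr(self, f.name)
            if not 1 <= score <= 5:
                raise ValueError(f"{f.name} must be 1-5, got {score}")

    def weakest_lever(self) -> str:
        # The lowest-scoring lever is where friction, and opportunity, is highest.
        return min(fields(self), key=lambda f: getattr(self, f.name)).name
```

For example, `VFIScorecard(2, 4, 3, 3, 2).weakest_lever()` returns `"revenue_expansion"`, pointing the operating partner at the first friction point to attack.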
How VFI Is Scored
Each lever gets a score from 1 to 5. You don't rely on self-reporting. You look at the business.
A score of 1 means the lever is fragmented or aspirational. There's a pilot, but no evidence that it's flowing into the core business. A 5 means it's woven into the operating model. The mechanism is repeatable. The results compound.
For Revenue Expansion, that means looking at whether new revenue attributable to AI-driven features is growing quarter over quarter and flowing into core product metrics. It means checking whether your marketing team uses data to segment, personalize, and convert, or whether they run campaigns the same way they always have.
For Margin Improvement, you're looking at cost-per-transaction, cost-per-employee-output, or cost-of-goods-sold in categories where automation was deployed. Is the improvement persistent? Does it come from a process change or just a one-time reduction?
Speed & Throughput is the most concrete. You're measuring cycle time: how long from inquiry to estimate? Quote to close? Idea to launch? You should have these numbers in your board deck. If you don't, you don't have clarity on execution friction.
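If those cycle-time numbers aren't in the board deck yet, they're cheap to pull from timestamps the business already logs. A sketch, assuming a CRM export of inquiry and estimate dates (the records here are invented for illustration):

```python
from datetime import date
from statistics import median

# Hypothetical (inquiry_date, estimate_date) pairs from a CRM export.
cycles = [
    (date(2024, 1, 3), date(2024, 1, 17)),
    (date(2024, 1, 10), date(2024, 1, 18)),
    (date(2024, 2, 1), date(2024, 2, 21)),
]

# Median days from inquiry to estimate: one concrete Speed & Throughput metric.
days = [(end - start).days for start, end in cycles]
print(median(days))  # -> 14
```

The same three lines work for quote-to-close or idea-to-launch; only the two timestamp columns change.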
Risk Reduction is harder to quantify, but it shows in churn, ARR volatility, or customer-concentration metrics. It shows in claims frequency or underwriting error rates. The diagnostic is: did we move the needle on something that matters to the business, or just collect more data?
Multiple Expansion is the ultimate signal. Are comparable companies with similar AI maturity trading at higher revenue or EBITDA multiples? Are your customers stickier because of what you've built?
None of these are self-reported. You ground each score in the books, the systems, and the evidence your operating partners can actually see.
What High-VFI Looks Like
A high-VFI portfolio company doesn't have more AI pilots than anyone else. It has fewer, and they're deeper.
The operating model is unified around data. There's a single source of truth for customer behavior, product usage, or operational metrics. When the product team wants to run an experiment, they don't build a new dashboard; they query the platform. When the supply chain team wants to optimize, the data asset is already there. When revenue decides to target a new segment, they know the margin profile and the churn risk before they spend a dollar.
The playbooks are durable. When a new person joins the revenue team or the ops team, they inherit a repeatable way of using data to make decisions. It's not dependent on one person's SQL skills. It's not dependent on the current data analyst staying. The capability persists.
The data assets compound. Each successive AI initiative doesn't start from zero. It leverages what came before. The data model strengthens with use. The integrations expand the surface area for the next innovation. Over 18 months, you move from eight isolated pilots to one coherent system that powers continuous improvement across the whole business.
That's what eliminating friction looks like. And it shows in the financials.
How to Run Your First VFI Scan
Start by picking one portfolio company. It doesn't need to be your most mature AI player. It needs to be one where you're spending operating time right now.
Schedule a half-day with the CEO, CFO, and the operating partner who knows the business best. Don't do this over email. You need to ask follow-up questions, and you need to see the organization's actual answer, not a PowerPoint answer.
Walk through each lever. For Revenue Expansion, ask: "What revenue has AI or data driven in the last 12 months?" You should get a number. If you get a story, you don't have a clear mechanism yet. For Margin Improvement, ask to see cost-per-unit or cost-per-transaction trends. For Speed & Throughput, ask for a specific cycle-time metric. You probably have it already; it's just not labeled as an AI outcome.
Then score each lever 1 to 5. Don't let the average mask the lows; note where the friction is highest. That's usually where the biggest value creation opportunity sits. A portfolio company with a 1 or 2 on Revenue Expansion probably has a clear path to unlocking value by reorganizing how the product or marketing org uses customer data. A company with a 3 on Speed & Throughput usually has the infrastructure but not the discipline to apply it everywhere.
Create a simple one-pager: company name, the five scores, and three near-term moves to close each friction point. Share it with your operating partner. This becomes your diagnostic and your roadmap.
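The one-pager itself is nearly mechanical once the scores exist. A sketch that prints the scores, the composite (useful for tracking the trend over time, while the per-lever scores drive action), and the highest-friction lever; the company name and scores are placeholders:

```python
# Hypothetical scores from a first VFI scan (1 = fragmented, 5 = woven in).
scores = {
    "Revenue Expansion": 2,
    "Margin Improvement": 4,
    "Speed & Throughput": 3,
    "Risk Reduction": 3,
    "Multiple Expansion": 2,
}
company = "Acme Industrial"  # placeholder

composite = sum(scores.values()) / len(scores)
weakest = min(scores, key=scores.get)  # first lowest-scoring lever

print(f"{company}: composite VFI {composite:.1f}")
for lever, score in scores.items():
    print(f"  {lever}: {score}")
print(f"Highest friction: {weakest}")
```

With these placeholder scores the composite is 2.8; a later scan of the same company gives you the trend line the exit narrative depends on.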
The Compounding Effect
VFI isn't a goal. It's a measurement. But measurement drives behavior.
When you score your portfolio companies against these five levers, you start to see which ones are building durable, compounding value and which ones are running experiments. You start to allocate operating time to the right friction points. And in exit conversations, you can point to something tangible: "This company moved from a 2.8 VFI to a 4.2 VFI in 18 months. That's why the multiple went from 8x to 11x EBITDA."
Value that compounds is value that sticks. It's value that survives a leadership change, a market cycle, or a new competitor. That's the kind of value that shows up in exit multiples.