Core Web Vitals is one of those topics where the conversation drifted away from the thing that actually matters. Every dev team has a Lighthouse report. Every report has a green-yellow-red score. Almost nobody asks what the score is supposed to mean for revenue. In 2026, the answer is different than it was in 2021 — and the metrics that matter for revenue are not the same metrics Google publishes loudest.
The Lighthouse Trap
Lighthouse is a synthetic test. It runs in a sandbox, simulates a mobile device, and measures performance against a fixed benchmark. That’s useful for diagnosis. It’s a terrible proxy for revenue. The actual users on your site are running on a different distribution of devices, networks, and contexts. The synthetic score doesn’t see them.
We’ve worked with brands that obsessed over Lighthouse scores while their real-user (RUM) data told a totally different story. Field data wins. Lab data is a diagnostic.
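What "field data wins" looks like in practice: the Chrome UX Report (CrUX) API serves the same real-user p75 data that backs the Search Console Core Web Vitals report. A minimal sketch, assuming you have a Google API key; the origin is a placeholder, and the metric names are CrUX's own identifiers:

```python
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_query(origin: str, form_factor: str = "PHONE") -> dict:
    """Request body for the CrUX queryRecord endpoint."""
    return {
        "origin": origin,
        "formFactor": form_factor,
        "metrics": [
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
        ],
    }

def fetch_p75(origin: str, api_key: str, form_factor: str = "PHONE") -> dict:
    """POST the query and map each metric to its p75 field value."""
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=json.dumps(build_crux_query(origin, form_factor)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        record = json.load(resp)["record"]
    return {m: v["percentiles"]["p75"] for m, v in record["metrics"].items()}
```

Run the same query per form factor and you have the device segmentation a Lighthouse run can't give you.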
The Three Metrics That Actually Move Revenue
Across the brands we’ve measured at scale, three field metrics correlate with conversion in a way the synthetic ones don’t:
- Largest Contentful Paint (LCP), field data. Real users seeing the main content on real devices. A 0.5s improvement on LCP almost always shows up in conversion. The lab number doesn’t.
- Interaction to Next Paint (INP). The replacement for FID. Measures how responsive the page feels when the user actually does something. Brands that ignored INP through 2024 saw conversion erosion they couldn’t explain.
- Cumulative Layout Shift (CLS), specifically on PDP and checkout. Layout shift kills add-to-cart confidence. Most brands have CLS issues in the cart they don’t know about.
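Google's published "good" thresholds at the 75th percentile (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1) are the standard yardstick for the three metrics above. A minimal sketch of rating a segment's p75 values against them:

```python
# Google's published "good" / "poor" boundaries, applied at the
# 75th percentile of field data.
THRESHOLDS = {
    "lcp": (2500, 4000),   # milliseconds
    "inp": (200, 500),     # milliseconds
    "cls": (0.1, 0.25),    # unitless layout-shift score
}

def rate(metric: str, p75: float) -> str:
    """Classify a p75 field value as good / needs-improvement / poor."""
    good, poor = THRESHOLDS[metric]
    if p75 <= good:
        return "good"
    if p75 <= poor:
        return "needs-improvement"
    return "poor"
```

For example, `rate("inp", 350)` lands in "needs-improvement" even though the same page might score green in a synthetic run on a fast machine.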
What Google Actually Weights (vs. What It Says)
Google publishes Core Web Vitals as ranking signals, and they are. But in 2026 they’re weighted less than Google’s marketing implies. The ranking impact is real but modest. The revenue impact, when measured properly, is much larger than the ranking impact.
Optimize Web Vitals for revenue first. The ranking benefit comes along for the ride.
Brands that treat Web Vitals as an SEO project leave most of the value on the table. The revenue framing changes who owns the work, how the work gets prioritized, and how the wins get measured.
A Performance Program That Pays Back
The performance program we’ve seen work across DTC and eCommerce:
- Field-data baseline. Pull 90 days of real-user data from Search Console + Web Vitals report. Segment by device, country, and template.
- Revenue correlation. Map field metrics to conversion rate by segment. The metric that correlates hardest is the one to attack first.
- Single-template focus. Optimize one template (PDP, category, checkout, homepage) at a time, measure the delta, then move to the next. Whole-site rewrites don’t prove anything.
- Continuous regression monitoring. Performance regresses. Quarterly audits keep it from sneaking back.
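Steps 1 through 4 can be wired into a simple regression gate: compute p75 per template from raw RUM samples, then flag any template whose current p75 drifts past the last-known-good baseline. The function names and the 5% tolerance are illustrative, not a fixed methodology:

```python
import math

def p75(samples: list[float]) -> float:
    """75th percentile (nearest-rank) of raw RUM samples for one template."""
    ordered = sorted(samples)
    rank = math.ceil(0.75 * len(ordered)) - 1
    return ordered[rank]

def regressions(current: dict, baseline: dict, tolerance: float = 0.05) -> list[str]:
    """Templates whose current p75 exceeds the baseline p75 by more than tolerance."""
    return [
        template
        for template, value in current.items()
        if template in baseline and value > baseline[template] * (1 + tolerance)
    ]
```

Run it on a schedule against each template's baseline and a regression surfaces as a failing check instead of an unexplained conversion dip a quarter later.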
Looking to launch your Web Performance program?
Our development team runs revenue-correlated performance programs across DTC and eCommerce. The field-first methodology is what makes the math hold up at scale. Parts Place Inc is one example where the performance rebuild and the conversion-rate lift moved together — the work is more useful when CRO and performance ship as one program.
Tell us what your Lighthouse score looks like and we’ll come back with a custom plan in 24 hours.