
Why a Page Speed score changes between runs (and what to do about it)

Lighthouse is run-to-run noisy. Understand why a score might swing 5–10 points without anything changing, and which changes genuinely move the needle.

If you run a Page Speed analysis, then run it again the next day, you'll often see the score change by 5–10 points without having touched a thing. This isn't a bug — Lighthouse is genuinely run-to-run noisy by design. This article explains why, and helps you focus on the signals that aren't noise.

Why scores swing

A Lighthouse run is a real Chrome browser loading your page from a Google data center over a (simulated) slow 4G connection. Several things vary between runs:

  • Network conditions to your origin. The same data center might have different routing latency to your host on different days.
  • Your origin's response time. A cache-warm server responds faster than a cache-cold one, and PHP-FPM workers that are already running respond faster than workers that have to spin up first.
  • Third-party scripts. Anything loaded from another domain (Google Tag Manager, Hotjar, Cloudflare Insights, ad networks) has its own variable latency contributing to your Performance score.
  • JavaScript task scheduling. The order in which the browser runs main-thread tasks isn't perfectly deterministic.

Performance is by far the noisiest score (5–15 points of run-to-run variance is common). Accessibility, Best Practices, and SEO are mostly checklist-based and barely move between runs unless your code actually changed.

What this means in practice

  • Don't react to a single run. If your Performance score drops from 84 to 78 between runs, that's within normal noise. Wait for a second data point, or take the median of several back-to-back runs (see the sketch after this list).
  • Look at trends, not snapshots. If your score goes from 84 → 78 → 76 → 71 over four weekly runs, that's a real regression. If it goes 84 → 78 → 86 → 81, that's noise.
  • Big jumps usually have a cause. If you see a 20-point drop after a deploy, something concrete probably regressed — start by looking at what changed.
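
If you want to smooth the noise out yourself, one option is to script a handful of back-to-back runs and keep the median. Here is a minimal sketch using Lighthouse's Node API; it assumes the lighthouse and chrome-launcher npm packages are installed, and the URL and run count are placeholders.

```ts
// Sketch: run Lighthouse several times against one URL and report the
// median Performance score, so a single noisy run doesn't mislead you.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function medianPerformanceScore(url: string, runs = 5): Promise<number> {
  // Launch one headless Chrome instance and reuse it for every run.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const scores: number[] = [];
  try {
    for (let i = 0; i < runs; i++) {
      const result = await lighthouse(url, {
        port: chrome.port,
        onlyCategories: ['performance'],
        output: 'json',
      });
      // The raw score is 0–1; convert it to the familiar 0–100 scale.
      const raw = result?.lhr.categories.performance.score ?? 0;
      scores.push(Math.round(raw * 100));
    }
  } finally {
    await chrome.kill();
  }
  scores.sort((a, b) => a - b);
  return scores[Math.floor(scores.length / 2)];
}

medianPerformanceScore('https://example.com').then((median) =>
  console.log(`Median Performance score: ${median}`)
);
```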

Score vs metric: which to trust

The 0–100 score is a weighted aggregate of the underlying lab metrics. For ranking, what Google actually uses are the Core Web Vitals metrics (LCP, CLS, INP) as measured from real users in the field, not the lab score; TBT is a lab-only stand-in for INP.

For ranking purposes, what matters is whether your real users (the field data shown on pagespeed.web.dev) are seeing fast pages, not whether your lab Performance score is 92 vs 87. The lab score is a useful proxy, not a goal in itself.
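
If you'd rather pull the field data programmatically than read it off pagespeed.web.dev, a sketch along these lines queries the public PageSpeed Insights v5 API, which returns the Chrome UX Report field metrics alongside the lab result. The exact response key names and units below are assumptions based on that API; inspect the response you actually get back before relying on them.

```ts
// Sketch: fetch field (Chrome UX Report) Core Web Vitals for a URL via the
// PageSpeed Insights v5 API. Metric key names are assumptions; verify them
// against a real response. Requires a runtime with global fetch (Node 18+).
async function fieldVitals(url: string): Promise<void> {
  const endpoint =
    'https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=' +
    encodeURIComponent(url);
  const res = await fetch(endpoint);
  const data = await res.json();

  // loadingExperience holds field data for the URL (or falls back to
  // origin-level data when the URL itself has too little traffic).
  const metrics = data.loadingExperience?.metrics ?? {};
  console.log('LCP p75 (ms):', metrics.LARGEST_CONTENTFUL_PAINT_MS?.percentile);
  console.log('CLS p75 (score × 100):', metrics.CUMULATIVE_LAYOUT_SHIFT_SCORE?.percentile);
  console.log('INP p75 (ms):', metrics.INTERACTION_TO_NEXT_PAINT?.percentile);
}

fieldVitals('https://example.com');
```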

Things that genuinely move the needle

If you want a real improvement and not just a noise fluctuation, focus on:

  1. LCP < 2.5s consistently. Server response time + the largest hero image / heading. Cache aggressively, compress and resize images, use modern formats (WebP/AVIF).
  2. CLS < 0.1. Every image and embed needs explicit width/height. Late-loading fonts, ads, and consent banners are common offenders.
  3. INP / TBT < 200ms. Heavy JavaScript on initial page load. Code-split, defer non-critical scripts, drop unused dependencies (see the sketch after this list).
  4. Cut third-party scripts. Audit Google Tag Manager containers ruthlessly. Remove unused tags. Each third-party script is variance you can't control.
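
As an illustration of point 3, one common pattern is to load non-critical JavaScript only after the page has settled, instead of shipping it in the initial bundle where it adds main-thread work that TBT/INP measure. A minimal sketch; the "./analytics-widget" module and its init() export are hypothetical placeholders for whatever non-critical code you're deferring.

```ts
// Sketch: defer a non-critical module until the browser is idle instead of
// importing it eagerly in the main bundle.
// "./analytics-widget" and its init() export are hypothetical placeholders.
function whenIdle(callback: () => void): void {
  if ('requestIdleCallback' in window) {
    window.requestIdleCallback(() => callback());
  } else {
    // Fallback for browsers without requestIdleCallback.
    setTimeout(callback, 2000);
  }
}

whenIdle(() => {
  // Dynamic import keeps this code out of the initial bundle entirely.
  import('./analytics-widget').then(({ init }) => init());
});
```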

When to ignore the score entirely

If your site requires login, runs heavy admin-only experiences, or sits behind a captcha that Google's crawler can't solve, Lighthouse can't load it properly and the score is meaningless. The Analysis failed error usually surfaces this clearly. For protected pages, run Lighthouse manually from your own browser via Chrome DevTools instead.
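
If you'd rather script it than click through DevTools, Lighthouse can also be run programmatically with extra request headers (for example a session cookie) so it can reach pages behind a login. A rough sketch reusing the Node API from the earlier example; the extraHeaders flag reflects Lighthouse's documented support for extra request headers, and the cookie name and value are placeholders you'd supply yourself.

```ts
// Sketch: audit a login-protected page by passing a session cookie to
// Lighthouse via the extraHeaders flag. Cookie name/value are placeholders.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditProtectedPage(url: string, sessionCookie: string) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
      extraHeaders: { Cookie: sessionCookie },
    });
    console.log('Performance:', result?.lhr.categories.performance.score);
  } finally {
    await chrome.kill();
  }
}

auditProtectedPage('https://example.com/account', 'session=REPLACE_ME');
```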
