If you’ve ever run Lighthouse audits manually, you already know the pain points:
- Clicking through pages one by one
- Running audits multiple times to smooth out variance
- Copying metrics into spreadsheets
- Trying to compare performance over time
- Re-running everything after even a small change
I ran into this exact problem on real client projects — especially performance-sensitive, SEO-driven sites — so I built a small Python-based tool to automate the entire Lighthouse workflow.
This post walks through what the tool is, what it does, and how to use it.
What This Tool Is
This is a Python wrapper around the Lighthouse CLI that lets you:
- Run Lighthouse against multiple URLs
- Run multiple passes per URL
- Automatically aggregate results
- Extract meaningful metrics like:
  - Performance score
  - Total load time
  - LCP, CLS, TBT, FCP
- Optionally generate HTML Lighthouse reports
- Run the whole thing locally or in CI
Instead of treating Lighthouse like a one-off manual check, this tool turns it into a repeatable audit pipeline.
At its core, it’s designed for developers who want real numbers, not vibes.
What It Does
Here’s what the tool handles for you:
1. Multiple URLs, One Run
You can define a list of URLs (homepage, category pages, detail pages, etc.) and audit all of them in one command.
No more tab-hopping.
2. Multiple Passes Per Page
Lighthouse results can vary run-to-run.
This tool runs each URL multiple times and averages the results.
That means:
- Less noise
- More reliable metrics
- Better before/after comparisons
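Concretely, the aggregation step is nothing exotic: each pass produces a set of numbers, and the tool averages them per metric. Here's a minimal sketch of that idea in Python (the function name and metric keys are illustrative, not the tool's exact API):

```python
from statistics import mean

def aggregate_passes(passes: list[dict]) -> dict:
    """Average each numeric metric across repeated Lighthouse passes."""
    keys = passes[0].keys()
    return {key: round(mean(p[key] for p in passes), 2) for key in keys}

# Three noisy runs collapse into one stable number per metric.
runs = [
    {"performance": 0.91, "lcp_ms": 2300},
    {"performance": 0.95, "lcp_ms": 2050},
    {"performance": 0.93, "lcp_ms": 2180},
]
print(aggregate_passes(runs))  # {'performance': 0.93, 'lcp_ms': 2176.67}
```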
3. Clean, Structured Output
The script extracts the metrics that actually matter and writes them to structured output (JSON / CSV-friendly).
That makes it easy to:
- Track regressions
- Compare deploys
- Feed results into dashboards
- Share numbers with stakeholders without sending screenshots
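If you're wondering what "the metrics that actually matter" look like in practice: Lighthouse's JSON report exposes the category score under categories.performance.score and the individual metrics as audits keyed by ID (largest-contentful-paint, cumulative-layout-shift, and so on). A hedged sketch of the extraction step; the output shape below is illustrative, not the tool's exact schema:

```python
import json
from pathlib import Path

def extract_metrics(report_path: Path) -> dict:
    """Pull the headline numbers out of a single Lighthouse JSON report."""
    report = json.loads(report_path.read_text())
    audits = report["audits"]
    return {
        "performance": report["categories"]["performance"]["score"],  # 0-1
        "fcp_ms": audits["first-contentful-paint"]["numericValue"],
        "lcp_ms": audits["largest-contentful-paint"]["numericValue"],
        "cls": audits["cumulative-layout-shift"]["numericValue"],
        "tbt_ms": audits["total-blocking-time"]["numericValue"],
    }

# Hypothetical report file produced by an earlier run:
print(extract_metrics(Path("reports/example-com-run1.json")))
```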
4. Optional HTML Reports
If you still want Lighthouse’s visual reports, you can keep them.
Each run can output the full HTML reports alongside the aggregated data.
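If you've used the Lighthouse CLI directly, this maps onto its standard --output flag, which can be passed more than once to request multiple formats in a single run. A rough sketch of how the toggle might be wired up, borrowing the output_html key from the config example later in this post:

```python
def output_flags(config: dict) -> list[str]:
    """Build the --output flags for one Lighthouse invocation.

    JSON is always requested (it feeds the aggregation); HTML is added
    when the config asks for it.
    """
    formats = ["json"]
    if config.get("output_html"):
        formats.append("html")
    flags = []
    for fmt in formats:
        flags += ["--output", fmt]
    return flags

print(output_flags({"output_html": True}))
# ['--output', 'json', '--output', 'html']
```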
5. CI-Friendly by Design
Because it’s CLI-driven and scriptable, this works great in:
- GitHub Actions
- GitLab CI
- Any build pipeline where you want performance checks baked in
You can even fail builds if scores drop below a threshold.
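A budget check like that is only a few lines of Python once you have the aggregated scores. A sketch, assuming the summary is a dict of URL to averaged metrics (the threshold and shape are up to you):

```python
import sys

THRESHOLD = 0.90  # minimum acceptable performance score (Lighthouse scores are 0-1)

def enforce_threshold(results: dict, threshold: float = THRESHOLD) -> None:
    """Exit non-zero if any page's averaged performance score is below the threshold."""
    failures = {
        url: metrics["performance"]
        for url, metrics in results.items()
        if metrics["performance"] < threshold
    }
    if failures:
        for url, score in failures.items():
            print(f"FAIL {url}: performance {score:.2f} < {threshold:.2f}")
        sys.exit(1)  # a non-zero exit code fails the CI job
    print("All pages meet the performance budget.")

enforce_threshold({"https://example.com": {"performance": 0.84}})
```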
How to Use It
Prerequisites
You’ll need:
- Node.js (for Lighthouse CLI)
- Python 3.9+
- Lighthouse installed globally or available via npx
Install Dependencies
pip install -r requirements.txt

(If Lighthouse isn't installed globally, the script will use npx lighthouse automatically.)
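That fallback can be as simple as checking whether a global lighthouse binary exists on PATH, roughly like this (the function name is illustrative):

```python
import shutil

def lighthouse_base_command() -> list[str]:
    """Prefer a globally installed lighthouse binary; otherwise fall back to npx."""
    if shutil.which("lighthouse"):
        return ["lighthouse"]
    return ["npx", "lighthouse"]
```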
Configure Your URLs
In the config file (or script args), define:
- URLs to test
- Number of runs per URL
- Output directory
- Whether to generate HTML reports
Example:
{
  "urls": [
    "https://example.com",
    "https://example.com/products",
    "https://example.com/about"
  ],
  "runs_per_url": 5,
  "output_html": true
}

Run the Audit
python run_lighthouse.py

That’s it.
The script will:
- Run Lighthouse against each URL
- Repeat runs per your config
- Aggregate the results
- Save structured output + reports
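Under the hood, that loop is roughly the following — a simplified sketch rather than the tool's actual source; the file layout, flags, and function names are illustrative:

```python
import json
import statistics
import subprocess
from pathlib import Path

def run_once(url: str, out_path: Path) -> dict:
    """Run one headless Lighthouse pass and return the parsed JSON report."""
    subprocess.run(
        ["npx", "lighthouse", url,          # a real version would reuse the npx fallback above
         "--output", "json",
         "--output-path", str(out_path),
         "--chrome-flags=--headless",
         "--quiet"],
        check=True,
    )
    return json.loads(out_path.read_text())

def audit(urls: list, runs_per_url: int, out_dir: Path) -> dict:
    """Map each URL to its performance score averaged over runs_per_url passes."""
    out_dir.mkdir(parents=True, exist_ok=True)
    results = {}
    for url in urls:
        slug = url.replace("https://", "").replace("/", "_") or "root"
        scores = []
        for i in range(runs_per_url):
            report = run_once(url, out_dir / f"{slug}-run{i}.json")
            scores.append(report["categories"]["performance"]["score"])
        results[url] = round(statistics.mean(scores), 2)
    (out_dir / "summary.json").write_text(json.dumps(results, indent=2))
    return results

if __name__ == "__main__":
    print(audit(["https://example.com"], runs_per_url=3, out_dir=Path("reports")))
```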
Why I Built This
On modern projects — especially Next.js, headless WordPress, and large content sites — performance regressions happen quietly.
A new component here.
A bigger image there.
A third-party script sneaks in.
Manual Lighthouse checks don’t scale, and they’re easy to forget.
This tool exists to:
- Make performance measurable
- Make audits repeatable
- Make regressions obvious
It’s not meant to replace Lighthouse — it’s meant to weaponize it.
Who This Is For
If you’re:
- A front-end or full-stack dev
- Working on performance-sensitive sites
- Tired of one-off Lighthouse runs
- Interested in CI-driven quality checks
…then this tool will save you time and give you better data.
What’s Next
I’m actively expanding this idea toward:
- Automated QA checks (navigation, accessibility, layout)
- Playwright-based validation
- Screenshot comparisons
- Combined performance + UX audits
If you’re interested, keep an eye on future posts — this tool is just the foundation.