
Website performance audit: the complete method to diagnose and act

Metrics, tools, method, prioritization, and reporting: the operational guide for freelancers and agencies who want to turn a performance audit into a client-validated action plan.

Key takeaways
  • A performance audit combines lab measurements, field metrics, and prioritized diagnosis — not just a score.
  • Core Web Vitals (LCP, INP, CLS) are Google's reference baseline, but they are not enough on their own.
  • The value of an audit lies in the reporting: actionable recommendations and a quantified fix plan.

A client calls you: their site takes five seconds to render on mobile, conversions are dropping, and the previous agency delivered an eighty-page PageSpeed report they never opened. You have two options: rerun PageSpeed Insights and send back the same incomprehensible PDF, or run a real performance audit and walk away with an action plan they will validate.

A well-conducted website performance audit is not just a score. It is a method that cross-references objective measurements, an understanding of the business context, and prioritized reporting. Freelancers and agencies who master this method turn a free diagnosis into a maintenance contract or an optimization project.

This guide describes the complete method, step by step, with the metrics to look at, the tools to use, the reference thresholds, and the mistakes to avoid.


What a performance audit really measures

Before talking about tools, you need to clarify what you are looking for. A performance audit does not measure a site's speed the way a stopwatch measures a sprint. It measures a chain of server and browser behaviors that, together, produce the impression of slowness or fluidity for the user.

The three layers of a performance diagnosis

The first layer concerns the server: how long it takes to respond, the size of responses, the cache configuration. This is the layer most often ignored by mainstream audits, and yet the one that determines everything else. If the server takes two seconds to send the first byte, no front-end optimization will save the experience.

The second layer concerns the resources loaded by the page: scripts, stylesheets, images, fonts, iframes. Each resource consumes a network request, download time, and parsing time. This is the classic playground of technical optimizations: minification, compression, lazy loading, modern formats.

The third layer concerns rendering in the browser: the order in which elements appear, visual stability, responsiveness to interactions. This is where Core Web Vitals live, and the layer Google observes to rank pages in its results.

Lab measurements and field measurements

A serious audit uses two complementary types of data. Lab measurements are produced by tools that simulate a load under controlled conditions: Lighthouse, WebPageTest, Orilyt. They are reproducible, allow before/after comparison, and isolate technical problems.

Field measurements come from real visits by real users. Google collects them via the Chrome User Experience Report (CrUX), accessible in Search Console and PageSpeed Insights. They reflect the performance perceived by the site's actual audience, with the diversity of devices, connections, and usage conditions.

An audit relying only on lab measurements misses the reality of the traffic. An audit relying only on field measurements does not know why the numbers are bad.

The metrics that really matter in 2026

The list of performance indicators is long, but a handful are enough for a solid diagnosis. The rest are convenience or debugging metrics.

Core Web Vitals, the unavoidable foundation

Google made Core Web Vitals a ranking signal with the Page Experience update, rolled out starting in June 2021. Since March 2024, INP has replaced FID as the responsiveness metric. These three indicators form the basis of any SEO-oriented performance audit.

LCP (Largest Contentful Paint) measures the time needed to display the main element visible on screen, generally an image or a text block. The "good" threshold is under 2.5 seconds, INP (Interaction to Next Paint) must stay below 200 milliseconds, and CLS (Cumulative Layout Shift) below 0.1. These thresholds are published by Google on web.dev and serve as the universal reference.
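The thresholds above can be encoded in a few lines. This is an illustrative helper, not part of any official API; it uses Google's published "good" and "needs improvement" bounds from web.dev (LCP 2.5 s / 4 s, INP 200 ms / 500 ms, CLS 0.1 / 0.25).

```python
# Illustrative Core Web Vitals classifier based on the web.dev thresholds.
# Units: LCP in seconds, INP in milliseconds, CLS unitless.
CWV_THRESHOLDS = {
    "LCP": (2.5, 4.0),
    "INP": (200, 500),
    "CLS": (0.1, 0.25),
}

def rate(metric: str, value: float) -> str:
    """Classify a metric value as good / needs improvement / poor."""
    good_max, ni_max = CWV_THRESHOLDS[metric]
    if value <= good_max:
        return "good"
    if value <= ni_max:
        return "needs improvement"
    return "poor"

print(rate("LCP", 2.1))   # good
print(rate("INP", 350))   # needs improvement
print(rate("CLS", 0.3))   # poor
```

Applied to CrUX data, the same classification is done at the 75th percentile of visits, which is the percentile Google uses for ranking eligibility.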

TTFB, server health indicator

Time to First Byte measures the delay between the browser's request and the first byte of the response. A TTFB under 800 ms is good, between 800 ms and 1.5 s acceptable, above 1.5 s problematic. It is often the first indicator of underpowered hosting or a missing server cache.

To understand in detail what this metric reveals and how to improve it, you can measure server response time with a dedicated test before tackling the rest.
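A rough TTFB check can be done from the standard library alone. The sketch below approximates TTFB as the time until the first body byte is read, and rates it against the thresholds above; the helper names are hypothetical and a dedicated tool will give more precise timings.

```python
# Sketch: approximate TTFB measurement plus a rating against the
# thresholds discussed above (good < 800 ms, acceptable up to 1.5 s).
import time
import urllib.request

def measure_ttfb(url: str) -> float:
    """Seconds between sending the request and reading the first body byte."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)  # first byte of the body arrives here
    return time.perf_counter() - start

def rate_ttfb(seconds: float) -> str:
    if seconds < 0.8:
        return "good"
    if seconds <= 1.5:
        return "acceptable"
    return "problematic"

# Live example (requires network access):
# print(rate_ttfb(measure_ttfb("https://example.com")))
print(rate_ttfb(0.45))  # good
```

Run it several times at different hours: a TTFB that swings between 300 ms and 2 s is itself a symptom of overloaded shared hosting.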

Page weight and number of requests

These two indicators speak most clearly to a non-technical client. A site weighing 8 MB per page and firing 180 external requests is heavy by nature, regardless of server optimization. The reasonable target for a modern showcase site is under 2 MB and under 50 requests per page, with some margin depending on the sector.
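Those targets translate directly into a page budget check. A minimal sketch, with the thresholds above as defaults (they are indicative, not absolute):

```python
# Hypothetical page-budget check: flag a page that exceeds the indicative
# targets for a showcase site (< 2 MB total weight, < 50 requests).
def over_budget(total_bytes: int, request_count: int,
                max_bytes: int = 2 * 1024 * 1024,
                max_requests: int = 50) -> list:
    """Return the list of budget violations for a page."""
    issues = []
    if total_bytes > max_bytes:
        issues.append(f"weight {total_bytes / 1_048_576:.1f} MB exceeds budget")
    if request_count > max_requests:
        issues.append(f"{request_count} requests exceed budget")
    return issues

print(over_budget(8 * 1024 * 1024, 180))  # two violations
print(over_budget(1_000_000, 30))         # []
```

The inputs (total weight, request count) are read straight off the network panel of any browser's developer tools or from a WebPageTest run.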

On the Orilyt platform, it is regularly observed that seemingly simple WordPress sites exceed 5 MB solely because of overloaded themes and plugins that load their own libraries on every page.

How to run a performance audit end to end

The method that follows is reproducible on any site, WordPress or not. It takes between thirty minutes and two hours depending on site size and the level of detail expected.

Step 1: scope and objectives

Before running the slightest test, clarify what you are measuring and for whom. A 500-SKU e-commerce site does not have the same stakes as a freelancer landing page. Identify priority pages: home, category pages, product pages, conversion funnel, or paid landing pages.

Also set the client's objectives. Improve Google ranking? Reduce mobile bounce rate? Prepare a redesign? Each objective drives the prioritization of recommendations. An audit without an objective produces a generic document that triggers no action.

Step 2: collect existing field data

If the site already has traffic, use it. Google Search Console gives access to CrUX data: LCP, INP, CLS measured on real users, segmented by mobile and desktop. Google Analytics provides average load times per page and device type.

These field data are the reference point on which you will be judged. A report that improves the Lighthouse score without moving the Core Web Vitals measured in production is a report that missed its target.

Step 3: run an automated technical audit

This is where Orilyt comes in. An Orilyt audit covers performance, security, technical SEO, accessibility, and GDPR compliance in one pass, across five analysis categories. The advantage over a single-purpose tool like GTmetrix is that correlations become visible: an external script slowing LCP can also signal a GDPR issue and a security risk if the domain is not controlled.

The audit runs on a URL, with no install, no admin access, no plugin to add to the client's site. That is what makes it possible to audit a prospect before they are even a client.

Step 4: complement with a detailed measurement tool

For critical pages, an aggregated audit is not enough. WebPageTest and Lighthouse let you analyze the loading waterfall resource by resource, identify the critical rendering path, and pinpoint bottlenecks. Plan for fifteen to thirty minutes per page analyzed in detail.

PageSpeed Insights synthesizes lab (Lighthouse) and field (CrUX). It is the reference tool to align your observations with those Google uses for ranking.
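When Lighthouse is run from the CLI with `--output=json`, the key lab metrics can be pulled out of the report programmatically. The sketch below assumes the Lighthouse report format (audits keyed by id, each with a `numericValue`); verify the keys against the Lighthouse version you run.

```python
# Sketch: extract headline lab metrics from a Lighthouse JSON report
# (produced with: lighthouse <url> --output=json --output-path=report.json).
import json

AUDIT_KEYS = {
    "LCP (ms)": "largest-contentful-paint",
    "CLS": "cumulative-layout-shift",
    "TBT (ms)": "total-blocking-time",
}

def summarize(report: dict) -> dict:
    """Map readable labels to the numeric value of each Lighthouse audit."""
    return {label: report["audits"][key]["numericValue"]
            for label, key in AUDIT_KEYS.items()}

# In practice: report = json.load(open("report.json"))
# Minimal fake report for illustration:
fake = {"audits": {
    "largest-contentful-paint": {"numericValue": 2300.0},
    "cumulative-layout-shift": {"numericValue": 0.05},
    "total-blocking-time": {"numericValue": 180.0},
}}
print(summarize(fake))
```

Automating this extraction is what makes before/after comparisons cheap: rerun Lighthouse after each fix and diff the summaries.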

Step 5: prioritize fixes by impact and effort

This is the step 80% of audits miss. Once problems are identified, you must classify them along two axes: impact on experience and metrics, and effort to fix. A hosting change improving TTFB by 1 second has huge impact but requires migration. Lazy loading on images has medium impact and takes fifteen minutes.

An actionable performance audit delivers an impact/effort matrix on which the client can decide what to prioritize. Without it, you deliver a list of fifty technical issues with no indication of where to start.
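The matrix logic can be sketched as a simple sort: quick wins (high impact, low effort) surface first. The 1-to-5 scores and the impact-to-effort ratio are an illustrative scheme, not a formal methodology.

```python
# Illustrative impact/effort prioritization: sort fixes so that quick wins
# (high impact, low effort) come first. Scores are on a 1-5 scale.
fixes = [
    {"fix": "Change hosting / add server cache", "impact": 5, "effort": 4},
    {"fix": "Lazy-load below-the-fold images",   "impact": 3, "effort": 1},
    {"fix": "Remove unused chat widget",         "impact": 2, "effort": 1},
    {"fix": "Convert images to WebP",            "impact": 4, "effort": 2},
]

# Sort by impact-to-effort ratio, breaking ties on raw impact.
prioritized = sorted(
    fixes,
    key=lambda f: (f["impact"] / f["effort"], f["impact"]),
    reverse=True,
)

for f in prioritized:
    print(f'{f["fix"]} (impact {f["impact"]}, effort {f["effort"]})')
```

Presented this way, the hosting migration is not buried or dismissed: it simply lands at the bottom of the list as a scoped project rather than a quick win.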

The most frequent performance problems

After thousands of audits run with Orilyt, certain patterns come back constantly. Knowing them speeds up diagnosis and lets you anticipate recommendations.

Underpowered hosting and missing server cache

This is by far the number one cause of slow sites. Cheap shared hosting splits resources across hundreds of sites, producing latency spikes during busy hours. The absence of server cache forces WordPress to regenerate every page on every request, which can add a full second to TTFB.

According to Cloudflare documentation, enabling a page cache upstream of the PHP server typically reduces TTFB by 60 to 80%. For a site whose content rarely changes, this is the first lever to pull.

Uncontrolled third-party scripts

Analytics tools, live chats, advertising pixels, A/B testing scripts, and social widgets weigh down pages in ways often invisible to the site owner. Each adds one or several requests to a third-party domain, with its own latency and weight.

Render-blocking JavaScript delays the first paint, and whether a script can be deferred or loaded asynchronously has to be examined page by page. In an audit, the goal is not to remove everything but to identify the scripts that bring zero business value.
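A first pass over the render-blocking candidates can be automated. This standard-library sketch flags external scripts carrying neither `async` nor `defer`; it is deliberately simplified (it ignores `type="module"` scripts, which are deferred by default, and dynamically injected scripts).

```python
# Sketch: flag <script src=...> tags with neither async nor defer,
# i.e. the classic render-blocking pattern.
from html.parser import HTMLParser

class BlockingScriptFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        attrs = dict(attrs)  # boolean attributes appear as keys with None
        src = attrs.get("src")
        if src and "async" not in attrs and "defer" not in attrs:
            self.blocking.append(src)

sample = """
<head>
  <script src="/vendor/jquery.js"></script>
  <script src="/js/app.js" defer></script>
  <script async src="https://chat.example.com/widget.js"></script>
</head>
"""
finder = BlockingScriptFinder()
finder.feed(sample)
print(finder.blocking)  # ['/vendor/jquery.js']
```

Each flagged script then gets a manual verdict: defer it, load it async, or remove it entirely if it brings no business value.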

Unoptimized images

Images remain the top weight item on the majority of sites. A JPEG served at full resolution on mobile, where a resized WebP version would suffice, can multiply total page weight by a factor of five. Missing width and height attributes on images are also a classic cause of degraded CLS.

Modern formats (WebP, AVIF) reduce weight by 25 to 50% compared to JPEG at equivalent quality. Their support is now universal on recent browsers.
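The missing-dimensions problem mentioned above is mechanically detectable. A minimal standard-library sketch (it does not account for CSS `aspect-ratio` rules, which can also reserve space):

```python
# Sketch: detect <img> tags missing explicit width/height attributes,
# a frequent cause of layout shift (CLS).
from html.parser import HTMLParser

class UnsizedImageFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.unsized = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "width" not in attrs or "height" not in attrs:
            self.unsized.append(attrs.get("src", "(no src)"))

finder = UnsizedImageFinder()
finder.feed('<img src="/hero.jpg">'
            '<img src="/logo.png" width="120" height="40">')
print(finder.unsized)  # ['/hero.jpg']
```

With explicit dimensions (or a CSS aspect-ratio), the browser reserves the image's space before it downloads, so late-arriving images no longer push the layout around.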

WordPress technical debt

On a WordPress site, technical debt accumulates fast: abandoned plugins, premium themes loading fifteen JavaScript libraries, page builders generating bloated HTML, accumulation of revisions in the database. An audit must systematically inventory these debts and quantify the cost of cleaning them up.

How to deliver an audit the client will actually use

A technical audit that produces no client decision is a failed audit, regardless of its quality. Reporting is often what separates a consultant who sells their diagnosis from a consultant who sells nothing.

Separate technical report and client report

A developer and a director do not need the same information. The technical report contains details: resource lists, Lighthouse priorities, proposed code diffs. The client report synthesizes: what is wrong, what it costs in conversions or SEO, what is proposed to fix it, how long it takes.

Orilyt automatically generates both versions in white-label. To go further on this logic and structure a readable audit report that drives decision, the FIA method (Fact, Impact, Action) provides a presentation framework clients understand immediately.

Quantify business impact, not just scores

A Lighthouse score going from 45 to 85 means nothing to a decision-maker. A sentence like "by passing under the Core Web Vitals thresholds, the page gains Google ranking eligibility and reduces the risk of traffic loss over 18 months" speaks to everyone. That is precisely why a score alone is never enough to make a client decide: it must be translated into business impact.

According to studies published by Google and Deloitte (Milliseconds Make Millions, 2020), an LCP improvement from 2 to 1 second can increase conversions by 8 to 25% depending on sector. This kind of projection, even approximate, turns the audit into a sales argument.
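That kind of projection is simple arithmetic once the inputs are agreed with the client. In the sketch below, every input (traffic, conversion rate, basket, uplift) is an assumption to validate together, not a measurement.

```python
# Back-of-envelope revenue projection for a conversion uplift.
# All inputs are assumptions to agree with the client beforehand.
def projected_gain(monthly_sessions: int, conversion_rate: float,
                   avg_order_value: float, uplift: float) -> float:
    """Extra monthly revenue if conversions improve by `uplift` (relative)."""
    current_revenue = monthly_sessions * conversion_rate * avg_order_value
    return current_revenue * uplift

# 20,000 sessions, 2% conversion, 80 € average basket, +10% relative uplift:
gain = projected_gain(20_000, 0.02, 80.0, 0.10)
print(f"{gain:.0f} € / month")  # 3200 € / month
```

Even presented as a range rather than a point estimate, this figure gives the client something to weigh the fix plan's cost against.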

Propose an action plan spread over time

A good report does not say "fix these 40 problems". It proposes phasing: quick wins in the first two weeks, medium fixes in the month, foundation work over the quarter. This phasing lets the client budget, plan, and measure progress. It is also the natural basis to turn an audit into a recurring maintenance contract, since foundation work becomes scoped engagements.

When and how often to audit a site

Performance is not a fixed state. A site that loads fast today can degrade within six months as content accumulates, plugins are added, and the hosting changes.

A complete audit at least twice a year

For a stable site, two complete audits per year are enough. For a site evolving fast, publishing regularly, or with significant traffic, a quarterly audit is the norm. As part of a maintenance contract, a monthly mini-audit lets you quickly detect degradations.

Continuous monitoring as a complement

A one-off audit captures the site's state at a single point in time. Continuous monitoring, via automatic checks, detects changes as soon as they happen: a plugin breaking a page, a host slowing down, an SSL certificate expiring. Orilyt plans starting at €39/month (Solo) include automatic weekly monitoring with SSL, uptime, and score alerts, plus full audit history. Details on the Orilyt pricing page.

Mandatory audit before a redesign

Before launching a site redesign, a baseline audit is essential. It provides the comparison base that will let you, six months later, prove to the client that the redesign actually improved performance, or on the contrary identify a regression to fix before going live.

What a performance audit does not do

Clarifying the limits of an audit avoids misunderstandings and impossible promises.

An audit is not a ranking guarantee

No audit, no score, no tool guarantees better Google ranking. Core Web Vitals are one signal among hundreds. A site with perfect performance but weak content will not rank. A slow site with strong authority can stay well ranked. The audit improves chances, not certainties.

An audit is not a fix

An audit diagnoses. Technical fixes are the job of the developer, who has access to the code and the hosting. Positioning an audit as a "magic solution" creates expectations no tool can meet. Orilyt is explicit on this point: the platform produces the diagnosis; the fix falls to your technical team or the client's.

An audit is not frozen in time

Google's thresholds evolve. INP replaced FID in 2024. Best practices from five years ago (jQuery everywhere, PNG images) are now anti-patterns. A 2023 audit reread in 2026 inevitably contains dated recommendations.

The real value of a website performance audit lies in the method it applies, the decisions it facilitates, and the fixes it triggers. A tool like Orilyt industrializes the measurement and reporting parts so your energy goes into analysis and client relationship, not into compiling PageSpeed screenshots.

For freelancers and agencies, mastering this method opens two doors: the free commercial diagnosis that triggers engagements, and the recurring maintenance contract that secures monthly revenue. In both cases, audit quality drives credibility and conversion.

Running a first audit takes two minutes. The longer part is deciding, with your client, where to start.

Audit any site's performance in 2 minutes
No credit card, no install, instant white-label report, five analysis categories covered in one pass.
Run a free audit

Your most frequent questions

How long does a complete performance audit take?

An automated audit with Orilyt takes two minutes between entering the URL and generating the report. Human analysis of results, prioritization of recommendations, and preparation of client reporting then take between thirty minutes for a simple site and two hours for an e-commerce or high-traffic site with several critical pages to examine.

What is the difference between a performance audit and an SEO audit?

A performance audit focuses on speed, stability, and loading experience. An SEO audit additionally covers indexation, semantic structure, internal linking, structured data, and content quality. Both overlap on Core Web Vitals and technical SEO, but a complete SEO audit goes well beyond pure performance.

Can a site be audited without admin access?

Yes, and that is even a major prospecting advantage. Orilyt audits in read-only mode: a public URL is enough. No plugin to install, no password to ask the client, no risk of breaking the site. This approach allows auditing a prospect even before the first commercial meeting.

Does a performance audit apply to non-WordPress sites?

Absolutely. The majority of performance tests are universal: Core Web Vitals, page weight, compression, browser cache, image formats, TTFB. They work on Shopify, Webflow, a static HTML site, a Laravel or Symfony application. Some WordPress-specific checks activate only when the CMS is detected automatically.

How much does a tool like Orilyt cost for professional use?

Orilyt offers a free trial without credit card, and paid plans from €39/month (Solo) to €249/month (Business): unlimited audits, white-label reports (technical + client), automatic monitoring with alerts, AI-assisted prospecting, and REST API on higher tiers. Full details on orilyt.com/pricing.

Sources and references

  • Google web.dev, Core Web Vitals — official documentation of LCP, INP, CLS metrics and their thresholds.
  • Google Developers, PageSpeed Insights — how the lab + field reference tool works.
  • Google Chrome UX Report — field data source used by Search Console and PageSpeed.
  • HTTP Archive, Web Almanac — annual state of the art on global web performance.
  • Cloudflare Blog, performance and cache — resources on edge cache and server optimization.
  • MDN Web Docs, performance optimization — reference technical documentation on optimization techniques.