Why fast ecommerce sites still feel slow
Most performance work targets page load time.
That's rarely what users perceive.
Perceived speed is not load speed. It's decision speed.
A product page has three latency layers. Teams optimise the first because it's measurable. Users experience the third because it's where the buying decision happens. The gap between those two realities explains why stores improve Lighthouse scores and see no revenue change.
Layer 1 — Network latency
Assets, scripts, images. This is what Lighthouse measures. Important — but mostly solved by CDNs and modern frameworks. A well-configured Cloudflare setup with lazy loading and WebP images handles this adequately for most stores.
The common mistake here isn't ignoring network performance. It's over-investing in it. Teams spend weeks shaving 200ms from LCP when the actual bottleneck is elsewhere.
Layer 2 — Interaction latency
Variant changes, gallery swaps, cart actions, size selections.
This is where most "fast" stores fail.
You click a size. Nothing happens for 400ms. The page loaded in 1.2 seconds, but the decision stalled because the variant change triggered a full inventory API call, a price recalculation, and a re-render of the availability badge.
The causes are consistent across platforms. Blocking JS hydration — where the entire component tree re-renders for a single state change. Variant recomputation that hits the server instead of pre-loading combinations client-side. Inventory API calls that block the UI thread instead of resolving optimistically.
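A minimal sketch of the client-side approach, in TypeScript. The `variant-data` script tag, the key format and the `/api/inventory/` endpoint are placeholders invented for this example, not any platform's actual API; the point is that a variant change reads from data already on the page, and the inventory call reconciles in the background instead of blocking the render.

```ts
// Sketch: pre-load every variant combination, switch instantly,
// and reconcile stock in the background rather than blocking the UI.
type Variant = { id: string; price: number; inStock: boolean };

// Assumes the page embeds an array of [key, variant] pairs, e.g.
// [["red|m", {"id": "123", "price": 4999, "inStock": true}], ...]
const variants = new Map<string, Variant>(
  JSON.parse(document.getElementById("variant-data")!.textContent!)
);

function selectVariant(key: string, render: (v: Variant) => void): void {
  const variant = variants.get(key);
  if (!variant) return;

  render(variant); // optimistic: price and availability update immediately

  // Background check; only touch the UI again if the embedded data was stale.
  fetch(`/api/inventory/${variant.id}`)
    .then((res) => res.json())
    .then(({ inStock }: { inStock: boolean }) => {
      if (inStock !== variant.inStock) render({ ...variant, inStock });
    })
    .catch(() => {
      // keep the optimistic state; a failed check shouldn't stall the decision
    });
}
```

Roughly, a page with a few dozen variants ships a couple of kilobytes of extra JSON. That trade is usually worth it when the alternative is a server round trip on every click.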
In WooCommerce, the typical product page fires 12-38 database queries per variant change if AJAX cart is enabled with stock management. Shopify handles this better natively, but apps layer their own API calls on top, creating cumulative interaction latency that doesn't appear in any performance test.
The page loaded fast, but the decision stalled.
Layer 3 — Cognitive latency
The time required to confirm: "This is the right product and safe to buy."
This dominates conversion. And it's almost never measured.
Cognitive latency is affected by inconsistent product data — where the description says one thing and the specifications say another. Missing comparison context — the user can't determine if this is the right variant without leaving the page. Unclear availability — "in stock" on the product page, "2-3 weeks" at checkout. Conflicting shipping signals — free shipping in the header, £4.99 at cart, calculated at checkout.
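These conflicts can be caught before a user ever sees them. A toy sketch of a consistency check follows; the `ProductSignals` shape is invented for this example, and in practice you would populate it from wherever the product page, cart and checkout actually source their copy.

```ts
// Sketch: surface conflicting signals in a test or audit script,
// rather than leaving the user to reconcile them at checkout.
interface ProductSignals {
  pdpAvailability: string;   // e.g. "In stock"
  checkoutLeadTime: string;  // e.g. "Dispatched in 2-3 weeks"
  headerShipping: string;    // e.g. "Free shipping on all orders"
  cartShipping: string;      // e.g. "£4.99"
}

function findConflicts(p: ProductSignals): string[] {
  const conflicts: string[] = [];
  if (/in stock/i.test(p.pdpAvailability) && /week/i.test(p.checkoutLeadTime)) {
    conflicts.push(
      `Availability: "${p.pdpAvailability}" on the page vs "${p.checkoutLeadTime}" at checkout`
    );
  }
  if (/free/i.test(p.headerShipping) && !/free/i.test(p.cartShipping)) {
    conflicts.push(
      `Shipping: "${p.headerShipping}" in the header vs "${p.cartShipping}" at cart`
    );
  }
  return conflicts;
}
```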
Each inconsistency doesn't just create confusion. It creates a micro-decision. And each micro-decision adds latency to the purchase.
The mechanism is cumulative. A user can tolerate one ambiguity. Three ambiguities create enough uncertainty to trigger comparison shopping. Five ambiguities trigger abandonment.
Why performance optimisation misses the point
Teams optimise layer 1 because the tools measure it. Lighthouse and WebPageTest report network and rendering performance; CrUX adds field data, and its INP metric does capture interaction latency from real users, but only as a page-level aggregate. None of it tells you which interaction stalled, and none of it captures the cognitive latency created by inconsistent information architecture.
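Interaction latency is at least measurable if you instrument it yourself. A minimal sketch using the browser's Event Timing API (the same data CrUX aggregates into INP), assuming a browser that supports it; the 200ms threshold and the console logging are placeholders for whatever your analytics pipeline expects.

```ts
// Sketch: log slow interactions (clicks, key presses, taps) with per-element detail.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    // duration covers input delay + handler time + time to next paint, in ms
    const el = entry.target instanceof Element ? entry.target : null;
    console.log(entry.name, Math.round(entry.duration), el?.tagName, el?.id);
  }
});

// Only report interactions slower than 200ms (the spec minimum threshold is 16ms).
observer.observe({ type: "event", durationThreshold: 200, buffered: true });
```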
So stores improve page speed scores and see no revenue change. They accelerated delivery, not decisions.
In most audits, removing a blocking review widget improves revenue more than halving load time — because it removes uncertainty, not milliseconds. Consolidating three different "delivery information" elements into one definitive statement does more for conversion than lazy-loading every image below the fold.
Performance optimisation only works when tied to decision confidence. Otherwise you're optimising transport, not outcomes.