A Practical Guide to Core Web Vitals
What LCP, CLS, and INP mean — and how to improve them for better SEO and user experience.
If you run a website, you have probably seen the term "Core Web Vitals" in Google Search Console or a Lighthouse report. Maybe you clicked through, saw a wall of acronyms, and closed the tab. Fair enough. This guide will break down each metric in plain language, explain why it matters, and give you concrete steps to improve your scores.
What Are Core Web Vitals?
Core Web Vitals are a set of metrics defined by Google that measure real-world user experience on a web page. They focus on three aspects of the experience that users actually notice: loading speed, visual stability, and interactivity. Since 2021, Google has used these metrics as a ranking signal in search results.
The three core metrics are:
- LCP (Largest Contentful Paint) — loading performance
- CLS (Cumulative Layout Shift) — visual stability
- INP (Interaction to Next Paint) — responsiveness
There are also two supplemental metrics worth tracking:
- FCP (First Contentful Paint) — perceived load speed
- TTFB (Time to First Byte) — server responsiveness
Together, these five metrics give you a complete picture of how your site feels to the people using it. Let's walk through each one.
LCP — Largest Contentful Paint
LCP measures how long it takes for the largest visible element on the page to finish rendering. This is usually a hero image, a heading block, or a large chunk of text. It is the closest proxy to "when does the page look loaded?" from the user's perspective.
Target: under 2.5 seconds.
Common Causes of Poor LCP
- Unoptimized or oversized hero images
- Render-blocking CSS and JavaScript files in the <head>
- Slow server response time (high TTFB)
- Client-side rendering that delays content display
How to Fix It
- Serve images in modern formats like WebP or AVIF and resize them to the displayed dimensions
- Preload your LCP image with <link rel="preload">
- Inline critical CSS or load it asynchronously
- Defer non-essential JavaScript with defer or async
- Use a CDN to reduce latency for static assets
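The fixes above can be sketched as an HTML fragment. File names and dimensions here are illustrative, not from any real site:

```html
<head>
  <!-- Fetch the LCP image early, at high priority -->
  <link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">

  <!-- Defer non-essential JavaScript so it does not block rendering -->
  <script src="/js/app.js" defer></script>
</head>
<body>
  <!-- Modern format, resized to the dimensions it is displayed at -->
  <img src="/img/hero.webp" width="1200" height="600" alt="Product hero" fetchpriority="high">
</body>
```

The fetchpriority="high" hint tells the browser this image matters more than other resources it discovers at the same time.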
CLS — Cumulative Layout Shift
CLS measures how much the page layout moves around while the user is viewing or interacting with it. You know the experience: you are about to tap a button and the page suddenly shifts because a banner loaded above it. CLS quantifies that frustration.
Target: under 0.1.
Common Causes of Poor CLS
- Images and embeds without explicit width and height attributes
- Ads, banners, or cookie notices that inject content at the top of the page after load
- Web fonts whose metrics differ from the fallback font, shifting text when the swap happens (FOUT)
- Dynamically injected content above existing content
How to Fix It
- Always set explicit width and height on images, videos, and iframes so the browser reserves space
- Reserve space for ad slots and dynamic content using CSS min-height
- Use font-display: swap or font-display: optional and preload key fonts
- Avoid inserting content above existing content unless the user triggered the action
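Here is a sketch of those fixes in one fragment. The class name, font file, and ad-slot height are illustrative:

```html
<head>
  <!-- Preload the key font so the swap happens as early as possible -->
  <link rel="preload" as="font" href="/fonts/body.woff2" type="font/woff2" crossorigin>
  <style>
    @font-face {
      font-family: "Body";
      src: url("/fonts/body.woff2") format("woff2");
      font-display: swap; /* or `optional` to avoid the swap entirely */
    }
    /* Reserve space for the ad slot so it cannot push content down */
    .ad-slot { min-height: 250px; }
  </style>
</head>
<body>
  <!-- Explicit dimensions let the browser reserve space before the image loads -->
  <img src="/img/chart.png" width="800" height="450" alt="Traffic chart">
  <div class="ad-slot"></div>
</body>
```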
INP — Interaction to Next Paint
INP replaced First Input Delay (FID) as a Core Web Vital in March 2024. While FID only measured the delay of the very first interaction, INP tracks all interactions throughout the page session — clicks, taps, and keyboard input — and reports the worst one (roughly the p98). This makes it a much better reflection of how responsive your site actually feels.
Target: under 200 milliseconds.
Common Causes of Poor INP
- Long JavaScript tasks (over 50ms) that block the main thread
- Heavy event handlers that do too much work synchronously
- Excessive DOM size that slows down rendering after an interaction
- Third-party scripts competing for main-thread time
How to Fix It
- Break long tasks into smaller chunks, yielding back to the main thread with setTimeout or scheduler.yield()
- Debounce or throttle expensive event handlers
- Reduce DOM size — fewer nodes means faster rendering after interactions
- Audit third-party scripts and remove or defer anything non-essential
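The first fix can be sketched in a few lines. scheduler.yield() is still new, so this falls back to setTimeout where it is unavailable; the helper names are illustrative:

```javascript
// Yield control back to the main thread so the browser can paint
// and handle pending input between chunks of work.
function yieldToMain() {
  if (globalThis.scheduler && typeof globalThis.scheduler.yield === "function") {
    return globalThis.scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in small chunks instead of one long task.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      handleItem(item);
    }
    await yieldToMain(); // keep each task well under the 50ms budget
  }
}
```

Each chunk becomes its own short task, so an interaction that arrives mid-way only waits for the current chunk rather than the whole job.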
FCP — First Contentful Paint
FCP measures the time from when the page starts loading to when the first piece of content is rendered on screen — any text, image, or canvas element. It is the user's first visual confirmation that the page is actually loading. FCP is not one of the three Core Web Vitals, but it is a useful diagnostic because a slow FCP almost always leads to a slow LCP.
Target: under 1.8 seconds.
The fixes for FCP overlap heavily with LCP: reduce server response time, eliminate render-blocking resources, and inline critical CSS. If your TTFB is high, FCP will be high too — the browser cannot paint anything until it receives the first bytes of HTML.
TTFB — Time to First Byte
TTFB measures the time between the browser sending an HTTP request and receiving the first byte of the response. It captures DNS lookup, connection setup, TLS negotiation, and server processing time. While TTFB is not a Core Web Vital, it is the foundation that every other metric builds on. A slow TTFB puts a floor under FCP, LCP, and everything else.
Target: under 800 milliseconds.
How to Fix It
- Use a CDN to serve content from edge locations close to your users
- Optimize server-side processing — database queries, API calls, template rendering
- Implement page-level caching where possible (full-page cache, reverse proxy, stale-while-revalidate)
- Ensure your hosting has adequate resources and is not overloaded
- Use HTTP/2 or HTTP/3 to reduce connection overhead
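As one example of page-level caching, a reverse-proxy cache with stale-while-revalidate behavior can be sketched in nginx. The cache path, zone name, and upstream port here are illustrative:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=pagecache:10m max_size=1g inactive=10m;

server {
  location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache pagecache;
    proxy_cache_valid 200 5m;
    # Serve a stale copy while a fresh one is fetched in the background
    proxy_cache_use_stale updating error timeout;
    proxy_cache_background_update on;
    add_header X-Cache-Status $upstream_cache_status;
  }
}
```

With this in place, most requests are answered from the edge of your stack without touching the application server at all, which is usually the single biggest TTFB win.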
How to Measure Core Web Vitals
There are two fundamentally different ways to measure these metrics, and it is important to understand the distinction.
Lab Data
Tools like Lighthouse (including the lab section of a PageSpeed Insights report) run a synthetic test in a controlled environment: a simulated device on a simulated connection. Lab data is useful for debugging specific issues and testing changes before deploying, but it does not reflect what your actual users experience. A Lighthouse score of 100 does not guarantee good field performance if your real users are on slow connections or older devices.
Field Data (Real User Monitoring)
Field data comes from real users loading your real pages on their actual devices and networks. This is what Google uses for ranking decisions — specifically, the 75th percentile (p75) of each metric across your users. Field data is the ground truth.
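To make the p75 concrete, here is a minimal nearest-rank percentile over a set of hypothetical LCP field samples. This is a sketch of the idea, not how any particular tool computes it:

```javascript
// Nearest-rank percentile: the smallest sample with at least p% of
// all samples at or below it.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Hypothetical LCP samples from eight page views, in milliseconds.
const lcpSamples = [1500, 1800, 2000, 2100, 2200, 2400, 2600, 3900];
const p75 = percentile(lcpSamples, 75); // 2400 — just under the 2500ms target
```

Reading it the other way around: three in four page views loaded at least this fast, and it is that p75 value that gets compared against the thresholds above.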
The best approach is to use both. Lab tools help you find and fix problems. Field data tells you whether those fixes actually moved the needle for your users.
How Abner Tracks Web Vitals
Abner collects all five metrics — LCP, CLS, INP, FCP, and TTFB — automatically from real users visiting your site. There is no extra configuration needed. If you have the standard Abner script tag installed, web vitals data is already being collected.
In your dashboard, the Web Vitals panel shows distribution charts for each metric along with the p75 value, so you can see exactly where you stand relative to Google's thresholds. You can filter by page, device type, and date range to pinpoint which pages or user segments need attention.
Because Abner uses real user monitoring (RUM), the data reflects actual experience rather than synthetic lab conditions. This is the same type of data that feeds into the Chrome User Experience Report (CrUX) and informs Google's ranking decisions. For details on what is collected and how, see the Web Vitals documentation. If you have not set up Abner yet, the installation guide takes about two minutes.
Quick Wins for Improving Your Scores
If you are looking for the highest-impact changes you can make right now, start with this list:
- Optimize images. Convert to WebP, resize to the actual display dimensions, add explicit width and height attributes, and lazy-load anything below the fold.
- Minimize render-blocking resources. Inline critical CSS, defer non-critical CSS with media queries or JavaScript, and add defer to script tags that do not need to execute immediately.
- Use a CDN. Serving assets from edge locations close to your users reduces TTFB and improves every downstream metric.
- Preload critical assets. Use <link rel="preload"> for your LCP image, key fonts, and any resources the browser would otherwise discover late.
- Avoid layout shifts from dynamic content. Reserve space for ads, embeds, and async content. Never inject elements above existing content unless the user explicitly triggered it.
- Break up long JavaScript tasks. If you have event handlers or initialization code that takes more than 50ms, split them into smaller chunks to keep the main thread responsive.
- Audit third-party scripts. Each third-party script competes for network and main-thread time. Remove anything you are not actively using and defer the rest.
You do not need to do everything at once. Pick the metric that is furthest from its target, apply the relevant fixes, deploy, and then check your field data in Abner after a few days to see the improvement.
The SEO Impact
Google has confirmed that Core Web Vitals are a ranking signal. That said, it is important to have realistic expectations. Content relevance, backlinks, and search intent are still the dominant ranking factors. Good Core Web Vitals will not make thin content rank on the first page.
Where CWV makes a real difference is as a tiebreaker. When two pages have similar content quality and authority, the one with better user experience metrics has an edge. For competitive queries where multiple pages are roughly equal, that edge matters.
Beyond rankings, better web vitals directly improve user experience. Faster pages have lower bounce rates, higher engagement, and better conversion rates. Even if Google never used CWV as a ranking factor, the performance work would still be worth doing.
Start Tracking Today
You cannot improve what you do not measure. If you are not already tracking Core Web Vitals from real users, you are optimizing blind. Abner makes it simple: add one script tag and field data starts flowing into your dashboard automatically.
Start your free 14-day trial and see how your site performs for real users.