
Core Web Vitals: The Definitive Guide

Now that we’ve cleared that up, how do Core Web Vitals factor into Google’s ranking algorithms?

Currently, Google evaluates a website’s user experience by determining if it:

  • is mobile-friendly
  • provides safe browsing
  • is served over HTTPS
  • has no intrusive interstitials

Now, in addition to these four signals, they’re adding a fifth: Core Web Vitals, which will join the group of signals that Google uses to rate “Page Experience,” as seen in this handy diagram:

It’s important to remember that Core Web Vitals—both field data and lab data metrics—will be a moving target: they’re a proxy for evaluating user experience, and since the web and its users are continually changing, Core Web Vitals will inevitably change as well.

Why this guide on Core Web Vitals?

There’s a lot of confusion out there about Core Web Vitals, and the right information is dispersed everywhere; even Google doesn’t keep all of it in one place. We hope this guide serves as an evergreen resource with accurate information about Core Web Vitals, and points you in the right direction if you want to learn more.

What are Core Web Vitals?

While we can already hear you reciting the popular Top Gun quote “I feel the need… the need for speed!” in your mind, the Core Web Vitals metrics are about far more than speed.

Core Web Vitals are a series of user-facing metrics for speed, responsiveness, and visual consistency that site owners may use to assess their users’ online experience.

Core Web Vitals and non-Core Web Vitals are the two types of Web Vitals metrics.

The Core Web Vitals are:

  • Largest Contentful Paint (LCP)
  • First Input Delay (FID)
  • Cumulative Layout Shift (CLS)

And the non-Core Web Vitals are:

  • Total Blocking Time (TBT)
  • First Contentful Paint (FCP)
  • Speed Index (SI)
  • Time to Interactive (TTI)

Each metric assesses the “quality” of a specific aspect of the Page Experience. The diagram below depicts how a page loads and where the various metrics are used:

But, before we go through all of the Web Vitals, let’s go over why they’re relevant. When you’re pitching management or a client about enhancing a site’s Core Web Vitals, we want you to have all the ammunition you need to make a strong argument.

Why should you care about Core Web Vitals?

Here are the three main reasons you (and everyone) should care about Core Web Vitals:

  1. Visitors prefer fast, easy-to-use websites that work on any device, anywhere. The bottom line is that if you provide a better user experience, you’ll make more money.
  2. Google has announced that Core Web Vitals will become a ranking factor starting in mid-June 2021, as we described in the introduction. Although we don’t expect much of a change in rankings by June, we anticipate that the signal’s significance will grow over time.
  3. Since passing the Core Web Vitals assessment means you’re delivering a strong user experience, it’s likely to result in fewer users bouncing back to the SERP—and Google has hinted that they may start showing a “Good Page Experience” badge in their search results. These are referred to as “indirect ranking factors” because they influence searcher behaviour (e.g., more clicks for pages with this badge), which is then fed back into Google’s algorithms. JR Oakes was poking around in the front-end code of Google Search Console in February 2021 when he found that Google had made some preparations in this area:

Core Web Vitals in detail

Let’s dive into each of the Web Vitals metrics, beginning with the Core Web Vitals, without further ado!

Largest Contentful Paint (LCP)

Largest Contentful Paint (LCP) is a Core Web Vital that measures the time between when the page starts loading and when the largest text block or image element is rendered on the screen.

Its aim is to determine when the page’s main content has completed loading. The LCP should be as low as possible. Since it is a metric that calculates perceived load speed, a fast LCP reassures users that a page is useful. LCP can be found in both field and lab data.

The browser stops reporting new LCP candidates as soon as the user interacts with the page (via tapping, scrolling, a keypress, switching tabs, or closing the tab). In the lab, it’s not completely obvious when the LCP is final; it generally settles as the page approaches Time to Interactive (TTI), at which point the final LCP candidate becomes clear.

Important considerations

The largest text block or image element on a page can change while it is loading, and the last reported candidate is the one used to calculate the LCP.

Consider the case where an H1 heading is the largest text block at first, but a larger image is loaded later. When it comes to determining the LCP, the larger image is the most likely candidate.

Please note that <svg> elements are not eligible for the Largest Contentful Paint at this time. As a result, if you use an <svg> element to load a large logo, it won’t be considered an LCP candidate. This decision was made to keep things straightforward, but it could change in the future.

We’ve reached out to Google for clarity on whether <video> elements are actually considered LCP candidates.

How to interpret your LCP score

Here’s how to interpret your LCP score:

  • Good: <= 2.5s (2.5 seconds or less)
  • Needs improvement: > 2.5s and <= 4s (between 2.5 and 4 seconds)
  • Poor: > 4s (more than 4 seconds)
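These rating buckets are simple to express in code. Here’s a minimal sketch (the function name is ours; the thresholds come from the lists in this guide, which use the same good/needs improvement/poor pattern for every metric):

```javascript
// Classify a metric value into "good" / "needs improvement" / "poor".
// The "good" boundary is inclusive, matching e.g. LCP: good <= 2.5s, poor > 4s.
function rateMetric(value, goodMax, poorMin) {
  if (value <= goodMax) return "good";
  if (value > poorMin) return "poor";
  return "needs improvement";
}

// LCP examples, measured in seconds:
console.log(rateMetric(2.1, 2.5, 4)); // "good"
console.log(rateMetric(3.0, 2.5, 4)); // "needs improvement"
console.log(rateMetric(4.2, 2.5, 4)); // "poor"
```

The same function works for FID (100ms/300ms) and CLS (0.1/0.25) by swapping in those thresholds.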

What could be causing a poor LCP score

A poor LCP score may be caused by a variety of factors, including slow server response times, render-blocking JavaScript and CSS, and largest-content resources that are too big and take too long to load.

Improving your LCP score

You can improve your LCP score by optimising your critical rendering path, CSS, and images, for example. Describing all of these is beyond the scope of this article; instead, we suggest that you look at web.dev’s resources on improving LCP.

First Input Delay (FID)

FID is a Core Web Vital that measures the time in milliseconds between when a user interacts with your site for the first time (i.e. when they click a link, tap a button, or press a key) and when the browser is able to respond to that interaction.

A user’s first experience of your site’s interactivity and responsiveness is based on the FID. Make a good first impression!

As the name implies, this metric depends on user interaction, so it can only be measured in the field. As a result, FID is only available in field data. The FID should be as low as possible.

In lab settings, the Total Blocking Time metric is used instead because it correlates closely with First Input Delay.

Important considerations

The FID does not account for the time it takes the browser to process an event (such as a button click) or to update the user interface afterward.

Scrolling and zooming aren’t counted as interactions because they’re continuous in nature and have very different performance constraints: scrolling is typically handled by the GPU on a separate thread, rather than by the CPU’s main thread.
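In terms of the browser’s Event Timing API, FID is just the gap between when the input occurred and when its handler could start running; the processing itself is excluded, as noted above. Here’s a sketch with a hand-built entry object (real entries come from a PerformanceObserver in the browser, with these same field names):

```javascript
// FID = processingStart - startTime: the delay before the browser could
// begin handling the first input. Handler execution time
// (processingEnd - processingStart) is deliberately NOT part of FID.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// A hand-built example mimicking a PerformanceEventTiming record:
const entry = {
  startTime: 1000,       // user tapped at t = 1000ms
  processingStart: 1120, // handler could only start at t = 1120ms
  processingEnd: 1180,   // handler finished at t = 1180ms
};
console.log(firstInputDelay(entry)); // 120 (ms)
```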

How to interpret your FID score

Here’s how to interpret your FID score:

  • Good: <= 100ms
  • Needs improvement: > 100ms and <= 300ms
  • Poor: > 300ms

What could be causing a poor FID score

A common cause of a poor FID score is a main thread that is busy parsing and executing JavaScript; while the main thread is busy, it can’t respond to a user’s interaction.

Improving your FID score

If you want to improve your FID score, you should investigate what is preventing the browser from being responsive. You can boost your FID score by:

  • Reducing JavaScript execution time.
  • Minimising main-thread work.
  • Reducing the impact of third-party code.

A detailed description of how to improve your FID score is beyond the scope of this article, so we suggest consulting web.dev’s resources on improving FID.

Cumulative Layout Shift (CLS)

CLS (Cumulative Layout Shift) is a Core Web Vital that calculates the total score of all unintended layout changes within the viewport that occur over the course of a page’s lifecycle.

Its aim is to assess a page’s “visual stability,” which has a significant impact on the user experience. CLS can be found in both field and lab data. The higher the visual stability, the lower the CLS score.

CLS, unlike most other metrics, is not measured in seconds. It takes the viewport size as a starting point, looks at elements that move between two frames (known as unstable elements), and measures their movement within the viewport. The “impact fraction” and the “distance fraction” are the two components that make up the layout shift score.

The “impact fraction” is the area of the viewport taken up by the unstable element across both frames:

The “distance fraction” is the greatest distance travelled by the unstable element between the two frames, divided by the viewport’s largest dimension (width or height):

A few worked examples make it easier to understand how CLS is measured.
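Combining the two fractions: each layout shift scores impact fraction times distance fraction, and CLS sums the scores of all unexpected shifts. Here’s a simplified sketch with hypothetical numbers (this is the plain-sum definition; Google has been refining how shifts on long-lived pages are grouped):

```javascript
// One layout shift's score = impact fraction * distance fraction.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// CLS, in its simplest form, sums the scores of all unexpected shifts.
function cumulativeLayoutShift(shifts) {
  return shifts.reduce(
    (sum, s) => sum + layoutShiftScore(s.impact, s.distance),
    0
  );
}

// Hypothetical shifts: an element covering 50% of the viewport across both
// frames moves by 25% of the viewport's largest dimension (0.5 * 0.25 =
// 0.125), plus a smaller shift (0.2 * 0.1 = 0.02).
const shifts = [
  { impact: 0.5, distance: 0.25 },
  { impact: 0.2, distance: 0.1 },
];
console.log(cumulativeLayoutShift(shifts)); // ≈ 0.145 → "needs improvement"
```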

Important considerations

The term “a page’s entire lifecycle” matters because a page can stay open for days, if not weeks, and CLS keeps accumulating over that whole period. Since lab tools only collect data for a short window, this is one place where CLS field data and lab data can diverge.

It can be difficult to catch unexpected layout shifts in test environments because certain features may be disabled or behave differently there. Cookie notices may be turned off, live chat widgets may not load, and personalised content may be missing, to name a few examples.

How to interpret your CLS score

Here’s how to interpret your CLS score:

  • Good: <= 0.1
  • Needs improvement: > 0.1 and <= 0.25
  • Poor: > 0.25

What could be causing a poor CLS score

Unexpected layout shifts are commonly caused by images or ads without known dimensions, asynchronously loaded resources, and new DOM elements being dynamically added above previously loaded content, pushing content that has already rendered out of the way.

Improving your CLS score

You can avoid unintended layout shifts by setting width and height attributes on your images and videos, and by not inserting content above already-loaded content. To learn about the full range of changes you can make, we suggest reading web.dev’s article on optimising CLS.

Total Blocking Time (TBT)

Total Blocking Time (TBT) is a non-Core Web Vital that calculates the total time in milliseconds between First Contentful Paint (FCP) and Time To Interactive (TTI) during which the main thread was blocked for long enough to be unresponsive to user input.

TBT correlates strongly with First Input Delay (FID), so it’s the best option for testing in a lab setting where real-world user interaction isn’t feasible. TBT can be collected in the field, but it is easily skewed by user interaction, so it isn’t a reliable field metric for determining how long it takes a page to become responsive to user input. As a result, TBT is only used in lab data.

Any task that takes longer than 50 milliseconds to complete is considered a long task, and the time above 50 milliseconds is referred to as “blocking time.” TBT is determined by adding up the blocking portions of all long tasks. For example, with three long tasks:

  • Task A takes 75 milliseconds (25ms over 50ms)
  • Task B takes 60 milliseconds (10ms over 50ms)
  • Task C takes 85 milliseconds (35ms over 50ms)

The TBT is then 70ms (25 + 10 + 35). The TBT should be as low as possible.
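This arithmetic is easy to capture in code. A minimal sketch reproducing the example:

```javascript
// TBT sums the portion of each long task (> 50ms) that exceeds the 50ms
// budget. Tasks at or under 50ms contribute nothing.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (sum, duration) => sum + Math.max(0, duration - 50),
    0
  );
}

// Tasks A, B, and C from the example above: 75, 60, and 85 ms.
console.log(totalBlockingTime([75, 60, 85])); // 70 (25 + 10 + 35)
```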

How to interpret your TBT score

Here’s how to interpret your TBT score:

  • Good: <= 300ms
  • Needs improvement: > 300ms and <= 600ms
  • Poor: > 600ms

What could be causing a poor TBT score and how to improve it

The causes of a poor TBT score largely overlap with those of a poor FID score, so the First Input Delay section above and web.dev’s guidance on improving TBT cover them in detail.

First Contentful Paint (FCP)

First Contentful Paint (FCP) is a non-Core Web Vital that measures the time between when a page starts loading and when any part of its content is rendered on the screen. A fast FCP reassures users that something is happening. Text, images (including background images), <svg> elements, and non-white <canvas> elements all count as content in this context.

The lower the FCP, the better. FCP is available in both field and lab data.

How to interpret your FCP score

Here’s how to interpret your FCP score:

  • Good: < 1s
  • Needs improvement: >= 1s and < 3s
  • Poor: >= 3s

What could be causing a poor FCP score

High server response times and render-blocking resources are two common causes of a low FCP score.

Improving your FCP score

You can improve your FCP score by eliminating render-blocking resources, removing unused CSS, minifying CSS, and using a CDN, among other things.

The subject of improving your FCP score is worthy of its own post. Until we write one, we strongly advise you to check out web.dev’s resources on improving FCP.

Speed Index (SI)

The Speed Index (SI) is a non-Core Web Vital that measures how quickly a page’s contents are visually populated during page load. It’s determined by analysing your page’s load behaviour frame by frame and measuring the visual progression between frames every 100ms.
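Conceptually, Speed Index rewards pages that look complete sooner: it accumulates the visually incomplete share of the page across each sampling interval. Here’s a rough sketch with made-up completeness samples (real implementations compare actual video frames, so treat this as an illustration of the idea only):

```javascript
// Simplified Speed Index: accumulate the visually *incomplete* percentage
// of the page at each sampling interval. Lower is better: a page that
// looks complete early accumulates little incompleteness.
function speedIndex(completenessPercents, intervalMs) {
  const incompleteArea = completenessPercents.reduce(
    (sum, pct) => sum + (100 - pct) * intervalMs,
    0
  );
  return incompleteArea / 100; // back to milliseconds
}

// Hypothetical samples every 100ms: 0% → 40% → 80% → 100% visually complete.
console.log(speedIndex([0, 40, 80, 100], 100)); // 180 (ms)
```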

SI is only available in lab data.

How to interpret your SI score

Here’s how to interpret your SI score:

  • Good: <= 4.3s
  • Needs improvement: > 4.3s and <= 5.8s
  • Poor: > 5.8s

What could be causing a poor SI score

Anything that slows down the page’s load will hurt your SI score. Some of the causes listed for the other metrics, such as the main thread being blocked, apply here as well.

Improving your SI score

If you concentrate on improving overall page load efficiency, your SI score will improve as well. Check out web.dev for more information.

Time to Interactive (TTI)

Time to Interactive (TTI) is a non-Core Web Vital that tracks how long it takes for a page to load and become fully interactive.

To be fully interactive, a page must:

  • Display useful content (measured by First Contentful Paint).
  • Have event handlers registered for most of the page’s visible elements.
  • Respond to user interactions within 50 milliseconds.

Although measuring TTI in the field is possible, it is not recommended, since user interaction can have a significant impact on your page’s TTI. As a result, TTI is only reported in lab data.

How to interpret your TTI score

Here’s how to interpret your TTI score:

  • Good: <= 3.8s
  • Needs improvement: > 3.8s and <= 7.3s
  • Poor: > 7.3s

What could be causing a poor TTI score

Many of the factors that contribute to poor scores in the other metrics we discussed apply to TTI as well, since it is a metric that incorporates those other metrics.

Improving your TTI score

Check out web.dev’s resources on improving TTI for more information about how to boost your TTI.

Comparing apples to apples

It’s critical to compare apples to apples when it comes to Core Web Vitals data. That’s why the distinctions between field data and lab data, as well as mobile data and desktop data, need to be clearly understood.

Field data vs. lab data

Again, there are two types of data in Web Vitals: field data and laboratory data:

Field data is collected from real users, each with their own device and network connection.

Lab data is collected in a controlled environment with predefined device and network settings, without any real users involved.

It’s critical that you understand the distinction between these two types of data. Allow a moment for it to sink in, as this is perhaps the most overlooked part of Core Web Vitals metrics.

You may be getting fantastic Lighthouse (lab data) scores and high-fiving yourself, but your real users are having a bad time (field data). Alternatively, you may have the opposite—excellent field data scores and bad lab data scores!

Then there’s the “Origin Summary,” which is based on field data and reflects the aggregated experience across all pages served from your origin. It’s worth noting that if you have page templates that are notoriously sluggish to load, they will drag down your Origin Summary scores.

What else you need to know about field data

Field data:

  • May not be available if your pages aren’t getting enough traffic, which means there isn’t enough CrUX data. This applies to individual URLs as well as your Origin Summary.
  • Is less useful for debugging because you’ll have to wait for new CrUX data to come in after you’ve made changes. For debugging purposes, we therefore suggest relying on lab data.
  • Also includes data from markets you aren’t targeting. If you’re mainly targeting the United States but also getting a lot of traffic from developing markets without the same access to fast internet and hardware, you’ll see very mixed field data, simply because the audience your field data is drawn from is also very mixed.
  • May also include data from non-indexable pages, such as PPC landing pages.

What else you need to know about lab data

Lab data:

  • Is collected by simulating a Moto G4 phone with a fast 3G connection (as of April 2021).
  • Does not contain user interaction data, since it is simulated. As a result, First Input Delay (FID) isn’t available in lab data.
  • Is repeatable, since the hardware and settings (internet connection and CPU performance) are controlled.

Mobile data vs. desktop data

When researching your Core Web Vitals ratings, you’ll come across mobile data, desktop data, and a combination of the two.

In PageSpeed Insights and Google Search Console, as well as by exploring the CrUX results, you can see the field data for mobile and desktop separately.

It’s not uncommon for your desktop scores to outperform your mobile scores. It’s only natural, given that a desktop computer typically has better hardware and a quicker, more stable internet connection.

Tools to measure your Core Web Vitals

Now that we’ve covered the Web Vitals metrics and the discrepancies between field and lab data, as well as desktop and mobile data, let’s look at how our Web Vitals appear in the most common tools:

  1. ContentKing
  2. Lighthouse
  3. PageSpeed Insights
  4. web.dev Measure
  5. Google Search Console

ContentKing

ContentKing continuously monitors websites, tracks content updates, and flags SEO issues when they arise. You’ll be notified if there’s a problem.

The platform also tracks the site’s Origin Summary Core Web Vitals based on field data. Because it aggregates all pages on your site, the Origin Summary is a great way to quickly check how your site’s overall Core Web Vitals are looking:

Though First Contentful Paint isn’t currently a Core Web Vitals metric, it is a useful measure of perceived load time.

Furthermore, we believe that in the future, Google will make First Contentful Paint a Core Web Vital, or at the very least, increase its significance.

Lighthouse

Lighthouse is an open-source project that audits pages and suggests improvements based on lab data. Lighthouse is used by a number of site performance tools (including PageSpeed Insights and web.dev Measure), and it’s built into Chrome DevTools, so you won’t need to install any additional extensions.

Lighthouse is updated on a regular basis, and the updates are timed to coincide with new releases of Chromium (which powers Google Chrome).

To run Lighthouse from Chrome DevTools, open a page in your browser and press Command + Option + I on Mac, or Control + Shift + I on Windows and Linux. Then go to the Lighthouse tab. This is what our homepage looks like:

Lighthouse starts up after we click Generate report, and we see the following report after 10-15 seconds:

Beneath the performance scores, Lighthouse proposes improvements and provides a more thorough description of how each of the scores is calculated.

Lighthouse from Chrome DevTools is simple and fast for one-off tests, but it’s not built to process a large number of URLs.

PageSpeed Insights (PSI)

PageSpeed Insights (“PSI” for short) lets you submit a URL, after which it will perform three actions:

  • Obtain field data (if there is enough available).
  • Collect lab data by running Lighthouse.
  • Make suggestions for improvements under the headings “Opportunities” and “Diagnostics.”

We submit our homepage once more, and the following field data is returned:

The Origin Summary, on the other hand, paints a different picture of our performance. Our average performance is lower than that of the homepage, implying that pages other than the homepage are pulling us down.

If you scroll down a little further, you’ll see the lab data:

If you compare our lab data results from PageSpeed Insights vs. Lighthouse, you’ll find they differ. Even though we’re testing with the same settings, we’re not:

  1. In the same location (our physical location versus the location used by PSI).
  2. Running the same hardware.
  3. Using the same network connection.

web.dev Measure

web.dev Measure is an alternative to PageSpeed Insights. Submit a URL, and Measure will report Web Vitals based on lab data gathered with Lighthouse, in a slightly more user-friendly UI. The following is what it says about our homepage:

Even though the same tests were run with the same settings, we’re seeing different reported metrics once again. This is because you can’t reliably reproduce the same conditions while running Lighthouse from a different location, on different hardware, and over a different network connection.

Google Search Console

Google Search Console offers field data, for both desktop and mobile devices, for all of your verified properties. You’ll see the following screen when you go to Enhancements > Core Web Vitals:

When you choose Open report for Mobile or Desktop, a more comprehensive view of your Core Web Vitals results for that device type is shown. When we select Mobile, we are presented with the following:

It appears that we’re doing very well!

To see a summary of sample URLs, click either of the rows in the Details table. Google groups together URLs that it considers similar; some of the similar pages on our site, for example, are grouped together. It makes sense for Google to group these URLs because they often suffer from the same issues that slow page load times.


How is your Lighthouse performance score calculated?

We’ve gone through the Web Vitals scores extensively, but we haven’t yet discussed how the overall Lighthouse performance score is calculated. First and foremost, the Lighthouse performance score is derived from lab data. It’s a number ranging from 0 (extremely bad) to 100 (amazing), calculated as a weighted average of the Web Vitals metric scores, which aren’t all weighted equally.

Important notes:

  1. Lighthouse’s performance score, as well as that of other tools that use Lighthouse data, is based on lab data.
  2. Only two Core Web Vitals metrics are shown in the table above. That’s because the third, First Input Delay, can’t be measured in a lab.

Each Web Vitals metric score is based on how the metric’s value compares to real-world website performance data.

For example, based on HTTP Archive data, a Largest Contentful Paint (LCP) value of 1,220ms maps to a metric score of 99.
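To make the “weighted average” concrete, here’s a sketch. The weights and per-metric scores below are purely illustrative (the real weights differ between Lighthouse versions), but the calculation shape is the same:

```javascript
// The overall performance score is a weighted average of per-metric scores
// (each 0-100). These weights are ILLUSTRATIVE ONLY; real Lighthouse
// weights differ between versions.
const exampleWeights = { FCP: 0.10, SI: 0.10, LCP: 0.25, TTI: 0.10, TBT: 0.30, CLS: 0.15 };

function performanceScore(metricScores, weights) {
  return Object.entries(weights).reduce(
    (total, [metric, weight]) => total + metricScores[metric] * weight,
    0
  );
}

// Hypothetical per-metric scores:
const scores = { FCP: 99, SI: 90, LCP: 80, TTI: 95, TBT: 70, CLS: 100 };
console.log(Math.round(performanceScore(scores, exampleWeights))); // 84
```

Note how a heavily weighted metric (TBT here) drags the overall score down much more than a lightly weighted one.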

The following color-coded ranges are used by Lighthouse to “judge” your score:

  • Red (poor): 0 to 49
  • Orange (needs improvement): 50 to 89
  • Green (good): 90 to 100

We strongly advise you to experiment with the Lighthouse Scoring Calculator, which automatically updates the score when you adjust the metrics. To save time configuring the calculator, plug a URL into PageSpeed Insights and click the See calculator link beneath the table with the lab data:

Remember that in order to pass the Core Web Vitals assessment, your page must demonstrate that all three Core Web Vitals are in the green:

  1. Largest Contentful Paint (LCP)
  2. First Input Delay (FID)
  3. Cumulative Layout Shift (CLS)

The future of Core Web Vitals

We’ve gone through the current Core Web Vitals, but what does the future hold for us? Over the next few years, we’re likely to see a lot of updates as Google continues to tweak them.

So far, here’s what we know:

The Core Web Vitals collection will likely expand over time, despite Google’s desire to keep the number of Core Web Vitals as low as possible and the Web Vitals as simple to understand and calculate as possible. For example, see “Exploring the future of Core Web Vitals” from Dec 14, 2020, and “First Contentful Paint (FCP) is a prime candidate to be added” from Feb 12, 2021.

Increased CLS weight and tweaks: the CLS weight is likely to be increased, and Google is trying to improve the handling of long-lived pages, since layout shifts that happen during interaction currently count toward the CLS score.

Support for calculating animation efficiency: Since the user experience extends beyond the initial page load, Google is considering adding animation performance metrics.

Stricter First Input Delay (FID) thresholds: lowering the FID threshold to 50 to 75ms would allow for a more precise measurement of the user experience.

Better support for Single Page Applications (SPAs): since route changes in SPAs aren’t full page loads, they don’t report their own metrics. For example, First Input Delay (FID) and Largest Contentful Paint (LCP) are only measured when the application is first loaded, while the Cumulative Layout Shift keeps growing with each interaction. As a result, Google is researching ways to better assess Web Vitals in SPAs.

We also anticipate the following:

The Moto G4 on a fast 3G connection is currently the baseline for determining your page’s Core Web Vitals, but as newer, more capable phones become widely available and faster internet connections become the norm, we expect Google to update these defaults accordingly.

Websites may begin to consider blocking traffic from non-target markets: sites that receive a lot of traffic from non-target markets without access to fast hardware and internet connectivity may consider blocking users from those markets, to prevent a negative impact on their Core Web Vitals scores.

Need help with getting your business found online? Stridec is a top SEO agency in Singapore that can help you achieve your objectives with a structured and effective SEO programme that will get you more customers and sales. Contact us for a discussion now.