How to Measure Core Web Vitals: An In-depth Guide

What are the Core Web Vitals and how are they calculated? When will you see the results of your fixes in Google Search Console, and how will you know whether they worked? Let’s find out.

Google has confirmed that, starting in June 2021 (the date was originally May 2021, but has since been pushed back), it will begin to consider “Page Experience” as part of Search ranking, as measured by a set of metrics called Core Web Vitals. That deadline is rapidly approaching, and I’m sure many of us are being asked to make sure we pass our Core Web Vitals. But how can you tell?

Answering that question is more complicated than you might think, because while many tools now expose these Core Web Vitals, there are many important concepts and subtleties to understand. Even Google’s own tools, such as PageSpeed Insights and the Core Web Vitals report in Google Search Console, seem to give confusing information.

Why is that, and how can you be sure your fixes were successful? How can you get an accurate picture of your site’s Core Web Vitals? In this article, I’ll try to clarify a little bit more about what’s going on, as well as some of the complexities and misunderstandings surrounding these tools.

What Are The Core Web Vitals?

The Core Web Vitals are a set of three metrics designed to measure the “core” experience of a website: whether it feels fast or slow to users, and so delivers a good experience.

To get the full value of any ranking boost, web pages must be within the green thresholds for all three Core Web Vitals. Outside of the good range, different values of a Core Web Vitals metric on two pages could still result in different page experience rankings.


Largest Contentful Paint (LCP)

This is the most straightforward of the metrics: it measures how quickly the largest item on the page is drawn, which is most likely the piece of content the user is interested in. This could be a banner image, a passage of text, or whatever else; the fact that it’s the largest contentful element on the page is a good indicator that it’s the most important piece. We used to measure the similarly named First Contentful Paint (FCP), but LCP has been seen as a better metric for when the content the visitor most likely wants to see is drawn.

LCP is intended to measure loading performance and is a decent proxy for all of the older performance metrics (e.g., Time to First Byte (TTFB), DOM Content Loaded, Start Render, Speed Index), but from the perspective of the user. It does not cover everything those metrics do, but it is a simpler, single metric that attempts to give a good indication of page load.
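In Chromium-based browsers, LCP is exposed through the PerformanceObserver API, which is what RUM libraries use under the hood. As a rough sketch (the `observeLCP` helper and `report` callback are my own names, not a standard API):

```javascript
// Minimal sketch of observing LCP in a Chromium browser.
// Each new, larger render candidate produces an entry; the last entry
// reported before the user interacts is the final LCP value.
function observeLCP(report) {
  const po = new PerformanceObserver((entryList) => {
    const entries = entryList.getEntries();
    const latest = entries[entries.length - 1]; // latest candidate is the largest so far
    if (latest) report(latest.startTime, latest.element);
  });
  // `buffered: true` replays entries that occurred before the observer registered.
  po.observe({ type: 'largest-contentful-paint', buffered: true });
  return po;
}
```

In practice you would use the web-vitals library rather than this raw API, as it handles the edge cases (tab backgrounding, user input ending LCP measurement) for you.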


First Input Delay (FID)

This second metric measures the time between when a user interacts with a page, such as clicking a link or a button, and when the browser is able to process that interaction. Its aim is to measure how interactive a page is: if all of the content is loaded but the page is unresponsive, that is a frustrating experience for the user.

This metric cannot be simulated, since it depends on when a user actually clicks or otherwise interacts with a page, and then on how long that interaction takes to be handled. Total Blocking Time (TBT) is a decent proxy for FID when using a testing tool without any direct user interaction, but also keep an eye on Time to Interactive (TTI) when looking at FID.
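The reason FID can only come from the field is visible in the underlying API: the 'first-input' performance entry only ever exists after a real user interaction. A minimal sketch (`observeFID` is a hypothetical helper name):

```javascript
// Minimal sketch of observing FID in a Chromium browser.
// The 'first-input' entry only appears after a real user interaction,
// which is exactly why lab tools must fall back to proxies like TBT.
function observeFID(report) {
  const po = new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      // FID = delay between the interaction and its handler starting to run.
      report(entry.processingStart - entry.startTime);
    }
  });
  po.observe({ type: 'first-input', buffered: true });
  return po;
}
```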


Cumulative Layout Shift (CLS)

This metric is quite unlike any that have come before it, for a number of reasons. Its purpose is to measure the visual stability of the page: how much the page jumps around as new content slots into place. I’m sure we’ve all started reading an article only to have the text jump around as images, advertisements, and other content load.

Users find this jarring and annoying, so it’s best to minimise it. Worse still is when the button you were about to press moves and you press another button instead! CLS attempts to account for these layout shifts.
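In supporting browsers, layout shifts are reported as 'layout-shift' performance entries. As a rough sketch of how a RUM library accumulates them into a CLS value (`observeCLS` is my own name; this simple running sum reflects the current page-lifetime definition, not the windowed redefinition being discussed):

```javascript
// Minimal sketch of accumulating CLS in a Chromium browser.
// Each unexpected layout shift adds its score to the total; shifts that
// happen shortly after user input are flagged hadRecentInput and excluded,
// since a shift the user caused isn't "unexpected".
function observeCLS(report) {
  let cls = 0;
  const po = new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      if (!entry.hadRecentInput) cls += entry.value;
    }
    report(cls);
  });
  po.observe({ type: 'layout-shift', buffered: true });
  return po;
}
```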

Lab Versus RUM

One of the most important things to know is that the Core Web Vitals are based on field metrics, or Real User Monitoring (RUM) data. Google uses anonymised data from Chrome users to measure these metrics and publishes it in the Chrome User Experience Report (CrUX). That data is what it uses to measure these three ranking metrics. CrUX data is available in a number of tools, including Google Search Console for your site.

The fact that RUM data is used is an important distinction, because some of these metrics (FID excepted) are also available in synthetic or “lab-based” web performance tools like Lighthouse, which have been the staple of web performance monitoring for many in the past. These tools run page loads on simulated networks and devices and then report back what they found for that synthetic run.

So, even if you run Lighthouse on your high-powered developer machine and get fantastic results, that does not represent what users encounter in the real world, and therefore what Google would use to calculate your website’s user experience.

LCP is highly dependent on network conditions and on the processing power of the device (and many more of your users are likely using lower-powered devices than you realise!). A counterpoint, however, is that, at least for many Western sites, our mobile phones are perhaps not as low-powered as tools like Lighthouse in mobile mode suggest, as these are heavily throttled. You may therefore find that your field data on mobile is better than testing with this suggests (there are some discussions on changing the Lighthouse mobile settings).

Similarly, FID depends on processing speed and on the device’s ability to handle all of the content we’re sending it, whether that’s images to process, elements to lay out on the page, or, of course, all of the JavaScript we love to churn through.

CLS is, in theory, easier to measure in tools, since it’s less susceptible to network and hardware variations, so you would assume it isn’t as subject to the differences between lab and RUM, except for a few important factors that aren’t immediately obvious:

  • It is measured over the whole life of the page, not just for the page load that most tools measure, as we’ll explore later in this article. This causes a lot of confusion when a lab-simulated page load shows a very low CLS but the field CLS score is much higher, due to shifts caused by scrolling or other changes after the initial load that testing tools typically measure.
  • It depends on the size of the browser window: most tools, such as PageSpeed Insights, measure at one mobile and one desktop size, but different mobiles have different screen sizes, and desktop displays are often much larger than these tools assume (WebPageTest recently increased its default screen size to try to better reflect usage).
  • Different users see different things on a page. Cookie banners, personalised content such as advertisements, ad blockers, and A/B tests, to name but a few examples, all affect what content is drawn and therefore what CLS those users experience.
  • It’s still evolving, and the Chrome team has been busy removing “invisible” shifts and other things that shouldn’t count toward CLS. Larger changes to how CLS is measured are also in progress. This means you can see different CLS values depending on which version of Chrome is being used.

Using the same name for metrics in lab-based testing tools when they may not be accurate reflections of real-life versions is misleading, and some are proposing that we rename some or all of these metrics in Lighthouse to differentiate these simulated metrics from the real-world RUM metrics that control Google rankings.

Previous Web Performance Metrics

Another source of confusion is that these metrics are new and distinct from the metrics we’ve traditionally used to measure web performance, which are surfaced by some of the same tools, such as PageSpeed Insights, a free online auditing tool. Simply enter the URL you’d like to audit and press Analyze, and you’ll be presented with two tabs (one for mobile and one for desktop) containing a wealth of information:

At the top is the headline Lighthouse Performance score out of 100. This has been well known in web performance circles for a while now, and it is often cited as a key performance metric to aim for, as well as a way to condense the complexity of many metrics into a single, easy-to-understand number. That aim has some overlap with the Core Web Vitals goal, but it is a summary of a wider set of metrics and not a rundown of the three Core Web Vitals (even the lab-based versions of them).

The Lighthouse Performance score is currently made up of six metrics, including some of the Core Web Vitals and some others:

  • First Contentful Paint (FCP)
  • Speed Index (SI)
  • Largest Contentful Paint (LCP)
  • Time to Interactive (TTI)
  • Total Blocking Time (TBT)
  • Cumulative Layout Shift (CLS)

To add to the complication, each of these six metrics is weighted differently in the Performance score, and CLS, despite being one of the Core Web Vitals, currently accounts for only 5% of the Lighthouse Performance score (a figure I expect to rise soon after the next iteration of CLS is released). All of this means that your website may have a very high, green Lighthouse Performance score and yet still fail the Core Web Vitals thresholds. As a result, you may need to refocus your efforts for now to concentrate on these three core metrics.

Once we get past the big green score in that screenshot, we move on to the field data, where we hit another stumbling block: despite not being one of the Core Web Vitals, First Contentful Paint is shown in this field data alongside the other three, and it is often flagged as a warning, as in this example. (Perhaps its thresholds could do with some tweaking?) Did FCP just narrowly miss out on being a Core Web Vital, or does the display simply look better balanced with four metrics? This field data section is important, and we’ll come back to it later.

If no field data for the URL being tested is available, origin data for the entire domain will be shown instead (this is hidden by default when field data is available for that particular URL as shown above).

After the field data comes the lab data, with the six metrics that make up the Performance score shown at the end. If you press the toggle at the top right, you get a little more description of each of those metrics:

As you can see, the lab versions of LCP and CLS are included, and since they’re part of the Core Web Vitals, they get a blue label to mark them as particularly important. PageSpeed Insights also includes a handy calculator link that shows the impact of each of these scores on the total score at the top, and allows you to adjust them to see what improving each metric would do to your overall score. But, as I said, the overall web performance score is likely to take a back seat for a while as the Core Web Vitals bask in the spotlight.

Lighthouse also runs close to 50 other checks, listed under Opportunities and Diagnostics. These have no direct effect on ranking, but web developers can use them to improve their site’s performance. They are surfaced in PageSpeed Insights below all of the metrics, just out of view in the screenshot above. Think of them as suggestions on how to improve performance rather than concrete problems that must be fixed.

The diagnostics will show you the LCP element, as well as the shifts that have contributed to your CLS score, both of which are extremely useful pieces of information when optimising for your Core Web Vitals!

So, while web performance advocates may have concentrated heavily on Lighthouse scores and audits in the past, I can see attention shifting to the three Core Web Vitals metrics, at least for the time being, while we get our heads around them. The other Lighthouse metrics, and the overall score, are still useful for optimising your site’s performance, but the Core Web Vitals are currently taking up most of the new web performance and SEO blog posts.

Viewing The Core Web Vitals For Your Site

Entering a URL into PageSpeed Insights, as discussed above, is the quickest way to get a look at the Core Web Vitals for an individual URL and for the whole origin. To see how Google views the Core Web Vitals for your entire site, get access to Google Search Console. This is a free Google product (though there are some, shall we say, “frustrations” around how frequently its data updates).

Google Search Console has long been used by SEO teams, but with Core Web Vitals surfacing the kind of information that site developers need, development teams should get access to this tool too, if they haven’t already. To gain access you’ll need a Google account, and you’ll need to verify your ownership of the site through one of various methods (such as uploading a file to your web server, adding a DNS record, and so on).

The Core Web Vitals report in Google Search Console provides a rundown of how your site has performed in the last 90 days in terms of the Core Web Vitals:

To be considered as fully passing the Core Web Vitals, all of your pages should be green, with no ambers or reds. While an amber indicates you’re close to passing, only greens count for the full benefit, so don’t settle for second best. Whether you need all of your pages to pass, or only your key ones, is up to you, but often there will be similar issues across many pages, and fixing those for the site will help bring the number of failing URLs down to a level where you can make those decisions.

Google will initially only apply the Core Web Vitals ranking to mobile search, but it’s surely only a matter of time before that rolls out to desktop too, so don’t ignore desktop while you’re reviewing and fixing your pages.

When you click on one of the reports, you’ll get more detail about which of the Web Vitals are failing, along with a sample of the URLs affected. Google Search Console groups URLs into buckets so that, in theory, you can address similar pages together. You can then click on a URL to run PageSpeed Insights for a quick performance audit (including the Core Web Vitals field data for that page, where available). You then fix the issues it highlights, rerun PageSpeed Insights to confirm the lab metrics are now correct, and move on to the next page.

However, once you start looking at that Core Web Vitals report (obsessively, for some of us!), you may become frustrated that it doesn’t seem to update to reflect your hard work. It does appear to update every day, as the graph moves, yet it often barely changes even after you’ve released your fixes. Why is that?

Similarly, the field data in PageSpeed Insights stubbornly continues to show the URL and the site as failing. So what’s the story here?

The Chrome User Experience Report (CrUX)

The Web Vitals are slow to update because the field data is based on the last 28 days of data in the Chrome User Experience Report (CrUX), and only on the 75th percentile of that data. Using 28 days of data and the 75th percentile is a good thing, in that these choices remove variances and extremes to give a more accurate picture of your site’s performance without producing a lot of noise that’s difficult to interpret.

Performance metrics are very susceptible to network and device variations, so we need to smooth out this noise to get to the real story of how your website performs for most users. On the other hand, these measures are frustratingly slow to update, creating a very slow feedback loop from fixing issues to seeing the results of those fixes reflected there.

The 75th percentile (or p75) in particular, and the delay it causes, is interesting, as I don’t think that’s well understood. For each of the Core Web Vitals, it looks at the value of the metric covering 75% of the page views of your visitors over those 28 days.

It is, therefore, the highest Core Web Vitals value of 75% of your page views (or, conversely, the lowest Core Web Vitals value that 75% of your page views come in under). So it is not the average of those 75% of page views, but the worst value of that group of users.

Reporting like this causes a delay that a non-percentile-based rolling mean would not. We’ll have to get a little mathy here (I’ll try to keep it to a minimum), but let’s say, for simplicity’s sake, that everyone got an LCP of 10 seconds for the last month, that you then fixed it so it now takes only 1 second, and that you had the exact same number of visitors every day, all of whom scored this LCP.

As you can see, the dramatic improvement doesn’t show up in the CrUX score until day 22, when it suddenly jumps to the new, lower value (once we pass 75% of the 28-day window, and that’s no coincidence!). Until then, over 25% of your page views were based on data gathered before the change, so we were still getting the old value of 10, and your p75 value was stuck at 10.

It therefore looks like you’ve made no progress at all for a long time, whereas a mean average (if that were used) would show a gradual decrease starting immediately, so progress could actually be seen. On the plus side, for the last few days the mean is actually higher than the p75 value, since p75, by definition, filters out the extremes completely.
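This behaviour is easy to reproduce with a toy simulation. Assuming one equally weighted LCP sample per day (a simplification of real traffic), the p75 of a 28-day window stays pinned at the old value until day 22:

```javascript
// Toy model of CrUX's 28-day window at the 75th percentile (p75).
// Assumption for simplicity: one equally weighted LCP sample per day.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  // Nearest-rank style: the value just past 75% of the sorted samples.
  return sorted[Math.floor(sorted.length * 0.75)];
}

const WINDOW = 28;
function p75AfterFix(daysSinceFix, oldLCP = 10, newLCP = 1) {
  // The window holds `daysSinceFix` days of the fixed value; the rest is old.
  const window = [
    ...Array(daysSinceFix).fill(newLCP),
    ...Array(WINDOW - daysSinceFix).fill(oldLCP),
  ];
  return p75(window);
}

for (const day of [1, 15, 21, 22, 28]) {
  console.log(`day ${day}: p75 LCP = ${p75AfterFix(day)}s`);
}
// p75 only drops from 10s to 1s on day 22, when the new value
// finally covers more than 75% of the window.
```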

While the example in the table above is oversimplified, it illustrates why many people can see Web Vitals graphs like the one below, in which all of your pages cross a threshold one day and then are fine (woohoo!):

The graph begins with all amber, no reds, and then abruptly changes to all green halfway through the graph.

This may come as a surprise to those expecting more incremental (and instantaneous) improvements as you work through page issues and as you visit different pages more frequently. On a related note, depending on your fixes and how they affect the thresholds, it’s not uncommon for your Search Console graph to go through an amber phase before reaching the sweet, sweet green colour:

Dave Smart ran a fascinating experiment, Tracking Changes in Search Console’s Report Core Web Vitals Data, in which he wanted to see how quickly the graphs updated. He didn’t take into account the 75th-percentile aspect of CrUX (which explains the lack of movement in some of his graphs), but it’s still an interesting real-world experiment on how this graph updates, and well worth a read!

In my own experience, this 28-day p75 methodology doesn’t fully explain the lag in the Core Web Vitals report, and we’ll discuss some other potential reasons for that shortly.

So is that the best you can do: make your fixes, then wait patiently, drumming your fingers, until CrUX deems them worthy and updates the graphs in Search Console and in PageSpeed Insights? And if your fixes turn out not to be good enough, start the whole cycle again? In this day of instant feedback to satisfy our cravings, and of tight feedback loops for developers to increase productivity, that is not very satisfying at all!

In the meantime, there are some things you can do to see if some of the fixes have the desired effect.

Digging Into The CrUX Data

Let’s dig deeper into the CrUX data, which sits at the heart of these measurements, to see what else it can tell us. Going back to PageSpeed Insights, we can see it shows not only the p75 value for the site, but also the percentage of page views in each of the green, amber, and red buckets, shown in the colour bars beneath:

The screenshot above shows CLS failing the Core Web Vitals scoring with a p75 value of 0.11, which is above the 0.1 passing limit. Despite the red colour of the font, this is actually an amber ranking (red would be above 0.25). The green bar is at 73%; once it reaches 75%, the page will pass the Core Web Vitals.

While you can’t see historical CrUX values, you can monitor them over time from now on. If the green percentage rises to 74% tomorrow, then we’re trending in the right direction (subject to fluctuations!) and can hope to hit the magic 75% mark soon. For values further away, you can check periodically to see whether they’re increasing, and then project when you might start to show as passing.

CrUX is also available as a free API, which gives more precise figures for those percentages. Once you’ve signed up for an API key, you can call the API with a curl command like this (replacing the API_KEY, formFactor, and URL as appropriate):
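For example, a query for the mobile data of a single URL looks something like this (a sketch: `API_KEY` is a placeholder for your own key, and `www.example.com` stands in for your page; swap `"url"` for `"origin"` to query a whole domain):

```shell
API_KEY="YOUR_API_KEY"   # placeholder: substitute your own CrUX API key

curl -s "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}" \
  --header 'Content-Type: application/json' \
  --data '{"formFactor": "PHONE", "url": "https://www.example.com/"}'
```

The JSON response includes, for each metric, the density of page views in the good, needs-improvement, and poor buckets, along with the p75 value discussed above.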

If the above looks a bit daunting and you only want to see this data for one URL, PageSpeed Insights also returns this level of precision. You can see it by opening DevTools, running your PageSpeed Insights test, and finding the XHR call it makes:

There is also an interactive CrUX API explorer that lets you make sample queries of the CrUX API. For regular use of the API, though, getting a free key and using curl or some other API tool is usually easier.

The API can also be called with an “origin” instead of a URL, in which case it returns the aggregated value of all page visits to that domain. PageSpeed Insights exposes this information, which can be useful if your URL has no CrUX data of its own, but Google Search Console does not. Google hasn’t stated (and is unlikely to!) exactly how the Core Web Vitals will affect ranking. Will the origin-level score impact rankings, or only the individual URL scores? When individual URL data isn’t available, will Google fall back to origin-level scores, as PageSpeed Insights does? It’s difficult to know at the moment, and the only hint so far is this from the FAQ:

Q: How do you calculate a score for a URL that was just published but hasn’t yet produced 28 days of data?

A: We can use techniques like grouping pages that are similar and computing scores based on that aggregation, similar to how Search Console reports page experience results. Small sites without field data should not be concerned because this applies to pages that generate little to no traffic.

The CrUX API can be used programmatically, and Rick Viscomi of the Google CrUX team has created a Google Sheets monitor that allows you to check URLs or origins in bulk, and even to automatically track CrUX data over time if you want to keep a close eye on a number of URLs or origins. Simply clone the sheet, go into the Tools → Script editor, set the CRUX API KEY script property with your key (this must be done in the legacy editor), and then run the script. It will call the CrUX API for the given URLs or origins and append rows to the bottom of the sheet with the data. This can then be run periodically or on a schedule.

I used this to check all the URLs for a site with a slow-updating Core Web Vitals report in Google Search Console, and it confirmed that CrUX had no data for a lot of the URLs and that most of the rest had passed. This shows that the Google Search Console report is behind, even though it is supposed to be based on CrUX data. I’m not sure whether that was due to URLs that had previously failed but now have too little traffic to get updated CrUX data showing them passing, or due to something else, but it proves that this report can definitely lag.

I suspect a large part of this is due to URLs in CrUX that lack data, with Google Search attempting to proxy a value for them. So, while this report is a great place to get an overview of your site, and one to monitor going forward, it’s not great for working through issues where you want more immediate feedback.

For those who want to look into the CrUX data in more detail, there are monthly tables of CrUX data available in BigQuery (at the origin level only, not for individual URLs), and Rick has documented how to create a CrUX dashboard based on that, which can be a good way of monitoring your overall website performance over time.

Other Information About The CrUX Data

So, now you should have a clear understanding of the CrUX dataset, why some of the tools that use it tend to update slowly and erratically, and how to dig deeper into it. But, before we move on to alternatives, there are a few more things to know about CrUX that will help you understand the data it displays. So here’s a list of other useful knowledge about CrUX in relation to Core Web Vitals that I’ve gathered.

CrUX is Chrome-only. That means all those iOS users, as well as users of other browsers (Desktop Safari, Firefox, Edge, and so on), not to mention older browsers (Internet Explorer, please hurry up and fade out!), aren’t having their user experience reflected in CrUX data, and so in Google’s view of the Core Web Vitals.

Now, Chrome is very widely used (though maybe not by your site’s visitors?), and in most cases the performance issues it highlights will affect other browsers too, but it is something to be aware of. And it feels a little “icky”, to say the least, that Google’s monopoly position in search is now encouraging people to optimise for its browser. We’ll discuss alternative solutions for this limited view below.

The version of Chrome being used also has an impact, since these metrics (CLS in particular) are still evolving, and bugs are still being found and fixed. This adds another layer of complexity to understanding the data. CLS has been continually improved in recent versions of Chrome, with a larger redefinition of CLS possibly arriving in Chrome 92. Because field data is being used, it takes time for these changes to reach users and then to show up in the CrUX data.

CrUX is only gathered from a subset of Chrome users too, or, to quote the official definition:

“[CrUX is] aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled.”

— Google Developers’ Chrome User Experience Report

So, if you’re looking at a site that is mostly accessed from corporate networks, where those settings are turned off by central IT policies, you may not be seeing much data, particularly if those poor corporate users are still being forced to use Internet Explorer!

CrUX covers all pages, including those not usually surfaced in Google Search: noindexed, robotted, and logged-in pages will all be included (though there are minimum thresholds for a URL and an origin to be exposed in CrUX). Those sorts of pages are unlikely to appear in Google Search results anyway, so the ranking impact on them is probably moot, but they are still included in CrUX. The Google Search Console Core Web Vitals report, however, appears to show only indexed URLs, so they won’t show up there.

The non-indexed and non-public pages will, however, be included in the origin figure shown in PageSpeed Insights and in the raw CrUX data, and, as I said above, we’re not sure what effect that has. On a site I work on, a large percentage of visitors use our logged-in pages, and while the public pages were very performant, the logged-in pages were not, and that skewed the origin Web Vitals scores significantly.

The CrUX API can be used to get the data for these logged-in URLs, but tools like PageSpeed Insights cannot (since they run an actual browser and so will be redirected to the login pages). Once we saw that CrUX data and realised the impact, we fixed those pages, and the origin figures have started to drop, but, as ever, it’s taking time to feed through.

Noindexed or logged-in pages are also often “apps” rather than discrete collections of pages, perhaps using a Single Page Application methodology with one real URL but many different pages beneath it. This can have a particular impact on CLS, since it is measured over the whole life of the page (though hopefully the upcoming changes to CLS will help with that).

As mentioned previously, while the Core Web Vitals report in Google Search Console is based on CrUX, it is definitely not the same data. I believe this is due to Google Search Console attempting to estimate Web Vitals for URLs that have no CrUX data. The sample URLs in this report are also out of sync with the CrUX data.

I’ve often seen URLs that have been fixed, where the CrUX data, in PageSpeed Insights or directly through the API, shows them passing the Web Vitals, yet when you click on the red line in the Core Web Vitals report and look at the sample URLs, those passing URLs are listed as if they were failing. I’m not sure what heuristics Google Search Console uses for this grouping, or how often it and the sample URLs are refreshed, but it could do with updating more often in my opinion!

CrUX is measured on page views. That means your most popular pages will have a large influence on your origin’s CrUX data. Some pages will drop in and out of CrUX each day as they meet or miss the minimum thresholds, and perhaps the origin data comes into play for those then? Also, if you had a big campaign for a period, with lots of visits, and then made improvements but have had fewer visits since, you’ll see more of the older, worse data.

CrUX data is split into mobile, desktop, and tablet categories, though only mobile and desktop are surfaced in most tools. If you really want to look at the tablet data, you can use the CrUX API and BigQuery, but I’d advise concentrating on mobile and then desktop. Also, note that in some cases (the CrUX API, for instance) it’s labelled PHONE rather than MOBILE, to reflect that the split is based on the form factor rather than on being on a mobile network.

Overall, a lot of these issues are side effects of field (RUM) data collection, but all this complexity can be overwhelming when you’ve been tasked with “fixing our Core Web Vitals”. The better you understand how these Core Web Vitals are gathered and processed, the more sense the data will make, and the more time you can spend fixing the actual issues rather than scratching your head wondering why the tools aren’t reporting what you think they should.

Getting Faster Feedback

OK, so by now you should have a good understanding of how the Core Web Vitals are gathered and exposed through the various tools, but that still leaves us with the question of how to get better, faster feedback. Waiting 21 to 28 days to see the change in CrUX data, only to realise your fixes weren’t sufficient, is far too slow. So while some of the tips above can be used to see whether CrUX is trending in the right direction, it’s still not ideal. The only alternative, therefore, is to look beyond CrUX and to replicate what it’s doing, but expose the data faster.

There are a number of excellent commercial RUM products on the market that measure the performance your users actually experience and expose the data in dashboards or via APIs, allowing you to query it in much more depth and at much more granular levels than CrUX allows. To avoid accusations of favouritism, or of offending anyone I leave off, I won’t name any products here. As the Core Web Vitals are exposed as browser APIs (by Chromium-based browsers at least; other browsers such as Safari and Firefox do not yet expose some of the newer metrics like LCP and CLS), they should, in theory, see the same data as is exposed to CrUX, and therefore to Google, with the same caveats in mind!

For those without access to these RUM products, Google has also made the web-vitals JavaScript library available, which allows you to measure these metrics and report them back however you see fit. For example, by running the following script on your web pages, you can send this data back to Google Analytics:
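The script in question looks something like the following sketch, reconstructed here from the web-vitals library’s documented usage with analytics.js (your analytics setup may differ, and the exact snippet on your pages may vary):

```javascript
// Sketch: report Core Web Vitals to Google Analytics via analytics.js's `ga`.
// Assumes this runs in a <script type="module"> on the page, with the
// web-vitals library imported, e.g.:
//   import { getCLS, getFID, getLCP } from 'https://unpkg.com/web-vitals?module';

function sendToGoogleAnalytics({ name, delta, id }) {
  ga('send', 'event', {
    eventCategory: 'Web Vitals',
    eventAction: name,
    // GA event values must be integers; CLS is a small fraction, so scale it up.
    eventValue: Math.round(name === 'CLS' ? delta * 1000 : delta),
    eventLabel: id, // ties together multiple beacons from the same page load
    nonInteraction: true, // stop these events affecting bounce rate
  });
}

// Register the handler for each metric:
// getCLS(sendToGoogleAnalytics);
// getFID(sendToGoogleAnalytics);
// getLCP(sendToGoogleAnalytics);
```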

Now, I realise the irony of adding yet another script to measure the impact of your website, which is probably slow in part because of too much JavaScript! But, as you can see above, the script is quite small, and the library it loads is only a further 1.7 kB compressed (4.0 kB uncompressed). Additionally, as a module (which will be ignored by older browsers that don’t understand Web Vitals anyway), its execution is deferred, so it shouldn’t impact your site too much, and the data it gathers can be invaluable in helping you investigate your Core Web Vitals in a more real-time manner than the CrUX data allows.

The script registers a callback to send a Google Analytics event as each metric becomes available. For FCP and TTFB this happens as soon as the page is loaded, for FID it happens after the first interaction from the user, and for LCP and CLS it happens when the page is navigated away from or backgrounded, once the final LCP and CLS values are known for certain. You can see these beacons being sent for a page using developer tools, whereas CrUX data is collected in the background and isn't visible there.

The advantage of putting this data in a platform like Google Analytics is that you can slice and dice it based on all of the other data you have, such as form factor (desktop or mobile), new or returning visitors, funnel conversions, Chrome version, and so on. And since it's RUM data, it will be influenced by real-world usage (users with faster or slower devices will report back faster or slower values), rather than a developer checking on their high-end machine and declaring everything to be fine.

At the same time, keep in mind that CrUX removes much of this variation by aggregating data over 28 days and looking only at the 75th percentile. If you have access to the raw data you can see more granular values, but you're also more exposed to drastic swings. Even so, earlier access to the data can be extremely beneficial if you keep that in mind.
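If you do collect raw values yourself, reproducing CrUX's 75th-percentile view is straightforward. A minimal sketch, using a simple nearest-rank percentile (the sample values below are made up for illustration):

```javascript
// Nearest-rank percentile, mirroring how CrUX summarises a metric's
// distribution by reporting its 75th percentile.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Hypothetical LCP samples in milliseconds collected via RUM.
const lcpSamples = [1200, 1800, 2400, 950, 3100, 2100, 1600, 2900];
const p75 = percentile(lcpSamples, 75);
console.log(p75); // 2400, just inside the "good" LCP threshold of 2500 ms
```

Note how a single slow outlier (the 3100 ms sample) barely moves the p75 figure, which is exactly why the percentile view is more stable than an average.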

Phil Walton of Google has built a Web Vitals dashboard that can be pointed at your Google Analytics account to import this data and calculate the 75th percentile (which helps with those variations!). It then shows your Core Web Vitals scores, a histogram of the data, a time series of the data, and your top five visited pages, along with the elements most responsible for those scores.

With a little more JavaScript, you can also surface more detail (such as what the LCP element is, or which element is causing the most CLS) in a Google Analytics Custom Dimension. Phil wrote a great Debug Web Vitals in the Field post on this, essentially showing how to enhance the above script to submit this debug information as well, as seen in this version of the script.
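As a hedged sketch of that pattern (the entry shapes below are assumptions modelled on the browser's Performance APIs; Phil's post has a fuller implementation), you might derive a short identifier for the element behind a metric before sending it as a custom dimension:

```javascript
// Build a short debug identifier for the element responsible for a metric,
// suitable for sending as a Google Analytics Custom Dimension.
// `entries` stands in for the performance-entry list a web-vitals callback
// exposes; the shapes here are illustrative assumptions.
function getDebugTarget(name, entries = []) {
  const last = entries[entries.length - 1];
  if (!last) return '(not set)';
  if (name === 'LCP' && last.element) {
    // e.g. "img.hero-banner" for the largest contentful element.
    const el = last.element;
    return el.tagName.toLowerCase() + (el.className ? '.' + el.className : '');
  }
  if (name === 'CLS' && last.sources && last.sources.length) {
    // Report the first shifted element of the most recent layout shift.
    const node = last.sources[0].node;
    return node ? node.tagName.toLowerCase() : '(no node)';
  }
  return '(not set)';
}
```

The resulting string would then be added to the analytics event as an extra field (for example as Custom Dimension 1), so you can segment metric values by the element that produced them.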

These dimensions can also be surfaced in the dashboard (using ga:dimension1 as the Debug dimension field, assuming this is being sent back in Google Analytics Custom Dimension 1 in the script) to see the LCP element as seen by those browsers:

Commercial RUM products will also expose this type of data (and more!), but for those just dipping a toe in the water and not ready to commit to the financial investment of those products, this at least gives you a taste of RUM-based metrics and how useful they can be for getting that all-important faster feedback on the changes you're making. If this whets your appetite for more detail, do check out the other RUM products available to see how they can help you too.

When looking at alternative measurements and RUM products, do remember to circle back to what Google sees for your site, as it may be different. It would be a shame to put in the effort to improve performance, yet miss out on all the ranking benefits that come with it! So keep an eye on the graphs in Search Console to make sure you're not missing anything.


The Core Web Vitals are a set of key metrics that attempt to reflect the user experience of browsing the web. As a keen advocate of improving site performance, I applaud every effort to do so, and the ranking impact of these metrics has certainly sparked a lot of interest in the web performance and SEO communities.

Although the metrics themselves are intriguing, the use of CrUX data to calculate them is perhaps even more so. It makes RUM data available to websites that have never contemplated measuring site performance in the field in this way before. RUM data reflects what users are actually experiencing, in all their varied configurations, and there is no substitute for it in understanding how your website is performing and being experienced by your visitors.

However, RUM data is noisy, which is why we have been so reliant on lab data for so long. CrUX's efforts to mitigate this noise help to provide a more stable view, but at the expense of making it difficult to see recent changes.

Hopefully, this post has clarified the different methods of accessing Core Web Vitals data for your website, as well as some of the limitations of each approach. I also hope it helps you make sense of some of the data you've been struggling to understand, and gives you some ideas for working around those limitations.

Need help getting your business found online? Stridec is a top SEO agency in Singapore that can help you achieve your objectives with a structured and effective SEO programme that will get you more customers and sales. Contact us for a discussion now.