Making GraphQL Work In WordPress

Headless WordPress seems to be in vogue lately, with many new developments taking place in just the last few weeks. One of the reasons for the explosion in activity is the release of version 1.0 of WPGraphQL, a GraphQL server for WordPress.

WPGraphQL provides a GraphQL API: a way to fetch data from, and post data to, a WordPress website. It enables us to decouple the experience of managing our content, which is done via WordPress, from rendering the website, for which we can use the library of the framework of our choice (React, Vue.js, Gatsby, Next.js, or any other).

Until recently, WPGraphQL was the only GraphQL server for WordPress. But now another such plugin is available: GraphQL API for WordPress, authored by me.

These two plugins serve the same purpose: to provide a GraphQL API to a WordPress website. You may be wondering: Why another plugin when there’s already WPGraphQL? Do these two plugins do the same thing? Or are they for different situations?

Let me say this first: WPGraphQL works great. I didn’t build my plugin because of any problem with it.

I built GraphQL API for WordPress because I had been working on an engine to retrieve data efficiently, which happened to be very suitable for GraphQL. So, then I said to myself, “Why not?”, and I built it. (And also a couple of other reasons.)

The two plugins have different architectures, giving them different characteristics, which make particular tasks easier to achieve with one plugin or the other.

In this article, I’ll describe, from my own point of view but as objectively as possible, when WPGraphQL is the way to go and when GraphQL API for WordPress is a better choice.

Use WPGraphQL If: Using Gatsby

If you’re building a website using Gatsby, then there is only one choice: WPGraphQL.

The reason is that only WPGraphQL has the Gatsby source plugin for WordPress. In addition, WPGraphQL’s creator, Jason Bahl, was employed until recently by Gatsby, so we can fully trust that this plugin will suit Gatsby’s needs.

Gatsby receives all data from the WordPress website, and from then on, the logic of the application will be fully on Gatsby’s side, not on WordPress’. Hence, no additions to WPGraphQL (such as the potential additions of @stream or @defer directives) would make much of a difference.

WPGraphQL is already as good as Gatsby needs it to be.

Use WPGraphQL If: Using One of the New Headless Frameworks

As I mentioned, lately there has been a flurry of activity in the WordPress headless space concerning several new frameworks and starter projects, all of them based on Next.js.

If you need to use any of these new headless frameworks, then you will need to use WPGraphQL, because they have all been built on top of this plugin.

That’s a bit unfortunate: I’d really love for GraphQL API for WordPress to be able to power them too. But for that to happen, these frameworks would need to operate with GraphQL via an interface, so that we could swap GraphQL servers.

I’m somewhat hopeful that one of these frameworks will put such an interface in place. I asked about it on the Headless WordPress Framework discussion board and was told that it might be considered. I also asked on WebDevStudios’ Next.js WordPress Starter discussion board, but alas, my question was immediately deleted, without a response. (Not encouraging, is it?)

So WPGraphQL it is then, currently and for the foreseeable future.

Use Either (Or Neither) If: Using Frontity

Frontity is a React framework for WordPress. It enables you to build a React-based application that is managed in the back end via WordPress. Even creating blog posts using the WordPress editor is supported out of the box.

Frontity manages the state of the application, without leaking how the data was obtained. Even though it is based on REST by default, you can also power it via GraphQL by implementing the corresponding source plugin.

This is how Frontity is smart: The source plugin is an interface to communicate with the data provider. Currently, the only available source plugin is the one for the WordPress REST API. But anyone can implement a source plugin for either WPGraphQL or GraphQL API for WordPress. (This is the approach that I wish the Next.js-based frameworks replicated.)

Conclusion: Neither WPGraphQL nor the GraphQL API offers any advantage over the other for working with Frontity, and they both require some initial effort to plug them in.

Use WPGraphQL If: Creating a Static Site

In the first two sections, the conclusion was the same: Use WPGraphQL. But my response to this conclusion was different: While with Gatsby I had no regret, with Next.js I felt compelled to do something about it.

Why is that?

The difference is that, while Gatsby is purely a static site generator, Next.js can power both static and live websites.

I mentioned that WPGraphQL is already good enough for Gatsby. This statement can actually be broadened: WPGraphQL is already good enough for any static site generator. Once the static site generator gets the data from the WordPress website, it is pretty much settled with WordPress.

Even if GraphQL API for WordPress offers additional features, it will most likely not make a difference to the static site generator.

Hence, because WPGraphQL is already good enough, and it has completely mapped the GraphQL schema (which is still a work in progress for GraphQL API for WordPress), then WPGraphQL is the most suitable option, now and for the foreseeable future.

Use GraphQL API If: Using GraphQL in a Live (i.e. Non-Static) Website

Now, the situation above changes if we want GraphQL to fetch data from a live website: for instance, when powering a mobile app, plotting real-time data on a website (say, to display analytics), or combining the static and live approaches on the same website.

For instance, let’s say we have built a simple static blog using one of the Next.js frameworks, and we want to allow users to add comments to blog posts. How should this task be handled?

We have two options: static and live (or dynamic). If we opt for static, then comments will be rendered together with the rest of the website. Then, whenever a comment is added, we must trigger a webhook to regenerate and redeploy the website.

This approach has a few inconveniences. The regeneration and redeployment process could take a few minutes, during which the new comment will not be available. In addition, if the website receives many comments a day, the static approach will require more server processing time, which could become costly (some hosting companies charge based on server time).

In this situation, it would make sense to render the website statically without comments, and then retrieve the comments from a live site and render them dynamically in the client.
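
To make this hybrid approach concrete, here is a minimal sketch of the dynamic half, assuming a JavaScript client: the page is rendered statically without comments, and the client fetches them from the live GraphQL endpoint after load. The endpoint URL and the field names in the query are assumptions for illustration, not taken from either plugin’s actual schema.

// Hypothetical client-side fetch of comments for a statically rendered post.
// The endpoint and field names below are illustrative assumptions.
async function loadComments(postId) {
  const query = `
    query GetComments($postId: ID!) {
      post(id: $postId) {
        comments {
          author
          content
          date
        }
      }
    }
  `;
  const response = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { postId } }),
  });
  const { data } = await response.json();
  return data.post.comments; // render these into the page on the client
}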

For this, Next.js is recommended over Gatsby. It can better handle the static and live approaches, including supporting different outputs for users with different capabilities.

Back to the GraphQL discussion: Why do I recommend GraphQL API for WordPress when dealing with live data? I do because the GraphQL server can have a direct impact on the application, mainly in terms of speed and security.

For a purely static website, the WordPress website can be kept private (it might even live on the developer’s laptop), so it’s safe. And the user will not be waiting for a response from the server, so speed is not necessarily of critical importance.

For a live site, though, the GraphQL API will be made public, so data safety becomes an issue. We must make sure that no malicious actors can access it. In addition, the user will be waiting for a response, so speed becomes a critical consideration.

In this respect, GraphQL API for WordPress has a few advantages over WPGraphQL.

WPGraphQL does implement security measures, such as disabling introspection by default. But GraphQL API for WordPress goes further, by disabling the single endpoint by default (along with several other measures). This is possible because GraphQL API for WordPress offers persisted queries natively.

As for speed, persisted queries also make the API faster, because the response can then be cached via HTTP caching on several layers, including the client, content delivery network, and server.
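
To illustrate why persisted queries pair so well with HTTP caching, here is a minimal sketch, assuming a JavaScript client; the endpoint URL and the max-age value are hypothetical, for illustration only. Because the query text lives on the server and the client addresses it with a plain GET URL, the response can be cached like any other HTTP resource.

// A persisted query is requested via GET, with no query body sent by the client,
// so the browser, a CDN, and the server can all apply standard HTTP caching.
// The URL and the max-age value are illustrative assumptions, not the plugin's defaults.
async function fetchLatestPosts() {
  const response = await fetch('https://example.com/graphql-query/latest-posts/');
  console.log(response.headers.get('cache-control')); // e.g. "max-age=3600" if caching is configured
  const { data } = await response.json();
  return data;
}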

These reasons make GraphQL API for WordPress better suited to handling live websites.

Use GraphQL API If: Exposing Different Data for Different Users or Applications

WordPress is a versatile content management system, able to manage content for multiple applications and accessible to different types of users.

Depending on the context, we might need our GraphQL APIs to expose different data, such as:

  • expose certain data to paid users but not to unpaid users,
  • expose certain data to the mobile app but not to the website.

To expose different data, we need to provide different versions of the GraphQL schema.

WPGraphQL allows us to modify the schema (for instance, we can remove a registered field). But the process is not straightforward: Schema modifications must be coded, and it’s not easy to understand who is accessing what and where (for instance, all schemas would still be available under the single endpoint, /graphql).

In contrast, GraphQL API for WordPress natively supports this use case: It offers custom endpoints, which can expose different data for different contexts, such as:

  • /graphql/mobile-app and /graphql/website,
  • /graphql/pro-users and /graphql/regular-users.

Each custom endpoint is configured via access control lists, to provide granular user access field by field, as well as a public and private API mode to determine whether the schema’s meta data is available to everyone or only to authorized users.

These features directly integrate with the WordPress editor (i.e. Gutenberg). So, creating the different schemas is done visually, similar to creating a blog post. This means that everyone can produce custom GraphQL schemas, not only developers.

GraphQL API for WordPress provides, I believe, a natural solution for this use case.

Use GraphQL API If: Interacting With External Services

GraphQL is not merely an API for fetching and posting data. As important (though often neglected), it can also process and alter the data — for instance, by feeding it to some external service, such as sending text to a third-party API to fix grammar errors or uploading an image to a content delivery network.

Now, what’s the best way for GraphQL to communicate with external services? In my opinion, this is best accomplished through directives, applied when either creating or retrieving the data (not unlike how WordPress filters operate).

I don’t know how well WPGraphQL interacts with external services, because its documentation doesn’t mention it, and the code base does not offer an example of any directive or document how to create one.

In contrast, GraphQL API for WordPress has robust support for directives. Every directive in a query is executed only once in total (as opposed to once per field and/or object). This capability enables very efficient communication with external APIs, and it integrates the GraphQL API within a cloud of services.

For instance, the query sketched below demonstrates a call to the Google Translate API via a @translate directive, to translate the titles and excerpts of many posts from English to Spanish. All fields for all posts are translated together, in a single call.
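
Here is a sketch of what posting such a query from a JavaScript client might look like. The root field name and the overall query shape are assumptions for illustration; only the @translate directive and its from/to arguments come from the example described above.

// Illustrative sketch: the posts, title, and excerpt field names are assumed.
const query = `
  query {
    posts {
      title @translate(from: "en", to: "es")
      excerpt @translate(from: "en", to: "es")
    }
  }
`;

async function translatePosts() {
  const response = await fetch('https://example.com/graphql/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  return data; // all titles and excerpts arrive translated, via a single Google Translate call
}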

GraphQL API for WordPress is a natural choice for this use case.

Note: As a matter of fact, the engine on which GraphQL API for WordPress is based, GraphQL by PoP, was specifically designed to provide advanced data-manipulation capabilities. That is one of its distinct characteristics. For an extreme example of what it can achieve, check out the guide on “Sending a Localized Newsletter, User by User”.

Use WPGraphQL If: You Want a Support Community

Jason Bahl has done a superb job of rallying a community around WPGraphQL. As a result, if you need to troubleshoot your GraphQL API, you’ll likely find someone who can help you out.

In my case, I’m still striving to create a community of users around GraphQL API for WordPress, and it’s certainly nowhere near that of WPGraphQL.

Use GraphQL API If: You Like Innovation

I call GraphQL API for WordPress a “forward-looking” GraphQL server. The reason is that I often browse the list of requests for the GraphQL specification and implement some of them well ahead of time (especially those that I feel some affinity for or that I can support with little effort).

As of today, GraphQL API for WordPress supports several innovative features (such as multiple query execution and schema namespacing), offered as opt-in, and there are plans for a few more.

Use WPGraphQL If: You Need a Complete Schema

WPGraphQL has completely mapped the WordPress data model, including:

  • posts and pages,
  • custom post types,
  • categories and tags,
  • custom taxonomies,
  • media,
  • menus,
  • settings,
  • users,
  • comments,
  • plugins,
  • themes,
  • widgets.

GraphQL API for WordPress is progressively mapping the data model with each new release. As of today, the list includes:

  • posts and pages,
  • custom post types,
  • categories and tags,
  • custom taxonomies,
  • media,
  • menus,
  • settings,
  • users,
  • comments.

So, if you need to fetch data from a plugin, theme, or widget, currently only WPGraphQL does the job.

Use WPGraphQL If: You Need Extensions

WPGraphQL offers extensions for many plugins, including Advanced Custom Fields, WooCommerce, Yoast, and Gravity Forms.

GraphQL API for WordPress offers an extension for Events Manager, and it will keep adding more after the release of version 1.0 of the plugin.

Use Either If: Creating Blocks for the WordPress Editor

Both WPGraphQL and GraphQL API for WordPress are currently working on integrating GraphQL with Gutenberg.

Jason Bahl has described three approaches by which this integration can take place. However, because all of them have issues, he is advocating for the introduction of a server-side registry to WordPress, to enable identification of the different Gutenberg blocks for the GraphQL schema.

GraphQL API for WordPress also has an approach for integrating with Gutenberg, based on the “create once, publish everywhere” strategy. It extracts block data from the stored content, and it uses a single Block type to represent all blocks. This approach could avoid the need for the proposed server-side registry.

WPGraphQL’s solution can be considered tentative, because it will depend on the community accepting the use of a server-side registry, and we don’t know if or when that will happen.

For GraphQL API for WordPress, the solution will depend entirely on itself, and it’s indeed already a work in progress.

Because it has a higher chance of producing a working solution soon, I’d be inclined to recommend GraphQL API for WordPress. However, let’s wait for the solution to be fully implemented (in a few weeks, according to the plan) to make sure it works as intended, and then I will update my recommendation.

Use GraphQL API If: Distributing Blocks Via a Plugin

I came to a realization: Not many plugins (if any) seem to be using GraphQL in WordPress.

Don’t get me wrong: WPGraphQL has surpassed 10,000 installations. But I believe that those are mostly installations to power Gatsby or one of the Next.js-based headless frameworks.

Similarly, WPGraphQL has many extensions, as I described earlier. But those extensions are just that: extensions. They are not standalone plugins.

For instance, the WPGraphQL for WooCommerce extension depends on both the WPGraphQL and WooCommerce plugins. If either of them is not installed, then the extension will not work, and that’s OK. But WooCommerce doesn’t have the choice of relying on WPGraphQL in order to work; hence, there will be no GraphQL in the WooCommerce plugin.

My understanding is that there are no plugins that use GraphQL in order to run functionality for WordPress itself or, specifically, to power their Gutenberg blocks.

The reason is simple: Neither WPGraphQL nor GraphQL API for WordPress are part of WordPress’ core. Thus, it is not possible to rely on GraphQL in the way that plugins can rely on WordPress’ REST API. As a result, plugins that implement Gutenberg blocks may only use REST to fetch data for their blocks, not GraphQL.

Seemingly, the solution is to wait for a GraphQL solution (most likely WPGraphQL) to be added to WordPress core. But who knows how long that will take? Six months? A year? Two years? Longer?

We know that WPGraphQL is being considered for WordPress’ core because Matt Mullenweg has hinted at it. But so many things must happen before then: bumping the minimum PHP version to 7.1 (required by both WPGraphQL and GraphQL API for WordPress), as well as having a clear decoupling, understanding, and roadmap for what functionality GraphQL will power.

(Full site editing, currently under development, is based on REST. What about the next major feature, multilingual blocks, to be addressed in Gutenberg’s phase 4? If not that, then which feature will it be?)

Having explained the problem, let’s consider a potential solution — one that doesn’t need to wait!

A few days ago, I had another realization: From GraphQL API for WordPress’ code base, I can produce a smaller version, containing only the GraphQL engine and nothing else (no UI, no custom endpoints, no HTTP caching, no access control, no nothing). And this version can be distributed as a Composer dependency, so that plugins can install it to power their own blocks.

The key to this approach is that this component must be of specific use to the plugin, not to be shared with anybody else. Otherwise, two plugins both referencing this component might modify the schema in such a way that they override each other.

Luckily, I recently solved scoping GraphQL API for WordPress. So, I know that I’m able to fully scope it, producing a version that will not conflict with any other code on the website.

That means that it will work for any combination of events:

  • If the plugin containing the component is the only one using it;
  • If GraphQL API for WordPress is also installed on the same website;
  • If another plugin that also embeds this component is installed on the website;
  • If two plugins that embed the component refer to the same version of the component or to different ones.

In each situation, the plugin will have its own self-contained, private GraphQL engine that it can fully rely on to power its Gutenberg blocks (and we need not fear any conflict).

This component, to be called the Private GraphQL API, should be ready in a few weeks. (I have already started working on it.)

Hence, my recommendation is that, if you want to use GraphQL to power Gutenberg blocks in your plugin, please wait a few weeks, and then check out GraphQL API for WordPress’ younger sibling, the Private GraphQL API.

Conclusion

Even though I do have skin in the game, I think I’ve managed to write an article that is mostly objective.

I have been honest in stating why and when you need to use WPGraphQL. Similarly, I have been honest in explaining why GraphQL API for WordPress appears to be better than WPGraphQL for several use cases.

In general terms, we can summarize as follows:

  • Go static with WPGraphQL, or go live with GraphQL API for WordPress.
  • Play it safe with WPGraphQL, or invest (for a potentially worthy payoff) in GraphQL API for WordPress.

On a final note, I wish the Next.js frameworks were re-architected to follow the same approach used by Frontity: accessing an interface to fetch the data they need, instead of a direct implementation of one particular solution (currently WPGraphQL). If that happened, developers could choose which underlying server to use (whether WPGraphQL, GraphQL API for WordPress, or some other solution introduced in the future), based on their needs, from project to project.

16 Tools to Create an Online Course

An online course can generate revenue, grow an audience, and establish expertise. Given the range of helpful tools, it’s relatively easy to create, distribute, and monetize a course.

Here’s a list of tools to create an online course. There are all-in-one platforms to build and manage a course, and tools to create course materials, produce lesson videos, market and distribute content, and automate the instruction.

Tools to Create a Course

Teachable is an all-in-one platform to create an online course. Engage and manage students with quizzes, completion certificates, and compliance controls. Run one-on-one sessions with milestones, call hosting, and tasks. Offer coupons and advanced pricing options, including subscriptions, memberships, one-time payments, bundles, and more. All paid plans include unlimited video bandwidth, unlimited courses, and unlimited students. Price: Plans start at $29 per month.

Thinkific lets you create, market, and sell online courses. Easily upload videos, build quizzes, and organize all learning content with a drag-and-drop builder. Set pricing, schedule lessons, and automate content to curate the learning experience. Use a theme to launch a course site easily, or connect your courses to an existing site for a seamless brand experience. Additional features include completion tracking, automated progress emails, course discussions, and more. Price: Plans start at $49 per month.

Kajabi is a platform to build, market, and sell an online course, membership site, or coaching program. Use the product generator to create a course, or start from scratch. Customize pricing, delivery, and packaging. Create an online community for your customers. Set up an integrated website, create and customize emails with video and timers, run automated marketing campaigns, and get performance metrics. Price: Plans start at $119 per month.

Zippy Courses is an all-in-one platform to build and sell your online courses. Create lessons, and then rearrange, edit, and update your course at any time. Offer a course with multiple tiers and sell it at different prices. Attach a “launch window” to set when your course is open for enrollment. Release your course content at once, or drip it over time. Price: Plans start at $99 per month.

Screenflow is a screen recording and video editing and sharing tool to create courses. Record from multiple screens at once, or use retina displays. Access over 500,000 unique media clips, record from your iPhone or iPad, and add transitions and effects. Animate graphics, titles, and logos with built-in video and text animations. Directly publish to hosting sites. Price: Starts at $129.

Camtasia is another screen recorder and video editor to record and create professional-looking videos. Start with a template, or record your screen and then add effects. Instantly access your most-used tools. Easily modify your video with the drag-and-drop editor. Add quizzes and interactivity to encourage and measure learning. Price: Starts at $249.99.

AudioJungle, part of Envato Market, sells royalty-free stock music and sound effects. Purchase music and sounds for your course content as well as elements to strengthen your brand, such as audio logos and musical idents. Price: Purchase assets individually or access unlimited downloads through Envato Elements for $16.50 per month.

Vimeo is a platform to host, manage, and share videos, including courses. Set embed permissions, send private links, and lock videos or entire albums with a password. Customize the player with your logo, colors, speed controls, and more. Price: Hosting plans start at $7 per month.

Zoom provides multiple ways to connect with your course attendees. Run meetings, conference rooms with video, and full-featured webinars. Price: Basic is free. Premium plans start at $149.90 per year.

FormSwift is a tool to generate, edit, and collaborate on documents and forms. Create and modify your course materials. Use customizable lesson plan templates. Choose from a library of roughly 500 document templates and forms or upload your own documents and edit them with FormSwift’s tools. Price: Plans start at $39.95.

Canva is an online design and publishing tool that can be used to create a wide range of online course documents, including lessons and guides, infographics, presentations for video recordings, workbooks, and completion certificates. Pro version contains a brand kit to upload your own logos and fonts. Price: Basic is free. Pro is $119.99 per year.

PowerPoint is a tool to create branded presentations for lessons, particularly for screen recordings. With Presenter Coach, practice your delivery and get recommendations on pacing, word choice, and more through the power of artificial intelligence. PowerPoint is part of the Microsoft 365 suite. Price: Microsoft 365 is $69.99 per year.

Mailchimp is an email marketing platform that can help you connect with online course participants, grow your audience, increase engagement, and automate your marketing and course distribution. Price: Basic is free. Premium plans start at $9.99 per month.

Zapier is an automation tool to sync applications. Create workflow “zaps” to automate course tasks and apps, such as subscribing new participants to Mailchimp. Each time it runs, the zap automatically sends information from one app to another. Price: Basic is free. Premium plans start at $19.99 per month.

Logitech, the tech hardware producer, makes affordable webcams that capture high-definition video. Logitech’s new StreamCam records in full HD 1080p at 60 frames per second, with AI-enabled facial tracking for a crisp image that’s always in focus. The C930e is an HD 1080p webcam that delivers sharp video in any environment, including low-light and harshly backlit settings. Price: StreamCam is $169.99. C930e is $129.99.

Blue Yeti produces professional microphones for capturing high-resolution audio directly to your computer. The Yeti Pro is a USB mic with an XLR cable to connect to professional studio gear. The Yeti Nano is a popular microphone that delivers 24-bit audio for podcasting and streaming. Price: Yeti Pro is $249.99. Yeti Nano is $99.99.

An In-Depth Guide To Measuring Core Web Vitals

Google has announced that from 1st May, they will start to consider “Page Experience” as part of Search ranking, as measured by a set of metrics called Core Web Vitals. That date is approaching quickly and I’m sure lots of us are being asked to ensure we are passing our Core Web Vitals, but how can you know if you are?

Answering that question is actually more difficult than you might presume, and while lots of tools now expose these Core Web Vitals, there are many important concepts and subtleties to understand. Even the Google tools, like PageSpeed Insights and the Core Web Vitals report in Google Search Console, seem to give confusing information.

Why is that and how can you be sure that your fixes really have worked? How can you get an accurate picture of the Core Web Vitals for your site? In this post, I’m going to attempt to explain a bit more about what’s going on here and explain some of the nuances and misunderstandings of these tools.

What Are The Core Web Vitals?

The Core Web Vitals are a set of three metrics designed to measure the “core” experience of whether a website feels fast or slow to users, and therefore whether it gives a good experience.

Web pages will need to be within the green measurements for all three Core Web Vitals to benefit from any ranking boost.

1. Largest Contentful Paint (LCP)

This metric is probably the easiest of these to understand: it measures how quickly the largest item is drawn on the page, which is probably the piece of content the user is interested in. This could be a banner image, a piece of text, or whatever. The fact that it’s the largest contentful element on the page is a good indicator that it’s the most important piece. LCP is relatively new; we used to measure the similarly named First Contentful Paint (FCP), but LCP has come to be seen as a better metric for when the content the visitor likely wants to see is drawn.

LCP is supposed to measure loading performance and is a good proxy for all the old metrics we in the performance community used to use (i.e. Time to First Byte (TTFB), DOM Content Loaded, Start Render, Speed Index) — but from the experience of the user. It doesn’t cover all of the information covered by those metrics but is a simpler, single metric that attempts to give a good indication of page load.
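
If you want to see the raw values that tools report, LCP is exposed in Chromium-based browsers through the PerformanceObserver API. A minimal sketch (the last entry emitted before the user interacts with the page is the one that counts):

// Observe 'largest-contentful-paint' entries; the most recent one is the current LCP candidate.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log('LCP candidate (ms):', lastEntry.startTime, lastEntry.element);
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });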

2. First Input Delay (FID)

This second metric measures the time between when the user interacts with a page, clicking on a link or a button for example, and when the browser processes that click. It’s there to measure the interactivity of a page. If all the content is loaded, but the page is unresponsive, then it’s a frustrating experience for the user.

An important point is that this metric cannot be simulated as it really depends on when a user actually clicks or otherwise interacts with a page and then how long that takes to be actioned. Total Blocking Time (TBT) is a good proxy for FID when using a testing tool without any direct user interaction, but also keep an eye on Time to Interactive (TTI) when looking at FID.
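
In the browser, FID comes from the 'first-input' performance entry: it is the gap between when the input happened and when the browser could start processing its event handlers. A minimal sketch:

// FID is the delay between the first input's timestamp and the moment the browser
// could begin running its event handlers.
const fidObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    const fid = entry.processingStart - entry.startTime;
    console.log('FID (ms):', fid, 'from a', entry.name, 'event');
  }
});
fidObserver.observe({ type: 'first-input', buffered: true });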

3. Cumulative Layout Shift (CLS)

A very interesting metric, quite unlike other metrics that have come before, for a number of reasons. It is designed to measure the visual stability of the page: basically, how much it jumps around as new content slots into place. I’m sure we’ve all clicked on an article, started reading, and then had the text jump around as images, advertisements, and other content are loaded.

This is quite jarring and annoying for users so best to minimize it. Worse still is when that button you were about to click suddenly moves and you click another button instead! CLS attempts to account for these layout shifts.
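
CLS is built up from individual 'layout-shift' entries, excluding shifts that happen right after user input. A minimal sketch of the accumulation (note that the evolving CLS definition discussed later in this article adds windowing on top of this simple sum):

// Sum layout-shift values that were not caused by recent user input.
let clsValue = 0;
const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) {
      clsValue += entry.value;
    }
  }
  console.log('Current CLS:', clsValue);
});
clsObserver.observe({ type: 'layout-shift', buffered: true });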

Lab Versus RUM

One of the key points to understand about Core Web Vitals is that they are based on field metrics or Real User Metrics (RUM). Google uses anonymized data from Chrome users to feed these metrics back and makes them available in the Chrome User Experience Report (CrUX). That data is what Google uses to measure the three metrics for the search rankings. CrUX data is available in a number of tools, including Google Search Console for your site.

The fact that RUM data is used is an important distinction, because some of these metrics (FID excepted) are available in synthetic or “lab-based” web performance tools like Lighthouse that have been the staple of web performance monitoring for many in the past. These tools run page loads on simulated networks and devices and then tell you what the metrics were for that test run.

So if you run Lighthouse on your high-powered developer machine and get great scores, that may not be reflective of what the users experience in the real world, and so what Google will use to measure your website user experience.

LCP is going to be very dependent on network conditions and the processing power of the devices being used (and many of your users are likely using lower-powered devices than you realize!). A counterpoint, however, is that, for many Western sites at least, our mobiles are perhaps not quite as low-powered as tools such as Lighthouse in mobile mode suggest, as these are quite heavily throttled. So you may well notice that your field data on mobile is better than testing with this suggests (there are some discussions on changing the Lighthouse mobile settings).

Similarly, FID is often dependent on processor speed and how the device can handle all this content we’re sending to it — be it images to process, elements to layout on the page and, of course, all that JavaScript we love to send down to the browser to churn through.

CLS is, in theory, more easily measured in tools as it’s less susceptible to network and hardware variations, so you would think it is not as subject to the differences between lab and RUM — except for a few important considerations that may not initially be obvious:

  • It is measured throughout the life of the page and not just for page load like typical tools do, which we’ll explore more later in this article. This causes a lot of confusion when lab-simulated page loads have a very low CLS, but the field CLS score is much higher, due to CLS caused by scrolling or other changes after the initial load that testing tools typically measure.
  • It can depend on the size of the browser window — typically tools like PageSpeed Insights, measure mobile and desktop, but different mobiles have different screen sizes, and desktops are often much larger than these tools set (Web Page Test recently increased their default screen size to try to more accurately reflect usage).
  • Different users see different things on web pages. Cookie banners, customized content like promotions, Adblockers, A/B tests to name but a few items that might be different, all impact what content is drawn and so what CLS users may experience.
  • It is still evolving and the Chrome team has been busy fixing “invisible” shifts and the like that should not count towards the CLS. Bigger changes to how CLS is actually measured are also in progress. This means you can see different CLS values depending on which version of Chrome is being run.

Using the same name for the metrics in lab-based testing tools, when they may not be accurate reflections of real-life versions, is confusing, and some are suggesting that we should rename some or all of these metrics in Lighthouse to distinguish the simulated metrics from the real-world RUM metrics that power the Google rankings.

Previous Web Performance Metrics

Another point of confusion is that these metrics are new and different from the metrics we traditionally used in the past to measure web performance and that are surfaced by some of those tools, like PageSpeed Insights — a free, online auditing tool. Simply enter the URL you want an audit on and click Analyze, and a few seconds later you will be presented with two tabs (one for mobile and one for desktop) that contain a wealth of information:

At the top is the big Lighthouse performance score out of 100. This has been well-known within web performance communities for a while now and is often quoted as a key performance metric to aim for and to summarise the complexities of many metrics into a simple, easy-to-understand number. That has some overlap with the Core Web Vitals goal, but it is not a summary of the three Core Web Vitals (even the lab-based versions), but of a wider variety of metrics.

Currently, six metrics make up the Lighthouse performance score — including some of the Core Web Vitals and some other metrics:

  • First Contentful Paint (FCP)
  • SpeedIndex (SI)
  • Largest Contentful Paint (LCP)
  • Time to Interactive (TTI)
  • Total Blocking Time (TBT)
  • Cumulative Layout Shift (CLS)

To add to the complexity, each of these six is weighted differently in the Performance score, and CLS, despite being one of the Core Web Vitals, currently makes up only 5% of the Lighthouse Performance score (I’ll bet money on this increasing soon after the next iteration of CLS is released). All this means you can get a very high, green-colored Lighthouse performance score and think your website is fine, and yet still fail to pass the Core Web Vitals threshold. You may therefore need to refocus your efforts now to look at these three core metrics.

Moving past the big green score in that screenshot, we come to the field data, and we get another point of confusion: First Contentful Paint is shown in this field data along with the other three Core Web Vitals, despite not being part of the Core Web Vitals, and, as in this example, I often find it is flagged as a warning even while the others all pass. (Perhaps the thresholds for this need a little adjusting?) Did FCP narrowly miss out on being a Core Web Vital, or does it maybe just look better balanced with four metrics? This field data section is important, and we’ll come back to it later.

If no field data is available for the particular URL being tested, then origin data for the whole domain will be shown instead (this is hidden by default when field data is available for that particular URL as shown above).

After the field data, we get the lab data, and we see the six metrics that make up the performance score at the top. If you click on the toggle on the top right you even get a bit more of a description of those metrics:

As you can see, the lab versions of LCP and CLS are included here and, as they are part of Core Web Vitals, they get a blue label to mark them as extra important. PageSpeed Insights also includes a helpful calculator link to see the impact of these scores on the total score at the top, and it allows you to adjust them to see what improving each metric will do to your score. But, as I say, the web performance score is likely to take a backseat for a bit while the Core Web Vitals bask in the glow of all the attention at the moment.

Lighthouse also performs nearly 50 other checks on extra Opportunities and Diagnostics. These don’t directly impact the score, nor Core Web Vitals, but can be used by web developers to improve the performance of their site. These are also surfaced in PageSpeed Insights below all the metrics so just out of shot for the above screenshot. Think of these as suggestions on how to improve performance, rather than specific issues that necessarily need to be addressed.

The diagnostics will show you the LCP element and the shifts that have contributed to your CLS score which are very useful pieces of information when optimizing for your Core Web Vitals!

So, while in the past web performance advocates may have heavily concentrated on Lighthouse scores and audits, I see this zeroing in on the three Core Web Vital metrics — at least for the next period while we get our heads around them. The other Lighthouse metrics, and the overall score, are still useful to optimize your site’s performance, but the Core Web Vitals are currently taking up most of the ink on new web performance and SEO blog posts.

Viewing The Core Web Vitals For Your Site

The easiest way to get a quick look at the Core Web Vitals for an individual URL, and for the whole origin, is to enter a URL into PageSpeed Insights as discussed above. However, to view how Google sees the Core Web Vitals for your whole site, get access to Google Search Console. This is a free product created by Google that allows you to understand how Google “sees” your whole site, including the Core Web Vitals for your site (though there are some — shall we say — “frustrations” with how often the data updates here).

Google Search Console has long been used by SEO teams, but with the input that site developers will need in order to address Core Web Vitals, development teams should really get access to this tool too if they haven’t already. To get access, you will need a Google account and then to verify your ownership of the site through various means (placing a file on your web server, adding a DNS record, etc.).

The Core Web Vitals report in Google Search Console gives you a summary of how your site is meeting the Core Web Vitals over the last 90 days:

Ideally, to be considered to be passing the Core Web Vitals completely, you want all your pages to be green, with no ambers or reds. While an amber is a good indicator that you’re close to passing, it’s really only greens that count, so don’t settle for second best. Whether you need all your pages passing or just your key ones is up to you, but often there will be similar issues on many pages, and fixing those for the site can help bring the number of URLs that don’t pass to a more manageable level where you can make those decisions.

Initially, Google is only going to apply Core Web Vitals ranking to mobile, but it’s surely only a matter of time before that rolls out to desktop too, so do not ignore desktop while you are in there reviewing and fixing your pages.

Clicking on one of the reports will give you more detail as to which of the Web Vitals are failing to be met, along with a sampling of affected URLs. Google Search Console groups URLs into buckets to, in theory, allow you to address similar pages together. You can then click on a URL to run a quick PageSpeed Insights performance audit on that particular URL (including showing the Core Web Vitals field data for that page, if it is available). You then fix the issues it highlights, rerun PageSpeed Insights to confirm the lab metrics are now correct, and then move on to the next page.

However, once you start looking at that Core Web Vitals report (obsessively for some of us!), you may have then been frustrated that this report doesn’t seem to update to reflect your hard work. It does seem to update every day as the graph is moving, yet it’s often barely changing even once you have released your fixes — why?

Similarly, the PageSpeed Insights field data is stubbornly still showing that URL and site as failing. What’s the story here then?

The Chrome User Experience Report (CrUX)

The reason that the Web Vitals are slow to update is that the field data is based on the last 28 days of data in the Chrome User Experience Report (CrUX), and, within that, only on the 75th percentile of that data. Using 28 days’ worth of data and the 75th percentile are good things, in that they remove variances and extremes to give a more accurate reflection of your site’s performance without causing a lot of noise that’s difficult to interpret.

Performance metrics are very susceptible to the network and devices so we need to smooth out this noise to get to the real story of how your website is performing for most users. However, the flip side to that is that they are frustratingly slow to update, creating a very slow feedback loop from correcting issues, until you see the results of that correction reflected there.

The 75th percentile (or p75) in particular is interesting, as is the delay it creates, and I don’t think it is well understood. For each of the Core Web Vitals, it looks at the value that 75% of the page views to your site experienced over those 28 days.

It is therefore the worst Core Web Vitals score among the best-performing 75% of your page views (or, put another way, the value that 75% of your page views are at or below). So it is not the average of that 75% of page views, but the single worst value in that set.

This creates a delay in reporting that a non-percentile-based rolling average would not. We’ll have to get a little mathsy here (I’ll try to keep it to a minimum), but let’s say, for simplicity’s sake, that everyone got an LCP of 10 seconds for the last month, and then you fixed it so that it now takes only 1 second. Let’s also say that you had exactly the same number of visitors every day and that they all scored this LCP.

In that overly-simplistic scenario, you would get the following metrics:

Day       LCP    28-day mean    28-day p75
Day 0     10     10             10
Day 1     1      9.68           10
Day 2     1      9.36           10
Day 3     1      9.04           10
Day 20    1      3.57           10
Day 21    1      3.25           10
Day 22    1      2.93           1
Day 23    1      2.61           1
Day 27    1      1.32           1
Day 28    1      1              1

So you can see that the drastic improvement doesn’t show in the CrUX score until day 22, when it suddenly jumps to the new, lower value (once more than 75% of the values in the 28-day window come from after the fix, and that is no coincidence!). Until then, over 25% of the measurements were gathered prior to the change and still carried the old value of 10, and hence your p75 value was stuck at 10.

Therefore it looks like you’ve made no progress at all for a long time, whereas a mean average (if it was used) would show a gradual tick down starting immediately and so progress could actually be seen. On the plus side, for the last few days, the mean is actually higher than the p75 value since p75, by definition, filters out the extremes completely.
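
To make the p75 mechanics concrete, here is a small sketch of how such a percentile could be computed over the 28-day window. Percentile definitions vary slightly between tools; this one matches the behaviour in the table above, flipping only once more than 75% of the values come from after the fix.

// One common way to take the 75th percentile of a set of measurements
// (lower is better for LCP, FID, and CLS).
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length * 0.75)];
}

// Day 21: 21 of the last 28 daily values are the new 1s LCP, 7 are still the old 10s.
console.log(p75([...Array(21).fill(1), ...Array(7).fill(10)])); // 10 (still stuck at the old value)
// Day 22: 22 new values, only 6 old ones left in the window.
console.log(p75([...Array(22).fill(1), ...Array(6).fill(10)])); // 1 (the improvement finally shows)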

The example in the table above, while massively simplified, explains one reason why many might see Web Vitals graphs like below, whereby one day all your pages cross a threshold and then are good (woohoo!):

This may be surprising to those expecting more gradual (and instantaneous) changes as you work through page issues, and as some pages are visited more often than others. On a related note, it is also not unusual to see your Search Console graph go through an amber period, depending on your fixes and how they impact the thresholds, before hitting that sweet, sweet green color:

Dave Smart ran a fascinating experiment, Tracking Changes in Search Console’s Report Core Web Vitals Data, in which he wanted to see how long it took the graphs to update. He didn’t take into account the 75th percentile portion of CrUX (which makes the lack of movement in some of his graphs make more sense), but it is still a fascinating real-life experiment on how this graph updates and well worth a read!

My own experience is that this 28-day p75 methodology doesn’t fully explain the lag in the Core Web Vitals report, and we’ll discuss some other potential reasons in a moment.

So is that the best you can do, make the fixes, then wait patiently, tapping your fingers, until CrUX deems your fixes as worthy and updates the graph in Search Console and PageSpeed Insights? And if it turns out your fixes were not good enough, then start the whole cycle again? In this day of instant feedback to satisfy our cravings, and tight feedback loops for developers to improve productivity, that is not very satisfying at all!

Well, there are some things you can do in the meantime to try to see whether any fixes will get the intended impact.

Delving Into The CrUX Data In More Detail

Since the core of the measurement is the CrUX data, let’s delve into that some more and see what else it can tell us. Going back to PageSpeed Insights, we can see it surfaces not only the p75 value for the site, but also the percentage of page views in each of the green, amber and red buckets shown in the colour bars beneath:

The above screenshot shows that CLS is failing the Core Web Vitals scoring, with a p75 value of 0.11, which is above the 0.1 passing limit. However, despite the color of the font being red, this is actually an amber ranking (red would be above 0.25). More interesting is that the green bar is at 73% — once that hits 75%, this page will be passing the Core Web Vitals.

While you cannot see the historical CrUX values, you can monitor this over time. If it goes to 74% tomorrow then we are trending in the right direction (subject to fluctuations!) and can hope to hit the magic 75% soon. For values that are further away, you can check periodically and see the increase, and then project out when you might start to show as passing.

CrUX is also available as a free API to get more precise figures for those percentages. Once you’ve signed up for an API key, you can call it with a curl command like this (replacing the API_KEY, formFactor, and URL as appropriate):

curl -s --request POST 'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY' \
  --header 'Accept: application/json' \
  --header 'Content-Type: application/json' \
  --data '{"formFactor":"PHONE","url":"https://www.example.com"}'

And you’ll get a JSON response, like this:

{ "record": { "key": { "formFactor": "PHONE", "url": "https://www.example.com/" }, "metrics": { "cumulative_layout_shift": { "histogram": [ { "start": "0.00", "end": "0.10", "density": 0.99959769344240312 }, { "start": "0.10", "end": "0.25", "density": 0.00040230655759688886 }, { "start": "0.25" } ], "percentiles": { "p75": "0.00" } }, "first_contentful_paint": { ... } } }, "urlNormalizationDetails": { "originalUrl": "https://www.example.com", "normalizedUrl": "https://www.example.com/" }
} 

Incidentally, if the above is scaring you a bit and you want a quicker way to get a look at this data for just one URL, then PageSpeed Insights also returns this precision, which you can see by opening DevTools, running your PageSpeed Insights test, and finding the XHR call it makes:

There is also an interactive CrUX API explorer which allows you to make sample queries of the CrUX API. Though, for regular calling of the API, getting a free key and using Curl or some other API tool is usually easier.

The API can also be called with an “origin”, instead of a URL, at which point it will give the summarised value of all page visits to that domain. PageSpeed Insights exposes this information, which can be useful if your URL has no CrUX information available to it, but Google Search Console does not. Google hasn’t stated (and is unlikely to!) exactly how the Core Web Vitals will impact ranking. Will the origin-level score impact rankings, or only individual URL scores? Or, like PageSpeed Insights, will Google fall back to origin-level scores when individual URL data does not exist? Difficult to know at the moment, and the only hint so far is this in the FAQ:

Q: How is a score calculated for a URL that was recently published, and hasn’t yet generated 28 days of data?

A: Similar to how Search Console reports page experience data, we can employ techniques like grouping pages that are similar and compute scores based on that aggregation. This is applicable to pages that receive little to no traffic, so small sites without field data don’t need to be worried.

The CrUX API can be called programmatically, and Rick Viscomi from the Google CrUX team created a Google Sheets monitor allowing you to bulk check URLs or origins, and even automatically track CrUX data over time if you want to closely monitor a number of URLs or origins. Simply clone the sheet, go into Tools → Script editor, and then enter a script property of CRUX_API_KEY with your key (this needs to be done in the legacy editor), and then run the script and it will call the CrUX API for the given URLs or origins and add rows to the bottom of the sheet with the data. This can then be run periodically or scheduled to run regularly.

I used this to check all the URLs for a site with a slow-updating Core Web Vitals report in Google Search Console, and it confirmed that CrUX had no data for a lot of the URLs and that most of the rest had passed, so again showing that the Google Search Console report is behind — even behind the CrUX data it is supposed to be based on. I’m not sure whether this is due to URLs that had previously failed but no longer have enough traffic to get updated CrUX data showing them passing, or whether it’s due to something else, but this proves to me that this report is definitely slow.

I suspect a large part of that is due to URLs without data in CrUX and Google Search doing its best to proxy a value for them. So this report is a great place to start to get an overview of your site, and one to monitor going forward, but not a great report for working through the issues where you want more immediate feedback.

For those that want to delve into CrUX even more, there are monthly tables of CrUX data available in BigQuery (at origin level only, so not for individual URLs) and Rick has also documented how you can create a CrUX dashboard based on that which can be a good way of monitoring your overall website performance over the months.

Other Information About The CrUX Data

So, with the above, you should have a good understanding of the CrUX dataset, why some of the tools using it seem to be slow and erratic to update, and also how to explore it a little more. But before we move on to alternatives to it, there are some more things to understand about CrUX to help you to really understand the data it is showing. So here’s a collection of other useful information I’ve gathered about CrUX in relation to Core Web Vitals.

CrUX is Chrome only. All those iOS users, and other browsers (desktop Safari, Firefox, Edge, etc.), not to mention older browsers (Internet Explorer — hurry up and fade out, would you!), are not having their user experience reflected in CrUX data, and so not in Google’s view of Core Web Vitals.

Now, Chrome’s usage is very high (though perhaps not among your site’s visitors?), and in most cases the performance issues it highlights will also affect those other browsers, but it is something to be aware of. And it does feel a little “icky”, to say the least, that Google’s monopoly position in search is now encouraging people to optimize for its browser. We’ll talk below about alternative solutions for this limited view.

The version of Chrome being used will also have an impact, as these metrics (CLS in particular) are still evolving and bugs are being found and fixed. This adds another dimension of complexity to understanding the data. There have been continual improvements to CLS in recent versions of Chrome, with a redefinition of CLS potentially landing in Chrome 92. Again, the fact that field data is being used means it might take some time for these changes to feed through to users and then into the CrUX data.

CrUX is only for users logged into Chrome, or to quote the actual definition:

“[CrUX is] aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled.”

Chrome User Experience Report, Google Developers

So if you’re looking for information on a site mostly visited from corporate networks, where such settings are turned off by central IT policies, then you might not be seeing much data — especially if those poor corporate users are still being forced to use Internet Explorer too!

CrUX includes all pages, including those not typically surfaced to Google Search such as “noindexed / robboted / logged in pages will be included” (though there are minimum thresholds for a URL and origin to be exposed in CrUX). Now those categories of pages will likely not be included in Google Search results and so the ranking impact on them is probably unimportant, but they still will be included in CrUX. The Core Web Vitals report in Google Search Console however seems to only show indexed URLs, so they will not show up there.

The origin figure shown in PageSpeed Insights and in the raw CrUX data will include those non-indexed, non-public pages, and as I mentioned above, we’re not sure of the impact of that. A site I work on has a large percentage of visitors visiting our logged-in pages, and while the public pages were very performant the logged-in pages were not, and that severely skewed the origin Web Vitals scores.

The CrUX API can be used to get the data of these logged-in URLs, but tools like PageSpeed Insights cannot (since they run an actual browser and so will be redirected to the login pages). Once we saw that CrUX data and realized the impact, we fixed those, and the origin figures have started to drop down but, as ever, it’s taking time to feed through.

Noindexed or logged-in pages are often “apps” as well, rather than collections of individual pages, so they may be using a Single Page Application methodology with one real URL but many different pages underneath that. This can impact CLS in particular, due to its being measured over the whole life of the page (though hopefully the upcoming changes to CLS will help with that).

As mentioned previously, the Core Web Vitals report in Google Search Console, while based on CrUX, is definitely not the same data. As I stated earlier, I suspect this is primarily due to Google Search Console attempting to estimate Web Vitals for URLs where no CrUX data exists. The sample URLs in this report are also out of whack with the CrUX data.

I’ve seen many instances of URLs that have been fixed, and the CrUX data in either PageSpeed Insights, or directly via the API, will show it passing Web Vitals, yet when you click on the red line in the Core Web Vitals report and get sample URLs these passing URLs will be included as if they are failing. I’m not sure what heuristics Google Search Console uses for this grouping, or how often it and sample URLs are updated, but it could do with updating more often in my opinion!

CrUX is based on page views. That means your most popular pages will have a large influence on your origin CrUX data. Some pages will drop in and out of CrUX each day as they meet these thresholds or not, and perhaps the origin data is coming into play for those? Also if you had a big campaign for a period and lots of visits, then made improvements but have fewer visits since, you will see a larger proportion of the older, worse data.

CrUX data is separated into Mobile, Desktop and Tablet, though only Mobile and Desktop are exposed in most tools. The CrUX API and BigQuery allow you to look at Tablet data if you really want to, but I’d advise concentrating on Mobile and then Desktop. Also, note that in some cases (like the CrUX API) it’s marked as PHONE rather than MOBILE, reflecting that the split is based on form factor rather than on whether the data was gathered over a mobile network.

All in all, a lot of these issues are simply side effects of gathering field (RUM) data, but all these nuances can be a lot to take in when you’ve been tasked with “fixing our Core Web Vitals”. The more you understand how these Core Web Vitals are gathered and processed, the more the data will make sense, and the more time you can spend on fixing the actual issues, rather than scratching your head wondering why it’s not reporting what you think it should be.

Getting Faster Feedback

OK, so by now you should have a good handle on how the Core Web Vitals are collected and exposed through the various tools, but that still leaves us with the issue of how we can get better and quicker feedback. Waiting 21–28 days to see the impact in CrUX data — only to realize your fixes weren’t sufficient — is way too slow. So while some of the tips above can be used to see if CrUX is trending in the right direction, it’s still not ideal. The only solution, therefore, is to look beyond CrUX in order to replicate what it’s doing, but expose the data faster.

There are a number of great commercial RUM products on the market that measure the user performance of your site and expose the data in dashboards or APIs, allowing you to query it in much more depth and at a much more granular frequency than CrUX allows. I’ll not give any product names here, to avoid accusations of favoritism or offending anyone I leave off! As the Core Web Vitals are exposed as browser APIs (by Chromium-based browsers at least; other browsers like Safari and Firefox do not yet expose some of the newer metrics like LCP and CLS), they should, in theory, report the same data as is exposed to CrUX and therefore to Google, with the same caveats as above in mind!
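To give a flavor of what those products (and the web-vitals library below) are built on, here is a simplified sketch of observing the underlying browser APIs directly with PerformanceObserver. The aggregation here, such as simply summing layout shifts into a running CLS figure, is deliberately naive compared to what a real RUM library does:

// Largest Contentful Paint: the latest candidate entry is the current LCP.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log('LCP candidate (ms):', lastEntry.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Cumulative Layout Shift: sum shifts that happen without recent user input.
// (Real implementations also handle session windows, which is roughly what
// the upcoming CLS change is about.)
let clsValue = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      clsValue += entry.value;
    }
  }
  console.log('Running CLS:', clsValue);
}).observe({ type: 'layout-shift', buffered: true });

// First Input Delay: processingStart minus startTime of the first input entry.
new PerformanceObserver((list) => {
  const firstInput = list.getEntries()[0];
  console.log('FID (ms):', firstInput.processingStart - firstInput.startTime);
}).observe({ type: 'first-input', buffered: true });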

For those without access to these RUM products, Google has also made available a Web Vitals JavaScript library, which allows you to get access to these metrics and report them back as you see fit. This can be used to send this data back to Google Analytics by running the following script on your web pages:

<script type="module">
  import {getFCP, getLCP, getCLS, getTTFB, getFID} from 'https://unpkg.com/web-vitals?module';

  function sendWebVitals() {
    function sendWebVitalsGAEvents({name, delta, id, entries}) {
      if ("function" == typeof ga) {
        ga('send', 'event', {
          eventCategory: 'Web Vitals',
          eventAction: name,
          // The id value will be unique to the current page load. When sending
          // multiple values from the same page (e.g. for CLS), Google Analytics can
          // compute a total by grouping on this ID (note: requires eventLabel to
          // be a dimension in your report).
          eventLabel: id,
          // Google Analytics metrics must be integers, so the value is rounded.
          // For CLS the value is first multiplied by 1000 for greater precision
          // (note: increase the multiplier for greater precision if needed).
          eventValue: Math.round(name === 'CLS' ? delta * 1000 : delta),
          // Use a non-interaction event to avoid affecting bounce rate.
          nonInteraction: true,
          // Use sendBeacon() if the browser supports it.
          transport: 'beacon'
        });
      }
    }

    // Register function to send Core Web Vitals and other metrics as they become available
    getFCP(sendWebVitalsGAEvents);
    getLCP(sendWebVitalsGAEvents);
    getCLS(sendWebVitalsGAEvents);
    getTTFB(sendWebVitalsGAEvents);
    getFID(sendWebVitalsGAEvents);
  }

  sendWebVitals();
</script>

Now, I realize the irony of adding another script to measure the impact of your website, which is probably slow in part because of too much JavaScript, but as you can see above, the script is quite small, and the library it loads is only a further 1.7 kB compressed (4.0 kB uncompressed). Additionally, as a module it will be ignored by older browsers that don’t understand ES modules (and which wouldn’t support the Web Vitals APIs anyway), and its execution is deferred, so it shouldn’t impact your site too much. The data it can gather can be invaluable in helping you investigate your Core Web Vitals in a more real-time manner than the CrUX data allows.

The script registers a function to send a Google Analytics event when each metric becomes available. For FCP and TTFB this is as soon as the page has loaded, for FID it is after the first interaction from the user, while for LCP and CLS it is when the page is navigated away from or backgrounded, once the final LCP and CLS values are definitely known. You can use developer tools to see these beacons being sent for that page, whereas the CrUX collection happens in the background without being exposed here.
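If you just want to sanity-check locally that these metrics are firing before wiring anything up to analytics, a quick variation (using the same web-vitals library) is to log them straight to the console instead:

<script type="module">
  import {getFCP, getLCP, getCLS, getTTFB, getFID} from 'https://unpkg.com/web-vitals?module';

  // Log each metric object to the console as it becomes available;
  // interact with the page (for FID) or tab away (for LCP and CLS)
  // to trigger the later ones.
  getFCP(console.log);
  getTTFB(console.log);
  getFID(console.log);
  getLCP(console.log);
  getCLS(console.log);
</script>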

The benefit of putting this data in a tool like Google Analytics is that you can slice and dice the data based on all the other information you have in there, including form factor (desktop or mobile), new or returning visitors, funnel conversions, Chrome version, and so on. And, as it’s RUM data, it will be affected by real usage: users on faster or slower devices will report back faster or slower values, rather than a developer testing on their high-spec machine and declaring it fine.

At the same time, you need to bear in mind that the reason the CrUX data is aggregated over 28 days, and only looks at the 75th percentile, is to remove the variance. Having access to the raw data allows you to see more granular data, but it also means you’re more susceptible to extreme variations. Still, as long as you keep that in mind, having earlier access to the data can be very valuable.
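As a rough illustration of what that 75th percentile is doing for you, here is a simple nearest-rank calculation you could run over your own raw values (not necessarily the exact interpolation method CrUX uses):

// Nearest-rank 75th percentile of a set of raw metric values.
function percentile75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[index];
}

// e.g. raw LCP samples (in milliseconds) collected from your own RUM beacons:
// one slow outlier barely moves the p75, whereas it would wreck an average.
console.log(percentile75([980, 1200, 1500, 1900, 2200, 2600, 3400, 14000])); // 2600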

Google’s Phil Walton created a Web Vitals dashboard that can be pointed at your Google Analytics account to download this data, calculate the 75th percentile (so that helps with the variations!) and then display your Core Web Vitals scores, a histogram of information, a time series of the data, and your top five visited pages with the top elements causing those scores.

Using this dashboard, you can filter on individual pages (using a ga:pagePath==/page/path/index.html filter) and, within a day of releasing your fix, see a very satisfying graph like this, confirming the fix has been successful so you can move on to your next challenge:

With a little more JavaScript, you can also expose extra information (like what the LCP element is, or which element is causing the most CLS) in a Google Analytics Custom Dimension. Phil wrote an excellent Debug Web Vitals in the field post on this, which basically shows how you can enhance the above script to send this debug information as well, as shown in this version of the script.
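To give an idea of the shape of that enhancement, here is an illustrative sketch rather than Phil’s actual code: pull an identifier for the responsible element out of the metric’s entries and attach it as a Custom Dimension. The dimension1 field and the getDebugTarget helper are assumptions for this example (use whichever Custom Dimension index you have configured), and Phil’s script builds a much more precise CSS selector:

// Illustrative only: derive a rough identifier for the element behind the metric.
function getDebugTarget(name, entries) {
  if (name === 'LCP' && entries.length) {
    const element = entries[entries.length - 1].element;
    return element ? element.tagName.toLowerCase() + (element.id ? '#' + element.id : '') : '(unknown)';
  }
  if (name === 'CLS' && entries.length) {
    // Pick the largest single shift as the most likely culprit.
    const biggest = entries.reduce((a, b) => (a.value > b.value ? a : b));
    const source = biggest.sources && biggest.sources[0];
    return source && source.node ? source.node.tagName.toLowerCase() : '(unknown)';
  }
  return '(not set)';
}

function sendWebVitalsGAEvents({name, delta, id, entries}) {
  if ('function' == typeof ga) {
    ga('send', 'event', {
      eventCategory: 'Web Vitals',
      eventAction: name,
      eventLabel: id,
      eventValue: Math.round(name === 'CLS' ? delta * 1000 : delta),
      nonInteraction: true,
      transport: 'beacon',
      // Assumed mapping: Custom Dimension 1 holds the debug identifier.
      dimension1: getDebugTarget(name, entries)
    });
  }
}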

These dimensions can also be reported in the dashboard (using ga:dimension1 as the Debug dimension field, assuming this is being sent back in Google Analytics Custom Dimension 1 in the script), to get data like this showing the LCP element as seen by those browsers:

As I said previously, commercial RUM products will often expose this sort of data too (and more!), but for those just dipping their toe in the water and not ready for the financial commitment of those products, this at least offers a first dabble into RUM-based metrics and shows how useful they can be for getting that crucial faster feedback on the improvements you’re implementing. And if this whets your appetite for this information, then definitely look at the other RUM products out there to see how they can help you, too.

When looking at alternative measurements and RUM products, do remember to circle back round to what Google is seeing for your site, as it may well be different. It would be a shame to work hard on performance, yet not get all the ranking benefits of this at the same time! So keep an eye on those Search Console graphs to ensure you’re not missing anything.

Conclusion

The Core Web Vitals are an interesting set of key metrics looking to represent the user experience of browsing the web. As a keen web performance advocate, I welcome any push to improve the performance of sites and the ranking impact of these metrics has certainly created a great buzz in the web performance and SEO communities.

While the metrics themselves are very interesting, what’s perhaps more exciting is the use of CrUX data to measure these. This basically exposes RUM data to websites that have never even considered measuring site performance in the field in this way before. RUM data is what users are actually experiencing, in all their wild and varied setups, and there is no substitute for understanding how your website is really performing and being experienced by your users.

But the reason we’ve been so dependent on lab data for so long is because RUM data _is_ noisy. The steps CrUX takes to reduce this noise do help to give a more stable view, but at the cost of making it difficult to see recent changes.

Hopefully, this post goes some way to explaining the various ways of accessing the Core Web Vitals data for your website, and some of the limitations of each method. I also hope that it goes some way to explaining some of the data you’ve been struggling to understand, as well as suggesting some ways to work around those limitations.

Happy optimizing!