Making GraphQL Work In WordPress

Headless WordPress seems to be in vogue lately, with many new developments taking place in just the last few weeks. One of the reasons for the explosion in activity is the release of version 1.0 of WPGraphQL, a GraphQL server for WordPress.

WPGraphQL provides a GraphQL API: a way to fetch data from, and post data to, a WordPress website. It enables us to decouple the experience of managing our content, which is done via WordPress, from rendering the website, for which we can use the library of the framework of our choice (React, Vue.js, Gatsby, Next.js, or any other).

Until recently, WPGraphQL was the only GraphQL server for WordPress. But now another such plugin is available: GraphQL API for WordPress, authored by me.

These two plugins serve the same purpose: to provide a GraphQL API to a WordPress website. You may be wondering: Why another plugin when there’s already WPGraphQL? Do these two plugins do the same thing? Or are they for different situations?

Let me say this first: WPGraphQL works great. I didn’t build my plugin because of any problem with it.

I built GraphQL API for WordPress because I had been working on an engine to retrieve data efficiently, which happened to be very suitable for GraphQL. So, then I said to myself, “Why not?”, and I built it. (And also a couple of other reasons.)

The two plugins have different architectures, giving them different characteristics, which make particular tasks easier to achieve with one plugin or the other.

In this article, I’ll describe, from my own point of view but as objectively as possible, when WPGraphQL is the way to go and when GraphQL API for WordPress is a better choice.

Use WPGraphQL If: Using Gatsby

If you’re building a website using Gatsby, then there is only one choice: WPGraphQL.

The reason is that only WPGraphQL has the Gatsby source plugin for WordPress. In addition, WPGraphQL’s creator, Jason Bahl, was employed until recently by Gatsby, so we can fully trust that this plugin will suit Gatsby’s needs.

Gatsby receives all data from the WordPress website, and from then on, the logic of the application will be fully on Gatsby’s side, not on WordPress’. Hence, no additions to WPGraphQL (such as the potential additions of @stream or @defer directives) would make much of a difference.

WPGraphQL is already as good as Gatsby needs it to be.

Use WPGraphQL If: Using One of the New Headless Frameworks

As I mentioned, lately there has been a flurry of activity in the WordPress headless space, with several new frameworks and starter projects appearing, all of them based on Next.js.

If you need to use any of these new headless frameworks, then you will need to use WPGraphQL, because they have all been built on top of this plugin.

That’s a bit unfortunate: I’d really love for GraphQL API for WordPress to be able to power them too. But for that to happen, these frameworks would need to operate with GraphQL via an interface, so that we could swap GraphQL servers.

I’m somewhat hopeful that one of these frameworks will put such an interface in place. I asked about it in the Headless WordPress Framework discussion board and was told that it might be considered. I also asked in WebDevStudios’ Next.js WordPress Starter discussion board, but alas, my question was immediately deleted, without a response. (Not encouraging, is it?)

So WPGraphQL it is then, currently and for the foreseeable future.

Use Either (Or Neither) If: Using Frontity

Frontity is a React framework for WordPress. It enables you to build a React-based application that is managed in the back end via WordPress. Even creating blog posts using the WordPress editor is supported out of the box.

Frontity manages the state of the application, without leaking how the data was obtained. Even though it is based on REST by default, you can also power it via GraphQL by implementing the corresponding source plugin.

This is how Frontity is smart: The source plugin is an interface to communicate with the data provider. Currently, the only available source plugin is the one for the WordPress REST API. But anyone can implement a source plugin for either WPGraphQL or GraphQL API for WordPress. (This is the approach that I wish the Next.js-based frameworks replicated.)

Conclusion: Neither WPGraphQL nor the GraphQL API offers any advantage over the other for working with Frontity, and they both require some initial effort to plug them in.

Use WPGraphQL If: Creating a Static Site

In the first two sections, the conclusion was the same: Use WPGraphQL. But my response to this conclusion was different: While with Gatsby I had no regret, with Next.js I felt compelled to do something about it.

Why is that?

The difference is that, while Gatsby is purely a static site generator, Next.js can power both static and live websites.

I mentioned that WPGraphQL is already good enough for Gatsby. This statement can actually be broadened: WPGraphQL is already good enough for any static site generator. Once the static site generator gets the data from the WordPress website, it is pretty much settled with WordPress.

Even if GraphQL API for WordPress offers additional features, it will most likely not make a difference to the static site generator.

Hence, because WPGraphQL is already good enough, and it has completely mapped the GraphQL schema (which is still a work in progress for GraphQL API for WordPress), then WPGraphQL is the most suitable option, now and for the foreseeable future.

Use GraphQL API If: Using GraphQL in a Live (i.e. Non-Static) Website

Now, the situation above changes if we want GraphQL to fetch data from a live website, such as when powering a mobile app or plotting real-time data on a website (for instance, to display analytics) or combining both the static and live approaches on the same website.

For instance, let’s say we have built a simple static blog using one of the Next.js frameworks, and we want to allow users to add comments to blog posts. How should this task be handled?

We have two options: static and live (or dynamic). If we opt for static, then comments will be rendered together with the rest of the website. Then, whenever a comment is added, we must trigger a webhook to regenerate and redeploy the website.

This approach has a few inconveniences. The regeneration and redeployment process could take a few minutes, during which the new comment will not be available. In addition, if the website receives many comments a day, the static approach will require more server processing time, which could become costly (some hosting companies charge based on server time).

In this situation, it would make sense to render the website statically without comments, and then retrieve the comments from a live site and render them dynamically in the client.

For this, Next.js is recommended over Gatsby. It can better handle the static and live approaches, including supporting different outputs for users with different capabilities.

Back to the GraphQL discussion: Why do I recommend GraphQL API for WordPress when dealing with live data? I do because the GraphQL server can have a direct impact on the application, mainly in terms of speed and security.

For a purely static website, the WordPress website can be kept private (it might even live on the developer’s laptop), so it’s safe. And the user will not be waiting for a response from the server, so speed is not necessarily of critical importance.

For a live site, though, the GraphQL API will be made public, so data safety becomes an issue. We must make sure that no malicious actors can access it. In addition, the user will be waiting for a response, so speed becomes a critical consideration.

In this respect, GraphQL API for WordPress has a few advantages over WPGraphQL.

WPGraphQL does implement security measures, such as disabling introspection by default. But GraphQL API for WordPress goes further, by disabling the single endpoint by default (along with several other measures). This is possible because GraphQL API for WordPress offers persisted queries natively.

As for speed, persisted queries also make the API faster, because the response can then be cached via HTTP caching on several layers, including the client, content delivery network, and server.
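As a rough illustration (the endpoint slug below is hypothetical, not the plugin’s documented URL structure), a persisted query lives at its own URL and is requested with a plain GET, which is exactly what makes standard HTTP caching possible:

// Hypothetical persisted-query URL; the slug is an assumption for illustration.
// Because the request is a simple GET with no query body, the response can be
// cached by the browser, a CDN, or the server itself.
fetch('https://example.com/graphql/query/latest-posts/', { method: 'GET' })
  .then((response) => {
    // A Cache-Control header on the response is what lets each layer cache it.
    console.log(response.headers.get('cache-control'));
    return response.json();
  })
  .then((data) => console.log(data));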

These reasons make GraphQL API for WordPress better suited to handling live websites.

Use GraphQL API If: Exposing Different Data for Different Users or Applications

WordPress is a versatile content management system, able to manage content for multiple applications and accessible to different types of users.

Depending on the context, we might need our GraphQL APIs to expose different data, such as:

  • expose certain data to paid users but not to unpaid users,
  • expose certain data to the mobile app but not to the website.

To expose different data, we need to provide different versions of the GraphQL schema.

WPGraphQL allows us to modify the schema (for instance, we can remove a registered field). But the process is not straightforward: Schema modifications must be coded, and it’s not easy to understand who is accessing what and where (for instance, all schemas would still be available under the single endpoint, /graphql).

In contrast, GraphQL API for WordPress natively supports this use case: It offers custom endpoints, which can expose different data for different contexts, such as:

  • /graphql/mobile-app and /graphql/website,
  • /graphql/pro-users and /graphql/regular-users.

Each custom endpoint is configured via access control lists, to provide granular user access field by field, as well as a public and private API mode to determine whether the schema’s metadata is available to everyone or only to authorized users.

These features directly integrate with the WordPress editor (i.e. Gutenberg). So, creating the different schemas is done visually, similar to creating a blog post. This means that everyone can produce custom GraphQL schemas, not only developers.

GraphQL API for WordPress provides, I believe, a natural solution for this use case.

Use GraphQL API If: Interacting With External Services

GraphQL is not merely an API for fetching and posting data. As important (though often neglected), it can also process and alter the data — for instance, by feeding it to some external service, such as sending text to a third-party API to fix grammar errors or uploading an image to a content delivery network.

Now, what’s the best way for GraphQL to communicate with external services? In my opinion, this is best accomplished through directives, applied when either creating or retrieving the data (not unlike how WordPress filters operate).

I don’t know how well WPGraphQL interacts with external services, because its documentation doesn’t mention it, and the code base does not offer an example of any directive or document how to create one.

In contrast, GraphQL API for WordPress has robust support for directives. Every directive in a query is executed only once in total (as opposed to once per field and/or object). This capability enables very efficient communication with external APIs, and it integrates the GraphQL API within a cloud of services.

For instance, a query can call the Google Translate API via a @translate directive to translate the titles and excerpts of many posts from English to Spanish. All fields for all posts are translated together, in a single call.
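A rough sketch of what such a request might look like (the field names, endpoint URL, and overall query shape here are illustrative assumptions, not the plugin’s documented example):

// Illustrative sketch only: the endpoint URL and field names are assumptions.
// The @translate directive is applied per field, yet the plugin resolves all
// of them through a single call to the Google Translate API.
const query = `
  query {
    posts {
      title @translate(from: "en", to: "es")
      excerpt @translate(from: "en", to: "es")
    }
  }
`;

fetch('https://example.com/graphql/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
})
  .then((response) => response.json())
  .then((result) => console.log(result.data));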

GraphQL API for WordPress is a natural choice for this use case.

Note: As a matter of fact, the engine on which GraphQL API for WordPress is based, GraphQL by PoP, was specifically designed to provide advanced data-manipulation capabilities. That is one of its distinct characteristics. For an extreme example of what it can achieve, check out the guide on “Sending a Localized Newsletter, User by User”.

Use WPGraphQL If: You Want a Support Community

Jason Bahl has done a superb job of rallying a community around WPGraphQL. As a result, if you need to troubleshoot your GraphQL API, you’ll likely find someone who can help you out.

In my case, I’m still striving to create a community of users around GraphQL API for WordPress, and it’s certainly nowhere near that of WPGraphQL.

Use GraphQL API If: You Like Innovation

I call GraphQL API for WordPress a “forward-looking” GraphQL server. The reason is that I often browse the list of requests for the GraphQL specification and implement some of them well ahead of time (especially those that I feel some affinity for or that I can support with little effort).

As of today, GraphQL API for WordPress supports several innovative features (such as multiple query execution and schema namespacing), offered as opt-in, and there are plans for a few more.

Use WPGraphQL If: You Need a Complete Schema

WPGraphQL has completely mapped the WordPress data model, including:

  • posts and pages,
  • custom post types,
  • categories and tags,
  • custom taxonomies,
  • media,
  • menus,
  • settings,
  • users,
  • comments,
  • plugins,
  • themes,
  • widgets.

GraphQL API for WordPress is progressively mapping the data model with each new release. As of today, the list includes:

  • posts and pages,
  • custom post types,
  • categories and tags,
  • custom taxonomies,
  • media,
  • menus,
  • settings,
  • users,
  • comments.

So, if you need to fetch data from a plugin, theme, or widget, currently only WPGraphQL does the job.

Use WPGraphQL If: You Need Extensions

WPGraphQL offers extensions for many plugins, including Advanced Custom Fields, WooCommerce, Yoast, and Gravity Forms.

GraphQL API for WordPress offers an extension for Events Manager, and it will keep adding more after the release of version 1.0 of the plugin.

Use Either If: Creating Blocks for the WordPress Editor

Both WPGraphQL and GraphQL API for WordPress are currently working on integrating GraphQL with Gutenberg.

Jason Bahl has described three approaches by which this integration can take place. However, because all of them have issues, he is advocating for the introduction of a server-side registry to WordPress, to enable identification of the different Gutenberg blocks for the GraphQL schema.

GraphQL API for WordPress also has an approach for integrating with Gutenberg, based on the “create once, publish everywhere” strategy. It extracts block data from the stored content, and it uses a single Block type to represent all blocks. This approach could avoid the need for the proposed server-side registry.

WPGraphQL’s solution can be considered tentative, because it will depend on the community accepting the use of a server-side registry, and we don’t know if or when that will happen.

For GraphQL API for WordPress, the solution will depend entirely on itself, and it’s indeed already a work in progress.

Because it has a higher chance of producing a working solution soon, I’d be inclined to recommend GraphQL API for WordPress. However, let’s wait for the solution to be fully implemented (in a few weeks, according to the plan) to make sure it works as intended, and then I will update my recommendation.

Use GraphQL API If: Distributing Blocks Via a Plugin

I came to a realization: Not many plugins (if any) seem to be using GraphQL in WordPress.

Don’t get me wrong: WPGraphQL has surpassed 10,000 installations. But I believe that those are mostly installations to power Gatsby or Next.js (in order to run Gatsby sites or one of the headless frameworks).

Similarly, WPGraphQL has many extensions, as I described earlier. But those extensions are just that: extensions. They are not standalone plugins.

For instance, the WPGraphQL for WooCommerce extension depends on both the WPGraphQL and WooCommerce plugins. If either of them is not installed, then the extension will not work, and that’s OK. But WooCommerce doesn’t have the choice of relying on WPGraphQL in order to work; hence, there will be no GraphQL in the WooCommerce plugin.

My understanding is that there are no plugins that use GraphQL in order to run functionality for WordPress itself or, specifically, to power their Gutenberg blocks.

The reason is simple: Neither WPGraphQL nor GraphQL API for WordPress are part of WordPress’ core. Thus, it is not possible to rely on GraphQL in the way that plugins can rely on WordPress’ REST API. As a result, plugins that implement Gutenberg blocks may only use REST to fetch data for their blocks, not GraphQL.
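To make that concrete, here is a minimal sketch of how a block’s edit component fetches its data today (a hypothetical block, assuming the standard @wordpress/api-fetch and @wordpress/element packages): it goes through the REST API, because that is the only data layer a plugin can count on being present.

// Hypothetical block: fetch data over the REST API, the only data layer a
// plugin can rely on being available in every WordPress install.
import apiFetch from '@wordpress/api-fetch';
import { useState, useEffect } from '@wordpress/element';

export default function Edit() {
  const [posts, setPosts] = useState([]);

  useEffect(() => {
    // No GraphQL here: neither WPGraphQL nor GraphQL API for WordPress ships
    // with core, so the block cannot assume a GraphQL endpoint exists.
    apiFetch({ path: '/wp/v2/posts?per_page=5' }).then(setPosts);
  }, []);

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title.rendered}</li>
      ))}
    </ul>
  );
}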

Seemingly, the solution is to wait for a GraphQL solution (most likely WPGraphQL) to be added to WordPress core. But who knows how long that will take? Six months? A year? Two years? Longer?

We know that WPGraphQL is being considered for WordPress’ core because Matt Mullenweg has hinted at it. But so many things must happen before then: bumping the minimum PHP version to 7.1 (required by both WPGraphQL and GraphQL API for WordPress), as well as having a clear decoupling, understanding, and roadmap for what functionality GraphQL will power.

(Full site editing, currently under development, is based on REST. What about the next major feature, multilingual blocks, to be addressed in Gutenberg’s phase 4? If not that, then which feature will it be?)

Having explained the problem, let’s consider a potential solution — one that doesn’t need to wait!

A few days ago, I had another realization: From GraphQL API for WordPress’ code base, I can produce a smaller version, containing only the GraphQL engine and nothing else (no UI, no custom endpoints, no HTTP caching, no access control, no nothing). And this version can be distributed as a Composer dependency, so that plugins can install it to power their own blocks.

The key to this approach is that this component must be of specific use to the plugin, not to be shared with anybody else. Otherwise, two plugins both referencing this component might modify the schema in such a way that they override each other.

Luckily, I recently solved scoping GraphQL API for WordPress. So, I know that I’m able to fully scope it, producing a version that will not conflict with any other code on the website.

That means that it will work for any combination of events:

  • If the plugin containing the component is the only one using it;
  • If GraphQL API for WordPress is also installed on the same website;
  • If another plugin that also embeds this component is installed on the website;
  • If two plugins that embed the component refer to the same version of the component or to different ones.

In each situation, the plugin will have its own self-contained, private GraphQL engine that it can fully rely on to power its Gutenberg blocks (and we need not fear any conflict).

This component, to be called the Private GraphQL API, should be ready in a few weeks. (I have already started working on it.)

Hence, my recommendation is that, if you want to use GraphQL to power Gutenberg blocks in your plugin, please wait a few weeks, and then check out GraphQL API for WordPress’ younger sibling, the Private GraphQL API.

Conclusion

Even though I do have skin in the game, I think I’ve managed to write an article that is mostly objective.

I have been honest in stating why and when you need to use WPGraphQL. Similarly, I have been honest in explaining why GraphQL API for WordPress appears to be better than WPGraphQL for several use cases.

In general terms, we can summarize as follows:

  • Go static with WPGraphQL, or go live with GraphQL API for WordPress.
  • Play it safe with WPGraphQL, or invest (for a potentially worthy payoff) in GraphQL API for WordPress.

On a final note, I wish the Next.js frameworks were re-architected to follow the same approach as Frontity: accessing an interface to fetch the data they need, instead of relying on a direct implementation of one particular solution (currently WPGraphQL). If that happened, developers could choose which underlying server to use (whether WPGraphQL, GraphQL API for WordPress, or some other solution introduced in the future) based on their needs, from project to project.

Useful Links

16 Tools to Create an Online Course

An online course can generate revenue, grow an audience, and establish expertise. Given the range of helpful tools, it’s relatively easy to create, distribute, and monetize a course.

Here’s a list of tools to create an online course. There are all-in-one platforms to build and manage a course, and tools to create course materials, produce lesson videos, market and distribute content, and automate the instruction.

Tools to Create a Course

Teachable is an all-in-one platform to create an online course. Engage and manage students with quizzes, completion certificates, and compliance controls. Run one-on-one sessions with milestones, call hosting, and tasks. Offer coupons and advanced pricing options, including subscriptions, memberships, one-time payments, bundles, and more. All paid plans include unlimited video bandwidth, unlimited courses, and unlimited students. Price: Plans start at $29 per month.

Thinkific lets you create, market, and sell online courses. Easily upload videos, build quizzes, and organize all learning content with a drag-and-drop builder. Set pricing, schedule lessons, and automate content to curate the learning experience. Use a theme to launch a course site easily, or connect your courses to an existing site for a seamless brand experience. Additional features include completion tracking, automated progress emails, course discussions, and more. Price: Plans start at $49 per month.

Kajabi is a platform to build, market, and sell an online course, membership site, or coaching program. Use the product generator to create a course, or start from scratch. Customize pricing, delivery, and packaging. Create an online community for your customers. Set up an integrated website, create and customize emails with video and timers, run automated marketing campaigns, and get performance metrics. Price: Plans start at $119 per month.

Zippy Courses is an all-in-one platform to build and sell your online courses. Create lessons, and then rearrange, edit, and update your course at any time. Offer a course with multiple tiers and sell it at different prices. Attach a “launch window” to set when your course is open for enrollment. Release your course content at once, or drip it over time. Price: Plans start at $99 per month.

Screenflow is a screen recording and video editing and sharing tool to create courses. Record from multiple screens at once, or use retina displays. Access over 500,000 unique media clips, record from your iPhone or iPad, and add transitions and effects. Animate graphics, titles, and logos with built-in video and text animations. Directly publish to hosting sites. Price: Starts at $129.

Camtasia is another screen recorder and video editor to record and create professional-looking videos. Start with a template, or record your screen and then add effects. Instantly access your most-used tools. Easily modify your video with the drag-and-drop editor. Add quizzes and interactivity to encourage and measure learning. Price: Starts at $249.99.

AudioJungle, part of Envato Market, sells royalty-free stock music and sound effects. Purchase music and sounds for your course content as well as elements to strengthen your brand, such as audio logos and musical idents. Price: Purchase assets individually or access unlimited downloads through Envato Elements for $16.50 per month.

Vimeo is a platform to host, manage, and share videos, including courses. Set embed permissions, send private links, and lock videos or entire albums with a password. Customize the player with your logo, colors, speed controls, and more. Price: Hosting plans start at $7 per month.

Zoom provides multiple ways to connect with your course attendees. Run meetings, conference rooms with video, and full-featured webinars. Price: Basic is free. Premium plans start at $149.90 per year.

FormSwift is a tool to generate, edit, and collaborate on documents and forms. Create and modify your course materials. Use customizable lesson plan templates. Choose from a library of roughly 500 document templates and forms or upload your own documents and edit them with FormSwift’s tools. Price: Plans start at $39.95.

Canva is an online design and publishing tool that can be used to create a wide range of online course documents, including lessons and guides, infographics, presentations for video recordings, workbooks, and completion certificates. Pro version contains a brand kit to upload your own logos and fonts. Price: Basic is free. Pro is $119.99 per year.

PowerPoint is a tool to create branded presentations for lessons, particularly for screen recordings. With Presenter Coach, practice your delivery and get recommendations on pacing, word choice, and more through the power of artificial intelligence. PowerPoint is part of the Microsoft 365 suite. Price: Microsoft 365 is $69.99 per year.

Mailchimp is an email marketing platform that can help you connect with online course participants, grow your audience, increase engagement, and automate your marketing and course distribution. Price: Basic is free. Premium plans start at $9.99 per month.

Zapier is an automation tool to sync applications. Create workflow “zaps” to automate course tasks and apps, such as subscribing new participants to Mailchimp. Each time it runs, the zap automatically sends information from one app to another. Price: Basic is free. Premium plans start at $19.99 per month.

Logitech, the tech hardware producer, makes affordable webcams that capture high-definition video. Logitech’s new StreamCam records in full HD 1080p at 60 frames per second, with AI-enabled facial tracking for a crisp image that’s always in focus. The C930e is an HD 1080p webcam that delivers sharp video in any environment, including low-light and harshly backlit settings. Price: StreamCam is $169.99. C930e is $129.99.

Blue Yeti produces professional microphones for capturing high-resolution audio directly to your computer. The Yeti Pro is a USB mic with an XLR cable to connect to professional studio gear. The Yeti Nano is a popular microphone that delivers 24-bit audio for podcasting and streaming. Price: Yeti Pro is $249.99. Yeti Nano is $99.99.

An In-Depth Guide To Measuring Core Web Vitals

Google has announced that from 1st May, they will start to consider “Page Experience” as part of Search ranking, as measured by a set of metrics called Core Web Vitals. That date is approaching quickly and I’m sure lots of us are being asked to ensure we are passing our Core Web Vitals, but how can you know if you are?

Answering that question is actually more difficult than you might presume. While lots of tools now expose these Core Web Vitals, there are many important concepts and subtleties to understand. Even the Google tools, like PageSpeed Insights and the Core Web Vitals report in Google Search Console, seem to give confusing information.

Why is that and how can you be sure that your fixes really have worked? How can you get an accurate picture of the Core Web Vitals for your site? In this post, I’m going to attempt to explain a bit more about what’s going on here and explain some of the nuances and misunderstandings of these tools.

What Are The Core Web Vitals?

The Core Web Vitals are a set of three metrics designed to measure the “core” experience of whether a website feels fast or slow to its users, and so whether it gives a good experience.

Web pages will need to be within the green measurements for all three Core Web Vitals to benefit from any ranking boost.

1. Largest Contentful Paint (LCP)

This metric is probably the easiest of the three to understand: It measures how quickly the largest item is drawn on the page — which is probably the piece of content the user is most interested in. This could be a banner image, a piece of text, or whatever. The fact that it’s the largest contentful element on the page is a good indicator that it’s the most important piece. LCP is relatively new; we used to measure the similarly named First Contentful Paint (FCP), but LCP has come to be seen as a better metric for when the content the visitor likely wants to see is drawn.

LCP is supposed to measure loading performance and is a good proxy for all the old metrics we in the performance community used to use (i.e. Time to First Byte (TTFB), DOM Content Loaded, Start Render, Speed Index) — but from the experience of the user. It doesn’t cover all of the information covered by those metrics but is a simpler, single metric that attempts to give a good indication of page load.
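If you want to watch LCP being reported in your own browser, Chromium-based browsers expose it through the PerformanceObserver API. A minimal sketch:

// Minimal sketch: log LCP candidates as the browser reports them
// (Chromium-based browsers only).
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // The last candidate reported before the user interacts is the LCP.
    console.log('LCP candidate:', entry.startTime, entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });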

2. First Input Delay (FID)

This second metric measures the time between when the user interacts with a page, clicking on a link or a button for example, and when the browser processes that click. It’s there to measure the interactivity of a page. If all the content is loaded, but the page is unresponsive, then it’s a frustrating experience for the user.

An important point is that this metric cannot be simulated as it really depends on when a user actually clicks or otherwise interacts with a page and then how long that takes to be actioned. Total Blocking Time (TBT) is a good proxy for FID when using a testing tool without any direct user interaction, but also keep an eye on Time to Interactive (TTI) when looking at FID.
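Under the hood, the field measurement comes from the Event Timing API: FID is the gap between when the first input happened and when its handler could start running. A minimal sketch (Chromium-based browsers only):

// Minimal sketch: compute FID from the first-input entry (Event Timing API).
new PerformanceObserver((entryList) => {
  const [firstInput] = entryList.getEntries();
  if (firstInput) {
    // Delay = when processing could begin minus when the input happened.
    const fid = firstInput.processingStart - firstInput.startTime;
    console.log('FID:', fid, 'ms');
  }
}).observe({ type: 'first-input', buffered: true });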

3. Cumulative Layout Shift (CLS)

A very interesting metric, quite unlike the metrics that have come before, for a number of reasons. It is designed to measure the visual stability of the page — basically how much it jumps around as new content slots into place. I’m sure we’ve all clicked on an article, started reading, and then had the text jump around as images, advertisements, and other content are loaded.

This is quite jarring and annoying for users so best to minimize it. Worse still is when that button you were about to click suddenly moves and you click another button instead! CLS attempts to account for these layout shifts.
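In the browser, CLS is reported as individual layout-shift entries, which are summed while ignoring shifts that follow recent user input. A minimal sketch of that running sum (the original definition; the windowing changes discussed later refine it):

// Minimal sketch: accumulate layout shifts that happen without recent input
// (Chromium-based browsers only).
let cumulativeLayoutShift = 0;

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
      console.log('Current CLS:', cumulativeLayoutShift);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });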

Lab Versus RUM

One of the key points to understand about Core Web Vitals is that they are based on field metrics, or Real User Metrics (RUM). Google uses anonymized data from Chrome users to feed back metrics and makes these available in the Chrome User Experience Report (CrUX). That data is what they are using to measure these three metrics for the search rankings. CrUX data is available in a number of tools, including Google Search Console for your site.

The fact that RUM data is used is an important distinction, because some of these metrics (FID excepted) are available in synthetic or “lab-based” web performance tools like Lighthouse, which have been the staple of web performance monitoring for many in the past. These tools run page loads on simulated networks and devices and then tell you what the metrics were for that test run.

So if you run Lighthouse on your high-powered developer machine and get great scores, that may not be reflective of what the users experience in the real world, and so what Google will use to measure your website user experience.

LCP is going to be very dependent on network conditions and the processing power of the devices being used (and a lot of your users are likely using lower-powered devices than you realize!). A counterpoint, however, is that, for many Western sites at least, our mobiles are perhaps not quite as low-powered as tools such as Lighthouse in mobile mode suggest, as these are quite throttled. So you may well notice that your field data on mobile is better than testing with this would suggest (there are some discussions on changing the Lighthouse mobile settings).

Similarly, FID is often dependent on processor speed and how the device can handle all this content we’re sending to it — be it images to process, elements to layout on the page and, of course, all that JavaScript we love to send down to the browser to churn through.

CLS is, in theory, more easily measured in tools, as it’s less susceptible to network and hardware variations, so you would think it is not as subject to the differences between lab and RUM — except for a few important considerations that may not initially be obvious:

  • It is measured throughout the life of the page and not just for page load like typical tools do, which we’ll explore more later in this article. This causes a lot of confusion when lab-simulated page loads have a very low CLS, but the field CLS score is much higher, due to CLS caused by scrolling or other changes after the initial load that testing tools typically measure.
  • It can depend on the size of the browser window — typically tools like PageSpeed Insights measure mobile and desktop, but different mobiles have different screen sizes, and desktops are often much larger than these tools set (WebPageTest recently increased its default screen size to try to more accurately reflect usage).
  • Different users see different things on web pages. Cookie banners, customized content like promotions, Adblockers, A/B tests to name but a few items that might be different, all impact what content is drawn and so what CLS users may experience.
  • It is still evolving and the Chrome team has been busy fixing “invisible” shifts and the like that should not count towards the CLS. Bigger changes to how CLS is actually measured are also in progress. This means you can see different CLS values depending on which version of Chrome is being run.

Using the same names for the metrics in lab-based testing tools, when they may not be accurate reflections of the real-life versions, is confusing. Some are suggesting that we should rename some or all of these metrics in Lighthouse, to distinguish the simulated metrics from the real-world RUM metrics that power the Google rankings.

Previous Web Performance Metrics

Another point of confusion is that these metrics are new and different from the metrics we traditionally used in the past to measure web performance and that are surfaced by some of those tools, like PageSpeed Insights — a free, online auditing tool. Simply enter the URL you want an audit on and click Analyze, and a few seconds later you will be presented with two tabs (one for mobile and one for desktop) that contain a wealth of information:

At the top is the big Lighthouse performance score out of 100. This has been well-known within web performance communities for a while now and is often quoted as a key performance metric to aim for and to summarise the complexities of many metrics into a simple, easy-to-understand number. That has some overlap with the Core Web Vitals goal, but it is not a summary of the three Core Web Vitals (even the lab-based versions), but of a wider variety of metrics.

Currently, six metrics make up the Lighthouse performance score — including some of the Core Web Vitals and some other metrics:

  • First Contentful Paint (FCP)
  • SpeedIndex (SI)
  • Largest Contentful Paint (LCP)
  • Time to Interactive (TTI)
  • Total Blocking Time (TBT)
  • Cumulative Layout Shift (CLS)

To add to the complexity, each of these six is weighted differently in the Performance score, and CLS, despite being one of the Core Web Vitals, is currently only 5% of the Lighthouse Performance score (I’ll bet money on this increasing soon after the next iteration of CLS is released). All this means you can get a very high, green-colored Lighthouse performance score and think your website is fine, and yet still fail to pass the Core Web Vitals threshold. You therefore may need to refocus your efforts now to look at these three core metrics.

Moving past the big green score, we come to the field data, and another point of confusion: First Contentful Paint is shown in this field data alongside the other three Core Web Vitals, despite not being part of the Core Web Vitals, and, as in this example, I often find it flagged as a warning even while the others all pass. (Perhaps the thresholds for this need a little adjusting?) Did FCP narrowly miss out on being a Core Web Vital, or does it maybe just look better balanced with four metrics? This field data section is important, and we’ll come back to it later.

If no field data is available for the particular URL being tested, then origin data for the whole domain will be shown instead (this is hidden by default when field data is available for that particular URL as shown above).

After the field data, we get the lab data, and we see the six metrics that make up the performance score at the top. If you click on the toggle on the top right you even get a bit more of a description of those metrics:

As you can see, the lab versions of LCP and CLS are included here and, as they are part of Core Web Vitals, they get a blue label to mark them as extra important. PageSpeed Insights also includes a helpful calculator link to see the impact of these scores on the total score at the top, and it allows you to adjust them to see what improving each metric will do to your score. But, as I say, the web performance score is likely to take a backseat for a bit while the Core Web Vitals bask in the glow of all the attention at the moment.

Lighthouse also performs nearly 50 other checks, grouped into extra Opportunities and Diagnostics. These don’t directly impact the score, nor the Core Web Vitals, but they can be used by web developers to improve the performance of their site. They are surfaced in PageSpeed Insights below all the metrics. Think of them as suggestions on how to improve performance, rather than as specific issues that necessarily need to be addressed.

The diagnostics will show you the LCP element and the shifts that have contributed to your CLS score which are very useful pieces of information when optimizing for your Core Web Vitals!

So, while in the past web performance advocates may have heavily concentrated on Lighthouse scores and audits, I see this zeroing in on the three Core Web Vital metrics — at least for the next period while we get our heads around them. The other Lighthouse metrics, and the overall score, are still useful to optimize your site’s performance, but the Core Web Vitals are currently taking up most of the ink on new web performance and SEO blog posts.

Viewing The Core Web Vitals For Your Site

The easiest way to get a quick look at the Core Web Vitals for an individual URL, and for the whole origin, is to enter a URL into PageSpeed Insights as discussed above. However, to view how Google sees the Core Web Vitals for your whole site, get access to Google Search Console. This is a free product created by Google that allows you to understand how Google “sees” your whole site, including the Core Web Vitals for your site (though there are some — shall we say — “frustrations” with how often the data updates here).

Google Search Console has long been used by SEO teams, but with the input that site developers will need to address Core Web Vitals, development teams should really get access to this tool too if they haven’t already. To get access, you will need a Google account and then to verify your ownership of the site through various means (placing a file on your web server, adding a DNS record, and so on).

The Core Web Vitals report in Google Search Console gives you a summary of how your site is meeting the Core Web Vitals over the last 90 days:

Ideally, to be considered to be passing the Core Web Vitals completely, you want all your pages to be green, with no ambers or reds. While an amber is a good indicator that you’re close to passing, it’s really only greens that count, so don’t settle for second best. Whether you need all your pages passing or just your key ones is up to you, but often there will be similar issues on many pages, and fixing those for the site can help bring the number of URLs that don’t pass down to a more manageable level where you can make those decisions.

Initially, Google is only going to apply Core Web Vitals ranking to mobile, but it’s surely only a matter of time before that rolls out to desktop too, so do not ignore desktop while you are in there reviewing and fixing your pages.

Clicking on one of the reports will give you more detail as to which of the Web Vitals are failing to be met, and then a sampling of the URLs affected. Google Search Console groups URLs into buckets to, in theory, allow you to address similar pages together. You can then click on a URL to run a quick PageSpeed Insights performance audit on that particular URL (including showing the Core Web Vitals field data for that page, if it is available). You then fix the issues it highlights, rerun PageSpeed Insights to confirm the lab metrics are now correct, and then move on to the next page.

However, once you start looking at that Core Web Vitals report (obsessively for some of us!), you may have then been frustrated that this report doesn’t seem to update to reflect your hard work. It does seem to update every day as the graph is moving, yet it’s often barely changing even once you have released your fixes — why?

Similarly, the PageSpeed Insights field data is stubbornly still showing that URL and site as failing. What’s the story here then?

The Chrome User Experience Report (CrUX)

The reason that the Web Vitals are slow to update is that the field data is based on the last 28 days of data in the Chrome User Experience Report (CrUX), and, within that, only the 75th percentile of that data. Using 28 days’ worth of data and the 75th percentile are good things, in that they remove variances and extremes to give a more accurate reflection of your site’s performance, without causing a lot of noise that’s difficult to interpret.

Performance metrics are very susceptible to the network and devices so we need to smooth out this noise to get to the real story of how your website is performing for most users. However, the flip side to that is that they are frustratingly slow to update, creating a very slow feedback loop from correcting issues, until you see the results of that correction reflected there.

The 75th percentile (or p75), and the delay it creates, is particularly interesting, as I don’t think it is well understood. For each of the Core Web Vitals, it looks at the value that 75% of your page views achieve over those 28 days.

It is therefore the highest Core Web Vitals score of the best-performing 75% of your page views (or, conversely, the lowest score that 75% of your page views come in under). So it is not the average of that 75% of page views, but the worst value of that set.

This creates a delay in reporting that a non-percentile-based rolling average would not. We’ll have to get a little mathsy here (I’ll try to keep it to a minimum), but let’s say, for simplicity’s sake, that everyone got an LCP of 10 seconds for the last month, that you then fixed it so it now takes only 1 second, and that you had exactly the same number of visitors every day, all scoring this LCP.

In that overly-simplistic scenario, you would get the following metrics:

Day    | LCP | 28-day Mean | 28-day p75
Day 0  | 10  | 10          | 10
Day 1  | 1   | 9.68        | 10
Day 2  | 1   | 9.36        | 10
Day 3  | 1   | 9.04        | 10
Day 20 | 1   | 3.57        | 10
Day 21 | 1   | 3.25        | 10
Day 22 | 1   | 2.93        | 1
Day 23 | 1   | 2.61        | 1
Day 27 | 1   | 1.32        | 1
Day 28 | 1   | 1           | 1

So you can see that you don’t see your drastic improvement in the CrUX score until day 22, when it suddenly jumps to the new, lower value (once more than 75% of the 28-day window contains the new measurements — by no coincidence!). Until then, over 25% of the measurements were gathered prior to the change, so they were still returning the old value of 10, and hence your p75 value was stuck at 10.

Therefore it looks like you’ve made no progress at all for a long time, whereas a mean average (if it was used) would show a gradual tick down starting immediately and so progress could actually be seen. On the plus side, for the last few days, the mean is actually higher than the p75 value since p75, by definition, filters out the extremes completely.
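If you want to reproduce the table above, the simulation takes only a few lines (assuming, as in the example, one identical measurement per day and a nearest-rank pick for the percentile):

// Reproduce the table above: 28-day rolling mean and p75 of LCP, assuming one
// identical measurement per day (10 seconds before the fix, 1 second after).
const DAYS_IN_WINDOW = 28;
const BEFORE = 10;
const AFTER = 1;

for (let day = 0; day <= DAYS_IN_WINDOW; day++) {
  // The window holds `day` new measurements and the rest old ones.
  const measurements = [
    ...Array(DAYS_IN_WINDOW - day).fill(BEFORE),
    ...Array(day).fill(AFTER),
  ];
  const mean = measurements.reduce((sum, value) => sum + value, 0) / DAYS_IN_WINDOW;
  // Nearest-rank p75: more than 75% of the window must be new before it drops.
  const sorted = [...measurements].sort((a, b) => a - b);
  const p75 = sorted[Math.floor(0.75 * DAYS_IN_WINDOW)];
  console.log(`Day ${day}: mean=${mean.toFixed(2)} p75=${p75}`);
}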

The example in the table above, while massively simplified, explains one reason why many might see their Web Vitals graph jump abruptly, whereby one day all your pages cross a threshold and then are good (woohoo!).

This may be surprising to those expecting more gradual (and instantaneous) changes as you work through page issues, and as some pages are visited more often than others. On a related note, it is also not unusual to see your Search Console graph go through an amber period, depending on your fixes and how they impact the thresholds, before hitting that sweet, sweet green color.

Dave Smart ran a fascinating experiment, Tracking Changes in Search Console’s Report Core Web Vitals Data, in which he wanted to look at how long it took for the graphs to update. He didn’t take into account the 75th-percentile portion of CrUX (which makes the lack of movement in some of his graphs make more sense), but it is still a fascinating real-life experiment on how this graph updates and is well worth a read!

My own experience is that this 28-day p75 methodology doesn’t fully explain the lag in the Core Web Vitals report, and we’ll discuss some other potential reasons in a moment.

So is that the best you can do, make the fixes, then wait patiently, tapping your fingers, until CrUX deems your fixes as worthy and updates the graph in Search Console and PageSpeed Insights? And if it turns out your fixes were not good enough, then start the whole cycle again? In this day of instant feedback to satisfy our cravings, and tight feedback loops for developers to improve productivity, that is not very satisfying at all!

Well, there are some things you can do in the meantime to try to see whether any fixes will get the intended impact.

Delving Into The Crux Data In More Detail

Since the core of the measurement is the CrUX data, let’s delve into that some more and see what else it can tell us. Going back to PageSpeed Insights, we can see that it surfaces not only the p75 value for the site, but also the percentage of page views in each of the green, amber, and red buckets, shown in the color bars beneath each metric.

In this example, CLS is failing the Core Web Vitals threshold, with a p75 value of 0.11, which is above the 0.1 passing limit. However, despite the red font, this is actually an amber ranking (red would be above 0.25). More interesting is that the green bar is at 73% — once that hits 75%, this page will be passing the Core Web Vitals.

While you cannot see the historical CrUX values, you can monitor this over time. If it goes to 74% tomorrow then we are trending in the right direction (subject to fluctuations!) and can hope to hit the magic 75% soon. For values that are further away, you can check periodically and see the increase, and then project out when you might start to show as passing.

CrUX is also available as a free API to get more precise figures for those percentages. Once you’ve signed up for an API key, you can call it with a curl command like this (replacing the API_KEY, formFactor, and URL as appropriate):

curl -s --request POST 'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY' \
  --header 'Accept: application/json' \
  --header 'Content-Type: application/json' \
  --data '{"formFactor":"PHONE","url":"https://www.example.com"}'

And you’ll get a JSON response, like this:

{
  "record": {
    "key": {
      "formFactor": "PHONE",
      "url": "https://www.example.com/"
    },
    "metrics": {
      "cumulative_layout_shift": {
        "histogram": [
          { "start": "0.00", "end": "0.10", "density": 0.99959769344240312 },
          { "start": "0.10", "end": "0.25", "density": 0.00040230655759688886 },
          { "start": "0.25" }
        ],
        "percentiles": {
          "p75": "0.00"
        }
      },
      "first_contentful_paint": {
        ...
      }
    }
  },
  "urlNormalizationDetails": {
    "originalUrl": "https://www.example.com",
    "normalizedUrl": "https://www.example.com/"
  }
}

Incidentally, if the above is scaring you a bit and you want a quicker way to look at this data for just one URL, then PageSpeed Insights also returns this level of precision, which you can see by opening DevTools, running your PageSpeed Insights test, and finding the XHR call it makes.

There is also an interactive CrUX API explorer, which allows you to make sample queries of the CrUX API. Though, for regular calling of the API, getting a free key and using curl or some other API tool is usually easier.

The API can also be called with an “origin”, instead of a URL, at which point it will give the summarised value of all page visits to that domain. PageSpeed Insights exposes this information, which can be useful if your URL has no CrUX information available to it, but Google Search Console does not. Google hasn’t stated (and is unlikely to!) exactly how the Core Web Vitals will impact ranking. Will the origin-level score impact rankings, or only individual URL scores? Or, like PageSpeed Insights, will Google fall back to origin-level scores when individual URL data does not exist? It’s difficult to know at the moment, and the only hint so far is this in the FAQ:

Q: How is a score calculated for a URL that was recently published, and hasn’t yet generated 28 days of data?

A: Similar to how Search Console reports page experience data, we can employ techniques like grouping pages that are similar and compute scores based on that aggregation. This is applicable to pages that receive little to no traffic, so small sites without field data don’t need to be worried.

The CrUX API can be called programmatically, and Rick Viscomi from the Google CrUX team created a Google Sheets monitor allowing you to bulk check URLs or origins, and even automatically track CrUX data over time if you want to closely monitor a number of URLs or origins. Simply clone the sheet, go into Tools → Script editor, and then enter a script property of CRUX_API_KEY with your key (this needs to be done in the legacy editor), and then run the script and it will call the CrUX API for the given URLs or origins and add rows to the bottom of the sheet with the data. This can then be run periodically or scheduled to run regularly.
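If a spreadsheet isn’t your thing, the same call can be scripted directly. A minimal sketch (substitute your own API key, and note that a metric will simply be missing from the response if the URL doesn’t have enough data for it):

// Minimal sketch: query the CrUX API for one URL and log the p75 values.
const API_KEY = 'YOUR_API_KEY';
const endpoint = `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`;

fetch(endpoint, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ formFactor: 'PHONE', url: 'https://www.example.com' }),
})
  .then((response) => response.json())
  .then((data) => {
    const metrics = data.record.metrics;
    // Each metric may be absent if there is not enough CrUX data for it.
    for (const name of [
      'largest_contentful_paint',
      'first_input_delay',
      'cumulative_layout_shift',
    ]) {
      console.log(name, 'p75:', metrics[name]?.percentiles.p75);
    }
  });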

I used this to check all the URLs for a site with a slow-updating Core Web Vitals report in Google Search Console, and it confirmed that CrUX had no data for a lot of the URLs and that most of the rest had passed, so again showing that the Google Search Console report is behind — even behind the CrUX data it is supposed to be based on. I’m not sure whether that is because URLs that had previously failed now don’t have enough traffic to get updated CrUX data showing them passing, or whether it’s due to something else, but this proves to me that this report is definitely slow.

I suspect a large part of that is due to URLs without data in CrUX and Google Search doing its best to proxy a value for them. So this report is a great place to start to get an overview of your site, and one to monitor going forward, but not a great report for working through the issues where you want more immediate feedback.

For those that want to delve into CrUX even more, there are monthly tables of CrUX data available in BigQuery (at origin level only, so not for individual URLs) and Rick has also documented how you can create a CrUX dashboard based on that which can be a good way of monitoring your overall website performance over the months.

Other Information About The Crux Data

So, with the above, you should have a good understanding of the CrUX dataset, why some of the tools using it seem to be slow and erratic to update, and also how to explore it a little more. But before we move on to alternatives to it, there are some more things to understand about CrUX to help you to really understand the data it is showing. So here’s a collection of other useful information I’ve gathered about CrUX in relation to Core Web Vitals.

CrUX is Chrome only. All those iOS users, and users of other browsers (desktop Safari, Firefox, Edge, etc.), not to mention older browsers (Internet Explorer — hurry up and fade out, would you!), are not having their user experience reflected in CrUX data, and so not in Google’s view of the Core Web Vitals.

Now, Chrome’s usage is very high (though perhaps not among your site’s visitors?), and in most cases the performance issues it highlights will also affect those other browsers, but it is something to be aware of. And it does feel a little “icky”, to say the least, that Google’s monopoly position in search is now encouraging people to optimize for its browser. We’ll talk below about alternative solutions for this limited view.

The version of Chrome being used will also have an impact, as these metrics (CLS in particular) are still evolving, and bugs are being found and fixed. This adds another dimension of complexity to understanding the data. There have been continual improvements to CLS in recent versions of Chrome, with a redefinition of CLS potentially landing in Chrome 92. Again, the fact that field data is being used means it might take some time for these changes to feed through to users, and then into the CrUX data.

CrUX is only for users logged into Chrome, or to quote the actual definition:

“[CrUX is] aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled.”

Chrome User Experience Report, Google Developers

So if you’re looking for information on a site mostly visited from corporate networks, where such settings are turned off by central IT policies, then you might not be seeing much data — especially if those poor corporate users are still being forced to use Internet Explorer too!

CrUX includes all pages, including those not typically surfaced to Google Search: “noindexed / robboted / logged in pages will be included” (though there are minimum thresholds for a URL and origin to be exposed in CrUX). Now, those categories of pages will likely not be included in Google Search results, and so the ranking impact on them is probably unimportant, but they will still be included in CrUX. The Core Web Vitals report in Google Search Console, however, seems to show only indexed URLs, so they will not show up there.

The origin figure shown in PageSpeed Insights and in the raw CrUX data will include those non-indexed, non-public pages, and as I mentioned above, we’re not sure of the impact of that. A site I work on has a large percentage of visitors visiting our logged-in pages, and while the public pages were very performant the logged-in pages were not, and that severely skewed the origin Web Vitals scores.

The CrUX API can be used to get the data of these logged-in URLs, but tools like PageSpeed Insights cannot (since they run an actual browser and so will be redirected to the login pages). Once we saw that CrUX data and realized the impact, we fixed those, and the origin figures have started to drop down but, as ever, it’s taking time to feed through.

Noindexed or logged-in pages are also often “apps”, rather than collections of individual pages, so they may be using a Single Page Application methodology with one real URL but many different pages underneath it. This can impact CLS in particular, due to it being measured over the whole life of the page (though hopefully the upcoming changes to CLS will help with that).

As mentioned previously, the Core Web Vitals report in Google Search Console, while based on CrUX, is definitely not the same data. As I stated earlier, I suspect this is primarily due to Google Search Console attempting to estimate Web Vitals for URLs where no CrUX data exists. The sample URLs in this report are also out of whack with the CrUX data.

I’ve seen many instances of URLs that have been fixed, and the CrUX data in either PageSpeed Insights, or directly via the API, will show them passing Web Vitals, yet when you click on the red line in the Core Web Vitals report and get sample URLs, these passing URLs will be included as if they were failing. I’m not sure what heuristics Google Search Console uses for this grouping, or how often it and the sample URLs are updated, but it could do with updating more often, in my opinion!

CrUX is based on page views. That means your most popular pages will have a large influence on your origin CrUX data. Some pages will drop in and out of CrUX each day as they meet these thresholds or not, and perhaps the origin data is coming into play for those? Also if you had a big campaign for a period and lots of visits, then made improvements but have fewer visits since, you will see a larger proportion of the older, worse data.

CrUX data is separated into Mobile, Desktop and Tablet — though only Mobile and Desktop are exposed in most tools. The CrUX API and BigQuery allow you to look at Tablet data if you really want to, but I’d advise concentrating on Mobile and then Desktop. Also, note that in some cases (like the CrUX API) it’s labeled PHONE rather than MOBILE, to reflect that it’s based on the form factor rather than on the data being gathered over a mobile network.

All in all, a lot of these issues are side effects of gathering field (RUM) data, but all these nuances can be a lot to take on when you’ve been tasked with “fixing our Core Web Vitals”. The more you understand how these Core Web Vitals are gathered and processed, the more the data will make sense, and the more time you can spend fixing the actual issues rather than scratching your head wondering why it’s not reporting what you think it should be.

Getting Faster Feedback

OK, so by now you should have a good handle on how the Core Web Vitals are collected and exposed through the various tools, but that still leaves us with the issue of how we can get better and quicker feedback. Waiting 21–28 days to see the impact in CrUX data — only to realize your fixes weren’t sufficient — is way too slow. So while some of the tips above can be used to see if CrUX is trending in the right direction, it’s still not ideal. The only solution, therefore, is to look beyond CrUX in order to replicate what it’s doing, but expose the data faster.

There are a number of great commercial RUM products on the market that measure the user performance of your site and expose the data in dashboards or APIs, allowing you to query it in much more depth and at a much more granular frequency than CrUX allows. I’ll not name any products here, to avoid accusations of favoritism or offending anyone I leave off! As the Core Web Vitals are exposed as browser APIs (by Chromium-based browsers at least; other browsers like Safari and Firefox do not yet expose some of the newer metrics like LCP and CLS), these products should, in theory, report the same data as is exposed to CrUX, and therefore to Google, with the same caveats as above in mind!
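If you’re curious what those browser APIs look like, here is a minimal sketch (for illustration only, and only in Chromium-based browsers) that uses PerformanceObserver to log LCP candidates and keep a naive running CLS total. Note that this simple accumulation reflects the original CLS definition rather than the windowing changes mentioned above, which is one reason to prefer the web-vitals library described next.

<script>
  // Log each Largest Contentful Paint candidate as the browser reports it.
  new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      console.log('LCP candidate (ms):', entry.startTime, entry.element);
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });

  // Keep a naive running CLS total, ignoring shifts that follow recent user input.
  let cumulativeLayoutShift = 0;
  new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      if (!entry.hadRecentInput) {
        cumulativeLayoutShift += entry.value;
      }
    }
    console.log('Running CLS:', cumulativeLayoutShift);
  }).observe({ type: 'layout-shift', buffered: true });
</script>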

For those without access to these RUM products, Google has also made available a Web Vitals JavaScript library, which allows you to get access to these metrics and report them back as you see fit. This can be used to send this data back to Google Analytics by running the following script on your web pages:

<script type="module"> import {getFCP, getLCP, getCLS, getTTFB, getFID} from 'https://unpkg.com/web-vitals?module'; function sendWebVitals() { function sendWebVitalsGAEvents({name, delta, id, entries}) { if ("function" == typeof ga) {
ga('send', 'event', { eventCategory: 'Web Vitals', eventAction: name, // The id value will be unique to the current page load. When sending // multiple values from the same page (e.g. for CLS), Google Analytics can // compute a total by grouping on this ID (note: requires eventLabel to // be a dimension in your report). eventLabel: id, // Google Analytics metrics must be integers, so the value is rounded. // For CLS the value is first multiplied by 1000 for greater precision // (note: increase the multiplier for greater precision if needed). eventValue: Math.round(name === 'CLS' ? delta * 1000 : delta), // Use a non-interaction event to avoid affecting bounce rate. nonInteraction: true, // Use sendBeacon() if the browser supports it. transport: 'beacon' }); } } // Register function to send Core Web Vitals and other metrics as they become available getFCP(sendWebVitalsGAEvents); getLCP(sendWebVitalsGAEvents); getCLS(sendWebVitalsGAEvents); getTTFB(sendWebVitalsGAEvents); getFID(sendWebVitalsGAEvents); } sendWebVitals(); </script>

Now I realize the irony of adding another script to measure the impact of your website, which is probably slow in part because of too much JavaScript, but as you can see above, the script is quite small and the library it loads is only a further 1.7 kB compressed (4.0 kB uncompressed). Additionally, as a module (which will be ignored by older browsers that don’t understand ES modules), its execution is deferred, so it shouldn’t impact your site too much, and the data it gathers can be invaluable to help you investigate your Core Web Vitals in a more real-time manner than the CrUX data allows.

The script registers a function to send a Google Analytics event when each metric becomes available. For FCP and TTFB this is as soon as the page is loaded, for FID after the first interaction from the user, while for LCP and CLS it is when the page is navigated away from or backgrounded and the actual LCP and CLS are definitely known. You can use developer tools to see these beacons being sent for that page, whereas the CrUX data happens in the background without being exposed here.

The benefit of putting this data in a tool like Google Analytics is you can slice and dice the data based on all the other information you have in there, including form factor (desktop or mobile), new or returning visitors, funnel conversions, Chrome version, and so on. And, as it’s RUM data, it will be affected by real usage — users on faster or slower devices will report back faster or slower values — rather than a developer testing on their high spec machine and saying it’s fine.

At the same time, bear in mind that the reason CrUX data is aggregated over 28 days, and only looks at the 75th percentile, is to remove variance. Having access to the raw data gives you more granularity, but it also means you’re more susceptible to extreme variations. Still, as long as you keep that in mind, having earlier access to the data can be very valuable.

Google’s Phil Walton created a Web Vitals dashboard that can be pointed at your Google Analytics account to download this data, calculate the 75th percentile (which helps with the variations!), and then display your Core Web Vitals scores, a histogram of information, a time series of the data, and your top five visited pages with the top elements causing those scores.

Using this dashboard, you can filter on individual pages (using a ga:pagePath==/page/path/index.html filter) and, within a day of releasing your fix, see a very satisfying graph confirming it has been successful, so you can move on to your next challenge.

With a little bit more JavaScript you can also expose more information (like what the LCP element is, or which element is causing the most CLS) in a Google Analytics Custom Dimension. Phil wrote an excellent “Debug Web Vitals in the field” post on this, which shows how you can enhance the above script to send this debug information as well, as shown in this version of the script.
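As a rough illustration of the idea (a simplified sketch, not Phil’s actual script), you can derive a CSS selector for the relevant element from the entries passed to the web-vitals callback and attach it to the Google Analytics event as a custom dimension:

// Simplified sketch: build a debug string (the LCP element, or the biggest
// contributor to CLS) from the entries the web-vitals library passes to the callback.
function getDebugInfo(name, entries = []) {
  if (name === 'LCP' && entries.length) {
    const lastEntry = entries[entries.length - 1];
    return lastEntry.element ? toSelector(lastEntry.element) : '(not set)';
  }
  if (name === 'CLS' && entries.length) {
    const biggestShift = entries.reduce((a, b) => (a.value > b.value ? a : b));
    const source = biggestShift.sources && biggestShift.sources[0];
    return source && source.node ? toSelector(source.node) : '(not set)';
  }
  return '(not set)';
}

// Very naive selector helper, for illustration only.
function toSelector(el) {
  if (el.id) return '#' + el.id;
  const classes = (typeof el.className === 'string' && el.className.trim())
    ? '.' + el.className.trim().split(/\s+/).join('.')
    : '';
  return el.tagName.toLowerCase() + classes;
}

// Then add the result to the ga('send', 'event', {...}) call in the script above, e.g.
//   dimension1: getDebugInfo(name, entries),
// where dimension1 matches the Custom Dimension index you have configured in Google Analytics.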

These dimensions can also be reported in the dashboard (using ga:dimension1 as the Debug dimension field, assuming this is being sent back in Google Analytics Custom Dimension 1 in the script), to see which LCP element those browsers reported.

As I said previously, commercial RUM products will often expose this sort of data too (and more!), but for those just dipping their toe in the water and not ready for the financial commitment of those products, this at least offers the first dabble into RUM-based metrics and how useful they can be to get that crucial faster feedback on the improvements you’re implementing. And if this whets your appetite for this information, then definitely look at the other RUM products out there to see how they can help you, too.

When looking at alternative measurements and RUM products, do remember to circle back round to what Google is seeing for your site, as it may well be different. It would be a shame to work hard on performance, yet not get all the ranking benefits of this at the same time! So keep an eye on those Search Console graphs to ensure you’re not missing anything.

Conclusion

The Core Web Vitals are an interesting set of key metrics looking to represent the user experience of browsing the web. As a keen web performance advocate, I welcome any push to improve the performance of sites and the ranking impact of these metrics has certainly created a great buzz in the web performance and SEO communities.

While the metrics themselves are very interesting, what’s perhaps more exciting is the use of CrUX data to measure these. This basically exposes RUM data to websites that have never even considered measuring site performance in the field in this way before. RUM data is what users are actually experiencing, in all their wild and varied setups, and there is no substitute for understanding how your website is really performing and being experienced by your users.

But the reason we’ve been so dependent on lab data for so long is that RUM data _is_ noisy. The steps CrUX takes to reduce this noise do help to give a more stable view, but at the cost of making it difficult to see recent changes.

Hopefully, this post goes some way to explaining the various ways of accessing the Core Web Vitals data for your website, and some of the limitations of each method. I also hope that it goes some way to explaining some of the data you’ve been struggling to understand, as well as suggesting some ways to work around those limitations.

Happy optimizing!

8 Reasons to Avoid Cryptocurrencies for Ecommerce

Cryptocurrencies are hot news. Visa announced that it would test a type of cryptocurrency on its network. Elon Musk proclaimed that Tesla would accept cryptocurrency in payment for its vehicles. Bitcoin’s value is soaring.

Merchants may be wondering if cryptocurrencies are ready for mainstream ecommerce. The answer is no. Here’s why.

8 Reasons to Avoid Cryptocurrencies

Problem 1: Volatility. The value of national fiat currencies such as the U.S. dollar fluctuates only slightly. Even in times of relatively high inflation, the buying power of the U.S. dollar is more or less the same now as it will be in a few months.

Cryptocurrencies, on the other hand, are remarkably volatile. One year ago, bitcoin traded at roughly $10,000. Today, it trades at approximately $60,000. It’s the equivalent of 17 cents expanding to $1.

To overcome volatility, the cryptocurrency industry has created stablecoins, a type of cryptocurrency whose value is tied to a more stable asset such as the U.S. dollar or gold. However, stablecoins are not yet widely adopted, in part because crypto speculators do not like stability, as it harms their earning potential.

Despite what crypto proponents claim, the possibility of a meaningful drop in value is frightening. It’s too great a risk for small- and medium-sized ecommerce merchants.

Problem 2: No rewards. Consumers use credit cards, in part, to earn cash-back and point-based rewards. Issuers offer these incentives to motivate consumers to pay with cards. There is no large-scale crypto equivalent to Citi’s Double Cash card or Amazon’s Prime Rewards Visa, or any of the hundreds of other popular loyalty programs. The lack of those programs is a disincentive to use crypto for routine payments.

Problem 3: No consumer protection. Chargebacks are expensive and time-consuming for merchants. Nonetheless, they are an important part of the credit card ecosystem. Knowing that they are not responsible for fraudulent credit card purchases gives consumers confidence. Crypto payments have no such protections. A customer has no recourse if the merchant does not deliver on its commitments. Merchants have no legal or contractual obligation to refund a crypto purchase, although they may choose to do it. A consumer could presumably sue a merchant, but it’s unlikely.

Problem 4: Not universal. Cash is universal. Credit and debit cards are universal. Cryptocurrency is not. There are a slew of cryptocurrencies, but using crypto payments for retail purchases remains an anomaly. Only a handful of payment gateways (and even fewer point-of-sale terminals) process crypto transactions. Bottom line: Cryptocurrencies are too difficult for consumers to obtain and for merchants to accept.

Problem 5: Fragmentation. There are approximately 5,000 cryptocurrencies. Banks, payment processors, and merchants are largely unsure how to process the thousands of available options. New “coins” pop up seemingly daily. Which ones should merchants accept? What if a consumer wants to pay with the latest cryptocurrency, but the payment gateway cannot process it? In contrast, consider national currencies. Most large financial institutions deal with about 150 sovereign currencies, at most. The United Nations recognizes 180.

Problem 6: Expensive. Typical rates for accepting cryptocurrencies for online purchases are about 1 percent, roughly 1 percent lower than most credit cards. However, accepting crypto becomes expensive when integrating and maintaining a separate payment gateway and adding currency-conversion fees. The latter point, converting cryptos to fiat currencies, is the real catch. Merchants should carefully consider that cost as the rates are generally high, wiping out the savings from the low transaction fees.

Problem 7: Security risks. Cryptocurrency is essentially digital cash. Stolen credit cards and bank accounts are gigantic headaches. However, unless the account holder was negligent, the issuer or bank will return the money. But not so with cryptocurrencies. Once stolen, cryptocurrencies are gone forever — with no recourse. Thus holders of cryptocurrencies must add security measures to protect their accounts.

Problem 8: Coming regulation. National and local governments worldwide are contemplating taxing, banning, limiting, or controlling cryptocurrencies. Several countries are in the early stages of creating their own national cryptocurrencies, called CBDCs (central bank digital currencies). Interested merchants and consumers should let the dust settle.

Home Gym Provider Overcomes Supply Disruptions, Thrives

Imagine selling home gym equipment online at the onset of the pandemic. Athletic clubs were closed. Consumers were stuck in their houses. According to Kaevon Khoozani, the founder of Canada-based Bells of Steel, the demand for weight-training equipment was “obscene.”

“We ran into two giant hiccups,” Khoozani told me. “Everything we sell comes from specialty suppliers. Some large players, such as the chain stores, gobbled up all the capacity and raw materials. Three of our main suppliers dropped us.”

In other words, Khoozani faced obscene demand with no inventory. What was his solution?

“I set up an entirely new infrastructure for the production of iron weight plates in Vietnam. As far as I know, I’m the first to do it for export out of that country. It took a lot of time, money, and effort.”

Khoozani’s perseverance has paid off. His company is thriving, having retooled its supply chain, logistics, and inventory planning.

He and I recently discussed those developments and more. Our entire audio conversation is embedded below. The transcript that follows is edited for clarity and length.

Eric Bandholz: Selling gym equipment during the pandemic might make you one of the richest people in ecommerce.

Kaevon Khoozani: Theoretically, yes. I could be poor soon, but it’s going good now.

Bandholz: We’re all rich in life. This show’s not about the monetary value or get rich quick. Your site is Bells of Steel. How challenging has it been to get equipment? I’m a weightlifter myself. Pretty much everywhere has been sold out.

Khoozani: For sure. And the demand hasn’t really stopped. This month will be our biggest ever.

Bandholz: When I got into ecommerce, shipping heavy items was something I avoided. You sell products designed to be heavy. How do you manage that shipping process?

Khoozani: It’s tricky. UPS has high thresholds for package size and weight. Quite a few of our products are designed specifically to fit within those UPS guidelines. A lot goes into our packaging design. As for fulfillment, I use a 3PL in Indianapolis. I also ship from my own warehouse and staff in Calgary, Canada.

Most everyone who works for us is into weight lifting. Many of our warehouse guys are big and physical.

Bandholz: You’re in Canada. It’s a small market compared to the U.S. Does your business depend on sales to U.S. consumers?

Khoozani: Canada has about the same population as California. Bells of Steel has been around since 2010. There were no competitors up here at the time. So it was an advantage. Imagine being the first company in California to sell bumper plates on the internet. We were able to capture a ton of the Canadian market and a lot of organic Google traffic.

Amazon’s not nearly as dominant here as in the U.S. It’s a lot easier in Canada and a lot less expensive than in the U.S. To this day, our split is about 70-percent Canada and 30-percent U.S., although the U.S. share is growing rapidly.

Bandholz: Whenever Beardbrand ships into Canada, it costs an arm and a leg. It’s tremendously cost-prohibitive to serve Canadian consumers. What about shipping from Canada into the U.S.?

Khoozani: It’s a lot easier and cheaper to ship from Canada to the U.S. than vice-versa. There’s a difference in the duty and tax regulations, for whatever reason. I can ship anything less than $800 in value to the U.S., and it goes right through. No duty, no nothing.

But you’ll get dinged on anything and everything shipping from the U.S. to Canada. I won’t order stuff from the States directly because I’ll end up with these crazy duty bills, or it gets stuck at customs for three weeks, or it never gets through at all.

Bandholz: Where are your products manufactured?

Khoozani: Everything’s made in China and, more recently, Vietnam. It all comes in containers directly to Calgary and Indianapolis.

Bandholz: A lot of big companies sell weight training equipment. How do you compete?

Khoozani: Our big unique is we’re the best value for the money. There are competitors with thicker and stronger steel and maybe a better fit and finish. But my barbells are the best dollar for dollar. We also offer a lot of unique designs that are typically unavailable to home users. For example, one of our best-selling products is the Belt Squat Machine for under $2,000. Only one other company sells it for that price. And everything else on the market is commercial grade, which is very expensive.

And we don’t sell on Amazon. Here in Canada, maybe 10 percent of our pre-pandemic sales came from Amazon. And then the pandemic hit. I chose then not to give up a single percentage point to an entity that does nothing for my brand when there’s just obscene demand. And I don’t know if we’re ever going back. I like keeping all those customers on my platform and giving them the best experience I can.

It’s not worth it, fighting tooth and nail on the Amazon marketplace.

Bandholz: You’re singing my notes, right up my alley. Let’s talk about the crazy demand for your products due to Covid. How do you keep customers happy?

Khoozani: We ran into two giant hiccups. The first was that everything we sell comes from specialty suppliers. We don’t use trade agents. We buy barbells from a factory that only does barbells and nothing else. For some of those factories, we are a bigger customer. For others, we’re not.

At the beginning of the pandemic, some large players, such as the chain stores, gobbled up all the capacity and raw materials. Three of our main suppliers dropped us. Just, “See you later. Go get your weight plates from somewhere else.” But we couldn’t get them anywhere else because nobody was taking on new customers.

So I set up an entirely new infrastructure for the production of iron weight plates in Vietnam. And as far as I know, I’m the first to do it for export out of that country. There is some domestic production there, but not much export. I went from the ground up and built that supply chain there. It took a lot of time, money, and effort.

The second hiccup was a severe container shortage from Asia to North America. For years we used a reliable freight forwarder who set everything up. We didn’t have to pay the freight until the product arrived. Then came Covid. It was a frantic scramble, calling forwarders every day, asking, “You got space? You got space? You got space?”

And the cost of shipping containers, if you can get them, has tripled or quadrupled since the beginning of the pandemic. Lots of dirty tricks, such as middlemen selling VIP space on the containers for an extra $4,000. Crazy stuff.

It’s been tough. We learned the hard way. Before the pandemic, we were doing a lot of pre-selling. We would tell customers they could buy now and we would ship in the same month. Then Covid hit. We got burned pretty hard. We couldn’t fulfill those orders with containers sitting in port for weeks, unable to offload.

We’re much better now at tracking containers and planning inventory. We’re working much closer with our suppliers, asking, “How can we maximize your efficiency? If we buy 100 SKUs, is that going to expedite the manufacturing and shipping?”

We’ve completely reshaped how we order and how we plan our inventory and logistics.

Bandholz: Shifting direction, what’s your ideal customer?

Khoozani: In the beginning, I was focused on top-line revenue, which we all know is a silly number. I was trying to sell wherever I could — retail stores, gyms, home gyms, whatever. Three years ago, we looked closely at our numbers. It was clear that we needed to focus on the home gym user. That’s our bread and butter. That’s who we need to cater to. The segment has the best margins and the fewest warranty issues. So, yes, our focus is the home user.

Bandholz: Your site’s built on WooCommerce. How do you like it?

Khoozani: Good and bad. I was complaining about it just a few hours ago. I started on BigCommerce and remained on that platform for maybe five years. I can’t recall why I switched to WooCommerce. I think BigCommerce had jacked their rates. Perhaps I didn’t like dealing with them any longer. And I met a developer who sold me on WooCommerce.

Since then, it’s been a double-edged sword. I love that I can do anything with it, totally in my control. The thought of Shopify or BigCommerce dictating what I sell is not good.

WooCommerce has a bunch of functionality. It’s so far ahead. It’s a little clunkier, but it has way more features, and it’s way more cost-effective. But the downside is it takes much more maintenance. There are many more bugs. And we struggle with speed, always.

But I don’t think I’ll ever switch. We’re hoping to launch a new site in the next month or two. It should be a lot faster. If I were starting over, I probably would go with Shopify. But I’m glad we’re on WooCommerce now.

The level of sophistication we can do with WooCommerce is such a competitive advantage. For example, one of my leading products is called the garage gym builder. It walks you through choosing a bench, choosing a rack, and choosing a bar. And as far as I can tell, there’s not a good comparison to it on Shopify or other platforms. That product generates most of our revenue. We use a plugin, a WooCommerce composite builder.

Also, WooCommerce’s product bundle system is very sophisticated. It works in conjunction with WooCommerce composite.

Another great feature is quoting freight prices in the checkout process. We use freight carriers to deliver products to customers. But it’s been a pain to quote freight charges because none of those companies had a system like UPS does, for example.

Then one of our freight carriers created an open API. A developer used it to build a live freight-quoting plugin for WooCommerce. Now I can offer customers freight quoting at the checkout, which is a huge benefit.

Bandholz: Where can people learn more about you and Bells of Steel?

Khoozani: Our website is BellsOfSteel.com. To get in touch with me personally, send a message on the website. Somebody will direct it my way.

Seven Mistakes To Avoid In Your Technical Interviews

I have failed many technical interviews. Year after year would pass and I would slowly progress in my technical interviewing skills. It wasn’t until I received my dream job offer from Spotify and had passed the Google technical interviews that I realized how much I had learned over the preceding years. Finally, my studying had paid off! This was also around the time that many developers began losing their jobs due to COVID.

“If I have difficulty passing data structures and algorithms interviews with a computer science degree,” I thought, “I can’t imagine how overwhelming these concepts must be for self-taught developers.” So for the past year, I’ve made it my mission to make data structures and algorithms approachable for everyone.

I found it incredibly difficult to find one resource for learning everything about the technical interview process. From the recruiter’s phone call to the systems design interview to negotiating a job offer, there was no all-encompassing technical interview resource, so I decided to create one.

A Note About Remote Interviews

Due to the global pandemic, many companies have gone fully remote. This is great as it allows candidates across the world to apply, but this can be daunting for candidates who have little-to-no experience with online interviews.

Here are a few tips for your virtual interviews.

  • Arrive early.
    There is nothing more panic-inducing than going to join an online meeting and realizing you need to download an entire package of drivers to run the program. I recommend creating an account with the meeting application ahead of time and running a test meeting with a friend to ensure you have access to the application and feel comfortable using the online controls.
  • Use headphones.
    I always recommend using headphones for your remote technical interviews. They’ll help reduce background noise and ensure you hear the interviewers clearly.
  • Charge your computer.
    Remote meeting tools can quickly drain your computer battery, especially if you’re live coding. To combat this, have your computer plugged in for the entirety of the interview if possible.
  • Test your camera.
    While remote interviews allow us to be in a safe and familiar environment, we can often forget to remove unsavory items from the background of our video frame. I always suggest running a test meeting to check your video frame and remove the dirty laundry from the background. You can also use a virtual background for your remote interview if your background is not ideal.

The Technical Interview Process

When you begin the technical interview process with a company, your recruiter should inform you about what you should expect from the process. One reason why technical interviews are so anxiety-inducing is the lack of process standardization. A technical interview at one company can look incredibly different from a technical interview at another company. But there are some commonalities between technical interview processes that you can prepare for.

Here is a generalized version of the technical interview process that you’re likely to see in your upcoming interviews.

Recruiter Phone Interview

Your first interview will be a recruiter phone interview. During this call you’ll discuss the job, the company, and what you can expect from the interview process. Do not take this interview lightly: all interviews in the technical interview process are vital to landing you a job offer. If you don’t seem excited about the role a recruiter might not move you forward to the next phase of the process.

If you’re applying to many different job openings, I recommend keeping a spreadsheet of the roles, companies, recruiter information, and any relevant information. You should refer back to your notes prior to the recruiter phone interview to ensure you’re well-informed and leave a great impression.

Technical Screening

If the recruiter’s phone interview goes well you will likely move into a technical screening interview. This interview may be asynchronous where you don’t interact with a human interviewer and instead complete the coding challenge on a platform with a time limit, or you may have a live interviewer.

Companies typically conduct technical screenings to ensure a candidate has the baseline technical knowledge required to thrive in a role. It can be expensive to fully interview every single candidate so a technical screening is a way to reduce the candidate pool.

You will be coding in this interview so it’s important to feel confident in your foundational programming language.

Take Home Project

Some companies require a take-home coding project in lieu of a coding challenge, or in addition to a coding challenge (again, all processes are different so consult your recruiter for the specifics).

Coding projects are a polarizing topic: some candidates love them while other candidates find them unfair. On one hand, coding projects allow you to showcase your skills in a more natural environment, using the tools you love. On the other hand, these projects can be a way for a company to receive free (often unpaid) labor.

Many candidates with families, multiple jobs, or other time-consuming commitments likely don’t have the time necessary to complete a take-home coding project, which can lead to an unfair advantage for candidates without the same responsibilities.

If you’re tasked with a take-home project and do not have the time required to devote to it, you can ask the recruiter if there is an alternative. It might also be worth asking if you will be compensated for your time spent on this interview (some companies will pay you, although all of them should).

On-Site Interviews

The “on-site” interview phase is likely the last phase before ultimately receiving a job offer or a rejection. Many companies used to fly candidates to their offices for a full day of interviews, but due to the pandemic, these interviews are being held virtually.

Many candidates find the on-site interviews to be the most stressful as it requires you to take a vacation day from your current role to complete them. You will likely have three or four interviews (typically a half-day) consisting of a process/values/collaboration interview (how do you collaborate with your team, how do you resolve conflicts) and coding interviews.

The on-site interviews are stressful so remember to take breaks and decompress before each interview.

Notes On The Interview Process

The technical interview process is intense and can leave you burned out. Make sure you’re taking time to decompress after each interview and reflect on how it went. Were there interviews you struggled with more than others? If so, focus on those areas for your next interview process; some recruiters will even provide you with interviewer feedback so you can hyperfocus your studying.

You should also reflect on how you felt during the interview process. Did the interviewers make you feel safe and comfortable? Was this even a work environment you would thrive in? Remember that technical interviews are a two-way street.

Now that we’ve detailed the technical interview process, let’s dive into the seven mistakes candidates commonly make, and tips for avoiding them.

Mistake #1: Not Communicating Effectively

Technical interviews are supposed to measure your communication and problem-solving abilities, not necessarily whether you achieved the optimal, working solution to a coding challenge. Problem-solving is all about communication, but did you know that each culture has a different definition of what it means to be a “good communicator”?

There are two different types of communication:

  • Low-context
    Very explicit, redundant, and straight to the point. Messages are stated clearly and should be interpreted at face value.
  • High-context
    More ambiguous; listeners are expected to read between the lines (or “read the air”) and interpret the hidden message.

During a technical interview, it’s imperative to practice low-context communication, regardless of how you’re used to communicating. If you need a moment to think, tell your interviewer. If you need help, ask for it!

Often candidates don’t move on to the next interview phase because they failed to communicate effectively. If you think of the interview as a conversation rather than an exam, you’re more likely to communicate effectively.

Mistake #2: Not Admitting When You Don’t Know The Answer

If you don’t know the answer to something, admit it! Interviewers appreciate when a candidate is self-aware and humble enough to admit they don’t know the answer to something. It’s much better to admit you don’t know something than to “BS” your way through it.

If you’re unsure how to answer a question, you can say, “To be honest, I’m not sure. If I had to make an educated guess, I would say…” People don’t want to work with know-it-alls; they want to work with real humans who can admit they don’t know the answer.

Mistake #3: Cramming The Night Before An Interview

Let’s be honest: we’ve all crammed for an interview the night before. It’s exhausting to make time to interview but the reality is that interviewing is a skill (sadly) and it must be practiced.

Although you might feel like you’ve learned something whilst cramming the night before an interview, this learning is volatile and superficial. Cramming only encodes information into short-term memory, which means all that information you just “learned” will dissipate quickly after the interview. Thus, it’s better for your long-term memory to do a little studying in the weeks leading up to an interview than to cram the night before.

Additionally, you’re more likely to regurgitate information than actually understand it. It will become apparent very quickly if you’re just reciting information you memorized as opposed to working through a solution.

One strategy for effective learning is to use context-switching as a tool. While switching contexts in the midst of learning a new skill seems ineffective, it’s actually the most effective learning tool. When you context-switch during learning, it’s more difficult for your brain to recall information, which ultimately strengthens the encoded information and makes it easier to recall in the long run.

If you want to read more about effective learning methods here are a few resources that helped me:

Mistake #4: Memorizing Code For Algorithms & Data Structures

Candidates often feel they must memorize code for algorithms and data structures, but the reality of it is you likely won’t have to code these things from scratch. Regurgitating code is not a useful skill and your interviewer will be able to tell you’ve simply memorized a solution. Instead, you should aim to understand the process of what you’re accomplishing.

Additionally, you don’t need to learn every single sorting and searching algorithm ever invented. Instead, you can determine the optimal solution for different data structures and learn the concepts behind it. For example, if you’re asked to sort an array of integers, you might know that a divide-and-conquer algorithm like merge sort or quick sort is a great solution. If you understand the concept of how an algorithm or data structure works, you can build the solution.
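For example, here is a minimal merge sort sketch in JavaScript. The goal isn’t to memorize these exact lines but to internalize the divide-and-conquer pattern (split the array, sort each half, merge the sorted halves) so you can rebuild it from the concept:

function mergeSort(arr) {
  if (arr.length <= 1) return arr;            // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));  // sort the left half
  const right = mergeSort(arr.slice(mid));    // sort the right half
  return merge(left, right);                  // merge the two sorted halves
}

function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 2, 9, 1, 7])); // [1, 2, 5, 7, 9]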

Lastly, most coding interviews will be conducted in the foundational programming language (even if a company is looking for a React/Vue.js developer): you likely will not be asked to code using a framework or library, so make sure you’re confident in your foundational programming knowledge.

Mistake #5: Overlooking The “Cultural Fit” Interview

All interviews throughout the technical interview process are important; however, there tends to be a heavy focus on data structures and algorithms. And while data structures and algorithms are an important area to study, you should give the other interviews in the process the same attention: don’t prioritize data structures and algorithms over “easier” interviews like the collaboration and process interview.

The “culture fit” interview is meant to discern how you collaborate and handle conflicts in a team. You’ll likely receive questions such as:

“Tell me about a time a project you were working on failed. Why did it fail and how did you move forward?”

or

“Tell me about a time you had a conflict with a team member. How did you resolve it?”

Write down your responses to these questions and practice answering them out loud. You don’t want to sound rehearsed but you want to be succinct and not ramble. Keep your response to a few sentences. Additionally, eye contact and body language are important.

Try not to fidget and focus on making eye contact with your interviewer!

Mistake #6: Starting With The Optimized Solution

Unless you are 110% confident in the most optimized solution for a coding challenge, you don’t have to start with the most optimized solution. Candidates often think they have to start with an optimal solution and it trips them up. They get stuck and can’t move forward. Instead, start with a non-optimal solution and say:

“I know this isn’t the most performant solution but I would like to get a working solution and refactor it for performance later in the interview.”

Your interviewer will appreciate your honesty and your regard for performance. You’ll also be able to make progress more quickly, and in an interview, small wins can have a huge impact on your self-confidence and overall performance.

Mistake #7: Overlooking Programming Foundations

Candidates for front-end developer roles often neglect their HTML and CSS skills to prioritize JavaScript, but more and more interviews test foundational programming skills, so don’t neglect them.

We often forget the foundations and skip to the more expert-level frameworks and libraries, but this can hinder our interview performance. Interviews are conducted in the foundational languages (i.e. JavaScript, not React or Vue.js), so don’t neglect the foundations.

Conclusion

Everyone has anxiety over the technical interview process but by being mindful of these seven mistakes, you can improve your chances of landing a job offer.

Once you do receive a job offer you can decide whether or not you want to negotiate. There are many things you can negotiate: paid time off, working hours, equity, signing bonus, job title, and salary are just a few.

When negotiating a job offer it’s important to do your research. How much does someone in this role (and in this geographic location) make annually? You can use Glassdoor to do some market research.

You should also recognize that the recruiter has constraints and might not be able to get you a higher salary. Instead, you can ask for a signing bonus or equity, but be prepared for them to say they can’t increase your offer.

You should focus on “why” you should receive additional salary or benefits; what do you bring to the table that someone else won’t?

Lastly, don’t give a recruiter an ultimatum, i.e. “If you don’t give me this salary, I will walk away.” Instead, focus on the fact that you want to join the team but need an improvement/change to the offer to accept.

Here’s an example email you could use to ask for a base salary increase:

“Thank you so much for the offer. I’m genuinely thrilled and looking forward to joining the team. Before I accept the offer I’d like to discuss the base salary. I am an active member in the technical community and teach numerous courses online with X learning platforms. I know that my extensive knowledge of Y will greatly benefit the team. As such I’m looking for a base salary in the range of A to B. Please let me know if we can make this work and I’ll sign the offer right away!”

If you don’t get a job offer, don’t worry! Almost everyone gets rejected for a position at one time or another; you’re not alone! Take some time to reflect on your interviews and determine what areas you can improve for the next round of interviews.

If you want to learn more about data structures, algorithms, coding projects, culture fit interviews, systems design interviews, and more, check out my new book, “De-Coding The Technical Interview Process”. This book has been a passion of mine for the past year and has helped many developers land a job offer (including myself)!

Be patient with yourself. You can do this!


Ecommerce Product Releases: April 15, 2021

Here is a list of product releases and updates for mid-April from companies that offer services to online merchants. There are updates on live-stream shopping, eBay ad campaigns, customer experience platforms, loyalty programs, and more.

Got an ecommerce product release? Email releases@practicalecommerce.com.

Ecommerce Product Releases

Bambuser launches self-serve platform for live video shopping. Bambuser announced the launch of a live video shopping “One-to-Many” starter package, a self-service option. With the starter version, businesses can quickly incorporate shoppable live-streaming into their ecommerce strategy.

Bambuser’s “One-to-Many” live-stream shopping service.

eBay launches automated Promoted Listings campaigns. eBay sellers can now use rule-based technologies to automate how they promote new listings and adjust their ad rates. New rule capabilities make it easier for sellers to streamline how listings are added to their Promoted Listings campaigns. By selecting the “automate suggested ad rate” option, sellers can balance performance and costs by having eBay automatically adjust ad rates according to the rules that sellers set.

Talkwalker lets brands monitor mentions in podcasts. Talkwalker has added podcasts to the list of sources it covers. With Talkwalker’s new Speech Analytics technology, brands have visibility over what is being said about them in text, image, video, and audio. By including a catalog of 35,000 podcasts from a variety of platforms such as Apple Podcasts, Talkwalker is accessing a world of conversations from which professionals can benefit.

Vimeo brings the power of video to Constant Contact. Vimeo, a video platform, and Constant Contact, an email and marketing platform, have announced a strategic partnership to bring the power of video to hundreds of thousands of global marketers. Vimeo will now power scalable video creation, hosting, and management directly within Constant Contact’s marketing platform. This integration unlocks access to Vimeo’s suite of tools to help marketers reach and convert new customers with video. Constant Contact users can create and distribute marketing videos, embed GIFs in email campaigns, capture leads with contact forms and sync them to their Constant Contact accounts, and measure video performance — all from one secure dashboard.

Home page of Constant Contact.

Financing providers Clearbanc and FirePower partner to expand offerings. Clearbanc offers ecommerce and SaaS start-ups an alternative to traditional financing in the form of non-dilutive growth capital. FirePower lends from $1 million to $60 million to companies with excellent visibility into their future cash flows. Combined, Clearbanc and FirePower can invest as little as $10 and up to $60 million, with terms and conditions developed by founders for founders.

CommerceIQ expands ecommerce channel optimization beyond Amazon. CommerceIQ, a player in ecommerce channel optimization, has announced it is extending support beyond Amazon for all major online retailers, including Walmart, Instacart, Target, Costco, and Home Depot. With coverage connecting sales, marketing, and supply chain operations, CommerceIQ enables advertisers to leverage its optimization platform to drive growth. Advertisers can leverage CommerceIQ’s expanded capabilities to track and optimize sales channels, activate machine-learning-based automation, and generate cross-channel reporting from a single portal.

Squarespace acquires Tock, a reservation, takeout, and event platform. Squarespace has announced it has acquired Tock, a platform serving the hospitality industry via online reservations, table management, takeout, and events. With this acquisition, Squarespace continues the evolution of its product suite, enabling millions of worldwide businesses to build a brand and transact with their customers online.

Home page of Tock.

Threekit introduces Shop Threekit, a 3D marketplace. Threekit, a 3D and augmented reality platform, has announced the launch of Shop Threekit, a 3D marketplace. From Shop Threekit, users can visit more than 20 ecommerce stores that have enabled 3D, augmented reality, and virtual photography. With a few clicks, shoppers can configure and view, for example, their own TaylorMade SIM2 driver or custom Crate & Barrel sofa.

WeCommerce acquires Stamped, a ratings, reviews, and loyalty-program provider. WeCommerce Holdings has announced the acquisition of Stamped.io for roughly $110 million. Stamped offers a software suite that enables Shopify merchants to collect and feature customer reviews and product ratings and create their own loyalty and rewards programs.

Acquire upgrades customer support platform. Acquire has announced the release of its newest software that streamlines customer conversations. Acquire enables businesses to better serve customers by providing multiple conversation modes, including text, chat, voice, video, co-browse (a form of on-screen collaboration), and social messaging apps such as Facebook and WhatsApp. The conversational approach enables agents to speed up interactions. Acquire’s analytics and reporting capabilities further enable businesses to measure, iterate, and improve operational efficiency and customer satisfaction.

ShipBob joins Shopify Plus as Certified App Partner. ShipBob, a global cloud-based logistics platform for small and medium-sized businesses, has achieved Shopify Plus Certified App Partner status. The ShipBob platform gives Shopify Plus merchants a single view of their inventory, orders, and shipments across all sales channels and locations, in addition to advanced analytics with insights into shipping performance, inventory allocation, and fulfillment costs.

Home page of ShipBob.

How Businesses Go Carbon Neutral

Climate change is front and center in the news. How businesses can make a positive impact is not well understood. I’ll provide an overview in this post.

First, a few definitions.

  • Carbon footprint refers to the weight, usually in metric tons, of greenhouse gases, those that warm the earth’s atmosphere. Examples include carbon dioxide (CO2) and methane generated by everyday human activities.
  • Carbon neutral footprint exists when the sum of the greenhouse gas emissions is offset by natural carbon sinks or carbon credits.
  • Carbon sink is any reservoir (natural or otherwise) that accumulates and stores a carbon-containing chemical compound for an indefinite period, thereby lowering the concentration of CO2 in the atmosphere. Globally, the two most important carbon sinks are forests and the ocean.
  • Carbon offset is a counterbalance to emissions of carbon dioxide or other greenhouse gases. Offsets are measured in metric tons of CO2-equivalent. Offsets do not reduce the volume of emissions.
  • Carbon credit is a financing tool to support projects that reduce greenhouse gas emissions or recapture carbon from the atmosphere. A single carbon credit is equal to one metric ton of equivalent carbon dioxide gases.

Carbon Offset Providers

Businesses can make a positive contribution in several ways. An ecommerce company could minimize packaging materials, reducing its carbon footprint. Delivery companies can switch to electric vehicles.

Most businesses that have committed to reducing carbon do so by purchasing carbon offset projects — carbon sinks.

For merchants, Cloverly offers carbon-neutral shipping via an application programming interface that calculates and neutralizes carbon emissions on a per-transaction basis. It purchases carbon credits on behalf of companies or their customers. Using Cloverly, an ecommerce store can give its customers the option of making their deliveries carbon-neutral, usually for less than $1 per transaction, by adding that amount to their shopping cart. Cloverly offers a plugin for BigCommerce, Magento, and Shopify stores. Merchants on other platforms can integrate via the API.

Carbon Checkout, another company, lets online merchants integrate a customer contribution, usually less than $1, into the checkout process via an API.

Likewise, Pachama offers an API for businesses to incorporate carbon credits into the purchase of products and services. Pachama supports verified forest conservation and reforestation projects.

Screenshot of a page from Pachama.com showing how to purchase carbon offset credits.

Wren is a subscription service that offsets an individual’s carbon footprint. When customers sign up, they share their transportation, diet, services, and energy usage, which Wren uses to calculate their carbon footprint. Then, customers pay a monthly fee averaging $23 to offset what they emit. That money goes to one of three projects managed by established environmental organizations. While Wren targets individuals, not businesses, it provides a plan that allows businesses to offer Wren as an employee benefit.

Other companies that offer carbon plans to businesses and individuals include:

Direct air capture (DAC) is another method of taking carbon out of the environment. It uses chemical reactions to capture CO2. Air moves over these chemicals, which selectively react with and remove CO2, allowing the other components of air to pass through. These chemicals can take the form of liquid solvents or solid sorbents (materials that absorb gasses), which make up the two types of DAC systems in use today.

Once the carbon dioxide is captured, heat is typically applied to release it from the solvent or sorbent. This regenerates the solvent or sorbent for another cycle of capture. The collected CO2 can be injected underground for permanent storage in certain geologic formations or used in various products and applications. Permanent storage results in the biggest climate benefit. Swiss company Climeworks and Canadian firm Carbon Engineering are leaders in the DAC market.

Direct air capture, such as this example from Climeworks, takes carbon out of the environment using chemical reactions to capture CO2. Air moving over these chemicals removes CO2, allowing the other components of air to pass through. Image: Climeworks.

Committed Companies

In 2019 Etsy became the first global ecommerce company to offset 100 percent of its emissions from shipping. For every customer purchase, Etsy automatically purchases a verified offset.

Luxury ecommerce merchant Farfetch intends to offset the carbon footprint of all its deliveries and returns as part of its Climate Conscious Delivery program. The company says 85 percent of its emissions are related to shipping and returns. Projects include planting and protecting forests in the U.S. and Brazil. Farfetch is also using more efficient packaging as well as shipping more products in bulk, lowering its carbon footprint.

In 2012 Microsoft implemented an internal tax on all its divisions to make them responsible for reducing carbon emissions. Microsoft recently doubled its internal carbon fee to $15 per metric ton on all carbon emissions.

Microsoft’s goal is to be carbon negative by 2030, partly by cutting its carbon emissions in half. This will likely require Microsoft to procure millions of metric tons of carbon removal. In July 2020, Microsoft issued an RFP to procure, in 2021, 1 million metric tons of nature- and technology-based carbon removal.

The company received proposals from 79 applicants for 189 projects in over 40 countries. Microsoft then purchased 1.3 million metric tons of carbon removal from 26 projects worldwide at an average price of $20 per metric ton.

How To Get Web Design Clients Fast (Part 2)

In part 1, we explained how to use a monthly recurring revenue (MRR) model to grow your web design business. In this second part, we’ll explain how to use proven sales techniques to keep scaling your business profitably.

If you’re an agency owner, you know that you need customers to grow. No matter how big your dreams are, customers are the lifeblood of your business. But you’re probably wondering — how do you attract quality, high-paying clients?

We started our design agency from zero. Two and a half years later, that same business generated $50,000 USD in monthly revenue, and today, it’s many times that size and still growing — all thanks to the sales techniques you’re about to read.

The secret to any successful company is sales, and that applies to design businesses too. Some people are worried about their lack of experience, especially since real-world sales techniques aren’t taught in school. But don’t worry. Sales savvy is like anything else — a skill that you can learn. If you’re ready to learn how to get web design clients fast, keep reading.

How To Set (and Reach) Ambitious Sales Goals

To set a sales objective, choose a target monthly recurring revenue number and a deadline. You can base this on your ideal income or what you currently make with one-off clients. For example, your goal could be earning $7,000 USD per month within 24 months after you kick off. Then divide that figure by your average price. So if you charge $100 per month, you’ll need 70 customers.

When you start, you’ll probably convert about 2–3% of your leads, which means contacting roughly 33–50 people for each new customer. So a goal of 70 customers for $7,000 USD per month means reaching somewhere around 2,300–4,600 leads, allowing for an even lower conversion rate while you’re still learning. (This number may be higher or lower depending on your sales skills and lead quality.)

Thousands of leads probably sounds like a lot! But it’s manageable if you break it down. Each month, you’ll need to contact about 100–200 leads. If you work Monday–Friday, that’s just 5–10 leads a day. Stick with that goal and have an accountability system to track how well you’re doing.
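If it helps to see that arithmetic in one place, here’s a quick back-of-the-envelope calculation using the example numbers above and the optimistic 3% end of the conversion range (a lower rate pushes you toward the top of the lead ranges mentioned earlier):

// Back-of-the-envelope sales math using the example figures from this section.
const targetMRR = 7000;          // goal: $7,000 USD per month
const pricePerClient = 100;      // $100 per month per client
const conversionRate = 0.03;     // assume ~3% of leads become customers
const months = 24;               // deadline
const workdaysPerMonth = 21;     // roughly Monday–Friday

const clientsNeeded = targetMRR / pricePerClient;                 // 70
const leadsNeeded = Math.ceil(clientsNeeded / conversionRate);    // ~2,334
const leadsPerMonth = Math.ceil(leadsNeeded / months);            // ~98
const leadsPerDay = Math.ceil(leadsPerMonth / workdaysPerMonth);  // ~5

console.log({ clientsNeeded, leadsNeeded, leadsPerMonth, leadsPerDay });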

Focus on hitting those lead goals every day or week, even if you don’t see immediate results. Sometimes you’ll close a prospect the same day, but it will more likely take a few days or even weeks of follow-up, explanations, and demos before you finally win them over.

If you don’t work consistently on your goals, it will be frustrating down the line. If you pitch 40 prospects the first week, then 5 the next week, then 15, then 40 again, you’ll have a patchy funnel and inconsistent growth. Put in consistent work, and you’ll see continual progress that will snowball over time.

Once you have your goal set, where should you look for those MRR clients? Here are the best strategies we’ve learned.

Nine Places To Find Web Design Clients

When you’re just starting, you should try different methods to get clients. As you gain more experience, you’ll learn where to focus your efforts, and you’ll get better at converting those clients. Cold pitching a potential client might work best for you, while digital marketing does well for someone else.

1. Use Personal Connections

Chances are, you already know someone who could become a new web design client—or you know someone who knows someone. Share what you’re doing with friends, family, neighbors, and especially any local business owners you know.

You never know which referral might get you another client.

2. Sell With Your Website

Do you want a salesperson that is always working, never gets tired, and can sell to thousands of clients at once? Then you’ll want to make sure your current website is at its very best. If you’re using a basic theme, switch to a modern custom design. Web design clients will judge your design skills by the quality of your own site, so make sure it’s always looking good.

For our agency, we’re continually improving our website to keep it up-to-date and modern. We also include a portfolio of sites we’ve designed so that prospects can see the kind of quality we offer.

3. Ask For Referrals

You’ve worked insanely hard to get the customers you have. Why not leverage your trust with them for even more profit and sales? Ask a happy client to tell their hairdresser, favorite restaurant, plumber, dentist, lawyer, and other local businesses. Then check up on those leads and convince them to hire you as a web designer.

Remember, referring a friend is the best way past clients can thank you. To get referrals, you’ll need to ask! As a bonus, thank your customers or friends for a referral. A surprise gift for a referral goes a long way.

Some referral gifts we recommend are:

  • 10% off your next site update,
  • Free website health check,
  • One month free of charge,
  • $100 Amazon voucher.

4. Partner With Other Businesses

Another strategy to grow your client list is to partner with related businesses, like SEO firms or ad agencies. When you can find a great company in a related but non-competitive niche, reach out and form a partnership. You recommend clients to them, and they’ll recommend clients to you.

Everyone wins. Your customers get helpful services, and both of you will benefit from the referrals you share.

5. Use Content Marketing

You can also use inbound marketing to attract customers to you with content instead of going to them. Blogging on your own site gives you credibility, especially if you focus on writing about solutions for the biggest problems your clients have. New customers already see you as the expert because they’ve read a blog post. Write articles that cover the basic principles of building an online presence and growing a customer base.

The second strategy is guest posting. For example, you can write about best practices for a restaurant website and post them on a blog where restaurant owners get the latest news for their business. Educational content establishes you as an authority and opens you up to a new audience eager to learn about their industry. Writing for other sites has helped us a lot — you’re reading one of those articles now!

Note: We go into more detail on using content marketing in our free guide to finding web design clients.

6. Post On Social Media

We’ve seen success promoting our content on social media. The two that have worked the best for us have been Facebook and LinkedIn, but feel free to experiment with others. Various industries will have a preferred social media platform, so learn about this for your niche and target accordingly.

Organic social media works best as a part of your strategy alongside other methods. It might not bring in leads itself, but a strong social media presence helps convert potential clients who need a good reason to choose you. If you’re doing well on social networks, it can help with that decision-making process and close the deal.

The most important content you can share solves your customer’s problems. And it isn’t just about selling — think of how to teach your customers to take advantage of new digital technologies. For example, you can teach restaurants how to set up a QR code for a digital menu. In addition to helpful content, we recommend sharing sites you’ve designed and using hashtags your target customer will recognize. But make sure to keep your feed professional — don’t post pictures of what you ate for breakfast!

7. Test Paid Ads

The reality is that you won’t keep growing with free methods after a certain point. That’s why we recommend using paid ads as you grow. We’ve used various platforms, from Google Ads to job boards. We’ve also seen a lot of success in offering an email newsletter with multiple opt-ins.

You can also try Facebook Ads and a more complex sales funnel system, complete with a landing page to collect web design leads. Paid ads have brought in lots of new customers for us.

8. Build A Network

An effective way of getting new clients is by building your professional network. First, connect with other founders in person. If you’re not already involved in your local community of business leaders, start as soon as possible. You’ll get valuable advice and business contacts that can lead to more sales in the future.

One of the best places to do this is networking events, like local community business leader meetups. You’ll meet lots of potential clients and get leads for many more. Don’t pitch these contacts, just build relationships. Care about their business and learn what they’re looking for. When they need a website, they’ll know who to turn to.

As the world has gone remote, look for virtual events as well. Check out local business leader Facebook groups, digital summits, and other opportunities to connect remotely.

9. Do Cold Outreach

Last but not least is cold outreach. You’ll need to research a target audience, find a potential client, and reach out with a phone call introducing yourself. Cold outreach has been the main way we’ve built our agency. It’s a lot of hard work, but the results speak for themselves!

The best way to make a sale is by positioning a business website as the solution to a challenge your prospect faces, like restaurants wanting new customers or losing foot traffic to national chain competitors.

We’ll go into cold outreach more in the next section, but these three principles are a great starting point:

  • Build rapport with your prospect.
    Know their name and understand their business, and always look for a personal connection. Honestly care about their success.
  • Be an expert.
    Asking insightful questions is a great way to demonstrate expertise without showing off. Help your prospect consider new opportunities in their business that they wouldn’t have thought of without you.
  • Get a commitment.
    Before you hang up the phone, do your best to get the prospect either to close or to agree to a follow-up conversation.

With these points in mind, you can use the following script to make the sale.

Our Most Effective Sales Strategy

We’ll walk through the template we’ve used to convert hundreds of cold leads into happy customers. This successful sales technique boils down to five key steps.

Step 1: Build Rapport And Understanding

Before you jump into a sales pitch, show you care about the business owner and want them to succeed. Start by introducing yourself with your name. Make sure you’re talking to the owner or decision-maker before moving on.

Next, draw a connection to their business—the more personal, the better. Maybe you ate at the client’s restaurant recently, saw one of their delivery vans, or found them on the internet (this neutral intro always works if you don’t have anything specific to point out).

Here’s a version of the script we might use:

Hi, it’s Dave Smith speaking!

Am I speaking to Lisa Samuelson? Great!

Some friends had dinner at Lisa’s Diner a few weeks ago and gave you very high praise.

Step 2: Create Demand By Showing How You Can Help

Your goal here is to offer a way to bring in new paying customers without extra work. Who wouldn’t take you up on that deal? Most of the time, business owners don’t want a website—they want the results a website will bring, like better visibility, high search rankings, more customers, more job applicants, and so on.

You can develop your versions of the following and include a relevant case study from a previous client. For example, a painter specializing in complete house exteriors might tire of requests for small interior jobs. A specialized website can filter their prospects and bring them better business.

Here’s a basic script our team has developed:

Well, Lisa, I run a firm here in CITY that helps business owners become more successful in the digital world with high-quality, full-service websites.

We realized most business owners don’t have the time or tech skills to build and maintain their own website. As a result, they have an outdated site or no site and lose potential customers every day.

We believe business owners should focus on their business. We handle every part of your site, from updates to domain, hosting, email, and even search engine optimization if you want.

Step 3: Show Why You’re The Best Option

Next, you’ll need to show why the prospect should choose you. Cover the advantages of the recurring revenue model here and explain your fees: you deliver top-quality, modern websites combined with outstanding service, all at affordable prices.

Here are the best talking points you can use:

We use a technology platform that allows us to deliver top-quality, modern websites combined with outstanding service, all at affordable prices.

Unlike traditional agencies or web designers, you don’t pay us thousands upfront, only to get a website to maintain on your own that will be technically outdated in two years.

For a one-time setup fee of $499 USD and a monthly charge of just $99 USD, we’ll create a professional site, update the content, do technical maintenance, keep your domain name current, host the site, and keep your email accounts running.

We offer a 20% discount on the monthly fee when you’re billed annually.

Step 4: Tailor Your Pitch To Their Business

The next step is to understand their business and show you care about it. The more you find out about the client’s business and problems, the better you’ll be able to tailor your sales pitch!

Here are the best types of questions to use and how to show how a website will help:

  • What is the greatest challenge in your industry/for your business?
    However they respond, explain how a website will help! You can help them find employees, acquire customers, and stand apart from the competition.
  • Who is currently responsible for your website/web presence?
    Most of the time, it’s not in the hands of a professional. Ask questions that show why this is a problem, such as what their backup plan is in case of a server crash, or how they keep the site updated for newer devices, standards, and best practices. Explain that your team has experience handling website problems and will always deal with them professionally.
  • Do you know how many visits your current website has?
    If they do, show what you can do to increase this. If not, explain how your site will provide them with valuable data to find more customers and grow their business.
  • Do you know what percentage of customers in your industry are on mobile devices?
    Find out this number in advance. If the prospect’s website isn’t mobile responsive, point out that they’re missing out on a considerable number of customers.

Gathering data upfront from your customer and asking the right questions will show that you are a pro. You’ll demonstrate that you really care and thus build trust.

Step 5: Close The Sale

The most important part of the sales process is closing. Move the prospect to make a firm commitment to start working with you. If they aren’t ready to start immediately, offer a smaller next step, like scheduling a later meeting or sharing testimonials. Always make sure a decision-maker is participating in the next meeting!

Up next, we’ll look at some closing strategies that can help you seal the deal with clients.

Proven Closing Strategies To Finalize The Sale

When you reach the end of a call with a potential client, your job is simple—get them to pay for your web design services. But while the idea is simple, getting a prospect to sign up can be very difficult in practice. To help, here are some techniques we’ve used to close more deals faster.

Share References And Portfolio Pieces

One of the best ways to convince a prospect is by showing them a previous site you’ve designed for a similar client or letting them talk to a current client of yours. Keep portfolio sites for the various verticals you target, like salons, restaurants, dentist offices, and the like. With permission, you can also share the contact information of a current happy customer.

Design First, Charge Later

One technique that worked well for us at the beginning was doing web design first, then charging later. Charging later works best if you don’t have an extensive portfolio or are branching into a new web design niche without relevant work samples. (For example, if you have a dozen restaurant websites but want to land a new hairdresser client.)

To use this strategy, you’ll first design a draft of the new website. Then if the client likes it, they’ll pay the upfront design fee and move forward. This strategy involves more work for you upfront, but it proves to the client that you can build great sites and understand their business. And if they don’t like the website? Not to worry—you’ve created a portfolio piece you can use for another customer down the road.

Waive The Setup Fee

Another strategy you can use is waiving the setup fee. This fee can be a significant barrier for many new clients since they have to pay $500 USD (or whatever your setup fee is) before seeing results. Instead, just charge your monthly recurring payment. You’ll make less money in the short term, but you’ll be more likely to win over an ideal client to stay with you for a while.

If you don’t want to design a site for free, as in the previous suggestion, this is a good middle ground: the client gets a great site with less risk, and you still get paid for your work.

Show Your Process

You can also build trust by showing your web design process, from draft to design to publication. Doing this as the final stage before you ask for a sale can help create confidence in the prospect’s mind about what you have to offer. People don’t trust what they don’t understand, so show the steps and build trust.

Automatic Payments

This tip applies once you close a sale and want to make sure you still get paid every month: use automatic billing. If you have to ask for payment every month, it’s a constant reminder of what they’re paying. But if you have a credit card on file or use a payment processor that charges your clients automatically each month, you can count on steady, regular cashflow.

It’s also a timesaver for everyone—your client doesn’t have to spend time paying yet another bill, and you can rest easy knowing you don’t have to follow up for a missed payment.
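Most payment processors make this easy to set up. As a minimal sketch, assuming you bill through Stripe using its official Node library (the API key, payment method, and price ID below are placeholders, not real values), automatic monthly billing could look roughly like this:

```ts
import Stripe from "stripe";

// Placeholder key — use your own secret key from the Stripe dashboard.
const stripe = new Stripe("sk_test_your_key_here");

async function startMonthlyBilling(email: string, paymentMethodId: string) {
  // Create the customer and attach their card as the default payment method.
  const customer = await stripe.customers.create({
    email,
    payment_method: paymentMethodId,
    invoice_settings: { default_payment_method: paymentMethodId },
  });

  // Subscribe them to a recurring price (e.g. a hypothetical $99/month plan
  // created in your Stripe account); the card is then charged automatically
  // at the start of each billing cycle.
  return stripe.subscriptions.create({
    customer: customer.id,
    items: [{ price: "price_your_99_per_month_plan" }],
  });
}
```

Once the subscription is active, the card on file is charged every billing cycle without anyone sending or paying an invoice by hand.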

Teach And Build A Relationship

If all else fails and the perfect prospect doesn’t want to sign up at the last minute, never burn the bridge. Don’t let the rejection get to you. Remember that you’re a website expert, but stay friendly, accessible, and willing to help prospects understand what’s going on.

Take the chance to explain what a client might want to look for if they decide to launch a website later. Explain what features are most important based on your knowledge. If a client doesn’t want a website now, there still may be opportunities in the future. Build trust, strengthen the relationship, and play the long game.

Now It’s Your Turn To Find Web Design Clients

Over the last few years, we’ve been privileged to work with so many incredible clients—all following the ideas and suggestions outlined above. The real secret was, of course, putting in hard work and focusing on growth goals. The sales techniques mentioned above helped us then convert those prospects into paying customers.

We also used internal software, which we recently released as Sitejet, to speed up the process and become more profitable. We designed Sitejet to help agencies grow with MRR clients by cutting site creation time by as much as 70% and streamlining client interactions. It’s built to help designers grow their businesses and win back time for what they love: being creative.

Anyone can successfully grow their design agency. As we explained in the first part of this series, starting takes motivation and an effective pricing model and mindset. And as we shared in this second part, growth comes once you combine proven techniques with lots of hard work! Good luck—and we can’t wait to hear your stories in the comments to this article!

From Cats With Love: New Navigation, Guides And Workshops

Not many people know that the entire Smashing Family is a very small team: just 15 wonderful people working day-to-day on everything from the magazine and books to front-end and design. At times it might feel like quite a bit of work, but we do our best to stay well-organized and productive, while working (well, mostly) 100% remotely for almost a decade now.

In fact, we’ve been quite busy over the last few months. We’ve been running our online workshops, redesigning our navigation, refactoring a number of components, refining performance, and improving accessibility. There are more subtle UX changes coming, and we’d love to share what we’ve been cooking. Settle in.

Upcoming Online Workshops

We’ve run 40 workshops with 2,600 attendees so far, and we’ve learned how to run a workshop where you, dear readers, learn best. So for the coming months, we’ve set up a full schedule on front-end and design, from web performance to interface design. Jump to all workshops ↬

Workshops in April–May and June–July

Meet our friendly front-end & UX workshops. Boost your skills online and learn from experts — live.

No pre-recorded sessions, no big-picture talks. Our online workshops take place live and span multiple days across weeks. They are split into 2.5h sessions, plus you’ll get all workshop video recordings, slides, and a friendly Q&A in every session. (Ah, you can save up to 25% with a Smashing Membership, just sayin’!)

New Navigation (Beta Testing)

With so many articles on the site, finding the right one can be difficult. So over the last few weeks, we’ve been going through 3,500 articles, manually refining and standardizing the underlying taxonomy of our posts. You might have been there as well: dealing with articles accumulated over 15 years wasn’t quite easy.

That was quite an exercise in patience and hard work, but now we are happy to roll out the new navigation, with important navigation options surfaced prominently across the entire site. Hopefully, you’ll find the new navigation (at the top of this page, too) more useful.

Please leave a comment if you spot any bugs, mistakes, or perhaps something important missing — we’ll do our best to fix it and deploy right away.

New Evergreen Guides (Beta Testing)

We have also rolled out a new article format: evergreen guides. These are curated collections of articles, tutorials, tools, and resources that we keep updating regularly. There are a few more of those coming up, and they should be a reliable source of techniques and tools.

Here’s what we’ve published so far:

You can also access the guides from the new Smashing Magazine front page, although some UI/UX changes will be coming there as well. Feedback? We are listening on Twitter, of course.

Join Our Free Online Meet-Up (Apr 27)

We’re getting closer and closer to our free online meetup on April 27, and we’d be honored and humbled to welcome you there. During the meetup, we’ll be running a live website makeover of the Powercoders NGO.

Tickets are absolutely free. So, if you don’t have one yet, please check out the details, speakers, schedule, and time zones, get your ticket today, mark your calendar, and invite your friends and colleagues to join in.

Thank You!

We are very committed to improving Smashing in every possible way, and we are working hard to do just that behind the scenes. We’d sincerely appreciate you recommending our little site, our articles and workshops to your friends and colleagues — and we hope that they will help you boost your skills and the quality of your work.

A sincere thank you for your kind, ongoing support and generosity — thank you for being smashing, now and ever. Ah, and subscribe to our newsletter — we have plenty of new announcements coming up soon! 😉