Will the Creator Economy Embrace Ecommerce?

As the so-called creator economy evolves and expands, ecommerce could become a significant revenue channel, helping to create new business models and drive the development of new commerce platforms.

The terms “creator” and “creator economy” are a little ambiguous, perhaps because the idea of a creator is relatively new. For example, writer and consultant Hugo Amsellem of Arm the Creators uses a Venn diagram to define creators, placing them at the intersection of individuality, leverage, and rebellion.

Another description is to imagine an individual who creates original content, publishes it in some manner (typically online), and derives from it either a part-time or full-time income.

Thus, “creator” could describe someone such as journalist and author James Clear, who used his popular email newsletter to create a massive following and eventually promote his best-selling book, “Atomic Habits.”

This description would also fit musician and music producer Rick Beato, who uses his popular YouTube channel — with 2.54 million subscribers at the time of writing — to sell courses, books, and branded merchandise as well as generate ad revenue from views.

Screenshot of Rick Beato's YouTube page

Rick Beato is a creator. He has a YouTube channel with millions of followers and earns income from his content and associated commerce channels.

If Clear and Beato are examples, the creator economy may describe their businesses and the platforms and tools needed to support those businesses.

Creator Boom

“We’re just seeing the tip of the iceberg of the creator economy,” said Ricky Ray Butler, CEO of marketing agency BEN Group, according to an Insider Intelligence report from June 2021. “It’s growing faster than ever, and there are also more creators now than before. That opens the door to a lot of opportunities and even more decentralization of content.”

The report went on to say that some 50 million folks worldwide consider themselves to be a creator, and at least 220 companies exist to serve creators specifically.

Many of these companies are aimed at helping creators monetize their content and associated products.

Nice at the Top

Earning money as a creator, however, is difficult for most.

“Just last year, YouTube creator David Dobrik’s monthly AdSense checks from the platform were $275,000 for an average of 60 million views. On Substack, the top 10 creators are collectively bringing in more than $7 million annually. Charli D’Amelio — who recently became the first TikTok creator to surpass 100 million followers — is estimated to be worth $4 million at age 16. She started on TikTok just a year and a half ago,” wrote Li Jin in a December 2020 article for the Harvard Business Review. Jin is the founder and managing partner of Atelier, a venture capital firm focused on lowering the barriers to entrepreneurship and broadening economic opportunities.

“But while some have been propelled to massive stardom, examples of a wide swath of the population achieving financial security from these platforms are few and far between. The current creator landscape more closely resembles an economy in which wealth is concentrated at the top.

“On Patreon, only 2% of creators made the federal minimum wage of $1,160 per month in 2017. On Spotify, artists need 3.5 million streams per year to achieve the annual earnings for a full-time minimum-wage worker of $15,080,” wrote Jin.

As it evolves, the creator economy will need a middle class, according to Jin, for most creators to earn a steady income sufficient to live on.

Commerce

Jin goes on to suggest a list of 10 strategies aimed at encouraging a middle class of creators. These strategies mostly focus on changes to monetization policies, creator platforms, and the availability of capital.

Jin’s suggestions make good sense, but some creators are already supplementing the income they earn from royalties and ad sharing with commerce.

Like Clear and Beato above, creators can sell everything from songs to ebooks to online classes. And in some cases, their commerce endeavors could earn more than advertising and royalties.

This fact alone has the potential to influence how the creator economy evolves. It may be easier for creators to add a downloadable ebook or an ear training course (one of Beato’s products) than to wait for YouTube, Instagram, or a future platform to make advertising and algorithms more egalitarian.

This relative ease, however, is not limited to transaction-based business models and platforms.

Likely, more providers will emerge aimed at the convergence of content and commerce. These solutions could fill gaps in existing options. Perhaps it involves adding content hosting and sharing to an existing platform — imagine Shopify becoming a much better content management system or Amazon buying and incorporating a social media platform.

Or it might be a completely new platform.

Ecommerce Briefs: Amazon v. Walmart, Prime Members, E.U. Sellers

“Ecommerce Briefs” is my occasional series on news and developments that impact online merchants. In this installment, I’ll focus on Amazon’s omnichannel growth and Prime membership, as well as acquisitions of third-party Amazon sellers in Europe.

Amazon Overtakes Walmart

Amazon’s revenue was expected to surpass Walmart’s next year, but The New York Times reported it has already occurred.

The Times disclosed that consumers spent more than $610 billion at Amazon over the 12 months ending in June, based on data compiled by the financial research firm FactSet. In contrast, consumers spent $566 billion at Walmart during the 12 months ending in July.

While its ecommerce sales rose sharply during the pandemic, Walmart could not keep up with Amazon, and its online sales have sagged in the past few months as customers return to physical stores.

Prime Membership Growth

As of the beginning of 2021, Amazon reported 200 million global Prime members, up from 150 million at the end of the 2019 fourth quarter, a growth rate of 33%. Prime is available in 22 countries. While Amazon does not reveal the number of U.S. Prime members, Consumer Intelligence Research Partners estimates 147 million members in the U.S. as of March 2021, an increase of 25% from March 2020. Growth in Prime memberships had been stagnant until Covid-19 hit. Global Prime membership fees totaled $25.2 billion in 2020, a 31.2% increase over 2019.

According to CIRP, the 2020 holiday season marked the first time in four years that annual Prime memberships increased in the fourth quarter. Previously, many shoppers took advantage of the monthly membership option during the holiday season and then dropped it afterward. That did not occur last year due to Covid-19.

Acquirers of Sellers in Europe

I wrote in December about aggregators purchasing FBA third-party sellers. Since then, buyouts have swelled. Venture capital firms and lenders are pouring money into the acquirers as they expand geographically.

In May, U.S.-based acquirer Perch announced it had closed a $775 million Series A funding round, bringing total funding to more than $900 million. Perch intends to use some of the money to develop a platform to identify acquisition candidates. In July, Perch announced it had hired former Amazon employee Rahul Shewakramani as head of mergers and acquisitions in Europe.

Perch’s portfolio includes more than 70 brands. Roughly 35% sell into the E.U. and other international markets. Perch has committed an additional €300 million to acquire more European sellers.

U.S.-based competitor Thrasio decided to expand into Europe at the same time, focusing on Germany and the U.K. Georg Hesse, Thrasio’s new vice president for the two countries, is a former Amazon employee in Germany.

Thrasio had a portfolio of more than 100 brands in 2020 and was purchasing an average of two to three companies per week earlier this year. In February, it received another $750 million in venture funding. With the additional capital, Thrasio seeks to acquire businesses generating up to $200 million in revenue, an increase from a previous ceiling of $40 million.

Both companies will be competing with home-grown aggregators in Europe, including Berlin-based firms SellerX and Razor Group. Germany is a hub for European FBA aggregators.

In August, SellerX secured a €100 million Series B round, bringing its total funding to almost €250 million in debt and equity in its first year of operation. SellerX owns 30 brands. Razor Group has raised €362 million, mostly debt, since August 2020. London-based Heroes has raised $265 million, with $200 million coming last month as debt. Heroes will use the money to purchase more brands.

Another German company, Berlin Brands Group, did not start as an aggregator but is now moving into the space. It has acquired 14 companies. This year alone, it has secured $940 million via debt and private equity to purchase direct-to-consumer brands, focusing on the U.S. and the U.K. Our podcast recently featured the managing director of Berlin Brands.

Using SWR React Hooks With Next.js’ Incremental Static Regeneration (ISR)

If you’ve ever used Incremental Static Regeneration (ISR) with Next.js, you may have found yourself sending stale data to the client. This occurs when you are revalidating the page on the server. For some websites this works, but for others (such as Hack Club’s Scrapbook, a site built by @lachlanjc that I help maintain), the user expects the data to be kept up to date.

The first solution that comes to mind may be to simply server-side render the pages, ensuring that the client is always sent the most up-to-date data. However, fetching large chunks of data before rendering can slow down the initial page load. The solution used in Scrapbook was to use the SWR library of React hooks to update the cached page from the server with client-side data fetching. This approach ensures that users still have a good experience, that the site is fast, and that the data is kept up to date.

Meet SWR

SWR is a React Hooks library built by Vercel; the name comes from the term stale-while-revalidate. As the name suggests, your client will be served stale/old data while the most up-to-date data is being fetched (revalidated) through SWR on the client side. SWR does not just revalidate the data once, however: you can configure it to revalidate the data on an interval, when the tab regains focus, when a client reconnects to the Internet, or programmatically.

When paired with ISR and Next.js’ API routes, SWR can be used to create a responsive user experience. The client is first served the cached, statically generated page (generated with getStaticProps()); in the background, the server also begins the process of revalidating that page (read more here). This process feels fast for the client, who can see a set of data right away, even though it may be a touch out of date. Once the page is loaded, a fetch request is made to a Next.js API route of yours which returns the same data as what was generated with getStaticProps(). When this request is completed (assuming it was successful), SWR will update the page with this new data.

Let’s now look back at Scrapbook and how this helped solve the problem of having stale data on the page. The obvious thing is that now the client gets an updated version. The more interesting thing, however, is the impact on the speed of our site. When we measure speed through Lighthouse, we get a speed index of 1.5 seconds for the ISR + SWR variant of the site and 5.8 seconds for the server-side rendering variant (plus a warning regarding initial server response time). That’s a pretty stark contrast between the two (and it was noticeable when loading the pages up as well). But there is also a tradeoff: on the server-side rendered page, the user didn’t have the layout of the site change after a couple of seconds as new data came in. While I believe Scrapbook handles this update well, it’s an important consideration when designing your user’s experience.

Where To Use SWR (And Where Not To)

SWR can be put to use in a variety of places; here are a few site categories where SWR would make a great fit:

  • Sites with live data that require updating at a rapid pace.
    Examples of such sites would be sports score sites and flight tracking. When building these sites, you’d look to use the revalidate on interval option with a low interval setting (one to five seconds).
  • Sites with a feed style of updates or posts that update in realtime.
    The classic example of this would be the news sites which have live blogs of events such as elections. Another example would be the aforementioned Scrapbook as well. In this case, you’d also likely want to use the revalidate on interval option but with a higher interval setting (thirty to sixty seconds) to save on data usage and prevent unnecessary API calls.
  • Sites with more passive data updates, that people keep open in the background a lot.
    Examples of these sites would be weather pages or, in the 2020s, COVID-19 case number pages. These pages don’t update as frequently and therefore don’t need the constant revalidation of the previous two examples. However, it would still enhance the user experience for the data to update. In these cases, I would recommend revalidating the data when the tab regains focus and when a client reconnects to the internet. That means if a person anxiously returns to the tab hoping there has only been a small increase in COVID cases, they’ll get that data quickly.
  • Sites with small pieces of data that users can interact with.
    Think of the YouTube Subscribe button: when you click subscribe, you want to see that count change and feel like you’ve made a difference. In these cases, you can revalidate the data programmatically using SWR to fetch the new count and update the displayed amount. (A quick configuration sketch for these scenarios follows this list.)
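To make these options concrete, here is a rough sketch of how they map onto the useSWR hook. The endpoints, component, fields, and intervals below are placeholders for illustration, not part of any real site:

import useSWR, { mutate } from 'swr'

const fetcher = (url) => fetch(url).then((res) => res.json())

// Passive data (e.g. weather): refresh every 60s, plus on focus/reconnect
function Weather() {
  const { data } = useSWR('/api/weather', fetcher, {
    refreshInterval: 60000,       // lower this (1–5s) for live scores
    revalidateOnFocus: true,      // re-fetch when the tab regains focus
    revalidateOnReconnect: true,  // re-fetch when the connection returns
  })
  return <p>{data ? data.summary : 'Loading…'}</p>
}

// Interactive data (e.g. a subscribe button): revalidate programmatically
async function onSubscribe() {
  await fetch('/api/subscribe', { method: 'POST' })
  mutate('/api/subscriber-count') // ask SWR to re-fetch this cache key
}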

One thing to note is that these options can all be applied with or without ISR.

There are, of course, some places where you won’t want to use SWR, or where you’ll want to use SWR without ISR. SWR isn’t much use if your data isn’t changing or changes very rarely; in those cases it can clog up your network requests and use up mobile users’ data. SWR can work with pages requiring authentication, however you will want to use server-side rendering in these cases and not Incremental Static Regeneration.

Using SWR With Next.js And Incremental Static Regeneration

Now that we’ve explored the theory of this strategy, let’s explore how we put it into practice. For this, we’re going to build a website that shows how many taxis are available in Singapore (where I live!) using this API provided by the government.

Project Structure

Our project will work by having three files:

  • lib/helpers.js
  • pages/index.js (our frontend file)
  • pages/api/index.js (our API file)

Our helpers file will export a function (getTaxiData) that will fetch the data from the external API and return it in an appropriate format for our use. Our API file will import that function and set its default export to a handler function that calls getTaxiData and returns the result. This means sending a GET request to /api will return our data.

We’ll need this ability for SWR to do client-side data fetching. Lastly, in our frontend file we’ll import getTaxiData and use it in getStaticProps; its data will be passed to the default export function of our frontend file, which will render our React page. We do all this to prevent code duplication and ensure consistency in our data. What a mouthful; let’s get started on the programming now.

The Helpers File

We’ll begin by creating the getTaxiData function in lib/helpers.js:

export async function getTaxiData(){
  // The API responds with a GeoJSON FeatureCollection; the values we need
  // live in the first feature's properties.
  let data = await fetch("https://api.data.gov.sg/v1/transport/taxi-availability").then(r => r.json())
  return {
    taxis: data.features[0].properties.taxi_count,
    updatedAt: data.features[0].properties.timestamp
  }
}

The API File

We’ll then build the handler function in pages/api/index.js, importing the getTaxiData function as well:

import { getTaxiData } from '../../lib/helpers'
export default async function handler(req, res){
  res.status(200).json(await getTaxiData())
}

There isn’t anything here unique to SWR or ISR, besides the aforementioned project structure. That stuff starts now in index.js!

The Front-End File

The first thing we want to do is create our getStaticProps function! This function will import our getTaxiData function, use it and then return the data with some additional configuration.

export async function getStaticProps(){
  const { getTaxiData } = require("../lib/helpers")
  return { props: (await getTaxiData()), revalidate: 1 }
}

I’d like to focus on the revalidate key in our returned object. This key is what enables Incremental Static Regeneration. It tells your host that regenerating the static page is allowed as often as once every second; that regeneration is then triggered in the background when a client visits your page. You can read more about Incremental Static Regeneration (ISR) here.

It’s now time to use SWR! Let’s import it first:

import useSWR from 'swr'

We’re going to be using SWR in our React-rendering function, so let’s create that function:

export default function App(props){
}

We’re receiving the props from getStaticProps. Now we’re ready to set up SWR:

const fetcher = (...args) => fetch(...args).then(res => res.json())
const { data } = useSWR("/api", fetcher, {initialData: props, refreshInterval: 30000})

Let’s break this down. Firstly, we define the fetcher. This is required by SWR as an argument so that it knows how to fetch your data, given that different frameworks etc. can have different setups. In this case, I’m using the function provided on the SWR docs page. Then we call the useSWR hook with three arguments: the path to fetch data from, the fetcher function, and an options object.

In that options object, we’ve specified two things:

  • the initial data
  • the interval at which SWR should revalidate the data

The initial data option is where we provide the data fetched from getStaticProps which ensures that data is visible from the start. Lastly, we use object destructuring to extract the data from the hook.

To finish up, we’ll render that data with some very basic JSX:

return <div>As of {data.updatedAt}, there are {data.taxis} taxis available in Singapore!</div>

And, we’ve done it! There we have a very barebones example of using SWR with Incremental Static Regeneration. (The source of our example is available here.)

If you ever run into stale data with ISR, you know who to call: SWR.


11 Product Page Features to Drive Conversions

When it comes to compelling product pages, less can be more. Cluttered pages distract from selling points. Focus on the crucial details. Embrace a minimalist approach to lessen the thinking process and close the sale.

Do not, however, eliminate detailed descriptions. Rather, summarize what’s important first — perhaps with icons and images — and then place extensive copy below the “Buy” section.

What follows are 11 product page features from three online stores. Each presents unique ways to package products to sell.

Monos

Monos produces quality luggage. Competing with luxury lines, Monos’ mid-range travel gear has gained a substantial audience. The company’s social proof and product presentation drive conversions and repeat sales.

Monos carry-on luggage

Monos’s product pages are minimalist in design with social proof and product presentation that drive sales.

Importantly, Monos’s product pages exhibit essential components in a minimalist design, as follows.

Gallery and context-of-use images. The image gallery showcases the product’s features from varying distances and perspectives. Simple “in use” photos show size and applications.

FOMO-inspired call-to-action. Monos receives orders and ships them in groups. To entice shoppers to buy now and accept lengthier delivery times, the call-to-action reads “pre-order” instead of “buy,” triggering one’s fear of missing out. The estimated ship date changes every few days, giving the company plenty of time to fulfill demand.

Attribute-style product selection. Most dropdown menus on product pages are for sizes and colors of the same item. But each size of Monos luggage is a different product with (sometimes slightly) different features. Positioned near the top, the dropdown menu saves space while giving visitors quick access to related products.

Intuitive cross-selling and upselling. Two elements increase average order values:

  • A cross-sell link prompts shoppers to save 15% on accessories.
  • An upsell function makes purchasing a set of luggage a breeze. Tapping a + sign will display the bundled price and trigger the CTA to add two or more products.

Coolibar

Coolibar sells sun protective clothing and accessories. Its product pages are immensely informative in a small amount of space, with the most convincing details sitting near the top.

Coolibar product page with woman wearing a sun hat

Coolibar’s product pages are informative in a small amount of space, with convincing details (images, sizing guide) near the top.

Photos for color and style enable shoppers to see close-ups of each attribute, such as color.

Sizing guide. Coolibar places the link (“Size guide”) where it matters most: above the CTA.

Features icons. Each product page sports informative icons, such as the UPF rating.

Social media links are subtle yet encouraging.

Kettle & Fire

Kettle & Fire sells broths and soups. It’s a simple, brilliantly-showcased catalog using compelling background photography and suggestive design. The company’s product pages encourage multi-pack purchases and explain why you should purchase.

Kettle & Fire product page

Kettle & Fire encourages “6-Pack” purchases and explains “Why You’ll Love It.”

Prominent subscription offering (“Subscribe & Save”) while defaulting to the most clicked “One Time Purchase” option.

Suggestive quantity. Highlighting middle-tier pricing packages often closes more sales at that level. By pre-selecting the “6-Pack” quantity, Kettle & Fire likely has higher order values.

Quick, enticing bullet points. Instead of merely listing features, Kettle & Fire explains “Why You’ll Love It” with accompanying, fun emojis.

Research First

Refocusing the CTA area of a product page takes research. Not every selling point belongs here; too much info before the buy button can backfire. Rely on behavioral analytics to determine what’s most important to your audience. Then, use ample white space to present these details with a fresh, clean look.

Developer Decisions For Building Flexible Components

In the real world, content often differs vastly from the neat, perfectly fitting content presented in designs. Added to that, on the modern web, users have an ever-increasing range of options for how they access the sites we build.

In this article, we’ll walk through the process of taking a seemingly simple design for a text-and-media component and deciding how best to translate it into code, keeping in mind the needs of both users and content authors. We’re not going to delve into how to code it — rather, the factors that will determine our development decisions. We’ll consider the questions we need to ask (both ourselves and other stakeholders) at every step.

Changing Our Development Mindset

We simply can no longer design and develop only for “optimal” content or browsing conditions. Instead, we must embrace the inherent flexibility and unpredictability of the web, and build resilient components. Static mockups cannot cater to every scenario, so many design decisions fall to developers at build time. Like it or not, if you’re a UI developer, you are a designer — even if you don’t consider yourself one!

In my job at WordPress specialist web agency Atomic Smash, we build websites for clients who need maximum flexibility from the components we provide, while ensuring the site still looks great, no matter what content they throw at it. Sometimes interpreting a design means asking the designer to further elaborate on their ideas (or even re-evaluate them). Other times, it means making design decisions on the fly or making recommendations based on our knowledge and experience. We’ll look at some of the times these approaches might be appropriate in this case study.

The Design

We start with a simple design for a text-and-media component — something fairly commonly seen on product landing pages. It consists of an image or video on the left and a column on the right containing a heading, a paragraph of text and a call-to-action link. This design is for a (fictional) startup that helps people who want to learn a new skill find a tutor.

Note: If you want to jump straight to the code and view all the possible solutions we alighted on for this component, you can find it in this Codepen demo.

Layout And Order

The designer has stipulated that every other component should have the layout flipped so that the image is on the right and the text column on the left.

In the mobile layout, however, the image is stacked above the text content in all cases. Assuming we build this layout using either Grid or flexbox, we could use flex-direction or the order property to reorder the layout for every second component:

.text-and-media:nth-child(even) {
  flex-direction: row-reverse;
}

It’s worth bearing in mind that while these will reorder the content visually, it does not change the DOM order. This means that to a partially-sighted person browsing the site using a screenreader, the order of the content may not appear logical, jumping from left-to-right to right-to-left.

Speaking personally, in the case where the only content in one of the columns is an image, I feel that using the order property is more or less okay. But if we had two columns of text, for instance, reordering with CSS might make for a confusing experience. In those cases, we have some other options available to us. We could:

  1. Put forward our accessibility concerns and recommend that for mobile layouts the visual order is changed to match the desktop order.
  2. Use JavaScript to reorder the elements in the DOM.

We also need to consider whether to enforce the order through the :nth-child selector or to allow the client to control the order (by adding a class to the component, say). The suitability of each option will likely depend on the project.
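As a rough sketch, the class-based approach could look something like this (the modifier class name is an assumption, not part of the original design):

/* Default layout: media on the left, text on the right */
.text-and-media {
  display: flex;
}

/* Author-controlled variant: a modifier class flips the columns */
.text-and-media--flipped {
  flex-direction: row-reverse;
}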

Dealing With Different Content Lengths

In the design, the proportion of text content compared to the image is quite pleasing. It allows the image to maintain an ideal aspect ratio. But what should happen if the text is longer or shorter than what’s presented? Let’s deal with the former first.

Longer Content

We can set a character limit on the text field in our chosen CMS (if we’re so inclined), but we should allow for at least some variation in our component. If we add a longer paragraph, the opposing media column could behave in one of several ways:

  1. The image or video remains at the top, while space is added below (fig. 1).
  2. The image or video is centered, adding space at the top or bottom (fig. 2).
  3. The proportions of the image or video are scaled to match the height, using object-fit: cover to prevent distortion and ensure the image fills the available space. This would mean some parts of the image may be clipped (fig. 3; see the sketch after this list).
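Option 3 might be sketched roughly like this (the inner class name is an assumption):

/* Scale the media to fill the column's full height,
   cropping the edges rather than distorting the image */
.text-and-media__media img {
  width: 100%;
  height: 100%;
  object-fit: cover;
}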

We decided that option 3 was the most pleasing visually, and that for the most part content authors would be able to source appropriate images where a small amount of clipping would be acceptable. But it presented more of a challenge for video content, where there is more of a risk that important parts could be clipped. We alighted on another option, which was to create a different variation of the design where the video would maintain its original aspect ratio, and be contained within a maximum width instead of aligning to the edge of the page.

Content authors could choose this option when it better suited their needs.

Additionally, we opted to extend this choice to instances where an image was used instead of a video. It gives the client a wider variety of layout options without adversely impacting the design. Seen in the wider page context it could even be considered an improvement, allowing for more interesting pages when several of these blocks are used on a page.

Shorter Content

Dealing with less content is a little simpler, but still presents us with some issues. How should the image behave when the text content is shorter? Should it become shallower, so that the overall height of the component is determined by the text content (fig. 4)? Or should we set a minimum aspect ratio, so that the image doesn’t become letterboxed, or allow the image to take up its natural, intrinsic height? In that case, we also have the consideration of whether to align the text centrally or to the top (fig. 5 and 5a).

Heading Length

Let’s not forget we’ll also need to test headings of different lengths. In the design, the headings are short and snappy, rarely wrapping onto a second line. But what if a heading is several lines long, or the content uses a lot of long words, resulting in text wrapping differently? This might especially be a problem in languages such as German, where words tend to be much longer than in English. Does the size of the heading font in the design allow for an appropriate line length when used in this layout? Should long words be hyphenated when they wrap? This article by Ahmad Shadeed addresses the issue of content length and includes some handy tips for ways to deal with it in CSS.
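If we do decide long words should break, a minimal sketch could be the following (the heading class is assumed, and hyphens: auto relies on the page declaring a lang attribute):

.text-and-media__heading {
  /* Break long words across lines instead of letting them overflow */
  hyphens: auto;
  overflow-wrap: break-word;
}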

Are content authors permitted to omit a heading altogether where it suits them? That brings us to the next consideration.

Omitting Content

Building this component as flexibly as possible means making sure that content authors can omit certain fields and still have the design look and function properly. It seems reasonable that the client may wish to omit the body text, the link, or even the heading when using this component in the wild. We need to take care to test with every conceivable combination of content, so that we can be confident our component won’t break under stress. It’s good practice to ensure we’re not rendering empty HTML tags when field content isn’t present. This will help us avoid unforeseen layout bugs.

We can restrict content authors with “required” fields in the CMS, but perhaps we might also wish to consider scenarios where a client chooses to omit the image or, conversely, to use the component without any of the text content? It might be helpful to provide them with these options. Here’s an example of how we might elect to render the component in those cases:

By indenting the text a little more and increasing the width of the body text, we can keep it feeling balanced, even when there is no image.

Multiple Links

Omitting content is one scenario. But at Atomic Smash, we found that more often, clients wanted the option to add more than one link to the component. That presents us with another choice: how to lay out multiple links? Do we lay them out side-by-side (fig. 8), or stack them vertically (fig. 8a)?

How do we deal with link titles of wildly different lengths? A nice trick is to set the widths of both links to the maximum width of the longest (fig. 9). (This article covers just that.) That works well for vertically stacked buttons, whereas laying them out horizontally presents us with even more choices (fig. 9a).
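For the vertically stacked case, the equal-width trick can be as simple as this sketch (the class name is an assumption):

.text-and-media__links {
  /* A single-column inline grid shrink-wraps to its widest item,
     so every link stretches to the width of the longest one */
  display: inline-grid;
  gap: 1rem;
}

.text-and-media__links a {
  text-align: center;
}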

Do we need a secondary link style, to differentiate them? These are all questions to consider.

We may also need to consider (in the case of a single link) whether, in fact, the clickable area of the link should encompass the entire component — so users can click anywhere on it to activate the link. That choice might perhaps depend on the wider context. It’s certainly common in card-based UIs.

Video

When the component is used for video rather than a static image, we might notice that the design omits some key information. How is the video playback controlled? On hover? Does it autoplay on scroll? Should there be controls visible to the user?

If the video plays on hover, we must consider how the user of a device without hover capabilities accesses the video content. Alternatively, if the video autoplays, we should consider preventing this for users with a preference for reduced motion, who may suffer from vestibular disorders (or might simply wish to avoid jarring animations). We should also provide a way for all users to stop the video when they wish.
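A rough sketch of that logic, assuming the video element lives inside the component (the selector and exact behavior are illustrative only):

const video = document.querySelector('.text-and-media video');
const prefersReducedMotion =
  window.matchMedia('(prefers-reduced-motion: reduce)').matches;

if (video) {
  if (prefersReducedMotion) {
    // Don't autoplay; let the user start playback themselves
    video.controls = true;
  } else {
    video.muted = true; // browsers generally only allow muted autoplay
    video.play().catch(() => {
      // Autoplay was blocked anyway; fall back to native controls
      video.controls = true;
    });
  }
}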

Putting It In Context

One issue with focusing so closely on components when it comes to web design is that sometimes we forget to consider how the components we build will appear in the context of the overall web page. We’ll need to consider the spacing, both between components of the same type and in a page layout where other components are interspersed.

These text-and-media components are designed to be used sparingly, creating an eye-catching splash of color and a break from an otherwise linear layout. But using WordPress, a content author could easily decide to build an entire page made up of nothing but these components. That could end up looking rather dull, and not at all the effect we were hoping for!

During the build process, we decided to add an option to omit the background color. That allows us to break the page up and make it more interesting:

We could either enforce a pattern using :nth-child or add a field in the CMS to give the client more creative control.
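Both approaches boil down to a few lines of CSS; here’s a hedged sketch of each (the modifier class name is an assumption):

/* Option 1: enforce an alternating pattern automatically */
.text-and-media:nth-child(odd) {
  background-color: transparent;
}

/* Option 2: a CMS field adds a modifier class so authors can opt out */
.text-and-media--no-background {
  background-color: transparent;
}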

Although this was not part of the original design, it shows that an open line of communication between designer and developer can help create better outcomes in terms of more flexible and robust designs.

WYSIWYG Text Styles

When considering content, we need to consider not just the length of text, but the actual HTML elements that might be permitted in the body text field. Content authors might want to add multiple paragraphs, anchor links, lists, and more to the body copy. At Atomic Smash we like to provide a WYSIWYG (What You See Is What You Get) or rich text field for these areas, which can allow for many different elements. It’s important to test with different types of content, and style appropriately — including testing for sufficient color contrast on all background colors used.

Wrapping Up

We’ve touched on many different decisions involved with building this seemingly simple component. You might even be able to think of a few others we haven’t covered here! By considering every aspect of the design and how it might be used in context, we’ve ended up with something far more versatile, which hopefully should result in happier clients!

Sometimes, the more that is omitted from a design, the more time and attention will be required from a developer. I’ve put together a checklist below of things to test and question when building a component, which you might find useful. It may be adapted for different components too.

Being able to look past the apparent simplicity, break down a component into its constituent parts, ask key questions (even before any development takes place), and even consider future uses, are all skills that will serve any developer well when building websites — and will help you provide vastly more accurate estimates when required. Good team communication and a strong collaborative process are invaluable for building resilient sites, but the end result makes it worth investing in nurturing this culture. Let’s bake flexibility into our design and build processes.

The Checklist

Things to test:

  1. Accessibility of layout (mobile and desktop).
  2. Images of different intrinsic aspect ratios — are they cropped appropriately?
  3. Longer and shorter body text (including multiple paragraphs).
  4. Longer and shorter heading (including various word lengths).
  5. Omitting (variously) the heading, body text, links and image.
  6. Multiple links (including different lengths of link text).
  7. Accessibility of video content.
  8. WYSIWYG text content (include links, lists, etc. in body text).
  9. Test in context — include multiple components (with different content options), as well as other components mixed into the page layout.

New Smashing Online Workshops on Front-End & UX

You might know it already, but perhaps not yet: we regularly run friendly online workshops around front-end and design. We have a couple of workshops coming up soon, and we thought that, you know, you might want to join in as well. All workshop sessions are broken down into 2.5h-segments across days, so you always have time to ask questions, share your screen and get immediate feedback.


Meet Smashing Online Workshops: live, interactive sessions on frontend & UX.

Live discussions and interactive exercises are at the very heart of every workshop, with group work, homework, reviews and live interaction with people around the world. Plus, you get all video recordings of all sessions, so you can re-watch at any time, in your comfy chair at your workspace.

Upcoming Live Workshops (Sep–Nov 2021)

Find, Fix, And Prevent Accessibility Issues
Preety Kumar
2 sessions Sep 14–15 free


What Are Workshops Like?

Do you experience Zoom fatigue as well? After all, who really wants to spend more time in front of their screen? That’s exactly why we’ve designed the online workshop experience from scratch, accounting for the time needed to take in all the content, understand it and have enough time to ask just the right questions.


In our workshops, everybody is just a slightly blurry rectangle on the screen; everybody is equal, and invited to participate.

Our online workshops take place live and span multiple days across weeks. They are split into 2.5h-sessions, and in every session there is always enough time to bring up your questions or just get a cup of tea. We don’t rush through the content, but instead try to create a welcoming, friendly and inclusive environment for everyone to have time to think, discuss and get feedback.

There are plenty of things to expect from a Smashing workshop, but the most important one is focus on practical examples and techniques. The workshops aren’t talks; they are interactive, with live conversations with attendees, sometimes with challenges, homework and team work.

Of course, you get all workshop materials and video recordings as well, so if you miss a session you can re-watch it the same day.


Meet our friendly frontend & UX workshops. Boost your skills online and learn from experts — live.

TL;DR

  • Workshops span multiple days, split in 2.5h-sessions.
  • Enough time for live Q&A every day.
  • Dozens of practical examples and techniques.
  • You’ll get all workshop materials & recordings.
  • All workshops are focused on frontend & UX.
  • Get a workshop bundle and save $250 off the price.

Thank You!

We hope that the insights from the workshops will help you improve your skills and the quality of your work. A sincere thank you for your kind, ongoing support and generosity — for being smashing, now and ever. We’d be honored to welcome you.

3 Ways to Reduce Shipping Costs

Lowering shipping costs is a surefire way to improve profits. Same-day delivery, free shipping, standard transit, click-and-collect — all are candidates for cost reductions.

3 Tips to Reduce Shipping Costs

Negotiate rates with shippers. There is always room to negotiate on price, even for small volumes. Gain a clear understanding of how weight and size impact rates. Flat-rate shipping is an option that can save money — the same size box ships at the same flat rate every time, regardless of weight. Explore the options for various shipping speeds.

Benchmark rates you are offered against FedEx SmartPost and UPS SurePost rates. These services use USPS to manage the “last-mile” deliveries. Consider migrating such shipments to USPS entirely.

Minimize packaging.

  • Size and weight. Know the exact size and weight of each of your products. Measure, weigh, and document dimensions and weights for all items — from heavy-large to small-light. Focusing on dimensions and weights can lower your shipping rates. In general, the less a package weighs, the less it costs to ship.
  • DIM factor. Dimensional or “DIM” weight is a formula carriers use to determine the cost to ship a package based on its volume. DIM pricing allows carriers to incorporate package size into their pricing structure.

ShippingEasy, a software provider, has a handy calculator for the dimensional weight of a package. Compare it to the actual weight of a package to see if you’ll pay more. DIM pricing mostly affects large and lightweight products as well as fragile items. To keep costs down, pack in the smallest box possible with lightweight filler.
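For example, using the divisor of 139 that major U.S. carriers commonly apply to domestic shipments, a box measuring 20 x 12 x 10 inches works out to 2,400 cubic inches; divided by 139, that is roughly 17.3, which rounds up to a dimensional weight of 18 pounds. If that box actually weighs 9 pounds, you will be billed at the 18-pound rate, which is why trimming even an inch or two off box dimensions can pay off.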

Screenshot of ShippingEasy's DIM weight calculator page

Carriers apply a package’s dimensional weight when setting prices. This example DIM weight calculator is from ShippingEasy.

  • Packaging materials. Minimize your product packaging. Use the lightest and smallest possible carton for your items. This lowers weight and volume and also reduces filler. Look for lightweight, protective materials such as air pillows and bubble wrap and use the minimum needed to protect the items.

For smaller, lightweight, fragile products, use a padded poly mailer or returnable plastic bag. Compared to boxes, bags are weather-resistant, cheaper, quicker to pack, and inexpensive to ship. Plus, polybags are brandable, much like cardboard packaging.

Buying packaging materials in bulk to gain discounts makes sense when you are clear on future requirements. UPS, FedEx, and USPS offer free samples as well as free branded boxes and envelopes when you sign up for their services.

Consolidating orders into fewer boxes is another way to reduce shipping costs.

  • Recycle and reuse. Strip off labels from suppliers and repurpose the packing materials for shipping your own items. It saves money and fosters sustainability.

Outsource shipping. Contracting with a specialist ecommerce fulfillment service provider can reduce shipping costs. A third-party fulfillment company often provides lower rates because of its overall shipping volume. Many run multiple facilities, which could be closer to your customers, reducing transit times and fees.

Third-party providers offer custom solutions that suit your requirements, including money-saving packaging schemes for seemingly every product type. The providers can also solve staffing demands from seasonal peaks and fluctuating demand.

What’s New With DevTools: Cross-Browser Edition

Browser developer tools keep evolving, with new and improved features added all the time. It’s hard to keep track, especially when using more than one browser. With that much on offer, it is not surprising that we feel overwhelmed and use the features we already know instead of keeping up with what’s new.

It’s a shame though, as some of them can make us much more productive.

So, my goal with this article is to raise awareness of some of the newest features in Chrome, Microsoft Edge, Firefox and Safari. Hopefully, it will make you want to try them out, and maybe it will help you get more comfortable next time you need to debug a browser-specific issue.

With that said, let’s jump right in.

Chrome DevTools

The Chrome DevTools team has been hard at work modernizing their (now 13 years old) codebase. They have been busy improving the build system, migrating to TypeScript, introducing new WebComponents, re-building their theme infrastructure, and way more. As a result, the tools are now easier to extend and change.

But on top of this less user-facing work, the team did ship a lot of features too. Let me go over a few of them here, related to CSS debugging.

Scroll-snapping

CSS scroll-snapping offers web developers a way to control the position at which a scrollable container stops scrolling. It’s a useful feature for, e.g., long lists of photos where you want the browser to position each photo neatly within its scrollable container automatically for you.

If you want to learn more about scroll-snapping, you can read this MDN documentation, and take a look at Adam Argyle’s demos here.

The key properties of scroll-snapping (sketched briefly after this list) are:

  • scroll-snap-type, which tells the browser the direction in which snapping happens, and how it happens;
  • scroll-snap-align, which tells the browser where to snap.
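Here is a minimal sketch of those two properties for the photo-list example above (the class names are assumptions):

/* The scroll container: snap along the horizontal axis,
   and always come to rest on a snap position */
.photo-strip {
  overflow-x: auto;
  scroll-snap-type: x mandatory;
}

/* Each photo: align its center with the center of the container */
.photo-strip img {
  scroll-snap-align: center;
}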

Chrome DevTools introduced new features that help debug these key properties:

  • if an element defines scroll-snapping by using scroll-snap-type, the Elements panel shows a badge next to it.

  • You can click on the scroll-snap badge to enable a new overlay. When you do, several things are highlighted on the page:
    • the scroll container,
    • the items that scroll within the container,
    • the position where items are aligned (marked by a blue dot).

This overlay makes it easy to understand if and how things snap into place after scrolling around. This can be very useful when, e.g., your items don’t have a background and boundaries between them are hard to see.

While scroll snapping isn’t a new CSS feature, adoption is rather low (less than 4% according to chromestatus.com), and since the specification changed, not every browser supports it the same way.

I hope that this DevTools feature will make people want to play more with it and ultimately adopt it for their sites.

Container queries

If you have done any kind of web development in recent years, you have probably heard of container queries. It’s been one of the most requested CSS features for the longest time and has been a very complex problem for browser makers and spec writers to solve.

If you don’t know what container queries are, I would suggest going through Stephanie Eckles’ Primer On CSS Container Queries article first.

In a few words, they’re a way for developers to define the layout and style of elements depending on their container’s size. This ability is a huge advantage when creating reusable components since we can make them adapt to the place they are used in (rather than only adapt to the viewport size which media queries are good for).

Fortunately, things are moving in this space: Chromium now supports container queries, and the Chrome DevTools team has started adding tooling that makes it easier to get started with them.

Container queries are not enabled by default in Chromium yet (to enable them, go to chrome://flags and search for “container queries”), and it may still take a little while for them to be. Furthermore, the DevTools work to debug them is still in its early days. But some early features have already landed.

  • When selecting an element in DevTools that has styles coming from a @container at-rule, then this rule appears in the Styles sidebar of the Elements panel. This is similar to how media queries styles are presented in DevTools and will make it straightforward to know where a certain style is coming from.

As the above screenshot shows, the Styles sidebar displays 2 rules that apply to the current element. The bottom one applies to the .media element at all times and provides its default style. And the top one is nested in a @container (max-width:300px) container query that only takes effect when the container is narrower than 300px.

  • On top of this, just above the @container at-rule, the Styles pane displays a link to the element that the rule resolves to, and hovering over it displays extra information about its size. This way you know exactly why the container query matched.

Hover over the container query to know why and where it matched.
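For reference, a rule like the one shown above could be written roughly like this (class names are assumptions; note that the early Chromium experiment established containers with contain: layout inline-size, while the current specification uses container-type):

/* Make the card's wrapper a query container */
.card {
  container-type: inline-size;
}

/* Adjust the nested element when its container is at most 300px wide */
@container (max-width: 300px) {
  .media {
    display: block;
  }
}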

The Chrome DevTools team is actively working on this feature and you can expect much more in the future.

Chromium Collaboration

Before going into features that other browsers have, let’s talk about Chromium for a little bit. Chromium is an open-source project that Chrome, Edge, Brave, and other browsers are built upon. It means all these browsers have access to the features of Chromium.

Two of the most active contributors to this project are Google and Microsoft and, when it comes to DevTools, they collaborated on a few interesting features that I’d like to go over now.

CSS Layout Debugging Tools

A few years ago, Firefox innovated in this space and shipped the first-ever grid and flexbox inspectors. Chromium-based browsers now also make it possible for web developers to debug grid and flexbox easily.

This collaborative project involved engineers, product managers and designers from Microsoft and Google, working towards a shared goal (learn more about the project itself in my BlinkOn talk).

Among other things, DevTools now has the following layout debugging features:

  • Highlight multiple grid and flex layouts on the page, and customize if you want to see grid line names or numbers, grid areas, and so on.

  • Flex and grid editors to visually play around with the various properties.

Play with the various flex alignment properties visually.

  • Alignment icons in the CSS autocomplete make it easier to choose properties and values.

  • Highlight on property hover to understand what parts of the page a property applies to.

Highlight various CSS properties independently to understand how they affect the layout.

You can read more information about this on Microsoft’s and Google’s documentation sites.

Localization

This was another collaborative project involving Microsoft and Google which now makes it possible for all Chromium-based DevTools to be translated into languages other than English.

Originally, there was never a plan to localize DevTools, which means that this was a huge effort. It involved going over the entire codebase and making UI strings localizable.

The result was worth it though. If English isn’t your first language and you’d feel more comfortable using DevTools in a different one, head over to the Settings (F1) and find the language drop-down.

Here is a screenshot of what it looks like in Chrome DevTools:

And here is how Edge looks in Japanese:

Edge DevTools

Microsoft switched to Chromium to develop Edge more than 2 years ago now. While, at the time, it caused a lot of discussions in the web community, not much has been written or said about it since then. The people working on Edge (including its DevTools) have been busy though, and the browser has a lot of unique features now.

Being based on the Chromium open source project does mean that Edge benefits from all of its features and bug fixes. Practically speaking, the Edge team ingests the changes made in the Chromium repository in their own repository.

But over the past year or so, the team started to create Edge-specific functionality based on the needs of Edge users and feedback. Edge DevTools now has a series of unique features that I will go over.

Opening, Closing, and Moving Tools

With almost 30 different panels, DevTools is a really complicated piece of software in any browser. But, you never really need access to all the tools at the same time. In fact, when starting DevTools for the first time, only a few panels are visible and you can add more later.

On the other hand though, it’s hard to discover the panels that aren’t shown by default, even if they could be really useful to you.

Edge added 3 small, yet powerful, features to address this:

  • a close button on tabs to close the tools you don’t need anymore,
  • a + (plus) button at the end of the tab bar to open any tool,
  • a context menu option to move tools around.

The following GIF shows how closing and opening tools in both the main and drawer areas can be done in Edge.

Easily open the tools you need and close the ones you don’t.

You can also move tools between the main area and drawer area:

  • right-clicking on a tab at the top shows a “Move to bottom” item, and
  • right-clicking on a tab in the drawer shows a “Move to top” item.

Move tools between the main top area and the bottom drawer area.

Getting Contextual Help with the DevTools Tooltips

It is hard for beginners and seasoned developers alike to know all about DevTools. As I mentioned before, there are so many panels that it’s unlikely you know them all.

To address this, Edge added a way to go directly from the tools to their documentation on Microsoft’s website.

This new Tooltips feature works as a toggleable overlay that covers the tools. When enabled, panels are highlighted and contextual help is provided for each of them, with links to documentation.

You can start the Tooltips in 3 different ways:

  • by using the Ctrl + Shift + H keyboard shortcut on Windows/Linux (Cmd + Shift + H on Mac);
  • by going into the main (...) menu, then going into Help, and selecting “Toggle the DevTools Tooltips”;
  • by using the command menu and typing “Tooltips”.

Display contextual help on the tools.

Customizing Colors

In code editing environments, developers love customizing their color themes to make the code easier to read and more pleasant to look at. Because web developers spend considerable amounts of time in DevTools too, it makes sense for it to also have customizable colors.

Edge just added a number of new themes to DevTools, on top of the already available dark and light themes. A total of 9 new themes were added. These come from VS Code and will therefore be familiar to people using this editor.

You can select the theme you want to use by going into the settings (using F1 or the gear icon in the top-right corner), or by using the command menu and typing theme.

Customize DevTools with one of 9 VS Code themes.

Firefox DevTools

Similar to the Chrome DevTools team, the folks working on Firefox DevTools have been busy with a big architecture refresh aimed at modernizing their codebase. Additionally, their team is quite a bit smaller these days as Mozilla had to refocus over recent times. But, even though this means they had less time for adding new features, they still managed to release a few really interesting ones that I’ll go over now.

Debugging Unwanted Scrollbars

Have you ever asked yourself: “where is this scrollbar coming from?” I know I have, and now Firefox has a tool to debug this very problem.

In the Inspector panel, all elements that scroll have a scroll badge next to them, which is already useful when dealing with deeply nested DOM trees. On top of this, you can click this badge to reveal the element (or elements) that caused the scrollbar to appear.

You can find more documentation about it here.

Visualizing Tabbing Order

Navigating a web page with the keyboard requires using the tab key to move through focusable elements one by one. The order in which focusable elements get focused while using tab is an important aspect of the accessibility of your site and an incorrect order may be confusing to users. It’s especially important to pay attention to this as modern layout CSS techniques allow web developers to rearrange elements on a page very easily.

Firefox has a useful Accessibility Inspector panel that provides information about the accessibility tree, finds and reports various accessibility problems automatically, and lets you simulate different color vision deficiencies.

On top of these features, the panel now provides a new page overlay that displays the tabbing order for focusable elements.

To enable it, use the “Show Tabbing Order” checkbox in the toolbar.

You can find more documentation about it here.

A Brand New Performance Tool

Not many web development areas depend on tooling as much as performance optimization does. In this domain, Chrome DevTools’ Performance panel is best in class.

Over the past few years, Firefox engineers have been focusing on improving the performance of the browser itself, and to help them do this, they built a performance profiler tool. The tool was originally built to optimize the engine native code but supported analyzing JavaScript performance right from the start, too.

Today, this new performance tool replaces the old Firefox DevTools performance panel in pre-release versions (Nightly and Developer Edition). Take it for a spin when you get the chance.

Among other things, the new Firefox profiler supports sharing profiles with others so they can help you improve the performance of your recorded use case.

You can read documentation about it here, and learn more about the project on their GitHub repository.

Safari Web Inspector

Last but not least, let’s go over a few of the recent Safari features.

The small team at Apple has been keeping itself very busy with a wide range of improvements and fixes around the tools. Learning more about the Safari Web Inspector can help you be more productive when debugging your sites on iOS or tvOS devices. Furthermore, it has a bunch of features that other DevTools don’t, and that not a lot of people know about.

CSS Grid Debugging

With Firefox, Chrome, and Edge (and all Chromium-based browsers) having dedicated tools for visualizing and debugging CSS grids, Safari was the last major browser not to have this. Well, now it does!

Fundamentally, Safari now has the same features as other browsers’ DevTools in this area. This is great as it means it’s easy to go from one browser to the next and still be productive.

  • Grid badges are displayed in the Elements panel to quickly find grids.
  • Clicking on the badge toggles the visualization overlay on the page.
  • A new Layout panel is now displayed in the sidebar. It allows you to configure the grid overlay, see the list of all grids on the page and toggle the overlay for them.

What’s interesting about Safari’s implementation, though, is that they’ve really nailed the performance aspect of the tool. You can enable many different overlays at once and scroll around the page without causing any performance problems at all.

The other interesting thing is that Safari introduced a 3-pane Elements panel, just like Firefox, which allows you to see the DOM, the CSS rules for the selected element, and the Layout panel all at once.

Find out more about the CSS Grid Inspector on this WebKit blog post.

A Slew of Debugger Improvements

Safari used to have separate Resources and Debugger panels. These have been merged into a single Sources panel, which makes it easier to find everything you need when debugging your code. Additionally, this makes the tool more consistent with Chromium-based DevTools, which a lot of people are used to.

Consistency for common tasks is important in a cross-browser world. Web developers already need to test across multiple browsers, so if they need to learn a whole new paradigm when using another browser’s DevTools, it can make things more difficult than they need to be.

But Safari also recently focused on adding innovative features to its debugger that other DevTools don’t have.

Bootstrap script:
Safari lets you write JavaScript code that is guaranteed to run before any of the scripts on the page. This is very useful for instrumenting built-in functions, for example to add debugger statements or logging.
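
For illustration, here is a minimal sketch (my own example, not one taken from Apple’s documentation) of a bootstrap script that wraps the built-in fetch function so every call is logged before any page script runs:

// Wrap window.fetch so every call made by the page is logged.
// Because bootstrap scripts run before any page script, no caller can
// reach the original fetch without going through this wrapper.
const originalFetch = window.fetch;
window.fetch = function (...args) {
  console.log('fetch called with:', args);
  // debugger;  // uncomment to pause on every fetch call
  return originalFetch.apply(window, args);
};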

New breakpoint configurations:
All browsers support multiple types of breakpoints like conditional breakpoints, DOM breakpoints, event breakpoints, and more.

Safari recently improved its entire suite of breakpoint types by making all of them extensively configurable. With this new breakpoint feature, you can decide:

  • if you want a breakpoint to only hit when a certain condition is true,
  • if you want the breakpoint to pause execution at all, or just execute some code,
  • or even play an audio beep so you know some line of code was executed.

queryInstances and queryHolders console functions:
These two functions are really useful when your site starts using a lot of JavaScript objects. In some situations, it may become difficult to keep track of the dependencies between these objects, and memory leaks may start to appear, too.

Safari does have a Memory tool that can help resolve these issues by letting you explore memory heap snapshots. But sometimes you already know which class or object is causing the problem and you want to find what instances exist or what refers to it.

If Animal is a JavaScript class in your application, then queryInstances(Animal) will return an array of all of its instances.

If foo is an object in your application, then queryHolders(foo) will return an array of all the other objects that have references to foo.
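
As a small, hypothetical example, given the application code below, you could run the two functions like this in the Web Inspector console:

// Hypothetical application code:
class Animal {
  constructor(name) { this.name = name; }
}
const pets = [new Animal('cat'), new Animal('dog')];
const shelter = { favorite: pets[0] };

// Later, in the Web Inspector console:
queryInstances(Animal);  // an array containing both Animal instances
queryHolders(pets[0]);   // an array of the objects referencing it,
                         // for example the pets array and the shelter object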

Closing Thoughts

I hope these features will be useful to you. I can only recommend using multiple browsers and getting familiar with their DevTools. Being more familiar with other DevTools can prove useful when you have to debug an issue in a browser you don’t use on a regular basis.

Know that the companies that make browsers all have teams actively working on DevTools. They’re invested in making them better, less buggy, and more powerful. These teams depend on your feedback to build the right things. Without hearing about the problems you are facing or the features you lack, it’s harder for them to make the right decisions about what to build.

Reporting bugs to a DevTools team won’t just help you when the fix comes; it may also help many others who have been facing the same issue.

It’s worth knowing that the DevTools teams at Microsoft, Mozilla, Apple and Google are usually fairly small and receive a lot of feedback, so reporting an issue does not mean it will be fixed quickly, but it does help, and those teams are listening.

Here are a few ways you can report bugs, ask questions or request features:

  • Firefox DevTools
    • Firefox uses Bugzilla as their public bug tracker and anyone is welcome to report bugs or ask for new features by creating a new entry there. All you need is a GitHub account to log in.
    • Getting in touch with the team can either be done on Twitter by using the @FirefoxDevTools account or logging in to the Mozilla chat (find documentation about the chat here).
  • Safari Web Inspector
  • Edge DevTools
    • The easiest way to report a problem or ask for a feature is by using the feedback button in DevTools (the little stick figure in the top-right corner of the tools).
    • Asking questions to the team works best over Twitter by mentioning the @EdgeDevTools account.
  • Chrome DevTools
  • Chromium
    • Since Chromium is the open-source project that powers Google Chrome and Microsoft Edge (and others), you can also report issues on Chromium’s bug tracker.

With that, thank you for reading!

HTTP/3: Practical Deployment Options (Part 3)

Hello, and welcome to the final installment of this three-part series on the new HTTP/3 and QUIC protocols! If after the previous two parts — HTTP/3 history and core concepts and HTTP/3 performance features — you’re convinced that starting to use the new protocols is a good idea (and you should be!), then this final piece includes all you need to know to get started!

First, we’ll discuss which changes you need to make to your pages and resources to optimally use the new protocols (that’s the easy part). Next, we’ll look at how to set up servers and clients (that’s the hard part unless you’re using a content delivery network (CDN)). Finally, we’ll see which tools you can use to evaluate the performance impact of the new protocols (that’s the almost impossible part, at least for now).

This series is divided into three parts:

  1. HTTP/3 history and core concepts
    This is targeted at people new to HTTP/3 and protocols in general, and it mainly discusses the basics.
  2. HTTP/3 performance features
    This is more in-depth and technical. People who already know the basics can start here.
  3. Practical HTTP/3 deployment options (current article)
    This explains the challenges involved in deploying and testing HTTP/3 yourself. It details how and if you should change your web pages and resources as well.

Changes To Pages And Resources

Let’s begin with some good news: If you’re already on HTTP/2, you probably won’t have to change anything about your pages or resources when moving to HTTP/3! This is because, as we’ve explained in part 1 and part 2, HTTP/3 is really more like HTTP/2-over-QUIC, and the high-level features of the two versions have stayed the same. As such, any changes or optimizations made for HTTP/2 will still work for HTTP/3 and vice versa.

However, if you’re still on HTTP/1.1, or you have forgotten about your transition to HTTP/2, or you never actually tweaked things for HTTP/2, then you might wonder what those changes were and why they were needed. You would, however, be hard-pressed even today to find a good article that details the nuanced best practices. This is because, as I stated in the introduction to part 1, much of the early HTTP/2 content was overly optimistic about how well it would work in practice, and some of it, quite frankly, had major mistakes and bad advice. Sadly, much of this misinformation persists today. That’s one of my main motivations in writing this series on HTTP/3, to help prevent that from happening again.

The best all-in-one nuanced source for HTTP/2 I can recommend at this time is the book HTTP/2 in Action by Barry Pollard. However, since that’s a paid resource and I don’t want you to be left guessing here, I’ve listed a few of the main points below, along with how they relate to HTTP/3:

1. Single Connection

The biggest difference between HTTP/1.1 and HTTP/2 was the switch from using between 6 and 30 parallel TCP connections to a single underlying TCP connection. We discussed a bit in part 2 how a single connection can still be as fast as multiple connections, because of how congestion control can cause more or earlier packet loss with more connections (which undoes the benefits of their aggregated faster start). HTTP/3 continues this approach, but “just” switches from one TCP to one QUIC connection. This difference by itself doesn’t do all that much (it mainly reduces the overhead on the server side), but it leads to most of the following points.

2. Server Sharding and Connection Coalescing

The switch to the single connection set-up was quite difficult in practice because many pages were sharded across different hostnames and even servers (like img1.example.com and img2.example.com). This was because browsers only opened up to six connections for each individual hostname, so having multiple hostnames allowed for more connections! Without changes to this HTTP/1.1 set-up, HTTP/2 would still open multiple connections, reducing how well other features, such as prioritization (see below), could actually work.

As such, the original recommendation was to undo server sharding and to consolidate resources on a single server as much as possible. HTTP/2 even provided a feature to make the transition from an HTTP/1.1 set-up easier, called connection coalescing. Roughly speaking, if two hostnames resolve to the same server IP (using DNS) and use a similar TLS certificate, then the browser can reuse a single connection even across the two hostnames.

In practice, connection coalescing can be tricky to get right, e.g. due to several subtle security issues involving CORS. Even if you do set it up properly, you could still easily end up with two separate connections. The thing is, that’s not always bad. First, due to poorly implemented prioritization and multiplexing (see below), the single connection could easily be slower than using two or more. Secondly, using too many connections could cause early packet loss due to competing congestion controllers. Using just a few (but still more than one), however, could nicely balance congestion growth with better performance, especially on high-speed networks. For these reasons, I believe that a little bit of sharding is still a good idea (say, two to four connections), even with HTTP/2. In fact, I think most modern HTTP/2 set-ups perform as well as they do because they still have a few extra connections or third-party loads in their critical path.

3. Resource Bundling and Inlining

In HTTP/1.1, you could have only a single active resource per connection, leading to HTTP-level head-of-line (HoL) blocking. Because the number of connections was capped at a measly 6 to 30, resource bundling (where smaller subresources are combined into a single larger resource) was a long-time best practice. We still see this today in bundlers such as Webpack. Similarly, resources were often inlined in other resources (for example, critical CSS was inlined in the HTML).

With HTTP/2, however, the single connection multiplexes resources, so you can have many more outstanding requests for files (put differently, a single request no longer takes up one of your precious few connections). This was originally interpreted as, “We no longer need to bundle or inline our resources for HTTP/2”. This approach was touted to be better for fine-grained caching because each subresource could be cached individually and the full bundle didn’t need to be redownloaded if one of them changed. This is true, but only to a relatively limited extent.

For example, unbundling can reduce compression efficiency, because compression works better with more data. Additionally, each extra request or file has an inherent overhead because it needs to be handled by the browser and server. These costs can add up for, say, hundreds of small files compared to a few large ones. In my own early tests, I found seriously diminishing returns at about 40 files. Though those numbers are probably a bit higher now, file requests are still not as cheap in HTTP/2 as originally predicted. Finally, not inlining resources has an added latency cost because the file needs to be requested. This, combined with prioritization and server push problems (see below), means that even today you’re still better off inlining some of your critical CSS. Maybe someday the Resource Bundles proposal will help with this, but not yet.

All of this is, of course, still true for HTTP/3 as well. Still, I’ve read people claim that many small files would be better over QUIC because more concurrently active independent streams mean more benefit from the HoL blocking removal (as we discussed in part 2). I think there might be some truth to this, but, as we also saw in part 2, this is a highly complex issue with a lot of moving parameters. I don’t think the benefits would outweigh the other costs discussed, but more research is needed. (An outrageous thought would be to have each file be exactly sized to fit in a single QUIC packet, bypassing HoL blocking completely. I will accept royalties from any startup that implements a resource bundler that does this. ;))

4. Prioritization

To be able to download multiple files on a single connection, you need to somehow multiplex them. As discussed in part 2, in HTTP/2, this multiplexing is steered using its prioritization system. This is why it’s important to have as many resources as possible requested on the same connection as well — to be able to properly prioritize them among each other! As we also saw, however, this system was very complex, causing it to often be badly used and implemented in practice (see the image below). This, in turn, has meant that some other recommendations for HTTP/2 — such as reduced bundling, because requests are cheap, and reduced server sharding, to make optimal use of the single connection (see above) — have turned out to underperform in practice.

Sadly, this is something that you, as an average web developer, can’t do much about, because it’s mainly a problem in the browsers and servers themselves. You can, however, try to mitigate the issue by not using too many individual files (which will lower the chances for competing priorities) and by still using (limited) sharding. Another option is to use various priority-influencing techniques, such as lazy loading, JavaScript async and defer, and resource hints such as preload. Internally, these mainly change the priorities of the resources so that they get sent earlier or later. However, these mechanisms can (and do) suffer from bugs. Additionally, don’t expect to slap a preload on a bunch of resources and make things faster: If everything is suddenly a high priority, then nothing is! It’s even very easy to delay actually critical resources by using things like preload.
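
As a quick, hedged illustration (the file name is made up, and the declarative <link rel="preload"> markup is the more common form), this is what adding a preload hint programmatically looks like; again, reserve it for genuinely critical resources:

// Add a high-priority preload hint for a critical font.
// Equivalent to <link rel="preload" as="font" ...> in the HTML.
const hint = document.createElement('link');
hint.rel = 'preload';
hint.as = 'font';
hint.type = 'font/woff2';
hint.crossOrigin = 'anonymous';     // font preloads must be requested with CORS
hint.href = '/fonts/heading.woff2'; // hypothetical resource
document.head.appendChild(hint);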

As also explained in part 2, HTTP/3 fundamentally changes the internals of this prioritization system. We hope this means that there will be many fewer bugs and problems with its practical deployment, so at least some of this should be solved. We can’t be sure yet, however, because few HTTP/3 servers and clients fully implement this system today. Nevertheless, the fundamental concepts of prioritization won’t change. You still won’t be able to use techniques such as preload without really understanding what happens internally, because it might still mis-prioritize your resources.

5. Server Push and First Flight

Server push allows a server to send response data without first waiting for a request from the client. Again, this sounds great in theory, and it could be used instead of inlining resources (see above). However, as discussed in part 2, push is very difficult to use correctly due to issues with congestion control, caching, prioritization, and buffering. Overall, it’s best not to use it for general web page loading unless you really know what you’re doing, and even then it would probably be a micro-optimization. I still believe it could have a place with (REST) APIs, though, where you can push subresources linked to in the (JSON) response on a warmed-up connection. This is true for both HTTP/2 and HTTP/3.

To generalize a bit, I feel that similar remarks could be made for TLS session resumption and 0-RTT, be it over TCP + TLS or via QUIC. As discussed in part 2, 0-RTT is similar to server push (as it’s typically used) in that it tries to accelerate the very first stages of a page load. However, that also means it is equally limited in what it can achieve at that time (even more so in QUIC, due to security concerns). As such, it is again a micro-optimization: you probably need to fine-tune things on a low level to really benefit from it. And to think I was once very excited to try out combining server push with 0-RTT.

What Does It All Mean?

All the above comes down to a simple rule of thumb: Apply most of the typical HTTP/2 recommendations that you find online, but don’t take them to the extreme.

Here are some concrete points that mostly hold for both HTTP/2 and HTTP/3:

  1. Shard resources over about one to three connections on the critical path (unless your users are mostly on low-bandwidth networks), using preconnect and dns-prefetch where needed.
  2. Bundle subresources logically per path or feature, or per change frequency (see the sketch after this list). Five to ten JavaScript and five to ten CSS resources per page should be just fine. Inlining critical CSS can still be a good optimization.
  3. Use complex features, such as preload, sparingly.
  4. Use a server that properly supports HTTP/2 prioritization. For HTTP/2, I recommend H2O. Apache and NGINX are mostly OK (although could do better), while Node.js is to be avoided for HTTP/2. For HTTP/3, things are less clear at this time (see below).
  5. Make sure that TLS 1.3 is enabled on your HTTP/2 web server.
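
To make point 2 above a bit more concrete, here is a minimal Webpack sketch of “a handful of logical bundles” rather than hundreds of tiny files or one giant blob; the group names and path patterns are hypothetical and will differ per project:

// webpack.config.js (sketch): one bundle for rarely changing third-party code
// and one per feature area that tends to change together.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: { test: /[\\/]node_modules[\\/]/, name: 'vendors' },
        checkout: { test: /[\\/]src[\\/]checkout[\\/]/, name: 'checkout' },
        account: { test: /[\\/]src[\\/]account[\\/]/, name: 'account' },
      },
    },
  },
};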

As you can see, while far from simple, optimizing pages for HTTP/3 (and HTTP/2) is not rocket science. What will be more difficult, however, is correctly setting up HTTP/3 servers, clients, and tools.

Servers and Networks

As you probably understand by now, QUIC and HTTP/3 are quite complex protocols. Implementing them from scratch would involve reading (and understanding!) hundreds of pages spread over more than seven documents. Luckily, multiple companies have been working on open-source QUIC and HTTP/3 implementations for over five years now, so we have several mature and stable options to choose from.

Some of the most important and stable ones include the following:

  • Python: aioquic
  • Go: quic-go
  • Rust: quiche (Cloudflare), Quinn, Neqo (Mozilla)
  • C and C++: mvfst (Facebook), MsQuic (Microsoft), QUICHE (Google), ngtcp2, LSQUIC (Litespeed), picoquic, quicly (Fastly)

However, many (perhaps most) of these implementations mainly take care of the HTTP/3 and QUIC stuff; they are not really full-fledged web servers by themselves. When it comes to your typical servers (think NGINX, Apache, Node.js), things have been a bit slower, for several reasons. First, few of their developers were involved with HTTP/3 from the start, and now they have to play catch-up. Many bypass this by using one of the implementations listed above internally as libraries, but even that integration is difficult.

Secondly, many servers depend on third-party TLS libraries such as OpenSSL. This is, again, because TLS is very complex and has to be secure, so it’s best to reuse existing, verified work. However, while QUIC integrates with TLS 1.3, it uses it in ways much different from how TLS and TCP interact. This means that TLS libraries have to provide QUIC-specific APIs, which their developers have long been reluctant or slow to do. The biggest issue here is OpenSSL, which has postponed QUIC support yet is used by many servers. This problem got so bad that Akamai decided to start a QUIC-specific fork of OpenSSL, called quictls. While other options and workarounds exist, TLS 1.3 support for QUIC is still a blocker for many servers, and it is expected to remain so for some time.

A partial list of full web servers that you should be able to use out of the box, along with their current HTTP/3 support, follows:

Note some important nuances:

  • Even “full support” means “as good as it gets at the moment”, not necessarily “production-ready”. For instance, many implementations don’t yet fully support connection migration, 0-RTT, server push, or HTTP/3 prioritization.
  • Other servers not listed, such as Tomcat, have (to my knowledge) made no announcement yet.
  • Of the web servers listed, only Litespeed, Cloudflare’s NGINX patch, and H2O were made by people intimately involved in QUIC and HTTP/3 standardization, so these are most likely to work best early on.

As you can see, the server landscape isn’t fully there yet, but there are certainly already options for setting up an HTTP/3 server. However, simply running the server is only the first step. Configuring it and the rest of your network is more difficult.

Network Configuration

As explained in part 1, QUIC runs on top of the UDP protocol to make it easier to deploy. This, however, mainly just means that most network devices can parse and understand UDP. Sadly, it does not mean that UDP is universally allowed. Because UDP is often used for attacks and is not critical to normal day-to-day work besides DNS, many (corporate) networks and firewalls block the protocol almost entirely. As such, UDP probably needs to be explicitly allowed to/from your HTTP/3 servers. QUIC can run on any UDP port but expect port 443 (which is typically used for HTTPS over TCP as well) to be most common.

However, many network administrators will not want to just allow UDP wholesale. Instead, they will specifically want to allow QUIC over UDP. The problem there is that, as we’ve seen, QUIC is almost entirely encrypted. This includes QUIC-level metadata such as packet numbers, but also, for example, signals that indicate the closure of a connection. For TCP, firewalls actively track all of this metadata to check for expected behavior. (Did we see a full handshake before data-carrying packets? Do the packets follow expected patterns? How many open connections are there?) As we saw in part 1, this is exactly one of the reasons why TCP is no longer practically evolvable. However, due to QUIC’s encryption, firewalls can do much less of this connection-level tracking logic, and the few bits they can inspect are relatively complex.

As such, many firewall vendors currently recommend blocking QUIC until they can update their software. Even after that, though, many companies might not want to allow it, because firewall QUIC support will always be much less than the TCP features they’re used to.

This is all complicated even more by the connection migration feature. As we’ve seen, this feature allows for the connection to continue from a new IP address without having to perform a new handshake, by the use of connection IDs (CIDs). However, to the firewall, this will look as if a new connection is being used without first using a handshake, which might just as well be an attacker sending malicious traffic. Firewalls can’t just use the QUIC CIDs, because they also change over time to protect users’ privacy! As such, there will be some need for the servers to communicate with the firewall about which CIDs are expected, but none of these things exist yet.

There are similar concerns for load balancers for larger-scale set-ups. These machines distribute incoming connections over a large number of back-end servers. Traffic for one connection must, of course, always be routed to the same back-end server (the others wouldn’t know what to do with it!). For TCP, this could simply be done based on the 4-tuple, because that never changes. With QUIC connection migration, however, that’s no longer an option. Again, servers and load balancers will need to somehow agree on which CIDs to choose in order to allow deterministic routing. Unlike for firewall configuration, however, there is already a proposal to set this up (although this is far from widely implemented).

Finally, there are other, higher-level security considerations, mainly around 0-RTT and distributed denial-of-service (DDoS) attacks. As discussed in part 2, QUIC includes quite a few mitigations for these issues already, but ideally, they will also use extra lines of defense on the network. For example, proxy or edge servers might block certain 0-RTT requests from reaching the actual back ends to prevent replay attacks. Alternatively, to prevent reflection attacks or DDoS attacks that only send the first handshake packet and then stop replying (called a SYN flood in TCP), QUIC includes the retry feature. This allows the server to validate that it’s a well-behaved client, without having to keep any state in the meantime (the equivalent of TCP SYN cookies). This retry process best happens, of course, somewhere before the back-end server — for example, at the load balancer. Again, this requires additional configuration and communication to set up, though.

These are only the most prominent issues that network and system administrators will have with QUIC and HTTP/3. There are several more, some of which I’ve talked about. There are also two separate accompanying documents for the QUIC RFCs that discuss these issues and their possible (partial) mitigations.

What Does It All Mean?

HTTP/3 and QUIC are complex protocols that rely on a lot of internal machinery. Not all of that is ready for prime time just yet, although you already have some options to deploy the new protocols on your back ends. It will probably take a few months to even years for the most prominent servers and underlying libraries (such as OpenSSL) to get updated, however.

Even then, properly configuring the servers and other network intermediaries, so that the protocols can be used in a secure and optimal fashion, will be non-trivial in larger-scale set-ups. You will need a good development and operations team to correctly make this transition.

As such, especially in the early days, it is probably best to rely on a large hosting company or CDN to set up and configure the protocols for you. As discussed in part 2, that’s where QUIC is most likely to pay off anyway, and using a CDN is one of the key performance optimizations you can do. I would personally recommend using Cloudflare or Fastly because they have been intimately involved in the standardization process and will have the most advanced and well-tuned implementations available.

Clients and QUIC Discovery

So far, we have considered server-side and in-network support for the new protocols. However, several issues are also to be overcome on the client’s side.

Before getting to that, let’s start with some good news: Most of the popular browsers already have (experimental) HTTP/3 support! Specifically, at the time of writing, here is the status of support (see also caniuse.com):

  • Google Chrome (version 91+): Enabled by default.
  • Mozilla Firefox (version 89+): Enabled by default.
  • Microsoft Edge (version 90+): Enabled by default (uses Chromium internally).
  • Opera (version 77+): Enabled by default (uses Chromium internally).
  • Apple Safari (version 14): Behind a manual flag. Will be enabled by default in version 15, which is currently in technology preview.
  • Other Browsers: No signals yet that I’m aware of (although other browsers that use Chromium internally, such as Brave, could, in theory, also start enabling it).

Note some nuances:

  • Most browsers are rolling out gradually, whereby not all users will get HTTP/3 support enabled by default from the start. This is done to limit the risks that a single overlooked bug could affect many users or that server deployments become overloaded. As such, there is a small chance that, even in recent browser versions, you won’t get HTTP/3 by default and will have to manually enable it.
  • As with the servers, HTTP/3 support does not mean that all features have been implemented or are being used at this time. Particularly, 0-RTT, connection migration, server push, dynamic QPACK header compression, and HTTP/3 prioritization might still be missing, disabled, used sparingly, or poorly configured.
  • If you want to use client-side HTTP/3 outside of the browser (for example, in your native app), then you would have to integrate one of the libraries listed above or use cURL. Apple will soon bring native HTTP/3 and QUIC support to its built-in networking libraries on macOS and iOS, and Microsoft is adding QUIC to the Windows kernel and their .NET environment, but similar native support has (to my knowledge) not been announced for other systems like Android.

Alt-Svc

Even if you’ve set up an HTTP/3-compatible server and are using an updated browser, you might be surprised to find that HTTP/3 isn’t actually being used consistently. To understand why, let’s suppose you’re the browser for a moment. Your user has requested that you navigate to example.com (a website you’ve never visited before), and you’ve used DNS to resolve that to an IP. You send one or more QUIC handshake packets to that IP. Now several things can go wrong:

  1. The server might not support QUIC.
  2. One of the intermediate networks or firewalls might block QUIC and/or UDP completely.
  3. The handshake packets might be lost in transit.

However, how would you know (which) one of these problems has occurred? In all three cases, you’ll never receive a reply to your handshake packet(s). The only thing you can do is wait, hoping that a reply might still come in. Then, after some waiting time (the timeout), you might decide there’s indeed a problem with HTTP/3. At that point, you would try to open a TCP connection to the server, hoping that HTTP/2 or HTTP/1.1 will work.

As you can see, this type of approach could introduce major delays, especially in the initial year(s) when many servers and networks won’t support QUIC yet. An easy but naïve solution would simply be to open both a QUIC and TCP connection at the same time and then use whichever handshake completes first. This method is called “connection racing” or “happy eyeballs”. While this is certainly possible, it does have considerable overhead. Even though the losing connection is almost immediately closed, it still takes up some memory and CPU time on both the client and server (especially when using TLS). On top of that, there are also other problems with this method involving IPv4 versus IPv6 networks and the previously discussed replay attacks (which my talk covers in more detail).

As such, for QUIC and HTTP/3, browsers would rather prefer to play it safe and only try QUIC if they know the server supports it. As such, the first time a new server is contacted, the browser will only use HTTP/2 or HTTP/1.1 over a TCP connection. The server can then let the browser know it also supports HTTP/3 for subsequent connections. This is done by setting a special HTTP header on the responses sent back over HTTP/2 or HTTP/1.1. This header is called Alt-Svc, which stands for “alternative services”. Alt-Svc can be used to let a browser know that a certain service is also reachable via another server (IP and/or port), but it also allows for the indication of alternative protocols. This can be seen below in figure 1.

Upon receipt of a valid Alt-Svc header indicating HTTP/3 support, the browser will cache this and try to set up a QUIC connection from then on. Some clients will do this as soon as possible (even during the initial page load — see below), while others will wait until the existing TCP connection(s) are closed. This means that the browser will only ever use HTTP/3 after it has downloaded at least a few resources via HTTP/2 or HTTP/1.1 first.

Even then, it’s not smooth sailing. The browser now knows that the server supports HTTP/3, but that doesn’t mean the intermediate network won’t block it. As such, connection racing is still needed in practice. So, you might still end up with HTTP/2 if the network somehow delays the QUIC handshake enough. Additionally, if the QUIC connection fails to establish a few times in a row, some browsers will put the Alt-Svc cache entry on a denylist for some time, not trying HTTP/3 for a while. As such, it can be helpful to manually clear your browser’s cache if things are acting up, because that should also empty the Alt-Svc bindings.

Finally, Alt-Svc has been shown to pose some serious security risks. For this reason, some browsers pose extra restrictions on, for instance, which ports can be used (in Chrome, your HTTP/2 and HTTP/3 servers need to be either both on a port below 1024 or both on a port above or equal to 1024, otherwise Alt-Svc will be ignored). All of this logic varies and evolves wildly between browsers, meaning that getting consistent HTTP/3 connections can be difficult, which also makes it challenging to test new set-ups.

There is ongoing work to improve this two-step Alt-Svc process somewhat. The idea is to use new DNS records called SVCB and HTTPS, which will contain information similar to what is in Alt-Svc. As such, the client can discover that a server supports HTTP/3 during the DNS resolution step instead, meaning that it can try QUIC from the very first page load instead of first having to go through HTTP/2 or HTTP/1.1. For more information on this and Alt-Svc, see last year’s Web Almanac chapter on HTTP/2.

As you can see, Alt-Svc and the HTTP/3 discovery process add a layer of complexity to your already challenging QUIC server deployment, because:

  • you will always need to deploy your HTTP/3 server next to an HTTP/2 and/or HTTP/1.1 server;
  • you will need to configure your HTTP/2 and HTTP/1.1 servers to set the correct Alt-Svc headers on their responses (see the sketch after this list).
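
To make that second point a bit more concrete, here is a minimal Node.js sketch (my own illustration, not taken from any particular server’s documentation) of what setting the header looks like. The certificate file names and the max-age are placeholders, and the exact protocol token (h3 versus a draft name such as h3-29) depends on what your HTTP/3 deployment advertises:

// A plain HTTPS (HTTP/1.1 over TCP) server that advertises HTTP/3 via Alt-Svc.
const fs = require('fs');
const https = require('https');

const server = https.createServer(
  {
    key: fs.readFileSync('privkey.pem'),    // placeholder certificate files
    cert: fs.readFileSync('fullchain.pem'),
  },
  (req, res) => {
    // Advertise that this origin is also reachable over HTTP/3 on UDP port 443;
    // ma=86400 lets the browser cache the Alt-Svc binding for 24 hours.
    res.setHeader('Alt-Svc', 'h3=":443"; ma=86400');
    res.end('Served over TCP; the browser may switch to HTTP/3 next time.\n');
  }
);

server.listen(443);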

While that should be manageable in production-level set-ups (because, for example, a single Apache or NGINX instance will likely support all three HTTP versions at the same time), it might be much more annoying in (local) test set-ups (I can already see myself forgetting to add the Alt-Svc headers or messing them up). This problem is compounded by a (current) lack of browser error logs and DevTools indicators, which means that figuring out why exactly the set-up isn’t working can be difficult.

Additional Issues

As if that wasn’t enough, another issue will make local testing more difficult: Chrome makes it very difficult for you to use self-signed TLS certificates for QUIC. This is because non-official TLS certificates are often used by companies to decrypt their employees’ TLS traffic (so that they can, for example, have their firewalls scan inside encrypted traffic). However, if companies started doing that with QUIC, we would again have custom middlebox implementations that make their own assumptions about the protocol. This could lead to them breaking protocol support in the future, which is exactly what we tried to prevent by encrypting QUIC so extensively in the first place! As such, Chrome takes a very opinionated stance on this: If you’re not using an official TLS certificate (signed by a certificate authority or root certificate that is trusted by Chrome, such as Let’s Encrypt), then you cannot use QUIC. This, sadly, also includes self-signed certificates, which are often used for local test set-ups.

It is still possible to bypass this with some freaky command-line flags (because the common --ignore-certificate-errors doesn’t work for QUIC yet), by using per-developer certificates (although setting this up can be tedious), or by setting up the real certificate on your development PC (but this is rarely an option for big teams because you would have to share the certificate’s private key with each developer). Finally, while you can install a custom root certificate, you would then also need to pass both the --origin-to-force-quic-on and --ignore-certificate-errors-spki-list flags when starting Chrome (see below). Luckily, for now, only Chrome is being so strict, and hopefully, its developers will loosen their approach over time.

If you are having problems with your QUIC set-up from inside a browser, it’s best to first validate it using a tool such as cURL. cURL has excellent HTTP/3 support (you can even choose between two different underlying libraries) and also makes it easier to observe Alt-Svc caching logic.

What Does It All Mean?

Next to the challenges involved with setting up HTTP/3 and QUIC on the server-side, there are also difficulties in getting browsers to use the new protocols consistently. This is due to a two-step discovery process involving the Alt-Svc HTTP header and the fact that HTTP/2 connections cannot simply be “upgraded” to HTTP/3, because the latter uses UDP.

Even if a server supports HTTP/3, however, clients (and website owners!) need to deal with the fact that intermediate networks might block UDP and/or QUIC traffic. As such, HTTP/3 will never completely replace HTTP/2. In practice, keeping a well-tuned HTTP/2 set-up will remain necessary both for first-time visitors and visitors on non-permissive networks. Luckily, as we discussed, there shouldn’t be many page-level changes between HTTP/2 and HTTP/3, so this shouldn’t be a major headache.

What could become a problem, however, is testing and verifying whether you are using the correct configuration and whether the protocols are being used as expected. This is true in production, but especially in local set-ups. As such, I expect that most people will continue to run HTTP/2 (or even HTTP/1.1) development servers, switching only to HTTP/3 in a later deployment stage. Even then, however, validating protocol performance with the current generation of tools won’t be easy.

Tools and Testing

As was the case with many major servers, the makers of the most popular web performance testing tools have not been keeping up with HTTP/3 from the start. Consequently, as of July 2021, few tools have dedicated support for the new protocol, although several can already load pages over it to some degree.

Google Lighthouse

First, there is the Google Lighthouse tool suite. While this is an amazing tool for web performance in general, I have always found it somewhat lacking in aspects of protocol performance. This is mostly because it simulates slow networks in a relatively unrealistic way, in the browser (the same way that Chrome’s DevTools handle this). While this approach is quite usable and typically “good enough” to get an idea of the impact of a slow network, testing low-level protocol differences is not realistic enough. Because the browser doesn’t have direct access to the TCP stack, it still downloads the page on your normal network, and it then artificially delays the data from reaching the necessary browser logic. This means, for example, that Lighthouse emulates only delay and bandwidth, but not packet loss (which, as we’ve seen, is a major point where HTTP/3 could potentially differ from HTTP/2).

Alternatively, Lighthouse uses a highly advanced simulation model to guesstimate the real network impact, because, for example, Google Chrome has some complex logic that tweaks several aspects of a page load if it detects a slow network. This model has, to the best of my knowledge, not been adjusted to handle IETF QUIC or HTTP/3 yet. As such, if you use Lighthouse today for the sole purpose of comparing HTTP/2 and HTTP/3 performance, then you are likely to get erroneous or oversimplified results, which could lead you to wrong conclusions about what HTTP/3 can do for your website in practice.

The silver lining is that, in theory, this can be improved massively in the future, because the browser does have full access to the QUIC stack, and thus Lighthouse could add much more advanced simulations (including packet loss!) for HTTP/3 down the line. For now, though, while Lighthouse can, in theory, load pages over HTTP/3, I would recommend against it.

WebPageTest

Secondly, there is WebPageTest. This amazing project lets you load pages over real networks from real devices across the world, and it also allows you to add packet-level network emulation on top, including aspects such as packet loss! As such, WebPageTest is conceptually in a prime position to be used to compare HTTP/2 and HTTP/3 performance. However, while it can indeed already load pages over the new protocol, HTTP/3 has not yet been properly integrated into the tooling or visualizations. For example, there are currently no easy ways to force a page load over QUIC, to easily view how Alt-Svc was actually used, or even to see QUIC handshake details. In some cases, even seeing whether a response used HTTP/3 or HTTP/2 can be challenging. Still, in April, I was able to use WebPageTest to run quite a few tests on facebook.com and see HTTP/3 in action, which I’ll go over now.

First, I ran a default test for facebook.com, enabling the “repeat view” option. As explained above, I would expect the first page load to use HTTP/2, which will include the Alt-Svc response header. As such, the repeat view should use HTTP/3 from the start. In Firefox version 89, this is more or less what happens. However, when looking at individual responses, we see that even during the first page load, Firefox will switch to using HTTP/3 instead of HTTP/2! As you can see in figure 2, this happens from the 20th resource onwards. This means that Firefox establishes a new QUIC connection as soon as it sees the Alt-Svc header, and it switches to it once it succeeds. If you scroll down to the connection view, it also seems to show that Firefox even opened two QUIC connections: one for credentialed CORS requests and one for no-CORS requests. This would be expected because, as we discussed above, even for HTTP/2 and HTTP/3, browsers will open multiple connections due to security concerns. However, because WebPageTest doesn’t provide more details in this view, it’s difficult to confirm without manually digging through the data. Looking at the repeat view (second visit), it starts by directly using HTTP/3 for the first request, as expected.

Next, for Chrome, we see similar behavior for the first page load, although here Chrome already switches on the 10th resource, much earlier than Firefox. It’s a bit more unclear here whether it switches as soon as possible or only when a new connection is needed (for example, for requests with different credentials), because, unlike for Firefox, the connection view also doesn’t seem to show multiple QUIC connections. For the repeat view, we see some weirder things. Unexpectedly, Chrome starts off using HTTP/2 there as well, switching to HTTP/3 only after a few requests! I performed a few more tests on other pages as well, to confirm that this is indeed consistent behavior. This could be due to several things: It might just be Chrome’s current policy, it might be that Chrome “raced” a TCP and QUIC connection and TCP won initially, or it might be that the Alt-Svc cache from the first view was unused for some reason. At this point, there is, sadly, no easy way to determine what the problem really is (and whether it can even be fixed).

Another interesting thing I noticed here is the apparent connection coalescing behavior. As discussed above, both HTTP/2 and HTTP/3 can reuse connections even if they go to other hostnames, to prevent downsides from hostname sharding. However, as shown in figure 3, WebPageTest reports that, for this Facebook load, connection coalescing is used over HTTP/3 for facebook.com and fbcdn.net, but not over HTTP/2 (as Chrome opens a secondary connection for the second domain). I suspect this is a bug in WebPageTest, however, because facebook.com and fbcdn.net resolve to different IPs and, as such, can’t really be coalesced.

The figure also shows that some key QUIC handshake information is missing from the current WebPageTest visualization.

Note: As we see, getting “real” HTTP/3 going can be difficult sometimes. Luckily, for Chrome specifically, we have additional options we can use to test QUIC and HTTP/3, in the form of command-line parameters.

On the bottom of WebPageTest’s “Chromium” tab, I used the following command-line options:

--enable-quic --quic-version=h3-29 --origin-to-force-quic-on=www.facebook.com:443,static.xx.fbcdn.net:443

The results from this test show that this indeed forces a QUIC connection from the start, even in the first view, thus bypassing the Alt-Svc process. Interestingly, you will notice I had to pass two hostnames to --origin-to-force-quic-on. In the version where I didn’t, Chrome, of course, still first opened an HTTP/2 connection to the fbcdn.net domain, even in the repeat view. As such, you’ll need to manually indicate all QUIC origins in order for this to work!

We can see even from these few examples that a lot of stuff is going on with how browsers actually use HTTP/3 in practice. It seems they even switch to the new protocol during the initial page load, abandoning HTTP/2 either as soon as possible or when a new connection is needed. As such, it’s difficult not only to get a full HTTP/3 load, but also to get a pure HTTP/2 load on a set-up that supports both! Because WebPageTest doesn’t show much HTTP/3 or QUIC metadata yet, figuring out what’s going on can be challenging, and you can’t trust the tools and visualizations at face value either.

So, if you use WebPageTest, you’ll need to double-check the results to make sure which protocols were actually used. Consequently, I think this means that it’s too early to really test HTTP/3 performance at this time (and especially too early to compare it to HTTP/2). This belief is strengthened by the fact that not all servers and clients have implemented all protocol features yet. Because WebPageTest doesn’t yet have easy ways of showing whether advanced aspects such as 0-RTT were used, it will be tricky to know what you’re actually measuring. This is especially true for the HTTP/3 prioritization feature, which isn’t implemented properly in all browsers yet and which many servers also lack full support for. Because prioritization can be a major aspect driving web performance, it would be unfair to compare HTTP/3 to HTTP/2 without making sure that at least this feature works properly (for both protocols!). This is just one aspect, though, as my research shows how big the differences between QUIC implementations can be. If you do any comparison of this sort yourself (or if you read articles that do), make 100% sure that you’ve checked what’s actually going on.

Finally, also note that other higher-level tools (or data sets such as the amazing HTTP Archive) are often based on WebPageTest or Lighthouse (or use similar methods), so I suspect that most of my comments here will be widely applicable to most web performance tooling. Even for those tool vendors announcing HTTP/3 support in the coming months, I would be a bit skeptical and would validate that they’re actually doing it correctly. For some tools, things are probably even worse, though; for example, Google’s PageSpeed Insights only got HTTP/2 support this year, so I wouldn’t expect HTTP/3 support anytime soon.

Wireshark, qlog and qvis

As the discussion above shows, it can be tricky to analyze HTTP/3 behavior by just using Lighthouse or WebPageTest at this point. Luckily, other lower-level tools are available to help with this. First, the excellent Wireshark tool has advanced support for QUIC, and it can experimentally dissect HTTP/3 as well. This allows you to observe which QUIC and HTTP/3 packets are actually going over the wire. However, in order for that to work, you need to obtain the TLS decryption keys for a given connection, which most implementations (including Chrome and Firefox) allow you to extract by using the SSLKEYLOGFILE environment variable. While this can be useful for some things, really figuring out what’s happening, especially for longer connections, could entail a lot of manual work. You would also need a pretty advanced understanding of the protocols’ inner workings.

Luckily, there is a second option, qlog and qvis. qlog is a JSON-based logging format specifically for QUIC and HTTP/3 that is supported by the majority of QUIC implementations. Instead of looking at the packets going over the wire, qlog captures this information on the client and server directly, which allows it to include some additional information (for example, congestion control details). Typically, you can trigger qlog output when starting servers and clients with the QLOGDIR environment variable. (Note that in Firefox, you need to set the network.http.http3.enable_qlog preference. Apple devices and Safari use QUIC_LOG_DIRECTORY instead. Chrome doesn’t yet support qlog.)

These qlog files can then be uploaded to the qvis tool suite at qvis.quictools.info. There, you’ll get a number of advanced interactive visualizations that make it easier to interpret QUIC and HTTP/3 traffic. qvis also has support for uploading Wireshark packet captures (.pcap files), and it has experimental support for Chrome’s netlog files, so you can also analyze Chrome’s behavior. A full tutorial on qlog and qvis is beyond the scope of this article, but more details can be found in tutorial form, as a paper, and even in talk-show format. You can also ask me about them directly because I’m the main implementer of qlog and qvis. 😉

However, I am under no illusion that most readers here should ever use Wireshark or qvis, because these are quite low-level tools. Still, as we have few alternatives at the moment, I strongly recommend not extensively testing HTTP/3 performance without using this type of tool, to make sure you really know what’s happening on the wire and whether what you’re seeing is really explained by the protocol’s internals and not by other factors.

What Does It All Mean?

As we’ve seen, setting up and using HTTP/3 over QUIC can be a complex affair, and many things can go wrong. Sadly, no good tool or visualization is available that exposes the necessary details at an appropriate level of abstraction. This makes it very difficult for most developers to assess the potential benefits that HTTP/3 can bring to their website at this time or even to validate that their set-up works as expected.

Relying only on high-level metrics is very dangerous because these could be skewed by a plethora of factors (such as unrealistic network emulation, a lack of features on clients or servers, only partial HTTP/3 usage, etc.). Even if everything did work better, as we’ve seen in part 2, the differences between HTTP/2 and HTTP/3 will likely be relatively small in most cases, which makes it even more difficult to get the necessary information from high-level tools without targeted HTTP/3 support.

As such, I recommend leaving HTTP/2 versus HTTP/3 performance measurements alone for a few more months and focusing instead on making sure that our server-side set-ups are functioning as expected. For this, it’s easiest to use WebPageTest in combination with Google Chrome’s command-line parameters, with a fallback to cURL for potential issues — this is currently the most consistent set-up I can find.

Conclusion and Takeaways

Dear reader, if you’ve read the full three-part series and made it here, I salute you! Even if you’ve only read a few sections, I thank you for your interest in these new and exciting protocols. Now, I will summarize the key takeaways from this series, provide a few key recommendations for the coming months and year, and finally provide you with some additional resources, in case you’d like to know more.

Summary

First, in part 1, we discussed that HTTP/3 was needed mainly because of the new underlying QUIC transport protocol. QUIC is the spiritual successor to TCP, and it integrates all of its best practices, as well as TLS 1.3. This was mainly needed because TCP, due to its ubiquitous deployment and integration in middleboxes, has become too inflexible to evolve. QUIC’s usage of UDP and almost full encryption means that we (hopefully) only have to update the endpoints in the future in order to add new features, which should be easier. QUIC, however, also adds some interesting new capabilities. First, QUIC’s combined transport and cryptographic handshake is faster than TCP + TLS, and it can make good use of the 0-RTT feature. Secondly, QUIC knows it is carrying multiple independent byte streams and can be smarter about how it handles loss and delays, mitigating the head-of-line blocking problem. Thirdly, QUIC connections can survive users moving to a different network (called connection migration) by tagging each packet with a connection ID. Finally, QUIC’s flexible packet structure (employing frames) makes it more efficient but also more flexible and extensible in the future. In conclusion, it’s clear that QUIC is the next-generation transport protocol and will be used and extended for many years to come.

Secondly, in part 2, we took a bit of a critical look at these new features, especially their performance implications. First, we saw that QUIC’s use of UDP doesn’t magically make it faster (nor slower) because QUIC uses congestion control mechanisms very similar to TCP to prevent overloading the network. Secondly, the faster handshake and 0-RTT are more micro-optimizations, because they are really only one round trip faster than an optimized TCP + TLS stack, and QUIC’s true 0-RTT is further affected by a range of security concerns that can limit its usefulness. Thirdly, connection migration is really only needed in a few specific cases, and it still means resetting send rates because the congestion control doesn’t know how much data the new network can handle. Fourthly, the effectiveness of QUIC’s head-of-line blocking removal severely depends on how stream data is multiplexed and prioritized. Approaches that are optimal to recover from packet loss seem detrimental to general use cases of web page loading performance and vice versa, although more research is needed. Fifthly, QUIC could easily be slower to send packets than TCP + TLS, because UDP APIs are less mature and QUIC encrypts each packet individually, although this can be largely mitigated in time. Sixthly, HTTP/3 itself doesn’t really bring any major new performance features to the table, but mainly reworks and simplifies the internals of known HTTP/2 features. Finally, some of the most exciting performance-related features that QUIC allows (multipath, unreliable data, WebTransport, forward error correction, etc.) are not part of the core QUIC and HTTP/3 standards, but rather are proposed extensions that will take some more time to be available. In conclusion, this means QUIC will probably not improve performance much for users on high-speed networks, but will mainly be important for those on slow and less-stable networks.

Finally, in this part 3, we looked at how to practically use and deploy QUIC and HTTP/3. First, we saw that most best practices and lessons learned from HTTP/2 should just carry over to HTTP/3. There is no need to change your bundling or inlining strategy, nor to consolidate or shard your server farm. Server push is still not the best feature to use, and preload can similarly be a powerful footgun. Secondly, we’ve discussed that it might take a while before off-the-shelf web server packages provide full HTTP/3 support (partly due to TLS library support issues), although plenty of open-source options are available for early adopters and several major CDNs have a mature offering. Thirdly, it’s clear that most major browsers have (basic) HTTP/3 support, even enabled by default. There are major differences in how and when they practically use HTTP/3 and its new features, though, so understanding their behavior can be challenging. Fourthly, we’ve discussed that this is worsened by a lack of explicit HTTP/3 support in popular tools such as Lighthouse and WebPageTest, making it especially difficult to compare HTTP/3 performance to HTTP/2 and HTTP/1.1 at this time. In conclusion, HTTP/3 and QUIC are probably not quite ready for primetime yet, but they soon will be.

Recommendations

From the summary above, it might seem like I am making strong arguments against using QUIC or HTTP/3. However, that is quite the opposite of the point I want to make.

First, as discussed at the end of part 2, even though your “average” user might not see major performance gains (depending on your target market), a significant portion of your audience will likely see impressive improvements. 0-RTT might only save a single round trip, but that can still mean several hundred milliseconds for some users. Connection migration might not sustain consistently fast downloads, but it will definitely help the person trying to fetch that PDF on a high-speed train. Packet loss on cable networks might be rare and bursty, but lossier wireless links stand to benefit much more from QUIC’s head-of-line blocking removal. What’s more, these are the users who typically encounter the worst performance with your product and, consequently, are most heavily affected by it. If you wonder why that matters, read Chris Zacharias’ famous web performance anecdote.

Secondly, QUIC and HTTP/3 will only get better and faster over time. Version 1 has focused on getting the basic protocol done, leaving more advanced performance features for later. As such, I feel it pays to start investing in the protocols now, so that you can use them and their new features to optimal effect when they become available down the line. Given the complexity of the protocols and their deployment aspects, it would be good to give yourself some time to get acquainted with their quirks. Even if you don’t want to get your hands dirty quite yet, several major CDN providers offer mature “flip the switch” HTTP/3 support (notably Cloudflare and Fastly). I struggle to find a reason not to try that out if you’re using a CDN (which, if you care about performance, you really should be).

As such, while I wouldn’t say that it’s crucial to start using QUIC and HTTP/3 as soon as possible, I do feel there are plenty of benefits already to be had, and they will only increase in the future.

Further Reading

While this has been a long body of text, it sadly only scratches the technical surface of two complex protocols, QUIC and HTTP/3.

Below you will find a list of additional resources for continued learning, more or less in order of ascending technical depth:

  • “HTTP/3 Explained,” Daniel Stenberg
    This e-book, by the creator of cURL, summarizes the protocol.
  • “HTTP/2 in Action,” Barry Pollard
    This excellent all-round book on HTTP/2 has reusable advice and a section on HTTP/3.
  • @programmingart, Twitter
    My tweets are mostly dedicated to QUIC, HTTP/3, and web performance (including news) in general. See, for example, my recent threads on QUIC features.
  • “YouTube,” Robin Marx
    My more than 10 in-depth talks cover various aspects of the protocols.
  • The Cloudflare Blog
    This is the main product of a company that also runs a CDN on the side.
  • The Fastly Blog
    This blog has excellent discussions of technical aspects, embedded in the wider context.
  • QUIC, the actual RFCs
    You’ll find links to the IETF QUIC and HTTP/3 RFC documents and other official extensions.
  • IIJ Engineers Blog
    This blog has excellent, deep technical explanations of QUIC feature details.
  • HTTP/3 and QUIC academic papers, Robin Marx
    My research papers cover stream multiplexing and prioritization, tooling, and implementation differences.
  • QUIPS, EPIQ 2018, and EPIQ 2020
    These papers from academic workshops contain in-depth research on security, performance, and extensions of the protocols.

With that, I leave you, dear reader, with a hopefully much-improved understanding of this brave new world. I am always open to feedback, so please let me know what you think of this series!

This series is divided into three parts:

  1. HTTP/3 history and core concepts
    This is targeted at people new to HTTP/3 and protocols in general, and it mainly discusses the basics.
  2. HTTP/3 performance features
    This is more in-depth and technical. People who already know the basics can start here.
  3. Practical HTTP/3 deployment options (current article)
    This explains the challenges involved in deploying and testing HTTP/3 yourself. It also details whether and how you should change your web pages and resources.

Privacy Changes to Streaming Ads Are Coming

Connected television, a rising star in the targeted, programmatic advertising industry, will soon be impacted by new privacy measures.

Connected television is often called CTV or OTT. The latter stands for “over the top” and is reminiscent of cable boxes or Roku devices that sat over the top of televisions.

For advertisers, the term can mean more. The definition of CTV “can be very convoluted. You can go down a rabbit hole of different definitions and abbreviations,” said Mary Kotara, a senior programmatic media consultant with Adswerve, a consultancy. Kotara was speaking during a live interview with the CommerceCo by Practical Ecommerce community on August 19, 2021.

The definition becomes challenging because CTV is not typically used to describe, for example, videos viewed on Facebook or even YouTube. But watching YouTube TV on a Roku device or Hulu on an iPhone is CTV.

Think of advertising on streaming television and movie providers such as Hulu or Roku, displayed on television screens. These are the ad platforms that might be impacted by changes to the availability of internet protocol (IP) addresses, a shift separate from the privacy trends already underway in web browsers (the elimination of tracking cookies) and in mobile app tracking.

IP Addresses

CTV differs from traditional (often called linear) television or even cable programming in that advertisers are able to target audiences in ways similar to what is available on popular ad networks such as Google or Facebook. This includes targeting by behavior, buying intent, and geolocation, among other attributes.

Part of that targeting process can be to create a household device graph. This graph uses an IP address and other metrics to associate individual devices — mobile phones, desktop computers, and CTV devices — with a specific household.

Once a device graph is established, it is possible, for example, to target shoppers with CTV ads that correspond to something an individual may have done on a mobile app. It takes behavioral targeting across devices and uses it to show relevant and timely ads.
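
To illustrate the mechanics Kotara describes, here is a deliberately simplified sketch (not any vendor’s actual algorithm; the device IDs and signals are made up) of how a shared IP address can be used to group devices into a household and then extend an audience from a phone to the living-room screen.

    from collections import defaultdict

    # Hypothetical ad events: (observed IP address, device ID, signal).
    events = [
        ("203.0.113.5", "iphone-123", "viewed running shoes in a shopping app"),
        ("203.0.113.5", "roku-456", "streamed a show via a CTV app"),
        ("203.0.113.5", "laptop-789", "visited the shoe retailer's site"),
        ("198.51.100.9", "tablet-111", "unrelated browsing"),
    ]

    # Step 1: group device IDs that share an IP address into a "household."
    households = defaultdict(set)
    for ip, device, _signal in events:
        households[ip].add(device)

    # Step 2: if any device in a household showed buying intent, the CTV device
    # in that same household becomes eligible for the matching ad.
    intent_ips = {ip for ip, _device, signal in events if "running shoes" in signal}
    ctv_targets = {
        device
        for ip in intent_ips
        for device in households[ip]
        if device.startswith("roku")
    }
    print(ctv_targets)  # {'roku-456'} -- the living-room device tied to the phone's signal

It is also clear from this sketch why the approach breaks down once the observed IP address no longer maps to a single household.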

The household device graph also improves attribution.

“Consumers watching Hulu on a TV will not click through to convert. So [advertisers] need to know I served them an ad on connected TV and then the consumer went back to her desktop computer or phone and converted. The household graph will let [advertisers] paint that whole picture,” said Kotara, adding that IP addresses were an important identifier for constructing a household graph.

“Companies that connect TV with cookies and device IDs use IP addresses to tie it all together,” said Kotara.

IP addresses in CTV devices, however, could be the next identifier to become unavailable or unreliable as changes occur to promote user privacy.

Several proposals from privacy advocates and technology companies would make collecting IP addresses almost useless.

One such idea is Google’s IP blindness proposal called Gnatcatcher (“Global Network Address Translation Combined with Audited and Trusted CDN or HTTP-Proxy Eliminating Reidentification”). Gnatcatcher disguises a user’s IP address.

With Gnatcatcher, Google wanted to develop “technology to protect people from opaque or hidden techniques that share data about individual users and allow them to be tracked covertly. One such tactic involves using a device’s IP address to try and identify someone without their knowledge or ability to opt-out.”
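
Google has not published implementation details in this article, so the following is only a generic, hypothetical illustration of the IP blindness idea rather than Gnatcatcher itself: when many unrelated users egress through a shared, audited proxy or NAT pool, the IP address an ad platform observes identifies the pool, not a household.

    # Hypothetical illustration only -- not Google's actual design.
    EGRESS_POOL = ["192.0.2.1", "192.0.2.2"]  # shared proxy egress addresses

    def observed_ip(real_client_ip: str) -> str:
        # The real address never reaches the ad platform; clients are simply
        # spread across the small shared pool.
        return EGRESS_POOL[hash(real_client_ip) % len(EGRESS_POOL)]

    for client in ["203.0.113.5", "198.51.100.9", "203.0.113.77", "192.0.2.200"]:
        print(client, "-> seen as", observed_ip(client))

    # Each pool address now stands in for many unrelated households, so grouping
    # devices by observed IP no longer reconstructs a single household graph.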

Better Future

Interestingly, obscuring IP addresses could actually make CTV attribution and targeting better in the long run.

“There will be smarter ways” to target and measure CTV in the near future, according to Kotara.

“IP addresses are not the most reliable source of information. They are not always accurate. There are downfalls aside from the privacy issue,” Kotara said.

In the same way that ending tracking cookies gives advertisers an opportunity to improve, so too will the end of relying on IP addresses, according to Kotara.

In fact, there are many companies already working on privacy-focused ways to provide audience targeting and some level of conversion attribution.

But it is worth noting that while dates have been set for the end of third-party cookie collection in each leading browser, there is no single date set for the obscuring of IP addresses. CTV advertisers simply know they cannot rely on IP addresses forever.