Product Photography, Part 7: Magnification and Close-ups

The best product photos provide online shoppers with precise detail to know what to expect when the goods arrive. Specialty magnified shots can help.

This is the seventh post in my series on helping ecommerce merchants elevate their product photography. “Part 1” addressed the importance of backdrops. “Part 2” explained tripods. “Part 3” examined the fundamentals of artificial lighting. “Part 4” reviewed angles and viewpoints, and “Part 5” dealt with choosing a camera. “Part 6” assessed lenses and their importance.

In this installment, I’ll describe the benefits of macro and tilt-shift lenses.

Macro Lenses

A macro lens acts as a magnifying glass for your camera, producing extremely sharp photos at close range. Macro lenses typically magnify at a 1:1 ratio or greater, rendering subjects at life size or larger on the sensor. The downside is that the plane of focus stays parallel to your camera’s sensor, and at such close range the depth of field is very narrow. But that shouldn’t matter when photographing small products.

A macro lens, such as this example from Canon, acts as a magnifying glass for a camera, for very sharp photos at close range. Source: B&H Photo.

Moreover, even if the depth of field does affect the focus of your images, a process called photo stacking layers multiple images to create a single version entirely in focus. I will explain how to do this in a later installment.

My favorite macro lenses include:

360-degree photos are composed of 20 to 80 shots taken with a macro lens from the same fixed position, using a turntable; a variety of cameras can do the job. 360-degree images greatly enhance the online experience while boosting trust and conversions. And because they provide unparalleled detail, 360-degree photos can remove surprises and thus reduce customer chargebacks.

360-degree photos are composed of 20 to 80 shots taken with a macro lens from the same fixed position. This screenshot shows the details of multiple shots in an animated GIF. Source: Product-360.com.

Extension tubes are less-costly alternatives to macro lenses. Sometimes called “macro tubes,” extensions are hollow cylinders that fit between the body of a camera and its lens. They shorten the minimum focusing distance, letting you get closer to a subject and thus increasing magnification. Extension tubes do not distort a shot and can be stacked to reach the desired magnification with any lens.

The downsides of extensions are altered minimum and maximum focus distances and a changed effective focal length and aperture. A “longer” lens will make your camera far more susceptible to shake and allow less light to hit your sensor. Vello, Meike, Viltrox, Kenko, and Fujifilm all make quality extension tubes.

Extension tubes fit between the body of a camera and its lens. They alter how close you can get to a subject and thus increase magnification. Source: B&H Photo.

Tilt-Shift Lenses

A tilt-shift lens is very handy for changing the focal plane of an image to maximize or minimize its depth of field. Tilt-shift lenses let you tilt the lens relative to the sensor and shift it up, down, and side to side as needed for the perfect shot.

The tilt function is especially helpful in product photography because it lets you place focus on specific details. More importantly, a tilt-shift lens projects an image circle much wider than the sensor requires while producing a very sharp image, unlike traditional wide-angle lenses.

My choices for tilt-shift lenses are the Canon TS-E 50mm f/2.8L Macro and the Canon TS-E 90mm f/2.8. Both are quite expensive, however.

Tilt-shift lenses, such as this one from Canon, allow the movement of a lens up or down and side to side. Source: B&H Photo. 

Macro vs. Tilt-shift?

Choosing between a macro lens and a tilt-shift comes down to your products and budget. A macro lens with photo stacking is best if you require everything in focus. (Again, I’ll explain photo stacking in a future installment.) For less money, use extension tubes (or even back up with one of the lenses in “Part 6” and use a smaller aperture for more depth of field).

However, if you’re looking to create interesting and engaging images and have a large budget, consider investing in a tilt-shift lens.

Using Modern Image Formats: AVIF And WebP

Images are the most popular resource type on the web and are often the largest. Users appreciate high-quality visuals, but care needs to be taken to deliver those hero images, product photos and cat memes as efficiently and effectively as possible.

If you’re optimizing for the Web Vitals, you might be interested to hear that images are the Largest Contentful Paint element for ~42% of websites. Key user-centric metrics often depend on the size, number, layout, and loading priority of images on the page. This is why a lot of our guidance on performance talks about image optimization.

A tl;dr of recommendations can be found below.

tl;dr

  • AVIF is a solid first choice if lossy, low-fidelity compression is acceptable and saving bandwidth is the number one priority, assuming encode/decode speeds meet your needs.
  • WebP is more widely supported and may be used for rendering regular images where advanced features like wide color gamut or text overlays are not required.
  • AVIF may not be able to compress non-photographic images as well as PNG or lossless WebP. Compression savings from WebP may be lower than JPEG for high-fidelity lossy compression.
  • If both AVIF and WebP are not viable options, consider evaluating MozJPEG (optimize JPEG images), OxiPNG (non-photographic images), or JPEG 2000 (lossy or lossless photographic images).
  • Progressive enhancement via <picture> lets the browser choose the first supported format in the order of preference. This implementation is considerably simplified when using image CDNs, where the Accept header and content negotiation (e.g., auto-format and quality) can serve the best image.

Why Do We Need Modern Formats?

We have a reasonably wide selection of image formats to choose from when rendering images on the web. The essential difference between image formats is that the image codec used to encode or decode each image type is different. An image codec represents the algorithm used to compress and encode images to a specific file type and decode them for display on the screen.

Evaluating Codecs

You can evaluate which image format is suitable for you based on different parameters.

  • Compression
    The efficiency of a codec can be mainly measured by how much compression it can achieve. Compression matters because the higher the compression, the smaller the file size, and the less data required to transfer the image over the network. Smaller file size directly impacts the Largest Contentful Paint (LCP) metric for the page, as image resources needed by the page load faster.
  • Quality
    Ideally, compression should not result in any loss of image data; it should be lossless. Compression formats that result in some loss of image data, thereby reducing the quality of the image, are known as lossy. You may use tools like DSSIM or ssimulacra to measure the structural similarity between images and judge if the loss in quality is acceptable.
  • Encode/Decode Speed
    Complex compression algorithms may require higher processing power to encode/decode images. This can be complicated by whether encoding is being done ahead of time (static/build) or on-the-fly (on-demand). While encoding may be one-time in the case of static images, the browser still has to decode images before rendering them. A complex decoding process can slow down the rendering of images.

Degree of compression, image quality, and decoding speed are key factors to be considered when comparing image performance for the web. Specific use cases may require image formats that support other features like:

  • Software support: An image format may perform very well but is useless if browsers, CDNs, and other image manipulation tools do not recognize it.
  • Animation support may be required for some images on the web (e.g., GIF). However, you should ideally replace such images with videos.
  • Alpha Transparency: The ability to create images with different opacity levels using the alpha channel (e.g., PNG images with transparent backgrounds).
  • High dynamic range (HDR) imaging and wide color gamut support.
  • Progressive decoding to load images gradually allows users to get a reasonable preview of the image before it gets refined.
  • Depth maps that will enable you to apply effects to the foreground or background of the image.
  • Images with multiple overlapping layers, for example, text overlays, borders, and so on.

Tip: When evaluating quality, compression and fine-tuning of modern formats, Squoosh.app’s ability to perform a visual side-by-side comparison is helpful. Zooming in allows you to better appreciate where a format exhibits blockiness or edge-artifacts to reason about trade-offs.

The Old Guards: JPEG And PNG

JPEG has been the most widely supported image format for 25 years. Classic JPEG encoders lead to relatively weak compression, while more modern JPEG encoding efforts (like MozJPEG) improve compression but are not quite as optimal as modern formats. JPEG is also a lossy compression format. While decoding speed for JPEGs is excellent, it lacks other desirable features required of images on modern, eye-catching websites. It does not support transparency in images, animation, depth maps, or overlays.

JPEG works best with photographs, while PNG is its counterpart for other still images. PNG is a lossless format and can support alpha transparency, but the compression achieved, especially for photographs, is considerably low. JPEG and PNG are both used extensively depending on the type of image required.

The target for modern image formats is thus to overcome the limitations of JPEG and PNG by offering better compression and flexibility to support the other features discussed earlier. With this background, let us look at what AVIF and WebP have to offer.

AVIF

The AV1 image file format (AVIF) is an open-source image format for storing still and animated images. It was released in February 2019 by the Alliance for Open Media (AOMedia). AVIF is the image version of the popular AV1 video format. The goal was to develop a new open-source video coding format that is both state-of-the-art and royalty-free.

AVIF Benefits

AVIF supports very efficient lossy and lossless compression to produce high-quality images after compression. AVIF compresses much better than most popular formats on the web today (JPEG, WebP, JPEG 2000, and so on). Images can be up to ten times smaller than JPEGs of similar visual quality. Some tests have shown that AVIF offers a 50% saving in file size compared to JPEG with similar perceptual quality. Note that there can be cases where WebP lossless can be better than AVIF lossless, so do be sure to evaluate manually.

A size comparison between a JPEG image and its corresponding (lossy) AVIF image converted using the Squoosh app illustrates this saving.

In addition to superior compression, AVIF also provides the following features:

  • AVIF supports animations, live photos, and more through multilayer images stored in image sequences.
  • It offers better support for graphical elements, logos, and infographics, where JPEG has limitations.
  • It provides better lossless compression than JPEG.
  • It supports twelve bits of color depth enabling high dynamic range (HDR) and wide color gamut (WCG) images with a better span of bright and dark tones and a broader range of luminosity.
  • It includes support for monochrome images and multichannel images, including transparent images that use the alpha channel.

Comparing Formats

To better understand the differences in quality and compression offered by the different formats, we can visually compare images and evaluate the differences.

Evaluating Quality And Compression

We will begin our quality evaluation of JPEG, WebP, and AVIF using the default high-quality output settings of Squoosh for each format — intentionally untuned to mimic a new user’s experience with them. As a reminder, you should aim to evaluate the quality configuration and formats that best suit your needs. If you’re short on time, image CDNs automate some of this.

In this first test, encoding a 560KB photo of a sunset (with many textures) produces an image that is visually and perceptually quite similar for each. The output comes in at 289KB (JPEG@q75), 206KB (WebP@q75), and 101KB (AVIF@q30) — up to 81% in compression savings.

Great stuff, but let’s dig deeper.

For a more extreme example of the differences between JPEG and AVIF, we can look at an example from the Kodak dataset (evaluated by Netflix) comparing a JPEG (4:4:4) at 20KB to an AVIF (4:4:4) at 19.8KB. Notice how the JPEG has visible blocky artifacts in the sky and roof. The AVIF is visibly better, containing fewer blocking artifacts. There is, however, a level of texture loss on the roof and some blurriness. It’s still quite impressive, given the overall compression factor is 59x.

To include an AVIF image on your page, you can add it as an image element. However, browsers that do not support AVIF cannot render this image.

<img src="/images/sky.avif" width="360" height="240" alt="a beautiful sky">

A workaround to ensure that at least one supported image format is delivered to all browsers is to apply AVIF as a progressive enhancement. There are two ways to do this.

Progressive Enhancement
  1. Using The <picture> Element
    As <picture> allows browsers to skip images they do not recognize, you can include images in your order of preference. The browser selects the first one it supports.
    <picture>
      <source srcset="img/photo.avif" type="image/avif">
      <source srcset="img/photo.webp" type="image/webp">
      <img src="img/photo.jpg" alt="Description" width="360" height="240">
    </picture>
  2. Using Content Negotiation
    Content negotiation allows the server to serve different resource formats based on what is supported by the browser. Browsers that support a specific format can announce it by adding the format to their Accept Request Header. E.g., the Accept Request Header for images in Chrome is:
    Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8
    The code to check if AVIF is supported in the fetch event handler can look something like this:
    const hdrAccept = event.request.headers.get("accept");
    const sendAVIF = /image\/avif/.test(hdrAccept);
    You can use this value to serve AVIF or any other default format to the client.
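
Putting those pieces together, here is a minimal service worker sketch of content negotiation. The /img/ path and the convention of a pre-generated .avif file sitting next to each .jpg are assumptions for illustration, not a prescribed setup:

self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);

  // Only intercept JPEGs under /img/ that have a pre-generated AVIF sibling.
  if (!(url.pathname.startsWith("/img/") && url.pathname.endsWith(".jpg"))) {
    return; // everything else goes to the network as usual
  }

  const hdrAccept = event.request.headers.get("accept") || "";
  const sendAVIF = /image\/avif/.test(hdrAccept);

  if (sendAVIF) {
    // The browser advertised AVIF support, so serve the AVIF variant.
    event.respondWith(fetch(url.pathname.replace(/\.jpg$/, ".avif")));
  }
});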

Creating the markup for progressive enhancement can be daunting. Image CDNs offer the option to automatically serve the best format suitable to the client. However, if you are not using an image CDN, you can consider using a tool like just-gimme-an-img. This tool can generate the markup for the picture element for a given image with different formats and widths. It also creates the images corresponding to the markup using Squoosh, entirely client-side. Note: encoding multiple formats can take a while, so you might want to grab a coffee while you wait.

Note: Image CDNs are mentioned a few times in this article. CDN servers are often located closer to users than origin servers and can have shorter round-trip times (RTT), improving network latency. That said, serving from a different origin can add round-trips and impact performance gains. This may be fine if the CDN is serving other site content, but when in doubt, experiment and measure.

Encode And Decode AVIF Files

Several open-source projects provide different methods to encode/decode AVIF files:

  • Libraries
    Libaom is the open-source encoder and decoder maintained by AOMedia, the creators of AVIF. The library is continuously updated with new optimizations that aim to reduce the cost of encoding AVIF, especially for frequently loaded or high-priority images. Libavif is an open-source muxer and parser for AVIF used in Chrome for decoding AVIF images. You can use libavif with libaom to create AVIF files from original uncompressed images or transcode them from other formats. There are also Libheif, a popular AVIF/HEIF encoder/decoder, and Cavif. Thanks to Ben Morss, libgd supports AVIF, and AVIF support is also coming to PHP in November.
  • Web Apps And Desktop Apps
    Squoosh, a web app that lets you use different image compressors, also supports AVIF, making it relatively straightforward to convert and create .avif files online. On desktop, GIMP supports AVIF exporting. ImageMagick and Paint.net also support AVIF, while community plug-ins add AVIF support to Photoshop.
  • JavaScript Libraries
    • AVIF.js is an AVIF polyfill for browsers that do not support AVIF yet. It uses the Service Worker API to intercept the fetch event and decode AVIF files.
    • Avif.io is another web utility that can convert files from different image types to AVIF on the client side. It calls Rust code in the browser using a Web Worker. The converter library is compiled to WASM using wasm-pack.
    • Sharp is a Node.js module that can convert large images in standard formats to smaller web-friendly images, including AVIF images (see the sketch after this list).
  • Utilities
    Image conversion or transformation utilities support the AVIF format. You can use MP4Box to create and decode AVIF files.
  • In Code
    go-avif implements an AVIF encoder for Go using libaom. It comes with a utility called avif, which can encode JPEG or PNG files to AVIF.
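
As a quick illustration of Sharp, the Node.js module mentioned above, here is a minimal sketch that transcodes a JPEG to AVIF and WebP. The filenames are hypothetical, and the quality values are only starting points:

// npm install sharp
const sharp = require("sharp");

// Transcode a JPEG to lossy AVIF; toFile() resolves with output metadata.
sharp("photo.jpg")
  .avif({ quality: 50 })
  .toFile("photo.avif")
  .then((info) => console.log(`AVIF written: ${info.size} bytes`));

// The same source can be transcoded to WebP with one more call.
sharp("photo.jpg")
  .webp({ quality: 75 })
  .toFile("photo.webp")
  .then((info) => console.log(`WebP written: ${info.size} bytes`));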

Anyone interested in learning how to create AVIF images using Squoosh or building the command line encoder avifenc can do so at the codelab on serving AVIF files.

AVIF And Performance

AVIF can reduce the file size of images due to better compression. As a result, AVIF files download faster and consume less bandwidth. This can potentially improve performance by reducing the time to load images.

The Lighthouse best-practices audit now factors in that AVIF image compression can bring significant improvements. It collects all the BMP, JPEG, and PNG images on the page, converts them to WebP, and estimates the AVIF file size. This estimate helps Lighthouse report the potential savings under the “Serve images in next-gen formats” section.

Tim Vereecke reported 25% byte savings and a positive impact on LCP (compared to JPEG) after converting 14 million images on his website to AVIF, as measured using Real User Monitoring (RUM).

AVIF Gotchas

The biggest drawback for AVIF at present is that it lacks uniform support across browsers. Introducing AVIF as a progressive enhancement helps overcome this. There are a few other aspects in which AVIF does not yet meet the ideal standard for a modern file format.

  • Modern versions of Chrome (Chrome 94+) support AVIF progressive rendering, while older versions do not. While at the time of writing there isn’t an encoder that can make these images easily, there is hope this will change.
  • AVIF images take longer to encode and create. This could be a problem for sites that create image files dynamically. However, the AVIF team is working on improving encoding speeds. AVIF contributors at Google have also reported some nice performance gains. Since January 1st, 2021, AVIF encoding has seen a ~47% improvement in transcode time at speed 6 (the current default for libavif) and a 73% improvement across the calendar year. Since July, there has also been a 72% improvement in transcode times at speed 9 (on-the-fly encoding).

  • Decoding AVIF images for display can also take up more CPU power than other codecs, though smaller file sizes may compensate for this.
  • Some CDNs do not yet support AVIF by default for their automatic format modes because it can still be slower to generate on the first request.

WebP

We have mentioned WebP a few times, but let’s briefly cover its history. Google created the WebP format in 2011 as an image format that would help to make the web faster. Over the years, it has been accepted and adopted widely because of its ability to compress images to lower file sizes compared to JPEG and PNG. WebP offers both lossless and lossy compression at an acceptable visual quality and supports alpha-channel transparency and animation.

Lossy WebP compression is based on the VP8 video codec and uses predictive encoding to encode an image. It uses values in the neighboring blocks of pixels to predict the value in a block and encodes only the difference. Lossless WebP images are generated by applying multiple transformation techniques to images to compress them.

WebP Benefits

WebP lossless images are generally 26% smaller than PNG, and WebP lossy images are 25–34% smaller than JPEG images of similar quality. Animation support makes them an excellent replacement for GIF images as well. The following shows a transparent PNG image on the left and the corresponding WebP image on the right generated by the Squoosh app with a 26% size reduction.

Additionally, WebP offers other benefits like:

  • Transparency
    WebP has a lossless 8-bit transparency channel with only 22% more bytes than PNG. It also supports lossy RGB transparency, which is a feature unique to WebP.
  • Metadata
    The WebP file format supports EXIF photo metadata and Extensible Metadata Platform (XMP) digital document metadata. It may also contain an ICC color profile.
  • Animation
    WebP supports true-color animated images.

Note: In the case of transparent, vector-like images such as the above, an optimized SVG may ultimately deliver a sharper, smaller file compared to a raster format.

WebP Tooling And Support

Over the years, ecosystems other than Google have adopted WebP, and there are many tools available to create, view and load WebP files.

Serving And Viewing WebP Files

WebP is supported on the latest versions of almost all major browsers today.

If developers wish to serve WebP with fallbacks for browsers that lack support, they can do so using the <picture> element or request headers, as shown in the section on AVIF.

Image Content Delivery Networks (CDNs) also support responsive images with automatic format selection for images in WebP or AVIF, depending on browser support. WebP plug-ins are available for other popular stacks like WordPress, Joomla, Drupal, etc. Initial support for WebP is also available in WordPress core, starting with WordPress 5.8.

You can view WebP images easily by opening them in a browser that supports them. Additionally, you can preview them on Windows and macOS using an add-on. Installing the Quick Look plug-in for WebP (qlImageSize) allows you to preview WebP files using the Quick Look utility. The WebP team has published precompiled libraries and utilities for the WebP codec for Windows, macOS, and Linux. Using these on Windows allows you to preview WebP images in File Explorer or Windows Photo Viewer.

Converting Images To WebP

In addition to the libraries provided by the WebP team, several free, open-source, and commercial image editing tools support WebP.

Utilities:
Like AVIF, Squoosh can also convert files to WebP online, as shown in the previous section. XnConvert is a utility that you can install on the desktop to convert different image formats, including WebP. XnConvert can also help with stripping and editing metadata, cropping and resizing, brightness and contrast, customizing color depth, blurring and sharpening, masks and watermarks, and other transforms.

Node.js Modules:
Imagemin is a popular image minification module with an add-on for converting images to WebP (imagemin-webp). The add-on supports both lossy and lossless WebP modes.
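
Here is a minimal sketch of the imagemin-webp add-on in use; the glob pattern and destination folder are hypothetical:

// npm install imagemin imagemin-webp
const imagemin = require("imagemin");
const imageminWebp = require("imagemin-webp");

(async () => {
  // Convert every JPEG and PNG under images/ to lossy WebP at quality 75.
  const files = await imagemin(["images/*.{jpg,png}"], {
    destination: "build/images",
    plugins: [imageminWebp({ quality: 75 })], // pass { lossless: true } for lossless mode
  });
  console.log(`${files.length} images converted to WebP`);
})();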

Others:
Several apps for image conversion and manipulation support the WebP format. These include Sketch, GIMP, ImageMagick, etc. A Photoshop plug-in for WebP is also available.

WebP Production Usage

Due to its compression benefits over JPEG and PNG, many large companies use WebP in production to reduce costs and decrease web page load times. Google reports 30–35% savings using WebP over other lossy compression schemes, serving 43 billion image requests a day, 26% of those being lossless compression.

To reach its large user base in emerging markets where data is expensive, Facebook started serving WebP images to Android users. They observed, “data savings of 25 to 35 percent compared with JPG, and 80 percent compared with PNG”.

WebP Gotchas

In its early days, a substantial downside with WebP was the lack of browser and tooling support. There are still a few trade-offs with WebP when considering all the features a modern format should ideally support.

  • WebP is limited to 8-bit color precision. As a result, it cannot support HDR/wide gamut images.
  • WebP does not support lossy images without chroma subsampling. Lossy WebP works exclusively with 8-bit YCbCr 4:2:0, while lossless WebP works with the RGBA format. This can affect images with fine details, chromatic textures, or colored text. See below for an example.
  • It does not support progressive decoding, though it does support incremental decoding, which can somewhat compensate; the effect on rendering can differ.

You should ideally generate WebP files from the best-quality source files available. Converting substandard JPEGs to WebP is not very efficient since you lose quality twice.

Summary

Summarizing all the information about the four formats (JPEG, PNG, AVIF, and WebP), and comparing and quantifying their strengths and weaknesses as presented in the previous sections, we have come up with the following table.

Note: The number of stars is based on a general opinion and may differ for specific use cases.

Following are some of the key points to be considered when referring to this table.

  • Compression for photographic and non-photographic images may further differ based on the fidelity (quality) of the images. We have indicated an overall score here.
  • You should choose quality and chroma subsampling settings based on the purpose of the images. Low to medium-fidelity images may be acceptable in most scenarios on the web, e.g., for news, social media, and e-commerce. Image archival, movie, or photography websites require high fidelity images. You should test the actual savings due to compression for high fidelity before converting to another format.
  • Lack of progressive decoding support and speed may be a problem for encoding/decoding AVIF files. For websites with average-sized images, the byte savings due to compression can compensate for speed and the absence of progressive decoding as images get downloaded quickly.
  • When comparing compression offered by image formats, compare file sizes at the same DSSIM.
  • The quality setting used when encoding need not be the same for different formats to yield the same quality of images. A JPEG encoded at a quality setting of 60 may be similar to an AVIF at a quality setting of 50 and a WebP at a quality setting of 65, as suggested by this post.
  • Extensive studies are still required to measure the actual impact on LCP when comparing formats.
  • We have not included other formats like JPEG XL and HEIC in this comparison. JPEG XL is still in a relatively nascent stage, and HEIC is supported only on Apple devices (Safari itself does not support it). Royalty and license fees further complicate support for HEIC.

AVIF does check most of the boxes overall, and WebP has better support and offers better compression when compared to JPEG or PNG. As such, you should undoubtedly consider WebP support when optimizing images on your website. Evaluating whether AVIF meets your requirements and introducing it as a progressive enhancement could provide value as the format gets adopted across different browsers and platforms. With quality comparison tooling and improving encoding speeds, using AVIF will ultimately get easier.

With thanks to Leena Sohoni-Kasture for her heavy input into this article as well as Patrick Meenan, Frank Galligan and Yoav Weiss for their reviews.

A Smashing Note

Earlier this year, we published a brand new book with Addy on everything you need to know to optimize how you compress, serve and maintain images — boosting performance along the way. We are shipping the books for free worldwide, and if you get the book now, you will also receive a hand-written postcard by Addy with a personal message.

17 Useful UX and UI Design Tools

User experience and user interface tools can help with every stage of website and mobile-app design — from early whiteboard brainstorming to testing your prototype with end-users.

Here is a list of useful UX and UI design tools. There are platforms to design websites from start to finish. There are also tools to observe users, monitor their interactions, and solicit feedback. All of these tools are relatively inexpensive. Most have free plans.

Adobe XD

Adobe XD is a vector-based user experience design tool for web and mobile apps. Quickly sketch wireframes and mockups; add animations, interactions, and reusable components; and work together in real-time. Design websites, apps, voice, and more. Adobe XD is part of Adobe Creative Cloud. Price: Plans start at $9.99 per month.

Sketch

Sketch is a vector graphics editor, primarily for user interface and user experience design. Quickly link different parts of your design and create prototypes to test your ideas. Use collaborative cross-platform tools for real-time feedback, sharing, and developer handoff. Sketch is available as a web app and for macOS. Price: Plans start at $9 per month.

Balsamiq

Balsamiq is a graphical tool to sketch user interfaces for websites and web apps. Rapidly design wireframes with a library of quick, drag-and-drop user-interface elements. Create templates, masters, and reusable and customizable component libraries. Linking lets you generate simple prototypes for demos or usability testing. Price: Plans start at $9 per month.

InVision

InVision is a digital product design platform for developing user experiences. Use Freehand to collaborate in real-time on a digital whiteboard. Prototype lets you create experiences without code. Collect input and keep development on track with Specs. Use Studio to amplify your screen design with animation and more. Price: Free for up to three documents. Premium plans start at $7.95.

Figma

Figma is a vector editor and a mockup and prototyping tool for you and your team to brainstorm in the open. Connect UI elements and choose your interactions and animations. Define subtle interactions, such as click, while hovering, while pressing a button, and more. Share, present, and gather feedback on interactive prototypes. Price: Free for one team project. Premium plans start at $12 per month.

Optimal Workshop

Optimal Workshop is a platform with multiple tools to research and create user experiences. Discover how people conceptualize, group, and label ideas with OptimalSort. Use Treejack to create and launch tests on your designs. Chalkmark lets you test design prototypes with users quickly and easily, for feedback. Find the right people for your studies and learn from your results with Questions. Make sense of your findings quickly with easy-to-use analysis tools in Reframer. Price: Plans start at $99 per month.

UsabilityHub

UsabilityHub offers a suite of tests for remote user testing. Run 5-second tests for first impressions; click and navigation tests to measure user behavior; straightforward question tests; and tests to discover user preferences. Participants choose from multiple design options in response to your specific questions. Price: Free for basic tests. Premium plans start at $79 per month.

Hotjar

Hotjar is an analytics and feedback platform to gather insights on user experiences. The Analysis tools allow you to measure and observe user behavior, and the Feedback tools enable you to hear what your users have to say. Run feedback polls and surveys. Price: Free for up to three heatmaps. Premium plans start at $39 per month.

Proto.io

Proto.io is a prototyping platform to develop web and mobile sites. Access the extensive UI component libraries, a huge variety of customizable templates, and a wide selection of assets, including static and animated icons, stock images, and even sound effects. Add different levels of interactivity based on your project’s needs and go from a simple wireframe to a prototype that feels real. Explore your prototype on a web or mobile browser. Price: Plans start at $24 per month.

UXPin

UXPin is a platform to design mockups, prototypes, wireframes, and systems for UI and UX design. Turn your wireframes and flows into mockups and fully interactive prototypes all in one tool. Share your work and collaborate with your team on a single design file. Access built-in design elements, easy mockups, real-time comments, and online user testing. Price: Plans start at $19 per month.

Axure

Axure is a UX tool to build prototypes with unlimited event triggers, conditions, and actions. Leverage Axure rapid prototyping widgets to create working forms, sortable grids, and dynamic interfaces. Create diagrams, customer journeys, wireframes, and other UX docs next to your prototypes. Gather feedback directly on-screen and use Slack and Microsoft Teams integrations for notifications. Price: Plans start at $25 per month.

Red Pen

Red Pen provides live, annotated feedback on your design projects. Upload your designs, and collaborators simply point and click to give feedback. Ask colleagues and clients for feedback by giving them a private link or inviting them via email. Teams are automatically updated about comments, additions, and new versions. Each image can be easily updated or replaced whenever an iteration has been made. Price: Free for 14 days. Plans start at $20 per month for five projects.

Marvel

Marvel is a design platform to create wireframes, mockups, and prototypes for any device right from your browser. Sync designs from Sketch, upload your images, or build mockups directly in the design tool. Create interactive prototypes, with no coding required. Add collaborators to projects. Get video, voice, and analytical feedback on designs and prototypes. Price: Free for one project. Premium plans start at $12 per month.

FlowMapp

FlowMapp is a UX online planning tool for creating visual customer-journey maps, sitemaps, and user flows. Design UX for products, websites, and apps. Use Flowchart to improve user experience and plan user journeys. User flow diagrams are the fastest way for creators to plan user experience and improve customer journey paths. Share your boards easily with your team, clients, and guest users. Price: Free for one project. Premium plans start at $8.25 per month.

Origami Studio

Origami Studio is a free design tool created by Facebook and available for Mac. It allows designers to rapidly build and share interactive interfaces. Use Canvas to drag and drop your layout, shape layers and groups, and more. Add interaction, animation, and behavior to your prototype using blocks called patches. Reuse components for efficient design. Preview live prototypes with the iOS and Android apps. Price: Free.

Webflow

Webflow lets you build custom websites in a visual canvas with no code. Drag in unstyled HTML elements for full control, or use pre-built pieces for complex elements such as sliders, tabs, background videos, and more. Work directly with content-management data and ecommerce products to build your site with real content. Create branded purchase flows for your customers. Upload your logo and tweak colors to keep your receipt and order notification emails on brand. Price: Free for up to two projects. Premium plans start at $12 per month.

Adobe Comp

Adobe Comp lets you develop layouts quickly from rough sketches and ideas. Create layouts on your phone or tablet using natural drawing gestures. Convert rough shapes and lines into crisp graphics. Start your print, web, and mobile layouts with the actual assets instead of placeholders. Pull in vector shapes, images, colors, and text styles. Easily share your layouts and collaborate. Price: Free.

Improving The Accessibility Of Your Markdown

Markdown is a small text-to-HTML conversion language. It was created by John Gruber in 2004 with the goal of making writing formatted text in a plain text editor easier. You can find Markdown in many places on the internet, especially where developers are present. Two notable examples are comments on GitHub and the source code for posts on Smashing Magazine!

How Markdown Works

Markdown uses special arrangements of characters to format content. For example, you can create a link by wrapping a character, word, or phrase in square brackets. After the closing square bracket, you then include a URL wrapped in parentheses to create a destination for the link.

So typing:

[I am a link](https://www.smashingmagazine.com/)

Would create the following HTML markup:

<a href="https://www.smashingmagazine.com/">I am a link</a>

You can also blend HTML with Markdown, and it will all boil down to HTML when compiled. The following example:

I am a sentence that includes <span class="class-name">HTML</span> and __Markdown__ formatting.

Generates this as HTML markup:

<p>I am a sentence that includes <span class="class-name">HTML</span> and <strong>Markdown</strong> formatting.</p>

Markdown And Accessibility

Accessibility is a holistic concern, meaning that it affects every aspect of creating and maintaining digital experiences. Since Markdown is a digital tool, it also has accessibility considerations to be aware of.

  • The good news:
    Markdown generates simple HTML markup, and simple HTML markup can be easily read by assistive technology.
  • The less good news:
    Markdown isn’t all-encompassing, nor is it prescriptive. In addition, there is more to accessibility than just assistive technology.

When it comes to ensuring your Markdown content is accessible, there are two big-picture issues:

  1. There are certain types of content Markdown does not support, and
  2. There isn’t a Clippy-esque experience to accompany you while you write, meaning that you won’t get a warning if you do something that will create inaccessible content.

Because of these two considerations, there are things we can do to ensure our Markdown content is as accessible as it can be.

The Three Most Important Things You Can Do

It can be difficult to know where to start when it comes to making your content accessible. Here are three things you can do right now to make a large, significant impact.

1. Use Headings To Outline Your Content

Navigating by heading is by far the most popular method assistive technology users rely on to understand the content of the page or view they’re looking at.

Because of this, you want to use Markdown’s heading formatting options (#, ##, ###, ####, #####, and ######) to create a logical heading structure:


# The title, a first-level heading

Content

## A second-level heading

Content

### A third-level heading

Content

## Another second-level heading

Content

This creates a hierarchical outline that is easy to scan:

1. The title, a first-level heading
   a. A second-level heading
      i. A third-level heading
   b. Another second-level heading

Writing effective heading levels is a bit of an art, in that you want enough information to communicate the overall scope of the page, but not overwhelm someone by over-describing. For example, a recipe might only need a few h2 elements to section the ingredients, instructions, and backstory, while an academic paper might need all six heading levels to fully communicate nuance.

Being able to quickly scan all the headings on a page or view and jump to a specific one is a technique that isn’t limited to just screen readers, either. I enjoy and benefit from extensions such as headingsMap that let you take advantage of this feature.

2. Write Meaningful Alternate Descriptions For Your Images

Alternate descriptions help folks who have low vision or are browsing with images turned off to understand the content of the image you’re using.

In Markdown, an alternate description is placed in between the opening and closing brackets of the image formatting code:

![A sinister-looking shoebill staring at the camera.](https://live.staticflickr.com/3439/3259412053_92f822bee2_b.jpg)

An alternate description should clearly and concisely describe the content of the image and the context of why it was included. Also don’t forget to add punctuation!

Certain websites and web apps that use Markdown input will also try to add alternate description text for you. For example, GitHub will use the name of the file you upload for the alt attribute:
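
The generated Markdown looks something like this (a hypothetical upload; the URL is made up):

![Screen Shot 2021-09-14 at 2 12 05 PM](https://user-images.githubusercontent.com/0000000/screenshot.png)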

Unfortunately, this does not provide enough context for a person who can’t see the image. In this scenario, you want to communicate why the image is important enough to be included.

Examples of this you’ll commonly see on GitHub include:

  • A visual bug, where something doesn’t look the way it’s supposed to,
  • A new feature that is being proposed,
  • An annotated screenshot providing feedback,
  • Graphs and flowcharts that explain processes, and
  • Reaction GIFs for communicating emotion.

These images aren’t decorative. Since GitHub is public by default, you don’t know who is accessing your repo, or their circumstances. Better to proactively include them.

If you need help writing alternate descriptions, I’d enthusiastically recommend the W3C’s alt Decision Tree and Axess Lab’s Ultimate Guide to Alt Texts.

3. Use Plain Language

Simple, direct language helps everyone understand your content. This includes people:

  • With cognitive considerations,
  • Who don’t use English as their primary language,
  • Unfamiliar with the concepts you’re communicating,
  • Who are stressed or multitasking and have limited attention spans,
  • And so on.

The easier it is for someone to read what you write, the easier it is for them to understand and internalize it. This helps with every form of written Markdown content, be it blog posts, Jira tickets, Notion notes, GitHub comments, Trello cards, and so on.

Consider your sentence and word lengths. Also, consider who your intended audience is, and think about things like the jargon and idioms you use.

If you need help simplifying your language, three tools I like to use are Hemingway, Datayze’s Readability Analyzer, and the xkcd Simple Writer. Another site worth checking out is plainlanguage.gov.

Other Considerations

Want to go the extra mile? Great! Here are some things you can do:

Images

In addition to providing alternate descriptions, there are a few other things you can do to make your Markdown-inserted images accessible.

Mark Up SVG Images Properly

SVG is a great format for graphs, icons, simple illustrations, and other kinds of imagery that uses simple shapes and crisp lines.

There are two ways to render SVG in Markdown. Both approaches have specific things you’ll need to be on the lookout for:

1. Linking to an image with a .svg file extension

Note: The bug that I’m about to describe has been fixed, however, I’m still recommending the following advice for the next couple of years. This is due to Safari’s questionable tactic of tying browser updates to system updates, as well as hesitancy around updating software for some people who use assistive technology.

If you’re linking to an SVG as an image, you’ll want to use HTML’s img element, and not Markdown’s image formatting code (![]()).

The reason for this is that certain screen readers have bugs when they try to parse an img element that links to an SVG file. Instead of announcing it as an image as expected, they will announce it as a group or skip announcing the image entirely. To fix this, declare role="img" on the image element:

<img role="img" alt="A sylized sunflower." src="flower.svg" />

2. Using Inline SVG Code

There are a few reasons for declaring an image as inline SVG code instead of using an img element. The reason I most often encounter is to support dark mode.

Much like with using an img element, there are a couple of attributes you need to include to ensure assistive technology interprets it as an image, and not code. The two attribute declarations are role="img" and aria-labelledby:

<svg aria-labelledby="svg-title" fill="none" height="54" role="img" viewBox="0 0 90 54" width="90" xmlns="http://www.w3.org/2000/svg"> <title id="svg-title">A pelican.</title> <path class="icon-fill" d="M88.563 2.193H56.911a7.84 7.84 0 00-12.674 8.508h-.001l.01.023c.096.251.204.495.324.733l4.532 10.241-1.089 1.09-6.361-6.554a10.18 10.18 0 00-7.305-3.09H0l5.229 4.95h7.738l2.226 2.107H7.454l4.451 4.214h7.741l1.197 1.134c.355.334.713.66 1.081.973h-7.739a30.103 30.103 0 0023.019 7.076L16.891 53.91l22.724-5.263v2.454H37.08v2.81h13.518v-.076a2.734 2.734 0 00-2.734-2.734h-5.441v-3.104l2.642-.612a21.64 21.64 0 0014.91-30.555l-1.954-4.05 1.229-1.22 3.165 3.284a9.891 9.891 0 0013.036 1.066L90 5.061v-1.43c0-.794-.643-1.438-1.437-1.438zM53.859 6.591a1.147 1.147 0 110-2.294 1.147 1.147 0 010 2.294z"/></svg>

You’ll also want to ensure you use a title element (not to be confused with the title attribute) to describe the image, similar to an img element’s alt attribute. Unlike an alt attribute, you’ll also need to associate the id of the title element with its parent svg element by using aria-labelledby.

If you’d like to go deeper on accessibly marking up SVG, I recommend Accessible SVGs by Heather Migliorisi, and Accessible SVGs: Perfect Patterns For Screen Reader Users by Carie Fisher.

Load With Animated Images Paused

Animated GIFs are another common thing you’ll find with Markdown content — I find them more often than not being used by a developer to express their delight and frustration when discussing a technical topic.

The thing is, these animations can be distracting and adversely affect someone who is trying to read through your content. Cognitive considerations such as ADHD are especially affected here.

The good news is you can still include animated content! There are a few options:

  1. Use the picture element, using filetypes such as .mp4 and .webm that can load in a paused state (see the sketch after this list), or
  2. Use a solution that gives play/pause functionality to a .gif, such as Steve Faulkner’s details/summary hack, or the freezeframe.js library.
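
As a sketch of loading animation in a paused state, here is a closely related approach that swaps an animated GIF for a video element; with autoplay omitted, it loads paused and leaves playback up to the reader (the filenames are hypothetical):

<video controls loop muted playsinline preload="metadata" width="480">
  <source src="reaction.webm" type="video/webm">
  <source src="reaction.mp4" type="video/mp4">
</video>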

This little detail can go a long way to helping people out without having to abandon a way for you to express yourself.

Links

If you write content online, sooner or later you’re going to have to use links. Here are some things to be aware of:

Use Unique, Descriptive Link Names

Some forms of assistive technology can navigate through a list of links on a page or view the same way they can navigate through headings. Because of this, you want your links to hint at what someone can expect to find if they visit it.

Learn more about [how to easily poach an egg](https://lifehacker.com/this-is-the-chillest-easiest-way-to-poach-an-egg-1825889759).

You’ll also want to avoid ambiguous phrases, especially if they repeat. Terms like “click here” and “learn more” are common culprits. These terms don’t make sense when separated from the context of their surrounding non-link content. In addition, using the term more than once can leave someone navigating a list of links hearing the same meaningless name over and over.

Avoid Opening Links In A New Tab Or Window

Certain variants of Markdown such as Kramdown allow you to write code that can open links in a new tab or window:

[link name](url){:target="_blank"}

Doing this creates a security risk: without rel="noopener", the newly opened page can access the original page via window.opener. In addition, this experience is so confusing and undesirable that it is a Web Content Accessibility Guidelines (WCAG) success criterion. It is far better to let everyone using your website or web app make the choice for themselves about whether or not they want to open a link in a new tab.

Use Skip Links With Discretion

A skip link, or “skipnav” is a way to bypass large sections of content. You’ll commonly encounter them as a way to bypass the logo and main navigation on a webpage, allowing someone to quickly jump to the main content.
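
A minimal sketch of a skip link; the id and class names are made up for illustration:

<a class="skip-link" href="#main-content">Skip to main content</a>

<!-- logo, main navigation, and anything else being bypassed -->

<main id="main-content">
  <!-- main page content -->
</main>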

This is also a great technique for allowing someone to bypass a “keyboard trap,” something commonly found in embedded content.

Keyboard traps are where someone who isn’t using a mouse or a touchpad cannot escape an interactive component due to how it is constructed. You’ll typically find these with embedded iframe widgets.

A good way to test for keyboard traps? Use the Tab key!

Without a skip link, someone using assistive technology may have to resort to refreshing the page or view to escape the trap. This isn’t great and is especially troubling if motor control concerns are thrown into the mix. I’m of the school of thought that most people will just close the tab if they run into this scenario, rather than try to wrestle with getting it to work.

In addition to his great post about testing with the Tab key, Manuel Matuzović tells us about his use of skip links, as well as other improvements in Improving the keyboard accessibility of Embedded CodePens.

Be Careful With Automatically Generated Heading Anchor Links

Some Markdown generators automatically add an anchor link to accompany each heading you write. This is so you can focus someone’s attention to the relevant section on a page or view when sharing content.

The issue is there might be some assistive technology issues with this, depending on how this anchor link is constructed. If the anchor link is only wrapped around a glyph such as #, ¶, or §, we run into two issues:

  1. The link’s name does not make sense when removed from its surrounding context, and
  2. The link’s name is repeated.
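
One possible pattern, sketched here, is to give each anchor an accessible name that includes its heading text (the id value is arbitrary):

<h2 id="other-considerations">
  Other considerations
  <a href="#other-considerations" aria-label="Permalink: Other considerations">#</a>
</h2>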

This issue is discussed in more detail by Amber Wilson in her post, Are your Anchor Links Accessible? Her post also goes into detail about different solutions, as well as their potential drawbacks.

Indicate The Presence Of Downloads

Most of the time, links take you to another page or view. Sometimes, however, the destination is a download. When this happens, the browser either:

  1. Opens an app associated with the requested file type to display it, or
  2. Prompts you to save it to the operating system’s filesystem.

These two experiences can be jarring, especially if you can’t see the screen. A good way to prevent this less-than-ideal experience is to hint at the presence of the download in the link’s name. For example, here’s how you would do it in Markdown when linking to a PDF:

Download our [2020 Annual Report (PDF)](https://mycorp.biz/downloads/2020/annual-report.pdf).

Color

Color isn’t related to Markdown per se, but it does affect a lot of Markdown-generated content. The biggest color-related concerns are things you can usually modify if you are using a blogging service such as WordPress, Eleventy, Ghost, Jekyll, Gatsby, and so on.

Use A Dark Mode Theme

Providing a toggle for dark mode allows someone to choose an experience that helps them read. For some, it could be an aesthetic preference, for others it could be a way to avoid things like migraines, eye strain, and fatigue.

The important bit here is choice. Let someone who has dark mode turned on use light mode for your website, and vice-versa (and make sure the UI to do so is accessible).

The thing is, you can’t know what a person’s needs, desires, or circumstances are when they visit your website or web app, but you can provide them with the ability to do something about it.

Let’s also remember that Markdown exports simple, straightforward HTML, and that is easy to work with in CSS. This goes a long way to making your dark mode theme easier to develop.
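
For example, here is a minimal sketch that honors the reader’s system preference using CSS custom properties; the property names and colors are arbitrary:

/* Honor the system preference by default; a manual toggle can override these. */
:root {
  --text-color: #222;
  --background-color: #fff;
}

@media (prefers-color-scheme: dark) {
  :root {
    --text-color: #eee;
    --background-color: #121212;
  }
}

body {
  color: var(--text-color);
  background-color: var(--background-color);
}

A manual toggle can then override these defaults, preserving the element of choice described above.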

Use Syntax Highlighting With Good Color Contrast Support

Markdown can create blocks of code by wrapping content in triple backticks (```). It can also create inline content wrapped in the code element by wrapping a character, word, or phrase in single backticks.
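
For example, a fenced block with an optional language hint, which syntax highlighters key off:

```js
// Triple backticks open and close the block; “js” requests JavaScript highlighting.
const greeting = "Hello, world!";
```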

For both examples, many people add syntax highlighting libraries such as PrismJS to help people understand the code example they’re providing.

Certain themes use light-on-light or dark-on-dark values as an aesthetic choice. Unfortunately, this means the code may be difficult or impossible to see for some individuals. The trick here is to select a syntax highlighting theme that uses color values that are high enough contrast that people can actually see each and every glyph of the code.

A way to determine if it is high enough contrast is to use a tool such as WebAIM’s contrast checker and manually verify the color values provided by the theme. If you’re looking for a faster suggestion and don’t mind a little self-promotion, I maintain a color contrast-friendly syntax highlighting theme.

Content That Isn’t Supported By Markdown

Since you can use HTML in Markdown, there are certain kinds of content you’ll see more often than others in Markdown. Here are a few considerations for a couple of them.

Use The title Attribute To Describe iframe Content

HTML’s title attribute is commonly misused to create a tooltip effect. Unfortunately, this causes a lot of headaches for assistive technology users, and its usage this way is considered an antipattern.

The one good use of a title attribute is to provide a concise, meaningful description of what the iframe contains. This description provides assistive technology users a clue about what to expect if they navigate into the iframe to check out its contents.

For Markdown, the most common form of iframe content will be embeds such as YouTube videos:

<iframe width="560" height="315" src="https://www.youtube.com/embed/SDdsD5AmKYA" title="YouTube: Accessibility is a Hydra | EJ Mason | CascadiaJS 2019." frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

Much like your link text, you’ll also want to avoid generic and repetitive title content. YouTube’s embed code defaults to “YouTube video player,” which isn’t so great. We can do a little better and update that to “YouTube: Video title.” This will especially help if there’s more than one YouTube video embedded on the page or view.

As to why YouTube does it this way when it already knows the video title information is another problem entirely.

Provide Captions And Transcripts For Videos And Recorded Audio

Speaking of YouTube, another thing you’ll want to do is ensure your video and audio have captions and transcripts.

Captions

Captions display a text version of video content in real time as it is being spoken, allowing someone who biologically or circumstantially cannot hear audio to understand the video’s content. Captions can also include sound effects, music, and other cues that are important to communicating meaning.

Most popular video hosting providers have features to support captioning, including displaying them in an embedded context. The important part here is to avoid “craptions”: manually review automatically generated captions to ensure they make sense to a human being.

Transcripts

Transcripts are captions’ sibling. They take all the spoken dialog, pertinent sound effects and music, and other important details and list them outside of the embedded video or audio. There are many benefits to doing this, including allowing someone to:

  • Read through the video and audio content at their own pace;
  • Modify the size and presentation of the content;
  • Print the content out or convert it into a format that’s easier to digest;
  • More easily discover the content via search engines;
  • More easily translate the content.

Reader Mode

Like other Markdown-adjacent concerns, Reader Mode can offer a lot of benefits from an accessibility perspective.

If you are unfamiliar, Reader Mode is a feature offered by many browsers that strips away everything other than the main content. Most reader modes also provide controls for adjusting the text size, font, line height, foreground and background color, column width, even having your device read the content out loud for you!

You can’t directly trigger Reader Mode by using Markdown. Longform Markdown content, however, is often rendered in templates that can be set to make them Reader Mode-friendly.

Mandy Michael teaches us how to do this in her post, Building websites for Safari Reader Mode and other reading apps. A combination of semantic HTML, sectioning elements, and a dash of structured microdata are all it takes to unlock this great feature.

You Don’t Have To Do Everything At Once

This is a long post that covers different aspects of Markdown and how it interacts with other technology. It can seem daunting, in that it is a lot of content to cover across a few different subject areas.

The thing about accessibility work is that every little bit helps. You don’t have to address every consideration I have in this post in one big, sweeping change. Instead, try picking one thing to focus on, and build from there.

Know that each tweak and update will have a direct impact on someone’s quality of life when using the web, and that’s huge.

5 Ways Reddit Helps an Ecommerce Brand

Reddit is among the most visited sites worldwide. I addressed Reddit and Quora last month, explaining the dos and don’ts for marketers.

Reddit has a strict no-self-promotion policy. Still, it offers a venue to further a brand and establish expertise.

Reddit’s community provides the content, such as news updates, memes, how-to tips, and more. With more than 130,000 subreddits (niche branches), the platform contains seemingly endless topics to discuss with other members.

Members “upvote” posts and comments they consider helpful and “downvote” spam or low-value submissions. A member’s ratio of upvotes to downvotes is tallied in a score called “Karma,” reflecting trustworthiness.

5 Ways Reddit Helps an Ecommerce Brand

Join relevant communities. Redditors have a good eye for veiled attempts at selling on the platform. There’s no way to sneak around and slyly earn upvotes.

Consider the screenshot below of a subreddit for skincare (r/SkincareAddicts). A merchant that sells skincare products could join and connect with potential customers. Reddit prohibits inserting links to your site, but you can copy, say, blog text into a post or comment and link the source. It’s an excellent way to repurpose content.

A skincare merchant could join the SkincareAddicts subreddit and connect with potential customers.

Create an ad. If Reddit’s traditional participation methods are too time-consuming, pay for an ad.

The example ad below is from Steam, a video game platform and store, placed in the r/SkincareAddicts subreddit.

Steam, a video game platform and store, placed an ad in the SkincareAddicts subreddit.

Create your own subreddit. A subreddit can provide a dedicated space to discuss your industry, products, suggestions, and more — all of which can help you meet customers and prospects. Assign “moderators” to answer questions and address complaints.

Ipsy, a monthly beauty subscription service, has a subreddit with more than 13,000 members.

Ipsy, a monthly beauty subscription service, has a subreddit with more than 13,000 members.

Host an /r/IAmA. “Ask Me Anything” is a subreddit for interactive interviews. It’s a fun way to reach a large audience and show your expertise on a topic. The sessions are live and last an hour. Redditors ask the questions. The host responds in real time.

Reddit’s rules for starting an AMA thread involve submitting a request roughly 15 to 30 minutes beforehand to collect questions. A captivating title will help attract participants.

The general formula is: “I’m [insert short bio]. Ask me anything!”

The examples below are a scheduled AMA from actor Jared Cook and an unscheduled AMA from a photographer in Ghana. Both garnered lots of upvotes and comments. AMAs are widely popular. Celebrities, CEOs, and even former President Barack Obama have hosted them.

These examples show a scheduled AMA from actor Jared Cook and an unscheduled AMA from a photographer in Ghana.

Obtain feedback on your company. Reddit helps you see what folks really think about your products, services, reputation, and more.

Note the Ipsy subreddit below. A member asked what themes or products Ipsy should include in its monthly boxes. This is valuable feedback for Ipsy. Seemingly any marketing professional could benefit from similar queries.

Search Reddit, and you might be surprised what folks are saying about your brand!

A member on the Ipsy subreddit asked what themes or products Ipsy should include in its monthly boxes, providing valuable feedback.

Let’s Dive Into Cypress For End-to-End Testing

Software development without automated testing is hard to imagine today. A good variety of different test procedures will ensure a high level of quality. As a foundation for testing, we can use a number of unit tests. On top of that, in the middle of the pyramid, so to speak, are integration tests. End-to-end tests are at the very top, covering the most critical use cases. This third kind of testing will be the focus of this article.

However, end-to-end testing does have some pitfalls that are cause for concern:

  • End-to-end tests are slow and, thus, pose a significant hurdle in every continuous integration and continuous deployment (CI/CD) strategy. Not only that, but imagine finishing a task, a feature, or any other implementation — waiting for the test to execute can drain everyone’s patience.
  • Such end-to-end tests are hard to maintain, error-prone, and expensive in every way due to the effort of debugging. Various factors can cause this. Your test should feel like an assistant, never a hindrance.
  • The biggest nightmare for developers is a flaky test, which is a test that is executed the same way but leads to different results. It’s like a “Heisenbug”, which only occurs if you don’t measure the application being tested — that is, if you don’t look at it.

But don’t worry: You don’t have to succumb to these pitfalls. Let’s look at how to prevent many of them. However, I won’t just promise the moon and not deliver. In this guide, we’ll write some tests together, which I’ve made public for you in a GitHub repository. This way, I hope to show you that end-to-end testing can be fun! Let’s get started.

What Are End-to-End Tests?

When talking about end-to-end (or E2E) testing, I like to refer to it as being “workflow-based”. The phrase sums up end-to-end testing well: It simulates actual user workflows and should include as many functional areas and parts of the technology stack used in the application as possible. In the end, the computer is pretending to be a customer and tries to behave like a real user. These tests are best for applying constant stress to your application’s entire system and, thus, are a great measure to ensure quality when the whole application stack is present.

Let’s recall what we want to achieve with all of this. We know that front-end testing is a set of practices for testing the UI of a web application, including its functionality. Makes sense — with these measures, we can ensure that our application is working correctly and that no future changes will break our code. In order to achieve this efficiently, you might be wondering what and how much you need to test.

This is a valid question. You might find one possible answer in a metaphor: The test automation pyramid, first introduced by Mike Cohn and further specified by Martin Fowler, shows how to make testing efficient. We find fast and cheap unit tests on the lowest pyramid level, and time-consuming and expensive UI tests (end-to-end testing) at the top.

Explaining this and its advantages and drawbacks would be enough for its own article. I’d like to focus on one level. End-to-end tests in particular can bring significant improvements in quality if prioritized efficiently. In doing so, we can constantly put our system under stress and ensure that our application’s main functions are working correctly.

My Journey To Cypress

When I started learning how to write end-to-end tests, I used Mink, a PHP library, on top of Behat, a scenario-oriented behavior-driven development (BDD) framework. I started using Selenium, with all of its advantages and disadvantages. Because my team had started working with Vue.js a lot, we changed to a JavaScript-based testing framework to ensure flawless integration and compatibility. Our choice back then was Nightwatch.js, so I built our new test suite from scratch.

During this time, we often stumbled upon compatibility problems. You could call it dependency hell — not to mention all of the limitations we saw with Selenium and later with WebDriver.

  • On our team, we weren’t able to pin down the Chrome version of our CI. So if updates to Chrome were released, Nightwatch.js wasn’t fast enough to be compatible, causing many failures in our testing pipelines.
  • The number of test-sided causes of flaky tests started to rise, as the waiting possibilities of Nightwatch.js didn’t optimally match our product.

So, we came to consider building our test suite anew. After visiting an unconference, I discovered Cypress.

Cypress is an all-in-one testing framework that does not use Selenium or WebDriver. The tool uses Node.js to start a browser under special control. The tests in this framework run at the browser level, not by mere remote control. That offers several advantages.

In short, here are the reasons why I chose this framework:

  • Excellent debugging capability
    Cypress’ test runner can jump back to any state of the application via snapshots. So, we can directly see an error and all of the steps before it. In addition, there is full access to Chrome’s developer tools (DevTools), and clicks are fully recorded.
  • Better ways to wait for actions in the test or UI or in the responses from the API
    Cypress brings implicit waiting, so there is no need for explicit waits or sleeps. You can also make the test wait for animations and API responses.
  • Tests are written in JavaScript
    This mitigates the learning curve to write tests. Cypress’ test runner is open-source, so it fits our product strategy.

However, this article is a guide, so let’s stop with this general information and get going.

Getting Started

Install And Start Cypress

Let’s start from scratch. In my talks about Cypress, I usually begin by creating a new directory via mkdir, and then immediately installing Cypress. The easiest way to install it is via npm:

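npm install cypress --save-dev
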
A little hint: If you don’t want to use npm, you can install Cypress via Yarn:

yarn add cypress --dev

An alternative is the direct download, using the ZIP folders that Cypress provides. That’s it! Once the installation is complete, you’re ready to start.

There are two ways to start running Cypress tests. The first is by starting Cypress in the console, and running your tests headlessly:

./node_modules/.bin/cypress run

The second way is to use one of Cypress’ neat features, which is its integrated test runner. The test runner is a UI for running tests. To launch it, you can use a similar command:

./node_modules/.bin/cypress open

This command will open the test runner. When you open Cypress for the first time, you will see this interface:

Cypress provides some prewritten sample tests to showcase its features and give you some starting points — that’s why a number of tests are already available. Let’s ignore those for now, because we want to write our own soon. However, please keep this “Integration Tests” area in mind, because it will account for a lot of the magic that will happen later.

First Impression Of Cypress’ Structure

Now it’s time to open our newly created project in the integrated development environment (IDE) of choice. If you navigate to this folder, you will see the following test structure:

smashing-example
├── cypress
│   ├── fixtures
│   ├── integration
│   ├── plugins
│   └── support
└── cypress.json

Let’s go over these folders:

  • fixtures
    Here is where you’ll find fixed test data, which has no relation to the other entities. So, no IDs are stored here, as those can change according to the local state.
  • integration
    You will find the actual tests here.
  • plugins
    Here, you can extend Cypress, whether with existing Cypress plugins or your own.
  • support
    Here, you can extend Cypress itself. Your own commands and helpers are located here.
  • cypress.json
    Modify configurations here, including for the environment.

All right, I think we can find our way around Cypress now, whether the test runner or the source code. But how do we start? What do we want to test?

Choose A Test Case

A typical end-to-end test can get complex, particularly if it has a lot of steps. It would take a lot of time to execute manually. Because of this complexity, E2E tests can be challenging to automate and slow to run. As a result, we need to decide carefully which cases to automate.

In my opinion, the term “workflow-based” is key: We would select test cases based on typical user stories. However, due to run times, it is not advisable to cover every single available workflow. Therefore, we need a way to prioritize our test cases.

On my team, we had several criteria for our project. The test case should:

  • cover the most general and most used workflows of a feature, such as CRUD operations (the term “happy path” describes these workflows quite well);
  • use risk analysis, covering the workflows with E2E tests that are most vulnerable (i.e. where errors would cause the most damage);
  • avoid duplicate coverage;
  • not necessarily be used if unit tests are more appropriate (use an E2E test to test your software’s response to an error, not the error itself).

The second most important thing to keep in mind is to only test the workflow that you explicitly want to test. All other steps required to make your test work should be done with API operations outside of the test, to avoid testing them. This way, you will ensure minimal test run times and get a clear result of your test case if it fails. Think of this workflow as an end user would: Focus on using the feature rather than on the technical implementation.

Example:

If you want to test the checkout process in an online shop, don’t perform all of the other steps, such as creating the products and categories, even though you will need them to process the checkout. Use, for example, an API or a database dump to create these things, and configure the test only for the checkout.

Example: Finding My Articles in Smashing Magazine

I want to write a test for this website, Smashing Magazine. I cannot guarantee that this test will be up to date forever, but let’s hope it will last. Either way, you’ll be able to find this example in a GitHub repository.

Creating Our First Cypress Test

In the integration folder, we’ll begin by creating a new file. Let’s call it find-author.spec.js. The suffix .spec stands for “specification”. In terms of testing, this refers to the technical details of a given feature that your application must fulfill.

To turn this empty JavaScript file into a test’s home, we’ll start by giving the test suite its structure. We’ll use the method called describe. describe(), or context(), is used to contain and organize the tests. In other words, this method serves as a frame for our tests. Thus, our test file will look like this:

// find-author.spec.js
describe('Find authors at smashing', () => {
  // ...
});

The next step is to create the actual test. We’ll use the method it. it(), or specify(), is used to represent the actual test. As you can see, we can capture multiple tests in one file, which allows for some excellent structuring options.

// find-author.spec.js
describe('Find authors at smashing', () => {
  it('Find the author Ramona Schwering', () => {
    cy.log('This is our brand-new test');
  });
});

Little hint: If you’re familiar with Mocha, you might have noticed some similarities. Cypress is built on top of Mocha, so the syntax is the same.

All right, let’s proceed. If we run our test in Cypress’ test runner, we’ll notice that Cypress will open a browser to run the test. This browser is seen in the screenshot below:

Congratulations! We’ve written our first test! Sure, it doesn’t do much. We need to continue. Let’s fill our test with life.

Fill The Test With Life

What’s the first thing to do when testing a website? Right, we need to open the website. We can do that using a Cypress command. What is the command, you might be wondering?

Working With Commands

There are mainly two types of instructions used in an E2E test. The first type of instruction, the commands, represents the individual steps in the test. In the context of Cypress, commands are everything that Cypress does to interact with your website. This interaction could be anything — a click, scrolling down the website, or even finding an element. As a result, commands will be one of the important things we’ll fill our test with.

So, our first command will be the one to navigate to the website — smashingmagazine.com. This command is called visit.

Using it, our test will look like this:

// find-author.spec.js
describe('Find authors at smashing', () => {
  it('Find the author Ramona Schwering', () => {
    cy.visit('https://www.smashingmagazine.com/');
  });
});

There is one command that I use often — and you will, too. It’s called get:

cy.get('selector');

This command returns an element according to its selector — similar to jQuery’s $(…). So, you would use this command to find the parts to interact with. Usually, you would use it to start a chain of commands. But wait — what is meant by a chain of commands?

As mentioned at the beginning of this article, all tests and everything else that goes with them are written in JavaScript. You can put the commands in the tests (i.e. the statements) in a chain (chained, in other words). This means that the commands can pass on a subject (or return value) of a command to the following command, as we know from many test frameworks.

All right, we will start a chain of commands with the get command. To find an element with get, we need to find its selector first. Finding a unique selector is essential, because Cypress would otherwise return all matching elements; keep this in mind and avoid ambiguous selectors unless you intend to match multiple elements.
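
To make chaining concrete, here is a minimal sketch that reuses the search-field selector appearing later in this guide; each command hands its subject on to the next one:

cy.get('#js-search-input')     // subject: the search input element
  .should('be.visible')        // assert on the subject, then pass it along
  .type('Ramona Schwering');   // act on the very same subject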

Interacting With Elements

Cypress itself has a feature to help you find the selectors of the elements that you want to work with. This feature is called the Selector Playground, and it helps you to discover unique selectors of a component or to see all matching elements for a selector or a text string. So, this feature can help you a lot in this task. To enable it, simply click the crosshair icon in the header of your test’s UI, and then hover over the desired element:

As seen in the screenshot above, a tooltip will display the selector on hover or in this little bar under the crosshair icon, which appeared when the element was clicked. In this bar, you can also see how many elements would match the given selector — ensuring its uniqueness in our case.

Sometimes, those automatically generated selectors might not be the ones you want to use (e.g. if they are long or hard to read or do not fulfill your other criteria). The selector generated below is challenging to understand and too long, in my humble opinion:

In this case, I would fall back to the browser’s DevTools to find my unique selectors. You might be familiar with these tools; in my case, I often choose Chrome for this purpose. However, other supported browsers might provide similar features. The process feels similar to the Selector Playground, except that we’re using the DevTools’ features in the “Elements” tab.

To ensure that a selector is unique, I’d recommend searching for it in your DevTools’ code view. If you find only one result, you can be confident that it’s unique.

Did you know that there are many different selector types? Depending on the variety, tests can look and even behave pretty differently. Some varieties are better suited to end-to-end testing than others. If you want to know which selectors to use to keep your tests stable and clean, I can point you to one of my articles that covers this issue. Cypress’ developers themselves provide some guidance on this topic in their best practices.
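As a small sketch of what a stable selector can look like: Cypress’ best practices suggest dedicated test attributes such as data-cy, which are decoupled from styling and markup changes. The button below is hypothetical:

// Markup (hypothetical): <button data-cy="search-submit">Search</button>
cy.get('[data-cy="search-submit"]').click();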

Our Test As A Sequence Of Commands

OK, back to our test. In it, we want to display our workflow:

“I, as a user, will search for the author’s article and navigate to the author’s website through the reference area in one of their articles.”

We’ll reproduce the steps that a user would take by using commands. I’ll paste below the finished test with comments, which will explain the steps:

// find-author.spec.js
it('Find the author Ramona Schwering', () => {
  // Open the website
  cy.visit('https://www.smashingmagazine.com');

  // Enter author’s name in search field
  cy.get('#js-search-input').type('Ramona Schwering');

  // Navigate to author’s article
  cy.get('h2 > a').first().click();

  // Open the author’s page
  cy.get('.author-post__author-title').click();
});

This example deals with the workflow that we want to test. Cypress will execute this test. So, is it time to say “Congratulations”? Have we finally finished writing our first test?

Well, please take a closer look. Cypress will execute it, but it will only do what the test tells it to, which is whatever you wrote. If you run it in the test runner, you can see whether it has passed — but not if you run it headlessly. With this test, we only know whether Cypress could run our commands successfully — not whether we ended up on the author’s website. So, we need to teach our test to determine that.

Working With Assertions

The second type of statement takes care of the descriptions of the desired state of the UI — that is, whether something should exist, be visible, or no longer be visible. The assertions in Cypress are based on Chai and Sinon-Chai assertions, which is noticeable in the syntax.

Remember that we want to check whether we’re on the author’s profile page — mine in this example. So, we need to add an assertion for exactly that:

// find-author.spec.js
it('Find the author Ramona Schwering', () => {
  // Open the website
  cy.visit('https://www.smashingmagazine.com');

  // Enter author’s name in search field
  cy.get('#js-search-input').type('Ramona Schwering');

  // Navigate to author’s article
  cy.get('h2 > a').first().click();

  // Open the author’s page
  cy.get('.author-post__author-title').click();

  // Check if we’re on the author’s site
  cy.contains('.author__title', 'Ramona Schwering').should('be.visible');
});

All right, now we’ve written a test that has value. So, yes, congratulations on writing your first test… even if it’s not yet perfect.

Let’s Make Our Test Pretty

Even if we’ve succeeded in writing a first meaningful test and learned the core concept in the process, I wouldn’t merge this one yet if it were proposed in a pull request. A couple of things are left to do to make it shine.

Take Your Time

Cypress has a built-in retry option in almost every command, so you don’t have to wait to see whether, for example, an element already exists. However, this only looks to see whether an element exists in the DOM, not more than that. Cypress can’t predict everything your application does, so there might be some flakiness if you rely solely on this.

What would a user do if they wanted to see a website that is still loading? They would most likely wait until some parts of the website become visible (thus, loaded) and would then interact with them. In our test, we want to mimic precisely that: We want to wait for changes in the UI before starting to interact. In most cases, we’d limit this behavior to the elements we need, thus using assertions on those elements.

As you can see, we must make our test wait on several occasions. However, waiting too many times is not good either. As a rule of thumb, I’d suggest using an assertion to check whether the element to be interacted with has fully loaded, as the first step in determining whether the website being tested has loaded.

Let’s take a look at such a part of our test as an example. I added one assertion to make sure our page has fully loaded:

// find-author-assertions.spec.js
// Open the website
cy.visit('https://www.smashingmagazine.com');

// Ensure the site is fully loaded
cy.get('.headline-content').should('be.visible');

// Enter author’s name in the search field
cy.get('#js-search-input').type('Ramona Schwering');

Keep adding assertions in such a manner to all instances where our website will have loading times or several elements that need to be rendered anew. For the complete test file, please look at the corresponding test in the GitHub repository.

To avoid falling into the trap of flaky tests, I would like to give you one last hint: Never use fixed wait times, such as cy.wait(500) or the like.
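To make the contrast explicit, here is a small before-and-after sketch, reusing the selector from above:

// Avoid: a fixed wait slows down every run and may still be too short.
cy.wait(500);

// Prefer: an assertion that retries until the condition is met.
cy.get('.headline-content').should('be.visible');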

API Responses Are Your Friends

There’s one neat waiting possibility in particular that I love to use in my tests. Cypress can work with network features, and using those features to wait for network requests is another helpful way of waiting in your application. This way, you can make the test wait for a successful API response.

If we remember our workflow as an example, one step could make great use of an API waiting possibility. I’m thinking about search. A corresponding user story could be the following:

“I, as a developer, want to make sure that our search results have fully loaded so that no article of older results will mislead our test.”

Let’s apply that to our test. First of all, we need to define the route that we want to wait for later on. We can use the intercept command for this. I would search for the request, bringing the data that I need — the search results in this case.

To keep this example simple, I’ll use a wildcard for the URL. After that, I’ll use an alias so that Cypress can work with this route later on.

// find-author-hooks.spec.js
// Set the route to work with
it('Find the author Ramona Schwering', () => {
  // Route to wait for later
  cy.intercept({
    url: '*/indexes/smashingmagazine/*',
    method: 'POST'
  }).as('search'); // With this alias, Cypress will find the request again
  // ...

In Cypress, all defined routes are displayed at the beginning of the test. So, I’d like to put those intercept commands at the beginning of my test, too.

Now, we can use this route alias in assertions. The leanest way to do this would be with Cypress’ wait command, directly with the alias mentioned before. However, using this command alone would lead to waiting for the response regardless of its outcome. Even error codes such as 400 or 500 would count as passing, whereas your application would most likely break. So I’d recommend adding another assertion like this:

// find-author-hooks.spec.js
// Later: Assertion of the search request’s status code
cy.wait('@search')
  .its('response.statusCode')
  .should('equal', 200);

This way, we can wait for the software’s data, changes, and so on with precision, without wasting time or getting into problems if the application is heavily stressed. Again, you can find the complete example file in my GitHub repository.

Configuring Cypress

I’ve left out one small detail. If you take a closer look at the complete test example, it differs slightly from those we used here in this guide.

// Cypress
describe('Find author at smashing', () => {
  beforeEach(() => {
    // Open the website
    cy.visit('/');
  });
  // ...

I only use a slash to open the website of Smashing Magazine. How does that work? Well, using the command like this will navigate to the baseUrl of our tests. baseUrl is a configuration value that can be used as a prefix for the URL of the cy.visit() or cy.request() command. Among other values, we can define this value in the cypress.json file. For our test, we’ll set the baseUrl like so:

// cypress.json
{
  "baseUrl": "http://www.smashingmagazine.com"
}

Honorable Mention: Hooks

There’s one topic left that I want to mention, even if our example test is not suited to using it. As is common in other test frameworks, we can define what happens before and after our tests via so-called lifecycle hooks. More precisely, these exist to execute code before or after one or all tests:

// Cypress
describe('Hooks', function() {
  before(() => {
    // Runs once before all tests
  });

  after(() => {
    // Runs once after all tests
  });

  beforeEach(() => {
    // Runs before each test
  });

  afterEach(() => {
    // Runs after each test
  });
});

We want to fill our test file with more than one test, so we should look for common steps that we want to execute before or after them. Our first line is a case in point, being the visit command. Assuming we want to open this website before each of these tests, a beforeEach hook in our example would look like this:

// Cypress
describe('Find author at smashing', () => {
  beforeEach(() => {
    // Open the website
    cy.visit('https://www.smashingmagazine.com');
  });
  // ...

I frequently use this in my daily work to ensure, for example, that my application is reset to its default state before the test, thus isolating the test from other tests. (Never rely on previous tests!) Run your tests in isolation from each other to maintain control over the application’s state.

Each test should be able to run on its own — independent of other tests. This is critical to ensuring valid test results. For details on this, see the section “Data We Used to Share” in one of my recent articles. For now, refer to the complete example on GitHub if you want to see the entire test.
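What such an isolating hook can look like is sketched below; the reset endpoint is hypothetical, so substitute whatever API or fixture mechanism your application provides:

beforeEach(() => {
  // Hypothetical endpoint that restores a known default state
  cy.request('POST', '/api/test/reset');

  // Only then open the page under test
  cy.visit('/');
});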

Conclusion

In my opinion, end-to-end tests are an essential component of CI, keeping the quality of applications at a high level and at the same time relieving the work of testers. Cypress is my tool of choice for debugging end-to-end tests quickly, stably, and efficiently, and for running them in parallel with any pull request as part of CI. The learning curve is gentle if you’re already familiar with JavaScript.

I hope I’ve been able to guide you a bit and given you a starting point to write Cypress tests and some practical tips to get started. Of course, all code examples are available in the GitHub repository, so feel free to take a look.

Of course, this is only a starting point; there are many more things to learn and discuss regarding Cypress tests — I’ll leave you with some suggestions on what to learn next. With this in mind, happy testing!

The Emergence of B2B Raw Material Marketplaces

Business-to-business marketplaces are among ecommerce’s leading growth trends, yet many industries remain under-served, especially for raw materials.

The trend is evident in the level of venture capital investment and in the number of enterprise businesses developing marketplaces alongside their core products. That’s according to Paul do Forno, managing director of content and commerce at Deloitte, the international consulting firm.

“Everyone thinks of Amazon, but there are hundreds of marketplaces popping up,” do Forno said, giving, as an example, Knowde, a chemical, polymer, and ingredient marketplace connecting B2B buyers and sellers.

Knowde raised $72 million in Series B funding in August 2021.

Purchasing chemicals, polymers, and ingredients is “a very complicated buy, and what Knowde is trying to do is make it super simple,” do Forno said.

Knowde is a B2B ecommerce marketplace for raw materials and an example of what could be an emerging growth trend.

Not New

B2B marketplaces are not new.

“Business-to-business commerce on the Internet is generating a lot of interest,” wrote Steven N. Kaplan and Mohanbir Sawhney in a Harvard Business Review article from 2000.

“The appeal of doing business on the web is clear. By bringing together huge numbers of buyers and sellers and by automating transactions, web markets expand the choices available to buyers, give sellers access to new customers, and reduce transaction costs for all the players. By extracting fees for the transactions occurring within the B2B marketplaces, market makers can earn vast revenues. And because the marketplaces are made from software — not bricks and mortar — they can scale with minimal additional investment, promising even more attractive margins as the markets grow,” Kaplan and Sawhney wrote.

Some 21 years later, the time for many of these marketplaces may have finally come.

Raw Materials

“When I think about B2B marketplaces, I break them up into three segments,” said Ali Amin-Javaheri, the co-founder and CEO of Knowde.

“The first segment is everything related to services — payment marketplaces, labor marketplaces, logistics marketplaces, freight marketplaces, all sorts of them.

“The second is finished goods marketplaces, like Amazon Business, Alibaba, McMaster-Carr. It’s all B2B. They are selling to companies, but it’s all finished goods,” Amin-Javaheri continued.

“The third segment is all things raw materials — all the stuff that companies buy to create their own products,” said Amin-Javaheri, describing the segment in which his own company fits.

Many examples exist in the first two categories described by Amin-Javaheri, but relatively few are in the third.

That could change. Raw material marketplaces such as Knowde could be a Blue Ocean of opportunity for businesses to combine deep industry knowledge with commerce software.

The business fundamentals are the same as those that Kaplan and Sawhney described in Harvard Business Review more than two decades ago: “Web markets expand the choices available to buyers, give sellers access to new customers, and reduce transaction costs for all the players.”

Those fundamentals could apply to raw materials circa 2021.

“It’s greenfield, it’s massive, and it is ripe for change,” said Knowde’s Amin-Javaheri of the market for chemicals, polymers, and similar raw materials, adding that there could be $5 trillion in annual transactions for these materials worldwide.

Chemical suppliers, according to Amin-Javaheri, have traditional sales forces and methods that require a lot of personal interaction. While this approach can be lucrative for the professional buyers representing huge companies, it creates a gap for small and mid-sized organizations.

Those buyers are relatively expensive for some middle-market chemical suppliers to transact with. So they don’t. That leaves businesses — some of which are willing to spend hundreds of thousands or even millions of dollars on raw materials — feeling underserved.

A marketplace solves the problem for both buyers and sellers. The latter can connect with many more potential customers at a lower cost, while the former gets more support on a complex buying decision that might include understanding how various compounds could interact at a molecular level.

Software, Knowledge

This level of detail and complexity is why a simple web catalog won’t necessarily work. Buyers and sellers of the sorts of raw materials Knowde, for example, is trying to serve cannot simply visit a web page with a list of chemicals and casually add them to a shopping cart.

Thus, those B2B marketplaces create “workflows” that enable buyers and sellers to research products, ask questions, and negotiate prices.

These customer “workflows” could be similar in concept across industries. For example, a search that identifies chemical interactions might use similar logic and code to a search that matches semiconductor chips to motherboards.

But the parameters of, say, chemicals and semiconductor chips are vastly different. Thus raw material marketplaces will require both software and industry know-how.

That is a challenge. But it is one many companies could take on. Don’t be surprised if new B2B raw material marketplaces emerge in the next few years. And don’t be surprised when marketplaces such as Knowde gain significant market share.

Lessons from Bankruptcy Drive Ecom Agency Founder

Josh Durham knows the downside of entrepreneurship. He founded an ecommerce company in 2015 and quickly grew revenue. Then it went out of business.

He told me, “We made weighted blankets. We scaled that business to about $6 million a year. Then it tanked overnight. It was brutal. I had to fire everyone.”

Fast forward to 2021, and Durham has bounced back. He launched a successful marketing agency, Aligned Growth Management, that builds on his ecommerce experience. He and I recently discussed his journey, from early success to bankruptcy and back.

Our entire audio conversation is embedded below. The transcript that follows is edited for clarity and length.

Eric Bandholz: Tell us about your journey to Aligned Growth Management.

Josh Durham: I got into the ecommerce game in 2015, when I started my first company. It was called Weighting Comforts. We made weighted blankets. We were the first weighted blanket for adults on the market. We scaled that business to about $6 million a year. Then it tanked overnight. It was brutal. I had to fire everyone.

Bandholz: What happened?

Durham: The market fell out from under us. We didn’t evolve on our product well. Manufacturing in the States was our biggest downfall. Our competitors had automated and outsourced production to China. They were making weighted blankets for $5 each. My cost was $40 to $50. Our margins vanished as soon as Target came on the market with their own version.

Bandholz: Sales just disappeared?

Durham: Yes. In the fall of 2018 we started seeing indicators. It was two weeks from Black Friday, and we didn’t have enough cash for payroll. Ultimately it was a poor product-market fit. Plus, we tried to go from $2 million a year in revenue to $10 million. That was too ambitious.

I co-founded the business with my mom. It required a lot of debt. We borrowed almost $1 million trying to keep it open. That was a scary situation. We closed in May 2019.

We sold the trademark and the email list to a digital marketing agency that used it as an in-house brand. But the proceeds were not enough to pay the debt.

Bandholz: You signed personal guarantees?

Durham: Yes, personal guarantees for a bank line of credit.

Bandholz: It’s like the faster you grow, the more money it takes for inventory. The more you tie your cash up, the less money you have for marketing. So you’re stuck with all this inventory but no capital to sell it.

Durham: Yes, it’s a vicious cycle. And then you do a sale every two weeks. It’s a spiral. It’s deadly.

Bandholz: You mentioned your mom. Was she carrying any of that debt?

Durham: Yes, she had a portion of that debt for sure. Vendors did, too. It was all personally guaranteed.

Bandholz: How do you pay off a million dollars of debt?

Durham: One bite at a time. I had to file personal bankruptcy in Tennessee. I filed in September of 2019.

Bandholz: Everyone talks about the winning stories. But yours is another side of entrepreneurship that’s worth discussing.

Durham: It was a dark, dark time. When you’re scaling a business, you have revenue coming in. It’s an exciting time. I made Forbes lists during that time. Then I was out of business within nine months.

It was mental exhaustion. I was trying new things every week to keep the business alive. Our overhead was very expensive. We shifted employees to part-time and tried to make our marketing more efficient.

I ended up in bankruptcy court. It was a sad experience overall. I had hardly any cash, just enough to live on for a couple of months. I needed a job. But the job became a healing experience. Just being able to settle down, focus on my health, get back mentally, and rebuild.

Bandholz: You’ve bounced back, dug yourself out of the hole.

Durham: After the bankruptcy I was trying to decide what to do. Should I start an agency or freelance with other ecommerce brands? But I met Peter, the CEO of Groove Life, which makes outdoor-focused rings, belts, watch bands. He said, “Why don’t you come work here for a year?” So I joined Groove Life as head of growth.

Bandholz: Groove Life is very successful.

Durham: For sure. The company’s strong product margins allow for a lot of ad spend and investment in acquiring customers. The founders have done a great job building a brand. They focused on the guy who loves to hunt, fish, and work with his hands. More blue-collar, middle America. Everyone else in the market focused on the CrossFitters, the fitness influencers, that kind of thing.

The company also has an amazing guarantee — a lifetime warranty. They stick behind their products.

Bandholz: How long were you with Groove?

Durham: About a year and a half. I was in a transition period. I got married last year and began rethinking my priorities. I love entrepreneurship. I’m passionate about it. I love providing value, creating financial freedom for myself. I love helping others. So I struck out on my own and launched Aligned Growth Management, a marketing agency.

Bandholz: Ad agencies, from a brand perspective, can be frustrating. Everyone promises to scale your brand like crazy, but very few can. How do you help your clients grow?

Durham: First, we try to set realistic, healthy expectations. We’re not going to create a big win from one tactic alone. That’s the trap a lot of agencies have created for themselves. It’s rarely one thing that drives the needle.

We try to add value beyond ad buying. There’s a lot more to growing an ecommerce business than just having the best Facebook ad. It starts with the product. That’s the biggest differentiator. A founder with a marketing mindset while conceiving the product sets up the business for success, versus creating the product and then figuring out how to sell it.

We focus on three channels: Facebook, email-SMS, and an ambassador program.

Clients with strong internal stakeholders see the most success. Instead of outsourcing all responsibilities to the marketing agency, they have their own strategy, game plan, promotional calendar. They have new products coming out with campaigns planned. Many brands have no real marketing strategy. They send email campaigns randomly, for example, whenever they can get content.

Bandholz: Are there still opportunities to scale on Facebook with the loss of data?

Durham: Facebook’s on-platform reporting is bad with iOS 14 changes. I like to split performance metrics between a lead measure and a lag measure. Lead measures are CPMs, click-through rates. That data is not going to be wonky.

The lag measures are return on ad spend, cost per purchase, number of purchases, that kind of thing.

Bandholz: You’ve been around the block. I appreciate you opening up, being vulnerable about the downs. Everyone hears about the winning entrepreneur. But it’s tough out there. Your willingness to share your story will help other businesses. So thank you.

How can listeners get ahold of you?

Durham: The best place is Twitter — @joshjdurham. I’m also on LinkedIn. My agency’s site is AlignedGrowthManagement.com.

Rebuilding A Large E-Commerce Website With Next.js (Case Study)

At our company, Unplatform, we have been building e-commerce sites for decades now. Over those years, we have seen the technology stack evolve from server-rendered pages with some minor JavaScript and CSS to full-blown JavaScript applications.

The platform we used for our e-commerce sites was based on ASP.NET, and when visitors started to expect more interaction, we added React for the front-end. Although mixing the concepts of a server web framework like ASP.NET with a client-side web framework like React made things more complicated, we were quite happy with the solution — until we went to production with our highest-traffic customer. From the moment we went live, we experienced performance issues. Core Web Vitals are important, even more so in e-commerce. In the Deloitte study “Milliseconds Make Millions,” the investigators analyzed mobile site data of 37 different brands. As a result, they found that a 0.1s performance improvement can lead to a 10% increase in conversion.

To mitigate the performance issues, we had to add a lot of (unbudgeted) extra servers and had to aggressively cache pages on a reverse proxy. This even required us to disable parts of the site’s functionality. We ended up having a really complicated, expensive solution that in some cases just statically served some pages.

Obviously, this didn’t feel right, until we found out about Next.js. Next.js is a React-based web framework that allows you to statically generate pages, but you can also still use server-side rendering, making it ideal for e-commerce. It can be hosted on a CDN like Vercel or Netlify, which results in lower latency. Vercel and Netlify also use serverless functions for the Server Side Rendering, which is the most efficient way to scale out.

Challenges

Developing with Next.js is amazing, but there are definitely some challenges. The developer experience with Next.js is something you just need to experience. The code you write visualizes instantly in your browser, and productivity goes through the roof. This is also a risk, because you can easily get too focused on productivity and neglect the maintainability of your code. Over time, this and the untyped nature of JavaScript can lead to the degradation of your codebase. The number of bugs increases and productivity starts to go down.

It can also be challenging on the runtime side of things. The smallest changes in your code can lead to a drop in performance and other Core Web Vitals. Also, careless use of server-side rendering can lead to unexpected service costs.

Let’s have a closer look at our lessons learned in overcoming these challenges.

  1. Modularize Your Codebase
  2. Lint And Format Your Code
  3. Use TypeScript
  4. Plan For Performance And Measure Performance
  5. Add Performance Checks To Your Quality Gate
  6. Add Automated Tests
  7. Aggressively Manage Your Dependencies
  8. Use A Log Aggregation Service
  9. Next.js’s Rewrite Functionality Enables Incremental Adoption

Lesson Learned: Modularize Your Codebase

Front-end frameworks like Next.js make it so easy to get started these days. You just run npx create-next-app and you can start coding. But if you are not careful and start banging out code without thinking about design, you might end up with a big ball of mud.

When you run npx create-next-app, you will have a folder structure like the following (this is also how most examples are structured):

/public
  logo.gif
/src
  /lib
    /hooks
      useForm.js
  /api
    content.js
  /components
    Header.js
    Layout.js
  /pages
    index.js

We started out using the same structure. We had some subfolders in the components folder for bigger components, but most of the components were in the root components folder. There is nothing wrong with this approach and it’s fine for smaller projects. However, as our project grew it became harder to reason about components and where they are used. We even found components that were no longer used at all! It also promotes a big ball of mud, because there is no clear guidance on what code should be dependent on what other code.

To solve this, we decided to refactor the codebase and group the code by functional modules (kind of like NPM modules) instead of technical concepts:

/src
  /modules
    /catalog
      /components
        productblock.js
    /checkout
      /api
        cartservice.js
      /components
        cart.js

In this small example, there is a checkout module and a catalog module. Grouping the code this way leads to better discoverability: by merely looking at the folder structure you know exactly what kind of functionality is in the codebase and where to find it. It also makes it a lot easier to reason about dependencies. In the previous situation, there were a lot of dependencies between the components. We had pull requests for changes in the checkout that also impacted catalog components. This increased the number of merge conflicts and made it harder to make changes.

The solution that worked best for us was to keep the dependencies between the modules to an absolute minimum (if you really need a dependency, make sure it’s uni-directional) and introduce a “project” level that ties everything together:

/src
  /modules
    /common
      /atoms
      /lib
    /catalog
      /components
        productblock.js
    /checkout
      /api
        cartservice.js
      /components
        cart.js
    /search
  /project
    /layout
      /components
    /templates
      productdetail.js
      cart.js
  /pages
    cart.js

The project level contains the code for the layout of the e-commerce site and page templates. In Next.js, a page component is a convention and results in a physical page. In our experience, these pages often need to reuse the same implementation and that is why we have introduced the concept of “page templates”. The page templates use the components from the different modules, for example, the product detail page template will use components from the catalog to display product information, but also an add to cart component from the checkout module.
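As a minimal sketch of that idea (the paths follow the structure above; the template’s contents are left out):

// src/pages/cart.js — the physical Next.js page only wires the
// cart template into routing; the real UI lives in the template.
import CartTemplate from '../project/templates/cart';

export default function CartPage() {
  return <CartTemplate />;
}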

We also have a common module, because there is still some code that needs to be reused by the functional modules. It contains simple atoms, which are React components used to provide a consistent look and feel. It also contains infrastructure code — think of certain generic React hooks or GraphQL client code.

Warning: Make sure the code in the common module is stable and always think twice before adding code here, in order to prevent tangled code.

Micro Front-Ends

In even bigger solutions or when working with different teams, it can make sense to split up the application even more into so-called micro-frontends. In short, this means splitting up the application even more into multiple physical applications that are hosted independently on different URLs. For example: checkout.mydomain.com and catalog.mydomain.com. These are then integrated by a different application that acts as a proxy.

Next.js’ rewrite functionality is great for this, and using it this way is supported through so-called Multi Zones.

The benefit of multi-zones is that every zone manages its own dependencies. It also makes it easier to incrementally evolve the codebase: If a new version of Next.js or React gets out, you can upgrade the zones one by one instead of having to upgrade the entire codebase at once. In a multi-team organization, this can greatly reduce dependencies between teams.
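A proxy based on Next.js’ rewrites could look roughly like this in next.config.js, using the example domains above (a sketch, not a full multi-zones setup):

// next.config.js of the proxying application
module.exports = {
  async rewrites() {
    return [
      {
        // Forward everything under /checkout to the checkout zone
        source: '/checkout/:path*',
        destination: 'https://checkout.mydomain.com/checkout/:path*',
      },
    ];
  },
};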

Lesson Learned: Lint And Format Your Code

This is something we learned in an earlier project: if you work in the same codebase with multiple people and don’t use a formatter, your code will soon become very inconsistent. Even if you are using coding conventions and are doing reviews, you will soon start to notice the different coding styles, giving a messy impression of the code.

A linter will check your code for potential issues and a formatter will make sure the code is formatted in a consistent way. We use ESLint & prettier and think they are awesome. You don’t have to think about the coding style, reducing the cognitive load during development.

Fortunately, Next.js 11 now supports ESLint out of the box (https://nextjs.org/blog/next-11), making it super easy to set up by running npx next lint. This saves you a lot of time because it comes with a default configuration for Next.js. For example, it is already configured with an ESLint extension for React. Even better, it comes with a new Next.js-specific extension that will even spot issues with your code that could potentially impact the Core Web Vitals of your application! In a later paragraph, we will talk about quality gates that can help you prevent pushing code to production that accidentally hurts your Core Web Vitals. This extension gives you feedback a lot faster, making it a great addition.
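
The generated configuration is tiny. A sketch of what it can look like, using the stricter Core Web Vitals preset that ships with eslint-config-next:

// .eslintrc.json
{
  "extends": "next/core-web-vitals"
}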

Lesson Learned: Use TypeScript

As components got modified and refactored, we noticed that some of the component props were no longer used. Also, in some cases, we experienced bugs because of missing or incorrect types of props being passed into the components.

TypeScript is a superset of JavaScript and adds types, which allows a compiler to statically check your code, kind of like a linter on steroids.

At the start of the project, we did not really see the value of adding TypeScript. We felt it was just an unnecessary abstraction. However, one of our colleagues had good experiences with TypeScript and convinced us to give it a try. Fortunately, Next.js has great TypeScript support out of the box and TypeScript allows you to add it to your solution incrementally. This means you don’t have to rewrite or convert your entire codebase in one go, but you can start using it right away and slowly convert the rest of the codebase.

Once we started migrating components to TypeScript, we immediately found issues with wrong values being passed into components and functions. Also, the developer feedback loop got shorter: you get notified of issues before running the app in the browser. Another big benefit we found is that it makes it a lot easier to refactor code: it is easier to see where code is being used, and you immediately spot unused component props and code. In short, the benefits of TypeScript:

  1. Reduces the number of bugs
  2. Makes it easier to refactor your code
  3. Code gets easier to read
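
To illustrate the first two benefits, here is a hedged sketch of a typed component; the ProductBlock component and its props are hypothetical:

type ProductBlockProps = {
  title: string;
  price: number; // passing a string here now fails at compile time
  onAddToCart?: () => void;
};

export function ProductBlock({ title, price, onAddToCart }: ProductBlockProps) {
  return (
    <article>
      <h2>{title}</h2>
      <span>{price.toFixed(2)}</span>
      <button onClick={onAddToCart}>Add to cart</button>
    </article>
  );
}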

Lesson Learned: Plan For Performance And Measure Performance

Next.js supports different types of pre-rendering: static generation and server-side rendering. For best performance, it is recommended to use static generation, which happens during build time, but this is not always possible. Think of product detail pages that contain stock information. This kind of information changes often, and running a build every time does not scale well.

Fortunately, Next.js also supports a mode called Incremental Static Regeneration (ISR), which still statically generates the page but regenerates it in the background every x seconds. We have learned that this model works great for larger applications. Performance is still great, it requires less CPU time than server-side rendering, and it reduces build times: pages only get generated on the first request. For every page you add, you should think about the type of rendering needed. First, see if you can use static generation; if not, go for Incremental Static Regeneration; and if that too is not possible, you can still use server-side rendering.
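
A sketch of what ISR looks like on such a product detail page; getProduct() is a hypothetical data-access helper:

// pages/product/[id].js — getStaticPaths, which dynamic routes
// also require, is omitted for brevity.
export async function getStaticProps({ params }) {
  const product = await getProduct(params.id);

  return {
    props: { product },
    // Regenerate this page in the background,
    // at most once every 60 seconds.
    revalidate: 60,
  };
}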

Next.js automatically determines the type of rendering based on the absence of getServerSideProps and getInitialProps methods on the page. It’s easy to make a mistake, which could cause the page to be rendered on the server instead of being statically generated. The output of a Next.js build shows exactly which page uses what type of rendering, so be sure to check this. It also helps to monitor production and track the performance of the pages and the CPU time involved. Most hosting providers charge you based on the CPU time and this helps to prevent any unpleasant surprises. I will describe how we monitor this in the Lesson learned: Use a log aggregation service paragraph.

Bundle Size

To have a good performance it is crucial to minimize the bundle size. Next.js has a lot of features out of the box that help, e.g. automatic code splitting. This will make sure that only the required JavaScript and CSS are loaded for every page. It also generates different bundles for the client and for the server. However, it is important to keep an eye on these. For example, if you import JavaScript modules the wrong way the server JavaScript can end up in the client bundle, greatly increasing the client bundle size and hurting performance. Adding NPM dependencies can also greatly impact the bundle size.

Fortunately, Next.js offers a bundle analyzer that gives you insight into which code takes up what part of each bundle.
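
Enabling it is a small configuration change. A minimal sketch, assuming the official @next/bundle-analyzer package is installed:

```js
// next.config.js: a minimal sketch enabling @next/bundle-analyzer.
// Requires: npm install @next/bundle-analyzer
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  // Only analyze when explicitly asked for, to keep normal builds fast.
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js configuration
});
```

Running ANALYZE=true next build should then produce interactive treemaps for both the client and the server bundles.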

Lesson Learned: Add Performance Checks To Your Quality Gate

One of the big benefits of using Next.js is the ability to statically generate pages and deploy the application to the edge (a CDN), which should result in great performance and Web Vitals. We learned that, even with great technology like Next.js, getting and keeping a great Lighthouse score is really hard. It happened a number of times that after we deployed changes to production, the Lighthouse score dropped significantly. To take back control, we added automatic Lighthouse tests to our quality gate: a Lighthouse CI GitHub Action can automatically run Lighthouse tests against your pull requests. We are using Vercel, and every time a pull request is created, Vercel deploys it to a preview URL; the GitHub Action then runs the Lighthouse tests against that deployment.

If you don’t want to set up the GitHub Action yourself, or if you want to take this even further, you could consider a third-party performance monitoring service like DebugBear. Vercel also offers an Analytics feature, which measures the Core Web Vitals of your production deployment. Vercel Analytics collects these measurements from your visitors’ devices, so the scores reflect what your visitors are actually experiencing. At the time of writing, Vercel Analytics only works on production deployments.

Lesson Learned: Add Automated Tests

When the codebase gets bigger, it becomes harder to tell whether a code change has broken existing functionality. In our experience, a good set of end-to-end tests is a vital safety net. Even on a small project, having at least some basic smoke tests can make your life so much easier. We have been using Cypress for this and absolutely love it. The combination of having Netlify or Vercel automatically deploy your pull request to a temporary environment and running your E2E tests against it is priceless.
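
As a sketch of what such smoke tests can look like, here are two Cypress tests; the routes and selectors are hypothetical:

```ts
// cypress/integration/smoke.spec.ts: a minimal smoke-test sketch.
// Routes and selectors are hypothetical examples.
describe('smoke tests', () => {
  it('renders the home page', () => {
    cy.visit('/');
    cy.get('h1').should('be.visible');
  });

  it('renders a product detail page', () => {
    // A hypothetical product URL; use one that exists in your catalog.
    cy.visit('/products/example-product');
    cy.get('[data-testid="add-to-cart"]').should('be.visible');
  });
});
```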

We use the cypress-io/github-action to automatically run the Cypress tests against our pull requests. Depending on the type of software you’re building, it can also be valuable to have more granular tests using Enzyme or Jest. The tradeoff is that these are more tightly coupled to your code and require more maintenance.
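
For instance, a small Jest unit test for a hypothetical helper function; formatPrice is purely illustrative:

```ts
// lib/price.ts: a hypothetical helper.
export function formatPrice(cents: number): string {
  return `€${(cents / 100).toFixed(2)}`;
}

// __tests__/price.test.ts: a minimal Jest unit test.
import { formatPrice } from '../lib/price';

test('formats cents as a euro amount', () => {
  expect(formatPrice(1999)).toBe('€19.99');
});
```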

Lesson Learned: Aggressively Manage Your Dependencies

Managing dependencies becomes a time-consuming but oh-so-important activity when maintaining a large Next.js codebase. npm has made adding packages easy, and there seems to be a package for everything these days. Looking back, many of the times we introduced a new bug or saw a drop in performance, it had something to do with a new or updated npm package.

So before installing a package you should always ask yourself the following:

  • What is the quality of the package?
  • What will adding this package mean for my bundle size?
  • Is this package really necessary or are there alternatives?
  • Is the package still actively maintained?

To keep the bundle size small and to minimize the effort of maintaining these dependencies, keep the number of dependencies as small as possible. Your future self will thank you for it when you are maintaining the software.

Tip: the Import Cost VS Code extension automatically shows the size of imported packages.

Keep Up With Next.js Versions

Keeping up with Next.js and React is important: new versions not only give you access to new features, they also include bug fixes and fixes for potential security issues. Fortunately, Next.js makes upgrading incredibly easy by providing codemods (https://nextjs.org/docs/advanced-features/codemods): code transformations that automatically update your codebase for you.

Update Dependencies

Just as it is important to keep the Next.js and React versions current, it is important to update your other dependencies. GitHub’s Dependabot (https://github.com/dependabot) can really help here: it automatically creates pull requests with updated dependencies. Updating dependencies can break things, however, so automated end-to-end tests can really be a lifesaver here.

Lesson Learned: Use A Log Aggregation Service

To make sure the app behaves properly and to find issues before users do, we have found it absolutely necessary to configure a log aggregation service. Vercel allows you to log in and view the logs, but these are streamed in real time and are not persisted, and there is no support for configuring alerts and notifications.

Some exceptions can take a long time to surface. For example, we had configured stale-while-revalidate for a particular page. At some point, we noticed that the pages were not being refreshed and that old data was being served. Checking the Vercel logs revealed that an exception was thrown during the background rendering of the page. Had we been using a log aggregation service with an alert configured for exceptions, we would have spotted this much sooner.

Log aggregation services are also useful for monitoring the limits of Vercel’s pricing plans. Vercel’s usage page gives you insight into this as well, but a log aggregation service lets you add notifications when you reach a certain threshold. Prevention is better than cure, especially when it comes to billing.

Vercel offers a number of out-of-the-box integrations with log aggregation services, including Datadog, Logtail, Logalert, Sentry, and more.

Lesson Learned: Next.js’s Rewrite Functionality Enables Incremental Adoption

Unless there are serious issues with the current website, not a lot of customers are going to be excited about rewriting it entirely. But what if you could start by rebuilding only the pages that matter most in terms of Web Vitals? That is exactly what we did for another customer: instead of rebuilding the entire site, we rebuilt only the pages that matter most for SEO and conversion, in this case the product detail and category pages. By rebuilding those with Next.js, performance increased greatly.

Next.js’s rewrite functionality is great for this. We built a new Next.js front-end containing the catalog pages and deployed it to the CDN; all requests for other pages are rewritten by Next.js to the existing website. This way, you can start reaping the benefits of a Next.js site in a low-effort, low-risk manner.
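
A minimal sketch of such a setup, with a placeholder domain standing in for the existing website:

```js
// next.config.js: a minimal sketch of incremental adoption via rewrites.
// The legacy domain is a placeholder for your existing website.
module.exports = {
  async rewrites() {
    return {
      // Requests that do not match a Next.js page or static file
      // fall through to the existing website.
      fallback: [
        {
          source: '/:path*',
          destination: 'https://legacy.example.com/:path*',
        },
      ],
    };
  },
};
```

With fallback rewrites, Next.js first tries to match its own pages and static files; only unmatched requests are proxied to the legacy site.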

What’s Next?

When we released the first version of the project and started doing serious performance testing, we were thrilled by the results. Not only were the page response times and Web Vitals much better than before, but the operational costs were also a fraction of what they were. Next.js and the JAMStack generally allow you to scale out in the most cost-efficient way.

Switching from a more back-end-oriented architecture to something like Next.js is a big step. The learning curve can be quite steep, and initially some team members felt well outside their comfort zone. The small adjustments we made, the lessons described in this article, really helped with that. The development experience with Next.js also gives an amazing productivity boost: the developer feedback cycle is incredibly short!

Free Online Workshop: Frustrating Design Patterns And How To Fix Them

Disabled buttons. Infinite scroll. Poor inline validation. Parallax. Carousels. Modals. Mega-dropdown hover menus. There is plenty of frustration on the web. Let’s fix that. Join us for a free online workshop on Frustrating Design Patterns on Monday, September 27, at 9:00 AM PDT / 6:00 PM CET.

In the 2.5h live session, we’ll take a close look at some of these confusing and annoying patterns and explore better alternatives, alongside plenty of examples and checklists to keep in mind when building or designing your own. You’ll walk away with a toolbox of techniques and examples of doing things well, in your product, website, desktop app, or mobile app.

We’ll look into carousels, modals, infinite scroll, parallax and scrolljacking, mega-dropdown menus, disabled buttons, inline validation, frozen filters, CAPTCHA, authentication and privacy. Register for the free workshop.

Free online workshop: Frustrating Design Patterns in 2021, and How To Fix Them. 1 × 2.5h live session + Q&A, Mon, Sep 27, with all video recordings and slides. Get a free ticket.

Upcoming Live Workshops (Sep–Nov 2021)

We also have plenty of other online workshops coming up in the months ahead (some of them with early-bird pricing!). It goes without saying that we’d love to see you there.

Frustrating Design Patterns in 2021, with Vitaly Friedman. 1 session, Sep 27, free. Several more workshops are available at early-bird pricing.

A Sneak Peek

We’ve already written about how we run online workshops at Smashing, and for this one we expect engaged discussions and participation. Here’s a quick preview of what an online workshop usually looks like:

Online meet-up: Sep 30, 9 AM PDT / 6 PM CET, with all video recordings and slides. Get a free ticket.

See You Then!

Thank you so much for your continued support, everyone! We hope to see you soon, online or offline! ❤️