Product Photography, Part 10: Lines as Design Elements

The design elements of a photo include lines, color, shapes, light, texture, and negative space. The use of such elements in product photography can make or break its appeal to shoppers.

This is the 10th installment in my series to help ecommerce merchants take better product images. “Part 1” addressed the importance of backdrops. “Part 2” explained tripods. “Part 3” examined artificial lighting. “Part 4” reviewed angles and viewpoints, and “Part 5” dealt with choosing a camera. “Part 6” assessed lenses and their importance. “Part 7” focused on magnification and close-ups, and “Part 8” and “Part 9” introduced the basics of composition.

In this installment, I’ll look at using lines to make your product photos more engaging.

Lines in Photography

Lines direct the viewer’s eyes to the focal point of an image. Failure to employ lines correctly can make your images confusing or complicated, lowering conversions. Let’s look at the six types of lines for your product photography.

Vertical lines draw viewers’ eyes from the top of your photo to the bottom, or vice versa. Vertical lines can evoke feelings in the viewer, depending on the context.

Woman at a sink washing a reusable water bottle. Source: TakeyaUSA.com.

Vertical lines can evoke feelings in the viewer. This image of a reusable water bottle conveys the brand’s sustainability efforts and a key selling feature: a removable lid. Source: TakeyaUSA.com.

For example, the image above of a woman washing a reusable water bottle sends a powerful message about the brand’s sustainability efforts and a key selling feature: a removable lid. Viewers’ eyes follow the top of the flowing water before settling on the lid and the overall scene. It’s a compelling example of how product photography can provide a visual journey and prompt shoppers to contemplate an item’s utility.

Horizontal lines. The human eye naturally follows horizontal lines in an image, making their use a powerful tool when crafting a story about a product or brand. Interrupting a horizontal line with the product is an effective way to draw attention, as seen in the example below. I prefer placing a product on top of a horizontal line to occupy most of the upper portion. It forces viewers to gaze upwards and contemplate your product longer.

Image of a blue water bottle on a tennis court out-of-bounds line. Source: TakeyaUSA.com.

Interrupting a horizontal line with a product, such as this water bottle, draws attention. Source: TakeyaUSA.com.

Diagonal lines can create useful tension in product photography. Tension can improve engagement. For example, the diagonals in the image below of the towel and flowers drive viewers’ eyes to the coffee maker and its “smooth pouring” experience.

Female pouring coffee into a glass. Source: TakeyaUSA.com.

Diagonal lines can create useful tension. The diagonals of the towel and flowers in this image drive viewers’ eyes to the coffee maker. Source: TakeyaUSA.com.

Diagonal lines can also create depth in an image, which is helpful in forming a story around a product. The image below is much more interesting with the diagonal shoreline in the background.

Girl standing on a rock holding a water bottle. Source: TakeyaUSA.com.

The diagonal shoreline in the background of this image adds interest and engagement. Source: TakeyaUSA.com.

Leading lines can be vertical, horizontal, and diagonal. They steer viewers to an image’s focal point. Leading lines make images less static and more three-dimensional. Use them in any number of ways. For example, a product placed partway through a line entices viewers to continue past the item, take in the entire image, and return.

Lines can also lead directly to your product and terminate, as in the image below. The woman’s arms lead to the water bottle.

Image of a lady sitting on a beach, holding a blue water bottle. Source: TakeyaUSA.com.

Lines can lead directly to a product and then terminate, such as the woman’s arms, which lead to the water bottle. Source: TakeyaUSA.com.

Avoid placing a product at the beginning of a leading line. Provide the viewer the experience of following lines to the item. Also, consider more than one leading line as illustrated, again, by arms and legs in the image above. Leading lines can come from the same or varying directions so long as they direct viewers’ eyes to the product.

Implied lines stem from the arrangement of elements. The photo below is a good example. The placement of the hat, pillow, jug, and glass implies a diagonal line running from the lower-left corner to the top right. The line draws a viewer’s eye to the product (the jug).

Image of a hat, bottle, and a glass arranged diagonally.

Implied lines stem from the arrangement of elements in a photo. This hat, pillow, jug, and glass imply a diagonal line from the lower-left corner to the top right. Source: TakeyaUSA.com.

Converging lines are two or more diagonal lines that run toward each other. They may not touch, but they are helpful in some settings. For maximum effectiveness, place your product at the point where the lines converge. This becomes the focal point of the image and can engage viewers. The water bottle below sits at the convergence of two diagonal countertop lines.

Water bottle at the convergence of two diagonal countertop lines. Source: TakeyaUSA.com.

Converging lines can become the focal point of the image and can engage viewers. This water bottle sits at the convergence of two diagonal countertop lines. Source: TakeyaUSA.com.

Respecting Users’ Motion Preferences

When working with motion on the web, it’s important to consider that not everyone experiences it in the same way. What might feel smooth and slick to some might be annoying or distracting to others — or worse, induce feelings of sickness, or even cause seizures. Websites with a lot of motion might also have a higher impact on the battery life of mobile devices, or cause more data to be used (autoplaying videos, for instance, will require more of a user’s data than a static image). These are just some of the reasons why motion-heavy sites might not be desirable for all.

Most new operating systems enable the user to set their motion preferences in their system-level settings. The prefers-reduced-motion media query (part of the Level 5 Media Queries specification) allows us to detect users’ system-level motion preferences, and apply CSS styles that respect that.

The two options for prefers-reduced-motion are reduce or no-preference. We can use it in the following way in our CSS to turn off an element’s animation if the user has explicitly set a preference for reduced motion:

.some-element {
  animation: bounce 1200ms;
}

@media (prefers-reduced-motion: reduce) {
  .some-element {
    animation: none;
  }
}

Conversely, we could set the animation only if the user has no motion preference. This has the advantage of reducing the amount of code we need to write, and means it’s less likely we’ll forget to cater for users’ motion preferences:

@media (prefers-reduced-motion: no-preference) {
  .some-element {
    animation: bounce 1200ms;
  }
}

An added advantage is that older browsers that don’t support prefers-reduced-motion will ignore the rule and only display our original, motion-free element.

Which Rule?

Unlike min-width and max-width media queries, where the more-or-less established consensus is mobile-first (therefore favoring min-width), there is no single “right” way to write your reduced-motion styles. I tend to favor the second example (applying animations only if prefers-reduced-motion: no-preference evaluates true), for the reasons listed above. Tatiana Mac wrote this excellent article which covers some of the approaches developers might consider taking, as well as plenty of other great points, including key questions to ask when designing with motion on the web.

As always, team communication and a consistent strategy are key to ensuring all bases are covered when it comes to web accessibility.

Practical Use: Applying prefers-reduced-motion To Scroll Behavior

prefers-reduced-motion has plenty of applications beyond applying (or not applying) keyframe animations or transitions. One example is smooth scrolling. If we set scroll-behavior: smooth on our html element, when a user clicks an in-page anchor link they will be smoothly scrolled to the appropriate position on the page (currently not supported in Safari):

html {
  scroll-behavior: smooth;
}

Unfortunately, in CSS we don’t have much control over that behavior right now. If we have a long page of content, the page scrolls very fast, which can be a pretty unpleasant experience for someone with motion sensitivity. By wrapping it in a media query, we can prevent that behavior from being applied in cases where the user has a reduced-motion preference:

@media (prefers-reduced-motion: no-preference) {
  html {
    scroll-behavior: smooth;
  }
}

Catering For Motion Preferences In JavaScript

Sometimes we need to apply motion in JavaScript rather than CSS. We can similarly detect a user’s motion preferences with JS, using matchMedia. Let’s see how we can conditionally implement smooth scroll behavior in our JS code:

/* Set the media query */
const prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)')

button.addEventListener('click', () => {
  /* If the media query matches, set scroll behavior variable to 'auto', otherwise set it to 'smooth' */
  const behavior = prefersReducedMotion.matches ? 'auto' : 'smooth'

  /* When the button is clicked, the user will be scrolled to the top */
  window.scrollTo({ top: 0, left: 0, behavior })
})

The same principle can be used to detect whether to implement motion-rich UIs with JS libraries — or even whether to load the libraries themselves.

In the following code snippet, the function returns early if the user prefers reduced motion, avoiding the unnecessary import of a large dependency — a performance win for the user. If they have no motion preference set, then we can dynamically import the Greensock animation library and initialize our animations.

const prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)')

const loadGSAPAndInitAnimations = () => {
  /* If user prefers reduced motion, do nothing */
  if (prefersReducedMotion.matches) return

  /* Otherwise, import the GSAP module and initialize animations */
  import('gsap').then((object) => {
    const gsap = object.default
    /* Initialize animations with GSAP here */
  })
}

loadGSAPAndInitAnimations()

reduced-motion Doesn’t Mean No Motion

When styling for reduced motion preferences, it’s important that we still provide the user with meaningful and accessible indicators of when an action has occurred. For instance, when switching off a distracting or motion-intensive hover state for users who prefer reduced motion, we must take care to provide a clear alternative style for when the user is hovering on the element.

The following demo shows an elaborate transition when the user hovers or focuses on a gallery item if they have no motion preference set. If they prefer reduced motion, the transition is more subtle, yet still clearly indicates the hover state:

See the Pen Gallery with prefers-reduced-motion by Michelle Barker.
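
In broad strokes, the pattern looks something like the following sketch (the class names are placeholders rather than the demo’s actual code): the hover state is always signalled with a fade, and the movement is only layered on for users with no stated preference.

.gallery-item figcaption {
  opacity: 0;
  /* a gentle fade is the reduced-motion default */
  transition: opacity 300ms;
}

.gallery-item:hover figcaption,
.gallery-item:focus figcaption {
  opacity: 1;
}

@media (prefers-reduced-motion: no-preference) {
  /* only add movement for users with no stated preference */
  .gallery-item figcaption {
    transform: translateY(1rem);
    transition: opacity 400ms, transform 400ms;
  }

  .gallery-item:hover figcaption,
  .gallery-item:focus figcaption {
    transform: translateY(0);
  }
}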

Reduced motion doesn’t necessarily mean removing all transforms from our webpage either. For instance, a button that has a small arrow icon that moves a few pixels on hover is unlikely to cause problems for someone who prefers a reduced-motion experience, and provides a more useful indicator of a change of state than color alone.

I sometimes see developers applying reduced motion styles in the following way, which eliminates all transitions and animations on all elements:

@media screen and (prefers-reduced-motion: reduce) {
  * {
    animation: none !important;
    transition: none !important;
    scroll-behavior: auto !important;
  }
}

This is arguably better than ignoring users’ motion preferences, but doesn’t allow us to easily tailor elements to provide more subtle transitions when necessary.

In the following code snippet, we have a button that grows in scale on hover. We’re transitioning the colors and the scale, but users with a preference for reduced motion will get no transition at all:

button {
  background-color: hotpink;
  transition: color 300ms, background-color 300ms,
    transform 500ms cubic-bezier(.44, .23, .47, 1.27);
}

button:hover,
button:focus {
  background-color: darkviolet;
  color: white;
  transform: scale(1.2);
}

@media screen and (prefers-reduced-motion: reduce) {
  * {
    animation: none !important;
    transition: none !important;
    scroll-behavior: auto !important;
  }

  button {
    /* Even though we would still like to transition the colors of our button, the following rule will have no effect */
    transition: color 200ms, background-color 200ms;
  }

  button:hover,
  button:focus {
    /* Preventing the button scaling on hover */
    transform: scale(1);
  }
}

Check out this demo to see the effect. This is perhaps not ideal, as the sudden color switch without a transition could feel more jarring than a transition of a couple of hundred milliseconds. This is one reason why, on the whole, I generally prefer to style for reduced motion on a case-by-case basis.

If you’re interested, this is the same demo refactored to allow for customizing the transition when necessary. It uses a custom property for the transition duration, which allows us to toggle the scale transition on and off without having to rewrite the whole declaration.
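
A rough sketch of that idea (not necessarily the demo’s exact code) could look like this: the color transitions keep their own durations, while a single custom property controls whether the transform animates.

button {
  --transform-duration: 500ms;
  background-color: hotpink;
  transition: color 300ms, background-color 300ms,
    transform var(--transform-duration) cubic-bezier(.44, .23, .47, 1.27);
}

button:hover,
button:focus {
  background-color: darkviolet;
  color: white;
  transform: scale(1.2);
}

@media screen and (prefers-reduced-motion: reduce) {
  button {
    /* the color transitions still run; only the transform is switched off */
    --transform-duration: 0ms;
  }

  button:hover,
  button:focus {
    transform: none;
  }
}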

When Removing Animation Is Better

Eric Bailey raises the point that “not every device that can access the web can also render animation, or render animation smoothly” in his article, “Revisiting prefers-reduced-motion, the reduced motion media query.” For devices with a low refresh rate, which can cause janky animations, it might in fact be preferable to remove the animation. The update media feature can be used to determine this:

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  * {
    animation-duration: 0.001ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.001ms !important;
  }
}

Be sure to read the full article for Eric’s recommendations, as he’s a first-rate person to follow in the field of accessibility.

The Sum Of All Parts

It’s important to keep in mind the overall page design when focusing so tightly on component-level CSS. What might seem a fairly innocuous animation at the component level could have a far greater impact when it’s repeated throughout the page, and is one of many moving parts.

In Tatiana’s article, she suggests organizing animations (with prefers-reduced-motion) in a single CSS file, which can be loaded only if (prefers-reduced-motion: no-preference) evaluates true. Seeing the sum total of all our animations could have the added benefit of helping us visualize the experience of visiting the site as a whole, and tailor our reduced-motion styles accordingly.
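
One way to sketch that approach (the file name is a placeholder) is a separate stylesheet whose media attribute means its rules only apply when the user has no reduced-motion preference:

<!-- animation.css holds every keyframe and transition for the site -->
<link rel="stylesheet" href="animation.css" media="(prefers-reduced-motion: no-preference)">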

Explicit Motion Toggle

While prefers-reduced-motion is useful, it does have the drawback of only catering to users who are aware of the feature in their system settings. Plenty of users lack knowledge of this setting, while others might be using a borrowed computer, without access to system-level settings. Still others might be happy with the motion for the vast majority of sites, but find sites with heavy use of motion hard to bear.

It can be annoying to have to adjust your system preferences just to visit one site. For these reasons, in some cases, it might be preferable to provide an explicit control on the site itself to toggle motion on and off. We can implement this with JS.

The following demo has several circles drifting around the background. The initial animation styles are determined by the user’s system preferences (with prefers-reduced-motion), however, the user has the ability to toggle motion on or off via a button. This adds a class to the body, which we can use to set styles depending on the selected preference. As a bonus, the choice of motion preference is also preserved in local storage — so it is “remembered” when the user next visits.

See the Pen Reduced-motion toggle by Michelle Barker.
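
Stripped down to its logic, the toggle might look roughly like this; the class name and storage key are placeholders rather than the demo’s exact code:

const toggle = document.querySelector('.motion-toggle')
const stored = localStorage.getItem('motion-off')
const systemPrefersReduced = window.matchMedia('(prefers-reduced-motion: reduce)').matches

// start from the saved choice if there is one, otherwise fall back to the system preference
let motionOff = stored !== null ? stored === 'true' : systemPrefersReduced

const applyPreference = () => {
  document.body.classList.toggle('motion-off', motionOff)
}
applyPreference()

toggle.addEventListener('click', () => {
  motionOff = !motionOff
  localStorage.setItem('motion-off', String(motionOff))
  applyPreference()
})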

Custom Properties

One feature in the demo is that the toggle sets a custom property, --playState, which we can use to play or pause animations. This could be especially handy if you need to pause or play a number of animations at once. First of all, we set the play state to paused:

.circle {
  animation-play-state: var(--playState, paused);
}

If the user has no preference for reduced motion set in their system settings, we can set the play state to running:

@media (prefers-reduced-motion: no-preference) {
  body {
    --playState: running;
  }
}

Note: Setting this on the body, as opposed to the individual element, means the custom property can be inherited.

When the user clicks the toggle, the custom property is updated on the body, which will toggle any instances where it is used:

// This will pause all animations that use the `--playState` custom property
document.body.style.setProperty('--playState', 'paused')

This might not be the ideal solution in all cases, but one advantage is that the animation simply pauses when the user clicks the toggle, rather than jumping back to its initial state, which could be quite jarring.

Special thanks goes to Scott O’Hara for his recommendations for improving the accessibility of the toggle. He made me aware that some screenreaders don’t announce the updated button text, which is changed when a user clicks the button, and suggested role="switch" on the button instead, with aria-checked toggled between true and false on click.
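
Applied to the toggle, that suggestion might look roughly like the following sketch (the markup and handler are illustrative, not Scott’s exact recommendation):

<button class="motion-toggle" role="switch" aria-checked="true">Animations</button>

<script>
  const motionSwitch = document.querySelector('.motion-toggle')

  motionSwitch.addEventListener('click', () => {
    /* flip the switch state instead of swapping the button's visible text */
    const isOn = motionSwitch.getAttribute('aria-checked') === 'true'
    motionSwitch.setAttribute('aria-checked', String(!isOn))
  })
</script>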

Video Component

In some instances, toggling motion at the component level might be a better option. Take a webpage with an auto-playing video background. We should ensure the video doesn’t autoplay for users with a preference for reduced motion, but we should still provide a way for them to play the video only if they choose. (Some might argue we should avoid auto-playing videos full stop, but we don’t always win that battle!) Likewise, if a video is set to autoplay for users without a stated preference, we should also provide a way for them to pause the video.

This demo shows how we can set the autoplay attribute when the user has no stated motion preference, implementing a custom play/pause button to allow them to also toggle playback, regardless of preference:

See the Pen Video with motion preference by Michelle Barker.
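
The gist of it, in a rough sketch (the selectors are placeholders, and the real demo may differ), is to only trigger autoplay when no reduced-motion preference is set, while the play/pause control works for everyone:

const video = document.querySelector('video')
const playPauseButton = document.querySelector('.play-pause')
const prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)')

// only start playback automatically when the user has no reduced-motion preference
if (!prefersReducedMotion.matches) {
  video.setAttribute('autoplay', '')
  video.play()
}

// the play/pause control is available to everyone, whatever their preference
playPauseButton.addEventListener('click', () => {
  video.paused ? video.play() : video.pause()
})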

(I subsequently came upon this post by Scott O‘Hara, detailing this exact use case.)

Using The <picture> Element

Chris Coyier wrote an interesting article combining a couple of techniques to load different media sources depending on the user’s motion preferences. This is pretty cool, as it means that for users who prefer reduced motion, the much larger GIF file won’t even be downloaded. The downside, as far as I can see, is that once the file is downloaded, there is no way for the user to switch back to the motion-free alternative.
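
The core of the technique is a picture element whose source is chosen by a media query, along these lines (the file names are placeholders):

<picture>
  <!-- the static image is chosen when the user prefers reduced motion -->
  <source srcset="still-frame.png" media="(prefers-reduced-motion: reduce)">
  <img src="animation.gif" alt="Description of the animation">
</picture>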

I created a modified version of the demo which adds this option. (Switch on reduced-motion in your system preferences to see it in action.) Unfortunately, when toggling between the animated and motion-free options in Chrome, it appears the GIF file is downloaded afresh each time, which isn’t the case in other browsers:

See the Pen Prefers Reduction Motion Technique PLUS! [forked] by Michelle Barker.

Still, this technique seems like a more respectful way of displaying GIFs, which can be a source of frustration to users.

Browser Support And Final Thoughts

prefers-reduced-motion has excellent support in all modern browsers going back a couple of years. As we’ve seen, by taking a reduced-motion-first approach, non-supporting browsers will simply get a reduced-motion fallback. There’s no reason not to use it today to make your sites more accessible.

Custom toggles most definitely have a place, and can vastly improve the experience for users who aren’t aware of this setting, or what it does. The downside for the user is inconsistency — if every developer is forced to come up with their own solution, the user needs to look for a motion toggle in a different place on every website.

It feels like the missing layer here is browsers. I’d love to see browsers implement reduced-motion toggles, somewhere easily accessible to the user, so that people know where to find it regardless of the site they’re browsing. It might encourage developers to spend more time ensuring motion accessibility, too.


Digital Creators Need Email Marketing

A list of engaged email subscribers is among the best promotional tools for creators who sell digital products. For many of those creators, it could also be a path toward earning a living.

The creator economy has a problem. There are certainly examples of creators who have risen to celebrity status and wealth — Khaby Lame (life hacks, TikTok), Charli D’Amelio (dancing, TikTok), and PewDiePie (comedy, YouTube) are all in this category. But the vast majority of vloggers, bloggers, podcasters, and the like don’t earn enough money to quit their day jobs and create full-time.

While creators could try to up their TikTok, YouTube, or Instagram output to earn more, a practical solution may be digital products and email marketing.

Making Money

Creators who want to earn a living making music, coaching, teaching, or blogging, as examples, have a few options for generating revenue. These include advertising, platform monetization, and selling a product.

Advertising might include promoting the goods and services of third-party companies. Pat Flynn, the creator of Smart Passive Income, has earned more than $3 million in affiliate commissions over several years. Flynn devotes time during podcast episodes or YouTube videos and space on his website to promote affiliate brands. When that promotion leads to a sale, he earns a percentage as commission.

Advertising could also include sponsors. Matt Brechwald, the host of the “Off-farm Income” podcast and a podcasting consultant, sells sponsorships to well-known brands in the farm industry, including LaCrosse boots and Powder River cattle equipment.

Creators could also generate income from social media platforms. YouTube shares advertising revenue with creators whose channels have met minimum requirements. This amounts to a few pennies per view. But it can add up.

Selling products and services is another option. Jon McCray, the host of the “Whaddo You Meme??” YouTube channel, sells physical merchandise such as t-shirts.

Or the product could be digital.

Digital Products

Digital products have many advantages over physical goods and offer a compelling monetization alternative. Such products can take many forms, including:

  • Downloadable music,
  • Audio files,
  • Art and graphics,
  • Educational content,
  • Recipes,
  • Software tools and calculators,
  • Content licenses,
  • Paid newsletters,
  • Ebooks,
  • Memberships.

In many cases, digital products are relatively inexpensive to produce, requiring expertise and effort rather than dollars and cents.

For example, an accountant could create a home budgeting course with little more than the information in her head and a webcam.

Other advantages include:

  • Passive income. Once created, a digital product can be sold repeatedly. Creators work once and get paid over and over.
  • Scalability. Digital products have endless inventory, making them relatively easy to scale.
  • High margins. The low cost of developing a digital product could equate to high profit margins.
  • Low overhead. Digital products require no warehouse space and minimal staff or overhead.

Competition

In short, digital products are appealing. A creator needs only to produce an ebook, song, illustration, or similar and start making money.

Except there is a lot of competition.

An accountant who creates a home budgeting course has to fight for attention against the likes of Dave Ramsey, Duke University, Khan Academy, and 147 instructors on Udemy, all of which offer a budgeting course of some kind.

Screenshot of Google search results for “home budgeting course”

This Google search for “home budgeting course” illustrates the competition for online education. Creators, it seems, face competition no matter the niche.

Email Marketing

Thus making a digital product is not enough. The creator also needs to market it.

This could be done in many ways, but email marketing is among the most compelling.

Creators have audiences. A YouTube channel that describes succulent gardening has an audience of viewers who consume the channel’s content in search of tips and tidbits about cultivating plants.

That audience, however, belongs to YouTube. But a creator who attracts subscribers to a gardening newsletter is on his way to developing a first-party audience and potential customers for his digital products. Each newsletter could include content to nurture relationships and a call-to-action for selling a course, ebook, or similar.

In this way, the combination of email marketing and digital products can help creators build their own audience and generate passive income.

Building The SSG I’ve Always Wanted: An 11ty, Vite And JAM Sandwich

I don’t know about you, but I’ve been overwhelmed by all the web development tools we have these days. Whether you like Markdown, plain HTML, React, Vue, Svelte, Pug templates, Handlebars, Vibranium — you can probably mix it up with some CMS data and get a nice static site cocktail.

I’m not going to tell you which UI development tools to reach for because they’re all great — depending on the needs of your project. This post is about finding the perfect static site generator for any occasion; something that lets us use JS-less templates like markdown to start, and bring in “islands” of component-driven interactivity as needed.

I’m distilling a year’s worth of learnings into a single post here. Not only are we gonna talk code (aka duct-taping 11ty and Vite together), but we’re also going to explore why this approach is so universal to Jamstackian problems. We’ll touch on:

  • Two approaches to static site generation, and why we should bridge the gap;
  • Where templating languages like Pug and Nunjucks still prove useful;
  • When component frameworks like React or Svelte should come into play;
  • How the new, hot-reloading world of Vite helps us bring JS interactivity to our HTML with almost zero configs;
  • How this complements 11ty’s data cascade, bringing CMS data to any component framework or HTML template you could want.

So without further ado, here’s my tale of terrible build scripts, bundler breakthroughs, and spaghetti-code-duct-tape that (eventually) gave me the SSG I always wanted: an 11ty, Vite and Jam sandwich called Slinkity!

A Great Divide In Static Site Generation

Before diving in, I want to discuss what I’ll call two “camps” in static site generation.

In the first camp, we have the “simple” static site generator. These tools don’t bring JavaScript bundles, single-page apps, and any other buzzwords we’ve come to expect. They just nail the Jamstack fundamentals: pull in data from whichever JSON blob or CMS you prefer, and slide that data into plain HTML templates + CSS. Tools like Jekyll, Hugo, and 11ty dominate this camp, letting you turn a directory of markdown and liquid files into a fully-functional website. Key benefits:

  • Shallow learning curve
    If you know HTML, you’re good to go!
  • Fast build times
    We’re not processing anything complex, so each route builds in a snap.
  • Instant time to interactive
    There’s no (or very little) JavaScript to parse on the client.

Now in the second camp, we have the “dynamic” static site generator. These introduce component frameworks like React, Vue, and Svelte to bring interactivity to your Jamstack. These fulfill the same core promise of combining CMS data with your site’s routes at build time. Key benefits:

  • Built for interactivity
    Need an animated image carousel? Multi-step form? Just add a componentized nugget of HTML, CSS, and JS.
  • State management
    Something like React Context or Svelte stores allows seamless data sharing between routes. For instance, the cart on your e-commerce site.

There are distinct pros to either approach. But what if you choose an SSG from the first camp like Jekyll, only to realize six months into your project that you need some component-y interactivity? Or you choose something like NextJS for those powerful components, only to struggle with the learning curve of React, or needless KB of JavaScript on a static blog post?

Few projects squarely fit into one camp or the other in my opinion. They exist on a spectrum, constantly favoring new feature sets as a project’s needs evolve. So how do we find a solution that lets us start with the simple tools of the first camp, and gradually add features from the second when we need them?

Well, let’s walk through my learning journey for a bit.

Note: If you’re already sold on static templating with 11ty to build your static sites, feel free to hop down to the juicy code walkthrough. 😉

Going From Components To Templates And Web APIs

Back in January 2020, I set out to do what just about every web developer does each year: rebuild my personal site. But this time was gonna be different. I challenged myself to build a site with my hands tied behind my back, no frameworks or build pipelines allowed!

This was no simple task as a React devotee. But with my head held high, I set out to build my own build pipeline from absolute ground zero. There’s a lot of poorly-written code I could share from v1 of my personal site… but I’ll let you click this README if you’re so brave. 😉 Instead, I want to focus on the higher-level takeaways I learned starving myself of my JS guilty pleasures.

Templates Go A Lot Further Than You Might Think

I came at this project a recovering JavaScript junky. There are a few static-site-related needs I loved using component-based frameworks to fill:

  1. We want to break down my site into reusable UI components that can accept JS objects as parameters (aka “props”).
  2. We need to fetch some information at build time to slap into a production site.
  3. We need to generate a bunch of URL routes from either a directory of files or a fat JSON object of content.

List taken from this post on my personal blog.

But you may have noticed… none of these really need clientside JavaScript. Component frameworks like React are mainly built to handle state management concerns, like the Facebook web app inspiring React in the first place. If you’re just breaking down your site into bite-sized components or design system elements, templates like Pug work pretty well too!

Take this navigation bar for instance. In Pug, we can define a “mixin” that receives data as props:

// nav-mixins.pug
mixin NavBar(links)
  // pug's version of a for loop
  each link in links
    a(href=link.href) #{link.text}

Then, we can apply that mixin anywhere on our site.

// index.pug
// kinda like an ESM "import"
include nav-mixins.pug
html
  body
    +NavBar(navLinksPassedByJS)
    main
      h1 Welcome to my pug playground 🐶

If we “render” this file with some data, we’ll get a beautiful index.html to serve up to our users.

const html = pug.render('/index.pug', {
  navLinksPassedByJS: [
    { href: '/', text: 'Home' },
    { href: '/adopt', text: 'Adopt a Pug' }
  ]
})
// use the NodeJS filesystem helpers to write a file to our build
await writeFile('build/index.html', html)

Sure, this doesn’t give niceties like scoped CSS for your mixins, or stateful JavaScript where you want it. But it has some very powerful benefits over something like React:

  1. We don’t need fancy bundlers we don’t understand.
    We just wrote that pug.render call by hand, and we already have the first route of a site ready-to-deploy.
  2. We don’t ship any JavaScript to the end-user.
    Using React often means sending a big ole runtime for people’s browsers to run. By calling a function like pug.render at build time, we keep all the JS on our side while sending a clean .html file at the end.

This is why I think templates are a great “base” for static sites. Still, being able to reach for component frameworks where we really benefit from them would be nice. More on that later. 🙃

Recommended Reading: How To Create Better Angular Templates With Pug by Zara Cooper

You Don’t Need A Framework To Build Single Page Apps

While I was at it, I also wanted some sexy page transitions on my site. But how do we pull off something like this without a framework?

Crossfade with vertical wipe transition. (Large preview)

Well, we can’t do this if every page is its own .html file. The whole browser refreshes when we jump from one HTML file to the other, so we can’t have that nice cross-fade effect (since we’d briefly show both pages on top of each other).

We need a way to “fetch” the HTML and CSS for wherever we’re navigating to, and animate it into view using JavaScript. This sounds like a job for single-page apps!
I used a simple browser API medley for this (there’s a rough sketch after the list):

  1. Intercept all your link clicks using an event listener.
  2. fetch API: Fetch all the resources for whatever page you want to visit, and grab the bit I want to animate into view: the content outside the navbar (which I want to remain stationary during the animation).
  3. web animations API: Animate the new content into view as a keyframe.
  4. history API: Change the route displaying in your browser’s URL bar using window.history.pushState({}, 'new-route'). Otherwise, it looks like you never left the previous page!
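
Putting those four steps together, a minimal sketch might look something like this, assuming the swappable content lives in a main element (the real implementation has more error handling):

document.addEventListener('click', async (event) => {
  const link = event.target.closest('a')
  if (!link || link.origin !== location.origin) return
  event.preventDefault()

  // fetch API: grab the destination page and pluck out the bit we want to animate in
  const html = await fetch(link.href).then((response) => response.text())
  const newMain = new DOMParser().parseFromString(html, 'text/html').querySelector('main')

  // web animations API: swap the content, then fade the new content into view
  document.querySelector('main').replaceWith(newMain)
  newMain.animate([{ opacity: 0 }, { opacity: 1 }], { duration: 300, easing: 'ease' })

  // history API: update the URL bar so it doesn't look like we never left
  window.history.pushState({}, '', link.href)
})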

For clarity, here’s a visual illustration of that single page app concept using a simple find-and-replace (source article):

Step-by-step clientside routing process: 1. Medium rare hamburger is returned, 2. We request a well done burger using the fetch API, 3. We massage the response, 4. We pluck out the ‘patty’ element and apply it to our current page. (Large preview)

Note: You can also visit the source code from my personal site.

Sure, some pairing of React et al and your animation library of choice can do this. But for a use case as simple as a fade transition… web APIs are pretty dang powerful on their own. And if you want more robust page transitions on static templates like Pug or plain HTML, libraries like Swup will serve you well.

What 11ty Brought To The Table

I was feeling pretty good about my little SSG at this point. Sure it couldn’t fetch any CMS data at build-time, and didn’t support different layouts by page or by directory, and didn’t optimize my images, and didn’t have incremental builds.

Okay, I might need some help.

Given all my learnings from v1, I thought I earned my right to drop the “no third-party build pipelines” rule and reach for existing tools. Turns out, 11ty has a treasure trove of features I need!

If you’ve tried out bare-bones SSGs like Jekyll or Hugo, you should have a pretty good idea of how 11ty works. Only difference? 11ty uses JavaScript through-and-through.

11ty supports basically every template library out there, so it was happy to render all my Pug pages to .html routes. Its layout chaining option helped with my faux-single-page-app setup too. I just needed a single script for all my routes, and a “global” layout to import that script:

// _includes/base-layout.html
<html>
<body>
  <!--load every page's content between some body tags-->
  {{ content }}
  <!--and apply the script tag just below this-->
  <script src="main.js"></script>
</body>
</html>

// random-blog-post.pug
---
layout: base-layout
---
article
  h2 Welcome to my blog
  p Have you heard the story of Darth Plagueis the Wise?

As long as that main.js does all that link intercepting we explored, we have page transitions!

Oh, And The Data Cascade

So 11ty helped clean up all my spaghetti code from v1. But it brought another important piece: a clean API to load data into my layouts. This is the bread and butter of the Jamstack approach. Instead of fetching data in the browser with JavaScript + DOM manipulation, you can:

  1. Fetch data at build-time using Node.
    This could be a call to some external API, a local JSON or YAML import, or even the content of other routes on your site (imagine updating a table-of-contents whenever new routes are added 🙃).
  2. Slot that data into your routes. Recall that .render function we wrote earlier:
const html = pug.render('/index.pug', {
  navLinksPassedByJS: [
    { href: '/', text: 'Home' },
    { href: '/adopt', text: 'Adopt a Pug' }
  ]
})

…but instead of calling pug.render with our data every time, we let 11ty do this behind-the-scenes.
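
The same cascade covers remote data too: a JavaScript file in the _data directory can export an async function, and whatever it resolves to becomes available to every template under the file’s name. A rough sketch, with a made-up endpoint:

// _data/projects.js (hypothetical build-time fetch)
const fetch = require('node-fetch')

module.exports = async function () {
  const response = await fetch('https://api.example.com/projects')
  // every template can now loop over `projects`
  return response.json()
}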

Sure, I didn’t have a lot of data for my personal site. But it felt great to whip up a .yaml file for all my personal projects:

# _data/works.yaml
- title: Bits of Good Homepage
  hash: bog-homepage
  links:
    - href: https://bitsofgood.org
      text: Explore the live site
    - href: https://github.com/GTBitsOfGood/bog-web
      text: Scour the Svelt-ified codebase
  timeframe: May 2019 - present
  tags:
    - JAMstack
    - SvelteJS
- title: Dolphin Audio Visualizer
...

And access that data across any template:

// home.pug
.project-carousel
  each work in works
    h3 #{work.title}
    p #{work.timeframe}
    each tag in work.tags
      ...

Coming from the world of “clientside rendering” with create-react-app, this was a pretty big revelation. No more sending API keys or big JSON blobs to the browser. 😁

I also added some goodies for JavaScript fetching and animation improvements over version 1 of my site. If you’re curious, here’s where my README stood at this point.

I Was Happy At This Point But Something Was Missing

I went surprisingly far by abandoning JS-based components and embracing templates (with animated page transitions to boot). But I know this won’t satisfy my needs forever. Remember that great divide I kicked us off with? Well, there’s clearly still that ravine between my build setup (firmly in camp #1) and the haven of JS-ified interactivity (the Next, SvelteKit, and more of camp #2). Say I want to add:

  • a pop-up modal with an open/close toggle,
  • a component-based design system like Material UI, complete with scoped styling,
  • a complex multi-step form, maybe driven by a state machine.

If you’re a plain-JS-purist, you probably have framework-less answers to all those use cases. 😉 But there’s a reason jQuery isn’t the norm anymore! There’s something appealing about creating discrete, easy-to-read components of HTML, scoped styles, and pieces of JavaScript “state” variables. React, Vue, Svelte, etc. offer so many niceties for debugging and testing that straight DOM manipulation can’t quite match.

So here’s my million dollar question:

Can we use straight HTML templates to start, and gradually add React/Vue/Svelte components where we want them?

The answer is yes. Let’s try it.

11ty + Vite: A Match Made In Heaven ❤️

Here’s the dream that I’m imagining here. Wherever I want to insert something interactive, I want to leave a little flag in my template to “put X React component here.” This could be the shortcode syntax that 11ty supports:

# Super interesting programming tutorial

Writing paragraphs has been fun, but that's no way to learn. Time for an interactive code example!

{% react './components/FancyLiveDemo.jsx' %}

But remember, the one piece 11ty (purposely) avoids: a way to bundle all your JavaScript. Coming from the OG guild of bundling, your brain probably jumps to building Webpack, Rollup, or Babel processes here. Build a big ole entry point file, and output some beautiful optimized code, right?

Well yes, but this can get pretty involved. If we’re using React components, for instance, we’ll probably need some loaders for JSX, a fancy Babel process to transform everything, an interpreter for SASS and CSS module imports, something to help with live reloading, and so on.

If only there were a tool that could just see our .jsx files and know exactly what to do with them.

Enter: Vite

Vite’s been the talk of the town as of late. It’s meant to be the all-in-one tool for building just about anything in JavaScript. Here’s an example for you to try at home. Let’s make an empty directory somewhere on our machine and install some dependencies:

npm init -y # Make a new package.json with defaults set
npm i vite react react-dom # Grab Vite + some dependencies to use React

Now, we can make an index.html file to serve as our app’s “entry point.” We’ll keep it pretty simple:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
</head>
<body>
  <h1>Hello Vite! (wait is it pronounced "veet" or "vight"...)</h1>
  <div id="root"></div>
</body>
</html>

The only interesting bit is that div id="root" in the middle. This will be the root of our React component in a moment!

If you want, you can fire up the Vite server to see our plain HTML file in your browser. Just run vite (or npx vite if the command didn’t get configured in your terminal), and you’ll see this helpful output:

vite vX.X.X dev server running at:

> Local:   http://localhost:3000/
> Network: use `--host` to expose

ready in Xms.

Much like Browsersync or other popular dev servers, the name of each .html file corresponds to a route on our server. So if we renamed index.html to about.html, we would visit http://localhost:3000/about/ (yes, you’ll need a trailing slash!)

Now let’s do something interesting. Alongside that index.html file, add a basic React component of some sort. We’ll use React’s useState here to demonstrate interactivity:

// TimesWeMispronouncedVite.jsx
import React from 'react'

export default function TimesWeMispronouncedVite() {
  const [count, setCount] = React.useState(0)
  return (
    <div>
      <p>I've said Vite wrong {count} times today</p>
      <button onClick={() => setCount(count + 1)}>Add one</button>
    </div>
  )
}

Now, let’s load that component onto our page. This is all we have to add to our index.html:

<!DOCTYPE html>
...
<body>
  <h1>Hello Vite! (wait is it pronounced "veet" or "vight"...)</h1>
  <div id="root"></div>
  <!--Don't forget type="module"! This lets us use ES import syntax in the browser-->
  <script type="module">
    // path to our component. Note we still use .jsx here!
    import Component from './TimesWeMispronouncedVite.jsx';
    import React from 'react';
    import ReactDOM from 'react-dom';

    const componentRoot = document.getElementById('root');
    ReactDOM.render(React.createElement(Component), componentRoot);
  </script>
</body>
</html>

Yep, that’s it. No need to transform our .jsx file to a browser-ready .js file ourselves! Wherever Vite sees a .jsx import, it’ll auto-convert that file to something browsers can understand. There isn’t even a dist or build folder when working in development; Vite processes everything on the fly — complete with hot module reloading every time we save our changes. 🤯

Okay, so we have an incredibly capable build tool. How can we bring this to our 11ty templates?

Running Vite Alongside 11ty

Before we jump into the good stuff, let’s discuss running 11ty and Vite side-by-side. Go ahead and install 11ty as a dev dependency into the same project directory from the last section:

npm i -D @11ty/eleventy # yes, it really is 11ty twice

Now let’s do a little pre-flight check to see if 11ty’s working. To avoid any confusion, I’d suggest you:

  1. Delete that index.html file from earlier;
  2. Move that TimesWeMispronouncedVite.jsx inside a new directory. Say, components/;
  3. Create a src folder for our website to live in;
  4. Add a template to that src directory for 11ty to process.

For example, a blog-post.md file with the following contents:

# Hello world! It’s markdown here

Your project structure should look something like this:

src/
  blog-post.md
components/
  TimesWeMispronouncedVite.jsx

Now, run 11ty from your terminal like so:

npx eleventy --input=src

If all goes well, you should see a build output like this:

_site/
  blog-post/
    index.html

Where _site is our default output directory, and blog-post/index.html is our markdown file beautifully converted for browsing.

Normally, we’d run npx eleventy --serve to spin up a dev server and visit that /blog-post page. But we’re using Vite for our dev server now! The goal here is to:

  1. Have eleventy build our markdown, Pug, nunjucks, and more to the _site directory.
  2. Point Vite at that same _site directory so it can process the React components, fancy style imports, and other things that 11ty didn’t pick up.

So a two-step build process, with 11ty handing off to Vite. Here’s the CLI command you’ll need to start 11ty and Vite in “watch” mode simultaneously:

(npx eleventy --input=src --watch) & npx vite _site

You can also run these commands in two separate terminals for easier debugging. 😄

With any luck, you should be able to visit http://localhost:3000/blog-post/ (again, don’t forget the trailing slash!) to see that processed Markdown file.

Partial Hydration With Shortcodes

Let’s do a brief rundown on shortcodes. Time to revisit that syntax from earlier:

{% react '/components/TimesWeMispronouncedVite.jsx' %}

For those unfamiliar with shortcodes: they’re about the same as a function call, where the function returns a string of HTML to slide into your page. The “anatomy” of our shortcode is:

  • {% … %}
    Wrapper denoting the start and end of the shortcode.
  • react
    The name of our shortcode function we’ll configure in a moment.
  • '/components/TimesWeMispronouncedVite.jsx'
    The first (and only) argument to our shortcode function. You can have as many arguments as you’d like.

Let’s wire up our first shortcode! Add a .eleventy.js file to the base of your project, and add this config entry for our react shortcode:

// .eleventy.js, at the base of the project
module.exports = function(eleventyConfig) {
  eleventyConfig.addShortcode('react', function(componentPath) {
    // return any valid HTML to insert
    return `<div id="root">This is where we'll import ${componentPath}</div>`
  })

  return {
    dir: {
      // so we don't have to write `--input=src` in our terminal every time!
      input: 'src',
    }
  }
}

Now, let’s spice up our blog-post.md with our new shortcode. Paste this content into our markdown file:

# Super interesting programming tutorial

Writing paragraphs has been fun, but that's no way to learn. Time for an interactive code example!

{% react '/components/TimesWeMispronouncedVite.jsx' %}

And if you run a quick npx eleventy, you should see this output in your _site directory under /blog-post/index.html:

<h1>Super interesting programming tutorial</h1>
<p>Writing paragraphs has been fun, but that's no way to learn. Time for an interactive code example!</p>
<div id="root">This is where we'll import /components/TimesWeMispronouncedVite.jsx</div>

Writing Our Component Shortcode

Now let’s do something useful with that shortcode. Remember that script tag we wrote while trying out Vite? Well, we can do the same thing in our shortcode! This time we’ll use the componentPath argument to generate the import, but keep the rest pretty much the same:

// .eleventy.js
module.exports = function(eleventyConfig) {
  let idCounter = 0;
  // copy all our /components to the output directory
  // so Vite can find them. Very important step!
  eleventyConfig.addPassthroughCopy('components')

  eleventyConfig.addShortcode('react', function (componentPath) {
    // we'll use idCounter to generate unique IDs for each "root" div
    // this lets us use multiple components / shortcodes on the same page 👍
    idCounter += 1;
    const componentRootId = `component-root-${idCounter}`
    return `
      <div id="${componentRootId}"></div>
      <script type="module">
        // use JSON.stringify to
        // 1) wrap our componentPath in quotes
        // 2) strip any invalid characters. Probably a non-issue, but good to be cautious!
        import Component from ${JSON.stringify(componentPath)};
        import React from 'react';
        import ReactDOM from 'react-dom';

        const componentRoot = document.getElementById('${componentRootId}');
        ReactDOM.render(React.createElement(Component), componentRoot);
      </script>
    `
  })

  eleventyConfig.on('beforeBuild', function () {
    // reset the counter for each new build
    // otherwise, it'll count up higher and higher on every live reload
    idCounter = 0;
  })

  return {
    dir: {
      input: 'src',
    }
  }
}

Now, a call to our shortcode (ex. {% react '/components/TimesWeMispronouncedVite.jsx' %}) should output something like this:

<div id="component-root-1"></div>
<script type="module">
  import Component from "/components/TimesWeMispronouncedVite.jsx";
  import React from 'react';
  import ReactDOM from 'react-dom';

  const componentRoot = document.getElementById('component-root-1');
  ReactDOM.render(React.createElement(Component), componentRoot);
</script>

Visiting our dev server using (npx eleventy --watch) & vite _site, we should find a beautifully clickable counter element. ✨

Buzzword Alert — Partial Hydration And Islands Architecture

We just demonstrated “islands architecture” in its simplest form. This is the idea that our interactive component trees don’t have to consume the entire website. Instead, we can spin up mini-trees, or “islands,” throughout our app depending on where we actually need that interactivity. Have a basic landing page of links without any state to manage? Great! No need for interactive components. But do you have a multi-step form that could benefit from X React library? No problem. Use techniques like that react shortcode to spin up a Form.jsx island.

This goes hand-in-hand with the idea of “partial hydration.” You’ve likely heard the term “hydration” if you work with component-y SSGs like NextJS or Gatsby. In short, it’s a way to:

  1. Render your components to static HTML first.
    This gives the user something to view when they initially visit your website.
  2. “Hydrate” this HTML with interactivity.
    This is where we hook up our state hooks and renderers to, well, make button clicks actually trigger something.

This 1-2 punch makes JS-driven frameworks viable for static sites. As long as the user has something to view before your JavaScript is done parsing, you’ll get a decent score on those lighthouse metrics.

Well, until you don’t. 😢 It can be expensive to “hydrate” an entire website since you’ll need a JavaScript bundle ready to process every last DOM element. But our scrappy shortcode technique doesn’t cover the entire page! Instead, we “partially” hydrate the content that’s there, inserting components only where necessary.

Don’t Worry, There’s A Plugin For All This: Slinkity

Let’s recap what we discovered here:

  1. Vite is an incredibly capable bundler that can process most file types (jsx, vue, and svelte to name a few) without extra config.
  2. Shortcodes are an easy way to insert chunks of HTML into our templates, component-style.
  3. We can use shortcodes to render dynamic, interactive JS bundles wherever we want using partial hydration.

So what about optimized production builds? Properly loading scoped styles? Heck, using .jsx to create entire pages? Well, I’ve bundled all of this (and a whole lot more!) into a project called Slinkity. I’m excited to see the warm community reception to the project, and I’d love for you, dear reader, to give it a spin yourself!

🚀 Try the quick start guide

Astro’s Pretty Great Too

Readers with their eyes on cutting-edge tech probably thought about Astro at least once by now. 😉 And I can’t blame you! It’s built with a pretty similar goal in mind: start with plain HTML, and insert stateful components wherever you need them. Heck, they’ll even let you start writing React components inside Vue or Svelte components inside HTML template files! It’s like MDX Xtreme edition. 🤯

There’s one pretty major cost to their approach though: you need to rewrite your app from scratch. This means a new template format based on JSX (which you might not be comfortable with), a whole new data pipeline that’s missing a couple of niceties right now, and general bugginess as they work out the kinks.

But spinning up an 11ty + Vite cocktail with a tool like Slinkity? Well, if you already have an 11ty site, Vite should bolt into place without any rewrites, and shortcodes should cover many of the same use cases as .astro files. I’ll admit it’s far from perfect right now. But hey, it’s been useful so far, and I think it’s a pretty strong alternative if you want to avoid site-wide rewrites!

Wrapping Up

This Slinkity experiment has served my needs pretty well so far (and a few of y’all’s too!). Feel free to use whatever stack works for your JAM. I’m just excited to share the results of my year of build tool debauchery, and I’m so pumped to see how we can bridge the great Jamstack divide.

Further Reading

Want to dive deeper into partial hydration, or ESM, or SSGs in general? Check these out:

  • Islands Architecture
    This blog post from Jason Miller really kicked off a discussion of “islands” and “partial hydration” in web development. It’s chock-full of useful diagrams and the philosophy behind the idea.
  • Simplify your static with a custom-made static site generator
    Another SmashingMag article that walks you through crafting Node-based website builders from scratch. It was a huge inspiration to me!
  • How ES Modules have redefined web development
    A personal post on how ES Modules have changed the web development game. This dives a little further into the “then and now” of import syntax on the web.
  • An introduction to web components
    An excellent walkthrough on what web components are, how the shadow DOM works, and where web components prove useful. Used this guide to apply custom components to my own framework!

My 3 Rules for Social Media Success

Roughly 4.2 billion people worldwide use social media. In 2021, all ecommerce businesses should attempt to connect with prospects on those platforms.

This is my first post to help merchants utilize social media for brand awareness and sales. I’ll start by sharing my experiences as an artist. For years I’ve used social media as the primary sales tool for my paintings.

3 Social Media Rules for Ecommerce

Choose the right platforms. Rule number one is to focus on the networks where your audience gathers. Don’t try to force your business onto the most popular ones. Instead, choose the platforms that offer the best chances of success.

I focus heavily on TikTok and Instagram because the visual medium lends itself well to my creativity, such as shooting videos, while also selling paintings. When one of my videos goes viral — 500,000 to 25 million views — I sell a lot of paintings. That wouldn’t be possible on Twitter or Facebook.

But that doesn’t mean posting branded content on large Facebook groups won’t net a similar result with your business. It just means you have to look at what you’re selling and find the platform that will produce the best results for your time.

Carolyn Mara's TikTok page

TikTok’s visual medium works well for artists, such as the author.

Connect, connect, connect. Next, take the time to respond to user comments, questions, and reviews. You want an engaged and happy community if you’re going to sell your products to the participants. You never know what opportunity might blossom from thanking someone for her comment or agreeing to repost your work.

Here’s an example.

A few months ago, a fan asked if he could repost my work on his Instagram profile. I agreed and then forgot about it until he posted one of my performance art mop painting videos a few weeks later. I didn’t realize this person’s account was very popular in Spain. That repost of my work was viewed millions of times, which led to an increase in orders and, more importantly, exposure to a new market. All I did was say, “Thank you for asking. Of course you can repost my work!”

The principle is the same for any online business. Find a way to connect with your audience because you never know which connection will lead to the next breakthrough.

Always remember that every single repost or share on someone’s social media profile can create extra exposure. Always interact, no matter how big or small their accounts are.

Use videos and photos. Engage your followers with videos and photos, but do it in a way that’s more like a conversation than traditional advertising. People don’t like being sold to when browsing Instagram or Reddit. The most impactful posts are those that your audience shares with friends, family, and colleagues. The best way to do that is with content that shows your product, how it’s used, and how it works. And, if applicable, show how it looks in a realistic setting.

A life-like setting has been a game-changer for me. My online art business first gained traction when I posted images of my paintings in clients’ homes. It helped my audience see the finished product in the proper context and visualize how it fits into their lives.

Building An API With Gatsby Functions

You’ve probably heard about Serverless Functions, but if you haven’t, they provide functionality typically associated with server-side technologies, implemented alongside front-end code without getting caught up in server-side infrastructure.

With server-side and client-side code coexisting in the same code base, front-end developers like myself can extend the reach of what’s possible using the tools they already know and love.

Limitations

Coexistence is great, but there are at least two scenarios I’ve encountered where using Serverless Functions in this way wasn’t quite the right fit for the task at hand. They are as follows:

  1. The front end couldn’t support Serverless Functions.
  2. The same functionality was required by more than one front end.

To provide some context, here’s one example that covers both points named above. I maintain an open-source project called MDX Embed; you’ll see from the docs site that it’s not a Gatsby website. It’s built using Storybook, and Storybook on its own provides no Serverless Function capabilities. I wanted to implement “Pay what you want” contributions to help fund the project, and I wanted to use Stripe to enable secure payments. Without a secure “backend,” this would not have been possible.

By abstracting this functionality away into an API built with Gatsby Functions, I was able to achieve what I wanted with MDX Embed and also re-use the same functionality to enable “Pay what you want” contributions on my blog.

You can read more about how I did that here: Monetize Open-Source Software With Gatsby Functions And Stripe.

It’s at this point that Gatsby Functions can act as a kind of Backend For Frontend, or BFF 😊, and developing in this way is more akin to developing an API (Application Programming Interface).

APIs are used by front-end code to handle things like logins, real-time data fetching, or secure tasks that aren’t suitably handled by the browser alone. In this tutorial, I’ll explain how to build an API using Gatsby Functions and deploy it to Gatsby Cloud.

Preflight Checks

Gatsby Functions work when deployed to Gatsby Cloud or Netlify. In this tutorial, I’ll be explaining how to deploy to Gatsby Cloud, so you’ll need to sign up and create a free account first.

You’re also going to need a GitHub, GitLab, or Bitbucket account; this is how Gatsby Cloud reads your code and then builds your “site”, or in this case, your API.

For the purposes of this tutorial, I’ll be using GitHub. If you’d prefer to jump ahead, the finished demo API code can be found on my GitHub.

Getting Started

Create a new directory somewhere on your local drive and run the following in your terminal. This will set up a default package.json.

npm init -y

Dependencies

Type the following into your terminal to install the required dependencies.

npm install gatsby react react-dom

Pages

It’s likely your API won’t have any “pages”, but to avoid seeing Gatsby’s default missing-page warning when you visit the root URL in the browser, add the following to both src/pages/index.js and src/pages/404.js.

// src/pages/index.js & src/pages/404.js
export default () => null;

API

Add the following to src/api/my-first-function.js.

I’ll explain a little later what 'Access-Control-Allow-Origin', '*' means, but in short, it makes sure that requests to your API from other origins aren’t blocked by CORS.

// src/api/my-first-function.js
export default function handler(req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.status(200).json({ message: 'A ok!' });
}

Scripts

Add the following to package.json.

// package.json
...
"scripts": {
  "develop": "gatsby develop",
  "build": "gatsby build"
},
...

Start The Gatsby Development Server

To spin up the Gatsby development server run the following in your terminal.

npm run develop

Make A Request From The Browser

With the Gatsby development server running, you can visit http://localhost:8000/api/my-first-function, and since this is a simple GET request, you should see the following in your browser.

{ "message": "A ok!"
}

Congratulations 🎉

You’ve just developed an API using Gatsby Functions.

Deploy

If you’re seeing the above response in your browser, it’s safe to assume your function is working correctly locally. In the following steps, I’ll explain how to deploy your API to Gatsby Cloud and access it using an HTTP request from CodeSandbox.

Push Code To Git

Before attempting to deploy to Gatsby Cloud you’ll need to have pushed your code to your Git provider of choice.

Gatsby Cloud

Log into your Gatsby Cloud account and look for the big purple button that says “Add site +”.

In the next step, you’ll be asked to either Import from a Git repository or Start from a Template. Select Import from Git Repository and hit Next.

As mentioned above, Gatsby Cloud can connect to GitHub, GitLab, or Bitbucket. Select your preferred Git provider and hit Next.

With your Git provider connected, you can search for your repository, and give your site a name.

Once you’ve selected your repository and named your site hit next.

You can skip the “Integrations” and “Setup” steps, as we won’t be needing them.

If all has gone to plan, you should see something similar to the screenshot below.

Near the top on the left-hand side of the screen you’ll see a URL that ends with gatsbyjs.io. This will be the URL for your API, and any functions you create can be accessed by adding /api/name-of-function to the end of it.

E.g., the complete deployed version of my-first-function.js for my demo API is as follows:

Demo API: My First Function.

Testing Your API

Visiting the URL of your API is one thing, but it’s not really how APIs are typically used. Ideally, to test your API, you need to make a request to the function from a completely unrelated origin.

It’s here where res.setHeader('Access-Control-Allow-Origin', '*'); comes to the rescue. Whilst it’s not always desirable to allow any domain (website) to access your functions, for the most part, public functions are just that: public. Setting the Access-Control-Allow-Origin header to a value of * means any domain can access your function; without this, any domain other than the one the API is hosted on will be blocked by CORS.
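
If you ever do want to lock a function down, a minimal sketch (not part of the demo API, and with a placeholder origin) might allow only a single, known domain instead of the wildcard:

// src/api/my-locked-function.js (hypothetical example)
export default function handler(req, res) {
  // Only requests from this origin will pass the browser's CORS check;
  // replace the placeholder with the domain that should be allowed.
  res.setHeader('Access-Control-Allow-Origin', 'https://www.example.com');
  res.status(200).json({ message: 'A ok!' });
}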

Here’s a CodeSandbox that uses my-first-function from my demo API. You can fork this and change the Axios request URL to test your function.

CodeSandbox: My First Function
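
If you’d rather test from your own code instead of forking the CodeSandbox, a minimal sketch of that request could look like the following, assuming axios is installed and with a placeholder URL you’d swap for your own Gatsby Cloud URL:

// test-request.js (hypothetical example)
import axios from 'axios';

const testMyFirstFunction = async () => {
  // Replace with the URL of your own deployed API.
  const { data } = await axios.get(
    'https://your-site-name.gatsbyjs.io/api/my-first-function'
  );
  console.log(data.message); // "A ok!"
};

testMyFirstFunction();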

Getting Fancier

Sending a response from your API that says message: "A ok!" isn’t exactly exciting. In the next bit, I’ll show you how to query the GitHub REST API and make a personal profile card to display on your own site using the API you just created. It’ll look a little like this:

CodeSandbox: Demo profile card

Dependencies

To use the GitHub REST API, you’ll need to install the @octokit/rest package.

npm install @octokit/rest

Get GitHub User Raw

Add the following to src/api/get-github-user-raw.js.

// src/api/get-github-user-raw.js
import { Octokit } from '@octokit/rest';

const octokit = new Octokit({
  auth: process.env.OCTOKIT_PERSONAL_ACCESS_TOKEN
});

export default async function handler(req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  try {
    const { data } = await octokit.request(`GET /users/{username}`, {
      username: 'PaulieScanlon'
    });
    res.status(200).json({ message: 'A ok!', user: data });
  } catch (error) {
    res.status(500).json({ message: 'Error!' });
  }
}

Access Token

To communicate with the GitHub REST API you’ll need an access token. You can get this by following the steps in this guide from GitHub: Creating A Personal Access Token.

.env Variables

To keep your access token secure, add the following to .env.development and .env.production.

OCTOKIT_PERSONAL_ACCESS_TOKEN=123YourAccessTokenABC

You can read more about Gatsby environment variables in this guide from Gatsby: Environment Variables.

Start Development Server

As you did before, start the Gatsby development server by typing the following in your terminal.

npm run develop

Make A Request From The Browser

With the Gatsby development server running, you can visit http://localhost:8000/api/get-github-user-raw, and since this too is a simple GET request, you should see the following in your browser. (I’ve removed part of the response for brevity.)

{ "message": "A ok!", "user": { "login": "PaulieScanlon", "id": 1465706, "node_id": "MDQ6VXNlcjE0NjU3MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/1465706?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulieScanlon", "type": "User", "site_admin": false, "name": "Paul Scanlon", "company": "Paulie Scanlon Ltd.", "blog": "https://www.paulie.dev", "location": "Worthing", "email": "pauliescanlon@gmail.com", "hireable": true, "bio": "Jamstack Developer / Technical Content Writer (freelance)", "twitter_username": "pauliescanlon", "created_at": "2012-02-23T13:43:26Z", "two_factor_authentication": true, ... }
}

Here’s a CodeSandbox example of the full raw response.

CodeSandbox: Raw Response

You’ll see from the above that there’s quite a lot of data returned that I don’t really need. This next bit is completely up to you, as it’s your API, but I have found it helpful to manipulate the GitHub API response a little before sending it back to my front-end code.

If you’d like to do the same you could create a new function and add the following to src/api/get-github-user.js.

// src/api/get-github-user.js
import { Octokit } from '@octokit/rest';

const octokit = new Octokit({
  auth: process.env.OCTOKIT_PERSONAL_ACCESS_TOKEN
});

export default async function handler(req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  try {
    const { data } = await octokit.request(`GET /users/{username}`, {
      username: 'PaulieScanlon'
    });
    res.status(200).json({
      message: 'A ok!',
      user: {
        name: data.name,
        blog_url: data.blog,
        bio: data.bio,
        photo: data.avatar_url,
        githubUsername: `@${data.login}`,
        githubUrl: data.html_url,
        twitterUsername: `@${data.twitter_username}`,
        twitterUrl: `https://twitter.com/${data.twitter_username}`
      }
    });
  } catch (error) {
    res.status(500).json({ message: 'Error!' });
  }
}

You’ll see from the above that, rather than returning the complete data object from the GitHub REST API, I pick out just the bits I need, rename them, and add a few bits before the username and URL values. This makes life a bit easier when you come to render the data in the front-end code.
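
As a rough illustration (not the actual CodeSandbox source), a hypothetical React component consuming the trimmed response might look like this; the fetch URL is a placeholder, but the field names match those returned by get-github-user above:

// ProfileCard.js (hypothetical example)
import React, { useEffect, useState } from 'react';

const ProfileCard = () => {
  const [user, setUser] = useState(null);

  useEffect(() => {
    // Replace with the URL of your own deployed API.
    fetch('https://your-site-name.gatsbyjs.io/api/get-github-user')
      .then((response) => response.json())
      .then((json) => setUser(json.user));
  }, []);

  if (!user) return <p>Loading…</p>;

  return (
    <div>
      <img src={user.photo} alt={user.name} />
      <h2>{user.name}</h2>
      <p>{user.bio}</p>
      <a href={user.githubUrl}>{user.githubUsername}</a>
      <a href={user.twitterUrl}>{user.twitterUsername}</a>
    </div>
  );
};

export default ProfileCard;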

Here’s a CodeSandbox example of the formatted response.

CodeSandbox: Formatted Response

This is very similar to the Profile Card CodeSandbox from earlier, but I’ve also printed the data out so you can see how each manipulated data item is used.

It’s worth noting at this point that all four of the CodeSandbox demos in this tutorial are using the demo API, and none of them are built using Gatsby or hosted on Gatsby Cloud — cool ay!

.env Variables In Gatsby Cloud

Before you deploy your two new functions, you’ll need to add the GitHub access token to the environment variables section in Gatsby Cloud.

Where To Go From Here?

I asked myself this very question. Typically speaking, serverless functions are used for client-side requests, and whilst that’s fine, I wondered if they could also be used at build time to statically “bake” data into a page, rather than relying on JavaScript, which may be disabled in the user’s browser.

…so that’s exactly what I did.

Here’s a kind of data dashboard that uses data returned by Gatsby Functions at both run and build time. I built this site using Astro and deployed it to GitHub Pages.

The reason I think this is a great approach is that I’m able to re-use the same functionality both on the server and in the browser without duplicating anything.

In this Astro build, I hit the same endpoint exposed by my API to return data that is then either baked into the page (great for SEO) or fetched at run time by the browser (great for showing fresh, up-to-the-minute data).

Data Dashboard

The data displayed on the left of the site is requested at build time and baked into the page with Astro. The data on the right of the page is requested at runtime using a client-side request. I’ve used slightly different endpoints exposed by the GitHub REST API to query different GitHub user accounts, which creates the different lists.
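
As a simplified sketch of that reuse (not the dashboard’s actual source, and with a placeholder URL), the same helper can run in Node during the build and in the browser at run time, because both environments just make an HTTP request to the API:

// get-user.js (hypothetical example)
// Assumes a fetch implementation is available (Node 18+, a polyfill, or the browser).
const API_URL = 'https://your-site-name.gatsbyjs.io/api/get-github-user';

export const getUser = async () => {
  const response = await fetch(API_URL);
  const { user } = await response.json();
  return user;
};

// Called at build time, the result is baked into the generated HTML (great for SEO).
// Called at run time in the browser, the result is fetched fresh on every visit.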

Everything you see on this site is provided by my more complete API. I’ve called it: Paulie API and I use it for a number of my websites.

Paulie API

Paulie API, like the API from this tutorial, is built with Gatsby, but because Gatsby can act as both a site and an API, I’ve used it to document how all my functions work. Each endpoint has its own page that can be used as an interactive playground… feel free to have a look around.

So, there you have it: a Gatsby Functions API that can be used by any client-side or server-side code, from any website built with any tech stack. 🤯

Give it a go and I’d be very interested to see what you build. Feel free to share in the comments below or come find me on Twitter: @PaulieScanlon.

13 Examples of Sustainable Ecommerce

A recent survey of 6,000 consumers in North America, Europe, and Asia found that 80% of participants felt it was “important or extremely important” for companies to design environmentally conscious products. Moreover, 72% said they buy more environmentally friendly products than five years ago, and 81% said they expected to purchase more over the next five years.

Ecommerce merchants can court this changing shopper base and create a positive impact by making their businesses more sustainable. Consider posting a statement about your focus on ethical and sustainable practices on the “About Us” page. Reduce packaging, shift to an eco-friendly shipping program, and create an optional carbon offset charge at checkout. Develop recycling policies and channels to resell your used merchandise. And partner with brands that have ethical and sustainable models.

Here is a list of sustainable ecommerce sites, for inspiration. These sites create change through environmental, economic, and social action and awareness.

Worn Wear – Patagonia

Home page of Worn Wear - Patagonia

Worn Wear – Patagonia

Worn Wear is an online store for used Patagonia clothing. Customers can trade in used Patagonia clothing and receive credit for a used or new Patagonia item. Worn Wear is a way that customers can partner with Patagonia to extend the life of those products. Most items are cleaned using CO2 technology, saving water and energy (compared to conventional methods) and capturing microfibers.

Rêve En Vert

Home page of Rêve En Vert

Rêve En Vert

Rêve En Vert is a luxury retail platform for sustainable and ethical goods. It sources the most ethical materials possible, emphasizing low-environmental impact and longevity. Alongside its retail offering, the editorial and community sections of Rêve En Vert explore the sustainable options of collective humanity.

4ocean

Home page of 4ocean

4ocean

4ocean is committed to ending the ocean plastic crisis. Its ecommerce shop sells jewelry, apparel, and reusable items to support its mission. Every 4ocean product comes with a “One Pound Promise” to pull one pound of trash from the ocean, rivers, and coastlines. While its full-time crews remove debris, it also educates people to end the dependence on single-use plastic.

Shades of Green

Home page of Shades of Green

Shades of Green

Shades of Green is committed to creating healthier living spaces by sourcing and selling only non-toxic, environmentally friendly products, offering green design consultation, and providing the latest information on green building products and practices. The company’s evaluation system offers customers smart choices and real value through honest and transparent details. Every product on its website is eco-friendly, with a green score from 1 to 5. Each product description contains information on why it’s recommended, as well as reviews from users.

Our Commonplace

Home page of Our Commonplace

Our Commonplace

Our Commonplace is an ethical and sustainable marketplace for women’s fashion wear. Its mission is to help consumers shop ethically and sustainably. Each product displays the corresponding value icons: Ethical, Sustainable, Cruelty-Free, Woman-Owned, BIPOC-Owned (Black, Indigenous, and People of Color), and Toxic-Free. Shoppers benefit from product knowledge, transparency, and a better way to shop to help the world.

Pela

Home page of Pela

Pela

Pela develops products from environmentally-sensible materials, with a mission to create a waste-free future. Pela offers biodegradable iPhone and iPad cases, smartwatch bands, sunglasses, and accessories. In addition, Pela works to streamline transportation and improve manufacturing efficiencies. And Pela offsets its entire carbon footprint by purchasing carbon credits.

EarthHero

Home page of EarthHero

EarthHero

EarthHero is an eco-friendly online marketplace to make buying responsibly second nature. Partner brands are chosen because they’re helping to create a more sustainable future. Each product displays the corresponding logos to tell shoppers why it’s sustainable. Logos fall under Low Impact, Organic Content, Recycled Content, Renewable Resource, Responsible, and Upcycled Content. Product pages also detail sustainability features and specifications, as well as info about the brand partner.

Green Toys

Home page of Green Toys

Green Toys

Green Toys is a provider of environmentally and socially responsible toys and tableware for children. It uses 100% post-consumer recycled plastic and manufactures all its products in the U.S., diverting materials from landfills and reducing its carbon footprint. For packaging, Green Toys uses recycled material with soy ink that biodegrades four times faster than petroleum-based ink.

Simple Switch

Home page of Simple Switch

Simple Switch

Simple Switch is a marketplace for apparel, household items, food and drink, and travel and outdoor goods. Partner companies have committed to improving livelihoods, protecting the earth, and empowering people to change our future. In addition to shopping by product or partner, customers can also shop by impact, filtering products by certifications (e.g., climate neutral, Forest Stewardship Council), social impacts (e.g., fights human trafficking, supports education), and environmental impacts (e.g., innovative environmental materials, renewable energy).

Thrive Market

Home page of Thrive Market

Thrive Market

Thrive Market is an online grocery membership site that features ethical and sustainable goods, carbon-neutral shipping, zero-waste warehouses, and recyclable-compostable packaging. Upon signup, members select their most important values or causes, such as animal welfare, sustainable sourcing, fair trade, carbon impact, organic, and regenerative agriculture. Every annual membership to Thrive Market sponsors a free one for a family in need.

Ten Thousand Villages

Home page of Ten Thousand Villages

Ten Thousand Villages

Ten Thousand Villages has a mission to create opportunities for artisans in developing countries to earn income by bringing their products and stories to its markets. Ten Thousand Villages’ model puts the maker first, giving artisans opportunities to gain a safety net of financial stability and escape the cycle of poverty through transparent price agreements, interest-free microfinance investment, payment before export, and more. On each product page, shoppers can access the maker’s story, along with a link to more items by the maker.

Ethica

Home page of Ethica

Ethica

Ethica is an online retailer where shoppers can learn about ethical fashion, discover emerging designers, and shop a high-style selection of ethical and sustainable labels. Its goal is to connect consumers and companies that share a commitment to social and environmental responsibility. Ethica uses eco-friendly packaging and carbon-neutral shipping.

The Responsible Shop – Verishop

Home page of The Responsible Shop - Verishop

The Responsible Shop – Verishop

The Responsible Shop is a store within Verishop, an aggregator for independent brands and designers of home decor, fashion, and beauty products. In The Responsible Shop, shoppers can filter products via “Shop By Cause,” which includes Clean Beauty, Conscious, Cruelty-Free, Fair Trade, Organic, Philanthropic, Responsible, Sustainable, Upcycled, and Vegan.

Solving CLS Issues In A Next.js-Powered E-Commerce Website (Case Study)

Fairprice is one of the largest online grocery stores in Singapore. We are continuously looking for opportunities to improve the online shopping experience. Performance is one of the core aspects of ensuring our users have a delightful experience irrespective of their device or network connection.

There are many key performance indicators (KPIs) that measure different points during the lifecycle of the web page (such as TTFB, domInteractive, and onload), but these metrics don’t reflect how the end-user experiences the page.

We wanted to use a few KPIs that correspond closely to the actual experience of end-users, so that if any of them performs poorly, we know the end-user experience is being directly impacted. We found user-centric performance metrics to be the perfect fit for this purpose.

There are many user-centric performance metrics to measure different points in a page’s life cycle such as FCP, LCP, FID, CLS, and so on. For this case study, we are mainly going to focus on CLS.

CLS measures the total score of all unexpected layout shifts that happen between when the page starts loading and when it is unloaded.

Therefore, a low CLS value for a page ensures there are no random layout shifts causing user frustration. Barry Pollard has written an excellent in-depth article about CLS.
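
As a rough worked example of the scoring (see Barry’s article for the precise definitions): each individual shift is scored as impact fraction × distance fraction. If an element occupying half of the viewport moves down by 25% of the viewport height, that single shift scores roughly 0.75 × 0.25 ≈ 0.19, already well above Google’s 0.1 “good” threshold.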

How We Discovered CLS Issue In Our Product Page

We use Lighthouse and WebPageTest as our synthetic testing tools to measure CLS. We also use the web-vitals library to measure CLS for real users. Apart from that, we check the Core Web Vitals report in Google Search Console to get an idea of potential CLS issues on any of our pages. While exploring the report, we found many URLs from the product detail page had a CLS value of more than 0.1, hinting that some major layout shift event was happening there.

Debugging CLS Issue Using Different Tools

Now that we knew there was a CLS issue on the product detail page, the next step was to identify which element was causing it. At first, we decided to run some tests using synthetic testing tools.

So we ran Lighthouse to check if it could find any element that might be triggering a major layout shift. It reported a CLS of 0.004, which is quite low.

The Lighthouse report page has a diagnostic section. That also did not show any element causing a high CLS value.

Then we ran WebpageTest and decided to check the filmstrip view:

We find this feature very helpful since it shows which element caused the layout to shift and at which point in time. But when we ran the test to see if any layout shifts were highlighted, there wasn’t anything contributing to the high CLS:

The quirk with CLS is that it records individual layout shift scores during the entire lifespan of the page and adds them up.

Note: How CLS is measured has been changed since June 2021.

Since Lighthouse and WebPageTest couldn’t detect any element that triggered a major layout shift, it had to be happening after the initial page load, possibly due to some user action. So we decided to use the Web Vitals Google Chrome extension, since it can record CLS on a page while the user is interacting with it. After performing different actions, we found that the layout shift score increased when the user used the image magnify feature.

I have also created a PR against the original repo so that other developers using the library can get rid of the CLS issue.

The Impact Of The Change

After the code was deployed to production, the CLS was fixed on the product details page, and the number of pages impacted by CLS was reduced by 98%:

Since we used transform, it also made the image magnify feature a smoother experience for users.
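
As a simplified sketch of the idea (not the library’s actual code), moving the magnified lens with transform instead of top/left keeps the change out of layout, so no layout shift is recorded:

// move-lens.js (hypothetical example)
// Changing top/left on a positioned element triggers layout work and can be
// recorded as a layout shift.
const moveWithTopLeft = (lens, x, y) => {
  lens.style.top = `${y}px`;
  lens.style.left = `${x}px`;
};

// Changing transform leaves the element's layout position untouched and only
// affects how it is painted, so it doesn't count towards CLS.
const moveWithTransform = (lens, x, y) => {
  lens.style.transform = `translate(${x}px, ${y}px)`;
};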

Note: Paul Irish has written an excellent article on this topic.

Other Key Changes We Made For CLS

There were also some other issues we faced across many pages of our website that contributed to CLS. Let’s go through those elements and components and see how we tried to mitigate the layout shifts arising from them.

  • Web-fonts:
    We noticed that late loading of fonts causes user frustration, since the content flashes, and it also causes some layout shifts. To minimize this, we made a few changes:

    • We self-host the fonts instead of loading them from a third-party CDN.
    • We preload the fonts.
    • We use font-display: optional.
  • Images:
    Missing height or width values on an image cause the elements after the image to shift once the image loads, which ends up being a major contributor to CLS. Since we are using Next.js, we took advantage of the built-in image component, next/image. This component incorporates several image-related best practices. It is built on top of the <img> HTML tag and can help to improve LCP and CLS (see the short sketch after this list). I highly recommend reading this RFC to find out the key features and advantages of using it.

  • Infinite Scroll:
    On our website, product listing pages have infinite scrolling. Initially, when users scrolled to the bottom of the page, they saw the footer for a fraction of a second before the next set of data loaded, and this caused layout shifts. To solve this, we took a few steps:

    • We call the API to load data even before the user reaches the absolute bottom of the list.
    • We reserve enough space for the loading state and show product skeletons while data is loading. Now when users scroll, they no longer see the footer for a fraction of a second while products load.

Addy Osmani has written a detailed article on this approach, which I highly recommend reading.
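
As a minimal sketch of the image fix mentioned above (simplified, not our production component), giving next/image explicit dimensions lets the browser reserve the space before the file arrives:

// ProductImage.js (hypothetical example)
import Image from 'next/image';

// Explicit width and height let Next.js reserve the image's box up front,
// so the content below it doesn't shift when the file finishes loading.
const ProductImage = ({ src, alt }) => (
  <Image src={src} alt={alt} width={300} height={300} />
);

export default ProductImage;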

Key Takeaways

  • While Lighthouse and WebPageTest help to discover performance issues that happen up to page load, they can’t detect performance issues that occur after page load.
  • The Web Vitals extension can detect CLS changes triggered by user interactions, so if a page has a high CLS value but Lighthouse or WebPageTest reports a low CLS, the extension can help pinpoint the issue.
  • Google Search Console data is based on real users, so it can also point to potential performance issues happening at any point in the life cycle of a page. Once an issue is detected and fixed, checking the report section again can help verify the effectiveness of the fix; the changes are reflected in the Core Web Vitals report within days.

Final Thoughts

While CLS issues are comparatively harder to debug, using a combination of different tools up to page load (Lighthouse, WebPageTest) and the Web Vitals extension (after page load) can help pinpoint the issue. CLS is also a metric undergoing lots of active development to cover a wider range of scenarios, which means how it is measured will change in the future. We are following https://web.dev/evolving-cls/ to keep up with any upcoming changes.

As for us, we are continuously working to improve other Core Web Vitals too. Recently, we implemented responsive image preloading and started serving images in WebP format, which helped us reduce image payload by 75%, LCP by 62%, and Speed Index by 24%. You can read more details about the optimizations for improving LCP and Speed Index, or follow our engineering blog to learn about other exciting work we are doing.

We would like to thank Alex Castle for helping us debug the CLS issue on the product page and solve the quirks in the next/image implementation.

Ecommerce Product Releases: October 17, 2021

Here is a list of product releases and updates for mid-October from companies that offer services to online merchants. There are updates on email marketing, digital payments, subscription tools, social commerce, Amazon integration, WooCommerce, and live video shopping.

Got an ecommerce product release? Email releases@practicalecommerce.com.

Ecommerce Product Releases

HubSpot launches new payments service. HubSpot, the customer relationship management and inbound marketing platform, has announced the launch of its open beta for HubSpot Payments. Built natively as part of the HubSpot CRM platform, HubSpot Payments helps companies accept payments seamlessly in less time and with fewer tools. HubSpot Payments supports all major credit cards and ACH and features payment links, recurring payments for memberships, and native integration with HubSpot’s quotes feature in Sales Hub. To help customers get up and running, HubSpot is waiving fees on the first $50,000 of ACH transactions.

Home page of HubSpot Payments

HubSpot Payments

CedCommerce launches Amazon integration for Shopify merchants. CedCommerce has launched “Amazon by CedCommerce” for Shopify merchants to create and synchronize listings between Shopify and Amazon. The release follows Shopify’s recent closure of its Amazon-selling app. Sellers can get the benefits of the Amazon by CedCommerce app, free of any subscription cost, through December 31, 2021.

Lightspeed completes the acquisition of Ecwid. Lightspeed Commerce, an ecommerce and point-of-sale provider, has completed the acquisition of Ecwid, the global ecommerce platform. Once integrated, Lightspeed and Ecwid will help merchants reach shoppers on social media and digital marketplaces. Ecwid recently announced a partnership with TikTok to help shape the future of buying on the social media platform. The partnership will also help Lightspeed’s merchants access the core functions of TikTok For Business Ads Manager.

GhostRetail unveils live-video shopping platform. GhostRetail has emerged from stealth mode to launch a 1:1 live video shopping platform that simulates an in-store experience online. The white-label platform is for enterprise retailers and direct-to-consumer merchants looking to augment their in-store and ecommerce sales channels with personalized live video co-shopping. Multiple Fortune 500 brands — American Eagle, Authentic Brands Group, Canada Goose, Maple Leaf Sports and Entertainment, others — now use the platform ahead of the holiday shopping season.

Home page of GhostRetail

GhostRetail

Shopify launches Global ERP Program. Shopify is launching a global ERP program, allowing select enterprise resource planning (ERP) partners to build direct integrations into the Shopify App Store. Shopify is partnering with leading ERP providers, including Microsoft Dynamics 365 Business Central, Oracle NetSuite, Infor, Acumatica, and Brightpearl, with more to come. Through the program, merchants can now access a suite of certified apps directly integrated with Shopify. The global ERP program provides partners with support from Shopify’s engineering team in building their apps.

GoDaddy’s point of sale now integrates with WooCommerce. GoDaddy launched its point of sale hardware last month. Now the POS is integrated with WooCommerce to make in-person payments quick and simple. The integration eliminates the need for multiple logins across platforms to manage a store, cuts down on training time, and helps stores launch in-person payments quickly and affordably. Businesses using WooCommerce via GoDaddy can add the POS offerings from their GoDaddy Payments Hub and start selling in-person. Developers can build WooCommerce websites with GoDaddy, and GoDaddy Payments will automatically appear in their WordPress admin dashboard.

BigCommerce announces integration with Chargify to deliver subscription management services. BigCommerce has announced a new native integration with Chargify, a billing and subscription management platform. In collaboration with developer Ebizio, the Chargify integration provides BigCommerce’s B2B and B2C merchants with the ability to manage, track, and analyze subscription activity. With a one-click install, merchants can sell their products on subscription directly through their BigCommerce store. Merchants can also quickly introduce subscription options to their customers without costly development work.

Home page of Chargify

Chargify

Poshmark unveils new ecommerce innovations. Poshmark, a social marketplace for new and secondhand fashion items, has launched My Shoppers, a clienteling feature that mimics an in-store retail associate, suggesting relevant products and personal styling based on what a shopper is browsing or liking. Poshmark also introduced Closet Insights, a dynamic dashboard that provides sellers with real-time inventory and sales data. Closet Insights allows sellers to understand sales performance over time to inform strategy.

ActiveCampaign expands automations with custom objects and integrations. ActiveCampaign, an email marketing and customer-experience platform, now enables businesses to build automations using custom objects. With custom objects, companies of all sizes can trigger automations from unique data specific to their business. ActiveCampaign customers can integrate the 1:1 automation with their best-loved tools. For example, a new opportunity in Salesforce could generate a series of internal alerts or tasks, trigger an onboarding message sequence, and send a message to the contact within the opportunity. A live entertainment venue that sells tickets through Eventbrite could create an “Events” custom object through that integration, enabling it to manage types of events, tickets, attendees, dates, and more.

GetResponse introduces Free Forever. GetResponse, an email marketing platform, is now offering a free plan, following the launch of its new website builder this year. With the free plan, businesses use a drag-and-drop creator to build emails, build and host one landing page, connect their domain or choose a free one, and receive lead-generation tools, such as newsletter templates and signup forms. Participants can also access premium features for 30 days at no cost.

Home page of GetResponse

GetResponse

Amid Facebook Ad Turmoil, Supply.co Retrenches

Patrick Coddou is a direct-to-consumer pioneer, having launched Supply.co in 2015. The company designs, manufactures, and sells premium shaving products — all to great success.

Until April 2021. That’s when Apple launched iOS 14.5, which allows apps to track the actions of iPhone users only if they agree. The release upended Facebook’s ability to hyper-target ads. The result is that Facebook’s cheap but profitable ads are now less cheap and not so profitable.

Coddou’s company relied on sales from Facebook ads. He told me, “Supply.co had a really tough summer. For the first time ever we recorded two monthly losses.”

But he has retrenched. He fired his marketing agency, adjusted personnel, and moved forward on two long-planned product releases.

He and I discussed it all in our recent conversation. The full audio version is embedded below. The transcript that follows is edited for clarity and length.

Eric Bandholz: What’s going on with Facebook?

Patrick Coddou: This topic is no surprise to anybody in ecommerce. The short of it is that my company, Supply.co, had a really tough summer. For the first time ever we recorded two monthly losses. We’ve always been profitable, except for the very early days. We recorded some pretty decent losses this summer. Those were some painful months, and they were 100% directly attributable to the iOS 14.5 update. We started to see changes in our advertising performance quickly after that update, as early as May. We had lower revenue months over the summer and much lower, even negative, profit.

Bandholz: We’re feeling the same pain at Beardbrand. iOS 14.5 made it hard for Facebook to track people and thus target ads. What was your response? What’s your plan?

Coddou: We saw those rough months coming. We shifted into an immediate problem-solving mode. It was clear to me that our agency at the time didn’t have a plan. They spent like drunken sailors on really poor advertising. They wasted a ton of money.

So I cut off that agency relationship. I moved everything in-house. I hired an internal head of marketing. I reassigned one of my guys to be our new head of creative. I hired a new full-time developer. I hired a full-time copywriter. I’m currently on the lookout for a part-time senior media buyer as well. And then I upped our videographer budget.

I took this huge chunk of money I was giving agencies and, for the first time with our company, took ownership of our marketing channels. I’m not saying it was a silver bullet. But that was my initial response.

Bandholz: Are the new employees remote or local?

Coddou: A little of both. Most of them are in the Dallas-Ft.Worth area. Most of my full-time team is there. My developer is in Africa. I have other teammates around the world, but the new guys are in the U.S.

I took a very different approach this time in terms of finding the new staff. I’ve hired so many expensive people in the history of my company. I hired a very expensive, full-time head of marketing last year. I’ve retained fancy, costly agencies. None of them worked out.

So this time around, I hired green people, those without a lot of experience. I over-leveraged on hunger and a chip-on-the-shoulder mentality. I wanted people eager to prove themselves and with a desire to learn. Those are the kind of people I hired. I found most of them through Twitter, incidentally.

Bandholz: How do you distinguish between someone who’s hungry and driven versus annoying and overbearing?

Coddou: I’m no hiring expert. It’s very subjective. I look for raw honesty coupled with motivation.

Here’s an example. One of the new hires didn’t have a great resume. He couldn’t keep a job for more than a few months. I asked him, “Based on your resume, I don’t think you’re hire-able. Why should I hire you?”

His answer combined honesty and hunger. He admitted why his previous positions had been failures. He told me what he learned from them and how he wanted to prove himself despite those failures.

Otherwise, there’s not one thing that I always do. However, I use a book called “Who: The A Method for Hiring.” I pretty much follow it word for word. It’s always worked for me. You ask direct questions, and you get surprisingly good answers.

Bandholz: You’re hiring green people. Who trains them?

Coddou: I’m doing a bit of training and mentoring. I’ve set one of the hires up with some forums. I’m hiring a seasoned person to help him. I’m honest with my hires. I tell them that there’s not a lot of structure at my company. There’s not a lot of, “Here’s your job. Go do it.” It’s more like, “This is what we need. Go figure it out.”

For the most part, it’s worked well. But for media buying lately, nobody knows what they’re doing. So how can I expect a green media buyer to know?

Bandholz: Nobody knows what to do with Facebook ads now.

Coddou: Nobody. So we might as well make it up and start from scratch.

Bandholz: Do you have a primary KPI for your ads?

Coddou: I have two: total revenue and the marketing efficiency ratio, or MER, which is the percent of revenue spent on advertising. From there, it gets convoluted because our return on ad spend on Facebook doesn’t currently make any sense. We used to hit a 2 to 3-times return on ad spend easily. Now we’re lucky to hit a 1. So the challenge with media buying now is we don’t know what’s working.

It feels like we’re moving around in this dark room, hoping to shed light on what to do. We’re using Wicked Reports, which is a first-party pixel. It’s helped us know what works, but it’s not perfect.

What is a better KPI than MER? I haven’t come up with a good answer.

Bandholz: Let’s switch gears. You’ve just launched a Kickstarter campaign.

Coddou: Right. This is our fourth Kickstarter. We’ve raised close to $500,000 over the past six years. We started our company on Kickstarter in August 2015. We love Kickstarter.

We went back to Kickstarter because we’re in the middle of an ambitious launch of two new products. I’ve spent hundreds of thousands of dollars in research and development as well as tooling costs.

I need to place purchase orders to have these products manufactured. But I don’t have half a million dollars sitting around to do it. So we turned to Kickstarter to raise the money. Plus it’s a lot of fun. It’s an event.

And then there are other benefits. I’m acquiring new customers through Kickstarter at a much lower return on ad spend than other channels. I’m buying Facebook ads to drive ad traffic to the page.

Kickstarter is a fantastic platform to launch a new product on. Not every product will work, however. We had a campaign that was a dud a couple of years ago for our Dopp Kit, a shaving bag.

Bandholz: This is a big year for Supply.co. You’re launching two new razors and bringing marketing in-house. What will 2022 look like?

Coddou: Let me talk about the future in the context of our product launch. Our current razor, the one on our website, is $75. It’s not cheap by any measure. But it’s a bargain because it lasts forever. It has a lifetime warranty. It’s made from steel.

And blade replacements are cheap. So you’re saving money in the long term. But it’s a lot of money to pay for a razor. The reason is that it’s very high-quality and it costs a lot to produce.

So I’ve always wanted to offer a lower-priced version. I’ve also wanted one that’s easy to use. Safety razors are not always safe, and they’re not always easy to use.

One of our new products is called the SE, the Sensitive Edition. It’s super easy to use, with a 30% lower price. And so I view that product as my Amazon and Target product, my mass-market product. It’s still not cheap at $49. But it’s more affordable for a quality safety razor.

So we’ll have that product. And then, for my most seasoned customers, my shaving fanatics, we’ve introduced a new complex-engineered razor. That will be our higher-priced product.

So we’ll have a two-phase pricing strategy.

Bandholz: Are you killing the old razor?

Coddou: Yes, once it’s out of stock, we’re going to kill it. It doesn’t offer anything different from the other two.

So what’s the future for us? Next year, 2022, will be about this lower-priced version, getting it on Amazon and, hopefully, Target. There’s a lot of work to do. But getting that product in the hands of many people is my strategy.

The mission for me from day one has been to evangelize single-blade shaving worldwide. We call it the single-blade revolution.

Bandholz: How can listeners reach you, buy your products, follow your Kickstarter?

Coddou: Our website is Supply.co. I’m usually on Twitter, @soundslikecanoe. Our Kickstarter page is “The Single Edge SE & Pro.”