Will Amazon Convert Shipping Expense into Profit?

Amazon is poised to become the second-largest package delivery carrier in the United States, perhaps transforming a cost into a profit center.

In 2021, Amazon’s burgeoning logistics service had a 22% share of the U.S. small parcel delivery market by volume, making it the third-largest carrier in the American market, according to a May 23, 2022, Pitney Bowes report.

This report states that Amazon’s network — made up of company-owned facilities and vehicles, an army of delivery service partners, and Uber-like Flex drivers — moved about 4.8 billion boxes and envelopes in 2021.

Surpasses FedEx

Amazon delivered more parcels in 2021 than FedEx in the U.S. — a significant milestone. Several transportation analysts doubted Amazon could do it.

A 2018 article in a Memphis business journal, for example, said that the “chances of [Amazon] ever making the network profitable or in any way, shape or form a serious competitor to FedEx is dubious at best, a pipe dream.” (FedEx, it’s worth noting, is based in Memphis.)

The quote addressed Amazon’s Delivery Service Partners, a network of independent contractors who own or lease Amazon-branded vans and make deliveries exclusively for Amazon. But there might be a lesson here about doubting the company’s abilities and tenacity.

In 2022, analysts expect Amazon to pass UPS in parcel volume, making it second only to the United States Postal Service in the number of small packages delivered in America.

Cost Center

Some have pointed out that Amazon’s market share is just 12% when revenue — not parcel volume — is in view. But that fact is beside the point.

Fulfillment and shipping are cost centers for ecommerce operations. Amazon’s Q1 2022 results, released on April 28, showed that the company spent more than $19.5 billion on shipping.

The parcel delivery network Amazon is building is likely to reduce the company’s fulfillment and shipping expenses relative to using other carriers. UPS and FedEx’s profit margins are Amazon’s cost savings opportunity.

Pitney Bowes reports that Amazon had 12% of the $188 billion parcel delivery market in 2021. That translates into roughly $22.5 billion in delivery revenue.

Profit Center

One could argue that Amazon is on the verge of transforming one of its most significant expenses into a profit center.

The company is already deriving revenue from third-party sellers on its Marketplace. In the first quarter of this year, Amazon’s third-party seller service fees, fulfillment, and shipping revenue were more than $25 billion.

That figure could grow if the company’s “Buy with Prime” initiative succeeds.

Announced on April 20, 2022, Buy with Prime puts Amazon’s “fast, free delivery, hassle-free returns, and a seamless checkout experience” on any ecommerce site. Participating merchants must use Fulfillment by Amazon and the company’s parcel delivery network.

Effectively, Amazon opened up its delivery services to ecommerce sales originating not just from its Marketplace but from any store. Buy with Prime is a back door to scale its delivery operations and become a profitable parcel carrier.

UPS and FedEx generated $2.6 billion and $1.1 billion in profit, respectively, in their most recent fiscal quarters. Amazon presumably sees that profit as an opportunity.

Amazon Web Services

Amazon has done this before.

Data centers are typically a cost center for large ecommerce enterprises. But Amazon long ago transformed this expense into profit.

Amazon Web Services started as low-cost digital storage but has grown into a massive cloud computing business. AWS now includes database offerings, animation solutions, natural language processing, machine learning, and more.

AWS generated more than $6.5 billion in profit in the first quarter of 2022 on $18.4 billion in revenue. What’s more, AWS is growing more than 30% per year.


In a way, Amazon used this same tactic on its website, too. The company has become a leader in retail media.

Its advertising services business produced $7.8 billion in revenue in Q1 2022.

Instead of seeing its website and marketplace as somehow proprietary, Amazon has long since seen it as an opportunity to generate more profit.


Amazon’s fulfillment and delivery operations and its success with AWS and retail advertising could hold an exciting lesson for ecommerce companies.

What is a cost today could become a source of revenue.

Manage Accessible Design System Themes With CSS Color-Contrast()

There’s certainly no shortage of design systems available to use when building your next project. From IBM’s Carbon to Wanda and Nord, there are plenty of terrific design systems to choose from. Yet, while each one contains its own nuances and opinions, most share a similar goal — simplifying the development process of creating beautifully accessible user interfaces.

It’s an admirable goal and, honestly, one that has led me to shift my own career into design systems. But a core feature at the foundation of many design systems is the extensibility for theming. And why wouldn’t it be? Without some flexibility for branding, every product using a particular system would look the same, à la Bootstrap around 2012.

While providing support for custom themes is vital, it also leaves the most well-intentioned system’s accessibility at the mercy of the implementation. Some teams may spend weeks, if not months, defining their ideal color palette for a rebranding. They’ll labor over each shade and color combination to ensure everything is reliable, informative, and accessible.

Others simply can’t and/or won’t do that.

It’s one thing to require alt text on an img element or a label for an input element, but enforcing accessible color palettes is an entirely different beast. It’s a beast with jagged yellow teeth, fiery-red eyes, and green scales covering its body like sheets of crocodile armor.

At least you think it is. For all you know, it could be a beast of nothing more than indistinct shades of black and slightly darker black.

And therein lies the problem.

The CSS Color-Contrast() Function

Building inclusive products doesn’t mean supporting devices but supporting the people using them.

The CSS color-contrast() function is an experimental feature currently defined in CSS Color Module Level 5. Its purpose — and the reason for the excitement of this article — is to select the greatest contrasting color from a list when compared against a base color.

For the sake of this article, we will refer to the first parameter as the “base color” and the second as the “color list.” These parameters can accept any combination of browser-supported CSS color formats, but be wary of opacities. There’s an optional third parameter, but let’s look at that later. First, let’s define what we mean by this being an experimental feature.

At the time of writing, the color-contrast() feature is only available in the Safari Technology Preview browser. The feature can be toggled through the Develop and Experimental Features menus. The following demos will only work if the feature is enabled in that browser. So, if you’d like to switch, now wouldn’t be the worst time to do so.

Now, with the base syntax, terminology, and support out of the way, let’s dive in. 🤿

Color Me Intrigued

It was Rachel Andrew’s talk at AxeCon 2022, “New CSS With Accessibility in Mind”, where I was introduced to color-contrast(). I scribbled the function down into my notebook and circled it multiple times to make it pop. Because my mind has been entirely in the world of design systems as of late, I wondered how big of an impact this little CSS feature could have in that context.

In her presentation, Rachel demoed the new feature by dynamically defining text colors based on a background. So, let’s start there as well, by setting background and text colors on an article.

article {
  --article-bg: #222;
  background: var(--article-bg);
  color: color-contrast(var(--article-bg) vs #FFF, #000);
}

We start by defining the --article-bg custom property as a dark grey, #222. That property is then used as the base color in the color-contrast() function and compared against each item in the color list to find the highest contrasting value.

(Table: the base color compared against each color in the list, with the resulting contrast ratios.)

As a result, the article’s color will be set to white, #FFF.

But this can be taken further.

We can effectively chain color-contrast() functions by using the result of one as the base color of another. Let’s extend the article example by defining the ::selection color relative to its text.

article {
  --article-bg: #222;
  --article-color: color-contrast(var(--article-bg) vs #FFF, #000);
  background: var(--article-bg);
  color: var(--article-color);

  ::selection {
    background: color-contrast(var(--article-color) vs #FFF, #000);
  }
}

Now, as the text color is defined, so will its selection background.

The optional third parameter for color-contrast() defines a target contrast ratio. The parameter accepts either a keyword — AA, AA-large, AAA, and AAA-large — or a number. When a target contrast is defined, the first color from the color list that meets or exceeds it is selected.

This is where color-contrast() could really empower design systems to enforce a specific level of accessibility.

Let’s break this down.

.dark-mode {
  --bg: #000;
  --color-list: #111, #222;
}

.dark-mode {
  background: var(--bg);
  color: color-contrast(var(--bg) vs var(--color-list));

  &.with-target {
    color: color-contrast(var(--bg) vs var(--color-list) to AA);
  }
}

The magic here happens when the two color declarations are compared.

The base .dark-mode class does not use a target contrast. This results in the color being defined as #222, the highest contrasting value from the color list relative to its base color of black. Needless to say, the contrast ratio of 1.35 may be the highest, but it’s far from accessible.

Compare this to when the .dark-mode and .with-target classes are combined, and a target contrast is specified. Despite using the same base color and color list, the result is much different. When no value in the color list meets the AA (4.5) target contrast, the function selects a value that does. In this case, white.
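Since the browser does the comparison for us, it can help to see the math underneath. Below is a rough JavaScript sketch of the selection rule, built on the WCAG 2.x relative-luminance formula. The function names are invented here, and the spec’s real fallback handling is more nuanced than this black-or-white shortcut — treat it as an illustration, not the browser’s actual algorithm.

```javascript
// Sketch of the color-contrast() selection rule (names are our own).

function channel(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(hex) {
  const n = parseInt(hex.slice(1), 16);
  return (
    0.2126 * channel((n >> 16) & 0xff) +
    0.7152 * channel((n >> 8) & 0xff) +
    0.0722 * channel(n & 0xff)
  );
}

function ratio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

function pick(base, list, target) {
  if (target === undefined) {
    // No target: the highest contrast wins, even if it is a poor one.
    return list.reduce((best, c) => (ratio(base, c) > ratio(base, best) ? c : best));
  }
  // With a target: the first color that meets it, else a black/white fallback.
  return (
    list.find((c) => ratio(base, c) >= target) ??
    (ratio(base, "#ffffff") >= ratio(base, "#000000") ? "#ffffff" : "#000000")
  );
}

console.log(pick("#000000", ["#111111", "#222222"]));      // "#222222" (ratio ≈ 1.32)
console.log(pick("#000000", ["#111111", "#222222"], 4.5)); // "#ffffff"
```

The same helpers also make it easy to audit a palette before handing it to color-contrast().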

This is where the potential of color-contrast() is the brightest.

In the context of design systems, this would allow a system to enforce a level of color accessibility with very granular control. That level could also be a :root-scoped custom property allowing the target contrast to be dynamic yet global. There’s a real feeling of control on the product side, but that comes at a cost during the implementation.

There’s a logical disconnect between the code and the result. The code doesn’t communicate that the color white will be the result. And, of course, that control on the product side translates to uncertainty with the implementation. If a person is using a design system and passes specific colors into their theme, why are black and white being used instead?

The first concern could be remedied by understanding the color-contrast() feature more deeply, and the second could be alleviated by clear, communicative documentation. However, in both cases, this shifts the burden of expectation onto the implementation side, which is not ideal.

In some cases, the explicit control will justify the costs. However, there are other drawbacks to color-contrast() that will need to be considered in all cases.

Not All That Glitters Is Gold

There are inevitable drawbacks to consider, as with any experimental or new feature, and color-contrast() is no different.

Color And Visual Contrasts Are Different Things

When using color-contrast() to determine text color based on its background, the function is comparing exactly that — the colors. What color-contrast() does not take into consideration are other styles that may affect visual contrast, such as font size, weight, and opacity.

This means it’s possible to have a color pairing that technically meets a specific contrast threshold but still results in an inaccessible text because its size is too small, weight is too light, or its opacity is too transparent.

To learn more about accessible typography, I highly recommend Carie Fisher’s talk, “Accessible Typography Essentials.”

Custom Properties And Fallbacks

Since CSS custom properties support fallback values for when the property is not defined, it seemed like a good approach to use color-contrast() as a progressive enhancement.

--article-color: color-contrast(#000 vs #333, #FFF);
color: var(--article-color, var(--fallback-color));

If color-contrast() is not supported, the --article-color property would not be defined, and therefore the --fallback-color would be used. Unfortunately, that’s not how this works.

An interesting thing happens in unsupported browsers: the custom property is defined with the raw, unevaluated function text itself rather than a computed color.

Because the --article-color property is technically defined, the fallback won’t trigger.

That’s not to say color-contrast() can’t be used progressively, though. It can be paired with an @supports query, but be mindful if you decide to do so. As exciting as it may be, with such limited support and potential for syntax and/or functionality changes, it may be best to hold off on sprinkling this little gem throughout an entire codebase.

@supports (color: color-contrast(#000 vs #fff, #eee)) {
  article {
    --article-color: color-contrast(var(--article-bg) vs #fff, #000);
  }
}

The Highest Contrast Doesn’t Mean Accessible Contrast

Despite the control color-contrast() can offer with colors and themes, there are still limitations. When the function compares the base color against the list and no target contrast is specified, it will select the highest contrasting value. Just because the two colors offer the greatest contrast ratio, it doesn’t mean it’s an accessible one.

h1 {
  background: #000;
  color: color-contrast(#000 vs #111, #222);
}

In this example, the background color of black, #000, is compared against two shades of dark grey. While #222 would be selected for having the “greatest” contrast ratio, pairing it with black would be anything but great.

No Gradient Support

In hindsight, it was maybe a touch ambitious trying gradients with color-contrast(). Nevertheless, through some testing, it seems gradients are not supported. Which, once I thought about it, makes sense.

If a gradient transitioned from black to white, what would the base color be? And wouldn’t it need to be relative to the position of the content? It’s not like the function can interpret the UI. However, Michelle Barker has experimented with using CSS color-mix() and color-contrast() together to support this exact use case.

It’s not you, color-contrast(), it’s me. Well, it’s actually the gradients, but you know what I mean.

Wrapping Up

That was a lot of code and demos, so let’s take a step back and review color-contrast().

The function compares a base color against a color list, then selects the highest contrasting value. Additionally, it can compare those values against a target contrast ratio and either select the first color to meet that threshold or use a dynamic color that does. Pair this with progressive enhancement, and we’ve got a feature that can drastically improve web accessibility.

I believe there are still plenty of unexplored areas and use cases for color-contrast(), so I want to end this article with some additional thoughts and/or questions.

  • How do you see this feature being leveraged when working with different color modes, like light, dark, and high contrast?
  • Could a React-based design system expose an optional targetContrast prop on its ThemeProvider in order to enforce accessibility if the theme falls short?
  • Would there be a use case for the function to return the lowest contrasting value instead?
  • If there were two base colors, could the function be used to find the best contrasting value between them?

What do you think?



Charts: Impact of Ukraine War on World Economy

“World Economic Outlook” is a twice-yearly report by the International Monetary Fund. The April 2022 edition is subtitled “War Sets Back the Global Recovery.”

The conflict in Ukraine has generated a humanitarian disaster with economic consequences that will slow down the global economy and elevate inflation. Fuel and food prices have risen significantly, for example.

According to IMF estimates, the worldwide economy will decelerate from 6.1% growth in 2021 to 3.6% in 2022 and 2023 — 0.8 and 0.2 percentage points lower than its January 2022 forecast.

The IMF’s inflation predictions for 2022 are 5.7% in advanced economies and 8.7% in emerging market and developing economies.

Although the IMF expects it to ease eventually, inflation could be higher in the short term for various reasons, including worsening supply-demand mismatches and commodity price hikes. Furthermore, both the war and renewed pandemic breakouts could extend supply disruptions, raising costs even more.

According to the IMF forecast, global growth will slow markedly this year as Europe absorbs the economic shock of the war, China slows substantially, and U.S. financial conditions tighten. On a purchasing power parity basis, the growth of global gross domestic product is expected to be 3.6% in 2022.

Bloomberg tracks global fuel prices, including the comparison of gasoline to diesel. According to Bloomberg’s data, gasoline prices in northwest Europe have risen significantly and are now higher than diesel, a change from prior periods.

Understanding Weak Reference In JavaScript

Memory and performance management are important aspects of software development and ones that every software developer should pay attention to. Though useful, weak references are not often used in JavaScript. WeakSet and WeakMap were introduced to JavaScript in the ES6 version.

Weak Reference

To clarify, unlike a strong reference, a weak reference doesn’t prevent the referenced object from being reclaimed by the garbage collector, even if it is the only remaining reference to the object in memory.

Before getting into strong reference, WeakSet, Set, WeakMap, and Map, let’s illustrate weak reference with the following snippet:

// Create an instance of the WeakMap object.
let human = new WeakMap();

// Create an object, and assign it to a variable called man.
let man = { name: "Joe Doe" };

// Call the set method on human, and pass two arguments (key and value) to it.
human.set(man, "done");

console.log(human);

The output of the code above would be the following:

WeakMap {{…} => 'done'}

The man object is now stored as a key in the WeakMap. Next, we remove the only strong reference to it:

man = null;

At this point, the only reference to the original object in memory is the weak one held by the WeakMap we created earlier. When the JavaScript engine runs a garbage-collection process, the man object will be removed both from memory and from the WeakMap. Because the reference is weak, it doesn’t prevent garbage collection.

It looks like we are making progress. Let’s talk about strong reference, and then we’ll tie everything together.

Strong Reference

A strong reference in JavaScript is a reference that prevents an object from being garbage-collected. It keeps the object in memory.

The following code snippets illustrate the concept of strong reference:

let man = { name: "Joe Doe" };
let human = [man];

man = null;

The result of the code above would be this:

// An array of objects of length 1.
[{…}]

The object cannot be accessed via the man variable anymore, but due to the strong reference between the human array and the object, it is retained in memory and can be accessed with the following code:

console.log(human[0]); // { name: "Joe Doe" }
The important point to note here is that a weak reference doesn’t prevent an object from being garbage-collected, whereas a strong reference does prevent an object from being garbage-collected.

Garbage Collection in JavaScript

As in every programming language, memory management is a key factor to consider when writing JavaScript. Unlike C, JavaScript is a high-level programming language that automatically allocates memory when objects are created and that clears memory automatically when the objects are no longer needed. The process of clearing memory when objects are no longer being used is referred to as garbage collection. It is almost impossible to talk about garbage collection in JavaScript without touching on the concept of reachability.


Reachability

All values that are within a specific scope or that are in use within a scope are said to be “reachable” within that scope and are referred to as “reachable values”. Reachable values are always stored in memory.

Values are considered reachable if they are:

  • values in the root of the program or referenced from the root, such as global variables or the currently executing function, its context, and callback;
  • values accessible from the root by a reference or chain of references (for example, an object in the global variable referencing another object, which also references another object — these are all considered reachable values).
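The chain-of-references case in the second bullet can be sketched directly (the object names here are invented for illustration):

```javascript
// Everything reachable from a root reference stays in memory.
let root = { child: { grandchild: { value: 42 } } };

// root, root.child, and root.child.grandchild are all reachable.
console.log(root.child.grandchild.value); // 42

// Cutting the chain makes child and grandchild unreachable together,
// so both become eligible for garbage collection.
root.child = null;
```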

The code snippets below illustrate the concept of reachability:

let languages = { name: "JavaScript" };

Here we have an object with a key-value pair (with the name JavaScript) referencing the global variable languages. If we overwrite the value of languages by assigning null to it…

languages = null;

… then the object will be garbage-collected, and the value JavaScript cannot be accessed again. Here is another example:

let languages = { name: "JavaScript" };
let programmer = languages;

From the code snippets above, we can access the object property from both the languages variable and the programmer variable. However, if we set languages to null

languages = null;

… then the object will still be in memory because it can be accessed via the programmer variable. This is how garbage collection works in a nutshell.

Note: By default, JavaScript uses strong reference for its references. To implement weak reference in JavaScript, you would use WeakMap, WeakSet, or WeakRef.
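Of those three, WeakRef is the most direct primitive: it wraps a single object and hands it back, if the object is still alive, through deref(). Here is a minimal sketch (the variable names are ours, and the exact timing of collection is engine-specific):

```javascript
// A minimal WeakRef sketch.
let config = { theme: "dark" };
const ref = new WeakRef(config);

// While a strong reference (config) exists, deref() returns the object.
console.log(ref.deref().theme); // "dark"

// Dropping the strong reference makes the object eligible for collection
// at some engine-chosen time; afterwards deref() returns undefined, so
// callers must always check the result.
config = null;
const target = ref.deref();
console.log(target ? target.theme : "already collected");
```

Because collection timing is nondeterministic, code using deref() must always handle both the “still alive” and “already collected” cases.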

Comparing Set and WeakSet

A Set object is a collection of unique values, each occurring only once. Like an array, a Set has no key-value pairs. We can iterate through a Set with for...of and with .forEach.

Let’s illustrate this with the following snippets:

let setArray = new Set(["Joseph", "Frank", "John", "Davies"]);

for (let name of setArray) {
  console.log(name);
}
// Joseph Frank John Davies

We can use the .forEach iterator as well:

setArray.forEach((name, nameAgain, set) => {
  console.log(name);
});

A WeakSet is a collection of unique objects. As the name implies, WeakSets use weak references. The following are properties of WeakSet():

  • It may only contain objects.
  • Objects within the set are kept only while they are reachable from somewhere else.
  • It cannot be looped through.
  • Like Set(), WeakSet() has the methods add, has, and delete.

The code below illustrates how to use WeakSet() and some of the methods available:

const human = new WeakSet();

// Create objects, and assign them to their respective variables.
let paul = { name: "Paul" };
let mary = { name: "Mary" };

// Add the human with the name paul to the classroom.
const classroom = human.add(paul);

console.log(classroom.has(paul)); // true

paul = null;

// The classroom will be cleaned automatically of the human paul.
console.log(classroom.has(paul)); // false

First, we create an instance of WeakSet(), then two objects assigned to their respective variables. We add paul to the WeakSet(); because the add method returns the set itself, classroom points to the same WeakSet. After we set the paul reference to null, the final check returns false: the WeakSet() is cleaned automatically, so it doesn’t prevent garbage collection.

Comparing Map and WeakMap

As we know from the section on garbage collection above, the JavaScript engine keeps a value in memory as long as it is reachable. Let’s illustrate this with some snippets:

let smashing = { name: "magazine" };
// The object can be accessed from the reference.

// Overwrite the reference.
smashing = null;
// The object can no longer be accessed.

Properties of a data structure are considered reachable while the data structure is in memory, and they are usually kept in memory. If we store an object in an array, then as long as the array is in memory, the object can still be accessed even if it has no other references.

let smashing = { name: "magazine" };
let arr = [smashing];

// Overwrite the reference.
smashing = null;

console.log(arr[0]); // { name: "magazine" }

We’re still able to access the object even though the reference has been overwritten, because the object was saved in the array; hence, it stays in memory for as long as the array does, and it was not garbage-collected. Just as we used an array in the example above, we can use a map too. While the map still exists, the values stored in it won’t be garbage-collected.

let map = new Map();
let smashing = { name: "magazine" };

map.set(smashing, "blog");

// Overwrite the reference.
smashing = null;

// The object is still reachable as a key of the map:
console.log(map.keys().next().value); // { name: "magazine" }

Like an object, maps can hold key-value pairs, and we can access the value through the key. But with maps, we must use the .get() method to access the values.

According to Mozilla Developer Network, the Map object holds key-value pairs and remembers the original insertion order of the keys. Any value (both objects and primitive values) may be used as either key or value.
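To make that concrete, here is a small self-contained sketch (the keys and values are invented for illustration):

```javascript
// Any value can be a Map key, insertion order is preserved,
// and values are read back with .get().
const map = new Map();
const key = { id: 1 };

map.set("first", "a"); // a string key
map.set(key, "b");     // an object key

console.log(map.get("first")); // "a"
console.log(map.get(key));     // "b"
console.log(map.size);         // 2
```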

Unlike a map, WeakMap holds a weak reference; hence, it doesn’t prevent garbage collection from removing values that it references if those values are not strongly referenced elsewhere. Apart from this, WeakMap is the same as map. WeakMaps are not enumerable due to weak references.

With WeakMap, the keys must be objects, while the values may be of any arbitrary type.

The snippets below illustrate how WeakMap works and the methods in it:

// Create a WeakMap.
let weakMap = new WeakMap();
let weakMap2 = new WeakMap();

// Create an object.
let ob = {};

// Use the set method.
weakMap.set(ob, "Done");

// To get values, use the get method.
weakMap.get(ob); // "Done"

// You can set the value to be an object or even a function.
weakMap.set(ob, ob);

// You can set the value to undefined.
weakMap.set(ob, undefined);

// A WeakMap can also be the value, and another WeakMap the key.
weakMap.set(weakMap2, weakMap);

// Use the has and delete methods.
weakMap.has(ob); // true
weakMap.delete(ob);
weakMap.has(ob); // false

One major side effect of using objects as keys in a WeakMap is that, once no other references to them remain, they will be automatically removed from memory during garbage collection.

Areas of Application of WeakMap

WeakMap can be used in two areas of web development: caching and additional data storage.


Caching

This is a web technique that involves saving (i.e., storing) a copy of a given resource and serving it back when requested. The result of a function can be cached so that whenever the function is called again with the same input, the cached result can be reused.

Let’s see this in action. Create a file, name it cachedResult.js, and write the following in it:

let cachedResult = new WeakMap();

// A function that stores a result.
function keep(obj) {
  if (!cachedResult.has(obj)) {
    let result = obj;
    cachedResult.set(obj, result);
  }
  return cachedResult.get(obj);
}

let obj = { name: "Frank" };
let resultSaved = keep(obj);

obj = null;

// console.log(cachedResult.size); // Possible with Map, not with WeakMap.

If we had used Map() instead of WeakMap() in the code above, and there were multiple invocations on the function keep(), then it would only calculate the result the first time it was called, and it would retrieve it from cachedResult the other times. The side effect is that we’ll need to clean cachedResult whenever the object is not needed. With WeakMap(), the cached result will be automatically removed from memory as soon as the object is garbage-collected. Caching is a great means of improving software performance — it could save the costs of database usage, third-party API calls, and server-to-server requests. With caching, a copy of the result from a request is saved locally.
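As a slightly more realistic variation on the keep() example above, here is a hedged sketch in which the cached value is actually computed from the key object. The function and its “expensive” work are invented for illustration; the point is that each cache entry lives and dies with its key.

```javascript
const cache = new WeakMap();

function expensiveSummary(obj) {
  if (!cache.has(obj)) {
    // Pretend this serialization is costly.
    cache.set(obj, JSON.stringify(obj).length);
  }
  return cache.get(obj);
}

let user = { name: "Frank", role: "admin" };

console.log(expensiveSummary(user)); // computed on the first call
console.log(expensiveSummary(user)); // served from the WeakMap cache

// Once `user` becomes unreachable, its cache entry can be reclaimed too,
// with no manual cleanup.
user = null;
```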

Additional Data

Another important use of WeakMap() is additional data storage. Imagine we are building an e-commerce platform, and we have a program that counts visitors, and we want to be able to reduce the count when visitors leave. This task would be very demanding with Map, but quite easy to implement with WeakMap():

let visitorCount = new WeakMap();

function countCustomer(customer) {
  let count = visitorCount.get(customer) || 0;
  visitorCount.set(customer, count + 1);
}

Let’s create client code for this:

let person = { name: "Frank" };

// Take count of the person's visit.
countCustomer(person);

// The person leaves.
person = null;

With Map(), we will have to clean visitorCount whenever a customer leaves; otherwise, it will grow in memory indefinitely, taking up space. But with WeakMap(), we do not need to clean visitorCount; as soon as a person (object) becomes unreachable, it will be garbage-collected automatically.


Conclusion

In this article, we learned about weak reference, strong reference, and the concept of reachability, and we tried to connect them to memory management as best we could. I hope you found this article valuable. Feel free to drop a comment.

Developing An Award-Winning Onboarding Process (Case Study)

The notion of onboarding is all about helping users quickly and easily find value in your offering. Speed and ease of use are equally important because users might lose interest if going through onboarding takes more time or is more complicated than they expected. Speed and ease of use are also relative to a person’s point of view: a salesperson can have vastly different expectations for onboarding than a developer.

A well-constructed onboarding process boosts engagement, improves product adoption, increases conversion rates, and educates users about a product. Optimizing the onboarding experience is a journey. You should have a plan but be agile, utilizing processes and tools to garner feedback from target users in a bid to constantly improve.

In this article, we will walk you through how we developed the onboarding processes for platformOS from the very beginning. You will be able to follow how we carried out user experience research, how our onboarding has changed over time, what assumptions we made, and how we adjusted them. We will talk about all the tools we used as examples, but the same processes can be implemented with a wide variety of other tools. You will get practical examples and a complete overview of how we built our onboarding, with insights into UX research and the specifics of working with different audience segments.

Our audience has always combined technical people with various levels of programming skills and non-technical people, such as Project Owners, Business Analysts, and Project Managers, who come to our docs to evaluate whether platformOS would be a good fit for their projects. Because our main target audience is divided into different segments, you will also get a glimpse of the processes we developed for our documentation, developer education, and developer relations.

Challenge: Onboarding For Different Target Audiences

platformOS is a model-based application development platform aimed at front-end developers and site builders, automating infrastructure provisioning and DevOps.

DevOps is a combination of development methodologies, practices, and tools that enable teams to evolve and improve products at a faster pace to better serve their customers and compete more effectively in the market. Under a DevOps model, development and operations teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations.

Our main target audience is developers, and the foundation for their onboarding, education, and support is our developer portal — but our onboarding has to cater to other target audience segments as well.

Defining Our Target Audience Segments

We defined our target audience during the discovery phase of the Design Thinking process that we used to plan our developer portal. Since then, we have frequently revalidated the results to see if we are on the right track because we want to be sure that we understand the people who will be using our product, and what motivates them. We also know that in the lifecycle of a product this audience can change as a result of product positioning, and how well we can address their needs.

Our target audience currently has four segments:

  • Experienced Developers,
  • Junior Developers,
  • Agency Owners (Sales/Marketing),
  • PMs and Business Analysts.

User Base Shifts

We created the first target audience map when we started planning our developer portal. In the discovery phase, we mapped out four proto-personas that covered the following segments: Experienced Developers, Junior Developers, Site Builders, and Marketplace Owners.

We revalidated these results a year later, and we realized that our audience had shifted a bit.

  • The Experienced Developers and the Junior Developers stayed as the main target audiences. However, we collected new knowledge related to the needs of the junior devs. They needed more detail to be able to understand and start working with the product. This new information helped us specify their user journey.
  • At this point, the Site Builders were the smallest group. We identified that we needed to address the needs of the developer groups first, creating a strong foundation to support site builders on the platform.
  • The non-technical segment shifted along the way. The Marketplace Owners segment was divided into two separate audiences: the Agency Owners, who have a sales and marketing background, and the Business Analysts, who have an enterprise background in business management or transformation — a new audience who started to show interest in our product.

Along the way, we were able to specify the needs of these audiences in more detail. These details helped with the prioritization of the onboarding tasks and kept our focus on the needs of the audience.

Defining Entry Points For Target Audience Segments

Getting to know the needs of the target audience segments provided guidance for identifying the entry points to the product.

  • The Agency Owners’ key goal is to work on multiple web projects that they host and manage on the platform. They won’t work on the platform themselves, but they would like to follow the status and progress of their projects without worrying about DevOps. They need to see the business perspective and the security, and to know that they are part of a reliable ecosystem with a helpful community around it, without diving deep into the technical part of the product.
  • The Business Analysts’ goal is to identify solution providers for their specific business problems. They need to find a long-term solution that fits with their use case, is scalable, and gives them the possibility for easy evaluation that shows the key business values in action.
  • The Junior Developers’ goal is to learn the basics without much hassle, under the guidance of experienced community members. They need clear technical communication on how to set up a dev environment and how to troubleshoot common errors.
  • The Experienced Developers’ goal is to find a solution that is reliable and flexible enough for all their project needs and at the same time provides good performance. They need to be able to evaluate quickly if it’s a good fit, then see how their project could work on the platform. They also need to see that the platform has a future with a solid community behind it.

All segments needed actionable onboarding in which they could interact with the product (and engage with the community) based on their level of technical knowledge.

  • In the non-technical journey, users can take the 1-click route, which takes them from registering on the Partner Portal through creating a demo site and installing the blog module by clicking through a setup wizard.
  • In the semi-technical journey, users can create a sandbox in which they can experiment by cloning a demo site from our GitHub repository, and they also have the option to go through our “Hello, World!” guide.
  • In the technical journey, users can follow a more complex tutorial that walks them through the steps of creating an app on platformOS from setting up their development environment to deploying and testing their finished app. It explains basic concepts, the main building blocks, and the logic behind platformOS, while also giving some recommendations on the workflow.

How We Approached The Challenge: Methods And Tools

We followed various methods to tackle different aspects of the main challenge. We selected a design process to follow, used many different user research methods to collect insights and feedback from our users, chose a framework for our editorial workflow and technical implementation that could work well for our Agile, iterative process and our target audience, and took an approach to content production that allowed community members to contribute early on.

Design Thinking

Because of the strategic role our developer portal plays in the adoption and use of our product, we wanted to use a creative design process that solves traditional business problems with an open mindset.

Our goal was to:

  • help our community to be able to use our documentation site for their needs as early as possible;
  • measure user needs and iterate the product based on the feedback;
  • keep the long-term user and business goals in mind and take a step closer with each iteration.

We found the Design Thinking framework a perfect fit because it is a user-centric approach that focuses on problem-solving while fostering innovation.

We followed the stages of the Design Thinking process:

  • Empathize
    In the beginning, we explored our audience, our documentation needs, and existing and missing content through in-depth interviews and workshops.
  • Define
    Then, we defined personas and our Content Inventory.
  • Ideate
    We shared our ideas for content and features through a Card Sorting exercise.
  • Prototype
    Based on our findings, we created a sitemap and prioritized content needs, and created layouts and wireframes. Content production started based on the results of our discovery phase.
  • Test
    We followed an iterative, Docs as Code approach: at each stage, we work with quick feedback rounds, deploy often, and improve features and content based on feedback from real users.

User Research

In the life of a product, each development stage has a fitting UX research method that we can use, depending on the business plans, time constraints, stage of product/feature, and the current concerns.

Over the last three years, we used the following methods:

  • Interviews
    We met with users, salespeople, and support staff to discuss in depth what each participant had experienced around various topics.
  • Remote Usability Testing
    We asked potential or current users of the product to complete a set of tasks during this process, and we observed their behavior to define the usability of the product. We used two types of remote usability testing:
    • Moderated: We conducted the research remotely via screen-sharing software, and the participants joined in from their usual work environment. This approach is advantageous when analyzing complex tasks — where real-time interaction and questioning with participants are essential.
    • Unmoderated: We sent tasks for users to complete in their own time. As moderators are not present, we measured less complex tasks and focused on the overall level of satisfaction they experienced when interfacing with the product.
  • Card Sorting
    A quantitative or qualitative method where we ask users to organize items into groups and assign categories to each group. This process makes it possible to reflect the users’ mental models in the information architecture.
  • Tree tests
    We used tree tests to validate the logic of the information architecture. We gave users a task to find certain elements in the navigation structure and asked them to talk through where they would go next to accomplish the task.
  • Surveys, Questionnaires
    We used questionnaires and surveys to gather a large amount of information about a topic. This quantitative data can help us have a better understanding of specific topics that we can further research to understand what motivates users.
  • Analytics review
    We used site analytics to gather quantitative data about usage patterns and identify possible breaks in the flow. Based on the data, we either fixed the problem or, if needed, tested further with usability research.

Docs As Code And CI/CD

We engaged our users in an Agile and iterative process right from the initial discovery phase. This ensured that we were able to test and validate all of our assumptions and quickly make modifications if needed. As our internal team members and our community participants are distributed, we needed a workflow that made it possible to collaborate on changes, large or small, remotely. Consequently, we needed a robust approach to version control that accommodates authors, reviewers, and editors all working on content concurrently. As we wanted to encourage developers to contribute, we needed a framework that they’re familiar with. We also wanted to make our documentation open-source so that anyone could duplicate and reuse it for their own projects. Based on these requirements, we decided to follow the Docs as Code approach.

Documentation as Code or Docs as Code refers to a philosophy of writing documentation with the same tools as software coding. This means following the same workflows as development teams, including being integrated into the product team. It enables a culture where writers and developers both feel they have ownership of the documentation and work together to aim for the best possible outcome. In our case, we didn’t only have writers and developers working on our onboarding but also UX researchers, account and project managers, and of course, a range of users in varying roles.

Our documentation is in a separate repository on GitHub. We have a central branch, and we work locally in a dedicated branch, then we send pull requests for review to be merged into the main branch. To preview docs, we use our own staging site which is an exact copy of the live documentation site.

Once we accept changes, we take steps to push them live almost immediately. To maintain the integrity of the site during this process, we follow the practice of continuous integration and continuous deployment (CI/CD). We run test scripts automatically and deploy the codebase to staging. If a test fails, an error report is generated. Alternatively, if everything goes well, our CI/CD of choice — GitHub Actions — deploys the codebase to production and sends us a notification. We release updates continuously, at times merging multiple changes in a single day, at other times only once or twice a week.
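
As an illustration of the kind of automated check such a pipeline might run before deploying, here is a minimal sketch in Python. The rule it enforces (every Markdown topic must start with a front-matter block or a level-1 heading) and the directory layout are assumptions made purely for the example, not our actual test scripts:

```python
import re
from pathlib import Path

# Hypothetical docs-CI sanity check: every Markdown topic must begin
# with a front-matter block ("---") or a level-1 ATX heading ("# ...").
# The rule and layout are illustrative assumptions only.

HEADING = re.compile(r"^# \S")

def check_topic(path: Path) -> list[str]:
    """Return a list of problems found in one Markdown topic file."""
    problems = []
    text = path.read_text(encoding="utf-8")
    first_line = text.lstrip().splitlines()[0] if text.strip() else ""
    if not (first_line.startswith("---") or HEADING.match(first_line)):
        problems.append(f"{path.name}: missing front matter or level-1 heading")
    return problems

def check_docs(root: Path) -> list[str]:
    """Collect problems across all Markdown files under a docs root."""
    problems = []
    for md in sorted(root.rglob("*.md")):
        problems.extend(check_topic(md))
    return problems
```

A CI job would run such a script on every pull request and fail the build (producing the error report) when the returned list is non-empty.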

Editorial Workflow

Docs as Code provides the foundation for our processes, but for the various users to work efficiently together, we needed to define a clear editorial workflow that worked for all participants (internal and external writers, developers, contributors, and so on) and for all stages of the process (writing, reviewing, editing), yet was simple enough to involve new contributors. Following Docs as Code, each stage of our workflow is in git, including project management (contributors can also add tickets to report issues or requests).

These are the steps of our editorial workflow:

  1. Write new content in Markdown using the templates. You can use any editor that can produce GitHub Flavored Markdown.
  2. Submit the new topic as a pull request on GitHub.
  3. Review. We have a peer-review system in place for code and docs alike. Topics are reviewed by both technical reviewers (developers) and writers.
  4. Edit as needed. Repeat steps 3-4 until approved.
  5. Merge approved pull request.
  6. Deploy to staging, then to production.

Our editorial workflow ensures that contribution works the same way for everyone, and we support our contributors with guidelines and ready-to-use templates.

Content Production And Contribution

When we started developing our onboarding and documentation, we followed the Content First approach. We planned to develop some initial content that we could work with, but even before that, we decided what types of content we would need and outlined the structure of each content type. These outlines became templates that ensure consistency and encourage contribution.

We were inspired by topic-based authoring and DITA, in the sense that we decided to have three main content types for our documentation: tutorials, which describe how to accomplish a task; concepts, which provide background information and context; and references, like our API Reference. Our onboarding consists of tutorials that link to concepts and references when needed.

DITA, short for Darwin Information Typing Architecture, is an XML standard, an architectural approach, and a topic-based writing methodology where content is authored in topics rather than in larger documents or publications. A DITA topic must make sense in its own right.

Involving our users from the beginning ensured that we could test and validate all of our assumptions, and quickly modify anything if needed. This proved to be a time and cost-efficient approach: although we edit and rewrite our content, and change things on our documentation site all the time, we don’t run the risk of creating large chunks of work that have to be thrown away because they don’t correspond to the needs of our users.

Constant collaboration also builds trust: as our process is completely transparent, our community continuously knows what we’re working on and how our docs evolve, and community members can be sure that their opinions are heard and acted upon.

Involving the community from an early stage means that our users saw lots of stuff that was partially done, missing, or ended up totally rewritten. So, for all of this to work, our users had to be mature enough to give feedback on half-done content, and we had to be level-headed enough to utilize sometimes passionate criticism.

Encouraging Contribution

We wanted to make it very easy for all segments of our target audience to get involved, so we offer several ways to contribute, taking into consideration the time contributors have available and their skill level. We describe ways for our community members to get involved in our Contributor Guide. For quick edits, like fixing typos or adding links, contributors can edit the content easily in the GitHub UI. For heavy editing, adding new content, or for developers who prefer to use git, we provide a complete Docs as Code workflow. This approach proved to be extremely valuable for our onboarding: we got direct feedback on where users struggled with a step or had too little or too much information, and we could immediately make adjustments and verify that we had fixed the issue.

To help contributors write larger chunks of text or complete topics, we provide guidelines and templates to start from:

  • Style Guide
    Our style guide contains guidelines for writing technical content (e.g. language, tone, etc.) and each content type in our documentation (e.g. tutorials, concept topics, etc.).

  • Templates
    Our site uses Liquid pages, but to make editing easier for contributors, we write documentation content in Markdown and use a Markdown converter to turn it into Liquid. Our templates include all non-changeable content and placeholders with explanations for the parts that are editable. Placeholders provide information on the recommended format (e.g. title) and any requirements or limitations (e.g. maximum number of characters).
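
As a toy sketch of how such placeholders can be checked automatically, assuming (purely for illustration) that editable parts of a template are marked with HTML comments like `<!-- PLACEHOLDER: title, max 60 chars -->`; the actual platformOS templates may use a different convention:

```python
import re

# Toy check for a contributor-template workflow. The placeholder
# syntax (HTML comments starting with "PLACEHOLDER:") is an assumed
# convention for this example, not the real template format.

PLACEHOLDER = re.compile(r"<!--\s*PLACEHOLDER:([^>]*)-->")

def unfilled_placeholders(markdown: str) -> list[str]:
    """Return descriptions of placeholders the contributor has not replaced."""
    return [m.group(1).strip() for m in PLACEHOLDER.finditer(markdown)]
```

A reviewer (or a CI step) could run this over a submitted topic and flag any placeholder text the contributor forgot to fill in.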

We thank all of our contributors by giving recognition to them on our Contributors page as well as on our GitHub repository’s README page.


Communication Channels

Our team and community members are scattered across different time zones. Similarly to how we communicate among team members, we use mostly asynchronous and sometimes real-time communication tools to communicate with our community. We even make real-time communication tools, like video conferencing, somewhat asynchronous: video conferences and webinars are recorded, and community members can discuss them afterward on various channels.

  • pOS Community site
    One of our main communication channels is our community site, where you can ask, answer, upvote, and downvote questions, and get to know other members of the platformOS Community. More features coming soon!
  • Slack support
    One of our main communication channels is dedicated Slack channels, where community members ask questions, share ideas, and get to know our team members and each other. Based on their feedback, community members have confirmed how helpful it is to be able to communicate directly with us and each other: they can share what they’ve learned, plan their module development in sync with our roadmap and each other’s projects, and allocate their resources according to what’s going on in the business and the wider community. This communication seeds the documentation site with the most sought-after topics.
  • Video conference
    We regularly have video conferences over Zoom called Town Halls, where community members and the platformOS team share news, demo features and modules, and have the opportunity to engage in real-time, face-to-face conversation. Our team and community members are distributed over different continents, so we try to accommodate participants in different time zones by rotating the time of this event so that everyone has the chance to participate. We also share the recording of each session.
  • User experience research
    Besides getting constant feedback from the community through the channels described above, we plan regular checkpoints in our process to facilitate testing and course correction. During development, we tie these checkpoints to development phases. At the end of each larger release, we conduct user interviews and compile and share a short survey for community members to fill out. This helps us clarify the roadmap for the next development phase.

We make sure to keep community members informed about what’s happening through different channels:

  • Status reports
    We regularly share status reports on our blog to keep our community updated on what we’ve achieved, what we are working on, and what we are planning for the near future. Our status reports also include calls for contribution and research participation and the results and analysis of UX research. Subscribers can also choose to receive the status reports via email newsletter.
  • Release notes
    We share updates regarding new features, improvements, and fixes in our release notes.
  • Blog
    We regularly share articles about best practices and general news on our blog.

Accessibility And Inclusiveness

We address accessibility right from the design phase, where we use Figma’s Able accessibility plugin. We regularly test for accessibility with various tools and ensure that the site complies with all accessibility requirements.

From a technical writing perspective, we support Accessibility and Usability by providing well-structured, clear, concise, and easy-to-understand copy. All of our documentation topics follow a predefined structure (predefined headings, steps, sections, link collections, and so on) applicable to that topic type (tasks, concepts, references), inspired by the principles of topic-based authoring.

Semantic HTML is important for Accessibility, and we make sure not to style text in any way other than through Markdown, which is then translated into HTML. This way, screen readers can properly navigate through the content, and it also helps overall consistency when, for example, we want to do a design update.
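
As a toy illustration of this Markdown-to-semantic-HTML mapping, the sketch below converts only ATX headings into their corresponding heading elements; our site, of course, relies on a full Markdown converter rather than anything this naive:

```python
import re

# Toy illustration only: an ATX heading like "## Setup" becomes
# "<h2>Setup</h2>", which screen readers can navigate by heading level,
# whereas visually bolded text carries no such structure.

ATX = re.compile(r"^(#{1,6}) (.+?)\s*$")

def heading_to_html(line: str) -> str:
    """Convert a single ATX heading line to an HTML heading element."""
    m = ATX.match(line)
    if not m:
        return line  # not a heading: leave unchanged
    level = len(m.group(1))
    return f"<h{level}>{m.group(2)}</h{level}>"
```
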

We also review all content to ensure accessible and inclusive language as specified in our style guide.

How We Developed Our Onboarding: Rounds And Lessons Learned

Developing Our Onboarding Using Continuous Iteration Rounds

At the beginning of the project, we started with a focused effort around discovery to identify the main business goals and user needs. As a result of this research, we were able to articulate the big picture. After we had all the user journeys and a sitemap for the big picture plan, we were able to break it down to identify the first iteration that would become the first working MVP version of the site.

Moving forward, we continue to follow an iterative approach, moving fast with an agile mindset. The steps: gather user feedback, identify areas of improvement and possible new directions, define the solution based on resources, business goals, and user needs, and implement it. This cycle repeats indefinitely. So, we have an overarching plan outlined for our documentation that we keep in mind, but we always focus on the next couple of action steps we’d like to take.

We can highlight five distinctive rounds that had a great impact on the development of our developer portal.

  1. For our onboarding process, we started with exploring the requirements following the Design Thinking approach. Through a Card Sorting session, we explored the areas of interest for each target audience and that helped us define the topics that concern them the most. This worked as a persona-based content prioritization for the documentation site.
  2. We wanted to guide our users with actionable items that they can try out on our site as a next step. At this point, we were already aware that our target audience had shifted. The interviews and the support feedback helped us understand their needs, which pointed in two main directions: we needed an easy journey for non-technical users and another one for technical users who like to understand the logic of the platform. In this stage, we planned, tested, and developed the first version of the 1-click journey and the sandbox.
  3. We already had experienced platform users who we wanted to see in action. Using remote field studies, we discovered how they use the tools, the documentation site, and the partner portal we provide. At the same time, we started to conduct continuous onboarding interviews with partners who joined the platform. The two research directions helped us realize how users with varying degrees of experience interpret the platform.
  4. By this point, our content grew a lot on the developer portal, and we wanted to discover if we needed a structural and content reorganization based on the user research.
  5. In this latest round, we wanted to dedicate some time to fine-tuning and adjustments, and to double down on the developer portal’s accessibility and inclusiveness.

Round 1: Identifying The Target Audience Segments, Defining Proto-Personas, Base Discovery

With the Design Thinking workshops, we first focused on understanding our users. Based on the user research results, we defined the proto-personas and created a detailed description of each to show their needs and expectations and help us identify who we were designing for. It provided a good foundation for guiding the ideation process and prioritizing features based on how well they address the needs of one or more personas.

On our documentation site, we are working with a large amount of data that we need to present clearly to all users. To define a Content Inventory:

  • we created a list of our proto-personas’ needs based on the problems they needed to solve with the platform;
  • we created a detailed list of content from our previous documentation site and identified missing, reusable, and non-reusable content for our future site;
  • we analyzed the competitor sites to create a list of inspirations.

We ideated with the workshop participants using a Card Sorting exercise. The task was to map out the Content Inventory elements and define connections between them. The results showed us the connected areas and each proto-persona’s preferences through color coding.

Based on the Content Inventory and the results of the Card Sorting sessions, we outlined the Information Architecture by creating a sitemap and the navigation for our future site. This plan included all the needs that were discovered and offered a roadmap to keep track of site improvements, content needs, and project phases.

During the Card Sorting sessions, we explored areas of interest for each user persona and, on the sitemaps, we highlighted these as user journeys. We also validated the importance of these areas to assign higher priorities to the ones that need more attention. This process kept our focus on the most important needs of the personas.

The most important sections for the four segments:

  • Experienced Developers: Quickstart guide, How to guide, API docs;
  • Junior Developers: Quickstart guide, Tutorials, Conceptual documentation;
  • Site Builders: Quickstart guide, Tutorials, FAQ, Forum;
  • Marketplace Owners: About platformOS, Blog.

This concluded our Information Architecture phase. We have discovered and organized all the information we needed to continue to the next phase, where we started creating templates for content types, building the wireframes for each page, producing content, and making Design decisions.

Round 2: Onboarding Strategy And Testing Of The Onboarding Process


Before we jumped into planning an onboarding strategy, we revalidated the proto-personas. At that point, we discovered that our audience had shifted to Experienced Developers, Junior Developers, Agency Owners (Sales/Marketing), and PMs and Business Analysts, and we realized that we needed to cover a broader spectrum of needs than previously identified.

We interviewed 20 platformOS users. We identified how long they had been using the system, how they use the platform, what the key ‘aha’ moments were, what struggles they faced, and how they solved them. Their needs pointed in two main directions: we needed an easy journey for non-technical users and another one for technical users, covering those with less experience as well as more capable developers who wished to understand the deeper logic and nuances of platformOS.

Our main goals with the new onboarding strategy were:

  • to connect our systems (developer portal — partner portal — platform), so our users can go through their discovery experience in one flow during their first visit;
  • to provide an actionable stepped process that the users can walk through;
  • to allow users to quickly identify the most fitting journey.

Usability Test

We conducted remote Usability Test sessions in three rounds to validate the platformOS onboarding process.

The onboarding section connects the Documentation site and the Partner Portal, where users can select one of three journeys based on their programming experience. The goal was to learn how users with different levels of technical knowledge reacted to the three journeys. Were they able to quickly identify what was included in each journey? If so, how did they engage from that point forward? Did they follow the pathway most appropriate for them?

During the Usability study, we asked users to do several short tasks using a prototype of the planned features built with Figma. We used both moderated and unmoderated remote usability testing techniques and conducted extra tests with platformOS team members to verify the represented business, technical, and content goals.

We conducted six moderated remote Usability Tests in two rounds and set up three unmoderated remote Usability Tests. These tests were separated into three rounds, and after each round, we updated the prototype with the test results.

Based on the test results, we decided that instead of showing three options to users, we would show the two quickest ones: 1-click install and building a basic ‘Hello world’ app. This helps them quickly decide which is the best fit for them, and at the same time, they can immediately try out the platformOS basics. Then, if they want to, they can check out our third journey — the Get Started guide that explains how to build a to-do app.

We redesigned the Instance welcome screen to help users identify the next steps. Based on the results, we had to optimize the UI copy to make it comfortable for non-technical users as well.

As the flow connects two sites and shows the product, the main goal was to reassure users that they were on the right track and still on their selected journey. We achieved this by showing the steps of the journey upfront, using consistent wording, and allowing users to step back and forth.

Round 3: Remote Field Study And Onboarding Interviews

In this round, the goal was to examine the overall journey of experienced and prospective pOS users, focusing on both the successes and the challenges they face. We combined interviews with a remote field study to get a better understanding of how they work and what processes they use.

We focused on four main topics:

  1. Development with pOS (workflows, preferences on version control, tools),
  2. Community and collaboration (support, discussions),
  3. Developer Portal (overall experience, obstacles, suggestions for improvements),
  4. Partner Portal (usage, dashboard preferences).

Key insights from the user research results:

  • Development with platformOS is flexible and nearly limitless, which is a great strength of the system, but it also means that learning the workings of the platform, especially at the very beginning, takes more effort and patience from developers.
    Solution: Templates might provide aid during the learning process.

  • As platformOS is new in the market, there’s not much information on Google or StackOverflow yet. On the positive side, the pOS team always provides great support via Slack and introduces new solutions in Town Hall meetings, status reports, and release notes.
    Solution: To further strengthen the community, a separate Community Site can be an efficient and quick platform for peer-to-peer support, with a search function and the ability for users to follow useful topics.

  • Related to the Developer Portal, we saw that users easily get to the documentation and find solutions for most of their use cases. However, the search results were not precise enough in some cases, and the naming of the tutorials caused uncertainty about where to find items.
    Solution: Run a content reorganization session for the tutorials and fix the search function.

  • We discovered that the Partner Portal was used mostly at the beginning of projects by experienced devs. Junior developers preferred to find helpful instructions on the instances page to support their work on new instances. Agency Owners and Business Analysts preferred to use the site to see payment-related information and analytics of instance use. We saw that they generally had problems handling the permissions related to the instances and identifying the hierarchy between their instances.
    Solution: A Partner Portal design update with a new information structure for instances and permissions.

Round 4: Structural And Content Reorganization, User Testing, Implementation

Structural And Content Reorganization

In this round, we renamed the Tutorials section to Developer Guide. This was in line with our plan to extend our tutorials in this section with more concept topics, as requested. We planned to have a comprehensive Get Started section for beginners with the “Hello, World!” tutorial and the Build a To-do List App series, and the Developer Guide for everyone working with platformOS — from users who have just finished the Get Started guides to experienced platformOS developers. This separated and highlighted the onboarding area of the site, and this is when the current structure of our Get Started section came to be: a separate tutorial for when you start your journey with platformOS, which you can use as a first step before moving on to the more advanced onboarding tutorials.

Card Sorting

At this point, we had 136+ topics in our Tutorials section organized into 27 groups, and we knew that we wanted to add more. Based on user feedback, we could improve the usability of the Tutorials section by organizing the topics better. Our goal was to identify a structure that best fits users’ expectations. We used a Card Sorting exercise to reach our goal.

We analyzed the inputs and, based on the results, concluded that seven categories could cover our 27 groups: Data management, Schema, Templates, Modules and Module examples, Partner Portal, Third-Party Systems, and Best Practices. We used the similarity matrix and the suggested category names to identify which topics were connected and what users wanted to call them.

With this research, we managed to restructure the Tutorials section to align with the mental models of our users.

Round 5: Fine-Tuning, Content Production

In the latest round, we added the option to start onboarding from a template. Based on our discovery research, the marketplace template is a good option for site builders who would like to have a marketplace up and running fast and don’t want to explore development in detail.

The pOS marketplace template is a fully functional marketplace built on platformOS with features like user onboarding, ad listings and ads, purchase and checkout process, and online payment. Following the tutorial we added, users can deploy this code within minutes to have a list of working features and start customizing the back- and front-end code.

We also keep fine-tuning our content for clarity, brevity, readability, accessibility, and inclusive language. We have regular accessibility reviews where we pay attention to aspects such as terminology, technical language, gender-neutral pronouns, and informative link text, while avoiding ableist language, metaphors, and colloquialisms. We summarized our experience with fine-tuning accessibility in the article “Code and Content for Accessibility on the platformOS Developer Portal,” which includes examples of what we changed and how.

Future Plans

The platformOS Developer Portal was very positively received and even won a few peer-reviewed awards. We are honored and grateful that our efforts have yielded such great recognition. We will keep revalidating and improving our onboarding just like we have been doing since the beginning. We are also working on a developer education program for our soon-to-be-launched community site that includes various learning pathways that will try to accommodate users’ different learning styles and also offer ways for them to get more involved with our developer community.


So, after years of working on our onboarding, what are our key takeaways?

  • Don’t feel pressured to get everything right the first time around. Instead, become comfortable with change and consider each adjustment progress.
  • Get to know your target audience and be ready to revalidate and shift target audience segments based on your findings.
  • Get familiar with different user research methods to know when to use which approach. Carry out extensive user research and, in turn, listen to your users. To support feedback, allow users multiple different channels to give you feedback.
  • Choose a flexible workflow, so that the editorial process does not become an obstacle to continuous change. We love Docs as Code.
  • A product is never finished. Shaping and updating an already-shipped flow is perfectly fine.
  • Iteration and prioritization are your best friends when it comes to delivering large amounts of work.

We hope that this case study helps and encourages you as you build an onboarding experience for your product.

7 Reasons Why Marketing Emails Fail

Underperforming marketing emails are often an indicator of overall program deterioration. Reductions in clicks, conversions, and revenue are typically symptoms of a larger problem.

In this post, I’ll address seven causes of poor email performance and how to fix them.

Email Not Reaching Inbox

All email marketing platforms report a deliverability rate — the percentage of sent emails accepted by recipients’ mail servers. Usually this is 98% or more.

However, your email provider does not report how many of those delivered emails ended up in the inbox versus a subfolder, such as spam or junk. Unfortunately, no tool detects that percentage.

Encourage inboxing by:

  • Avoiding spam triggers such as using all caps or excessive exclamation points,
  • Keeping domain and IP address reputation high,
  • Staying off blacklists,
  • Maintaining high subscriber engagement.

Not Optimizing for Gmail

According to Litmus, in April 2022 Gmail was the second most popular global email client (behind Apple), accounting for roughly 30% of the market. In 2013, Gmail added tabs to the recipient’s inbox, leading most marketing emails to be filtered to Promotions.

Gmail recently released a few new features to help marketers stand out in the Promotions tab. You can check how your emails will filter for free using the Litmus Gmail tab tool.


Litmus’s free Gmail tab tool will detect where in Gmail an email will end up.

Marketers can now boost their promotional emails in Gmail by highlighting an offer and offer code, adding a promotions preview image, and defining a logo URL that will appear as a custom icon next to the From line.


Marketers can now boost promotional emails in Gmail by highlighting an offer, offer code, a preview image, and a logo.

Gmail for Developers offers documentation on how to code emails for these features. In addition, Gmail has several email partners that include promotional annotations in their software, including Litmus, Salesforce, Sailthru, Oracle Bronto, and more.
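As a rough sketch of what that markup looks like, a discount annotation is embedded as JSON-LD in the email’s head. The values below are placeholders, and the exact property set is defined in Gmail’s annotation reference, so verify field names there before shipping:

```html
<!-- Placed inside the <head> of the marketing email's HTML. -->
<script type="application/ld+json">
{
  "@context": "http://schema.org/",
  "@type": "DiscountOffer",
  "description": "20% off sitewide",
  "discountCode": "SAVE20",
  "availabilityStarts": "2022-06-01T00:00:00-07:00",
  "availabilityEnds": "2022-06-30T23:59:59-07:00"
}
</script>
```

Gmail reads this structured data and can surface the code and expiration date as a badge in the Promotions tab.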

Gmail also features relevant promotional emails within the primary tab to help add more visibility to your messages.

The Wrong Offer

Offers are tricky. Always test email offers to help determine which works best for your audience. In my experience, performance can vary drastically depending on the product and service.

  • For product sales, usually a gift or a pre-populated cart helps. The latter auto-loads a free item into a recipient’s shopping cart.
  • Free shipping has lost some appeal as most retailers offer it in some capacity.
  • Dollar-off offers tend to perform a bit better than percentage-off.

Make sure to pair the offer with your recipients. For example, a small amount off will not likely appeal to a high-end jewelry buyer.

Outdated Data

Email data can become obsolete quickly. According to Return Path, on average only 56% of subscribers remain on an email list after 12 months! Of those that remain, roughly 47% are “active” — having opened and read at least one email.

While these statistics seem scary, there are several ways to maintain an engaged list.

  • Removing unengaged subscribers.
  • Running email verification on any subscriber who hasn’t been emailed in over 30 days.
  • Encouraging new email subscriptions.
  • Keeping email frequency low for new subscribers to prevent immediate opt-outs.

I addressed email database cleaning tips last year.

Too Many Emails

Even the most loyal customers will eventually unsubscribe if you send them too many emails. Frequency in email marketing is a fine art and requires testing and monitoring. A few unsubscribes may seem inconsequential, but too many will impact performance.

Each subscriber has unique tolerance levels. But no one wants to receive multiple emails a day from a single sender. I recently unsubscribed from a few of my favorite brands that sent upwards of 15 emails a week.

Remember that elevated unsubscribe rates will hurt your reputation score, leading to more emails in junk or spam folders.

In my experience, two to three emails per week are optimal for ecommerce retailers. Again, testing is critical.

Mismatched Content

Irrelevant content drives unsubscribes. Keeping content relevant means understanding your customers — what they have searched for and purchased. Match email content — product recommendations, notifications — to those interests.

Personalization can help keep content relevant. I recently received an email from the Red Cross promoting upcoming blood drives that provided a good, basic example of personalization. The email included blood drive locations near me instead of a generic “find event” button.

Poor Subject, From, Preheader

Always preview the combination of your “Subject” line, “From” line, and preheader, especially on mobile. Keep subject lines short with the preheader as an extension. Do not repeat words.

Zurb offers a free subject-line preview tool.

13 Platforms for Blogging

Writing a blog is one of the best ways to establish expertise within an industry and drive traffic to your website. There are a variety of platforms available to launch and manage a blog at little or no expense.

Here is a list of platforms to launch a blog. It includes content management systems to create a blog and a full-featured website, as well as minimalist blogging tools to publish a clean, modern blog. Nearly all of these applications have free plans.


WordPress

Screenshot of WordPress.org


WordPress is a free and open-source content management system (CMS). WordPress is the internet’s most popular application – 43% of the web uses it. With nearly 60,000 plugins available to extend and customize your site, build a blog with an ecommerce store, forum, gallery, mailing list, analytics, and more. Price: Free.


Medium

Screenshot of Medium home page


Medium is a publishing platform where experts and undiscovered writers publish and share content. Individual writers publish from profile pages. Collaborate with others or post under a brand name. Use Medium’s story submission system and expressive customization options. Add a custom domain name to your space to help visitors find, share, and return to it. Join the Partner Program to earn money by making your stories part of member-only content, or allow free access to anyone. Price: Free. Partner Program is $5 per month.


Ghost

Screenshot of Ghost home page


Ghost was founded in April 2013 after a successful Kickstarter campaign to create a new platform for professional publishing. Ghost comes with modern tools to build a website, publish content, send newsletters, and offer paid subscriptions to members. The platform includes a marketplace for free and premium themes, custom integrations, and help from experts. Use native signup forms that turn anonymous views into logged-in members. Get detailed engagement analytics. Connect your Stripe account and deliver premium content to your audience. Price: Free. Hosting plans start at $25 per month.


LinkedIn

Screenshot of LinkedIn


LinkedIn is the world’s largest professional network. It’s also a great place to generate a blog. Demonstrate your expertise, and develop content to keep your profile fresh. To create a blog on LinkedIn, click the icon “Write an article” on your front page. Add your text and images, then publish and promote. Price: Free. Premium accounts start at $29.99 per month.


Squarespace

Screenshot of Squarespace.com


Squarespace is an all-in-one platform for building a flexible website for a blog and more. Access image-rich, award-winning designer templates and integrations with Getty Images, Unsplash, and Google AMP. Increase traffic to your blog with Squarespace Email Campaigns and connected social media accounts. Enable commenting through Squarespace or Disqus. Price: Plans start at $14 per month.


Wix

Screenshot of Wix.com


Wix is a website builder to quickly launch a site with 500+ customizable website templates to meet your business needs. Choose from 200+ blog templates, or use a blank canvas to create your own. Use advanced SEO tools, set up automatic emails, promote on social media, and invite your followers to become members and start discussions. Monetize with subscriptions, display ads, paid events, and ecommerce features. Price: Free. Premium plans start at $16 per month.


CMS Hub

Screenshot of CMS Hub


CMS Hub is HubSpot’s content platform for launching a blog. Use one of HubSpot’s pre-built website themes with the option for custom development. An SEO recommendations home screen allows you to improve your site and take action, all in one place. Track every visitor to your site and create personalized digital experiences leveraging CRM data. With adaptive testing, choose up to five page variations, and HubSpot will monitor and serve up the best-performing option. Price: Plans start at $23 per month.

Craft CMS

Screenshot of Craft CMS home page

Craft CMS

Craft CMS is a flexible, user-friendly CMS for creating custom digital experiences. Choose from a large variety of built-in and plugin-supplied field types. Manage multiple sites from a single installation. Update content easily with Craft’s built-in management features, including an image editor, collaboration tools, and a localization feature. Easily integrate with popular payment gateways, CRMs, and fulfillment services. Price: Free. Pro is $299 per project.


Weebly

Screenshot of Weebly home page


Weebly is an easy-to-use website builder to create and manage your blog. Create posts with drag and drop, manage comments and schedule future content. Instantly respond to blog comments and form entries, reply to customer inquiries and stay connected to followers from anywhere. Drive traffic with integrated social media marketing, SEO tools, and AdWords credit. Price: Free. Premium plans start at $6 per month.


Write.as

Screenshot of Write.as


Write.as is a modern, simple and clean platform for blogging. The editor only gives you what you need to write and automatically saves while you type. There are no comments, spam, likes, or distractions — just your words in your own digital space. Write.as does not collect personal data, so you can write freely. Publish anonymously or under any name you choose. Price: Free. Pro plans start at $6 per month.


Blogger

Screenshot of Blogger.com


Blogger is Google’s content management system. Choose from a selection of customizable templates with background images, or design something new. Get a blogspot.com domain or buy a custom domain. Connect directly to Google Analytics. Use AdSense to display relevant, targeted ads to get paid. Price: Free.


Tumblr

Screenshot of Tumblr.com


Tumblr is a micro-blogging platform for media content. Tell stories through text, photos, GIFs, videos, live streams, and audio. Tumblr features free custom domains and hundreds of free and premium themes. Price: Free.


Contently

Screenshot of a web page from Contently


Contently is an enterprise content marketing platform. Contently provides expert content strategies to tell you the content topics, formats, channels, and voice and tone your audience craves. It can make intelligent content recommendations, align your teams, and create better content faster. Access its talent network of 160,000+ writers, filmmakers, designers, and editors from Wired, The New York Times, Popular Science, and more. Contently’s high cost puts it out of the reach of most small businesses, but it may be an ideal solution for brands with larger budgets in need of editorial support. Contact for pricing.

Lesser-Known And Underused CSS Features In 2022

After reading Louis Lazaris’ insightful article “Those HTML Attributes You Never Use”, I’ve asked myself (and the community) which properties and selectors are lesser-known or should be used more often. Some answers from the community surprised me, as they’ve included some very useful and often-requested CSS features which were made available in the past year or two.

The following list is created with community requests and my personal picks. So, let’s get started!

all Property

This is a shorthand property which is often used for resetting all properties to their respective initial value by effectively stopping inheritance, or to enforce inheritance for all properties.

  • initial
    Sets all properties to their respective initial values.
  • inherit
    Sets all properties to their inherited values.
  • unset
    Changes all values to their respective default value which is either inherit or initial.
  • revert
    Resulting values depend on the stylesheet origin where this property is located.
  • revert-layer
    Resulting values will match a previous cascade layer or the next matching rule.
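As a quick illustration, here is a minimal sketch of the common reset use case (the class name is invented for the example):

```css
/* Strip every inherited and user-agent style from the element,
   then opt back into the parent's font. Later declarations in
   the same rule override what `all` just reset. */
.plain-button {
  all: unset;
  font: inherit;
  cursor: pointer;
}
```

This is handy for restyling native elements such as buttons from a clean slate without listing every property by hand.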

aspect-ratio for Sizing Control

When aspect-ratio was initially released, I thought I wouldn’t use it outside of image and video elements and a few narrow use cases. I was surprised to find myself using it in a similar way I would use currentColor — to avoid unnecessarily setting multiple properties with the same value.

With aspect-ratio, we can easily control the size of an element. For example, buttons with equal width and height have an aspect ratio of 1. That way, we can easily create buttons that adapt to their content and varying icon sizes while maintaining the required shape.
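For instance, a square icon button can be sized from a single dimension (class name invented for illustration):

```css
/* Width follows the content and padding; the browser derives
   the height from the 1:1 ratio, so the button stays square
   no matter the icon size. */
.icon-button {
  aspect-ratio: 1;
  padding: 0.5rem;
}
```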

font-variant-numeric

By default, many fonts render digits at varying widths, so numbers that update in place (timers, counters, table columns) shift as their characters change. I assumed that this issue couldn’t be fixed, and I moved on. One of the tweets from the community poll suggested that I look into font-variant-numeric: tabular-nums, and I was surprised to find a plethora of options that affect font rendering.

For example, tabular-nums fixed the aforementioned issue by setting an equal width for all numeric characters.
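A minimal sketch (the class name is just for the example):

```css
/* Render all digits at the same width so a ticking timer
   or a column of figures doesn't shift as values change. */
.countdown {
  font-variant-numeric: tabular-nums;
}
```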

Render Performance Optimization

When it comes to rendering performance, it’s very rare to run into these issues when working on regular projects. However, in the case of large DOM trees with several thousands of elements or other similar edge cases, we can run into some performance issues related to CSS and rendering. Luckily, we have a direct way of dealing with these performance issues that cause lag, unresponsiveness to user inputs, low FPS, etc.

This is where the contain property comes in. It tells the browser what won’t change in the render cycle, so the browser can safely skip rendering it. This can have consequences for layout and style, so make sure to test that the property doesn’t introduce any visual bugs.

.container {
  /* Child elements won't display outside of this container,
     so only the contents of this container should be rendered. */
  contain: paint;
}

This property is quite complex, and Rachel Andrew has covered it in great detail in her article. It is somewhat difficult to demonstrate, as it is most useful in very specific edge cases. For example, Johan Isaksson covered one of those in his article, where he noticed a major scroll lag on Google Search Console. It was caused by having over 38,000 elements on the page and was fixed with the contain property!

As you can see, contain relies on the developer knowing exactly which properties won’t change and knowing how to avoid potential regressions. So, it’s a bit difficult to use this property safely.

However, there is an option where we can signal the browser to apply the required contain value automatically. We can use the content-visibility property. With this property, we can defer the rendering of off-screen and below-the-fold content. Some even refer to this as “lazy-rendering”.

Una Kravets and Vladimir Levin covered this property in their travel blog example. They apply the following styles to the below-the-fold blog sections.

.story {
  content-visibility: auto; /* Behaves like overflow: hidden; */
  contain-intrinsic-size: 100px 1000px;
}

With contain-intrinsic-size, we can estimate the size of the section that is going to be rendered. Without this property, the size of the content would be 0, and page dimensions would keep increasing, as content is loaded.

Going back to Una Kravets and Vladimir Levin’s travel blog example, notice how the scrollbar jumps around as you scroll or drag it. This is because of the difference between the placeholder (estimated) size set with contain-intrinsic-size and the actual render size. If we omitted this property, the scroll jumps would be even more jarring.

See the Pen Content-visibility Demo: Base (With Content Visibility) by Vladimir Levin.

Thijs Terluin covers several ways of calculating this value, including PHP and JavaScript. Server-side calculation using PHP is especially impressive, as it can automate the value estimation on a larger set of pages and make it more accurate for a subset of screen sizes.

Keep in mind that these properties should be used to fix issues once they happen, so it’s safe to omit them until you encounter render performance issues.


CSS evolves constantly, with more features being added each year. It’s important to keep up with the latest features and best practices, but also keep an eye out on browser support and use progressive enhancement.

I’m sure there are more CSS properties and selectors that aren’t included here. Feel free to let us know in the comments which properties or selectors are less known or should be used more often, but may be a bit convoluted or there is not enough buzz around them.

Further Reading on Smashing Magazine

Fed-up Toy Company Responds to Knock-offs

Having founded Viahart, a manufacturer of educational toys, in 2010, Molson Hart learned his products were being illegally produced and sold by others. He realized the problem, intellectual property theft, extended far beyond Viahart. And that prompted the launch of his second company.

He told me, “I founded Edison Litigation Financing in 2017 with my brother, a computer science guy. Our company finds businesses experiencing intellectual theft. There are a lot of crooks knocking off products.”

Edison locates potential infringements, contacts the infringed party, and arranges lawsuits for damages. It earns a fee for that service. Viahart sells mainly through Amazon. It recorded nearly $9 million in sales in 2021.

Molson and I recently discussed both companies. The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Tell us about your business.

Molson Hart: I founded Viahart in 2010 as an educational toy brand. For the most part, everything we sell is about inspiring confidence and capability in children. Our primary sales channel is Amazon, although we do have an ecommerce site. We sell a building toy called Brain Flakes, which does things that Lego can’t. It’s for kids between three and 13. We also sell Goodminton rackets and Tiger Tale toys.

I have a second company, Edison LLC, that does ecommerce-focused litigation financing for intellectual property theft.

We’re based in Austin, Texas.

Bandholz: Let’s start with Goodminton. How did you come up with the name?

Hart: When selling online, you’ve got to have a brand. One of the best ways to create a good brand and to take full advantage of word of mouth is to have a catchy name that sticks with you. We were selling a racket game similar to Badminton, and I thought, “Can we make a name that’s funny, memorable, and makes people laugh?” We decided on Goodminton, instead of Badminton.

Bandholz: What are the pain points of selling on Amazon?

Hart: Sometimes Amazon falsely identifies sellers as doing search or review manipulations — a pain point that happens probably quarterly. If we’ve got a hero SKU like Brain Flakes, our bestselling product, Amazon says, “Yeah, we’re going to put that in position 32 when customers search your brand name because you manipulated search results.” There’s no recourse, and there’s no explanation for how you can solve that issue.

If you’re starting, it’s pretty brutal because shipping costs are high. For us, it’s not bad because we’re scaled up. When shipping from Asia to the United States. We’re using 40-foot containers, so we have reasonable shipping prices.

We used to be able to do an email blast for reviews and stuff like that, and now there aren’t early review rewards on Amazon, so you need pay-per-click ads and sponsored advertising. It can be expensive. We don’t do paid social marketing to drive customers to Amazon. We only pay for ads on Amazon. A newly launched product needs to be differentiated and innovative in some way.

Bandholz: How do you handle knock-offs and intellectual theft?

Hart: I used to have an office in China. The first knock-off we ever had was before our trademark was registered. Someone in China used our trademark, Brain Flakes, for an interlocking disc product. I couldn’t do anything because our trademark hadn’t been registered. It turns out it was someone in that office, one of my Chinese employees. I ended up suing her, and then two years later, she did it again, and I had to sue her again. I found out that another employee was doing it, too.

I had three employees in China, and two of them were counterfeiting me and selling our brand’s products on Amazon. The third guy wasn’t. When I finally found that out, he and I weren’t working together. I sent him $5,000 to say, “You’re the man for not doing that when the other two were.” He’s a good guy.

After suing my employees, I was able to stop infringement. I knew that other people had those problems, so I founded Edison Litigation Financing in 2017 with my brother, a computer science guy. Our company finds businesses experiencing intellectual theft similar to what I just described. There are a lot of crooks knocking off products.

We connect brands dealing with infringement with a lawyer, who does the filings. The brand doesn’t have to pay any money. We take a percentage of the money that comes back. Our business evolved and now offers reporting, which is cool because we sign up lots of clients to pay us for reporting, and when we notice an opportunity for a lawsuit, we can use our reporting data in the case. We have a growing SaaS reporting business, and since my brother is good at programming, we have an excellent tech suite. Our lawyers use it, and increasingly, our customers do as well.

We take all the risks. So in exchange for earning a chunk of the money when it comes back, we pay all the legal fees, and should you be counter-sued, that’s on us. That’s the value proposition. We are turn-key, so you don’t have to worry about it. We gather all the evidence, and we do all the analysis. I have a warehouse in Texas, and hundreds of counterfeit products go to that warehouse every day. The items get shipped to that warehouse, and we open them up and take photos. Those photos end up becoming the evidence.

Bandholz: So you use your software to scour the web and find infringers?

Hart: Yes. We’re always looking for people who have exceptional cases. When that happens, we’ll reach out. We also have a $99 per month marketplace reporting service. For instance, if you have a brand selling on Amazon, you pay us monthly, and we monitor Amazon and combine the regular reporting with everything that goes into our lawsuits. So you get the best of both worlds.

We monitor all the major marketplaces. We scour Shopify, Amazon, eBay, AliExpress, and Walmart for fake versions of products. One of our clients pays us for 15 different marketplace locations. Each business is structured differently. Some people are like us and mostly use Amazon. Others sell their retail with the big brands.

Another way our business makes money is by handling photo copyrights. People don’t know this, but if you take a photo and register its copyright, and someone else uses that photo, the damages can be high, depending on the context. If I were to steal one of the Beardbrand photos to sell Molson beard cream, I would get in a lot more trouble than if I bought Beardbrand from a store and resold it on Amazon. It’s not cool to steal people’s photos. Photography is expensive. It takes a lot of time and work, and you do get compensated when people steal your photos.

Bandholz: Where can people learn more about you and reach out?

Hart: My website is MolsonHart.com. You can follow me on Twitter, @Molson_hart. We’re also at Viahart and Edison Litigation Financing.

The Ultimate Free Solo Blog Setup With Ghost And Gatsby

These days it seems there are an endless number of tools and platforms for creating your own blog. However, lots of the options out there lean towards non-technical users and abstract away all of the options for customization and truly making something your own.

If you are someone who knows their way around front-end development, it can be frustrating to find a solution that gives you the control you want, while removing the admin from managing your blog content.

Enter the Headless Content Management System (CMS). With a Headless CMS, you can get all of the tools to create and organize your content, while maintaining 100% control of how it is delivered to your readers. In other words, you get all of the backend structure of a CMS while not being limited to its rigid front-end themes and templates.

When it comes to Headless CMS systems, I’m a big fan of Ghost. Ghost is open-source and simple to use, with lots of great APIs that make it flexible to use with static site builders like Gatsby.

In this article, I will show you how you can use Ghost and Gatsby together to get the ultimate personal blog setup that lets you keep full control of your front-end delivery, but leaves all the boring content management to Ghost.

Oh, and it’s 100% free to set up and run. That’s because we will be running our Ghost instance locally and then deploying to Netlify, taking advantage of their generous free tier.

Let’s dive in!

Setting Up Ghost And Gatsby

I’ve written a starter post on this before that covers the very basics, so I won’t go too in-depth into them here. Instead, I will focus on the more advanced issues and gotchas that come up when running a headless blog.

But in short, here’s what we need to do to get a basic set-up up and running that we can work from:

  • Install a local version of the Gatsby Starter Blog
  • Install Ghost locally
  • Change the source data from Markdown to Ghost (swap out gatsby-source-filesystem for gatsby-source-ghost)
  • Modify the GraphQL queries in your gatsby-node, templates, and pages to match the gatsby-source-ghost schema

For more details on any of these steps, you can check out my previous article.

Or you can just start from the code in this Github repository.

Dealing With Images

With the basics out of the way, the first issue we run into with a headless blog that builds locally is what to do with images.

Ghost by default serves images from its own server. So when you go headless with a static site, you will run into a situation where your content is built and served from an edge provider like Netlify, but your images are still being served by your Ghost server.

This isn’t ideal from a performance perspective and it makes it impossible to build and deploy your site locally (which means you would have to pay monthly fees for a Digital Ocean droplet, AWS EC2 instance, or some other server to host your Ghost instance).

But we can get around that if we find another solution to host our images — and thankfully, Ghost has storage converters that enable you to store images in the cloud.

For our purposes, we are going to use an AWS S3 converter, which enables us to host our images on AWS S3 along with Cloudfront to give us a similar performance to the rest of our content.

There are two open-source options available: ghost-storage-adapter-s3 and ghost-s3-compat. I use ghost-storage-adapter-s3 since I find the docs easier to follow and it was more recently updated.

That being said, if I followed the docs exactly, I got some AWS errors, so here’s the process that I followed that worked for me:

  • Create a new S3 Bucket in AWS and select Disable Static Hosting
  • Next, create a new Cloudfront Distribution and select the S3 Bucket as the Origin
  • When configuring the Cloudfront Distribution, under S3 Bucket Access:

    • Select “Yes, use OAI (bucket can restrict access to only Cloudfront)”
    • Create a New OAI
    • And finally, select “Yes, update the bucket policy”

      This creates an AWS S3 Bucket that can only be accessed via the Cloudfront Distribution that you have created.
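For reference, the bucket policy that the "Yes, update the bucket policy" option generates looks roughly like this (the OAI ID and bucket name below are placeholders; yours will differ, and the exact policy AWS writes may vary slightly):

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLE_OAI_ID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-S3-BUCKET-NAME/*"
    }
  ]
}
```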

Then, you just need to create an IAM User for Ghost that will enable it to write new images to your new S3 Bucket. To do this, create a new Programmatic IAM User and attach this policy to it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::YOUR-S3-BUCKET-NAME"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:PutObjectVersionAcl",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::YOUR-S3-BUCKET-NAME/*"
    }
  ]
}

With that, our AWS setup is complete. Now we just need to tell Ghost to read and write our images there instead of on its local server.

To do that, we need to go to the folder where our Ghost instance is installed and open the file ghost.development.json or ghost.production.json (depending on which environment you're currently running).

Then we just need to add the following:

{
  "storage": {
    "active": "s3",
    "s3": {
      "accessKeyId": "[key]",
      "secretAccessKey": "[secret]",
      "region": "[region]",
      "bucket": "[bucket]",
      "assetHost": "https://[subdomain].example.com",
      "forcePathStyle": true,
      "acl": "private"
    }
  }
}

The values for accessKeyId and secretAccessKey can be found from your IAM setup, while the region and bucket refer to the region and bucket name of your S3 bucket. Finally, the assetHost is the URL of your Cloudfront distribution.

Now, if you restart your Ghost instance, you will see that any new images you save land in your S3 bucket, and Ghost knows to link to them there. (Note: Ghost won't update existing images retroactively, so do this right after a fresh Ghost install to avoid re-uploading images later.)

Handling Internal Links

With images out of the way, the next tricky thing we need to think about is internal links. As you write content in Ghost and insert links in Posts and Pages, Ghost automatically prepends the site's URL to all internal links.

So for example, if you put a link in your blog post that goes to /my-post/, Ghost is going to create a link that goes to https://mysite.com/my-post/.

Normally, this isn't a big deal, but for headless blogs it causes problems: your Ghost instance is hosted somewhere separate from your front end, and in our case it won't even be reachable online, since we are building locally.

This means that we will need to go through each blog post and page to correct any internal links. Thankfully, this isn’t as hard as it sounds.

First, we will add this HTML parsing script in a new file called replaceLinks.js and put it in a new utils folder at src/utils:

const url = require(`url`);
const cheerio = require(`cheerio`);

const replaceLinks = async (htmlInput, siteUrlString) => {
  const siteUrl = url.parse(siteUrlString);
  const $ = cheerio.load(htmlInput);
  const links = $('a');
  links.attr('href', function (i, href) {
    if (href) {
      const hrefUrl = url.parse(href);
      if (hrefUrl.protocol === siteUrl.protocol && hrefUrl.host === siteUrl.host) {
        return hrefUrl.path;
      }
      return href;
    }
  });
  return $.html();
};

module.exports = replaceLinks;

Then we will add the following to our gatsby-node.js file:

const replaceLinks = require(`./src/utils/replaceLinks`);

exports.onCreateNode = async ({ actions, node, getNodesByType }) => {
  if (node.internal.owner !== `gatsby-source-ghost`) {
    return;
  }
  if (node.internal.type === 'GhostPage' || node.internal.type === 'GhostPost') {
    const settings = getNodesByType(`GhostSettings`);
    actions.createNodeField({
      name: 'html',
      value: await replaceLinks(node.html, settings[0].url),
      node,
    });
  }
};

You will see that replaceLinks.js uses two new packages, so let's start by installing those with npm:

npm install --save url cheerio

In our gatsby-node.js file, we are hooking into Gatsby’s onCreateNode, and specifically into any nodes that are created from data that comes from gatsby-source-ghost (as opposed to metadata that comes from our config file that we don’t care about for now).

Then we check the node type to filter out any nodes that are not Ghost Pages or Posts (since these are the only ones that will have links inside their content).

Next, we get the URL of the Ghost site from the Ghost settings and pass it to our replaceLinks function along with the HTML content of the Page/Post.

In replaceLinks, we use cheerio to parse the HTML. We then select all of the links in the content and map over their href attributes. If an href matches the URL of the Ghost site, we replace it with just the URL path, which is the internal link we are looking for (e.g. something like /my-post/).
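The core of that check can be sketched without cheerio, using Node's built-in URL class instead of the legacy url module (a simplified, illustrative stand-in, not the actual replaceLinks code):

```javascript
// Sketch of the href check: if a link's protocol and host match
// the Ghost site's URL, keep only its path; otherwise leave it alone.
const toInternalHref = (href, siteUrlString) => {
  const siteUrl = new URL(siteUrlString);
  let hrefUrl;
  try {
    hrefUrl = new URL(href);
  } catch (e) {
    // Relative hrefs have no host to compare, so leave them as-is.
    return href;
  }
  if (hrefUrl.protocol === siteUrl.protocol && hrefUrl.host === siteUrl.host) {
    return hrefUrl.pathname + hrefUrl.search;
  }
  return href;
};

console.log(toInternalHref('https://mysite.com/my-post/', 'https://mysite.com')); // → /my-post/
console.log(toInternalHref('https://other.com/page/', 'https://mysite.com'));     // → https://other.com/page/
```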

Finally, we are making this new HTML content available through GraphQL using Gatsby’s createNodeField (Note: we must do it this way since Gatsby does not allow you to overwrite fields at this phase in the build).

Now our new HTML content will be available in our blog-post.js template and we can access it by changing our GraphQL query to:

ghostPost(slug: { eq: $slug }) {
  id
  title
  slug
  excerpt
  published_at_pretty: published_at(formatString: "DD MMMM, YYYY")
  html
  meta_title
  fields {
    html
  }
}

And with that, we just need to tweak this section in the template:

<section
  dangerouslySetInnerHTML={{ __html: post.html }}
  itemProp="articleBody"
/>

To be:

<section
  dangerouslySetInnerHTML={{ __html: post.fields.html }}
  itemProp="articleBody"
/>

This makes all of our internal links reachable, but we still have one more problem: all of these links are plain <a> anchor tags, while with Gatsby we should use Gatsby Link for internal links (to avoid full page refreshes and provide a more seamless experience).

Thankfully, there is a Gatsby plugin that makes this really easy to solve. It’s called gatsby-plugin-catch-links and it looks for any internal links and automatically replaces the <a> anchor tags with Gatsby <Link>.

All we need to do is install it using NPM:

npm install --save gatsby-plugin-catch-links

And add gatsby-plugin-catch-links into our plugins array in our gatsby-config file.
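If your gatsby-config.js follows the usual shape, that change is just one more entry in the plugins array (the surrounding entries here are placeholders for whatever your config already contains):

```javascript
// gatsby-config.js
module.exports = {
  plugins: [
    // ...your existing plugins, including gatsby-source-ghost
    `gatsby-plugin-catch-links`,
  ],
}
```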

Adding Templates And Styles

Now the big stuff is technically working, but we are missing out on some of the content from our Ghost instance.

The Gatsby Starter Blog only has an Index page and a template for Blog Posts, while Ghost by default has Posts, Pages, as well as pages for Tags and Authors. So we need to create templates for each of these.

For this, we can leverage the Gatsby starter that was created by the Ghost team.

As a starting point for this project, we can copy a number of files from it directly into our own. Here's what we will take:

The meta files add JSON structured data markup to our templates. This is a benefit that Ghost offers by default on its platform, and the team has carried it over to Gatsby as part of their starter template.

Next come the Pagination and PostCard.js components, which we can drop straight into our project. With those components in place, we can take the template files and drop them in as well, and they will work.

The fragments.js file makes our GraphQL queries a lot cleaner for each of our pages and templates — we now just have a central source for all of our GraphQL queries. And the siteConfig.js file has a few Ghost configuration options that are easiest to put in a separate file.

Now we will just need to install a few npm packages and update our gatsby-node file to use our new templates.

The packages that we will need to install are gatsby-awesome-pagination, @tryghost/helpers, and @tryghost/helpers-gatsby.

So we will do:

npm install --save gatsby-awesome-pagination @tryghost/helpers @tryghost/helpers-gatsby

Then we need to make some updates to our gatsby-node file.

First, we will add the following new imports to the top of our file:

const { paginate } = require(`gatsby-awesome-pagination`);
const { postsPerPage } = require(`./src/utils/siteConfig`);

Next, in our exports.createPages, we will update our GraphQL query to:

{
  allGhostPost(sort: { order: ASC, fields: published_at }) {
    edges {
      node {
        slug
      }
    }
  }
  allGhostTag(sort: { order: ASC, fields: name }) {
    edges {
      node {
        slug
        url
        postCount
      }
    }
  }
  allGhostAuthor(sort: { order: ASC, fields: name }) {
    edges {
      node {
        slug
        url
        postCount
      }
    }
  }
  allGhostPage(sort: { order: ASC, fields: published_at }) {
    edges {
      node {
        slug
        url
      }
    }
  }
}

This will pull all of the GraphQL data we need for Gatsby to build pages based on our new templates.

To do that, we will extract all of those queries and assign them to variables:

// Extract query results
const tags = result.data.allGhostTag.edges
const authors = result.data.allGhostAuthor.edges
const pages = result.data.allGhostPage.edges
const posts = result.data.allGhostPost.edges

Then we will load all of our templates:

// Load templates
const tagsTemplate = path.resolve(`./src/templates/tag.js`)
const authorTemplate = path.resolve(`./src/templates/author.js`)
const pageTemplate = path.resolve(`./src/templates/page.js`)
const postTemplate = path.resolve(`./src/templates/post.js`)

Note here that we are replacing our old blog-post.js template with post.js, so we can go ahead and delete blog-post.js from our templates folder.

Finally, we will add this code to build pages from our templates and GraphQL data:

// Create tag pages
tags.forEach(({ node }) => {
  const totalPosts = node.postCount !== null ? node.postCount : 0
  // This gives our tag pages a `/tag/:slug/` permalink.
  const url = `/tag/${node.slug}`
  const items = Array.from({ length: totalPosts })

  // Create pagination
  paginate({
    createPage,
    items: items,
    itemsPerPage: postsPerPage,
    component: tagsTemplate,
    pathPrefix: ({ pageNumber }) => (pageNumber === 0) ? url : `${url}/page`,
    context: {
      slug: node.slug,
    },
  })
})

// Create author pages
authors.forEach(({ node }) => {
  const totalPosts = node.postCount !== null ? node.postCount : 0
  // This gives our author pages an `/author/:slug/` permalink.
  const url = `/author/${node.slug}`
  const items = Array.from({ length: totalPosts })

  // Create pagination
  paginate({
    createPage,
    items: items,
    itemsPerPage: postsPerPage,
    component: authorTemplate,
    pathPrefix: ({ pageNumber }) => (pageNumber === 0) ? url : `${url}/page`,
    context: {
      slug: node.slug,
    },
  })
})

// Create pages
pages.forEach(({ node }) => {
  // This gives our pages a `/:slug/` permalink.
  node.url = `/${node.slug}/`
  createPage({
    path: node.url,
    component: pageTemplate,
    context: {
      // Data passed to context is available
      // in page queries as GraphQL variables.
      slug: node.slug,
    },
  })
})

// Create post pages
posts.forEach(({ node }) => {
  // This gives our posts a `/:slug/` permalink.
  node.url = `/${node.slug}/`
  createPage({
    path: node.url,
    component: postTemplate,
    context: {
      // Data passed to context is available
      // in page queries as GraphQL variables.
      slug: node.slug,
    },
  })
})

Here, we loop in turn through our tags, authors, pages, and posts. For pages and posts, we simply build a path from each slug, create a page at that path, and tell Gatsby which template to use.

For the tags and author pages, we are also adding pagination info using gatsby-awesome-pagination that will be passed into the page’s pageContext.
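The pagination math underneath is straightforward: the plugin splits the items array into chunks of postsPerPage. Here is a small illustrative sketch (the postsPerPage value and the exact page-path format are assumptions for illustration; the real values come from siteConfig.js and gatsby-awesome-pagination):

```javascript
// How many pages a tag with `postCount` posts produces, and which
// path each page gets (mirrors the pathPrefix logic above, with the
// plugin appending the page number for pages after the first).
const postsPerPage = 10; // assumed value; set in siteConfig.js

const pageCount = (postCount) => Math.max(1, Math.ceil(postCount / postsPerPage));

const pathForPage = (url, pageNumber) =>
  pageNumber === 0 ? url : `${url}/page/${pageNumber + 1}`;

console.log(pageCount(25));               // → 3
console.log(pathForPage('/tag/news', 0)); // → /tag/news
console.log(pathForPage('/tag/news', 1)); // → /tag/news/page/2
```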

With that, all of our content should now build and display correctly, but the styling could use some work. Since we copied our templates directly from the Ghost starter, we can use its styles as well.

Not all of these styles will apply, but to keep things simple and avoid getting too bogged down in styling, I took all of the styles from the Ghost starter's src/styles/app.css, from the Layout section to the end, and pasted them at the end of src/styles.css.

Note the styles prefixed with kg: this refers to Koenig, the name of the Ghost editor. These styles are important for the Post and Page templates, as they handle the content created in the Ghost editor and ensure that everything you write there is carried over and displayed correctly on your blog.

Lastly, we need to update our page.js and post.js files to use the internal link replacement from the previous step, starting with the queries:


ghostPage(slug: { eq: $slug }) {
  ...GhostPageFields
  fields {
    html
  }
}


ghostPost(slug: { eq: $slug }) {
  ...GhostPostFields
  fields {
    html
  }
}

And then the sections of our templates that are using the HTML content. So in our post.js we will change:

<section
  className="content-body load-external-scripts"
  dangerouslySetInnerHTML={{ __html: post.html }}
/>

to:

<section
  className="content-body load-external-scripts"
  dangerouslySetInnerHTML={{ __html: post.fields.html }}
/>

And similarly, in our page.js file, we will change page.html to page.fields.html.

Dynamic Page Content

One of the disadvantages of Ghost as a traditional CMS is that you can't edit individual pieces of content on a page without going into your theme files and hard-coding it.

Say you have a section on your site with a call to action or customer testimonials. To change the text in those boxes, you have to edit the actual HTML files.

One of the great parts of going headless is that we can create dynamic content on our site that we can easily edit in Ghost. We are going to do this using Pages marked with 'internal' tags, i.e. tags that start with a # symbol.

So as an example, let’s go into our Ghost backend, create a new Page called Message, type something as content, and most importantly, we will add the tag #message.

Now let’s go back to our gatsby-node file. Currently, we are building pages for all of our tags and pages, but if we modify our GraphQL query in createPages, we can exclude everything internal:

allGhostTag(sort: { order: ASC, fields: name }, filter: { slug: { regex: "/^((?!hash-).)*$/" } }) {
  edges {
    node {
      slug
      url
      postCount
    }
  }
}
allGhostPage(sort: { order: ASC, fields: published_at }, filter: { tags: { elemMatch: { slug: { regex: "/^((?!hash-).)*$/" } } } }) {
  edges {
    node {
      slug
      url
      html
    }
  }
}

We are adding a filter on tag slugs with the regular expression /^((?!hash-).)*$/, which excludes any tag slugs containing hash-. (Ghost turns internal tags that start with # into slugs prefixed with hash-.)
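A quick way to convince yourself the regex does what we want: slugs containing hash- fail the match, everything else passes. (This snippet is just for illustration; the actual filtering happens inside the GraphQL query.)

```javascript
// The same negative-lookahead regex used in the GraphQL filter:
// it matches only strings that do NOT contain "hash-".
const isPublicSlug = (slug) => /^((?!hash-).)*$/.test(slug);

console.log(isPublicSlug('news'));         // → true
console.log(isPublicSlug('hash-message')); // → false
```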

Now, we won’t be creating pages for our internal content, but we can still access it from our other GraphQL queries. So let’s add it to our index.js page by adding this to our query:

query GhostIndexQuery($limit: Int!, $skip: Int!) {
  site {
    siteMetadata {
      title
    }
  }
  message: ghostPage(tags: { elemMatch: { slug: { eq: "hash-message" } } }) {
    fields {
      html
    }
  }
  allGhostPost(
    sort: { order: DESC, fields: [published_at] },
    limit: $limit,
    skip: $skip
  ) {
    edges {
      node {
        ...GhostPostFields
      }
    }
  }
}

Here we are creating a new query called "message" that looks up our internal content page by filtering on its internal tag's slug, hash-message. Then let's use the content from our #message page by adding this to our page:

const BlogIndex = ({ data, location, pageContext }) => {
  const siteTitle = data.site.siteMetadata?.title || `Title`
  const posts = data.allGhostPost.edges
  const message = data.message

  return (
    <Layout location={location} title={siteTitle}>
      <Seo title="All posts" />
      <section
        dangerouslySetInnerHTML={{
          __html: message.fields.html,
        }}
      />
      {/* ...the existing list of posts... */}
    </Layout>
  )
}

Finishing Touches

Now we've got a really great blog setup, but we can add a few final touches: pagination on our index page, a sitemap, and an RSS feed.

First, to add pagination, we will need to convert our index.js page into a template. All we need to do is cut and paste our index.js file from our src/pages folder over to our src/templates folder and then add this to the section where we load our templates in gatsby-node.js:

// Load templates
const indexTemplate = path.resolve(`./src/templates/index.js`)

Then we need to tell Gatsby to create our index page with our index.js template and tell it to create the pagination context.

Altogether we will add this code right after where we create our post pages:

// Create Index page with pagination
paginate({
  createPage,
  items: posts,
  itemsPerPage: postsPerPage,
  component: indexTemplate,
  pathPrefix: ({ pageNumber }) => {
    if (pageNumber === 0) {
      return `/`
    } else {
      return `/page`
    }
  },
})

Now let’s open up our index.js template and import our Pagination component and add it right underneath where we map through our posts:

import Pagination from '../components/pagination'

// ...
      </ol>
      <Pagination pageContext={pageContext} />
    </Layout>

Then we just need to change the link to our blog posts from:

<Link to={post.node.slug} itemProp="url">

to:

<Link to={`/${post.node.slug}/`} itemProp="url">

This prevents Gatsby Link from prefixing our links on pagination pages — in other words, if we didn’t do this, a link on page 2 would show as /page/2/my-post/ instead of just /my-post/ like we want.
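Gatsby Link resolves relative to values in much the same way the browser resolves relative URLs, so you can see the effect with Node's built-in URL class (the paths here are hypothetical):

```javascript
// Without a leading slash, a link is resolved relative to the current
// page; with one, it is resolved from the site root. This is why we
// prefix the slug with "/" in the Link above.
const fromPaginationPage = (to) => new URL(to, 'https://mysite.com/page/2/').pathname;

console.log(fromPaginationPage('my-post/'));  // → /page/2/my-post/
console.log(fromPaginationPage('/my-post/')); // → /my-post/
```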

With that done, let’s set up our RSS feed. This is a pretty simple step, as we can use a ready-made script from the Ghost team’s Gatsby starter. Let’s copy their file generate-feed.js into our src/utils folder.

Then let’s use it in our gatsby-config.js by replacing the existing gatsby-plugin-feed section with:

{
  resolve: `gatsby-plugin-feed`,
  options: {
    query: `
      {
        allGhostSettings {
          edges {
            node {
              title
              description
            }
          }
        }
      }
    `,
    feeds: [generateRSSFeed(config)],
  },
},

We will need to import our script along with our siteConfig.js file:

const config = require(`./src/utils/siteConfig`);
const generateRSSFeed = require(`./src/utils/generate-feed`);

Finally, we need to make one important addition to our generate-feed.js file. Right after the GraphQL query and the output field, we need to add a title field:

output: `/rss.xml`,
title: "Gatsby Starter Blog RSS Feed",

Without this title field, gatsby-plugin-feed will throw an error on the build.

Then for our last finishing touch, let’s add our sitemap by installing the package gatsby-plugin-advanced-sitemap:

npm install --save gatsby-plugin-advanced-sitemap

And adding it to our gatsby-config.js file:

{
  resolve: `gatsby-plugin-advanced-sitemap`,
  options: {
    query: `
      {
        allGhostPost {
          edges {
            node {
              id
              slug
              updated_at
              created_at
              feature_image
            }
          }
        }
        allGhostPage {
          edges {
            node {
              id
              slug
              updated_at
              created_at
              feature_image
            }
          }
        }
        allGhostTag {
          edges {
            node {
              id
              slug
              feature_image
            }
          }
        }
        allGhostAuthor {
          edges {
            node {
              id
              slug
              profile_image
            }
          }
        }
      }`,
    mapping: {
      allGhostPost: {
        sitemap: `posts`,
      },
      allGhostTag: {
        sitemap: `tags`,
      },
      allGhostAuthor: {
        sitemap: `authors`,
      },
      allGhostPage: {
        sitemap: `pages`,
      },
    },
    exclude: [
      `/dev-404-page`,
      `/404`,
      `/404.html`,
      `/offline-plugin-app-shell-fallback`,
    ],
    createLinkInHead: true,
    addUncaughtPages: true,
  },
},

The query, which also comes from the Ghost team’s Gatsby starter, creates individual sitemaps for our pages and posts as well as our author and tag pages.

Now, we just have to make one small change to this query to exclude our internal content. Same as we did in the prior step, we need to update these queries to filter out tag slugs that contain ‘hash-’:

allGhostPage(filter: { tags: { elemMatch: { slug: { regex: "/^((?!hash-).)*$/" } } } }) {
  edges {
    node {
      id
      slug
      updated_at
      created_at
      feature_image
    }
  }
}
allGhostTag(filter: { slug: { regex: "/^((?!hash-).)*$/" } }) {
  edges {
    node {
      id
      slug
      feature_image
    }
  }
}

Wrapping Up

With that, you now have a fully functioning Ghost blog running on Gatsby that you can customize from here. You can create all of your content by running Ghost on your localhost and then when you are ready to deploy, you simply run:

gatsby build

And then you can deploy to Netlify using their command-line tool:

netlify deploy -p

Since your content only lives on your local machine, it is also a good idea to make occasional backups, which you can do using Ghost’s export feature.

This exports all of your content to a JSON file. Note that it doesn't include your images, but those are stored in the cloud anyway, so you don't need to worry as much about backing them up.

I hope you enjoyed this tutorial where we covered:

  • Setting up Ghost and Gatsby;
  • Handling Ghost images using a storage adapter;
  • Converting Ghost internal links to Gatsby Link;
  • Adding templates and styles for all Ghost content types;
  • Using dynamic content created in Ghost;
  • Setting up RSS feeds, sitemaps, and pagination.

If you are interested in exploring further what’s possible with a headless CMS, check out my work at Epilocal, where I’m using a similar tech stack to build tools for local news and other independent, online publishers.

Note: You can find the full code for this project on Github here, and you can also see a working demo here.
