B2B Lessons from 2020

Covid-19 forced all of us to change. Companies and employees who had resisted digital innovation suddenly had no choice. Some of the changes have created value and are worth keeping. Others not so much.

My company develops ecommerce systems for B2B merchants. What follows are pandemic-induced changes that, in my view, will likely become permanent. I’ve also listed a few painful B2B weaknesses that Covid has exposed.

Permanent Changes

Digital content is critical. This was shifting even before the pandemic. People are turning to online content rather than in-person conversations. Even phone and virtual interactions are down in favor of digital self-service options. Companies in 2020 came up with new ways to build relationships, such as videos, articles, PDF guides, and other resources. Going forward, think about how a customer or prospect can interact with your company at midnight when no one is around.

More efficient processes. Doing more with less has become a higher priority. Tools that speed up key processes are essential, such as answering customers’ questions, closing a sale, setting up an account, and providing cost, shipping, and delivery details.

Embracing online sales. For many of my B2B clients, ecommerce sales grew in 2020, even if offline revenue was down. The pandemic demonstrated the value of digital transactions to organizations that were previously skeptical. Ecommerce is a benefit to a sales team, and last year proved it.

Ecommerce on a timeline. Many B2B ecommerce sites that had been held up due to an expansive scope got done quickly. Companies were forced to accept smaller, faster iterations and improvements.

New ways of communicating. In-person sales calls and trade shows turned out to not be as important as we thought. Our culture shifted as more people worked from home. We became used to seeing each other’s pets and being in each other’s homes. We were forced to re-examine our habits and notions of how things are.

New services or processes. An HVAC distributor, for example, can now offer dockside pickup, similar to curbside pickup for retailers. Traditional B2B sellers have shifted to selling certain products directly to consumers. Manufacturers that once took complex orders only through a salesperson have created self-service digital ordering.

New skills. Many B2B companies shifted employees to digital-focused tasks. Experienced team members can offer valuable perspectives. Most are willing to learn new things to match the needs of the company.

Weaknesses

Challenging times can highlight our weaknesses. The pandemic did that for quite a few B2B merchants.

Outdated systems. In shifting to digital, some organizations found that their systems could not easily integrate. Examples are outdated administrative platforms or homegrown software for sales quotes. The digital transition requires a hard look at internal systems. Some will need to be updated.

Lost opportunity. Businesses with limited digital platforms suffered the most.

Culture turmoil. Many teams struggled because they weren’t prepared for or open to essential changes. Companies have learned the importance of a resilient, innovative, and adaptable culture. Employees who demonstrate an unwillingness to change are obstacles. Those who step up and lead are invaluable.

Unnecessary expenses. Some companies realized they were paying for outdated items that did not create value for the business or its customers.

2020 in Hindsight

Take the time to review how 2020 impacted your company. What lessons did you learn? What choices might you make going forward?

Dr. Squatch Scales to $100 Million with Natural Soaps for Men

In 2013 Jack Haldrup was reading “The Lean Startup” and pondering how to launch a business. He was seeking a niche based on personal experience versus market research.

“I had tried cold process soap, purchased at a farmer’s market,” he told me. “I really enjoyed it. I’ve always had sensitive skin and have been interested in those types of products. After using that soap, I thought there must be other guys like me who would enjoy it, too. So I decided to try to find those customers.”

Fast forward to 2021, and Dr. Squatch, Haldrup’s company, sells its own natural soaps and other grooming products for men. Revenue in 2020 was roughly $100 million, having exploded from $5 million just 18 months earlier.

How does a company grow from $5 million to $100 million in 18 months? I asked him that question and many more in our recent conversation.  The entire audio version of our discussion is below, followed by a transcript, which is edited for clarity and length.

Eric Bandholz: Tell us about Dr. Squatch.

Jack Haldrup: We’re a men’s personal care brand. We focus on natural products for guys. Soap is our hero product. It’s what we started with, and it’s our biggest category. We’ve recently launched a deodorant that’s off to a good start. We try to create a product that is natural and formulated for men.

Bandholz: Folks might think that Dr. Squatch is a direct competitor to Beardbrand. But we’re side-by-side trying to take a chunk out of Procter & Gamble and all those mega-corporations.

Haldrup: Totally. There’s a lot to take there. The men’s grooming market is not as fragmented as other categories, including female-oriented products.

Bandholz: Take us back to the early days of Dr. Squatch. You’re essentially selling one of the most commoditized products.

Haldrup: I started the company in 2013. I’d been thinking about it for maybe six months before that. At the time I was reading “The Lean Startup.” I was looking to start in a niche related to my own personal experience. I certainly didn’t do market research and conclude that soap is a great category.

I had tried cold process soap, purchased at a farmer’s market. I really enjoyed it. I’ve always had sensitive skin and have been interested in those types of products. After using that soap, I thought there must be other guys like me who would enjoy it, too. So I decided to try to find those customers.

Bandholz: Was it purely online initially?

Haldrup: I was doing online, yes. But my business partner and I made a lot of door-to-door sales at local boutiques. That was a huge part of our business for the first year or two. It wasn’t sophisticated. We didn’t go to many trade shows.

We drove to Portland, Seattle, and down to California and solicited smaller shops to resell our soap. It was not the most efficient way to scale a business.

My partner was more of a sales guy than me. I was pursuing the online approach. He gave us a foundation to invest in ecommerce. But knocking on doors was definitely a learning process. A lot of people took a chance on us because we were there.

Bandholz: It sounds like he’s no longer your partner.

Haldrup: That’s right. He’s a good friend and has been for most of my life. But we had a different vision for the company. I started the company and then brought him on eight months into it. That lasted for two or three years. He was essential during that time, helping us scale. But he didn’t want to tackle the next phase of the journey in terms of growth.

Bandholz: What was that phase? Outside investors?

Haldrup: Yes, outside investors. We were entirely bootstrapped for the first five or six years. I didn’t want to continue down that path. I wanted to really scale it, or make it passive or find a new project to work on. I didn’t want to keep it small.

I bought out my partner a year and a half before we took outside money. I was essentially the sole owner then.

Bandholz: So you raised money and found the magic button to scale the business.

Haldrup: That’s right. There’s a lot to unpack here. First and foremost, I changed my vision for the company. I started thinking bigger. I evolved to be open to meeting investors and raising money. I met a handful of people and ultimately chose an investor that I felt would add a lot of value. The company is based in Los Angeles. They invest in consumer businesses. Their portfolio companies are a tight-knit group, almost like an advisory community.

It’s gone incredibly well for us. We had maybe five people on the team and roughly $5 million in sales at the time. And fast forward to today, a year and a half later, we’ve got 150 people on the team. We’ve built a manufacturing facility, and we’ll do over $100 million in sales in 2020. Paid media and video has been a huge driver of that. I have no regrets.

Bandholz: Going from $5 million in revenue to $100 million in 18 months is unreal. We’ve all seen your YouTube video with that bearded spokesman. Was that your best marketing channel?

Haldrup: We’ve used YouTube and Facebook, but that video put us on the map. We made the first video with that actor in 2018. I was in San Diego at the time. We were working with a marketing agency there, but we wanted something bigger and more impactful. So we decided to create a viral video.

The agency found him at a comedy show for San Diego’s funniest comedian. They approached him, and he agreed to do it. We had no idea what we were doing — just creating an experience with video.

That was before our outside investment. We were about a $3 million company then. We spent about $18,000 on the video. It was a massive expense at the time.  It was scary, actually.

Bandholz: So you put your faith in it and cranked it up from there.

Haldrup: Yes. We use a simple post-purchase survey to know where people first heard about us. We see a lot of attribution from YouTube that doesn’t come through on its metrics.

Bandholz: We use a wonderful post-purchase app called Grapevine for $3 a month that allows us to ask that one question of how people find us.

Haldrup: For sure. Grapevine is incredible. I recommend it to everybody. I was blown away at how simple and impactful it is.

Bandholz: You guys are heavy on subscriptions. Is that a big part of your business?

Haldrup: It’s definitely a big part of our business, about 30 percent of overall revenue. Every business that sells a consumable product should offer subscription-enabled ecommerce. There’s no downside.

Certainly we have customers canceling. They have too many soaps, or they don’t like the product. But it is still a valuable part of our business.

Bandholz: You’re running Shopify and ReCharge, but you have a custom flow for people with the subscriptions.

Haldrup: Yes.

Bandholz: What percentage of revenue comes from your website versus Amazon?

Haldrup: We get about 85 percent of our revenue from our site and the rest from Amazon. We have boutique retailers that sell a bit here and there. We’re hoping to expand into larger brick-and-mortar retailers over the next couple of years.

Shopify is a great platform. We moved from Cratejoy in February of 2020. We saw dramatic improvements in terms of on-site performance, as well as on the backend, dealing with the data. We’ve experienced no massive challenges with Shopify thus far. We have no plans to change.

Bandholz: Are you still working with the marketing agency? What is the agency’s role?

Haldrup: Yes, we’re still a client. We do all of our own paid media buying and paid media optimization. We have an internal team focused on that. For content, it’s a mix of internal and external. The agency brings a lot of creative stuff to life. They also have the expertise in terms of shooting and editing the video.

It’s a retainer-based relationship, plus an added amount for the video work.

Bandholz: Where can people follow your company and purchase products?

Haldrup: The Dr. Squatch Instagram presence is pretty entertaining, as is our YouTube channel. Our website is DrSquatch.com.

When To Say No To Freelance Projects

A lot of feel-good life advice encourages us to say yes to new things whenever we can. This philosophy of openness can sound pretty enticing when you’re a freelancer or consultant just beginning to stand tall on your own — or riding the high of a string of good projects.

And it’s true that saying yes can help you grow! Saying yes to new clients, projects, and partners helps you make connections, build your portfolio, and evolve professionally. Saying yes can also lead to paying jobs, which lead to even more paying jobs.

But saying no — at the right times — can be just as critical to the success of our self-employment. Building up the ability and skill to say no is part of a career evolution and, in my opinion, one of the ultimate goals for successful freelancing.

It’s not always easy to decline work, though. We can feel reluctant to turn things down because we want to make clients happy. We also need to feel secure with our income and prospects, and saying no can certainly feel risky when we want to be sure our bills will get paid.

In this article, I’ll share with you what I’ve learned about the importance of recognizing when it’s better to say no to a freelance opportunity. It can feel scary to pass work to someone else, but it will be okay and you will garner a solid sense of self-understanding in the process. The information in this article is mostly about saying no to new projects, but can also apply to saying no when a current client asks for more work than you’ve already agreed to do.

The Importance Of Saying No

Saying yes often leads to surprising, exciting outcomes, but so does saying no.

Before we get deeper into the details of when and how to say no to freelance opportunities, think about how this kind of selectivity may impact your wellbeing. Imagine working only on things you love. Committing to lackluster projects or unpleasant clients means you might find yourself unavailable when a perfect project shows up in your inbox. When you say no to projects that just aren’t right, you save time and headspace for things that you will want to say yes to.

Saying no also helps you avoid or recover from burnout. Most freelancers will experience burnout at times. Burnout happens when we work too hard and run out of steam — often because a client is bad for us, or because we take on too much, or both. Taking breaks and being deliberate about taking time off (meaning, saying no sometimes) is a valuable pursuit for your personal wellbeing, not to mention your long-term professional output. It’s okay to say no to something when you don’t have the passion or energy for it. Be kind to yourself.

How To Decide Whether To Take On A Project

Is The Project A Match For Your Expertise?

One of the first things to think about when considering a freelance opportunity is whether it’s a good match for your skills and your level of experience.

I’m a UX consultant and I like doing a lot of different things, but my core expertise is user research. If a potential project is not an exact fit for my skills, or if it’s not something I enjoy, I will typically decline. For example, if somebody asks me to do an accessibility review for their website, I know there are a lot of skilled freelancers who can do a much quicker job with it than I would. I might refer somebody if I know they are interested.

Similarly, an ideal project should match your level of experience. If the work is too far below your experience level, it might be boring and likely not pay enough. If it’s too far above, you might find yourself too stressed out and in over your head. For me, if a company is looking for a UX freelancer to “make wireframes” without digging into research or design strategy, I know that’s probably a job for somebody more junior or with different interests.

Think of it this way. If you say yes to a request that isn’t a good match, you may feel resentful about it. You might spend too much time on it. You might even deliver results that aren’t great. So although saying yes can be a strategy for developing skills or learning new things, it only works if the project is in line with your goals and doesn’t stretch too far beyond what you know you can do well. Otherwise, it might be a bad experience for you (and maybe the client, too).

Is The Budget Enough?

If a project doesn’t pay enough, ask for more or say no. Simple!

Well, okay, I know it’s not that simple. Sometimes we need the money. But it becomes easier to say no to low-paying jobs when you (1) are knowledgeable about your worth and (2) have a financial cushion. Either of these things may take time, but the goal is to say no to projects that fall below the minimum amount you will consider. Along those lines, you don’t want to devalue your worth by saying yes to too many out-of-scope extras.

Pricing is its own beast that I won’t get into here, but once you know your worth, stand strongly with it! Over time, the prices you charge can increase with your value, and eventually, you may pass work down to more junior freelancers. Also, remember that you are a professional who represents your industry, and when freelancers continuously say yes to very low prices for our services, it can impact everyone else in that specialty by devaluing our skills or expertise overall.

So, don’t make it a habit to accept projects that pay below your value unless you are purposely working pro bono for a good cause. Of course, again, sometimes we might need to take on low-paying work just to “keep the lights on” with our business, and that’s okay. Having a financial cushion eventually helps us say no to these gigs and hold out for better-paying opportunities. For me, it helps to keep my living expenses low so I feel more confident letting work opportunities pass by.

Does The Project Fit Your Schedule?

When your schedule is busy, it’s easy to decline an unwanted project. But what if you aren’t currently busy with work? Consider your pipeline, upcoming schedule, and financial cushion to decide whether a mediocre project is worth doing. If you have plenty of time, it might make sense to say yes to something even if it’s not a great fit.

And alternatively, what if the project is something you actually really want to do, but time is tight? Think carefully about your current commitments and deadlines and whether a new project is doable. If not, you may need to regretfully decline or ask for an alternative timeline. Saying yes to a project you don’t actually have time or energy to do well — as much as you may wish you did — is unfortunately a recipe for late nights or disappointed clients. And remember, having the time and energy to say yes to good projects is another reason to say no to bad projects!

Sometimes whether a project fits in your schedule depends on whether you can work something out with the client. A clear project scope (knowing specifically when you are needed, and for what) will help you plan your time and make decisions about taking on new things. When it comes to scope, also consider whether the client’s proposed timeline is too short for the amount of work required — this is super common, and it’s okay to offer to do the same project on a longer timeline, or a smaller version of the project within the proposed timeline, so that it becomes more manageable to fit into your schedule.

The reality of freelancing is that it’s often a guessing game when it comes to pipelines, scheduling, and balancing your workload. But if you know a project doesn’t fit into your schedule, make sure you say no quickly. You can always let the requester know when you expect that your schedule will free up, and maybe another project will work out later.

Does The Opportunity Align With Your Values?

In addition to the expertise needed, budget offered, and time required, check in on whether the project is in line with your values and goals. Think about your gut feelings here. What are some industries or companies you want to avoid? What kind of projects make you happy? Turn those gut feelings into a concrete list.

Here are a few factors to think about:

  • Does this project help move you toward a bigger goal?
  • Do you find the project personally interesting?
  • Do you believe your work on the project will help people in a meaningful way, or do some kind of social or environmental good?
  • Is this project supporting an industry or company that you consider unethical? (Kelly Small talks about seeking ethical work and saying no to unethical work in their book The Conscious Creative!)
  • Do you prefer working with startups or established companies?
  • Do you prefer working in small teams or big ones?
  • Do you prefer remote or in-person work?
  • Do you prefer subcontracting with other agencies or do you prefer direct clients?
  • Have you worked with this client before and what was that like?
  • Will having a good relationship with this client potentially lead to other good work?
  • Does it seem like the client will respect you?

You can create a personalized checklist or rating sheet to help you identify whether a project is worth doing. Burnout happens a lot quicker when our work is emotionally taxing, so take into account your personal values and preferences when deciding whether to say yes or no to opportunities. After all, working for yourself ideally means you get to call the shots.

Decline Assertively (But Kindly)

When you’ve decided a “no thank you” is in order, it’s important to be clear that you are declining while still keeping the mood light and productive (unless they have egregiously offended you, I suppose).

First, I recommend expressing gratitude for the opportunity. Thank the requester for thinking of you, going through the interview process with you, or whatever they have done. It takes time and effort on their end to hire freelancers or consultants.

Second is the important part: saying no. Be clear that you are declining. You can briefly explain why if you’d like, but you don’t have to explain your reasoning if you don’t want to. Just respond in a timely way so the client can move on.

If you worry about offending someone, please don’t! Be assertive so that when you say no, the answer is firm. If you think about the three main communication styles — assertive, passive, and aggressive — the idea is to be clear (assertive) while avoiding coming across as too passive or aggressive.

Most people will respect a clear and honest answer. I’ve only once had someone respond rudely to an email in which I politely declined to meet, but that aggressive response cemented that I likely saved myself a lot of aggravation by not working with them.

Third, once you’ve said no, you might offer some next steps or alternatives if you think it’s appropriate. If you are open to discussing future work, let them know. Be clear about the conditions under which you might be more likely to say yes — such as after a certain date, or if they have a need for a different area of your expertise.

You also might suggest other freelancers if you know anyone who may like to take on the project, or who might be a better fit than you are. If you don’t have specific names to offer, you can suggest resources to find freelancers, like listings for professional organization members. You might also offer to share the opportunity with other freelancers in your network, such as on social media or in Slack groups.

Here’s an example of how an email declining a project opportunity might look:

Hi [name],

Thank you so much for thinking of me for [project description]. This project does not seem to be a great match for [my experience level / my current interests / my schedule / etc.] so I am declining, but I can [share it on LinkedIn / connect you with another freelancer I trust / etc.], if you’d like.

Also, if you have future opportunities for [something else you’d like to do / something after a certain date / etc.], I’d love to hear about it. Keep in touch!

Conclusion

Saying no is a skill. Saying yes to the wrong freelance opportunities can lead you toward misery and burnout, and we could all probably improve how mindful we are about our work, partners, and clients.

Remember, if you’re not sure whether to accept a project, think through:

  • General Fit
    Is the project a match for your skills and your level of expertise?
  • Budget
    Would you be selling yourself or your field short by taking on the project for a low price?
  • Timeline
    Would the project conflict with your existing commitments? Is the client wanting too much in too short a timeframe?
  • Values
    How good would you feel about this project? Is it in line with your values, goals, and preferences?

Consider everything together. A low-paying job might still be a good fit if it’s for a good cause, or if it gets you critical experience to move toward a bigger goal. Consider making your own personalized checklist or rating sheet to help you rate past, present, and future projects and get a better understanding of which opportunities should be left on the table.

Pandemic-driven Shoppers Expect More from Ecommerce

The pandemic has upped consumers’ digital expectations. What was once the exception is now the norm. Telemedicine, online learning, virtual meetings — all have advanced in the past year. Ecommerce stores must evolve, too, in how they engage and interact with shoppers.

Here are five ways to stay competitive and spur conversions.

5 Engagement Tips

Embed how-to and inspirational videos on product and landing pages. Instead of sending shoppers on the hunt for more information, develop and deliver short, detailed videos on using your products. Video is a powerful marketing tool, especially when tailored to the target audience. Take things a step further by encouraging customers to submit content you can feature, either as standalone media or in collaboration with others.

Incorporate live video chat. Shoppers expect a speedy customer-service response. Yet most companies mismanage email and after-hours chat requests.

Combine slow responses with representatives who multitask — managing many queries at a time — and the result is unnecessary delays and mistakes.

Hosting dedicated channels for video-based customer service serves two purposes:

  • It allows reps to focus on one request at a time.
  • It personalizes the shopping experience.

Smaller and niche stores, as well as outlets selling high-end luxuries, could benefit from this method of guiding shoppers through product selection and checkout.

Put more focus on user-generated content. In the age of selfies and social media, UGC continues to help stores close sales. Customer reviews are a must, but so is personalizing pages via context-of-use photos and videos by real people.

For example, beauty company Glossier incorporates customers’ selfies in galleries and spotlights consumers in various blog posts. This creates a compelling, personalized marketing message: Real people (just like you) use Glossier products.

Glossier’s blog focuses on its customers; one post, for example, features an interview with a PR manager.

Ask questions. Online stores that sell competing products can benefit from asking key questions, similar in concept to shopping for car insurance, where providers query users to offer the best policy.

Shoppable quizzes can present personalized options, where shoppers are asked a series of questions, and their answers determine which product sets are listed.

Brooks Sports encourages shoppers to take a 10-question quiz to identify suitable running shoes. Answers about one’s running environment, desired fit and feel, and health issues help present the best possible shoe.

Brooks’ quiz-takers are asked about their running goals and needs.

Despite taking time to complete, shoppable quizzes can create a shorter path to purchase.

Brooks’ quiz returns personalized product recommendations based on the answers.

While quizzes aren’t all that unique, the presentation can be. Brooks uses fun illustrations and common lingo throughout the survey, making it a breeze. Be sure to use formatting and language tailored to your target audience.

Let them pick up where they left off. Online retailers commonly force shoppers to re-experience content — a big mistake. Features that support multi-device sessions and persistent shopping carts are crucial to the conversion process.

Interruptions are common. Shoppers want to pick up where they left off. Shopping carts, customer accounts, and social connections must work together to remove the frustration of starting over.

How We Improved SmashingMag Performance

Every web performance story is similar, isn’t it? It always starts with the long-awaited website overhaul. A day when a project, fully polished and carefully optimized, gets launched, ranking high and soaring above performance scores in Lighthouse and WebPageTest. There is a celebration and a wholehearted sense of accomplishment prevailing in the air — beautifully reflected in retweets and comments and newsletters and Slack threads.

Yet as time passes by, the excitement slowly fades away, and urgent adjustments, much-needed features, and new business requirements creep in. And suddenly, before you know it, the code base gets a little bit overweight and fragmented, third-party scripts have to load just a little bit earlier, and shiny new dynamic content finds its way into the DOM through the backdoors of fourth-party scripts and their uninvited guests.

We’ve been there at Smashing as well. Not many people know it but we are a very small team of around 12 people, many of whom are working part-time and most of whom are usually wearing many different hats on a given day. While performance has been our goal for almost a decade now, we never really had a dedicated performance team.

After the latest redesign in late 2017, it was Ilya Pukhalski on the JavaScript side of things (part-time), Michael Riethmueller on the CSS side of things (a few hours a week), and yours truly, playing mind games with critical CSS and trying to juggle a few too many things.

As it happened, we lost track of performance in the busyness of day-to-day routine. We were designing and building things, setting up new products, refactoring the components, and publishing articles. So by late 2020, things got a bit out of control, with yellowish-red Lighthouse scores slowly showing up across the board. We had to fix that.

That’s Where We Were

Some of you might know that we are running on JAMStack, with all articles and pages stored as Markdown files, Sass files compiled into CSS, JavaScript split into chunks with Webpack, and Hugo building out static pages that we then serve directly from an Edge CDN. Back in 2017 we built the entire site with Preact, but then moved to React in 2019 — and use it along with a few APIs for search, comments, authentication and checkout.

The entire site is built with progressive enhancement in mind, meaning that you, dear reader, can read every Smashing article in its entirety without the need to boot the application at all. It’s not very surprising either — in the end, a published article doesn’t change much over the years, while dynamic pieces such as Membership authentication and checkout need the application to run.

The entire build for deploying around 2500 articles live takes around 6 mins at the moment. The build process on its own has become quite a beast over time as well, with critical CSS injects, Webpack’s code splitting, dynamic inserts of advertising and feature panels, RSS (re)generation, and eventual A/B testing on the edge.

In early 2020, we started a big refactoring of the CSS layout components. We never used CSS-in-JS or styled-components, but instead a good ol’ component-based system of Sass modules which would be compiled into CSS. The entire layout, built with Flexbox back in 2017, was rebuilt with CSS Grid and CSS Custom Properties in mid-2019. However, some pages needed special treatment due to new advertising spots and new product panels. So while the layout was working, it wasn’t working very well, and it was quite difficult to maintain.

Additionally, the header with the main navigation had to change to accommodate more items that we wanted to display dynamically. Plus, we wanted to refactor some frequently used components across the site, and the CSS used there needed some revision as well — the newsletter box being the most notable culprit. We started off by refactoring some components with utility-first CSS, but we never got to the point where it was used consistently across the entire site.

The larger issue was the large JavaScript bundle that — not very surprisingly — was blocking the main-thread for hundreds of milliseconds. A big JavaScript bundle might seem out of place on a magazine that merely publishes articles, but actually, there is plenty of scripting happening behind the scenes.

We have various states of components for authenticated and unauthenticated customers. Once you are signed in, we want to show all products with their final prices, and as you add a book to the cart, we want to keep the cart accessible with a tap on a button — no matter what page you are on. Advertising needs to come in quickly without causing disruptive layout shifts, and the same goes for the native product panels that highlight our products. Plus a service worker that caches all static assets and serves them for repeat views, along with cached versions of articles that a reader has already visited.

So all of this scripting had to happen at some point, and it was draining the reading experience, even though the script was coming in quite late. Frankly, we were painstakingly working on the site and new components without keeping a close eye on performance (and we had a few other things to keep in mind for 2020). The turning point came unexpectedly. Harry Roberts ran his (excellent) Web Performance Masterclass as an online workshop with us, and throughout the entire workshop, he used Smashing as an example, highlighting issues that we had and suggesting solutions alongside useful tools and guidelines.

Throughout the workshop, I was diligently taking notes and revisiting the codebase. At the time of the workshop, our Lighthouse scores were 60–68 on the homepage, and around 40-60 on article pages — and obviously worse on mobile. Once the workshop was over, we got to work.

Identifying The Bottlenecks

We often tend to rely on particular scores to get an understanding of how well we perform, yet too often single scores don’t provide a full picture. As David East eloquently noted in his article, web performance isn’t a single value; it’s a distribution. Even a web experience that is heavily and thoroughly optimized all around can’t just be fast for everyone. It might be fast for some visitors, but it will inevitably be slower (or slow) for others.

The reasons for it are numerous, but the most important one is a huge difference in network conditions and device hardware across the world. More often than not we can’t really influence those things, so we have to ensure that our experience accommodates them instead.

In essence, our job then is to increase the proportion of snappy experiences and decrease the proportion of sluggish experiences. But for that, we need to get a proper picture of what the distribution actually is. Analytics and performance monitoring tools will provide this data when needed, but we looked specifically into CrUX, the Chrome User Experience Report. CrUX generates an overview of performance distributions over time, with traffic collected from Chrome users. Much of this data relates to Core Web Vitals, which Google announced back in 2020 and which also contribute to, and are exposed in, Lighthouse.

We noticed that across the board, our performance regressed dramatically throughout the year, with particular drops around August and September. Once we saw these charts, we could look back at some of the PRs we had pushed live back then to study what had actually happened.

It didn’t take long to figure out that just around these times we launched a new navigation bar. That navigation bar — used on all pages — relied on JavaScript to display navigation items in a menu on tap or on click, but the JavaScript bit of it was actually bundled within the app.js bundle. To improve Time To Interactive, we decided to extract the navigation script from the bundle and serve it inline.

Around the same time we switched from an (outdated) manually created critical CSS file to an automated system that generated critical CSS for every template — homepage, article, product page, event, job board, and so on — and inlined it during the build. Yet we didn’t really realize how much heavier the automatically generated critical CSS was. We had to explore it in more detail.

Also around the same time, we were adjusting the web font loading, trying to push web fonts more aggressively with resource hints such as preload. This seemed to backfire on our performance efforts, though, as web fonts were delaying rendering of the content, being over-prioritized next to the full CSS file.

Now, one of the common reasons for regression is the heavy cost of JavaScript, so we also looked into Webpack Bundle Analyzer and Simon Hearne’s request map to get a visual picture of our JavaScript dependencies. It looked quite healthy at the start.

A few requests were coming to the CDN, a cookie consent service Cookiebot, Google Analytics, plus our internal services for serving product panels and custom advertising. It didn’t appear like there were many bottlenecks — until we looked a bit more closely.

In performance work, it’s common to look at the performance of some critical pages — most likely the homepage and most likely a few article/product pages. However, while there is only one homepage, there might be plenty of various product pages, so we need to pick ones that are representative of our audience.

In fact, as we’re publishing quite a few code-heavy and design-heavy articles on SmashingMag, over the years we’ve accumulated literally thousands of articles that contained heavy GIFs, syntax-highlighted code snippets, CodePen embeds, video/audio embeds, and nested threads of never-ending comments.

When brought together, many of them were causing nothing short of an explosion in DOM size along with excessive main thread work — slowing down the experience on thousands of pages. Not to mention that with advertising in place, some DOM elements were injected late in the page’s lifecycle causing a cascade of style recalculations and repaints — also expensive tasks that can produce long tasks.

None of this was showing up in the request map we generated for a fairly lightweight article page. So we picked the heaviest pages we had — the almighty homepage, the longest one, the one with many video embeds, and the one with many CodePen embeds — and decided to optimize them as much as we could. After all, if they are fast, then pages with a single CodePen embed should be faster, too.

With these pages in mind, the map looked a little different. Note the huge thick line heading to the Vimeo player and Vimeo CDN, with 78 requests coming from a Smashing article.

To study the impact on the main thread, we took a deep dive into the Performance panel in DevTools. More specifically, we were looking for tasks that last longer than 50ms (highlighted with a red rectangle in the upper right corner) and tasks that contain style recalculations (purple bar). The first would indicate expensive JavaScript execution, while the latter would expose style invalidations caused by dynamic injections of content into the DOM and suboptimal CSS. This gave us some actionable pointers on where to start. For example, we quickly discovered that our web font loading had a significant repaint cost, while JavaScript chunks were still heavy enough to block the main thread.

As a baseline, we looked very closely at Core Web Vitals, trying to ensure that we were scoring well across all of them. We chose to focus specifically on slow mobile devices — with slow 3G, 400ms RTT, and 400kbps transfer speed, just to be on the pessimistic side of things. It’s not surprising, then, that Lighthouse wasn’t very happy with our site either, handing out solid red scores for the heaviest articles and tirelessly complaining about unused JavaScript, CSS, offscreen images, and their sizes.

Once we had some data in front of us, we could focus on optimizing the three heaviest article pages, with a focus on critical (and non-critical) CSS, the JavaScript bundle, long tasks, web font loading, layout shifts, and third-party embeds. Later we’d also revise the codebase to remove legacy code and use new modern browser features. It seemed like a lot of work ahead of us, and indeed we were quite busy for the months to come.

Improving The Order Of Assets In The <head>

Ironically, the very first thing we looked into wasn’t even closely related to all the tasks we’ve identified above. In the performance workshop, Harry spent a considerable amount of time explaining the order of assets in the <head> of each page, making a point that to deliver critical content quickly means being very strategic and attentive about how assets are ordered in the source code.

Now it shouldn’t come as a big revelation that critical CSS is beneficial for web performance. However, it did come as a bit of a surprise how much difference the order of all the other assets — resource hints, web font preloading, synchronous and asynchronous scripts, full CSS and metadata — has.

We turned the entire <head> upside down, placing critical CSS before all asynchronous scripts and all preloaded assets such as fonts and images. We broke down the assets that we preconnect to or preload by template and file type, so that critical images, syntax highlighting, and video embeds are requested early only for the types of articles and pages that need them.

In general, we’ve carefully orchestrated the order in the <head>, reduced the number of preloaded assets that were competing for bandwidth, and focused on getting critical CSS right. If you’d like to dive deeper into some of the critical considerations with the <head> order, Harry highlights them in the article on CSS and Network Performance. This change alone brought us around 3–4 Lighthouse score points across the board.
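To make the ordering concrete, here is a simplified sketch of the general shape of the resulting <head>. The file names and values are illustrative, not our exact markup:

  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Article title</title>

    <!-- 1. Critical CSS for this template, inlined so the first render isn't blocked -->
    <style>/* critical CSS */</style>

    <!-- 2. Asynchronous/deferred scripts come after the critical CSS -->
    <script src="/js/app.js" defer></script>

    <!-- 3. A minimal set of resource hints and preloads, scoped per template -->
    <link rel="preconnect" href="https://cdn.example.com" crossorigin>
    <link rel="preload" href="/fonts/elena-regular.woff2" as="font" type="font/woff2" crossorigin>

    <!-- 4. The full stylesheet and remaining metadata follow -->
    <link rel="stylesheet" href="/css/full.css">
  </head>

The exact order that works best will differ from site to site; the point is that critical CSS comes first and that nothing preloaded competes with it for bandwidth.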

Moving From Automated Critical CSS Back To Manual Critical CSS

Moving the <head> tags around was a simple part of the story though. A more difficult one was the generation and management of critical CSS files. Back in 2017, we manually handcrafted critical CSS for every template, by collecting all of the styles required to render the first 1000 pixels in height across all screen widths. This of course was a cumbersome and slightly uninspiring task, not to mention maintenance issues for taming a whole family of critical CSS files and a full CSS file.

So we looked into options for automating this process as part of the build routine. There wasn’t really a shortage of tools available, so we tested a few and picked one to run with. We managed to get it up and running quite quickly. The output seemed to be good enough for an automated process, so after a few configuration tweaks, we plugged it in and pushed it to production. That happened around July–August last year, which is nicely visualized in the spike and performance drop in the CrUX data above. We kept going back and forth with the configuration, often having trouble with simple things like adding particular styles or removing others (for example, cookie consent prompt styles that aren’t actually included on a page until the cookie script has initialized).

In October, we introduced some major layout changes to the site, and when looking into the critical CSS, we ran into exactly the same issues yet again — the generated output was quite verbose, and it wasn’t quite what we wanted. So as an experiment in late October, we joined forces to revisit our critical CSS approach and study how much smaller a handcrafted critical CSS file would be. We took a deep breath and spent days with the code coverage tool on key pages. We grouped CSS rules manually and removed duplicates and legacy code in both places — the critical CSS and the main CSS. It was a much-needed cleanup indeed, as many styles that were written back in 2017–2018 had become obsolete over the years.

As a result, we ended up with three handcrafted critical CSS files, and three more are currently a work in progress.

The files are inlined in the head of each template, and at the moment they are duplicated in the monolithic CSS bundle that contains everything ever used (or not really used anymore) on the site. We are now looking into breaking down the full CSS bundle into a few CSS packages, so that a reader of the magazine wouldn’t download styles for the job board or book pages; on reaching those pages, they would get a quick render with critical CSS and then receive the rest of that page’s CSS asynchronously — only on that page.
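One common way to load that non-critical, page-specific CSS without blocking rendering is the preload-and-swap pattern. Here is a rough sketch; the file name is illustrative, and this isn’t necessarily how we will ship it:

  <!-- Critical CSS for this template, inlined as before -->
  <style>/* critical CSS */</style>

  <!-- Page-specific, non-critical CSS, fetched without blocking the render -->
  <link rel="preload" href="/css/job-board.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/job-board.css"></noscript>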

Admittedly, the handcrafted critical CSS files weren’t much smaller: we reduced the size of the critical CSS files by around 14%. However, they included everything we needed, in the right order, from top to bottom, without duplicates and overriding styles. This seemed to be a step in the right direction, and it gave us a Lighthouse boost of another 3–4 points. We were making progress.

Changing The Web Font Loading

With font-display at our fingertips, font loading seems to be a problem of the past. Unfortunately, it isn’t quite that simple in our case. You, dear readers, seem to visit a number of articles on Smashing Magazine. You also frequently return to the site to read yet another article — perhaps a few hours or days later, or perhaps a week later. One of the issues we had with font-display used across the site was that for readers who moved between articles a lot, we noticed plenty of flashes between the fallback font and the web font (which shouldn’t normally happen, as fonts would be properly cached).

That didn’t feel like a decent user experience, so we looked into options. On Smashing, we are using two main typefaces — Mija for headings and Elena for body copy. Mija comes in two weights (Regular and Bold), while Elena comes in three (Regular, Italic, Bold). We dropped Elena’s Bold Italic years ago during the redesign because we used it on only a few pages. We subset the other fonts by removing unused characters and Unicode ranges.

Our articles are mostly set in text, so we’ve discovered that most of the time on the site the Largest Contentful Paint is either the first paragraph of text in an article or the photo of the author. That means we need to take extra care to ensure that the first paragraph appears quickly in a fallback font, while gracefully changing over to the web font with minimal reflows.

Take a close look at the initial loading experience of the front page (slowed down three times).

The first Long Task there occurred due to expensive layout recalculations caused by the change of the fonts (from fallback font to web font), causing over 290ms of extra work (on a fast laptop and a fast connection). By removing stage one from the font loading alone, we were able to gain around 80ms back. That wasn’t good enough, though, because we were way beyond the 50ms budget. So we started digging deeper.

The main reason recalculations happened was simply the huge differences between the fallback fonts and the web fonts. By matching the line-height and sizes of fallback fonts and web fonts, we were able to avoid many situations where a line of text would wrap onto a new line in the fallback font, but then get slightly smaller and fit on the previous line, causing a major change in the geometry of the entire page and, consequently, massive layout shifts. We played with letter-spacing and word-spacing as well, but it didn’t produce good results.
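These days, @font-face metric override descriptors make this kind of matching much easier. They weren’t an option for us at the time (we matched sizes and line-heights by hand), but here is a rough sketch of the idea; the override values below are made up for illustration:

  /* The web font */
  @font-face {
    font-family: Elena;
    src: url("/fonts/elena-regular.woff2") format("woff2");
    font-display: swap;
  }

  /* A local fallback, adjusted to roughly match the web font's metrics
     so that lines wrap the same way and the swap causes little to no
     layout shift. The values below are illustrative, not the real ones. */
  @font-face {
    font-family: "Elena Fallback";
    src: local("Georgia");
    size-adjust: 104%;
    ascent-override: 82%;
    descent-override: 20%;
    line-gap-override: 0%;
  }

  body {
    font-family: Elena, "Elena Fallback", Georgia, serif;
  }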

With these changes, we were able to cut another 50–80ms, but we weren’t able to reduce it below 120ms without first displaying the content in a fallback font and then switching to the web font afterwards. Obviously, this should massively affect only first-time visitors, as subsequent page views are rendered with the fonts retrieved directly from the service worker’s cache, without costly reflows due to the font switch.

By the way, it’s quite important to note that in our case, most Long Tasks weren’t caused by massive JavaScript but instead by layout recalculations and parsing of the CSS, which meant that we needed to do a bit of CSS cleaning, especially watching out for situations when styles are overwritten. In some ways that was good news, because we didn’t have to deal with complex JavaScript issues that much. However, it turned out not to be straightforward, as we are still cleaning up the CSS to this very day. We were able to remove two Long Tasks for good, but we still have a few outstanding ones and quite a way to go. Fortunately, most of the time we aren’t way above the magical 50ms threshold.

The much bigger issue was the JavaScript bundle we were serving, occupying the main thread for a whopping 580ms. Most of this time was spent in booting up app.js which contains React, Redux, Lodash, and a Webpack module loader. The only way to improve performance with this massive beast was to break it down into smaller pieces. So we looked into doing just that.

With Webpack, we split up the monolithic bundle into smaller chunks with code-splitting, about 30KB per chunk. We did some package.json cleansing and a version upgrade for all production dependencies, adjusted the browserslistrc setup to address the two latest browser versions, upgraded Webpack and Babel to the latest versions, moved to Terser for minification, and used ES2017 (+ browserslistrc) as a target for script compilation.

We also used BabelEsmPlugin to generate modern versions of existing dependencies. Finally, we’ve added prefetch links to the header for all necessary script chunks and refactored the service worker, migrating to Workbox with Webpack (workbox-webpack-plugin).
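Pieced together, the relevant bits of the setup looked roughly like the sketch below. This is a simplified illustration with made-up paths and values, not our production configuration:

  // webpack.config.js (simplified sketch)
  const path = require('path');
  const TerserPlugin = require('terser-webpack-plugin');
  const WorkboxPlugin = require('workbox-webpack-plugin');

  module.exports = {
    entry: { app: './src/app.js' },
    output: {
      filename: '[name].[contenthash].js',
      path: path.resolve(__dirname, 'dist'),
    },
    optimization: {
      // Break the monolithic bundle into chunks of roughly 30KB
      splitChunks: { chunks: 'all', maxSize: 30000 },
      minimize: true,
      minimizer: [new TerserPlugin()],
    },
    module: {
      rules: [
        {
          test: /\.js$/,
          exclude: /node_modules/,
          // Babel's preset-env picks up compilation targets from .browserslistrc
          use: 'babel-loader',
        },
      ],
    },
    plugins: [
      // Generate the service worker that precaches static assets
      new WorkboxPlugin.GenerateSW({ clientsClaim: true, skipWaiting: true }),
    ],
  };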

Remember when we switched to the new navigation back in mid-2020, just to see a huge performance penalty as a result? The reason for it was quite simple. While in the past the navigation was just static plain HTML and a bit of CSS, with the new navigation, we needed a bit of JavaScript to act on opening and closing of the menu on mobile and on desktop. That was causing rage clicks when you would click on the navigation menu and nothing would happen, and of course, had a penalty cost in Time-To-Interactive scores in Lighthouse.

We removed the script from the bundle and extracted it as a separate script. Additionally, we did the same thing for other standalone scripts that were used rarely — for syntax highlighting, tables, video embeds and code embeds — and removed them from the main bundle; instead, we granularly load them only when needed.

However, what we didn’t notice for months was that although we removed the navigation script from the bundle, it was loading only after the entire app.js bundle had been evaluated, which wasn’t really helping Time To Interactive. We fixed it by preloading nav.js and deferring it to execute in the order of appearance in the DOM, and managed to save another 100ms with that operation alone. By the end, with everything in place, we were able to bring the task down to around 220ms.
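In markup terms, the fix boiled down to something like this (the paths are illustrative):

  <!-- Fetch the navigation script early... -->
  <link rel="preload" href="/js/nav.js" as="script">

  <!-- ...and let deferred scripts execute in DOM order, so nav.js
       no longer waits for the whole app.js bundle to be evaluated -->
  <script src="/js/nav.js" defer></script>
  <script src="/js/app.js" defer></script>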

We managed to get some improvements in place, but still have quite a way to go, with further React and Webpack optimizations on our to-do list. At the moment we still have three major Long Tasks — the font switch (120ms), app.js execution (220ms), and style recalculations due to the size of the full CSS (140ms). For us, that means cleaning up and breaking up the monolithic CSS next.

It’s worth mentioning that these results are really best-case results. On a given article page we might have a large number of code embeds and video embeds, along with other third-party scripts that would require a separate conversation.

Dealing With 3rd-Parties

Fortunately, our third-party scripts footprint (and the impact of their friends’ fourth-party-scripts) wasn’t huge from the start. But when these third-party scripts accumulated, they would drive performance down significantly. This goes especially for video embedding scripts, but also syntax highlighting, advertising scripts, promo panels scripts and any external iframe embeds.

Obviously, we defer all of these scripts to start loading after the DOMContentLoaded event, but once they finally come on stage, they cause quite a bit of work on the main thread. This shows up especially on article pages, which are obviously the vast majority of content on the site.
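The deferral itself is simple enough; a minimal sketch (the script URL is a placeholder):

  // Inject a third-party script only after the document has been parsed
  document.addEventListener('DOMContentLoaded', () => {
    const script = document.createElement('script');
    script.src = 'https://third-party.example.com/embed.js';
    script.async = true;
    document.body.appendChild(script);
  });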

The first thing we did was allocate proper space for all assets that are injected into the DOM after the initial page render. That meant setting width and height for all advertising images and defining the styling of code snippets up front. We found out that because all the scripts were deferred, new styles were invalidating existing styles, causing massive layout shifts for every code snippet that was displayed. We fixed that by adding the necessary styles to the critical CSS on the article pages.

We’ve re-established a strategy for optimizing images (preferably AVIF or WebP — still work in progress though). All images below the 1000px height threshold are natively lazy-loaded (with <img loading=lazy>), while the ones on the top are prioritized (<img loading=eager>). The same goes for all third-party embeds.
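In markup, the combination looks roughly like this (file names and dimensions are illustrative):

  <!-- Above-the-fold image: prioritized, with explicit dimensions -->
  <img src="/images/hero.jpg" width="1200" height="675"
       alt="Article illustration" loading="eager">

  <!-- Below-the-fold image: modern formats with a fallback, lazy-loaded -->
  <picture>
    <source srcset="/images/figure.avif" type="image/avif">
    <source srcset="/images/figure.webp" type="image/webp">
    <img src="/images/figure.jpg" width="800" height="450"
         alt="Diagram from the article" loading="lazy">
  </picture>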

We replaced some dynamic parts with their static counterparts — e.g. while a note about an article saved for offline reading was appearing dynamically after the article was added to the service worker’s cache, now it appears statically as we are, well, a bit optimistic and expect it to be happening in all modern browsers.

As of the moment of writing, we’re preparing facades for code embeds and video embeds as well. Plus, all images that are offscreen will get the decoding=async attribute, so the browser has free rein over when and how it decodes offscreen images, asynchronously and in parallel.
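A facade can be as simple as a static placeholder that only swaps in the real iframe when the reader interacts with it. Here is a rough sketch of that pattern; the markup, class names, and video URL are illustrative, not necessarily what we’ll ship:

  <!-- Lightweight placeholder instead of an eagerly loaded video iframe -->
  <button class="video-facade" data-embed="https://player.vimeo.com/video/123456789">
    <img src="/images/video-poster.jpg" width="800" height="450"
         alt="Play video" loading="lazy">
  </button>

  <script>
    // Swap the placeholder for the real iframe only on demand
    document.querySelectorAll('.video-facade').forEach((button) => {
      button.addEventListener('click', () => {
        const iframe = document.createElement('iframe');
        iframe.src = button.dataset.embed + '?autoplay=1';
        iframe.width = 800;
        iframe.height = 450;
        iframe.allow = 'autoplay; fullscreen';
        button.replaceWith(iframe);
      });
    });
  </script>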

To ensure that our images always include width and height attributes, we’ve also modified Harry Roberts’ snippet and Tim Kadlec’s diagnostics CSS to highlight whenever an image isn’t served properly. It’s used in development and editing but obviously not in production.
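The core of that kind of diagnostics stylesheet is just a handful of attribute selectors. A simplified sketch of the idea (the exact checks and colors here are illustrative):

  /* Development only: flag images missing explicit dimensions */
  img:not([width]),
  img:not([height]) {
    outline: 5px solid red;
  }

  /* And flag images missing alt text while we're at it */
  img:not([alt]) {
    outline: 5px solid orange;
  }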

One technique that we used frequently to track what exactly is happening as the page is being loaded was slow-motion loading.

First, we’ve added a simple line of code to the diagnostics CSS, which provides a noticeable outline for all elements on the page.

* { outline: 3px solid red }

Then we record a video of the page loading on both a slow and a fast connection, and rewatch it at reduced playback speed, moving back and forth to identify where massive layout shifts happen.

The reason for the poor score on mobile is clearly the poor Time To Interactive and the poor Total Blocking Time due to the booting of the app and the size of the full CSS file. So there is still some work to be done there.

As for the next steps, we are currently looking into further reducing the size of the CSS, and specifically breaking it down into modules, similarly to JavaScript, so that some parts of the CSS (e.g. checkout, job board, books/eBooks) are loaded only when needed.

We are also exploring options for further bundling experimentation on mobile to reduce the performance impact of app.js, although it seems to be non-trivial at the moment. Finally, we’ll be looking into alternatives to our cookie prompt solution, rebuilding our containers with CSS clamp(), replacing the padding-bottom ratio technique with aspect-ratio, and looking into serving as many images as possible in AVIF.
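As one small example, the aspect-ratio swap looks like this (a sketch using a 16:9 embed container as an illustration):

  /* Old approach: reserve space for a 16:9 embed with a padded box */
  .embed-container {
    position: relative;
    height: 0;
    padding-bottom: 56.25%; /* 9 / 16 */
  }

  /* New approach: let the browser reserve the space directly */
  .embed-container {
    aspect-ratio: 16 / 9;
    height: auto;
    padding-bottom: 0;
  }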

That’s It, Folks!

Hopefully, this little case study will be useful to you, and perhaps there are one or two techniques you might be able to apply to your project right away. In the end, performance is the sum of all the fine little details that, when added up, make or break your customer’s experience.

While we are very committed to getting better at performance, we also work on improving accessibility and the content of the site.

So if you spot anything that’s not quite right or anything that we could do to further improve Smashing Magazine, please let us know in the comments to this article!

Also, if you’d like to stay updated on articles like this one, please subscribe to our email newsletter for friendly web tips, goodies, tools and articles, and a seasonal selection of Smashing cats.

Porch Piracy Is Growing

Porch piracy is on the rise, with 43 percent of American consumers surveyed saying they had an ecommerce delivery pilfered from their front porch in 2020, according to a recent report.

C+R Research asked 2,000 self-reporting shoppers about delivery theft in November 2020. Some 58 percent of the respondents resided in single-family dwellings, 29 percent lived in apartments, and condos or other living arrangements made up the balance.

The survey was conducted on Amazon’s Mechanical Turk, so the results might not be as accurate as a phone survey, other online survey tools, or a review of retailer data, given that Mechanical Turk respondents are paid and more likely to use Amazon and Amazon Prime than U.S. consumers generally. Nonetheless, the survey may be an indicator of a growing problem.

C+R has used a similar questionnaire for the past few years, and the reports of delivery theft have risen. In 2018, 31 percent of the shoppers who C+R surveyed reported having at least one package stolen as it lay waiting outside of their front door. In 2019, that percentage had risen to 36 percent, and, as mentioned above, in 2020 43 percent of folks asked said a crook had taken a package from their front step.

Similar reports or surveys from Canary (a home security company), Security.org, and others put the number of American shoppers who have had a package taken from their front door between 18 percent and 40 percent. Most published surveys that I’ve reviewed report theft of this type is increasing.

More Porch Pirates

Many factors could be contributing to the rise in ecommerce package theft from Americans’ front steps. These factors include significant growth in ecommerce volume, economic conditions, delivery failures, and even so-called friendly fraud.

Ecommerce growth. Depending on who you ask, U.S. retail ecommerce sales rose somewhere between 20 and 40 percent in 2020. For example, the U.S. Census Bureau said that third-quarter ecommerce sales in 2020 increased 36.7 percent over the same quarter of 2019.

Thus doorstep theft could simply be rising with the number of packages delivered.

Economic conditions. The pandemic-driven recession might also be a factor.

A 2007 study found that the relatively lower rate of property crime in the 1990s may have been related to positive consumer sentiment. If the converse is also true, the current recession might be related to more porch piracy. A 2012 United Nations report, which did not include data from the United States, identified an apparent relationship between economic crises and crime.

Perhaps the Covid recession is contributing to package theft.

Delivery failures. Some of the growth in porch piracy could be related to delivery problems. A customer may assume a package was stolen when, in fact, it was delivered to the wrong address. The retailer says it was shipped. The carrier says it was delivered. But in reality, the box is two doors down or two streets over.

Friendly fraud. Some reports of porch piracy could actually be refund fraud.

“Refund fraud is an easy path for a customer to take if they want to have their cake and eat it too (or, have their watch/shoes/game/etc. and keep the money too),” wrote Shoshana Maraney, content and communications director at fraud prevention firm Identiq.

“They can simply claim that the parcel never arrived (porch pirates are a scourge these days) or that it was broken on arrival. Retailers who aren’t accommodating about refunds tend to receive chargebacks.”

Preventing Piracy

Shoppers can do a lot to stop the theft of ecommerce orders, but they should not have to do it alone. Retailers can help.

For example, retailers can communicate with customers. In 2019, The New York Times ran an article saying that 90,000 packages a day disappear in New York City. If an order comes in from Manhattan, a retailer might send an automatic email describing what a consumer can do to prevent theft. This message could encourage the shopper to meet the package at their door, use an alternative shipping destination such as an office, or have the package held with the carrier for pick up.

Besides communicating with the shopper, a retailer might offer free or low-cost theft insurance or ship items in discreet packages that conceal brand names. It may also be possible with some carriers to schedule delivery only for when a shopper is home and can answer the door.

How To Build A Node.js API For Ethereum Blockchain

Blockchain technology has been on the rise in the past ten years, and has brought a good number of products and platforms to life such as Chainalysis (finance tech), Burstiq (health-tech), Filament (IoT), Opus (music streaming) and Ocular (cybersecurity).

From these examples, we can see that blockchain cuts across many products and use cases — making it very essential and useful. In fintech (finance tech), it’s used as a decentralized ledger for security and transparency at companies like Chain and Chainalysis, and it’s also useful in health tech for securing sensitive health data at Burstiq and Robomed — not to forget media tech such as Opus and Audius, which use blockchain for royalty transparency so that artists receive their full royalties.

Ocular uses security that comes with blockchain for identity management for biometric systems, while Filament uses blockchain ledgers for real-time encrypted communication. This goes to show how essential blockchain has become to us by making our lives better. But what exactly is a blockchain?

A blockchain is a database that is shared across a network of computers. Once a record has been added to the chain, it is quite difficult to change. To ensure that all the copies of the database are the same, the network makes constant checks.

So why do we need blockchain? Blockchain is a safe way to record activities and keep data fresh while maintaining a record of its history, compared to traditional records or databases where hacks, errors, and downtime are very possible. The data can’t be corrupted by anyone or accidentally deleted, and you benefit from both a historical trail of data and an instantly up-to-date record that can’t be erased or become inaccessible due to the downtime of a server.

Because the whole blockchain is duplicated across many computers, any user can view the entire blockchain. Transactions or records are processed not by one central administrator, but by a network of users who work to verify the data and achieve a consensus.
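
As a toy illustration (a few lines of Node.js, nothing like a real network), here is why tampering with an old record is so hard: every block stores the hash of the block before it, so changing one record breaks every link that follows.

// Toy illustration only — not production code.
const crypto = require('crypto');

const sha256 = (data) =>
  crypto.createHash('sha256').update(JSON.stringify(data)).digest('hex');

function addBlock(chain, record) {
  const prevHash = chain.length ? chain[chain.length - 1].hash : '0'.repeat(64);
  const block = { record, prevHash, hash: sha256({ record, prevHash }) };
  chain.push(block);
  return block;
}

const chain = [];
addBlock(chain, { event: 'upload', track: 'track-001' });
addBlock(chain, { event: 'upload', track: 'track-002' });

// Changing the first record invalidates the link the second block recorded.
chain[0].record.track = 'tampered';
console.log(
  sha256({ record: chain[0].record, prevHash: chain[0].prevHash }) === chain[1].prevHash
); // false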

Applications that use blockchain are called dApps (decentralized applications). Looking around today, we’ll mostly find decentralized apps in fintech, but blockchain goes beyond decentralized finance. We have health platforms, music streaming/sharing platforms, e-commerce platforms, cybersecurity platforms, and IoT applications moving towards decentralized applications (dApps), as cited above.

So, when would it make sense to consider using blockchain for our applications, rather than a standard database or record?

Common Applications Of Blockchain

  • Managing And Securing Digital Relationships
    Anytime you want to keep a long-term, transparent record of assets (for example, to record property or apartment rights), blockchain could be the ideal solution. Ethereum ‘Smart contracts’, in particular, are great for facilitating digital relationships. With a smart contract, automated payments can be released when parties in a transaction agree that their conditions have been met.
  • Eliminating Middlemen/Gatekeepers
    For example, most providers currently have to interact with guests via a centralized aggregator platform, like Airbnb or Uber (that, in turn, takes a cut on each transaction). Blockchain could change all that.
    For example, TUI is so convinced of the power of blockchain that it is pioneering ways to connect hoteliers and customers directly. That way, they can transact via blockchain in an easy, safe and consistent way, rather than via a central booking platform.
  • Record Secure Transactions Between Partners To Ensure Trust
    A traditional database may be good for recording simple transactions between two parties, but when things get more complicated, blockchain can help reduce bottlenecks and simplify relationships. What’s more, the added security of a decentralized system makes blockchain ideal for transactions in general.
    An example is the University of Melbourne, which started storing its records on a blockchain. The most promising use case for blockchain in higher education is to transform the “record-keeping” of degrees, certificates, and diplomas. This saves a lot of the cost of dedicated servers for storing records.
  • Keeping Records Of Past Actions For Applications Where Data Is In Constant Flux
    Blockchain is a better, safer way to record activity and keep data fresh while maintaining a record of its history. The data can’t be corrupted by anyone or accidentally deleted, and you benefit from both a historical trail of data plus an instantly up-to-date record. A good example of a use case is blockchain in e-commerce, since both blockchain and e-commerce involve transactions.
    Blockchain makes these transactions safer and faster, and e-commerce activities rely on them. Blockchain technology enables users to share and securely store digital assets both automatically and manually. This technology has the capacity to handle user activities such as payment processing, product searches, product purchases, and customer care. It also reduces the expenses spent on inventory management and payment processing.
  • Decentralisation Makes It Possible To Be Used Anywhere
    Before, we had to restrict ourselves to a particular region for various reasons: currency exchange policies and the limitations of payment gateways made it hard to access the financial resources of countries outside your region or continent. With the rise and power of blockchain’s decentralized, peer-to-peer system, it becomes easier to work with other countries.
    For example, an e-commerce store in Europe can have consumers in Africa and not require a middleman to process their payment requests. Furthermore, these technologies are opening doors for online retailers to make use of consumer markets in faraway countries with Bitcoin or another cryptocurrency.
  • Blockchain Is Technology-Neutral
    Blockchain works with any technology stack a developer already uses. You don’t have to learn Node as a Python dev, or pick up Golang, to use blockchain. This makes blockchain very easy to adopt.
    We can actually use it directly with our front-end apps in Vue/React, with the blockchain acting as our sole database for simple, uncomplicated tasks and use cases like uploading data or getting hashes to display records to our users, or for building front-end games like casino and betting games (in which a high amount of trust is needed). Also, with the power of web3, we can store data on the chain directly (see the short sketch after this list).
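
Here is that short sketch: a minimal, hedged example of reading from an Ethereum node with web3.js from any JavaScript app. It assumes a local node or emulator is listening on http://localhost:8545 (we’ll set one up with ganache-cli later in this article).

// Minimal sketch: connect to an Ethereum node and read an account balance.
const Web3 = require('web3');
const web3 = new Web3('http://localhost:8545'); // local node or emulator

(async () => {
  const accounts = await web3.eth.getAccounts();
  const balance = await web3.eth.getBalance(accounts[0]); // balance in wei
  console.log(`${accounts[0]} holds ${web3.utils.fromWei(balance, 'ether')} ETH`);
})();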

Now, we have seen quite a number of the advantages of using blockchain, but when should we not bother using a blockchain at all?

Disadvantages Of Blockchain

  • Reduced Speed For Digital Transactions
    Blockchains require huge amounts of computing power, which tends to reduce the speed of digital transactions. Although there are workarounds, it is advisable to use centralized databases when you need high-speed transactions in milliseconds.
  • Data Immutability
    Data immutability has always been one of the biggest disadvantages of the blockchain. It is clear that multiple systems benefit from it, including supply chains, financial systems, and so on. However, it suffers from the fact that once data is written, it cannot be removed. Every person on earth has the right to privacy, but if that person uses a digital platform that runs on blockchain technology, they will be unable to remove their trace from the system when they no longer want it there. In simple words, there is no way to remove their trace, which undermines their right to privacy.
  • Requires Expert Knowledge
    Implementing and managing a blockchain project is hard. It requires thorough knowledge to go through the whole process, which is why blockchain specialists and experts are hard to come by: it takes a lot of time and effort to train one. Hence this article is a good place to start, and a good guide if you have already started.
  • Interoperability
    Multiple blockchain networks are each working to solve the distributed ledger problem in their own way, which makes it hard to relate them or integrate them with each other. This makes communication between different chains hard.
  • Legacy Applications Integration
    Many businesses and applications still use legacy systems and architecture; adopting blockchain technology requires a complete overhaul of these systems, which, I must say, is not feasible for many of them.

Blockchain is still evolving and maturing all the time, so don’t be surprised if these cons turn into pros later on. Bitcoin, a cryptocurrency, is one popular example of a blockchain; another popular blockchain that has been on the rise aside from the Bitcoin cryptocurrency is Ethereum. Bitcoin focuses on cryptocurrency, while Ethereum focuses more on smart contracts, which have been the major driving force for the new tech platforms.

Recommended reading: Bitcoin vs. Ethereum: What’s the Difference?

Let’s Start Building Our API

With a solid understanding of blockchain, now let’s look at how to build on the Ethereum blockchain and integrate it with a standard API in Node.js. The ultimate goal is to get a good understanding of how dApps and blockchain platforms are built.

Most dApps have a similar architecture and structure. Basically, we have a user that interacts with the dApp frontend — either web or mobile — which then interacts with the backend APIs. The backend then, on request, interacts with the smart contract(s) or blockchain through public nodes that run Node.js applications, or it does so directly by running the blockchain node software itself. There are still many decisions in between these processes, from choosing to build a fully decentralized application or a semi-decentralized application, to choosing what should be decentralized and how to safely store private keys.

Recommended reading: Decentralized Applications Architecture: Back End, Security and Design Patterns

Things We Should Know First

For this tutorial, we’re going to try to build the backend of a decentralized music store app that uses the power of Ethereum blockchain for storing music and sharing it for downloads or streaming.

The basic structure of the application we’re trying to build has three parts:

  1. Authentication, which is done by email; in a full app, we would of course also add an encrypted password.
  2. Storage of data: the music data is first stored in IPFS, and the storage address is stored on the blockchain for retrieval.
  3. Retrieval: any authenticated user can access the stored data on our platform and use it.

We will be building this with Node.js, but you can also build with Python or any other programming language. We’ll also see how to store media data in IPFS, get the address, and write functions to store this address in, and retrieve it from, a blockchain with the Solidity programming language.

Here are some tools that we should have at our disposal for building or working with Ethereum and Node.js.

  • Node.js
    The first requirement is a Node application. We’re building a Node.js app, so we need the runtime. Please make sure you have Node.js installed — and please download the latest long-term support (LTS) binary.
  • Truffle Suite
    Truffle is a contract development and testing environment, as well as an asset pipeline, for the Ethereum blockchain. It provides an environment for compiling, pipelining, and running scripts. When it comes to developing for blockchain, Truffle is a popular stop. Check out Truffle Suite: Sweet Tools for Smart Contracts.
  • Ganache CLI
    Another tool that works hand in hand with Truffle is Ganache CLI. It’s built and maintained by the Truffle Suite team. After building and compiling, you need an emulator to develop and run blockchain apps and then deploy smart contracts to be used. Ganache makes it easier for you to deploy a contract to an emulator without using actual money for transaction costs, gives you recyclable accounts, and much more. Read more on Ganache CLI at Ganache CLI and Ganache.
  • Remix
    Remix is like an alternative to Ganache, but also comes with a GUI to help navigate deploying and testing of Ethereum smart contracts. You can learn more about it on Remix — Ethereum IDE & community. All you have to do is to visit https://remix.ethereum.org and use the GUI to write and deploy smart contracts.
  • Web3
    Web3 is a collection of libraries that allows you to interact with an Ethereum node, whether a local or a remote node, through HTTP, IPC or WebSockets. Intro to Web3.js · Ethereum Blockchain Developer Crash Course is a good place to learn a bit about Web3.
  • IPFS
    A core protocol that is being used in building dApps. The InterPlanetary File System (IPFS) is a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS Powers the Distributed Web explains more on IPFS and how it’s usually used.

Creating A Backend API From Scratch

So first we have to create a backend to be used, and we’re using Node.js. When we want to create a new Node.js API, the first thing we’re going to do is initialize an npm package. As you probably know, npm stands for Node Package Manager, and it comes prepackaged with the Node.js binary. So we create a new folder and call it “blockchain-music”. We open the terminal in that folder directory, and then run the following command:

$ npm init -y && touch server.js routes.js

This starts up the project with a package.json file and answers yes to all prompts. Then we also create a server.js file and a routes.js file for writing the routes functions in the API.

After that, you’ll have to install the packages we need to make our build easy and straightforward. This process is a continuous one, i.e. you can install a package at any time during the development of your project.

Let’s install the most important ones we need right now: nodemon, truffle-contract, dotenv, mongodb, shortid, express, and web3.

You’ll also have to install Truffle.js globally, so you can use it everywhere in your local environment. If you want to install all of them at once, run the following code in your Terminal:

$ npm install nodemon truffle-contract dotenv mongodb shortid express web3 --save && npm install truffle -g

The --save flag is to save the package’s name in the package.json file. The -g flag is to store this particular package globally, so that we can use it in any project we are going to work on.

We then create an .env file where we can store our MongoDB database secret URI for use. We do so by running touch .env in the Terminal. If you don’t have a database account with MongoDB yet, start with the MongoDB page first.

The dotenv package exports our stored variable to the Node.js process environment. Please make sure that you don’t push the .env file when pushing to public repositories to avoid leaking your passwords and private data.
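
For example, a minimal .env for this project might look like the comment below (the values are placeholders, not real credentials), and once dotenv has run, the values show up on process.env:

// .env — placeholder values only; keep this file out of version control:
//   DB=mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/music
//   PORT=8082
//
// After require('dotenv').config() runs, the values are available in code as:
console.log(process.env.DB);   // the MongoDB connection string
console.log(process.env.PORT); // the port our server will listen on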

Next, we have to add scripts for build and development phases of our project in our package.json file. Currently our package.json looks like this:

{ "name": "test", "version": "1.0.0", "description": "", "main": "server.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [], "author": "", "license": "ISC", "dependencies": { "express": "^4.17.1", "socket.io": "^2.3.0", "truffle-contract": "^4.0.31", "web3": "^1.3.0" }
}

We’re then going to add a start script to the package.json file that uses nodemon, so that the server restarts itself whenever we make a change, and a build script that runs the node server directly. The result could look like this:

{ "name": "test", "version": "1.0.0", "description": "", "main": "server.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "nodemon server.js", "build": "node server.js" }, "keywords": [], "author": "", "license": "ISC", "dependencies": { "express": "^4.17.1", "socket.io": "^2.3.0", "truffle-contract": "^4.0.31", "web3": "^1.3.0" }
}

Next, we have to initialize Truffle for use in our smart contract by using the Truffle package we installed globally earlier. In the same folder of our projects, we run the following command below in our terminal:

$ truffle init

Then we can start writing our code in our server.js file. Again, we’re trying to build a simple decentralized music store app, where customers can upload music for every other user to access and listen to.

Our server.js should be clean for easy coupling and decoupling of components, so routes and other functionalities will be put in other files like the routes.js. Our example server.js could be:

require('dotenv').config();
const express = require('express')
const app = express()
const routes = require('./routes')
const Web3 = require('web3');
const mongodb = require('mongodb').MongoClient
const contract = require('truffle-contract');

app.use(express.json())

mongodb.connect(process.env.DB, { useUnifiedTopology: true }, (err, client) => {
  const db = client.db('Cluster0')
  //home
  routes(app, db)
  app.listen(process.env.PORT || 8082, () => {
    console.log('listening on port 8082');
  })
})

Basically, above we import the libraries that we need with require, then add a middleware that allows the use of JSON in our API using app.use, then connect to our MongoDB database and get the database access, and then we specify which database cluster we’re trying to access (for this tutorial it is “Cluster0”). After this, we call the routes function imported from the routes file. Finally, we listen for any attempted connections on port 8082.

This server.js file is just a barebone to get the application started. Notice that we imported routes.js. This file will hold the route endpoints for our API. We also imported the packages we needed to use in the server.js file and initialized them.

We’re going to create five endpoints for user consumption (a short client-side usage sketch follows the list):

  1. Registration endpoint for registering users just via email. Ideally, we’d do so with an email and password, but as we just want to identify each user, we’re not going to venture into password security and hashing for the sake of the brevity of this tutorial.
    POST /register
    Requirements: email
    
  2. Login endpoint for users by email.
    POST /login
    Requirements: email
    
  3. Upload endpoint for users — the API that gets the data of the music file. The frontend will convert the MP3/WAV files to an audio buffer and send that buffer to the API.
    POST /upload
    Requirements: name, title of music, music file buffer or URL stored
    
  4. Access endpoint that will provide the music buffer data to any registered user that requests it, and records who accessed it.
    GET /access/{email}/{id}
    Requirements: email, id
    
  5. We also want to provide access to the entire music library and return the results to a registered user.
    GET /access/{email}
    Requirements: email
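
Here is that usage sketch: a hedged example of how any JavaScript client might call these endpoints once the server is listening locally on port 8082. The email, artist, and title values are placeholders.

// Run in a browser console or any runtime with fetch available, inside an async function.
const base = 'http://localhost:8082';

// 1. Register a user by email
await fetch(`${base}/register`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email: 'listener@example.com' }),
});

// 3. Upload a track (a real client would send the decoded audio buffer)
await fetch(`${base}/upload`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Some Artist', title: 'First Song', buffer: '<audio buffer>' }),
});

// 5. List the library as a registered user
const library = await fetch(`${base}/access/listener@example.com`);
console.log(await library.json());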
    

Then we write our route functions in our routes.js file. We utilize the database storage and retrieval features, and then make sure we export the routes function at the end of the file so it can be imported in another file or folder.

const shortid = require('short-id')

function routes(app, db){
  app.post('/register', (req, res) => {
    let email = req.body.email
    let idd = shortid.generate()
    if (email) {
      db.findOne({ email }, (err, doc) => {
        if (doc) {
          res.status(400).json({ "status": "Failed", "reason": "Already registered" })
        } else {
          db.insertOne({ email })
          res.json({ "status": "success", "id": idd })
        }
      })
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })

  app.post('/login', (req, res) => {
    let email = req.body.email
    if (email) {
      db.findOne({ email }, (err, doc) => {
        if (doc) {
          res.json({ "status": "success", "id": doc.id })
        } else {
          res.status(400).json({ "status": "Failed", "reason": "Not recognised" })
        }
      })
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })

  app.post('/upload', (req, res) => {
    let buffer = req.body.buffer
    let name = req.body.name
    let title = req.body.title
    if (buffer && title) {
      // to be filled in once IPFS and the smart contract are wired up
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })

  app.get('/access/:email/:id', (req, res) => {
    if (req.params.id && req.params.email) {
      // to be filled in once IPFS and the smart contract are wired up
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })
}

module.exports = routes

Inside this routes function, we register several handlers on the app parameter and use the db parameter for data access. These are the API endpoint functions that let a user target a specific endpoint in the URL; the matching handler is executed and its result is returned as the response to the incoming request.

We have four major endpoint functions:

  1. get: for reading record operations
  2. post: for creating record operations
  3. put: for updating record operations
  4. delete: for deleting record operations

In this routes function, we used the get and post operations. We use post for registration, login, and upload operations, and get for accessing the data operations. For a little bit more explanation on that, you can check out Jamie Corkhill’s article on “How To Get Started With Node: An Introduction To APIs, HTTP And ES6+ JavaScript”.

In the code above, we can also see some database operations, like in the register route. We stored the email of a new user with db.insertOne and checked for the email in the login function with db.findOne. Now, before we can do all of this, we need to name a collection or table with the db.collection method. That’s exactly what we’ll be covering next.
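
As a small preview of what that will look like (we’ll do exactly this in routes.js later):

// Preview: `dbe` is the database handle that server.js obtains from MongoClient
// and passes into this function.
function routes(app, dbe) {
  const db = dbe.collection('music-users')    // registered users
  const music = dbe.collection('music-store') // track metadata and IPFS hashes
  // ...the endpoint handlers then call db.findOne, db.insertOne, and so on
}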

Note: To learn more about the database operations in MongoDB, check the mongo Shell Methods documentation.

Building A Simple Blockchain Smart Contract With Solidity

Now we’re going to write a Blockchain contract in Solidity (that’s the language that smart contracts are written in) to simply store our data and retrieve it when we need it. The data we want to store is the music file data, meaning that we have to upload the music to IPFS, then store the address of the buffer in a blockchain.

First, we create a new file in the contracts folder and name it Inbox.sol. To write a smart contract, it’s useful to have a good understanding of Solidity, but it’s not difficult as it’s similar to JavaScript.

Note: If you’re interested in learning more about Solidity, I’ve added a few resources at the bottom of the article to get you started.

pragma solidity ^0.5.0;

contract Inbox{
    //Structure
    mapping (string=>string) public ipfsInbox;

    //Events
    event ipfsSent(string _ipfsHash, string _address);
    event inboxResponse(string response);

    //Modifiers
    modifier notFull (string memory _string) {
        bytes memory stringTest = bytes(_string);
        require(stringTest.length==0);
        _;
    }

    // An empty constructor that creates an instance of the contract
    constructor() public{}

    // Takes in the receiver's address and an IPFS hash. Places the IPFS address in the receiver's inbox
    function sendIPFS(string memory _address, string memory _ipfsHash) notFull(ipfsInbox[_address]) public{
        ipfsInbox[_address] = _ipfsHash;
        emit ipfsSent(_ipfsHash, _address);
    }

    // Retrieves the hash
    function getHash(string memory _address) public view returns(string memory) {
        string memory ipfs_hash = ipfsInbox[_address];
        //emit inboxResponse(ipfs_hash);
        return ipfs_hash;
    }
}

In our contract, we have two main functions: the sendIPFS and the getHash functions. Before we talk about the functions, notice that we first had to define a contract called Inbox. Inside this contract, we define the ipfsInbox mapping (our storage structure), then the events, and then the modifier.

After defining the structure, events, and modifier, we initialize the contract with an empty constructor. Then we define the functions themselves.

The sendIPFS function is where the user inputs the identifier and hash address, after which it is stored on the blockchain. The getHash function retrieves the hash address when it is given the identifier. Again, the logic behind this is that we ultimately want to store the music in IPFS. To test how it works, you can hop over to the Remix IDE, copy, paste, and test your contract, as well as debug any errors and run it again (hopefully it won’t be needed!).

After testing that our code works correctly in Remix, let’s move on to compiling it locally with the Truffle suite. But first, we need to make some changes to our files and set up our emulator using ganache-cli:

First, let’s install ganache-cli. In the same directory, run the following command in your terminal:

$ npm install ganache-cli -g

Then let’s open another Terminal and run another command in the same folder:

$ ganache-cli

This starts up the emulator for our blockchain contract to connect and work. Minimize the Terminal and continue with the other Terminal you’ve been using.

Now go to the truffle.js file if you’re using a Linux/Mac OS or truffle-config.js in Windows, and modify this file to look like this:

const path = require("path");

module.exports = {
  // to customize your Truffle configuration!
  contracts_build_directory: path.join(__dirname, "/build"),
  networks: {
    development: {
      host: "127.0.0.1",
      port: 8545,
      network_id: "*" // Match any network id
    }
  }
};

Basically, what we did is add the path of the build folder, where the smart contracts are converted to JSON files, and specify the network that Truffle should use for migration.

Then, also in the migrations folder, create a new file named 2_migrate_inbox.js and add the following code inside the files:

var IPFSInbox = artifacts.require("./Inbox.sol");

module.exports = function(deployer) {
  deployer.deploy(IPFSInbox);
};

We did this so that, during the Truffle migration, the deployer function picks up the contract file and deploys it automatically.

After the above changes we run:

$ truffle compile

We should see some messages at the end which show successful compilation, such as:

> Compiled successfully using:
   - solc: 0.5.16+commit.9c3226ce.Emscripten.clang

Next, we migrate our contract by running:

$ truffle migrate

Once we have successfully migrated our contracts, we should have something like this at the end:

Summary
=======
> Total deployments: 1
> Final cost: 0.00973432 ETH

And we’re almost done! We have built our API with Node.js, and also set up and built our smart contract.

We should also write tests for our contract to verify that it behaves as expected. The tests are usually placed in the test folder. An example test, written in a file named InboxTest.js in the test folder, is:

const IPFSInbox = artifacts.require("./Inbox.sol")

contract("IPFSInbox", accounts => {
  it("emit event when you send an ipfs address", async () => {
    // wait for the contract
    const ipfsInbox = await IPFSInbox.deployed()

    // set a variable to false and flip it in the event listener
    eventEmitted = false
    await ipfsInbox.ipfsSent((err, res) => {
      eventEmitted = true
    })

    // call the contract function which sends the ipfs address
    await ipfsInbox.sendIPFS(accounts[1], "sampleAddress", { from: accounts[0] })

    assert.equal(eventEmitted, true, "sending an IPFS request does not emit an event")
  })
})

So we run our test by running the following:

$ truffle test

It tests our contract with the files in the test folder and shows the number of passed and failed tests. For this tutorial, we should get:

$ truffle test
Using network 'development'.
Compiling your contracts...
===========================
> Compiling .\contracts\Inbox.sol
> Artifacts written to C:\Users\Ademola\AppData\Local\Temp\test--2508-n0vZ513BXz4N
> Compiled successfully using:
   - solc: 0.5.16+commit.9c3226ce.Emscripten.clang

  Contract: IPFSInbox
    √ emit event when you send an ipfs address (373ms)

  1 passing (612ms)

Integrating The Smart Contract To The Backend API Using Web3

Most times when you see tutorials, you see decentralized apps built to integrate the frontend directly to the blockchain. But there are times when the integration to the backend is needed as well, for example when using third-party backend APIs and services, or when using blockchain to build a CMS.

The use of Web3 is very important here, as it lets us access remote or local Ethereum nodes and use them in our applications. Before we go on, let’s discuss local and remote Ethereum nodes. Local nodes are deployed on our system with emulators like ganache-cli, while a remote node is deployed on a public test or main network such as Ropsten or Rinkeby. To dive in deeper, you can follow a tutorial on how to deploy on Ropsten, such as the 5-minute guide to deploying smart contracts with Truffle and Ropsten, or you could use the Truffle wallet provider and deploy via An Easier Way to Deploy Your Smart Contracts.

We are using ganache-cli in this tutorial, but if we were deploying on Ropsten, we would copy or store our contract address somewhere (such as in our .env file), then update the server.js file to import web3, import the migrated contract, and set up a Web3 instance.

require('dotenv').config();
const express = require('express')
const app = express()
const routes = require('./routes')
const Web3 = require('web3');
const mongodb = require('mongodb').MongoClient
const contract = require('truffle-contract');
const artifacts = require('./build/Inbox.json');

app.use(express.json())

if (typeof web3 !== 'undefined') {
  var web3 = new Web3(web3.currentProvider)
} else {
  var web3 = new Web3(new Web3.providers.HttpProvider('http://localhost:8545'))
}

const LMS = contract(artifacts)
LMS.setProvider(web3.currentProvider)

mongodb.connect(process.env.DB, { useUnifiedTopology: true }, async (err, client) => {
  const db = client.db('Cluster0')
  const accounts = await web3.eth.getAccounts();
  const lms = await LMS.deployed();
  //const lms = LMS.at(contract_address) for remote nodes deployed on ropsten or rinkeby
  routes(app, db, lms, accounts)
  app.listen(process.env.PORT || 8082, () => {
    console.log('listening on port ' + (process.env.PORT || 8082));
  })
})

In the server.js file, we check whether a web3 instance is already initialized. If not, we initialize it on the network port that we defined earlier (8545). Then we build a contract based on the migrated JSON file and the truffle-contract package, and set the contract provider to the Web3 instance’s provider, which by now must have been initialized.

We then get the accounts with web3.eth.getAccounts. For the development stage, we call the deployed function on our contract abstraction, which asks ganache-cli (still running) for the address of the deployed contract. But if we’ve already deployed our contract to a remote node, we call the function that takes the contract address as an argument instead; the sample call is commented out below the lms variable in the code above. Then we call the routes function, passing in the app instance, the database instance, the contract instance (lms), and the accounts data as arguments. Finally, we listen for requests on port 8082.
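
For completeness, here is a hedged sketch of that remote-node variant. NODE_URL and CONTRACT_ADDRESS are assumed .env entries for this sketch only, not values used elsewhere in the tutorial.

// Hedged sketch: connecting to a remote node (e.g. Ropsten via a hosted provider)
// and pointing at a known contract address instead of calling deployed().
require('dotenv').config();
const Web3 = require('web3');
const contract = require('truffle-contract');
const artifacts = require('./build/Inbox.json');

(async () => {
  const web3 = new Web3(new Web3.providers.HttpProvider(process.env.NODE_URL));
  const LMS = contract(artifacts);
  LMS.setProvider(web3.currentProvider);

  // Depending on the truffle-contract version, at() may return a promise, so we await it.
  const lms = await LMS.at(process.env.CONTRACT_ADDRESS);
  console.log('Connected to Inbox at', lms.address);
})();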

Also, by now, we should have installed the MongoDB package, because we are using it in our API as our database. Once we have that, we move onto the routes page where we use the methods defined in the contract to accomplish tasks like saving and retrieving the music data.

In the end, our routes.js should look like this:

const shortid = require('short-id')
const IPFS = require('ipfs-api');
const ipfs = IPFS({ host: 'ipfs.infura.io', port: 5001, protocol: 'https' });

function routes(app, dbe, lms, accounts){
  let db = dbe.collection('music-users')
  let music = dbe.collection('music-store')

  app.post('/register', (req, res) => {
    let email = req.body.email
    let idd = shortid.generate()
    if (email) {
      db.findOne({ email }, (err, doc) => {
        if (doc) {
          res.status(400).json({ "status": "Failed", "reason": "Already registered" })
        } else {
          db.insertOne({ email })
          res.json({ "status": "success", "id": idd })
        }
      })
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })

  app.post('/login', (req, res) => {
    let email = req.body.email
    if (email) {
      db.findOne({ email }, (err, doc) => {
        if (doc) {
          res.json({ "status": "success", "id": doc.id })
        } else {
          res.status(400).json({ "status": "Failed", "reason": "Not recognised" })
        }
      })
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })

  app.post('/upload', async (req, res) => {
    let buffer = req.body.buffer
    let name = req.body.name
    let title = req.body.title
    let id = shortid.generate() + shortid.generate()
    if (buffer && title) {
      let ipfsHash = await ipfs.add(buffer)
      let hash = ipfsHash[0].hash
      lms.sendIPFS(id, hash, { from: accounts[0] })
        .then((_hash, _address) => {
          music.insertOne({ id, hash, title, name })
          res.json({ "status": "success", id })
        })
        .catch(err => {
          res.status(500).json({ "status": "Failed", "reason": "Upload error occurred" })
        })
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })

  app.get('/access/:email', (req, res) => {
    if (req.params.email) {
      // a GET request has no body, so we read the email from the URL params
      db.findOne({ email: req.params.email }, async (err, doc) => {
        if (doc) {
          let data = await music.find().toArray()
          res.json({ "status": "success", data })
        }
      })
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })

  app.get('/access/:email/:id', (req, res) => {
    let id = req.params.id
    if (req.params.id && req.params.email) {
      db.findOne({ email: req.params.email }, (err, doc) => {
        if (doc) {
          lms.getHash(id, { from: accounts[0] })
            .then(async (hash) => {
              let data = await ipfs.files.get(hash)
              res.json({ "status": "success", data: data.content })
            })
        } else {
          res.status(400).json({ "status": "Failed", "reason": "wrong input" })
        }
      })
    } else {
      res.status(400).json({ "status": "Failed", "reason": "wrong input" })
    }
  })
}

module.exports = routes

At the beginning of the routes file, we imported the short-id package and the ipfs-api client, and then initialized IPFS with the HTTP client using the Infura host ipfs.infura.io on port 5001. This lets us use the IPFS methods to upload and retrieve data from IPFS (check out more here).

In the upload route, we save the audio buffer to IPFS, which is better than storing it directly on the blockchain for anyone, registered or unregistered, to use. Then we save the address of the buffer on the blockchain by generating an ID and using it as the identifier in the sendIPFS function. Finally, we save all the other data associated with the music file to our database. We should not forget to update the arguments of the routes function, since we changed them in the server.js file.

In the access route that uses an id, we retrieve our data by getting the id from the request, using the id to look up the IPFS hash address, and then fetching the audio buffer using that address. But this requires authentication of the user by email, which is done before anything else.

Phew, we’re done! Right now we have an API that can receive requests from users, access a database, and communicate with a node that has the blockchain software running on it. We shouldn’t forget that we have to export our function with module.exports, though!

As we have noticed, our app is a decentralized app. However, it’s not fully decentralized, as we only stored our address data on the blockchain and every other piece of data was stored securely in a centralized database; this is the basis for semi-dApps. So the consumption of data can be done directly via requests or by using a frontend application in JavaScript to send fetch requests.

Our music store backend app can now safely store music data and provide access to anyone who needs it, provided it is a registered user. Using blockchain for music sharing makes it cheaper to store music data while connecting artists directly with users, and perhaps it could help them generate revenue that way. There would be no middleman taking a cut of royalties; instead, all of the revenue would go to the artist as users request their music to either download or stream. A good example of a music streaming application that uses blockchain like this is Opus (OPUS: Decentralized music sharing platform). However, there are also a few others, like Musicoin, Audius, and Resonate.

What Next?

The final step after coding is to start our server by running npm run start or npm run build and to test our backend endpoints in either the browser or Postman. After running and testing our API, we could add more features to our backend and blockchain smart contract. If you’d like more guidance on that, please check the further reading section for more articles.

It’s worth mentioning that it is critical to write unit and integration tests for our API to ensure correct and desirable behaviors. Once we have all of that done, we can deploy our application on the cloud for public use. This can be done on its own with or without adding a frontend (microservices) on Heroku, GCP, or AWS for public use. Happy coding!

Note: You can always check my repo for reference. Also, please note that the .env file containing the MongoDB database URI is not included, for security reasons.

Further Reading And Related Resources

Online Fraud in 2021 Is Booming

Fraud-prevention expert Uri Arad will give an exclusive, live-stream presentation to the CommerceCo by Practical Ecommerce peer-to-peer community on Thursday, January 21, at 2:00 p.m. Eastern Time.

Arad is co-founder of Identiq, a privacy-protecting, network-based service that helps merchants identify legitimate consumers the first time they shop. That capability is critical, as fraudsters are increasingly sophisticated and can look like regular shoppers.

Uri Arad, co-founder of Identiq.

During the presentation, CommerceCo members will learn what potential fraud threats could harm their businesses in 2021 and what options they may have to protect their companies.

The presentation is exclusive to the CommerceCo by Practical Ecommerce community, which is made up of experienced professionals from retailers and brands. Membership in the community is paid, meaning only serious ecommerce pros participate. Members can discover products and techniques to improve their companies, network with peers to advance their careers, and learn skills to better themselves.

Ecommerce Booms, Fraud Looms

Ecommerce sales dramatically increased in 2020. Unfortunately, with the boom came a commensurate increase in card-not-present fraud.

Retail ecommerce sales in the United States grew by 20-to-40 percent in 2020, depending on one’s source.

The U.S. Census Bureau, for example, reported on November 19, 2020, that “the third quarter 2020 ecommerce estimate increased 36.7 percent from the third quarter of 2019 while total retail sales increased 7.0 percent in the same period. Ecommerce sales in the third quarter of 2020 accounted for 14.3 percent of total sales.”

“We’ve seen ecommerce accelerate in ways that didn’t seem possible last spring, given the extent of the economic crisis,” said Andrew Lipsman, eMarketer principal analyst. “While much of the shift has been led by essential categories like grocery, there has been surprising strength in discretionary categories like consumer electronics and home furnishings that benefited from pandemic-driven lifestyle needs.”

eMarketer estimated that U.S. ecommerce sales were up 34.2 percent in 2020 compared to 2019. And the list could go on with estimates from dozens of other surveys, all saying the pandemic drove more sales online.

Fraud Opportunities

The shift to ecommerce has opened the door for crime. With retailers processing more orders, offering new channels like curbside pick-up, or adding ecommerce for the first time, fraudsters had new opportunities for attack.

Writing on the Identiq blog, Shoshana Maraney, Identiq’s content and communications director, described three of the many fraud trends that could impact merchants in 2021: buy-online-pick-up-in-store fraud, refund fraud, and account takeovers.

Maraney wrote, “The reason that refund fraud has taken off like a rocket recently — to the extent that few businesses have really caught up yet with quite how much money they’re losing — is that this has become a winning business model for fraudsters.”

“Criminals now offer refund fraud services. All the customer has to do is place the order, and the fraudster will take care of the rest. The customer will get to keep their order for free, paying the fraudster a small percentage of the cost of the item. The retailer bears the cost.”

Logo: CommerceCo by Practical Ecommerce

Arad’s presentation will describe the state-of-the-art methods for preventing card-not-present fraud in its many forms. What’s more, CommerceCo presentations are not just another webinar. Members can speak directly with Arad, asking him their own questions and even appearing on screen with Arad, if they like. I will be the moderator.

Finally, as with all weekly CommerceCo presentations, the video recording will be available to members.

17 Free Design Tools for 2021

Here is a list of free tools to design a website, social media ad, logo, infographic, and more. There are editing applications, resource libraries, and tools to find the right font and color palette. All of these are free, though several also offer premium plans with extended features.

Pixlr

Pixlr home page.

Pixlr is a free photo editing tool that’s easy to use and has a large library of effects. Access Pixlr’s library of stickers, overlays, borders, icons, and decorative texts to add to your photos. Use Pixlr on your mobile device, too. Price: Free. Premium account is $4.90 per month.

Canva

Canva home page.

Canva is a drag-and-drop editor to design social media graphics, logos, and more. Choose from thousands of layouts. Use free stock images, illustrations, preset photo files, icons, shapes, and hundreds of fonts. Price: Free. Premium plans start at $12.95 per month.

Burst

Burst home page.

Burst is a free stock photo platform that’s powered by Shopify. The image library includes thousands of high-resolution, royalty-free images shot by a global community of photographers. Burst provides designers, developers, bloggers, and entrepreneurs with access to beautiful free stock photography. Price: Free.

Adobe Spark

Adobe Spark home page.

Adobe Spark is a tool to create compelling social graphics, web pages, and short videos. Pick a photo, add text, and apply design filters or animations. Turn words and images into magazine-style web stories. Use Spark Post to create social graphics, Spark Page for web pages, and Spark Video to create compelling short videos. Explore a variety of layouts and fonts, and then tweak with text, photos, and icons. Price: Free. Premium plan is $9.99 per month.

Fontjoy

Fontjoy home page.

Fontjoy helps designers choose the best font combinations. Mix and match different fonts for the perfect pairing. Price: Free.

Colormind

Colormind home page.

Colormind is a color scheme generator that uses deep learning from photographs, movies, and popular art. Check Colormind for daily color models and to discover color combinations and develop palettes. Price: Free.

Easel.ly

Easel.ly home page.

Easel.ly is an infographic maker. Access templates to visualize timelines, reports, processes, resumés, and comparisons. Price: Free. Premium plans start at $4 per month.

Vectr

Vectr home page.

Vectr is a tool to create vector graphics easily. Create blur-free logos, presentations, cards, brochures, website mockups, or any two-dimensional graphic. Send anyone a Vectr document for real-time collaboration. Watch your team create and edit designs live. Price: Free.

Colorcinch

Colorcinch home page.

Colorcinch is a photo and text editor with a large selection of image filters and cartooning effects. Draw freehand with multi-style brushes, crop and resize, adjust exposure, sharpen or blur, make colors pop, add text, and edit layers. Access over 50,000 vector graphics and icons and over 1.5 million high-resolution stock photos. Price: Free. Premium plan is $5.99 per month.

Pablo

Pablo home page.

Pablo, from Buffer, is a tool to create engaging social media images in under 30 seconds. Type your text or select a quote, choose an image from a collection of Unsplash photos or upload your own, then style. Share via Twitter, Facebook, or through the Buffer queue. Price: Free.

Stencil

Stencil home page.

Stencil is another editorial tool to easily create social media graphics, ads, blog headers, and more. Stencil features over 1,225 templates, 3,350 fonts, 5 million stock photos, and 3 million icons and graphics. Price: Free. Premium plans start at $9 per month.

Snappa

Snappa home page.

Snappa is an application to create graphics for social media, ads, blogs, and more. Choose from thousands of templates, 200 fonts, 100,000 vectors, and 4 million stock photos. Remove backgrounds with a single click. Resize images and photos in one click for Facebook, Instagram, Twitter, LinkedIn, Pinterest, YouTube, ads, and more. Share your graphics to Facebook, Twitter, and other popular social media platforms without leaving Snappa. Price: Free. Premium plans start at $10 per month.

Infogram

Infogram home page.

Infogram is a tool to create infographics. Use more than 35 interactive charts and over 550 maps to help visualize data, including pie charts, bar graphs, column tables, and word clouds. Choose from over 20 ready-made design themes or create a customized brand theme with your own logo, colors, and fonts. Price: Free. Premium plans start at $19 per month.

GIMP

GIMP home page.

GIMP (GNU Image Manipulation Program) is a cross-platform tool for quality image creation and manipulation and advanced photo retouching. GIMP provides features to produce icons, graphical design elements, and art for user interface components and mockups. Price: Free.

Inkscape

Inkscape home page.

Inkscape is a free and open-source vector graphics editor. It offers features for illustrations, including logos, typography, cartoons, and diagrams. Inkscape includes a pencil tool, shapes tool, text tool, embedded bitmaps, a cloning tool, and more. Price: Free.

Paint.net

Paint.net home page.

Paint.net is an image and photo editing application for Microsoft Windows. It features layers, unlimited undo, special effects, and a wide variety of tools, along with an active online community, helpful tutorials, and useful plugins. Paint.net was originally a free replacement for the Microsoft Paint software that comes with Windows. Price: Free.

Google Charts

Google Charts home page.

Google Charts is an application to display live data on your site. Choose from a variety of charts to fit your data, from simple scatter plots to hierarchical treemaps. Easily connect charts and controls into an interactive dashboard. Price: Free.

Quick Refresher of U.S. CAN-SPAM Requirements

New consumer privacy laws in the U.S. and elsewhere apply to many forms of digital promotion, including email marketing. Thus it’s worth reviewing the requirements of the CAN-SPAM Act of 2003, which sets rules for the use of commercial email to U.S.-based recipients.

I’ll do that in this post.

Commercial Email

President George W. Bush signed CAN-SPAM into law to help protect U.S. consumers from malicious, unsolicited email. The acronym stands for “Controlling the Assault of Non-Solicited Pornography And Marketing.”

The Act applies to any commercial electronic message to U.S. recipients — B2C and B2B. It covers both transactional and marketing messages. Both fall under the CAN-SPAM rules, although transactional emails are subject only to the truthfulness requirement, while marketing messages must meet all of the requirements summarized below.

For example, the transactional email message that follows is from Roto-Rooter, the plumbing company. The email confirms the details of a service appointment. CAN-SPAM requirements for this type of message are that the information must be truthful.

Screenshot of a Roto-Rooter service appointment confirmation.

The CAN-SPAM Act requires this transactional message from Roto-Rooter to be accurate and not misleading.

CAN-SPAM and Ecommerce

CAN-SPAM does not require explicit permission from email recipients, unlike the Canadian Anti-Spam Legislation, which does.

Key CAN-SPAM requirements include:

  • Not misleading to the recipient. All emails must contain an accurate representation of the sender — individual, brand, or company — and a clear, non-deceptive subject line. For example, an ecommerce company cannot insert “Amazon” as the “From” name unless it is Amazon. The subject line must accurately describe the content, and marketing messages must also convey the purpose, such as an advertisement or promotion.
  • Includes a physical mailing address in the body of the email. An address where unsubscribe requests can be physically mailed is also a requirement.
  • Provides an unsubscribe link. The Act requires an obvious link for recipients to unsubscribe from all of the sender’s emails.
Screenshot of a financial services email with an unsubscribe link and a physical address.

Commercial email messages to U.S. recipients must contain an unsubscribe link and a physical mailing address.

  • Opt-out requests honored within 10 days. Commercial email senders have 10 business days to process unsubscribe requests. Email service providers typically do this automatically, requiring no additional action from the sender. However, a sender must maintain this global suppression list indefinitely, even when changing service providers.
  • Senders and their agencies are responsible. Agencies and consultants that send on behalf of clients are responsible for the email, as are the client-senders.

Fines

CAN-SPAM calls for fines up to $43,792 for each violation. Fortunately, most email service providers have built-in enforcement mechanisms to help senders avoid honest mistakes. For example, most providers will not send an email without an unsubscribe link and a physical mailing address.

For more, see the CAN-SPAM Act compliance guide from the U.S. Federal Trade Commission.