How To Overcome Data Onboarding Challenges For Software Products

Companies willing to pay good money for a new piece of software are most likely not starting from scratch. They’re running an established business, with well-built and documented processes. So, they have tons of data to carry over.

As a result, the decision to bring a new app into the fold is not one they take lightly. Internal processes need to change. Getting the team to adopt the new solution can take time. Integrating it with existing systems and external tools can be a problem. Oh yeah, and there’s the matter of compliance to worry about, too.

This means there’s a lot of pressure on new software to provide a top-notch experience from the get-go. Fail to provide companies with a simple and intuitive way to onboard their data and you can expect high rates of customer churn as a result.

If you’re designing a product that needs data from customers in order to be of any value, here’s what you need to know about building out your data onboarding process.

How Data Onboarding Correlates With User Satisfaction

Business software is essentially just an empty box waiting to be filled with its users’ data. Without a way to onboard that data flawlessly, the software is all but useless.

Let’s look at what happens when you get the data onboarding process right.

End User Benefits

If you can nail the data onboarding piece, expect your end users to reap the following benefits:

  • They’ll be more confident in their decision.
    With complete and accurate data transferred into your software, users actually see how valuable it is soon after signing up. This leaves little room for second-guessing their decision, which leads to greater satisfaction overall with the product, and ultimately more money for your business.
  • You’ll get greater team buy-in.
    A positive data onboarding experience lets customers use your product faster, reducing the time needed for them to get value. So, really, data onboarding sets the stage for how your customers and their team will view the rest of your app.
  • They’ll experience more success with the software.
    Since users won’t have to stress about data formatting and cleanup or troubleshooting error-ridden import processes, they can get more out of the product and its features.

Software Developer Benefits

The software provider (you and/or your client) benefits, too:

  • Improve user satisfaction.
    Your end users don’t need to be technical wizards to figure out how to onboard data into your product. When you make light work of this, you reduce churn, attract more users, and retain loyal users over the long term.
  • Spend less time on customer service.
    You can stop worrying about having to support a faulty data onboarding process or taking over tasks like data formatting and validation for your users. Instead, put your time and energy towards building better relationships with customers rather than putting out fires all the time.
    Kelly Abbott, Co-Founder and CTO of Tablecloth, can attest to this:

    “We have cut the amount of time we spend wrangling with files by 95%. We basically had all hands working to solve those problems at times.”

  • Have greater confidence in your product.
    When you have a data onboarding solution that’s flexible and powerful, you don’t have to restrict what data your users can or can’t import. It’s no longer a limitation.
    As Abbott explains:

    “It has made us more contemplative about the data we are asking clients for. We no longer have to avoid asking for data that may require too much time to fix. Flatfile eliminates that problem and has improved our willingness to experiment with different types of data we can incorporate into our analyses. The more time we spend tinkering with different data types, the more likely we are going to uncover the insight that produces additional value in the marketplace. That is indispensable for a startup like us.”

  • Save money.
    Although you’ll have to spend money on a third-party data onboarding solution, you’ll save your company the time and money otherwise spent trying to manage a custom-built data importer, onboarding process and client relationships. (Tablecloth, for instance, saved tens of thousands of dollars when they adopted Flatfile.)

The Challenges Of Data Onboarding For Software Products

Let’s have a look at the common challenges in data onboarding and how Flatfile Concierge removes them:

Challenge #1: There’s A Lot Of Data To Aggregate

When signing up for new business software, users probably expect to do a little work upfront, like filling in basic account information, configuring settings and adding users. The last thing you want to do is surprise them with a data importer that’s going to cause more work for them.

Let’s say, for example, you’ve built a CRM.

Unless the software targets startups and other new businesses, users are going to have a ton of external data to bring along. For instance:

  • Contact info for clients, prospects, vendors, partners and team members;
  • Existing customer data like account and sales history;
  • Prospect data like communication history;
  • Sales pipeline details;
  • Team and individual goals and metrics.

Unless your CRM directly integrates with every one of your users’ previous CRMs, how are they going to move this data over? Copy and paste? CSV templates?

An animation demonstrating Flatfile Concierge data models. Companies can create specific data models as a guide for customers to use when importing data. (Image source: Flatfile)

Plus, you have to think about all of the other sources a CRM pulls info in from. Payment gateways. Spreadsheets that live on a sales team’s drive. Signed contracts that have been emailed or faxed to your company. There’s a lot of data coming from different places and people.

The Fix

There are a number of things Flatfile Concierge does to fix this problem.

For starters, it allows data to be imported from a variety of file types:

  • CSV,
  • TSV,
  • XLS,
  • XML,
  • And more.

With this kind of flexibility, your users won’t have to worry about transferring data to one specific file type and then cleaning up errors that occur during the transfer. Flatfile Concierge can handle various file types, of varying data types, and easily validate it all.

Another thing to think about is how your software is going to track and organize each imported file and its corresponding data.

What Flatfile allows your users to do is create collaborative workspaces to place data in. When a team member adds new data to the workspace, a record is captured containing the:

  • Date of upload,
  • File name,
  • User who submitted the data,
  • Number of rows added,
  • Version history,
  • Upload errors.

Flatfile Concierge animation demonstrating notifications for when spreadsheets are imported. (Image source: Flatfile)

This will keep things organized while also keeping everyone accountable to the data they contribute. And with this information readily available from a centralized dashboard, there’ll be no secret as to what’s been uploaded, by whom and when. Import errors can also be fixed collaboratively, without the need to re-upload spreadsheet data.

Challenge #2: Data Is Imported In A Variety Of States

When you give your software users the ability to transfer their data into your product, there’s not a lot you or the software team can do in terms of formatting or cleaning up end users’ data beforehand. Nor should you have to. Your job is to ensure customers see the value in the software, not to struggle with importing their data.

You could give them a spreadsheet template, but that would require them to spend time reformatting all their data. You could point them to the knowledge base but, again, that assumes your end users will be willing to do that extra work.

In reality, your users are going to be in a hurry to get inside the new software and get to work. They’re not going to stop to deal with this. That’s the software’s job.

However, many data onboarding solutions don’t handle messy spreadsheets very well. Not only do they have a hard time recognizing what some of the data is (often because the data model doesn’t match their own), but then the application refuses to accept certain spreadsheet columns.

Even if it’s the end user’s fault for not properly organizing or labeling their data or teaching their team how to do so (or just not knowing what to do in the first place), who do you think they’re going to blame in the end when their data won’t import?

The Fix

Flatfile Concierge’s importer is AI-powered, which means that your software (and data importer) really can do the work for your end users.

Using advanced validation logic, the data importer can figure out what the data is and where it goes.

While Flatfile will automatically match columns and corresponding data to your software’s actual data fields, users get a chance to confirm that’s the case before the data is allowed into the system.

Before this happens, you can do a little work on the backend to ensure that Flatfile knows what to do with your users’ data:

  • Create target data models so Flatfile can navigate complex spreadsheet formats and datatypes your users will likely try to import.
  • Create a template with validation rules so Flatfile’s AI knows exactly how to map everything out.
  • Validate imported data against other databases to help the importer contextualize, validate and clean up the data over time.

Once you’ve done that upfront work, the rest is easy.

The bulk of the work will be done by Flatfile Concierge when it transforms imported data into something clean and useful. In fact, about 95% of imported columns will automatically map to your software thanks to Flatfile’s machine learning and fuzzy matching system.

The end user will have the opportunity to review the parts of their data that contain errors. If they find any, they can repair the errors inside Flatfile rather than having to fix them in a spreadsheet and re-import.

Challenge #3: Getting And Tracking Data From Multiple Users

When there are a lot of cooks in the kitchen, there are a number of things that can go wrong.

Data can sometimes live on team members’ computers or, worse, be sent over email, which can be a huge security concern for sensitive data. This can happen if users aren’t given access to the software platform or find the data importer too intimidating to use.

On the flip side of that, with the wrong data onboarding process, it could become like a free-for-all where people add whatever the heck they like to the company’s data. While the data does get imported, there’s no review framework so the company’s database is filled with errors and duplicate entries.

Your end users need to be able to maintain order, control and security when dealing with something as serious as company data — especially if you want your software to be usable.

The Fix

Flatfile Concierge has designed the data onboarding process to be a collaborative one.

Company admins can invite specific collaborators (i.e. customers) to add data to their workspaces. But this isn’t a blanket invitation to import data.

Admins have the ability to create an approval process. They get to:

  • Ask for specific data sets from team members.
  • Control which workspaces they’re allowed to import data to.
  • Review all data submissions before the approved data flows into the platform.

Admins can also import data on the customer’s behalf. Flatfile Concierge ensures that data onboarding is never a dead-end for customers.

Not only does this ensure that the right data ends up in the software, but the controlled flow means the data will end up being cleaner and more accurate, too. All of this, while providing a seamless data onboarding experience for users.

Challenge #4: Data Security Is Always A Concern

When it comes to web and app development, user privacy and security are top priorities. If our customers and visitors don’t trust that their information is safe from prying eyes (and isn’t being sold off to advertisers), they won’t use our solutions in the first place.

The same thing happens with software — though it’s not just the company’s personal data they have to worry about securing.

Often, when companies import data into software (like the CRM example), they’re importing their customers’ private and sensitive data. Allow that to be compromised and you can kiss your software goodbye.

So, yes, the software itself needs to be secured. That’s a given. But so, too, does your data onboarding process. It’s a huge point of vulnerability if left unchecked.

The Fix

The first thing Flatfile Concierge does is to encourage users to move away from sharing sensitive data over email, FTP, and other unsecured platforms by providing a user-friendly data onboarding solution.

The second thing it does is provide an authenticated and compliant workspace for users to import, validate, and post their data to your software.

Here’s how Flatfile Concierge secures its workspaces:

  • Each collaborator enters the data importer through an authenticated invitation.
  • Data is encrypted in transit and stored in an encrypted Amazon S3 bucket.
  • The data onboarding platform is 100% GDPR compliant.
  • Flatfile is HIPAA and SOC2 compliant and can adjust for other compliance requirements as needed.

In addition, once data is successfully migrated into your application, it’s deleted from Flatfile. This way, you only have to worry about securing your data within your software and not on previous platforms it’s touched.

Wrapping Up

With an insufficient or error-prone data onboarding process, both you (the software provider) and your end users are going to spend too much time manually cleaning and validating spreadsheets. This won’t just happen during the initial user signup either. If the data importer isn’t up to the task, you’re all going to be throwing away a ton of time and resources every time existing customers need to upload or transfer data into the platform.

Of course, this all assumes that your importer can even get user data into the software. (Sadly, this happens with too many custom-built solutions.)

Needless to say: Your data onboarding process must be flawless for your team and customers. It’s the only way to keep user churn rates low and user satisfaction high.

Data onboarding is a really complex process to handle. Save yourself the trouble of trying to develop your own data onboarding solution and the time spent troubleshooting its problems. With an AI-powered data importer like Flatfile Concierge, everything’s taken care of for you.

Manual Texts Recover 21 Percent of Abandoned Carts, Says LiveRecover Founder

Dennis Hegstad believes the best way to recover abandoned carts is via text messages. But his messages are not automated or bot-driven. Hegstad’s company, LiveRecover, sends one-to-one texts from a real person to folks who have left an ecommerce checkout without completing a purchase.

“It’s peer-to-peer texting,” he told me. “We’ll send messages from a real person. It’s not a drip campaign. About 55 percent get replies. Our total recovery rate on average is about 21 percent, which is really good.”

The notion of using SMS for abandoned carts is not new. But using humans to do it individually is unique. I recently spoke with Hegstad about his company and the rise of commercial SMS, among other topics.

What follows is a transcript of our conversation, edited for length and clarity.

Eric Bandholz: Tell us about LiveRecover.

Dennis Hegstad: We do SMS marketing with a focus on recovering abandoned carts for ecommerce businesses on Shopify, WooCommerce, and other platforms.

It’s peer-to-peer texting. We’ll send messages from a real person — “live texting agents” is what we call them. It’s not a drip campaign.

The agents are mostly based in the Philippines. They can type a certain number of words per minute, they speak English as a first language, and they’ve had some ecommerce experience.

Bandholz: How do you convince recipients that the message is coming from a human and not a bot?

Hegstad: We encounter a lot of people who will joke with our agents and say like, “If you can prove that you are real, I’ll buy.” And then we’ll respond with a funny SpongeBob meme, such as, “Give me all your money” or something like that. And they’re like, “This is so cool. I thought this was a bot. You guys have great customer service.” But, for sure, many people assume that it’s automated.

Bandholz: So your agents send messages to abandoned carts. How many of those are converted to orders?

Hegstad: About 55 percent get replies. Our total recovery rate on average is about 21 percent, which is really good.

We don’t text 24 hours a day. There’s a quiet period when we cannot text. No one’s texting after 9:00 p.m. and before 8:00 a.m., per the recipient’s time zone. There are regulations around that.

Bandholz: Can shoppers text your service for general queries? Or is it just purely abandoned carts?

Hegstad: Right now, it’s just abandoned checkout recovery. I say “checkout recovery” because we collect the phone number at the checkout, not in the cart. But we are adding live agent support, so a merchant could have a widget that says, “If you have a question about this order, text in, and our agents will support you.” But that requires us to get more integrated with the stores on a customer service level because if shoppers are asking things live, we need answers on hand.

Bandholz: We did a little bit of that at Beardbrand. We had our live chat box that was connected to a bunch of people overseas. They didn’t necessarily understand beard grooming questions. Plus, those widgets are intense resource hogs. They add so much time to loading a page. So we killed our live chat altogether.

So, instead, we published a banner that says “Text ‘style’ to this phone number.” We have an in-house community manager who will receive the text and reply to that.

Hegstad: That makes sense. We’re in this wild west era of SMS marketing, which is fun and exciting. Everyone’s rushing to do SMS, whether that’s abandoned cart recovery or creating an SMS list, or winning back a customer.

But we don’t know what the long term value will be, what the duration of a customer subscription is on email versus SMS.

SMS is great now, but it’s going to become less commercial and more concierge, real-time, and personalized, versus just being a transactional machine that reminds you to buy stuff. I don’t think consumers really want that. They want to have questions answered. So, you have to be a little sensitive about how you’re using SMS for the long term.

Does Beardbrand use SMS for abandoned cart or welcome series?

Bandholz: No. For SMS, all we do is consultation. But we’re going to test promotional texts soon. We’ve got a new product launch coming up. We may do an SMS campaign to let people know about it. It would probably link straight to the product page, especially mobile.

But your company doesn’t offer a promotional text service?

Hegstad: No. We want to be the best at abandoned checkout recovery.

Bandholz: How do phone numbers work? If I’m a LiveRecover customer, can I pick a phone number for my texts?

Hegstad: There are two types of phone numbers. One’s called a short code, which is essentially a five-digit number, say 3-0-3-0-3. It’s a commercial phone number where you can send tens of thousands, or hundreds of thousands, of texts per minute.

There’s another number called a long code, which is a more standard phone number. If you send hundreds of texts per minute from that number, you’ll get flagged by a carrier, which will likely impose a cool-down period when you can’t use the number.

So for LiveRecover, our customers don’t pick their phone number. We do all that for them. But they do get a number that relates to where the customer is. So if you live in Texas and you abandon a cart, you would be getting a text from a Texas number. If you live in Florida, you’d be getting a text from a Florida number. One customer doesn’t have a dedicated number. Numbers are rotated in a big pool amongst all our customers.

And, yes, there’s a much better reply rate with a local number because people recognize it.

Bandholz: Does LiveRecover have copycats?

Hegstad: Yes. It’s a bit annoying, but at the end of the day, no one is going to be in first place by chasing the person in front of them. And we’re not trying to reinvent the wheel either. Mailchimp was here when we launched, as were Attentive and SMSBump. Now there’s a slew of competitors. We’re not upset with that. We think, “Good for you guys. You did a great job.”

But when people copy and paste the copy that our team wrote and bid against our keywords on Google, that’s just scummy. I’m cheering for Postscript, for example, which is a competitor. But I’m not cheering for anybody that’s copying and pasting my work.

Bandholz: How can people connect with you and learn more about your company?

Hegstad: LiveRecover.com is the website. We offer a $500 free trial. You don’t need a credit card to sign up. If you want to talk about SMS or listen to me about life, go to @dennishegstad on Twitter.

The Value of Personalized Marketing for Ecommerce

A well-executed personalized marketing program may resemble person-to-person conversations like those a shopper could experience in a brick-and-mortar retail store.

It is easy to recognize a genuinely good customer experience when you have it.

Imagine you visit a top-notch kitchen boutique in a posh shopping area. The inquisitive associate in the store helps you select an espresso machine, recommends espresso beans, and offers tips for how and when to clean and maintain the machine.

Then, a week later, when you come back into the store, the associate recognizes you, asks about the espresso machine and the beans, and genuinely seems interested in your answers and feelings.

This simple act of remembering who you are — in the sea of customers the associate sees daily — makes you feel like a valued customer. That experience makes you appreciate the store and the associate. So when she asks about your family, what you might be cooking tonight, or if you’ve ever tried a Scanpan CS+ skillet, it feels like a conversation with a trusted advisor, not like a sales pitch.

Photo of a retail store employee

Like an associate asking questions in a store, businesses interested in personalized marketing need to think of data collection as a conversation.

Personalized Marketing

In effect, this conversation is personalized marketing. A textbook would define the term differently. It would say personalized marketing uses data analysis and digital technologies to provide new or prospective customers with individualized messages and product offers.

However, at its core, personalized marketing may be a conversation.

If the conversation doesn’t go well, the customer’s experience can be jarring. If you have ever received an email message from a retailer that starts with the line “Hello %%%FIRST NAME%%%” you understand just how damaging a poor conversation, a poor attempt at personalized marketing can feel.

This same sort of disconnect can happen between an advertising campaign and the transactional emails a customer receives after a purchase. A shopper who is drawn to a retailer because of its funny ad campaign may notice if the brand’s email messages turn out to be bland.

The Calm App

“I am going to use one of our customers as an example; [it is] Calm, the meditation app,” said Garin Hobbs, director of deal strategy at Iterable, which makes a growth marketing platform.

“Most app marketers would tend to segment their audience in, probably, fairly broad swaths: those who use it for free; those who’ve seen the value to pay for premium,” Hobbs said, adding that this is not enough. Calm saw engagement rates fall significantly even after customers paid for a subscription. Customers were drawn to Calm from its ad campaigns but then experienced a disconnect.

“So let’s think about what might draw people to a meditation app, like Calm. You’d think meditation is a shared value. But different reasons draw each of us to it.

“One person might do it for stress relief or to reduce anxiety. Another might do it to have an opportunity to step away and unplug for just five minutes every day. It’s just finding that inner peace and the opportunity to come back to the center. That’s very different from stress or anxiety. A third person — such as an athletic-minded person who does a lot of physical training — might come to it because of the mental training aspect. A person might come to develop this new sort of intellectual habit. Then there might be a person who is adopting a more new-age lifestyle, and for whatever reason, he feels like meditation is a part of that. Finally, we might have someone looking for ways to feel more fulfilled and relaxed but might feel like meditation is a little crunchy. It’s for people who hide crystals around their house. I’m not really sure if that’s for me,” Hobbs continued.

“So now we have six different people who are all drawn to the common value of meditation,” Hobbs emphasized the word “value” with air quotes as he spoke. “But subjectively, there are vastly different things that drive them here.”

Give and Take

Here is the challenge. Marketers who want to identify what is driving each customer to shop and engage should ask questions like the inquisitive store associate mentioned above.

Some of these questions might come as part of an initial sale when a merchant’s ecommerce software collects a name and address. The give and take of conversation could continue in a welcome series. It might be as simple as asking a shopper if she prefers email messages or text or if she would rather use an app or the mobile web.

This conversation continues with each new purchase and new interaction. It is holistic because the conversation takes place not just in email but on the store’s website and in its app when a customer interacts with a customer service agent.

Ask for Clarification

Once a question is asked, or behavior is observed, it needs to be understood. If the customer’s response or behavior isn’t clear, clarify it.

There is an inherent risk in making a wrong inference. A business could cause the conversation to fizzle or, worse still, alienate a customer.

“If I see that somebody’s entire data history is purchasing men’s clothes, but all of a sudden I see a dress, what’s the significance of that dress?” Hobbs said.

“If we’re using straight inference, I might infer that it is either a gift for somebody else or it’s for themselves. It’s 2020, and we can’t roll the dice on questions like this. That dress could be for the customer, and you need to ask or discover that respectfully — in a way that makes the customer feel safe, that makes the customer feel like a desired part of your brand audience, but it also has to be asked in a way that is very genuine and authentic to brand values,” Hobbs said.

This might mean using polling on a merchant’s website, progressive profiling via email, or other ways to ask for clarification. It is not unlike how a good store associate might try to understand what is really important to a customer and what his or her motivations are.

Leading to Loyalty

“The field of competition for any category of ecommerce retail — goodness, for any category of anything — is extremely dense. The internet has created such an equal opportunity for anybody with an idea … to go out and compete with even the very largest brands. Consumers are absolutely spoiled for choice,” said Hobbs.

“Think about even just a single item, such as a Patagonia puff jacket. I could easily list off a dozen places right now where I could buy that same jacket at the same price and still get things like free overnight shipping. So what’s to draw me to one brand versus the other?”

“The real answer is two-fold: value and experience. And those two qualities are highly subjective. But as we think about how consumers interact with brands, value and experience are usually the qualities that matter. That’s what draws us. That’s what keeps us. It is less about habit and more about the many, many different psychological and physical things that go into loyalty and help create preference.”

An individual’s feelings and preferences influence a subjective view. And one of the best ways a merchant can discover those feelings and preferences is through the give and take of a customer conversation.

Designing Emails to Drive Clicks and Conversions

Email design has evolved to reflect changes in technology and consumer behavior. The goal is to elicit the desired action from the recipient, such as a click or conversion. In this post, I’ll address tips on designing high-performing emails for ecommerce.

Every email marketing message should contain specific elements. For U.S. senders, the message must be compliant with the CAN-SPAM Act of 2003, which requires a clear unsubscribe link and a valid “from” address that is representative of the sender.

Beyond those requirements, the message and images are up to the marketer, so long as both are not misleading to the recipients. Common design elements include:

  • Logo or brand name, linked to the home page;
  • A primary image;
  • Text-based message (not embedded in an image);
  • A thoughtful balance of text and images;
  • Clear call-to-action;
  • Contact information and social media accounts;
  • Compelling “From” line, subject line, and preheader.

An email template can help streamline the production process. A consistent template also helps recipients navigate and respond to the message.

Email Templates

In my experience, a visual hierarchy produces the best layouts. Images are powerful. They impact recipients’ actions and attitudes. Templates should reflect the natural way people comprehend and interpret information, such as an inverted V pattern with a large image on top, then text, and then a call-to-action.

This email from Costco uses an inverted “V” design.

Another popular layout is a “Z” pattern, where the recipients read left to right, mimicking standard reading patterns.

This email from Rite Aid uses a “Z” design pattern.

Hero section. The so-called “hero” section of an email typically appears just under the logo and top navigation. It conveys the main objective of the email. Hero text should be large enough to read easily, and buttons should be big enough to tap with a finger on mobile.

This “hero” example from Famous Footwear is easy to read and conveys the overall theme.

White space. Adequate white space around an image, text, and call-to-action is critical. It makes the entire message less intimidating and easier to digest, especially on mobile.

Buttons. Again, all buttons should be large enough for a finger on a smartphone and not placed too close together. Many designers utilize a “bulletproof button.” It is not an image. It’s text on top of a background, such as a solid color. This enables recipients to read the call-to-action if images are turned off or take too long to download. Campaign Monitor, for example, offers a widget for building bulletproof buttons.

Experiment and Test

Maintaining email design consistency can help recipients know what to expect. But change is good, too, to avoid design fatigue. Test new design layouts against performance — opens, clicks, conversions.

One idea is testing “dark mode” in an email.  That’s a setting on smartphones wherein users swap light colors for dark and vice versa. The purpose is to preserve battery life and ease viewing in low-light situations. Certainly all emails should by default render well in dark mode. But marketers can also test a native dark version with white text on a dark or black background.

Another test is inserting interactive or dynamic content, such as animated buttons, product carousels, countdown timers, surveys, or polls. Accelerated Mobile Pages (“AMP”) for email was introduced in 2018 to integrate live or custom content into the body of an email. AMP has not taken off due to limited support from email service providers. But Gmail does support it. I’ve addressed the possibilities at “Does ‘AMP for Email’ Impact Ecommerce?”

Getting Started With Next.js

Lately, Next.js has termed itself The React Framework for Production, and with such a bold claim comes a bevy of features designed to help you take your React websites from zero to production. These features would matter less if Next.js weren’t relatively easy to learn, and while the numerous features might mean more things and nuances to learn, its attempt at simplicity and power — and its arguable success at both — is definitely something to have in your arsenal.

As you settle in to learn about Next.js, there are some things you might already be familiar with, and you might even be surprised at how much it gives you to work with; it can seem almost overwhelming at face value. Next.js is lit for static sites, and it has been well-engineered for that purpose. But it also takes things further with Incremental Static Regeneration, which combines well with existing features to make development a soothing experience. But wait, you might ask: why Next.js?

This tutorial will be beneficial to developers who are looking to get started with Next.js or have already begun but need to fill some knowledge gaps. You do not need to be a pro in React; however, working experience with React will come in handy.

But Why Next.js?

  1. Relatively easy to learn.
    That’s it. If you’ve written any React at all, you’d find yourself at home with Next.js. It offers you advanced tools and robust API support, but it doesn’t force you to use them.
  2. Built-in CSS support.
    Writing CSS in component-driven frameworks comes with a sacrosanct need for the “cascade”. It’s why you have CSS-in-JS tools, but Next.js comes out of the box with its own offering — styled-jsx, and also supports a bevy of styling methodologies.
  3. Automatic TypeScript support.
    If you like to code in TypeScript, with Next.js, you literally have automatic support for TypeScript configuration and compilation.
  4. Multiple data fetching techniques.
    It supports SSG and/or SSR. You can choose to use one or the other, or both.
  5. File-system routing.
    Navigating from one page to another is supported through the file system of your app. You do not need any special library to handle routing.

There are many other features, e.g. using experimental ES features like optional chaining, not having to import React everywhere you use JSX, support for APIs like next/head that helps manage the head of your HTML document, and so on. Suffice it to say, the deeper you go, the more you enjoy, appreciate, and discover.
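
One of those APIs, next/head, is worth a quick look. Here’s a minimal sketch of using it to set per-page metadata; the title and description values are placeholders for illustration:

import Head from 'next/head';

export default function About() {
  return (
    <>
      <Head>
        {/* Placeholder values for illustration */}
        <title>About | My Next.js Site</title>
        <meta name="description" content="A short description of this page" />
      </Head>
      <h1>About us</h1>
    </>
  );
}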

Requirements For Creating A Next.js App

Creating a Next.js app requires Node.js, and npm (or npx) installed.

To check if you have Node.js installed, run the command in your terminal:

# It should respond with a version number
node -v

Ideally, npm (and npx) comes with your Node.js installation. To confirm that you have them installed, run the commands in your terminal:

# Run this. It should respond with a version number
npm -v

# Then run this. It should also respond with a version number
npx -v

In case any of the commands above fails to respond with a version number, you might want to look into installing Node.js and npm.

If you prefer the yarn package manager instead, you can install it with the command:

# Installs yarn globally
npm i -g yarn

Then confirm the installation with:

# It should also respond with a version number
yarn -v

Creating A Next.js App

Getting the requirements above out of the way, creating a Next.js app can be done in two ways, the first being the simplest:

  1. With create-next-app, or
  2. Manually

Creating A Next.js App With create-next-app

Using create-next-app is simple and straightforward, plus you can also get going with a starter like Next.js with Redux, Next.js with Tailwind CSS, or Next.js with Sanity CMS etc. You can view the full list of starters in the Next.js examples repo.

# Create a new Next.js app with npx
npx create-next-app <app-name>

# Create a new Next.js app with npm
npm init next-app <app-name>

# With yarn
yarn create next-app <app-name>

If you’re wondering what the difference between npm and npx is, there’s an in-depth article on the npm blog, Introducing npx: an npm package runner.

Creating A Next.js Project Manually

This requires three packages: next, react, and react-dom.

# With npm
npm install next react react-dom

# With yarn
yarn add next react react-dom

Then add the following scripts to package.json.

"scripts": {
  "dev": "next dev",
  "start": "next start",
  "build": "next build"
}
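
With those scripts in place, you can start the local development server, which by default runs at http://localhost:3000:

# Start the development server
npm run dev

# Or, with yarn
yarn dev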

Folder Structure

One salient thing you might notice after creating a Next.js app is the lean folder structure. You get the bare minimum to run a Next.js app. No more, no less. What you end up with as your app grows is up to you more than it is to the framework.

The only Next.js-specific folders are the pages, public, and styles folders.

# other files and folders, .gitignore, package.json...
- pages
  - api
    - hello.js
  - _app.js
  - index.js
- public
  - favicon.ico
  - vercel.svg
- styles
  - globals.css
  - Home.module.css

Pages

In a Next.js app, pages is one of the Next-specific folders you get. Here are some things you need to know about pages:

  • Pages are React components
    Each file in it is a page and each page is a React component.

// Location: /pages/homepage.js
// <HomePage/> is just a basic React component
export default function HomePage() {
  return <h1>Welcome to Next.js</h1>
}
  • Custom pages
    These are special pages prefixed with the underscore, like _app.js.

    • _app.js: This is a custom component that resides in the pages folder. Next.js uses this component to initialize pages.
    • _document.js: Like _app.js, _document.js is a custom component that Next.js uses to augment your application’s <html> and <body> tags. This is necessary because Next.js pages skip the definition of the surrounding document’s markup.
  • File-based routing system based on pages
    Next.js has a file-based routing system where each page automatically becomes a route based on its file name. For example, a page at pages/profile will be located at /profile, and pages/index.js at /.

# Other folders
- pages
  - index.js       # located at /
  - profile.js     # located at /profile
  - dashboard
    - index.js     # located at /dashboard
    - payments.js  # located at /dashboard/payments

Routing

Next.js has a file-based routing system based on pages. Every page created automatically becomes a route. For example, pages/books.js will become the route /books.

- pages
  - index.js    # url: /
  - books.js    # url: /books
  - profile.js  # url: /profile

Routing has led to libraries like React Router and can be daunting and quite complex because of the sheer number of ways you might see fit to route sections of your app. Routing in Next.js is fairly straightforward: for the most part, the file-based routing system can be used to define the most common routing patterns.

Index Routes

The pages folder automatically has a page, index.js, which is routed to the starting point of your application as /. You can have several index.js files across your pages, but only one in each folder. You don’t have to do this, but it helps to define the starting point of your routes and avoid some redundancy in naming. Take this folder structure, for example:

- pages
  - index.js
  - users
    - index.js
    - [user].js

There are two index routes, at / and /users. It is possible to name the index route in the users folder users.js and have it routed to /users/users if that’s readable and convenient for you. Otherwise, you can use the index route to mitigate the redundancy.

Nested Routes

How do you structure your folders to have a route like /dashboard/user/:id?

You need nested folders:

- pages
  - index.js
  - dashboard
    - index.js
    - user
      - [id].js  # dynamic id for each user

You can nest folders and go as deep as you like.

Dynamic Route Segments

The segments of a URL are not always predetermined. Sometimes you just can’t tell what will be there at development time. This is where dynamic route segments come in. In the last example, :id is the dynamic segment in the URL /dashboard/user/:id. The id determines which user’s page is currently displayed. If you can think of a route structure, most likely you can create it with the file system.

The dynamic part can appear anywhere in the nested routes:

- pages
  - dashboard
    - user
      - [id]
        - profile.js

will give the route /dashboard/user/:id/profile which leads to a profile page of a user with a particular id.

Imagine trying to access a route /news/:category/:category-type/:league/:team where category, category-type, league, and team are dynamic segments. Creating a nested folder for each segment quickly becomes unwieldy. This is where you’d need a catch-all route, where you spread the dynamic parts like:

- pages
  - news
    - [...id].js

Then you can access the route like /news/sport/football/epl/liverpool.

You might be wondering how to get the dynamic segments in your components. The useRouter hook, exported from next/router, is reserved for that purpose (and others). It exposes the router object.

import { useRouter } from 'next/router';

export default function Post() {
  // useRouter returns the router object
  const router = useRouter();
  console.log({ router });

  return <div> News </div>;
}

The dynamic segments are in the query property of the router object, accessed with router.query. If there are no queries, the query property returns an empty object.
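
For example, with the catch-all route pages/news/[...id].js above, the dynamic segments arrive in router.query.id as an array. A minimal sketch:

// Location: /pages/news/[...id].js
import { useRouter } from 'next/router';

export default function News() {
  const router = useRouter();
  // For /news/sport/football/epl/liverpool, router.query.id is
  // ['sport', 'football', 'epl', 'liverpool'] once the query is populated
  const { id = [] } = router.query;

  return <div>{id.join(' / ')}</div>;
}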

Linking Between Pages

Navigating between pages in your apps can be done with the Link component exported by next/link. Say you have the pages:

- pages
  - index.js
  - profile.js
  - settings.js
  - users
    - index.js
    - [user].js

You can link between them like this:

import Link from "next/link";

export default function Users({ users }) {
  return (
    <div>
      <Link href="/">Home</Link>
      <Link href="/profile">Profile</Link>
      <Link href="/settings">
        <a>Settings</a>
      </Link>
      <Link href="/users">
        <a>Users</a>
      </Link>
      <Link href="/users/bob">
        <a>Bob</a>
      </Link>
    </div>
  )
}

The Link component has a number of acceptable props, with href, the URL of the hyperlink, being the only required one. It’s equivalent to the href attribute of the HTML anchor (<a>) element.

Other props include:

  • as (default: same as href): indicates what to show in the browser URL bar.
  • passHref (default: false): forces the Link component to pass the href prop to its child.
  • prefetch (default: true): allows Next.js to proactively fetch pages currently in the viewport, even before they’re visited, for faster page transitions.
  • replace (default: false): replaces the current entry in the navigation history instead of pushing a new URL onto the history stack.
  • scroll (default: true): after navigation, the new page is scrolled to the top.
  • shallow (default: false): updates the path of the current page without re-running getStaticProps, getServerSideProps, or getInitialProps; the page may keep stale data if turned on.

Styling

Next.js comes with three styling methods out of the box: global CSS, CSS Modules, and styled-jsx.

Styling in Next.js is covered extensively in the article “Comparing Styling Methods In Next.js”.
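
As a quick taste of the built-in styled-jsx option, here’s a minimal sketch; the styles declared inside the jsx-tagged style element are scoped to the component that declares them:

export default function Button() {
  return (
    <>
      <button className="cta">Click me</button>
      {/* Scoped to this component; the class name won't leak out */}
      <style jsx>{`
        .cta {
          background: #0070f3;
          color: #fff;
          padding: 0.5rem 1rem;
        }
      `}</style>
    </>
  );
}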

Linting And Formatting

Linting and formatting are, I suspect, highly opinionated topics, but most people who want them in their JavaScript codebase seem to enjoy the company of ESLint and Prettier. Where the latter ideally formats, the former lints your codebase. I’ve become quite accustomed to Wes Bos’s ESLint and Prettier Setup because it extends eslint-config-airbnb, integrates Prettier formatting through ESLint, and includes sensible defaults that mostly work (for me) and can be overridden if the need arises.

Including it in your Next.js project is fairly straightforward. You can install it globally if you want, but we’ll be doing so locally.

  • Run the command below in your terminal.
# This will install all peer dependencies required for the package to work
npx install-peerdeps --dev eslint-config-wesbos
  • Create a .eslintrc file at the root of your Next.js app, alongside the pages, styles, and public folders, with the content:
{
  "extends": ["wesbos"]
}

At this point, you can either lint and format your code manually or you can let your editor take control.

  • To lint and format manually, add two npm scripts: lint, which reports errors and warnings alone, and lint:fix, which also fixes what it can.
"scripts": {
  "dev": "next dev",
  "build": "next build",
  "start": "next start",
  "lint": "eslint .",
  "lint:fix": "eslint . --fix"
},
  • If you’re using VSCode and you’d prefer your editor to automatically lint and format, you need to first install the ESLint VSCode plugin, then add the following to your VSCode settings:
// Other settings
"editor.formatOnSave": true,
"[javascript]": {
  "editor.formatOnSave": false
},
"[javascriptreact]": {
  "editor.formatOnSave": false
},
"eslint.alwaysShowStatus": true,
"editor.codeActionsOnSave": {
  "source.fixAll": true
},
"prettier.disableLanguages": ["javascript", "javascriptreact"],

Note: You can learn more about how to make it work with VSCode in the eslint-config-wesbos documentation.

As you work along, you most likely will need to override some config. For example, I had to turn off the react/jsx-props-no-spreading rule, which errors out when JSX props are being spread, as in the case of pageProps in the Next.js custom page component, _app.js.

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />;
}

Turning the rule off goes thus:

{
  "extends": ["wesbos"],
  "rules": {
    "react/jsx-props-no-spreading": 0
  }
}

Static Assets

At some point (or several points) in your Next.js app’s lifespan, you’re going to need one asset or another. It could be icons, self-hosted fonts, images, and so on. In Next.js this is known as Static File Serving, and there is a single source of truth: the public folder. The Next.js docs warn: don’t name the public directory anything else. The name cannot be changed, and it is the only directory used to serve static assets.

Accessing static files is straightforward. Take the folder structure below, for example:

- pages
  - profile.js
- public
  - favicon.ico            # url: /favicon.ico
  - assets
    - fonts
      - font-x.woff2       # url: /assets/fonts/font-x.woff2
      - font-x.woff        # url: /assets/fonts/font-x.woff
    - images
      - profile-img.png    # url: /assets/images/profile-img.png
- styles
  - globals.css

You can access the profile-img.png image from the <Profile/> component:

// <Profile/> is a React component
export default function Profile() {
  return (
    <div className="profile-img__wrap">
      <img src="/assets/images/profile-img.png" alt="a big goofy grin" />
    </div>
  )
}

or the fonts in the fonts folder in CSS:

/* styles/globals.css */
@font-face {
  font-family: 'font-x';
  src: url(/assets/fonts/font-x.woff2) format('woff2'),
       url(/assets/fonts/font-x.woff) format('woff');
}

Data Fetching

Data fetching in Next.js is a huge topic that requires some level of commitment. Here, we’ll discuss the crux. Before we dive in, we need a rough idea of how Next.js renders its pages.

Pre-rendering is a huge part of how Next.js works, as well as what makes it fast. By default, Next.js pre-renders every page by generating each page’s HTML in advance, along with the minimal JavaScript it needs to run; that JavaScript then makes the page interactive through a process known as hydration.

It is possible, albeit impractical, to turn off JavaScript and still have some parts of your Next.js app render. If you ever do this, consider doing it for demonstration purposes only, to see that Next.js really does pre-render pages and then hydrate them.

That being said, there are two forms of pre-rendering:

  1. Static Generation (SG),
  2. Server-side Rendering (SSR).

The difference between the two lies in when data is fetched. With SG, data is fetched at build time and reused on every request (which makes it faster, because pages can be cached), while with SSR, data is fetched on every request.

What they both have in common is that they can be mixed with client-side rendering using fetch, Axios, SWR, React Query, and so on.
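
As an illustration of the client-side option, here’s a minimal sketch of fetching with SWR inside a component, assuming the swr package is installed; the /api/accounts endpoint is hypothetical:

import useSWR from 'swr';

// A simple fetcher SWR will call with the key below
const fetcher = (url) => fetch(url).then((res) => res.json());

export default function Accounts() {
  // '/api/accounts' is a hypothetical endpoint, used for illustration only
  const { data, error } = useSWR('/api/accounts', fetcher);

  if (error) return <p>Failed to load accounts.</p>;
  if (!data) return <p>Loading…</p>;

  return (
    <ul>
      {data.map((account) => (
        <li key={account.id}>{account.description}</li>
      ))}
    </ul>
  );
}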

The two forms of pre-rendering aren’t an absolute one-or-the-other case; you can choose to use Static Generation or Server-side Rendering, or you can use a hybrid of both. That is, while some parts of your Next.js app use Static Generation, others can use SSR.

In both cases, Next.js offers special functions to fetch your data. You can use one of the traditional approaches to data fetching in React, or you can use the special functions. It’s advisable to use the special functions, not because they’re supposedly special, nor because they’re aptly named (as you’ll see), but because they give you a centralized and familiar data fetching technique that you can’t go wrong with.

The three special functions are:

  1. getStaticProps — used in SG when your page content depends on external data.
  2. getStaticPaths — used in SG when your page paths depend on external data.
  3. getServerSideProps — used in Server-side Rendering.

getStaticProps

getStaticProps is a sibling to getStaticPaths and is used in Static Generation. It’s an async function in which you can fetch external data and return it as props to the default component in a page. The data is returned as a props object that implicitly maps to the props of the page’s default export component.

In the example below, we need to map over the accounts and display them; our page content depends on external data, which we fetch and resolve in getStaticProps.

// accounts get passed as a prop to <AccountsPage/> from getStaticProps()
// Much more like <AccountsPage {...{accounts}} />
export default function AccountsPage({ accounts }) {
  return (
    <div>
      <h1>Bank Accounts</h1>
      {accounts.map((account) => (
        <div key={account.id}>
          <p>{account.Description}</p>
        </div>
      ))}
    </div>
  )
}

export async function getStaticProps() {
  // This is a real endpoint
  const res = await fetch('https://sampleapis.com/fakebank/api/Accounts');
  const accounts = await res.json();

  return {
    props: {
      accounts: accounts.slice(0, 10),
    },
  };
}

As you can see, getStaticProps works with Static Generation, and returns a props object, hence the name.

getStaticPaths

Similar to getStaticProps, getStaticPaths is used in Static Generation, but it’s different in that it’s your page paths that are dynamic, not your page content. It is often used together with getStaticProps because it doesn’t return any data to your component itself; instead, it returns the paths that should be pre-rendered at build time. With the knowledge of the paths, you can then go ahead and fetch their corresponding page content.

Think about how Next.js pre-renders a dynamic page with Static Generation. For it to do this successfully at build time, it has to know what the page paths are. But it can’t know them on its own, because they’re dynamic and indeterminate; this is where getStaticPaths comes in.

Imagine you have a Next.js app with pages States and State that show a list of states in the United States and a single state, respectively. You might have a folder structure that looks like:

- pages
  - index.js
  - states
    - index.js   # url: /states
    - [id].js    # url: /states/[id]

You create the [id].js to show a single state based on its id. So the page content (data returned from getStaticProps) will be dependent on the page paths (data returned from getStaticPaths).

Let’s create the <States/> component first.

// The states will be passed as a prop from getStaticProps
export default function States({ states }) {
  // We'll render the states here
}

export async function getStaticProps() {
  // This is a real endpoint.
  const res = await fetch("https://sampleapis.com/the-states/api/the-states");
  const states = await res.json();

  // We return states as a prop to <States/>
  return { props: { states } };
}

Now let’s create the dynamic page for a single state. It’s the reason we have that [id].js, so that we can match paths like /states/1 or /states/2, where 1 and 2 are values of the id in [id].js.

// We start by expecting a state prop from getStaticProps
export default function State({ state }) {
  // We'll render the state here
}

// getStaticProps has a params prop that will expose the name given to the
// dynamic path, in this case `id`, which can be used to fetch each state by id.
export async function getStaticProps({ params }) {
  const res = await fetch(
    `https://sampleapis.com/the-states/api/the-states?id=${params.id}`
  );
  const state = await res.json();

  return { props: { state: state[0] } };
}

If you try to run the code as it is, you’d get the message: Error: getStaticPaths is required for dynamic SSG pages and is missing for /states/[id].

// The State component
// getStaticProps function
// getStaticPaths
export async function getStaticPaths() {
  // Fetch the list of states
  const res = await fetch("https://sampleapis.com/the-states/api/the-states");
  const states = await res.json();

  // Create a path from their ids: /states/1, /states/2 ...
  const paths = states.map((state) => `/states/${state.id}`);

  // Return paths; fallback is required. false means unrecognized paths will
  // render a 404 page
  return { paths, fallback: false };
}

With the paths returned from getStaticPaths, getStaticProps will be made aware of them, and its params prop will be populated with the necessary values, like the id in this case.
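
getServerSideProps

The third special function, getServerSideProps, hasn’t been shown yet, so here is a minimal sketch, reusing the same states endpoint for illustration. Unlike getStaticProps, it runs on every request rather than at build time, and its props are passed to the page component in the same way:

// The states will be passed as a prop from getServerSideProps
export default function States({ states }) {
  // We'll render the states here
}

// Runs on the server for every request instead of once at build time
export async function getServerSideProps() {
  const res = await fetch("https://sampleapis.com/the-states/api/the-states");
  const states = await res.json();

  return { props: { states } };
}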

Extras

Absolute Imports

There’s support for absolute imports starting from Next.js 9.4, which means you no longer have to import components relatively, like:

import FormField from "../../../../../../components/general/forms/formfield"

instead you can do so absolutely like:

import FormField from "components/general/forms/formfield";

To get this to work, you will need a jsconfig.json or tsconfig.json file for JavaScript and TypeScript respectively, with the following content:

{
  "compilerOptions": {
    "baseUrl": "."
  }
}

This assumes that the components folder exists at the root of your app, alongside pages, styles, and public.

Experimental ES Features

It is possible to use some experimental features like Nullish coalescing operator (??) and Optional chaining (?.) in your Next.js app.

export default function User({ user }) {
  return <h1>{user?.name?.first ?? 'No name'}</h1>
}

Conclusion

According to the Next.js team, many of the goals they set out to accomplish were the ones listed in The 7 Principles of Rich Web Applications, and as you work your way deeper into the ecosystem, you’ll realize you’re in safe hands, like the many other users who have chosen Next.js to power their websites and web applications. Give it a try if you haven’t, and if you have, keep going.

Introduction to Google Analytics 4

Google Analytics has come a long way since it launched in 2005. Version 4 is the latest iteration. It enables the blending of online and offline user activity into one reporting stream.

Google purchased Urchin, an early analytics platform, in April 2005. Later that year, Google repurposed Urchin, calling it Google Analytics, to report traffic and conversion details on public websites.

As the internet has evolved, the demand for metrics that track user activity — online and offline — has increased. Google Analytics 4 (previously “App + Web”) supports that demand, reporting the customer journey wherever it occurs.

Google Analytics 4

Google Analytics 4 will not immediately enhance the reporting of most online merchants. Version 4 does help companies that:

  • Have an app;
  • Have a software-as-a-service business model;
  • Are interested in advanced remarketing.

Nonetheless, all merchants should create a Google Analytics 4 property to benefit from the updated reporting and enhanced features that will come in time. The current version 4 is just the beginning.

To create a version 4 property, log in to Google Analytics > Admin [icon] > Create Property.

Screenshot of Google Analytics 4 setup.

Select “Apps and web” as the property type.

Screenshot of a setup screen for Google Analytics 4

Name the property and complete the settings on the page. Then click “Create.”

Screenshot of Google Analytics 4 set up page: name the property

On the next page, select your data stream. For most online merchants, it will be “Web.”

Screenshot of choosing the version 4 data stream

Next, provide the “Website URL,” name the stream, and choose the interactions to track automatically. Select all options, even if you presently do not have some of those activities, such as Videos or File Downloads. Then click “Create Stream.”

Screenshot of version 4 setup

The next page includes the option to add a new tag or use an existing one. If you have already tagged your site, click “Use existing on-page tag.” Otherwise, follow the instructions to add a new tag.

Screenshot of Google Analytics 4: assigning a tag

When selecting “Use existing on-page tag,” follow the instructions for your setup: a standalone Google Analytics tag on your site or via Google Tag Manager.

Screenshot of setting up tags for version 4

The steps above will enable general tracking. The more challenging part in version 4 is setting up Ecommerce. Google has addressed it — with more documentation to come.

Reporting

To share some advanced capabilities of Google Analytics 4, I am using data from Power My Analytics, which is my SaaS business. Visitors can browse our website and then start a free trial that is activated in our app. This trial signup is “online,” but when a customer converts from a trial to a paid subscription, that activity is offline in our app.

Note that there are no “Views” when setting up Google Analytics 4. The structure is Admin > Account > Property.

Screenshot of account setup for version 4

Google Analytics 4 contains new links on the left-side navigation. Realtime reporting is still available, along with Acquisition and Conversions. The other navigation links are new but link to similar reporting as before.

Screenshot of the main reporting screen showing the left-side navigation

Get Acquainted

Click around and get acquainted with the new reporting. Discover new metrics and dimensions. Some of them offer insights that you may have set up with custom tagging, such as “Engaged Sessions.” It is now automated. Expect more user-friendly features in time.

Smashing Giveaway: Join Smashing Newsletter and Win Smashing Prizes

With so much happening in front-end and UX these days, it can be quite difficult to keep track of important things. Luckily, there are wonderful newsletters and blogs out there that shed light on the latest in the web industry. In fact, with our weekly Smashing Email Newsletter, we aim to achieve that as well.

Every week, we send out useful front-end & UX tips, techniques and tools to help you get better at your work. We couldn’t be more grateful for the trust of 190,000 designers and web developers who are already subscribed. And if you aren’t yet, now there is a good reason to join in!

The Smashing Prizes

On Tuesday, Oct. 27, we’ll raffle 10 winners from the newsletter list and give away a few Smashing goodies. If you win, you can choose up to three items from the list below:

What’s needed to join in? Just subscribe to Smashing Email Newsletter, and if you are already signed up, you’re already a part of the raffle! Exciting!

Thank You! ❤️

We kindly thank you for your trust and ongoing support. And perhaps tell your friends and colleagues about the newsletter as well, if you get a chance. It goes without saying that we’d sincerely appreciate it.

Thanks for being… smashing — now and ever, everyone!

Speed Up Your Workflow With Figma Plugins

One of the best ways to reduce time spent pushing pixels in Figma is to use some of the countless plugins that can do the work for you. Figma has added some amazing functionality recently to help with our workflows, but plugins still fill the gap for many tedious, repetitive daily tasks. This practical article highlights the most useful plugins that I use often to make my design process faster and smoother.

Finding Figma Plugins

The amount of time we, as interface designers, spend clicking, selecting, renaming, moving, updating, and otherwise tweaking our designs in 2020 is surprising (and often frustrating). This is an unavoidable part of our jobs, even with all the AI design generating magic available today. However, any minutes or hours that we are able to save on moving pixels can be spent improving the design quality, growing as designers, or just enjoying life outside of design (I know, design is life, but…).

Enter Figma plugins! Members of the community have developed hundreds of plugins since they were first introduced to Figma last year, many of which help us speed up design tasks (and some that will definitely slow you down). But the current Figma plugin search functionality is pretty basic, so for me (and for many other plugin users) it means we need to search Google first, find a detailed article or blog post about a plugin that does something, read about it, then go to Figma to install and try it. This article focuses on a number of plugins for speeding up our workflows, and it is also intended to contribute to the community’s collective knowledge around which plugins to use for which tasks.

Hopefully one of the plugins listed below is what you are looking for, or maybe what you didn’t know you were looking for but still desperately need! (“There is nothing like looking, if you want to find something. You certainly usually find something, if you look, but it is not always quite the something you were after.”)

The plugins highlighted here aren’t necessarily the most flashy or exciting but they have improved my workflow drastically. Broadly speaking, I find that the most useful plugins:

  • Generate placeholder content in bulk.
  • Organize components and elements in bulk.
  • Change elements and styles in bulk.

So without further ado, let’s dive into the five plugins that I use every day (and some bonus keyboard shortcuts at the end of the article).

Five Plugins To Help You Enhance Your Workflow

  1. Similayer
  2. Content Reel
  3. Change Text
  4. Lorem ipsum
  5. Design System Organizer
  6. Bonus: Plugin Shortcuts

1. Similayer

When To Use It

Use Similayer when you need to select more than four or five elements with the same styles (images, components, text, frames, and so on).

Highlights

  • Select similar layers with a few clicks.
  • Use extremely specific properties/conditions to define which elements to select.
  • Select similar elements from only within a specific frame or the entire page.

The Details

Selecting multiple elements while holding Shift and double or triple-clicking into multiple groups is time-consuming and difficult. This is especially true when there are many nested layers or when trying to select elements from inside many groups. In such cases, your attempts to multi-select a few elements will more often be a miss, instead of a hit.

Selecting only a couple of elements can be quick enough when using Shift and the mouse cursor. If it takes more than five seconds to select the elements, though, Similayer will save you time. Becoming familiar with Similayer will expedite your workflows significantly and change how you work with Figma. During an average working day, I will use this plugin twice as often as all other plugins combined. Of course, as with any plugin, Figma may add core functionality that eventually makes this plugin obsolete, but for the time being, it is a real life-saver.

I most often use Similayer to:

  • Select all instances of a component within a frame;
  • Select all text with a specific style (weight, family, size) within a frame;
  • Select all layers with the same name on a page.

This is just a glimpse of what the plugin can do — the options are almost limitless! Similayer can select any elements with definitive granularity. Because of this I usually select elements with Similayer before running any of the next three plugins.

Tip

To select elements in multiple top-level frames (e.g., let’s say there are five designs on a page but you only want to select elements in Design One and Design Two), temporarily add a frame around all the top-level frames you want to select elements in. Then use the “Limit selection within root frame” option to select elements from those specific screens/frames but not the entire page.

2. Content Reel

When To Use It

Use Content Reel any time you need to generate more than a couple of items of placeholder content. It could be addresses, phone numbers, names, order numbers, profile images, or almost any type of text string or image.

Highlights

  • Quickly generate structured placeholder content.
  • Supports images and text strings.
  • Create your own content sets for personal use or publish them publicly.

The Details

If you are tired of making up names, avatars, phone numbers, emails, position titles, addresses, or any other type of text to make your designs look realistic, this plugin is for you. Content Reel inserts structured placeholder content from data sets into designs. A great use case for this plugin is filling out tables with realistic content. Using (555) 555-5555 down an entire column will cut it when you’re in a pinch but doesn’t look very realistic. Aside from creating visually realistic designs, this plugin can help test the constraints of a design by using realistic-looking “dummy” content.

When opening the plugin for the first time, different content sets will be available on your “home” page that will cover most of the basic use cases. For more unique cases or customized content, try searching the Content Library for more existing options or creating your own content sets!

As a practical example, I recently helped design a conference room booking app. For that project, I ended up creating a set of Amenities (WiFi, TV, outlets, coffee maker, and so on) to insert at random into different room listings.

Image sets can also be useful for cases such as avatars. The plugin also provides a default avatar set which works great but I ended up adding a custom set of more “professional” images for avatars. (Unsplash.com is a great website where you can find fitting photos for this purpose.)

Tip

When you create your own content set in the plugin, there is an option to keep it private for personal use or publish it directly in the plugin for your team or other users to find and utilize later.

3. Change Text

When To Use It

Use Change Text anytime you need to find & replace a word in multiple places or update multiple text layers simultaneously.

Highlights

  • Easily find and replace existing text from the current selection.
  • Use $& to insert current text like the Figma native batch rename tool.
  • Plugin only updates selected text layers (both a pro & con, depending on the use case).

The Details

This plugin turns fixing a “typeo” that was copy/pasted 25 times into a 5-second task. It’s similar to Figma’s native batch rename tool, but for text! You can find and replace words or strings of text in multiple text layers simultaneously.

I love this plugin for client work. It’s a quick and thorough way to find and replace a company’s name in all existing designs, or to change every button label that says “Create” to one that says “Add”, based on a client’s request. There are so many more cases where this is useful though! The more solid your component architecture, the less this plugin is needed, but we all know how difficult it is to create components from the get-go.

Tip

First, use Similayer to quickly select all the text layers you want to change. Then, use Change Text to edit the text. If the text characters in all the target layers are all the same (e.g., all are “typeo”) you can easily select all the elements using the “Text characters” option in Similayer.

4. Lorem Ipsum

When To Use It

Use Lorem ipsum any time you need dummy text. Whether you need a quick 10-minute placeholder text or are waiting for a client to finish the final copy, lorem ipsum text is always handy.

Highlights

  • Generate multiple text boxes at the same time.
  • Content doesn’t have to begin with “Lorem ipsum” and will generate unique content for each instance.
  • Auto-generate fills a text area perfectly based on its size.

The Details

Lorem ipsum is one of those not-so-flashy but really vital plugins. I think every designer can probably read Latin these days without realizing it! The entire purpose of this plugin is to save you time by not having to write “real” placeholder content. All the more reason to only spend a couple of seconds generating the dummy text.

There are plenty of Figma plugins specifically for generating dummy content with various settings. If you don’t use this one specifically, make sure to grab one of the other 5+ options (just search “lorem ipsum”). With this plugin, @Dave found the perfect balance of flexibility and simplicity, with the ability to quickly choose the number of words, sentences, or paragraphs to generate. The added ability to automatically fill a text layer with the perfect fit of content sets this plugin apart, especially when working with auto layout components that will resize based on content (when you don’t necessarily want them to).

Tip

Use the auto-generate option to perfectly fill any text layer with the dummy text when you don’t want layers to resize. For more abstract dummy text, try using a font family like Redacted (download) in combination with the Lorem ipsum plugin.

5. Design System Organizer

When To Use It

Use Design System Organizer if you are creating a new component library! It can also be helpful with customizing copied component libraries or in files with a large number of local styles or components.

Highlights

  • Rename, delete, reorganize components in bulk.
  • Works with all styles (text, color, fill, effect, grid).
  • Style names with multiple /’s are sorted into subfolders in the plugin.

The Details

This is one of those plugins that I don’t use every day (you probably won’t either), but it is still such a massive time saver that it needs to be included in the list. Depending on the task or file, this plugin can save hours of time that would otherwise be spent renaming and organizing. If you work with clients or new design systems often, this plugin will help you stay organized. The plugin costs $2.99 for a lifetime license (but there is a 30-day free trial for each file the plugin is used in). Even if you only set up a design system once, it is well worth the cost, as the plugin will help you reorganize and clean up things as you go.

One great use for the Design System Organizer is to clean up or rename components in a library that you cloned from the Figma Community. Most community files are already well organized and easy to use out-of-the-box. Sometimes, if you’re like me, it can be more difficult to understand how new components are named than to just rename and re-categorize them in a method you are already familiar with.

Tip

This is a great plugin to use if you just need to delete styles in bulk. Unlike components, Figma doesn’t allow multi-selecting styles so deleting more than a few is time-consuming. Use this plugin to multi-select and remove styles from a file.

6. Bonus: Plugin Shortcuts

No matter which plugins you use or how often you use them, opening a plugin can be cumbersome. Plenty of right-clicks, nested dropdown menus, no thanks. Here are a couple of shortcuts to open Figma plugins.

Open A Plugin Fast

Use Cmd + / (or Ctrl + / on Windows) to open the Search menu. Quickly search a plugin by name and open it with Enter.

Open A Plugin Even Faster

Use Cmd + Opt + P (or Ctrl + Alt + P on Windows) to reopen the last used plugin. This is especially nice if you use the same plugin often (for me this shortcut will almost always open Similayer).

Open A Plugin The Fastest

On a Mac, it’s easy to create custom keyboard shortcut(s) for the Figma plugin(s) that you use daily.

Note: There is probably a program for Windows that will allow you to achieve something similar, but, as I am a Mac user, I won’t be able to help you with that.

The following method can also be used for any menu item that you regularly use or for menu items with existing shortcuts that are difficult to remember. Here’s how to do it:

  1. Navigate to System Preferences → Keyboard → Shortcuts → App Shortcuts.
  2. Click the “+” at the bottom left of the list to create a new shortcut.
  3. Select Figma from the Applications dropdown.
  4. Add the name of the plugin or menu item exactly as it appears in Figma (e.g. Similayer).
  5. Create a keyboard shortcut and click “Add”.
  6. Run the plugin in Figma with your custom shortcut!

Conclusion

Figma has added some amazing native functionality lately to make repetitive actions less repetitive. At the Config Europe event, they announced more features releasing in 2020 that will save designers significant effort, but plugins still fill the gap where native features fall short. There are countless useful and fun plugins available in the Figma Community! If you invest a bit of time learning to use these five (and any other workflow-related plugins), your Figma productivity will benefit from it.

Make sure to also browse the Figma Community Files if you haven’t already! There are new resources every day to jumpstart your next project, improve processes, or be inspired by.

Until next time, meet me in the Figma community.

Happy designing!

SEO: How to Move Skydivers through the Funnel

What happens when consumers parachute from organic search results and land on the page Google thinks is the most personally relevant?

They skip as much as 95 percent of your prepared journey.

The page the searcher lands on may need to perform every step in the purchase cycle — awareness, interest, desire, action — depending on his goal and familiarity with your brand.

Purchase Journey

The steps in the purchase journey follow the page templates that make up your site. Each page typically performs a single function:

  • Home pages generate awareness and excitement around your brand and products;
  • Informational pages such as articles and shopping guides introduce the brand;
  • Major category pages and smaller subcategories pique interest and desire;
  • Product pages move shoppers to purchase.

Visitors are supposed to start at the home page or informational pages, where they pick up enough awareness and interest to move to the next page in the funnel. Some shoppers arrive just wanting information. They leave the site retaining brand impressions. Others, we hope, stay on the site, continuing the journey.

The product parade begins in earnest on category pages. Shoppers see lines of products, brands, and individual items. Some top-level categories might contain instructions on choosing a product or offer navigation by persona or usage type. But all share the goal of moving the shopper to the next stage for a smaller subset of goods.

The purpose of a product page is to prompt an add-to-cart action.

This, in sum, is the journey shoppers are supposed to take.

Searchers

However, every indexed page on your site is a potential landing page in the search engines’ eyes — even obscure or unrelated pages — so long as the pages send appropriate relevance and authority signals.

To integrate them into your funnel, meet those searching shoppers where they are in their journey, and direct them to the next step.

Thus every page has multiple purposes:

Wayfinding. Unless they can instantly identify where they are, searchers are likely to bounce back to the search results, taking a potentially negative brand impression with them. Inserting prominent, descriptive headings and navigation options into your page templates fulfills this purpose. However, it’s surprising how many ecommerce sites miss this one, especially on category pages of browsable product grids.

Awareness. Every page should prominently show your logo, company name, tagline, and other identifying elements. Unfortunately, search engines also index oddball pages, such as downloadable PDF files and administrative pages and documents.

Regularly review your organic search landing pages to identify those oddball URLs. Then modify the pages with proper brand elements, or replace the pages entirely.

Interest. Engagement is another word for interest. Engaging shoppers according to their intent — regardless of the page that they land on — is much more difficult. For instance, a shopper searching Google for “engagement rings” may want to browse images and compare prices before purchasing. A category or collection page featuring a grid of compelling engagement-ring photos would best serve that intent.

However, if he’s starting his journey, the shopper could be overwhelmed with a massive grid of rings. Instead, he likely needs to know the types of available rings and how to choose.

Blue Nile navigates this disparity in purposes well with its Engagement Rings page, shown below. This high-ranking page incorporates the needs of serious shoppers, custom ring builders, dreamers, skeptics, and information seekers. The zones near the top (highlighted in red by me) take care of shoppers who know what they want, including custom builders. Further down the page (not shown) are links to the most popular engagement rings, tips on shopping and sizing, reviews, and an offer to help.

Screenshot of Blue Nile's engagement ring page

Blue Nile’s top-level landing pages meet the needs of multiple searcher personas.

Smashing Podcast Episode 27 With Stefan Baumgartner: What Is TypeScript?

We’re talking about TypeScript. What is it, and how can it help us write better JavaScript? I spoke to expert Stefan Baumgartner to find out.

Show Notes

Weekly Update

Transcript

Drew McLellan: He’s a web developer and web lover based in Linz, Austria. Currently working at web performance company Dynatrace, he writes, speaks and organizes events all about software development and web technologies. Lately he’s the author of the book TypeScript in 50 Lessons, published this autumn by Smashing. So we know he’s an expert in TypeScript, but did you know he can juggle up to eight fiery weasels whilst blindfolded on a unicycle? My Smashing friends, please welcome Stefan Baumgartner. Hi, Stefan. How are you?

Stefan Baumgartner: Hi. I’m smashing. I didn’t know that about me, so that’s very interesting.

Drew: It’s amazing what people find out about themselves on this podcast.

Stefan: Absolutely.

Drew: So, I wanted to talk to you today about TypeScript.

Stefan: Yes.

Drew: It’s the subject of your new book, so clearly it’s something you’ve spent a lot of time getting to really know in depth.

Stefan: Yes, absolutely.

Drew: For those who have not used TypeScript before, so might not be familiar with what it is, how would you describe TypeScript, and what problem is it actually solving for us?

Stefan: It’s a very good question. So, there are many ways of approaching TypeScript, and the one that I like most, and also the way that I like to describe it in my book, is as a tooling layer on top of JavaScript. JavaScript is a wonderful language, but it has its quirks. There are some parts that can have multiple meanings. You have dynamic typing, which means that your values can have different types, like number or string or an object, based on the position where they are in your code. And there’s lots of implicit knowledge when you work, especially with web technologies or with Node.js, that you have to have about the interfaces that you use from APIs, function signatures, and so on.

Stefan: And TypeScript tries to give you a type system around all that, to give you this information. So, it tries to figure out which types you set when assigning a variable. It tells you which function signatures expect which values at which position, and which objects you get back that you can then access and modify and work with.

Stefan: And this was, back in the day when TypeScript was created, which is now about eight years ago, the prime goal of the TypeScript team: to create this tooling layer in the form of an additional language. So, to take JavaScript as it is and then, on top, create their own kind of meta language that allows you to define types for your functions, your objects, whatever there is. And this also means that every JavaScript code is TypeScript code, which also means that you can get started right away. If you know JavaScript, you’re basically a TypeScript developer as well. And you just take what you need to get more and more information about your code.

Drew: So, TypeScript is almost like imposing a sort of bunch of more strict rules about how we write JavaScript in order to make code more reliable? Is that…

Stefan: Yes, yes, this is exactly what it is. So, the strictness is totally up to you. You can tell TypeScript how strict you want to have it. But the goal is to catch as many errors, or as many possible errors, as there can be. Like, oh well, this value could be null, so better do a check if this value exists, or it can be undefined. Or, at this position I don’t exactly know if it’s a string or a number, so check if it’s of type string, check if it’s of type number.

Stefan: So TypeScript knows more, or can give you more information about the class of failures that you’re dealing with. And right now, the main goals of TypeScript are to catch as many errors as there are. So they spent a lot of time in providing more tools for you to declare your types and to declare the strict rules for you to figure out if there’s any error in your code that you might have a problem with in the long run.

Drew: So, I mean, really to get back to basics when we talk about types in a programming language, which obviously TypeScript is all about types, we have strictly typed languages and weakly typed languages, and JavaScript is weakly typed, isn’t it? What do we actually mean when we say something is weakly typed?

Stefan: There’s weakly typed, and another word for that is also dynamically typed, which means that you don’t always have to know which type your variables or your constants have. So, the moment you assign a variable, let’s say var foo or let foo, with a number, 1, 2, 3, 4 or something, then foo is now of type number, it’s a number and I can do number operations on top of it, like addition, multiplication, subtraction and all those kinds of things. If you assign it a string, then it’s a string.

Stefan: And, in JavaScript, you have the possibility to override it with a value of an entirely different class, an entirely different type. So, you can say at one point in time it’s 1, 2, 3, 4, at another point in time it’s a string like, “Hello world, Stefan,” or “fussy cat” or something like that. And this can cause a couple of errors, because what if you expect your variable foo at some point in time to be a string and then you do string operations on top of it, like toUpperCase, toLowerCase? Or if you expect a number and you want to add it to something, then you could get a result which you don’t expect.

Stefan: And, with TypeScript, you can explicitly set the types or you can tell TypeScript to infer a type from an assignment. So the moment you assign 1 to foo, TypeScript knows: hey, this is a number, and throughout your whole code, throughout every usage, it will think it’s a number and it will tell you if you do something that is not allowed with numbers. So, yeah. And this is the difference between a statically or strongly typed language, where you say, okay, once it has the type it has to be of that type and the type can’t change afterwards, and a weakly or dynamically typed language, where the type just depends on where you are in your code and can change, especially at runtime, during the execution of the code, which can cause a ton of problems if you don’t pay attention.

Drew: So, yes, there’s that whole class of error, isn’t there, where you, as a developer, think that a variable contains a certain type of value and actually when it comes to that point in the code and that’s executed, for whatever reason it’s something different. And TypeScript is adding that enforcement of types on top of JavaScript to sort of give us that extra level of checks and reliability to it, to get rid of that type of bug.

Stefan: Exactly. The best example is, for example, add the string two to the number two and you get 22 as a string; add the number two to the number two and you get the number four. So, it’s apparently the same operation, but if you swap the number and the string you get two totally different results. And TypeScript pays attention that you don’t have errors like that. And the one biggest rule that it sets is: once you do an assignment, it has to be of that type, and the type can’t change.
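
To make the example concrete, here is a tiny sketch of the kind of mix-up TypeScript flags; the add function is purely illustrative:

// A hypothetical add function that expects two numbers
function add(a: number, b: number): number {
  return a + b;
}

add(2, 2); // 4

// add("2", 2);
// Error: Argument of type 'string' is not assignable to parameter of type 'number'.

// In plain JavaScript, the same mix-up silently produces the string "22":
// "2" + 2 === "22"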

Drew: So, TypeScript doesn’t actually get run by the browser, does it? Or by node or whatever run time you’re using, it presumably gets compiled down somehow to JavaScript?

Stefan: Yes. So, there are two ways of working with TypeScript. One way is exactly what you said: you write the TypeScript code, especially with this typing meta language that you use, and then you have a compile step where TypeScript erases all the types and spits out regular JavaScript code. And TypeScript is also a transpiler, so, say, if you write modern JavaScript, you can compile it down to something that IE 11, if you have to support it, can work with. That’s one way.

Stefan: The other way is, and this is an interesting way which I like a lot and which people are actually using, you write regular JavaScript and then you add type declarations in a separate file, and refer to them by adding JSDoc comments in your code. And TypeScript can read this comment information, this documentation information, map it to the types you created in the separate file, and give you the same tooling, the same information that you would get if you write in this transpiled, compiled way.

Drew: Okay, so then, that way you just keep your standard JavaScript, but the tooling that you’re using around it knows to reference the sort of side car file that has all the definitions of what the types are.

Stefan: Exactly.
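
As a rough sketch of the approach described here, plain JavaScript annotated with a JSDoc comment that points at a type declared in a separate declarations file (the file and type names are illustrative):

// types.d.ts: the separate declarations file
export interface Article {
  name: string;
  price: number;
}

// article.js: plain JavaScript, typed via a JSDoc comment
// @ts-check
/**
 * @param {import("./types").Article} article
 * @param {number} vat
 * @returns {number}
 */
export function totalPrice(article, vat) {
  return article.price * (1 + vat);
}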

Drew: Type checking is one thing, but surely that’s the sort of thing we can, we don’t need a new language to do. That sort of analysis we could just have running in a code editor in a VS Code or whatever for example. Does TypeScript add things that take us beyond just what you could do in a code editor?

Stefan: The biggest advantage that you get is actually from code editors. One funny thing is that if you work with Visual Studio Code and you write regular JavaScript, what you’re really doing is writing TypeScript, because Visual Studio Code has a TypeScript checker and analyzer built in that tries to figure out as much information as possible and gives this information back to the editor. So there’s a very close relationship between the editor and TypeScript, especially since you mentioned VS Code. VS Code was their first project to work with TypeScript, back in the day when it was called Strada, or Project Strada, where all the developers figured out how to actually create a language like that.

Stefan: So, editors and the language are very, very much connected and you get the best benefit if you’re working with a modern-day editor. And thanks to the TypeScript team this doesn’t have to be VS Code. It can be basically any editor. There are plugins for almost any editor out there that supports the so-called language server protocol, and there you also get, just like for other programming languages, feedback and editor feedback and analysis information.

Stefan: So, yeah. This is actually the main use case for that. And, of course, if you have bigger projects and you use the compile-step version of TypeScript, having some sort of continuous integration, continuous delivery, where you constantly check if your project makes sense and create the bundles that you ship, this is also a part where TypeScript plays a huge role, because with every commit to a GitHub repo or something you can do type checks and see if there might be an error that slipped in that should be taken care of.

Drew: So, I guess there’s a level that your code editor can do automatically, like you said that Visual Studio Code does by just analyzing it as you’re writing JavaScript, but then when you’re declaring types specifically or adding these JSDoc comments, that’s what takes it a step beyond that. That’s where you’re actually, it’s defined as more of a language on top of JavaScript.

Stefan: Yep. The cool thing is TypeScript is designed in a way that it tries to get as much information out of your JavaScript as possible without you doing anything. So, if it sees a number in the wild, it knows that this type is going to be number. Or if you have a function signature and you have a default value, like the way you add a tax to a price, and you say the standard tax you add is 0.2, then if you add this to the function signature TypeScript already knows that at this point it expects you to pass a number and not something else.

Stefan: Also, if you return an object or if you write a JavaScript class, TypeScript can figure out what the values are, what the types of your fields should be. And this works for actually quite a lot of use cases. So you have lots of scenarios where this is totally sufficient and you don’t need anything else. But when you do, this is now your part: to strengthen TypeScript with additional type information that you provide. So, let’s say you want to create a type article which should have a name, a description, a price. You have different types for that. And then you create a compound or object type, and once you declare this type and you know that your object should be of this type, TypeScript knows which values and which fields to expect.

Stefan: And one thing that’s particularly interesting here is that TypeScript is one of the few, and certainly the most popular, type systems that works with structural typing, which means that as long as the shape, the properties, and the types of those properties are the same as the object that you pass along or the object that you get from somebody else, it will say it’s okay. You don’t have to have the exact name, it just needs to have the exact structure. So if you have a type called book which happens to have name, description, and price, and you have a type called video which happens to have name, description, and price, those types are compatible with each other.
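
A minimal sketch of the structural compatibility described above, using the book and video shapes from the example:

type Book = { name: string; description: string; price: number };
type Video = { name: string; description: string; price: number };

const video: Video = {
  name: "Intro to TypeScript",
  description: "A video course",
  price: 29,
};

// Accepted: Book and Video have the same structure,
// so a Video can be used wherever a Book is expected
const alsoABook: Book = video;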

Drew: Okay, so that means we can sort of define custom types that make sense in terms of the project and the objects that our project is trying to model, and then use TypeScript to enforce the shape of those.

Stefan: Yep.

Drew: So if we’ve got a product that has a price property that’s an integer in cents or what have you, then TypeScript will enforce that for us, and if we pass in something that’s not a product or doesn’t have a price or whatever, that’s when we start getting our errors. Can you then go sort of a step beyond that if you had, like, a cart type that had an array of products inside it? Can you enforce it all the way to that level?

Stefan: Yep. Yeah, exactly. You can enforce that as well. There you also enter a class of types which already goes into very advanced topics, which is generic types. So, the array type is a generic type. It tells you an array has entries that you can index. It has certain properties like length or map or forEach, but the values inside this array are defined by a so-called generic. So you can say you have an array of number, you have an array of string, you have an array of articles.

Stefan: And then if you do array.map then you get, inside this map function, you get the type again like a string, number, article, whatever you pass along. And with generics you can do a lot of things. So this is where the type system really tries to make sense out of all the possible cases people encounter in JavaScript frameworks. So you have, especially in JavaScript you have so many functions that can mean so much. Like, okay the first argument is now a string, the second argument has to be an object, or if the first argument is a number, the second argument has to be a string. You’ve seen that in the wild in countless, countless libraries that you use.
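
As a small illustration of the generic Array type and the cart scenario from the question above (the Article and Cart names are just placeholders):

type Article = { name: string; price: number };
type Cart = { items: Array<Article> };

const cart: Cart = {
  items: [
    { name: "Book", price: 29 },
    { name: "Video", price: 19 },
  ],
};

// Inside map, TypeScript knows each entry is an Article,
// so accessing item.price is checked and a typo like item.cost would error
const prices: number[] = cart.items.map((item) => item.price);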

Stefan: And, for this kind of scenario TypeScript also has structures in generic types and conditional types where you can have checks if this is one class of types do this, if this is another class of types do that, which really tries to figure out most of the scenarios that you find in day to day JavaScript code.

Stefan: So, and this is actually where the fun starts. The thing like creating object types, creating regular types like number, string, etc., or creating function types that’s one thing. But if you try to model a very complex behavior just within the type system, this can get very, how do I say it, mind boggling. Yeah, I guess mind boggling is the right word. This can be very mind boggling, but also a lot of fun.

Stefan: And this is where I kind of found my calling to work more with TypeScript, because I just found out that I could do so much with the code that I see, not only in my code but in the code of my colleagues and code that I find online, to make more sense out of it and to be prepared for future scenarios. So, I mostly write types in TypeScript because I know that at some point in time I have to revisit code and I want to know what I was thinking back when I originally wrote it.

Drew: Where is in your project that you would define these types? Because presumably you want them to be reusable all around your project. Where do you define them?

Stefan: So, I usually define them very close to the code that I actually write. So, when I write TypeScript I write in TypeScript first. I usually have a compile step anyway, maybe because I’m doing React and I need to transpile JSX, or because the project is so big that I want to do extra checks. So there’s a build chain anyway. Either I need to bundle or I need to transpile, so I’m writing regular TypeScript, not JavaScript with these JSDoc extensions. And there I try to keep the types very close to the objects that I declare.

Stefan: If there’s a type that is used throughout the whole project, I’m not only exporting the type, but also the objects or the functions for those objects. So this is some way of splitting and moving files and types around. There’s the very rare case where I also have one of those global type definition files next to me, which is if my app has to deal with something that’s in the environment where I run it, be it Node or the browser or whatever, where there are some global objects or global concepts that I want to carry in my program and hold. And this is actually a pretty standard setup.

Stefan: So you have your TypeScript files on one side, you have couple of type definition files on the other side, and then TypeScript tries to figure out everything, if it makes sense and if it’s possible to do and hopefully that’s the case.

Drew: Yes, I think we’re all very used to having a build step, a compilation step, in our workflows these days, aren’t we? Whether it’s running Babel or dealing with JSX and React or webpack and what have you, so…

Stefan: Absolutely.

Drew: I guess adding TypeScript into there is just another small step in the process and quite easy to do.

Stefan: Yeah. So, on one side TypeScript is a great extension, especially if you have a Babel setup running. They provide an interface where you can do TypeScript type checks, even though your whole application is transpiled with Babel. TypeScript also has a lot of tools, so it can be the only transpiler that you need. It can transpile JSX, it can transpile down to ECMAScript 5, ECMAScript 3. The only thing that it doesn’t do anymore is bundling. So if you want to have a bundled-up application, you need to take another tool like Rollup or webpack or whatnot.
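
A minimal tsconfig.json along these lines, where TypeScript itself transpiles JSX and compiles down to older JavaScript; the exact options depend on your project, so treat this as a sketch:

{
  "compilerOptions": {
    "target": "es5",     // compile down for older browsers
    "module": "esnext",  // leave modules for your bundler
    "jsx": "react",      // transpile JSX without Babel
    "strict": true       // enables checks such as strictNullChecks
  }
}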

Drew: One of the features that I liked in newer versions of PHP, back when I was writing a lot of PHP, they brought in the ability to declare the types of each argument that a function was expecting. TypeScript does the same thing, right? You can say the first argument should be a number, the second argument should be a string-

Stefan: Yes.

Drew: … and then the tool sets going to catch that if you try and pass the wrong thing in.

Stefan: Exactly.

Drew: In a lot of sort of real world cases I find that I have function arguments or variables that would be of a given type, but there might also be null. Is that something that TypeScript allows for?

Stefan: Yep, yep. So, this is where you enter the wonderful world of union types, and this is the big chapter in my book, in the middle, where we go from beginner concepts to advanced concepts. Where we realize that you not only have different classes of types like number, string, or several object types, but you can combine them. So you can say this argument can be of type string, of type number, of type null or of type undefined, or of some object type. And with that your arguments, especially function arguments, become much, much more varied.

Stefan: So, for one thing, you can say if this function argument does not have null in its union type, then you’re not allowed to pass null. And you can make sure that inside this function, this value is never null. If it can happen that it’s null, you add, with the pipe operator, null to it, and suddenly you have to check if it’s null or if it isn’t. Which makes it very, very interesting. So, especially the case of null checks and having undefined values, this is something that you happen to have in TypeScript all the time, or in JavaScript all the time.

Stefan: And with this one little addition, like, make sure you check for your null-ish values, and if you don’t allow null-ish values they can’t be null, this erases a whole class of errors that you would otherwise encounter. And this is also one lesson in my book where I just talk about this one compiler flag, strictNullChecks, and what it means for your work and what you suddenly have to do. And at some point you realize, okay, it’s much more tedious to add pipe null to every possible case where it could be null, instead of, just one time in one place of your code, checking if it’s actually null and then continuing with what you did. So, that’s a very nice way of working with null and undefined values.
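
A short sketch of the union-with-null pattern and the strictNullChecks behavior being discussed; the greet function is illustrative:

// With strictNullChecks enabled, null must be declared explicitly
function greet(name: string | null): string {
  // TypeScript forces a check before the string operation
  if (name === null) {
    return "Hello, stranger";
  }
  return `Hello, ${name.toUpperCase()}`;
}

greet("Stefan"); // "Hello, STEFAN"
greet(null);     // "Hello, stranger"
// greet(undefined); // Error: undefined is not part of the union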

Drew: A lot of sort of more formal languages, OO type languages, have classes and give you the ability to define a class interface to be able to say if it’s a class that uses this interface it needs to have these methods, it needs to behave like this. Is that something that TypeScript gives us?

Stefan: Yeah, absolutely. So this is very much related to the history of TypeScript. When TypeScript first got released, you know, it was eight years ago, we weren’t talking about ECMAScript 6 there, we weren’t talking about native ECMAScript classes, we spoke mostly about objects and functions. There was no module system, so it was a very different type of JavaScript eight years ago than we have today. And eight years ago, the TypeScript team introduced a lot of features from other programming languages like Java and C#, like classes, interfaces, abstract classes, namespaces, to create some sort of structural or structured programming tools that make it possible for you to lay out your code entirely differently than you’re used to.

Stefan: But over the years, lots of those concepts found their way into JavaScript, especially classes. So, they had to revisit lots of those concepts again and make them much, much more aligned with the way JavaScript is right now. So you still have classes, you still have interfaces, but TypeScript classes are just the same JavaScript classes. And TypeScript interfaces are like compound or composite types where you just have a list of properties. They can be function properties or string properties or object properties, and interfaces and type declarations are, for the most part, just the same.

Stefan: You also can implement, you know, the implements keyword exists, you can implement an interface, you can implement a type. Except for some rare cases, they are totally the same. So, yes, they exist, but they mean something different than you’re used to from other programming languages. And this is also something where I’d say people who come from other programming languages have to look out for things like that, because they can be false friends. Where they think, okay, this works just like in Java or this works just like in C#, where in turn it’s just borrowed language, or it’s just the same names for concepts that are nuanced and somewhat different from what you would expect.
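
A minimal sketch of how interfaces and the implements keyword line up in TypeScript; the names are illustrative:

interface Priced {
  price: number;
  total(vat: number): number;
}

// A type alias would work with implements as well
class Article implements Priced {
  constructor(public name: string, public price: number) {}

  total(vat: number): number {
    return this.price * (1 + vat);
  }
}

const article = new Article("TypeScript in 50 Lessons", 29);
article.total(0.2); // 34.8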

Drew: It can be a real sort of mental hurdle to jump over, isn’t it?

Stefan: Absolutely.

Drew: If you’re familiar with a name meaning one sort of thing and now it means something else.

Stefan: Yep.

Drew: It can be quite difficult to reset how you think about those things. So, it sounds like TypeScript has some sort of really advanced features that help us who are working really hard in JavaScript all day. Is it just for us super nerds or can people who less familiar with JavaScript, is it useful for the more, the beginner or the intermediate as well?

Stefan: Yeah, I’d say both. So one of the nice things about TypeScript is that it’s gradually adoptable. You just use as much of TypeScript as you want to use. So if you learn JavaScript, you get some additional tooling that gently tells you, hey, there might be some properties that you want to select. Like if you call document.querySelector, it already tells you that querySelector exists and it gives you some hint on what to expect as an argument, without throwing a single error and without you needing to do anything with those red squiggly lines that you get in your editor.

Stefan: And for that already TypeScript can do a lot of things. So this basic tooling aspect of it can help beginners just as much as people who are more familiar with JavaScript and have been in JavaScript for a very, very long time. But as you progress you can buy into more and more concepts as long as it’s reasonable for you to do. So I’m always a strong proponent of not having to use every feature a programming language gives you, but just the features that you actually need, and TypeScript is perfect for that, because it has a ton of features from the history that we spoke about, where it tried to introduce concepts that haven’t been in JavaScript. And now, from all those concepts that try to make the most sense out of all the JavaScript code that there is out there, you can take whatever you need and whatever you like.

Stefan: And this is, I guess, what makes TypeScript so special. When I started working with TypeScript, like seriously working with TypeScript, the thing that I liked most was having a React component and being super happy that if I press Control + Space I get all the names of the properties my function component would expect. So this alone helped me a lot, and I did nothing else but use this feature for a very, very, very long time. And then I started, in some sort of library code that I created for my colleagues or for people who I work with, creating a set of types around my functions so people who use my code know better what I meant when I wrote those particular functions. And there I go all in. I’m very deep, deep down the type system rabbit hole.

Drew: Yeah, I mean that’s interesting. In the last episode of this podcast I talked to Natalia from Vue JS all about Vue 3 and one of the big changes they’d made in Vue 3 was that it was rewritten using TypeScript. How important is it for libraries and frameworks to adopt TypeScript? What benefit is that actually providing those who are working with the library and not on the library code itself?

Stefan: So, I think for one part you get a lot of implicit documentation. Especially if you import Vue or React, React is kind of a mixed bag, but if you import Vue or Preact for that matter, Preact is also written in TypeScript. People who use your framework immediately have some information about all the functions and all the objects that they get without you needing to look up anything and you get some extra checks if what you’re doing is the right thing to do.

Stefan: So that’s a lot of implicit documentation for all the users of those libraries that you get, basically for free, if you start writing in TypeScript anyway. So every project that is written in TypeScript produces all this extra information for free. I guess as a team, as a library author, it makes contributions a lot, lot easier for the same documenting reasons. It also makes checks a lot easier, because there’s a whole class of errors that you can catch in the type system that would take ages to catch in tests. That’s why nobody writes tests for that, especially the “is this of a certain type” kind of tests.

Stefan: And yeah, and then you of course get all the benefits that you would get if you used TypeScript in any other project, like catching errors before they happen. And one thing that I have to mention here is especially Preact, because Preact does the kind of thing where they write JavaScript code and add additional types on the side, which gives them a low barrier of entry for people who want to contribute, because they don’t have to figure out how the type system works or how TypeScript works, because it’s just JavaScript code. But they, as library authors, get the additional benefits of having type checks, of seeing if this works the way it is intended, and I think this is, for lots of projects, especially open-source projects, really the best way to go. So, I strongly advocate for the idea of having JavaScript with types on the side, because it can help people so, so much for basically not a lot of investment on your part.

Drew: Increasingly, we’re seeing sort of whole organizations moving to JavaScript as their sort of language of choice, both in the front end where it’s an obvious choice, but in the back end of their products and systems. Would you consider TypeScript something that sort of larger teams and larger organizations would really benefit from? More than individuals?

Stefan: So, I’m currently in the same transition. We have lots of Java and C++ developers who are going to write a lot of JavaScript in the future and, you know what, TypeScript can be some sort of guide in those first steps with a new programming language. JavaScript has a lot of quirks, a lot of history and a lot of prejudices if you come from a different programming language. So TypeScript can be a guide, because there are some concepts in the type system that you’re familiar with.

Stefan: But also, I think, especially when you have lots of people working in the same code base or lots of people who need to work with each other, this can be an additional layer of guidance in your project where you don’t have any bad surprises in the end. So, of course, technology doesn’t solve any communication problems. This is not what TypeScript is intended for, but it can make a lot more room for the right discussions, if you don’t have to talk about what you expect from me in your code, but rather what your code or your library should do.

Stefan: And, I always say that if you ever write code for other people, or if you write code with lots of people, or if you just write code for yourself that you have to revisit the next day, consider TypeScript, because it might help you in the long run. And this is not just an investment for the next project or next week, but more for, let’s say, long-lasting projects that run for a month, two, or years. Definitely go for that. You’re never going to know what you were thinking when you wrote that little piece of code one year ago, but types can give you a hint of what you meant.

Drew: One thing I think that stopped me looking too closely at TypeScript in the past is I sort of remember things like CoffeeScript that were a sort of new syntax that sort of transpiled down, and I kind of thought that TypeScript was another one of those, but it’s really not, is it? It’s plain JavaScript with some extra things layered on top.

Stefan: Yeah, so this is something that the team also stresses a lot. It’s fundamentally not a new language. So, it could look like that if you look at examples from eight years ago. This was also, just like you, the part I was avoiding for such a long time, firstly because of CoffeeScript, secondly because of tons of JavaScript developers telling me that this is Java, or this is JavaScript for Java developers, like, now they finally get all those tools that they know from years and years of writing Java and they don’t want to change the way they write, they just want to have the same tools but running in a different environment, and this scared me and I didn’t want to have anything to do with it. And it took me, I guess, about six years or so until I tried it again.

Stefan: Especially after watching some videos of TypeScript’s creator Anders Hejlsberg, who spoke exclusively about the tooling aspects and about how this is JavaScript. So, I met him twice in Seattle, and when he went into interview sessions where we all were, he said of himself that he had been writing JavaScript for the better part of the last decade. And if the creator of TypeScript has this idea that he’s writing JavaScript, perhaps, you know, with these extra type annotations, this puts the whole language and the whole tool in a totally different light.

Stefan: And that’s why they’re stressing the fact so much that everything that you have, especially if it’s a new language feature, is JavaScript. So they are very closely aligned with the ECMAScript standard. They are also championing a couple of proposals in the ECMAScript standard. They are involved, they know what’s happening, and if there’s a new feature that reaches a certain stage, they adopt it in TypeScript, but they’re not creating any new language features on their own.

Stefan: Where they innovate is in the type system. And you can really separate the type system from the actual JavaScript code. Of course there’s some mingling of JavaScript code and type code, especially when you do type annotations, but other than that it’s JavaScript with benefits. And those benefits are what make it worth it, in my opinion.

Drew: And I guess we could go through all sorts of those benefits, all sorts of features that are in TypeScript, we could go through them blow by blow, but it doesn’t necessarily make sense to do that in a podcast. It’s difficult to describe code, isn’t it?

Stefan: Oh, you can write an entire book about that.

Drew: Are there any particular features of TypeScript that you’re most excited about or think provide the most value to users?

Stefan: I guess one of the features from the type system that I like most, which is again advanced but not super advanced so that it’s easily graspable, are union and intersection types. Where you can say, okay, this argument or this variable can be not only of this type, but also of another type. Or it needs to have features from this type and this other type. And once you realize that you can make use of that, you suddenly can model your application a lot, lot better.

Stefan: So, I adopted a workflow where I try to think about the objects and the functions that I have, like, what do they expect, what is the data, how are the properties designed. And then I try to work with them as much as possible within functions, and if you use union and intersection types, you have so many tools for modeling your data that if you spend a little time doing that, you catch a ton of errors and a ton of problems up front without spending too much time in TypeScript land.

Stefan: And that's why I guess they would be my favorite feature. And also the fact that TypeScript transpiles everything. I don't need Babel or any other transpiler, and I'm pretty tired of having too many tools that I need to use. So if I can just rely on one tool, and maybe another one for bundling, that takes a lot of noise off my mind. That's what I'm also very thankful for. It can just do a lot.
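A quick sketch of relying on tsc alone for down-leveling; the file name and snippet are made up for illustration, but the `--target` flag is a standard compiler option.

```ts
// modern.ts, compiled without Babel, e.g.:
//
//   tsc modern.ts --target es5
//
// The TypeScript compiler itself rewrites optional chaining and nullish
// coalescing into ES5-compatible checks.
interface Config {
  api?: { url?: string };
}

export function resolveUrl(config: Config): string {
  return config.api?.url ?? "https://example.com";
}
```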

Drew: You’ve written about what it can do in a new book for Smashing. Loads of great information for people who are wanting to learn TypeScript. So, what sort of developer is the book aimed at?

Stefan: Yeah, so if you read TypeScript in 50 Lessons, we assume that you are already a JavaScript developer. You don't have to be a seasoned JavaScript developer, just enough that you've written applications with it, you know some quirks, you know what an object is, you know what an array is, you know what a function is, you know what an assignment is. Stuff like that.

Stefan: And we take you from there, from just enough JavaScript to know how to be dangerous, and guide you through the TypeScript layer. You could write a book about TypeScript where you just talk about every feature that there is, explain each one a little, and let the reader figure out what to do with it, but we take a totally different approach. We focus on one particular part, which is the type system, we leave out a lot of other things that neither the team nor seasoned TypeScript developers would recommend you use, and focus just on the part that is long-lasting.

Stefan: So, this was one thing that I really cared about when writing this book: once it's out, it should stay relevant for years to come. Especially with TypeScript getting four releases a year, you never know all the features and you can't cover all the features. But you can explain how the type system works, and from that point on you can figure things out on your own. And this is what we do, so we give a very in-depth introduction to the type system.

Stefan: In the first four chapters we guide you to the point where, okay, you know how to assign types, you know how to work with types. Then there's this watershed chapter where we go into union and intersection types, and from then on you learn about type modeling and about moving within the type system. And after you've read the last few chapters (there are seven chapters in total), you should know everything you need to be prepared for every piece of TypeScript that there is, for every new class of types they introduce, and for every new class of errors they try to solve.

Stefan: And it took me quite a while to write this book, to be honest, so knowing that I didn't have to change the table of contents and didn't have to introduce any new concepts over the last one and a half years is proof enough to me that we succeeded in that. Maybe we snuck in one or two features from TypeScript 4.0, but that's about it. All the learnings that you get are still valid even though I designed them one and a half years ago. So, yeah, this is the main goal of the book, and it's what we say in the tagline: we want to take you from a beginner to an expert. And I hope we succeed with that. Yeah.

Drew: I certainly found that with the book, because it's broken down into, you know, 50 lessons, it's all in fairly bite-size chunks, and I found that I was able to start using all of it straight away. You read about something and then you can start using it. It's not one of those books where you have to make it all the way through to the end before you can start being productive.

Stefan: Yeah. Yep. Absolutely.

Drew: Very easy to just sort of drop in and drop out of, which, with many of us being so busy and under so much pressure in our jobs and things at the moment, means it's great just to be able to read a little bit, forget about it for a while, and then come back and read a bit more.

Stefan: Yep. This is also something that we took a lot of care to achieve. It can be really overwhelming to learn a new language, especially a new programming language. So those bite-size chunks mean you just spend about five or maybe ten minutes with one of those lessons, and you can immediately apply the learnings of that lesson to some actual code. We provide you with all the code examples online, so if you go to TypeScript-Book.com you see a list of all the code examples that there are.

Stefan: And this helps you get in as much as you need and as much as you like, and it also gives you a lot of room to breathe, to take a break, to get your mind off of it and then revisit it later. This is also why we added some interludes in between chapters, which are mostly non-technical. They give you a little bit of TypeScript culture, a little bit of insight into how the TypeScript team thinks, how the community thinks, and how to write good TypeScript code, without actually focusing on the coding aspect. We also added those to give you a little break, a little room to breathe, to digest what you just learned, because we know that this can be a lot of stuff. And, yeah, if you just take one lesson a day, you're through with it in 50 days and you're an expert in TypeScript.

Drew: I often find that when I'm writing about something, putting together a presentation or an article or something like that, I learn new things that I didn't appreciate before, because having to explain something means you have to make sure you really understand all the details. Was there anything you found about TypeScript while writing the book that you realized you were learning for the first time?

Stefan: Yes. There were two things that I learned while writing the book that really surprised me. One thing is how the type definitions that TypeScript brings along are structured, created, and declared. The team has written a parser that goes over all the web standards from the W3C. There's this Web IDL, the web interface definition language, which is its own language created by the W3C to declare JavaScript interfaces, and they take those definitions and translate them into TypeScript types. And then they have a way of structuring them so they're ready for everything from the ECMAScript 5 standard up to the ECMAScript 2020 and 2021 standards. If you browse through those auto-generated files and see how good they are, how well documented they are, and how they structure types, you can learn a lot.

Stefan: So this was one thing. I kind of lost track at some point while writing, because I ended up spending two or three days just in those lib.d.ts files and soaking up everything that they created. I even have one lesson dedicated to lib.d.ts because it was so, so surprising. And the other thing is, I guess, realizing how generics and conditional types really work under the hood.
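To give a flavor of what those generated declarations look like, here is a simplified, illustrative sketch in the style of lib.d.ts; it is not the literal contents of TypeScript's files, which are far larger and heavily documented.

```ts
// Declaration files describe the shape of built-in APIs; there is no runtime code.

/** A cut-down, illustrative event target (the real one lives in lib.dom.d.ts). */
interface SimpleEventTarget {
  addEventListener(type: string, listener: (evt: Event) => void): void;
  removeEventListener(type: string, listener: (evt: Event) => void): void;
  dispatchEvent(event: Event): boolean;
}

// Newer standards can be layered on top via declaration merging: a later "lib"
// file reopens the same interface and adds newer overloads.
interface SimpleEventTarget {
  addEventListener(
    type: string,
    listener: (evt: Event) => void,
    options?: { once?: boolean }
  ): void;
}
```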

Stefan: Because when you apply them and work with them, you just use them until you get the right results, but you never question what's actually making them work. By explaining them in chapters five and six of my book, I really found out that there are very delicate mechanics underneath, and if you understand those mechanics it gets a lot, lot easier to create conditional types and generic types than it would be just by trying to figure things out. That's why I also have some flows of code in my book where we start with the conditional type that you write and then go step by step, evaluating what it means, until we get to the resulting type.
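In the spirit of those step-by-step walk-throughs (the example type below is a common illustration, not one lifted from the book), here is how a small conditional type can be evaluated by hand:

```ts
// Unwrap the element type of an array; leave anything else alone.
type ElementType<T> = T extends (infer U)[] ? U : T;

// ElementType<string[]>
//   -> does string[] extend (infer U)[]?  yes, with U inferred as string
//   -> string
type A = ElementType<string[]>; // string

// Conditional types distribute over unions, one member at a time:
// ElementType<number[] | boolean>
//   -> ElementType<number[]> | ElementType<boolean>
//   -> number | boolean
type B = ElementType<number[] | boolean>; // number | boolean
```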

Stefan: And this is something I found some joy in, because it really made me understand what my book should actually be about. I spent a lot of time and cared a lot about getting those examples right. I hope readers will get the same joy out of it, because it can be very, very interesting. And, yeah, it gets a little bit nerdy, but that's part of the fun.

Drew: For anyone wanting to actually get started with TypeScript, it sounds like your book is a really great place to begin. Are there any other resources that you'd recommend?

Stefan: So, one thing that I also mention very early in the book is the TypeScript Playground. TypeScript offers an interactive editor online with lots and lots of examples to give you a good feeling for how it is to work with TypeScript, how TypeScript in a JavaScript-only scenario looks and works, which language features are there, and what they mean for your types. Especially in the past year, the TypeScript team hired a person, Orta, just to work on documentation, the Playground, the website, and all those learning resources. And you can see that it has progressed immensely.

Stefan: He has spent so much time refactoring every bit and piece of the whole website that it's now a great learning resource. Orta has also written the foreword to my book, and we were chatting about how a book on TypeScript can or should be different from what they provide as a learning resource. And I think they work really well together. The book gives you a very tailored and opinionated view, a learning resource that guides you step by step, whereas the handbook is this big knowledge base where you get all the additional information and can dig deep into one specific scenario that wouldn't have enough room in a book.

Drew: Stefan’s book TypeScript in 50 Lessons is available digitally from Smashing right now and it’ll be available in print from November 2020. You can find it at TypeScript-Book.com. So, I’ve been learning all about TypeScript. What have you been learning about lately Stefan?

Stefan: I'm digging into different programming languages again. I've been learning a little bit of Go and a little bit of Rust, and looking at what scenarios there are for using them. It's fun learning something entirely new; it gives you a new perspective on what you've learned so far. So, this is what I'm enjoying a lot at the moment.

Drew: It’s always exciting, isn’t it? Learning a new language and getting a new perspective on how other languages are structured.

Stefan: Absolutely.

Drew: If you, dear listener, would like to hear more from Stefan, you can follow him on Twitter where he's DDPRRT, and you can find his personal site at fettblog.eu. TypeScript in 50 Lessons is available now from Smashing and you can read all about it at TypeScript-Book.com. Thanks for joining us today, Stefan. Do you have any parting words?

Stefan: Thank you very much. No, well, DDPRRT is the worst Twitter handle in the entire world, and if you say it very fast it's "dead parrot", and if you know Monty Python, you might know about the dead parrot. So, that's all I can say about the worst Twitter handle that there is.

Drew: He’s pining for the fjords.

Stefan: Yeah. But, seriously, I hope people enjoy working with TypeScript. I hope they enjoy my book. I'm really, really excited about feedback, so if you have any, hit me up on Twitter. I'm here to chat with you about all that stuff. And I'm also very happy to work with you on type problems. If you have something that you can't quite make sense of, just drop me a line or a Twitter direct message. I'll really take the time to see if we can solve the problem.