SmashingConf Fully Online For 2020

Rachel Andrew

2020 has been quite the year, and it’s only July. None of us can be certain what the rest of the year looks like, in particular for travel and events where lots of folks gather together. Given the uncertainty and the success of our online workshop series and Meets events, we’re taking all of our 2020 conferences online. How will that work? Read on to find out!

All of our online conference events will take place on the Hopin platform. We road-tested this platform for our Smashing Meets, and we love the way it allows for social chat and side events alongside the main conference. It’s as close as we can get to an in-person experience.

I just attended my second meetup with @hopinofficial and I can say this is the future of conferences.

Such a great experience!

— Pablo E. Carvallo (@pabloecarvallo) June 4, 2020

First Up: The Rescheduled SmashingConf Live!

We will be presenting SmashingConf Live on August 20th–21st. Two half-days, with four talks each day on UX, data visualization, CSS, and JavaScript.

Meet other people in our chat or network on our event platform. Join in on our Design and Coding Tournament, or enjoy watching folks taking part.

Join one of the many sessions! Listen to one of the fireside chats on privacy, web performance, and Machine Learning (ML). Get your website reviewed by one of our speakers.

Take a look at the full schedule. If you have bought a ticket to the postponed SmashingConf Live, your ticket will still be valid. We still have tickets available: register here and we will see you at the event.

We are still scheduling new workshops and repeats of our most popular workshops. Purchase a workshop with your conference ticket and save 100 USD.

Then, we have moved our in-person events online. These will be as close as possible to the in-person experience, with side-events, a mystery speaker, and all the fun you expect from a SmashingConf.

September 7th–8th: SmashingConf Freiburg Online

The Freiburg conference is moving online on the original dates: September 7th–8th. One track, two days and 13 speakers, with all of the actionable insights you expect from SmashingConf. We’ll be running the event tied to the timezone in Germany — making this a great event for Europeans. Check out the schedule, and buy tickets here.

October 13th–14th: SmashingConf Austin (and New York) Online

We have combined the programming for New York and Austin as these two events were so close together and similar to each other. We’ll be running this event in Central time, just as if we were all in Austin. Check out the schedule, and buy tickets here. We’d love to see you in October!

November 10th–11th: SmashingConf San Francisco Online

Join us in PST (Pacific Time) for a virtual San Francisco event on November 10th–11th. The schedule and tickets are online for you to take a look. We’ll be sure to have a great celebration for our final event of 2020!

Existing Ticketholders

We have moved your in-person ticket to the 2021 edition of your chosen conference, and so that you don’t miss out this year, you are invited to the online version of the event you registered for. Two for the price of one, as our thanks for your support. If that doesn’t work out for you, we do have options, as explained in the email. Didn’t get the email? Drop the team a line at hello@smashingconf.com and we will get back to you.

Join Us!

There are tickets now on sale for all of the above events — we are really looking forward to all of them! One thing we have learned with our online workshops is that taking events online means that lots of people can attend who can’t travel to conferences even in more usual times. That’s really exciting, and we look forward to sharing some days of learning and fun with you all!

Smashing Editorial (cr, aa, il)

Is Redesigning Your Mobile App A Bad Idea?

Suzanne Scacca

I’m all for updating and upgrading mobile apps. I think if you’re not constantly looking at ways to improve the user experience, it’s just too easy to fall behind.

That said, a redesign should be done for the right reasons.

If it’s an existing app that’s already popular with users, any changes made to the design or content should be done in very small, incremental, strategic chunks through A/B testing.

If your app is experiencing serious issues with user acquisition or retention, then a redesign is probably necessary. Just be careful. You could end up making things even worse than they were before.

Let’s take a look at some recent redesign fails and review the lessons we can all learn from them.

Lesson #1: Never Mess With A Classic Interface (Scrabble GO)

Scrabble is one of the most profitable board games of all time, so it’s no surprise that EA decided to turn it into a mobile app. And it was well-received.

However, that all changed in early 2020 when the app was sold to Scopely and it was redesigned as an ugly, confusing and overwhelming mess of its former self.

Let me introduce you to Scrabble GO as it stands today.

The splash screen introducing gamers into the app looks nice. Considering how classically simple and beautiful the board game is, this is a good sign. Until this happens:

The Scrabble GO home screen is distraction overload. (Source: Scrabble GO)

I don’t even know where to start with this, but I’m going to try:

Beyond the UI of the homescreen, the UI and UX within the game board have been altered, too.

Take, for instance, this plea from @lageerdes on Twitter:

Twitter user @lageerdes asks Scrabble GO why the old functionality is gone. (Source: Twitter)

It took Scrabble GO over a week to tell @lageerdes something that could’ve easily been spelled out in a game FAQ or Settings page. These aren’t the only classic features that the new app has either complicated or done away with.

Now, Scopely took note of the negative comments from users and promised to revamp the app accordingly (which was promising). But rather than revert back to the old and much-loved design, it just added a new mode:

Scrabble GO added new 'Mode Settings' to appease users. (Source: Scrabble GO)

You’d think that the mode switcher would be more prominently displayed — like in the menu bar. Instead, it’s buried under the “Profile Settings” tab and there’s no indication anywhere in the app that the classic mode even exists.

Sadly, classic mode isn’t much of an improvement (classic is on the right):

The new Scrabble GO home screen versus the newly designed classic mode home screen. (Source: Scrabble GO)

The colors are toned down and some of the elements in the top half have been cut out or minimized, but it doesn’t address any of the users’ issues with the app or game play.

Worse, many users are reporting the app crashes on them, as this complaint from Twitter user @monicamhere demonstrates:

Twitter user @monicamhere complains to Scrabble GO about the app crashing. (Source: Twitter)

I suspect this is happening because the developers jammed a second overloaded mode into the app rather than simply refine the existing one based on user feedback.

So, what’s the lesson here?

Listen to what your users have to say. It’s valuable feedback that could make a world of difference in the user experience.

Lesson #2: Never Mislead Users At Checkout (Instacart)

This is an interesting case because the people who objected to this particular Instacart UI update weren’t its primary users.

Here’s why the change was an issue:

Users go onto the Instacart website or mobile app and do their grocery shopping from the local store of their choice. It’s a pretty neat concept:

Instacart users can do virtual shopping with grocery stores like Wegmans. (Source: Instacart)

Users quickly search for items and add them to their virtual shopping cart. In many cases, they have the option to either do curbside pickup or have the groceries delivered to their front doorstep. Either way, a dedicated “shopper” picks out the items and bags them up.

When the user is done shopping, they get a chance to review their cart and make final changes before checking out.

On the checkout page, users get to pick when they want their order fulfilled. Beneath this section, they find a high-level summary of their charges:

Instacart checkout sums up the total costs of a user’s order. (Source: Instacart)

At first glance, this all appears pretty straightforward.

But before all that is a section called “Delivery Tip”. This is where Instacart’s shoppers take issue.

They argued that this is a dark pattern. And it is. Let me explain:

The first thing that’s wrong is that the Delivery Tip isn’t included with the rest of the line items. If it’s part of the calculation, it should be present down there and not separated out in its own section.

The second thing that’s wrong is that the tip is automatically set at 5% or $2.00. This was the shoppers’ biggest grievance at the time. They believed that because the “(5.0%)” in the delivery tip line wasn’t there in 2018, users might’ve seen the amount and thought “That seems reasonable enough” and left it at that. Whereas if you spell out the percentage, users may be inclined to leave more money.

For users who take the time to read through their charges and realize that they can leave a larger tip, this is what the tip update page looks like for small orders:

Instacart enables users to change the way they tip the delivery person. (Source: Instacart)

It’s oddly organized as the pre-selected amount sits at the very bottom of the page. And then there’s a random $6 tip included as if the app creators didn’t want to calculate what 20% would be.

That’s not how the tip is presented to users with larger orders though:

Instacart enables users to customize the tip they leave the delivery person, from 5% to 20%, or they can customize the amount. (Source: Instacart)

It’s a strange choice to present users with a different tip page layout. It’s also strange that this one includes an open field to input a custom tip (under “Other amount”) when it’s not available on smaller orders.

If Instacart wants to avoid angering its shoppers and users, there needs to be more transparency about what’s going on and they need to fix the checkout page.

Dark patterns have no place in app design and especially not at checkout.

If you’re building an app that provides users with delivery, pickup or personal shopper services (which is becoming increasingly common), I’d recommend designing your checkout page like Grubhub’s:

The Grubhub checkout page recaps the user’s order and provides tip amounts in percentages. (Source: Grubhub)

Users not only get a chance to see their items at the time of checkout, but the tip line is not deceptively designed or hidden. It sticks right there to the bottom of the page.

What’s more, tips are displayed as percentage amounts instead of random dollars. For U.S. consumers that are used to tipping 20% for good service, this is a much better way to ensure they leave a worthwhile tip for service workers rather than assume the dollar amount is okay.

And if they want to leave more or less, they can use the “Custom” option to input their own value.
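To make the arithmetic behind that design choice concrete, here is a minimal sketch (a hypothetical helper, not Grubhub’s or Instacart’s actual code) that derives the suggested tip amounts from the order subtotal, so the presets scale with the order instead of being fixed dollar values:

```js
// Hypothetical helper: derive suggested tip amounts from the order subtotal.
const TIP_RATES = [0.10, 0.15, 0.20];

function suggestedTips(subtotal) {
  return TIP_RATES.map((rate) => ({
    label: `${Math.round(rate * 100)}%`,
    amount: Math.round(subtotal * rate * 100) / 100, // round to whole cents
  }));
}

// A $78.40 order yields $7.84, $11.76 and $15.68 as presets, which track the
// order size far better than a flat $2 or $6 suggestion would.
console.log(suggestedTips(78.40));
```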

Lesson #3: Never Waver In Your Decision To Roll Back (YouTube)

When the majority of your users speak up and say, “I really don’t like this new feature/update/design”, commit to whatever choice you make.

If you agree that the new feature sucks, then roll it back. And keep it that way.

If you don’t agree, then tweak it or just give it time until users get back on your side.

Just don’t flip-flop.

Here’s what happened when YouTube switched things up on its users… and then switched them again:

In 2019, YouTube tested hiding its comments section beneath this icon:

The Verge and XDA Developers report on a new placement of YouTube comments in 2019. (Source: Verge)

Before this test, comments appeared at the very bottom of the app, beneath the “Up next” video recommendations. With this update, however, they were moved behind this new button. Users would only see comments if they clicked it.

The response to the redesign clearly wasn’t positive as YouTube rolled back the update.

In 2020, YouTube decided to play around with the comments section again. Unlike the 2019 update, though, YouTube’s committed to this one (so far).

Here’s where the comments appear now:

The YouTube comments redesign puts the comments above the 'Up next' section. (Source: YouTube)

They’re sandwiched between the “Subscribe” bar and the “Up next” section.

If YouTube users go looking for the comments section in the old spot, they’re going to find this message now:

A notice appears when YouTube users go looking for comments in the old location. (Source: YouTube)

This is a nice touch. Think about how many times you’ve had to redesign something in an app or on a website, but had no way of letting regular users know about it. Not only does this tell them there’s been a change, but “Go To Comments” takes them there.

With this tooltip, YouTube doesn’t assume that users will zero in on the new section right away. It shows them where it is:

YouTube users see a tooltip that shows them where the new comments section is. (Source: YouTube)

I actually think this is a good redesign. YouTube might be a place for some users to mindlessly watch video after video, but it’s a social media platform as well. By hiding the comments section under a button or tucking them into the bottom of the page, does that really encourage socialization? Of course not.

That said, users aren’t responding well to this change either, as Digital Information World reports. From what I can tell, the backlash is due to Google/YouTube disrupting the familiarity users have with the app’s layout. There’s really nothing here that suggests friction or disruption in their experience. It’s not even like the new section gets in the way or impedes users from binge-watching videos.

This is a tricky one because I don’t believe that YouTube should roll this update back.

There must be something in YouTube’s data that’s telling it that the bottom of the app is a bad place for comments, which is why it’s taking another stab at a redesign. It might be low engagement rates or people expressing annoyance at having to scroll so much to find them.

As such, I think this is a case for a mobile app developer not to listen to its users. And, in order to restore their trust and satisfaction, YouTube will need to hold firm to its decision this time.

Is A Mobile App Redesign The Best Idea For You?

Honestly, it’s impossible to please everyone. However, your goal should be to please, at the very least, most of your users.

So, if you’re planning to redesign your app, I’d suggest taking the safe approach and A/B testing it first to see what kind of feedback you get.

That way, you’ll only push out data-backed updates that improve the overall user experience. And you won’t have to deal with rolling back the app or the negative press you get from media outlets, social media comments, or app store reviews.
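As a rough sketch of what rolling a redesign out behind an A/B test can look like (the function names and the 10% rollout share below are illustrative, not from the article), a deterministic hash can assign a stable fraction of users to the redesigned screen so the two variants can be compared before committing:

```js
// Illustrative sketch: deterministically bucket users so a fixed share sees
// the redesigned screen. The bucket is stable per user across sessions.
function hashToUnitInterval(userId) {
  let hash = 0;
  for (const char of String(userId)) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash / 0xffffffff; // map to the range [0, 1]
}

function assignVariant(userId, rolloutShare = 0.1) {
  return hashToUnitInterval(userId) < rolloutShare ? "redesign" : "control";
}

// Record the variant alongside engagement/retention metrics, then compare the
// two groups before deciding whether to widen the rollout or roll back.
console.log(assignVariant("user-42"));
```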

Further Reading on SmashingMag:

Smashing Editorial (ra, yk, il)

Smashing Podcast Episode 20 With Marcy Sutton: What Is Gatsby?

Drew McLellan

Today, we’re talking about Gatsby. What is it and how does it fit into your web development stack? Drew McLellan talks to expert Marcy Sutton to find out.

Show Notes

Weekly Update

Transcript

Drew McLellan: She is the lead engineer on the Developer Relations team at Gatsby. Previously, she worked on the open source axe-core accessibility testing library, and has also worked as an accessibility engineer at Adobe. She’s passionate about improving the web for people with disabilities and often speaks about it at conferences. In 2016, O’Reilly gave her a Web Platform Award for her work in accessibility.

Drew: She also co-leads the Accessibility Seattle and NW Tech Women meetups in her local region. So, we know she’s a skilled engineer and an accessibility expert. But did you know she wants to send it down Angel Falls in a barrel? My smashing friends, please welcome Marcy Sutton.

Marcy Sutton: Hello.

Drew: Hello, Marcy. How are you?

Marcy: I’m smashing. How are you?

Drew: I’m very good. Thank you. I wanted to talk to you today about Gatsby. Because it came up in a conversation I was having on a previous episode about learning React with Mina Markham. I realized I was in danger of doing the typical man on the internet thing of giving an opinion on something that I had no direct experience of. So that’s not how we do things at Smashing.

Drew: And I want to make sure that we properly cover Gatsby. And what better way to do it than to talk to somebody who knows it inside and out. So, presuming that perhaps I’ve heard the name Gatsby, but I’ve got no idea where it fits into the stack when building a website: what exactly is Gatsby?

Marcy: Gatsby is a website generator; it currently uses React. But it will create a static website for you that then will rehydrate into a full React web application. So you really get the best of both worlds: fast builds where you’re compiling HTML files that will load fast for users, and then you get all of this enhancement with JavaScript to make really interactive, dynamic web apps.

Marcy: So, it’s really exciting space to be in. And I’ve been working on the learning side with documentation and now on the Devrel team, I’m focused on making it as good as it can be, knowing accessibility challenges with JavaScript and just trying to fix it from the inside out.
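As a minimal sketch of that model (the file name and content here are illustrative, assuming a default Gatsby setup, and are not from the episode), a React component exported from src/pages/ is rendered to static HTML at build time and then hydrates into the live React app in the browser:

```jsx
// src/pages/index.js -- a Gatsby page is just a React component.
// At build time Gatsby renders it to a static HTML file; in the browser the
// same component hydrates, so interactivity layers on top of real markup.
import React, { useState } from "react";

export default function IndexPage() {
  const [count, setCount] = useState(0); // only interactive once hydration runs

  return (
    <main>
      <h1>Hello from a statically rendered page</h1>
      <p>This heading is in the HTML file even with JavaScript turned off.</p>
      <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
    </main>
  );
}
```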

Drew: Many of us will be familiar, I guess, with the concept of a static site generator. And Gatsby seems to broadly fit into that role. But to me, it seems like it goes a lot further than most SSGs do. And most site generators are front-end code agnostic. It seems that with Gatsby, you end up with Gatsby code running as part of your site. Is that a fair assessment? And if so, what sort of things is Gatsby actually doing in your front-end?

Marcy: Yes, I’d say the biggest piece of that is client-side routing. So, Gatsby right now is using Reach Router under the hood. It does kind of its own implementation. But that is the piece that, when you load your static site for the first time, there are HTML files there. So, if the user turns JavaScript off for some reason, your site should still be there, still have content.

Marcy: But if JavaScript is enabled, that’s when this hydration step happens where, when you use links in your Gatsby site, it will go pre-fetch resources from that page, so it will load faster. So, that is all enabled with this JavaScript layer that Gatsby gives you. And beyond that, it really depends: what you’re using in your site is what will end up in that JavaScript bundle.

Marcy: But for things that use a lot of interactivity, like accessible interfaces, that’s a good place to be. For me, I really enjoy having JavaScript available to me at all times, and having my markup just be in a good spot. I know it’s a matter of preference, whether you want your HTML and your JavaScript and your CSS all kind of neatly coupled, and there’s room for variations within building Gatsby.

Marcy: You don’t always have to use something like CSS-in-JS. But it’s really about getting the power of that dynamic JavaScript layer, available to you while you’re writing your website. It’s not like this add-on in a separate file.

Drew: When I think of how a static site generator usually works, I’m thinking of content in perhaps Markdown files. And the generator runs across that content and merges it with templates and creates tens, hundreds, thousands of HTML files, which are the pages of your website. When I think of a React site or app, I’m thinking more about a single-page experience where the interface is created by React on the fly. So, you’re saying Gatsby does both of those? It’s creating all the pages and also enhancing it with JavaScript?

Marcy: It is, yes. Gatsby will use Node.js at build time; it will go over your React components and compile them into HTML files. Which honestly, the first time I looked at Gatsby, I went and turned JavaScript off and was like, “All right, are there other pages here, what is this?” And I was so happy that Gatsby works that way by default. It will create built files from your React components, which is awesome.

Marcy: I have explored more progressive enhancement approaches since it’s in JavaScript. Like, what if you want to output something progressively enhanced for users where, if they do have JavaScript turned off, you don’t want all this broken code that assumes JavaScript is there. So there are some quirks with it. But you can work around that kind of thing, at least for your core user flows where you want someone to still be able to buy something; you might need to add some support for those use cases.

Marcy: But I’ve been pleasantly surprised by the way that Gatsby rolls that out by default. And so, it is a choice that they made to build sites that way, and we’re always evaluating it. Is it still the best way? What do we need to do to give our users what they’re asking for? So, we’re doing some explorations internally, ongoing, just to make sure Gatsby is doing the best job it can at building a website.

Marcy: So keeping bundle sizes small, and making sure that when we make trade-offs for what we say is performant code, like pre-fetching, we have the data to back that up. That’s the kind of thing, as a developer advocate, that I’m super interested in: making sure that what we’re packaging and bundling on websites is actually needed and will really make the best Gatsby site it can make.

Drew: You mentioned performance there, and there’s a big focus on performance, it certainly seems, from the way that Gatsby presents itself. Is that a true feature of Gatsby or is it just the nature of JAMstack websites?

Marcy: I think it can be a nature of JAMstack websites. Ultimately, it’s going to come down to what you’re bundling on your website. So, no matter what framework or tool you’re using, we still have to be thoughtful in what we’re putting in those bundles for end users. But Gatsby really aims to give you good defaults. Not only for performance, but for accessibility as well.

Marcy: But that always takes evaluation; we always have to make sure that if we’ve added something, it’s still performant. But yeah, getting that initial payload of static files, they load fast. Much faster than the classic WordPress site that I used to have. But then enhancing it with JavaScript. There are some trade-offs there for sure.

Marcy: But it works really well, lots of people, they really like their Gatsby sites. So, it’s been fun to get to work on it full time, and learn the ins and outs of a JavaScript framework like Gatsby.

Drew: What sort of performance features does Gatsby actually put into place to speed up your sites?

Marcy: Well, with the pre-fetching for links, this client-side routing stuff, I’d say that’s probably the biggest one. Making it really easy to generate a progressive web app. So, having some offline capabilities, you can sort of pick and choose what you want in terms of offline and PWA type stuff. But they really make that part of the initial experience, like a lot of the starter example sites that you might start from have examples of using a manifest, and kind of making that modern version of your website.

Marcy: Really, it’s like not shipping code that you don’t need. That’s a big part of it. Caching, that’s really the pre-fetching for links. That’s what I would say is the biggest piece of it.

Drew: So this is where the site is actually anticipating where the user is going to go. Is it as intelligent as that or does it pre-fetch everything on the page or?

Marcy: No, it’s based on user interaction. So, if the user scrolls down the viewport, there’s some pre-fetching that happens there. If you hover over links, it will kind of estimate that there’s a pretty good chance that you might go to that page. We’ve been talking internally, well, I guess, in open source too, about whether that pre-fetching should happen on keyboard focus too, so that intersection of accessibility and performance is very interesting.

Marcy: There’s some trade offs there like should a keyboard user who can’t use the mouse and is tabbing through every link to navigate, should that really be fetching content for every single one of those because a mouse user might be a bit more selective about where they put their mouse cursor. So, those conversations I find extremely fascinating.

Marcy: And trying to think of what data we need to validate these assumptions too. So yeah, it’s been super interesting to look at those defaults and what improvements we can make, and really checking, like, how much data is that fetching? Is that really a good thing? Just to speed it up a little bit? Or is it fast enough without that? Are there alternative solutions that we could use? That’s part of the fun of working on a framework, being able to evaluate all those trade-offs.

Drew: Is this pre-fetching something that users just get for free in their site? Or do they have to do any work to implement it?

Marcy: You do get it for free using Gatsby link. So it’s a component that comes with Gatsby and when you use that, it outputs anchor tags. So your HTML is real HTML and you’ve leveraged the web platform in that way. But in your React components, you are working directly with the Gatsby link component. And that has all of those mechanisms for… It looks at whatever your future HREF will be, for that link of where you want to go to and it will go and grab resources from that link and preload them.

Marcy: And it’s only internal to your site. So it’s not going off and trying to fetch things on other websites. But it seems to work pretty well. I know some users are actively looking for ways to opt out of some of these things; at least for routing, if you’re not using the pre-fetching, you just use regular anchor tags, and then you don’t really get that functionality. It’s pretty easy to use something else. But some of the discussions we’re having are around client-side routing, and how to make that the best it can be. And so, that’s a really interesting space too.
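A small sketch of what that looks like in a component (the route paths here are made up, not from the episode): Gatsby's Link component renders a real anchor tag and pre-fetches internal pages, while external URLs keep using a plain anchor:

```jsx
import React from "react";
import { Link } from "gatsby";

// Link renders an <a> under the hood and pre-fetches the linked internal
// page's resources (on hover, or when the link scrolls into the viewport),
// so navigating to it feels close to instant.
export default function SiteNav() {
  return (
    <nav>
      {/* internal routes: Link gives client-side routing plus pre-fetching */}
      <Link to="/">Home</Link>
      <Link to="/blog/">Blog</Link>

      {/* external destinations are not part of the site, so a plain <a> is used */}
      <a href="https://www.gatsbyjs.org/docs/">Gatsby docs</a>
    </nav>
  );
}
```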

Drew: How closely do you have to work within the Gatsby ecosystem in terms of if I wanted to have my own link component? Would that be completely fine, I wouldn’t be fighting against Gatsby to do that sort of thing?

Marcy: No, you could slot in whatever components you want, as long as they work with the React runtime. That’s really the beauty of it. Anything you could put in a React app, you could put in a Gatsby app. There’s even a Preact plugin. So, there are some alternatives to working with Gatsby. But I love how you can pull in whatever off-the-shelf components that you want to use, or write your own.

Marcy: And I think that flexibility is what people really enjoy. There is the caveat that it uses the React runtime. And so, you have to be okay with using React or using this Preact plugin. But personally, I really like React and JSX for working with accessibility and templates, especially with React hooks. So, being able to use the hooks in my Gatsby site is just so cool. I really like it.

Drew: And what’s the process of building a Gatsby site? It’s presumably a Node module that you can just install, and you would do a build like you would with any other static site generator?

Marcy: Yes. There is a CLI that you install globally. And I guess it’s up to you whether you want to install it globally; that’s what we recommend, because then you can run it from any directory on your computer, but it will pull down whatever you need to build a Gatsby site. And then you can add on, say you wanted to use WordPress as a headless CMS or some other content source.

Marcy: You can install packages, plugins to make that work, and then integrate it with your site. There’s also some starters and themes that you can use to get up and running quicker. I’ve used those if I want to test out something or start a site rapidly for a specific integration like Drupal or Prismic or whichever CMS or eCommerce solution or something I want to use.

Marcy: There’s lots of examples. So you’re not always tinkering with trial and error trying to figure it out, but it’s sort of these building blocks that you can piece together and create… It’s what we call the content mesh. And so, you can use these best in breed integrations to create a site instead of, if I had a classic WordPress site, the authoring experience and working with teams is really great.

Marcy: But there were shortcomings in the front end, like how it would work on a mobile device. What else? If I wanted an eCommerce solution? I think there’re some things that are easier to do these days, but being able to pick whichever kind of best solutions you want for authentication, or whatever that modern thing is, you’re like, “I wish I could use that.” With Gatsby, you can pull a lot of these things together and make this content mesh way of building that’s pretty refreshing.

Marcy: Especially when you can still use those integrations like WordPress and still work with teams. So, we’re pretty excited about this new way of working where you can pick all the technologies that you’ve like, or that work for your team.
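A sketch of how those integrations get wired together in gatsby-config.js (the directory path and plugin choices are examples; a headless CMS would swap in its own source plugin, such as gatsby-source-wordpress):

```js
// gatsby-config.js -- each content source or capability is added as a plugin,
// and together they form the "content mesh" the site is built from.
module.exports = {
  siteMetadata: {
    title: "My Gatsby Site",
  },
  plugins: [
    // pull local Markdown files into Gatsby's data layer...
    {
      resolve: "gatsby-source-filesystem",
      options: {
        name: "posts",
        path: `${__dirname}/content/posts`,
      },
    },
    // ...and transform them into queryable nodes (HTML + frontmatter)
    "gatsby-transformer-remark",
    // a headless CMS would be one more source plugin here,
    // e.g. gatsby-source-wordpress or gatsby-source-drupal
  ],
};
```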

Drew: One of the big features that Gatsby touts strongly is this ability to pull in data or content from a variety of different sources. You mentioned things like WordPress and Drupal, and what have you. Traditionally, if I was using something like Jekyll or Eleventy, or something like that, I would need to wire that up myself to interact with APIs, perhaps pull content down and write it into markdown files or JSON files, and then have the generator work with those files.

Drew: So it would be a sort of two-step process. I could use something like Sourcebit, which we covered on a previous episode, that does that sort of thing. Do I understand rightly that Gatsby has just this native ability to consume different sources in a way that other static site generators just don’t?

Marcy: I think what makes Gatsby really strong in this area is its GraphQL data layer, and the plugin ecosystem. So, chances are someone has already written a plugin for whatever data source you’d be looking to build. And if not, there’s probably something close. But using GraphQL, is kind of the under workings of it. The layer that makes all of these integrations possible is using GraphQL.

Marcy: And so, there’s lots of possibilities for what you could pull in and we try to make it easy to write plugins too. So it’s been really neat learning about how to write a plugin, and the AST or Abstract Syntax Tree that it creates and kind of learning about how all that works has been really cool. But yeah, I’d say, there are a lot of things off the shelf that you could pick up without having to write it all yourself, which is pretty awesome.

Marcy: And it’s nice to have the flexibility to pull in Markdown. Say your developers want to write their blog content in Markdown, but the marketing team is really not happy with that; you could combine content sources, and source them from multiple places. I’ve seen people sourcing things from other GitHub repos, and they use a Git plugin to pull in Markdown content that way. Lots of flexibility.
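A sketch of what querying that GraphQL layer looks like from a page component (the field names assume a Markdown setup like the one sketched above; other sources expose their own node types):

```jsx
import React from "react";
import { graphql } from "gatsby";

// The exported query runs at build time, and its result arrives as the
// component's `data` prop, already shaped like the query.
export default function BlogIndex({ data }) {
  return (
    <ul>
      {data.allMarkdownRemark.nodes.map((post) => (
        <li key={post.id}>{post.frontmatter.title}</li>
      ))}
    </ul>
  );
}

export const query = graphql`
  query {
    allMarkdownRemark {
      nodes {
        id
        frontmatter {
          title
        }
      }
    }
  }
`;
```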

Drew: And I guess you’ve got the option then of writing your own plugins to pull from a custom data source. Say you’ve got some legacy system and you want to put a nice, shiny new website on the front of it. You could write a plugin that would get the data out in whatever format is needed and translate it into something that Gatsby can work with?

Marcy: You could yes. And so, plugins enable that. And then there’s this abstraction on top of that, which we call Gatsby themes. And those are not only user interface code, but they could be GraphQL queries, configurations that set up a plugin, so it’s like a plugin with usage kind of bundled together. And you can distribute those themes on NPM.

Marcy: And then they’re versioned, and you can pull them in. And that whole API is really interesting too for teams who, say, have multiple repos and want to pull data into those; it would be very repetitive to have the same queries in all of these repos, in the same code. So, to DRY things up a bit and not repeat yourself so much, you can use these abstractions called themes to sort of distribute that logic or code that would enable that source plugin. So, there’s these kind of layers of abstractions that you can build on top of it that we’ve heard that teams are really getting a lot out of right now.
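For the legacy-system case Drew asked about above, a minimal sketch of what a custom source plugin's gatsby-node.js might do (the API endpoint, node type and node-fetch dependency are hypothetical assumptions, not from the episode):

```js
// gatsby-node.js of a hypothetical source plugin: fetch records from a legacy
// API and turn each one into a node that Gatsby's GraphQL layer can query.
const fetch = require("node-fetch"); // assumes node-fetch is installed

exports.sourceNodes = async ({ actions, createNodeId, createContentDigest }) => {
  const response = await fetch("https://legacy.example.com/api/products"); // hypothetical endpoint
  const products = await response.json();

  products.forEach((product) => {
    actions.createNode({
      ...product,
      id: createNodeId(`LegacyProduct-${product.id}`),
      parent: null,
      children: [],
      internal: {
        type: "LegacyProduct", // queryable as allLegacyProduct in GraphQL
        contentDigest: createContentDigest(product),
      },
    });
  });
};
```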

Drew: So a theme in the Gatsby world isn’t a look and feel like it would be with a CMS like WordPress.

Marcy: Yeah, I mean, it can be, but that’s not all that it is. So, naming things is very hard. But with themes, I’ve really enjoyed learning about the flexibility: yes, you could include some user interface code, but there could be some query language code that goes in there as well. The fact that it’s kind of bundled together makes it easy to distribute. Yeah, it’s been a really neat abstraction, and it’s been cool to see what people are building and what themes they’re shipping, and all that.
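In practice, consuming a theme looks much like using a plugin. A hedged example, using the official gatsby-theme-blog package; the options shown are its documented ones, but check the theme’s own docs for what your version accepts.

// gatsby-config.js: pulling a published theme in from npm and configuring it.
module.exports = {
  plugins: [
    {
      resolve: "gatsby-theme-blog",
      options: {
        basePath: "/blog",            // where the theme mounts its pages
        contentPath: "content/posts", // where it looks for Markdown posts
      },
    },
  ],
}

Because a theme ships React components as well as configuration, you can also override individual files it provides (component shadowing) without forking the whole thing.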

Drew: Yeah, I can imagine it would lead to some fairly innovative uses of Gatsby. Have you seen anything in particular that caught your eye, something customers are doing that’s particularly creative?

Marcy: Yeah. Well, in terms of themes, one of the first ones I read about is a case study on the Gatsby blog, I think from Apollo. They built a documentation site using Gatsby themes, and that used a Git source plugin. And so, it really kind of decouples your sourcing and your content, making it so that teams can pull in a theme to use across multiple repos.

Marcy: I’d say that’s the most interesting to me, just because of what I can envision it enabling. On past teams where we had to source content, we were just so limited in where that code could live and how repeatable it could be. And so, it’s great seeing a solution now where teams are like, “Oh, this works great.” And that was even last summer; that case study was a while ago.

Marcy: So since then, the APIs have been improving, and there’s a whole team working on Gatsby themes. I know they are rolling out some big improvements in the next few weeks. I don’t want to steal their thunder, but there’s some neat stuff coming with themes. They’ve been overhauling some of the blog themes, like the core themes that we offer from Gatsby.

Marcy: I know they’re using it internally to build some of our own product announcements, or product improvements, that will be announced in the next couple of weeks. So, lots of cool stuff going on with Gatsby themes, and people selling their own themes and starters. I think that’s really interesting too.

Drew: There’s a bit of a marketplace springing up around Gatsby.

Marcy: There is, yeah.

Drew: Is there any sort of online training and those sorts of things? If somebody decided that they were really going to get into Gatsby and they needed to learn it quickly, are there places they can go where that sort of information is available?

Marcy: A ton of it, yes. There’s definitely the Gatsby Docs site, which is gatsbyjs.org/docs. We have tutorials, and I’ve been doing live streams almost every week for Gatsby stuff. There are a ton of educators who have Gatsby material on YouTube and various learning platforms. Egghead, too: I think some of my teammates from Gatsby have Egghead videos as well.

Marcy: So, there is a ton of stuff out there. I would say check the dates on it if you find something. We’re always actively updating the Gatsby Docs, but some of the older third-party videos and things may be out of date, so check the dates on those, because we can’t monitor every single learning resource for updates. It’s hard to keep up with our own stuff.

Marcy: Because there’s just so much, with how many content sourcing options and use cases there are. It’s a very broad space. But there’s so much learning material out there, and a ton of ways to get started, so you can find things depending on where you are on your learning spectrum. Are you at the beginning stages, or are you coming from other technologies and you just need to learn about what this React thing is?

Marcy: You can sort of pick and choose which materials will work for you based on where you’re at. I’ve been doing a course recently through live streams called Gatsby Web Creators, where we went all the way from beginner HTML, CSS and JavaScript through to creating our first Gatsby site. We just completed that on Friday. And so, it’s been really neat to go all the way back to the beginning.

Marcy: And because a lot of the material with Gatsby assumes React, it’s a pretty big jump to get started with that. So, I really wanted to go back and take the steps to get all the way through to building things with React and Gatsby. That was really neat, and I’m excited to continue on that route, so that there is more beginner material and more things to help people understand how to build a site with Gatsby, because a lot of those skills are portable to other frameworks.
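For readers wondering what that first step looks like in code, here is a minimal sketch of the kind of page a beginner course builds towards; the file path and copy are made up, but any React component placed in src/pages becomes a route, and Gatsby’s Link component handles navigation between pages.

// src/pages/index.js: a first Gatsby page.
import React from "react"
import { Link } from "gatsby"

export default function HomePage() {
  return (
    <main>
      <h1>My first Gatsby site</h1>
      <p>Built with HTML, CSS and JavaScript knowledge, plus a little React.</p>
      <Link to="/about/">About this site</Link>
    </main>
  )
}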

Drew: One of the big questions that’s going to come up for anybody who’s thinking about building client project sites using Gatsby is about managing content and putting stuff in front of a client. You mentioned already how Gatsby can connect to different content management systems. Is that the primary method that you would put in place to deal with that question? Or does Gatsby have anything in its ecosystem that would enable people to edit content in any way?

Marcy: Yeah, I would say having a CMS or something similar can make those team relationships work a lot better. I’ve been in those situations where the dev team is like, “Just learn HTML.” And you see this glaze over from the client, like, “No, I can’t believe you just said that.” So having a system where people can do their best work in whatever way suits them best is super, super important.

Marcy: You just can’t hand a marketer GitHub; it might work some of the time, but not all the time. And so, having a preview and build infrastructure makes that better, and that’s where the Gatsby Cloud product space kind of enters the fray. There are ways to do preview without the paid cloud side, and Gatsby Cloud does have a free tier for personal projects, so it’s not all paid.

Marcy: But we have this place where the open source and the product ecosystems come together, so that Gatsby, as a founding organization, can make enough money to keep the open source framework healthy and keep our community rolling along with that. So, that’s kind of where the open source and commercial sides come together, really enabling some of these workflows that teams need.

Marcy: Things like getting fast previews, and getting builds out the door fast and deployed. And so, there are solutions on the Gatsby Cloud side specifically, and then wherever there is an open source way to make Gatsby work, like with a preview server or something, we try to document that and make sure our community knows what’s what and how to serve those team needs.

Marcy: Yeah, I would say you need some way to preview your CMS changes, because it’s that instant gratification. You don’t want to be waiting an hour for a build to see some content.

Drew: So that’s interesting. The Gatsby Cloud service gives you the ability to use a headless CMS, where you’re just working with the content with no visualization of what it would look like in your site, and it enables you to get a preview of how that would work. Is that right?

Marcy: It is, yeah. It’s part of the trade-off of decoupling your headless CMS. With WordPress, you could just look at the front end, but we’re giving it a new front end, and potentially pulling in other sources and other things that WordPress doesn’t know about. And so, decoupling it in that way makes sense. But as a team member, you still have to be able to do your work at the speed you’re used to.

Marcy: And so, that is where Gatsby Preview and Gatsby Builds come in, to give that front end back to teams so they can collaborate, make decisions, and get something shipped. That has sprung up in the last year, getting more features and improvements all the time, and we’ve heard from some teams that they’re really starting to see speed increases.

Marcy: And as we figure out, “Okay, if this build is going slow, why is that?”, it’s usually because the site is really, really big. So we’ve been focused a lot on improvements for large sites, and on really improving those collaborative team workflows. It’s a big focus of the team right now.

Drew: So Gatsby Cloud, I guess, is at its heart a hosting service. Is it a CDN for deploying your Gatsby sites, with a load of Gatsby-specific functionality and features around it?

Marcy: I would call it more of a continuous delivery product, because it’s not an actual CDN. It integrates with CDNs like Fastly and Netlify. There are a lot of different providers that you can hook up, some of them for free. You can do a lot for free, which is pretty awesome. I just did it the other day: in our last Gatsby Web Creators session, we used Gatsby Cloud and Netlify to build our site.

Marcy: And it enables you to make Gatsby sites faster specifically, because it does have those improvements; it only has to build one type of site. So, there are some improvements that Gatsby Cloud can make that no other platform can, because they’re trying to support all of these different types of websites, and they do them all very well.

Marcy: But if Gatsby is all you’re building, and there are quite a few agencies who are all in on Gatsby, you want to make it as fast as you can. That’s where Gatsby Cloud can make some performance improvements specifically for Gatsby, because it doesn’t have to worry about any other platforms.

Drew: So, Gatsby Cloud would do your build, and it would then just deploy it to something like Netlify, or presumably a whole range of different places.

Marcy: Yep, it will. And so, the piece of Netlify it would be using then is the uploading of these built packages, the built files. It’s not using their builds; the builds are happening on Gatsby Cloud’s infrastructure, and that’s where a lot of the speed increases can happen. And then there’s still that upload step to get it out to a CDN, whichever one you’ve chosen.

Marcy: But yeah, it seems like teams are really loving this ability to see their changes. I mean, it’s functionality that you would otherwise have missed. And so, that’s a necessary thing to add back in: being able to do these collaborative previews and get sign-offs and all of that.

Drew: So, Gatsby Cloud is provided as a service from Gatsby the company, and there’s Gatsby the open source project as well. Is this a similar sort of relationship to the one WordPress and Automattic have, where you’ve got a commercial entity developing an open source product?

Marcy: I would say so, yeah; Drupal too. There’s precedent in tech for these founding organizations, where it’s kind of a virtuous cycle. And we’re working on publishing some governance documentation right now to make sure that it’s super clear to our community how we make decisions. But the entire goal is to keep Gatsby sustainable, so that it can continue to be an open source project that people can use without ever even getting into Gatsby Cloud.

Marcy: You could use other solutions with it if you want. And so, we need enough business to sustain the people working on it. I’m kind of in between: I float between the open source and commercial sides, trying to make sure that we’re prioritizing things. As you can imagine, we’re juggling a lot with how broad the space is; we all have our niche use cases that we feel really strongly about, that we need for our jobs.

Marcy: That adds up to be a lot of niche use cases. So, we try to juggle and prioritize and really listen to our community about what hurts right now, what’s painful, and what’s going well. And it’s been an interesting journey for me personally to get back into DevRel and really listen to the community about how we can be even better.

Drew: And is there a big community around Gatsby, lots and lots of people using it?

Marcy: There are a lot of people using it, and a lot of contributors. For a lot of folks, it might be their first time contributing to open source, coming over to our docs and joining us for Hacktoberfest and things like that. And so, it has been really neat to see what a big community Gatsby does have, especially around things like accessibility and trying to make sure that frameworks do all they can out of the box for free.

Marcy: And so, there’s this, I don’t know, subset or intersection of accessibility and Gatsby, and that’s my happy place. But in the broader community, a lot of people learn React or learn web development through Gatsby. It’s really neat to see a progression through our community, and hopefully we get people to come contribute, even if it’s an issue of, “Hey, this link was broken, or this part of the docs was confusing to me, or it’s outdated.”

Marcy: Even just telling a framework or a project that you use that something could be better is a great way to contribute, because you help us gain insight into the things that could use improvement.

Drew: You mentioned accessibility, and of course, people will know you as an accessibility expert. And they might be surprised to see you working with a fully featured front-end framework like React, thinking that perhaps the two don’t really go together. Is it always the case that JavaScript-heavy front ends are less accessible?

Marcy: Well, I wish it weren’t the case, but I think the data has shown that a lot of websites that do use front-end frameworks are less accessible than those that don’t. A project that comes to mind is the WebAIM Million. And actually, I have a blog post about it; I keep refreshing the Gatsby site to see if it has launched yet. In the WebAIM Million project, they used their automated WAVE tool to crawl the top one million home pages and evaluate them for accessibility violations.

Marcy: And the results were really depressing. They’ve run it twice now, I think, and I think it got worse. So, it’s not great, but I don’t think you can really point to any one framework, because there are plenty of sites that don’t use frameworks that have lots of accessibility problems. So, it’s kind of a broader industry issue, really a societal one.

Marcy: For me, working on a full-featured web framework was an opportunity to try and get more accessibility awareness into the mainstream. It was an intentional move on my part to go and try to make an impact on a lot of sites. Working on one site is cool, and you can solve some really interesting problems, but I wanted to advocate for accessibility much more broadly and try to make frameworks the best they can be from the inside.

Marcy: So even if something is rough right now, I’m trying to play that long game of, “Okay, what web standards things can we talk about? What framework improvements can we make?”, so that if something is kind of rough, we don’t just give up on it. I know JavaScript is some folks’ enemy, but I feel like I like it. You need some JavaScript to make accessible user interfaces; you just do.
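As a small, hedged illustration of that last point (not code from Gatsby itself), a disclosure widget is a classic case where JavaScript has to keep component state and the aria-expanded attribute in sync for assistive technology.

// A simple React disclosure component: JavaScript manages the open state
// and keeps aria-expanded accurate for screen reader users.
import React, { useState } from "react"

export function Disclosure({ summary, children }) {
  const [open, setOpen] = useState(false)

  return (
    <div>
      <button aria-expanded={open} onClick={() => setOpen(!open)}>
        {summary}
      </button>
      {open && <div>{children}</div>}
    </div>
  )
}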

Marcy: So, I’m trying to straddle those viewpoints and do the right thing while listening to my activist colleagues and friends out there pushing us forward as an industry. And then, on the inside, I can be the messenger and the person trying to reconcile some of those huge trade-offs and ethical questions about what we’re building.

Marcy: It’s challenging, but I really like it, because I have an impact to make on the web. And a web framework, with lots of people building Gatsby sites, seems like a good place to try and make that impact.

Drew: You mentioned briefly that Gatsby uses React at the moment. Is there a possible future where Gatsby might work with other frameworks? Might we see a Vue version of Gatsby?

Marcy: I would love that. I’ve certainly talked about it. There is a Preact plugin, as I mentioned earlier, so you can swap that out. I think a big part of what we are talking about is the sustainability of projects, trying to make the right call; these aren’t easy choices to make. It’s not just ripping it out and starting over. There are a lot of concerns that go along with that. It goes deep.

Marcy: So, it’s something we’re actively talking about, and I don’t really have anything specific to share right now, but we do have some internal meetings coming up soon to talk about that sort of thing. So, it’s being discussed, and I would love to have a Vue flavor; that’d be amazing. But as you can imagine, there are some interesting challenges that come along with that, and we want to make sure it’s the right move, so that we’re not going down a path, having it not work for whatever reason, and then maintaining two frameworks. How do we make that actually realistic in terms of what we can maintain and make succeed for an open source community?

Drew: So I’ve been learning all about Gatsby. What have you been learning about lately, Marcy?

Marcy: Well, I wish it were better, but work-life balance. Unfortunately, I’m in a bit of a burnout cycle, so I feel like I’m continually learning the lesson of how to be productive, especially this year in 2020; there’s just one thing after another. I’m trying to get really clear focus on where I want to go in my career, and what makes me happy.

Marcy: How can I sustain? We were just talking about sustainability: how can I sustain my own life after a career of really pushing hard on accessibility in particular? How can I take a little step back and make sure that what I’m putting out there, what I’m doing, is meaningful and worth the energy? A lot of my lessons have been at that intersection of work and life, trying to make the most of a time that’s been… I don’t know about you, but it’s been pretty stressful for a lot of people, including me.

Drew: It’s been very, very stressful. These are very difficult times, aren’t they?

Marcy: Yeah, yeah. I mean, we have so much to be thankful for in this industry, having opportunities and skills that we can apply. We’re seeing a lot of layoffs in our industry, so I’m really trying to make decisions that reflect where we’re at and not just go through the motions. That was a big motivator for Gatsby Web Creators: “Wow, there are a lot of school-age kids not in school this year; it would be really cool to see an outcome of turning some kids’ eyes onto web development.”

Marcy: Like when I was in seventh grade and someone came to a class of mine to talk about photojournalism, I was like, “I want to be a photojournalist.” And that actually did work: I got some feedback from someone who said, “My seventh grader is learning from you, and now they’re really excited about code.” So, that was a really good thing to spend some energy on, and it wasn’t something I would have necessarily thought of before being in these circumstances in 2020.

Marcy: So, I’m really trying to be nimble and make choices that reflect where I want to go and what’s happening.

Drew: If you, the listener, would like to hear more from Marcy, you can find her on Twitter, where she’s @marcysutton, and find all her latest goings-on on her personal website, marcysutton.com. And of course, you can find out how to get started with Gatsby at gatsbyjs.org. Thanks for joining us today, Marcy. Do you have any parting words?

Marcy: Make the most of it wherever that might be.

Smashing Editorial (il)
[post_excerpt] => Today, we’re talking about Gatsby. What is it and how does it fit into your web development stack? Drew McLellan talks to expert Marcy Sutton to find out. Show Notes Gatsby Marcy on Twitter Marcy’s personal website Weekly Update “Make Your Sites Fast, Accessible And Secure With Help From Google” by Dion Almaer “Understanding Plugin Development In Gatsby” by Aleem Isiaka “Creating Tiny Desktop Apps With Tauri And Vue. [post_date_gmt] => 2020-07-14 05:00:00 [post_date] => 2020-07-14 05:00:00 [post_modified_gmt] => 2020-07-14 05:00:00 [post_modified] => 2020-07-14 05:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/smashing-podcast-episode-20/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/smashing-podcast-episode-20/ [syndication_item_hash] => Array ( [0] => b17b3d928209a01d917278069cd1b18b [1] => bd96f8793ccfc386dd08979f54aa89b4 [2] => 8ea937901c6a7878548ac305410c40f2 ) ) [post_type] => post [post_author] => 472 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => smashing-podcast-episode-20-with-marcy-sutton-what-is-gatsby )

Decide filter: Returning post, everything seems orderly: Smashing Podcast Episode 20 With Marcy Sutton: What Is Gatsby?

Array ( [post_title] => Smashing Podcast Episode 20 With Marcy Sutton: What Is Gatsby? [post_content] => Smashing Podcast Episode 20 With Marcy Sutton: What Is Gatsby?

Smashing Podcast Episode 20 With Marcy Sutton: What Is Gatsby?

Drew McLellan

Photo of Marcy SuttonToday, we’re talking about Gatsby. What is it and how does it fit into your web development stack? Drew McLellan talks to expert Marcy Sutton to find out.

Show Notes

Weekly Update

Transcript

Drew McLellan: She is the lead engineer on the Developer Relations team at Gatsby. Previously, she worked on the open source axe-core accessibility testing library, and has also worked as an accessibility engineer at Adobe. She’s passionate about improving the web for people with disabilities and often speaks about it at conferences. In 2016, O’Reilly gave her web platform award for a work in accessibility.

Drew: She also co-leads the accessibility Seattle and NW Tech Women Meetups in a local region. So, we know she’s a skilled engineer and an accessibility expert. But did you know she wants to send it Angel Falls in a barrel? My smashing friends please welcome Marcy Sutton.

Marcy Sutton: Hello.

Drew: Hello. Marcy. How are you?

Marcy: I’m smashing. How are you?

Drew: I’m very good. Thank you. I wanted to talk to you today about Gatsby. Because it came up in a conversation I was having on a previous episode about learning React with Mina Markham. I realized I was in danger of doing the typical man on the internet thing of giving an opinion on something that I had no direct experience of. So that’s not how we do things at Smashing.

Drew: And I want to make sure that we properly cover Gatsby. And what better way to do it than to talk somebody who knows it inside and out. So, presuming that perhaps I’ve heard the name Gatsby. But I’ve got no idea where it fits into the stack when building website. What exactly is Gatsby.

Marcy: Gatsby is a website generator, it currently uses React. But it will create a static website for you that then will rehydrate into a full React web application. So you really get the best of both worlds with fast builds that you’re compiling HTML files that will load fast for users. And then you get all of this enhancement with JavaScript to make really interactive dynamic web apps.

Marcy: So, it’s really exciting space to be in. And I’ve been working on the learning side with documentation and now on the Devrel team, I’m focused on making it as good as it can be, knowing accessibility challenges with JavaScript and just trying to fix it from the inside out.

Drew: Many of us will be familiar, I guess, with the concept of a static site generator. And Gatsby seems to broadly fit into that role. But to me, it seems like it goes a lot further than the most SSG’s do. And most site generators are front-end code agnostic. It seems that with Gatsby, you end up with Gatsby code running as part of your site. Is that a fair assessment? And if so, what sort of things is Gatsby actually doing in your front-end?

Marcy: Yes, I’d say the biggest piece of that is client side routing. So, Gatsby right now is using reach router under the hood. It does kind of its own implementation. But that is the piece that when you load your static site for the first time, there are HTML files there. So, if the user turns JavaScript off for some reason, your site should still be there, still have content.

Marcy: But if JavaScript is enabled, that’s when this hydration step happens where, when you use links in your Gatsby site, it will go pre-fetch resources from that page, so it will load faster. So, that is all enabled with this JavaScript layer that Gatsby gives you. And so beyond that, it really depends kind of what you’re using in your site will end up in that JavaScript bundle.

Marcy: But for things that use a lot of interactivity, like accessible interfaces, that’s a good place to be. For me, I really enjoy having JavaScript available to me at all times, and having my markup just be in a good spot. I know it’s a matter of preference, whether you want your HTML and your JavaScript and your CSS all kind of neatly coupled and there’s room for variations within building Gatsby

Marcy: You don’t always have to use something like CSS and JS. But it’s really about getting the power of that dynamic JavaScript layer, available to you while you’re writing your website. It’s not like this add on in a separate file.

Drew: When I think of how a static site generator usually works, I’m thinking of content in perhaps Markdown files. And the generator runs across that content and merges it with templates and creates 10s, hundreds, thousands of HTML files, which are the pages of your website. When I think of a React site or app, I’m thinking more about single page experience where the interfaces be created by React on the fly. So, you’re saying Gatsby does both of those? It’s creating all the pages and also enhancing it with JavaScript?

Marcy: It is, yes. Gatsby will use Node.js at build time, it will go over your React components and compile them into HTML files. Which honestly, the first time I looked at Gatsby I wouldn’t turn JavaScript off and was like, “All right, are there other pages here, what is this?” And I was so happy that Gatsby works that way by default. It will create built files from your react components, which is awesome.

Marcy: I have explored more progressive enhancement approaches since it’s in JavaScript. Like what if you want to output something progressively enhanced for users, where if they do have JavaScript turned off, you don’t want all this broken code that assumes JavaScript is there. So there are some quirks with it. But you can work around that kind of thing at least for your core user flows where you want someone to still be able to buy something, you might need to add some support and for those use cases.

Marcy: But I’ve been pleasantly surprised that the way that Gatsby rolls that out by default. And so, it is a choice that they made to build sites that way, and we’re always evaluating it. Is it still the best way? What do we need to do to give our users what they’re asking for? So, we’re doing some explorations internally, ongoing just to make sure Gatsby is doing the best job it can at building a website.

Marcy: So keeping bundle sizes small, and making sure that for making trade offs for what we say is performant code with pre-fetching. Like, do we have the data to back that up? That’s the kind of thing as a developer advocate that I’m super interested in, is making sure that what we’re packaging and bundling on websites is actually needed and will really make the best Gatsby site it can make.

Drew: You mentioned performance there, and there’s a big focus on performance. It certainly seems from the way that Gatsby presents itself. Is that a true feature of Gatsby or is it just the nature of JAMstack websites?

Marcy: I think it can be a nature of JAMstack websites. Ultimately, it’s going to come down to what you’re bundling on your website. So, no matter what framework or tool you’re using, we still have to be thoughtful in what we’re putting in those bundles for end users. But Gatsby really aims to give you good defaults. Not only for performance, but for accessibility as well.

Marcy: But that always takes evaluation, we always have to make sure that if we’ve added something, that it’s still performant. But yeah, getting that initial payload of static files, they load fast. Much faster than classic WordPress site that I used to have. But then enhancing it with JavaScript. There are some trade offs there for sure.

Marcy: But it works really well, lots of people, they really like their Gatsby sites. So, it’s been fun to get to work on it full time, and learn the ins and outs of a JavaScript framework like Gatsby.

Drew: What sort of performance features just Gatsby actually put into place to speed up your sites?

Marcy: Well, with the pre-fetching for links, this client said routing stuff, I’d say that’s probably the biggest one. Making it really easy to generate a progressive web app. So, having some offline capabilities, you can sort of pick and choose what you want in terms of offline and PWA type stuff. But they really make that part of the initial experience, like a lot of the starter example sites that you might start from have examples of using a manifest, and kind of making that modern version of your website.

Marcy: Really, it’s like not shipping code that you don’t need. That’s a big part of it. Caching, that’s really the pre-fetching for links. That’s what I would say is the biggest piece of it.

Drew: So this is where the site is actually anticipating where the user is going to go. Is it as intelligent as that or does it pre-fetch everything on the page or?

Marcy: No, it’s based on user interaction. So, if the user scrolls down the View port, there’s something pre-fetching that happens there. If you hover over links, it will kind of estimate that there’s a pretty good chance that you might go to that page. We’ve been talking internally, well, I guess, open source to about whether that pre-fetching should happen on keyboard focus too, so that intersection of accessibility and performance is very interesting.

Marcy: There’s some trade offs there like should a keyboard user who can’t use the mouse and is tabbing through every link to navigate, should that really be fetching content for every single one of those because a mouse user might be a bit more selective about where they put their mouse cursor. So, those conversations I find extremely fascinating.

Marcy: And trying to think of what data do we need to validate these assumptions too. So yeah, it’s been super interesting to look at those defaults and what improvements can we make and really checking like how much data is that fetching? Is that really a good thing? Just to speed it up a little bit? Or is it fast enough without that? Are there alternative solutions that we could use as part of the fun of working on a framework because being able to evaluate all those trade offs.

Drew: This is pre-fetching something that users just get for free in their site. So do they have to do any work to implement it?

Marcy: You do get it for free using Gatsby link. So it’s a component that comes with Gatsby and when you use that, it outputs anchor tags. So your HTML is real HTML and you’ve leveraged the web platform in that way. But in your React components, you are working directly with the Gatsby link component. And that has all of those mechanisms for… It looks at whatever your future HREF will be, for that link of where you want to go to and it will go and grab resources from that link and preload them.

Marcy: And it’s only internal to your site. So it’s not going off and trying to fetch things on other websites. But it seems to work pretty well. I know some users are actively looking for ways, like you actually have to opt out of some of these things. At least routing, not using the pre-fetching. You just use regular anchor tags. And then you don’t really get that functionality. It’s pretty easy to use something else. But some of the discussions we’re having are around client side routing, and how to make that the best it can be. And so, that’s a really interesting space too.

Drew: How closely do you have to work within the Gatsby ecosystem in terms of if I wanted to have my own link component? Would that be completely fine, I wouldn’t be fighting against Gatsby to do that sort of thing?

Marcy: No, you could slot in whatever components you want, as long as they work with the React runtime. That’s really the beauty of it. Anything you could put in a React app, you could put in a Gatsby app. There’s even a pre-React plugin. So, there are some alternatives to working with Gatsby. But I love how you can pull in whatever, off the shelf components that you want to use or write your own.

Marcy: And I think that flexibility is what people really enjoy. There is the caveat of it uses the React runtime. And so, you have to be okay with using react or using this pre-React plugin. But personally, I really like react and JSX for working with accessibility and templates, especially with React hooks. So, being able to use the hut in my Gatsby site is just so cool. I really like it.

Drew: And how’s the process of building a Gatsby site that’s presumably a node module that you can just install and you would do a build like you would with any other static site generator?

Marcy: Yes. There is a CLI that you install globally. And I guess it’s whether you want to install it globally. That’s what we recommend, because then you can run it from any directory on your computer, but it will pull down whatever you need to build a Gatsby site. And then you can add on, say you wanted to use WordPress as a headless CMS or some other content source.

Marcy: You can install packages, plugins to make that work, and then integrate it with your site. There’s also some starters and themes that you can use to get up and running quicker. I’ve used those if I want to test out something or start a site rapidly for a specific integration like Drupal or prismic or whichever CMS or eCommerce solution or something I want to use.

Marcy: There’s lots of examples. So you’re not always tinkering with trial and error trying to figure it out, but it’s sort of these building blocks that you can piece together and create… It’s what we call the content mesh. And so, you can use these best in breed integrations to create a site instead of, if I had a classic WordPress site, the authoring experience and working with teams is really great.

Marcy: But there were shortcomings in the front end, like how it would work on a mobile device. What else? If I wanted an eCommerce solution? I think there’re some things that are easier to do these days, but being able to pick whichever kind of best solutions you want for authentication, or whatever that modern thing is, you’re like, “I wish I could use that.” With Gatsby, you can pull a lot of these things together and make this content mesh way of building that’s pretty refreshing.

Marcy: Especially when you can still use those integrations like WordPress and still work with teams. So, we’re pretty excited about this new way of working where you can pick all the technologies that you’ve like, or that work for your team.

Drew: One of the big features that Gatsby has tout strongly is this ability to pull in data or content from a variety of different sources. You mentioned things like WordPress and Drupal, and what have you. Traditionally, if I was using something like Jekyll or Eleventy, or something like that, I would need to wire up that myself to interact with API’s, perhaps pull content down and write it into markdown files or JSON files, and then have the generator work with those files.

Drew: So it would be a sort of two step process, could use something like source bit, which we covered on a previous episode that does that sort of thing? Do I understand rightly that Gatsby has just this native ability to consume different sources in a way that other static site generators, just don’t?

Marcy: I think what makes Gatsby really strong in this area is its GraphQL data layer, and the plugin ecosystem. So, chances are someone has already written a plugin for whatever data source you’d be looking to build. And if not, there’s probably something close. But using GraphQL, is kind of the under workings of it. The layer that makes all of these integrations possible is using GraphQL.

Marcy: And so, there’s lots of possibilities for what you could pull in and we try to make it easy to write plugins too. So it’s been really neat learning about how to write a plugin, and the AST or Abstract Syntax Tree that it creates and kind of learning about how all that works has been really cool. But yeah, I’d say, there are a lot of things off the shelf that you could pick up without having to write it all yourself, which is pretty awesome.

Marcy: And it’s nice to have the flexibility to pull in markdown. Say your developers want to write their blog content markdown, but the marketing team is really not happy with that, you could combine content sources, and source them from multiple places. I’ve seen people sourcing things from other GitHub repos, and they use a get plugin to pull in markdown content that way. Lots of flexibility.

Drew: And I guess you’ve got the option then of writing your own plugins to pull from a custom data source. Say you’ve got some legacy system and you want to put a nice, shiny new website on the front of it. You could write a plugin that would get the data out in whatever format that is needed and translate it into something that gets bigger than work with?

Marcy: You could yes. And so, plugins enable that. And then there’s this abstraction on top of that, which we call Gatsby themes. And those are not only user interface code, but they could be GraphQL queries, configurations that set up a plugin, so it’s like a plugin with usage kind of bundled together. And you can distribute those themes on NPM.

Marcy: And then, their version and you can pull them in. And that whole API is really interesting too for teams who say you have multiple repos, and you want to pull data into those, it would be very repetitive to have the same queries in all of these repos in the same code. So, to dry things up a bit and not repeat yourself so much. You can use these abstractions called themes, to sort of distribute around that logic or code that would enable that source plugin. So, there’s these kind of layers of abstractions that you can build on top of it that we’ve heard that teams are really getting a lot out of right now.

Drew: So a theme in the Gatsby world isn’t a look and feel like it would be with CMS like WordPress.

Marcy: Yeah, I mean, it can but that’s not all that it is. So, naming things is very hard. But themes I’ve really enjoyed learning about just the flexibility and being able to, yes, you could include some user interface code. But there could be some query language code that goes in there as well. But the fact that it’s kind of bundled together, makes it easy to distribute. Yeah, it’s been a really neat abstraction that it’s been cool to see what people are building and what themes they’re shipping, and all that.

Drew: Yeah, I can imagine it would lead to some fairly innovative uses of Gatsby. Have you seen anything that’s been, in particular that caught your eye that customers are doing this particularly creative?

Marcy: Yeah. Well, in terms of themes, I mean, one of the first ones I read about there’s a case study on the Gatsby blog, I think from Apollo. And they wrote a documentation site using Gatsby themes and that used a get source plugin. And so, it really kind of decouples your sourcing, and your content, making it so that teams can pull in a theme to use across multiple repos.

Marcy: I’d say that’s the most interesting to me just because of what I can envision it enabling like, past teams I was on where we had to source content, we were just so like limited and where that code could live and how repeatable it could be. And so, seeing a solution now where teams are like, “Oh, this works great.” And that was even last summer, or like that was a case study a while ago.

Marcy: So since then, API’s have been improving, and there’s a whole team working on Gatsby themes. And I know they are rolling out some big improvements in the next few weeks. So, I don’t want to steal their thunder, but there’s some neat stuff coming with themes. They’ve been overhauling some of the blog themes like the core themes that we offer from Gatsby.

Marcy: I know they’re using it internally to build some of our own product announcement, or product improvements that will be announced here in the next couple of weeks. So, lots of cool stuff going on with Gatsby themes, and people selling their own themes and starters. I think that’s really interesting too.

Drew: There’s a bit of a marketplace springing up around Gatsby.

Marcy: There is, Yeah.

Drew: Is there any sort of online training and those sorts of things if somebody wants… If somebody decided that they were really going to get into Gatsby and they needed to learn it quick? Are there run places they can go with that sort of information is available?

Marcy: A ton of it? Yes. There’s definitely the Gatsby Doc’s site, which is gatsbyjs.org/doc’s. And we have tutorials, and I’ve been doing live streams almost every week for Gatsby stuff. There are a ton of educators who have Gatsby material on YouTube and various learning platforms. Egghead, I think some of my teammates from Gatsby have egghead videos as well.

Marcy: So, there is a ton of stuff out there. I would say check the dates on it if you find something. We’re always actively updating the Gatsby Doc’s, some of the older third party videos and things that may, check the dates on those because we can’t monitor every single learning resource for update. It’s hard to keep up with our own staff.

Marcy: Because there’s just so much with how many content sourcing options and use cases. It’s a very broad space. But there’s so much learning material out there, and a ton of ways to get started that you can sort of try and find things like depending on where you are on your learning spectrum. Are you at the beginning stages, are you coming from other technologies and you just need to learn about like what is this React thing.

Marcy: You can sort of pick and choose which materials will work for you based on where you’re at. I’ve been doing a course recently through live streams called Gatsby Web Creators, where we went all the way from beginner HTML, CSS and JavaScript through to creating our first Gatsby site. We just completed that on Friday. And so, it’s been really neat to go all the way back to the beginning.

Marcy: And because a lot of materials with Gatsby, it uses React. So, it’s a pretty big jump to get started with that. So, I really wanted to go back and take the steps to get all the way through to building things with React and Gatsby. So that was really neat. And I’m excited to continue on that route, so that there is more beginner material and more things to help people understand how to build a site with Gatsby because a lot of those skills are portable to other frameworks.

Drew: One of the big questions that is going to come up for anybody who’s thinking about building sort of client project sites using Gatsby, one of the big questions that’s going to come up is about managing content and putting stuff in front of a client. You mentioned already how Gatsby can connect to different content management systems. Is that the primary method that you would put in place to deal with that question? Or does Gatsby have anything in its ecosystem that would enable people to edit content in any way?

Marcy: Yeah, I would say having a CMS or something can make those team relationships work a lot better. I’ve been in those use cases where the dev teams like, “Just learn HTML.” And you see this glaze over from the client of like, “No, I can’t believe you just said that.” So having a system where people can do their best work in whatever ways suits them best, is super, super important.

Marcy: Like you just can’t handle marketer GitHub, and might work some of the time but not all the time. And so, having like a preview and build infrastructure makes that better, and that’s where the Gatsby cloud product space kind of enters into the fray. There are ways to do preview. Without the paid cloud side and Gatsby cloud does have a free tier for personal projects, so it’s not all paid.

Marcy: But we have this, like the open source and the product ecosystem kind of come together so that Gatsby can as a founding organization, make enough money to keep the open source framework, keep that healthy, and keep our community rolling along with that. So, that’s kind of where this open source commercial side comes together, and really enabling some of these workflows that teams need.

Marcy: Some things like getting fast previews, getting builds out the door fast and deployed. And so, there are solutions on the Gatsby cloud side specifically, and then wherever there is an open source way to make Gatsby work like with a preview server or something, we try to document that and make sure our community knows what’s what and how to serve those team needs.

Marcy: Yeah, I would say like, you need some way to preview your CMS changes because it’s like that instant gratification. You don’t want to be waiting an hour for a build to see some content.

Drew: So that’s interesting. The Gatsby cloud service gives you that ability to use a headless CMS service, where you’re just working with the content, but you’ve got no visualization of what it would look like in your site enables you to get a preview of how that would work. Is that right?

Marcy: It is, yeah. And so, it’s part of the trade off of decoupling, your headless CMS, which may have had, like WordPress, you could just look up the front end, but we’re giving it a new front end, and potentially pulling in other sources and other things that WordPress doesn’t know about. And so, decoupling it in that way makes sense. But you still, as a team member, you have to be able to do your work in the speed that you’re rapidly used to.

Marcy: And so, that is where Gatsby preview, Gatsby builds come in to give that front end back to teams so they can collaborate, they can make decisions, get something shipped. So that has sprung up in the last year, getting more features and improvements in all the time and that we’ve heard from some teams that are really starting to see speed increases.

Marcy: And as we figure out like, “Okay, if this build is going slow, why is that?” It’s usually because the site is really, really big. So we’ve been focused a lot on improvements for large sites, and really improving those team, collaborative workflows. It’s a big focus of the team right now.

Drew: So Gatsby cloud is, I guess at its heart is a hosting service. Is it a CDN for deploying your Gatsby sites with a load of Gatsby specific functionality and features around it?

Marcy: I would call it more of a continuous delivery product because it’s not an actual CDN. It integrates with CDNs like Fastly, Netlify. There’s a lot of different providers that you can hook up and some of them for free. You can do a lot for free, which is pretty awesome. I just did it the other day in our last Gatsby web creators session, we use Gatsby cloud and Netlify to build our site.

Marcy: And it enables you to make Gatsby sites faster specifically, because it does have those improvements. It only has to build one type of site. So, there’re some improvements that Gatsby cloud can make, that no other platform can make because they are trying to like support all of these different types of websites and they do them all very well.

Marcy: But for Gatsby, if that’s all you’re building, and there’s quite a few agencies, who are all in on Gatsby, and they want to make it as fast as they can. So, that’s where Gatsby cloud can make some performance improvements specifically for Gatsby, because it doesn’t have to worry about any other platforms.

Drew: So, Gatsby cloud would do your build, and it would then just deploy it to something like Netlify or presumably a whole range of different places.

Marcy: Yep. Yep, it will. And so, it’s the piece of Netlify that it would be using then as it’s uploading these built packages. Built files. It’s not using their builds, so the builds are happening on Gatsby clouds infrastructure, and that’s where some a lot of speed increases can happen. And then there’s still that upload step to get it out to a CDN, whichever one you’ve chosen.

Marcy: But yeah, it seems like teams are really loving this ability to see. I mean, it’s functionality that you would have missed. And so, that’s a necessary thing to add back in, is to be able to do these collaborative previews and get sign offs and all of that.

Drew: So, Gatsby cloud is provided as a service from Gatsby the company, and there’s Gatsby the open source project as well. Is this a similar sort of relationship to like WordPress and automatic have, where you’ve got a commercial entity developing an open source product?

Marcy: I would say so yeah, like Drupal. There’s precedent in tech to have these founding organizations where it’s kind of a virtuous cycle. And we’re working on publishing some governance documentation right now to make sure that, that’s super clear to our community, how we make decisions. But the entire goal is to keep Gatsby sustainable, so that it can continue to be an open source project that people can use it with ever even getting into Gatsby cloud.

Marcy: You could use other solutions with it if you want. And so, we need like enough business to sustain, like the people working on it. And so, I’m kind of in between, like I float in between the open source and commercial side and trying to make sure that we’re prioritizing things. I mean, as you could imagine, we’re juggling a lot of things with how broad the spaces like, we all have our niche use cases that we like, feel really strongly about, we need to do for our jobs.

Marcy: That adds up to be a lot of niche use cases. So, we try to juggle and prioritize and really listen to our community about what hurts right now, what’s painful, what’s going well. And so, that’s been an interesting journey to get for me personally to get back into devREL and really be listening to the community about, how can we make us be even better?

Drew: And is there a big community around Gatsby lots and lots of people using it?

Marcy: There are a lot of people using it, a lot of contributors. So for a lot of folks, it might be their first time contributing to open source like coming over to our docks and joining us for Hacktoberfest and things like that. And so, it has been really neat to see what a big community Gatsby does have, especially with things like accessibility and trying to make sure that frameworks do all they can out of the box for free.

Marcy: And so, there’s this, I don’t know, subset or intersection of accessibility and Gatsby and that’s my happy place. But the broader community, a lot of people learn React or learn web development through Gatsby. And so, that’s really neat to see a progression through our community, and hopefully we get people to come contribute, even if it’s an issue or something of like, “Hey, this link was broken or this part of the docks was confusing to me or it’s outdated.”

Marcy: Like even just telling a framework or a project that you use, that something could be better is a great way to contribute, because you can help us gain insight into the things that could use improvement. So that’s a great way to contribute.

Drew: You mentioned accessibility and of course, people will know you as being an accessibility expert. And they might be surprised to see you working with sort of fully featured front end framework like React, thinking that perhaps the two don’t really go together. Is that always the case at JavaScript heavy front ends are worth less accessible?

Marcy: Well, I wish it weren’t the case. But I think the data has shown that a lot of websites that do use front end frameworks are less accessible than those that don’t. A project that comes to mind is the Web a Million. And actually, I have a blog post, I’m refreshing the Gatsby site to see if my blog post has launched yet. But webbing through the web a million this project, they used their automated wave tool to crawl the top 1 million home pages and evaluate them for some accessibility violations.

Marcy: And it was really depressing results. Like they’ve run it twice now I think, and I think it got worse. So, it’s not great, but I don’t think you can really point to any one framework because there’s plenty of sites that don’t use frameworks that have lots of accessibility problems. So, it’s kind of a broader industry issue, a really society.

Marcy: And so, for me working on a full featured web framework, I saw as an opportunity to try and get more accessibility awareness in the mainstream. And so, that was an intentional move on my part to go and try to make an impact on a lot of sites like working on one site is cool. You can solve some really interesting problems. For me, I wanted to advocate accessibility much more broadly and try to make frameworks the best they can be from the inside.

Marcy: So even if something is rough right now, trying to play that long game of like, “Okay, what web standards things can we talk about? What framework improvements can we make so that if this is kind of rough, like not just give up on it.” So, even if I know it’s… I don’t know, JavaScript is some folks enemy I feel like I like it. You need some JavaScript to make accessible user interfaces, you just do.

Marcy: So, I am trying to like straddle those viewpoints and do the right thing while listening to my activist colleagues and friends kind of out there like pushing us forward as an industry. And then on the inside, I can be the messenger and the person that could try and reconcile some of those huge trade offs and ethical questions about What are we building?

Marcy: So, it’s challenging, but I really like it, because I have an impact to make on the web. And so, web framework. Lots of people are building Gatsby sites. So, seems like a good place to try and make an impact.

Drew: You mentioned briefly that Gatsby uses React at the moment. Is there a possible future where Gatsby might work with other frameworks, might receive a view version of Gatsby?

Marcy: I would love that. I’ve certainly talked about it. There is a pre React plug in, as I mentioned earlier. So you can swap that out. I think a big part of what we are talking about is sustainability of projects, trying to make the right call, these aren’t easy choices to make. It’s not just like rip it out and start over. There’s a lot of concerns that go along with that. It goes deep.

Marcy: So, it’s something we’re actively talking about. And I don’t really have anything specific to share right now. But we do have some internal meetings coming up soon to talk about that sort of thing. So, it’s being discussed, and I would love to have a view flavor, that’d be amazing. But as you can imagine there’s some interesting challenges that come along with that, and we want to make sure it’s the right move so that we’re not like, I don’t know, going down a path and having it not work for whatever reason, then we’re maintaining two frameworks, like how do we make that actually realistic in terms of what we can maintain and make succeed for an open source community?

Drew: So I’ve been learning all about Gatsby. What have you been learning about lately Marcy?

Marcy: Well, I wish it was better but work life balance. I’ve been learning about, for me, unfortunately, I’m in like a burnout cycle. And so, I feel like I’m continually learning the lesson of how to be productive, especially this year in 2020. There’s just like one thing after another. So, trying to get really clear focus on where I want to go in my career, what makes me happy?

Marcy: How can I sustain? We’re talking about sustainability: how can I sustain my own life after a career of really pushing hard on accessibility in particular? “Okay, how can I take a little step back and make sure that what I am putting out there, what I am doing, is meaningful and worth the energy?” So, a lot of my lessons have been at that intersection of work and life, and trying to make the most of the time. I don’t know about you, but it’s been pretty stressful for a lot of people, including me.

Drew: It’s been very, very stressful. These are very difficult times, aren’t they?

Marcy: Yeah, yeah. I mean, we have so much to be thankful for in this industry, having opportunities and skills that you can apply. But we’re seeing a lot of layoffs in our industry, so I’m really trying to make decisions that reflect where we’re at and not just going through the motions. That was a big motivator for Gatsby Web Creators: “Wow, there are a lot of school-age kids not in school this year; it would be really cool to see an outcome of turning some kids’ eyes onto web development.”

Marcy: Like when I was in seventh grade and someone came to a class of mine to talk about photojournalism, I was like, “I want to be a photojournalist.” So that approach actually does work. I got some feedback from someone who said, “My seventh grader’s learning from you, and now they’re really excited about code.” So, that was a really good thing to spend some energy on, at a time when it wasn’t something I would have necessarily thought of before being in these circumstances in 2020.

Marcy: So, I’m really trying to be nimble and make choices that reflect where I want to go and what’s happening.

Drew: If you, the listener, would like to hear more from Marcy, you can find her on Twitter where she’s @marcysutton, and find all her latest goings-on on her personal website, marcysutton.com. And of course, you can find out how to get started with Gatsby at Gatsbyjs.org. Thanks for joining us today, Marcy. Do you have any parting words?

Marcy: Make the most of it wherever that might be.

Smashing Editorial (il)
[post_excerpt] => Today, we’re talking about Gatsby. What is it and how does it fit into your web development stack? Drew McLellan talks to expert Marcy Sutton to find out. Show Notes Gatsby Marcy on Twitter Marcy’s personal website Weekly Update “Make Your Sites Fast, Accessible And Secure With Help From Google” by Dion Almaer “Understanding Plugin Development In Gatsby” by Aleem Isiaka “Creating Tiny Desktop Apps With Tauri And Vue. [post_date_gmt] => 2020-07-14 05:00:00 [post_date] => 2020-07-14 05:00:00 [post_modified_gmt] => 2020-07-14 05:00:00 [post_modified] => 2020-07-14 05:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/smashing-podcast-episode-20/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/smashing-podcast-episode-20/ [syndication_item_hash] => Array ( [0] => b17b3d928209a01d917278069cd1b18b [1] => bd96f8793ccfc386dd08979f54aa89b4 [2] => 8ea937901c6a7878548ac305410c40f2 ) ) [post_type] => post [post_author] => 472 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => smashing-podcast-episode-20-with-marcy-sutton-what-is-gatsby )

FAF deciding on filters on post to be syndicated:

Crowdfunding Web Platform Features With Open Prioritization

Array ( [post_title] => Crowdfunding Web Platform Features With Open Prioritization [post_content] => Crowdfunding Web Platform Features With Open Prioritization

Crowdfunding Web Platform Features With Open Prioritization

Rachel Andrew

In my last post, I described some interesting CSS features — some of which are only available in one browser. Most web developers have some feature they wish was more widely available, or that was available at all. I encourage developers to use features, talk about them, and raise implementation bugs with browsers to try to get features implemented. However, what if there were a more direct way to do so? What if web developers could get together and fund the development of these features?

This is the model that open-source consultancy Igalia is launching with their Open Prioritization experiment. The basic idea is a crowdfunding model for web platform features. If we want a feature implemented, we can put in a small amount of money to help fund that work. If the goal is reached, the feature can be implemented. This article is based on an interview with Brian Kardell, Developer Advocate at Igalia.

What Is Open Prioritization?

The idea of open prioritization is that the community gets to choose and help fund feature development. Igalia have selected a list of target features, all of which are implemented or currently being implemented in at least one engine. Therefore, funding a feature will help it become available cross-browser, and more usable for us as developers. The initial list is:

The website gives more explanation of each feature and all of the details of how the funding will work. Igalia are working with Open Collective to manage the pledges.

Who Are Igalia?

You may never have heard of Igalia, but you will have benefited from their work. Igalia works on browser engines, and have specialist knowledge of all of the engines. They had the second-highest number of commits to the Chrome and WebKit source in 2019. If you love CSS Grid Layout, then you have Igalia to thank for the implementation in Chrome and WebKit. The work to add the feature to those browsers was done by a team at Igalia, rather than engineers working internally at the browser company.

This is what makes this idea so compelling. It isn’t a case of raising some money and then trying to persuade someone to do the work. Igalia have a track record of doing the work. Developers need to be paid, so by crowdsourcing the money we are able to choose what is worked on next. Igalia also already have the relationships with the engines to make any suggested feature likely to be a success.

Will Browsers Accept These Features If We Fund Them?

The fact that Igalia already have relationships within browser engine teams, and have already discussed the selected features with them means that if funded, we should see the features in browsers. And, there are already precedents for major features being funded by third parties and developed by Igalia. The Grid Layout implementation in Chrome and WebKit was funded by Bloomberg Tech. They were frustrated by the lack of Grid Layout implementation, and it was Bloomberg Tech who provided the money to develop that feature over several years.

Chrome and WebKit were happy to accept the implementation; there was no controversy over adding the feature. Rather, it was a matter of prioritization. The browsers had other work that was deemed a higher priority, and financial commitment and developer time were therefore directed elsewhere. The features that have been selected for this initial crowdfunding attempt are also non-controversial in terms of their implementation. If the work can be done, then the engines are likely to accept it. Interoperability — things working in the same way across browsers — is something all browser vendors care about. There is no benefit to an engine in lagging behind. We essentially just get to bypass the internal prioritization process for the feature.

Why Don’t Browsers Just Do This Stuff?

I asked Brian why the browser companies don’t fund these things themselves. He explained,

“People might think, for example, ‘Apple has all of the money in the world’ but this ignores complex realities. Apple’s business is not their Web browser. In fact, the web browser itself isn’t a money-making endeavor for anyone. Browsers and standards are voluntary, they are a commons. Cost-wise, however, browsers are considerable. They are massively more complex than most of us realize. Only 3 organizations today have invested the many years and millions of dollars annually that it takes to evolve and maintain a rendering engine project. Any one of them is already making a massive and unparalleled investment in the commons.”

Brian went on to point out the considerable investment of Mozilla into Servo, and Google into LayoutNG, projects which will improve the browser experience and also make it possible to implement new features of the platform. There is a lot that any browser could be implementing in their engine, but the way those features are internally prioritized may not always map to our needs as developers.

It occurred to me that by funding browser implementation, we are doing the same thing that we do for other products that we use. Many of us will have developed a plugin for a needed feature in a CMS or paid a third party to provide it. The CMS developers spend their time working on the core product, ensuring that it is robust, secure, and up to date. Without the core product, adding plugins would be impossible. Third parties however can contribute parts to that platform, and in a sense that is what we can do via open prioritization. Show that a feature is worthwhile enough for us to pledge some cash to get it over the line.

How Does This Fit With Projects Such As Web We Want?

SmashingConf has supported the Web We Want project, where developers pitched web platform ideas to be discussed and voted for onstage at conferences. I was involved in several of these events as a host and on the panel. I wondered how open prioritization fits with these existing efforts. Brian explained that these are quite different things saying,

“... if you asked me what could make my house better I could name a million things. Some of those aren’t even remotely practical, they would just be really neat. But if you said make a list of things you could do with a budget for what each one costs — my list will be considerably more practical and bound by realities I know exist.

At the end of the month if you say "there is your list, and here is $100, what will you do with it?" that’s a very direct question that helps me accomplish something practical. Maybe I will paint. Maybe I will buy some new lighting. Or, maybe I will save it for some months toward something more costly.”

The Web We Want project asks an open question: what do we want of the platform? Many of the wants aren’t things that already exist as a specification. To actually start to implement any of these things would mean starting right at the beginning, with an idea that needs taking right through from the specification stage. There are few certainties, and they would be very hard to put a price on.

The features selected for this first open prioritization experiment are deliberately limited in scope. They already have some implementation; they have a specification, and Igalia have already spoken to browser maintainers to check that the features are ready to work on but don’t feature in immediate priorities.

Supporting this project means supporting a concrete chunk of development, that can happen within a reasonably short timeframe. Posting an idea to Web We Want, writing up an idea on your blog, or adding an issue describing a totally new feature on the CSSWG GitHub repo potentially gets a new idea out into the discussion. However, those ideas may have a long slow path to becoming reality. And, given the nature of standards discussions, probably won’t happen in exactly the way that you imagined. It is valuable to propose these things, but very hard to estimate time and costs to a final implementation.

The same problem is true for the much-wanted feature of container queries; Igalia have gone so far as to mention container queries in their FAQ. Container queries are something that many people involved in the standards process and at browser vendors are looking into; however, those discussions are at an early stage. It isn’t something it would be possible to put a monetary value on at this point.

Get Involved!

There is more information at the Open Prioritization site, along with a detailed FAQ answering other questions that you might have. I’m excited about this because I’m always keen to help find ways for designers and developers to get involved in the web platform. It is our platform. We can wait for things to be granted to us by browser vendors, or we can actively contribute via ideas, bug reports and, with Open Prioritization, a bit of cash, to help make it better.

Smashing Editorial (il)
[post_excerpt] => In my last post, I described some interesting CSS features — some of which are only available in one browser. Most web developers have some feature they wish was more widely available, or that was available at all. I encourage developers to use, talk about, and raise implementation bugs with browsers to try to get features implemented, however, what if there was a more direct way to do so? What if web developers could get together and fund the development of these features? [post_date_gmt] => 2020-07-13 13:00:00 [post_date] => 2020-07-13 13:00:00 [post_modified_gmt] => 2020-07-13 13:00:00 [post_modified] => 2020-07-13 13:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/crowdfunding-web-platform-features-open-prioritization/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/crowdfunding-web-platform-features-open-prioritization/ [syndication_item_hash] => Array ( [0] => c16c33c9542b036374624abe9b0d23ad [1] => 869044a5c3d4ca5fd2a62041e0da70c2 [2] => 869044a5c3d4ca5fd2a62041e0da70c2 [3] => 879e8af4a9d14531b9dbf8b6485bb199 [4] => 44d16964a254523a8afa60152ed158fe [5] => 6e67e733c48ef396030fb7720a00e788 [6] => 13fb051ed61355e99bc10a13e1dbdc94 [7] => b4b229031b96c454b33b0c96c0376369 ) ) [post_type] => post [post_author] => 467 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => crowdfunding-web-platform-features-with-open-prioritization )

Decide filter: Returning post, everything seems orderly :Crowdfunding Web Platform Features With Open Prioritization


FAF deciding on filters on post to be syndicated:

How To Create A Custom React Hook To Fetch And Cache Data

Array ( [post_title] => How To Create A Custom React Hook To Fetch And Cache Data [post_content] => How To Create A Custom React Hook To Fetch And Cache Data

How To Create A Custom React Hook To Fetch And Cache Data

Ademola Adegbuyi

If you are a newbie to React Hooks, you can start by checking the official documentation to get a grasp of it. After that, I’d recommend reading Shedrack Akintayo’s “Getting Started With React Hooks API”. To ensure you’re following along, there is also an article written by Adeneye David Abiodun that covers best practices with React Hooks which I’m sure will prove to be useful to you.

Throughout this article, we’ll be making use of the Hacker News Search API to build a custom hook which we can use to fetch data. While this tutorial covers the Hacker News Search API, we’ll make the hook work in a way that it returns a response from any valid API URL we pass to it.

Best Practices With React

React is a fantastic JavaScript library for building rich user interfaces. It provides a great component abstraction for organizing your interfaces into well-functioning code, and there’s just about anything you can use it for. Read more articles on React →

Fetching Data In A React Component

Before React Hooks, it was conventional to fetch initial data in the componentDidMount() lifecycle method, and to fetch data based on prop or state changes in the componentDidUpdate() lifecycle method.

Here’s how it works:

componentDidMount() {
  const fetchData = async () => {
    const response = await fetch(
      `https://hn.algolia.com/api/v1/search?query=JavaScript`
    );
    const data = await response.json();
    this.setState({ data });
  };
  
  fetchData();
}


componentDidUpdate(previousProps, previousState) {
    if (previousState.query !== this.state.query) {
      const fetchData = async () => {
        const response = await fetch(
          `https://hn.algolia.com/api/v1/search?query=${this.state.query}`
        );
        const data = await response.json();
        this.setState({ data });
      };

      fetchData();
    }
  }

The componentDidMount lifecycle method gets invoked as soon as the component gets mounted, and when that is done, what we did was to make a request to search for “JavaScript” via the Hacker News API and update the state based on the response.

The componentDidUpdate lifecycle method, on the other hand, gets invoked when there’s a change in the component. We compared the previous query in the state with the current query to prevent the method from getting invoked every time we set “data” in state. One thing we get from using hooks is the ability to combine both lifecycle methods in a cleaner way — meaning that we won’t need two lifecycle methods, one for when the component mounts and one for when it updates.

Fetching Data With useEffect Hook

The useEffect hook gets invoked as soon as the component is mounted. If we need the hook to rerun based on some prop or state changes, we’ll need to pass them to the dependency array (which is the second argument of the useEffect hook).

Let’s explore how to fetch data with hooks:

import { useState, useEffect } from 'react';

const [status, setStatus] = useState('idle');
const [query, setQuery] = useState('');
const [data, setData] = useState([]);

useEffect(() => {
    if (!query) return;

    const fetchData = async () => {
        setStatus('fetching');
        const response = await fetch(
            `https://hn.algolia.com/api/v1/search?query=${query}`
        );
        const data = await response.json();
        setData(data.hits);
        setStatus('fetched');
    };

    fetchData();
}, [query]);

In the example above, we passed query as a dependency to our useEffect hook. By doing that, we’re telling useEffect to track query changes. If the previous query value isn’t the same as the current value, the useEffect gets invoked again.

With that said, we’re also setting a status on the component as needed, as this will better convey a message to the screen based on a set of finite states. In the idle state, we could let users know that they could make use of the search box to get started. In the fetching state, we could show a spinner. And, in the fetched state, we’ll render the data.

It’s important to set the data before you attempt to set status to fetched so that you can prevent a flicker which occurs as a result of the data being empty while you’re setting the fetched status.

Creating A Custom Hook

“A custom hook is a JavaScript function whose name starts with ‘use’ and that may call other Hooks.”

React Docs

That’s really what it is, and like any other JavaScript function, it allows you to reuse some piece of code in several parts of your app.

The definition from the React Docs has given it away but let’s see how it works in practice with a counter custom hook:

const useCounter = (initialState = 0) => {
      const [count, setCount] = useState(initialState);
      const add = () => setCount(count + 1);
      const subtract = () => setCount(count - 1);
      return { count, add, subtract };
};

Here, we have a regular function where we take in an optional argument, set the value to our state, as well as add the add and the subtract methods that could be used to update it.

Everywhere in our app where we need a counter, we can call useCounter like a regular function and pass an initialState so we know where to start counting from. When we don’t have an initial state, we default to 0.

Here’s how it works in practice:

import { useCounter } from './customHookPath';

const { count, add, subtract } = useCounter(100);

eventHandler(() => {
  add(); // or subtract();
});

What we did here was to import our custom hook from the file we declared it in, so we could make use of it in our app. We set its initial state to 100, so whenever we call add(), it increases count by 1, and whenever we call subtract(), it decreases count by 1.

Creating useFetch Hook

Now that we’ve learned how to create a simple custom hook, let’s extract our logic to fetch data into a custom hook.

const useFetch = (query) => {
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!query) return;

        const fetchData = async () => {
            setStatus('fetching');
            const response = await fetch(
                `https://hn.algolia.com/api/v1/search?query=${query}`
            );
            const data = await response.json();
            setData(data.hits);
            setStatus('fetched');
        };

        fetchData();
    }, [query]);

    return { status, data };
};

It’s pretty much the same thing we did above with the exception of it being a function that takes in query and returns status and data. And, that’s a useFetch hook that we could use in several components in our React application.

This works, but the problem with this implementation is that it’s specific to Hacker News, so we might as well call it useHackerNews. What we intend to do is to create a useFetch hook that can be used to call any URL. Let’s revamp it to take in a URL instead!

const useFetch = (url) => {
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!url) return;
        const fetchData = async () => {
            setStatus('fetching');
            const response = await fetch(url);
            const data = await response.json();
            setData(data);
            setStatus('fetched');
        };

        fetchData();
    }, [url]);

    return { status, data };
};

Now, our useFetch hook is generic and we can use it as we want in our various components.

Here’s one way of consuming it:

const [query, setQuery] = useState('');

const url = query && `https://hn.algolia.com/api/v1/search?query=${query}`;
const { status, data } = useFetch(url);

In this case, if the value of query is truthy, we go ahead and set the URL, and if it’s not, we’re fine with passing a falsy value, as it gets handled in our hook. The effect will attempt to run once, regardless.
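
To show how this could look end to end, here is a rough sketch of a component that wires a search box to the hook. The component itself, its markup, the import path, and the objectID and title fields are illustrative assumptions based on the shape of the Hacker News response, not code from the article:

import React, { useState } from 'react';
import { useFetch } from './useFetch'; // hypothetical path to where the hook is declared

const Search = () => {
    const [query, setQuery] = useState('');

    // Only build a URL once the user has typed something;
    // otherwise useFetch receives a falsy value and does nothing.
    const url = query && `https://hn.algolia.com/api/v1/search?query=${query}`;
    const { status, data } = useFetch(url);

    return (
        <>
            <input value={query} onChange={(event) => setQuery(event.target.value)} />
            {status === 'idle' && <p>Type something to search Hacker News.</p>}
            {status === 'fetching' && <p>Loading…</p>}
            {status === 'fetched' && (
                <ul>
                    {(data.hits || []).map((hit) => (
                        <li key={hit.objectID}>{hit.title}</li>
                    ))}
                </ul>
            )}
        </>
    );
};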

Memoizing Fetched Data

Memoization is a technique we can use to make sure that we don’t hit the Hacker News endpoint again if we have already made a request to fetch the same data at some earlier point. Storing the result of expensive fetch calls will save the users some load time, therefore increasing overall performance.

Note: For more context, you could check out Wikipedia’s explanation on Memoization.
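
As a quick aside, the general idea is easy to see outside of React with a plain JavaScript helper. This memoize function is only an illustration of the concept and isn’t used in the hook below:

// Cache results keyed by the argument, so repeated calls
// with the same input skip the expensive work.
const memoize = (fn) => {
    const cache = {};
    return (key) => {
        if (!(key in cache)) {
            cache[key] = fn(key);
        }
        return cache[key];
    };
};

const square = (n) => n * n; // stand-in for something expensive
const memoizedSquare = memoize(square);

memoizedSquare(4); // computed and cached
memoizedSquare(4); // served from the cache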

Let’s explore how we could do that!

const cache = {};

const useFetch = (url) => {
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!url) return;

        const fetchData = async () => {
            setStatus('fetching');
            if (cache[url]) {
                const data = cache[url];
                setData(data);
                setStatus('fetched');
            } else {
                const response = await fetch(url);
                const data = await response.json();
                cache[url] = data; // set response in cache;
                setData(data);
                setStatus('fetched');
            }
        };

        fetchData();
    }, [url]);

    return { status, data };
};

Here, we’re mapping URLs to their data. So, if we make a request to fetch some existing data, we set the data from our local cache, else, we go ahead to make the request and set the result in the cache. This ensures we do not make an API call when we have the data available to us locally. We’ll also notice that we’re killing off the effect if the URL is falsy, so it makes sure we don’t proceed to fetch data that doesn’t exist. We can’t do it before the useEffect hook as that will go against one of the rules of hooks, which is to always call hooks at the top level.

Declaring cache in a different scope works but it makes our hook go against the principle of a pure function. Besides, we also want to make sure that React helps in cleaning up our mess when we no longer want to make use of the component. We’ll explore useRef to help us in achieving that.

Memoizing Data With useRef

“useRef is like a box that can hold a mutable value in its .current property.”

React Docs

With useRef, we can set and retrieve mutable values at ease and its value persists throughout the component’s lifecycle.

Let’s replace our cache implementation with some useRef magic!

const useFetch = (url) => {
    const cache = useRef({});
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!url) return;
        const fetchData = async () => {
            setStatus('fetching');
            if (cache.current[url]) {
                const data = cache.current[url];
                setData(data);
                setStatus('fetched');
            } else {
                const response = await fetch(url);
                const data = await response.json();
                cache.current[url] = data; // set response in cache;
                setData(data);
                setStatus('fetched');
            }
        };

        fetchData();
    }, [url]);

    return { status, data };
};

Here, our cache is now in our useFetch hook with an empty object as an initial value.

Wrapping Up

Well, I did state that setting the data before setting the fetched status was a good idea, but there are two potential problems we could have with that, too:

  1. Our unit test could fail as a result of the data array not being empty while we’re in the fetching state. React could actually batch state changes but it can’t do that if it’s triggered asynchronously;
  2. Our app re-renders more than it should.

Let’s do a final clean-up of our useFetch hook. We’re going to start by switching our useState calls to a useReducer. Let’s see how that works!

const initialState = {
    status: 'idle',
    error: null,
    data: [],
};

const [state, dispatch] = useReducer((state, action) => {
    switch (action.type) {
        case 'FETCHING':
            return { ...initialState, status: 'fetching' };
        case 'FETCHED':
            return { ...initialState, status: 'fetched', data: action.payload };
        case 'FETCH_ERROR':
            return { ...initialState, status: 'error', error: action.payload };
        default:
            return state;
    }
}, initialState);

Here, we added an initial state which is the initial value we passed to each of our individual useStates. In our useReducer, we check what type of action we want to perform, and set the appropriate values to state based on that.

This resolves the two problems we discussed earlier, as we now get to set the status and data at the same time in order to help prevent impossible states and unnecessary re-renders.

There’s just one more thing left: cleaning up our side effect. Fetch implements the Promise API, in the sense that it can be resolved or rejected. If our hook tries to make an update after the component has unmounted because some Promise just got resolved, React would return: Can't perform a React state update on an unmounted component.

Let’s see how we can fix that with useEffect clean-up!

useEffect(() => {
    let cancelRequest = false;
    if (!url) return;

    const fetchData = async () => {
        dispatch({ type: 'FETCHING' });
        if (cache.current[url]) {
            const data = cache.current[url];
            dispatch({ type: 'FETCHED', payload: data });
        } else {
            try {
                const response = await fetch(url);
                const data = await response.json();
                cache.current[url] = data;
                if (cancelRequest) return;
                dispatch({ type: 'FETCHED', payload: data });
            } catch (error) {
                if (cancelRequest) return;
                dispatch({ type: 'FETCH_ERROR', payload: error.message });
            }
        }
    };

    fetchData();

    return function cleanup() {
        cancelRequest = true;
    };
}, [url]);

Here, we set cancelRequest to true after having defined it inside the effect. So, before we attempt to make state changes, we first confirm if the component has been unmounted. If it has been unmounted, we skip updating the state and if it hasn’t been unmounted, we update the state. This will resolve the React state update error, and also prevent race conditions in our components.
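
For reference, here is roughly what the hook looks like with all of the pieces from this article assembled: the useRef cache, the useReducer state, and the clean-up flag. This assembled version is my own sketch of the steps above rather than a canonical implementation, and returning the whole state object (instead of separate status and data values) is an assumption on my part:

import { useEffect, useRef, useReducer } from 'react';

const useFetch = (url) => {
    // The cache survives re-renders but is local to each component instance.
    const cache = useRef({});

    const initialState = {
        status: 'idle',
        error: null,
        data: [],
    };

    const [state, dispatch] = useReducer((state, action) => {
        switch (action.type) {
            case 'FETCHING':
                return { ...initialState, status: 'fetching' };
            case 'FETCHED':
                return { ...initialState, status: 'fetched', data: action.payload };
            case 'FETCH_ERROR':
                return { ...initialState, status: 'error', error: action.payload };
            default:
                return state;
        }
    }, initialState);

    useEffect(() => {
        let cancelRequest = false;
        if (!url) return;

        const fetchData = async () => {
            dispatch({ type: 'FETCHING' });
            if (cache.current[url]) {
                // Serve previously fetched data straight from the cache.
                dispatch({ type: 'FETCHED', payload: cache.current[url] });
            } else {
                try {
                    const response = await fetch(url);
                    const data = await response.json();
                    cache.current[url] = data;
                    if (cancelRequest) return;
                    dispatch({ type: 'FETCHED', payload: data });
                } catch (error) {
                    if (cancelRequest) return;
                    dispatch({ type: 'FETCH_ERROR', payload: error.message });
                }
            }
        };

        fetchData();

        // Flip the flag on unmount so in-flight requests don't update state.
        return function cleanup() {
            cancelRequest = true;
        };
    }, [url]);

    return state; // { status, error, data }
};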

Conclusion

We’ve explored several hooks concepts to help fetch and cache data in our components. We also went through cleaning up our useEffect hook which helps prevent a good number of problems in our app.

If you have any questions, please feel free to drop them in the comments section below!

References

Smashing Editorial (ks, ra, yk, il)
[post_excerpt] => If you are a newbie to React Hooks, you can start by checking the official documentation to get a grasp of it. After that, I’d recommend reading Shedrack Akintayo’s “Getting Started With React Hooks API”. To ensure you’re following along, there is also an article written by Adeneye David Abiodun that covers best practices with React Hooks which I’m sure will prove to be useful to you. Throughout this article, we’ll be making use of Hacker News Search API to build a custom hook which we can use to fetch data. [post_date_gmt] => 2020-07-13 09:00:00 [post_date] => 2020-07-13 09:00:00 [post_modified_gmt] => 2020-07-13 09:00:00 [post_modified] => 2020-07-13 09:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/custom-react-hook-fetch-cache-data/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/custom-react-hook-fetch-cache-data/ [syndication_item_hash] => Array ( [0] => f9806f468e46556f356414a00610d8ea [1] => 843b2f5149153e563b12184dcaac095f [2] => aa76373fc8fb2e18ad5c5591db112743 [3] => aa76373fc8fb2e18ad5c5591db112743 [4] => bf9f97cc346bec6a08f9c2dcadb4db16 [5] => 2c5d86f147a1ab0a3204ac30c0d286f1 [6] => 784a93358db9b85d1b3f3ad5de907bf6 [7] => 91eba4d54238d6b33dd2a902cfc06488 [8] => 03af1a62c680eda7287c36af4ae84297 ) ) [post_type] => post [post_author] => 611 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => how-to-create-a-custom-react-hook-to-fetch-and-cache-data )

Decide filter: Returning post, everything seems orderly :How To Create A Custom React Hook To Fetch And Cache Data

Array ( [post_title] => How To Create A Custom React Hook To Fetch And Cache Data [post_content] => How To Create A Custom React Hook To Fetch And Cache Data

How To Create A Custom React Hook To Fetch And Cache Data

Ademola Adegbuyi

If you are a newbie to React Hooks, you can start by checking the official documentation to get a grasp of it. After that, I’d recommend reading Shedrack Akintayo’s “Getting Started With React Hooks API”. To ensure you’re following along, there is also an article written by Adeneye David Abiodun that covers best practices with React Hooks which I’m sure will prove to be useful to you.

Throughout this article, we’ll be making use of Hacker News Search API to build a custom hook which we can use to fetch data. While this tutorial will cover the Hacker News Search API, we’ll have the hook work in a way that it will return response from any valid API link we pass to it.

Best Practices With React

React is a fantastic JavaScript library for building rich user interfaces. It provides a great component abstraction for organizing your interfaces into well-functioning code, and there’s just about anything you can use it for. Read more articles on React →

Fetching Data In A React Component

Before React hooks, it was conventional to fetch initial data in the componentDidMount() lifecycle method, and data based on prop or state changes in componentDidUpdate() lifecycle method.

Here’s how it works:

componentDidMount() {
  const fetchData = async () => {
    const response = await fetch(
      `https://hn.algolia.com/api/v1/search?query=JavaScript`
    );
    const data = await response.json();
    this.setState({ data });
  };
  
  fetchData();
}


componentDidUpdate(previousProps, previousState) {
    if (previousState.query !== this.state.query) {
      const fetchData = async () => {
        const response = await fetch(
          `https://hn.algolia.com/api/v1/search?query=${this.state.query}`
        );
        const data = await response.json();
        this.setState({ data });
      };

      fetchData();
    }
  }

The componentDidMount lifecycle method gets invoked as soon as the component gets mounted, and when that is done, what we did was to make a request to search for “JavaScript” via the Hacker News API and update the state based on the response.

The componentDidUpdate lifecycle method, on the other hand, gets invoked when there’s a change in the component. We compared the previous query in the state with the current query to prevent the method from getting invoked every time we set “data” in state. One thing we get from using hooks is to combine both lifecycle methods in a cleaner way — meaning that we won’t need to have two lifecycle methods for when the component mounts and when it updates.

Fetching Data With useEffect Hook

The useEffect hook gets invoked as soon as the component is mounted. If we need the hook to rerun based on some prop or state changes, we’ll need to pass them to the dependency array (which is the second argument of the useEffect hook).

Let’s explore how to fetch data with hooks:

import { useState, useEffect } from 'react';

const [status, setStatus] = useState('idle');
const [query, setQuery] = useState('');
const [data, setData] = useState([]);

useEffect(() => {
    if (!query) return;

    const fetchData = async () => {
        setStatus('fetching');
        const response = await fetch(
            `https://hn.algolia.com/api/v1/search?query=${query}`
        );
        const data = await response.json();
        setData(data.hits);
        setStatus('fetched');
    };

    fetchData();
}, [query]);

In the example above, we passed query as a dependency to our useEffect hook. By doing that, we’re telling useEffect to track query changes. If the previous query value isn’t the same as the current value, the useEffect get invoked again.

With that said, we’re also setting several status on the component as needed, as this will better convey some message to the screen based on some finite states status. In the idle state, we could let users know that they could make use of the search box to get started. In the fetching state, we could show a spinner. And, in the fetched state, we’ll render the data.

It’s important to set the data before you attempt to set status to fetched so that you can prevent a flicker which occurs as a result of the data being empty while you’re setting the fetched status.

Creating A Custom Hook

“A custom hook is a JavaScript function whose name starts with ‘use’ and that may call other Hooks.”

React Docs

That’s really what it is, and along with a JavaScript function, it allows you to reuse some piece of code in several parts of your app.

The definition from the React Docs has given it away but let’s see how it works in practice with a counter custom hook:

const useCounter = (initialState = 0) => {
      const [count, setCount] = useState(initialState);
      const add = () => setCount(count + 1);
      const subtract = () => setCount(count - 1);
      return { count, add, subtract };
};

Here, we have a regular function where we take in an optional argument, set the value to our state, as well as add the add and the subtract methods that could be used to update it.

Everywhere in our app where we need a counter, we can call useCounter like a regular function and pass an initialState so we know where to start counting from. When we don’t have an initial state, we default to 0.

Here’s how it works in practice:

import { useCounter } from './customHookPath';

const { count, add, subtract } = useCounter(100);

eventHandler(() => {
  add(); // or subtract();
});

What we did here was to import our custom hook from the file we declared it in, so we could make use of it in our app. We set its initial state to 100, so whenever we call add(), it increases count by 1, and whenever we call subtract(), it decreases count by 1.

Creating useFetch Hook

Now that we’ve learned how to create a simple custom hook, let’s extract our logic to fetch data into a custom hook.

const useFetch = (query) => {
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!query) return;

        const fetchData = async () => {
            setStatus('fetching');
            const response = await fetch(
                `https://hn.algolia.com/api/v1/search?query=${query}`
            );
            const data = await response.json();
            setData(data.hits);
            setStatus('fetched');
        };

        fetchData();
    }, [query]);

    return { status, data };
};

It’s pretty much the same thing we did above with the exception of it being a function that takes in query and returns status and data. And, that’s a useFetch hook that we could use in several components in our React application.

This works, but the problem with this implementation now is, it’s specific to Hacker News so we might just call it useHackerNews. What we intend to do is, to create a useFetch hook that can be used to call any URL. Let’s revamp it to take in a URL instead!

const useFetch = (url) => {
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!url) return;
        const fetchData = async () => {
            setStatus('fetching');
            const response = await fetch(url);
            const data = await response.json();
            setData(data);
            setStatus('fetched');
        };

        fetchData();
    }, [url]);

    return { status, data };
};

Now, our useFetch hook is generic and we can use it as we want in our various components.

Here’s one way of consuming it:

const [query, setQuery] = useState('');

const url = query && `https://hn.algolia.com/api/v1/search?query=${query}`;
const { status, data } = useFetch(url);

In this case, if the value of query is truthy, we go ahead to set the URL and if it’s not, we’re fine with passing undefined as it’d get handled in our hook. The effect will attempt to run once, regardless.

Memoizing Fetched Data

Memoization is a technique we would use to make sure that we don’t hit the hackernews endpoint if we have made some kind of request to fetch it at some initial phase. Storing the result of expensive fetch calls will save the users some load time, therefore, increasing overall performance.

Note: For more context, you could check out Wikipedia’s explanation on Memoization.

Let’s explore how we could do that!

const cache = {};

const useFetch = (url) => {
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!url) return;

        const fetchData = async () => {
            setStatus('fetching');
            if (cache[url]) {
                const data = cache[url];
                setData(data);
                setStatus('fetched');
            } else {
                const response = await fetch(url);
                const data = await response.json();
                cache[url] = data; // set response in cache;
                setData(data);
                setStatus('fetched');
            }
        };

        fetchData();
    }, [url]);

    return { status, data };
};

Here, we’re mapping URLs to their data. So, if we make a request to fetch some existing data, we set the data from our local cache, else, we go ahead to make the request and set the result in the cache. This ensures we do not make an API call when we have the data available to us locally. We’ll also notice that we’re killing off the effect if the URL is falsy, so it makes sure we don’t proceed to fetch data that doesn’t exist. We can’t do it before the useEffect hook as that will go against one of the rules of hooks, which is to always call hooks at the top level.

Declaring cache in a different scope works but it makes our hook go against the principle of a pure function. Besides, we also want to make sure that React helps in cleaning up our mess when we no longer want to make use of the component. We’ll explore useRef to help us in achieving that.

Memoizing Data With useRef

useRef is like a box that can hold a mutable value in its .current property.”

React Docs

With useRef, we can set and retrieve mutable values at ease and its value persists throughout the component’s lifecycle.

Let’s replace our cache implementation with some useRef magic!

const useFetch = (url) => {
    const cache = useRef({});
    const [status, setStatus] = useState('idle');
    const [data, setData] = useState([]);

    useEffect(() => {
        if (!url) return;
        const fetchData = async () => {
            setStatus('fetching');
            if (cache.current[url]) {
                const data = cache.current[url];
                setData(data);
                setStatus('fetched');
            } else {
                const response = await fetch(url);
                const data = await response.json();
                cache.current[url] = data; // set response in cache;
                setData(data);
                setStatus('fetched');
            }
        };

        fetchData();
    }, [url]);

    return { status, data };
};

Here, our cache is now in our useFetch hook with an empty object as an initial value.

Wrapping Up

Well, I did state that setting the data before setting the fetched status was a good idea, but there are two potential problems we could have with that, too:

  1. Our unit test could fail as a result of the data array not being empty while we’re in the fetching state. React could actually batch state changes but it can’t do that if it’s triggered asynchronously;
  2. Our app re-renders more than it should.

Let’s do a final clean-up to our useFetch hook.,We’re going to start by switching our useStates to a useReducer. Let’s see how that works!

const initialState = {
    status: 'idle',
    error: null,
    data: [],
};

const [state, dispatch] = useReducer((state, action) => {
    switch (action.type) {
        case 'FETCHING':
            return { ...initialState, status: 'fetching' };
        case 'FETCHED':
            return { ...initialState, status: 'fetched', data: action.payload };
        case 'FETCH_ERROR':
            return { ...initialState, status: 'error', error: action.payload };
        default:
            return state;
    }
}, initialState);

Here, we added an initial state which is the initial value we passed to each of our individual useStates. In our useReducer, we check what type of action we want to perform, and set the appropriate values to state based on that.

This resolves the two problems we discussed earlier, as we now get to set the status and data at the same time in order to help prevent impossible states and unnecessary re-renders.

There’s just one more thing left: cleaning up our side effect. Fetch implements the Promise API, in the sense that it could be resolved or rejected. If our hook tries to make an update while the component has unmounted because of some Promise just got resolved, React would return Can't perform a React state update on an unmounted component.

Let’s see how we can fix that with useEffect clean-up!

useEffect(() => {
    let cancelRequest = false;
    if (!url) return;

    const fetchData = async () => {
        dispatch({ type: 'FETCHING' });
        if (cache.current[url]) {
            const data = cache.current[url];
            dispatch({ type: 'FETCHED', payload: data });
        } else {
            try {
                const response = await fetch(url);
                const data = await response.json();
                cache.current[url] = data;
                if (cancelRequest) return;
                dispatch({ type: 'FETCHED', payload: data });
            } catch (error) {
                if (cancelRequest) return;
                dispatch({ type: 'FETCH_ERROR', payload: error.message });
            }
        }
    };

    fetchData();

    return function cleanup() {
        cancelRequest = true;
    };
}, [url]);

Here, we define cancelRequest inside the effect and set it to true in the clean-up function, which React runs when the component unmounts. So, before we attempt to make state changes after an asynchronous step, we first check whether the request has been cancelled: if it has, we skip updating the state; if it hasn’t, we update the state as usual. This resolves the React state update error, and also prevents race conditions in our components.

Conclusion

We’ve explored several hooks concepts to help fetch and cache data in our components. We also went through cleaning up our useEffect hook which helps prevent a good number of problems in our app.

If you have any questions, please feel free to drop them in the comments section below!

Smashing Editorial (ks, ra, yk, il)
[post_excerpt] => If you are a newbie to React Hooks, you can start by checking the official documentation to get a grasp of it. After that, I’d recommend reading Shedrack Akintayo’s “Getting Started With React Hooks API”. To ensure you’re following along, there is also an article written by Adeneye David Abiodun that covers best practices with React Hooks which I’m sure will prove to be useful to you. Throughout this article, we’ll be making use of Hacker News Search API to build a custom hook which we can use to fetch data. [post_date_gmt] => 2020-07-13 09:00:00 [post_date] => 2020-07-13 09:00:00 [post_modified_gmt] => 2020-07-13 09:00:00 [post_modified] => 2020-07-13 09:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/custom-react-hook-fetch-cache-data/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/custom-react-hook-fetch-cache-data/ [syndication_item_hash] => Array ( [0] => f9806f468e46556f356414a00610d8ea [1] => 843b2f5149153e563b12184dcaac095f [2] => aa76373fc8fb2e18ad5c5591db112743 [3] => aa76373fc8fb2e18ad5c5591db112743 [4] => bf9f97cc346bec6a08f9c2dcadb4db16 [5] => 2c5d86f147a1ab0a3204ac30c0d286f1 [6] => 784a93358db9b85d1b3f3ad5de907bf6 [7] => 91eba4d54238d6b33dd2a902cfc06488 [8] => 03af1a62c680eda7287c36af4ae84297 ) ) [post_type] => post [post_author] => 611 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => how-to-create-a-custom-react-hook-to-fetch-and-cache-data )

FAF deciding on filters on post to be syndicated:

A Book Release For Click! And A Chance To Rethink Our Routines

Array ( [post_title] => A Book Release For Click! And A Chance To Rethink Our Routines [post_content] => A Book Release For Click! And A Chance To Rethink Our Routines

A Book Release For Click! And A Chance To Rethink Our Routines

Ari Stiles

Paul Boag has written three books in the Smashing Library, and Click! How to Encourage Clicks Without Shady Tricks is just the most recent one. Paul wrote some articles for Smashing Magazine in addition to appearing on the Smashing Podcast and SmashingTV to promote the book and talk about some of its themes. Click! finally started shipping in June, and our beloved preorder customers found a surprise in those packages.

This Time, It’s Personal

Paul signed 500 postcards—designed by our own Ricardo Gimenez — and our first 500 buyers each received one of the signed postcards along with their copy of Click!

It was fun to watch the reactions pop up on social media:

Reading it in my car on this very moment!👌🏻 pic.twitter.com/TQHq8PyYFM

— Yakari Van Dessel (@yakarivd) June 15, 2020

What a lovley surprise today. I just revived a new book from @smashingmag. Many thanks to Paul Boag for a surprise note 🥰 pic.twitter.com/VX7NH8WrJV

— Antonija 💙 (@AntonijaVresk) June 8, 2020

Who would have said that the book I purchased last month would arrive on my birthday! Best gift for myself 💖 thanks @smashingmag @paulboag pic.twitter.com/BZcirpJHyj

— Lu (@lubby_cat) June 9, 2020

this just got delivered before the weekend - good timing @smashingmag @boagworld pic.twitter.com/UmShqifaUe

— Jari Tirkkonen (@jatike) June 12, 2020

Look what came in the mail today!! This book is so relevant right now AND it looks so, so good on the inside. Wow. Thnx @boagworld and @smashingmag! Taking the rest of the day off now. Not really, but I wish I could. Man am I eager to fully read this. pic.twitter.com/d34UBpAxVI

— Marjolijn (@Marjolijn) June 10, 2020

The new @smashingmag landed in my mailbox this weekend 🚀👌 pic.twitter.com/rl3dAEd8a9

— Stanley van Heusden (@stannvanheusden) June 7, 2020

Haha, you’re welcome @smashingmag @boagworld pic.twitter.com/yBEKnurnvt

— Gijs Hosman (@gijshosman) June 10, 2020

Paul’s an experienced author, and we also had a very experienced illustrator for the cover and interior templates—Veerle Pieters! She has contributed artwork to Paul’s other books for Smashing, but she also designed the cover for our first Smashing Magazine Print edition. The illustrations and templates she provided for Click! helped make the book an inspiration to read, too.

A sneak peek into the Click book
Photo by Drew McLellan

Working with an experienced author and a seasoned illustrator is always inspiring. There were even a few pleasant surprises along the way:

@smashingmag keep this brillant idea ! pic.twitter.com/icFDRYwKNf

— Benoît HENRY (@bhenbe) June 14, 2020

Creativity In Quarantine

For a lot of us, quarantine has had a profound effect on our productivity levels. Feelings of being overwhelmed might be slowing you down, OR maybe your work is a welcome distraction, and you are getting a lot done right now.

One thing that has changed for most of us is our routines, though. This inevitably leads to some problem solving, which requires some creativity.

Click! comes along at a time when many of us need a creative “nudge.” The book inspires us to think differently about our routines for building online sites and services—what works, and what doesn’t.

“People are under enormous pressure to improve their conversion rates. Marketers have got targets they’ve got to meet, designers are under pressure ... people are inevitably turning to dark patterns. Not because they want to, but because they’re under pressure to. They’re under pressure to bring about results.

So the premise of this book is, first of all, to explain why dark patterns are a bad idea. I talk about it from a purely business point of view, that these are the business reasons why dark patterns are ultimately damaging. And then that inevitably leads to the question of, well, if dark patterns aren’t the answer, then what is?

The majority of the book is exploring what you can do to improve your conversion rate without resorting to these kinds of more manipulative techniques.”

— Paul Boag, Smashing Podcast Episode 12

Plenty Of Inspiration

We want to help you stay inspired! You might like

And, have you looked at Smashing Membership lately? Members have free access to our eBooks, job board, partner offers, SmashingTV webinars, and discounts on just about everything else Smashing has to offer.

More Smashing Books

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Trine, Alla and Adam are some of these people. Have you checked out their books already?

Smashing Editorial (ra, il)
[post_excerpt] => Paul Boag has written three books in the Smashing Library, and Click! How to Encourage Clicks Without Shady Tricks is just the most recent one. Paul wrote some articles for Smashing Magazine in addition to appearing on the Smashing Podcast and SmashingTV to promote the book and talk about some of its themes. Click! finally started shipping in June, and our beloved preorder customers found a surprise in those packages. [post_date_gmt] => 2020-07-10 10:30:00 [post_date] => 2020-07-10 10:30:00 [post_modified_gmt] => 2020-07-10 10:30:00 [post_modified] => 2020-07-10 10:30:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/click-book-release-followup/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/click-book-release-followup/ [syndication_item_hash] => Array ( [0] => 3525fea09e3d303a06b40a0bbf0b3ebd [1] => 8c71dff5329881e82d4b4dd46f725983 [2] => 5eea4a93a02b6421018438b17d1a41a3 [3] => 961b54e273074634cb7090e52ca2cba8 [4] => c135be087461ffef9361e3e239800f8f [5] => a8b46b764c31420487f574520c40948c [6] => 919399ed51ad8eca0d0cf02e50ab88c0 [7] => 04e4c615e311bfeaedd5513a0c477e64 [8] => 0465d0954f89dc3d6c2eb55b59100e72 [9] => d709f3147d491b2005a42f88a36ddd61 [10] => 89ef3b53caba2c5cfc79b441e16f49a7 [11] => b0b28440f78b8ef61ec61b65efe5931e [12] => 763cbbeaf48973592331891afd2ec521 [13] => 99bfa1a12a28782794c10c4a1984b2a8 [14] => 5cfb580348530f734cb7b769f6d3cb73 [15] => aceddbfb75d51d7a4b126718a56a4c2a [16] => 99e9fa0a55716765d6669ff25005e72f [17] => 141f8824b38d1a3013fa54b515a82c29 [18] => 8cacb1e242a9415cca9514bca7c7c754 [19] => e0a43e20af15464841068cec15132ef6 [20] => 27032d9db9d113d9b2642852cbea0f25 [21] => da23e48d77b521b1191dc7ced20a4c30 [22] => 4ee9d53ac9ff4aebc5333b9421bc3391 [23] => 42453a3a9529a9d664d3f6e8d009718e [24] => 42453a3a9529a9d664d3f6e8d009718e [25] => 3900970b2dc2fec1c1e60aa006497ab9 [26] => 0992ba3c2d8384becc6d5b2a5972aadf [27] => a7d1ee1872bdd1e4285af3a155b5e575 [28] => a371c8198aa9e42670c4878daa229539 [29] => 91cb43e715c0fd90e81a15ecef2a06f2 ) ) [post_type] => post [post_author] => 536 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => a-book-release-for-click-and-a-chance-to-rethink-our-routines )

Decide filter: Returning post, everything seems orderly :A Book Release For Click! And A Chance To Rethink Our Routines


FAF deciding on filters on post to be syndicated:

CSS Transitions In Vuejs And Nuxtjs

Array ( [post_title] => CSS Transitions In Vuejs And Nuxtjs [post_content] => CSS Transitions In Vuejs And Nuxtjs

CSS Transitions In Vuejs And Nuxtjs

Timi Omoyeni

Transitions are a module of CSS that lets you create gradual transitions between the values of specific CSS properties. The behavior of these transitions can be controlled by specifying their timing function, duration, and other attributes. Using these transitions in your applications and websites creates a better visual experience and sometimes draws and holds the user’s attention while a piece of information is being introduced to or leaving the screen. According to Can I Use, transitions are supported by most browsers, although there are some minor issues with Internet Explorer and Safari.

Data on support for the css-transitions feature across the major browsers from caniuse.com
Source: caniuse.com. (Large preview)

Vue.js is an open-source JavaScript framework for building client-facing web applications and single-page applications (SPA). One of the features of this framework is the ability to add transitions to an app seamlessly, to switch between either pages or components, and we’re going to look at how to do that in this tutorial.

Nuxt.js is also a JavaScript framework, built on top of Vue.js (and often referred to as a framework of a framework), for building server-side rendered web applications, statically generated websites, as well as SPAs. It works much the same as Vue.js, so if you know Vue.js, you shouldn’t have many issues getting started with Nuxt.js. Nuxt.js comes with two properties for adding transitions to an app, and we’re going to cover those as well in this tutorial.

This tutorial requires basic knowledge of either Vue.js or Nuxt.js. All of the code snippets in this tutorial can be found on GitHub.

What Is A Transition?

According to the Oxford Dictionary, a transition can be defined as:

“A passage in a piece of writing that smoothly connects two topics or sections to each other.

The process or a period of changing from one state or condition to another.”

In terms of physics, a transition is defined thus:

“A change of an atom, nucleus, electron, etc. from one quantum state to another, with emission or absorption of radiation.”

From these definitions, we get an idea of what a transition is. The definitions all involve two different things or states. In terms of code, a transition is not so different.

What Is A CSS Transition?

According to Mozilla’s web documentation:

“CSS Transitions is a module of CSS that lets you create gradual transitions between the values of specific CSS properties. The behavior of these transitions can be controlled by specifying their timing function, duration, and other attributes.”

This means we can define a CSS transition as: the change in the CSS property of one or more elements from one value to another.

The CSS transition property enables us to add a transition effect to any valid element. It consists of up to four other properties (five, if we count the transition property itself) that can be used individually or combined as a shorthand. Each property has a different function.

transition-property

The transition-property accepts the name of the CSS property that we want to watch out for changes on and whose change process we want to transition. It looks like this:

.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  background-color: red;
  color: #fff;
  border: 0;
}

But this property does not do anything without the next property.

transition-duration

The transition-duration property specifies the time that the change of the element(s) in the transition-property should go on for. This property is required in order for the transition to work; if it is not set to a value greater than 0s, the default value of 0s means the transition will not run. So, let’s set a duration for this transition:

.btn {
  width: 200px;
  transition-property: width;
  transition-duration: 2s;
  background-color: red;
  color: #fff;
  border: 0;
}

Here, we have an element with a class name of btn that has a width of 200px. We are using both the transition-property and the transition-duration properties here. What this means is, “Hey, CSS, watch out for when the width property changes, and when this happens, let the effect take 2s to change.”

So, if we have a button with a class name of btn, then the index.html file would look like this:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>CSS Transitions</title>
    <link rel="stylesheet" href="./assets/styles.css">
</head>
<body>
    <Section>
        <h1>Hi CSS Transitions</h1>
        <button class="btn">Hover on me to see a cool transition</button>
    </Section>
</body>
</html>

Here, we have an HTML file that contains a button with a class that has transition-property and transition-duration properties watching out for changes to the element’s width.

One thing to note is that, in order for the transition on our button to work, we have to actually change the width of that element, either by manually adjusting the width with the developer tools in the browser, by using one of the CSS pseudo-classes, or by using JavaScript. For the purpose of this tutorial, we’re going to use the CSS pseudo-class :hover to change the width of the button:

/* existing styles */
.btn:hover {
  width: 300px;
}

Now, if we hover over this button, we should see the width of the button gradually increase over the set time, which is 2s.

See the Pen transition-property and transition-duration by Timi Omoyeni (@timibadass) on CodePen.

transition-timing-function

The transition-timing-function property determines the speed at which the transition effect occurs. Five keyword values are available for this property: ease (the default), linear, ease-in, ease-out, and ease-in-out.

So, if we add ease-in to our button, we should notice the speed at which the width and height change, compared to the speed at which the button returns to its normal state. Here is our updated styles.css sheet:

.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  transition-duration: 2s;
  transition-timing-function: ease-in;
  background-color: red;
  color: #fff;
  border: 0;
}

If we want a more dramatic speed effect or the freedom to set a specific speed effect, we can use cubic-bezier(n,n,n,n):

.btn {
    width: 200px;
    height: 50px;
    transition-property: width;
    transition-duration: 2s;
    transition-timing-function: cubic-bezier(0.075, 0.82, 0.165, 1);
    background-color: red;
    color: #fff;
    border: 0;
}

See the Pen transition-timing-function by Timi Omoyeni (@timibadass) on CodePen.

One interesting thing about this value is that you can edit it directly in the browser using the developer tools.

cubic bezier in dev tools.
Cubic bezier in the browser’s developer tools. (Large preview)

If you click on the highlighted part of your developer tools, you’ll get an interface to modify the cubic-bezier options:

cubic bezier interface highlighted in yellow.
Cubic bezier interface highlighted in yellow. (Large preview)

As you move the two points around, the values of (n,n,n,n) change, and you’ll see a representation (highlighted in red) of how the speed effect will appear. This can be very useful when you have a specific speed effect in mind.

transition-delay

The transition-delay property sets how long (in seconds) the transition has to wait before its effect starts to occur. This time is different from the transition-duration property, which specifies how long the transition effect will take place.

.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  transition-duration: 2s;
  transition-timing-function: cubic-bezier(0.075, 0.82, 0.165, 1);
  transition-delay: 5s;
  background-color: red;
  color: #fff;
  border: 0;
}

If you try this in the browser, you will notice a delay before the element’s width starts to change. This is because of the transition-delay property and value that we have set.

See the Pen transition-delay by Timi Omoyeni (@timibadass) on CodePen.

Shorthand Property

The individual transition properties can be tedious to use. For this reason, we have the shorthand property: transition. It accepts all of the properties in a defined order:

{
  transition: a b c d;
}

Here, the letters correspond as follows: a is the transition-property, b is the transition-duration, c is the transition-timing-function, and d is the transition-delay.

We can refactor our existing transition to work using this shorthand property:

/* from */
.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  transition-duration: 2s;
  transition-timing-function: cubic-bezier(0.075, 0.82, 0.165, 1);
  transition-delay: 1s;
  background-color: red;
  color: #fff;
  border: 0;
}

/* to */
.btn {
  width: 200px;
  height: 50px;
  transition: width 2s cubic-bezier(0.075, 0.82, 0.165, 1) 1s;
  background-color: red;
  color: #fff;
  border: 0;
}

If we try this code in the browser, we will get the same transition effect that we got when we used the individual properties.

See the Pen using shorthand property by Timi Omoyeni (@timibadass) on CodePen.

Transitions In Vue.js

Vue.js comes with two different ways to add transitions to an application. This doesn’t mean we cannot use transitions the CSS way. It just means that Vue.js’ developers have built on top of CSS to make it easier to use transitions. Let’s look at them one by one.

Transitioning Individual Elements and Components

One way we can use transitions in Vue.js is by wrapping the transition component around an element or component, using any of the following: conditional rendering (via v-if), conditional display (via v-show), dynamic components, and component root nodes.

When we are developing an application, there are instances when we want to display data to the user depending on a value (such as a boolean). Here’s an example of how this works, taken from the index.vue file:

<template>
  <div>
    <p v-if="show">Now you see me!</p>
    <p v-else>Now you don't!</p>
    <button @click="show = !show">click me</button>
  </div>
</template>
<script>
export default {
 data() {
   return {
     show: true
   }
 }
}
</script>
<style>
</style>

We have added two paragraphs to this page that appear depending on the value of show. We also have a button that changes the value of show when clicked. We’ll add this page to our App.vue file by importing it like so:

<template>
  <div id="app">
    <Index />
  </div>
</template>
<script>
import Index from "./components/index.vue";
export default {
  name: 'App',
  components: {
    Index
  }
}
</script>

If we open the browser, we should see our paragraph and button:

vue landing page when show is true.
Vue.js landing page in default state. (Large preview)

Right now, clicking the button changes only the value of show, which causes the visible text to change:

landing page when show is false.
Landing page when button is clicked. (Large preview)

Adding a transition to this paragraph can be done by wrapping both paragraphs in the transition component. This component accepts a name prop, which is very important for the transition to work.

<template>
  <div>
    <transition name="fade">
      <p v-if="show">Now you see me!</p>
      <p v-else>Now you don't!</p>
    </transition>
    <button @click="show = !show">click me</button>
  </div>
</template>

This name tells Vue.js which transition to apply to the elements or components inside this transition component. At this point, if we click on the button, we would still not notice any transition because we have yet to add the configuration for our transition in the form of CSS classes.

One thing to note is that, when using a transition between two elements of the same tag, we need to specify a key attribute on each element in order for the transition to occur.

<template>
  <div>
    <transition name="fade">
      <p v-if="show" key="visible">Now you see me!</p>
      <p v-else key="notVisible">Now you don't!</p>
    </transition>
    <button @click="show = !show">click me</button>
  </div>
</template>

Vue.js has six transition classes that are applied to the elements or components inside the transition component, and each of these classes represents a state in the transition process. We’re going to look at only a few of them.
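For reference, here’s a rough sketch of what all six class names look like for our fade transition; we’ll only be filling in a few of them below:

/* Sketch only: the six classes generated for name="fade" */
.fade-enter {}         /* starting state for enter */
.fade-enter-active {}  /* active state for the whole enter phase */
.fade-enter-to {}      /* ending state for enter */
.fade-leave {}         /* starting state for leave */
.fade-leave-active {}  /* active state for the whole leave phase */
.fade-leave-to {}      /* ending state for leave */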

v-enter

The v-enter class represents the “starting state for enter”. This is the point at which a condition (v-if or v-else) has been met and the element is about to be made visible. The class is added before the element is inserted and removed one frame after the element has been inserted. The name prop (in this case, fade) attached to the transition component replaces the v prefix in the class name; the v prefix is only used by default when no name is provided. Thus, we can add this class to our index.vue file:

<style>
p {
  color: green;
}

.fade-enter{
  color: red;
  transform: translateY(20px);
}
</style>

First, we add a color of green to all of the paragraphs on the page. Then, we add our first transition class, fade-enter. Inside this class, we change the color to red, and we use the transform property with translateY to move the paragraph by 20px along the y-axis (vertically). If we try clicking on the button again, we will notice that very little or no transition takes place during the switch, because we still need to add the next class we’ll look at.

v-enter-active

The v-enter-active class represents the “whole entering” state of a transitioning element. It means that this class is added just before the element is inserted or becomes visible, and it is removed when the transition has ended. This class is important for v-enter to work because it can be used to add the CSS transition property to the class, together with its properties (transition-property, transition-duration, transition-timing-function, and transition-delay), some of which are needed in order for the transition effect to work. Let’s add this class to our app and see what happens:

.fade-enter-active {
  transition: transform .3s cubic-bezier(1.0, 0.5, 0.8, 1.0), color .5s cubic-bezier(1.0, 0.5, 0.8, 1.0);
}

See the Pen vue transition enter state by Timi Omoyeni (@timibadass) on CodePen.

Now, if we click on the button, we will notice the transition of the color and the position of each of the texts as they come into view. But the transition from visible to hidden isn’t smooth enough because no transition is happening.

v-leave-active

The v-leave-active class represents the entire state in which an element changes from visible to hidden. This means that this class is applied from the moment an element starts to leave the page, and it is removed once the transition ends. This class is important in order for a leave transition to be applied because it takes in the CSS transition property, which also takes in other transition properties. Let’s add this to our app and see what happens:

.fade-leave-active {
  transition: transform 1s cubic-bezier(1.0, 0.5, 0.8, 1.0), color 1s cubic-bezier(1.0, 0.5, 0.8, 1.0);
}

See the Pen vue transition leave active state by Timi Omoyeni (@timibadass) on CodePen.

When we click on the button now, we will notice that the element that should leave waits for approximately 2 seconds before disappearing. This is because Vue.js is expecting the next class with this transition to be added in order for it to take effect.

v-leave-to

The v-leave-to transition represents the “leaving” state, meaning the point at which an element starts to leave and its transition is triggered. It is added one frame after a leaving transition is triggered, and removed when the transition or animation finishes. Let’s add this class to our app and see what happens:

.fade-leave-to {
  transform: translateX(100px);
  color: cyan;
}

Clicking on the button, we will notice that each element that is leaving is sliding to the right as the color changes.

See the Pen vue transition leave to state by Timi Omoyeni (@timibadass) on CodePen.

Now that we understand how transitions work in Vue.js, here’s an image that brings it all together:

classification of vue transition classes
Classification of Vue.js transition classes. (Large preview)

Finally, notice the not-so-smooth transition that occurs during the enter and leave states of elements that are transitioning. This is because Vue.js’ enter and leave transitions occur simultaneously by default. Vue.js has a mode prop that helps us achieve a very smooth transition process. This prop accepts one of the following values: in-out (the new element transitions in first, and then the current element transitions out) and out-in (the current element transitions out first, and then the new element transitions in).

If we add this mode to our index.vue file and try again, we should see a better transition:

<template>
  <div>
    <transition name="fade" appear mode="out-in">
      <p v-if="show" key="visible">Now you see me!</p>
      <p v-else key="notVisible">Now you don't!</p>
    </transition>
    <button @click="transitionMe">click me</button>
  </div>
</template>

Now, if we click on the button, we will notice that one element leaves before another enters. This is a result of the mode we have selected for this transition. If we try the other mode, we will get a different behavior.

See the Pen vue transition with mode by Timi Omoyeni (@timibadass) on CodePen.

List Transitions

If you ever try adding transitions to more than one element at a time using the transition component, an error will get printed to the console:

Vue transition error printed in console.
Vue.js error printed in console. (Large preview)

This is because the transition component is not meant to render more than one element at a time. If we want to transition two or more elements at a time or render a list (using v-for), we make use of the transition-group component. This component also accepts a name prop, but it has some differences from the transition component, including the following: it renders an actual element (a span by default, which can be changed with the tag prop), transition modes are not available, and every element inside it is required to have a unique key attribute.

<template>
  <div>
    <h1>List/Group Transition</h1>
    <ul>
      <li v-for="user in users" :key="user.id">
        {{user.name}}
        <button>Remove</button>
      </li>
    </ul>
  </div>
</template>
<script>
export default {
  data() {
    return {
      users: [
        {
          name: "Vuejs",
          id: 1
        },
        {
          name: "Vuex",
          id: 2
        },
        {
          name: "Router",
          id: 3
        }
      ]
    };
  }
};
</script>
<style>
</style>

Here, we have an array of users, which we loop through using v-for, displaying the name in our template section. In order to be able to view this list, we need to import this component into the App.vue page:

<template>
  <div id="app">
    <Index />
    <listTransition />
  </div>
</template>
<script>
import Index from "./components/index.vue";
import listTransition from "./components/listTransition.vue";
export default {
  name: "App",
  components: {
    Index,
    listTransition
  }
};
</script>

Note that when using the transition-group component, instead of wrapping our list with a ul tag (or any tag we have in mind), we wrap it around the transition-group component and add the tag to the tag prop, like this:

<template>
  <div>
    <h1>List/Group Transition</h1>
    <transition-group name="slide-fade" tag='ul'>
      <li v-for="user in users" :key="user.id">
        {{user.name}}
        <button>Remove</button>
      </li>
    </transition-group>
  </div>
</template>
<script>
export default {
  data() {
    return {
      users: [
        {
          name: "Vuejs",
          id: 1
        },
        {
          name: "Vuex",
          id: 2
        },
        {
          name: "Router",
          id: 3
        }
      ]
    };
  }
};
</script>
<style>
.slide-fade-enter-active {
  transition: transform 0.3s cubic-bezier(1, 0.5, 0.8, 1),
    color 0.5s cubic-bezier(1, 0.5, 0.8, 1);
}
.slide-fade-leave-active {
  transition: transform 1s cubic-bezier(1, 0.5, 0.8, 1),
    color 1s cubic-bezier(1, 0.5, 0.8, 1);
}
.slide-fade-enter {
  color: mediumblue;
  transform: translateY(20px);
}
.slide-fade-leave-to {
  transform: translateX(100px);
  color: cyan;
}
</style>

Here, we have replaced the ul tag with the transition-group component, and added the ul as the tag prop value. If we inspect the updated page in the developer tools, we will see that the list is being wrapped in the element that we specified in the tag prop (that is, ul).

ul tag highlighted in dev tools.
The ul tag highlighted in the developer tools. (Large preview)

We have also added a transition name prop with a value of slide-fade to this component, with style rules below in the style section that follow this naming convention. For this to work, we need to add the following lines of code to our file:

<template>
  <div>
    <h1>List/Group Transition</h1>
    <transition-group name="slide-fade" tag="ul">
      <li v-for="user in users" :key="user.id">
        {{user.name}}
        <button @click="removeUser(user.id)">Remove</button>
      </li>
    </transition-group>
  </div>
</template>
<script>
export default {
  // ...
  methods: {
    removeUser(id) {
      let users = this.users.filter(user => user.id !== id);
      this.users = users;
    }
  }
};
</script>
  

In the template section, we add a click event to each button in the loop and pass the user.id to the removeUser method attached to this click event. We then create this function in the script section of our file. This function accepts an id as argument. Then, we run through our existing users and filter out the user with the id passed into this function. When this is done, we save our new array of users to the data of our page.

At this point, if you click on any of the buttons for the users, a transition effect will be applied as the user is being removed from the list.

See the Pen Vue list transition by Timi Omoyeni (@timibadass) on CodePen.

Transitions In Nuxt.js

Adding transitions to a Nuxt.js application is quite different from how you might be used to doing it in Vue.js. In Nuxt.js, the transition component is automatically added to the application for you. All you need to do is use one of the approaches below.

Add It to Individual Page Component

Nuxt.js allows us to add transitions to an individual page component seamlessly. This transition is applied while the user is navigating to the page. All we have to do is add a transition property to the script section of the component. This property can be a string, a function, or an object. Some of the properties it accepts are name, mode, css, duration, and type.
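As an aside (not part of the example that follows), the function form receives the destination and current routes, which lets us pick a transition name dynamically; the slide-left and slide-right names here are just hypothetical class prefixes:

export default {
  // Sketch: choose a transition based on where we’re navigating to and from.
  transition(to, from) {
    if (!from) return 'slide-left';
    return +to.query.page < +from.query.page ? 'slide-right' : 'slide-left';
  }
};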

Like Vue.js, Nuxt.js has a default name that gets assigned to a transition class if no name is provided, and it is called page. Let’s see how this works when we add it to our application in transition.vue:

<template>
  <div>
    <p>
      Lorem ipsum dolor sit amet consectetur adipisicing elit. Labore libero
      odio, asperiores debitis harum ipsa neque cum nulla incidunt explicabo ut
      eaque placeat qui, quis voluptas. Aut necessitatibus aliquam veritatis.
    </p>
    <nuxt-link to="/">home</nuxt-link>
  </div>
</template>
<script>
export default {
  transition: {
    name: "fade",
    mode: "out-in"
  },
  data() {
    return {
      show: true
    };
  }
};
</script>
<style>
p {
  color: green;
}
.fade-enter-active {
  transition: transform 0.3s cubic-bezier(1, 0.5, 0.8, 1),
    color 0.5s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-leave-active {
  transition: transform 1s cubic-bezier(1, 0.5, 0.8, 1),
    color 1s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-enter {
  color: mediumblue;
  transform: translateY(20px);
}
.fade-leave-to {
  transform: translateX(100px);
  color: cyan;
}
</style>

On this page, we’ve displayed “lorem ipsum” in the template section. We’ve also added the transition property, to which we have passed an object whose name is set to fade and whose mode is set to out-in. Finally, in the style section, we’ve added some styles that control the transition as the user navigates between this page and another.

In order for this transition to work, we have to navigate to /transition, but we would not notice any transition if we manually enter this route in our browser. So, let’s add a link to this page on the index.vue page.

<template>
  <div class="container">
    <div>
      // ..
      <nuxt-link to="/transition">next page</nuxt-link>
    </div>
  </div>
</template>

Now, if we click the link on either of the two pages, we will notice a sliding transition as the browser moves to and from the /transition route.

pageTransition

Adding transitions to individual pages can be challenging if we want to add them to all of the pages in the application. That’s where pageTransition comes in. This property allows us to add a general configuration for all of our pages in the nuxt.config.js file. It accepts either a string or an object as its value. Let’s see how that works in nuxt.config.js:

export default {
    // ...
    /*
     ** Global CSS
     */
    css: [
        '~/assets/transition.css'
    ],
    pageTransition: {
        name: "fade",
        mode: "out-in"
    },
}

Here, we’ve added the link to a CSS file, which we’ll create shortly. We have also added the pageTransition property to the file, along with its configuration. Now, let’s create our CSS file, transition.css, and add the following styles to it:

.fade-enter-active {
    transition: transform 0.3s cubic-bezier(1, 0.5, 0.8, 1), color 0.5s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-leave-active {
    transition: transform 1s cubic-bezier(1, 1, 1, 1), color 1s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-enter {
    color: mediumblue;
    transform: translateY(20px);
}
.fade-leave-to {
    transform: translate3d(-500px, -300px, 400px);
    color: cyan;
}

We’ve added the classes and styles that will be applied to the transition between one route and the other. If we get rid of the transition configuration from the transition.vue page and try to navigate between the two pages, we will get a transition effect.

layoutTransition

The layoutTransition property enables us to apply transitions based on the layout that the page is on. It works the same way as pageTransition, except that it works based on the layout. The default transition name is layout. Here’s an example of how it works, in nuxt.config.js:

export default {
    // ...
    /*
     ** Global CSS
     */
    css: [
        '~/assets/transition.css'
    ],
    layoutTransition: {
        name: "fade",
        mode: "out-in"
    },
}

Note that the fade transition classes are applied when the layout changes, so the transition only kicks in when navigating between pages that use different layouts. Let’s create a new layout in newLayout.vue to see what I mean:

<template>
  <!-- Your template -->
  <div>
    <h1>new layout</h1>
  </div>
</template>
<script>
export default {
  layout: "blog"
  // page component definitions
};
</script>
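For context, a layout in Nuxt.js is just a component placed in the layouts/ directory that renders the current page through the <nuxt /> component. A minimal sketch of such a file (the blog.vue file name is an assumption, matching the layout referenced above) could look like this:

<template>
  <div>
    <h1>Blog layout</h1>
    <!-- The matched page component is rendered here -->
    <nuxt />
  </div>
</template>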

Conclusion

We have learned about CSS transitions and how to create them using the transition properties individually (transition-property, transition-duration, transition-timing-function, and transition-delay) and using the shorthand transition property. We have also covered how to apply these transitions in both Vue.js and Nuxt.js. But that’s not all. Vue.js has more ways for us to apply transitions in an application, such as JavaScript transition hooks, transitions on initial render, and transitions between dynamic components.

Smashing Editorial (ks, ra, yk, il)
[post_excerpt] => Transitions are a module of CSS that lets you create gradual transitions between the values of specific CSS properties. The behavior of these transitions can be controlled by specifying their timing function, duration, and other attributes. Using these transitions in your applications and websites create a better visual experience and sometimes draws and holds the user’s attention while a piece of information is being introduced to or leaving the screen. [post_date_gmt] => 2020-07-10 09:00:00 [post_date] => 2020-07-10 09:00:00 [post_modified_gmt] => 2020-07-10 09:00:00 [post_modified] => 2020-07-10 09:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/css-transitions-vuejs-nuxtjs/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/css-transitions-vuejs-nuxtjs/ [syndication_item_hash] => Array ( [0] => 35535d9e8f610185dad08d515723d31f [1] => 3a9c477968bacdb0a9b81ff15fd6b02f [2] => 53771e9565b18f38a37c725144145299 [3] => ed93963df09e7db22740319912390b9f [4] => abbdb76666a8eec3c1d04551433cf78a [5] => c7253d05aa0a8e9b2cf8876f7cf94a46 [6] => 93c4d28541f45318c984ebec67740288 [7] => 7608d2ad569129ba8762b3b182be1fcc [8] => 9d45398ffee4eb65d78b363395d45d62 [9] => 3152231812f20bb75322a767da0cd7bd [10] => b38b3dc63f3d5229149f327969d6a607 [11] => a62d1bccb363ea57fb517f5067800d64 [12] => 7cbf675fcb1d9b5ecfca44fb1a41292c [13] => 287829bb8ebda6e259f081cc25b3782c [14] => e1033ab7f3469df7537e51cdf128afb1 [15] => ba3a80ad482dd78b2d7d3dd03e880adb [16] => 94e853d34453b9de168ec7f20b4ffc8e [17] => 0fc372baafef2079d896c8694aec12fa [18] => 9e86767a75551aec5bb65a71246d1a06 [19] => 662a7c47d2502f64e5c0e362535179d1 [20] => 03a4526872bcc9158f825d8aeac680c4 [21] => 0775d0d9a0a3a9ce5561223828aad7c6 [22] => 5abbefa685ea7b44dd08776a2b007e8a [23] => 28c0a677dc4bec2a51502da25654186e [24] => c4df8b0e4fb9ebc3b8e7f3bb158973e1 [25] => c4df8b0e4fb9ebc3b8e7f3bb158973e1 [26] => 3fc9b7ba420b99a5b86a3d50192c6992 [27] => 8fc2b3c42349aac0cd66d3df913776ad [28] => 49e0279112cf3a23b477089617c6de12 [29] => 98a242a6b29de40403b2b2581f7cd9cc ) ) [post_type] => post [post_author] => 551 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => css-transitions-in-vuejs-and-nuxtjs )

Decide filter: Returning post, everything seems orderly :CSS Transitions In Vuejs And Nuxtjs

Array ( [post_title] => CSS Transitions In Vuejs And Nuxtjs [post_content] => CSS Transitions In Vuejs And Nuxtjs

CSS Transitions In Vuejs And Nuxtjs

Timi Omoyeni

Transitions are a module of CSS that lets you create gradual transitions between the values of specific CSS properties. The behavior of these transitions can be controlled by specifying their timing function, duration, and other attributes. Using these transitions in your applications and websites creates a better visual experience and sometimes draws and holds the user’s attention while a piece of information is being introduced to or leaving the screen. According to Can I Use, transitions are supported by most browsers, although there are some minor issues with Internet Explorer and Safari.

Data on support for the css-transitions feature across the major browsers from caniuse.com
Source: caniuse.com. (Large preview)

Vue.js is an open-source JavaScript framework for building client-facing web applications and single-page applications (SPA). One of the features of this framework is the ability to add transitions to an app seamlessly, to switch between either pages or components, and we’re going to look at how to do that in this tutorial.

Nuxt.js is also a JavaScript framework, built on top of Vue.js (and often referred to as a framework of a framework), for building server-side rendered web applications, statically generated websites, as well as SPAs. It works much the same as Vue.js, so if you know Vue.js, you shouldn’t have many issues getting started with Nuxt.js. Nuxt.js comes with two properties for adding transitions to an app, and we’re going to cover those as well in this tutorial.

This tutorial requires basic knowledge of either Vue.js or Nuxt.js. All of the code snippets in this tutorial can be found on GitHub.

What Is A Transition?

According to the Oxford Dictionary, a transition can be defined as:

“A passage in a piece of writing that smoothly connects two topics or sections to each other.

The process or a period of changing from one state or condition to another.”

In terms of physics, a transition is defined thus:

“A change of an atom, nucleus, electron, etc. from one quantum state to another, with emission or absorption of radiation.”

From these definitions, we get an idea of what a transition is. The definitions all involve two different things or states. In terms of code, a transition is not so different.

What Is A CSS Transition?

According to Mozilla’s web documentation:

“CSS Transitions is a module of CSS that lets you create gradual transitions between the values of specific CSS properties. The behavior of these transitions can be controlled by specifying their timing function, duration, and other attributes.”

This means we can define a CSS transition as: the change in the CSS property of one or more elements from one value to another.

The CSS transition property enables us to add a transition effect to any valid element. It consists of up to four other properties (five, if we count the transition property itself) that can be used individually or combined as a shorthand. Each property has a different function.

transition-property

The transition-property accepts the name of the CSS property that we want to watch out for changes on and whose change process we want to transition. It looks like this:

.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  background-color: red;
  color: #fff;
  border: 0;
}

But this property does not do anything without the next property.

transition-duration

The transition-duration property specifies the time that the change of the element(s) in the transition-property should go on for. This property is required in order for the transition to work; if it is not set to a value greater than 0s, the default value of 0s means the transition will not run. So, let’s set a duration for this transition:

.btn {
  width: 200px;
  transition-property: width;
  transition-duration: 2s;
  background-color: red;
  color: #fff;
  border: 0;
}

Here, we have an element with a class name of btn that has a width of 200px. We are using both the transition-property and the transition-duration properties here. What this means is, “Hey, CSS, watch out for when the width property changes, and when this happens, let the effect take 2s to change.”

So, if we have a button with a class name of btn, then the index.html file would look like this:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>CSS Transitions</title>
    <link rel="stylesheet" href="./assets/styles.css">
</head>
<body>
    <Section>
        <h1>Hi CSS Transitions</h1>
        <button class="btn">Hover on me to see a cool transition</button>
    </Section>
</body>
</html>

Here, we have an HTML file that contains a button with a class that has transition-property and transition-duration properties watching out for changes to the element’s width.

One thing to note is that, in order for the transition on our button to work, we have to actually change the width of that element, either by manually adjusting the width with the developer tools in the browser, by using one of the CSS pseudo-classes, or by using JavaScript. For the purpose of this tutorial, we’re going to use the CSS pseudo-class :hover to change the width of the button:

/* existing styles */
.btn:hover {
  width: 300px;
}

Now, if we hover over this button, we should see the width of the button gradually increase over the set time, which is 2s.

See the Pen transition-property and transition-duration by Timi Omoyeni (@timibadass) on CodePen.

transition-timing-function

The transition-timing-function property determines the speed at which the transition effect occurs. Five keyword values are available for this property: ease (the default), linear, ease-in, ease-out, and ease-in-out.

So, if we add ease-in to our button, we should notice the speed at which the width and height change, compared to the speed at which the button returns to its normal state. Here is our updated styles.css sheet:

.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  transition-duration: 2s;
  transition-timing-function: ease-in;
  background-color: red;
  color: #fff;
  border: 0;
}

If we want a more dramatic speed effect or the freedom to set a specific speed effect, we can use cubic-bezier(n,n,n,n):

.btn {
    width: 200px;
    height: 50px;
    transition-property: width;
    transition-duration: 2s;
    transition-timing-function: cubic-bezier(0.075, 0.82, 0.165, 1);
    background-color: red;
    color: #fff;
    border: 0;
}

See the Pen transition-timing-function by Timi Omoyeni (@timibadass) on CodePen.

One interesting thing about this value is that you can edit it directly in the browser using the developer tools.

cubic bezier in dev tools.
Cubic bezier in the browser’s developer tools. (Large preview)

If you click on the highlighted part of your developer tools, you’ll get an interface to modify the cubic-bezier options:

cubic bezier interface highlighted in yellow.
Cubic bezier interface highlighted in yellow. (Large preview)

As you move the two points around, the values of (n,n,n,n) change, and you’ll see a representation (highlighted in red) of how the speed effect will appear. This can be very useful when you have a specific speed effect in mind.

transition-delay

The transition-delay property sets how long (in seconds) the transition has to wait before its effect starts to occur. This time is different from the transition-duration property, which specifies how long the transition effect will take place.

.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  transition-duration: 2s;
  transition-timing-function: cubic-bezier(0.075, 0.82, 0.165, 1);
  transition-delay: 5s;
  background-color: red;
  color: #fff;
  border: 0;
}

If you try this in the browser, you will notice a delay before the element’s width starts to change. This is because of the transition-delay property and value that we have set.

See the Pen transition-delay by Timi Omoyeni (@timibadass) on CodePen.

Shorthand Property

The individual transition properties can be tedious to use. For this reason, we have the shorthand property: transition. It accepts all of the properties in a defined order:

{
  transition: a b c d;
}

Here, the letters correspond as follows: a is the transition-property, b is the transition-duration, c is the transition-timing-function, and d is the transition-delay.

We can refactor our existing transition to work using this shorthand property:

/* from */
.btn {
  width: 200px;
  height: 50px;
  transition-property: width;
  transition-duration: 2s;
  transition-timing-function: cubic-bezier(0.075, 0.82, 0.165, 1);
  transition-delay: 1s;
  background-color: red;
  color: #fff;
  border: 0;
}

/* to */
.btn {
  width: 200px;
  height: 50px;
  transition: width 2s cubic-bezier(0.075, 0.82, 0.165, 1) 1s;
  background-color: red;
  color: #fff;
  border: 0;
}

If we try this code in the browser, we will get the same transition effect that we got when we used the individual properties.

See the Pen using shorthand property by Timi Omoyeni (@timibadass) on CodePen.
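One thing not covered above is that the shorthand also accepts a comma-separated list, which lets us transition several properties at once, each with its own timing. Here is a small sketch, assuming we also want the background color to change on hover (the darkred hover color is just an illustrative choice):

.btn {
  width: 200px;
  height: 50px;
  transition: width 2s ease-in, background-color 1s linear;
  background-color: red;
  color: #fff;
  border: 0;
}

.btn:hover {
  width: 300px;
  background-color: darkred;
}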

Transitions In Vue.js

Vue.js comes with two different ways to add transitions to an application. This doesn’t mean we cannot use transitions the CSS way. It just means that Vue.js’ developers have built on top of CSS to make it easier to use transitions. Let’s look at them one by one.

Transitioning Individual Elements and Components

One way we can use transitions in Vue.js is by wrapping the transition component around an element or component, using any of the following: conditional rendering (using v-if), conditional display (using v-show), dynamic components, or component root nodes.

When we are developing an application, there are instances when we want to display data to the user depending on a value (such as a boolean). Here’s an example of how this works, taken from the index.vue file:

<template>
  <div>
    <p v-if="show">Now you see me!</p>
    <p v-else>Now you don't!</p>
    <button @click="show = !show">click me</button>
  </div>
</template>
<script>
export default {
 data() {
   return {
     show: true
   }
 }
}
</script>
<style>
</style>

We have added two paragraphs to this page that appear depending on the value of show. We also have a button that changes the value of show when clicked. We’ll add this page to our App.vue file by importing it like so:

<template>
  <div id="app">
    <Index />
  </div>
</template>
<script>
import Index from "./components/index.vue";
export default {
  name: 'App',
  components: {
    Index
  }
}
</script>

If we open the browser, we should see our paragraph and button:

vue landing page when show is true.
Vue.js landing page in default state. (Large preview)

Right now, clicking the button changes only the value of show, which causes the visible text to change:

landing page when show is false.
Landing page when button is clicked. (Large preview)

Adding a transition to this paragraph can be done by wrapping both paragraphs in the transition component. This component accepts a name prop, which is very important for the transition to work.

<template>
  <div>
    <transition name="fade">
      <p v-if="show">Now you see me!</p>
      <p v-else>Now you don't!</p>
    </transition>
    <button @click="show = !show">click me</button>
  </div>
</template>

This name tells Vue.js which transition to apply to the elements or components inside this transition component. At this point, if we click on the button, we would still not notice any transition because we have yet to add the configuration for our transition in the form of CSS classes.

One thing to note is that, when using a transition between two elements of the same tag, we need to specify a key attribute on each element in order for the transition to occur.

<template>
  <div>
    <transition name="fade">
      <p v-if="show" key="visible">Now you see me!</p>
      <p v-else key="notVisible">Now you don't!</p>
    </transition>
    <button @click="show = !show">click me</button>
  </div>
</template>

Vue.js has six transition classes that are applied to the elements or components inside the transition component, and each of these classes represents a state in the transition process. We’re going to look at only a few of them.
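For reference, here is a sketch of how all six classes would be named for our fade transition, with a comment describing the state each one covers (only the ones we discuss below need actual rules):

/* starting state for enter; removed one frame after the element is inserted */
.fade-enter {}

/* active state for enter; applied during the whole entering phase */
.fade-enter-active {}

/* ending state for enter; added one frame after the element is inserted */
.fade-enter-to {}

/* starting state for leave; added as soon as the leaving transition is triggered */
.fade-leave {}

/* active state for leave; applied during the whole leaving phase */
.fade-leave-active {}

/* ending state for leave; added one frame after the leaving transition is triggered */
.fade-leave-to {}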

v-enter

The v-enter class represents the “starting state for enter”. This is the point at which a condition (v-if or v-else) has been met and the element is about to be made visible. The class is added before the element is inserted and removed one frame after the element has been inserted. The name prop (in this case, fade) attached to the transition component is prefixed to this class name in place of the v; if no name is provided, the default v prefix is used instead. Thus, we can add this class to our index.vue file:

<style>
p {
  color: green;
}

.fade-enter{
  color: red;
  transform: translateY(20px);
}
</style>

First, we add a color of green to all of the paragraphs on the page. Then, we add our first transition class, fade-enter. Inside this class, we change the color to red, and we use the transform property with translateY(20px) to move the paragraph by 20px along the y-axis (vertically). If we try clicking on the button again, we will notice that very little or no transition takes place during the switch, because we still need to add the next class we’ll look at.

v-enter-active

The v-enter-active class represents the “whole entering” state of a transitioning element. It means that this class is added just before the element is inserted or becomes visible, and it is removed when the transition has ended. This class is important for v-enter to work because it can be used to add the CSS transition property to the class, together with its properties (transition-property, transition-duration, transition-timing-function, and transition-delay), some of which are needed in order for the transition effect to work. Let’s add this class to our app and see what happens:

.fade-enter-active {
  transition: transform .3s cubic-bezier(1.0, 0.5, 0.8, 1.0), color .5s cubic-bezier(1.0, 0.5, 0.8, 1.0);
}

See the Pen vue transition enter state by Timi Omoyeni (@timibadass) on CodePen.

Now, if we click on the button, we will notice the transition of the color and the position of each text as it comes into view. But the change from visible to hidden isn’t smooth, because no leave transition is happening yet.

v-leave-active

The v-leave-active class represents the entire state in which an element changes from visible to hidden. This means that the class is applied from the moment an element starts to leave the page, and it is removed once the transition ends. This class is important in order for a leave transition to be applied, because it is where the CSS transition property, together with the individual transition properties, is declared. Let’s add this to our app and see what happens:

.fade-leave-active {
  transition: transform 1s cubic-bezier(1.0, 0.5, 0.8, 1.0), color 1s cubic-bezier(1.0, 0.5, 0.8, 1.0);
}

See the Pen vue transition leave active state by Timi Omoyeni (@timibadass) on CodePen.

When we click on the button now, we will notice that the element that should leave stays in place for the duration of the leave transition before disappearing, without any visible change. This is because Vue.js is expecting the next transition class to be added in order for the leave effect to become visible.

v-leave-to

The v-leave-to class represents the “ending state for leave”, that is, the state the element transitions towards as it leaves. It is added one frame after a leaving transition is triggered, and it is removed when the transition or animation finishes. Let’s add this class to our app and see what happens:

.fade-leave-to {
  transform: translateX(100px);
  color: cyan;
}

Clicking on the button, we will notice that each element that is leaving is sliding to the right as the color changes.

See the Pen vue transition leave to state by Timi Omoyeni (@timibadass) on CodePen.

Now that we understand how transitions work in Vue.js, here’s an image that brings it all together:

classification of vue transition classes
Classification of Vue.js transition classes. (Large preview)

Finally, notice the not-so-smooth transition that occurs during the enter and leave states of elements that are transitioning. This is because Vue.js’ enter and leave transitions occur simultaneously by default. Vue.js has a mode prop that helps us achieve a smoother transition process. This prop accepts one of the following values: in-out (the new element transitions in first, and only then does the current element transition out) and out-in (the current element transitions out first, and only then does the new element transition in).

If we add this mode to our index.vue file and try again, we should see a better transition:

<template>
  <div>
    <transition name="fade" appear mode="out-in">
      <p v-if="show" key="visible">Now you see me!</p>
      <p v-else key="notVisible">Now you don't!</p>
    </transition>
    <button @click="show = !show">click me</button>
  </div>
</template>

Now, if we click on the button, we will notice that one element leaves before another enters. This is a result of the mode we have selected for this transition. If we try the other mode, we will get a different behavior.
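For instance, trying the other mode is just a matter of swapping the value of the prop, as in this quick sketch using the same markup:

<transition name="fade" appear mode="in-out">
  <p v-if="show" key="visible">Now you see me!</p>
  <p v-else key="notVisible">Now you don't!</p>
</transition>

With in-out, the entering element transitions in first, and only then does the current element transition out.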

See the Pen vue transition with mode by Timi Omoyeni (@timibadass) on CodePen.

List Transitions

If you ever try adding transitions to more than one element at a time using the transition component, an error will get printed to the console:

Vue transition error printed in console.
Vue.js error printed in console. (Large preview)

This is because the transition component is not meant to render more than one element at a time. If we want to transition two or more elements at once or render a list (using v-for), we make use of the transition-group component. This component also accepts a name prop, but it has some differences from the transition component, including the following: it renders an actual element (a span by default, which can be changed with the tag prop), transition modes are not available, every element inside it must have a unique key attribute, and the CSS transition classes are applied to the inner elements rather than to the group container itself.

<template>
  <div>
    <h1>List/Group Transition</h1>
    <ul>
      <li v-for="user in users" :key="user.id">
        {{user.name}}
        <button>Remove</button>
      </li>
    </ul>
  </div>
</template>
<script>
export default {
  data() {
    return {
      users: [
        {
          name: "Vuejs",
          id: 1
        },
        {
          name: "Vuex",
          id: 2
        },
        {
          name: "Router",
          id: 3
        }
      ]
    };
  }
};
</script>
<style>
</style>

Here, we have an array of users, which we loop through using v-for, displaying the name in our template section. In order to be able to view this list, we need to import this component into the App.vue page:

<template>
  <div id="app">
    <Index />
    <listTransition />
  </div>
</template>
<script>
import Index from "./components/index.vue";
import listTransition from "./components/listTransition.vue";
export default {
  name: "App",
  components: {
    Index,
    listTransition
  }
};
</script>

Note that when using the transition-group component, instead of wrapping our list in a ul tag (or any other tag we have in mind), we wrap it in the transition-group component and pass the desired tag to the tag prop, like this:

<template>
  <div>
    <h1>List/Group Transition</h1>
    <transition-group name="slide-fade" tag='ul'>
      <li v-for="user in users" :key="user.id">
        {{user.name}}
        <button>Remove</button>
      </li>
    </transition-group>
  </div>
</template>
<script>
export default {
  data() {
    return {
      users: [
        {
          name: "Vuejs",
          id: 1
        },
        {
          name: "Vuex",
          id: 2
        },
        {
          name: "Router",
          id: 3
        }
      ]
    };
  }
};
</script>
<style>
.slide-fade-enter-active {
  transition: transform 0.3s cubic-bezier(1, 0.5, 0.8, 1),
    color 0.5s cubic-bezier(1, 0.5, 0.8, 1);
}
.slide-fade-leave-active {
  transition: transform 1s cubic-bezier(1, 0.5, 0.8, 1),
    color 1s cubic-bezier(1, 0.5, 0.8, 1);
}
.slide-fade-enter {
  color: mediumblue;
  transform: translateY(20px);
}
.slide-fade-leave-to {
  transform: translateX(100px);
  color: cyan;
}
</style>

Here, we have replaced the ul tag with the transition-group component, and added the ul as the tag prop value. If we inspect the updated page in the developer tools, we will see that the list is being wrapped in the element that we specified in the tag prop (that is, ul).

ul tag highlighted in dev tools.
The ul tag highlighted in the developer tools. (Large preview)

We have also added a transition name prop with a value of slide-fade to this component, with style rules below in the style section that follow this naming convention. For this to work, we need to add the following lines of code to our file:

<template>
  <div>
    <h1>List/Group Transition</h1>
    <transition-group name="slide-fade" tag="ul">
      <li v-for="user in users" :key="user.id">
        {{user.name}}
        <button @click="removeUser(user.id)">Remove</button>
      </li>
    </transition-group>
  </div>
</template>
<script>
export default {
  // ...
  methods: {
    removeUser(id) {
      let users = this.users.filter(user => user.id !== id);
      this.users = users;
    }
  }
};
</script>
  

In the template section, we add a click event to each button in the loop and pass the user.id to the removeUser method attached to this click event. We then create this function in the script section of our file. This function accepts an id as argument. Then, we run through our existing users and filter out the user with the id passed into this function. When this is done, we save our new array of users to the data of our page.

At this point, if you click on any of the buttons for the users, a transition effect will be applied as the user is being removed from the list.

See the Pen Vue list transition by Timi Omoyeni (@timibadass) on CodePen.
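The example above only removes items, so the leave classes do most of the work. To see the enter classes (slide-fade-enter and slide-fade-enter-active) in action as well, we could also add items to the list. Here is a minimal sketch with a hypothetical addUser method (the method name and the generated user names are placeholders, not part of the original demo):

<template>
  <div>
    <h1>List/Group Transition</h1>
    <button @click="addUser">Add user</button>
    <transition-group name="slide-fade" tag="ul">
      <li v-for="user in users" :key="user.id">
        {{user.name}}
        <button @click="removeUser(user.id)">Remove</button>
      </li>
    </transition-group>
  </div>
</template>
<script>
export default {
  // ... existing data
  methods: {
    removeUser(id) {
      this.users = this.users.filter(user => user.id !== id);
    },
    addUser() {
      // pick an id one higher than the current maximum so that keys stay unique
      const id = Math.max(0, ...this.users.map(user => user.id)) + 1;
      this.users.push({ name: `User ${id}`, id });
    }
  }
};
</script>

Each newly pushed item then enters through the slide-fade-enter and slide-fade-enter-active classes defined earlier.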

Transitions In Nuxt.js

Adding transitions to a Nuxt.js application is quite different from what you might be used to in Vue.js. In Nuxt.js, the transition component is automatically added to the application for you. All you need to do is one of the following.

Add It to Individual Page Component

Nuxt.js allows us to add transitions to an individual page component seamlessly. The transition is applied while the user navigates to that page. All we have to do is add a transition property to the script section of the component. This property can be a string, a function, or an object. When an object is used, among the properties it accepts are name and mode.
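For context, the string form is the simplest of the three: it just names the transition whose classes should be applied. A minimal sketch, assuming matching slide-left-* classes exist in your CSS, could look like this:

<script>
export default {
  // string form: apply the classes prefixed with "slide-left"
  transition: "slide-left"
};
</script>

The object form, which the example below uses, additionally lets us set options such as mode.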

Like Vue.js, Nuxt.js has a default name that gets assigned to a transition class if no name is provided, and it is called page. Let’s see how this works when we add it to our application in transition.vue:

<template>
  <div>
    <p>
      Lorem ipsum dolor sit amet consectetur adipisicing elit. Labore libero
      odio, asperiores debitis harum ipsa neque cum nulla incidunt explicabo ut
      eaque placeat qui, quis voluptas. Aut necessitatibus aliquam veritatis.
    </p>
    <nuxt-link to="/">home</nuxt-link>
  </div>
</template>
<script>
export default {
  transition: {
    name: "fade",
    mode: "out-in"
  },
  data() {
    return {
      show: true
    };
  }
};
</script>
<style>
p {
  color: green;
}
.fade-enter-active {
  transition: transform 0.3s cubic-bezier(1, 0.5, 0.8, 1),
    color 0.5s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-leave-active {
  transition: transform 1s cubic-bezier(1, 0.5, 0.8, 1),
    color 1s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-enter {
  color: mediumblue;
  transform: translateY(20px);
}
.fade-leave-to {
  transform: translateX(100px);
  color: cyan;
}
</style>

On this page, we’ve displayed “lorem ipsum” in the template section. We’ve also added the transition property, to which we have passed an object whose name is set to fade and whose mode is set to out-in. Finally, in the style section, we’ve added some styles that control the transition as the user navigates between this page and another.

In order for this transition to work, we have to navigate to /transition, but we would not notice any transition if we manually enter this route in our browser. So, let’s add a link to this page on the index.vue page.

<template>
  <div class="container">
    <div>
      // ..
      <nuxt-link to="/transition">next page</nuxt-link>
    </div>
  </div>
</template>

Now, if we click the link on either of the two pages, we will notice a sliding transition as the browser moves to and from the /transition route.

pageTransition

Adding transitions to individual pages can be challenging if we want them on all of the pages in the application. That’s where pageTransition comes in. This property allows us to add a general configuration for all of our pages in the nuxt.config.js file. It accepts either a string or an object. Let’s see how that works in nuxt.config.js:

export default {
    // ...
    /*
     ** Global CSS
     */
    css: [
        '~/assets/transition.css'
    ],
    pageTransition: {
        name: "fade",
        mode: "out-in"
    },
}

Here, we’ve added the link to a CSS file, which we’ll create shortly. We have also added the pageTransition property to the file, along with its configuration. Now, let’s create our CSS file, transition.css, and add the following styles to it:

.fade-enter-active {
    transition: transform 0.3s cubic-bezier(1, 0.5, 0.8, 1), color 0.5s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-leave-active {
    transition: transform 1s cubic-bezier(1, 1, 1, 1), color 1s cubic-bezier(1, 0.5, 0.8, 1);
}
.fade-enter {
    color: mediumblue;
    transform: translateY(20px);
}
.fade-leave-to {
    transform: translate3d(-500px, -300px, 400px);
    color: cyan;
}

We’ve added the classes and styles that will be applied to the transition between one route and the other. If we get rid of the transition configuration from the transition.vue page and try to navigate between the two pages, we will get a transition effect.
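As a side note, pageTransition also accepts the plain string form when all we need is the transition name, so a minimal configuration sketch could look like this:

export default {
    // ...
    pageTransition: 'fade'
}

The object form used above is what gives us the extra mode option.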

layoutTransition

The layoutTransition property enables us to apply transitions based on the layout that the page uses. It works the same way as pageTransition, except that it is triggered by layout changes. The default transition name is layout. Here’s an example of how it works, in nuxt.config.js:

export default {
    // ...
    /*
     ** Global CSS
     */
    css: [
        '~/assets/transition.css'
    ],
    layoutTransition: {
        name: "fade",
        mode: "out-in"
    },
}

Note that the transition classes still have to use the fade prefix (matching the name we set above) in order for the layout transition to take effect. To see it in action, let’s create a new page, newLayout.vue, that uses a different layout:

<template>
  <!-- Your template -->
  <div>
    <h1>new layout</h1>
  </div>
</template>
<script>
export default {
  layout: "blog"
  // page component definitions
};
</script>
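The page above points to a layout called blog, which would need to exist in the layouts/ directory for the navigation (and therefore the layout transition) to happen. A hypothetical layouts/blog.vue could be as simple as this:

<template>
  <div>
    <h1>Blog layout</h1>
    <nuxt />
  </div>
</template>

Navigating between a page that uses the default layout and this one would then trigger the fade layout transition defined in transition.css.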

Conclusion

We have learned about CSS transitions and how to create them using the individual transition properties (transition-property, transition-duration, transition-timing-function, and transition-delay) as well as the shorthand transition property. We have also covered how to apply these transitions in both Vue.js and Nuxt.js. But that’s not all: Vue.js offers even more ways to apply transitions in an application, which are worth exploring further.

Smashing Editorial (ks, ra, yk, il)
[post_excerpt] => Transitions are a module of CSS that lets you create gradual transitions between the values of specific CSS properties. The behavior of these transitions can be controlled by specifying their timing function, duration, and other attributes. Using these transitions in your applications and websites create a better visual experience and sometimes draws and holds the user’s attention while a piece of information is being introduced to or leaving the screen. [post_date_gmt] => 2020-07-10 09:00:00 [post_date] => 2020-07-10 09:00:00 [post_modified_gmt] => 2020-07-10 09:00:00 [post_modified] => 2020-07-10 09:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/css-transitions-vuejs-nuxtjs/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/css-transitions-vuejs-nuxtjs/ [syndication_item_hash] => Array ( [0] => 35535d9e8f610185dad08d515723d31f [1] => 3a9c477968bacdb0a9b81ff15fd6b02f [2] => 53771e9565b18f38a37c725144145299 [3] => ed93963df09e7db22740319912390b9f [4] => abbdb76666a8eec3c1d04551433cf78a [5] => c7253d05aa0a8e9b2cf8876f7cf94a46 [6] => 93c4d28541f45318c984ebec67740288 [7] => 7608d2ad569129ba8762b3b182be1fcc [8] => 9d45398ffee4eb65d78b363395d45d62 [9] => 3152231812f20bb75322a767da0cd7bd [10] => b38b3dc63f3d5229149f327969d6a607 [11] => a62d1bccb363ea57fb517f5067800d64 [12] => 7cbf675fcb1d9b5ecfca44fb1a41292c [13] => 287829bb8ebda6e259f081cc25b3782c [14] => e1033ab7f3469df7537e51cdf128afb1 [15] => ba3a80ad482dd78b2d7d3dd03e880adb [16] => 94e853d34453b9de168ec7f20b4ffc8e [17] => 0fc372baafef2079d896c8694aec12fa [18] => 9e86767a75551aec5bb65a71246d1a06 [19] => 662a7c47d2502f64e5c0e362535179d1 [20] => 03a4526872bcc9158f825d8aeac680c4 [21] => 0775d0d9a0a3a9ce5561223828aad7c6 [22] => 5abbefa685ea7b44dd08776a2b007e8a [23] => 28c0a677dc4bec2a51502da25654186e [24] => c4df8b0e4fb9ebc3b8e7f3bb158973e1 [25] => c4df8b0e4fb9ebc3b8e7f3bb158973e1 [26] => 3fc9b7ba420b99a5b86a3d50192c6992 [27] => 8fc2b3c42349aac0cd66d3df913776ad [28] => 49e0279112cf3a23b477089617c6de12 [29] => 98a242a6b29de40403b2b2581f7cd9cc ) ) [post_type] => post [post_author] => 551 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => css-transitions-in-vuejs-and-nuxtjs )

FAF deciding on filters on post to be syndicated:

Removing Panic From E-Commerce Shipping And Inventory Alerts

Array ( [post_title] => Removing Panic From E-Commerce Shipping And Inventory Alerts [post_content] => Removing Panic From E-Commerce Shipping And Inventory Alerts

Removing Panic From E-Commerce Shipping And Inventory Alerts

Suzanne Scacca

When it comes to displaying shipping and inventory alerts on an e-commerce website, you have to be very careful about inciting panic in your shoppers.

“Item is out of stock.”

“Expect shipping delays.”

“Page does not exist.”

These words alone are enough to turn a pleasant shopping experience into a panicked and frustrating one.

You have to be very careful about how you design these notices on your site, too. You obviously want to inform visitors of changes that impact their shopping experience, but you don’t want panic to be the resulting emotion when your alert is seen.

Better Search UX

For large-scale and e-commerce sites, the search experience is an increasingly critical tool. You can vastly improve the experience for users with thoughtful microcopy and the right contextualization. Read a related article →

When people panic, the natural response is to find a way to regain some control over the situation. The problem is, that regained control usually comes at the expense of the business’s profits, trust, and customer loyalty.

Unfortunately, some of the design choices we make in terms of alerts can cause undue panic.

If you want to better control your shoppers’ responses and keep them on the path to conversion, keep reading.

Note: Because the following post was written during the coronavirus pandemic, many of the alerts you’re going to see are related to it. However, that doesn’t mean these tips aren’t valid for other panic-inducing situations — like when a storm destroys an area and it’s impossible to order anything in-store or online or like during the November shopping frenzy when out-of-stock inventory is commonplace.

1. Avoid Out-Of-Stock Notices When Possible

Colleen Kirk is a professor of marketing at the New York Institute of Technology and an expert on territorial shopping.

She has a great analogy to help us understand why this happens:

Have you ever felt as if another driver stole your parking spot or were upset when someone else nabbed the last sweater that you had your eye on? If so, you experienced psychological ownership. You can feel psychological ownership over pretty much anything that doesn’t belong to you, from the last chocolate truffle in a display case to the dream home you found on Zillow and even intangible things like ideas.

When it comes to shopping in person, people exhibit territorial shopping traits by placing their items in a shopping cart, or by putting a separator bar on the conveyor belt between their items and those of the person behind them.

We don’t really have that luxury on a website. The best we can do is enable shoppers to save their items to their cart or a wishlist, but that doesn’t keep the items from selling out before they have a chance to buy them.

This can lead to huge problems, especially when a shopper has gotten excited about that toy or shirt they “put aside”, only to discover a few hours later that the item is gone or no longer available.

The worst thing you could do is to remove out-of-stock product pages. You don’t want shoppers running into a 404 page and experiencing not just panic but also confusion and frustration.

While not as bad, I’d also recommend against displaying an inactive “Sold Out” or “Out of Stock” notice or button to shoppers as Olivela has done here:

Olivela 'sold out' notice and inactive button for upcycled denim mask
The Olivela website displays an inactive 'Sold Out' button for its denim mask product. (Source: Olivela) (Large preview)

Seeing this kind of feels like Google Maps directing you to your destination, only for you to end up at the edge of a lake where there’s supposed to be a road.

“Why the heck did you even send me here?”, is what your shoppers are going to wonder. And then they’re going to have to turn around and try to find something comparable to buy instead.

There are better ways to handle this that will also reduce the chances of your shoppers panic-buying a second-rate substitute for the item they really wanted. And this plays into the idea of territorial shopping (but in a healthy way).

This is what Target does when an item is “temporarily out of stock”:

Target out of stock product - 'Notify me when it’s back' button
Target provides shoppers with the option to get notified when out-of-stock items become available. (Source: Target) (Large preview)

Rather than display an unclickable button and just completely shut down shoppers’ hopes of getting the item, a “Notify me when it’s back” button is presented. This is a more positive approach to handling out-of-stock inventory and, if customers really want the item, they’re more likely to use it rather than settle for something else or try another site altogether. (I’ll talk about the “Pick it up” option down below.)

Another way you can handle this is to do what Summersalt does and turn your “Sold Out” button into a “Pre-order” one:

Summersalt 'Pre-order' button for out-of-stock items
Summersalt provides shoppers with 'Pre-order' button for out-of-stock items. (Source: Summersalt) (Large preview)

What’s nice about this button is that it not only empowers shoppers to secure the currently unavailable item they have their eyes on, but it tells them exactly when they will get it.

So, if you know when inventory will be restored, this is a better option than the “Notify” button as there’s more transparency about availability.

2. Don’t Over-Promise Or Give Your Shoppers False Hope

It’s a good idea to keep your shoppers informed about how external situations may impact their online shopping experience. And you can use key areas like the website’s sticky banner or a promotional banner on the homepage to do that.

That said, be very careful what you promote.

Special notices aren’t the kinds of things that get ignored the way that cookie consent notices or lead generation pop-ups do.

Consumers know to look at these areas for things like promo codes and sales event details.

If you put anything else in there, you better make sure the notice is positive, useful and truthful.

Let’s take a look at what’s going on with the Gap website:

Gap’s 'Taking Care Takes Time' banner notice on website
Gap’s 'Taking Care Takes Time' notice addresses shipping delays due to COVID-19. (Source: Gap) (Large preview)

Gap has three notices that appear in the header of its site:

It’s overwhelming. And it’s almost as if they wanted the alert in the middle to be ignored (or missed altogether) as it’s sandwiched between two very sharp-looking banners that promote attractive deals.

If your alert is related to something that’s going to affect the shoppers’ experience, don’t bury it and don’t try to massage it with overly optimistic messaging. Also, don’t contradict it with another alert that suggests the first one isn’t one to worry about.

Here’s why I say that:

Gap promotion for 'Our Masks are Back' during COVID-19
Gap promotes 'Our Masks are Back' on the homepage of its website. (Source: Gap) (Large preview)

This message is problematic for a couple of reasons. For one, Gap’s masks aren’t really “back” if they’re only available for pre-order. Second, it runs contrary to the top banner’s message about shipping delays.

Unsurprisingly, shoppers did not react well to this hyped-up announcement:

Gap backlash on Facebook during COVID-19
Gap receives an extraordinary amount of backlash on Facebook over mismanagement of shipping and promotions. (Source: Gap) (Large preview)

When Gap ran a promotion on Facebook about the masks, it received a huge amount of backlash from customers. Many customers didn’t want to hear about the masks because they’d been waiting over a month for existing orders to ship and couldn’t get a customer service rep to talk to them. There were also complaints about items being canceled from their orders, only for them to magically become “available” again in the store.

A website that’s handling similar circumstances well right now is Urban Outfitters:

Urban Outfitters curbside pickup - sticky notice on website
Urban Outfitters is focusing on the positive of a bad situation. (Source: Urban Outfitters) (Large preview)

Urban Outfitters, like a lot of non-essential retailers right now, has had to make some adjustments in order to stay alive. But rather than displaying a notice alerting online shoppers to shipping delays like many of its counterparts, Urban Outfitters has turned it into a positive.

The banner reads: “Your local UO is now offering curbside pickup!”

There’s no hidden message here that tries to explain away the brand’s bad behavior. There’s no greedy cash-grab, promising deep discounts for people who shop with them despite likely delays in shipping. There’s no overhyping of a promise they can’t possibly keep.

This is an actionable offer.

What I’d suggest for e-commerce businesses that want to keep customers on their side — even during a tumultuous time — is to keep their alerts simple, honest and useful. And if you’re going to promote multiple ones, make sure they tell the same story.

3. Soften The Blow By Promoting Multichannel Options

I recently signed a new lease on an apartment but was dismayed to discover that my move-in date and furniture delivery date wouldn’t line up because of shipping delays. I was well-aware of this when I made the purchase, but I wasn’t as prepared as I wanted to be. And because we were in the middle of a city-wide lockdown, I was unable to crash with friends who live over the state border.

I thought, “I’ll sleep on an air mattress. No biggie.” But then remembered I left mine behind in Delaware. So, I had to figure out a way to buy a cheap air mattress.

All of my go-to e-commerce websites had shipping delay alerts. And as I perused their available products (which there were very few), my panic only worsened. Until I discovered a website that offered BOPIS.

Buy-online-pickup-in-store is a shopping trend that’s proven very popular with consumers. According to data from Doddle and a report from Business Insider:

If you’re building an e-commerce site for a company with retail locations that do this, then make sure you get customers thinking about it right away.

Barnes and Noble, for instance, enables shoppers to set their preferred local store:

Barnes & Noble 'Change My Store' - location selection
Barnes & Noble allows shoppers to set their preferred retail store location. (Source: Barnes and Noble) (Large preview)

This is a great feature to have. The only thing I’d do differently is to place a button in the header of the site that immediately takes them to the “Change My Store” or “Find Store” pop-up or page. That way, shoppers don’t have to wait until they’ve found a product they like to discover whether or not it’s available at their store for pickup.

Once a user has set their store location, though, Barnes & Noble remembers it and uses it to enhance the remainder of the shopping experience:

Barnes & Noble search listings with online and in-store availability info
Barnes & Noble uses shopper location information to enhance search listings. (Source: Barnes and Noble) (Large preview)

This is what search listings look like for “Magna Tiles” on the Barnes & Noble site when the user sets their search to “List” view. Pretty neat, right? While “Grid” view is standard, only showing the featured image, product name and price, this view provides extra detail about availability.

This can provide a ton of relief for shoppers who too-often encounter “out-of-stock” or “not available in store” notices on individual product pages. This way, they can spend their time looking at the items they can get the most quickly and conveniently. There are no surprises.

If cross-channel options are available — between website and mobile app, website and retail store, website and third-party delivery service — make sure you call attention to them early on so customers can take advantage of the streamlined shopping experience.

Wrapping Up

Panicked shopping can lead to serious issues for your e-commerce site. Rather than put your shoppers in a situation where they grow impatient, dissatisfied, and frustrated by the site that seems to put up barriers at every turn, design your alerts so that they turn a bad experience into a positive one.

Smashing Editorial (ra, yk, il)
[post_excerpt] => When it comes to displaying shipping and inventory alerts on an e-commerce website, you have to be very careful about inciting panic in your shoppers. “Item is out of stock.” “Expect shipping delays.” “Page does not exist.” These words alone are enough to turn a pleasant shopping experience into a panicked and frustrating one. You have to be very careful about how you design these notices on your site, too. [post_date_gmt] => 2020-07-09 11:00:00 [post_date] => 2020-07-09 11:00:00 [post_modified_gmt] => 2020-07-09 11:00:00 [post_modified] => 2020-07-09 11:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/ecommerce-shipping-inventory-alerts/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/ecommerce-shipping-inventory-alerts/ [syndication_item_hash] => Array ( [0] => 5c90814a135bc9a1dc9f5a2eca8da8cd [1] => a815e341fd30c97bf9a4cc95273827bd [2] => 3654b58da59217ee403e2df06f805091 [3] => 3104762df5f55203134798a26b5c95ca [4] => fc34649407bb45b589763fa4071d97b3 [5] => eec4192bea4eae438d0f320cdf899aaf [6] => eeca0baeeb5a605ef35cf92703082d46 [7] => d6a4aa487cc06e6f192a31ab6f758fa6 [8] => 594f194561b983deb7e2db4c91ab985f [9] => b81def7e48f6d4d0c333f1942d4cf4e9 [10] => 19845e16a3983f9d2972377048c81b97 [11] => bbe573a967a2620c7aa1a7797b989399 [12] => 5f0c5f5edbb81f39d294570e8dc9faff [13] => 39c2fe61b3d87b69e6ef60cbf0a199c9 [14] => d57da25aa54333d18b9e54a5aba3f4e4 [15] => 5443c5989c44064f503399e23ecc2f98 [16] => 1ba95e2ec418f61bcf45e2167a95a263 [17] => 2131f336e9688f49074d7e47643c92b0 [18] => 4f4cef9e6d1dc170a4c7bcc6955d413b [19] => 2f2c1fed059988ff7ba19e260ccc732c [20] => 839d6734b1c1a432a352f1ed05dd29ef [21] => d89c9d9d03b325ce34825f413db86f14 [22] => 92ee635252f70d57a267b17bdcde6d95 [23] => c2b2034f4c3cf307d0cdeac090913904 [24] => 8afe0753d7b3850b2d0ea56454177b9e [25] => 622cb1ce8541b40b2f1fdde2fdad22c5 [26] => 3ee88462d250824e219d528667d06fd8 [27] => 12eced7c16a5c82ddffad177f791eb51 [28] => 76e5ccda9f937f6eafc2bf7157ed4e53 [29] => 76e5ccda9f937f6eafc2bf7157ed4e53 [30] => 6ba6e5fb59587f73673d7297a8a641a0 [31] => 27ec9ff401b70db92529e13398e0097d [32] => f3ef517be96958594e397a0b5be45bf6 [33] => fead4c7bac9ae2438ebe6f804dca82e2 ) ) [post_type] => post [post_author] => 471 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => removing-panic-from-e-commerce-shipping-and-inventory-alerts )

Decide filter: Returning post, everything seems orderly :Removing Panic From E-Commerce Shipping And Inventory Alerts


FAF deciding on filters on post to be syndicated:

Creating Tiny Desktop Apps With Tauri And Vue.js

Array ( [post_title] => Creating Tiny Desktop Apps With Tauri And Vue.js [post_content] => Creating Tiny Desktop Apps With Tauri And Vue.js

Creating Tiny Desktop Apps With Tauri And Vue.js

Kelvin Omereshone

Technology makes life better, not just for users, but also for creators (developers and designers). In this article, I’ll introduce you to Tauri. This article will be useful to you if:

We will look at how to build a native cross-platform application from an existing web project. Let’s get to it!

Note: This article assumes you are comfortable with HTML, CSS, JavaScript, and Vue.js.

What Is Tauri?

The official website sums up Tauri well:

So, basically, Tauri allows you to use web technologies to create tiny and secure native desktop apps.

On its GitHub page, Tauri is described as a framework-agnostic toolchain for building highly secure native apps that have tiny binaries (i.e. file size) and that are very fast (i.e. minimal RAM usage).

Why Not Electron?

A popular tool for using web technologies to build desktop applications is Electron.

However, Electron apps have a rather large bundle size, and they tend to take up a lot of memory when running. Here is how Tauri compares to Electron:

In a nutshell, if your app is built with Electron, it will never be shipped officially in the PureOS store. This should be a concern for developers targeting such distributions.

More Features Of Tauri

Pros Of Tauri

Real-World Tauri

If you have been part of the Vue.js community for a while, then you’ll have heard of Guillaume Chau, a member of the core team of Vue.js. He is responsible for the Vue.js command-line interface (CLI), as well as other awesome Vue.js libraries. He recently created guijs, which stands for “graphical user interface for JavaScript projects”. It is a Tauri-powered native desktop app to visually manage your JavaScript projects.

Guijs is an example of what is possible with Tauri, and the fact that a core member of the Vue.js team works on the app tells us that Tauri plays nicely with Vue.js (amongst other front-end frameworks). Check out the guijs repository on GitHub if you are interested. And, yes, it is open-source.

How Tauri Works

At a high level, Tauri uses Node.js to scaffold an HTML, CSS, and JavaScript rendering window as a user interface (UI), managed and bootstrapped by Rust. The product is a monolithic binary that can be distributed as common file types for Linux (deb/appimage), macOS (app/dmg), and Windows (exe/msi).

How Tauri Apps Are Made

A Tauri app is created via the following steps:

  1. First, make an interface in your GUI framework, and prepare the HTML, CSS, and JavaScript for consumption.
  2. The Tauri Node.js CLI takes it and rigs the Rust runner according to your configuration.
  3. In development mode, it creates a WebView window, with debugging and Hot Module Reloading.
  4. In build mode, it rigs the bundler and creates a final application according to your settings.

Setting Up Your Environment

Now that you know what Tauri is and how it works, let me walk you through setting up your machine for development with Tauri.

Note: The setup here is for Linux machines, but guides for macOS and for Windows are also available.

Linux Setup

The polyglot nature of Tauri means that it requires a number of tool dependencies. Let’s kick it off by installing some of the dependencies. Run the following:

$ sudo apt update && sudo apt install libwebkit2gtk-4.0-dev build-essential curl libssl-dev appmenu-gtk3-module

Once the above is successful, proceed to install Node.js (if you don’t already have it), because Tauri requires its runtime. You can do so by running this:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.2/install.sh | bash

This will install nvm (the Node.js version manager), which lets you manage the Node.js runtime and switch between Node.js versions with ease. After it is installed, run this to see a list of available Node.js versions:

nvm ls-remote

At the time of writing, the most recent version is 14.1.0. Install it like so:

nvm install v14.1.0

Once Node.js is set up, you need to install the Rust compiler and Cargo, the Rust package manager. The command below installs both:

$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

After running this command, make sure that Rust and Cargo are in your $PATH by running the following:

rustc --version
cargo --version

If everything has gone well, each of these should return a version number.

According to the Tauri documentation, make sure you are on the latest version by running the following command:

$ rustup update stable

Voilà! You are one step closer to getting your machine 100% ready for Tauri. All that’s left now is to install the tauri-bundler crate. It’s best to close your terminal and run the command below in a new terminal window so that the updated $PATH is picked up:

$ cargo install tauri-bundler --force

Eureka! If everything went all right, your machine is now ready for Tauri. Next up, we will get started integrating Tauri with Vue.js. Let’s get to it!

Yarn

The Tauri team recommends installing the Yarn package manager. So let’s install it this way:

npm install -g yarn

Then run the following:

yarn --version

If everything worked, a version number should have been returned.

Integrating Tauri With Vue.js

Now that we have Tauri installed, let’s bundle an existing web project. You can find the live demo of the project on Netlify. Go ahead and fork the repository, which will serve as a shell. After forking it, make sure to clone the fork by running this:

git clone https://github.com/[yourUserName]/nota-web

After cloning the project, run the following to install the dependencies:

yarn

Then, run this:

yarn serve

Your application should be running on localhost:8080. Kill the running server, and let’s install the Vue.js CLI plugin for Tauri.

vue-cli-plugin-tauri

The Tauri team created a Vue.js CLI plugin that quickly rigs and turns your Vue.js single-page application (SPA) into a tiny cross-platform desktop app that is both fast and secure. Let’s install that plugin:

vue add tauri

After the plugin is installed, which might take a while, it will ask you for a window title. Just type in nota and press “Enter”.

Let’s examine the changes introduced by the Tauri plugin.

package.json

The Tauri plugin added two scripts in the scripts section of our package.json file. They are:

"tauri:build": "vue-cli-service tauri:build",
"tauri:serve": "vue-cli-service tauri:serve"

The tauri:serve script should be used during development. So let’s run it:

yarn tauri:serve

This command downloads the Rust crates needed to start our app. It then launches the app in development mode, creating a WebView window with debugging and Hot Module Reloading enabled!

src-tauri

You will also notice that the plugin added a src-tauri directory to the root of your app directory. Inside this directory are files and folders used by Tauri to configure your desktop app. Let’s check out the contents:

icons/
src/
    build.rs
    cmd.rs
    main.rs
Cargo.lock
Cargo.toml
rustfmt.toml
tauri.conf.json
tauri.js

The only change we need to make is in src-tauri/Cargo.toml. Cargo.toml is to Rust what package.json is to a Node.js project. Find the line below in Cargo.toml:

name = "app"

Change it to this:

name = "nota"

That’s all we need to change for this example!
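
For context, here is roughly what the surrounding [package] section of src-tauri/Cargo.toml looks like after the change. This is only a sketch: apart from the name field, the values shown are illustrative and may differ in your generated file.

[package]
name = "nota"
version = "0.1.0"
edition = "2018"
build = "build.rs"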

Bundling

To bundle nota for your current platform, simply run this:

yarn tauri:build

Note: As with the development window, the first time you run this, it will take some time to collect the Rust crates and build everything. On subsequent runs, it will only need to rebuild the Tauri crates themselves.

When the above is completed, you should have a binary of nota for your current OS. In my case, a .deb binary was created in the src-tauri/target/release/bundle/deb/ directory.

Going Cross-Platform

You probably noticed that the yarn tauri:build command generated a binary only for your own operating system. So, let’s generate binaries for the other operating systems as well. To achieve this, we will set up a workflow on GitHub, using GitHub as a distribution medium for our cross-platform app: your users will be able to download the binaries from the “Releases” tab of the project. The workflow will build the binaries for us automatically via the power of GitHub Actions. Let’s get to it.

Creating The Tauri Workflow

Thanks to Jacob Bolda, we have a workflow to automatically create and release cross-platform apps with Tauri on GitHub. Apart from building the binaries for the various platforms (Linux, macOS, and Windows), the workflow will also upload them for you as a release on GitHub. It uses the Create a Release action made by Jacob to achieve this.

To use this workflow, create a .github directory in the root of nota-web. In this directory, create another directory named workflows. We would then create a workflow file in .github/workflows/, and name it release-tauri-app.yml.

In release-tauri-app.yml, we will add a workflow that builds the binaries for Linux, macOS, and Windows. The workflow will also upload the binaries as a draft release on GitHub, and it will be triggered whenever we push to the master branch.

Open release-tauri-app.yml, and add the snippet below:

name: release-tauri-app

on:
  push:
    branches:
      - master
    paths:
      - '**/package.json'

jobs:
  check-build:
    runs-on: ubuntu-latest
    timeout-minutes: 30

    steps:
      - uses: actions/checkout@v2
      - name: setup node
        uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: install rust stable
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          profile: minimal
      - name: install webkit2gtk
        run: |
          sudo apt-get update
          sudo apt-get install -y webkit2gtk-4.0
      - run: yarn
      - name: build nota for tauri app
        run: yarn build
      - run: cargo install tauri-bundler --force
      - name: build tauri app
        run: yarn tauri:build

  create-release:
    needs: check-build
    runs-on: ubuntu-latest
    outputs:
      RELEASE_UPLOAD_URL: ${{ steps.create_tauri_release.outputs.upload_url }}

    steps:
      - uses: actions/checkout@v2
      - name: setup node
        uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: get version
        run: echo ::set-env name=PACKAGE_VERSION::$(node -p "require('./package.json').version")
      - name: create release
        id: create_tauri_release
        uses: jbolda/create-release@v1.1.0
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ matrix.package.name }}-v${{ env.PACKAGE_VERSION }}
          release_name: 'Release nota app v${{ env.PACKAGE_VERSION }}'
          body: 'See the assets to download this version and install.'
          draft: true
          prerelease: false

  create-and-upload-assets:
    needs: create-release
    runs-on: ${{ matrix.platform }}
    timeout-minutes: 30

    strategy:
      fail-fast: false
      matrix:
        platform: [ubuntu-latest, macos-latest, windows-latest]
        include:
          - platform: ubuntu-latest
            buildFolder: bundle/deb
            ext: _0.1.0_amd64.deb
            compressed: ''
          - platform: macos-latest
            buildFolder: bundle/osx
            ext: .app
            compressed: .tgz
          - platform: windows-latest
            buildFolder: ''
            ext: .x64.msi
            compressed: ''

    steps:
      - uses: actions/checkout@v2
      - name: setup node
        uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: install rust stable
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          profile: minimal
      - name: install webkit2gtk (ubuntu only)
        if: matrix.platform == 'ubuntu-latest'
        run: |
          sudo apt-get update
          sudo apt-get install -y webkit2gtk-4.0
      - run: yarn
      - name: build nota for tauri app
        run: yarn build
      - run: cargo install tauri-bundler --force
      - name: build tauri app
        run: yarn tauri:build
      - name: compress (macos only)
        if: matrix.platform == 'macos-latest'
        working-directory: ${{ format('./src-tauri/target/release/{0}', matrix.buildFolder ) }}
        run: tar -czf ${{ format('nota{0}{1}', matrix.ext, matrix.compressed ) }} ${{ format('nota{0}', matrix.ext ) }}
      - name: upload release asset
        id: upload-release-asset
        uses: actions/upload-release-asset@v1.0.2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ needs.create-release.outputs.RELEASE_UPLOAD_URL }}
          asset_path: ${{ format('./src-tauri/target/release/{0}/nota{1}{2}', matrix.buildFolder, matrix.ext, matrix.compressed ) }}
          asset_name: ${{ format('nota{0}{1}', matrix.ext, matrix.compressed ) }}
          asset_content_type: application/zip
      - name: build tauri app in debug mode
        run: yarn tauri:build --debug
      - name: compress (macos only)
        if: matrix.platform == 'macos-latest'
        working-directory: ${{ format('./src-tauri/target/debug/{0}', matrix.buildFolder ) }}
        run: tar -czf ${{ format('nota{0}{1}', matrix.ext, matrix.compressed ) }} ${{ format('nota{0}', matrix.ext ) }}
      - name: upload release asset with debug mode on
        id: upload-release-asset-debug-mode
        uses: actions/upload-release-asset@v1.0.2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ needs.create-release.outputs.RELEASE_UPLOAD_URL }}
          asset_path: ${{ format('./src-tauri/target/debug/{0}/nota{1}{2}', matrix.buildFolder, matrix.ext, matrix.compressed ) }}
          asset_name: ${{ format('nota-debug{0}{1}', matrix.ext, matrix.compressed ) }}
          asset_content_type: application/zip

To test the workflow, commit and push your changes to your fork’s master branch. After successfully pushing to GitHub, you can then click on the “Actions” tab in GitHub, then on the check-build job to see the progress of the workflow.

Upon successful execution of the action, you can see the draft release in “Releases” on the repository page on GitHub. You can then go on to publish your release!

Conclusion

This article has introduced a polyglot toolchain for building secure, cross-platform, and tiny native applications. We’ve seen what Tauri is and how to incorporate it with Vue.js. Lastly, we bundled our first Tauri app by running yarn tauri:build, and we also used a GitHub action to create binaries for Linux, macOS, and Windows.

Let me know what you think of Tauri — I’d be excited to see what you build with it. You can join the Discord server if you have any questions.

The repository for this article is on GitHub. Also, see the binaries generated by the GitHub workflow.

Smashing Editorial (ra, il, al)
[post_excerpt] => Technology makes our lives better, not just users, but also creators (developers and designers). In this article, I’ll introduce you to Tauri. This article will be useful to you if: you have been building applications on the web with HTML, CSS, and JavaScript, and you want to use the same technologies to create apps targeted at Windows, macOS, or Linux platforms; you are already building cross-platform desktop apps with technologies like Electron, and you want to check out alternatives; you want to build apps with web technologies for Linux distributions, such as PureOS; you are a Rust enthusiast, and you’d like to apply it to build native cross-platform applications. [post_date_gmt] => 2020-07-08 11:00:00 [post_date] => 2020-07-08 11:00:00 [post_modified_gmt] => 2020-07-08 11:00:00 [post_modified] => 2020-07-08 11:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/tiny-desktop-apps-tauri-vuejs/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/tiny-desktop-apps-tauri-vuejs/ [syndication_item_hash] => Array ( [0] => fac257e2eea320889d4d5529d20f4000 [1] => cd3b0297308e41a2bf039ac459b56cb9 [2] => fa3bcd8520a265588144ce08bb34c645 [3] => 814baf5a0943dc2f07d4e53bcc5f7f54 [4] => e35bfbc8a7d69c62a2cf5f5fa48f2429 [5] => 36544c68aedefcef7c3052dd6464bdba [6] => 5b16cd48668f8bac2f43b6a49db95613 [7] => a98469b62dbde990f211239c75d56ad3 [8] => 4bcc368f8c572198a43b40fd7a692dd5 [9] => 110ab6cb9751ddbc35da16ffc56ddad6 [10] => 4a4d2f3eb4b8275dbcff2d7c20d127b2 [11] => 6c8ca2b3c90b0262e9ed33346b703344 [12] => 18429676bb85e28acc0718d48385bb09 [13] => a30f0cdd123b657afa0ae0897b9bd592 [14] => 6d11aa3869c36c18ecaee7a51db9e033 [15] => 5bc887249e6721360f24083ace4d3067 [16] => 51e0b0d1792ee07004f9097cf7da0263 [17] => 0afab500389e6369ee2a0149764f9adc [18] => e379f799004080e7568f719f26fe457d [19] => b7c97b831225c8ad2f185e4323086566 [20] => bb49ff477ff71af861ba565065bae904 [21] => 86ff8114464ed295002207de37429423 [22] => 234902203429117acec1d144e035682f [23] => 6f42248c2bc8a8bd36e33596e7626ec0 [24] => 5cb8c3a50b4ceeca6ce5568e90ecb7d7 [25] => 2730622b932e2bb0a674256bf5b90e48 [26] => d7205d530d71348cead4d526ba122d34 [27] => c94a31bce9c77617d393fbf83e2d6392 [28] => 3f877f1644831de2a86970581cdc0dff [29] => 563d60979088ceef481883f078a998ca [30] => 106beb3561fd4da7072104c09864760d [31] => 8c209de7d3276ecba300ce15e77534d2 [32] => 8686875b595a4114fbcc21671d99d36b [33] => 77294befd7a386ea943f612b25c328ee [34] => 77294befd7a386ea943f612b25c328ee [35] => 0a81a44831310575cfa80f28da6f6c06 [36] => d5c1889329ea6ff1443638d4f390c94f [37] => d58ed35bfcf3e7eea01666a37b792a33 [38] => c9683ba1ccfb679427827f7c2522a9e5 ) ) [post_type] => post [post_author] => 493 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => creating-tiny-desktop-apps-with-tauri-and-vue-js )

Decide filter: Returning post, everything seems orderly :Creating Tiny Desktop Apps With Tauri And Vue.js

FAF deciding on filters on post to be syndicated:

CSS News July 2020

Array ( [post_title] => CSS News July 2020 [post_content] => CSS News July 2020

CSS News July 2020

Rachel Andrew

Things move a lot faster than they used to in terms of the implementation of Web Platform features, and this post is a round-up of news about CSS features that are making their way into the platform. If you are the sort of person who doesn’t like reading about things if you can’t use them now, then this article probably isn’t for you — we have many others for you to enjoy instead! However, if you like to know what is on the way and read more about the things you can play with in a beta version of a browser, read on!

Flexbox Gaps

Let’s start with something that is implemented in the shipping version of one browser, and in beta in another. In CSS Grid, we can use the row-gap and column-gap properties to define the gaps between rows and columns, or the gap shorthand to set both at the same time. The column-gap property also appears in multi-column layout, where it creates gaps between column boxes.

While you can use margins to space out grid items, the nice thing about the gap feature is that you only get gaps between your items; you do not end up with additional space to account for at the start and end of the grid. In Flexbox, margins have typically been how we create space between flex items, and if we do not want that space at the start and end edges, we have to use a negative margin on the container to remove it.

It would be really nice to have that gap feature in Flexbox as well, wouldn’t it? The good news is that we now do: it’s already implemented in Firefox and is in the beta version of Chrome.

In the next CodePen, you can see all three options. The first shows flex items using margins on each side, which creates a gap at the start and end of the flex container. The second uses a negative margin on the flex container to pull that margin outside of the border. The third dispenses with margins altogether and instead uses gap: 20px, creating a gap between items but not on the start and end edges.
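
As a minimal sketch of that third option, the relevant rules look something like this (the class name is illustrative):

.flex-container {
  display: flex;
  flex-wrap: wrap;
  gap: 20px; /* space between items only, none at the start and end edges */
}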

See the Pen Flex Items with margins and negative margin, and the gap feature by Rachel Andrew (@rachelandrew) on CodePen.

Mind The Gap

The Flexbox gap implementation highlights a few interesting things. Firstly, you may well remember that when the gap feature was first introduced to Grid Layout, the properties were grid-gap, grid-column-gap, and grid-row-gap.

These properties were shipped when grid first appeared in browsers. However, once it was decided that the gap features were useful to more than grid, they were moved and renamed, in much the same way as the alignment features (justify-content, align-content, align-items, and so on) first appeared in Flexbox and then became available to Grid.

Along with those alignment features, the gap properties are now in the Box Alignment specification. The specification deals with alignment and space distribution so is a natural home for them. To prevent us from having multiple properties prefixed with a spec name, they were also renamed to drop the grid- prefix.

If you have the grid- prefixed versions in your code, you don’t need to worry. They have been kept as an alias of the properties so your code won’t break. For new projects, however, the unprefixed versions are implemented in all browsers.

Detecting Gap Support For Flexbox

You might be thinking that you could use the gap feature in Flexbox and rely on feature queries to test for support, falling back to margins where it is missing. Sadly, this isn’t the case, because feature queries test for a property name and value. For example, if I want to test for grid support, I can use the following query:

@supports (display: grid) {
  .grid {
    /* grid layout code here */
  }
}

If I were to test for gap: 20px, however, I would get a positive response in Chrome, which currently does not support gap in Flexbox but does support it in Grid. All those feature queries do is check whether the browser recognizes the property and value; they have no way to test for support within a particular layout mode. I raised this as an issue with the CSS Working Group; however, it turns out not to be an easy thing to fix, and there are currently only a limited number of places where we have this partial-implementation problem.
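
To make the problem concrete, here is the kind of query that gives a false positive; this is a sketch of the pitfall, not a recommended pattern:

/* Passes in any browser that understands gap for Grid,
   even one that does not yet apply gap in Flexbox. */
@supports (gap: 20px) {
  .flex-container {
    gap: 20px;
  }
}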

Aspect Ratio Unit

Some things have an aspect ratio that we want to preserve: an image or a video, for example. If you place an image or video directly on the page using the HTML img or video element, it nicely keeps the aspect ratio it arrives with (unless you forcibly change the width or height). However, we sometimes want to give an element with no intrinsic aspect ratio the same behavior, keeping one dimension flexible while the other maintains a specific ratio to it. This most often happens when we embed a video in an iframe, but you might also want to make perfectly square areas on a grid (something which also requires that one dimension can react to another).

The way we currently deal with this is the padding hack. It relies on the fact that percentage padding in the block direction is calculated from the size in the inline direction. It’s not a very elegant solution to the problem, but it works.
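
A minimal sketch of the padding hack for a 16:9 embed looks like this (the class name is illustrative):

.video-wrapper {
  position: relative;
  height: 0;
  padding-bottom: 56.25%; /* 9 ÷ 16 = 0.5625, so the height tracks the width at 16:9 */
}

.video-wrapper iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}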

The aspect ratio unit seeks to solve that by allowing us to specify an aspect ratio for a length. Chrome has implemented this in Canary, so you can take a look at the demo below using Canary if you enable the Experimental Web Platform Features flag.

I have created a grid layout and set my grid items to use a 1 / 1 aspect ratio. The width of the items is determined by their grid column track size (which is flexible), and the height is then copied from it to make a square. Just for fun, I then rotated the items.
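
The rules behind that demo look something like the following sketch, assuming the aspect-ratio property that sits behind the flag (the class name and track sizes are illustrative):

.grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
  gap: 20px;
}

.grid > * {
  aspect-ratio: 1 / 1; /* block size follows the flexible inline size */
}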

A grid of square items

In Canary, you can take a look at the demo and see how the items remain square even as their track grows and shrinks, because the block size uses a 1/1 ratio of the inline size.

See the Pen Grid using the aspect ratio for items (needs Chrome Canary and Exp Web Platform Features flag) by Rachel Andrew (@rachelandrew) on CodePen.

Native Masonry Support

Developers often ask if CSS Grid can be used to create a Masonry- or Pinterest-styled layout. While some demos look a bit like that, Grid was never designed to do Masonry.

To explain, you need to know what a Masonry layout is. In a typical Masonry layout, items display by row. Once the first row is filled, new items populate another row. However, if some of the items in the first row are shorter than others, the second-row items will rise up to fill the gap. The Masonry library is how many people achieve this using JavaScript.

If you try to create this layout using CSS Grid and auto-placement, you will see that you lose that block direction rearrangement of items. They lay themselves out in strict rows and columns because that is what a grid does.

So could grid ever be used for a Masonry layout? One of the engineers at Mozilla thinks so, and has created a prototype of the functionality. You can test it out in Firefox Nightly with the flag layout.css.grid-template-masonry-value.enabled set to true (go to about:config in the Firefox Nightly URL bar to change it).
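
The syntax prototyped in Nightly looks something like this sketch (the class name and track sizes are illustrative):

.masonry {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(160px, 1fr));
  grid-template-rows: masonry; /* items pack masonry-style in the block direction */
  gap: 20px;
}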

Masonry Layout in Firefox Nightly

See the Pen Proposed Masonry (needs Firefox Nightly and the layout.css.grid-template-masonry-value.enabled flag) by Rachel Andrew (@rachelandrew) on CodePen.

While this is very exciting for anyone who has had to create this kind of layout using JavaScript, a number of us do wonder if the grid specification is the place to define this very specific layout. You can read some of my thoughts in my article “Does Masonry Belong In The CSS Grid Specification?”.

Subgrid

We have had support for the subgrid value of grid-template-columns and grid-template-rows in Firefox for some time. Using this value means that you can inherit the size and number of tracks from a parent grid down through child grids. Essentially, as long as a grid item has display: grid, it can inherit the tracks that it covers rather than creating new column or row tracks.
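 
As a minimal sketch of the idea (the selectors and track counts are illustrative):

.parent {
  display: grid;
  grid-template-columns: repeat(9, 1fr);
}

.child {
  grid-column: 2 / 7;
  display: grid;
  grid-template-columns: subgrid; /* reuse the five parent column tracks the item spans */
}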

The feature can be tested in Firefox, and I have lots of examples that you can test out. The article “Digging Into The Display Property: Grids All The Way Down” explains how subgrid differs from nested grids, and “CSS Grid Level 2: Here Comes Subgrid” introduces the specification. I also have a set of broken-down examples at “Grid by Example”.

However, the first question people have when I talk about subgrid is, “When will it be available in Chrome?” I still can’t give you a date, but some good news is on the horizon. On June 18th, a Chromium blog post announced that the Microsoft Edge team (now working on Chromium) is reimplementing Grid Layout in LayoutNG, Chromium’s next-generation layout engine. Part of this work will also involve adding subgrid support.

Adding features to browsers isn’t a quick process; however, the Microsoft team brought us Grid Layout in the first place, along with the early prefixed implementation that shipped in IE10. So this is great news, and I look forward to testing the implementation when it ships in a beta release.

prefers-reduced-data

Not yet implemented in any browser, but with a Chrome bug showing recent activity, is the prefers-reduced-data media feature. This will allow CSS to check whether the visitor has enabled data saving on their device and adjust the website accordingly. You might, for example, choose to avoid loading large images.

@media (prefers-reduced-data: reduce) {
  .image {
    background-image: url("images/placeholder.jpg");
  }
}

The prefers-reduced-data media feature works in the same way as some of the already implemented user-preference media features in the Level 5 Media Queries Specification. For example, the prefers-reduced-motion and prefers-color-scheme media features allow you to test whether the visitor has requested reduced motion or a dark color scheme in their operating system, and tailor your CSS to suit.
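
For example, the same pattern with the already-shipped prefers-reduced-motion feature might look like this (the selector is illustrative):

@media (prefers-reduced-motion: reduce) {
  .animated-thing {
    animation: none;
    transition: none;
  }
}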

::marker

The ::marker pseudo-element allows us to target the list marker. At a very straightforward level, this means that we can target the list bullet and change its color or size. (This was previously impossible due to the fact that you could only target the entire list item — text and marker.)
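
For example, a rule like the following styles only the bullet and leaves the list item’s text alone (a minimal sketch):

li::marker {
  color: rebeccapurple;
  font-size: 1.2em;
}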

See the Pen ::marker by Rachel Andrew (@rachelandrew) on CodePen.

Support for ::marker is already available in Firefox, and can now be found in Chrome Beta, too.

In addition to styling bullets on actual lists, you can use ::marker on other elements. In the example below, I have a heading which has been given display: list-item and therefore has a marker which I have replaced with an emoji.
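
The rules behind that demo look something like the following sketch; it assumes support for the content property on ::marker, which is what lets the marker be swapped for an emoji:

h2 {
  display: list-item;
}

h2::marker {
  content: "🚀 ";
}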

See the Pen display: list-item and ::marker by Rachel Andrew (@rachelandrew) on CodePen.

Note: You can read more about ::marker and other list-related things in my article “CSS Lists, Markers and Counters” here on Smashing Magazine.

Please Test New Features

While it’s fun to have a little peek into what is coming up, I recommend testing out the implementations if you have a use case for any of these things. You may well find a bug or something that doesn’t work as you expect. Browser vendors and the CSS Working Group would love to know. If you think you have found a bug in a browser — perhaps you are testing ::marker in Chrome and find it displays differently to the implementation in Firefox — then raise an issue with the browser: “How To File A Good Bug” explains how. If you think that the specification could do something it doesn’t yet, then raise an issue against the specification over at the CSS Working Group GitHub repository.

Smashing Editorial (il)
[post_excerpt] => Things move a lot faster than they used to in terms of the implementation of Web Platform features, and this post is a round-up of news about CSS features that are making their way into the platform. If you are the sort of person who doesn’t like reading about things if you can’t use them now, then this article probably isn’t for you — we have many others for you to enjoy instead! [post_date_gmt] => 2020-07-07 10:30:00 [post_date] => 2020-07-07 10:30:00 [post_modified_gmt] => 2020-07-07 10:30:00 [post_modified] => 2020-07-07 10:30:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/css-news-july-2020/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/css-news-july-2020/ [syndication_item_hash] => Array ( [0] => 6a0dd5d47629b917eb1517cc71a181c0 [1] => a5a0ce362eaf6474a3088eb3e7936aaa [2] => 6faeb64ec37e4d1e58047aa89e7a9e62 [3] => c7fa9709a64e98761c2787905b309337 [4] => 665e7a983a49546027b3a4abd4e7ee79 [5] => 15dae98994ce13497efdac42a05cf83c [6] => c89e0fe45c1fa9027badecac02b56744 [7] => 9c13485d3d0addda6581cd624295b62a [8] => 24fea278faca4f4f7a75f283e4410280 [9] => 501fb3984449379f4740aaa20a927edc [10] => fdc01a39d47786900f190118a5e64016 [11] => bf63cd576cd033c93816d19bbfd39519 [12] => 937bfda3a2f4265d9cddf81a32a6ec86 [13] => f95fb0094da4c609342a06d0ad14813e [14] => 8297d64c5c2431e7e575b52e9d5d7c9d [15] => dc2539b68de23db0a3509cae5f3a69d6 [16] => bf600a32b21537ada852efd991dbe748 [17] => 33f7212c43514678d32392015078ea04 [18] => ba10f9b3e2f45402c6f07f93589e6064 [19] => 7aeb846d630d89cf83a0f9cf2fb291b3 [20] => 8fb671cc4d2a119f9d9f0280a3791c2c [21] => 4b844279b05615557653d483b2f50b31 [22] => 59698dcb92e6f8ec9f1e9ab29b08f210 [23] => d33b539f8238b5e81c80f600fb2181f8 [24] => 504201c1fc091b8e06b62f953ea503eb [25] => 4a964279ba1cdb50d3f0f1f4675b6f64 [26] => 911be70af58895ffb170104a1bcb1627 [27] => daef9dd5835a20b0afd98c8330e1585b [28] => c86fb0cd0ca48140ac5fb3b73e0c5466 [29] => 1cbdad3b32bee024ccbb727737e9511d [30] => e1538470152a06834d8b8fc03d1c2dd4 [31] => 0e2b9df990374ed4d5b91411b51356f4 [32] => 3b62c4ac960463243340c43872b8f48a [33] => 4be2863bf4856458a03878556550ca2e [34] => d9976802a8c6689a02437f87949d3286 [35] => 25ba69e89d854eb5f6087b346ed2edb1 [36] => 0aaa923fd5e580cd87c69238fbee7a78 [37] => ea9902cdaea4acf2175ff4589deea960 [38] => cdfcece3ac55870fe9b362873d018688 [39] => cdfcece3ac55870fe9b362873d018688 [40] => cfee2c1456fd9b074f6107271b5f7450 [41] => 2ed1bf49760c678cd48008da68bf0158 [42] => 111b441962fc3fefe3b4bd279abcbb05 [43] => dfd95310ed3c8a0f1c368a4d66eff5e3 ) ) [post_type] => post [post_author] => 467 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => css-news-july-2020 )

Decide filter: Returning post, everything seems orderly :CSS News July 2020

Array ( [post_title] => CSS News July 2020 [post_content] => CSS News July 2020

CSS News July 2020

Rachel Andrew

Things move a lot faster than they used to in terms of the implementation of Web Platform features, and this post is a round-up of news about CSS features that are making their way into the platform. If you are the sort of person who doesn’t like reading about things if you can’t use them now, then this article probably isn’t for you — we have many others for you to enjoy instead! However, if you like to know what is on the way and read more about the things you can play with in a beta version of a browser, read on!

Flexbox Gaps

Let’s start with something that is implemented in the shipping version of one browser, and in beta in another. In CSS Grid, we can use the gap, column-gap and row-gap properties to define the gaps between rows and columns or both at the same time. The column-gap feature also appears in the Multi-column layout to create gaps between columns.

While you can use margins to space out grid items, the nice thing about the gap feature is that you only get gaps between your items; you do not end up with additional space to account for at the start and end of the grid. Adding margins has typically been how we have created space between flex items. To create a regular space between flex items, we use a margin. If we do not want a margin at the start and end, we have to use a negative margin on the container to remove it.

It would be really nice to have that gap feature in Flexbox as well, wouldn’t it? The good news is that we do have — it’s already implemented in Firefox and is in the Beta version of Chrome.

In the next CodePen, you can see all three options. The first are flex items using margins on each side. This creates a gap at the start and end of the flex container. The second uses a negative margin on the flex container to pull that margin outside of the border. The third dispenses with margins altogether and instead uses gap: 20px, creating a gap between items but not on the start and end edge.

See the Pen Flex Items with margins and negative margin, and the gap feature by Rachel Andrew (@rachelandrew) on CodePen.

See the Pen Flex Items with margins and negative margin, and the gap feature by Rachel Andrew (@rachelandrew) on CodePen.

Mind The Gap

The Flexbox gap implementation highlights a few interesting things. Firstly, you may well remember that when the gap feature was first introduced to Grid Layout, the properties were:

These properties were shipped when grid first appeared in browsers. However, in much the same way as the alignment features (justify-content, align-content, align-items, and so on) first appeared in Flexbox and then became available to Grid (once it was decided the gap features were useful to more than grid), they were moved and renamed.

Along with those alignment features, the gap properties are now in the Box Alignment specification. The specification deals with alignment and space distribution so is a natural home for them. To prevent us from having multiple properties prefixed with a spec name, they were also renamed to drop the grid- prefix.

If you have the grid- prefixed versions in your code, you don’t need to worry. They have been kept as an alias of the properties so your code won’t break. For new projects, however, the unprefixed versions are implemented in all browsers.

Detecting Gap Support For Flexbox

You might be thinking that you could use the gap feature in Flexbox and use Feature Queries to test for support by using a margin as fallback. Sadly, this isn’t the case because feature queries test for a name and value. For example, if I want to test for grid support, I can use the following query:

@supports (display: grid) {
  .grid {
    /* grid layout code here */
  }
}

If I were to test for gap: 20px, however, I would get a positive response in Chrome which currently does not support gap in Flexbox but does support it in Grid. All those feature queries do is check to see if the browser recognizes the property and value. They have no way to test for support within a particular layout mode. I raised this as an issue in the CSS WG, however, it turns out to not be an easy thing to fix, and there are limited places currently where we have this partial implementation problem.

Aspect Ratio Unit

Some things have an aspect ratio that we want to preserve, and image or a video for example. If you place an image or video directly on the page using the HTML img or video element, then it nicely keeps the aspect ratio it arrives with (unless you forcibly change the width or height). However, we sometimes want to add an element with no intrinsic aspect ratio while making one dimension flexible with the other retaining a specific aspect ratio. This most often happens when we embed a video with an iframe, however, you might also want to make perfectly square areas on your grid (something which also requires that one dimension can react to another).

The way we currently deal with this is by way of the padding hack. This uses the fact that padding in the block direction is copied from the inline direction when we use a percentage. It’s a not very elegant solution to the problem, but it works.

The aspect ratio unit seeks to solve that by allowing us to specify an aspect ratio for a length. Chrome has implemented this in Canary, so you can take a look at the demo below using Canary if you enable the Experimental Web Platform Features flag.

I have created a grid layout and set my grid items to use a 1 / 1 aspect ratio. The width of the items is determined by their grid column track size (as is flexible). The height is then being copied from that to make a square. Just for fun, I then rotated the items.

A grid of square items

In Canary, you can take a look in the demo and see how the items remain square even as their track grows and shrinks, due to the fact that the block size is using a 1/1 ratio of the inline size.

See the Pen Grid using the aspect ratio for items (needs Chrome Canary and Exp Web Platform Features flag) by Rachel Andrew (@rachelandrew) on CodePen.

See the Pen Grid using the aspect ratio for items (needs Chrome Canary and Exp Web Platform Features flag) by Rachel Andrew (@rachelandrew) on CodePen.

Native Masonry Support

Developers often ask if CSS Grid can be used to create a Masonry- or Pinterest-styled layout. While some demos look a bit like that, Grid was never designed to do Masonry.

To explain, you need to know what a Masonry layout is. In a typical Masonry layout, items display by row. Once the first row is filled, new items populate another row. However, if some of the items in the first row are shorter than others, the second-row items will rise up to fill the gap. The Masonry library is how many people achieve this using JavaScript.

If you try to create this layout using CSS Grid and auto-placement, you will see that you lose that block direction rearrangement of items. They lay themselves out in strict rows and columns because that is what a grid does.

So could grid ever be used as a Masonry layout? One of the engineers at Mozilla thinks so, and has created a prototype of the functionality. You can test it out by using Firefox Nightly with the flag layout.css.grid-template-masonry-value.enabled set to true by going to about:config in the Firefox Nightly URL bar.

Masonry Layout in Firefox Nightly

See the Pen Proposed Masonry (needs Firefox Nightly and the ) by Rachel Andrew (@rachelandrew) on CodePen.

See the Pen Proposed Masonry (needs Firefox Nightly and the ) by Rachel Andrew (@rachelandrew) on CodePen.

While this is very exciting for anyone who has had to create this kind of layout using JavaScript, a number of us do wonder if the grid specification is the place to define this very specific layout. You can read some of my thoughts in my article “Does Masonry Belong In The CSS Grid Specification?”.

Subgrid

We have had support for the subgrid value of grid-template-columns and grid-template-rows in Firefox for some time. Using this value means that you can inherit the size and number of tracks from a parent grid down through child grids. Essentially, as long as a grid item has display: grid, it can inherit the tracks that it covers rather than creating new column or row tracks.

The feature can be tested in Firefox, and I have lots of examples that you can test out. The article “Digging Into The Display Property: Grids All The Way Down” explains how subgrid differs from nested grids, and “CSS Grid Level 2: Here Comes Subgrid” introduces the specification. I also have a set of broken-down examples at “Grid by Example”.

However, the first question people have when I talk about subgrid is, “When will it be available in Chrome?” I still can’t give you a when, but some good news is on the horizon. On June 18th, a Chromium blog post announced that the Microsoft Edge team (now working on Chromium) is reimplementing Grid Layout in the LayoutNG engine, i.e. Chromium’s next-generation layout engine. Part of this work will also involve adding subgrid support.

Adding features to browsers isn’t a quick process; however, the Microsoft team brought us Grid Layout in the first place, along with the early prefixed implementation that shipped in IE10. So this is great news, and I look forward to being able to test the implementation when it ships in Beta.

prefers-reduced-data

Not yet implemented in any browser, but with a Chrome bug that has seen recent activity, is the prefers-reduced-data media feature. This will allow CSS to check whether the visitor has enabled data saving on their device and adjust the website accordingly. You might, for example, choose to avoid loading large images.

@media (prefers-reduced-data: reduce) {
  .image {
    background-image: url("images/placeholder.jpg");
  }
}

The prefers-reduced-data media feature works in the same way as some of the already implemented user preference media features in the Level 5 Media Queries Specification. For example, the prefers-reduced-motion and prefers-color-scheme media features allow you to test whether the visitor has requested reduced motion or a dark mode in their operating system, and tailor your CSS to suit.

::marker

The ::marker pseudo-element allows us to target the list marker. At a very straightforward level, this means that we can target the list bullet and change its color or size. (This was previously impossible due to the fact that you could only target the entire list item — text and marker.)
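
For example, a couple of simple rules of this kind (the values are illustrative):

li::marker {
  color: rebeccapurple;
  font-size: 1.2em;
}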

See the Pen ::marker by Rachel Andrew (@rachelandrew) on CodePen.


Support for ::marker is already available in Firefox, and can now be found in Chrome Beta, too.

In addition to styling bullets on actual lists, you can use ::marker on other elements. In the example below, I have a heading which has been given display: list-item and therefore has a marker which I have replaced with an emoji.
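
A minimal sketch of the idea (the demo may use a different heading level and emoji), assuming a browser that supports the content property on ::marker:

h1 {
  display: list-item;
}

h1::marker {
  content: "🚀 ";
}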

See the Pen display: list-item and ::marker by Rachel Andrew (@rachelandrew) on CodePen.


Note: You can read more about ::marker and other list-related things in my article “CSS Lists, Markers and Counters” here on Smashing Magazine.

Please Test New Features

While it’s fun to have a little peek into what is coming up, I recommend testing out the implementations if you have a use case for any of these things. You may well find a bug or something that doesn’t work as you expect. Browser vendors and the CSS Working Group would love to know. If you think you have found a bug in a browser — perhaps you are testing ::marker in Chrome and find it displays differently to the implementation in Firefox — then raise an issue with the browser: “How To File A Good Bug” explains how. If you think that the specification could do something it doesn’t yet, then raise an issue against the specification over at the CSS Working Group GitHub repository.

Smashing Editorial (il)
[post_excerpt] => Things move a lot faster than they used to in terms of the implementation of Web Platform features, and this post is a round-up of news about CSS features that are making their way into the platform. If you are the sort of person who doesn’t like reading about things if you can’t use them now, then this article probably isn’t for you — we have many others for you to enjoy instead! [post_date_gmt] => 2020-07-07 10:30:00 [post_date] => 2020-07-07 10:30:00 [post_modified_gmt] => 2020-07-07 10:30:00 [post_modified] => 2020-07-07 10:30:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/css-news-july-2020/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/css-news-july-2020/ [syndication_item_hash] => Array ( [0] => 6a0dd5d47629b917eb1517cc71a181c0 [1] => a5a0ce362eaf6474a3088eb3e7936aaa [2] => 6faeb64ec37e4d1e58047aa89e7a9e62 [3] => c7fa9709a64e98761c2787905b309337 [4] => 665e7a983a49546027b3a4abd4e7ee79 [5] => 15dae98994ce13497efdac42a05cf83c [6] => c89e0fe45c1fa9027badecac02b56744 [7] => 9c13485d3d0addda6581cd624295b62a [8] => 24fea278faca4f4f7a75f283e4410280 [9] => 501fb3984449379f4740aaa20a927edc [10] => fdc01a39d47786900f190118a5e64016 [11] => bf63cd576cd033c93816d19bbfd39519 [12] => 937bfda3a2f4265d9cddf81a32a6ec86 [13] => f95fb0094da4c609342a06d0ad14813e [14] => 8297d64c5c2431e7e575b52e9d5d7c9d [15] => dc2539b68de23db0a3509cae5f3a69d6 [16] => bf600a32b21537ada852efd991dbe748 [17] => 33f7212c43514678d32392015078ea04 [18] => ba10f9b3e2f45402c6f07f93589e6064 [19] => 7aeb846d630d89cf83a0f9cf2fb291b3 [20] => 8fb671cc4d2a119f9d9f0280a3791c2c [21] => 4b844279b05615557653d483b2f50b31 [22] => 59698dcb92e6f8ec9f1e9ab29b08f210 [23] => d33b539f8238b5e81c80f600fb2181f8 [24] => 504201c1fc091b8e06b62f953ea503eb [25] => 4a964279ba1cdb50d3f0f1f4675b6f64 [26] => 911be70af58895ffb170104a1bcb1627 [27] => daef9dd5835a20b0afd98c8330e1585b [28] => c86fb0cd0ca48140ac5fb3b73e0c5466 [29] => 1cbdad3b32bee024ccbb727737e9511d [30] => e1538470152a06834d8b8fc03d1c2dd4 [31] => 0e2b9df990374ed4d5b91411b51356f4 [32] => 3b62c4ac960463243340c43872b8f48a [33] => 4be2863bf4856458a03878556550ca2e [34] => d9976802a8c6689a02437f87949d3286 [35] => 25ba69e89d854eb5f6087b346ed2edb1 [36] => 0aaa923fd5e580cd87c69238fbee7a78 [37] => ea9902cdaea4acf2175ff4589deea960 [38] => cdfcece3ac55870fe9b362873d018688 [39] => cdfcece3ac55870fe9b362873d018688 [40] => cfee2c1456fd9b074f6107271b5f7450 [41] => 2ed1bf49760c678cd48008da68bf0158 [42] => 111b441962fc3fefe3b4bd279abcbb05 [43] => dfd95310ed3c8a0f1c368a4d66eff5e3 ) ) [post_type] => post [post_author] => 467 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => css-news-july-2020 )

FAF deciding on filters on post to be syndicated:

Understanding Plugin Development In Gatsby

Array ( [post_title] => Understanding Plugin Development In Gatsby [post_content] => Understanding Plugin Development In Gatsby

Understanding Plugin Development In Gatsby

Aleem Isiaka

Gatsby is a React-based static-site generator that has overhauled how websites and blogs are created. It supports the use of plugins to create custom functionality that is not available in the standard installation.

In this post, I will introduce Gatsby plugins, discuss the types of Gatsby plugins that exist, differentiate between the forms they take, and, finally, create a comment plugin that can be used on any Gatsby website, which we will install on a site by the end of the tutorial.

What Is A Gatsby Plugin?

Gatsby, as a static-site generator, has limits on what it can do. Plugins are means to extend Gatsby with any feature not provided out of the box. We can achieve tasks like creating a manifest.json file for a progressive web app (PWA), embedding tweets on a page, logging page views, and much more on a Gatsby website using plugins.

Types Of Gatsby Plugins

There are two types of Gatsby plugins: local and external. Local plugins are developed in a Gatsby project directory, under the /plugins directory. External plugins are those available through npm or Yarn; they may also live elsewhere on the same computer and be linked into a Gatsby website project using the yarn link or npm link command.

Forms Of Gatsby Plugins

Plugins also exist in three primary forms and are defined by their use cases:

Components Of A Gatsby Plugin

To create a Gatsby plugin, we have to define some files:

gatsby-node.js
gatsby-browser.js
gatsby-ssr.js
gatsby-config.js

These files are referred to as API files in Gatsby’s documentation and should live in the root of a plugin’s directory, either local or external.

Not all of these files are required to create a Gatsby plugin. In our case, we will be implementing only the gatsby-node.js and gatsby-browser.js API files.

Building A Comment Plugin For Gatsby

To learn how to develop a Gatsby plugin, we will create a comment plugin that is installable on any blog that runs on Gatsby. The full code for the plugin is on GitHub.

Serving and Loading Comments

To serve comments on a website, we have to provide a server that allows for the saving and loading of comments. We will use an already available comment server at gatsbyjs-comment-server.herokuapp.com for this purpose.

The server supports a GET /comments request for loading comments and a POST /comments request for saving them. The body of the POST /comments request accepts the following fields: name (the commenter’s name), content (the comment’s text), slug (the slug of the post being commented on), and website (the site the comment belongs to).

Integrating the Server With Gatsby Using API Files

Much like we do when creating a Gatsby blog, to create an external plugin, we should start with plugin boilerplate.

Initializing the folder

In the command-line interface (CLI), and from any directory you are comfortable with, let’s run the following command:

gatsby new gatsby-source-comment-server https://github.com/Gatsbyjs/gatsby-starter-plugin

Then, change into the plugin directory, and open it in a code editor.

Installing axios for Network Requests

To begin, we will install the axios package to make web requests to the comments server:

npm install axios --save
// or
yarn add axios
Adding a New Node Type

Before pulling comments from the comments server, we need to define a new node type that the comments would extend. For this, in the plugin folder, our gatsby-node.js file should contain the code below:

exports.sourceNodes = async ({ actions }) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      content: String
      website: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);
};

First, we pulled actions from the APIs provided by Gatsby. Then, we pulled out the createTypes action, after which we defined a CommentServer type that implements the Node interface. Then, we called createTypes with the new node type that we set.

Fetching Comments From the Comments Server

Now, we can use axios to pull comments and then store them in the data-access layer as the CommentServer type. This action is called “node sourcing” in Gatsby.

To source new nodes, we have to implement the sourceNodes API in gatsby-node.js. In our case, we would use axios to make network requests, then parse the data from the API to match a GraphQL type that we would define, and then create a node in the GraphQL layer of Gatsby using the createNode action.

We can add the code below to the plugin’s gatsby-node.js API file, creating the functionality we’ve described:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _limit = parseInt(limit || 10000); // fetch all comments if no limit option is given
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

Here, we have imported the axios package, then set defaults in case our plugin’s options are not provided, and then made a request to the endpoint that serves our comments.

We then defined a function to convert the comments into Gatsby nodes, using the action helpers provided by Gatsby. After this, we iterated over the fetched comments and called convertCommentToNode to convert the comments into Gatsby nodes.

Transforming Data (Comments)

Next, we need to resolve the comments to posts. Gatsby has an API for that called createResolvers. We can make this possible by appending the code below in the gatsby-node.js file of the plugin:

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                slug: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};

Here, we are extending MarkdownRemark to include a comments field. The newly added comments field will resolve to the CommentServer type, based on the slug that the comment was saved with and the slug of the post.
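
With this resolver in place, a page query could request comments alongside a post. A minimal sketch, assuming the blog-post template receives a $slug variable (as gatsby-starter-blog does), using field names from the CommentServer type defined above:

query($slug: String!) {
  markdownRemark(fields: { slug: { eq: $slug } }) {
    comments {
      _id
      content
      createdAt
    }
  }
}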

Final Code for Comment Sourcing and Transforming

The final code for the gatsby-node.js file of our comments plugin should look like this:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _limit = parseInt(limit || 10000); // FETCH ALL COMMENTS
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                slug: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};
Saving Comments as JSON Files

We need to save the comments for page slugs in their respective JSON files. This makes it possible to fetch the comments on demand over HTTP without having to use a GraphQL query.

To do this, we will implement the createPagesStatefully API in the gatsby-node.js API file of the plugin. We will use the fs module to check whether the path exists before creating a file in it. The code below shows how we can implement this:

import fs from "fs"
import {resolve: pathResolve} from "path"
exports.createPagesStatefully = async ({ graphql }) => {
  const comments = await graphql(
    `
      {
        allCommentServer(limit: 1000) {
          edges {
            node {
              name
              slug
              _id
              createdAt
              content
            }
          }
        }
      }
    `
  )

  if (comments.errors) {
    throw comments.errors
  }

  const markdownPosts = await graphql(
    `
      {
        allMarkdownRemark(
          sort: { fields: [frontmatter___date], order: DESC }
          limit: 1000
        ) {
          edges {
            node {
              fields {
                slug
              }
            }
          }
        }
      }
    `
  )

  const posts = markdownPosts.data.allMarkdownRemark.edges
  const _comments = comments.data.allCommentServer.edges

  const commentsPublicPath = pathResolve(process.cwd(), "public/comments")

  var exists = fs.existsSync(commentsPublicPath) //create destination directory if it doesn't exist

  if (!exists) {
    fs.mkdirSync(commentsPublicPath)
  }

  posts.forEach((post, index) => {
    const path = post.node.fields.slug
    const commentsForPost = _comments
      .filter(comment => {
        return comment.node.slug === path
      })
      .map(comment => comment.node)

    const strippedPath = path
      .split("/")
      .filter(s => s)
      .join("/")
    const _commentPath = pathResolve(
      process.cwd(),
      "public/comments",
      `${strippedPath}.json`
    )
    fs.writeFileSync(_commentPath, JSON.stringify(commentsForPost))
  })
}

First, we require the fs module and the resolve function of the path module. We then use the GraphQL helper to pull the comments that we stored earlier, avoiding extra HTTP requests, and use it again to pull the Markdown posts. Then we check whether the comments directory already exists under the public path, so that we can create it before proceeding.

Finally, we loop through all of the nodes in the Markdown type. We pull out the comments for the current posts and store them in the public/comments directory, with the post’s slug as the name of the file.

The .gitignore in the root of a Gatsby website excludes the public path from being committed, so saving files in this directory is safe.

During each rebuild, Gatsby would call this API in our plugin to fetch the comments and save them locally in JSON files.

Rendering Comments

To render comments in the browser, we have to use the gatsby-browser.js API file.

Define the Root Container for HTML

In order for the plugin to identify an insertion point in a page, we would have to set an HTML element as the container for rendering and listing the plugin’s components. We can expect that every page that requires it should have an HTML element with an ID set to commentContainer.
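
For example, a page that should display comments would include something like the following (the exact placement in a template is shown later in this tutorial):

<section class="comments" id="commentContainer"></section>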

Implement the Route Update API in the gatsby-browser.js File

The best time to do the file fetching and component insertion is when a page has just been visited. The onRouteUpdate API provides this functionality and passes apiHelpers and pluginOptions as arguments to the callback function.

exports.onRouteUpdate = async (apiHelpers, pluginOptions) => {
  const { location, prevLocation } = apiHelpers
}
Create Helper That Creates HTML Elements

To make our code cleaner, we have to define a function that can create an HTML element, set its className, and add content. At the top of the gatsby-browser.js file, we can add the code below:

// Creates an element, sets its class and innerHTML, then returns it.
function createEl(name, className, html = null) {
  const el = document.createElement(name)
  el.className = className
  el.innerHTML = html
  return el
}
Create Header of Comments Section

At this point, we can add a header to the comments insertion point, in the onRouteUpdate browser API. First, we ensure that the element exists on the page, then create a header element using the createEl helper, and then append it to the insertion point.

// ...

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
  }
}
Listing Comments

To list comments, we would append a ul element to the component insertion point. We will use the createEl helper to achieve this, and set its className to comment-list:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
  }
}

Next, we need to render the comments that we have saved in the public directory to a ul element, inside of li elements. For this, we define a helper that fetches the comments for a page using the path name.

// Other helpers
const getCommentsForPage = async slug => {
  const path = slug
    .split("/")
    .filter(s => s)
    .join("/")
  const data = await fetch(`/comments/${path}.json`)
  return data.json()
}
// ... implements routeupdate below

We have defined a helper, named getCommentsForPage, that accepts a path and uses fetch to load the comments from the public/comments directory, before parsing them to JSON and returning them to the calling function.

Now, in our onRouteUpdate callback, we will load the comments:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
    const comments = await getCommentsForPage(location.pathname)
  }
}

Next, let’s define a helper to create the list items:

// .... other helpers

const getCommentListItem = comment => {
  const li = createEl("li")
  li.className = "comment-list-item"

  const nameCont = createEl("div")
  const name = createEl("strong", "comment-author", comment.name)
  const date = createEl(
    "span",
    "comment-date",
    new Date(comment.createdAt).toLocaleDateString()
  )
  // date.className="date"
  nameCont.append(name)
  nameCont.append(date)

  const commentCont = createEl("div", "comment-cont", comment.content)

  li.append(nameCont)
  li.append(commentCont)
  return li
}

// ... onRouteUpdateImplementation

In the snippet above, we created an li element with a className of comment-list-item, and a div for the comment’s author and time. We then created another div for the comment’s text, with a className of comment-cont.

To render the list items of comments, we iterate through the comments fetched using the getCommentsForPage helper, and then call the getCommentListItem helper to create a list item for each. Finally, we append it to the <ul class="comment-list"></ul> element:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
    const comments = await getCommentsForPage(location.pathname)
    if (comments && comments.length) {
      comments.map(comment => {
        const html = getCommentListItem(comment)
        commentListUl.append(html)
        return comment
      })
    }
  }
}

Posting a Comment

Post Comment Form Helper

To enable users to post a comment, we have to make a POST request to the /comments endpoint of the API. We need a form to collect the comment, so let’s create a form helper that returns an HTML form element.

// ... other helpers
const createCommentForm = () => {
  const form = createEl("form")
  form.className = "comment-form"
  const nameInput = createEl("input", "name-input", null)
  nameInput.type = "text"
  nameInput.placeholder = "Your Name"
  form.appendChild(nameInput)
  const commentInput = createEl("textarea", "comment-input", null)
  commentInput.placeholder = "Comment"
  form.appendChild(commentInput)
  const feedback = createEl("span", "feedback")
  form.appendChild(feedback)
  const button = createEl("button", "comment-btn", "Submit")
  button.type = "submit"
  form.appendChild(button)
  return form
}

The helper creates an input element with a className of name-input, a textarea with a className of comment-input, a span with a className of feedback, and a button with a className of comment-btn.

Append the Post Comment Form

We can now append the form into the insertion point, using the createCommentForm helper:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    // insert header
    // insert comment list
    commentContainer.appendChild(createCommentForm())
  }
}

Post Comments to Server

To post a comment to the server, we have to tell the user what is happening, for example, that an input is required or that the API returned an error. The <span class="feedback" /> element is meant for this. To make it easier to update this element, we create a helper that sets its text and applies a class based on the type of feedback (whether error, info, or success).

// ... other helpers
// Sets the class and text of the form feedback
const updateFeedback = (str = "", className) => {
  const feedback = document.querySelector(".feedback")
  feedback.className = `feedback ${className ? className : ""}`.trim()
  feedback.innerHTML = str
  return feedback
}
// onRouteUpdate callback

We are using the querySelector API to get the element. Then we set the class by updating the className attribute of the element. Finally, we use innerHTML to update the contents of the element before returning it.

Submitting a Comment With the Comment Form

We will listen to the onSubmit event of the comment form to determine when a user has decided to submit the form. We don’t want empty data to be submitted, so we would set a feedback message and disable the submit button until needed:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  // Appends header
  // Appends comment list
  // Appends comment form
  document
    .querySelector("body .comment-form")
    .addEventListener("submit", async function (event) {
      event.preventDefault()
      updateFeedback()
      const name = document.querySelector(".name-input").value
      const comment = document.querySelector(".comment-input").value
      if (!name) {
        return updateFeedback("Name is required")
      }
      if (!comment) {
        return updateFeedback("Comment is required")
      }
      updateFeedback("Saving comment", "info")
      const btn = document.querySelector(".comment-btn")
      btn.disabled = true
      const data = {
        name,
        content: comment,
        slug: location.pathname,
        website: pluginOptions.website,
      }

      fetch(
        "https://cors-anywhere.herokuapp.com/gatsbyjs-comment-server.herokuapp.com/comments",
        {
          body: JSON.stringify(data),
          method: "POST",
          headers: {
            Accept: "application/json",
            "Content-Type": "application/json",
          },
        }
      ).then(async function (result) {
        const json = await result.json()
        btn.disabled = false

        if (!result.ok) {
          updateFeedback(json.error.msg, "error")
        } else {
          document.querySelector(".name-input").value = ""
          document.querySelector(".comment-input").value = ""
          updateFeedback("Comment has been saved!", "success")
        }
      }).catch(err => {
        // a failed request rejects with an Error, which has no text() method
        btn.disabled = false
        updateFeedback(err.message, "error")
      })
    })
}

We use document.querySelector to get the form from the page, and we listen to its submit event. Then, we set the feedback to an empty string, from whatever it might have been before the user attempted to submit the form.

We also check whether the name or comment field is empty, setting an error message accordingly.

Next, we make a POST request to the comments server at the /comments endpoint, listening for the response. We use the feedback to tell the user whether there was an error when they created the comment, and we also use it to tell them whether the comment’s submission was successful.

Adding a Style Sheet

To add styles to the component, we have to create a new file, style.css, at the root of our plugin folder, with the following content:

#commentContainer {
}

.comment-form {
  display: grid;
}

At the top of gatsby-browser.js, import it like this:

import "./style.css"

This style rule will make the form’s components occupy 100% of the width of their container.
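
The same file can style the rest of the plugin’s markup. A minimal sketch using the class names created by our helpers (the values are only suggestions):

.comment-list {
  list-style: none;
  padding: 0;
}

.comment-list-item {
  margin-bottom: 1em;
}

.comment-date {
  margin-left: 0.5em;
  color: #666;
}

.feedback.error {
  color: red;
}

.feedback.success {
  color: green;
}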

Finally, all of the components for our comments plugin are complete. Time to install and test this fantastic plugin we have built.

Test the Plugin

Create a Gatsby Website

Run the following command from a directory one level above the plugin’s directory:

// PARENT
// ├── PLUGIN
// ├── Gatsby Website

gatsby new private-blog https://github.com/gatsbyjs/gatsby-starter-blog

Install the Plugin Locally and Add Options

Next, change to the blog directory, because we need to create a link for the new plugin:

cd /path/to/blog
npm link ../path/to/plugin/folder
Add to gatsby-config.js

In the gatsby-config.js file of the blog folder, we should add a new object whose resolve key is set to the name of the plugin’s folder. In this case, the name is gatsby-comment-server-plugin:

module.exports = {
  // ...
  plugins: [
    // ...
    "gatsby-plugin-dom-injector",
    {
      resolve: "gatsby-comment-server-plugin",
      options: {website: "https://url-of-website.com"},
    },
  ],
}

Notice that the plugin accepts a website option to distinguish the source of the comments when fetching and saving comments.

Update the blog-post Component

For the insertion point, we will add <section class="comments" id="commentContainer"> to the post template component at src/templates/blog-post.js of the blog project. This can be inserted at any suitable position; I have inserted mine after the last hr element and before the footer.
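
A minimal sketch of where this lands in src/templates/blog-post.js (note that JSX uses className; the surrounding template code is abbreviated):

{/* ...post content rendered above... */}
<hr />
<section className="comments" id="commentContainer"></section>
<footer>
  {/* ...existing footer content... */}
</footer>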

Start the Development Server

Finally, we can start the development server with gatsby develop, which will make our website available locally at http://localhost:8000. Navigating to any post page, like http://localhost:8000/new-beginnings, will reveal the comment at the insertion point that we specified above.

Create a Comment

We can create a comment using the comment form, and it will provide helpful feedback as we interact with it.

List Comments

To list newly posted comments, we have to restart the server, because our content is static.

Conclusion

In this tutorial, we have introduced Gatsby plugins and demonstrated how to create one.

Our plugin uses different APIs of Gatsby and its own API files to provide comments for our website, illustrating how we can use plugins to add significant functionality to a Gatsby website.

Although we are pulling from a live server, the plugin is saving the comments in JSON files. We could make the plugin load comments on demand from the API server, but that would defeat the notion that our blog is a static website that does not require dynamic content.

The plugin built in this post exists as an npm module, while the full code is on GitHub.


Smashing Editorial (yk)
[post_excerpt] => Gatsby is a React-based static-site generator that has overhauled how websites and blogs are created. It supports the use of plugins to create custom functionality that is not available in the standard installation. In this post, I will introduce Gatsby plugins, discuss the types of Gatsby plugins that exist, differentiate between the forms of Gatsby plugins, and, finally, create a comment plugin that can be used on any Gatsby website, one of which we will install by the end of the tutorial. [post_date_gmt] => 2020-07-06 14:30:00 [post_date] => 2020-07-06 14:30:00 [post_modified_gmt] => 2020-07-06 14:30:00 [post_modified] => 2020-07-06 14:30:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/understanding-plugin-development-gatsby/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/understanding-plugin-development-gatsby/ [syndication_item_hash] => Array ( [0] => b8c36e0b66df6108b64290f380603eb8 [1] => 4fd0d547609d36932fcd52b73c0775f9 [2] => 0708ebd0859c798ff33109642faa61a3 [3] => 498738cfaa3b5e05ff657ec1581bdfa3 [4] => dbf77ac732d43627de9a6eb9f68f5cc6 [5] => 8aae109e919389f6e0bce836d5ac2e3e [6] => 2be0804cfba9f4f5f71ddbd13c9de713 [7] => af2fb5033d18442b5a2349a30b46fd8c [8] => ae56ba781a65226896930ec94065f6df [9] => 50ac168bd3afa80d57202cf356fe920e [10] => 3ad1e8afbd800bfa82bf4c8a7ef6a361 [11] => 9dbf07cd8db2bd91ea98bcd0963cca79 [12] => 4e15758cc0e23ecc317a2f1d5a73aab0 [13] => 42b3488c3ec9b911a51f332be3adba52 [14] => 2e9ff5e8f2cd23333b5e3587b1f0d6be [15] => f024de811e6e57f183e71d99307a3b48 [16] => aaee573f6be87a1cc5f0aa8cce28a58c [17] => 31736b93ae1b64d818687ded92e191b7 [18] => 6ffc47f74195dd66462797c28eaa1929 [19] => caecd752db911d9d2aff7001abd3a838 [20] => 06e4b972c67d45657c7dc7817d497aa0 [21] => ff2a5c5d6b25358d547ba5290293a640 [22] => d3c37594ddb279de71da0d4f57699ce9 [23] => 00012ed40fd2e4f103e76481df9fa5e1 [24] => 8fc66e41d66c6b3b5686f5de35a54453 [25] => 75cce2ec6f19f2eada90a64b13d7ccca [26] => 3f59232735d6bcf055a90413e48ed69f [27] => 46904fe8204c3002e0262bf695842a55 [28] => 2e84edd7f5d8336451450f6a957caea6 [29] => 03bfda0f7766a6a3bfd9326153b33903 [30] => 20bd43c8b2e9c814df35b96229183947 [31] => 60588a4e9a62caa7849e4ce395da04c4 [32] => 1f1de24bc61c08ad8cb2383de8fa578f [33] => b4630171ab402df7b9de64c67e9ce48d [34] => 7142d6258f56dbd0b23a12f1ea3ae9e0 [35] => 6014b90eb5ae952a86b268e7d554c86f [36] => e876917631dd932cab0a5d0adff4cda9 [37] => 7c576b4794b6c73c206ee365646902d7 [38] => dd08dadd21bb571de68c7bcf741b91a1 [39] => 4434ec2e38f94f57f1b0cb9afaee13d3 [40] => bc4a66dece25cc9f125ae9a4926a7586 [41] => bc4a66dece25cc9f125ae9a4926a7586 [42] => 9867620a00b039484490b46176336f56 [43] => 940e7a11a374274b4e108376127338d0 [44] => c790be9e02cb4dda6c7b887633cf8d30 [45] => c6f48bfc6ac2106b889d62af4c6cca26 ) ) [post_type] => post [post_author] => 548 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => understanding-plugin-development-in-gatsby )

Decide filter: Returning post, everything seems orderly :Understanding Plugin Development In Gatsby

Array ( [post_title] => Understanding Plugin Development In Gatsby [post_content] => Understanding Plugin Development In Gatsby

Understanding Plugin Development In Gatsby

Aleem Isiaka

Gatsby is a React-based static-site generator that has overhauled how websites and blogs are created. It supports the use of plugins to create custom functionality that is not available in the standard installation.

In this post, I will introduce Gatsby plugins, discuss the types of Gatsby plugins that exist, differentiate between the forms of Gatsby plugins, and, finally, create a comment plugin that can be used on any Gatsby website, one of which we will install by the end of the tutorial.

What Is A Gatsby Plugin?

Gatsby, as a static-site generator, has limits on what it can do. Plugins are means to extend Gatsby with any feature not provided out of the box. We can achieve tasks like creating a manifest.json file for a progressive web app (PWA), embedding tweets on a page, logging page views, and much more on a Gatsby website using plugins.

Types Of Gatsby Plugins

There are two types of Gatsby plugins, local and external. Local plugins are developed in a Gatsby project directory, under the /plugins directory. External plugins are those available through npm or Yarn. Also, they may be on the same computer but linked using the yarn link or npm link command in a Gatsby website project.

Forms Of Gatsby Plugins

Plugins also exist in three primary forms and are defined by their use cases:

Components Of A Gatsby Plugin

To create a Gatsby plugin, we have to define some files:

These files are referred to as API files in Gatsby’s documentation and should live in the root of a plugin’s directory, either local or external.

Not all of these files are required to create a Gatsby plugin. In our case, we will be implementing only the gatsby-node.js and gatsby-config.js API files.

Building A Comment Plugin For Gatsby

To learn how to develop a Gatsby plugin, we will create a comment plugin that is installable on any blog that runs on Gatsby. The full code for the plugin is on GitHub.

Serving and Loading Comments

To serve comments on a website, we have to provide a server that allows for the saving and loading of comments. We will use an already available comment server at gatsbyjs-comment-server.herokuapp.com for this purpose.

The server supports a GET /comments request for loading comments. POST /comments would save comments for the website, and it accepts the following fields as the body of the POST /comments request:

Integrating the Server With Gatsby Using API Files

Much like we do when creating a Gatsby blog, to create an external plugin, we should start with plugin boilerplate.

Initializing the folder

In the command-line interace (CLI) and from any directory you are convenient with, let’s run the following command:

gatsby new gatsby-source-comment-server https://github.com/Gatsbyjs/gatsby-starter-plugin

Then, change into the plugin directory, and open it in a code editor.

Installing axios for Network Requests

To begin, we will install the axios package to make web requests to the comments server:

npm install axios --save
// or
yarn add axios
Adding a New Node Type

Before pulling comments from the comments server, we need to define a new node type that the comments would extend. For this, in the plugin folder, our gatsby-node.js file should contain the code below:

exports.sourceNodes = async ({ actions }) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      content: String
      website: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);
};

First, we pulled actions from the APIs provided by Gatsby. Then, we pulled out the createTypes action, after which we defined a CommentServer type that extends Node.js. Then, we called createTypes with the new node type that we set.

Fetching Comments From the Comments Server

Now, we can use axios to pull comments and then store them in the data-access layer as the CommentServer type. This action is called “node sourcing” in Gatsby.

To source for new nodes, we have to implement the sourceNodes API in gatsby-node.js. In our case, we would use axios to make network requests, then parse the data from the API to match a GraphQL type that we would define, and then create a node in the GraphQL layer of Gatsby using the createNode action.

We can add the code below to the plugin’s gatsby-node.js API file, creating the functionality we’ve described:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

Here, we have imported the axios package, then set defaults in case our plugin’s options are not provided, and then made a request to the endpoint that serves our comments.

We then defined a function to convert the comments into Gatsby nodes, using the action helpers provided by Gatsby. After this, we iterated over the fetched comments and called convertCommentToNode to convert the comments into Gatsby nodes.

Transforming Data (Comments)

Next, we need to resolve the comments to posts. Gatsby has an API for that called createResolvers. We can make this possible by appending the code below in the gatsby-node.js file of the plugin:

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                slug: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};

Here, we are extending MarkdownRemark to include a comments field. The newly added comments field will resolve to the CommentServer type, based on the slug that the comment was saved with and the slug of the post.

Final Code for Comment Sourcing and Transforming

The final code for the gatsby-node.js file of our comments plugin should look like this:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _limit = parseInt(limit || 10000); // FETCH ALL COMMENTS
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                website: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};
Saving Comments as JSON Files

We need to save the comments for page slugs in their respective JSON files. This makes it possible to fetch the comments on demand over HTTP without having to use a GraphQL query.

To do this, we will implement the createPageStatefully API in thegatsby-node.js API file of the plugin. We will use the fs module to check whether the path exists before creating a file in it. The code below shows how we can implement this:

import fs from "fs"
import {resolve: pathResolve} from "path"
exports.createPagesStatefully = async ({ graphql }) => {
  const comments = await graphql(
    `
      {
        allCommentServer(limit: 1000) {
          edges {
            node {
              name
              slug
              _id
              createdAt
              content
            }
          }
        }
      }
    `
  )

  if (comments.errors) {
    throw comments.errors
  }

  const markdownPosts = await graphql(
    `
      {
        allMarkdownRemark(
          sort: { fields: [frontmatter___date], order: DESC }
          limit: 1000
        ) {
          edges {
            node {
              fields {
                slug
              }
            }
          }
        }
      }
    `
  )

  const posts = markdownPosts.data.allMarkdownRemark.edges
  const _comments = comments.data.allCommentServer.edges

  const commentsPublicPath = pathResolve(process.cwd(), "public/comments")

  var exists = fs.existsSync(commentsPublicPath) //create destination directory if it doesn't exist

  if (!exists) {
    fs.mkdirSync(commentsPublicPath)
  }

  posts.forEach((post, index) => {
    const path = post.node.fields.slug
    const commentsForPost = _comments
      .filter(comment => {
        return comment.node.slug === path
      })
      .map(comment => comment.node)

    const strippedPath = path
      .split("/")
      .filter(s => s)
      .join("/")
    const _commentPath = pathResolve(
      process.cwd(),
      "public/comments",
      `${strippedPath}.json`
    )
    fs.writeFileSync(_commentPath, JSON.stringify(commentsForPost))
  })
}

First, we require the fs, and resolve the function of the path module. We then use the GraphQL helper to pull the comments that we stored earlier, to avoid extra HTTP requests. We remove the Markdown files that we created using the GraphQL helper. And then we check whether the comment path is not missing from the public path, so that we can create it before proceeding.

Finally, we loop through all of the nodes in the Markdown type. We pull out the comments for the current posts and store them in the public/comments directory, with the post’s slug as the name of the file.

The .gitignore in the root in a Gatsby website excludes the public path from being committed. Saving files in this directory is safe.

During each rebuild, Gatsby would call this API in our plugin to fetch the comments and save them locally in JSON files.

Rendering Comments

To render comments in the browser, we have to use the gatsby-browser.js API file.

Define the Root Container for HTML

In order for the plugin to identify an insertion point in a page, we would have to set an HTML element as the container for rendering and listing the plugin’s components. We can expect that every page that requires it should have an HTML element with an ID set to commentContainer.

Implement the Route Update API in the gatsby-browser.js File

The best time to do the file fetching and component insertion is when a page has just been visited. The onRouteUpdate API provides this functionality and passes the apiHelpers and pluginOpions as arguments to the callback function.

exports.onRouteUpdate = async (apiHelpers, pluginOptions) => {
  const { location, prevLocation } = apiHelpers
}
Create Helper That Creates HTML Elements

To make our code cleaner, we have to define a function that can create an HTML element, set its className, and add content. At the top of the gatsby-browser.js file, we can add the code below:

// Creates element, set class. innerhtml then returns it.
 function createEl (name, className, html = null) {
  const el = document.createElement(name)
  el.className = className
  el.innerHTML = html
  return el
}
Create Header of Comments Section

At this point, we can add a header into the insertion point of comments components, in the onRouteUpdate browser API . First, we would ensure that the element exists in the page, then create an element using the createEl helper, and then append it to the insertion point.

// ...

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
  }
}
Listing Comments

To list comments, we would append a ul element to the component insertion point. We will use the createEl helper to achieve this, and set its className to comment-list:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
}

Next, we need to render the comments that we have saved in the public directory to a ul element, inside of li elements. For this, we define a helper that fetches the comments for a page using the path name.

// Other helpers
const getCommentsForPage = async slug => {
  const path = slug
    .split("/")
    .filter(s => s)
    .join("/")
  const data = await fetch(`/comments/${path}.json`)
  return data.json()
}
// ... implements routeupdate below

We have defined a helper, named getCommentsForPage, that accepts paths and uses fetch to load the comments from the public/comments directory, before parsing them to JSON and returning them back to the calling function.

Now, in our onRouteUpdate callback, we will load the comments:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
   const comments = await getCommentsForPage(location.pathname)
}

Next, let’s define a helper to create the list items:

// .... other helpers

const getCommentListItem = comment => {
  const li = createEl("li")
  li.className = "comment-list-item"

  const nameCont = createEl("div")
  const name = createEl("strong", "comment-author", comment.name)
  const date = createEl(
    "span",
    "comment-date",
    new Date(comment.createdAt).toLocaleDateString()
  )
  // date.className="date"
  nameCont.append(name)
  nameCont.append(date)

  const commentCont = createEl("div", "comment-cont", comment.content)

  li.append(nameCont)
  li.append(commentCont)
  return li
}

// ... onRouteUpdateImplementation

In the snippet above, we created an li element with a className of comment-list-item, and a div for the comment’s author and time. We then created another div for the comment’s text, with a className of comment-cont.

To render the list items of comments, we iterate through the comments fetched using the getComments helper, and then call the getCommentListItem helper to create a list item. Finally, we append it to the <ul class="comment-list"></ul> element:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
   const comments = await getCommentsForPage(location.pathname)
    if (comments && comments.length) {
      comments.map(comment => {
        const html = getCommentListItem(comment)
        commentListUl.append(html)
        return comment
      })
    }
}

Posting a Comment

Post Comment Form Helper

To enable users to post a comment, we have to make a POST request to the /comments endpoint of the API. We need a form in order to create this form. Let’s create a form helper that returns an HTML form element.

// ... other helpers
const createCommentForm = () => {
  const form = createEl("form")
  form.className = "comment-form"
  const nameInput = createEl("input", "name-input", null)
  nameInput.type = "text"
  nameInput.placeholder = "Your Name"
  form.appendChild(nameInput)
  const commentInput = createEl("textarea", "comment-input", null)
  commentInput.placeholder = "Comment"
  form.appendChild(commentInput)
  const feedback = createEl("span", "feedback")
  form.appendChild(feedback)
  const button = createEl("button", "comment-btn", "Submit")
  button.type = "submit"
  form.appendChild(button)
  return form
}

The helper creates an input element with a className of name-input, a textarea with a className of comment-input, a span with a className of feedback, and a button with a className of comment-btn.

Append the Post Comment Form

We can now append the form into the insertion point, using the createCommentForm helper:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.path !== "/") {
    // insert header
    // insert comment list
    commentContainer.appendChild(createCommentForm())
  }
}

Post Comments to Server

To post a comment to the server, we have to tell the user what is happening — for example, either that an input is required or that the API returned an error. The <span class="feedback" /> element is meant for this. To make it easier to update this element, we create a helper that sets the element and inserts a new class based on the type of the feedback (whether error, info, or success).

// ... other helpers
// Sets the class and text of the form feedback
const updateFeedback = (str = "", className) => {
  const feedback = document.querySelector(".feedback")
  feedback.className = `feedback ${className ? className : ""}`.trim()
  feedback.innerHTML = str
  return feedback
}
// onRouteUpdate callback

We are using the querySelector API to get the element. Then we set the class by updating the className attribute of the element. Finally, we use innerHTML to update the contents of the element before returning it.

Submitting a Comment With the Comment Form

We will listen to the submit event of the comment form to determine when a user has decided to submit it. We don’t want empty data to be submitted, so if either field is empty we set a feedback message and return early; while the request is in flight, we disable the submit button:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  // Appends header
  // Appends comment list
  // Appends comment form
  document
    .querySelector("body .comment-form")
    .addEventListener("submit", async function (event) {
      event.preventDefault()
      updateFeedback()
      const name = document.querySelector(".name-input").value
      const comment = document.querySelector(".comment-input").value
      if (!name) {
        return updateFeedback("Name is required")
      }
      if (!comment) {
        return updateFeedback("Comment is required")
      }
      updateFeedback("Saving comment", "info")
      const btn = document.querySelector(".comment-btn")
      btn.disabled = true
      const data = {
        name,
        content: comment,
        slug: location.pathname,
        website: pluginOptions.website,
      }

      fetch(
        "https://cors-anywhere.herokuapp.com/gatsbyjs-comment-server.herokuapp.com/comments",
        {
          body: JSON.stringify(data),
          method: "POST",
          headers: {
            Accept: "application/json",
            "Content-Type": "application/json",
          },
        }
      ).then(async function (result) {
        const json = await result.json()
        btn.disabled = false

        if (!result.ok) {
          updateFeedback(json.error.msg, "error")
        } else {
          document.querySelector(".name-input").value = ""
          document.querySelector(".comment-input").value = ""
          updateFeedback("Comment has been saved!", "success")
        }
      }).catch(err => {
        // a failed request rejects with an Error, which has a message rather than a .text() method
        btn.disabled = false
        updateFeedback(err.message, "error")
      })
    })
}

We use document.querySelector to get the form from the page, and we listen to its submit event. Then, we reset the feedback to an empty string, clearing whatever it might have contained before the user attempted to submit the form.

We also check whether the name or comment field is empty, setting an error message accordingly.

Next, we make a POST request to the comments server at the /comments endpoint, listening for the response. We use the feedback to tell the user whether there was an error when they created the comment, and we also use it to tell them whether the comment’s submission was successful.

Adding a Style Sheet

To add styles to the component, we have to create a new file, style.css, at the root of our plugin folder, with the following content:

#commentContainer {
}

.comment-form {
  display: grid;
}

At the top of gatsby-browser.js, import it like this:

import "./style.css"

The display: grid rule makes the form’s child elements each occupy the full width of the form.

Finally, all of the components for our comments plugin are complete. Time to install and test this fantastic plugin we have built.

Test the Plugin

Create a Gatsby Website

Run the following command from a directory one level above the plugin’s directory:

// PARENT
// ├── PLUGIN
// ├── Gatsby Website

gatsby new private-blog https://github.com/gatsbyjs/gatsby-starter-blog

Install the Plugin Locally and Add Options

Next, change to the blog directory, because we need to create a link for the new plugin:

cd /path/to/blog
npm link ../path/to/plugin/folder
Add to gatsby-config.js

In the gatsby-config.js file of the blog folder, we should add a new object with a resolve key whose value is the name of the plugin’s folder. In this case, the name is gatsby-comment-server-plugin:

module.exports = {
  // ...
  plugins: [
    // ...
    "gatsby-plugin-dom-injector",
    {
      resolve: "gatsby-comment-server-plugin",
      options: {website: "https://url-of-website.com"},
    },
  ],
}

Notice that the plugin accepts a website option to distinguish the source of the comments when fetching and saving comments.

Update the blog-post Component

For the insertion point, we will add <section class="comments" id="commentContainer"> to the post template component at src/templates/blog-post.js of the blog project. This can be inserted at any suitable position; I have inserted mine after the last hr element and before the footer.
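
For reference, the insertion point in src/templates/blog-post.js might look roughly like the excerpt below. The surrounding markup in your copy of the starter may differ, and note that JSX uses className rather than class:

<hr />
{/* insertion point for the comments plugin */}
<section className="comments" id="commentContainer"></section>
<footer>
  {/* ... */}
</footer>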

Start the Development Server

Finally, we can start the development server with gatsby develop, which will make our website available locally at http://localhost:8000. Navigating to any post page, like http://localhost:8000/new-beginnings, will reveal the comments section at the insertion point that we specified above.

Create a Comment

We can create a comment using the comment form, and it will provide helpful feedback as we interact with it.

List Comments

To list newly posted comments, we have to restart the server, because our content is static.

Conclusion

In this tutorial, we have introduced Gatsby plugins and demonstrated how to create one.

Our plugin uses different APIs of Gatsby and its own API files to provide comments for our website, illustrating how we can use plugins to add significant functionality to a Gatsby website.

Although we are pulling from a live server, the plugin is saving the comments in JSON files. We could make the plugin load comments on demand from the API server, but that would defeat the notion that our blog is a static website that does not require dynamic content.

The plugin built in this post exists as an npm module, while the full code is on GitHub.


Smashing Editorial (yk)
[post_excerpt] => Gatsby is a React-based static-site generator that has overhauled how websites and blogs are created. It supports the use of plugins to create custom functionality that is not available in the standard installation. In this post, I will introduce Gatsby plugins, discuss the types of Gatsby plugins that exist, differentiate between the forms of Gatsby plugins, and, finally, create a comment plugin that can be used on any Gatsby website, one of which we will install by the end of the tutorial. [post_date_gmt] => 2020-07-06 14:30:00 [post_date] => 2020-07-06 14:30:00 [post_modified_gmt] => 2020-07-06 14:30:00 [post_modified] => 2020-07-06 14:30:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/understanding-plugin-development-gatsby/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/understanding-plugin-development-gatsby/ [syndication_item_hash] => Array ( [0] => b8c36e0b66df6108b64290f380603eb8 [1] => 4fd0d547609d36932fcd52b73c0775f9 [2] => 0708ebd0859c798ff33109642faa61a3 [3] => 498738cfaa3b5e05ff657ec1581bdfa3 [4] => dbf77ac732d43627de9a6eb9f68f5cc6 [5] => 8aae109e919389f6e0bce836d5ac2e3e [6] => 2be0804cfba9f4f5f71ddbd13c9de713 [7] => af2fb5033d18442b5a2349a30b46fd8c [8] => ae56ba781a65226896930ec94065f6df [9] => 50ac168bd3afa80d57202cf356fe920e [10] => 3ad1e8afbd800bfa82bf4c8a7ef6a361 [11] => 9dbf07cd8db2bd91ea98bcd0963cca79 [12] => 4e15758cc0e23ecc317a2f1d5a73aab0 [13] => 42b3488c3ec9b911a51f332be3adba52 [14] => 2e9ff5e8f2cd23333b5e3587b1f0d6be [15] => f024de811e6e57f183e71d99307a3b48 [16] => aaee573f6be87a1cc5f0aa8cce28a58c [17] => 31736b93ae1b64d818687ded92e191b7 [18] => 6ffc47f74195dd66462797c28eaa1929 [19] => caecd752db911d9d2aff7001abd3a838 [20] => 06e4b972c67d45657c7dc7817d497aa0 [21] => ff2a5c5d6b25358d547ba5290293a640 [22] => d3c37594ddb279de71da0d4f57699ce9 [23] => 00012ed40fd2e4f103e76481df9fa5e1 [24] => 8fc66e41d66c6b3b5686f5de35a54453 [25] => 75cce2ec6f19f2eada90a64b13d7ccca [26] => 3f59232735d6bcf055a90413e48ed69f [27] => 46904fe8204c3002e0262bf695842a55 [28] => 2e84edd7f5d8336451450f6a957caea6 [29] => 03bfda0f7766a6a3bfd9326153b33903 [30] => 20bd43c8b2e9c814df35b96229183947 [31] => 60588a4e9a62caa7849e4ce395da04c4 [32] => 1f1de24bc61c08ad8cb2383de8fa578f [33] => b4630171ab402df7b9de64c67e9ce48d [34] => 7142d6258f56dbd0b23a12f1ea3ae9e0 [35] => 6014b90eb5ae952a86b268e7d554c86f [36] => e876917631dd932cab0a5d0adff4cda9 [37] => 7c576b4794b6c73c206ee365646902d7 [38] => dd08dadd21bb571de68c7bcf741b91a1 [39] => 4434ec2e38f94f57f1b0cb9afaee13d3 [40] => bc4a66dece25cc9f125ae9a4926a7586 [41] => bc4a66dece25cc9f125ae9a4926a7586 [42] => 9867620a00b039484490b46176336f56 [43] => 940e7a11a374274b4e108376127338d0 [44] => c790be9e02cb4dda6c7b887633cf8d30 [45] => c6f48bfc6ac2106b889d62af4c6cca26 ) ) [post_type] => post [post_author] => 548 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => understanding-plugin-development-in-gatsby )

FAF deciding on filters on post to be syndicated:

Make Your Sites Fast, Accessible And Secure With Help From Google

Array ( [post_title] => Make Your Sites Fast, Accessible And Secure With Help From Google [post_content] => Make Your Sites Fast, Accessible And Secure With Help From Google

Make Your Sites Fast, Accessible And Secure With Help From Google

Dion Almaer

Earlier this year, the Chrome team announced the Web Vitals initiative to provide unified guidance, metrics, and tools to help developers deliver great user experiences on the web. The Google Search team also recently announced that they will be evaluating page experience as a ranking criteria, and will include Core Web Vitals metrics as its foundation.

The three pillars of 2020 Core Web Vitals are loading, interactivity, and visual stability of page content, which are captured by the following metrics:

An illustration of the three metrics explained: Largest Contentful Paint, First Input Delay and Cumulative Layout Shift
Core Web Vitals 2020 (Large preview)

At web.dev LIVE, we shared best practices on how to optimize for Core Web Vitals and how to use Chrome DevTools to explore your site or app’s vitals values. We also shared plenty of other performance-related talks that you can find at web.dev/live in the Day 1 schedule.

tooling.report

The web is a complex platform and developing for it can be challenging at the best of times. Build tools aim to make a web developer’s life easier, but as a result build tools end up being quite complex themselves.

To help web developers and tooling authors conquer the complexity of the web, we built tooling.report. It’s a website that helps you choose the right build tool for your next project, decide if migrating from one tool to another is worth it, or figure out how to incorporate best practices into your tooling configuration and codebase. We aim to explain the tradeoffs involved when choosing a build tool and document how to follow best practices with any given build tool.

We designed a suite of tests for the report based on what we believe represents the best practices for web development. The tests allow us to determine which build tool lets you follow which best practice, and we worked with the build tool authors to make sure we used their tools correctly and represented them fairly.

An overview and comparison between the well-known build tools webpack v4, Rollup v2, Parcel v2 and Browserify+Gulp
Comparison report of current set of libraries on tooling.report (Large preview)

The initial release of tooling.report covers webpack v4, Rollup v2, and Parcel v2 as well as Browserify+Gulp, which we believe are the most popular build tools right now. We built tooling.report with the flexibility of adding more build tools and additional tests with help from the community.

So if you think a best practice should be tested but is missing, please propose it in a GitHub issue, and if you’re up for adding a new tool we did not include in the initial set, we welcome you to contribute!

Meanwhile, you can read more about our approach towards building tooling.report and watch our session from web.dev LIVE for more.

Latest In Chrome DevTools And Lighthouse 6.0

Most web developers spend a lot of time of their day in their developer tools so we want to ensure that our tools enable greater productivity, whether it’s for debugging or for auditing and fixing issues to improve user experience.

Chrome Devtools: New Issues Tab, Color Deficiency Emulator And Web Vitals support

One of the most powerful features of Chrome DevTools is its ability to spot issues on a webpage and bring them to the developer’s attention — this is most pertinent as we move into the next phase of a privacy-first web. To reduce notification fatigue and clutter of the Console, we’ve launched the “Issues Tab” that focuses on three types of critical issues to start with: Cookie problems, Mixed content and COEP issues. Watch our session on finding and fixing problems with the Issues Tab for more.

A screenshot of the Chrome DevTools timeline where developers can track and measure metrics, performance and more
New Issues Tab in Chrome DevTools (Large preview)

Moreover, with Core Web Vitals becoming one of the most critical sets of metrics that we believe every developer must track and measure, we want to ensure developers are able to easily track how they perform against these thresholds. So we’ve added the three metrics in the Chrome DevTools timeline.

And finally, with an increasing number of developers focusing on accessibility, we also introduced a Color Vision Deficiency Emulator that allows developers to simulate vision deficiencies, including blurred vision as well as various types of color blindness. We’re super excited to bring this feature to developers who are looking to make their websites more color-blind friendly, and you can see more about this and many other features in our session on What’s the latest in DevTools.

A screenshot from YouTube of a session featuring Jake Archibald and Surma
New Color Vision Deficiency Emulator in Chrome DevTools (Large preview)

Lighthouse 6.0: New Metrics, Core Web Vitals Lab Measurement, An Updated Performance Score, And Exciting New Audits

Lighthouse is an open-source automated tool that helps developers improve their site’s performance. In its latest version, we focused on providing insights based on metrics that give you a balanced view of your user experience quality against critical dimensions.

To ensure consistency, we’ve added support for the Core Web Vitals — LCP, TBT (lab equivalent for FID as Lighthouse is a lab tool) and CLS — and removed three old ones: First Meaningful Paint, First CPU Idle, and Max Potential FID. These removals are due to considerations like metric variability and newer metrics offering better reflections of the part of user experience that we’re trying to measure. Additionally, we also made some adjustments to the weights based on user feedback.

We also added a super nifty scoring calculator to help you explore your performance scoring, by providing a comparison between version v5 and v6 scores. When you run an audit with Lighthouse 6.0, the report comes with a link to the calculator with your results populated.

And finally, we added a bunch of useful new audits, with a focus on JavaScript Analysis and accessibility.

All new audits in Lighthouse 6.0 (Large preview)

There are many others that we spoke about at web.dev LIVE — watch the session on What’s latest in speed tooling and the latest in Puppeteer.

During web.dev LIVE, we shared more new features and updates that have come to the web over the past few months. Watch all the sessions to stay up to date and subscribe to the web.dev newsletter if you’d like more such content straight to your inbox.

Smashing Editorial (ef, ra, il)
[post_excerpt] => Earlier this year, the Chrome team announced the Web Vitals initiative to provide unified guidance, metrics, and tools to help developers deliver great user experiences on the web. The Google Search team also recently announced that they will be evaluating page experience as a ranking criteria, and will include Core Web Vitals metrics as its foundation. The three pillars of 2020 Core Web Vitals are loading, interactivity, and visual stability of page content, which are captured by the following metrics: [post_date_gmt] => 2020-07-06 14:00:00 [post_date] => 2020-07-06 14:00:00 [post_modified_gmt] => 2020-07-06 14:00:00 [post_modified] => 2020-07-06 14:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/web-dev-live-google-event-2020/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/web-dev-live-google-event-2020/ [syndication_item_hash] => Array ( [0] => 329e8f446f500e8d7ab70accf4a6fa94 [1] => 78e13a75f81635e47775531fc1aa25f1 [2] => 4403f020d228e801a3a111b968b32ece [3] => 4cb37e14e517cf6227315529e7a439de [4] => c6be589c4896b0097c710bbc3347c77a [5] => 3a396a643b1515ae8b15d6d9f197b7c8 [6] => 78d57d57166aa8f02993f666297af3bf [7] => 321ac71936d17bc15eed265877c2c017 [8] => 3d9f34b477f690c040847f1c3f349e24 [9] => 5336b8b48c8fe40ddafb21900a8b4738 [10] => 6f9857b266fd07b7de2b982f28b0f90c [11] => 7c3d0c963692eb6f7c58d22de1926c74 [12] => 818aded373efae1a11e0f1e7907ba297 [13] => 0dab27002410dfa6e528bf81757a4c1f [14] => 39facfab993d5c783436a6407e412217 [15] => 9961371cde66892b25aa93f867c6131c [16] => bed3330042f4e764971ce1442980a363 [17] => 8e841b9220d58baab10e9d966ca3f222 [18] => ef67c18f0f708c35b266a5588cebea69 [19] => 08ebba6fe0bb33d5bd1f3a7a58f900b4 [20] => 7892807a3135563a401c5b180199a7e4 [21] => 64c54bd12c9db4b456c7e68a742bec47 [22] => d5cd86fafaead49c7f54855cabd48bbc [23] => e6033e022e7188838e74b8531422bfd2 [24] => 24b0f177ffaf7149e09b9473a1a6e5c7 [25] => 7a003806feedf8708a4e1186f9c495fd [26] => 8a67cc884ef48b46302a0813e7d5d005 [27] => c9233288252f23043a8178153c10ee08 [28] => 400a3d19c6d59da8250629a96921122d [29] => 57cd1ca6738ca797a95bb59db50b0af4 [30] => 9868edffaf83669b3b43d728c5542fa0 [31] => c804bcab4fc80cacd1ebb68e1b23283d [32] => 0e4eade9f7dcb267723dfad4279582d0 [33] => d79557b2b37a4cc00c7670e6bc2f28c2 [34] => 0cc8a04ddc24f74734281091ff3a523e [35] => ec79eba3d672dc6edca61d59ff50a476 [36] => 26a5144e7b48734ebf30304a39bb8b2e [37] => 2d64a36d4b920d189606c8e16398eb9a [38] => 3a5970cbb4fb447e3e91df760749f675 [39] => 43a84d68e5a86d76028990c5eba699c9 [40] => 6eaf264539e2ce761d4e1b1a5f01500a [41] => 6eaf264539e2ce761d4e1b1a5f01500a [42] => d1391cb3f35230d2377c76ca06245aea [43] => 269fcaaff26f5b92b7a61941e5f7e721 [44] => 27904b8ffc29d2200bfb7813f58dc6de [45] => f91735020ed325b822840f92029b4c2d ) ) [post_type] => post [post_author] => 605 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => make-your-sites-fast-accessible-and-secure-with-help-from-google )

Decide filter: Returning post, everything seems orderly :Make Your Sites Fast, Accessible And Secure With Help From Google


FAF deciding on filters on post to be syndicated:

How To Test Your React Apps With The React Testing Library

Array ( [post_title] => How To Test Your React Apps With The React Testing Library [post_content] => How To Test Your React Apps With The React Testing Library

How To Test Your React Apps With The React Testing Library

Chidi Orji

Today, we’ll briefly discuss why it’s important to write automated tests for any software project, and shed light on some of the common types of automated testing. We’ll build a to-do list app by following the Test-Driven Development (TDD) approach. I’ll show you how to write both unit and functional tests, and in the process, explain what code mocks are by mocking a few libraries. I’ll be using a combination of RTL and Jest — both of which come pre-installed in any new project created with Create-React-App (CRA).

To follow along, you need to know how to set up and navigate a new React project and how to work with the yarn package manager (or npm). Familiarity with Axios and React-Router is also required.


Why You Should Test Your Code

Before shipping your software to end-users, you first have to confirm that it is working as expected. In other words, the app should satisfy its project specifications.

Just as it is important to test our project as a whole before shipping it to end-users, it’s also essential to keep testing our code during the lifetime of a project. This is necessary for a number of reasons. We may make updates to our application or refactor some parts of our code. A third-party library may undergo a breaking change. Even the browser that is running our web application may undergo breaking changes. In some cases, something stops working for no apparent reason — things could go wrong unexpectedly. Thus, it is necessary to test our code regularly for the lifetime of a project.

Broadly speaking, there are manual and automated software tests. In a manual test, a real user performs actions on our application to verify that it works correctly. This kind of test is less reliable when repeated several times because it’s easy for the tester to miss some details between test runs.

In an automated test, however, a test script is executed by a machine. With a test script, we can be sure that whatever details we set in the script will remain unchanged on every test run.

This kind of test gives us the benefits of being predictable and fast, such that we can quickly find and fix bugs in our code.

Having seen the necessity of testing our code, the next logical question is, what sort of automated tests should we write for our code? Let’s quickly go over a few of them.

Types Of Automated Testing

There are many different types of automated software testing. Some of the most common ones are unit tests, integration tests, functional tests, end-to-end tests, acceptance tests, performance tests, and smoke tests.

  1. Unit test
    In this kind of test, the goal is to verify that each unit of our application, considered in isolation, is working correctly. An example would be testing that a particular function returns an expected value, given some known inputs (see the small example after this list). We’ll see several examples in this article.
  2. Smoke test
    This kind of test is done to check that the system is up and running. For example, in a React app, we could just render our main app component and call it a day. If it renders correctly we can be fairly certain that our app would render on the browser.
  3. Integration test
    This sort of test is carried out to verify that two or more modules can work well together. For example, you might run a test to verify that your server and database are actually communicating correctly.
  4. Functional test
    A functional test exists to verify that the system meets its functional specification. We’ll see an example later.
  5. End-to-end test
    This kind of test involves testing the application the same way it would be used in the real world. You can use a tool like cypress for E2E tests.
  6. Acceptance test
    This is usually done by the business owner to verify that the system meets specifications.
  7. Performance test
    This sort of testing is carried out to see how the system performs under significant load. In frontend development, this is usually about how fast the app loads on the browser.

There’s more here if you’re interested.
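
To make the unit-test idea concrete, here is a tiny, self-contained Jest example. The add function is invented purely for illustration:

// add.test.js — a made-up pure function and its unit test
const add = (a, b) => a + b;

describe("add", () => {
  it("returns the expected value, given some known inputs", () => {
    expect(add(2, 3)).toBe(5);
  });
});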

Why Use React Testing Library?

When it comes to testing React applications, there are a few testing options available, of which the most common ones I know of are Enzyme and React Testing Library (RTL).

RTL is a subset of the @testing-library family of packages. Its philosophy is very simple. Your users don’t care whether you use redux or context for state management. Nor do they care about the simplicity of hooks or the distinction between class and functional components. They just want your app to work in a certain way. It is, therefore, no surprise that the testing library’s primary guiding principle is

“The more your tests resemble the way your software is used, the more confidence they can give you.”

So, whatever you do, have the end-user in mind and test your app just as they would use it.

Choosing RTL gives you a number of advantages. First, it’s much easier to get started with it. Every new React project bootstrapped with CRA comes with RTL and Jest configured. The React docs also recommend it as the testing library of choice. Lastly, the guiding principle makes a lot of sense — functionality over implementation details.

With that out of the way, let’s get started with building a to-do list app, following the TDD approach.

Project Setup

Open a terminal and copy and run the below command.

# start new react project and start the server
npx create-react-app start-rtl && cd start-rtl && yarn start

This should create a new React project and start the server on http://localhost:3000. With the project running, open a separate terminal, run yarn test and then press a. This runs all tests in the project in watch mode. Running the test in watch mode means that the test will automatically re-run when it detects a change in either the test file or the file that is being tested. On the test terminal, you should see something like the picture below:

Initial test passing
Initial test passing. (Large preview)

You should see a lot of green, which indicates that the test we’re running passed with flying colors.

As I mentioned earlier, CRA sets up RTL and Jest for every new React project. It also includes a sample test. This sample test is what we just executed.

When you run the yarn test command, react-scripts calls upon Jest to execute the test. Jest is a JavaScript testing framework that’s used in running tests. You won’t find it listed in package.json but you can do a search inside yarn.lock to find it. You can also see it in node_modules/.

Jest is incredible in the range of functionality it provides: tools for assertions, mocking, spying, and more. I strongly encourage you to take at least a quick tour of the documentation. There’s a lot to learn there that I can only scratch the surface of in this short piece. We’ll be using Jest a lot in the coming sections.
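
To give you a taste of that range, here are an assertion, a mock function, and a spy in one test. This snippet is my own illustration, not part of the CRA template:

describe("a quick tour of Jest utilities", () => {
  it("supports assertions, mocks, and spies", () => {
    // plain assertion
    expect([1, 2, 3]).toContain(2);

    // mock function: we control its return value and can inspect its calls
    const fetchTitle = jest.fn().mockReturnValue("Todo item 0");
    expect(fetchTitle()).toBe("Todo item 0");
    expect(fetchTitle).toHaveBeenCalledTimes(1);

    // spy: wraps an existing method so its calls can be observed and later restored
    const logger = { log: (msg) => msg };
    const spy = jest.spyOn(logger, "log");
    logger.log("hello");
    expect(spy).toHaveBeenCalledWith("hello");
    spy.mockRestore();
  });
});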

Open package.json let’s see what we have there. The section of interest is dependencies.

  "dependencies": {
    "@testing-library/jest-dom": "^4.2.4",
    "@testing-library/react": "^9.3.2",
    "@testing-library/user-event": "^7.1.2",
    ...
  },

We have the following packages installed specifically for testing purpose:

  1. @testing-library/jest-dom: provides custom DOM element matchers for Jest.
  2. @testing-library/react: provides the APIs for testing React apps.
  3. @testing-library/user-event: provides advanced simulation of browser interactions (a quick sketch follows this list).
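
For instance, @testing-library/user-event (version 7, as installed here) lets a test click and type the way a person would. The Toggle component below is made up purely for the demonstration and is not part of our to-do app:

import React from "react";
import { render } from "@testing-library/react";
import userEvent from "@testing-library/user-event";

// a throwaway component, defined inline only for this example
const Toggle = () => {
  const [on, setOn] = React.useState(false);
  return <button onClick={() => setOn(!on)}>{on ? "ON" : "OFF"}</button>;
};

test("flips the label when clicked", () => {
  const { getByText } = render(<Toggle />);
  userEvent.click(getByText("OFF"));
  expect(getByText("ON")).toBeInTheDocument();
});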

Open up App.test.js let’s take a look at its content.

import React from 'react';
import { render } from '@testing-library/react';
import App from './App';

test('renders learn react link', () => {
  const { getByText } = render(<App />);
  const linkElement = getByText(/learn react/i);
  expect(linkElement).toBeInTheDocument();
});

The render method of RTL renders the <App /> component and returns an object which is de-structured for the getByText query. This query finds elements in the DOM by their display text. Queries are the tools for finding elements in the DOM. The complete list of queries can be found here. All of the queries from the testing library are exported by RTL, in addition to the render, cleanup, and act methods. You can read more about these in the API section.

The text is matched with the regular expression /learn react/i. The i flag makes the regular expression case-insensitive. We expect to find the text Learn React in the document.

All of this mimics the behavior a user would experience in the browser when interacting with our app.
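
One detail worth knowing about queries (an aside of mine, not part of the CRA sample): the getBy* variants throw when nothing matches, while the queryBy* variants return null, which is what you want when asserting that something is not on the page:

import React from "react";
import { render } from "@testing-library/react";

test("asserting that an element is absent", () => {
  const { getByText, queryByText } = render(<p>Learn React</p>);
  expect(getByText(/learn react/i)).toBeInTheDocument();
  // queryByText returns null instead of throwing when there is no match
  expect(queryByText(/learn vue/i)).toBeNull();
});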

Let’s start making the changes required by our app. Open App.js and replace the content with the below code.

import React from "react";
import "./App.css";
function App() {
  return (
    <div className="App">
      <header className="App-header">
        <h2>Getting started with React testing library</h2>
      </header>
    </div>
  );
}
export default App;

If you still have the test running, you should see the test fail. Perhaps you can guess why that is the case, but we’ll return to it a bit later. Right now I want to refactor the test block.

Replace the test block in src/App.test.js with the code below:

// use the describe/it pattern
describe("<App />", () => {
  it("Renders <App /> component correctly", () => {
    const { getByText } = render(<App />);
    expect(getByText(/Getting started with React testing library/i)).toBeInTheDocument();
  });
});

This refactor makes no material difference to how our test will run. I prefer the describe and it pattern as it allows me to structure my test file into logical blocks of related tests. The test should re-run and this time it will pass. In case you haven’t guessed it, the fix for the failing test was to replace the learn react text with Getting started with React testing library.

In case you don’t have time to write your own styles you can just copy the one below into App.css.

.App {
  min-height: 100vh;
  text-align: center;
}
.App-header {
  height: 10vh;
  display: flex;
  background-color: #282c34;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  font-size: calc(10px + 2vmin);
  color: white;
}
.App-body {
  width: 60%;
  margin: 20px auto;
}
ul {
  padding: 0;
  display: flex;
  list-style-type: decimal;
  flex-direction: column;
}
li {
  font-size: large;
  text-align: left;
  padding: 0.5rem 0;
}
li a {
  text-transform: capitalize;
  text-decoration: none;
}
.todo-title {
  text-transform: capitalize;
}
.completed {
  color: green;
}
.not-completed {
  color: red;
}

You should already see the page title move up after adding this CSS.

I consider this a good point for me to commit my changes and push to Github. The corresponding branch is 01-setup.

Let’s continue with our project setup. We know we’re going to need some navigation in our app so we need React-Router. We’ll also be making API calls with Axios. Let’s install both.

# install react-router-dom and axios
yarn add react-router-dom axios

Most React apps you’ll build will have to maintain state. There are a lot of libraries available for managing state, but for this tutorial I’ll be using React’s context API and the useContext hook. So let’s set up our app’s context.

Create a new file src/AppContext.js and enter the below content.

import React from "react";
export const AppContext = React.createContext({});

export const AppProvider = ({ children }) => {
  const reducer = (state, action) => {
    switch (action.type) {
      case "LOAD_TODOLIST":
        return { ...state, todoList: action.todoList };
      case "LOAD_SINGLE_TODO":
        return { ...state, activeToDoItem: action.todo };
      default:
        return state;
    }
  };
  const [appData, appDispatch] = React.useReducer(reducer, {
    todoList: [],
    activeToDoItem: { id: 0 },
  });
  return (
    <AppContext.Provider value={{ appData, appDispatch }}>
      {children}
    </AppContext.Provider>
  );
};

Here we create a new context with React.createContext({}), for which the initial value is an empty object. We then define an AppProvider component that accepts children components. It then wraps those children in AppContext.Provider, thus making the { appData, appDispatch } object available to all children anywhere in the render tree.

Our reducer function defines two action types.

  1. LOAD_TODOLIST which is used to update the todoList array.
  2. LOAD_SINGLE_TODO which is used to update activeToDoItem.

appData and appDispatch are both returned from the useReducer hook. appData gives us access to the values in the state while appDispatch gives us a function which we can use to update the app’s state.
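
In other words, inside any component rendered under the provider, typical usage looks roughly like the sketch below. The TodoCount component is illustrative only; the real calls appear in the TodoList component later on:

import React from "react";
import { AppContext } from "./AppContext";

// illustrative only: a component that reads from and writes to the shared state
const TodoCount = () => {
  const { appData, appDispatch } = React.useContext(AppContext);
  // dispatch one of the action types defined in the reducer to update state
  const clear = () => appDispatch({ type: "LOAD_TODOLIST", todoList: [] });
  return <button onClick={clear}>{appData.todoList.length} todos (clear)</button>;
};

export default TodoCount;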

Now open index.js, import the AppProvider component and wrap the <App /> component with <AppProvider />. Your final code should look like what I have below.

import { AppProvider } from "./AppContext";

ReactDOM.render(
  <React.StrictMode>
    <AppProvider>
      <App />
    </AppProvider>
  </React.StrictMode>,
  document.getElementById("root")
);

Wrapping <App /> inside <AppProvider /> makes AppContext available to every child component in our app.

Remember that with RTL, the aim is to test our app the same way a real user would interact with it. This implies that we also want our tests to interact with our app state. For that reason, we also need to make our <AppProvider /> available to our components during tests. Let’s see how to make that happen.

The render method provided by RTL is sufficient for simple components that don’t need to maintain state or use navigation. But most apps require at least one of both. For this reason, it provides a wrapper option. With this wrapper, we can wrap the UI rendered by the test renderer with any component we like, thus creating a custom render. Let’s create one for our tests.

Create a new file src/custom-render.js and paste the following code.

import React from "react";
import { render } from "@testing-library/react";
import { MemoryRouter } from "react-router-dom";

import { AppProvider } from "./AppContext";

const Wrapper = ({ children }) => {
  return (
    <AppProvider>
      <MemoryRouter>{children}</MemoryRouter>
    </AppProvider>
  );
};

const customRender = (ui, options) =>
  render(ui, { wrapper: Wrapper, ...options });

// re-export everything
export * from "@testing-library/react";

// override render method
export { customRender as render };

Here we define a <Wrapper /> component that accepts children components. It then wraps those children inside <AppProvider /> and <MemoryRouter />. MemoryRouter is

A <Router> that keeps the history of your “URL” in memory (does not read or write to the address bar). Useful in tests and non-browser environments like React Native.

We then create our render function, providing it the Wrapper we just defined through its wrapper option. The effect of this is that any component we pass to the render function is rendered inside <Wrapper />, thus having access to navigation and our app’s state.

The next step is to export everything from @testing-library/react. Lastly, we export our custom render function as render, thus overriding the default render.

Note that even if you were using Redux for state management the same pattern still applies.
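
For instance, a Redux version of the same wrapper could look something like the sketch below. The store here is a hypothetical one-liner; in a real app you would import your own reducer and any middleware you use:

import React from "react";
import { render } from "@testing-library/react";
import { MemoryRouter } from "react-router-dom";
import { Provider } from "react-redux";
import { createStore } from "redux";

// hypothetical store; swap the identity reducer for your app's real one
const store = createStore((state = {}) => state);

const ReduxWrapper = ({ children }) => (
  <Provider store={store}>
    <MemoryRouter>{children}</MemoryRouter>
  </Provider>
);

// same idea as our custom render: every tested UI gets the store and a router
const renderWithRedux = (ui, options) =>
  render(ui, { wrapper: ReduxWrapper, ...options });

export { renderWithRedux };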

Let’s now make sure our new render function works. Import it into src/App.test.js and use it to render the <App /> component.

Open App.test.js and replace the import line. This

import { render } from '@testing-library/react';

should become

import { render } from './custom-render';

Does the test still pass? Good job.

There’s one small change I want to make before wrapping up this section. It gets tiring very quickly to have to write const { getByText } and other queries every time. So, I’m going to be using the screen object from the DOM testing library henceforth.

Import the screen object from our custom render file and replace the describe block with the code below.

import { render, screen } from "./custom-render";

describe("<App />", () => {
  it("Renders <App /> component correctly", () => {
    render(<App />);
    expect(
      screen.getByText(/Getting started with React testing library/i)
    ).toBeInTheDocument();
  });
});

We’re now accessing the getByText query from the screen object. Does your test still pass? I’m sure it does. Let’s continue.

If your tests don’t pass you may want to compare your code with mine. The corresponding branch at this point is 02-setup-store-and-render.

Testing And Building The To-Do List Index Page

In this section, we’ll pull to-do items from http://jsonplaceholder.typicode.com/. Our component specification is very simple. When a user visits our app homepage,

  1. show a loading indicator that says Fetching todos while waiting for the response from the API;
  2. display the title of 15 to-do items on the screen once the API call returns (the API call returns 200). Also, each item title should be a link that will lead to the to-do details page.

Following a test-driven approach, we’ll write our test before implementing the component logic. Before doing that we’ll need to have the component in question. So go ahead and create a file src/TodoList.js and enter the following content:

import React from "react";
import "./App.css";
export const TodoList = () => {
  return (
    <div>
    </div>
  );
};

Since we know the component specification we can test it in isolation before incorporating it into our main app. I believe it’s up to the developer at this point to decide how they want to handle this. One reason you might want to test a component in isolation is so that you don’t accidentally break any existing test and then have to fight fires in two locations. With that out of the way, let’s now write the test.

Create a new file src/TodoList.test.js and enter the below code:

import React from "react";
import axios from "axios";
import { render, screen, waitForElementToBeRemoved } from "./custom-render";
import { TodoList } from "./TodoList";
import { todos } from "./makeTodos";

describe("<App />", () => {
  it("Renders <TodoList /> component", async () => {
    render(<TodoList />);
    await waitForElementToBeRemoved(() => screen.getByText(/Fetching todos/i));

    expect(axios.get).toHaveBeenCalledTimes(1);
    todos.slice(0, 15).forEach((td) => {
      expect(screen.getByText(td.title)).toBeInTheDocument();
    });
  });
});

Inside our test block, we render the <TodoList /> component and use the waitForElementToBeRemoved function to wait for the Fetching todos text to disappear from the screen. Once this happens we know that our API call has returned. We also check that an Axios get call was fired once. Finally, we check that each to-do title is displayed on the screen. Note that the it block receives an async function. This is necessary for us to be able to use await inside the function.

Each to-do item returned by the API has the following structure.

{
  id: 0,
  userId: 0,
  title: 'Some title',
  completed: true,
}

We want to return an array of these when we

import { todos } from "./makeTodos"

The only condition is that each id should be unique.

Create a new file src/makeTodos.js and enter the below content. This is the source of todos we’ll use in our tests.

const makeTodos = (n) => {
  // returns n number of todo items
  // default is 15
  const num = n || 15;
  const todos = [];
  for (let i = 0; i < num; i++) {
    todos.push({
      id: i,
      userId: i,
      title: `Todo item ${i}`,
      completed: [true, false][Math.floor(Math.random() * 2)],
    });
  }
  return todos;
};

export const todos = makeTodos(200);

This function simply generates a list of n to-do items. The completed value is set by randomly choosing between true and false.

Unit tests are supposed to be fast. They should run within a few seconds. Fail fast! This is one of the reasons why letting our tests make actual API calls is impractical. To avoid this we mock such unpredictable API calls. Mocking simply means replacing a function with a fake version, thus allowing us to customize the behavior. In our case, we want to mock the get method of Axios to return whatever we want it to. Jest already provides mocking functionality out of the box.

Let’s now mock Axios so it returns this list of to-dos when we make the API call in our test. Create a file src/__mocks__/axios.js and enter the below content:

import { todos } from "../makeTodos";

export default {
  get: jest.fn().mockImplementation((url) => {
    switch (url) {
      case "https://jsonplaceholder.typicode.com/todos":
        return Promise.resolve({ data: todos });
      default:
        throw new Error(`UNMATCHED URL: ${url}`);
    }
  }),
};

When the test starts, Jest automatically finds this __mocks__ folder and, instead of using the actual Axios from node_modules/ in our tests, uses this one. At this point, we’re only mocking the get method using Jest’s mockImplementation method. Similarly, we can mock other Axios methods like post and patch, as well as properties like interceptors and defaults. Right now they’re all undefined, and any attempt to access axios.post, for example, would result in an error.

Note that we can customize what to return based on the URL the Axios call receives. Also, Axios calls return a promise which resolves to the actual data we want, so we return a promise with the data we want.
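
As an aside, if a later test ever needed to create a to-do, the same mock file could grow a post implementation alongside get. This is only a hypothetical sketch; the tutorial’s tests don’t need it:

import { todos } from "../makeTodos";

export default {
  get: jest.fn().mockImplementation((url) => {
    switch (url) {
      case "https://jsonplaceholder.typicode.com/todos":
        return Promise.resolve({ data: todos });
      default:
        throw new Error(`UNMATCHED URL: ${url}`);
    }
  }),
  // hypothetical: echo the posted body back with a fake id, like a created resource
  post: jest.fn().mockImplementation((url, body) =>
    Promise.resolve({ data: { id: todos.length + 1, ...body } })
  ),
};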

At this point, we have one passing test and one failing test. Let’s implement the component logic.

Open src/TodoList.js let’s build out the implementation piece by piece. Start by replacing the code inside with this one below.

import React from "react";
import axios from "axios";
import { Link } from "react-router-dom";
import "./App.css";
import { AppContext } from "./AppContext";

export const TodoList = () => {
  const [loading, setLoading] = React.useState(true);
  const { appData, appDispatch } = React.useContext(AppContext);

  React.useEffect(() => {
    axios.get("https://jsonplaceholder.typicode.com/todos").then((resp) => {
      const { data } = resp;
      appDispatch({ type: "LOAD_TODOLIST", todoList: data });
      setLoading(false);
    });
  }, [appDispatch, setLoading]);

  return (
    <div>
      // next code block goes here
    </div>
  );
};

We import AppContext and de-structure appData and appDispatch from the return value of React.useContext. We then make the API call inside a useEffect block. Once the API call returns, we set the to-do list in state by firing the LOAD_TODOLIST action. Finally, we set the loading state to false to reveal our to-dos.

Now enter the final piece of code.

{loading ? (
  <p>Fetching todos</p>
) : (
  <ul>
    {appData.todoList.slice(0, 15).map((item) => {
      const { id, title } = item;
      return (
        <li key={id}>
          <Link to={`/item/${id}`} data-testid={id}>
            {title}
          </Link>
        </li>
      );
    })}
  </ul>
)}

We slice appData.todoList to get the first 15 items. We then map over those and render each one in a <Link /> tag so we can click on it and see the details. Note the data-testid attribute on each Link. This should be a unique ID that will aid us in finding individual DOM elements. In a case where we have similar text on the screen, we should never have the same ID for any two elements. We’ll see how to use this a bit later.

My tests now pass. Does yours pass? Great.

Let’s now incorporate this component into our render tree. Open up App.js let’s do that.

First things. Add some imports.

import { BrowserRouter, Route } from "react-router-dom";
import { TodoList } from "./TodoList";

We need BrowserRouter for navigation and Route for rendering each component in each navigation location.

Now add the below code after the <header /> element.

<div className="App-body">
  <BrowserRouter>
    <Route exact path="/" component={TodoList} />
  </BrowserRouter>
</div>

This simply tells the router to render the <TodoList /> component when we’re on the root location, /. Once this is done, our tests still pass, but you should see some error messages in your console warning you about act. You should also see that the <TodoList /> component seems to be the culprit here.

Terminal showing act warnings
Terminal showing act warnings. (Large preview)

Since we’re sure that our TodoList component by itself is okay, we have to look at the App component, inside of which is rendered the <TodoList /> component.

This warning may seem complex at first but it is telling us that something is happening in our component that we’re not accounting for in our test. The fix is to wait for the loading indicator to be removed from the screen before we proceed.

Open up App.test.js and update the code to look like so:

import React from "react";
import { render, screen, waitForElementToBeRemoved } from "./custom-render";
import App from "./App";
describe("<App />", () => {
  it("Renders <App /> component correctly", async () => {
    render(<App />);
    expect(
      screen.getByText(/Getting started with React testing library/i)
    ).toBeInTheDocument();
    await waitForElementToBeRemoved(() => screen.getByText(/Fetching todos/i));
  });
});

We’ve made two changes. First, we changed the function in the it block to an async function. This is a necessary step to allow us to use await in the function body. Secondly, we wait for the Fetching todos text to be removed from the screen. And voilà! The warning is gone. Phew! I strongly advise that you bookmark this post by Kent Dodds for more on this act warning. You’re gonna need it.

Now open the page in your browser and you should see the list of to-dos. You can click on an item if you like, but it won’t show you anything because our router doesn’t yet recognize that URL.

For comparison, the branch of my repo at this point is 03-todolist.

Let’s now add the to-do details page.

Testing And Building The Single To-Do Page

To display a single to-do item we’ll follow a similar approach. The component specification is simple. When a user navigates to a to-do page:

  1. display a loading indicator that says Fetching todo item id where id represents the to-do’s id, while the API call to https://jsonplaceholder.typicode.com/todos/item_id runs.
  2. When the API call returns, show the following information:
    • Todo item title
    • Added by: userId
    • This item has been completed if the to-do has been completed or
    • This item is yet to be completed if the to-do has not been completed.

Let’s start with the component. Create a file src/TodoItem.js and add the following content.

import React from "react";
import { useParams } from "react-router-dom";

import "./App.css";

export const TodoItem = () => {
  const { id } = useParams()
  return (
    <div className="single-todo-item">
    </div>
  );
};

The only thing new to us in this file is the const { id } = useParams() line. This is a hook from react-router-dom that lets us read URL parameters. This id is going to be used in fetching a to-do item from the API.
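
To make that concrete, here is a small illustration with hypothetical values (the matching route is only added to App.js later in this article):

// Given a route such as:  <Route path="/item/:id" component={TodoItem} />
// and a visit to:         /item/3
const { id } = useParams(); // { id: "3" } (note: URL params arrive as strings)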

This situation is a bit different because we’re going to be reading the id from the location URL. We know that when a user clicks a to-do link, the id will show up in the URL which we can then grab using the useParams() hook. But here we’re testing the component in isolation which means that there’s nothing to click, even if we wanted to. To get around this we’ll have to mock react-router-dom, but only some parts of it. Yes. It’s possible to mock only what we need to. Let’s see how it’s done.

Create a new mock file src/__mocks__/react-router-dom.js. Now paste in the following code:

module.exports = {
  ...jest.requireActual("react-router-dom"),
  useParams: jest.fn(),
};

By now you should have noticed that when mocking a module we have to use the exact module name as the mock file name.

Here, we use the module.exports syntax because react-router-dom has mostly named exports. (I haven’t come across a default export in the time I’ve been working with it; if there is one, kindly share it with me in the comments.) This is unlike Axios, where everything is bundled as methods in one default export.

We first spread the actual react-router-dom, then replace the useParams hook with a Jest function. Since this is a Jest function, we can modify it anytime we want. Keep in mind that we’re only mocking the part we need to, because if we mocked everything, we would lose the implementation of MemoryRouter, which is used in our custom render function.

Let’s start testing!

Now create src/TodoItem.test.js and enter the below content:

import React from "react";
import axios from "axios";
import { render, screen, waitForElementToBeRemoved } from "./custom-render";
import { useParams, MemoryRouter } from "react-router-dom";
import { TodoItem } from "./TodoItem";

describe("<TodoItem />", () => {
  it("can tell mocked from unmocked functions", () => {
    expect(jest.isMockFunction(useParams)).toBe(true);
    expect(jest.isMockFunction(MemoryRouter)).toBe(false);
  });
});

Just like before, we have all our imports, followed by the describe block. Our first case is there only to demonstrate that we’re mocking just what we need to. Jest’s isMockFunction can tell whether a function is mocked or not. Both expectations pass, confirming that we have a mock exactly where we want it.

Add the below test case for when a to-do item has been completed.

  it("Renders <TodoItem /> correctly for a completed item", async () => {
    useParams.mockReturnValue({ id: 1 });
    render(<TodoItem />);

    await waitForElementToBeRemoved(() =>
      screen.getByText(/Fetching todo item 1/i)
    );

    expect(axios.get).toHaveBeenCalledTimes(1);
    expect(screen.getByText(/todo item 1/)).toBeInTheDocument();
    expect(screen.getByText(/Added by: 1/)).toBeInTheDocument();
    expect(
      screen.getByText(/This item has been completed/)
    ).toBeInTheDocument();
  });

The very first thing we do is mock the return value of useParams. We want it to return an object with an id property whose value is 1. When the component reads this id, it ends up requesting the following URL: https://jsonplaceholder.typicode.com/todos/1. Keep in mind that we have to add a case for this URL in our Axios mock, or it will throw an error. We will do that in just a moment.
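
For reference, this is how the id flows into the request inside the component implementation we’ll write shortly (the same call appears in full further down):

// Inside TodoItem: the id returned by useParams() feeds straight into the request URL
axios.get(`https://jsonplaceholder.typicode.com/todos/${id}`); // here the URL becomes .../todos/1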

We now know for sure that calling useParams() will return the object { id: 1 } which makes this test case predictable.

As with previous tests, we wait for the loading indicator, Fetching todo item 1, to be removed from the screen before making our expectations. We expect to see the to-do title, the id of the user who added it, and a message indicating its completion status.

Open src/__mocks__/axios.js and add the following case to the switch block.

      case "https://jsonplaceholder.typicode.com/todos/1":
        return Promise.resolve({
          data: { id: 1, title: "todo item 1", userId: 1, completed: true },
        });

When this URL is matched, a promise with a completed to-do is returned. Of course, this test case fails, since we haven’t implemented the component logic yet. Go ahead and add a test case for when the to-do item has not been completed.

  it("Renders <TodoItem /> correctly for an uncompleted item", async () => {
    useParams.mockReturnValue({ id: 2 });
    render(<TodoItem />);
    await waitForElementToBeRemoved(() =>
      screen.getByText(/Fetching todo item 2/i)
    );
    expect(axios.get).toHaveBeenCalledTimes(2);
    expect(screen.getByText(/todo item 2/)).toBeInTheDocument();
    expect(screen.getByText(/Added by: 2/)).toBeInTheDocument();
    expect(
      screen.getByText(/This item is yet to be completed/)
    ).toBeInTheDocument();
  });

This is the same as the previous case; the only differences are the to-do’s id, the userId, and the completion status. Note that we now expect axios.get to have been called twice: the same mock is shared by every test in this file, so the recorded calls accumulate across test cases. When we enter the component, we’ll need to make an API call to the URL https://jsonplaceholder.typicode.com/todos/2. Go ahead and add a matching case statement to the switch block of our Axios mock.

case "https://jsonplaceholder.typicode.com/todos/2":
  return Promise.resolve({
    data: { id: 2, title: "todo item 2", userId: 2, completed: false },
  });

When the URL is matched, a promise with an uncompleted to-do is returned.
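
If you’d rather each test case start from a fresh call count instead of the accumulating one noted above, one option (not used in this article) is to clear the mock between tests:

// Optional: wipe recorded calls before each test so every case could
// assert expect(axios.get).toHaveBeenCalledTimes(1) on its own.
beforeEach(() => {
  axios.get.mockClear();
});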

Both test cases are failing. Now let’s add the component implementation to make them pass.

Open src/TodoItem.js and update the code to the following:

import React from "react";
import axios from "axios";
import { useParams } from "react-router-dom";
import "./App.css";
import { AppContext } from "./AppContext";

export const TodoItem = () => {
  const { id } = useParams();
  const [loading, setLoading] = React.useState(true);
  const {
    appData: { activeToDoItem },
    appDispatch,
  } = React.useContext(AppContext);

  const { title, completed, userId } = activeToDoItem;
  React.useEffect(() => {
    axios
      .get(`https://jsonplaceholder.typicode.com/todos/${id}`)
      .then((resp) => {
        const { data } = resp;
        appDispatch({ type: "LOAD_SINGLE_TODO", todo: data });
        setLoading(false);
      });
  }, [id, appDispatch]);
  return (
    <div className="single-todo-item">
      {/* next code block goes here */}
    </div>
  );
};

As with the <TodoList /> component, we import AppContext. We read activeToDoItem from it, then read the to-do’s title, userId, and completion status from that item. After that, we make the API call inside a useEffect block. When the API call returns, we set the active to-do in state by firing the LOAD_SINGLE_TODO action. Finally, we set our loading state to false to reveal the to-do details.
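
As a reminder, this is the reducer case in src/AppContext.js (set up earlier) that handles that dispatch:

// In the reducer inside src/AppContext.js
case "LOAD_SINGLE_TODO":
  return { ...state, activeToDoItem: action.todo };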

Let’s add the final piece of code inside the return div:

{loading ? (
  <p>Fetching todo item {id}</p>
) : (
  <div>
    <h2 className="todo-title">{title}</h2>
    <h4>Added by: {userId}</h4>
    {completed ? (
      <p className="completed">This item has been completed</p>
    ) : (
      <p className="not-completed">This item is yet to be completed</p>
    )}
  </div>
)}

Once this is done, all tests should pass. Yay! We have another winner.

Our component tests now pass, but we still haven’t added the component to our main app. Let’s do that.

Open src/App.js and add the import line:

import { TodoItem } from './TodoItem'

Add the TodoItem route above the TodoList route. Be sure to preserve the order shown below.

# preserve this order
<Route path="/item/:id" component={TodoItem} />
<Route exact path="/" component={TodoList} />
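
For reference, after these changes the render tree in App.js should look roughly like this (assembled from the pieces shown so far):

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <h2>Getting started with React testing library</h2>
      </header>
      <div className="App-body">
        <BrowserRouter>
          <Route path="/item/:id" component={TodoItem} />
          <Route exact path="/" component={TodoList} />
        </BrowserRouter>
      </div>
    </div>
  );
}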

Open your project in your browser and click on a to-do. Does it take you to the to-do page? Of course, it does. Good job.

In case you’re having any problem, you can check out my code at this point from the 04-test-todo branch.

Phew! This has been a marathon, but bear with me; there’s one last point I’d like us to touch on. Let’s quickly write a test case for when a user visits our app and then proceeds to click on a to-do link. This is a functional test that mimics how our app should work, and in practice it’s all the testing this app needs: it ticks every box in our app specification.

Open App.test.js and add a new test case. The code is a bit long so we’ll add it in two steps.

import axios from "axios"; // needed for the mock override below
import userEvent from "@testing-library/user-event";
import { todos } from "./makeTodos";

jest.mock("react-router-dom", () => ({
  ...jest.requireActual("react-router-dom"),
}));

describe("<App />"
  ...
  // previous test case
  ...

  it("Renders todos, and I can click to view a todo item", async () => {
    render(<App />);
    await waitForElementToBeRemoved(() => screen.getByText(/Fetching todos/i));
    todos.slice(0, 15).forEach((td) => {
      expect(screen.getByText(td.title)).toBeInTheDocument();
    });
    // click on a todo item and test the result
    const { id, title, completed, userId } = todos[0];
    axios.get.mockImplementationOnce(() =>
      Promise.resolve({
        data: { id, title, userId, completed },
      })
    );
    userEvent.click(screen.getByTestId(String(id)));
    await waitForElementToBeRemoved(() =>
      screen.getByText(`Fetching todo item ${String(id)}`)
    );

    // next code block goes here
  });
});

We have three imports here, of which userEvent is new; axios is also imported now because we’re about to override its mock in this test. According to the docs,

“user-event is a companion library for the React Testing Library that provides a more advanced simulation of browser interactions than the built-in fireEvent method.”

Yes. There is a fireEvent method for simulating user events. But userEvent is what you want to be using henceforth.
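
For comparison, here is the same click expressed both ways; fireEvent is re-exported by React Testing Library (and by our custom-render file), and the lookup mirrors the test above:

// Low-level: dispatches a single click event on the element
fireEvent.click(screen.getByTestId(String(id)));

// Preferred: simulates the fuller sequence of events a real user's click produces
userEvent.click(screen.getByTestId(String(id)));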

Before we start the testing process, we need to restore the original useParams hook. This is necessary because we want to test actual behavior, so we should mock as little as possible. Jest provides the requireActual method, which returns the original react-router-dom module.

Note that we must do this before we enter the describe block; otherwise, Jest would ignore it. The documentation states that requireActual:

“...returns the actual module instead of a mock, bypassing all checks on whether the module should receive a mock implementation or not.”

Once this is done, Jest bypasses every other check and ignores the mocked version of react-router-dom.

As usual, we render the <App /> component and wait for the Fetching todos loading indicator to disappear from the screen. We then check for the presence of the first 15 to-do items on the page.

Once we’re satisfied with that, we grab the first item in our to-do list. To prevent any chance of a URL collision with our global Axios mock, we override the global mock with Jest’s mockImplementationOnce. This mocked value is valid for one call to the Axios get method. We then grab a link by its data-testid attribute and fire a user click event on that link. Then we wait for the loading indicator for the single to-do page to disappear from the screen.

Now finish the test by adding the below expectations in the position indicated.

expect(screen.getByText(title)).toBeInTheDocument();
expect(screen.getByText(`Added by: ${userId}`)).toBeInTheDocument();
switch (completed) {
  case true:
    expect(
      screen.getByText(/This item has been completed/)
    ).toBeInTheDocument();
    break;
  case false:
    expect(
      screen.getByText(/This item is yet to be completed/)
    ).toBeInTheDocument();
    break;
  default:
    throw new Error("No match");
}

We expect to see the to-do title and the user who added it. Finally, since we can’t know the to-do’s completion status ahead of time, we use a switch block to handle both cases, and we throw an error if neither matches.
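
An equivalent, more compact way to express the same check is a conditional instead of a switch (same messages, purely a style choice):

const statusMessage = completed
  ? /This item has been completed/
  : /This item is yet to be completed/;
expect(screen.getByText(statusMessage)).toBeInTheDocument();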

You should have 6 passing tests and a functional app at this point. In case you’re having trouble, the corresponding branch in my repo is 05-test-user-action.

Conclusion

Phew! That was some marathon. If you made it to this point, congratulations. You now have almost all you need to write tests for your React apps. I strongly advise that you read CRA’s testing docs and RTL’s documentation. Both are relatively short and direct.

I strongly encourage you to start writing tests for your React apps, no matter how small, even if it’s just smoke tests to make sure your components render. You can incrementally add more test cases over time.
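
For instance, a bare-bones smoke test needs nothing more than a render call. The sketch below assumes a hypothetical <Header /> component in your own project:

import React from "react";
import { render } from "@testing-library/react";
import { Header } from "./Header"; // hypothetical component in your app

it("renders <Header /> without crashing", () => {
  render(<Header />);
});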

Smashing Editorial (ks, ra, yk, il)
[post_excerpt] => Today, we’ll briefly discuss why it’s important to write automated tests for any software project, and shed light on some of the common types of automated testing. We’ll build a to-do list app by following the Test-Driven Development (TDD) approach. I’ll show you how to write both unit and functional tests, and in the process, explain what code mocks are by mocking a few libraries. I’ll be using a combination of RTL and Jest — both of which come pre-installed in any new project created with Create-React-App (CRA). [post_date_gmt] => 2020-07-03 12:30:00 [post_date] => 2020-07-03 12:30:00 [post_modified_gmt] => 2020-07-03 12:30:00 [post_modified] => 2020-07-03 12:30:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/react-apps-testing-library/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/react-apps-testing-library/ [syndication_item_hash] => Array ( [0] => 45ce8ebc0e754dc76b43a88a5976ef32 [1] => 023fdb387ca5d085a38a5406c5d48647 [2] => 3e05daa986703e2cfdbb7293be42bee1 [3] => 2109de89aced5eab55476db388e1949f [4] => 53480fde75af5bcd0c83fe28d1795b0d [5] => 3682f94d003873b688a80dadc0e9eb93 [6] => f8e1895f1ddf1d4bb6e2cff4364e0a92 [7] => f1e94444f8866bdaeecad93b449b9660 [8] => 153ff66f0a7b49da28b23a7c027cb028 [9] => 1c2bc7013a3517b21a643b2dfd4f48ba [10] => bb60fdee96a38ca114f602ab78de6902 [11] => c37707798e0f8cd24d89d3bc43f8615c [12] => 3c5e185631ad266df0613a5e49f3c2a5 [13] => 4a5feb227b8477be620133c84b90b4a8 [14] => 2e2ebeb7a9a9d2408cce4a4964185d73 [15] => 8e96014edd85669c5a270bc82a6f97ce [16] => 20a4e2eb5c3160bbcbb30fec747d4b8b [17] => 6647f9c8a70ea42a2534ed610ff018cd [18] => f8ebd5029e3a73df267c97708646ad3a [19] => f160c474229ae770b7929584d64153ee [20] => 2024f1ebc9940ddcca74684038285f70 [21] => e3121767a42ae9cffa83788fad4f5d65 [22] => ea47685124281f2c114555d680711d50 [23] => 2abbb0c7cb3029d820cbc4551b48789f [24] => 03abfe67876a4b6d3fdd2e96d6f26fa8 [25] => a2c8dea4c6743d30d87ba778aebcdce3 [26] => 275a838cf98d104fd1a36317900ed23e [27] => 75c2b3cf4e1c9cd1dfbfb74cd0b8a035 [28] => 3bace8972ce054d74e9fc3c9d3c50a1f [29] => b9ff4175d95343819ffe30c631431909 [30] => c9b032bd9cf717488ca9fb9f29a0c9ee [31] => 25768726288d13384dfaf414ebfd912d [32] => 2dbd8fe69316699d388213e98ba8c715 [33] => 92d446d27d8133128cec15bc58bbc131 [34] => d3ab9a1d4d001473649f295eb99a9b43 [35] => 1638c88784fc7d1c9c73f2cf748f3a6b [36] => 5a6cd38a56f056286e360e4bcaef76aa [37] => 81decd0787bf9cd9480d80b672d09a8f [38] => cf6cf6dd3dc8ae8cb410b9348a689930 [39] => 9a1a6b9bd4a2d2b87a758e9be58efe34 [40] => 437d7e5a94c025d0292fb88ecc779142 [41] => d7e2b46a267359859b14b193e24cbe42 [42] => 8bf22f88b95d5a603b3c4ec36e3392ff [43] => f569a6df80d3dfb101746148fc1c46b0 [44] => 9ef8adf70ef6873f8ae547f5e89e9c45 [45] => c9e3f657c30416900ae716df1e1b2f2f [46] => 6fecdb75836ef0d9b32ef376ea4f3326 [47] => 0a52741efa99c9aada0b7fa5d642fbe8 [48] => bf727c4e3725f3543e8a27c1ade4a412 [49] => ff831d4557c226b625a9d934fa7c340a [50] => 08a4b7c3c768b0d2d55e9596d3beeebb [51] => f3f0f2fc01f88a7339a4ec7d8cce3936 [52] => 0e8e365d5b2c1cc10a6e8569757eadd4 [53] => 
2ef294c746e7e5e81e5e73b1142d436b [54] => 89340ea21964924d0fb9f60ff8caa75a [55] => a1dae87af61dada5e72c93c7398c1a8e [56] => 4267fd8d4dd14b93172a8f8d2a4fe6ae [57] => 46fa48284e92df14181df08edaf99adc [58] => 20847827b43d9fbac18491ef973b9235 [59] => 762d05f411562dbbe783195cb8a5720a [60] => 762d05f411562dbbe783195cb8a5720a [61] => 456572e493ab60e47442fbea9048a5de [62] => b7604c930b36e17eb3fbb01606f606d3 [63] => c2c9772b4e36c00274d0c1a9d0d74793 [64] => cca648d3bc7e66389689d40cd8df2c45 ) ) [post_type] => post [post_author] => 494 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => how-to-test-your-react-apps-with-the-react-testing-library )

Decide filter: Returning post, everything seems orderly :How To Test Your React Apps With The React Testing Library

2ef294c746e7e5e81e5e73b1142d436b [54] => 89340ea21964924d0fb9f60ff8caa75a [55] => a1dae87af61dada5e72c93c7398c1a8e [56] => 4267fd8d4dd14b93172a8f8d2a4fe6ae [57] => 46fa48284e92df14181df08edaf99adc [58] => 20847827b43d9fbac18491ef973b9235 [59] => 762d05f411562dbbe783195cb8a5720a [60] => 762d05f411562dbbe783195cb8a5720a [61] => 456572e493ab60e47442fbea9048a5de [62] => b7604c930b36e17eb3fbb01606f606d3 [63] => c2c9772b4e36c00274d0c1a9d0d74793 [64] => cca648d3bc7e66389689d40cd8df2c45 ) ) [post_type] => post [post_author] => 494 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => how-to-test-your-react-apps-with-the-react-testing-library )

FAF deciding on filters on post to be syndicated:

Differences Between Static Generated Sites And Server-Side Rendered Apps

Array ( [post_title] => Differences Between Static Generated Sites And Server-Side Rendered Apps [post_content] => Differences Between Static Generated Sites And Server-Side Rendered Apps

Differences Between Static Generated Sites And Server-Side Rendered Apps

Timi Omoyeni

With JavaScript, you can currently build three types of applications: Single Page Applications (SPAs), pre-rendered/statically generated sites, and server-side rendered applications. SPAs come with many challenges, one of which is Search Engine Optimization (SEO). Possible solutions are to make use of Static Site Generators or Server-Side Rendering (SSR).

In this article, I’m going to explain them alongside listing their pros and cons so you have a balanced view. We’re going to look at what static generation/pre-rendering is, as well as frameworks such as Gatsby and VuePress that help in creating statically generated sites. We’re also going to look at what server-side rendered (SSR) applications are, as well as frameworks like Next.js and Nuxt.js that can help you create SSR applications. Finally, we’re going to cover the differences between these two methods and which of them you should use when building your next application.

Note: You can find all the code snippets in this article on GitHub.

What Is A Static Site Generator?

A Static Site Generator (SSG) is a software application that creates HTML pages from templates or components and a given content source. You give it some text files and content, and the generator gives you back a complete website; this completed website is referred to as a statically generated site. This means that your site’s pages are generated at build time, and your site’s content does not change unless you add new content or components and rebuild the site.

Diagram explaining how static site generation works
How static site generation works (Large preview)
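To make the “build time” idea concrete, here is a deliberately tiny, illustrative Node.js script (not taken from any of the tools below; the content folder and file names are assumptions) that behaves like a static site generator: it reads text files once at build time and writes finished HTML files that can be served as-is:

const fs = require("fs");
const path = require("path");

const contentDir = "./content"; // assumed folder of Markdown-ish text files
const outDir = "./public";      // the "built" site ends up here

fs.mkdirSync(outDir, { recursive: true });

for (const file of fs.readdirSync(contentDir)) {
  const text = fs.readFileSync(path.join(contentDir, file), "utf8");
  const title = text.split("\n")[0].replace(/^#\s*/, ""); // first line becomes the title
  const html = `<html><head><title>${title}</title></head><body><pre>${text}</pre></body></html>`;
  fs.writeFileSync(path.join(outDir, file.replace(/\.md$/, ".html")), html);
}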

This approach is good for building applications whose content does not change too often: sites where the content does not need to change depending on the user, and sites that do not have a lot of user-generated content. An example of such a site is a blog or a personal website. Let’s look at some advantages of using statically generated sites.

Pros

Cons

Examples of static site generators are GatsbyJS and VuePress. Let us take a look at how to create static sites using these two generators.

Gatsby

According to their official website,

“Gatsby is a free and open-source framework based on React that helps developers build blazing-fast websites and apps.”

This means developers familiar with React would find it easy to get started with Gatsby.

To use this generator, you first have to install it using NPM:

npm install -g gatsby-cli

This installs the Gatsby CLI globally on your machine; you only have to run this command once. After the installation is complete, you can create your first static site using the following command.

gatsby new demo-gatsby

This command will create a new Gatsby project that I have named demo-gatsby. When this is done, you can start up your app server by running the following command:

cd demo-gatsby
gatsby develop

Your Gatsby application should be running on localhost:8000.

Gatsby default landing page
Gatsby default starter page (Large preview)

The folder structure for this app looks like this:

--| gatsby-browser.js  
--| LICENSE        
--| README.md
--| gatsby-config.js
--| node_modules/  
--| src/
----| components
----| pages
----| images
--| gatsby-node.js     
--| package.json   
--| yarn.lock
--| gatsby-ssr.js      
--| public/
----| icons
----| page-data
----| static

For this tutorial, we’re only going to look at the src/pages folder. This folder contains files that would be generated into routes on your site.

To test this, let us add a new file (newPage.js) to this folder:

import React from "react"
import { Link } from "gatsby"
import Layout from "../components/layout"
import SEO from "../components/seo"
const NewPage = () => (
  <Layout>
    <SEO title="My New Page" />
    <h1>Hello Gatsby</h1>
    <p>This is my first Gatsby Page</p>
    <button>
      <Link to='/'>Home</Link>
    </button>
  </Layout>
)
export default NewPage

Here, we import React from the react package so that when your JSX is transpiled to plain JavaScript, the resulting React.createElement calls have React in scope. We also import the Link component from gatsby; this is Gatsby’s client-side routing component, used in place of the native anchor tag (<a href='#'>Link</a>). It accepts a to prop that takes a route as its value.

We import a Layout component that was added to your app by default. This component handles the layout of pages nested inside it. We also import the SEO component into this new file. This component accepts a title prop and configures this value as part of your page’s metadata. Finally, we export the function NewPage, which returns JSX containing your new page’s content.

And in your index.js file, add a link to this new page we just created:

import React from "react"
import { Link } from "gatsby"
import Layout from "../components/layout"
import Image from "../components/image"
import SEO from "../components/seo"
const IndexPage = () => (
  <Layout>
    <SEO title="Home" />
    <h1>Hi people</h1>
    <p>Welcome to your new Gatsby site.</p>
    <p>Now go build something great.</p>
    <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
      <Image />
    </div>
    <Link to="/page-2/">Go to page 2</Link>
    {/* new link */}
    <button>
      <Link to="/newPage/">Go to New Page</Link>
    </button>
  </Layout>
)
export default IndexPage

Here, we import the same components that were used in the newPage.js file, and they perform the same function here. We also import an Image component from our components folder. This component is added by default to your Gatsby application, and it helps with lazy loading images and serving reduced file sizes. Finally, we export a function IndexPage that returns JSX containing our new link and some default content.

Now, if we open our browser, we should see our new link at the bottom of the page.

Gatsby default landing page with link to a new page
Gatsby landing page with new link (Large preview)

And if you click on Go To New Page, it should take you to your newly added page.

New page containing some texts
New gatsby page (Large preview)

VuePress

VuePress is a static site generator that is powered by Vue, Vue Router and Webpack. It requires little to no configuration for you to get started with it. While there are a number of tools that are static site generators, VuePress stands out from amongst the pack for a single reason: its primary directive is to make it easier for developers to create and maintain great documentation for their projects.

To use VuePress, you first have to install it:

# globally
yarn global add vuepress # OR npm install -g vuepress

# in an existing project
yarn add -D vuepress # OR npm install -D vuepress

Once the installation process is done, you can run the following command in your terminal:

# create the project folder
mkdir demo-vuepress && cd demo-vuepress

# create a markdown file
echo '# Hello VuePress' > README.md

# start writing
vuepress dev

Here, we create a folder for our VuePress application, add a README.md file with # Hello VuePress as the only content inside this file, and finally, start up our server.

When this is done, our application should be running on localhost:8080 and we should see this in our browser:

A VuePress webpage with a text saying ‘Hello VuePress’
VuePress landing page (Large preview)

VuePress supports VueJS syntax and markup inside this file. Update your README.md file with the following:

# Hello VuePress
_VuePress Rocks_
> **Yes!**
_It supports JavaScript interpolation code_
> **{{new Date()}}**
<p v-for="i of ['v','u', 'e', 'p', 'r', 'e', 's', 's']">{{i}}</p>

If you go back to your browser, your page should look like this:

Updated VuePress page
Updated Vuepress page (Large preview)

To add a new page to your VuePress site, you add a new markdown file to the root directory and name it whatever you want the route to be. In this case, I’ve gone ahead and named it Page-2.md, and added the following to the file:

# hello World
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat.

And now, if you navigate to /page-2 in your browser, you should see this:

A VuePress webpage containing hello world
A “Hello World” page in VuePress (Large preview)

What Is Server-Side Rendering (SSR)?

Server-Side Rendering (SSR) is the process of rendering web pages on the server and passing them to the browser (client side) instead of rendering them in the browser. The server sends a fully rendered page to the client; the client’s JavaScript bundle then takes over and allows the SPA framework to operate.

This means that if you have an application that is server-side rendered, the content is fetched on the server and passed to the browser to display to your user. Client-side rendering is different: the browser has to navigate to the page first and then fetch data from your server, which means your user has to wait a few seconds before they’re served the content on that page. Applications that have SSR enabled are called server-side rendered applications.
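As a hedged illustration of that per-request flow (this page is not part of the demos below, and it assumes a Next.js version where fetch is available on the server), a Next.js page can fetch its data on the server for every request via getServerSideProps, so the browser receives HTML that is already populated:

// pages/todo.js (illustrative only; jsonplaceholder.typicode.com is a public demo API)
export async function getServerSideProps() {
  const res = await fetch("https://jsonplaceholder.typicode.com/todos/1");
  const todo = await res.json();
  return { props: { todo } }; // passed to the component below on every request
}

export default function Todo({ todo }) {
  return <h1>{todo.title}</h1>;
}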

A diagram explaining how server-side rendering works
How SSR works (Large preview)

This approach is good for building complex applications that require user interaction, rely on a database, or whose content changes very often. Because the content on these sites changes frequently, users need to see the updates as soon as they’re made. It is also good for applications that tailor content depending on who is viewing it, and for applications where you need to store user-specific data such as email addresses and user preferences while also catering for SEO. Examples include a large e-commerce platform or a social media site. Let us look at some of the advantages of server-side rendering your applications.

Pros

Cons

Examples of frameworks that offer SSR are Next.js and Nuxt.js.

Next.js

Next.js is a React.js framework that helps in building static sites, server-side rendered applications, and so on. Since it was built on React, knowledge of React is required to use this framework.

To create a Next.js app, you need to run the following:

npm init next-app
# or
yarn create next-app

You will be prompted to choose a name for your application; I have named mine demo-next. The next option is to select a template; I’ve selected the Default starter app, after which it begins to set up your app. When this is done, we can start our application:

cd demo-next
yarn dev 
# or npm run dev

Your application should be running on localhost:3000 and you should see this in your browser:

Default Nextjs landing page
Next.js landing page (Large preview)

The page that is being rendered can be found in pages/index.js, so if you open this file and modify the JSX inside the Home function, the change will be reflected in your browser. Replace the JSX with this:

import Head from 'next/head'
export default function Home() {
  return (
    <div className="container">
      <Head>
        <title>Hello Next.js</title>
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main>
        <h1 className="title">
          Welcome to <a href="https://nextjs.org">Next.js!</a>
        </h1>
        <p className='description'>Nextjs Rocks!</p>
      </main>
      <style jsx>{`
        main {
          padding: 5rem 0;
          flex: 1;
          display: flex;
          flex-direction: column;
          justify-content: center;
          align-items: center;
        }
        .title a {
          color: #0070f3;
          text-decoration: none;
        }
        .title a:hover,
        .title a:focus,
        .title a:active {
          text-decoration: underline;
        }
        .title {
          margin: 0;
          line-height: 1.15;
          font-size: 4rem;
        }
        .title,
        .description {
          text-align: center;
        }
        .description {
          line-height: 1.5;
          font-size: 1.5rem;
        }
      `}</style>
      <style jsx global>{`
        html,
        body {
          padding: 0;
          margin: 0;
          font-family: -apple-system, BlinkMacSystemFont, Segoe UI, Roboto,
            Oxygen, Ubuntu, Cantarell, Fira Sans, Droid Sans, Helvetica Neue,
            sans-serif;
        }
        * {
          box-sizing: border-box;
        }
      `}</style>
    </div>
  )
}

In this file, we make use of the Next.js Head component to set the page’s metadata title and favicon. We also export a Home function that returns JSX containing our page’s content. This JSX contains the Head component together with our main page content. It also contains two style tags: one for styling this page and the other for the global styling of the app.

Now, you should see that the content on your app has changed to this:

Nextjs landing page containing ‘welcome to Nextjs’ text
Updated landing page (Large preview)

Now, if we want to add a new page to our app, we have to add a new file inside the /pages folder. Routes are automatically created based on the /pages folder structure; this means that a folder structure like the following maps to these routes:

--| pages
----| index.js ==> '/'
----| about.js ==> '/about'
----| projects
------| next.js ==> '/projects/next'

So, in your pages folder, add a new file named hello.js, then add the following to it:

import Head from 'next/head'
export default function Hello() {
  return (
    <div>
       <Head>
        <title>Hello World</title>
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main className='container'>
        <h1 className='title'>
         Hello <a href="https://en.wikipedia.org/wiki/Hello_World_(film)">World</a>
        </h1>
        <p className='subtitle'>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Voluptatem provident soluta, sit explicabo impedit nobis accusantium? Nihil beatae, accusamus modi assumenda, optio omnis aliquid nobis magnam facilis ipsam eum saepe!</p>
      </main>
      <style jsx> {`
      
      .container {
        margin: 0 auto;
        min-height: 100vh;
        max-width: 800px;
        text-align: center;
      }
      .title {
        font-family: "Quicksand", "Source Sans Pro", -apple-system, BlinkMacSystemFont,
          "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
        display: block;
        font-weight: 300;
        font-size: 100px;
        color: #35495e;
        letter-spacing: 1px;
      }
      .subtitle {
        font-weight: 300;
        font-size: 22px;
        color: #526488;
        word-spacing: 5px;
        padding-bottom: 15px;
      }
      `} </style>
    </div>
  )
}

This page is identical to the landing page we already have, we only changed the content and added new styling to the JSX. Now if we visit localhost:3000/hello, we should see our new page:

A Nextjs webpage containing ‘Hello world’
A “Hello World ” page in Next.js (Large preview)

Finally, we need to add a link to this new page on our index.js page. To do this, we make use of Next.js’s Link component, which we have to import first.

// index.js
import Link from 'next/link'

// Add this to your JSX
<Link href='/hello'>
  <a>Next</a>
</Link>

This Link component is how we add links between pages created in Next.js in our application.

Now if we go back to our homepage and click on this link, it would take us to our /hello page.

Nuxt.js

According to their official documentation:

“Nuxt is a progressive framework based on Vue.js to create modern web applications. It is based on Vue.js official libraries (vue, vue-router and vuex) and powerful development tools (webpack, Babel and PostCSS). Nuxt’s goal is to make web development powerful and performant with a great developer experience in mind.”

Since it is based on Vue.js, knowledge of Vue.js is required to use this framework, and Vue.js developers will find it easy to get started.

To create a Nuxt.js app, you need to run the following command in your terminal:

yarn create nuxt-app <project-name>
# or npx
npx create-nuxt-app <project-name>

This will prompt you to select a name along with some other options. I named mine demo-nuxt and accepted the defaults for the other options. When this is done, you can open your app folder and open pages/index.vue. Every file in this folder is turned into a route, so our landing page is controlled by the index.vue file. Update it with the following:

<template>
  <div class="container">
    <div>
      <logo />
      <h1 class="title">
        Hello Nuxt
      </h1>
      <h2 class="subtitle">
Nuxt.js Rocks!
      </h2>
      <div class="links">
        <a
          href="https://nuxtjs.org/"
          target="_blank"
          class="button--green"
        >
          Documentation
        </a>
        <a
          href="https://github.com/nuxt/nuxt.js"
          target="_blank"
          class="button--grey"
        >
          GitHub
        </a>
      </div>
    </div>
  </div>
</template>
<script>
import Logo from '~/components/Logo.vue'
export default {
  components: {
    Logo
  }
}
</script>
<style>
.container {
  margin: 0 auto;
  min-height: 100vh;
  display: flex;
  justify-content: center;
  align-items: center;
  text-align: center;
}
.title {
  font-family: 'Quicksand', 'Source Sans Pro', -apple-system, BlinkMacSystemFont,
    'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
  display: block;
  font-weight: 300;
  font-size: 100px;
  color: #35495e;
  letter-spacing: 1px;
}
.subtitle {
  font-weight: 300;
  font-size: 42px;
  color: #526488;
  word-spacing: 5px;
  padding-bottom: 15px;
}
.links {
  padding-top: 15px;
}
</style>

And run your application:

cd demo-nuxt
# start your application
yarn dev # or npm run dev

Your application should be running on localhost:3000 and you should see this:

Default Nuxtjs landing page
Nuxt.js landing page (Large preview)

We can see that this page displays the content we added to index.vue. The routing works the same way as the Next.js router: every file inside the /pages folder is rendered as a page. So let us add a new page (hello.vue) to our application:

<template>
  <div>
    <h1>Hello World!</h1>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Id ipsa vitae tempora perferendis, voluptate a accusantium itaque vel ex, provident autem quod rem saepe ullam hic explicabo voluptas, libero distinctio?</p>
  </div>
</template>
<script>
export default {};
</script>
<style>
</style>

So if you open localhost:3000/hello, you should see your new page in your browser.

A Nuxtjs webpage containing ‘Hello World’
“Hello World” page in Nuxtjs (Large preview)

Taking A Closer Look At The Differences

Now that we have looked at both static-site generators and server-side rendering and how to get started with them by using some popular tools, let us look at the differences between them.

Static Site Generators | Server-Side Rendering
Can easily be deployed to a static CDN | Cannot be deployed to a static CDN
Content and pages are generated at build time | Content and pages are generated per request
Content can become stale quickly | Content is always up to date
Fewer API calls, since they are made only at build time | API calls are made each time a new page is visited

Conclusion

We can see why it is so easy to think that statically generated sites and server-side rendered applications are the same. Now that we know what the differences between them are, I would advise that you try building both statically generated sites and server-side rendered applications in order to fully understand those differences.

Further Resources

Here are some useful links that are bound to help you get started in no time:

Smashing Editorial (ks, ra, il)
[post_excerpt] => JavaScript currently has three types of applications that you can build with: Single Page Applications (SPAs), pre-rendering/static generated sites and server-side rendered applications. SPAs come with many challenges, one of which is Search Engine Optimization (SEO). Possible solutions are to make use of Static Site Generators or Server-Side Rendering (SSR). In this article, I’m going to explain them alongside listing their pros and cons so you have a balanced view. [post_date_gmt] => 2020-07-02 12:00:00 [post_date] => 2020-07-02 12:00:00 [post_modified_gmt] => 2020-07-02 12:00:00 [post_modified] => 2020-07-02 12:00:00 [post_status] => publish [comment_status] => closed [ping_status] => closed [guid] => https://www.smashingmagazine.com/2020/07/differences-static-generated-sites-server-side-rendered-apps/ [meta] => Array ( [enclosure] => Array ( [0] => ) [syndication_source] => Articles on Smashing Magazine — For Web Designers And Developers [syndication_source_uri] => https://www.smashingmagazine.com/articles/ [syndication_source_id] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed] => https://www.smashingmagazine.com/category/uxdesign/feed/ [syndication_feed_id] => 8 [syndication_permalink] => https://www.smashingmagazine.com/2020/07/differences-static-generated-sites-server-side-rendered-apps/ [syndication_item_hash] => Array ( [0] => 19132aa6c896b16cfaeede71256a9560 [1] => 554f2698931e8397a3ba083dd1d367ac [2] => 7d564ffd357f5357e587d1bb30a5cb34 [3] => 52e2eb00ac940b83f4783ba059dfbbaa [4] => 0c5aeda89c62f55b59e45e2be8759de3 [5] => 6d08f91d8600feb0ebe4e9d31b1e1239 [6] => ec3306eaf86ab759223c380f40646050 [7] => 071ce03c02f3a7f4662361179feb2103 [8] => 445f81e244d8a8c72701b12a832b9e96 [9] => 9ca21a07d89e5be75cf6de5b6ecaa153 [10] => fb6557a9cb161d22e919eb64d5d9349d [11] => e21ee9881d48e1d298c65f71542069be [12] => 2ee92d0279a9583b23097659b8b80efb [13] => 86a1e54c8eadd60e2899aad22c4ffe13 [14] => 044f869ef3eb911d072274ef0e06e190 [15] => d086f2eb8d357ac0b667c651e1c0c9b9 [16] => 5aa4d470c9448dcef4813e1ab7cde576 [17] => 0eda71e4135432c5adbab44fff013221 [18] => 847c6eee76cfb983eb92a1993ea7468b [19] => 6af60570a270e6796a02f7a55b597198 [20] => 4cb4d31acd26091b8eef75471414ae98 [21] => 9d71125a22c966bf6833d5198ee59f82 [22] => 19674bf53828dcea48c32c8af2bc9cd4 [23] => 6a5329c33231c548ddd0731fedcd4df8 [24] => 80131c15753a63365a7cfc87ef19cf52 [25] => 5e2961e22ea8139fde1b6a798b9aece3 [26] => 3933694141472667a80185fd8acb20f7 [27] => 35b99915e18d3df920e8566b070f289e [28] => 67fd2c295d92f991119726b300e33de3 [29] => cf308174001564fba5550ec603171162 [30] => cd0eefd4c8a0e610e121053bda6725d3 [31] => cb2f984176f47be74a8c8069234ab20d [32] => b1c3b78783bf16fcf47f71e9cdf21fdd [33] => c3346a7fd918da69a9cd2bf229641fc4 [34] => 12dd1f47eccd49d9d01c5e18757ba565 [35] => e27541685291d40e3c100477a6aacf0c [36] => dd4ef8ea031e01a93648696b64eeaf63 [37] => 3d5ef33322ecfdb82da507cddd12efec [38] => 65fc514f124fd496ea274c974821476e [39] => 2adf2b4209ed4b5adfce0b3daf980774 [40] => 21c4b362b58cb44aab13151641f55953 [41] => 3557981c0a55f38e4969e6fc86f477dd [42] => 7f0896a2045b96009f40b0486d5f37b1 [43] => ec773f21e6cf0d869002d426421154de [44] => be051c4c958a946ca8d7fbafe79d24b3 [45] => 65312656fb08573bb3fbf0a593f917ab [46] => 50ebf77d2503cbbd08ece3f3b516678a [47] => 664767fa5d9252781f92c0d9adeefde8 [48] => c93f6b990015f42a5848fe8b88cc8050 [49] => 25bd9e05ed8afb3aedcf543708b2533e [50] => 843b22618beeac90ed9689afc54b7cb6 [51] => d642cae4893a8368865a2bfc50e9fcc4 [52] => 
f8448aa779ff3e8b351fd2d77c1aff7a [53] => 2f6bdd3386afdec7c03f017c0496b93f [54] => a89dcaf7234bee43e96df94f76747834 [55] => f581b0a44a64eda2e3236a9649ada71c [56] => e71b5137ba4dd696416a6d00eef26dd6 [57] => 648948e5e69330b44c733ffda94a7cc2 [58] => 9f4abb903fa7039757f41eb7216fc64c [59] => 5a3d0c48f090a42939631dd0f7539bd7 [60] => 29112ccfcefbfe6400b068d50a183a8b [61] => 2d3fc2a4608f15f06fab08ab7f0a6422 [62] => 6129d4f7bec6cf06516d01561cd1dbca [63] => 46fbc8d698a84e9ef80d880c2a43af4b [64] => cbe307c98ed528ca4e578c4eaed90fb0 [65] => 778360f63df82777b76090abefc7d899 [66] => 94b413268bea3018f7dda869d36bdb24 [67] => 5a4362f821562971d1b0baa50cc19833 [68] => 5a4362f821562971d1b0baa50cc19833 [69] => bed04d902a3479e5864441d18c2a0a2f [70] => 40bb4d657b21eefb1397d521a077f791 [71] => 36ade23d39d64e770e33f636620b61d1 [72] => 74c81ddcbc91b004852b03cd78821476 ) ) [post_type] => post [post_author] => 551 [tax_input] => Array ( [category] => Array ( ) [post_tag] => Array ( ) [post_format] => Array ( ) [avhec_catgroup] => Array ( ) ) [post_name] => differences-between-static-generated-sites-and-server-side-rendered-apps )

Decide filter: Returning post, everything seems orderly :Differences Between Static Generated Sites And Server-Side Rendered Apps

FAF deciding on filters on post to be syndicated:

Information And Information Architecture: The BIG Picture

Array ( [post_title] => Information And Information Architecture: The BIG Picture [post_content] => Information And Information Architecture: The BIG Picture

Information And Information Architecture: The BIG Picture

Carrie Webster

We are living in a world exploding with information, but how do we find what is relevant to us at the time that we need it? I believe that good information architecture is key to helping us navigate through the mountains of data and information we have created for ourselves. 

In this article, we will first describe what information architecture is, why it’s important, and approaches to effective implementation. Then we’ll explore ideas around the broader view of the information age, how we use information, and how it impacts our world and our lives. These insights are designed to help you understand the bigger picture, which in turn lets us grasp the value that good information architecture delivers to our information-overloaded lives.

What Is Information Architecture And Why Is It Important?

“Information architecture is the practice of deciding how to arrange the parts of something to be understandable.”

The Information Architecture Institute

From a user experience perspective, this really means understanding how your users think, what problems they are trying to solve, and then presenting information in a logical way that makes sense from within this context. 

Whether it is a website, a software application or a smartphone app, it’s about first designing the structure of how your information is organized, and then translating this into a logical navigation hierarchy that makes sense to the users who will be accessing it. In this world where we can sometimes feel as though we are drowning in data, information architecture provides us with a logical way of organizing this data to make it easier to locate. 

Here are some other reasons why good information architecture is important:

For The User

Below are a couple of examples helping to illustrate the points about the user.

Wizard example
(Image source: Shaun Utter) (Large preview)

The example above demonstrates:

(Large preview)

The website example above, Punk Avenue, shows another example of clear main navigation, with a brief summary of what you will find on each page. Below that is a series of tabs that keep you on the same page and visually indicate which information you are viewing.

For A Business

The example below helps to illustrate some of the points above about business.