Using Default Parameters in ES6

I’ve recently begun doing more research into what’s new in JavaScript, catching up on a lot of the new features and syntax improvements that have been included in ES6 (i.e. ES2015 and later).

You’ve likely heard about and started using the usual stuff: arrow functions, let and const, rest and spread operators, and so on. One feature, however, that caught my attention is the use of default parameters in functions, which is now an official ES6+ feature. This is the ability to have your functions initialize parameters with default values even if the function call doesn’t include them.

The feature itself is pretty straightforward in its simplest form, but there are quite a few subtleties and gotchas that you’ll want to note, which I’ll try to make clear in this post with some code examples and demos.

Default Parameters in ES5 and Earlier

A function that automatically provides default values for omitted arguments can be a beneficial safeguard for your programs, and this is nothing new.

Prior to ES6, you may have seen or used a pattern like this one:

function getInfo (name, year, color) {
  year = (typeof year !== 'undefined') ? year : 2018;
  color = (typeof color !== 'undefined') ? color : 'Blue';
  // remainder of the function...
}

In this instance, the getInfo() function has only one mandatory parameter: name. The year and color parameters are optional, so if they’re not provided as arguments when getInfo() is called, they’ll be assigned default values:

getInfo('Chevy', 1957, 'Green');
getInfo('Benz', 1965); // default for color is "Blue"
getInfo('Honda'); // defaults are 2018 and "Blue"

Try it on CodePen

Without this kind of check and safeguard in place, any uninitialized parameters would default to a value of undefined, which is usually not desired.

You could also use a truthy/falsy pattern to check for parameters that don’t have values:

function getInfo (name, year, color) {
  year = year || 2018;
  color = color || 'Blue';
  // remainder of the function...
}

But this may cause problems in some cases. In the above example, if you pass in a value of “0” for the year, the default 2018 will override it because 0 evaluates as falsy. In this specific example, it’s unlikely you’d be concerned about that, but there are many cases where your app might want to accept a value of 0 as a valid number rather than a falsy value.

Try it on CodePen

Of course, even with the typeof pattern, you may have to do further checks to have a truly bulletproof solution. For example, you might expect an optional callback function as a parameter. In that case, checking against undefined alone wouldn’t suffice. You’d also have to check if the passed-in value is a valid function.
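For instance, here's a sketch of that kind of defensive check (the fetchData function and its parameters are hypothetical, purely for illustration):

function fetchData (url, callback) {
  // only keep the argument if it's actually callable; otherwise use a no-op
  callback = (typeof callback === 'function') ? callback : function () {};
  // remainder of the function...
  callback(url);
}

fetchData('/api/items'); // safe to call without a callback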

So that’s a bit of a summary covering how we handled default parameters prior to ES6. Let’s look at a much better way.

Default Parameters in ES6

If your app requires that you use pre-ES6 features for legacy reasons or because of browser support, then you might have to do something similar to what I’ve described above. But ES6 has made this much easier. Here’s how to define default parameter values in ES6 and beyond:

function getInfo (name, year = 2018, color = 'blue') {
  // function body here...
}

Try it on CodePen

It’s that simple.

If year and color values are passed into the function call, the values passed in as arguments will supersede the ones defined as parameters in the function definition. This works exactly the same way as with the ES5 patterns, but without all that extra code. Much easier to maintain, and much easier to read.
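To illustrate, the same calls from the ES5 example resolve the same way against this version:

getInfo('Chevy', 1957, 'Green'); // passed-in values win: 1957 and "Green"
getInfo('Benz', 1965);           // color falls back to "blue"
getInfo('Honda');                // year and color fall back to 2018 and "blue"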

This feature can be used for any of the parameters in the function head, so you could set a default for the first parameter along with two other expected values that don’t have defaults:

function getInfo (name = 'Pat', year, color) {
  // function body here...
}

Dealing With Omitted Values

Note that—in a case like the one above—if you wanted to omit the optional name argument (thus using the default) while including a year and color, you’d have to pass in undefined as a placeholder for the first argument:

getInfo(undefined, 1995, 'Orange');

If you don’t do this, then logically the first value will always be assumed to be name.

The same would apply if you wanted to omit the year argument (the second one) while including the other two (assuming, of course, the second parameter is optional):

getInfo('Charlie', undefined, 'Pink');

I should also note that the following may produce unexpected results:

function getInfo (name, year = 1965, color = 'blue') {
  console.log(year); // null
}
getInfo('Frankie', null, 'Purple');

Try it on CodePen

In this case, I’ve passed in the second argument as null, which might lead some to believe the year value inside the function should be 1965, which is the default. But this doesn’t happen, because null is considered a valid value. And this makes sense because, according to the spec, null is viewed by the JavaScript engine as the intentional absence of an object’s value, whereas undefined is viewed as something that happens incidentally (e.g. when a function doesn’t have a return value it returns undefined).

So make sure to use undefined and not null when you want the default value to be used. Of course, there might be cases where you want to use null and then deal with the null value within the function body, but you should be familiar with this distinction.

Default Parameter Values and the arguments Object

Another point worth mentioning here is in relation to the arguments object. The arguments object is an array-like object, accessible inside a function’s body, that represents the arguments passed to a function.

In non-strict mode, the arguments object reflects any changes made to the argument values inside the function body. For example:

function getInfo (name, year, color) {
  console.log(arguments);
  // [object Arguments] { 0: "Frankie", 1: 1987, 2: "Red" }

  name = 'Jimmie';
  year = 1995;
  color = 'Orange';

  console.log(arguments);
  // [object Arguments] { 0: "Jimmie", 1: 1995, 2: "Orange" }
}

getInfo('Frankie', 1987, 'Red');

Try it on CodePen

Notice in the above example, if I change the values of the function’s parameters, those changes are reflected in the arguments object. This feature was viewed as more problematic than beneficial, so in strict mode the behavior is different:

function getInfo (name, year, color) {
  'use strict';

  name = 'Jimmie';
  year = 1995;
  color = 'Orange';

  console.log(arguments);
  // [object Arguments] { 0: "Frankie", 1: 1987, 2: "Red" }
}

getInfo('Frankie', 1987, 'Red');

Try it on CodePen

As shown in the demo, in strict mode the arguments object retains its original values for the parameters.

That brings us to the use of default parameters. How does the arguments object behave when the default parameters feature is used? Take a look at the following code:

function getInfo (name, year = 1992, color = 'Blue') {
  console.log(arguments.length); // 1
  console.log(year, color); // 1992 "Blue"

  year = 1995;
  color = 'Orange';

  console.log(arguments.length); // Still 1
  console.log(arguments);
  // [object Arguments] { 0: "Frankie" }
  console.log(year, color); // 1995 "Orange"
}

getInfo('Frankie');

Try it on CodePen

There are a few things to note in this example.

First, the inclusion of default parameters doesn’t change the arguments object. So, as in this case, if I pass only one argument in the function call, the arguments object will hold a single item—even with the default parameters present for the optional arguments.

Second, when default parameters are present, the arguments object will always behave the same way in strict mode and non-strict mode. The above example is in non-strict mode, which usually allows the arguments object to be modified. But this doesn’t happen. As you can see, the length of arguments remains the same after modifying the values. Also, when the object itself is logged, the name value is the only one present.

Expressions as Default Parameters

The default parameters feature is not limited to static values but can include an expression to be evaluated to determine the default value. Here’s an example to demonstrate a few things that are possible:

function getAmount() {
  return 100;
}

function getInfo (name, amount = getAmount(), color = name) {
  console.log(name, amount, color);
}

getInfo('Scarlet');
// "Scarlet" 100 "Scarlet"

getInfo('Scarlet', 200);
// "Scarlet" 200 "Scarlet"

getInfo('Scarlet', 200, 'Pink');
// "Scarlet" 200 "Pink"

Try it on CodePen

There are a few things to take note of in the code above. First, I’m allowing the second parameter, when it’s not included in the function call, to be evaluated by means of the getAmount() function. This function will be called only if a second argument is not passed in. This is evident in the second getInfo() call and the subsequent log.

The next key point is that I can use a previous parameter as the default for another parameter. I’m not entirely sure how useful this would be, but it’s good to know it’s possible. As you can see in the above code, the getInfo() function sets the third parameter (color) to equal the first parameter’s value (name), if the third parameter is not included.

And of course, since it’s possible to use functions to determine default parameters, you can also pass an existing parameter into a function used as a later parameter, as in the following example:

function getFullPrice(price) {
  return (price * 1.13);
}

function getValue (price, pricePlusTax = getFullPrice(price)) {
  console.log(price.toFixed(2), pricePlusTax.toFixed(2));
}

getValue(25);
// "25.00" "28.25"

getValue(25, 30);
// "25.00" "30.00"

Try it on CodePen

In the above example, I’m doing a rudimentary tax calculation in the getFullPrice() function. When this function is called, it uses the existing price parameter as part of the pricePlusTax evaluation. As mentioned earlier, the getFullPrice() function is not called if a second argument is passed into getValue() (as demonstrated in the second getValue() call).

Two things to keep in mind with regards to the above. First, the function call in the default parameter expression needs to include the parentheses, otherwise you’ll receive a function reference rather than an evaluation of the function call.
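To make that concrete, here's a quick sketch of the difference (getValueBroken is a made-up name for the faulty variant):

// with parentheses: the default is the result of calling getFullPrice(price)
function getValue (price, pricePlusTax = getFullPrice(price)) { /* ... */ }

// without parentheses: the default is a reference to the function itself,
// so something like pricePlusTax.toFixed(2) would throw a TypeError
function getValueBroken (price, pricePlusTax = getFullPrice) { /* ... */ }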

Second, you can only reference previous parameters with default parameters. In other words, you can’t reference the second parameter as an argument in a function to determine the default of the first parameter:

// this won't work
function getValue (pricePlusTax = getFullPrice(price), price) {
  console.log(price.toFixed(2), pricePlusTax.toFixed(2));
}

getValue(25); // throws an error

Try it on CodePen

Similarly, as you would expect, you can’t access a variable defined inside the function body from a function parameter.
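Here's a minimal sketch of that restriction (the rate variable is my own example, not from the original demos):

function getValue (price, pricePlusTax = price * rate) {
  var rate = 1.13; // declared in the body, invisible to the parameter list
  console.log(pricePlusTax);
}

getValue(25); // throws a ReferenceError: rate is not defined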

Conclusion

That should cover just about everything you’ll need to know to get the most out of using default parameters in your functions in ES6 and above. The feature itself is quite easy to use in its simplest form but, as I’ve discussed here, there are quite a few details worth understanding.

If you’d like to read more on this topic, here are some sources:

  • Understanding ECMAScript 6 by Nicholas Zakas. This was my primary source for this article. Nicholas is definitely my favorite JavaScript author.
  • Arguments object on MDN
  • Default Parameters on MDN


Fallbacks for Videos-as-Images

Safari 11.1 shipped a strange-but-very-useful feature: the ability to use a video source in the <img> tag. The idea is it does the same job as a GIF (silent, autoplaying, repeating), but with big performance gains. How big? “20x faster and decode 7x faster than the GIF equivalent,” says Colin Bendell.

Not all browsers support this so, to do a fallback, the <picture> element is ready. Bruce Lawson shows how easy it can be:

<picture>
  <source type="video/mp4" srcset="adorable-cat.mp4">
  <!-- perhaps even an animated WebP fallback here as well -->
  <img src="adorable-cat.gif" alt="adorable cat tears throat out of owner and eats his eyeballs">
</picture>

Šime Vidas notes you get wider browser support by using the <video> tag:

<video src="https://media.giphy.com/media/klIaoXlnH9TMY/giphy.mp4" muted autoplay loop playsinline></video>

But as Bendell noted, the performance benefits aren’t there with video, notably the fact that video isn’t helped out by the preloader. Sadly, <video> it is for now, as:

there is this nasty WebKit bug in Safari that causes the preloader to download the first <source> regardless of the mimetype declaration. The main DOM loader realizes the error and selects the correct one. However, the damage will be done. The preloader squanders its opportunity to download the image early and on top of that, downloads the wrong version wasting bytes. The good news is that I’ve patched this bug and it should land in Safari TP 45.

In short, using <picture> and <source type> for MIME type selection is not advisable until the next version of Safari reaches 90%+ of the user base.

Still, eventually, it’ll be quite useful.



A Short History of WaSP and Why Web Standards Matter

In August of 2013, Aaron Gustafson posted to the WaSP blog. He had a bittersweet message for a community that he had helped lead:

Thanks to the hard work of countless WaSP members and supporters (like you), Tim Berners-Lee’s vision of the web as an open, accessible, and universal community is largely the reality. While there is still work to be done, the sting of the WaSP is no longer necessary. And so it is time for us to close down The Web Standards Project.

If there’s just the slightest hint of wistful regret in Gustafson’s message, it’s because the Web Standards Project changed everything that had become the norm on the web during its 15+ years of service. Through dedication and developer advocacy, they hoisted the web up from a nest of browser incompatibility and meaningless markup to the standardized and feature-rich application platform most of us know today.

I previously covered what it took to bring CSS to the World Wide Web. This is the other side of that story. It was only through the efforts of many volunteers working tirelessly behind the scenes that CSS ever had a chance to become what it is today. They are the reason we have web standards at all.

Introducing Web Standards

Web standards weren’t even a thing in 1998. There were HTML and CSS specifications and drafts of recommendations that were managed by the W3C, but they had spotty and uneven browser support which made them little more than words on a page. At the time, web designers stood at the precipice of what would soon be known as the Browser Wars, where Netscape and Microsoft raced to implement exclusive features and add-ons in an escalating fight for market share. Rather than stick to any official specification, these browsers forced designers to support either Netscape Navigator or Internet Explorer. And designers were definitely not happy about it.

Supporting both browsers and their competing feature implementations was possible, but it was also difficult and unreliable, like building a house on sand. To help each other along, many developers began joining mailing lists to swap tips and hacks for dealing with sites that needed to look good no matter where they were rendered.

From these mailing lists, a group began to form around an entirely new idea. The problem, this new group realized, wasn’t with the code, but with the browsers that refused to adhere to the codified, open specifications passed down by the W3C. Browsers touted new presentational HTML elements like the <blink> tag, but they were proprietary and provided no layout options. What the web needed was browsers that could follow the standards of the web.

The group decided they needed to step up and push browsers in the right direction. They called themselves the Web Standards Project. And, since the process would require a bit of a sting, they went by WaSP for short.

Launching the Web Standards Project

In August of 1998, WaSP announced their mission to the public on a brand new website: to “support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.” Within a few hours, 450 people joined WaSP. In a few months, that number would jump to thousands.

WaSP took what was basically a two-pronged approach. The first was in public, tapping into the groundswell of developer support they had gathered to lobby for better standards support in browsers. Using grassroots tactics and targeted outreach, WaSP would often send its members on “missions” such as sending emails to browsers explaining in great detail their troubles working with a lack of consistent web standards support.

They also published scathing reports that put browsers on blast, highlighting all the ways that Netscape or Internet Explorer failed to add necessary support, even going so far as to encourage users to use alternative browsers. It was in these reports that the project truly lived up to its acronym. One needs to look no further than a quote from WaSP’s savage takedown of Internet Explorer as an example of its ability to sting:

Quit before the job’s done, and the flamethrower’s the only answer. Because that’s our job. We speak for thousands of Web developers, and through them, millions of Web users.

The second prong of WaSP’s approach included privately reaching out to passionate developers on browser teams. The problem, for big companies like Netscape and Microsoft, wasn’t that engineers were against web standards. Quite the contrary, actually. Many browser engineers believed deeply in WaSP’s mission but were held back by perceived business interests and red-tape bureaucracy time and time again. As a result, WaSP would often work with browser developers to find the best path forward and advocate on their behalf to the higher-ups when necessary.

Holding it All Together

To help WaSP navigate its way through its missions, reports, and outreach, a Steering Committee was formed. This committee helped set the project’s goals and reached out to the community to gather support. They were the heralds of a better day soon to come, and more than a few influential members would pass through their ranks before the project was over, including: Rachel Cox, Tim Bray, Steve Champeon, Glenn Davis, Glenda Sims, Todd Fahrner, Molly Holzschlag and Aaron Gustafson, among many, many others.

At the top of it all was a project lead who set the tone for the group and gave developers a unified voice. The position was initially held by George Olsen, one of the founders of the project, but was soon picked up by another founding member: Jeffrey Zeldman.

A network of loosely connected satellite groups orbiting around the Steering Committee helped developers and browsers alike understand the importance of web standards. There was, for instance, an Accessibility group that bridged the W3C with browser makers to ensure the web was open and accessible to everyone. Then there was the CSS Samurai, who published reports about CSS support (or, more commonly, lack thereof) in different browsers. They were the ones that devised the Box Acid test and offered guidance to browsers as they worked to expand CSS support. Todd Fahrner, who helped save CSS with doctype switching, counted himself among the CSS Samurai.

Making an Impact

WaSP was huge and growing all the time. Its members were passionate and, little by little, clusters of the community came together to enact change. And that is exactly what happened.

The changes felt kind of small at first but soon they bordered on massive. When Netscape was kicking around the idea of a new rendering engine named Gecko that would include much better standards support across the board, their initial timeline would have delayed its release by months. But the WaSP swarmed, emailing and reaching out to Netscape to put pressure on them to release Gecko sooner. It worked and, by the next release, Gecko (and better web standards) shipped.

Tantek Çelik was another member of WaSP. The community inspired him to take a stand on web standards at his day job as lead developer of Internet Explorer for Mac. It was through the encouragement and support of WaSP that he and his team released version 5 with full CSS Level 1 support.

Internet Explorer 5 for Mac was released with full CSS Level 1 support

In August of 2001, after years of public reports and private outreach and developer advocacy, the WaSP sting provoked seismic change in Internet Explorer as version 6 released with CSS Level 1 support and the latest HTML features. The upgrades were due in no small part to the work at the Web Standards Project and their work with dedicated members of the browser team. It appeared that standards were beginning to actually win out. The WaSP’s mission may have even been over.

But instead of calling it quits, they shifted tactics a bit.

Teaching Standards to a New Generation

In the early 2000s, WaSP radically changed its approach, turning to education and developer outreach.

They started with the launch of the Browser Upgrade Campaign, which educated users who were coming online for the very first time and knew absolutely nothing about web standards and modern browsers. Site owners were encouraged to add some JavaScript and a banner to their sites to target these users. As a result, those surfing to a site on older versions of standards-compliant browsers, like Firefox or Opera, were greeted by a banner simply directing them to upgrade. Users visiting the site on a really old browser, like pre-IE5 or Netscape 5, were redirected to an entirely new page explaining why upgrading to a modern browser with standards support was in their best interest.

A page from the Browser Upgrade Campaign

WaSP was going to bring the web up to speed, even if they had to do it one person at a time. Perhaps no one articulated this sentiment better than Molly Holzschlag when she wrote “Raise Your Standards” in February 2002. In the article, she broke down what web standards are and what they meant for developers and designers. She celebrated the work that had been done by browsers and the community working to make web standards a thing in the first place.

But, she argued, the web was far from done. It was now time for developers to step up to the plate and assume the responsibility for standards themselves by coding it into all of their sites. She wrote:

The Consortium is fraught with its own internal issues, and its actions—while almost always in the best interests of professional Web authors—are occasionally politicized.

Therefore, as Web authors, we’re personally responsible for making implementation decisions within the framework of a site’s markup needs. It’s our job to administer recommendations to the best of our abilities.

This, however, would not be easy. It would once again require the combined efforts of WaSP members to pull together and teach the web a new way to code. Some began publishing tutorials to their personal blogs or on A List Apart. Others created a standards-based online curriculum for web developers who were new to the field. A few members even formed brand-new task forces to work with popular software tools, like Adobe Dreamweaver, and ensure that standards were supported there as well.

The redesigns of ESPN and Wired, which stood as a testament and example for standards-based designs for years to come, were undertaken in part because members of those teams were inspired by the work that WaSP was doing. They would not have been able to take those crucial first steps if not for the examples and tutorials made freely available to them by gracious WaSP members.

That is why web standards are basically second nature to many web developers today. It’s also why we have such a free spirit of creative exchange in our industry. It all started when WaSP decided to share the correct way of doing things right out in the open.

Looking Past Web Standards

It was this openness that carried WaSP into the 2010s. When Holzschlag took over as lead, she advocated for transparency and collaboration between browser makers and the web community. The work of the WaSP, Holzschlag realized, no longer had to happen from the outside; it could be done from within. For example, she made inroads at Microsoft to help make web standards a top priority on their browser team.

With each subsequent release, browsers began to catch up to the latest standards from the W3C. Browsers like Opera and Firefox actually competed on supporting the latest standards. Google Chrome used web standards as a selling point when it was initially released around the same time. The decade-and-a-half of work by WaSP was paying off. Browser makers were listening to the W3C and the web community, even going so far as to experiment with new standards before they were officially published for recommendation.

In 2013, WaSP posted its farewell announcement and closed up shop for good. It was a difficult decision for those who had fought long and hard for a better, more accessible and more open web, but it was necessary. There are still a number of battlegrounds for the open web but, thanks to the efforts of WaSP, the one for web standards has been won.

Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.



Counting With CSS Counters and CSS Grid

You’ve heard of CSS Grid, I’m sure of that. It would be hard to miss it considering that the whole front-end developer universe has been raving about it for the past year.

Whether you’re new to Grid or have already spent some time with it, we should start this post with a short definition directly from the words of W3C:

Grid Layout is a new layout model for CSS that has powerful abilities to control the sizing and positioning of boxes and their contents. Unlike Flexible Box Layout, which is single-axis–oriented, Grid Layout is optimized for 2-dimensional layouts: those in which alignment of content is desired in both dimensions.

In my own words, CSS Grid is a mesh of invisible horizontal and vertical lines. We arrange elements in the spaces between those lines to create a desired layout. An easier, stable, and standardized way to structure contents in a web page.

Besides the graph paper foundation, CSS Grid also provides the advantage of a layout model that’s source order independent: irrespective of where a grid item is placed in the source code, it can be positioned anywhere in the grid across both the axes on screen. This is very important, not only for when you’d find it troublesome to update HTML while rearranging elements on page but also at times when you’d find certain source placements being restrictive to layouts.

Although we can always move an element to the desired coordinates on screen using other techniques like translate, position, or margin, those techniques are harder both to code and to update in situations like building a responsive design, compared to a true layout mechanism like CSS Grid.

In this post, we’re going to demonstrate how we can use the source order independence of CSS Grid to solve a layout issue that’s the result of a source order constraint. Specifically, we’re going to look at checkboxes and CSS Counters.

Counting With Checkboxes

If you’ve never used CSS Counters, don’t worry, the concept is pretty simple! We set a counter to count a set of elements at the same DOM level. That counter is incremented in the CSS rules of those individual elements, essentially counting them.

Here’s the code to count checked and unchecked checkboxes:

<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2
<!-- more checkboxes, if we want them -->

<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>

:root {
  counter-reset: checked-sum unchecked-sum;
}

input[type="checkbox"] {
  counter-increment: unchecked-sum;
}

input[type="checkbox"]:checked {
  counter-increment: checked-sum;
}

.totalUnChecked::after {
  content: counter(unchecked-sum);
}

.totalChecked::after {
  content: counter(checked-sum);
}

In the above code, two counters are set at the root element using the counter-reset property and are incremented in their respective rules, one for checked and the other for unchecked checkboxes, using counter-increment. The values of the counters are then shown as the contents of two empty <span>s’ pseudo-elements using counter().

Here’s a stripped-down version of what we get with this code:

See the Pen CSS Counter Grid by CSS-Tricks (@css-tricks) on CodePen.

This is pretty cool. We can use it in to-do lists, email inbox interfaces, survey forms, or anywhere where users toggle boxes and will appreciate being shown how many items are checked and how many are unselected. All this with just CSS! Useful, isn’t it?

But the effectiveness of counter() wanes when we realize that an element displaying the total count can only appear after all the elements to be counted in the source code. This is because the browser first needs the chance to count all the elements, before showing the total. Hence, we can’t simply change the markup to place the counters above the checkboxes like this:

<!-- This will not work! -->
<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>
<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2

Then, how else can we get the counters above the checkboxes in our layout? This is where CSS Grid and its layout-rendering powers come into play.

Adding Grid

We’re basically wrapping the previous HTML in a new <div> element that’ll serve as the grid container:

<div class="grid">
  <input type="checkbox" id="c-1"> <label for="c-1">checkbox #1</label>
  <input type="checkbox" id="c-2"> <label for="c-2">checkbox #2</label>
  <input type="checkbox" id="c-3"> <label for="c-3">checkbox #3</label>
  <input type="checkbox" id="c-4"> <label for="c-4">checkbox #4</label>
  <input type="checkbox" id="c-5"> <label for="c-5">checkbox #5</label>
  <input type="checkbox" id="c-6"> <label for="c-6">checkbox #6</label>

  <div class="total">
    <span class="totalChecked"> Total Checked: </span>
    <span class="totalUnChecked"> Total Unchecked: </span>
  </div>
</div>

And, here is the CSS for our grid:

.grid {
  display: grid; /* creates the grid */
  grid-template-columns: repeat(2, max-content); /* creates two columns on the grid, sized based on the content they contain */
}

.total {
  grid-row: 1; /* places the counters on the first row */
  grid-column: 1 / 3; /* ensures the counters span the full grid width, forcing other content below */
}

This is what we get as a result (with some additional styling):

See the Pen CSS Counter Grid by Preethi (@rpsthecoder) on CodePen.

See that? The counters are now located above the checkboxes!

We defined two columns on the grid element in the CSS, each sized to the maximum width of its own content.

When we grid-ify an element, its contents (including text) block-ify, meaning they acquire a grid-level box (similar to a block-level box) and are automatically placed in the available grid cells.

In the demo above, the counters take up both the grid cells in the first row as specified, and following that, every checkbox resides in the first column and the text after each checkbox stays in the last column.

The checkboxes are forced below the counters without changing the actual source order!

Since we didn’t change the source order, the counter works and we can see the running total count of checked and unchecked checkboxes at the top the same way we did when they were at the bottom. The functionality is left unaffected!

To be honest, there’s a staggering number of ways to code and implement a CSS Grid. You can use grid line numbers, named grid areas, among many other methods. The more you know about them, the easier it gets and the more useful they become. What we covered here is just the tip of the iceberg and you may find other approaches to create a grid that work equally well (or better).



Boilerform: A Follow-Up

When Chris wrote his idea for a Boilerform, I had already been thinking about starting a new project. I’d just decided to put my front-end boilerplate to bed, and wanted something new to think about. Chris’ idea struck a chord with me immediately, so I got enthusiastically involved in the comments like an excitable puppy. That excitement led me to go ahead and build out the initial version of Boilerform, which you can check out here.

The reason for my initial excitement was that I have a guilty pleasure for forms. In various jobs, I’ve worked with forms at a pretty intense level and have learned a lot about them. This has ranged from building dynamic form builders to high-level spam protection for a Harley-Davidson® website platform. Each different project has given me a look at the front-end and back-end of the process. Each of these projects has also picked away at my tolerance for quick, lazy implementations of forms, because I’ve seen the drastic implementations of this at scale.

But hey, we’re not bad people. Forms are a nightmare to work with. Although better now: each browser treats them slightly differently. For example, check out these select menus from a selection of browsers and OSs. Not one of them looks the same.

These are just the tip of the inconsistency iceberg.

Because of these inconsistencies, it’s easy to see why developers bail out of digging too deep or just spin up a copy of Bootstrap and be done with it. Also, in my experience, the design of minor forms, such as a contact form, is left until later in the project when most of the positive momentum has already gone. I’ve even been guilty of building contact forms a day before a website’s launch. 😬

There’s clearly an opportunity to make the process of working with forms—on the front-end, at least—better and I couldn’t resist the temptation to make it!

The Planning

I sat and thought about what pain-points there are when working with forms and what annoys me as a user of forms. I decided that as a developer, I hate styling forms. As a user, poorly implemented form fields annoy me.

An example of the latter is email fields. Now, if you try to fill in an email field on an iOS device, you get that annoying trait of the first letter being capitalized by the browser, because it treats it like a sentence. All you have to do to stop that behaviour is add autocapitalize="none" to your field. I know this isn’t commonly known, because I rarely see it in place, but it’s such a quick win with a positive impact on your users.
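In markup, that’s as small a fix as this (a minimal sketch):

<!-- stops iOS treating the address like a sentence -->
<input type="email" autocapitalize="none">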

I wanted to bake these little tricks right into Boilerform to help developers make a user’s life easier. Creating a front-end boilerplate or framework is about so much more than styling and aesthetics. It’s about sharing your gained experience with others to make the landscape better as a whole.

The Specification

I needed to think about what I wanted Boilerform to do as a minimum viable product, at initial launch. I came up with the following rules:

  • It had to be compatible with most front-ends
  • It had to be well documented
  • It had to be lightweight
  • Someone should be able to drop a CDN link into their <head> and have it just work
  • Someone should also be able to expand on the source for their own projects
  • It shouldn’t be too opinionated

To achieve these points, I had some technology decisions to make. I decided to go for a low barrier-to-entry setup. This was:

  • Sass-powered CSS
  • BEM
  • Plain ol’ HTML
  • A basic compilation setup

I also focused my attention on samples. CodePen was the natural fit for this because they embed really well. Users can also fork them and play with them themselves.

The last decision was to roll out a pattern library to break up components into little pieces. This helped me in a couple of ways. It helped with organization mainly—but it also helped me build Boilerform in a bitty, sporadic nature as I was working on it in the evenings.

I had my plan and my stack, so got cracking.

Keeping it simple

It’s easy for a project like this to get out of hand, so it’s useful to create some points about what Boilerform will be and also what it won’t be.

What Boilerform will be:

  • It’ll always be a boilerplate to get you off to a good start with your project
  • It’ll provide high-level help with HTML, CSS and JavaScript to make both developers’ and users’ lives easier
  • It’ll aim to be super lightweight, so it doesn’t become a heavy burden
  • It’ll offer configurable options that make it flexible and easy to mould into most web projects

What Boilerform won’t be:

  • It won’t be a silver bullet for your forms—it’ll still need some work
  • It won’t be a framework like Bootstrap or Foundation, because it’ll always be a starting point
  • It won’t be overly opinionated with its CSS and JavaScript
  • It’ll never be aimed at one particular framework or web technology

The Specifics

I know y’all like to dive in to the specifics of how things work, so let me give you a whistle-stop tour!

Namespacing the CSS

The first thing I got sorted was namespacing. I’ve worked on a multitude of different sites and setups and they all share something when it comes to CSS: conflicts. With this in mind, I wrote a @mixin that wrapped all the CSS in a .boilerform namespace.

// Source Sass
.c-button {
  @include namespace() {
    background: gray;
  }
}

// This compiles to this with Sass:
.boilerform .c-button {
  background: gray;
}
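Boilerform’s actual source may differ, but a minimal mixin that produces that output could look something like this (my own sketch):

// Wrap the passed-in rules in the .boilerform namespace
@mixin namespace() {
  .boilerform & {
    @content;
  }
}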

The mixin is basic right now, but it gives us flexibility to scale. If we wanted to make the namespacing optional down-the-line, we only have to update this mixin. I love that sort of modularity.

Right now, what it does give us is safety. Nothing leaks out of Boilerform and hopefully, whatever leaks in will be handled by the namespaced resets and rules.

BEM With a Garnish of Prefixes

I love BEM. It’s been core to my CSS and markup for a few years now. One thing I love about BEM is that it helps you build small, encapsulated components. This is perfect for a project like Boilerform.

I could probably target naked elements safely because of the namespacing, but BEM is about more than just putting classes on everything. It gives me and others the freedom to write whatever markup structure we want. It’s also really easy for someone to pick up the code and understand what’s related to what, in both HTML and CSS.

Another thing I added to this setup was a component prefix. Instead of an .input-field component, we’ve got a .c-input-field component. I hope little things like that will help a new contributor see what’s a component right off the bat.

Horror Inputs Get Some Cool Styling

As mentioned above, select menus are awful to style. So are radio buttons and checkboxes.

A trick I’ve been using for a while now is abstracting the styling to other friendlier HTML elements. For example, with <select> elements, I wrap them in a .c-select-field component and use siblings to add a consistent caret.

For checkboxes and radio buttons, I visually-hide the main input and use adjacent <label> elements to display state change. Using this approach makes working with these controls so much easier. Importantly, we maintain accessibility and native events too.
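As a rough illustration of that pattern (the class names and checked-state styling here are hypothetical, not Boilerform’s exact code):

<span class="c-check-field">
  <input class="c-check-field__input" type="checkbox" id="agree">
  <label class="c-check-field__label" for="agree">I agree</label>
</span>

/* visually hide the input, but keep it focusable and announced */
.c-check-field__input {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
}

/* draw the state change on the label, driven by the hidden input */
.c-check-field__input:checked + .c-check-field__label {
  font-weight: bold; /* placeholder checked-state styling */
}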

Base Attributes to Make Fields Easier to Use

I touched on it above with my example about email fields and capitalization, but that wasn’t the only addition of useful attributes.

  • Search fields have autocorrect="off" on them to prevent browsers trying to fix spelling. I strongly recommend that you add this to inputs that a user inserts their name into as well.
  • Number fields have min, max and step attributes set to help with validation. It’s also great for keyboard users.
  • All fields have blank name and id attributes to hopefully speed up the wiring-up process (see the sketch below).
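Put together, fields set up along those lines might look something like this (a sketch; the specific min, max and step values are placeholders):

<input type="search" name="" id="" autocorrect="off">
<input type="number" name="" id="" min="0" max="100" step="1">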

I’m certainly keen for this to be expanded on, because little tweaks like this are great for user experience.

Going Forward. Can You Help?

Boilerform is in a good place right now, but it has real potential to be useful. Some ideas I’ve had for its ongoing development are:

  • Introducing multiple JavaScript library integrations, such as React, Vue, and Angular
  • Create some base form layouts in the pattern library
  • Create Sass mixins for styling pesky stuff like placeholders
  • Improve configurability
  • Add new elements such as the range input
  • Create multilingual documentation

As you can see, that’s a lot of work, so it would be awesome if we can get some contributors into the project to make something truly useful for our community. Pulling in contributors with different areas of expertise and backgrounds will help us make it useful for as many people as possible, from end-users to back-end developers.

Let’s make something great together. 🙂

Check out the project site or the GitHub repository.



People Writing About Style Guides

Are you thinking about style guides lately? It seems to me it couldn’t be a hotter topic these days. I’m delighted to see it, as someone who was trying to think and build this way when the prevailing wisdom was “nice thought, but these never work.” I suspect it’s threefold why style guides and design systems have taken off:

  1. Component-based front-end architectures becoming very popular
  2. Styling philosophies that scope styles becoming very popular
  3. A shift in community attitude that style guides work

That last one feels akin to cryptocurrency to me. If everyone believes in the value, it works. If people stop believing in the value, it dies.

Anyway, in my typical Coffee-and-RSS mornings, I’ve come across quite a few recently written articles on style guides, so I figured I’d round them up for your enjoyment.


How to Build a Design System with a Small Team by Naema Baskanderi:

As a small team working on B2B enterprise software, we were diving into creating a design system with limited time, budget and resources … Where do you start when you don’t have enough resources, time or budget?

Her five tips feel about right to me:

  1. Don’t start from scratch
  2. Know what you’re working with (an audit)
  3. Build as you go
  4. Know your limits
  5. Stay organized

Style guide-driven design systems by Brad Frost:

I’ll often have teams stand up the style guide website on Day 1 of their design system initiative. A style guide serves as the storefront that showcases all of the design system’s ingredients and serves as a tangible center of mass for the whole endeavor.


This Also published their style guide (here are hundreds of others, if you like peeking at other people’s take on this kind of thing).

What is notable about this to me is that it’s the closest to the literal meaning of style guide (as opposed to a pattern library or design system, which are more about design instructions for building out parts of the website). They only include the three things that are most important to their brand: typography, writing, and identity. Smart.

Everything you write should be easy to understand. Clarity of writing usually follows clarity of thought. Take time to think about what you’re going to say, then say it as simply as possible. Keep these rules in mind whenever you’re writing on behalf of the studio.


Laying the foundations for system design by Andrew Couldwell:

I use the term ‘foundations’ as part of a hierarchy for design systems and thinking. Think of the foundations as digital brand guidelines. They inspire and dovetail into our design systems, guiding all our digital products.

  • At a brand level they cover things like values, identity, tone of voice, photography, illustration, colours and typography.
  • At a digital level they cover things like formatting, localization, calls to action, responsive design and accessibility.
  • And in design systems they are the basis of, and cover the application of, things like text styles, form inputs, buttons and responsive grids.

Again a step back and wider view. Yes, a design system, but one that works alongside brand values.


How to create a living style guide by Adriana De La Cuadra:

Similar to a standard style guide, a living style guide provides a set of standards for the use and creation of styles for an application. In the case of a standard style guide, the purpose is to maintain brand cohesiveness and prevent the misuse of graphics and design elements. In the same way LSGs are used to maintain consistency in an application and to guide their implementation. But what makes a LSG different and more powerful is that much of its information comes right from the source code

An easy first reaction might be: Of course our style guide is “living”, we aren’t setting out to build a dead style guide. But I think it’s an interesting distinction to make. Style guides can sit in your development process in different places, as I wrote a few years back.

It’s all to easy to make a style guide that sits on the sidelines or is “the exhaust” of the process. It’s different entirely to place your style guide smack in the middle of a development workflow and not allow any sidestepping.


Lastly, Punit Web rounds up some very recently published style guides, in case you’re particularly interested in fresh ones you perhaps haven’t seen before.



One File, Many Options: Using Variable Fonts on the Web

In 2016, an important development in web typography was jointly announced by representatives from Adobe, Microsoft, Apple, and Google. Version 1.8 of the OpenType font format introduced variable fonts. With so many big names involved, it’s unsurprising that all browsers are on-board and racing ahead with implementation.

Font weights can be far more than just bold and normal—most professionally designed typefaces are available in variants ranging from a thin hairline ultralight to a black extra-heavy bold. To make use of all those weights, we would need a separate file for each. While a design is unlikely to need every font-weight, a wider variety than bold and normal adds visual hierarchy and interest to a page.

The Google Fonts GUI makes clear: the more weights you choose, the slower your site

There’s more than various weights to consider. CSS3 introduced the font-stretch property, with values from ultra-condensed to ultra-expanded. Until now, these values only worked if you provided a separate file for each width. If you wanted every combination of weight and width in both normal and italic, you would need dozens of files.

The popular Gotham font, available in many width and weight combinations

With variable fonts, we can get all this variety with a single file.

The OpenType spec lists five standard axes of variation—all labeled by a four-character string. These are aspects of the typeface that we have control over.

  • wght – Weight is controlled by the CSS font-weight property. The value can be anything from 1 to 999. This will allow for a more granular level of control.
  • wdth – Width is controlled by the CSS font-stretch property. It can take a keyword or a percentage value. While it’s long been possible to use a transform to scaleX or scaleY, that distorts the font in ugly ways unintended by the typographer. The width axis is defined by the font designer to expand or condense elegantly.
  • opsz – Optical sizing can be turned on or off using the new font-optical-sizing property. (I’ll explain what optical sizing is later on.)
  • ital – Italicization is achieved by setting the CSS font-style property to italic.
  • slnt – Slant is controlled by setting the CSS font-style property to oblique. It defaults to a 20-degree slant, but it can also accept a specified angle between -90deg and 90deg.

Unfortunately, not every variable font will necessarily make use of all five axes. It’s entirely dependent on the creator of the particular typeface. After testing every variable font I could get my hands on, by far the most commonly implemented is weight, followed closely by width. Much of the time you will need two files: one for italic and one for regular, as the ital axis isn’t always implemented. As Frank Grießhammer of Adobe told me:

Italic and Roman styles have (often radically) different construction principles, therefore point structures may not always be compatible.

The browser can make any non-italic font emulate italics, but this is typographically ill-advised.

Typographers can define named instances within their variable font. A named instance is a preset—a particular variation the font is capable of accessing with a name (e.g. “Extra Light”) rather than with numbers alone. In the current CSS spec, however, there is no way to access these named instances. It’s important to note that when you use a value like extra-condensed or semi-expanded for font-stretch, the value maps to a percentage predefined in the CSS spec—not to any named instance chosen by the font creator. For font-weight, the bold value maps to 700 and normal to 400. As the spec puts it, “a font might internally provide its own mappings, but those mappings within the font are disregarded.”

The CSS Fonts Module Level 4 spec introduces the new font-variation-settings property to control variable font options. The following two CSS declarations are equivalent:

h1 {
  font-weight: 850;
  font-style: italic;
  font-stretch: normal;
}

h1 {
  font-variation-settings: "wght" 850, "wdth" 100, "ital" 1;
}

The spec strongly prefers using font-optical-sizing, font-style, font-weight and font-stretch over font-variation-settings for controlling any of the five standard axes. As Myles Maxfield kindly explained to me:

font-variation-settings is not identical to the other variation-aware properties, because with these other properties, the browser has insight into the meaning of the variations, and can therefore do things like applying them to other font file formats, or creating synthesized versions if the font file doesn’t support the axis.

Microsoft will register more standard axis tags over time. As new axes are added, we can also expect new CSS properties to control them. Font creators are also free to invent their own axes. This is why font-variation-settings was added to CSS—it is the only way to control custom axes. Lab DJR and Decovar are two typefaces made with the express intention of demonstrating just how malleable a single variable font can be. Lab DJR, for example, offers four custom axes:

h1 {
  font-variation-settings: 'SIZE' 100, 'QUAD' 80, 'BEVL' 950, 'OVAL' 210;
}
Courtesy of David Jonathan Ross. David is the typographer behind Lab DJR and already has several variable fonts to his name.

These foundry-defined custom axes must use uppercase letters while the standardized axes always use lower case. With unique and unstandardized options, CSS authors must count on font developers to properly document their work.

The versatility of Decovar is the perfect showcase for the power of variable fonts being more than just a saved HTTP request

Performance

You might download a variable font in TTF format rather than as a pre-compressed file. You’ll definitely want to convert it into .woff2. Google offers a command line tool, predictably named woff2, to make that easy. If you cd into the folder containing your font while in the command line, you can type:

woff2_compress examplefont.ttf

We’ve established that we’ll only need one HTTP request per typeface (or possibly two to separate Roman and Italic styles). Because they’re doing so much work, you might expect the file size of a variable font to be far larger than a typical font file. Let’s have a (not entirely scientific) look.

Here are some of the variable fonts I have hanging around my laptop, along with their file sizes:

Decovar is only 71 KB even though it has 15 axes

Let’s compare that to single instances of a non-variable version of Source Sans:

Animation

Variable fonts also mean that, for the first time, font-weight (and any other axis) can be animated. While adding type animation may sound like a superfluous embellishment a website can happily survive without, something like adding weight on focus, for example, seems like a natural and intuitive way to denote state to the user. In the past, switching from a normal to a bold weight was utterly jarring. With variable fonts it can be smooth and graceful.
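Here’s a rough sketch of that focus idea, assuming a variable font with a weight axis is already loaded (and using font-variation-settings, which, as noted in the browser support section below, currently works most broadly):

a {
  font-variation-settings: "wght" 400;
  transition: font-variation-settings 0.3s ease;
}

a:focus {
  font-variation-settings: "wght" 700; /* eases heavier instead of snapping to bold */
}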

One Size Fits All?

While Lab DJR and Decovar are excitingly creative, variable fonts aren’t all about avant-garde experimentalism. Optical sizing should bring a better reading experience to the web. Currently, type on the web is size agnostic; you can change the font-size and it will still look the same. Optical sizing means making size-specific optimizations for a typeface where the variation of a letter’s form at different sizes can improve readability. We don’t want larger text to look inelegant or clunky, while smaller text benefits from the removal of fine details. More open counters, the thickening of subtle serifs, and an increase in x-height, width, weight and letter-spacing all improve legibility at smaller sizes. The initial value is auto so if you are using a font that makes use of an optical sizing index, you get the benefit for free out of the box.

What Fonts Are Available?

This technology is quickly making its way into browsers. Making use of it requires you to find a variable font you actually want to use. Google Fonts Early Access has three available, with many more likely to follow. Adobe is remaking some of the most well-known families (i.e. Minion, Myriad, Acumin) to be variable. The open source fonts Source Sans and Source Serif have also been released. Monotype, one of the world’s largest typography companies, has so far introduced beta versions of Avenir Next and Kairos Sans. Some independent type foundries have also started to release variable typefaces. With variable font support now available in all major font-creation software, we can expect the availability to greatly expand over 2018.

Using Your Font

Once you’ve found your font, you need to use @font-face to include it on your site.

We don’t want any browsers to download a font they can’t use. For that reason, we should specify the format inside the @font-face rule. Depending on the file type of your variable font, you can specify woff-variations, woff2-variations, opentype-variations or truetype-variations. As already mentioned, you should always use woff2.

@font-face {
  font-family: 'source sans';
  src: url(SourceSansVariable.woff2) format("woff2-variations"),
       url(SourceSans.woff2) format("woff2"); /* for older browsers */
  font-weight: normal;
  font-style: normal;
}

@font-face {
  font-family: 'source sans';
  src: url(SourceSansVariable-italic.woff2) format("woff2-variations"),
       url(SourceSans-italic.woff2) format("woff2");
  font-weight: normal;
  font-style: italic;
}

A third @font-face is only necessary to provide a backup bold font for browsers that do not support variable fonts. Notice that we are using the same variable font file as for the first @font-face rule, as that file can be both bold and normal:

@font-face {
  font-family: 'source sans';
  src: url(SourceSansVariable.woff2) format("woff2-variations"),
       url(SourceSans-bold.woff2) format("woff2");
  font-weight: 700;
  font-style: normal;
}

If the browser supports variable fonts, SourceSansVariable.woff2 and SourceSansVariable-italic.woff2 will be downloaded and used. If not, SourceSans.woff2, SourceSans-bold.woff2 and SourceSans-italic.woff2 will be downloaded instead.

From here, we can apply the font on an element as we normally would:

html {
  font-family: 'source sans', Verdana, sans-serif;
}

San Francisco

While variable fonts bring performance benefits, “web-safe” system fonts still remain the most performant option because the font is already installed and there is nothing to download. If you want to use a variable font without the need of downloading anything, Apple’s San Francisco, perhaps the prettiest of all system fonts, is also a variable font. Using system fonts no longer requires a massive font-stack:

html {
  font-family: system-ui, -apple-system;
}

The system-ui value is the new standard to access system fonts, while -apple-system is non-standardized syntax that works on Firefox. Traditionally, system fonts have not come in a wide range of weights or widths. Hopefully more will be made available as variable fonts, bringing all the benefits of variable fonts without a single HTTP request.

Browser Support

Variable fonts have shipped in Chrome and Safari. They are already in the insider preview version of Edge and behind a flag in Firefox. At the current time, not all parts of the spec are fully implemented by Chrome. Using variable fonts in conjunction with font-style, font-stretch, font-weight and font-optical-sizing does not work in Chrome, so using font-variation-settings to control the five standard axes is necessary for the time being. Specifying the format as woff2-variations inside of @font-face also lacks support in Chrome (you can specify only woff2 and the font will still work, but then you are unable to have a non-variable woff2 fallback).
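In the meantime, here’s a minimal sketch of driving the registered axes with font-variation-settings directly (the axis values are illustrative, and not every font exposes every axis):

h1 {
  /* The five registered axes: weight, width, slant, italic and optical size */
  font-variation-settings: 'wght' 700, 'wdth' 80, 'slnt' 0, 'ital' 0, 'opsz' 32;
}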


One File, Many Options: Using Variable Fonts on the Web is a post from CSS-Tricks

Tools for Thinking and Tools for Systems

I’ve been obsessed with design tools the past two years, with apps like Sketch, Figma and Photoshop perhaps being the most prolific of the bunch. We use these tools to make high fidelity mockups and ensure high quality user experiences. These tools (and others) are awesome and are generally upping our game as designers and developers, but I believe that the way they’ve changed how we produce work and define UX will soon give rise to yet another new wave of tools.

In the future, I predict two separate categories of design applications: tools for thinking and tools for systems.

Let me explain.

Tools for Thinking

A short while ago Oliver Reichenstein described why we like distractions and how, in order to make great things, we need dedicated moments of focus, discipline, and concentration:

Starting and finishing need courage. There is no app or tool that will give you courage. But there are environments that encourage distraction. And there are environments that encourage you to focus.

When I read that, I thought about how the design apps I use are wonderfully powerful and built for a specific purpose—but they also encourage distraction. I rarely need mockups of an interface to be as high fidelity as the apps are capable of producing, and any time spent moving pixels around in those apps is almost a complete waste on my part.

Instead, I need a tool to focus on the complex, UX sort of work that underpins all the visual aspects of a large website, and I desperately require focus to do that work. I don’t need to select a pretty typeface, I don’t want custom colors, and I don’t care about how accurate the typographic hierarchy is compared to what is actually released. At this stage of the design process, everything is a suggestion, everything is a sketch, and that’s okay. Also, the messier the sketch, the more freedom I’ve had to experiment wildly in all directions and, crucially, there’s a lot more time to truly understand the information that I’m manipulating.

Herein lies the problem: our current tools encourage me to design the finished product first. They beg me to mess with rounded corners, colors, typefaces and stroke styles. But it’s only when I’m working within a strict design system that I ever need to declare those things.

Let’s say we have a half-decent component library where we’ve decided on our typographic hierarchy, our border-radius options, and what sort of background color our buttons have. Do we really need to be so specific in our mockups? Can we be intentionally vague, ditching pixel-perfect mockups and wireframes in favor of real-life working prototypes built with components from our libraries? Others have suggested this sort of process in the past, of course, but what I find interesting here is the framing, or rather the identification of this new category of tools and the concept that a design can slowly gain fidelity over time. I reckon we shouldn’t expect one tool to carry us through the entire design process.

I believe that Balsamiq is quite possibly the closest example that we have today of what I have in mind: tools that help us think and that remove distractions so that we can focus on the larger and more important architectural, organizational and content problems that we ought to deal with first.

But there will always be a need for another set of tools as well.

Tools for Systems

I hear an awful lot of arguments against using Balsamiq-esque, low fidelity design mockups. The main complaint seems to be that they’re far too imprecise to be useful; they’re not interactive and they’re not responsive. They’re merely drawings of the final app.

A short while ago, Dan Eden wrote an interesting piece called “The Burden of Precision” which digs into this issue a little bit:

Without engineers, our products are mere static pictures of products. A pale shadow of the finished result. At best, our designs are sandboxed emulations of the real thing; complex prototypes that demonstrate a small fraction of the real-world states the product may encounter. We spend all this time and energy using precise tools to produce perfect caricatures of things we rarely understand the complexities of making real.

I completely agree with Dan here, especially where he argues that:

The precision is introduced by the engineer, where it rightfully belongs. After all, our designs are completely useless until they are built—what exists in the users’ hands is the final design, and nothing less.

This argument reminds me of the scene in Scott McCloud’s Understanding Comics where the author draws a number of faces on a line with a stickman drawing on one end and a realistic drawing of a human face on the other. McCloud argues that:

The ability of cartoons to focus our attention on an idea is, I think, an important part of their special power…

Likewise, an app that provides the ability to focus only on the necessary details can be hugely beneficial to improving the quality of the interfaces we create. If we already have a component library, then why do we need to make high fidelity mockups in the first place? We already have all those parts in place:

I think what we need are tools to help us translate our low fidelity mockups into real-life working code from component libraries. We can ditch a huge part of the design process that involves adding borders and choosing fonts, because all of those decisions have already been made before and we can lean on those previous decisions to focus on the UX instead.

</rant>

If we have tools to help us think, then the UX of our products, services and apps will improve exponentially because we’ll have an environment that encourages focus on the right things at the right time. We won’t be distracted by making pretty pictures.

Likewise, if we have tools to help us translate our low fidelity mockups into finished code examples (code from our own codebases, mind you, not WYSIWYG generators), then we can work much faster and focus on improving our UI as a whole instead of burrowing our heads into one feature at a time. We’ll have fewer inconsistencies, and our styleguides and component libraries can act as prototypes to test our designs quickly and iteratively. Airbnb’s recent explorations with Sketch are interesting, but I can’t help but see those experiments as hacks on top of a complicated design tool. Instead, let us imagine an app built from the ground up that’s designed to do these sorts of system-y things and leaves the high fidelity mockup features behind.

With that being said, I think there’ll always be a demand for tools like Figma or Sketch and I know that I’ll certainly be using them for the foreseeable future. But I believe there are enormous opportunities to split our design tools up into the two categories I mentioned earlier: tools for thinking and tools for systems.


Tools for Thinking and Tools for Systems is a post from CSS-Tricks

Routing and Route Protection in Server-Rendered Vue Apps Using Nuxt.js

This tutorial assumes basic knowledge of Vue. If you haven’t worked with it before, then you may want to check out this CSS-Tricks guide on getting started.

You might have had some experience trying to render an app built with Vue on a server. The concept and implementation details of Server-Side Rendering (SSR) can be overwhelming on their own, and that is exactly the problem Nuxt.js sets out to solve. The complete code for this tutorial is available on Github.

Why Should I Render to a Server?

If you already know why you should server-render and just want to learn about routing or route protection, then you can jump to the Setting Up a Nuxt.js App from Scratch section.

Sarah Drasner wrote a great post on what Nuxt.js is and why you should use it. She also showed off some of the amazing things you can do with this tool like page routing and page transitions. Nuxt.js is a tool in the Vue ecosystem that you can use to build server-rendered apps from scratch without being bothered by the underlying complexities of rendering a JavaScript app to a server.

Nuxt.js is an addition to what Vue already offers. It builds upon the Vue SSR and routing libraries to expose a seamless platform for your own apps. Nuxt.js boils down to one thing: simplifying your experience as a developer building SSR apps with Vue.

We already did a lot of talking (which they say is cheap); now let’s get our hands dirty.

Setting Up a Nuxt.js App from Scratch

You can quickly scaffold a new project using the Vue CLI tool by running the following command:

vue init nuxt-community/starter-template <project-name>

But that’s not the deal here; we want to get our hands dirty. Building from scratch, you will learn the underlying processes that power the engine of a Nuxt project.

Start by creating an empty folder on your computer, open your terminal to point to this folder, and run the following command to start a new node project:

npm init -y # OR yarn init -y

This will generate a package.json file that looks like this:

{ "name": "nuxt-shop", "version": "1.0.0", "main": "index.js", "license": "MIT"
}

The name property is the same as the name of the folder you’re working in.

Install the Nuxt.js library via npm:

npm install --save nuxt # OR yarn add nuxt

Then configure an npm script to launch the nuxt build process in the package.json file:

"scripts": { "dev": "nuxt"
}

You can then start up the app by running the command you just created:

npm run dev # OR yarn dev

It’s OK to watch the build fail. This is because Nuxt.js looks into a pages folder for content which it will serve to the browser. At this point, this folder does not exist:

Exit the build process, then create a pages folder in the root of your project and try running once more. This time you should get a successful build:

The app launches on port 3000, but you get a 404 when you try to access it:

Nuxt.js maps page routes to file names in the pages folder. This implies that if you had a file named index.vue and another named about.vue in the pages folder, they will resolve to / and /about, respectively. Right now, / is throwing a 404 because index.vue does not exist in the pages folder.

Create the index.vue file with this dead simple snippet:

<template>
  <h1>Greetings from Vue + Nuxt</h1>
</template>

Now, restart the server and the 404 should be replaced with an index route showing the greetings message:

Project-Wide Layout and Assets

Before we get deep into routing, let’s take some time to discuss how to structure your project in such a way that you have a reusable layout, as well as global assets shared by all pages. Let’s start with the global assets. We need these two assets in our project:

  1. Favicon
  2. Base Styles

Nuxt.js provides two root folder options (depending on what you’re doing) for managing assets:

  1. assets: Files here are webpacked (bundled and transformed by webpack). Files like your CSS, global JS, LESS, SASS, and images should be here.
  2. static: Files here don’t go through webpack. They are served to the browser as is. Makes sense for robots.txt, favicons, Github CNAME files, etc.

In our case, our favicon belongs in static while the base style goes in the assets folder. Hence, create the two folders and add base.css at /assets/css/base.css. Also, download this favicon file and put it in the static folder. We need normalize.css as well, but we can install it via npm rather than putting it in assets:

yarn add normalize.css

Finally, tell Nuxt.js about all these assets in a config file. This config file should live in the root of your project as nuxt.config.js:

module.exports = {
  head: {
    titleTemplate: '%s - Nuxt Shop',
    meta: [
      { charset: 'utf-8' },
      { name: 'viewport', content: 'width=device-width, initial-scale=1' },
      { hid: 'description', name: 'description', content: 'Nuxt online shop' }
    ],
    link: [
      { rel: 'stylesheet', href: 'https://fonts.googleapis.com/css?family=Raleway' },
      { rel: 'icon', type: 'image/x-icon', href: 'https://cdn.css-tricks.com/favicon.ico' }
    ]
  },
  css: ['normalize.css', '@/assets/css/base.css']
};

We just defined our title template, page meta information, fonts, favicon and all our styles. Nuxt.js will automatically include them all in the head of our pages.

Add this in the base.css file and let’s see if everything works as expected:

html, body, #__nuxt {
  height: 100%;
}

html {
  font-size: 62.5%;
}

body {
  font-size: 1.5em;
  line-height: 1.6;
  font-weight: 400;
  font-family: 'Raleway', 'HelveticaNeue', 'Helvetica Neue', Helvetica, Arial, sans-serif;
  color: #222;
}

You should see that the font of the greeting message has changed to reflect the CSS:

Now we can talk about layout. Nuxt.js already has a default layout you can customize. Create a layouts folder in the root and add a default.vue file in it with the following layout content:

<template> <div class="main"> <app-nav></app-nav> <!-- Mount the page content here --> <nuxt/> </div>
</template>
<style>
/* You can get the component styles from the Github repository for this demo */
</style> <script>
import nav from '@/components/nav';
export default { components: { 'app-nav': nav }
};
</script>

I am omitting all the styles in the style tag for brevity; you can get them from the code repository.

The layout file is also a component, but it wraps the nuxt component. Everything in this file is shared among all other pages, while each page’s content replaces the nuxt component. Speaking of shared content, the app-nav component in the file should show a simple navigation.

Add the nav component by creating a components folder and adding a nav.vue file in it:

<template>
  <nav>
    <div class="logo">
      <app-h1 is-brand="true">Nuxt Shop</app-h1>
    </div>
    <div class="menu">
      <ul>
        <li>
          <nuxt-link to="/">Home</nuxt-link>
        </li>
        <li>
          <nuxt-link to="/about">About</nuxt-link>
        </li>
      </ul>
    </div>
  </nav>
</template>

<style>
/* You can get the component styles from the Github repository for this demo */
</style>

<script>
import h1 from './h1';

export default {
  components: {
    'app-h1': h1
  }
}
</script>

The component shows brand text and two links. Notice that for Nuxt to handle routing appropriately, we are not using the <a> tag but the <nuxt-link> component. The brand text is rendered using a reusable <h1> component that wraps and extends an <h1> tag. This component is in components/h1.vue:

<template>
  <h1 :class="{brand: isBrand}">
    <slot></slot>
  </h1>
</template>

<style>
/* You can get the component styles from the Github repository for this demo */
</style>

<script>
export default {
  props: ['isBrand']
}
</script>

This is the output of the index page with the layout and these components added:

When you inspect the output, you should see the contents are rendered to the server:

Implicit Routing and Automatic Code Splitting

As mentioned earlier, Nuxt.js uses its file system to generate routes. All the files in the pages directory are mapped to a URL on the server. So, if I had this kind of directory structure:

pages/
--| product/
-----| index.vue
-----| new.vue
--| index.vue
--| about.vue

…then I would automatically get a Vue router object with the following structure:

router: {
  routes: [
    { name: 'index', path: '/', component: 'pages/index.vue' },
    { name: 'about', path: '/about', component: 'pages/about.vue' },
    { name: 'product', path: '/product', component: 'pages/product/index.vue' },
    { name: 'product-new', path: '/product/new', component: 'pages/product/new.vue' }
  ]
}

This is what I prefer to refer to as implicit routing.

On the other hand, these pages are not all bundled into one bundle.js, which is what you might expect when using webpack. In plain Vue projects, that is what we get, and we would have to manually split the code for each route into its own file. With Nuxt.js, you get this splitting out of the box, and it’s referred to as automatic code splitting.

You can see this whole thing in action when you add another file to the pages folder. Name this file about.vue and give it the following content:

<template>
  <div>
    <app-h1>About our Shop</app-h1>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    ...
  </div>
</template>

<style>
...
</style>

<script>
import h1 from '@/components/h1';

export default {
  components: {
    'app-h1': h1
  }
};
</script>

Now click on the About link in the navigation bar and it should take you to /about with the page content looking like this:

A look at the Network tab in DevTools will show you that no pages/index.[hash].js file was loaded; rather, a pages/about.[hash].js file was:

You should take away one thing from this: Routes === Pages. Therefore, you’re free to use the two terms interchangeably in the server-side rendering world.

Data Fetching

This is where the game changes a bit. In plain Vue apps, we would usually wait for the component to load, then make an HTTP request in the created lifecycle method. Unfortunately, when you are also rendering to the server, the server is ready way before the component is. Therefore, if you stick to the created method, you can’t render fetched data to the server because it’s already too late.
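For contrast, here is a hedged sketch of the plain-Vue pattern just described (fetchProducts is a hypothetical API call, not part of this tutorial):

export default {
  data() {
    return { products: [] };
  },
  created() {
    // created runs on the server too, but the server renders the component
    // immediately afterwards, so data arriving asynchronously here never
    // makes it into the server-rendered HTML.
    fetchProducts().then(products => { // hypothetical API call
      this.products = products;
    });
  }
};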

For this reason, Nuxt.js exposes another instance method, similar to created, called asyncData. This method has access to two contexts: the client and the server. Therefore, when you make a request in this method and return a data payload, the payload is automatically attached to the Vue instance.

Let’s see an example. Create a services folder in the root and add a data.js file to it. We are going to simulate data fetching by requesting data from this file:

export default [
  {
    id: 1,
    price: 4,
    title: 'Drinks',
    imgUrl: 'http://res.cloudinary.com/christekh/image/upload/v1515183358/pro3_tqlsyl.png'
  },
  {
    id: 2,
    price: 3,
    title: 'Home',
    imgUrl: 'http://res.cloudinary.com/christekh/image/upload/v1515183358/pro2_gpa4su.png'
  },
  // Truncated for brevity. See repo for full code.
]

Next, update the index page to consume this file:

<template>
  <div>
    <app-banner></app-banner>
    <div class="cta">
      <app-button>Start Shopping</app-button>
    </div>
    <app-product-list :products="products"></app-product-list>
  </div>
</template>

<style>
...
</style>

<script>
import h1 from '@/components/h1';
import banner from '@/components/banner';
import button from '@/components/button';
import productList from '@/components/product-list';
import data from '@/services/data';

export default {
  asyncData(ctx, callback) {
    setTimeout(() => {
      callback(null, { products: data });
    }, 2000);
  },
  components: {
    'app-h1': h1,
    'app-banner': banner,
    'app-button': button,
    'app-product-list': productList
  }
};
</script>

Ignore the imported components and focus on the asyncData method for now. I am simulating an async operation with setTimeout and fetching data after two seconds. The callback method is called with the data you want to expose to the component.
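As a side note, asyncData can also return a Promise instead of taking a callback. Here’s a minimal sketch of the same simulated delay in that style:

import data from '@/services/data';

export default {
  asyncData() {
    // Resolving the promise attaches the payload to the component,
    // just like calling the callback does.
    return new Promise(resolve => {
      setTimeout(() => resolve({ products: data }), 2000);
    });
  }
};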

Now back to the imported components. You have already seen the <h1> component. I have created a few more to serve as UI components for our app. All these components live in the components directory, and you can get the code for them from the Github repo. Rest assured that they contain mostly HTML and CSS, so you should have no trouble understanding what they do.

This is what the output should look like:

Guess what? The fetched data is still rendered to the server!

Parameterized (Dynamic) Routes

Sometimes the data you show in your page views is determined by the state of the routes. A common pattern in web apps is to have a dynamic parameter in a URL. This parameter is used to query data or a database for a given resource. The parameter can come in this form:

https://example.com/product/2

The value 2 in the URL can be 3 or 4 or any other value. The most important thing is that your app will fetch that value and run a query against a dataset to retrieve the related information.

In Nuxt.js, you have the following structure in the pages folder:

pages/
--| product/
-----| _id.vue

This resolves to:

router: {
  routes: [
    { name: 'product-id', path: '/product/:id?', component: 'pages/product/_id.vue' }
  ]
}

To see how that works out, create a product folder in the pages directory and add a _id.vue file to it:

<template> <div class="product-page"> <app-h1>{{product.title}}</app-h1> <div class="product-sale"> <div class="image"> <img :src="product.imgUrl" :alt="product.title"> </div> <div class="description"> <app-h2>${{product.price}}</app-h2> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p> </div> </div> </div>
</template>
<style> </style>
<script>
import h1 from '@/components/h1';
import h2 from '@/components/h2';
import data from '@/services/data';
export default { asyncData({ params }, callback) { setTimeout(() => { callback(null,{product: data.find(v => v.id === parseInt(params.id))}) }, 2000) }, components: { 'app-h1': h1, 'app-h2': h2 },
};
</script>

What’s important here is, again, asyncData. We are simulating an async request with setTimeout. The request uses the id received via the context object’s params to query our dataset for the first matching id. The rest is just the component rendering the product.

Protecting Routes With Middleware

It won’t take too long before you start realizing that you need to secure some of your website’s content from unauthorized users. Yes, the data source might be secured (which is important), but user experience demands that you prevent users from accessing unauthorized content. You can do this by showing a friendly walk-away error or by redirecting them to a login page.

In Nuxt.js, you can use a middleware to protect your pages (and, in turn, your content). A middleware is a piece of logic that is executed before a route is accessed. This logic can prevent the route from being accessed entirely (probably with a redirection).

Create a middleware folder in the root of the project and add an auth.js file:

export default function (ctx) {
  if (!isAuth()) {
    return ctx.redirect('/login')
  }
}

function isAuth() {
  // Check if user session exists somehow
  return false;
}

The middleware checks whether a method, isAuth, returns false. If that is the case, it implies that the user is not authenticated, and the middleware redirects them to a login page. The isAuth method just returns false by default for test purposes. Usually, you would check a session to see if the user is logged in.

Don’t rely on localStorage because the server does not know that it exists.
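For a rough idea of what a more realistic check could look like, here’s a hedged sketch that reads a session cookie in both contexts (the session cookie name is hypothetical; ctx.req is the Node request object, which Nuxt only provides on the server):

export default function (ctx) {
  if (!isAuth(ctx)) {
    return ctx.redirect('/login')
  }
}

function isAuth(ctx) {
  // On the server, cookies arrive on the request headers;
  // on the client, they are available via document.cookie.
  const cookies = ctx.req ? (ctx.req.headers.cookie || '') : document.cookie;
  return cookies.includes('session='); // hypothetical cookie name
}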

You can use this middleware to protect pages by adding it as the value of the middleware instance property. You can add it to the _id.vue file we just created:

export default {
  asyncData({ params }, callback) {
    setTimeout(() => {
      callback(null, { product: data.find(v => v.id === parseInt(params.id)) });
    }, 2000);
  },
  components: {
    //...
  },
  middleware: 'auth'
};

This automatically shuts the page out every single time we access it, because the isAuth method always returns false.
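As an aside, if you wanted every route protected rather than a single page, the same file can be registered globally. A minimal sketch, assuming the auth.js middleware above, added to nuxt.config.js:

module.exports = {
  // ...head, css and the other existing options...
  router: {
    // Runs before every route in the app
    middleware: 'auth'
  }
};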

Long Story, Short

I can safely assume that you have learned what Nuxt.js is and how to build a routed, server-rendered app with it. Check out the Nuxt.js guide for more features and use cases. If you’re working on a React project and need this kind of tool, then I think you should try Next.js.


Routing and Route Protection in Server-Rendered Vue Apps Using Nuxt.js is a post from CSS-Tricks

2017/2018 JavaScript

There has been a lot of research on the JavaScript landscape this year! Here are a few snippets from a bunch of articles. There is a ton of information in each, so I’m just picking out a few juicy quotes.

Perhaps the most interesting bit is how different the data each one looks at is. Each source is different: a big developer survey, npm data, GitHub data, and StackOverflow data. Yet, they mostly tell the same stories.

The Brutal Lifecycle of JavaScript Frameworks

Ian Allen of StackOverflow writes:

JavaScript UI frameworks and libraries work in cycles. Every six months or so, a new one pops up, claiming that it has revolutionized UI development. Thousands of developers adopt it into their new projects, blog posts are written, Stack Overflow questions are asked and answered, and then a newer (and even more revolutionary) framework pops up to usurp the throne.

Using the Stack Overflow Trends tool and some of our internal traffic data, we decided to take a look at some of the more prominent UI frameworks: Angular, React, Vue.js, Backbone, Knockout, and Ember.

Read More

The Top JavaScript Trends to Watch in 2018

Ryan Chartrand of X-Team for Hackernoon writes:

This time last year, not many had faith that Vue would ever become a big competitor to React when it comes to major companies adopting it, but it was impossible to ignore Vue this year, even sending Angular a bit into the shadows in terms of developer hype.

Read More

The State of JavaScript 2017

Sacha Greif uses a survey rather than usage data:

We asked over a hundred questions to more than 28,000 developers all over the world, covering topics going from front-end libraries all the way to back-end frameworks.

I particularly enjoyed the opinions. Lots of people love working with JavaScript and find it to be moving in the right direction, yet also find it overly complex.

Read More

The State of JavaScript Frameworks, 2017

This one is from Laurie Voss of npm, which is probably the best source of data for usage but faces interesting challenges with that data:

You can use npm’s download statistics to give you insight into the amount of people actively invested in using and maintaining a package. However, probably more important than absolute popularity is growth.

Packages, once incorporated into software, have very long lives. People very seldom rip packages out of software once they’re installed. Because of this very low “churn,” packages hardly ever decline in usage. Furthermore, nearly all packages in the npm Registry grow in usage as the number of total npm users continues to skyrocket. They vary only in how fast they’re growing.

This makes measuring growth harder, since measuring absolute growth in downloads all the time makes almost everything look popular.

All in all it tells a familiar story: React is incredibly popular and Vue is the one to watch.

Read More

Top JavaScript Libraries & Tech to Learn in 2018

Eric Elliott writes:

Vue.js did do very well in 2017. It got a lot of headlines and a lot of people got interested. As I predicted, it did not come close to unseating React, and I’m confident to predict it won’t unseat React in 2018, either. That said, it could overtake Angular in 2018.

Read More

2017 JavaScript Rising Stars

Michael Rambeau writes:

Once again, Vue.js is the trendiest project of the year, with more than 40,000 stars added on GitHub during the year.

It’s far more than in 2016 (26,000 stars), and the gap with the next contender (React) is even bigger.

Read More


2017/2018 JavaScript is a post from CSS-Tricks