Fallbacks for Videos-as-Images

Safari 11.1 shipped a strange-but-very-useful feature: the ability to use a video source in the <img> tag. The idea is that it does the same job as a GIF (silent, autoplaying, repeating), but with big performance gains. How big? “20x faster and decode 7x faster than the GIF equivalent,” says Colin Bendell.

Not all browsers support this, so to provide a fallback, the <picture> element is ready. Bruce Lawson shows how easy it can be:

<picture>
  <source type="video/mp4" srcset="adorable-cat.mp4">
  <!-- perhaps even an animated WebP fallback here as well -->
  <img src="adorable-cat.gif" alt="adorable cat tears throat out of owner and eats his eyeballs">
</picture>

Šime Vidas notes you get wider browser support by using the <video> tag:

<video src="https://media.giphy.com/media/klIaoXlnH9TMY/giphy.mp4" muted autoplay loop playsinline></video>

But as Bendell noted, the performance benefits aren’t there with video, notably the fact that video isn’t helped out by the preloader. Sadly, <video> it is for now, as:

there is this nasty WebKit bug in Safari that causes the preloader to download the first <source> regardless of the mimetype declaration. The main DOM loader realizes the error and selects the correct one. However, the damage will be done. The preloader squanders its opportunity to download the image early and on top of that, downloads the wrong version wasting bytes. The good news is that I’ve patched this bug and it should land in Safari TP 45.

In short, using <picture> and <source type> for MIME type selection is not advisable until the next version of Safari reaches 90%+ of the user base.

Still, eventually, it’ll be quite useful.



A Short History of WaSP and Why Web Standards Matter

In August of 2013, Aaron Gustafson posted to the WaSP blog. He had a bittersweet message for a community that he had helped lead:

Thanks to the hard work of countless WaSP members and supporters (like you), Tim Berners-Lee’s vision of the web as an open, accessible, and universal community is largely the reality. While there is still work to be done, the sting of the WaSP is no longer necessary. And so it is time for us to close down The Web Standards Project.

If there’s just the slightest hint of wistful regret in Gustafson’s message, it’s because the Web Standards Project changed what had become the norm on the web during its 15+ years of service. Through dedication and developer advocacy, they hoisted the web up from a nest of browser incompatibility and meaningless markup to the standardized and feature-rich application platform most of us know today.

I previously covered what it took to bring CSS to the World Wide Web. This is the other side of that story. It was only through the efforts of many volunteers working tirelessly behind the scenes that CSS ever had a chance to become what it is today. They are the reason we have web standards at all.

Introducing Web Standards

Web standards weren’t even a thing in 1998. There were HTML and CSS specifications and drafts of recommendations that were managed by the W3C, but they had spotty and uneven browser support which made them little more than words on a page. At the time, web designers stood at the precipice of what would soon be known as the Browser Wars, where Netscape and Microsoft raced to implement exclusive features and add-ons in an escalating fight for market share. Rather than stick to any official specification, these browsers forced designers to support either Netscape Navigator or Internet Explorer. And designers were definitely not happy about it.

Supporting both browsers and their competing feature implementations was possible, but it was also difficult and unreliable, like building a house on sand. To help each other along, many developers began joining mailing lists to swap tips and hacks for dealing with sites that needed to look good no matter where they were rendered.

From these mailing lists, a group began to form around an entirely new idea. The problem, this new group realized, wasn’t with the code, but with the browsers that refused to adhere to the codified, open specifications passed down by the W3C. Browsers touted new presentational HTML elements like the <blink> tag, but they were proprietary and provided no layout options. What the web needed was browsers that could follow the standards of the web.

The group decided they needed to step up and push browsers in the right direction. They called themselves the Web Standards Project. And, since the process would require a bit of a sting, they went by WaSP for short.

Launching the Web Standards Project

In August of 1998, WaSP announced their mission to the public on a brand new website: to “support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.” Within a few hours, 450 people joined WaSP. In a few months, that number would jump to thousands.

WaSP took what was basically a two-pronged approach. The first was in public, tapping into the groundswell of developer support they had gathered to lobby for better standards support in browsers. Using grassroots tactics and targeted outreach, WaSP would often send its members on “missions,” such as emailing browser makers to explain in great detail their troubles working with a lack of consistent web standards support.

They also published scathing reports that put browsers on blast, highlighting all the ways that Netscape or Internet Explorer failed to add necessary support, even going so far as to encourage users to use alternative browsers. It was in these reports that the project truly lived up to its acronym. One needs to look no further than a quote from WaSP’s savage takedown of Internet Explorer as an example of its ability to sting:

Quit before the job’s done, and the flamethrower’s the only answer. Because that’s our job. We speak for thousands of Web developers, and through them, millions of Web users.

The second prong of WaSP’s approach included privately reaching out to passionate developers on browser teams. The problem, for big companies like Netscape and Microsoft, wasn’t that engineers were against web standards. Quite the contrary, actually. Many browser engineers believed deeply in WaSP’s mission but ran up against perceived business interests and red-tape bureaucracy time and time again. As a result, WaSP would often work with browser developers to find the best path forward and advocate on their behalf to the higher-ups when necessary.

Holding it All Together

To help WaSP navigate its way through its missions, reports, and outreach, a Steering Committee was formed. This committee helped set the project’s goals and reached out to the community to gather support. They were the heralds of a better day soon to come, and more than a few influential members would pass through their ranks before the project was over, including: Rachel Cox, Tim Bray, Steve Champeon, Glenn Davis, Glenda Sims, Todd Fahrner, Molly Holzschlag and Aaron Gustafson, among many, many others.

At the top of it all was a project lead who set the tone for the group and gave developers a unified voice. The position was initially held by George Olsen, one of the founders of the project, but was soon picked up by another founding member: Jeffrey Zeldman.

A network of loosely connected satellite groups orbiting around the Steering Committee helped developers and browsers alike understand the importance of web standards. There was, for instance, an Accessibility group that bridged the W3C with browser makers to ensure the web was open and accessible to everyone. Then there was the CSS Samurai, who published reports about CSS support (or, more commonly, lack thereof) in different browsers. They were the ones that devised the Box Acid test and offered guidance to browsers as they worked to expand CSS support. Todd Fahrner, who helped save CSS with doctype switching, counted himself among the CSS Samurai.

Making an Impact

WaSP was huge and growing all the time. Its members were passionate and, little by little, clusters of the community came together to enact change. And that is exactly what happened.

The changes felt kind of small at first but soon they bordered on massive. When Netscape was kicking around the idea of a new rendering engine named Gecko that would include much better standards support across the board, its initial timeline put the release months away. But the WaSP swarmed, emailing and reaching out to Netscape to put pressure on them to release Gecko sooner. It worked and, by the next release, Gecko (and better web standards) shipped.

Tantek Çelik was another member of WaSP. The community inspired him to take a stand on web standards at his day job as lead developer of Internet Explorer for Mac. It was through the encouragement and support of WaSP that he and his team released version 5 with full CSS Level 1 support.

Internet Explorer 5 for Mac was released with full CSS Level 1 support

In August of 2001, after years of public reports, private outreach, and developer advocacy, the WaSP sting provoked seismic change in Internet Explorer, as version 6 was released with CSS Level 1 support and the latest HTML features. The upgrades were due in no small part to the work at the Web Standards Project and their work with dedicated members of the browser team. It appeared that standards were beginning to actually win out. The WaSP’s mission may have even been over.

But instead of calling it quits, they shifted tactics a bit.

Teaching Standards to a New Generation

In the early 2000’s, WaSP would radically change its approach to education and developer outreach.

They started with the launch of the Browser Upgrade Campaign, which educated users who were coming online for the very first time and knew absolutely nothing about web standards and modern browsers. Site owners were encouraged to add some JavaScript and a banner to their sites to target these users. As a result, those surfing to a site on older versions of standards-compliant browsers, like Firefox or Opera, were greeted by a banner simply directing them to upgrade. Users visiting the site on a really old browser, like pre-IE5 or Netscape 5, were redirected to an entirely new page explaining why upgrading to a modern browser with standards support was in their best interest.

A page from the Browser Upgrade Campaign

WaSP was going to bring the web up to speed, even if they had to do it one person at a time. Perhaps no one articulated this sentiment better than Molly Holzschlag when she wrote “Raise Your Standards” in February 2002. In the article, she broke down what web standards are and what they meant for developers and designers. She celebrated the work that had been done by browsers and the community working to make web standards a thing in the first place.

But, she argued, the web was far from done. It was now time for developers to step up to the plate and assume the responsibility for standards themselves by coding them into all of their sites. She wrote:

The Consortium is fraught with its own internal issues, and its actions—while almost always in the best interests of professional Web authors—are occasionally politicized.

Therefore, as Web authors, we’re personally responsible for making implementation decisions within the framework of a site’s markup needs. It’s our job to administer recommendations to the best of our abilities.

This, however, would not be easy. It would once again require the combined efforts of WaSP members to pull together and teach the web a new way to code. Some began publishing tutorials to their personal blogs or on A List Apart. Others created a standards-based online curriculum for web developers who were new to the field. A few members even formed brand-new task forces to work with popular software tools, like Adobe Dreamweaver, and ensure that standards were supported there as well.

The redesigns of ESPN and Wired, which stood as a testament and example for standards-based designs for years to come, were undertaken in part because members of those teams were inspired by the work that WaSP was doing. They would not have been able to take those crucial first steps if not for the examples and tutorials made freely available to them by gracious WaSP members.

That is why web standards are basically second nature to many web developers today. It’s also why we have such a free spirit of creative exchange in our industry. It all started when WaSP decided to share the correct way of doing things right out in the open.

Looking Past Web Standards

It was this openness that carried WaSP into the 2010s. When Holzschlag took over as lead, she advocated for transparency and collaboration between browser makers and the web community. The work of the WaSP, Holzschlag realized, no longer had to come from the outside; it could be done from within. For example, she made inroads at Microsoft to help make web standards a top priority on their browser team.

With each subsequent release, browsers began to catch up to the latest standards from the W3C. Browsers like Opera and Firefox actually competed on supporting the latest standards. Google Chrome used web standards as a selling point when it was initially released around the same time. The decade-and-a-half of work by WaSP was paying off. Browser makers were listening to the W3C and the web community, even going so far as to experiment with new standards before they were officially published for recommendation.

In 2013, WaSP posted its farewell announcement and closed up shop for good. It was a difficult decision for those who had fought long and hard for a better, more accessible and more open web, but it was necessary. There are still a number of battlegrounds for the open web but, thanks to the efforts of WaSP, the one for web standards has been won.

Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.



Counting With CSS Counters and CSS Grid

You’ve heard of CSS Grid, I’m sure of that. It would be hard to miss it considering that the whole front-end developer universe has been raving about it for the past year.

Whether you’re new to Grid or have already spent some time with it, we should start this post with a short definition directly from the words of W3C:

Grid Layout is a new layout model for CSS that has powerful abilities to control the sizing and positioning of boxes and their contents. Unlike Flexible Box Layout, which is single-axis–oriented, Grid Layout is optimized for 2-dimensional layouts: those in which alignment of content is desired in both dimensions.

In my own words: CSS Grid is a mesh of invisible horizontal and vertical lines. We arrange elements in the spaces between those lines to create a desired layout. It’s an easier, more stable, and standardized way to structure content on a web page.

Besides the graph paper foundation, CSS Grid also provides the advantage of a layout model that’s source order independent: irrespective of where a grid item is placed in the source code, it can be positioned anywhere in the grid, across both axes, on screen. This is very important, not only when you’d find it troublesome to update the HTML while rearranging elements on the page, but also when you’d find certain source placements restrictive to layouts.

Although we can always move an element to the desired coordinate on screen using other techniques like translate, position, or margin, they’re both harder to code and to update for situations like building a responsive design, compared to true layout mechanisms like CSS Grid.

In this post, we’re going to demonstrate how we can use the source order independence of CSS Grid to solve a layout issue that’s the result of a source order constraint. Specifically, we’re going to look at checkboxes and CSS Counters.

Counting With Checkboxes

If you’ve never used CSS Counters, don’t worry, the concept is pretty simple! We set a counter to count a set of elements at the same DOM level. That counter is incremented in the CSS rules of those individual elements, essentially counting them.

Here’s the code to count checked and unchecked checkboxes:

<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2
<!-- more checkboxes, if we want them -->

<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>

:root {
  counter-reset: checked-sum unchecked-sum;
}

input[type="checkbox"] {
  counter-increment: unchecked-sum;
}

input[type="checkbox"]:checked {
  counter-increment: checked-sum;
}

.totalUnChecked::after {
  content: counter(unchecked-sum);
}

.totalChecked::after {
  content: counter(checked-sum);
}

In the above code, two counters are set at the root element using the counter-reset property and are incremented at their respective rules, one for checked and the other for unchecked checkboxes, using counter-increment. The values of the counters are then shown as contents of two empty <span>s’ pseudo elements using counter().

Here’s a stripped-down version of what we get with this code:

See the Pen CSS Counter Grid by CSS-Tricks (@css-tricks) on CodePen.

This is pretty cool. We can use it in to-do lists, email inbox interfaces, survey forms, or anywhere where users toggle boxes and will appreciate being shown how many items are checked and how many are unselected. All this with just CSS! Useful, isn’t it?

But the effectiveness of counter() wanes when we realize that an element displaying the total count can only appear after all the elements to be counted in the source code. This is because the browser first needs the chance to count all the elements, before showing the total. Hence, we can’t simply change the markup to place the counters above the checkboxes like this:

<!-- This will not work! -->
<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>
<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2

Then, how else can we get the counters above the checkboxes in our layout? This is where CSS Grid and its layout-rendering powers come into play.

Adding Grid

We’re basically wrapping the previous HTML in a new <div> element that’ll serve as the grid container:

<div class="grid">
  <input type="checkbox" id="c-1">
  <label for="c-1">checkbox #1</label>
  <input type="checkbox" id="c-2">
  <label for="c-2">checkbox #2</label>
  <input type="checkbox" id="c-3">
  <label for="c-3">checkbox #3</label>
  <input type="checkbox" id="c-4">
  <label for="c-4">checkbox #4</label>
  <input type="checkbox" id="c-5">
  <label for="c-5">checkbox #5</label>
  <input type="checkbox" id="c-6">
  <label for="c-6">checkbox #6</label>
  <div class="total">
    <span class="totalChecked"> Total Checked: </span>
    <span class="totalUnChecked"> Total Unchecked: </span>
  </div>
</div>

And, here is the CSS for our grid:

.grid {
  display: grid; /* creates the grid */
  grid-template-columns: repeat(2, max-content); /* creates two columns on the grid that are sized based on the content they contain */
}

.total {
  grid-row: 1; /* places the counters on the first row */
  grid-column: 1 / 3; /* ensures the counters span the full grid width, forcing other content below */
}

This is what we get as a result (with some additional styling):

See the Pen CSS Counter Grid by Preethi (@rpsthecoder) on CodePen.

See that? The counters are now located above the checkboxes!

We defined two columns on the grid element in the CSS, each sized to the maximum width of its own content.

When we grid-ify an element, its contents (text included) block-ify, meaning they acquire grid-level boxes (similar to block-level boxes) and are automatically placed in the available grid cells.

In the demo above, the counters take up both the grid cells in the first row as specified, and following that, every checkbox resides in the first column and the text after each checkbox stays in the last column.

The checkboxes are forced below the counters without changing the actual source order!

Since we didn’t change the source order, the counter works and we can see the running total count of checked and unchecked checkboxes at the top the same way we did when they were at the bottom. The functionality is left unaffected!

To be honest, there’s a staggering number of ways to code and implement a CSS Grid. You can use grid line numbers, named grid areas, among many other methods. The more you know about them, the easier it gets and the more useful they become. What we covered here is just the tip of the iceberg and you may find other approaches to create a grid that work equally well (or better).
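For instance, here’s a sketch of one alternative: the same counters-on-top layout expressed with a named grid area instead of line numbers (assuming the same markup as the demo above):

.grid {
  display: grid;
  grid-template-columns: repeat(2, max-content);
  grid-template-areas: "total total"; /* name both cells of the first explicit row */
}

.total {
  grid-area: total; /* the counters fill the named area, spanning the full width */
}

The checkboxes and labels still auto-place into the implicit rows that follow, so the source order (and the counting) stays untouched.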



Boilerform: A Follow-Up

When Chris wrote his idea for a Boilerform, I had already been thinking about starting a new project. I’d just decided to put my front-end boilerplate to bed, and wanted something new to think about. Chris’ idea struck a chord with me immediately, so I got enthusiastically involved in the comments like an excitable puppy. That excitement led me to go ahead and build out the initial version of Boilerform, which you can check out here.

The reason for my initial excitement was that I have a guilty pleasure for forms. In various jobs, I’ve worked with forms at a pretty intense level and have learned a lot about them. This has ranged from building dynamic form builders to high-level spam protection for a Harley-Davidson® website platform. Each different project has given me a look at the front-end and back-end of the process. Each of these projects has also picked away at my tolerance for quick, lazy implementations of forms, because I’ve seen the drastic consequences of them at scale.

But hey, we’re not bad people. Forms are a nightmare to work with. Although things are better now, each browser still treats them slightly differently. For example, check out these select menus from a selection of browsers and OSs. Not one of them looks the same.

These are just the tip of the inconsistency iceberg.

Because of these inconsistencies, it’s easy to see why developers bail out of digging too deep or just spin up a copy of Bootstrap and be done with it. Also, in my experience, the design of minor forms, such as a contact form, is left until later in the project, when most of the positive momentum has already gone. I’ve even been guilty of building contact forms a day before a website’s launch. 😬

There’s clearly an opportunity to make the process of working with forms—on the front-end, at least—better and I couldn’t resist the temptation to make it!

The Planning

I sat and thought about what pain-points there are when working with forms and what annoys me as a user of forms. I decided that as a developer, I hate styling forms. As a user, poorly implemented form fields annoy me.

An example of the latter is email fields. If you try to fill in an email field on an iOS device, you get that annoying trait of the first letter being capitalized by the browser, because it treats it like a sentence. All you have to do to stop that behaviour is add autocapitalize="none" to your field. I know this isn’t commonly known, because I rarely see it in place, but it’s such a quick win with a positive impact on your users.
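To make that concrete, here’s a minimal sketch of an email field with the fix applied (the name and id values are just placeholders):

<label for="user-email">Email</label>
<!-- autocapitalize="none" stops iOS treating the value like a sentence -->
<input type="email" name="user-email" id="user-email" autocapitalize="none">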

I wanted to bake these little tricks right into Boilerform to help developers make a user’s life easier. Creating a front-end boilerplate or framework is about so much more than styling and aesthetics. It’s about sharing your gained experience with others to make the landscape better as a whole.

The Specification

I needed to think about what I wanted Boilerform to do as a minimum viable product, at initial launch. I came up with the following rules:

  • It had to be compatible with most front-ends
  • It had to be well documented
  • It had to be lightweight
  • Someone should be able to drop a CDN link into their <head> and have it just work
  • Someone should also be able to expand on the source for their own projects
  • It shouldn’t be too opinionated

To achieve these points, I had some technology decisions to make. I decided to go for a low barrier-to-entry setup. This was:

  • Sass powered CSS
  • BEM
  • Plain ol’ HTML
  • A basic compilation setup

I also focused my attention on samples. CodePen was the natural fit for this because they embed really well. Users can also fork them and play with them themselves.

The last decision was to roll out a pattern library to break up components into little pieces. This helped me in a couple of ways. It helped with organization mainly—but it also helped me build Boilerform in a bitty, sporadic nature as I was working on it in the evenings.

I had my plan and my stack, so got cracking.

Keeping it simple

It’s easy for a project like this to get out of hand, so it’s useful to create some points about what Boilerform will be and also what it won’t be.

What Boilerform will be:

  • It’ll always be a boilerplate to get you off to a good start with your project
  • It’ll provide high-level help with HTML, CSS and JavaScript to make both developers’ and users’ lives easier
  • It’ll aim to be super lightweight, so it doesn’t become a heavy burden
  • It’ll offer configurable options that make it flexible and easy to mould into most web projects

What Boilerform won’t be:

  • It won’t be a silver bullet for your forms—it’ll still need some work
  • It won’t be a framework like Bootstrap or Foundation, because it’ll always be a starting point
  • It won’t be overly opinionated with its CSS and JavaScript
  • It’ll never be aimed at one particular framework or web technology

The Specifics

I know y’all like to dive in to the specifics of how things work, so let me give you a whistle-stop tour!

Namespacing the CSS

The first thing I got sorted was namespacing. I’ve worked on a multitude of different sites and setups and they all share something when it comes to CSS: conflicts. With this in mind, I wrote a @mixin that wrapped all the CSS in a .boilerform namespace.

// Source Sass
.c-button {
  @include namespace() {
    background: gray;
  }
}

// This is what it compiles to with Sass:
.boilerform .c-button {
  background: gray;
}
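The mixin itself isn’t shown here, but a minimal Sass implementation that produces that output could look something like this (my reconstruction, not necessarily the actual Boilerform source):

@mixin namespace() {
  // Prefix whatever selector we're currently inside with .boilerform
  .boilerform & {
    @content;
  }
}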

The mixin is basic right now, but it gives us flexibility to scale. If we wanted to make the namespacing optional down-the-line, we only have to update this mixin. I love that sort of modularity.

Right now, what it does give us is safety. Nothing leaks out of Boilerform and hopefully, whatever leaks in will be handled by the namespaced resets and rules.

BEM With a Garnish of Prefixes

I love BEM. It’s been core to my CSS and markup for a few years now. One thing I love about BEM is that it helps you build small, encapsulated components. This is perfect for a project like Boilerform.

I could probably target naked elements safely because of the namespacing, but BEM is about more than just putting classes on everything. It gives me and others the freedom to write whatever markup structure we want. It’s also really easy for someone to pick up the code and understand what’s related to what, in both HTML and CSS.

Another thing I added to this setup was a component prefix. Instead of an .input-field component, we’ve got a .c-input-field component. I hope little things like that will help a new contributor see what’s a component right off the bat.

Horror Inputs Get Some Cool Styling

As mentioned above, select menus are awful to style. So are radio buttons and checkboxes.

A trick I’ve been using for a while now is abstracting the styling to other friendlier HTML elements. For example, with <select> elements, I wrap them in a .c-select-field component and use siblings to add a consistent caret.

For checkboxes and radio buttons, I visually-hide the main input and use adjacent <label> elements to display state change. Using this approach makes working with these controls so much easier. Importantly, we maintain accessibility and native events too.
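As a rough illustration of that checkbox pattern, here’s a hedged sketch (the class names and styles are mine, not Boilerform’s):

<span class="c-check-field">
  <input class="c-check-field__input" type="checkbox" id="agree">
  <label class="c-check-field__label" for="agree">I agree</label>
</span>

.c-check-field__input {
  /* visually hidden, but still focusable and announced by screen readers */
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
}

.c-check-field__label::before {
  /* the visual "box" is drawn on the adjacent label */
  content: "";
  display: inline-block;
  width: 1em;
  height: 1em;
  margin-right: 0.5em;
  border: 1px solid #767676;
  vertical-align: middle;
}

.c-check-field__input:checked + .c-check-field__label::before {
  background: #767676; /* the :checked state shows on the label */
}

Because the real input is only hidden visually, clicking the label still toggles it and keyboard focus still works.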

Base Attributes to Make Fields Easier to Use

I touched on it above with my example about email fields and capitalization, but that wasn’t the only addition of useful attributes.

  • Search fields have autocorrect="off" on them to prevent browsers trying to fix spelling. I strongly recommend that you add this to inputs that a user inserts their name into as well.
  • Number fields have min, max and step attributes set to help with validation. It’s also great for keyboard users.
  • All fields have blank name and id attributes to hopefully speed up the wiring-up process.
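Taken together, a Boilerform-style field might ship with markup along these lines (a sketch; the min, max and step values are illustrative placeholders):

<!-- blank name/id are left for you to wire up -->
<input type="search" name="" id="" autocorrect="off">
<input type="number" name="" id="" min="0" max="100" step="1">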

I’m certainly keen for this to be expanded on, because little tweaks like this are great for user experience.

Going Forward. Can You Help?

Boilerform is in a good place right now, but it has real potential to be useful. Some ideas I’ve had for its ongoing development are:

  • Introduce multiple JavaScript library integrations, such as React, Vue, and Angular
  • Create some base form layouts in the pattern library
  • Create Sass mixins for styling pesky stuff like placeholders
  • Improve configurability
  • Add new elements such as the range input
  • Create multilingual documentation

As you can see, that’s a lot of work, so it would be awesome if we can get some contributors into the project to make something truly useful for our community. Pulling in contributors with different areas of expertise and backgrounds will help us make it useful for as many people as possible, from end-users to back-end developers.

Let’s make something great together. 🙂

Check out the project site or the GitHub repository.



People Writing About Style Guides

Are you thinking about style guides lately? It seems to me it couldn’t be a hotter topic these days. I’m delighted to see it, as someone who was trying to think and build this way when the prevailing wisdom was “nice thought, but these never work.” I suspect it’s threefold why style guides and design systems have taken off:

  1. Component-based front-end architectures becoming very popular
  2. Styling philosophies that scope styles becoming very popular
  3. A shift in community attitude that style guides work

That last one feels akin to cryptocurrency to me. If everyone believes in the value, it works. If people stop believing in the value, it dies.

Anyway, in my typical Coffee-and-RSS mornings, I’ve come across quite a few recently written articles on style guides, so I figured I’d round them up for your enjoyment.


How to Build a Design System with a Small Team by Naema Baskanderi:

As a small team working on B2B enterprise software, we were diving into creating a design system with limited time, budget and resources … Where do you start when you don’t have enough resources, time or budget?

Her five tips feel about right to me:

  1. Don’t start from scratch
  2. Know what you’re working with (an audit)
  3. Build as you go
  4. Know your limits
  5. Stay organized

Style guide-driven design systems by Brad Frost:

I’ll often have teams stand up the style guide website on Day 1 of their design system initiative. A style guide serves as the storefront that showcases all of the design system’s ingredients and serves as a tangible center of mass for the whole endeavor.


This Also published their style guide. (Here are hundreds of others, if you like peeking at other people’s take on this kind of thing.)

What is notable about this to me is that it’s the closest to the meaning of style guide to me (as opposed to a pattern library or design system that are more about design instructions for building out parts of the website). They only include the three things that are most important to their brand: typography, writing, and identity. Smart.

Everything you write should be easy to understand. Clarity of writing usually follows clarity of thought. Take time to think about what you’re going to say, then say it as simply as possible. Keep these rules in mind whenever you’re writing on behalf of the studio.


Laying the foundations for system design by Andrew Couldwell:

I use the term ‘foundations’ as part of a hierarchy for design systems and thinking. Think of the foundations as digital brand guidelines. They inspire and dovetail into our design systems, guiding all our digital products.

  • At a brand level they cover things like values, identity, tone of voice, photography, illustration, colours and typography.
  • At a digital level they cover things like formatting, localization, calls to action, responsive design and accessibility.
  • And in design systems they are the basis of, and cover the application of, things like text styles, form inputs, buttons and responsive grids.

Again a step back and wider view. Yes, a design system, but one that works alongside brand values.


How to create a living style guide by Adriana De La Cuadra:

Similar to a standard style guide, a living style guide provides a set of standards for the use and creation of styles for an application. In the case of a standard style guide, the purpose is to maintain brand cohesiveness and prevent the misuse of graphics and design elements. In the same way LSGs are used to maintain consistency in an application and to guide their implementation. But what makes a LSG different and more powerful is that much of its information comes right from the source code

An easy first reaction might be: Of course our style guide is “living”, we aren’t setting out to build a dead style guide. But I think it’s an interesting distinction to make. Style guides can sit in your development process in different places, as I wrote a few years back.

It’s all too easy to make a style guide that sits on the sidelines or is “the exhaust” of the process. It’s different entirely to place your style guide smack in the middle of a development workflow and not allow any sidestepping.


Lastly, Punit Web rounds up some very recently published style guides, in case you’re particularly interested in fresh ones you perhaps haven’t seen before.


People Writing About Style Guides is a post from CSS-Tricks

One File, Many Options: Using Variable Fonts on the Web

In 2016, an important development in web typography was jointly announced by representatives from Adobe, Microsoft, Apple, and Google. Version 1.8 of the OpenType font format introduced variable fonts. With so many big names involved, it’s unsurprising that all browsers are on-board and racing ahead with implementation.

Font weights can be far more than just bold and normal—most professionally designed typefaces are available in variants ranging from a thin hairline ultralight to a black extra-heavy bold. To make use of all those weights, we would need a separate file for each. While a design is unlikely to need every font-weight, a wider variety than bold and normal adds visual hierarchy and interest to a page.

The Google Fonts GUI makes clear: the more weights you choose, the slower your site

There’s more than various weights to consider. CSS3 introduced the font-stretch property, with values from ultra-condensed to ultra-expanded. Until now, these values only worked if you provided a separate file for each width. If you wanted every combination of weight and width in both normal and italic, you would need dozens of files.

The popular Gotham font, available in many width and weight combinations

With variable fonts, we can get all this variety with a single file.

The OpenType spec lists five standard axes of variation—all labeled by a four-character string. These are aspects of the typeface that we have control over.

  • wght – Weight is controlled by the CSS font-weight property. The value can be anything from 1 to 999. This will allow for a more granular level of control.
  • wdth – Width is controlled by the CSS font-stretch property. It can take a keyword or a percentage value. While it’s long been possible to use a transform to scaleX or scaleY, that distorts the font in ugly ways unintended by the typographer. The width axis is defined by the font designer to expand or condense elegantly.
  • opsz – Optical sizing can be turned on or off using the new font-optical-sizing property. (I’ll explain what optical sizing is later on.)
  • ital – Italicization is achieved by setting the CSS font-style property to italic.
  • slnt – Slant is controlled by setting the CSS font-style property to oblique. It will default to a 20 degree slant, but it can also accept a specified degree between -90deg and 90deg.

Unfortunately, not every variable font will necessarily make use of all five axes. It’s entirely dependent on the creator of the particular typeface. After testing every variable font I could get my hands on, by far the most commonly implemented is weight, followed closely by width. Much of the time you will need two files: one for italic and one for regular, as the ital axis isn’t always implemented. As Frank Grießhammer of Adobe told me:

Italic and Roman styles have (often radically) different construction principles, therefore point structures may not always be compatible.

The browser can make any non-italic font emulate italics, but this is typographically ill-advised.

Typographers can define named instances within their variable font. A named instance is a preset—a particular variation the font is capable of accessing with a name (e.g. “Extra Light”) rather than with numbers alone. In the current CSS spec, however, there is no way to access these named instances. It’s important to note that when you use a value like extra-condensed or semi-expanded for font-stretch, the value maps to a percentage predefined in the CSS spec—not to any named instance chosen by the font creator. For font-weight, the bold value maps to 700 and normal to 400. As the spec puts it, “a font might internally provide its own mappings, but those mappings within the font are disregarded.”
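For example, per the mapping table in the spec, these two declarations are equivalent, regardless of any named instances in the font:

h2 { font-stretch: semi-expanded; } /* the keyword... */
h2 { font-stretch: 112.5%; }        /* ...maps to this fixed percentage in the CSS spec */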

The CSS Fonts Module Level 4 spec introduces the new font-variation-settings property to control variable font options. The following two CSS declarations are equivalent:

h1 {
  font-weight: 850;
  font-style: italic;
  font-stretch: normal;
}

h1 {
  font-variation-settings: "wght" 850, "wdth" 100, "ital" 1;
}

The spec strongly prefers using font-optical-sizing, font-style, font-weight and font-stretch over font-variation-settings for controlling any of the five standard axes. As Myles Maxfield kindly explained to me:

font-variation-settings is not identical to the other variation-aware properties, because with these other properties, the browser has insight into the meaning of the variations, and can therefore do things like applying them to other font file formats, or creating synthesized versions if the font file doesn’t support the axis.

Microsoft will register more standard axis tags over time. As new axes are added, we can also expect new CSS properties to control them. Font creators are also free to invent their own axes. This is why font-variation-settings was added to CSS—it is the only way to control custom axes. Lab DJR and Decovar are two typefaces made with the express intention of demonstrating just how malleable a single variable font can be. Lab DJR, for example, offers four custom axes:

h1 {
  font-variation-settings: 'SIZE' 100, 'QUAD' 80, 'BEVL' 950, 'OVAL' 210;
}
Courtesy of David Jonathan Ross, the typographer behind Lab DJR, who already has several variable fonts to his name.

These foundry-defined custom axes must use uppercase letters while the standardized axes always use lower case. With unique and unstandardized options, CSS authors must count on font developers to properly document their work.

The versatility of Decovar is the perfect showcase for the power of variable fonts being more than just a saved HTTP request

Performance

You might download a variable font in TTF format rather than as a pre-compressed file. You’ll definitely want to convert it into .woff2. Google offers a command line tool predictably named woff2 to make this easy. If you cd into the folder containing your font while in the command line, you can type:

woff2_compress examplefont.ttf

We’ve established that we’ll only need one HTTP request per typeface (or possibly two to separate Roman and Italic styles). Because they’re doing so much work, you might expect the file size of a variable font to be far larger than a typical font file. Let’s have a (not entirely scientific) look.

Here are some of the variable fonts I have hanging around my laptop, along with their file sizes:

Decovar is only 71 KB even though it has 15 axes

Let’s compare that to single instances of a non-variable version of Source Sans.

Animation

Variable fonts also mean that, for the first time, font-weight (and any other axis) can be animated. While adding type animation may sound like a superfluous embellishment a website can happily survive without, something like adding weight on focus, for example, seems like a natural and intuitive way to denote state to the user. In the past, switching from a normal to a bold weight was utterly jarring. With variable fonts it can be smooth and graceful.
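As a minimal sketch of that focus example (the selector, duration, and weights here are arbitrary, and it assumes a variable font with a weight axis is already applied):

a {
  font-weight: 400;
  transition: font-weight 0.4s ease; /* interpolates through the intermediate weights */
}

a:focus,
a:hover {
  font-weight: 700; /* smooth, not jarring, with a variable font */
}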

One Size Fits All?

While Lab DJR and Decovar are excitingly creative, variable fonts aren’t all about avant-garde experimentalism. Optical sizing should bring a better reading experience to the web. Currently, type on the web is size agnostic; you can change the font-size and it will still look the same. Optical sizing means making size-specific optimizations for a typeface where the variation of a letter’s form at different sizes can improve readability. We don’t want larger text to look inelegant or clunky, while smaller text benefits from the removal of fine details. More open counters, the thickening of subtle serifs, and an increase in x-height, width, weight and letter-spacing all improve legibility at smaller sizes. The initial value is auto so if you are using a font that makes use of an optical sizing index, you get the benefit for free out of the box.
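The property itself is about as simple as CSS gets; a quick sketch:

body {
  font-optical-sizing: auto; /* the default; use `none` to opt out */
}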

What Fonts Are Available?

This technology is quickly making its way into browsers. Making use of it requires you to find a variable font you actually want to use. Google Fonts Early Access has three available, with many more likely to follow. Adobe is remaking some of the most well-known families (i.e. Minion, Myriad, Acumin) to be variable. The open source fonts Source Sans and Source Serif have also been released. Monotype, one of the world’s largest typography companies, has so far introduced beta versions of Avenir Next and Kairos Sans. Some independent type foundries have also started to release variable typefaces. With variable font support now available in all major font-creation software, we can expect the availability to greatly expand over 2018.

Using Your Font

Once you’ve found your font, you need to use @font-face to include it on your site.

We don’t want any browsers to download a font they can’t use. For that reason, we should specify the format inside the @font-face rule. Depending on the file type of your variable font, you can specify woff-variations, woff2-variations, opentype-variations or truetype-variations. As already mentioned, you should always use woff2.

@font-face {
  font-family: 'source sans';
  src: url(SourceSansVariable.woff2) format("woff2-variations"),
       url(SourceSans.woff2) format("woff2"); /* for older browsers */
  font-weight: normal;
  font-style: normal;
}

@font-face {
  font-family: 'source sans';
  src: url(SourceSansVariable-italic.woff2) format("woff2-variations"),
       url(SourceSans-italic.woff2) format("woff2");
  font-weight: normal;
  font-style: italic;
}

A third @font-face is only necessary to provide a backup bold font for browsers that do not support variable fonts. Notice that we are using the same variable font file as for the first @font-face rule, as that file can be both bold and normal:

@font-face {
  font-family: 'source sans';
  src: url(SourceSansVariable.woff2) format("woff2-variations"),
       url(SourceSans-bold.woff2) format("woff2");
  font-weight: 700;
  font-style: normal;
}

If the browser supports variable fonts, SourceSansVariable.woff2 and SourceSansVariable-italic.woff2 will be downloaded and used. If not, SourceSans.woff2, SourceSans-bold.woff2 and SourceSans-italic.woff2 will be downloaded instead.

From here, we can apply the font on an element as we normally would:

html {
  font-family: 'source sans', Verdana, sans-serif;
}

San Francisco

While variable fonts bring performance benefits, “web-safe” system fonts still remain the most performant option because the font is already installed and there is nothing to download. If you want to use a variable font without the need of downloading anything, Apple’s San Francisco, perhaps the prettiest of all system fonts, is also a variable font. Using system fonts no longer requires a massive font-stack:

html {
  font-family: system-ui, -apple-system;
}

The system-ui value is the new standard to access system fonts, while -apple-system is non-standardized syntax that works on Firefox. Traditionally, system fonts have not come in a wide range of weights or widths. Hopefully more will be made available as variable fonts, bringing all the benefits of variable fonts without a single HTTP request.

Browser Support

Variable fonts have shipped in Chrome and Safari. They are already in the insider preview version of Edge and behind a flag in Firefox. At the current time, not all parts of the spec are fully implemented by Chrome. Using variable fonts in conjunction with font-style, font-stretch, font-weight and font-optical-sizing does not work in Chrome, so using font-variation-settings to control the five standard axes is necessary for the time being. Specifying the format as woff2-variations inside of @font-face also lacks support in Chrome (you can specify only woff2 and the font will still work, but then you are unable to have a non-variable woff2 fallback).
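Until that shakes out, one possible approach (a sketch, not a battle-tested recipe) is to serve font-variation-settings everywhere and layer the preferred high-level properties on top with a feature query:

h1 {
  font-variation-settings: "wght" 850; /* works in current Chrome */
}

@supports (font-stretch: 87.5%) {
  h1 {
    /* browsers that parse the newer values get the spec-preferred property */
    font-variation-settings: normal;
    font-weight: 850;
  }
}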



Tools for Thinking and Tools for Systems

I’ve been obsessed with design tools the past two years, with apps such as Sketch, Figma and Photoshop perhaps being the most prolific of the bunch. We use these tools to make high fidelity mockups and ensure high quality user experiences. These tools (and others) are awesome and are generally upping our game as designers and developers, but I believe that the way they’ve changed how we produce work and define UX will soon produce yet another new wave of tools.

In the future, I predict two separate categories of design applications: tools for thinking and tools for systems.

Let me explain.

Tools for Thinking

A short while ago Oliver Reichenstein described why we like distractions and how, in order to make great things, we need dedicated moments of focus, discipline, and concentration:

Starting and finishing need courage. There is no app or tool that will give you courage. But there are environments that encourage distraction. And there are environments that encourage you to focus.

When I read that, I thought about how the design apps I use are wonderfully powerful and built for a specific purpose—but they also encourage distraction. I rarely need mockups of an interface to be as high fidelity as the apps are capable of producing, and any time spent moving pixels around in those apps is almost a complete waste on my part.

Instead, I need a tool to focus on the complex, UX sort of work that underpins all the visual aspects of a large website, and I desperately require focus to do that work. I don’t need to select a pretty typeface, I don’t want custom colors, and I don’t care about how accurate the typographic hierarchy is compared to what is actually released. At this stage of the design process, everything is a suggestion, everything is a sketch, and that’s okay. Also, the messier the sketch, the more freedom I’ve had to experiment wildly in all directions and, crucially, there’s a lot more time to truly understand the information that I’m manipulating.

Herein lies the problem: our current tools encourage me to design the finished product first. They beg me to mess with rounded corners, colors, typefaces and stroke styles. But it’s only when I’m working within a strict design system that I ever need to declare those things.

Let’s say we have a half-decent component library where we’ve decided on our typographic hierarchy, our border-radius options, and what sort of background color our buttons have. Do we really need to be so specific in our mockups? Can we be intentionally vague without using pixel-perfect mockups and instead ditch those wireframes in favor of real-life working prototypes built with components from our libraries? Others have suggested this sort of process in the past, of course, but what I find interesting here is the framing, or rather the identification of this new category of tools and the concept that a design can slowly gain fidelity over time. I reckon we shouldn’t expect one tool to carry us through the entire design process.

I believe that Balsamiq is quite possibly the closest example that we have today of what I have in mind: tools that help us think and that remove distractions so that we can focus on the larger and more important architectural, organizational and content problems that we ought to deal with first.

But there will always be a need for another set of tools as well.

Tools for Systems

I hear an awful lot of arguments against using Balsamiq-esque, low fidelity design mockups. The main complaint seems to be that they’re far too imprecise to be useful; they’re not interactive and they’re not responsive. They’re merely drawings of the final app.

A short while ago, Dan Eden wrote an interesting piece called “The Burden of Precision” which digs into this issue a little bit:

Without engineers, our products are mere static pictures of products. A pale shadow of the finished result. At best, our designs are sandboxed emulations of the real thing; complex prototypes that demonstrate a small fraction of the real-world states the product may encounter. We spend all this time and energy using precise tools to produce perfect caricatures of things we rarely understand the complexities of making real.

I completely agree with Dan here, especially where he argues that:

The precision is introduced by the engineer, where it rightfully belongs. After all, our designs are completely useless until they are built—what exists in the users’ hands is the final design, and nothing less.

This argument reminds me of the scene in Scott McCloud’s Understanding Comics where the author draws a number of faces on a line with a stickman drawing on one end and a realistic drawing of a human face on the other. McCloud argues that:

The ability of cartoons to focus our attention on an idea is, I think, an important part of their special power…

Likewise, an app that provides the ability to focus on the necessary details can be hugely beneficial to improving the quality of the interfaces we create. If we already have a component library, then why do we need to make high fidelity mockups in the first place? We already have all those parts in place:

I think what we need are tools to help us translate our low fidelity mockups into real-life working code from component libraries. We can ditch a huge part of the design process that involves adding borders and choosing fonts, because all of those decisions have already been made before and we can lean on those previous decisions to focus on the UX instead.

</rant>

If we have tools to help us think, then the UX of our products, services and apps will improve exponentially because we’ll have an environment that encourages focus on the right things at the right time. We won’t be distracted by making pretty pictures.

Likewise, if we have tools to help us translate our low fidelity mockups into finished code examples—code from our own codebases mind you, not WYSIWYG generators—then we can work much faster and focus on improving our UI as a whole instead of burrowing our heads into one feature at a time. We’ll have fewer inconsistencies and our styleguides and component libraries can act as prototypes to test our designs quickly and iteratively. Airbnb’s recent explorations with Sketch is interesting but I can’t help but see those experiments as hacks on top of a complicated design tool. Instead, let us imagine an app built from the ground up that’s designed to do these sorts of system-y things and leaves the high fidelity mockup features behind.

With that being said, I think there’ll always be a demand for tools like Figma or Sketch and I know that I’ll certainly be using them for the foreseeable future. But I believe there are enormous opportunities to split our design tools up into the two categories I mentioned earlier: tools for thinking and tools for systems.



Routing and Route Protection in Server-Rendered Vue Apps Using Nuxt.js

This tutorial assumes basic knowledge of Vue. If you haven’t worked with it before, then you may want to check out this CSS-Tricks guide on getting started.

You might have had some experience trying to render an app built with Vue on a server. The concept and implementation details of Server-Side Rendering (SSR) can be overwhelming, and that is exactly what Nuxt.js was created to simplify. (The complete code for this tutorial is available on Github.)

Why Should I Render to a Server?

If you already know why you should server-render and just want to learn about routing or route protection, then you can jump down to the Setting Up a Nuxt.js App from Scratch section.

Sarah Drasner wrote a great post on what Nuxt.js is and why you should use it. She also showed off some of the amazing things you can do with this tool like page routing and page transitions. Nuxt.js is a tool in the Vue ecosystem that you can use to build server-rendered apps from scratch without being bothered by the underlying complexities of rendering a JavaScript app to a server.

Nuxt.js is a layer on top of what Vue already offers. It builds upon the Vue SSR and routing libraries to expose a seamless platform for your own apps. Nuxt.js boils down to one thing: simplifying your experience as a developer building SSR apps with Vue.

We already did a lot of talking (which they say is cheap); now let’s get our hands dirty.

Setting Up a Nuxt.js App from Scratch

You can quickly scaffold a new project using the Vue CLI tool by running the following command:

vue init nuxt-community/starter-template <project-name>

But that’s not the plan here; we want to get our hands dirty. Building from scratch will teach you the underlying processes that power the engine of a Nuxt project.

Start by creating an empty folder on your computer, open your terminal to point to this folder, and run the following command to start a new node project:

npm init -y
# OR
yarn init -y

This will generate a package.json file that looks like this:

{ "name": "nuxt-shop", "version": "1.0.0", "main": "index.js", "license": "MIT"
}

The name property is the same as the name of the folder you’re working in.

Install the Nuxt.js library via npm:

npm install --save nuxt
# OR
yarn add nuxt

Then configure an npm script to launch the nuxt build process in the package.json file:

"scripts": { "dev": "nuxt"
}

You can then start up the app by running the command you just created:

npm run dev
# OR
yarn dev

It’s OK to watch the build fail. This is because Nuxt.js looks into a pages folder for content which it will serve to the browser. At this point, this folder does not exist.

Exit the build process, then create a pages folder in the root of your project and try running once more. This time you should get a successful build.

The app launches on port 3000, but you get a 404 when you try to access it.

Nuxt.js maps page routes to file names in the pages folder. This implies that if you had a file named index.vue and another named about.vue in the pages folder, they would resolve to / and /about, respectively. Right now, / is throwing a 404 because index.vue does not exist in the pages folder.
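In other words, the file-to-route mapping looks like this:

pages/
├── index.vue   →   /
└── about.vue   →   /about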

Create the index.vue file with this dead simple snippet:

<template>
  <h1>Greetings from Vue + Nuxt</h1>
</template>

Now, restart the server and the 404 should be replaced with an index route showing the greetings message.

Project-Wide Layout and Assets

Before we get deep into routing, let’s take some time to discuss how to structure your project in such a way that you have a reusable layout, as well as global assets shared across all pages. Let’s start with the global assets. We need these two assets in our project:

  1. Favicon
  2. Base Styles

Nuxt.js provides two root folder options (depending on what you’re doing) for managing assets:

  1. assets: Files here are webpacked (bundled and transformed by webpack). Files like your CSS, global JS, LESS, SASS, and images should go here.
  2. static: Files here don't go through webpack and are served to the browser as is. This makes sense for robots.txt, favicons, a Github CNAME file, etc.

In our case, our favicon belongs in static while the base style goes in the assets folder. Hence, create the two folders and add base.css at /assets/css/base.css. Also download this favicon file and put it in the static folder. We need normalize.css as well, but we can install it via npm rather than putting it in assets:

yarn add normalize.css
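
If it helps to visualize, here is roughly what the relevant part of the project tree should look like at this point (the favicon filename is an assumption; use whatever you downloaded):

nuxt-shop/
--| assets/
-----| css/
--------| base.css
--| static/
-----| favicon.ico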

Finally, tell Nuxt.js about all these assets in a config file. This config file should live in the root of your project as nuxt.config.js:

module.exports = {
  head: {
    titleTemplate: '%s - Nuxt Shop',
    meta: [
      { charset: 'utf-8' },
      { name: 'viewport', content: 'width=device-width, initial-scale=1' },
      { hid: 'description', name: 'description', content: 'Nuxt online shop' }
    ],
    link: [
      { rel: 'stylesheet', href: 'https://fonts.googleapis.com/css?family=Raleway' },
      { rel: 'icon', type: 'image/x-icon', href: 'https://cdn.css-tricks.com/favicon.ico' }
    ]
  },
  css: ['normalize.css', '@/assets/css/base.css']
};

We just defined our title template, page meta information, fonts, favicon and all our styles. Nuxt.js will automatically include them all in the head of our pages.

Add this in the base.css file and let’s see if everything works as expected:

html, body, #__nuxt {
  height: 100%;
}
html {
  font-size: 62.5%;
}
body {
  font-size: 1.5em;
  line-height: 1.6;
  font-weight: 400;
  font-family: 'Raleway', 'HelveticaNeue', 'Helvetica Neue', Helvetica, Arial, sans-serif;
  color: #222;
}

You should see that the font of the greeting message has changed to reflect the CSS:

Now we can talk about layout. Nuxt.js already has a default layout you can customize. Create a layouts folder on the root and add a default.vue file in it with the following layout content:

<template> <div class="main"> <app-nav></app-nav> <!-- Mount the page content here --> <nuxt/> </div>
</template>
<style>
/* You can get the component styles from the Github repository for this demo */
</style> <script>
import nav from '@/components/nav';
export default { components: { 'app-nav': nav }
};
</script>

I am omitting all the styles in the style tag for brevity; you can get them from the code repository.

The layout file is also a component, but it wraps the nuxt component. Everything in this file is shared among all other pages, while each page's content replaces the nuxt component. Speaking of shared content, the app-nav component in the file should show a simple navigation.

Add the nav component by creating a components folder and adding a nav.vue file in it:

<template>
  <nav>
    <div class="logo">
      <app-h1 is-brand="true">Nuxt Shop</app-h1>
    </div>
    <div class="menu">
      <ul>
        <li>
          <nuxt-link to="/">Home</nuxt-link>
        </li>
        <li>
          <nuxt-link to="/about">About</nuxt-link>
        </li>
      </ul>
    </div>
  </nav>
</template>
<style>
/* You can get the component styles from the Github repository for this demo */
</style>
<script>
import h1 from './h1';
export default {
  components: {
    'app-h1': h1
  }
}
</script>

The component shows the brand text and two links. Notice that for Nuxt to handle routing appropriately, we are not using the <a> tag but the <nuxt-link> component. The brand text is rendered using a reusable <h1> component that wraps and extends an <h1> tag. This component is in components/h1.vue:

<template>
  <h1 :class="{brand: isBrand}">
    <slot></slot>
  </h1>
</template>
<style>
/* You can get the component styles from the Github repository for this demo */
</style>
<script>
export default {
  props: ['isBrand']
}
</script>

This is the output of the index page with the layout and these components added:

When you inspect the output, you should see the contents are rendered to the server:

Implicit Routing and Automatic Code Splitting

As mentioned earlier, Nuxt.js uses its file system to generate routes. All the files in the pages directory are mapped to a URL on the server. So, if I had this kind of directory structure:

pages/
--| product/
-----| index.vue
-----| new.vue
--| index.vue
--| about.vue

…then I would automatically get a Vue router object with the following structure:

router: {
  routes: [
    { name: 'index', path: '/', component: 'pages/index.vue' },
    { name: 'about', path: '/about', component: 'pages/about.vue' },
    { name: 'product', path: '/product', component: 'pages/product/index.vue' },
    { name: 'product-new', path: '/product/new', component: 'pages/product/new.vue' }
  ]
}

This is what I prefer to refer to as implicit routing.

On the other hand, these pages are not all bundled into one bundle.js, which would be the expectation when using webpack. In plain Vue projects, that is what we get, and we would have to manually split the code for each route into its own file. With Nuxt.js, you get this out of the box, and it's referred to as automatic code splitting.
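
For contrast, here is a rough sketch of what that manual splitting looks like in a hypothetical plain Vue project using vue-router's lazy-loaded routes; Nuxt.js effectively generates this wiring for you:

// router.js in a plain Vue project (a sketch, not part of our Nuxt app)
import Vue from 'vue';
import VueRouter from 'vue-router';

Vue.use(VueRouter);

// each dynamic import becomes its own chunk in the webpack build
const Index = () => import('./pages/index.vue');
const About = () => import('./pages/about.vue');

export default new VueRouter({
  mode: 'history',
  routes: [
    { path: '/', component: Index },
    { path: '/about', component: About }
  ]
});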

You can see this whole thing in action when you add another file in the pages folder. Name this file about.vue and give it the following content:

<template>
  <div>
    <app-h1>About our Shop</app-h1>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    <p class="about">Lorem ipsum dolor sit amet consectetur adipisicing ...</p>
    ...
  </div>
</template>
<style>
...
</style>
<script>
import h1 from '@/components/h1';
export default {
  components: {
    'app-h1': h1
  }
};
</script>

Now click on the About link in the navigation bar and it should take you to /about with the page content looking like this:

A look at the Network tab in DevTools will show you that no pages/index.[hash].js file was loaded; rather, a pages/about.[hash].js file was:

You should take one thing away from this: Routes === Pages. Therefore, you're free to use the two terms interchangeably in the server-side rendering world.

Data Fetching

This is where the game changes a bit. In plain Vue apps, we would usually wait for the component to load, then make an HTTP request in the created lifecycle method. Unfortunately, when you are also rendering to the server, the server is ready way before the component is ready. Therefore, if you stick to the created method, you can't render fetched data to the server because it's already too late.

For this reason, Nuxt.js exposes another instance method, asyncData, which works much like created. This method has access to two contexts: the client and the server. Therefore, when you make a request in this method and return a data payload, the payload is automatically attached to the Vue instance.
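
Before we build the real thing, here is the bare shape of the method (a minimal sketch; the message payload is just an illustration):

export default {
  asyncData(context, callback) {
    // context exposes things like params and query (and req/res on the server);
    // whatever you pass to the callback is merged into the component's data
    callback(null, { message: 'fetched before the component is created' });
  }
};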

Now let's see a real example. Create a services folder in the root and add a data.js file to it. We are going to simulate data fetching by requesting data from this file:

export default [
  {
    id: 1,
    price: 4,
    title: 'Drinks',
    imgUrl: 'http://res.cloudinary.com/christekh/image/upload/v1515183358/pro3_tqlsyl.png'
  },
  {
    id: 2,
    price: 3,
    title: 'Home',
    imgUrl: 'http://res.cloudinary.com/christekh/image/upload/v1515183358/pro2_gpa4su.png'
  },
  // Truncated for brevity. See repo for full code.
]

Next, update the index page to consume this file:

<template>
  <div>
    <app-banner></app-banner>
    <div class="cta">
      <app-button>Start Shopping</app-button>
    </div>
    <app-product-list :products="products"></app-product-list>
  </div>
</template>
<style>
...
</style>
<script>
import h1 from '@/components/h1';
import banner from '@/components/banner';
import button from '@/components/button';
import productList from '@/components/product-list';
import data from '@/services/data';
export default {
  asyncData(ctx, callback) {
    setTimeout(() => {
      callback(null, { products: data });
    }, 2000);
  },
  components: {
    'app-h1': h1,
    'app-banner': banner,
    'app-button': button,
    'app-product-list': productList
  }
};
</script>

Ignore the imported components and focus on the asyncData method for now. I am simulating an async operation with setTimeout and fetching data after two seconds. The callback method is called with the data you want to expose to the component.
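
As an aside, asyncData doesn't force the callback style on you; returning a Promise works too. The same simulated fetch could be sketched like this:

import data from '@/services/data';

export default {
  asyncData() {
    // Nuxt waits for the Promise and merges the resolved
    // object into the component's data
    return new Promise(resolve => {
      setTimeout(() => resolve({ products: data }), 2000);
    });
  }
};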

Now back to the imported components. You have already seen the <h1> component. I have created a few more to serve as UI components for our app. All these components live in the components directory, and you can get the code for them from the Github repo. Rest assured that they contain mostly HTML and CSS, so you should be fine understanding what they do.

This is what the output should look like:

Guess what? The fetched data is still rendered to the server!

Parameterized (Dynamic) Routes

Sometimes the data you show in your page views is determined by the state of the routes. A common pattern in web apps is to have a dynamic parameter in a URL. This parameter is used to query data or a database for a given resource. The parameters can come in this form:

https://example.com/product/2

The value 2 in the URL could be 3 or 4 or any other value. The important thing is that your app will grab that value and run a query against a dataset to retrieve the related information.

In Nuxt.js, you have the following structure in the pages folder:

pages/
--| product/
-----| _id.vue

This resolves to:

router: {
  routes: [
    { name: 'product-id', path: '/product/:id?', component: 'pages/product/_id.vue' }
  ]
}

To see how that works out, create a product folder in the pages directory and add an _id.vue file to it:

<template> <div class="product-page"> <app-h1>{{product.title}}</app-h1> <div class="product-sale"> <div class="image"> <img :src="product.imgUrl" :alt="product.title"> </div> <div class="description"> <app-h2>${{product.price}}</app-h2> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p> </div> </div> </div>
</template>
<style> </style>
<script>
import h1 from '@/components/h1';
import h2 from '@/components/h2';
import data from '@/services/data';
export default { asyncData({ params }, callback) { setTimeout(() => { callback(null,{product: data.find(v => v.id === parseInt(params.id))}) }, 2000) }, components: { 'app-h1': h1, 'app-h2': h2 },
};
</script>

What's important here is asyncData again. We are simulating an async request with setTimeout. The request uses the id received via the context object's params to query our dataset for the first matching id. The rest is just the component rendering the product.
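
Dynamic pages also pair nicely with Nuxt's optional validate hook, which runs before the page renders. Here is a small sketch that guards against non-numeric ids:

export default {
  validate({ params }) {
    // if this returns false, Nuxt renders its 404 page instead of the component
    return /^\d+$/.test(params.id);
  }
};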

Protecting Routes With Middleware

It won't take long before you start realizing that you need to secure some of your website's content from unauthorized users. Yes, the data source might be secured (which is important), but user experience demands that you also prevent users from accessing unauthorized content. You can do this by showing a friendly walk-away error or redirecting them to a login page.

In Nuxt.js, you can use a middleware to protect your pages (and in turn your contents). A middleware is a piece of logic that is executed before a route is accessed. This logic can prevent the route from being accessed entirely (probably with redirections).

Create a middleware folder in the root of the project and add an auth.js file:

export default function (ctx) {
  if (!isAuth()) {
    return ctx.redirect('/login');
  }
}

function isAuth() {
  // Check if user session exists somehow
  return false;
}

The middleware checks whether a method, isAuth, returns false. If that is the case, it implies that the user is not authenticated, and it redirects the user to a login page. The isAuth method just returns false by default for test purposes. Usually, you would check a session to see if the user is logged in.

Don’t rely on localStorage because the server does not know that it exists.
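
One session check that does work on both sides is reading a cookie, since cookies travel with the request. Here is a hypothetical sketch of what isAuth's job could look like inside the middleware (the session= cookie name is made up; adapt it to however you store sessions):

export default function ({ req, redirect }) {
  // on the server, cookies arrive on the request headers;
  // on the client, document.cookie is available
  const cookie = process.server
    ? (req && req.headers.cookie) || ''
    : document.cookie;
  if (!cookie.includes('session=')) {
    return redirect('/login');
  }
}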

You can use this middleware to protect pages by adding it as the value of the middleware instance property. Let's add it to the _id.vue file we just created:

export default {
  asyncData({ params }, callback) {
    setTimeout(() => {
      callback(null, { product: data.find(v => v.id === parseInt(params.id)) });
    }, 2000);
  },
  components: {
    //...
  },
  middleware: 'auth'
};

This automatically shuts this page out every single time we access it. This is because the isAuth method is always returning false.
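If you wanted this behavior on every page instead of just one, Nuxt.js also accepts middleware globally via the router option in nuxt.config.js; a quick sketch:

// nuxt.config.js
module.exports = {
  router: {
    // runs the auth middleware before every route in the app
    middleware: 'auth'
  }
};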

Long Story, Short

I can safely assume that you now have a solid idea of what Nuxt.js brings to routing and route protection in server-rendered Vue apps. Check out the Nuxt.js guide for more features and use cases. If you're working on a React project and need this kind of tool, then I think you should try Next.js.


Routing and Route Protection in Server-Rendered Vue Apps Using Nuxt.js is a post from CSS-Tricks

2017/2018 JavaScript

There has been a lot of research on the landscape this year! Here are a few snippets from a bunch of articles. There is a ton of information in each, so I’m just picking out a few juicy quotes from each here.

Perhaps the most interesting bit is how different the underlying data is. Each draws from a different source: a big developer survey, npm data, GitHub data, and StackOverflow data. Yet, they mostly tell the same stories.

The Brutal Lifecycle of JavaScript Frameworks

Ian Allen of StackOverflow writes:

JavaScript UI frameworks and libraries work in cycles. Every six months or so, a new one pops up, claiming that it has revolutionized UI development. Thousands of developers adopt it into their new projects, blog posts are written, Stack Overflow questions are asked and answered, and then a newer (and even more revolutionary) framework pops up to usurp the throne.

Using the Stack Overflow Trends tool and some of our internal traffic data, we decided to take a look at some of the more prominent UI frameworks: Angular, React, Vue.js, Backbone, Knockout, and Ember.

Read More

The Top JavaScript Trends to Watch in 2018

Ryan Chartrand of X-Team for Hackernoon writes:

This time last year, not many had faith that Vue would ever become a big competitor to React when it comes to major companies adopting it, but it was impossible to ignore Vue this year, even sending Angular a bit into the shadows in terms of developer hype.

Read More

The State of JavaScript 2017

Sacha Greif uses a survey rather than usage data:

We asked over a hundred questions to more than 28,000 developers all over the world, covering topics going from front-end libraries all the way to back-end frameworks.

I particularly enjoyed the opinions. Lots of people love working with JavaScript and find it to be moving in the right direction, yet also find it overly complex.

Read More

The State of JavaScript Frameworks, 2017

This one is from Laurie Voss of npm, which is probably the best source of data for usage but faces interesting challenges with that data:

You can use npm’s download statistics to give you insight into the amount of people actively invested in using and maintaining a package. However, probably more important than absolute popularity is growth.

Packages, once incorporated into software, have very long lives. People very seldom rip packages out of software once they’re installed. Because of this very low “churn,” packages hardly ever decline in usage. Furthermore, nearly all packages in the npm Registry grow in usage as the number of total npm users continues to skyrocket. They vary only in how fast they’re growing.

This makes measuring growth harder, since measuring absolute growth in downloads all the time makes almost everything look popular.

All in all it tells a familiar story: React is incredibly popular and Vue is the one to watch.

Read More

Top JavaScript Libraries & Tech to Learn in 2018

Eric Elliott writes:

Vue.js did do very well in 2017. It got a lot of headlines and a lot of people got interested. As I predicted, it did not come close to unseating React, and I’m confident to predict it won’t unseat React in 2018, either. That said, it could overtake Angular in 2018.

Read More

2017 JavaScript Rising Stars

Michael Rambeau writes:

Once again, Vue.js is the trendiest project of the year, with more than 40,000 stars added on GitHub during the year.

It’s far more than in 2016 (26,000 stars), and the gap with the next contender (React) is even bigger.

Read More


2017/2018 JavaScript is a post from CSS-Tricks

Creating a Vue.js Serverless Checkout Form: Configure the Checkout Component

This is the fourth post in a four-part series. In Part one, we set up a serverless Stripe function on Azure. Part two covered how we hosted the function on Github. The third part covered Stripe Elements in Vue. This last post shows how to configure the checkout component and make the shopping cart fully functional.

Article Series:

  1. Setup and Testing
  2. Stripe Function and Hosting
  3. Application and Checkout Component
  4. Configure the Checkout Component (This Post)

As a reminder, here’s where we are in our application at this point:

cart and checkout

Configuring the Checkout Component

We have to do a few things to adjust the component in order for it to meet our needs:

  • Make sure the form is only displaying if we haven’t submitted it—we’ll deal with the logic for this in our pay method in a moment
  • Allow the form to take a customer’s email address in case something is wrong with the order.
  • Disable the submit button until the required email address is provided
  • Finally and most importantly, change to our testing key

Here’s our updated checkout component with the changes to the original code highlighted:

<div v-if="!submitted" class="payment"> <h3>Please enter your payment details:</h3> <label for="email">Email</label> <input id="email" type="email" v-model="stripeEmail" placeholder="name@example.com"/> <label for="card">Credit Card</label> <p>Test using this credit card: <span class="cc-number">4242 4242 4242 4242</span>, and enter any 5 digits for the zip code</p> <card class='stripe-card' id="card" :class='{ complete }' stripe='pk_test_5ThYi0UvX3xwoNdgxxxTxxrG' :options='stripeOptions' @change='complete = $event.complete' /> <button class='pay-with-stripe' @click='pay' :disabled='!complete || !stripeEmail'>Pay with credit card</button>
</div>

There are a number of things we need to store and use for this component, so let's add them to data or bring them in as props. The props that we need from our parent component will be total and success. We'll need the total amount of the purchase so we can send it to Stripe, and success will be something we need to coordinate between this component and its parent, because both components need to know whether the payment was successful. I write out the datatypes as well as a default for each of my props.

props: {
  total: {
    type: [Number, String],
    default: '50.00'
  },
  success: {
    type: Boolean,
    default: false
  }
},

Next, the data we need to store: the stripeEmail we collect from the form, the stripeOptions that I left in to show that you can configure some options for your form, and the status and response that we'll get from communicating with the server and Stripe. We also want to track whether the form was submitted and whether it was completed (for enabling and disabling the submit button); both of these can be booleans.

data() {
  return {
    submitted: false,
    complete: false,
    status: '',
    response: '',
    stripeOptions: {
      // you can configure that cc element. I liked the default, but you can
      // see https://stripe.com/docs/stripe.js#element-options for details
    },
    stripeEmail: ''
  };
},

Now for the magic! We have everything we need—we just need to alter our pay() method, which will do all the heavy lifting for us. I’m going to use Axios for this, which will receive and transform the data:

npm i axios --save

…or using Yarn:

yarn add axios

If you’re not familiar with Axios and what it does, check out this article for some background information.
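
If a refresher helps, this is the general shape of a POST request with Axios (a minimal sketch with a placeholder URL and payload), before we use it for real in pay() below:

import axios from 'axios';

// post a JSON payload, then handle the resolved response or any error
axios
  .post('https://example.com/api/endpoint', { some: 'data' })
  .then(response => console.log(response.data))
  .catch(error => console.error(error));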

pay() {
  createToken().then(data => {
    this.submitted = true; // we'll change the flag to let ourselves know it was submitted
    console.log(data.token); // this is a token we would use for the stripeToken below
    axios
      .post(
        'https://sdras-stripe.azurewebsites.net/api/charge?code=zWwbn6LLqMxuyvwbWpTFXdRxFd7a27KCRCEseL7zEqbM9ijAgj1c1w==',
        {
          stripeEmail: this.stripeEmail, // send the email
          stripeToken: data.token.id, // testing token
          stripeAmt: this.total // send the amount
        },
        {
          headers: {
            'Content-Type': 'application/json'
          }
        }
      )
      .then(response => {
        this.status = 'success';
        this.$emit('successSubmit');
        this.$store.commit('clearCartCount');
        // console logs for you 🙂
        this.response = JSON.stringify(response, null, 2);
        console.log(this.response);
      })
      .catch(error => {
        this.status = 'failure';
        // console logs for you 🙂
        this.response = 'Error: ' + JSON.stringify(error, null, 2);
        console.log(this.response);
      });
  });
},

The code above does a number of things:

  • It allows us to track whether we’ve submitted the form or not, with this.submitted
  • It uses Axios to post to our function. We got this URL from going to where the function lives in the portal, and clicking “Get Function URL” on the right side of the screen.
    get function url
  • It sends the email, token, and total to the serverless function
  • If it's successful, it changes the status to success, commits the clearCartCount mutation to our Vuex store to clear the cart, and emits to the parent cart component that the payment was successful. It also logs the response to the console, though this is for educational purposes and should be deleted in production.
  • If it errors, it changes the status to failure, and logs the error response for help with debugging

If the payment fails, which we’ll know from our status, we need to let the user know something went wrong, clear our cart, and allow them to try again. In our template:

<div v-if="status === 'failure'"> <h3>Oh No!</h3> <p>Something went wrong!</p> <button @click="clearCheckout">Please try again</button>
</div>

The button executes the following clearCheckout method, which clears a number of the fields and allows the customer to try again:

clearCheckout() {
  this.submitted = false;
  this.status = '';
  this.complete = false;
  this.response = '';
}

If the payment succeeds, we will show a loading component that plays an SVG animation until we hear back from the server. Sometimes this can take a couple of seconds, so it's important that our animation makes sense whether it's seen for a short or long amount of time, and that it can loop as necessary.

<div v-else class="loadcontain"> <h4>Please hold, we're filling up your cart with goodies</h4> <app-loader />
</div>

Here’s what that looks like:

See the Pen shop loader by Sarah Drasner (@sdras) on CodePen.

Now if we revisit the first cart component we looked at in pages/cart.vue, we can fill that page based on the logic we set up before because it’s been completed:

<div v-if="cartTotal > 0"> <h1>Cart</h1> ... <app-checkout :total="total" @successSubmit="success = true"></app-checkout>
</div> <div v-else-if="cartTotal === 0 && success === false" class="empty"> <h1>Cart</h1> <h3>Your cart is empty.</h3> <nuxt-link exact to="/"><button>Fill 'er up!</button></nuxt-link>
</div> <div v-else> <app-success @restartCart="success = false"/> <h2>Success!</h2> <p>Your order has been processed, it will be delivered shortly.</p>
</div>

If we have items in our cart, we show the cart. If the cart is empty and the success is false, we’ll let them know that their cart is empty. Otherwise, if the checkout has just been processed, we’ll let them know that everything has been executed successfully!

We are now here:

success.vue in the application

In the AppSuccess.vue component, we have a small SVG animation designed to make them feel good about the purchase:

See the Pen success by Sarah Drasner (@sdras) on CodePen.

(You may have to hit “Rerun” to replay the animation.)

We also put a small timer in the mounted() lifecycle hook:

window.setTimeout(() => this.$emit('restartCart'), 3000);

This will show the success message for three seconds while they read it, then kick off the restartCart event that was handled in the component above. This allows us to reset the cart in case they would like to continue shopping.

Conclusion

You learned how to make a serverless function, host it on Github, add required dependencies, communicate with Stripe, set up a Shopping Cart in a Vue application, establish a connection with the serverless function and Stripe, and handle the logic for all of the cart states. Whew, way to go!

It's worth mentioning that the demo app we looked at is a sample application built specifically for the purpose of this tutorial. There are a number of steps you'd want to go through for a production site, including testing, building the app's dist folder, and using real Stripe keys. There are also many ways to set this up; serverless functions can be very flexible in tandem with something like Vue. Hopefully this gets you on track and saves you time as you try it out yourself.


Creating a Vue.js Serverless Checkout Form: Configure the Checkout Component is a post from CSS-Tricks