Direction Aware Hover Effects

This is a particular design trick that never fails to catch people’s eye! I don’t know the exact history of who-thought-of-what first and all that, but I know I have seen a number of implementations of it over the years. I figured I’d round a few of them up here.

Noel Delgado

See the Pen Direction-aware 3D hover effect (Concept) by Noel Delgado (@noeldelgado) on CodePen.

The detection here is done by tracking the mouse position on mouseover and mouseout and calculating which side was crossed. It’s a small amount of clever JavaScript, the meat of which is figuring out that direction:

var getDirection = function (ev, obj) {
  var w = obj.offsetWidth,
      h = obj.offsetHeight,
      x = (ev.pageX - obj.offsetLeft - (w / 2) * (w > h ? (h / w) : 1)),
      y = (ev.pageY - obj.offsetTop - (h / 2) * (h > w ? (w / h) : 1)),
      d = Math.round(Math.atan2(y, x) / 1.57079633 + 5) % 4;
  return d;
};

Then class names are applied depending on that direction to trigger the directional CSS animations.
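
For illustration, here’s roughly how that direction index could be wired up to class names. This is only a minimal sketch; the element and class names (in-top, out-left, and so on) are placeholders of my own rather than the demo’s actual markup:

// Hypothetical usage of getDirection() from above
var item = document.querySelector('.item');
var directions = ['top', 'right', 'bottom', 'left'];

item.addEventListener('mouseover', function (ev) {
  item.className = 'item in-' + directions[getDirection(ev, item)];
});

item.addEventListener('mouseout', function (ev) {
  item.className = 'item out-' + directions[getDirection(ev, item)];
});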

Fabrice Weinberg

See the Pen Direction aware hover pure CSS by Fabrice Weinberg (@FWeinb) on CodePen.

Fabrice uses just pure CSS here. They don’t detect the outgoing direction, but they do detect the incoming direction by way of four hidden hoverable boxes, each rotated to cover a triangular slice of the element.

Codrops

Demo

In an article by Mary Lou on Codrops from 2012, Direction-Aware Hover Effect with CSS3 and jQuery, the detection is also done in JavaScript. Here’s that part of the plugin:

_getDir: function (coordinates) {
  // the width and height of the current div
  var w = this.$el.width(),
      h = this.$el.height(),
      // calculate the x and y to get an angle to the center of the div from that x and y.
      // gets the x value relative to the center of the DIV and "normalize" it
      x = (coordinates.x - this.$el.offset().left - (w / 2)) * (w > h ? (h / w) : 1),
      y = (coordinates.y - this.$el.offset().top - (h / 2)) * (h > w ? (w / h) : 1),
      // the angle and the direction from where the mouse came in/went out clockwise (TRBL=0123);
      // first calculate the angle of the point,
      // add 180 deg to get rid of the negative values
      // divide by 90 to get the quadrant
      // add 3 and do a modulo by 4 to shift the quadrants to a proper clockwise TRBL (top/right/bottom/left)
      direction = Math.round((((Math.atan2(y, x) * (180 / Math.PI)) + 180) / 90) + 3) % 4;
  return direction;
},

It’s technically CSS doing the animation though, as inline styles are applied as needed to the elements.

John Stewart

See the Pen Direction Aware Hover Goodness by John Stewart (@johnstew) on CodePen.

John leaned on Greensock to do all the detection and animation work here. Like all the examples, it has its own homegrown geometric math to calculate the direction in which the elements were hovered.

// Detect Closest Edge
function closestEdge(x, y, w, h) {
  var topEdgeDist = distMetric(x, y, w / 2, 0);
  var bottomEdgeDist = distMetric(x, y, w / 2, h);
  var leftEdgeDist = distMetric(x, y, 0, h / 2);
  var rightEdgeDist = distMetric(x, y, w, h / 2);
  var min = Math.min(topEdgeDist, bottomEdgeDist, leftEdgeDist, rightEdgeDist);
  switch (min) {
    case leftEdgeDist:
      return "left";
    case rightEdgeDist:
      return "right";
    case topEdgeDist:
      return "top";
    case bottomEdgeDist:
      return "bottom";
  }
}

// Distance Formula
function distMetric(x, y, x2, y2) {
  var xDiff = x - x2;
  var yDiff = y - y2;
  return (xDiff * xDiff) + (yDiff * yDiff);
}
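
For context, here’s a rough sketch of how closestEdge() might be fed pointer coordinates relative to the hovered element. This isn’t John’s exact wiring, just an assumed setup for illustration:

// Hypothetical wiring: convert the pointer position to element-relative
// coordinates, then ask which edge the pointer entered from.
var tile = document.querySelector('.tile');

tile.addEventListener('mouseenter', function (ev) {
  var rect = tile.getBoundingClientRect();
  var edge = closestEdge(ev.clientX - rect.left, ev.clientY - rect.top, rect.width, rect.height);
  // edge is "top", "right", "bottom" or "left", ready to drive a Greensock tween
});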

Gabrielle Wee

See the Pen CSS-Only Direction-Aware Cube Links by Gabrielle Wee ✨ (@gabriellewee) on CodePen.

Gabrielle gets it done entirely in CSS by positioning four hoverable child elements which trigger the animation on a sibling element (the cube) depending on which one was hovered. There is some tricky stuff here involving clip-path and transforms that I admit I don’t fully understand. The hoverable areas don’t appear to be triangular like you might expect, but rectangles covering half the area. It seems like they would overlap and interfere with each other, but they don’t appear to. I think it might be that they hang off the edges slightly, giving each edge its own full hover coverage.

Elmer Balbin

See the Pen Direction Aware Tiles using clip-path Pure CSS by Elmer Balbin (@elmzarnsi) on CodePen.

Elmer is also using clip-path here, but the four hoverable elements are clipped into triangles. You can see how each of them has a point at 50% 50%, the center of the square, and has two other corner points.

clip-path: polygon(0 0, 100% 0, 50% 50%);
clip-path: polygon(100% 0, 100% 100%, 50% 50%);
clip-path: polygon(0 100%, 50% 50%, 100% 100%);
clip-path: polygon(0 0, 50% 50%, 0 100%);

Nigel O Toole

Demo

Raw JavaScript powers Nigel’s demo here, which is all modernized to work with npm and modules and all that. It’s familiar calculations though:

const _getDirection = function (e, item) {
  // Width and height of current item
  let w = item.offsetWidth;
  let h = item.offsetHeight;
  let position = _getPosition(item);

  // Calculate the x/y value of the pointer entering/exiting, relative to the center of the item.
  let x = (e.pageX - position.x - (w / 2) * (w > h ? (h / w) : 1));
  let y = (e.pageY - position.y - (h / 2) * (h > w ? (w / h) : 1));

  // Calculate the angle the pointer entered/exited and convert to clockwise format (top/right/bottom/left = 0/1/2/3).
  // See https://stackoverflow.com/a/3647634 for a full explanation.
  let d = Math.round(Math.atan2(y, x) / 1.57079633 + 5) % 4;

  // console.table([x, y, w, h, e.pageX, e.pageY, item.offsetLeft, item.offsetTop, position.x, position.y]);

  return d;
};

The JavaScript ultimately applies classes, which are animated in CSS based on some fancy Sass-generated animations.

Giana

A CSS-only take that handles the outgoing direction nicely!

See the Pen CSS-only directionally aware hover by Giana (@giana) on CodePen.


Seen any others out there? Ever used this on something you’ve built?



Offline *Only* Viewing

It made the rounds a while back that Chris Bolin built a page of his personal website that could only be viewed while you are offline.

This page itself is an experiment in that vein: What if certain content required us to disconnect? What if readers had access to that glorious focus that makes devouring a novel for hours at a time so satisfying? What if creators could pair that with the power of modern devices? Our phones and laptops are amazing platforms for inventive content—if only we could harness our own attention.

Now Bolin has a whole magazine around this same concept called The Disconnect!

The Disconnect is an offline-only, digital magazine of commentary, fiction, and poetry. Each issue forces you to disconnect from the internet, giving you a break from constant distractions and relentless advertisements.

I believe it’s some Service Worker trickery to serve different files depending on the state of the network. Usually, Service Workers are meant to serve cached files when the network is off or slow, so that the website continues to work. This flips that logic on its head, preventing files from being served until the network is off.
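
I haven’t read the actual source, so this is purely a speculative sketch of the idea: a Service Worker whose fetch handler only hands over the real content once a network request fails. The file names are made up for illustration.

// Speculative sketch only, not The Disconnect's real code.
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('offline-only-v1').then(function (cache) {
      // Cache both the real content and a "please disconnect" notice up front.
      return cache.addAll(['/issue.html', '/please-disconnect.html']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request)
      // The network is still reachable: withhold the content.
      .then(function () {
        return caches.match('/please-disconnect.html');
      })
      // The network is down: now serve the cached issue.
      .catch(function () {
        return caches.match('/issue.html');
      })
  );
});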



Wufoo Forms Integrate With Everything

(This is a sponsored post.)

Wufoo helps you build forms you can put on any website. There’s a million reasons you might need to do that, from the humble contact form, to a sales lead generation form, to a sales or registration form.

That’s powerful and useful all by itself. But Wufoo is even more powerful when you consider that it integrates with over 1,000 other web services.

  • Wufoo integrates with over a dozen CRM solutions, including Salesforce, Nutshell, Highrise, Nimble, and more.
  • Wufoo integrates with the most popular email marketing services like MailChimp, Campaign Monitor, Sendloop, and more.
  • Wufoo integrates with payment processors like Stripe, PayPal, Chargify, and more.
  • You can put Wufoo forms on any website, including WordPress sites, Wix sites, and Squarespace sites.




Using Default Parameters in ES6

I’ve recently begun doing more research into what’s new in JavaScript, catching up on a lot of the new features and syntax improvements that have been included in ES6 (i.e. ES2015 and later).

You’ve likely heard about and started using the usual stuff: arrow functions, let and const, rest and spread operators, and so on. One feature, however, that caught my attention is the use of default parameters in functions, which is now an official ES6+ feature. This is the ability to have your functions initialize parameters with default values even if the function call doesn’t include them.

The feature itself is pretty straightforward in its simplest form, but there are quite a few subtleties and gotchas that you’ll want to note, which I’ll try to make clear in this post with some code examples and demos.

Default Parameters in ES5 and Earlier

A function that automatically provides default values for omitted arguments can be a beneficial safeguard for your programs, and this is nothing new.

Prior to ES6, you may have seen or used a pattern like this one:

function getInfo (name, year, color) {
  year = (typeof year !== 'undefined') ? year : 2018;
  color = (typeof color !== 'undefined') ? color : 'Blue';
  // remainder of the function...
}

In this instance, the getInfo() function has only one mandatory parameter: name. The year and color parameters are optional, so if they’re not provided as arguments when getInfo() is called, they’ll be assigned default values:

getInfo('Chevy', 1957, 'Green');
getInfo('Benz', 1965); // default for color is "Blue"
getInfo('Honda'); // defaults are 2018 and "Blue"

Try it on CodePen

Without this kind of check and safeguard in place, any parameters left out of the call would default to a value of undefined, which is usually not desired.

You could also use a truthy/falsy pattern to check for parameters that don’t have values:

function getInfo (name, year, color) {
  year = year || 2018;
  color = color || 'Blue';
  // remainder of the function...
}

But this may cause problems in some cases. In the above example, if you pass in a value of 0 for the year, the default 2018 will override it because 0 evaluates as falsy. In this specific example, it’s unlikely you’d be concerned about that, but there are many cases where your app might want to accept 0 as a valid number rather than treating it as a falsy value.

Try it on CodePen

Of course, even with the typeof pattern, you may have to do further checks to have a truly bulletproof solution. For example, you might expect an optional callback function as a parameter. In that case, checking against undefined alone wouldn’t suffice. You’d also have to check if the passed-in value is a valid function.
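
A pre-ES6 guard for that scenario might look something like this (a hypothetical example, not one of the patterns above):

function fetchInfo (name, callback) {
  // Fall back to a no-op unless a real function was passed in
  callback = (typeof callback === 'function') ? callback : function () {};
  // remainder of the function...
}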

So that’s a bit of a summary covering how we handled default parameters prior to ES6. Let’s look at a much better way.

Default Parameters in ES6

If your app requires that you use pre-ES6 features for legacy reasons or because of browser support, then you might have to do something similar to what I’ve described above. But ES6 has made this much easier. Here’s how to define default parameter values in ES6 and beyond:

function getInfo (name, year = 2018, color = 'blue') {
  // function body here...
}

Try it on CodePen

It’s that simple.

If year and color values are passed into the function call, the values passed in as arguments will supersede the ones defined as parameters in the function definition. This works exactly the same way as with the ES5 patterns, but without all that extra code. Much easier to maintain, and much easier to read.
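
For example, calls to the getInfo() function above behave just like the earlier ES5 versions:

getInfo('Chevy', 1957, 'Green'); // name = 'Chevy', year = 1957, color = 'Green'
getInfo('Benz', 1965);           // color falls back to 'blue'
getInfo('Honda');                // year and color fall back to 2018 and 'blue'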

This feature can be used for any of the parameters in the function head, so you could set a default for the first parameter along with two other expected values that don’t have defaults:

function getInfo (name = 'Pat', year, color) {
  // function body here...
}

Dealing With Omitted Values

Note that—in a case like the one above—if you wanted to omit the optional name argument (thus using the default) while including a year and color, you’d have to pass in undefined as a placeholder for the first argument:

getInfo(undefined, 1995, 'Orange');

If you don’t do this, then logically the first value will always be assumed to be name.

The same would apply if you wanted to omit the year argument (the second one) while including the other two (assuming, of course, the second parameter is optional):

getInfo('Charlie', undefined, 'Pink');

I should also note that the following may produce unexpected results:

function getInfo (name, year = 1965, color = 'blue') {
  console.log(year); // null
}

getInfo('Frankie', null, 'Purple');

Try it on CodePen

In this case, I’ve passed in the second argument as null, which might lead some to believe the year value inside the function should be 1965, which is the default. But this doesn’t happen, because null is considered a valid value. And this makes sense because, according to the spec, null is viewed by the JavaScript engine as the intentional absence of an object’s value, whereas undefined is viewed as something that happens incidentally (e.g. when a function doesn’t have a return value it returns undefined).

So make sure to use undefined and not null when you want the default value to be used. Of course, there might be cases where you want to use null and then deal with the null value within the function body, but you should be familiar with this distinction.
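
If you do want null to trigger the default, you’d have to handle that yourself inside the function body, something along these lines (a hypothetical sketch):

function getInfo (name, year = 1965, color = 'blue') {
  // Explicitly treat null the same as a missing value
  year = (year === null) ? 1965 : year;
  console.log(year); // 1965
}

getInfo('Frankie', null, 'Purple');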

Default Parameter Values and the arguments Object

Another point worth mentioning here is in relation to the arguments object. The arguments object is an array-like object, accessible inside a function’s body, that represents the arguments passed to a function.

In non-strict mode, the arguments object reflects any changes made to the argument values inside the function body. For example:

function getInfo (name, year, color) {
  console.log(arguments);
  /* [object Arguments] {
    0: "Frankie",
    1: 1987,
    2: "Red"
  } */

  name = 'Jimmie';
  year = 1995;
  color = 'Orange';

  console.log(arguments);
  /* [object Arguments] {
    0: "Jimmie",
    1: 1995,
    2: "Orange"
  } */
}

getInfo('Frankie', 1987, 'Red');

Try it on CodePen

Notice in the above example, if I change the values of the function’s parameters, those changes are reflected in the arguments object. This feature was viewed as more problematic than beneficial, so in strict mode the behavior is different:

function getInfo (name, year, color) {
  'use strict';

  name = 'Jimmie';
  year = 1995;
  color = 'Orange';

  console.log(arguments);
  /* [object Arguments] {
    0: "Frankie",
    1: 1987,
    2: "Red"
  } */
}

getInfo('Frankie', 1987, 'Red');

Try it on CodePen

As shown in the demo, in strict mode the arguments object retains its original values for the parameters.

That brings us to the use of default parameters. How does the arguments object behave when the default parameters feature is used? Take a look at the following code:

function getInfo (name, year = 1992, color = 'Blue') {
  console.log(arguments.length); // 1
  console.log(year, color); // 1992 "Blue"

  year = 1995;
  color = 'Orange';

  console.log(arguments.length); // Still 1
  console.log(arguments);
  /* [object Arguments] {
    0: "Frankie"
  } */
  console.log(year, color); // 1995 "Orange"
}

getInfo('Frankie');

Try it on CodePen

There are a few things to note in this example.

First, the inclusion of default parameters doesn’t change the arguments object. So, as in this case, if I pass only one argument in the function call, the arguments object will hold a single item—even with the default parameters present for the optional arguments.

Second, when default parameters are present, the arguments object will always behave the same way in strict mode and non-strict mode. The above example is in non-strict mode, which usually allows the arguments object to be modified. But this doesn’t happen. As you can see, the length of arguments remains the same after modifying the values. Also, when the object itself is logged, the name value is the only one present.

Expressions as Default Parameters

The default parameters feature is not limited to static values but can include an expression to be evaluated to determine the default value. Here’s an example to demonstrate a few things that are possible:

function getAmount() {
  return 100;
}

function getInfo (name, amount = getAmount(), color = name) {
  console.log(name, amount, color)
}

getInfo('Scarlet');
// "Scarlet"
// 100
// "Scarlet"

getInfo('Scarlet', 200);
// "Scarlet"
// 200
// "Scarlet"

getInfo('Scarlet', 200, 'Pink');
// "Scarlet"
// 200
// "Pink"

Try it on CodePen

There are a few things to take note of in the code above. First, I’m allowing the second parameter, when it’s not included in the function call, to be evaluated by means of the getAmount() function. This function will be called only if a second argument is not passed in. This is evident in the second getInfo() call and the subsequent log.

The next key point is that I can use a previous parameter as the default for another parameter. I’m not entirely sure how useful this would be, but it’s good to know it’s possible. As you can see in the above code, the getInfo() function sets the third parameter (color) to equal the first parameter’s value (name), if the third parameter is not included.

And of course, since it’s possible to use functions to determine default parameters, you can also pass an existing parameter into a function used as a later parameter, as in the following example:

function getFullPrice(price) {
  return (price * 1.13);
}

function getValue (price, pricePlusTax = getFullPrice(price)) {
  console.log(price.toFixed(2), pricePlusTax.toFixed(2))
}

getValue(25);
// "25.00"
// "28.25"

getValue(25, 30);
// "25.00"
// "30.00"

Try it on CodePen

In the above example, I’m doing a rudimentary tax calculation in the getFullPrice() function. When this function is called, it uses the existing price parameter as part of the pricePlusTax evaluation. As mentioned earlier, the getFullPrice() function is not called if a second argument is passed into getValue() (as demonstrated in the second getValue() call).

Two things to keep in mind with regards to the above. First, the function call in the default parameter expression needs to include the parentheses, otherwise you’ll receive a function reference rather than an evaluation of the function call.
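
For example, leaving the parentheses off assigns the function itself as the default value (again, a hypothetical snippet):

function getValue (price, pricePlusTax = getFullPrice) {
  console.log(typeof pricePlusTax); // "function", not a number
}

getValue(25);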

Second, you can only reference previous parameters with default parameters. In other words, you can’t reference the second parameter as an argument in a function to determine the default of the first parameter:

// this won't work
function getValue (pricePlusTax = getFullPrice(price), price) {
  console.log(price.toFixed(2), pricePlusTax.toFixed(2))
}

getValue(25); // throws an error

Try it on CodePen

Similarly, as you would expect, you can’t access a variable defined inside the function body from a function parameter.
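
Here’s a quick hypothetical illustration of that last point; the default expression can’t see rate because it lives in the function body:

function getValue (price, pricePlusTax = price * rate) {
  const rate = 1.13;
  console.log(pricePlusTax);
}

getValue(25); // ReferenceError: the parameter default can't see rate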

Conclusion

That should cover just about everything you’ll need to know to get the most out of using default parameters in your functions in ES6 and above. The feature itself is quite easy to use in its simplest form but, as I’ve discussed here, there are quite a few details worth understanding.

If you’d like to read more on this topic, here are some sources:

  • Understanding ECMAScript 6 by Nicholas Zakas. This was my primary source for this article. Nicholas is definitely my favorite JavaScript author.
  • Arguments object on MDN
  • Default Parameters on MDN


Fallbacks for Videos-as-Images

Safari 11.1 shipped a strange-but-very-useful feature: the ability to use a video source in the <img> tag. The idea is it does the same job as a GIF (silent, autoplaying, repeating), but with big performance gains. How big? “20x faster and decode 7x faster than the GIF equivalent,” says Colin Bendell.

Not all browsers support this so, to do a fallback, the <picture> element is ready. Bruce Lawson shows how easy it can be:

<picture>
  <source type="video/mp4" srcset="adorable-cat.mp4">
  <!-- perhaps even an animated WebP fallback here as well -->
  <img src="adorable-cat.gif" alt="adorable cat tears throat out of owner and eats his eyeballs">
</picture>

Šime Vidas notes you get wider browser support by using the <video> tag:

<video src="https://media.giphy.com/media/klIaoXlnH9TMY/giphy.mp4" muted autoplay loop playsinline></video>

But as Bendell noted, the performance benefits aren’t there with video, notably the fact that video isn’t helped out by the preloader. Sadly, <video> it is for now, as:

there is this nasty WebKit bug in Safari that causes the preloader to download the first <source> regardless of the mimetype declaration. The main DOM loader realizes the error and selects the correct one. However, the damage will be done. The preloader squanders its opportunity to download the image early and on top of that, downloads the wrong version wasting bytes. The good news is that I’ve patched this bug and it should land in Safari TP 45.

In short, using the <picture> and <source type> for mime-type selection is not advisable until the next version of Safari reaches 90%+ of the user base.

Still, eventually, it’ll be quite useful.



A Short History of WaSP and Why Web Standards Matter

In August of 2013, Aaron Gustafson posted to the WaSP blog. He had a bittersweet message for a community that he had helped lead:

Thanks to the hard work of countless WaSP members and supporters (like you), Tim Berners-Lee’s vision of the web as an open, accessible, and universal community is largely the reality. While there is still work to be done, the sting of the WaSP is no longer necessary. And so it is time for us to close down The Web Standards Project.

If there’s just the slightest hint of wistful regret in Gustafson’s message, it’s because the Web Standards Project changed everything that had become the norm on the web during its 15+ years of service. Through dedication and developer advocacy, they hoisted the web up from a nest of browser incompatibility and meaningless markup to the standardized and feature-rich application platform most of us know today.

I previously covered what it took to bring CSS to the World Wide Web. This is the other side of that story. It was only through the efforts of many volunteers working tirelessly behind the scenes that CSS ever had a chance to become what it is today. They are the reason we have web standards at all.

Introducing Web Standards

Web standards weren’t even a thing in 1998. There were HTML and CSS specifications and drafts of recommendations that were managed by the W3C, but they had spotty and uneven browser support which made them little more than words on a page. At the time, web designers stood at the precipice of what would soon be known as the Browser Wars, where Netscape and Microsoft raced to implement exclusive features and add-ons in an escalating fight for market share. Rather than stick to any official specification, these browsers forced designers to support either Netscape Navigator or Internet Explorer. And designers were definitely not happy about it.

Supporting both browsers and their competing feature implementations was possible, but it was also difficult and unreliable, like building a house on sand. To help each other along, many developers began joining mailing lists to swap tips and hacks for dealing with sites that needed to look good no matter where they were rendered.

From these mailing lists, a group began to form around an entirely new idea. The problem, this new group realized, wasn’t with the code, but with the browsers that refused to adhere to the codified, open specifications passed down by the W3C. Browsers touted new presentational HTML elements like the <blink> tag, but they were proprietary and provided no layout options. What the web needed was browsers that could follow the standards of the web.

The group decided they needed to step up and push browsers in the right direction. They called themselves the Web Standards Project. And, since the process would require a bit of a sting, they went by WaSP for short.

Launching the Web Standards Project

In August of 1998, WaSP announced their mission to the public on a brand new website: to “support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.” Within a few hours, 450 people joined WaSP. In a few months, that number would jump to thousands.

WaSP took what was basically a two-pronged approach. The first was in public, tapping into the groundswell of developer support they had gathered to lobby for better standards support in browsers. Using grassroots tactics and targeted outreach, WaSP would often send its members on “missions,” such as emailing browser makers to explain in great detail their troubles working with a lack of consistent web standards support.

They also published scathing reports that put browsers on blast, highlighting all the ways that Netscape or Internet Explorer failed to add necessary support, even going so far as to encourage users to use alternative browsers. It was these reports where the project truly lived up to its acronym. One needs to look no further than a quote from WaSP’s savage takedown of Internet Explorer as an example of its ability to sting:

Quit before the job’s done, and the flamethrower’s the only answer. Because that’s our job. We speak for thousands of Web developers, and through them, millions of Web users.

The second prong of WaSP’s approach included privately reaching out to passionate developers on browser teams. The problem, for big companies like Netscape and Microsoft, wasn’t that engineers were against web standards. Quite the contrary, actually. Many browser engineers believed deeply in WaSP’s mission but were held back by perceived business interests and red-tape bureaucracy time and time again. As a result, WaSP would often work with browser developers to find the best path forward and advocate on their behalf to the higher-ups when necessary.

Holding it All Together

To help WaSP navigate its way through its missions, reports, and outreach, a Steering Committee was formed. This committee helped set the project’s goals and reached out to the community to gather support. They were the heralds of a better day soon to come, and more than a few influential members would pass through their ranks before the project was over, including: Rachel Cox, Tim Bray, Steve Champeon, Glenn Davis, Glenda Sims, Todd Fahrner, Molly Holzschlag and Aaron Gustafson, among many, many others.

At the top of it all was a project lead who set the tone for the group and gave developers a unified voice. The position was initially held by George Olsen, one of the founders of the project, but was soon picked up by another founding member: Jeffrey Zeldman.

A network of loosely connected satellite groups orbiting around the Steering Committee helped developers and browsers alike understand the importance of web standards. There was, for instance, an Accessibility group that bridged the W3C with browser makers to ensure the web was open and accessible to everyone. Then there was the CSS Samurai, who published reports about CSS support (or, more commonly, lack thereof) in different browsers. They were the ones that devised the Box Acid test and offered guidance to browsers as they worked to expand CSS support. Todd Fahrner, who helped save CSS with doctype switching, counted himself among the CSS Samurai.

Making an Impact

WaSP was huge and growing all the time. Its members were passionate and, little by little, clusters of the community came together to enact change. And that is exactly what happened.

The changes felt kind of small at first but soon they bordered on massive. When Netscape was kicking around the idea of a new rendering engine named Gecko that would include much better standards support across the board, their initial timeline would have taken months to release. But the WaSP swarmed, emailing and reaching out to Netscape to put pressure on them to release Gecko sooner. It worked and, by the next release, Gecko (and better web standards) shipped.

Tantek Çelik was another member of WaSP. The community inspired him to take a stand on web standards at his day job as lead developer of Internet Explorer for Mac. It was through the encouragement and support of WaSP that he and his team released version 5 with full CSS Level 1 support.

Internet Explorer 5 for Mac was released with full CSS Level 1 support

In August of 2001, after years of public reports and private outreach and developer advocacy, the WaSP sting provoked seismic change in Internet Explorer as version 6 released with CSS Level 1 support and the latest HTML features. The upgrades were due in no small part to the work at the Web Standards Project and their work with dedicated members of the browser team. It appeared that standards were beginning to actually win out. The WaSP’s mission may have even been over.

But instead of calling it quits, they shifted tactics a bit.

Teaching Standards to a New Generation

In the early 2000’s, WaSP would radically change its approach to education and developer outreach.

They started with the launch of the Browser Upgrade Campaign, which educated users who were coming online for the very first time and knew absolutely nothing about web standards and modern browsers. Site owners were encouraged to add some JavaScript and a banner to their sites to target these users. As a result, those surfing to a site on older versions of standards-compliant browsers, like Firefox or Opera, were greeted by a banner simply directing them to upgrade. Users visiting the site on a really old browser, like pre-IE5 or Netscape 5, would be redirected to an entirely new page explaining why upgrading to a modern browser with standards support was in their best interest.

A page from the Browser Upgrade Campaign

WaSP was going to bring the web up to speed, even if they had to do it one person at a time. Perhaps no one articulated this sentiment better than Molly Holzschlag when she wrote “Raise Your Standards” in February 2002. In the article, she broke down what web standards are and what they meant for developers and designers. She celebrated the work that had been done by browsers and the community working to make web standards a thing in the first place.

But, she argued, the web was far from done. It was now time for developers to step up to the plate and assume the responsibility for standards themselves by coding it into all of their sites. She wrote:

The Consortium is fraught with its own internal issues, and its actions—while almost always in the best interests of professional Web authors—are occasionally politicized.

Therefore, as Web authors, we’re personally responsible for making implementation decisions within the framework of a site’s markup needs. It’s our job to administer recommendations to the best of our abilities.

This, however, would not be easy. It would once again require the combined efforts of WaSP members to pull together and teach the web a new way to code. Some began publishing tutorials to their personal blogs or on A List Apart. Others created a standards-based online curriculum for web developers who were new to the field. A few members even formed brand-new task forces to work with popular software tools, like Adobe Dreamweaver, and ensure that standards were supported there as well.

The redesigns of ESPN and Wired, which stood as a testament and example for standards-based designs for years to come, were undertaken in part because members of those teams were inspired by the work that WaSP was doing. They would not have been able to take those crucial first steps if not for the examples and tutorials made freely available to them by gracious WaSP members.

That is why web standards are basically second nature to many web developers today. It’s also why we have such a free spirit of creative exchange in our industry. It all started when WaSP decided to share the correct way of doing things right out in the open.

Looking Past Web Standards

It was this openness that carried WaSP into the late 2000s. When Holzschlag took over as lead, she advocated for transparency and collaboration between browser makers and the web community. The work of the WaSP, Holzschlag realized, no longer needed to come from the outside; it could be done from within. For example, she made inroads at Microsoft to help make web standards a top priority on their browser team.

With each subsequent release, browsers began to catch up to the latest standards from the W3C. Browsers like Opera and Firefox actually competed on supporting the latest standards. Google Chrome used web standards as a selling point when it was initially released around the same time. The decade-and-a-half of work by WaSP was paying off. Browser makers were listening to the W3C and the web community, even going so far as to experiment with new standards before they were officially published for recommendation.

In 2013, WaSP posted its farewell announcement and closed up shop for good. It was a difficult decision for those who had fought long and hard for a better, more accessible and more open web, but it was necessary. There are still a number of battlegrounds for the open web but, thanks to the efforts of WaSP, the one for web standards has been won.

Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.



Counting With CSS Counters and CSS Grid

You’ve heard of CSS Grid, I’m sure of that. It would be hard to miss it considering that the whole front-end developer universe has been raving about it for the past year.

Whether you’re new to Grid or have already spent some time with it, we should start this post with a short definition directly from the words of W3C:

Grid Layout is a new layout model for CSS that has powerful abilities to control the sizing and positioning of boxes and their contents. Unlike Flexible Box Layout, which is single-axis–oriented, Grid Layout is optimized for 2-dimensional layouts: those in which alignment of content is desired in both dimensions.

In my own words, CSS Grid is a mesh of invisible horizontal and vertical lines. We arrange elements in the spaces between those lines to create a desired layout. An easier, stable, and standardized way to structure contents in a web page.

Besides the graph paper foundation, CSS Grid also provides the advantage of a layout model that’s source order independent: irrespective of where a grid item is placed in the source code, it can be positioned anywhere in the grid across both the axes on screen. This is very important, not only for when you’d find it troublesome to update HTML while rearranging elements on page but also at times when you’d find certain source placements being restrictive to layouts.

Although we can always move an element to the desired coordinate on screen using other techniques like translate, position, or margin, they’re both harder to code and to update for situations like building a responsive design, compared to true layout mechanisms like CSS Grid.

In this post, we’re going to demonstrate how we can use the source order independence of CSS Grid to solve a layout issue that’s the result of a source order constraint. Specifically, we’re going to look at checkboxes and CSS Counters.

Counting With Checkboxes

If you’ve never used CSS Counters, don’t worry, the concept is pretty simple! We set a counter to count a set of elements at the same DOM level. That counter is incremented in the CSS rules of those individual elements, essentially counting them.

Here’s the code to count checked and unchecked checkboxes:

<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2
<!-- more checkboxes, if we want them -->

<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>

:root {
  counter-reset: checked-sum unchecked-sum;
}

input[type="checkbox"] {
  counter-increment: unchecked-sum;
}

input[type="checkbox"]:checked {
  counter-increment: checked-sum;
}

.totalUnChecked::after {
  content: counter(unchecked-sum);
}

.totalChecked::after {
  content: counter(checked-sum);
}

In the above code, two counters are set at the root element using the counter-reset property and are incremented at their respective rules, one for checked and the other for unchecked checkboxes, using counter-increment. The values of the counters are then shown as contents of two empty <span>s’ pseudo elements using counter().

Here’s a stripped-down version of what we get with this code:

See the Pen CSS Counter Grid by CSS-Tricks (@css-tricks) on CodePen.

This is pretty cool. We can use it in to-do lists, email inbox interfaces, survey forms, or anywhere where users toggle boxes and will appreciate being shown how many items are checked and how many are unselected. All this with just CSS! Useful, isn’t it?

But the effectiveness of counter() wanes when we realize that an element displaying the total count can only appear after all the elements to be counted in the source code. This is because the browser first needs the chance to count all the elements, before showing the total. Hence, we can’t simply change the markup to place the counters above the checkboxes like this:

<!-- This will not work! -->
<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>
<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2

Then, how else can we get the counters above the checkboxes in our layout? This is where CSS Grid and its layout-rendering powers come into play.

Adding Grid

We’re basically wrapping the previous HTML in a new <div> element that’ll serve as the grid container:

<div class="grid">
  <input type="checkbox" id="c-1"> <label for="c-1">checkbox #1</label>
  <input type="checkbox" id="c-2"> <label for="c-2">checkbox #2</label>
  <input type="checkbox" id="c-3"> <label for="c-3">checkbox #3</label>
  <input type="checkbox" id="c-4"> <label for="c-4">checkbox #4</label>
  <input type="checkbox" id="c-5"> <label for="c-5">checkbox #5</label>
  <input type="checkbox" id="c-6"> <label for="c-6">checkbox #6</label>

  <div class="total">
    <span class="totalChecked"> Total Checked: </span>
    <span class="totalUnChecked"> Total Unchecked: </span>
  </div>
</div>

And, here is the CSS for our grid:

.grid {
  display: grid; /* creates the grid */
  grid-template-columns: repeat(2, max-content); /* creates two columns on the grid that are sized based on the content they contain */
}

.total {
  grid-row: 1; /* places the counters on the first row */
  grid-column: 1 / 3; /* ensures the counters span the full grid width, forcing other content below */
}

This is what we get as a result (with some additional styling):

See the Pen CSS Counter Grid by Preethi (@rpsthecoder) on CodePen.

See that? The counters are now located above the checkboxes!

We defined two columns on the grid element in the CSS, each accommodating its own content to their maximum size.

When we grid-ify an element, its contents (text included) block-ify, meaning they acquire a grid-level box (similar to a block-level box) and are automatically placed in the available grid cells.

In the demo above, the counters take up both the grid cells in the first row as specified, and following that, every checkbox resides in the first column and the text after each checkbox stays in the last column.

The checkboxes are forced below the counters without changing the actual source order!

Since we didn’t change the source order, the counter works and we can see the running total count of checked and unchecked checkboxes at the top the same way we did when they were at the bottom. The functionality is left unaffected!

To be honest, there’s a staggering number of ways to code and implement a CSS Grid. You can use grid line numbers, named grid areas, among many other methods. The more you know about them, the easier it gets and the more useful they become. What we covered here is just the tip of the iceberg and you may find other approaches to create a grid that work equally well (or better).



Web-Powered Augmented Reality: a Hands-On Tutorial

Uri Shaked has written about his journey in AR on the web from the very early days of Google’s Project Tango to the recent A-Frame experiments from Mozilla. Front-end devs might be interested in A-Frame because of how you work with it – it’s a declarative language like HTML! I particularly like this section where Uri describes how it felt to first play around with AR:

The ability to place virtual objects in the real space, and have them stick in place even when you move around, seemed to me like we were diving down the uncanny valley, where the boundaries between the physical world and the game were beginning to blur. This was the first time I experienced AR without the need for markers or special props — it just worked out of the box, everywhere.




Boilerform: A Follow-Up

When Chris wrote his idea for a Boilerform, I had already been thinking about starting a new project. I’d just decided to put my front-end boilerplate to bed, and wanted something new to think about. Chris’ idea struck a chord with me immediately, so I got enthusiastically involved in the comments like an excitable puppy. That excitement led me to go ahead and build out the initial version of Boilerform, which you can check out here.

The reason for my initial excitement was that I have a guilty pleasure for forms. In various jobs, I’ve worked with forms at a pretty intense level and have learned a lot about them. This has ranged from building dynamic form builders to high-level spam protection for a Harley-Davidson® website platform. Each different project has given me a look at the front-end and back-end of the process. Each of these projects has also picked away at my tolerance for quick, lazy implementations of forms, because I’ve seen the drastic implications of this at scale.

But hey, we’re not bad people. Forms are a nightmare to work with. Although things are better now, each browser still treats them slightly differently. For example, check out these select menus from a selection of browsers and OSs. Not one of them looks the same.

These are just the tip of the inconsistency iceberg.

Because of these inconsistencies, it’s easy to see why developers bail out of digging too deep or just spin up a copy of Bootstrap and be done with it. Also, in my experience, the design of minor forms, such as a contact form, is left until later in the project when most of the positive momentum has already gone. I’ve even been guilty of building contact forms a day before a website’s launch. 😬

There’s clearly an opportunity to make the process of working with forms—on the front-end, at least—better and I couldn’t resist the temptation to make it!

The Planning

I sat and thought about what pain-points there are when working with forms and what annoys me as a user of forms. I decided that as a developer, I hate styling forms. As a user, poorly implemented form fields annoy me.

An example of the latter is email fields. Now, if you try to fill in an email field on an iOS device, you get that annoying trait of the first letter being capitalized by the browser, because it treats it like a sentence. All you have to do is add autocapitalize="none" to your field and that behaviour stops. I know this isn’t commonly known because I rarely see it in place, but it’s such a quick win to have a positive impact on your users.

I wanted to bake these little tricks right into Boilerform to help developers make a user’s life easier. Creating a front-end boilerplate or framework is about so much more than styling and aesthetics. It’s about sharing your gained experience with others to make the landscape better as a whole.

The Specification

I needed to think about what I wanted Boilerform to do as a minimum viable product, at initial launch. I came up with the following rules:

  • It had to be compatible with most front-ends
  • It had to be well documented
  • It had to be lightweight
  • Someone should be able to drop a CDN link to their <head> and have it just work
  • Someone should also be able to expand on the source for their own projects
  • It shouldn’t be too opinionated

To achieve these points, I had some technology decisions to make. I decided to go for a low barrier-to-entry setup. This was:

  • Sass powered CSS
  • BEM
  • Plain ol’ HTML
  • A basic compilation setup

I also focused my attention on samples. CodePen was the natural fit for this because they embed really well. Users can also fork them and play with them themselves.

The last decision was to roll out a pattern library to break up components into little pieces. This helped me in a couple of ways. It helped with organization mainly—but it also helped me build Boilerform in a bitty, sporadic nature as I was working on it in the evenings.

I had my plan and my stack, so got cracking.

Keeping it simple

It’s easy for a project like this to get out of hand, so it’s useful to create some points about what Boilerform will be and also what it won’t be.

What Boilerform will be:

  • It’ll always be a boilerplate to get you off to a good start with your project
  • It’ll provide high-level help with HTML, CSS and JavaScript to make both developers’ and users’ lives easier
  • It’ll aim to be super lightweight, so it doesn’t become a heavy burden
  • It’ll offer configurable options that make it flexible and easy to mould into most web projects

What Boilerform won’t be:

  • It won’t be a silver bullet for your forms—it’ll still need some work
  • It won’t be a framework like Bootstrap or Foundation, because it’ll always be a starting point
  • It won’t be overly opinionated with its CSS and JavaScript
  • It’ll never be aimed at one particular framework or web technology

The Specifics

I know y’all like to dive in to the specifics of how things work, so let me give you a whistle-stop tour!

Namespacing the CSS

The first thing I got sorted was namespacing. I’ve worked on a multitude of different sites and setups and they all share something when it comes to CSS: conflicts. With this in mind, I wrote a @mixin that wrapped all the CSS in a .boilerform namespace.

// Source Sass
.c-button {
  @include namespace() {
    background: gray;
  }
}

// This compiles to this with Sass:
.boilerform .c-button {
  background: gray;
}

The mixin is basic right now, but it gives us flexibility to scale. If we wanted to make the namespacing optional down-the-line, we only have to update this mixin. I love that sort of modularity.

Right now, what it does give us is safety. Nothing leaks out of Boilerform and hopefully, whatever leaks in will be handled by the namespaced resets and rules.

BEM With a Garnish of Prefixes

I love BEM. It’s been core to my CSS and markup for a few years now. One thing I love about BEM is that it helps you build small, encapsulated components. This is perfect for a project like Boilerform.

I could probably target naked elements safely because of the namespacing, but BEM is about more than just putting classes on everything. It gives me and others the freedom to write whatever markup structure we want. It’s also really easy for someone to pick up the code and understand what’s related to what, in both HTML and CSS.

Another thing I added to this setup was a component prefix. Instead of an .input-field component, we’ve got a .c-input-field component. I hope little things like that will help a new contributor see what’s a component right off the bat.

Horror Inputs Get Some Cool Styling

As mentioned above, select menus are awful to style. So are radio buttons and checkboxes.

A trick I’ve been using for a while now is abstracting the styling to other friendlier HTML elements. For example, with <select> elements, I wrap them in a .c-select-field component and use siblings to add a consistent caret.

For checkboxes and radio buttons, I visually-hide the main input and use adjacent <label> elements to display state change. Using this approach makes working with these controls so much easier. Importantly, we maintain accessibility and native events too.

Base Attributes to Make Fields Easier to Use

I touched on it above with my example about email fields and capitalization, but that wasn’t the only addition of useful attributes.

  • Search fields have autocorrect="off" on them to prevent browsers trying to fix spelling. I strongly recommend that you add this to inputs that a user inserts their name into as well.
  • Number fields have min, max and step attributes set to help with validation. It’s also great for keyboard users.
  • All fields have blank name and id attributes to hopefully speed up the wiring-up process.

I’m certainly keen for this to be expanded on, because little tweaks like this are great for user experience.

Going Forward. Can You Help?

Boilerform is in a good place right now, but it has real potential to be useful. Some ideas I’ve had for its ongoing development are:

  • Introducing multiple JavaScript library integrations, such as React, Vue, and Angular
  • Create some base form layouts in the pattern library
  • Create Sass mixins for styling pesky stuff like placeholders
  • Improve configurability
  • Add new elements such as the range input
  • Create multilingual documentation

As you can see, that’s a lot of work, so it would be awesome if we can get some contributors into the project to make something truly useful for our community. Pulling in contributors with different areas of expertise and backgrounds will help us make it useful for as many people as possible, from end-users to back-end developers.

Let’s make something great together. 🙂

Check out the project site or the GitHub repository.

