An Open Source Etiquette Guidebook

Open source software is thriving. Large corporations are building on software that rests on open collaboration, enjoying the many benefits of significant community adoption. Free and open source software is amazing for its ability to bring together many people from all over the world and unite their efforts and skills around shared interests.

That said, and because we come from so many different backgrounds, it’s worth taking a moment to reflect on how we work together. The manner in which you conduct yourself while working with others can sometimes impact whether your work is merged, whether someone works on your issue, or, in some cases, whether you might be blocked from participating in the repository in the future. This post was written to guide people as best as possible on how to keep these communications running smoothly. Here’s a bullet point list of etiquette in open source to help you have a more enjoyable time in the community and contribute to making it a better place.

For the Maintainer

  • Use labels like “help wanted” or “beginner friendly” to guide people to issues they can work on if they are new to the project.
  • When running benchmarks, show the authors of the framework/library/etc. the code you’re going to benchmark with before running it. Allow them to PR it (it’s ok to give a deadline). That way, when your benchmark is run, you know it has their approval and is as fair as possible. This also catches issues like benchmarking a dev build instead of prod, or other user errors.
  • When you ask someone for help or label an issue help wanted and someone PRs, please write a comment explaining why you are closing it if you decide not to merge. It’s disrespectful of their time otherwise, as they were following your call to action. I would even go so far as to say it would be nice to comment on any PR that you close OR merge, to explain why or say thank you, respectively.
  • Don’t close a PR from an active contributor and reimplement the same thing yourself. Just… don’t do this.
  • If a fight breaks out on an issue and gets personal, restrict the thread to core maintainers as soon as possible. Lock the issue and enforce the code of conduct if necessary.
  • Have a code of conduct and make its presence clear. You might consider the Contributor Covenant code of conduct. GitHub also now offers easy code of conduct integration with some base templates.

For the User

  • Saying thank you for the project before making an inquiry about a new feature or filing a bug is usually appreciated.
  • When opening an issue, create a small, isolated, simple reproduction of the issue using an online code editor (like CodePen or CodeSandbox) if possible, or a GitHub repository if not. The process may help you discover the underlying issue (or realize that it’s not an issue with the project). It will also make it easier for maintainers to help you resolve the problem.
  • When opening an issue, please suggest a solution to the problem. Take a few minutes to do a little digging. This blog post has a few suggestions for how to dive into the source code a little. If you’re not sure, explain you’re unsure what to do.
  • When opening an issue, if you’re unable to resolve it yourself, please explain that. The expectation is that you resolve the issues you bring up. If someone else does it, that’s a gift they’re giving to you (so you should express the appropriate gratitude in that case).
  • Don’t file issues that say things like “is this even maintained anymore?” A comment like this is insulting to the time the maintainers have put in; it reads as though the project is no longer valid just because they needed a break, or were working on something else, or their dad died or they had a kid or any of the myriad other human reasons for not being at the beck and call of code. It’s totally ok to ask if there’s a roadmap for the future, or to decide based on past commits that it’s not maintained enough for your liking. It’s not ok to be passive-aggressive toward someone who created something for you for free.
  • If someone respectfully declines a PR because, though the code is valid, it’s not the direction they’d like to take the project, don’t keep commenting on the pull request. At that point, it might be a better idea to fork the project if you feel a strong need for the feature.
  • When you want to submit a really large pull request to a project you’re not a core contributor on, it’s a good idea to ask via an issue if the direction you’d like to go makes sense. This also means you’re more likely to get the pull request merged because you have given them a heads up and communicated the plan. Better yet, break it into smaller pull requests so that it’s not too much to grok at one time.
  • Avoid entitlement. The maintainers of the project don’t owe you anything. When you start using the project, it becomes your responsibility to help maintain it. If you don’t like the way the project is being maintained, be respectful when you provide suggestions and offer help to improve the situation. You can always fork the project to work on it on your own if you feel very strongly it’s not the direction you would personally take it.
  • Before doing anything on a project, familiarize yourself with the contributor guidelines often found in a CONTRIBUTING.md file at the root of the repository. If one does not exist, file an issue to ask if you could help create one.

Final Thoughts

The overriding theme of these tips is to be polite, respectful, and kind. The value of open source to our industry is immeasurable. We can make it a better place for everyone by following some simple rules of etiquette. Remember that often maintainers of projects are working on it in their spare time. Also don’t forget that users of projects are sometimes new to the ever-growing software world. We should keep this in mind when communicating and working together. By so doing, we can make the open source community a better place.



On Building Features

We’ve released a couple of features recently at CodePen that I played a role in. It got me thinking a little bit about the process of that. It’s always unique, and for a lot of reasons. Let’s explore that.

What was the spark?

Features start with ideas.

Was it a big bright spark that happened all of a sudden? Was it a tiny spark that happened a long time ago, but has slowly grown bright?

Documenting ideas can help a lot. We talked about that on CodePen Radio recently. If you actually write down ideas (both your own and as requested by users), it can clarify and contextualize them.

Documenting all ideas in Notion

There is also tooling (e.g. UserVoice) built specifically for letting user feedback guide feature development.

Personally, I prefer a mix of internal product vision with measured customer requests, staying light on the public roadmap.

The addition of design assets on CodePen, one of the recent features I worked on, was more of a slowly-intensifying spark than a hot-and-fast one. It came from years of aggregated user requests. CodePen should have a color picker. That’d be neat, we would think. It should be easier to use custom fonts. Yeah… we also jump around copying code from Google Fonts awful regularly.

Then we get an email from Unsplash that was essentially hey, ya know, we have an API. Hmmmm. You sure do! The spark then was gosh all these things feel really related. They are all things that help you with design. Design assets, as it were.

Perhaps we could say this is a good recipe to kick off a new feature: It seems like a good idea. Your instinct is to do it. You want it yourself. You have enough research that your users want it too.

While you’re in there…

The spark has been lit. It feels like a good idea and should be done now. Now what?

When you’re working on a new feature for an existing project, you can’t help but consider where it fits into the application’s UI and UX. Perhaps it’s just the designer in me, but design-led feature development really seems like the way to go. First, decide exactly what it’s going to do, what it will be like to use, and how it will look, then build around that.

I’m always the buzzkill when it comes to non UI and UX features and improvements. I try to turn Let’s switch to Postgres into Let’s find a way, if we really gotta switch to Postgres, to give something to the users while we do it. But I digress.

I’d wager most new features aren’t let’s add an entirely new area to the site. Most site work is adding/removing/refining smaller bits to what is already there.

In the case of the new design assets feature we were building, it was clear we wanted to add it inside our code editor, as that’s where you would need them. Our tendency is generally let’s make a new modal! I’m not anti-modal in a situation like this. Click a button, switch mental contexts for a moment to find a design asset, copy what you need, then close it and use it. Plus we already use modals quite a bit within the editor, so there is a built-up affordance to this kind of interaction.

But a new modal? Maybe. Some things warrant entirely new UI. The minute we start considering new UI though, I always consider that a woah there cowboy moment. Not because new UI is difficult, in fact, because it’s too easy. I’d much rather refine what we already have, when possible. That’s where this feature took us.

We already have an assets feature, which allows people to upload files that are, quite often, design assets! Why not combine those two worlds? And this is the while you’re in there… moment. Our existing Assets modal needed some love anyway. There is a similar backlog of ideas for improving that.

So this became an opportunity to not just create a new feature, but clean up an existing feature. We fleshed out the feature set for existing asset uploads as well, offering easier UX, like click-to-copy buttons and action buttons that allow you to add the URLs as external resources, or pop them open in our asset editor to make changes.

Cleaning up applies to UI design work, front-end code, and back-end code as well. Certainly the CSS, as readers of this site know! Features are a great excuse for spring cleaning.

Who can work on it?

This is a huge question to answer when it comes to new feature development. Even small teams (like I’m on) are subdivided into smaller ones on a per-feature basis.

To arrive at the answer for a new feature, it can be hugely beneficial to one-sheet that sucker. A one-sheet is a document that you construct at the beginning of building a new thing where you scope out what is required.

It forces you to not think narrowly about what you are about to do, but broadly. That way you avoid situations where you’re like I’ll just add this little checkbox over here … 7 months later, 2,427 files touched … done!

A one-sheet document might be like this:

  • Overview. Explain what you’re building and why.
  • Alternate solutions. Have you thought of multiple ways to approach this?
  • Front-end overview. Including design, accessibility, and performance.
  • Back-end overview.
  • Data Considerations. Does this touch the database?
  • API and services considerations.
  • Customer support considerations. Is it likely this will cause more or less support?
  • Monitoring, logging, and analytics considerations.
  • Security considerations.
  • Testing considerations.
  • Community safety considerations.
  • Cost considerations.

If you’ve gone through that whole list in earnest and written up everything, you’ll be in much better shape. You’ll know exactly what this new feature will take and be closer to estimating a timeline.

Crucially, you’ll know who you need to work on it.

This passage, from Fabricio Teixeira, rings true:

Designers have to understand how digital products work beyond the surface layer, and how even the tiniest design decision can create a ripple effect on many other places.

You have to bring other disciplines to the table when you start talking about a “minor design change” in your product. There’s a good chance “minor design changes” don’t really exist.

When it came to our design assets mini feature, one of the major reasons we were able to jump on it was because, assuming we scoped what it was going to do appropriately (see next section), 90%+ of the work could be done by a single front-end developer/designer. Of anyone, I had the most open schedule, so I was able to take it on.

Some features, perhaps most features, require more interdisciplinary teams. Huge features, on our small team, usually take just about everybody.

Version One vs. The Future

It’s highly likely you’ll have to scope down your ideas to something manageable. It’s so tempting to go big with ideas, but the bigger you go, the slower you go. I’m sure we’ve all been in situations where even small features take three times as long as you expected them to.

I try to be the guy jamming stuff out the door, at least when I’m at a place where I know refinements and polish aren’t pipe dreams. I tend to find more trouble with scope creep and delays than with things going out too quickly.

When it came to our design assets feature, as I mentioned, I wanted to scope it to an almost front-end-only project at first, so that it didn’t require a bunch of us to work on it. That was balanced with the fact that I was sure we could make it pretty useful without needing a ton of back-end work. I wouldn’t hamstring a feature just for people-availability reasons, but sometimes the stars align that way.

The color-picker part of the design assets modal is a good example of that. Right away we considered that someone might want to save their own favorite colors as a palette. Perhaps on a per-Pen basis or global to their account. I think that’s a neat idea too, but that requires some user database work to prepare us for that. It seems like quite a small thing, but we would definitely take our time with something like that to make sure the database changes were abstract enough that we weren’t just slapping on a “favorite colors” column, but building a system that would scale appropriately.
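To make that concrete, a more abstract record might look something like the sketch below. This is purely illustrative; every field name here is my own invention, not CodePen’s schema.

// A hypothetical generic "favorite" record, rather than a one-off favorite_colors column.
// All field names are made up for illustration.
const favorite = {
  userId: 123,
  context: "design-assets",        // could also be "external-assets", "editor", ...
  type: "color",                   // or "resource-url", "image", ...
  value: "#FC5185",
  createdAt: "2017-12-01T00:00:00Z"
};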

So a simple color picker can be “v1”! No problem! Get that thing out the door. See how people use it. See what they ask for. Then refine it and add to it as needed.

To be perfectly honest, there hasn’t been an awful lot of feedback on it. That’s usually one for the win column. People vocally loving it is certainly better, but that’s rare. When products work and do what people want, they usually just silently and contentedly go about their business. But if you get in their way, at least at a certain scale, they’ll tell you.

Perhaps one day we’ll revisit the design assets area and do a v2! Saved favorites. More image search providers. More search in general. Better memory for what you’ve used before and showing you those things faster. That kind of refining might require a different team. It’s also just as satisfying of a project as the v1, if not more so.

Here’s a better look at what v1 turned out to be:

Another example… CodePen’s new External Assets

Speaking of refining a feature! Let’s map this stuff onto another feature we recently worked on at CodePen. We just revamped how our External Assets area works. This is what it’s like now:

It’s somewhat unlikely most people have a strong memory of what it was like before. This isn’t that different. The big UI difference is that big search box. Before, the inputs were the search fields, typeahead style. We’re still using typeahead, but have moved it to the search box, which I think is a stronger affordance.

Moving where typeahead takes place is a minor change indeed, but we had lots of evidence that people had no idea we even offered it. Using the visual search affordance completely fixes that.

Another significant UX improvement comes in the form of those remembered resources. Whenever you choose a resource, it remembers you did, and gives you a little button for adding it again. Hey! That’s a lot like favoriting design assets, isn’t it?! Good thing we didn’t make that “favorite colors” database column because already we’re seeing places a more abstract system would be useful.

In this case, we decided to save those favorites in localStorage. Now we get to experiment with a UI in a way that handles favorites, but still not need to touch the database quite yet. The advantage of moving it to the database is that favorites of any kind could follow a user across browsers and sessions and stuff without worry of losing them. There is always a v3!
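A localStorage-backed version of that can be tiny. Here’s a minimal sketch of the idea (the key name and helper names are made up, not CodePen’s actual code):

const FAVORITES_KEY = "external-asset-favorites"; // hypothetical key

// Read the saved list (or an empty array if nothing is saved yet)
function getFavorites() {
  return JSON.parse(localStorage.getItem(FAVORITES_KEY) || "[]");
}

// Add a resource URL to the saved list, ignoring duplicates
function addFavorite(url) {
  const favorites = getFavorites();
  if (!favorites.includes(url)) {
    favorites.push(url);
    localStorage.setItem(FAVORITES_KEY, JSON.stringify(favorites));
  }
}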

There were also some behind-the-scenes updates here that, internally, we’re just as excited about. That typeahead feature? It searches tens or hundreds of thousands of resources. That’s a lot of data. Before this, we handled it by not downloading that data until you clicked into a typeahead field. But then we did download it. A huge chunk of JSON we stored on our own servers. A huge chunk of JSON that went out of date regularly and required us to update it all the time. We had a system for updating it, but it still required work. The new system uses the cdnjs API directly, meaning that no huge download ever needs to take place and the resources are always up to date. Well, as up to date as cdnjs is, anyway.
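The on-demand search could look roughly like this (a sketch based on the public cdnjs API as I understand it; double-check the endpoint and response shape against their docs):

// Query cdnjs as the user types, instead of shipping a giant JSON blob up front
async function searchResources(query) {
  const resp = await fetch(
    `https://api.cdnjs.com/libraries?search=${encodeURIComponent(query)}`
  );
  const { results } = await resp.json();
  // each result includes a name and a `latest` URL we can offer as an external resource
  return results.map(r => ({ name: r.name, url: r.latest }));
}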

Speaking of a v3, we already have loads of ideas for that. Speed is a slight concern. How can we speed up the search? How can we scope those results by popularity? How can we loosen up and guess better what resource you are searching for? Probably most significantly, how can we open this up to resources on npm? We’re hoping to address all of this stuff. But fortunately, none of it held up getting a better thing out the door now.

Wrapping Up

A bit of a ramble eh?

Definitely some incomplete thoughts here, but feature development has been on the ol’ brain a lot lately and I wanted to get something down. So many of us developers live in this cycle our entire career. A lot of us have significant say in what we build and how we build it. There is an incredible amount to think about related to all this, and arguably no obvious set of best practices. It’s too big, too nebulous. You can’t hold it in your hand and say this is how we do feature development. But you can think real hard about it, have some principles that work for you, and try to do the best you can.



Making your web app work offline, Part 2: The Implementation

This two-part series is a gentle, high-level introduction to offline web development. In Part 1 we got a basic service worker running, which caches our application resources. Now let’s extend it to support offline.

Article Series:

  1. The Setup
  2. The Implementation (you are here!)

Making an `offline.htm` file

Next, let’s add some code to detect when the application is offline, and if so, redirect our users to a (cached) `offline.htm`.

But wait, if the service worker file is generated automatically, how do we go about adding our own code manually? Well, we can add an entry for importScripts, which tells our service worker to import the scripts we specify. It does this through the service worker’s native importScripts function, which is well-named. And we’ll also add our `offline.htm` file to our statically cached list of files. The new entries are shown below:

new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  importScripts: ["../sw-manual.js"],
  staticFileGlobs: [
    //...
    "offline.htm"
  ],
  // the rest of the config is unchanged
})

Now, let’s go in our `sw-manual.js` file, and add code to load the cached `offline.htm` file when the user is offline.

toolbox.router.get(/books$/, handleMain);
toolbox.router.get(/subjects$/, handleMain);
toolbox.router.get(/localhost:3000\/$/, handleMain);
toolbox.router.get(/mylibrary.io$/, handleMain);

function handleMain(request) {
  return fetch(request).catch(() => {
    return caches.match("react-redux/offline.htm", { ignoreSearch: true });
  });
}

We’ll use the toolbox.router object we saw before to catch all our top-level routes, and if the main page doesn’t load from the network, send back the (hopefully cached) `offline.htm` file.

This is one of the few times in this post you’ll see promises being used directly, instead of with the async syntax, mainly because in this case it’s actually easier to just tack on a .catch(), rather than set up a try{} catch{} block.

The `offline.htm` file will be pretty basic, just some HTML that reads cached books from IndexedDB, and displays them in a rudimentary table. But before showing that, let’s walk through how to actually use IndexedDB (if you want to just see it now, it’s here)

Hello World, IndexedDB

IndexedDB is an in-browser database. It’s ideal for enabling offline functionality since it can be accessed without network connectivity, but it’s by no means limited to that.

The API pre-dates Promises, so it’s callback based. We’ll go through everything with the native API, but in practice, you’ll likely want to wrap and simplify it, either with your own helper methods which wrap the functionality with Promises, or with a third-party utility.

Let me repeat: the API for IndexedDB is awful. Here’s Jake Archibald saying he wouldn’t even teach it directly.

We’ll still go over it because I really want you to see everything as it is, but please don’t let it scare you away. There’s plenty of simplifying abstractions out there, for example dexie and idb.
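If you’d rather roll a little of that yourself, the core trick is just wrapping an IDBRequest in a Promise. A minimal sketch (not code from this project; it ignores onupgradeneeded, which we’ll handle separately below):

// Turn any IDBRequest's success/error callbacks into a Promise
function requestToPromise(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Usage: open the database and await the result
async function openBooksDb() {
  const db = await requestToPromise(indexedDB.open("books", 1));
  return db;
}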

Setting up our database

Let’s add code to `sw-manual.js` that subscribes to the service worker’s activate event and checks to see if we already have an IndexedDB set up; if not, we’ll create one and then fill it with data.

First, the creating bit.

self.addEventListener("activate", () => { //1 is the version of IDB we're opening let open = indexedDB.open("books", 1); //should only be called the first time, when version 1 does not exist open.onupgradeneeded = evt => { let db = open.result; //this callback should only ever be called upon creation of our IDB, when an upgrade is needed //for version 1, but to be doubly safe, and also to demonstrade this, we'll check to see //if the stores exist if (!db.objectStoreNames.contains("books") || !db.objectStoreNames.contains("syncInfo")) { if (!db.objectStoreNames.contains("books")) { let bookStore = db.createObjectStore("books", { keyPath: "_id" }); bookStore.createIndex("imgSync", "imgSync", { unique: false }); } if (!db.objectStoreNames.contains("syncInfo")) { db.createObjectStore("syncInfo", { keyPath: "id" }); evt.target.transaction .objectStore("syncInfo") .add({ id: 1, lastImgSync: null, lastImgSyncStarted: null, lastLoadStarted: +new Date(), lastLoad: null }); } evt.target.transaction.oncomplete = fullSync; } };
});

The code’s messy and manual; as I said, you’ll likely want to add some abstractions in practice. Some of the key points: we check for the objectStores (tables) we’ll be using, and create them as needed. Note that we can even create indexes, which we can see on the books store, with the imgSync index. We also create a syncInfo store (table) which we’ll use to store information on when we last synced our data, so we don’t pester our servers too frequently, asking for updates.

When the transaction has completed, at the very bottom, we call the fullSync method, which loads all our data. Let’s see what that looks like.

Performing an initial sync

Below is the relevant portion of the syncing code, which makes repeated calls to our endpoint to load our books, page by page, adding each result to IDB along the way. Again, this is using zero abstractions, so expect a lot of bloat.

See this GitHub gist for the full code, which includes some additional error handling, and code which runs when the last page is finished.

function fullSyncPage(db, page) {
  let pageSize = 50;
  doFetch("/book/offlineSync", { page, pageSize })
    .then(resp => resp.json())
    .then(resp => {
      if (!resp.books) return;
      let books = resp.books;
      let i = 0;
      putNext();

      function putNext() {
        //callback for an insertion, with indicators it hasn't had images cached yet
        if (i < pageSize) {
          let book = books[i++];
          let transaction = db.transaction("books", "readwrite");
          let booksStore = transaction.objectStore("books");
          //extend the book with the imgSync indicated, add it, and on success, do this for the next book
          booksStore.add(Object.assign(book, { imgSync: 0 })).onsuccess = putNext;
        } else {
          //either load the next page, or call loadDone()
        }
      }
    });
}

The putNext() function is where the real work is done. It serves as the success callback for each insertion. In real life we’d hopefully have a nice method that adds each book, wrapped in a promise, so we could do a simple for...of loop and await each insertion, as sketched below. But this is the “vanilla” solution, or at least one of them.
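For a sense of what that nicer version might look like, here’s a sketch with a hypothetical promise-wrapped addBook helper (not part of this project’s code):

// Hypothetical helper: wrap a single add in a Promise
function addBook(db, book) {
  return new Promise((resolve, reject) => {
    let tran = db.transaction("books", "readwrite");
    let req = tran.objectStore("books").add(Object.assign(book, { imgSync: 0 }));
    req.onsuccess = () => resolve();
    req.onerror = () => reject(req.error);
  });
}

// ...which turns putNext's recursion into a plain loop
async function putAll(db, books) {
  for (let book of books) {
    await addBook(db, book);
  }
}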

We modify each book before inserting it, to set the imgSync property to 0, to indicate that this book has not had its image cached, yet.

And after we’ve exhausted the last page, and there are no more results, we call loadDone(), to set some metadata indicating the last time we did a full data sync.

In real life, this would be a good time to sync all those images, but let’s instead do it on-demand by the web app itself, in order to demonstrate another feature of service workers.

Communicating between the web app and service worker

Let’s just pretend it would be a good idea to have the books’ covers load the next time the user visits our page when the service worker is running. Let’s have our web app send a message to the service worker, and we’ll have the service worker receive it, and then sync the book covers.

From our app code, we attempt to send a message to a running service worker, instructing it to sync images.

In the web app:

if ("serviceWorker" in navigator) { try { navigator.serviceWorker.controller.postMessage({ command: "sync-images" }); } catch (er) {}
}

In `sw-manual.js`:

self.addEventListener("message", evt => { if (evt.data && evt.data.command == "sync-images") { let open = indexedDB.open("books", 1); open.onsuccess = evt => { let db = open.result; if (db.objectStoreNames.contains("books")) { syncImages(db); } }; }
});

In sw-manual we have code to catch that message, and call the syncImages() method. Let’s look at that, next.

function syncImages(db) {
  let tran = db.transaction("books");
  let booksStore = tran.objectStore("books");
  let idx = booksStore.index("imgSync");
  let booksCursor = idx.openCursor(0);
  let booksToUpdate = [];

  //a cursor's onsuccess callback will fire for EACH item that's read from it
  booksCursor.onsuccess = evt => {
    let cursor = evt.target.result;
    //if (!cursor) means the cursor has been exhausted; there are no more results
    if (!cursor) return runIt();

    let book = cursor.value;
    booksToUpdate.push({ _id: book._id, smallImage: book.smallImage });
    //read the next item from the cursor
    cursor.continue();
  };

  async function runIt() {
    if (!booksToUpdate.length) return;

    for (let book of booksToUpdate) {
      try {
        //fetch, and cache the book's image
        await preCacheBookImage(book);
        let tran = db.transaction("books", "readwrite");
        let booksStore = tran.objectStore("books");
        //now save the updated book - we'll wrap the IDB callback-based operation in
        //a manual promise, so we can await it
        await new Promise(res => {
          let req = booksStore.get(book._id);
          req.onsuccess = ({ target: { result: bookToUpdate } }) => {
            bookToUpdate.imgSync = 1;
            booksStore.put(bookToUpdate);
            res();
          };
          req.onerror = () => res();
        });
      } catch (er) {
        console.log("ERROR", er);
      }
    }
  }
}

We’re cracking open the imgSync index from before, and reading all books that have a zero, which means they haven’t had their images synced yet. The booksCursor.onsuccess will be called over and over again until there are no books left; I’m using this to put them all into an array, at which point I call the runIt() method, which runs through them, calling preCacheBookImage() for each. This method will cache the image, and if there are no unforeseen errors, update the book in IDB to indicate that imgSync is now 1.

If you’re wondering why in the world I’m going through the trouble to save all the books from the cursor into an array, before calling runIt(), rather than just walking through the results of the cursor, and caching and updating as I go, well — it turns out transactions in IndexedDB are a bit weird. They complete when you yield to the event loop unless you yield to the event loop in a method provided by the transaction. So if we leave the event loop to go do other things, like make a network request to pull down an image, then the cursor’s transaction will complete, and we’ll get an error if we try to continue reading from it later.
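In other words, a version that awaits the network call while still walking the cursor would break. Something like this anti-pattern sketch (not working code, and intentionally so):

//Anti-pattern: don't do this
booksCursor.onsuccess = async evt => {
  let cursor = evt.target.result;
  if (!cursor) return;

  //leaving the event loop here lets the cursor's transaction auto-commit...
  await preCacheBookImage(cursor.value);

  //...so continuing the cursor afterward throws (typically a TransactionInactiveError)
  cursor.continue();
};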

Manually updating the cache

Let’s wrap this up and look at the preCacheBookImage method, which actually pulls down a cover image and adds it to the relevant cache (but only if it’s not there already).

async function preCacheBookImage(book) {
  let smallImage = book.smallImage;
  if (!smallImage) return;

  let cachedImage = await caches.match(smallImage);
  if (cachedImage) return;

  if (/https:\/\/s3.amazonaws.com\/my-library-cover-uploads/.test(smallImage)) {
    let cache = await caches.open("local-images1");
    let img = await fetch(smallImage, { mode: "no-cors" });
    await cache.put(smallImage, img);
  }
}

If the book has no image, we’re done. Next, we check if it’s cached already — if so, we’re done. Lastly, we inspect the URL, and figure out which cache it belongs in.

The local-images1 cache name is the same from before, which we set up in our dynamic cache. If the image in question isn’t already there, we fetch it, and add it to cache. Each cache operation returns a promise, so the async/await syntax simplifies things nicely.

Testing it out

The way it’s set up, if we clear our service worker either in dev tools, below, or by just opening a fresh incognito window…

…then the first time we view our app, all our books will get saved to IndexedDB.

When we refresh, the image sync will happen. So if we start on a page that’s already pulling down these images, we’ll see our normal service worker saving them to cache (ahem, assuming we delay the ajax call to give our Service Worker a chance to install), which is what these events are in our network tab.

Then, if we navigate elsewhere and refresh, we won’t see any network requests for those images, since our sync method is already finding everything in cache.

If we clear our service workers again, and start on this same page, which is not otherwise pulling these images down, then refresh, we’ll see the network requests to pull down, and sync these images to cache.

Then if we navigate back to the page that uses these images, we won’t see the calls to cache these images, since they’re already cached; moreover, we’ll see these images being retrieved from cache by the service worker.

Both the runtimeCaching provided by sw-toolbox and our own manual code are working together, off of the same cache.

It works!

As promised, here’s the `offline.htm` page:

<div style="padding: 15px"> <h1>Offline</h1> <table class="table table-condescend table-striped"> <thead> <tr> <th></th> <th>Title</th> <th>Author</th> </tr> </thead> <tbody id="booksTarget"> <!--insertion will happen here--> </tbody> </table>
</div>
let open = indexedDB.open("books");
open.onsuccess = evt => { let db = open.result; let transaction = db.transaction("books", "readonly"); let booksStore = transaction.objectStore("books"); var request = booksStore.openCursor(); let rows = ``; request.onsuccess = function(event) { var cursor = event.target.result; if(cursor) { let book = cursor.value; rows += ` <tr> <td><img src="${book.smallImage}" /></td> <td>${book.title}</td> <td>${Array.isArray(book.authors) ? book.authors.join("<br/>") : book.authors}</td> </tr>`; cursor.continue(); } else { document.getElementById("booksTarget").innerHTML = rows; } };
}

Now let’s tell Chrome to pretend to be offline, and test it out:

Cool!

Where to, from here?

We’re barely scratching the surface. Your users can update these data from multiple devices, and each one will need to keep in sync somehow. You could either periodically wipe your IDB tables and re-sync; have the user manually trigger a re-sync when they want; or you could get really ambitious and try to log all your mutations on your server, and have each service worker on each device request all changes that happened since the last time it ran, in order to sync up.

The most interesting solution here is PouchDB, which does this syncing for you; the catch is it’s designed to work with CouchDB, which you may or may not be using.

Syncing local changes

For one last piece of code, let’s consider an easier problem to solve: syncing your IndexedDB with changes that are made right this minute, by your user who’s using your web app. We can already intercept fetch requests in the service worker, so it should be easy to listen for the right mutation endpoint, run it, then peek at the results and update IndexedDB accordingly. Let’s take a look.

toolbox.router.post(/graphql/, request => {
  //just run the request as is
  return fetch(request).then(response => {
    //clone it by necessity
    let respClone = response.clone();
    //do this later - get the response back to our user NOW
    setTimeout(() => {
      respClone.json().then(resp => {
        //this graphQL endpoint is for lots of things - inspect the data response to see
        //which operation we just ran
        if (resp && resp.data && resp.data.updateBook && resp.data.updateBook.Book) {
          syncBook(resp.data.updateBook.Book);
        }
      });
    }, 5);
    //return the response to our user NOW, before the IDB syncing
    return response;
  });
});

function syncBook(book) {
  let open = indexedDB.open("books", 1);

  open.onsuccess = evt => {
    let db = open.result;
    if (db.objectStoreNames.contains("books")) {
      let tran = db.transaction("books", "readwrite");
      let booksStore = tran.objectStore("books");
      booksStore.get(book._id).onsuccess = ({ target: { result: bookToUpdate } }) => {
        //update the book with the new values
        ["title", "authors", "isbn"].forEach(prop => (bookToUpdate[prop] = book[prop]));
        //and save it
        booksStore.put(bookToUpdate);
      };
    }
  };
}

This may seem a bit more involved than you were hoping. We can only read the fetch response once, and our application thread will also need to read it, so we’ll first clone the response. Then, we’ll run a setTimeout() so we can return the original response to the web application/user as quickly as possible, and do what we need thereafter. Don’t just rely on the promise in respClone.json() to do this, since promises use microtasks. I’ll let Jake Archibald explain what exactly that means, but the short of it is that they can starve the main event loop. I’m not quite smart enough to be certain whether that applies here, so I just went with the safe approach of setTimeout.

Since I’m using GraphQL, the responses are in a predictable format, and it’s easy to see if I just performed the operation I’m interested in, and if so I can re-sync the affected data.

Further reading

Literally everything here is explained in wonderful depth in this book by Tal Ater. If you’re interested in learning more, you can’t beat that as a learning resource.

For some more immediate, quick resources, here’s an MDN article on IndexedDB, plus a service workers introduction and an offline cookbook, both from Google.

Parting thoughts

Giving your user useful things to do with your web app when they don’t even have network connectivity is an amazing new ability web developers have. As you’ve seen though, it’s no easy task. Hopefully this post has given you a realistic idea of what to expect, and a decent introduction to the things you’ll need to do to accomplish this.

Article Series:

  1. The Setup
  2. The Implementation (you are here!)


Making your web app work offline, Part 1: The Setup

This two-part series is a gentle introduction to offline web development. Getting a web application to do something while offline is surprisingly tricky, requiring a lot of things to be in place and functioning correctly. We’re going to cover all of these pieces from a high level, with working examples. This post is an overview, but there are plenty of more-detailed resources listed throughout.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation

Basic approach

I’ll be making heavy use of JavaScript’s async/await syntax. It’s supported in all major browsers and Node, and greatly simplifies Promise-based code. The link above explains async well, but in a nutshell they allow you to resolve a promise, and access its value directly in code with await, rather than calling .then and accessing the value in the callback, which often leads to the dreaded “rightward drift.”
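For a quick sense of the difference, compare these two versions of the same made-up fetch (the /books endpoint and the read flag are invented for illustration):

// With .then(): the value is only available inside the callback
function loadBooksThen() {
  return fetch("/books")
    .then(resp => resp.json())
    .then(books => books.filter(b => b.read));
}

// With async/await: the same logic reads top to bottom
async function loadBooksAwait() {
  const resp = await fetch("/books");
  const books = await resp.json();
  return books.filter(b => b.read);
}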

What are we building?

We’ll be extending an existing booklist project to sync the current user’s books to IndexedDB, and create a simplified offline page that’ll show even when the user has no network connectivity.

Starting with a service worker

The one non-negotiable thing you need for offline development is a service worker. A service worker is a background process that can, among other things, intercept network requests; redirect them; short circuit them by returning cached responses; or execute them as normal and do custom things with the response, like caching.

Basic caching

Probably the first, most basic, yet high-impact thing you’ll do with a service worker is have it cache your application’s resources. Service workers and the caches they use are extremely low-level primitives; everything is manual. In order to properly cache your resources you’ll need to fetch and add them to a cache, but then you’ll also need to track changes to these resources. You’ll track when they change, remove the prior version, and fetch and update the new one.

In practice, this means your service worker code will need to be generated as part of a build step, which hashes your files, and generates a file that’s smart enough to record these changes between versions, and update caches as needed.
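To make that concrete, a hand-rolled version of the idea might look like the sketch below. The cache name and file list are made up; the generated service worker we’ll use does the hashing and change-tracking for us.

const CACHE_NAME = "app-shell-v1"; // hypothetical cache name

self.addEventListener("install", evt => {
  // pre-cache a hard-coded list of resources at install time
  evt.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(["/", "/app.js", "/styles.css"]))
  );
});

self.addEventListener("activate", evt => {
  // delete caches from older versions - the change-tracking you'd otherwise do by hand
  evt.waitUntil(
    caches.keys().then(keys =>
      Promise.all(keys.filter(key => key !== CACHE_NAME).map(key => caches.delete(key)))
    )
  );
});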

Abstractions to the rescue

This is extremely tedious and error-prone code that you’d likely never want to write yourself. Luckily some smart people have written abstractions to help, namely sw-precache, and sw-toolbox by the great people at Google. Note, Google has since deprecated these tools in favor of the newer Workbox. I’ve yet to move my code over since sw-* works so well, but in any event the ideas are the same, and I’m told the conversion is easy. And it’s worth mentioning that sw-precache currently has about 30,000 downloads per day, so it’s still widely used.

Hello World, sw-precache

Let’s jump right in. We’re using webpack, and as webpack goes, there’s a plugin, so let’s check that out first.

// inside your webpack config
new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  staticFileGlobs: [
    //static resources to cache
    "static/bootstrap/css/bootstrap-booklist-build.css",
    ...
  ],
  ignoreUrlParametersMatching: /./,
  stripPrefixMulti: {
    //any paths that need adjusting
    "static/": "react-redux/static/",
    ...
  },
  ...
})

By default ALL of the bundles webpack makes will be precached. We’re also manually providing some paths to static resources I want cached in the staticFileGlobs property, and I’m adjusting some paths in stripPrefixMulti.

// inside your webpack config
const getCache = ({ name, pattern, expires, maxEntries }) => ({
  urlPattern: pattern,
  handler: "cacheFirst",
  options: {
    cache: {
      maxEntries: maxEntries || 500,
      name: name,
      maxAgeSeconds: expires || 60 * 60 * 24 * 365 * 2 //2 years
    },
    successResponses: /0|[123].*/
  }
});

new SWPrecacheWebpackPlugin({
  ...
  runtimeCaching: [
    //pulls in sw-toolbox and caches dynamically based on a pattern
    getCache({ pattern: /^https:\/\/images-na.ssl-images-amazon.com/, name: "amazon-images1" }),
    getCache({ pattern: /book\/searchBooks/, name: "book-search", expires: 60 * 7 }), //7 minutes
    ...
  ]
})

Adding the runtimeCaching section to our SWPrecacheWebpackPlugin pulls in sw-toolbox and lets us cache urls matching a certain pattern, dynamically, as needed—with getCache helping keep the boilerplate to a minimum.

Hello World, sw-toolbox

The entire service worker file that’s generated is pretty big, but let’s just look at a small piece, namely one of the dynamic caches from above:

toolbox.router.get(/^https:\/\/images-na.ssl-images-amazon.com/, toolbox.cacheFirst, {
  cache: {
    maxEntries: 500,
    name: "amazon-images1",
    maxAgeSeconds: 63072000
  },
  successResponses: /0|[123].*/
});

sw-toolbox has provided us with a nice, high-level router object we can use to hook into various URL requests, MVC-style. We’ll use this to set up offline support shortly.

Don’t forget to register the service worker

And, of course, the existence of the service worker file that’s generated above is of no use by itself; it needs to be registered. The code looks like this, but be sure to either have it inside an onload listener, or some other place that’ll be guaranteed to run after the page has loaded.

if ("serviceWorker" in navigator) { navigator.serviceWorker.register("https://cdn.css-tricks.com/service-worker.js");
}
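If you go the onload-listener route mentioned above, a minimal version might be (assuming the generated file is served from the site root):

window.addEventListener("load", () => {
  if ("serviceWorker" in navigator) {
    navigator.serviceWorker.register("/service-worker.js");
  }
});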

There we have it! We got a basic service worker running, which caches our application resources. Tune in tomorrow when we extend it to support offline.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation


Animating Border

Transitioning border for a hover state. Simple, right? You might be unpleasantly surprised.

The Challenge

The challenge is simple: building a button with an expanding border on hover.

This article will focus on genuine CSS tricks that would be easy to drop into any project without having to touch the DOM or use JavaScript. The methods covered here will follow these rules:

  • Single element (no helper divs, but pseudo-elements are allowed)
  • CSS only (no JavaScript)
  • Works for any size (not restricted to a specific width, height, or aspect ratio)
  • Supports transparent backgrounds
  • Smooth and performant transition

I proposed this challenge in the Animation at Work Slack and again on Twitter. Though there was no consensus on the best approach, I did receive some really clever ideas by some phenomenal developers.

Method 1: Animating border

The most straightforward way to animate a border is… well, by animating border.

.border-button {
  border: solid 5px #FC5185;
  transition: border-width 0.6s linear;
}

.border-button:hover { border-width: 10px; }

See the Pen by Shaw (@shshaw) on CodePen.

Nice and simple, but there are some big performance issues.

Since border takes up space in the document’s layout, changing the border-width will trigger layout. Nearby elements will shift around because of the new border size, making the browser reposition those elements on every frame of the animation unless you set an explicit size on the button.

As if triggering layout wasn’t bad enough, the transition itself feels “stepped”. I’ll show why in the next example.

Method 2: Better border with outline

How can we change the border without triggering layout? By using outline instead! You’re probably most familiar with outline from removing it on :focus styles (though you shouldn’t), but outline is an outer line that doesn’t change an element’s size or position in the layout.

.border-button {
  outline: solid 5px #FC5185;
  transition: outline 0.6s linear;
  margin: 0.5em; /* Increased margin since the outline expands outside the element */
}

.border-button:hover { outline-width: 10px; }

See the Pen by Shaw (@shshaw) on CodePen.

A quick check in Dev Tools’ Performance tab shows the outline transition does not trigger layout. Regardless, the movement still seems stepped because browsers are rounding the border-width and outline-width values so you don’t get sub-pixel rendering between 5 and 6 or smooth transitions from 5.4 to 5.5.

See the Pen by Shaw (@shshaw) on CodePen.

Strangely, Safari often doesn’t render the outline transition and occasionally leaves crazy artifacts.

border artifact in safari

Method 3: Cut it with clip-path

First implemented by Steve Gardner, this method uses clip-path with calc to trim the border down so on hover we can transition to reveal the full border.

.border-button {
  /* Full width border and a clip-path visually cutting it down to the starting size */
  border: solid 10px #FC5185;
  clip-path: polygon(
    calc(0% + 5px) calc(0% + 5px),     /* top left */
    calc(100% - 5px) calc(0% + 5px),   /* top right */
    calc(100% - 5px) calc(100% - 5px), /* bottom right */
    calc(0% + 5px) calc(100% - 5px)    /* bottom left */
  );
  transition: clip-path 0.6s linear;
}

.border-button:hover {
  /* Clip-path spanning the entire box so it's no longer hiding the full-width border. */
  clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
}

See the Pen by Shaw (@shshaw) on CodePen.

The clip-path technique is the smoothest and most performant method so far, but it does come with a few caveats. Rounding errors may cause a little unevenness, depending on the exact size. The border also has to be full size from the start, which may make exact positioning tricky.

Unfortunately there’s no IE/Edge support yet, though it seems to be in development. You can and should encourage Microsoft’s team to implement those features by voting for masks/clip-path to be added.

Method 4: linear-gradient background

We can simulate a border using a clever combination of multiple linear-gradient backgrounds properly sized. In total we have four separate gradients, one for each side. The background-position and background-size properties get each gradient in the right spot and the right size, which can then be transitioned to make the border expand.

.border-button {
  background-repeat: no-repeat;
  /* background-size values will repeat so we only need to declare them once */
  background-size:
    calc(100% - 10px) 5px,  /* top & bottom */
    5px calc(100% - 10px);  /* right & left */
  background-position:
    5px 5px,                /* top */
    calc(100% - 5px) 5px,   /* right */
    5px calc(100% - 5px),   /* bottom */
    5px 5px;                /* left */
  /* Since we're sizing and positioning with the above properties, we only need to
     set up simple solid-color gradients for each side */
  background-image:
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185);
  transition: all 0.6s linear;
  transition-property: background-size, background-position;
}

.border-button:hover {
  background-position: 0 0, 100% 0, 0 100%, 0 0;
  background-size: 100% 10px, 10px 100%, 100% 10px, 10px 100%;
}

See the Pen by Shaw (@shshaw) on CodePen.

This method is quite difficult to set up and has quite a few cross-browser differences. Firefox and Safari animate the faux-border smoothly, exactly the effect we’re looking for. Chrome’s animation is jerky and even more stepped than the outline and border transitions. IE and Edge refuse to animate the background at all, but they do give the proper border expansion effect.

Method 5: Fake it with box-shadow

Hidden within box-shadow‘s spec is a fourth value for spread-radius. Set all the other length values to 0px and use the spread-radius to build your border alternative that, like outline, won’t affect layout.

.border-button {
  box-shadow: 0px 0px 0px 5px #FC5185;
  transition: box-shadow 0.6s linear;
  margin: 0.5em; /* Increased margin since the box-shadow expands outside the element, like outline */
}

.border-button:hover { box-shadow: 0px 0px 0px 10px #FC5185; }

See the Pen by Shaw (@shshaw) on CodePen.

The transition with box-shadow is adequately performant and feels much smoother, except in Safari, where it snaps to whole values during the transition, like border and outline.

Pseudo-Elements

Several of these techniques can be modified to use a pseudo-element instead, but pseudo-elements ended up causing some additional performance issues in my tests.

For the box-shadow method, the transition occasionally triggered paint in a much larger area than necessary. Reinier Kaper pointed out that a pseudo-element can help isolate the paint to a more specific area. As I ran further tests, box-shadow was no longer causing paint in large areas of the document and the complication of the pseudo-element ended up being less performant. The change in paint and performance may have been due to a Chrome update, so feel free to test for yourself.

I also could not find a way to utilize pseudo-elements in a way that would allow for transform based animation.

Why not transform: scale?

You may be firing up Twitter to helpfully suggest using transform: scale for this. Since transform and opacity are the best style properties to animate for performance, why not use a pseudo-element and have the border scale up & down?

.border-button {
  position: relative;
  margin: 0.5em;
  border: solid 5px transparent;
  background: #3E4377;
}

.border-button:after {
  content: '';
  display: block;
  position: absolute;
  top: 0; right: 0; bottom: 0; left: 0;
  border: solid 10px #FC5185;
  margin: -15px;
  z-index: -1;
  transition: transform 0.6s linear;
  transform: scale(0.97, 0.93);
}

.border-button:hover::after { transform: scale(1,1); }

See the Pen by Shaw (@shshaw) on CodePen.

There are a few issues:

  1. The border will show through a transparent button. I forced a background on the button to show how the border is hiding behind the button. If your design calls for buttons with a full background, then this could work.
  2. You can’t scale the border to specific sizes. Since the button’s dimensions vary with the text, there’s no way to animate the border from exactly 5px to 10px using only CSS. In this example I’ve done some magic-numbers on the scale to get it to appear right, but that won’t be universal.
  3. The border animates unevenly because the button’s aspect ratio isn’t 1:1. This usually means the left/right will appear larger than the top/bottom until the animation completes. This may not be an issue depending on how fast your transition is, the button’s aspect ratio, and how big your border is.

If your button has set dimensions, Cher pointed out a clever way to calculate the exact scales needed, though it may be subject to some rounding errors.
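The general idea behind that calculation: to pull a pseudo-element in by a fixed number of pixels on each side, divide the reduced size by the full size. A rough sketch of my own, not Cher's exact math:

// Scale factors that shrink an element of a known size by `inset` pixels per side
function borderScale(width, height, inset) {
  return {
    scaleX: (width - 2 * inset) / width,
    scaleY: (height - 2 * inset) / height
  };
}

// e.g. a 200x60 pseudo-element pulled in 5px on every side:
// borderScale(200, 60, 5) -> { scaleX: 0.95, scaleY: 0.8333... }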

Beyond CSS

If we loosen our rules a bit, there are many interesting ways you can animate borders. Codrops consistently does outstanding work in this area, usually utilizing SVGs and JavaScript. The end results are very satisfying, though they can be a bit complex to implement. Here are a few worth checking out:

  • Creative Buttons
  • Button Styles Inspiration
  • Animated Checkboxes
  • Distorted Button Effects
  • Progress Button Styles

Conclusion

There’s more to borders than simply border, but if you want to animate a border you may have some trouble. The methods covered here will help, though none of them are a perfect solution. Which you choose will depend on your project’s requirements, so I’ve laid out a comparison table to help you decide.

See the Pen by Shaw (@shshaw) on CodePen.

My recommendation would be to use box-shadow, which has the best overall balance of ease-of-implementation, animation effect, performance and browser support.

Do you have another way of creating an animated border? Perhaps a clever way to utilize transforms for moving a border? Comment below or reach me on Twitter to share your solution to the challenge.

Special thanks to Martin Pitt, Steve Gardner, Cher, Reinier Kaper, Joseph Rex, David Khourshid, and the Animation at Work community.



The Front-End Checklist is just a tool… everything depends on you.

One month ago, I launched the Front-End Checklist on GitHub. In less than 2 weeks, more than 10,000 people around the world starred the repository. That was completely unexpected and incredible!

I’ve been working as a front-end developer since 2011, but I started to build websites in 2000. Since then, like us all, I’ve been trying to improve the quality of my code and deliver websites faster. Along the way, I’ve been managing developers from two different countries. That has helped me to produce a checklist a little different from what I’ve found around the web over the years.

While I was creating the checklist, I continuously had the book “The Checklist Manifesto: How to Get Things Right” by Atul Gawande in mind. That book has helped me build checklists for my work and personal life, and simplify things that sometimes seem too complex.

Whether you are working alone or in a team, remotely or on-site, I wanted to share some advice on using the Front-End Checklist and the web application that goes with it. Perhaps I can convince you to integrate it into your development cycle.

#1 Decide which rules your project and team need to follow

Every project is different. Before starting a new project, the whole team (i.e. the project managers, designers, developers, QA, etc.) need to agree on what the deliverables will be.

To help you to decide, I created 3 different levels of priority: high, medium, and low. You don’t necessarily need to agree with those distinctions, but they may help order your tasks.

The Front-End Checklist app was done to facilitate the creation of personalized checklists. Change some JSON files to your liking and you are ready to start!

#2 Define the rules to check at beginning, during, and at the end of your project

You shouldn’t check all these rules only at the end of a project. You know as well as I do how projects are at the very end! Too hectic. Most of the items of the Front-End Checklist can be considered at the beginning of your development. It’s up to you to decide. Make it clear to your team upfront what happens when.

#3 Learn a little more about each rule

Who loves reading the docs? Not most of us, but it’s essential. If you want to understand the reasons for each rule, you can’t avoid reading up on them. The more you understand the why of each rule, the better developer you become.

#4 Start to check!

The Front-End Checklist app can facilitate your life as a developer. It’s a live checklist, so as you complete items your progress and grade are updated live. Everything is saved in localStorage so you can leave and come back as needed.

The project is open source, so feel free to fork it and use it however you like. I’m working on making sure all the files are commented. I especially invite those interested in Pug to take a look at the views folder.

#5 Integrate automated testing in your workflow

We all dream of automation (or is it just me?). For now, the Front-End Checklist is just an interactive list, but some of the tasks can be automated in your workflow.

Take a look at the gulpfile used to generate the project. All tasks are packages you can use with npm, webpack, etc.

#6 Validate every page before sending it to the QA team and to production

If you’re passionate about generating clean code and care about your code quality, you should be regularly testing your pages. It’s so easy to make mistakes and remove some essential code. Or, someone else on your team might have done it, but it’s your shared responsibility to catch things like that.

The Front-End Checklist can generate beautiful reports you can send to a project manager or Quality Assurance team.

#7 Enjoy your work above all

Some people might look at such a long checklist and feel sick to their stomach. Going through such a list might cause anxiety and really not be any fun.

But the Front-End Checklist is just a tool to help you deliver higher quality code. Code that affects all aspects of a project: the SEO, the user experience, the ROI, and ultimately the success of the project. A tool that can help across all those things might actually help reduce your anxiety and improve your health!

Conclusion

The success the Front-End Checklist received in such a short time reminded me that a lot of people are really interested in finding ways to improve their work. But just because the tool exists doesn’t directly help with that. You also need to commit to using it.

In a time when AI is taking over many manual tasks, quality is a must-have. Even if automation takes over a lot of our tasks, some level of quality will remain impossible to automate, and we front-end developers still have many long days to enjoy our jobs.



An Idea for a Simple Responsive Spreadsheet

How do you make a spreadsheet-like interface responsive without the use of any JavaScript? This is the question I’ve been asking myself all week as I’ve been working on a new project and trying to figure out how to make the simplest spreadsheet possible. I wanted to avoid using a framework and instead, I decided to experiment with some new properties in order to use nothing but a light touch of CSS.

Spoilers! This is what I’ve come up with so far (oh and please note that this demo currently works in the latest version of Chrome). Try scrolling around a little bit:

See the Pen A Simple Responsive Spreadsheet – Final by Robin Rendle (@robinrendle) on CodePen.

Notice how the first column sticks to the left and the heading sticks to the top of the spreadsheet? This lets us scan lots of data without having to keep scrolling to figure out which column or row we’re in — in a lot of interfaces like this it’s pretty easy to get lost.

So how did I go about making this thing? Let’s jump in!

Adding the markup

First we need to add our markup for the table and, just to make sure that this example is as realistic as possible, we’re going to add a lot of rows and columns here:

See the Pen A Simple Responsive Spreadsheet – 1st by Robin Rendle (@robinrendle) on CodePen.

There’s nothing really complex going on. We just have a regular ol’ table with a <thead> and a <tbody>, but we do wrap the whole table in the table-wrapper div which I’ll explain in just a little bit.

Next, we’ll add basic styling to that wrapper element to move it into the center of the page and also give it a max-width. We also need to make sure that the .table-wrapper has overflow set to scroll, although at larger screen sizes we won’t need that just yet:

body {
  display: flex;
  font-family: -apple-system;
  font-size: 90%;
  color: #333;
  justify-content: center;
}

.table-wrapper {
  max-width: 700px;
  overflow: scroll;
}

See the Pen A Simple Responsive Spreadsheet – 2nd by Robin Rendle (@robinrendle) on CodePen.

Nifty! Now we can add styles for the first column of our table and the thead element as well as basic styling for each of the table cells:

table {
  border: 1px solid #ddd;
  border-collapse: collapse;
}

td, th {
  white-space: nowrap;
  border: 1px solid #ddd;
  padding: 20px;
}

See the Pen A Simple Responsive Spreadsheet – 3 by Robin Rendle (@robinrendle) on CodePen.

The problem here is that we’ve now made a pretty inaccessible table; although we can scroll around in the spreadsheet, we can’t read which column or row is associated with which bit of data. This can lead to a table that is almost completely illegible, and if we were to populate it with real data then it would be even worse:

position: sticky to the rescue!

position: sticky is a wonderfully handy CSS trick that I’ve started experimenting with a great deal lately. It lets you stick child elements to their parent containers so that, as you scroll around, the child element is always visible. And this is exactly what we need here for the first column and the heading of our table element.

We can use this relatively new feature with CSS like this:

// The heading of our table
th {
  background-color: #eee;
  position: sticky;
  top: -1px;
  z-index: 2;

  // The first cell that lives in the top left of our spreadsheet
  &:first-of-type {
    left: 0;
    z-index: 3;
  }
}

// The first column that we want to stick to the left
tbody tr td:first-of-type {
  background-color: #eee;
  position: sticky;
  left: -1px;
  z-index: 1;
}

These z-index values are important here because we want the header to overlap the first left-hand column, which will also be sticky. However! We also want that empty cell at the top left to overlap both our header and our left-hand column, like this:

See the Pen A Simple Responsive Spreadsheet – Final by Robin Rendle (@robinrendle) on CodePen.

But there we have it! A simple responsive spreadsheet where you can view both the heading and the first column no matter where you are in the table. Although, it’s worth noting that your mileage may vary. position: sticky has relatively patchy support right now and so it’s worth thoroughly testing before you start using it. Or you could use something like Stickybits that would act as a lightweight polyfill.
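For what it’s worth, a fallback could look something like this minimal sketch (the CSS.supports feature test and the th selector are assumptions, not something Stickybits requires):

// A minimal sketch of falling back to Stickybits where
// native position: sticky isn't supported
import stickybits from 'stickybits';

if (!CSS.supports('position', 'sticky')) {
  // Mimic the sticky header cells from the demo above
  stickybits('th', { useStickyClasses: true });
}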

Also, if you need to dig into tables in more depth then we’ve made a rather handy Complete Guide to the Table Element.


An Idea for a Simple Responsive Spreadsheet is a post from CSS-Tricks

Animating Layouts with the FLIP Technique

User interfaces are most effective when they are intuitive and easily understandable to the user. Animation plays a major role in this – as Nick Babich said, animation brings user interfaces to life. However, adding meaningful transitions and micro-interactions is often an afterthought, or something that is “nice to have” if time permits. All too often, we experience web apps that simply “jump” from view to view without giving the user time to process what just happened in the current context.

This leads to unintuitive user experiences, but we can do better, by avoiding “jump cuts” and “teleportation” in creating UIs. After all, what’s more natural than real life, where nothing teleports (except maybe car keys), and everything you interact with moves with natural motion?

In this article, we’ll explore a technique called “FLIP” that can be used to animate the positions and dimensions of any DOM element in a performant manner, regardless of how their layout is calculated or rendered (e.g., height, width, floats, absolute positioning, transform, flexbox, grid, etc.)

Why the FLIP technique?

Have you ever tried to animate height, width, top, left, or any other properties besides transform and opacity? You might have noticed that the animations look a bit janky, and there’s a reason for that. When you animate any property that triggers layout (such as height), the browser has to recursively check whether any other element’s layout has changed as a result, and that can be expensive. If that calculation takes longer than one animation frame (around 16.7 milliseconds), the frame is skipped, resulting in “jank” since it wasn’t rendered in time. In Paul Lewis’ article “Pixels are Expensive”, he goes into further depth on how pixels are rendered and the various performance costs.
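To make that concrete, here’s a small sketch contrasting the two approaches with the Web Animations API (elm is assumed to be any element on the page; animating height forces layout work every frame, while a transform can typically be handled without relayout):

const elm = document.querySelector('.box'); // hypothetical element

// Layout-triggering animation: the browser recalculates layout each frame
elm.animate([{ height: '100px' }, { height: '300px' }], 300);

// Transform-based animation: no layout recalculation, far less jank
elm.animate([{ transform: 'scaleY(1)' }, { transform: 'scaleY(3)' }], 300);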

In short, our goal is to be short – we want to calculate the fewest style changes necessary, as quickly as possible. The key to this is only animating transform and opacity, and FLIP explains how we can simulate layout changes using only transform.

What is FLIP?

FLIP is a mnemonic device and technique first coined by Paul Lewis, which stands for First, Last, Invert, Play. His article contains an excellent explanation of the technique, but I’ll outline it here:

  • First: before anything happens, record the current (i.e., first) position and dimensions of the element that will transition. You can use element.getBoundingClientRect() for this, as will be shown below.
  • Last: execute the code that causes the transition to instantaneously happen, and record the final (i.e., last) position and dimensions of the element.*
  • Invert: since the element is in the last position, we want to create the illusion that it’s in the first position, by using transform to modify its position and dimensions. This takes a little math, but it’s not too difficult.
  • Play: with the element inverted (and pretending to be in the first position), we can move it back to its last position by setting its transform to none.

Below is how these steps can be implemented with the Web Animations API:

const elm = document.querySelector('.some-element');

// First: get the current bounds
const first = elm.getBoundingClientRect();

// execute the script that causes layout change
doSomething();

// Last: get the final bounds
const last = elm.getBoundingClientRect();

// Invert: determine the delta between the
// first and last bounds to invert the element
const deltaX = first.left - last.left;
const deltaY = first.top - last.top;
const deltaW = first.width / last.width;
const deltaH = first.height / last.height;

// Play: animate the final element from its first bounds
// to its last bounds (which is no transform)
elm.animate([{
  transformOrigin: 'top left',
  transform: `
    translate(${deltaX}px, ${deltaY}px)
    scale(${deltaW}, ${deltaH})
  `
}, {
  transformOrigin: 'top left',
  transform: 'none'
}], {
  duration: 300,
  easing: 'ease-in-out',
  fill: 'both'
});

Note: At the time of writing, the Web Animations API is not yet supported in all browsers. However, you can use a polyfill.

See the Pen How the FLIP technique works by David Khourshid (@davidkpiano) on CodePen.

There are two important things to note:

  1. If the element’s size changed, you can animate scale() in order to “resize” it with no performance penalty; however, make sure to set transformOrigin to 'top left' since that’s the point our delta calculations are based on.
  2. We’re using the Web Animations API to animate the element here, but you’re free to use any other animation engine, such as GSAP, Anime, Velocity, Just-Animate, Mo.js and more.

Shared Element Transitions

One common use case for transitioning an element between app views and states is that the final element might not be the same DOM element as the initial one. In Android, this is similar to a shared element transition, except that on the web the element isn’t “recycled” from view to view in the DOM the way it is on Android. Nevertheless, we can still achieve a FLIP transition with a little bit of illusion:

const firstElm = document.querySelector('.first-element');

// First: get the bounds and then hide the element (if necessary)
const first = firstElm.getBoundingClientRect();
firstElm.style.setProperty('visibility', 'hidden');

// execute the script that causes view change
doSomething();

// Last: get the bounds of the element that just appeared
const lastElm = document.querySelector('.last-element');
const last = lastElm.getBoundingClientRect();

// continue with the other steps, just as before.
// remember: you're animating the lastElm, not the firstElm.
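If it helps to see those remaining steps spelled out, here’s a minimal sketch that reuses the delta math from the earlier snippet, applied to lastElm:

// Invert: deltas between the first element's bounds and the last element's bounds
const deltaX = first.left - last.left;
const deltaY = first.top - last.top;
const deltaW = first.width / last.width;
const deltaH = first.height / last.height;

// Play: animate lastElm from firstElm's bounds back to its natural position
lastElm.animate([{
  transformOrigin: 'top left',
  transform: `translate(${deltaX}px, ${deltaY}px) scale(${deltaW}, ${deltaH})`
}, {
  transformOrigin: 'top left',
  transform: 'none'
}], { duration: 300, easing: 'ease-in-out', fill: 'both' });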

Below is an example of how two completely disparate elements can appear to be the same element using shared element transitions. Click one of the pictures to see the effect.

See the Pen FLIP example with WAAPI by David Khourshid (@davidkpiano) on CodePen.

Parent-Child Transitions

With the previous implementations, the element bounds are based on the window. For most use cases, this is fine, but consider this scenario:

  • An element changes position and needs to transition.
  • That element contains a child element, which itself needs to transition to a different position inside the parent.

Since the previously calculated bounds are relative to the window, our calculations for the child element are going to be off. To solve this, we need to ensure that the bounds are calculated relative to the parent element instead:

const parentElm = document.querySelector('.parent');
const childElm = document.querySelector('.parent > .child');

// First: parent and child
const parentFirst = parentElm.getBoundingClientRect();
const childFirst = childElm.getBoundingClientRect();

doSomething();

// Last: parent and child
const parentLast = parentElm.getBoundingClientRect();
const childLast = childElm.getBoundingClientRect();

// Invert: parent
const parentDeltaX = parentFirst.left - parentLast.left;
const parentDeltaY = parentFirst.top - parentLast.top;

// Invert: child relative to parent
const childDeltaX = (childFirst.left - parentFirst.left) - (childLast.left - parentLast.left);
const childDeltaY = (childFirst.top - parentFirst.top) - (childLast.top - parentLast.top);

// Play: using the WAAPI
parentElm.animate([
  { transform: `translate(${parentDeltaX}px, ${parentDeltaY}px)` },
  { transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });

childElm.animate([
  { transform: `translate(${childDeltaX}px, ${childDeltaY}px)` },
  { transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });

A few things to note here, as well:

  1. The timing options for the parent and child (duration, easing, etc.) do not necessarily need to match with this technique. Feel free to be creative!
  2. Changing dimensions in parent and/or child (width, height) was purposefully omitted in this example, since it is an advanced and complex topic. Let’s save that for another tutorial.
  3. You can combine the shared element and parent-child techniques for greater flexibility.

Using Flipping.js for Full Flexibility

The above techniques might seem straightforward, but they can get quite tedious to code once you have to keep track of multiple elements transitioning. Android eases this burden by:

  • baking shared element transitions into the core SDK
  • allowing developers to identify which elements are shared by using a common android:transitionName XML attribute

I’ve created a small library called Flipping.js with the same idea in mind. By adding a data-flip-key="..." attribute to HTML elements, it’s possible to predictably and efficiently keep track of elements that might change position and dimensions from state to state.

For example, consider this initial view:

<section class="gallery">
  <div class="photo-1" data-flip-key="photo-1">
    <img src="/photo-1"/>
  </div>
  <div class="photo-2" data-flip-key="photo-2">
    <img src="/photo-2"/>
  </div>
  <div class="photo-3" data-flip-key="photo-3">
    <img src="/photo-3"/>
  </div>
</section>

And this separate detail view:

<section class="details">
  <div class="photo" data-flip-key="photo-1">
    <img src="/photo-1"/>
  </div>
  <p class="description">
    Lorem ipsum dolor sit amet...
  </p>
</section>

Notice in the above example that there are 2 elements with the same data-flip-key="photo-1". Flipping.js tracks the “active” element by choosing the first element that meets these criteria:

  • The element exists in the DOM (i.e., it hasn’t been removed or detached)
  • The element is not hidden (hint: elm.getBoundingClientRect() will have { width: 0, height: 0 } for hidden elements)
  • Any custom logic specified in the selectActive option.

Getting Started with Flipping.js

There are a few different packages for Flipping, depending on your needs:

  • flipping.js: tiny and low-level; only emits events when element bounds change
  • flipping.web.js: uses WAAPI to animate transitions
  • flipping.gsap.js: uses GSAP to animate transitions
  • More adapters coming soon!

You can grab the minified code directly from unpkg:

  • https://unpkg.com/flipping@latest/dist/flipping.js
  • https://unpkg.com/flipping@latest/dist/flipping.web.js
  • https://unpkg.com/flipping@latest/dist/flipping.gsap.js

Or you can npm install flipping --save and import it into your projects:

// import not necessary when including the unpkg scripts in a <script src="..."> tag
import Flipping from 'flipping/adapters/web';

const flipping = new Flipping();

// First: let Flipping read all initial bounds
flipping.read();

// execute the change that causes any elements to change bounds
doSomething();

// Last, Invert, Play: the flip() method does it all
flipping.flip();

Handling FLIP transitions as a result of a function call is such a common pattern that the .wrap(fn) method transparently wraps (or “decorates”) the given function: it first calls .read(), then calls the function and keeps its return value, then calls .flip(), and finally returns that value. This leads to much less code:

const flipping = new Flipping();
const flippingDoSomething = flipping.wrap(doSomething);

// anytime this is called, FLIP will animate changed elements
flippingDoSomething();

Here is an example of using flipping.wrap() to easily achieve the shifting letters effect. Click anywhere to see the effect.

See the Pen Flipping Birthstones #Codevember by David Khourshid (@davidkpiano) on CodePen.

Adding Flipping.js to Existing Projects

In another article, we created a simple React gallery app using finite state machines. It works just as expected, but the UI could use some smooth transitions between states to prevent “jumping” and improve the user experience. Let’s add Flipping.js into our React app to accomplish this. (Keep in mind, Flipping.js is framework-agnostic.)

Step 1: Initialize Flipping.js

The Flipping instance will live on the React component itself, so that it’s isolated to only changes that occur within that component. Initialize Flipping.js by setting it up in the componentDidMount lifecycle hook:

componentDidMount() {
  const { node } = this;
  if (!node) return;

  this.flipping = new Flipping({
    parentElement: node
  });

  // initialize flipping with the initial bounds
  this.flipping.read();
}

By specifying parentElement: node, we’re telling Flipping to only look for elements with a data-flip-key in the rendered App, instead of the entire document.
Then, modify the HTML elements with the data-flip-key attribute (similar to React’s key prop) to identify unique and “shared” elements:

renderGallery(state) {
  return (
    <section className="ui-items" data-state={state}>
      {this.state.items.map((item, i) =>
        <img
          src={item.media.m}
          className="ui-item"
          style={{'--i': i}}
          key={item.link}
          onClick={() => this.transition({ type: 'SELECT_PHOTO', item })}
          data-flip-key={item.link}
        />
      )}
    </section>
  );
}

renderPhoto(state) {
  if (state !== 'photo') return;

  return (
    <section
      className="ui-photo-detail"
      onClick={() => this.transition({ type: 'EXIT_PHOTO' })}>
      <img
        src={this.state.photo.media.m}
        className="ui-photo"
        data-flip-key={this.state.photo.link}
      />
    </section>
  )
}

Notice how the img.ui-item and img.ui-photo are represented by data-flip-key={item.link} and data-flip-key={this.state.photo.link} respectively: when the user clicks on an img.ui-item, that item is set to this.state.photo, so the .link values will be equal.

And since they are equal, Flipping will smoothly transition from the img.ui-item thumbnail to the larger img.ui-photo.

Now we need to do two more things:

  1. call this.flipping.read() whenever the component will update
  2. call this.flipping.flip() whenever the component did update

Some of you might have already guessed where these method calls are going to occur: componentWillUpdate and componentDidUpdate, respectively:

componentWillUpdate() {
  this.flipping.read();
}

componentDidUpdate() {
  this.flipping.flip();
}

And, just like that, if you’re using a Flipping adapter (such as flipping.web.js or flipping.gsap.js), Flipping will keep track of all elements with a [data-flip-key] and smoothly transition them to their new bounds whenever they change. Here is the final result:

See the Pen FLIPping Gallery App by David Khourshid (@davidkpiano) on CodePen.

If you would rather implement custom animations yourself, you can use flipping.js as a simple event emitter. Read the documentation for more advanced use-cases.

Flipping.js and its adapters handle the shared element and parent-child transitions by default, as well as:

  • interrupted transitions (in adapters)
  • enter/move/leave states
  • support for plugins, such as mirror, which allows newly entered elements to “mirror” another element’s movement
  • and more planned in the future!

Resources

Similar libraries include:

  • FlipJS by Paul Lewis himself, which handles simple single-element FLIP transitions
  • React-Flip-Move, a useful React library by Josh Comeau
  • BarbaJS, not necessarily a FLIP library, but one that allows you to add smooth transitions between different URLs, without page jumps.

Further resources:

  • Animating the Unanimatable – Joshua Comeau
  • FLIP your Animations – Paul Lewis
  • Pixels are Expensive – Paul Lewis
  • Improving User Flow Through Page Transitions – Luigi de Rosa
  • Smart Transitions in User Experience Design – Adrian Zumbrunnen
  • What Makes a Good Transition? – Nick Babich
  • Motion Guidelines in Google’s Material Design
  • Shared Element Transition with React Native

Animating Layouts with the FLIP Technique is a post from CSS-Tricks

Recreating the Apple Watch Breathe App Animation

The Apple Watch comes with a stock app called Breathe that reminds you to, um, breathe. There’s actually more to it than that, but the thought of needing a reminder to breathe makes me giggle. The point is, the app has this kinda awesome interface with a nice animation.

Photo courtesy of Apple Support

I thought it would be fun to recreate the design in vanilla CSS. Here’s how far I got, which feels pretty close.

See the Pen Apple Watch Breathe App Animation by Geoff Graham (@geoffgraham) on CodePen.

Making the circles

First things first, we need a set of circles that make up that flower looking design. The app itself adds a circle to the layout for each minute that is added to the timer, but we’re going to stick with a static set of six for this demo. It feels like we could get tricky by using ::before and ::after to reduce the HTML markup, but we can keep it simple.

<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>

We’re going to make the full size of each circle 125px, which is an arbitrary number. The important thing is that the default state of the circles should be all of them stacked on top of one another. We can use absolute positioning to do that.

.circle {
  border-radius: 50%;
  height: 125px;
  position: absolute;
  transform: translate(0, 0);
  width: 125px;
}

Note that we’re using the translate function of the transform property to center everything. I had originally tried using basic top, right, bottom, left properties but found later that animating translate is much smoother. I also originally thought that positioning the circles in the full expanded state would be the best place to start, but also found that the animations were cumbersome to create that way because it required resetting each one to center. Lessons learned!

If we were to stop here, there would be nothing on the screen and that’s because we have not set a background color. We’ll get to the nice fancy colors used in the app in a bit, but it might be helpful to add a white background for now with a hint of opacity to help see what’s happening as we work.

See the Pen Apple Watch Breathe App – Step 1 by Geoff Graham (@geoffgraham) on CodePen.

We need a container!

You may have noticed that our circles are nicely stacked, but nowhere near the actual center of the viewport. We’re going to need to wrap these bad boys in a parent element that we can use to position the entire bunch. Plus, that container will serve as the element that pulses and rotates the entire set later. That was another lesson I had to learn the hard way because I stubbornly did not want the extra markup of a container and thought I could work around it.

We’re calling the container .watch-face here and setting it to the same width and height as a single circle.

<div class="watch-face">
  <div class="circle"></div>
  <div class="circle"></div>
  <div class="circle"></div>
  <div class="circle"></div>
  <div class="circle"></div>
  <div class="circle"></div>
</div>

Now, we can add a little flex to the body element to center everything up.

body {
  background: #000;
  display: flex;
  align-items: center;
  justify-content: center;
  height: 100vh;
}

See the Pen Apple Watch Breathe App – Step 2 by Geoff Graham (@geoffgraham) on CodePen.

Next up, animate the circles

At this point, I was eager to see the circles positioned in that neat floral, overlapping arrangement. I knew that it would be difficult to animate the exact position of each circle without seeing them positioned first, so I overrode the transform property in each circle to see where they’d land.

We could set up a class for each circle, but using :nth-child seems easier.

.circle:nth-child(1) {
  transform: translate(-35px, -50px);
}

/* Skipping 2-5 for brevity... */

.circle:nth-child(6) {
  transform: translate(35px, 50px);
}

It took me a few swings and misses to find coordinates that worked. It ultimately depends on the size of the circles and it may take some finessing.

See the Pen Apple Watch Breathe App – Step 3 by Geoff Graham (@geoffgraham) on CodePen.

Armed with the coordinates, we can register the animations. I removed the transform coordinates that were applied to each :nth-child and moved them into keyframes:

@keyframes circle-1 {
  0% {
    transform: translate(0, 0);
  }
  100% {
    transform: translate(-35px, -50px);
  }
}

/* And so on... */

I have to admit that the way I went about it feels super clunky because each circle has its own animation. It would be slicker to have one animation that could rule them all to push and re-center the circles, but maybe someone else reading has an idea and can share it in the comments.

Now we can apply those animations to each :nth-child in place of transform:

.circle:nth-child(1) {
  animation: circle-1 4s ease alternate infinite;
}

/* And so on... */

Note that we set the animation-timing-function to ease because that feels smooth…at least to me! We also set the animation-direction to alternate so it plays back and forth, and set the animation-iteration-count to infinite so it stays running.

See the Pen Apple Watch Breathe App – Step 4 by Geoff Graham (@geoffgraham) on CodePen.

Color, color, color!

Oh yeah, let’s paint this in! From what I can tell, there are really only two colors in the design and the opacity is what makes it feel like more of a spectrum.

The circles on the left are a greenish color and the ones on the right are sorta blue. We can select the odd-numbered circles to apply the green and the even-numbered ones to apply the blue.

.circle:nth-child(odd) {
  background: #61bea2;
}

.circle:nth-child(even) {
  background: #529ca0;
}

Oh, and don’t forget to remove the white background from the .circle element. It won’t hurt anything, but it’s nice to clean up after ourselves. I admittedly forgot to do this on the first go.

See the Pen Apple Watch Breathe App – Step 5 by Geoff Graham (@geoffgraham) on CodePen.

Pulse and rotate

Remember that pesky .watch-face container we created? Well, we can animate it to pulse the circles in and out while rotating the entire bunch.

I had totally forgotten that transform functions can be chained together. That makes things a little cleaner because it allows us to apply scale() and rotate() on the same line.

@keyframes pulse {
  0% {
    transform: scale(.15) rotate(180deg);
  }
  100% {
    transform: scale(1);
  }
}

…and apply that to the .watch-face element.

.watch-face {
  height: 125px;
  width: 125px;
  animation: pulse 4s cubic-bezier(0.5, 0, 0.5, 1) alternate infinite;
}

Like the circles, we want the animation to run both ways and repeat infinitely. In this case, the scale drops to a super small size as the circles stack on top of each other and the whole thing rotates halfway on the way out before returning back on the way in.

I’ll admit that I am not a buff when it comes to finding the right animation-timing-function for the smoothest or exact animations. I played with cubic-bezier and found something I think feels pretty good, but it’s possible that a stock value like ease-in would work just as well.

All together now!

Here’s everything smushed into the same demo.

See the Pen Apple Watch Breathe App Animation by Geoff Graham (@geoffgraham) on CodePen.

Now, just remember to breathe and let’s all have a peaceful day. ☮️


Recreating the Apple Watch Breathe App Animation is a post from CSS-Tricks

Simple Patterns for Separation (Better Than Color Alone)

Color is pretty good for separating things. That’s what your basic pie chart is, isn’t it? You tell the slices apart by color. With enough color contrast, you might be OK, but you might be even better off (particularly where accessibility is concerned) using patterns, or a combination.

Patrick Dillon tackled the Pie Chart thing

Enhancing Charts With SVG Patterns:

When one of the slices is filled with something more than color, it’s easier to figure out [who the Independents are]:

See the Pen Political Party Affiliation – #2 by Patrick Dillon (@pdillon) on CodePen.

Filling a pie slice with a pattern is not a common charting library feature (yet), but if your library of choice is SVG-based, you are free to implement SVG patterns.

As in, literally a <pattern /> in SVG!

Here’s a simple one for horizontal lines:

<pattern id="horzLines" width="8" height="4" patternUnits="userSpaceOnUse">
  <line x1="0" y1="0" x2="8" y2="0" style="stroke:#999;stroke-width:1.5" />
</pattern>

Now any SVG element can use that pattern as a fill. Even strokes. Here’s an example of mixed usage of two simple patterns:

See the Pen Simple Line Patterns by Chris Coyier (@chriscoyier) on CodePen.

That’s nice for filling SVG elements, but what about HTML elements?

Irene Ros created Pattern Fills that are SVG based, but usable in CSS also.

Using SVG Patterns as Fills:

There are several ways to use Pattern Fills:

  • You can use the patterns.css file that contains all the current patterns. That will only work for non-SVG elements.

  • You can use individual patterns by copying them from the sample pages. CSS class definitions can be found here and SVG pattern defs can be found here

  • You can add your own patterns or modify mine! The conversion process from SVG document to pattern is very tedious. The purpose of the pattern fills toolchain is to simplify this process. You can clone the repo, run npm install and grunt dev to get a local server going. After that, any changes or additions to the src/patterns/**/* files will be automatically picked up and will re-render the CSS file and the sample pages. If you make new patterns, send them over in a pull request!

Here’s me applying them to SVG elements (but could just as easily be applied to HTML elements):

See the Pen Practical Patterns by Chris Coyier (@chriscoyier) on CodePen.

The CSS usage is as base64 data URLs though, so once they are there they aren’t super duper manageable/changeable.

Here’s Irene with an old timey chart, using d3:

Managing an SVG pattern in CSS

If you URL-encode the SVG just right, you can plop it right into CSS and have it remain fairly manageable.
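And if you’d rather not hand-encode it, a couple of lines of JavaScript can generate the encoded value for you (the .chart-slice selector and the pattern SVG below are just made-up examples):

// Build a URL-encoded SVG pattern and use it as a CSS background
const svg = '<svg xmlns="http://www.w3.org/2000/svg" width="8" height="4">' +
  '<line x1="0" y1="0" x2="8" y2="0" stroke="#999" stroke-width="1.5"/>' +
  '</svg>';

const el = document.querySelector('.chart-slice');
el.style.backgroundImage = `url("data:image/svg+xml,${encodeURIComponent(svg)}")`;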

See the Pen Simple Line Patterns by Chris Coyier (@chriscoyier) on CodePen.

Other Examples Combining Color

Here’s one by John Schulz:

See the Pen SVG Colored Patterns by Chris Coyier (@chriscoyier) on CodePen.

Ricardo Marimón has an example creating the pattern in d3. The pattern looks largely the same on the slices, but perhaps it’s a start to modify.

Other Pattern Sources

We rounded a bunch of them up recently!


Simple Patterns for Separation (Better Than Color Alone) is a post from CSS-Tricks