Evolution of img: Gif without the GIF

Colin Bendell writes about a new and particularly weird addition to Safari Technology Preview in this excellent post about the evolution of animated images on the web. He explains how we can now add an MP4 file directly to the source of an img tag. That would look something like this:

<img src="video.mp4"/>

The idea is that that code would render an image with a looping video inside. As Colin describes, this provides a host of performance benefits:

Animated GIFs are a hack. […] But they have become an awesome tool for cinemagraphs, memes, and creative expression. All of this awesomeness, however, comes at a cost. Animated GIFs are terrible for web performance. They are HUGE in size, impact cellular data bills, require more CPU and memory, cause repaints, and are battery killers. Typically GIFs are 12x larger files than H.264 videos, and take 2x the energy to load and display in a browser. And we’re spending all of those resources on something that doesn’t even look very good – the GIF 256 color limitation often makes GIF files look terrible…

By enabling video content in img tags, Safari Technology Preview is paving the way for awesome Gif-like experiences, without the terrible performance and quality costs associated with GIF files. This functionality will be fantastic for users, developers, designers, and the web. Besides the enormous performance wins that this change enables, it opens up many new use cases that media and ecommerce businesses have been yearning to implement for years. Here’s hoping the other browsers will soon follow.

This seems like a weird hack but, after mulling it over for a second, I get how simple and elegant a solution it is. It also sort of means that other browsers might not need to support WebP in the future.
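If other browsers do follow suit, the `<picture>` element's type negotiation gives us a way to ship the MP4 today while falling back to a GIF elsewhere. A sketch of that idea (file names hypothetical):

<picture>
  <source type="video/mp4" srcset="animation.mp4">
  <img src="animation.gif" alt="A looping animation">
</picture>

Browsers that can render MP4 in an image context pick the first source; everyone else gets the plain GIF.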


Calendar with CSS Grid

Here’s a nifty post by Jonathan Snook where he walks us through how to make a calendar interface with CSS Grid. There are a lot of tricks in here that are worth digging into a little bit more, particularly where Jonathan uses grid-auto-flow: dense, which lets Grid take the wheel of a design and try to fill up as much of the allotted space as possible.
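Here’s a minimal sketch of that idea (class names hypothetical, not Jonathan’s actual code): a seven-column grid where events span days, and dense packing backfills any gaps left behind.

.calendar {
  display: grid;
  grid-template-columns: repeat(7, 1fr); /* one column per weekday */
  grid-auto-flow: dense; /* let Grid backfill earlier empty cells */
}

.event--three-day {
  grid-column: span 3; /* an event spanning three days */
}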

As I was digging around, I found a post on Grid’s auto-placement algorithm by Ian Yates which kinda fleshes things out more succinctly. Might come in handy.

Oh, and we have an example of a Grid-based calendar in our ongoing collection of CSS Grid starter templates.


An Open Source Etiquette Guidebook

Open source software is thriving. Large corporations are building on software that rests on open collaboration, enjoying the many benefits of significant community adoption. Free and open source software is amazing for its ability to bring together many people from all over the world and join their efforts and skills around shared interests.

That said, and because we come from so many different backgrounds, it’s worth taking a moment to reflect on how we work together. The manner in which you conduct yourself while working with others can sometimes impact whether your work is merged, whether someone works on your issue, or in some cases, why you might be blocked from participating in the repository in the future. This post was written to guide people as best as possible on how to keep these communications running smoothly. Here’s a bullet point list of etiquette in open source to help you have a more enjoyable time in the community and contribute to making it a better place.

For the Maintainer

  • Use labels like “help wanted” or “beginner friendly” to guide people to issues they can work on if they are new to the project.
  • When running benchmarks, show the authors of the framework/library/etc. the benchmark code before you run it. Allow them to submit PRs (it’s OK to give a deadline). That way, when your benchmark is run, you know it has their approval and is as fair as possible. This also catches issues like benchmarking dev instead of prod, or other user errors.
  • When you ask someone for help or label an issue help wanted and someone PRs, please write a comment explaining why you are closing it if you decide not to merge. It’s disrespectful of their time otherwise, as they were following your call to action. I would even go so far as to say it would be nice to comment on any PR that you close OR merge, to explain why or say thank you, respectively.
  • Don’t close a PR from an active contributor and reimplement the same thing yourself. Just… don’t do this.
  • If a fight breaks out on an issue and it gets personal, shut it down to core maintainers as soon as possible. Lock the issue and enforce the code of conduct, if necessary.
  • Have a code of conduct and make its presence clear. You might consider the Contributor Covenant code of conduct. GitHub also now offers easy code of conduct integration with some base templates.

For the User

  • Saying thank you for the project before making an inquiry about a new feature or filing a bug is usually appreciated.
  • When opening an issue, create a small, isolated, simple reproduction of the issue using an online code editor (like CodePen or CodeSandbox) if possible, and a GitHub repository if not. The process may help you discover the underlying issue (or realize that it’s not an issue with the project). It will also make it easier for maintainers to help you resolve the problem.
  • When opening an issue, please suggest a solution to the problem. Take a few minutes to do a little digging. This blog post has a few suggestions for how to dive into the source code a little. If you’re not sure, explain you’re unsure what to do.
  • When opening an issue, if you’re unable to resolve it yourself, please explain that. The expectation is that you resolve the issues you bring up. If someone else does it, that’s a gift they’re giving to you (so you should express the appropriate gratitude in that case).
  • Don’t file issues that say things like “is this even maintained anymore?” A comment like this is insulting to the time the maintainers have put in; it reads as though the project is no longer valid just because they needed a break, were working on something else, their dad died, they had a kid, or any of the myriad other human reasons for not being at the beck and call of code. It’s totally OK to ask if there’s a roadmap for the future, or to decide based on past commits that it’s not maintained enough for your liking. It’s not OK to be passive-aggressive toward someone who created something for you for free.
  • If someone respectfully declines a PR because, though it’s valid code, it’s not the direction they’d like to take the project, don’t keep commenting on the pull request. At that point, it might be a better idea to fork the project if you feel strongly about the need for the feature.
  • When you want to submit a really large pull request to a project you’re not a core contributor on, it’s a good idea to ask via an issue if the direction you’d like to go makes sense. This also means you’re more likely to get the pull request merged because you have given them a heads up and communicated the plan. Better yet, break it into smaller pull requests so that it’s not too much to grok at one time.
  • Avoid entitlement. The maintainers of the project don’t owe you anything. When you start using the project, it becomes your responsibility to help maintain it. If you don’t like the way the project is being maintained, be respectful when you provide suggestions and offer help to improve the situation. You can always fork the project to work on your own if you feel very strongly it’s not the direction you would personally take it.
  • Before doing anything on a project, familiarize yourself with the contributor guidelines often found in a CONTRIBUTING.md file at the root of the repository. If one does not exist, file an issue to ask if you could help create one.

Final Thoughts

The overriding theme of these tips is to be polite, respectful, and kind. The value of open source to our industry is immeasurable. We can make it a better place for everyone by following some simple rules of etiquette. Remember that often maintainers of projects are working on it in their spare time. Also don’t forget that users of projects are sometimes new to the ever-growing software world. We should keep this in mind when communicating and working together. By so doing, we can make the open source community a better place.



The User Experience of Design Systems

Rune Madsen jotted down his notes from a talk he gave at UX Camp Copenhagen back in May all about design systems and also, well, the potential problems that can arise when building a single unifying system:

When you start a redesign process for a company, it’s very easy to briefly look at all their products (apps, websites, newsletters, etc) and first of all make fun of how bad it all looks, and then design this one single design system for everything. However, once you start diving into why those decisions were made, they often reveal local knowledge that your design system doesn’t solve. I see this so often where a new design system completely ignores for example the difference between platforms because they standardized their components to make mobile and web look the same. Mobile design is just a different thing: Buttons need to be larger, elements should float to the bottom of the screen so they are easier to reach, etc.

This is born from one of Rune’s primary critiques of design systems: that they often benefit the designer over the user. Even if a company’s products aren’t the prettiest of all things, they were created in a way that solved for a need at the time, and perhaps we can learn from that rather than assume that standardization is the only way to solve user needs. There’s a difference between standardization and consistency, and erring too heavily on the side of standards could have a watering-down effect on UX that throws the baby out with the bathwater.

A very good read (and presentation) indeed!


Slate’s URLs Are Getting a Makeover

Greg Lavallee writes about a project currently underway at Slate, where they’ve defined a new goal for themselves:

Our goal is speed: Readers should be able to get to what they want quickly, writers should be able to swiftly publish their posts, and developers should be able to code with speed.

They’ve already started shipping a lot of neat improvements to the website but the part that really interests me is where they focus on redefining their URLs:

As a web developer and product dabbler, I love URLs. URLs say a tremendous amount about an application’s structure, and their predictability is a testament to the elegance of the systems behind them. A good URL should let you play with it and find delightful new things as you do.

Each little piece of our new URL took a significant amount of planning and effort by the Slate tech team.

The key takeaway? URLs can improve user experience. In the case of Slate, their URL structure contained redundant subdirectory paths, unnecessary bits, and inverted information. The result is something that reads more like a true hierarchy and informs the reader that there may be more goodies to discover earlier in the path.
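To make that concrete, here’s a purely hypothetical before-and-after in the spirit of the changes Greg describes (these are not Slate’s actual URLs):

Before: slate.com/articles/news_and_politics/politics/2017/11/a_story_slug.html
After:  slate.com/news-and-politics/2017/11/a-story-slug

The redundant subdirectory pair and the file extension are gone, and each remaining segment is a meaningful level of hierarchy a reader could truncate and explore.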


On Building Features

We’ve released a couple of features recently at CodePen that I played a role in. It got me thinking a little bit about the process of that. It’s always unique, and for a lot of reasons. Let’s explore that.

What was the spark?

Features start with ideas.

Was it a big bright spark that happened all of a sudden? Was it a tiny spark that happened a long time ago, but has slowly grown bright?

Documenting ideas can help a lot. We talked about that on CodePen Radio recently. If you actually write down ideas (both your own and as requested by users), it can clarify and contextualize them.

Documenting all ideas in Notion

There is also tooling (e.g. UserVoice) built specifically for letting user feedback guide feature development.

Personally, I prefer a mix of internal product vision with measured customer requests, staying light on the public roadmap.

The addition of design assets on CodePen, one of the recent features I worked on, was more of a slowly-intensifying spark than a hot-and-fast one. It came from years of aggregated user requests. CodePen should have a color picker. That’d be neat, we would think. It should be easier to use custom fonts. Yeah… we also jump around copying code from Google Fonts awfully regularly.

Then we get an email from Unsplash that was essentially hey, ya know, we have an API. Hmmmm. You sure do! The spark then was gosh all these things feel really related. They are all things that help you with design. Design assets, as it were.

Perhaps we could say this is a good recipe to kick off a new feature: It seems like a good idea. Your instinct is to do it. You want it yourself. You have enough research that your users want it too.

While you’re in there…

The spark has been lit. It feels like a good idea and should be done now. Now what?

When you’re working on a new feature for an existing project, you can’t help but consider where it fits into the application’s UI and UX. Perhaps it’s just the designer in me, but design-led feature development really seems like the way to go. First, decide exactly what it’s going to do, be like to use, and look like, then build around that.

I’m always the buzzkill when it comes to non-UI/UX features and improvements. I try to turn Let’s switch to Postgres into Let’s find a way, if we really gotta switch to Postgres, to give something to the users while we do it. But I digress.

I’d wager most new features aren’t let’s add an entirely new area to the site. Most site work is adding/removing/refining smaller bits to what is already there.

In the case of the new design assets feature we were building, it was clear we wanted to add it inside our code editor, as that’s where you would need them. Our tendency is generally let’s make a new modal! I’m not anti-modal in a situation like this. Click a button, switch mental contexts for a moment to find a design asset, copy what you need, then close it and use it. Plus we already use modals quite a bit within the editor, so there is a built-up affordance to this kind of interaction.

But a new modal? Maybe. Some things warrant entirely new UI. The minute we start considering new UI though, I always consider that a woah there cowboy moment. Not because new UI is difficult, in fact, because it’s too easy. I’d much rather refine what we already have, when possible. That’s where this feature took us.

We already have an assets feature, which allows people to upload files that are, quite often, design assets! Why not combine those two worlds? And this is the while you’re in there… moment. Our existing Assets modal needed some love anyway. There is a similar backlog of ideas for improving that.

So this became an opportunity to not just create a new feature, but clean up an existing feature. We fleshed out the feature set for existing asset uploads as well, offering easier UX, like click-to-copy buttons and action buttons that allow you to add the URLs as external resources, or pop them open in our asset editor to make changes.

Cleaning up goes for UI design work, front-end code, and back-end code as well. Certainly the CSS, as readers of this site know! Features are a great excuse for spring cleaning.

Who can work on it?

This is a huge question to answer when it comes to new feature development. Even small teams (like I’m on) are subdivided into smaller ones on a per-feature basis.

To arrive at the answer for a new feature, it can be hugely beneficial to one-sheet that sucker. A one-sheet is a document that you construct at the beginning of building a new thing where you scope out what is required.

It forces you to not think narrowly about what you are about to do, but broadly. That way you avoid situations where you’re like I’ll just add this little checkbox over here … 7 months later, 2,427 files touched … done!

A one-sheet document might be like this:

  • Overview. Explain what you’re building and why.
  • Alternate solutions. Have you thought of multiple ways to approach this?
  • Front-end overview. Including design, accessibility, and performance.
  • Back-end overview.
  • Data Considerations. Does this touch the database?
  • API and services considerations.
  • Customer support considerations. Is it likely this will cause more or less support?
  • Monitoring, logging, and analytics considerations.
  • Security considerations.
  • Testing considerations.
  • Community safety considerations.
  • Cost considerations.

If you’ve gone through that whole list in earnest and written up everything, you’ll be in much better shape. You’ll know exactly what this new feature will take and be closer to estimating a timeline.

Crucially, you’ll know who you need to work on it.

This passage, from Fabricio Teixeira, rings true:

Designers have to understand how digital products work beyond the surface layer, and how even the tiniest design decision can create a ripple effect on many other places.

You have to bring other disciplines to the table when you start talking about a “minor design change” in your product. There’s a good chance “minor design changes” don’t really exist.

When it came to our design assets mini feature, one of the major reasons we were able to jump on it was because, assuming we scoped what it was going to do appropriately (see next section), 90%+ of the work could be done by a single front-end developer/designer. Of anyone, I had the most open schedule, so I was able to take it on.

Some features, perhaps most features, require more interdisciplinary teams. Huge features, on our small team, usually take just about everybody.

Version One vs. The Future

It’s highly likely you’ll have to scope down your ideas to something manageable. It’s so tempting to go big with ideas, but the bigger you go, the slower you go. I’m sure we’ve all been in situations where even small features take three times as long as you expected them to.

I try to be the guy jamming stuff out the door, at least when I’m at a place where I know refinements and polish aren’t pipe dreams. I tend to find more trouble with scope creep and delays than things going out too quickly.

When it came to our design assets feature, as I mentioned, I wanted to scope it to an almost front-end-only project at first, so that it didn’t require a bunch of us to work on it. That was balanced with the fact that I was sure we could make it pretty useful without needing a ton of back-end work. I wouldn’t hamstring a feature just for people-availability reasons, but sometimes the stars align that way.

The color-picker part of the design assets modal is a good example of that. Right away we considered that someone might want to save their own favorite colors as a palette. Perhaps on a per-Pen basis or global to their account. I think that’s a neat idea too, but that requires some user database work to prepare us for it. It seems like quite a small thing, but we would definitely take our time with something like that to make sure the database changes were abstract enough that we weren’t just slapping on a “favorite colors” column, but building a system that would scale appropriately.

So a simple color picker can be “v1”! No problem! Get that thing out the door. See how people use it. See what they ask for. Then refine it and add to it as needed.

To be perfectly honest, there hasn’t been an awful lot of feedback on it. That’s usually one for the win column. People vocally loving it is certainly better, but that’s rare. When products work and do what people want, they usually just silently and contentedly go about their business. But if you get in their way, at least at a certain scale, they’ll tell you.

Perhaps one day we’ll revisit the design assets area and do a v2! Saved favorites. More image search providers. More search in general. Better memory for what you’ve used before and showing you those things faster. That kind of refining might require a different team. It’s also just as satisfying of a project as the v1, if not more so.

Here’s a better look at what v1 turned out to be:

Another example… CodePen’s new External Assets

Speaking of refining a feature! Let’s map this stuff onto another feature we recently worked on at CodePen. We just revamped how our External Assets area works. This is what it’s like now:

It’s somewhat unlikely most people have a strong memory of what it was like before. This isn’t that different. The big UI difference is that big search box. Before, the inputs were the search fields, typeahead style. We’re still using typeahead, but have moved it to the search box, which I think is a stronger affordance.

Moving where typeahead takes place is a minor change indeed, but we had lots of evidence that people had no idea we even offered it. Using the visual search affordance completely fixes that.

Another significant UX improvement comes in the form of those remembered resources. Whenever you choose a resource, it remembers you did, and gives you a little button for adding it again. Hey! That’s a lot like favoriting design assets, isn’t it?! Good thing we didn’t make that “favorite colors” database column because already we’re seeing places a more abstract system would be useful.

In this case, we decided to save those favorites in localStorage. That lets us experiment with a UI that handles favorites without needing to touch the database quite yet. The advantage of moving it to the database is that favorites of any kind could follow a user across browsers and sessions and stuff without worry of losing them. There is always a v3!
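A sketch of what that localStorage approach could look like (names hypothetical; this isn’t CodePen’s actual code):

const KEY = "recentExternalResources";

function rememberResource(url) {
  const saved = JSON.parse(localStorage.getItem(KEY) || "[]");
  // move this URL to the front, dropping any earlier occurrence, capped at 10
  const next = [url, ...saved.filter(u => u !== url)].slice(0, 10);
  localStorage.setItem(KEY, JSON.stringify(next));
}

function recentResources() {
  return JSON.parse(localStorage.getItem(KEY) || "[]");
}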

There were also some behind-the-scenes updates here that, internally, we’re just as excited about. That typeahead feature? It searches tens or hundreds of thousands of resources. That’s a lot of data. Before this, we handled it by not downloading that data until you clicked into a typeahead field. But then we did download it: a huge chunk of JSON we stored on our own servers, one that went out of date regularly and required us to update all the time. We had a system for updating it, but it still required work. The new system uses the CDNjs API directly, meaning that no huge download ever needs to take place and the resources are always up to date. Well, as up to date as CDNjs is, anyway.
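For flavor, querying cdnjs directly looks something like this (a sketch against the public api.cdnjs.com endpoint; not CodePen’s actual implementation):

async function searchCdnjs(query) {
  const resp = await fetch(
    `https://api.cdnjs.com/libraries?search=${encodeURIComponent(query)}&fields=version,description`
  );
  const { results } = await resp.json();
  // each result includes the library name and a `latest` URL pointing at its main file
  return results.map(lib => ({ name: lib.name, url: lib.latest }));
}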

Speaking of a v3, we already have loads of ideas for that. Speed is a slight concern. How can we speed up the search? How can we scope those results by popularity? How can we loosen up and guess better what resource you are searching for? Probably most significantly, how can we open this up to resources on npm? We’re hoping to address all of this stuff. But fortunately, none of it held up getting a better thing out the door now.

Wrapping Up

A bit of a ramble eh?

Definitely some incomplete thoughts here, but feature development has been on the ol’ brain a lot lately and I wanted to get something down. So many of us developers live in this cycle our entire career. A lot of us have significant say in what we build and how we build it. There is an incredible amount to think about related to all this, and arguably no obvious set of best practices. It’s too big, too nebulous. You can’t hold it in your hand and say this is how we do feature development. But you can think real hard about it, have some principles that work for you, and try to do the best you can.



​HelloSign API: Your development time matters

(This is a sponsored post.)

We know that no API can write your code for you, but ours comes close. We’ve placed great importance on making sure our API is the most developer-friendly API available — prioritizing clean documentation, an industry-first API dashboard for easy tracking and debugging, and trained API support engineers to personally assist with your integration. That means you won’t find an eSignature product with an easier or faster path to implementation. It’s 2x faster than other eSignature APIs.

If you’re a business looking for a way to integrate eSignatures into your website or app, test drive HelloSign API for free today.


Making your web app work offline, Part 2: The Implementation

This two-part series is a gentle, high-level introduction to offline web development. In Part 1 we got a basic service worker running, which caches our application resources. Now let’s extend it to support offline.

Article Series:

  1. The Setup
  2. The Implementation (you are here!)

Making an `offline.htm` file

Next, let’s add some code to detect when the application is offline, and if so, redirect our users to a (cached) `offline.htm`.

But wait, if the service worker file is generated automatically, how do we go about adding in our own code, manually? Well, we can add an entry for importScripts, which tells our service worker to import the scripts we specify. It does this through the service worker’s native importScripts function, which is well-named. And we’ll also add our `offline.htm` file to our statically cached list of files. The new files are highlighted below:

new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  importScripts: ["../sw-manual.js"],
  staticFileGlobs: [
    //...
    "offline.htm"
  ],
  // the rest of the config is unchanged
})

Now, let’s go in our `sw-manual.js` file, and add code to load the cached `offline.htm` file when the user is offline.

toolbox.router.get(/books$/, handleMain);
toolbox.router.get(/subjects$/, handleMain);
toolbox.router.get(/localhost:3000\/$/, handleMain);
toolbox.router.get(/mylibrary.io$/, handleMain);

function handleMain(request) {
  return fetch(request).catch(() => {
    return caches.match("react-redux/offline.htm", { ignoreSearch: true });
  });
}

We’ll use the toolbox.router object we saw before to catch all our top-level routes, and if the main page doesn’t load from the network, send back the (hopefully cached) `offline.htm` file.

This is one of the few times in this post you’ll see promises being used directly, instead of with the async syntax, mainly because in this case it’s actually easier to just tack on a .catch(), rather than set up a try{} catch{} block.

The `offline.htm` file will be pretty basic: just some HTML that reads cached books from IndexedDB and displays them in a rudimentary table. But before showing that, let’s walk through how to actually use IndexedDB (if you want to just see it now, it’s here).

Hello World, IndexedDB

IndexedDB is an in-browser database. It’s ideal for enabling offline functionality since it can be accessed without network connectivity, but it’s by no means limited to that.

The API pre-dates Promises, so it’s callback based. We’ll go through everything with the native API, but in practice, you’ll likely want to wrap and simplify it, either with your own helper methods which wrap the functionality with Promises, or with a third-party utility.
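For example, a tiny helper like this (our own, not part of IndexedDB itself) is enough to make individual requests await-able:

function promisify(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// later: let book = await promisify(booksStore.get("someId"));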

Let me repeat: the API for IndexedDB is awful. Here’s Jake Archibald saying he wouldn’t even teach it directly.

We’ll still go over it because I really want you to see everything as it is, but please don’t let it scare you away. There are plenty of simplifying abstractions out there, for example Dexie and idb.

Setting up our database

Let’s add code to sw-manual that subscribes to the service worker’s activate event and checks to see if we already have an IndexedDB set up; if not, we’ll create it, and then fill it with data.

First, the creating bit.

self.addEventListener("activate", () => { //1 is the version of IDB we're opening let open = indexedDB.open("books", 1); //should only be called the first time, when version 1 does not exist open.onupgradeneeded = evt => { let db = open.result; //this callback should only ever be called upon creation of our IDB, when an upgrade is needed //for version 1, but to be doubly safe, and also to demonstrade this, we'll check to see //if the stores exist if (!db.objectStoreNames.contains("books") || !db.objectStoreNames.contains("syncInfo")) { if (!db.objectStoreNames.contains("books")) { let bookStore = db.createObjectStore("books", { keyPath: "_id" }); bookStore.createIndex("imgSync", "imgSync", { unique: false }); } if (!db.objectStoreNames.contains("syncInfo")) { db.createObjectStore("syncInfo", { keyPath: "id" }); evt.target.transaction .objectStore("syncInfo") .add({ id: 1, lastImgSync: null, lastImgSyncStarted: null, lastLoadStarted: +new Date(), lastLoad: null }); } evt.target.transaction.oncomplete = fullSync; } };
});

The code’s messy and manual; as I said, you’ll likely want to add some abstractions in practice. Some of the key points: we check for the objectStores (tables) we’ll be using, and create them as needed. Note that we can even create indexes, which we can see on the books store, with the imgSync index. We also create a syncInfo store (table) which we’ll use to store information on when we last synced our data, so we don’t pester our servers too frequently, asking for updates.

When the transaction has completed, at the very bottom, we call the fullSync method, which loads all our data. Let’s see what that looks like.

Performing an initial sync

Below is the relevant portion of the syncing code, which makes repeated calls to our endpoint to load our books, page by page, adding each result to IDB along the way. Again, this is using zero abstractions, so expect a lot of bloat.

See this GitHub gist for the full code, which includes some additional error handling, and code which runs when the last page is finished.

function fullSyncPage(db, page) {
  let pageSize = 50;
  doFetch("/book/offlineSync", { page, pageSize })
    .then(resp => resp.json())
    .then(resp => {
      if (!resp.books) return;

      let books = resp.books;
      let i = 0;
      putNext();

      function putNext() {
        //callback for an insertion, with indicators it hasn't had images cached yet
        if (i < pageSize) {
          let book = books[i++];
          let transaction = db.transaction("books", "readwrite");
          let booksStore = transaction.objectStore("books");
          //extend the book with the imgSync indicator, add it, and on success, do this for the next book
          booksStore.add(Object.assign(book, { imgSync: 0 })).onsuccess = putNext;
        } else {
          //either load the next page, or call loadDone()
        }
      }
    });
}

The putNext() function is where the real work is done. It serves as the success callback for each insertion. In real life we’d hopefully have a nice method that adds each book, wrapped in a promise, so we could do a simple for...of loop and await each insertion. But this is the “vanilla” solution, or at least one of them.

We modify each book before inserting it, to set the imgSync property to 0, to indicate that this book has not had its image cached, yet.

And after we’ve exhausted the last page, and there are no more results, we call loadDone(), to set some metadata indicating the last time we did a full data sync.

In real life, this would be a good time to sync all those images, but let’s instead do it on-demand by the web app itself, in order to demonstrate another feature of service workers.

Communicating between the web app, and service worker

Let’s just pretend it would be a good idea to have the books’ covers load the next time the user visits our page when the service worker is running. Let’s have our web app send a message to the service worker, and we’ll have the service worker receive it, and then sync the book covers.

From our app code, we attempt to send a message to a running service worker, instructing it to sync images.

In the web app:

if ("serviceWorker" in navigator) { try { navigator.serviceWorker.controller.postMessage({ command: "sync-images" }); } catch (er) {}
}

In `sw-manual.js`:

self.addEventListener("message", evt => { if (evt.data && evt.data.command == "sync-images") { let open = indexedDB.open("books", 1); open.onsuccess = evt => { let db = open.result; if (db.objectStoreNames.contains("books")) { syncImages(db); } }; }
});

In sw-manual we have code to catch that message, and call the syncImages() method. Let’s look at that, next.

function syncImages(db) {
  let tran = db.transaction("books");
  let booksStore = tran.objectStore("books");
  let idx = booksStore.index("imgSync");
  let booksCursor = idx.openCursor(0);
  let booksToUpdate = [];

  //a cursor's onsuccess callback will fire for EACH item that's read from it
  booksCursor.onsuccess = evt => {
    let cursor = evt.target.result;
    //if (!cursor) means the cursor has been exhausted; there are no more results
    if (!cursor) return runIt();

    let book = cursor.value;
    booksToUpdate.push({ _id: book._id, smallImage: book.smallImage });
    //read the next item from the cursor
    cursor.continue();
  };

  async function runIt() {
    if (!booksToUpdate.length) return;

    for (let book of booksToUpdate) {
      try {
        //fetch, and cache the book's image
        await preCacheBookImage(book);
        let tran = db.transaction("books", "readwrite");
        let booksStore = tran.objectStore("books");
        //now save the updated book - we'll wrap the IDB callback-based operation in
        //a manual promise, so we can await it
        await new Promise(res => {
          let req = booksStore.get(book._id);
          req.onsuccess = ({ target: { result: bookToUpdate } }) => {
            bookToUpdate.imgSync = 1;
            booksStore.put(bookToUpdate);
            res();
          };
          req.onerror = () => res();
        });
      } catch (er) {
        console.log("ERROR", er);
      }
    }
  }
}

We’re cracking open the imgSync index from before, and reading all books that have a zero, which means they haven’t had their images synced yet. The booksCursor.onsuccess will be called over and over again, until there are no books left; I’m using this to put them all into an array, at which point I call the runIt() method, which runs through them, calling preCacheBookImage() for each. This method will cache the image, and if there are no unforeseen errors, update the book in IDB to indicate that imgSync is now 1.

If you’re wondering why in the world I’m going through the trouble of saving all the books from the cursor into an array before calling runIt(), rather than just walking through the results of the cursor and caching and updating as I go, well — it turns out transactions in IndexedDB are a bit weird. They complete as soon as you yield to the event loop, unless that yield happens inside a callback provided by the transaction itself. So if we leave the event loop to go do other things, like make a network request to pull down an image, then the cursor’s transaction will complete, and we’ll get an error if we try to continue reading from it later.
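To illustrate with a minimal sketch (not from this project’s codebase): a cursor walk has to stay synchronous, so this version collects results and only resolves once the cursor is exhausted. An await inside onsuccess would let the transaction auto-commit, and the next cursor.continue() would throw.

function readAllBooks(db) {
  return new Promise(resolve => {
    let books = [];
    let req = db.transaction("books").objectStore("books").openCursor();
    req.onsuccess = evt => {
      let cursor = evt.target.result;
      if (!cursor) return resolve(books);
      books.push(cursor.value);
      cursor.continue(); // synchronous continuation keeps the transaction alive
    };
  });
}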

Manually updating the cache

Let’s wrap this up and look at the preCacheBookImage method, which actually pulls down a cover image and adds it to the relevant cache (but only if it’s not there already).

async function preCacheBookImage(book) {
  let smallImage = book.smallImage;
  if (!smallImage) return;

  let cachedImage = await caches.match(smallImage);
  if (cachedImage) return;

  if (/https:\/\/s3.amazonaws.com\/my-library-cover-uploads/.test(smallImage)) {
    let cache = await caches.open("local-images1");
    let img = await fetch(smallImage, { mode: "no-cors" });
    await cache.put(smallImage, img);
  }
}

If the book has no image, we’re done. Next, we check if it’s cached already — if so, we’re done. Lastly, we inspect the URL, and figure out which cache it belongs in.

The local-images1 cache name is the same from before, which we set up in our dynamic cache. If the image in question isn’t already there, we fetch it, and add it to cache. Each cache operation returns a promise, so the async/await syntax simplifies things nicely.

Testing it out

The way it’s set up, if we clear our service worker either in dev tools, below, or by just opening a fresh incognito window…

…then the first time we view our app, all our books will get saved to IndexedDB.

When we refresh, the image sync will happen. So if we start on a page that’s already pulling down these images, we’ll see our normal service worker saving them to cache (ahem, assuming we delay the ajax call to give our Service Worker a chance to install), which is what these events are in our network tab.

Then, if we navigate elsewhere and refresh, we won’t see any network requests for those images, since our sync method is already finding everything in cache.

If we clear our service workers again, and start on this same page, which is not otherwise pulling these images down, then refresh, we’ll see the network requests to pull down, and sync these images to cache.

Then if we navigate back to the page that uses these images, we won’t see the calls to cache these images, since they’re already cached; moreover, we’ll see these images being retrieved from cache by the service worker.

Both our runtimeCaching provided by sw-toolbox, and our own manual code are working together, off of the same cache.

It works!

As promised, here’s the `offline.htm` page:

<div style="padding: 15px"> <h1>Offline</h1> <table class="table table-condescend table-striped"> <thead> <tr> <th></th> <th>Title</th> <th>Author</th> </tr> </thead> <tbody id="booksTarget"> <!--insertion will happen here--> </tbody> </table>
</div>
let open = indexedDB.open("books");
open.onsuccess = evt => { let db = open.result; let transaction = db.transaction("books", "readonly"); let booksStore = transaction.objectStore("books"); var request = booksStore.openCursor(); let rows = ``; request.onsuccess = function(event) { var cursor = event.target.result; if(cursor) { let book = cursor.value; rows += ` <tr> <td><img src="${book.smallImage}" /></td> <td>${book.title}</td> <td>${Array.isArray(book.authors) ? book.authors.join("<br/>") : book.authors}</td> </tr>`; cursor.continue(); } else { document.getElementById("booksTarget").innerHTML = rows; } };
}

Now let’s tell Chrome to pretend to be offline, and test it out:

Cool!

Where to, from here?

We’re barely scratching the surface. Your users can update this data from multiple devices, and each one will need to stay in sync somehow. You could periodically wipe your IDB tables and re-sync; have the user manually trigger a re-sync when they want; or you could get really ambitious and try to log all your mutations on your server, and have each service worker on each device request all changes that happened since the last time it ran, in order to sync up.

The most interesting solution here is PouchDB, which does this syncing for you; the catch is it’s designed to work with CouchDB, which you may or may not be using.

Syncing local changes

For one last piece of code, let’s consider an easier problem to solve: syncing your IndexedDB with changes that are made right this minute, by your user who’s using your web app. We can already intercept fetch requests in the service worker, so it should be easy to listen for the right mutation endpoint, run it, then peek at the results and update IndexedDB accordingly. Let’s take a look.

toolbox.router.post(/graphql/, request => {
  //just run the request as is
  return fetch(request).then(response => {
    //clone it by necessity
    let respClone = response.clone();
    //do this later - get the response back to our user NOW
    setTimeout(() => {
      respClone.json().then(resp => {
        //this graphQL endpoint is for lots of things - inspect the data response to see
        //which operation we just ran
        if (resp && resp.data && resp.data.updateBook && resp.data.updateBook.Book) {
          syncBook(resp.data.updateBook.Book);
        }
      });
    }, 5);
    //return the response to our user NOW, before the IDB syncing
    return response;
  });
});

function syncBook(book) {
  let open = indexedDB.open("books", 1);
  open.onsuccess = evt => {
    let db = open.result;
    if (db.objectStoreNames.contains("books")) {
      let tran = db.transaction("books", "readwrite");
      let booksStore = tran.objectStore("books");
      booksStore.get(book._id).onsuccess = ({ target: { result: bookToUpdate } }) => {
        //update the book with the new values
        ["title", "authors", "isbn"].forEach(prop => (bookToUpdate[prop] = book[prop]));
        //and save it
        booksStore.put(bookToUpdate);
      };
    }
  };
}

This may seem a bit more involved than you were hoping. We can only read the fetch response once, and our application thread will also need to read it, so we’ll first clone the response. Then, we’ll run a setTimeout() so we can return the original response to the web application/user as quickly as possible, and do what we need thereafter. Don’t just rely on the promise in respClone.json() to do this, since promises use microtasks. I’ll let Jake Archibald explain what exactly that means, but the short of it is that they can starve the main event loop. I’m not quite smart enough to be certain whether that applies here, so I just went with the safe approach of setTimeout.

Since I’m using GraphQL, the responses are in a predictable format, and it’s easy to see if I just performed the operation I’m interested in, and if so I can re-sync the affected data.

Further reading

Literally everything here is explained in wonderful depth in this book by Tal Ater. If you’re interested in learning more, you can’t beat that as a learning resource.

For some more immediate, quick resources, here’s an MDN article on IndexedDB, plus a service workers introduction and offline cookbook, both from Google.

Parting thoughts

Giving your user useful things to do with your web app when they don’t even have network connectivity is an amazing new ability web developers have. As you’ve seen though, it’s no easy task. Hopefully this post has given you a realistic idea of what to expect, and a decent introduction to the things you’ll need to do to accomplish this.

Article Series:

  1. The Setup
  2. The Implementation (you are here!)


Making your web app work offline, Part 1: The Setup

This two-part series is a gentle introduction to offline web development. Getting a web application to do something while offline is surprisingly tricky, requiring a lot of things to be in place and functioning correctly. We’re going to cover all of these pieces from a high level, with working examples. This post is an overview, but there are plenty of more-detailed resources listed throughout.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation

Basic approach

I’ll be making heavy use of JavaScript’s async/await syntax. It’s supported in all major browsers and Node, and greatly simplifies Promise-based code. The link above explains async well, but in a nutshell they allow you to resolve a promise, and access its value directly in code with await, rather than calling .then and accessing the value in the callback, which often leads to the dreaded “rightward drift.”
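For instance, here’s the same fetch written both ways (an illustrative snippet, not from this project):

function getBooksThen() {
  return fetch("/books").then(resp => resp.json());
}

async function getBooksAwait() {
  const resp = await fetch("/books"); // await resolves the promise in place
  return resp.json();
}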

What are we building?

We’ll be extending an existing booklist project to sync the current user’s books to IndexedDB, and create a simplified offline page that’ll show even when the user has no network connectivity.

Starting with a service worker

The one non-negotiable thing you need for offline development is a service worker. A service worker is a background process that can, among other things, intercept network requests; redirect them; short circuit them by returning cached responses; or execute them as normal and do custom things with the response, like caching.
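The essence of that interception, as a bare-bones sketch (not the generated worker we’ll end up with below): serve from cache when we can, fall back to the network otherwise.

self.addEventListener("fetch", evt => {
  evt.respondWith(
    caches.match(evt.request).then(cached => cached || fetch(evt.request))
  );
});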

Basic caching

Probably the first, most basic, yet high-impact thing you’ll do with a service worker is have it cache your application’s resources. Service workers and the caches they use are extremely low-level primitives; everything is manual. In order to properly cache your resources you’ll need to fetch and add them to a cache, but then you’ll also need to track changes to these resources. You’ll track when they change, remove the prior version, and fetch and update the new one.
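Done entirely by hand, the precaching half looks something like this (asset paths hypothetical); the build-time tools below generate a far smarter equivalent, with hashed revisions and cleanup of stale entries:

self.addEventListener("install", evt => {
  evt.waitUntil(
    caches.open("static-v1").then(cache =>
      cache.addAll(["/", "/app.js", "/app.css"])
    )
  );
});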

In practice, this means your service worker code will need to be generated as part of a build step, which hashes your files, and generates a file that’s smart enough to record these changes between versions, and update caches as needed.

Abstractions to the rescue

This is extremely tedious and error-prone code that you’d likely never want to write yourself. Luckily some smart people have written abstractions to help, namely sw-precache, and sw-toolbox by the great people at Google. Note, Google has since deprecated these tools in favor of the newer Workbox. I’ve yet to move my code over since sw-* works so well, but in any event the ideas are the same, and I’m told the conversion is easy. And it’s worth mentioning that sw-precache currently has about 30,000 downloads per day, so it’s still widely used.

Hello World, sw-precache

Let’s jump right in. We’re using webpack, and as webpack goes, there’s a plugin, so let’s check that out first.

// inside your webpack config
new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  staticFileGlobs: [
    //static resources to cache
    "static/bootstrap/css/bootstrap-booklist-build.css",
    ...
  ],
  ignoreUrlParametersMatching: /./,
  stripPrefixMulti: {
    //any paths that need adjusting
    "static/": "react-redux/static/",
    ...
  },
  ...
})

By default ALL of the bundles webpack makes will be precached. We’re also manually providing some paths to static resources I want cached in the staticFileGlobs property, and I’m adjusting some paths in stripPrefixMulti.

// inside your webpack config
const getCache = ({ name, pattern, expires, maxEntries }) => ({
  urlPattern: pattern,
  handler: "cacheFirst",
  options: {
    cache: {
      maxEntries: maxEntries || 500,
      name: name,
      maxAgeSeconds: expires || 60 * 60 * 24 * 365 * 2 //2 years
    },
    successResponses: /0|[123].*/
  }
});

new SWPrecacheWebpackPlugin({
  ...
  runtimeCaching: [
    //pulls in sw-toolbox and caches dynamically based on a pattern
    getCache({
      pattern: /^https:\/\/images-na.ssl-images-amazon.com/,
      name: "amazon-images1"
    }),
    getCache({
      pattern: /book\/searchBooks/,
      name: "book-search",
      expires: 60 * 7 //7 minutes
    }),
    ...
  ]
})

Adding the runtimeCaching section to our SWPrecacheWebpackPlugin pulls in sw-toolbox and lets us cache urls matching a certain pattern, dynamically, as needed—with getCache helping keep the boilerplate to a minimum.

Hello World, sw-toolbox

The entire service worker file that’s generated is pretty big, but let’s just look at a small piece, namely one of the dynamic caches from above:

toolbox.router.get(/^https:\/\/images-na.ssl-images-amazon.com/, toolbox.cacheFirst, {
  cache: {
    maxEntries: 500,
    name: "amazon-images1",
    maxAgeSeconds: 63072000
  },
  successResponses: /0|[123].*/
});

sw-toolbox has provided us with a nice, high-level router object we can use to hook into various URL requests, MVC-style. We’ll use this to set up offline support shortly.

Don’t forget to register the service worker

And, of course, the existence of the service worker file that’s generated above is of no use by itself; it needs to be registered. The code looks like this, but be sure to either have it inside an onload listener, or some other place that’ll be guaranteed to run after the page has loaded.

if ("serviceWorker" in navigator) { navigator.serviceWorker.register("https://cdn.css-tricks.com/service-worker.js");
}

There we have it! We got a basic service worker running, which caches our application resources. Tune in tomorrow when we extend it to support offline.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation


Animating Border

Transitioning border for a hover state. Simple, right? You might be unpleasantly surprised.

The Challenge

The challenge is simple: building a button with an expanding border on hover.

This article will focus on genuine CSS tricks that would be easy to drop into any project without having to touch the DOM or use JavaScript. The methods covered here will follow these rules:

  • Single element (no helper divs, but pseudo-elements are allowed)
  • CSS only (no JavaScript)
  • Works for any size (not restricted to a specific width, height, or aspect ratio)
  • Supports transparent backgrounds
  • Smooth and performant transition

I proposed this challenge in the Animation at Work Slack and again on Twitter. Though there was no consensus on the best approach, I did receive some really clever ideas from some phenomenal developers.

Method 1: Animating border

The most straightforward way to animate a border is… well, by animating border.

.border-button {
  border: solid 5px #FC5185;
  transition: border-width 0.6s linear;
}

.border-button:hover { border-width: 10px; }

See the Pen by Shaw (@shshaw) on CodePen.

Nice and simple, but there are some big performance issues.

Since border takes up space in the document’s layout, changing the border-width will trigger layout. Nearby elements will shift around because of the new border size, making the browser reposition those elements on every frame of the animation unless you set an explicit size on the button.

As if triggering layout wasn’t bad enough, the transition itself feels “stepped”. I’ll show why in the next example.

Method 2: Better border with outline

How can we change the border without triggering layout? By using outline instead! You’re probably most familiar with outline from removing it on :focus styles (though you shouldn’t), but outline is an outer line that doesn’t change an element’s size or position in the layout.

.border-button {
  outline: solid 5px #FC5185;
  transition: outline 0.6s linear;
  margin: 0.5em; /* Increased margin since the outline expands outside the element */
}

.border-button:hover { outline-width: 10px; }

See the Pen by Shaw (@shshaw) on CodePen.

A quick check in Dev Tools’ Performance tab shows the outline transition does not trigger layout. Regardless, the movement still seems stepped because browsers are rounding the border-width and outline-width values so you don’t get sub-pixel rendering between 5 and 6 or smooth transitions from 5.4 to 5.5.

See the Pen by Shaw (@shshaw) on CodePen.

Strangely, Safari often doesn’t render the outline transition and occasionally leaves crazy artifacts.

Border artifact in Safari

Method 3: Cut it with clip-path

First implemented by Steve Gardner, this method uses clip-path with calc to trim the border down so on hover we can transition to reveal the full border.

.border-button {
  /* Full-width border and a clip-path visually cutting it down to the starting size */
  border: solid 10px #FC5185;
  clip-path: polygon(
    calc(0% + 5px) calc(0% + 5px), /* top left */
    calc(100% - 5px) calc(0% + 5px), /* top right */
    calc(100% - 5px) calc(100% - 5px), /* bottom right */
    calc(0% + 5px) calc(100% - 5px) /* bottom left */
  );
  transition: clip-path 0.6s linear;
}

.border-button:hover {
  /* Clip-path spanning the entire box so it's no longer hiding the full-width border. */
  clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
}

See the Pen by Shaw (@shshaw) on CodePen.

The clip-path technique is the smoothest and most performant method so far, but it does come with a few caveats. Rounding errors may cause a little unevenness, depending on the exact size. The border also has to be full size from the start, which may make exact positioning tricky.

Unfortunately there’s no IE/Edge support yet, though it seems to be in development. You can and should encourage Microsoft’s team to implement those features by voting for masks/clip-path to be added.

Method 4: linear-gradient background

We can simulate a border using a clever combination of multiple linear-gradient backgrounds properly sized. In total we have four separate gradients, one for each side. The background-position and background-size properties get each gradient in the right spot and the right size, which can then be transitioned to make the border expand.

.border-button {
  background-repeat: no-repeat;

  /* background-size values will repeat so we only need to declare them once */
  background-size:
    calc(100% - 10px) 5px, /* top & bottom */
    5px calc(100% - 10px); /* right & left */

  background-position:
    5px 5px, /* top */
    calc(100% - 5px) 5px, /* right */
    5px calc(100% - 5px), /* bottom */
    5px 5px; /* left */

  /* Since we're sizing and positioning with the above properties, we only need
     to set up simple solid-color gradients for each side */
  background-image:
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185);

  transition: all 0.6s linear;
  transition-property: background-size, background-position;
}

.border-button:hover {
  background-position: 0 0, 100% 0, 0 100%, 0 0;
  background-size: 100% 10px, 10px 100%, 100% 10px, 10px 100%;
}

See the Pen by Shaw (@shshaw) on CodePen.

This method is quite difficult to set up and has quite a few cross-browser differences. Firefox and Safari animate the faux-border smoothly, exactly the effect we’re looking for. Chrome’s animation is jerky and even more stepped than the outline and border transitions. IE and Edge refuse to animate the background at all, but they do give the proper border expansion effect.

Method 5: Fake it with box-shadow

Hidden within box-shadow‘s spec is a fourth value for spread-radius. Set all the other length values to 0px and use the spread-radius to build your border alternative that, like outline, won’t affect layout.

.border-button {
  box-shadow: 0px 0px 0px 5px #FC5185;
  transition: box-shadow 0.6s linear;
  margin: 0.5em; /* Increased margin since the box-shadow expands outside the element, like outline */
}

.border-button:hover { box-shadow: 0px 0px 0px 10px #FC5185; }

See the Pen by Shaw (@shshaw) on CodePen.

The transition with box-shadow is adequately performant and feels much smoother, except in Safari, where it’s snapping to whole values during the transition like border and outline.

Pseudo-Elements

Several of these techniques can be modified to use a pseudo-element instead, but pseudo-elements ended up causing some additional performance issues in my tests.

For the box-shadow method, the transition occasionally triggered paint in a much larger area than necessary. Reinier Kaper pointed out that a pseudo-element can help isolate the paint to a more specific area. As I ran further tests, box-shadow was no longer causing paint in large areas of the document and the complication of the pseudo-element ended up being less performant. The change in paint and performance may have been due to a Chrome update, so feel free to test for yourself.

I also could not find a way to utilize pseudo-elements in a way that would allow for transform based animation.

Why not transform: scale?

You may be firing up Twitter to helpfully suggest using transform: scale for this. Since transform and opacity are the best style properties to animate for performance, why not use a pseudo-element and have the border scale up & down?

.border-button {
  position: relative;
  margin: 0.5em;
  border: solid 5px transparent;
  background: #3E4377;
}

.border-button:after {
  content: '';
  display: block;
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  border: solid 10px #FC5185;
  margin: -15px;
  z-index: -1;
  transition: transform 0.6s linear;
  transform: scale(0.97, 0.93);
}

.border-button:hover::after { transform: scale(1, 1); }

See the Pen by Shaw (@shshaw) on CodePen.

There are a few issues:

  1. The border will show through a transparent button. I forced a background on the button to show how the border is hiding behind the button. If your design calls for buttons with a full background, then this could work.
  2. You can’t scale the border to specific sizes. Since the button’s dimensions vary with the text, there’s no way to animate the border from exactly 5px to 10px using only CSS. In this example I’ve done some magic-numbers on the scale to get it to appear right, but that won’t be universal.
  3. The border animates unevenly because the button’s aspect ratio isn’t 1:1. This usually means the left/right will appear larger than the top/bottom until the animation completes. This may not be an issue depending on how fast your transition is, the button’s aspect ratio, and how big your border is.

If your button has set dimensions, Cher pointed out a clever way to calculate the exact scales needed, though it may be subject to some rounding errors.

Beyond CSS

If we loosen our rules a bit, there are many interesting ways you can animate borders. Codrops consistently does outstanding work in this area, usually utilizing SVGs and JavaScript. The end results are very satisfying, though they can be a bit complex to implement. Here are a few worth checking out:

  • Creative Buttons
  • Button Styles Inspiration
  • Animated Checkboxes
  • Distorted Button Effects
  • Progress Button Styles

Conclusion

There’s more to borders than simply border, but if you want to animate a border you may have some trouble. The methods covered here will help, though none of them are a perfect solution. Which you choose will depend on your project’s requirements, so I’ve laid out a comparison table to help you decide.

See the Pen by Shaw (@shshaw) on CodePen.

My recommendation would be to use box-shadow, which has the best overall balance of ease-of-implementation, animation effect, performance and browser support.

Do you have another way of creating an animated border? Perhaps a clever way to utilize transforms for moving a border? Comment below or reach me on Twitter to share your solution to the challenge.

Special thanks to Martin Pitt, Steve Gardner, Cher, Reinier Kaper, Joseph Rex, David Khourshid, and the Animation at Work community.

