Why Using reduce() to Sequentially Resolve Promises Works

Writing asynchronous JavaScript without using the Promise object is a lot like baking a cake with your eyes closed. It can be done, but it’s gonna be messy and you’ll probably end up burning yourself.

I won’t say it’s necessary, but you get the idea. It’s real nice. Sometimes, though, it needs a little help to solve some unique challenges, like when you’re trying to sequentially resolve a bunch of promises in order, one after the other. A trick like this is handy, for example, when you’re doing some sort of batch processing via AJAX. You want the server to process a bunch of things, but not all at once, so you space the processing out over time.

Ruling out packages that help make this task easier (like Caolan McMahon’s async library), the most commonly suggested solution for sequentially resolving promises is to use Array.prototype.reduce(). You might’ve heard of this one. Take a collection of things, and reduce them to a single value, like this:

let result = [1,2,5].reduce((accumulator, item) => {
  return accumulator + item;
}, 0); // <-- Our initial value.

console.log(result); // 8

But, when using reduce() for our purposes, the setup looks more like this:

let userIDs = [1,2,3];

userIDs.reduce((previousPromise, nextID) => {
  return previousPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

Or, in a more modern format:

let userIDs = [1,2,3];

userIDs.reduce(async (previousPromise, nextID) => {
  await previousPromise;
  return methodThatReturnsAPromise(nextID);
}, Promise.resolve());

This is neat! But for the longest time, I just swallowed this solution and copied that chunk of code into my application because it “worked.” This post is me taking a stab at understanding two things:

  1. Why does this approach even work?
  2. Why can’t we use other Array methods to do the same thing?

Why does this even work?

Remember, the main purpose of reduce() is to “reduce” a bunch of things into one thing, and it does that by storing up the result in the accumulator as the loop runs. But that accumulator doesn’t have to be numeric. The loop can return whatever it wants (like a promise), and recycle that value through the callback every iteration. Notably, no matter what the accumulator value is, the loop itself never changes its behavior — including its pace of execution. It just keeps rolling through the collection as fast as the thread allows.

This is huge to understand because it probably goes against what you think is happening during this loop (at least, it did for me). When we use it to sequentially resolve promises, the reduce() loop isn’t actually slowing down at all. It’s completely synchronous, doing its normal thing as fast as it can, just like always.

Look at the following snippet and notice how the progress of the loop isn’t hindered at all by the promises returned in the callback.

function methodThatReturnsAPromise(nextID) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
      resolve();
    }, 1000);
  });
}

[1,2,3].reduce((accumulatorPromise, nextID) => {
  console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
  return accumulatorPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

In our console:

"Loop! 11:28:06" "Loop! 11:28:06" "Loop! 11:28:06" "Resolve! 11:28:07" "Resolve! 11:28:08" "Resolve! 11:28:09"

The promises resolve in order as we expect, but the loop itself is quick, steady, and synchronous. After looking at the MDN polyfill for reduce(), this makes sense. There’s nothing asynchronous about a while() loop triggering the callback() over and over again, which is what’s happening under the hood:

while (k < len) {
  if (k in o) {
    value = callback(value, o[k], k, o);
  }
  k++;
}

With all that in mind, the real magic occurs in this piece right here:

return previousPromise.then(() => {
  return methodThatReturnsAPromise(nextID);
});

Each time our callback fires, we return a promise that resolves to another promise. And while reduce() doesn’t wait for any resolution to take place, the advantage it does provide is the ability to pass something back into the same callback after each run, a feature unique to reduce(). As a result, we’re able to build a chain of promises that resolve into more promises, making everything nice and sequential:

new Promise((resolve, reject) => {
  // Promise #1
  resolve();
}).then((result) => {
  // Promise #2
  return result;
}).then((result) => {
  // Promise #3
  return result;
}); // ... and so on!

All of this should also reveal why we can’t just return a single, new promise each iteration. Because the loop runs synchronously, each promise will be fired immediately, instead of waiting for those created before it.

[1,2,3].reduce((previousPromise, nextID) => {
  console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
      resolve(nextID);
    }, 1000);
  });
}, Promise.resolve());

In our console:

"Loop! 11:31:20" "Loop! 11:31:20" "Loop! 11:31:20" "Resolve! 11:31:21" "Resolve! 11:31:21" "Resolve! 11:31:21"

Is it possible to wait until all processing is finished before doing something else? Yes. The synchronous nature of reduce() doesn’t mean you can’t throw a party after every item has been completely processed. Look:

function methodThatReturnsAPromise(id) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Processing ${id}`);
      resolve(id);
    }, 1000);
  });
}

let result = [1,2,3].reduce((accumulatorPromise, nextID) => {
  return accumulatorPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

result.then(e => {
  console.log("Resolution is complete! Let's party.");
});

Since all we’re returning in our callback is a chained promise, that’s all we get when the loop is finished: a promise. After that, we can handle it however we want, even long after reduce() has run its course.
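And if you’re already inside an async function, the same wrap-up reads naturally with await. A small sketch, reusing methodThatReturnsAPromise() from the snippet above:

// Awaiting the reduced promise chain inside an async function.
async function processAllSequentially() {
  await [1,2,3].reduce((accumulatorPromise, nextID) => {
    return accumulatorPromise.then(() => methodThatReturnsAPromise(nextID));
  }, Promise.resolve());

  console.log("Resolution is complete! Let's party.");
}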

Why won’t any other Array methods work?

Remember, under the hood of reduce(), we’re not waiting for our callback to complete before moving on to the next item. It’s completely synchronous, and the same goes for the rest of the callback-accepting Array methods, like map(), filter(), some(), and every().

But reduce() is special.

We found that the reason reduce() works for us is that we’re able to return something right back to our same callback (namely, a promise), which we can then build upon by having it resolve into another promise. With all of these other methods, however, we just can’t pass anything back into our callback that our callback itself returned. Instead, each of those callback arguments is predetermined, making it impossible to leverage them for something like sequential promise resolution:

[1,2,3].map((item[, index, array]) => value);
[1,2,3].filter((item[, index, array]) => boolean);
[1,2,3].some((item[, index, array]) => boolean);
[1,2,3].every((item[, index, array]) => boolean);
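That’s not to say these methods are useless around promises. If you don’t need sequential resolution, map() pairs nicely with Promise.all(); just note that this kicks off every promise immediately and runs them in parallel. A quick sketch, reusing methodThatReturnsAPromise() from earlier:

// All three promises are created up front and run concurrently.
// Promise.all() preserves order in its results array, but the work
// itself is parallel, not sequential.
Promise.all([1,2,3].map(id => methodThatReturnsAPromise(id)))
  .then(results => console.log(results)); // [1, 2, 3]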

I hope this helps!

At the very least, I hope this helps shed some light on why reduce() is uniquely qualified to handle promises in this way, and maybe give you a better understanding of how common Array methods operate under the hood. Did I miss something? Get something wrong? Let me know!


Making your web app work offline, Part 1: The Setup

This two-part series is a gentle introduction to offline web development. Getting a web application to do something while offline is surprisingly tricky, requiring a lot of things to be in place and functioning correctly. We’re going to cover all of these pieces from a high level, with working examples. This post is an overview, but there are plenty of more-detailed resources listed throughout.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation

Basic approach

I’ll be making heavy use of JavaScript’s async/await syntax. It’s supported in all major browsers and Node, and it greatly simplifies Promise-based code. In a nutshell, await lets you pause on a promise and access its resolved value directly in code, rather than calling .then() and accessing the value in the callback, which often leads to the dreaded “rightward drift.”
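To illustrate the difference, here’s a throwaway sketch (the /api/books endpoint is made up for the example):

// The .then() version: the value is only available inside the callback.
function loadBooksThen() {
  return fetch("/api/books")
    .then(resp => resp.json())
    .then(books => console.log(books));
}

// The async/await version: the value lands directly in a variable.
async function loadBooksAwait() {
  const resp = await fetch("/api/books");
  const books = await resp.json();
  console.log(books);
}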

What are we building?

We’ll be extending an existing booklist project to sync the current user’s books to IndexedDB, and create a simplified offline page that’ll show even when the user has no network connectivity.

Starting with a service worker

The one non-negotiable thing you need for offline development is a service worker. A service worker is a background process that can, among other things, intercept network requests; redirect them; short-circuit them by returning cached responses; or execute them as normal and do custom things with the response, like caching.
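As a taste of what that looks like, here’s a minimal, illustrative fetch handler (not the generated code we’ll see below): answer from the cache when possible, and fall back to the network otherwise.

// Inside a service worker: intercept every request and respond with a
// cached copy if one exists, falling back to the network if not.
self.addEventListener("fetch", event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});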

Basic caching

Probably the first, most basic, yet highest-impact thing you’ll do with a service worker is have it cache your application’s resources. Service workers and the cache they use are extremely low-level primitives; everything is manual. To properly cache your resources, you’ll need to fetch them and add them to a cache, but then you’ll also need to track when they change, remove the prior version, and fetch and cache the new one.
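Done by hand, just the “add them to a cache” part looks something like this (a bare-bones sketch; the cache name and file list are made up):

// A hand-rolled install step. Any time a file changes, you'd have to
// bump the cache name yourself and clean up the stale cache in an
// activate handler.
self.addEventListener("install", event => {
  event.waitUntil(
    caches.open("my-app-v1").then(cache =>
      cache.addAll(["/", "/styles.css", "/app.js"])
    )
  );
});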

In practice, this means your service worker code will need to be generated as part of a build step that hashes your files and produces a file that’s smart enough to record these changes between versions and update caches as needed.

Abstractions to the rescue

This is extremely tedious and error-prone code that you’d likely never want to write yourself. Luckily some smart people have written abstractions to help, namely sw-precache, and sw-toolbox by the great people at Google. Note, Google has since deprecated these tools in favor of the newer Workbox. I’ve yet to move my code over since sw-* works so well, but in any event the ideas are the same, and I’m told the conversion is easy. And it’s worth mentioning that sw-precache currently has about 30,000 downloads per day, so it’s still widely used.

Hello World, sw-precache

Let’s jump right in. We’re using webpack, and as webpack goes, there’s a plugin, so let’s check that out first.

// inside your webpack config
new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  staticFileGlobs: [
    //static resources to cache
    "static/bootstrap/css/bootstrap-booklist-build.css",
    ...
  ],
  ignoreUrlParametersMatching: /./,
  stripPrefixMulti: {
    //any paths that need adjusting
    "static/": "react-redux/static/",
    ...
  },
  ...
})

By default ALL of the bundles webpack makes will be precached. We’re also manually providing some paths to static resources I want cached in the staticFileGlobs property, and I’m adjusting some paths in stripPrefixMulti.

// inside your webpack config
const getCache = ({ name, pattern, expires, maxEntries }) => ({
  urlPattern: pattern,
  handler: "cacheFirst",
  options: {
    cache: {
      maxEntries: maxEntries || 500,
      name: name,
      maxAgeSeconds: expires || 60 * 60 * 24 * 365 * 2 //2 years
    },
    successResponses: /0|[123].*/
  }
});

new SWPrecacheWebpackPlugin({
  ...
  runtimeCaching: [
    //pulls in sw-toolbox and caches dynamically based on a pattern
    getCache({
      pattern: /^https:\/\/images-na.ssl-images-amazon.com/,
      name: "amazon-images1"
    }),
    getCache({
      pattern: /book\/searchBooks/,
      name: "book-search",
      expires: 60 * 7 //7 minutes
    }),
    ...
  ]
})

Adding the runtimeCaching section to our SWPrecacheWebpackPlugin pulls in sw-toolbox and lets us cache URLs matching a certain pattern, dynamically, as needed, with getCache helping keep the boilerplate to a minimum.

Hello World, sw-toolbox

The entire service worker file that’s generated is pretty big, but let’s just look at a small piece, namely one of the dynamic caches from above:

toolbox.router.get(/^https:\/\/images-na.ssl-images-amazon.com/, toolbox.cacheFirst, {
  cache: {
    maxEntries: 500,
    name: "amazon-images1",
    maxAgeSeconds: 63072000
  },
  successResponses: /0|[123].*/
});

sw-toolbox has provided us with a nice, high-level router object we can use to hook into various URL requests, MVC-style. We’ll use this to set up offline support shortly.

Don’t forget to register the service worker

And, of course, the service worker file generated above is of no use by itself; it needs to be registered. The code looks like this, but be sure to either put it inside an onload listener or somewhere else that’s guaranteed to run after the page has loaded.

if ("serviceWorker" in navigator) { navigator.serviceWorker.register("https://cdn.css-tricks.com/service-worker.js");
}
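For instance, one safe way to handle that is to defer the same registration until the load event:

// Defer registration until the page has finished loading.
window.addEventListener("load", () => {
  if ("serviceWorker" in navigator) {
    navigator.serviceWorker.register("/service-worker.js");
  }
});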

There we have it! We’ve got a basic service worker running, which caches our application resources. Tune in tomorrow when we extend it to support offline.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation
