Durable Functions: Fan Out Fan In Patterns

This post is a collaboration between myself and my awesome coworker, Maxime Rouiller.

Durable Functions? Wat. If you’re new to Durable, I suggest you start here with this post that covers all the essentials so that you can properly dive in. In this post, we’re going to dive into one particular use case so that you can see a Durable Function pattern at work!

Today, let’s talk about the Fan Out, Fan In pattern. We’ll do so by retrieving an open issue count from GitHub and then storing what we get. Here’s the repo where all the code lives that we’ll walk through in this post.

View Repo

About the Fan Out/Fan In Pattern

We briefly mentioned this pattern in the previous article, so let’s review. You’d likely reach for this pattern when you need to execute multiple functions in parallel and then perform some other task with those results. You can imagine that this pattern is useful for quite a lot of projects, because it’s pretty often that we have to do one thing based on data from a few other sources.

For example, let’s say you are a takeout restaurant with a ton of orders coming through. You might use this pattern to first get the order, then use that order to figure out prices for all the items, the availability of those items, and see if any of them have any sales or deals. Perhaps the sales/deals are not hosted in the same place as your prices because they are controlled by an outside sales firm. You might also need to find out what your delivery queue is like and who on your staff should get it based on their location.

That’s a lot of coordination! But you’d need to then aggregate all of that information to complete the order and process it. This is a simplified, contrived example of course, but you can see how useful it is to work on a few things concurrently so that they can then be used by one final function.
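To make that concrete, here's a minimal sketch of the same idea in plain JavaScript with Promise.all. The function names (getOrder, getPrices, and so on) are made up for illustration:

async function processOrder() {
  const order = await getOrder();

  // fan out: kick off all the lookups at once
  const [prices, availability, deals] = await Promise.all([
    getPrices(order),
    getAvailability(order),
    getDeals(order)
  ]);

  // fan in: one final step aggregates everything
  return completeOrder(order, { prices, availability, deals });
}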

Here's what that same idea looks like with Durable Functions, in abstract code and a visualization:

See the Pen Durable Functions: Pattern #2, Fan Out, Fan In by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')

module.exports = df(function*(ctx) {
  // items to process concurrently, added to an array
  const tasks = []
  const taskItems = yield ctx.df.callActivityAsync('fn1')
  taskItems.forEach(item => tasks.push(ctx.df.callActivityAsync('fn2', item)))

  const results = yield ctx.df.Task.all(tasks)

  // send results to last function for processing
  yield ctx.df.callActivityAsync('fn3', results)
})

Now that we see why we would want to use this pattern, let’s dive in to a simplified example that explains how.

Setting up your environment to work with Durable Functions

First things first. We’ve got to get our development environment ready to work with Durable Functions. Let’s break that down.

GitHub Personal Access Token

To run this sample, you’ll need to create a personal access token in GitHub. Go under your account photo, open the dropdown, and select Settings, then Developer settings in the left sidebar. In the same sidebar on the next screen, click the Personal access tokens option.

Then a prompt will come up and you can click the Generate new token button. You should give your token a name that makes sense for this project. Like “Durable functions are better than burritos.” You know, something standard like that.

For the scopes/permissions option, I suggest selecting “repo,” which then allows you to click the Generate token button and copy the token to your clipboard. Please keep in mind that you should never commit your token. (It will be revoked if you do. Ask me why I know that.) If you need more info on creating tokens, there are further instructions here.
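For context on where that token ends up: the functions use it to authenticate calls to the GitHub API, along these lines. This is a hedged sketch with a hypothetical request, not the sample’s exact code:

const fetch = require('node-fetch')

// GitHubToken comes from local.settings.json, which stays out of source control
fetch('https://api.github.com/orgs/some-org/repos', {
  headers: { Authorization: `token ${process.env.GitHubToken}` }
})
  .then(res => res.json())
  .then(repos => console.log(repos.length))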

Functions CLI

First, we’ll install the latest version of the Azure Functions CLI. We can do so by running this in our terminal:

npm i -g azure-functions-core-tools@core --unsafe-perm true

Does the unsafe-perm flag freak you out? It did for me as well. Really, what it’s doing is preventing UID/GID switching when package scripts run, which is necessary because the package itself is a JavaScript wrapper around .NET. Installing with Homebrew, which doesn’t need the flag, is also an option, and more information about that is here.

Optional: Setting up the project in VS Code

Totally not necessary, but I like working in VS Code with Azure Functions because it has great local debugging, which is typically a pain with serverless functions. If you haven’t already installed it, you can do so here.

Set up a Free Trial for Azure and Create a Storage Account

To run this sample, you’ll need to test drive a free trial for Azure. Go into the portal and sign in. From there, make a new Blob Storage account and retrieve the keys. Once we have that all squared away, we’re ready to rock!

Setting up Our Durable Function

Let’s take a look at the repo we have set up. We’ll clone or fork it:

git clone https://github.com/Azure-Samples/durablefunctions-apiscraping-nodejs.git 

Here’s what that initial file structure is like.

file structure for the durable function repo

(This visualization was made from my CLI tool.)

In local.settings.json, change GitHubToken to the value you grabbed from GitHub earlier, and do the same for the two storage keys — paste in the keys from the storage account you set up earlier.
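If you haven’t seen one before, a local.settings.json for an Azure Functions project generally looks like the sketch below. GitHubToken is the key named above; the two storage entries shown are the standard Azure Functions ones, but defer to the repo’s actual file for the exact key names:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<your storage account connection string>",
    "AzureWebJobsDashboard": "<your storage account connection string>",
    "GitHubToken": "<your GitHub personal access token>"
  }
}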

Then run:

func extensions install
npm i
func host start

And now we’re running locally!

Understanding the Orchestrator

As you can see, we have a number of folders within the FanOutFanInCrawler directory. The functions in the GetAllRepositoriesForOrganization, GetOpenedIssues, and SaveRepositories directories are the ones we’ll be coordinating.

Here’s what we’ll be doing:

  • The Orchestrator will kick off the GetAllRepositoriesForOrganization function, where we’ll pass in the organization name, retrieved from getInput() from the Orchestrator_HttpStart function
  • Since this is likely to be more than one repo, we’ll first create an empty array, then loop through all of the repos, run GetOpenedIssues for each one, and push those tasks onto the array. Everything in the loop fires concurrently because we don’t yield the individual calls
  • Then we’ll wait for all of the tasks to finish executing and finally call SaveRepositories which will store all of the results in Blob Storage

Since the other functions are fairly standard, let’s dig into that Orchestrator for a minute. If we look inside the Orchestrator directory, we can see it has a fairly traditional setup for a function with index.js and function.json files.

Generators

Before we dive into the Orchestrator, let’s take a very brief side tour into generators, because you won’t be able to understand the rest of the code without them.

A generator is not the only way to write this code! It could be accomplished with other asynchronous JavaScript patterns as well. It just so happens that this is a pretty clean and legible way to write it, so let’s look at it really fast.

function* generator(i) {
  yield i++;
  yield i++;
  yield i++;
}

var gen = generator(1);

console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3
console.log(gen.next()); // { value: undefined, done: true }

That little asterisk in function* is what lets you use the yield keyword. Calling a generator function does not execute the whole function in its entirety; an iterator object is returned instead. The next() method walks over the yields one by one, and each call gives us an object containing both value and done, a boolean telling us whether we’ve walked through all of the yield statements. You can see in the example above that, for the last .next() call, the returned object has done set to true, letting us know we’ve iterated through all the values.
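This hands-off control is exactly what Durable leans on: because the caller drives next(), it can wait for each yielded promise to resolve before resuming the function. Here’s a toy runner to illustrate the idea; it’s a sketch of the mechanism, not the actual durable-functions internals:

function run(generatorFn) {
  const it = generatorFn();

  function step(prev) {
    const { value, done } = it.next(prev);
    if (done) return Promise.resolve(value);
    // wait for the yielded promise, then resume with its result
    return Promise.resolve(value).then(result => step(result));
  }

  return step();
}

// each yield pauses the function until the promise settles
run(function*() {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(a + 1);
  console.log(b); // 2
});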

Orchestrator code

We’ll start with the require statement we’ll need for this to work:

const df = require('durable-functions')

module.exports = df(function*(context) {
  // our orchestrator code will go here
})

It’s worth noting that the asterisk there is what makes this a generator function, which returns an iterator.

First, we’ll get the organization name from the Orchestrator_HttpStart function and get all the repos for that organization with GetAllRepositoriesForOrganization. Note we use yield within the repositories assignment to make the function perform in sequential order.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )
})

Then we’re going to create an empty array named output, create a for loop from the array we got containing all of the organization’s repos, and use that to push the issues into the array. Note that we don’t use yield here so that they’re all running concurrently instead of waiting one after another.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }
})

Finally, when all of these executions are done, we’re going to store the results and pass that in to the SaveRepositories function, which will save them to Blob Storage. Then we’ll return the unique ID of the instance (context.instanceId).

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }

  const results = yield context.df.Task.all(output)
  yield context.df.callActivityAsync('SaveRepositories', results)

  return context.instanceId
})

Now we’ve got all the steps we need to manage all of our functions with this single orchestrator!

Deploy

Now the fun part. Let’s deploy! 🚀

To deploy components, Azure requires you to install the Azure CLI and log in with it.

First, you will need to provision the service. Look into the provision.ps1 file that’s provided to familiarize yourself with the resources we are going to create. Then, you can execute the file with the previously generated GitHub token like this:

.\provision.ps1 -githubToken <TOKEN> -resourceGroup <ResourceGroupName> -storageName <StorageAccountName> -functionName <FunctionName>

If you don’t want to install PowerShell, you can also take the commands within provision.ps1 and run them manually.

And there we have it! Our Durable Function is up and running.


Understanding the difference between grid-template and grid-auto

Ire Aderinokun:

Within a grid container, there are grid cells. Any cell positioned and sized using the grid-template-* properties forms part of the explicit grid. Any grid cell that is not positioned/sized using this property forms part of the implicit grid instead.

Understanding explicit grids and implicit grids is powerful. Here’s my quick take:

  • Explicit: you define a grid and place items exactly where you want them to go.
  • Implicit: you define a grid and let items fall into it as they can.

Grids can be both!
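A minimal illustration of that (my own example, not from Ire’s post): the columns below are explicit, while any extra rows the browser creates for overflowing items are implicit and picked up by grid-auto-rows.

.gallery {
  display: grid;
  /* explicit: two columns we define ourselves */
  grid-template-columns: 1fr 2fr;
  /* implicit: rows created automatically for extra items get this size */
  grid-auto-rows: 100px;
}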

Direct Link to Article


Introducing GitHub Actions

It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests for you, so you’re not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment… all in one place.

Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help, too.

But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it’s not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm and deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit… you name it).

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything. The possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more… the sky really is the limit.

You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms for open source possibilities, and ecosystems.
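In main.workflow terms, those three kinds of pointers look something like this; the repo and path names here are placeholders:

# an action from someone else's repo, pinned to a ref
action "lint" {
  uses = "owner/action-repo@master"
}

# an action defined by a Dockerfile at a path in your own repo
action "build" {
  uses = "./.github/my-action"
}

# an image hosted on Docker Hub
action "clone" {
  uses = "docker://alpine/git:latest"
}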

Setting up your first action

There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

A screenshot of the GitHub Actions beta site showing a large blue button to click to join the beta.
The GitHub Actions beta site.

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. Right away, I can see that there’s a new tab on my repo, called Actions:

A screenshot of the sample repo showing the Actions tab in the menu.

If I click on the Actions tab, this screen shows:

The initial screen of the Actions tab.

I click “Create a New Workflow,” and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

new workflow

Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

show all of the action options in the sidebar

There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it’s capable of so much more!

I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we’ll walk through together.

shows options for azure in the sidebar

At the top where you see the heading “GitHub Action for Azure,” there’s a “View source” link. That will take you directly to the repo that’s used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the “uses” option in the Actions panel.

Here’s a rundown of the options we’re provided:

  • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what’s creating the connection between them. This piece is abstracted away for you in the GUI, but you’ll see in the next section that, if you’re working in code, you’ll need to keep the references the same to have the chaining work.
  • Runs: This allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
  • Args: This is what you’d expect — it allows you to pass arguments to the container.
  • Secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs one token to deploy, you’d probably use a secret here to pass that in.

Many of these actions have readmes that tell you what you need. The setup for “secrets” and “env” usually looks something like this:

action "deploy" { uses = ... secrets = [ "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET", ]
}

You can also string multiple actions together in this GUI. It’s very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.
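Behind that GUI, chaining is expressed with the needs attribute on an action. A rough sketch, with made-up action names: the two independent actions run in parallel, and the last one waits for both.

workflow "test and deploy" {
  on = "push"
  resolves = ["deploy"]
}

# no dependency between these two, so they run in parallel
action "test" {
  uses = "./.github/test"
}

action "lint" {
  uses = "./.github/lint"
}

# waits for both of the above to finish
action "deploy" {
  uses = "./.github/deploy"
  needs = ["test", "lint"]
}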

Writing an action in code

So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo’s master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

Create the app services account

If you’re using other services, this part will change, but you do need to create a service to deploy to in whatever platform you’re using.

First you’ll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

brew update && brew install azure-cli

Then, we’ll log in to Azure by running:

az login

Now, we’ll create a Service Principle by running:

az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will hand us this bit of output, which we’ll use in creating our action:

{ "appId": "APP_ID", "displayName": "ServicePrincipalName", "name": "http://ServicePrincipalName", "password": ..., "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

What’s in an action?

Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

workflow "Name of Workflow" { on = "push" resolves = ["deploy"]
} action "deploy" { uses = "actions/someaction" secrets = [ "TOKEN", ]
}

We can see that we kick off the workflow and specify that we want it to run on push (on = "push"). There are many other options you can use as well; the full list is here.

The resolves line beneath it, resolves = ["deploy"], is an array of the actions that will be chained following the workflow. This doesn’t specify the order but, rather, is a full list of everything. You can see that we called the action that follows “deploy” — these strings need to match; that’s how they reference one another.

Next, we’ll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here’s a list of all of them). But you can also use another person’s repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)

We may need some secrets or environment variables defined here and we would use them like this:

action "Deploy Webapp" { uses = ... args = "run some code here and use a $ENV_VARIABLE_NAME" secrets = ["SECRET_NAME"] env = { ENV_VARIABLE_NAME = "myEnvVariable" }
}

Creating a custom action

What we’re going to do with our custom action is take the commands we usually run to deploy a web app to Azure and write them in such a way that we can just pass in a few values, so the action executes it all for us. The files look more complicated than they are. Really, we’re taking that first base Azure action you saw in the GUI and building on top of it.

In entrypoint.sh:

#!/bin/sh

set -e

echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}"

echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus

echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE

echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git

echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv`

git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git
git push azure master

A couple of interesting things to note about this file:

  • set -e in a shell script makes sure that if anything blows up, the rest of the file doesn’t keep evaluating.
  • The lines following “Getting username/password” look a little tricky — really what they’re doing is extracting the username and password from Azure’s publishing profiles. We then use them in the following lines of code, where we add the git remote.
  • You might also note that we passed -o tsv into those commands. That’s something we did to format the output so we could pass it directly into an environment variable, as tsv strips out excess headers and the like.

Now we can work on our main.workflow file!

workflow "New workflow" { on = "push" resolves = ["Deploy to Azure"]
} action "Deploy to Azure" { uses = "./.github/azdeploy" secrets = ["SERVICE_PASS"] env = { SERVICE_PRINCIPAL="http://sdrasApp", TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47", APPID="sdrasMoonshine" }
}

The workflow piece should look familiar to you — it’s kicking off on push and resolves to the action, called “Deploy to Azure.”

uses points to a path within the repo, which is where we housed the other file. We need to add a secret so we can store our password for the app. We called it SERVICE_PASS, and we’ll configure it by going into the repo’s settings and adding it there:

adding a secret in settings

Finally, we have all of the environment variables we’ll need to run the commands. We got all of these from the earlier section where we created our App Services account: the tenant from earlier becomes TENANT_ID, name becomes SERVICE_PRINCIPAL, and APPID is actually whatever you’d like to name the app 🙂

You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will also have to edit the env variables manually within the main.workflow file — once you stop using the GUI, it doesn’t work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful “Hello World” app that redeploys whenever we push to master 🎉

successful workflow showing green
Hello World app screenshot

Game changing

GitHub actions aren’t only about websites, though you can see how handy they are for them. It’s a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

Normally when you create a Dockerfile, you would have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point it at a git repo with an existing Dockerfile in it, or at something that’s hosted on Docker directly.

You also don’t need to host the image anywhere, as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action, which means you don’t have to maintain a separate repo for those Dockerfiles.

All in all, it’s pretty exciting. Partially because of the flexibility: on the one hand you can choose to have a lot of abstraction and create the workflow you need with a GUI and existing action, and on the other you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you’re hosting your code.


How to Import a Sass File into Every Vue Component in an App

If you’re working on a large-scale Vue application, chances are at some point you’re going to want to organize the structure of your application so that you have some globally defined variables for CSS that you can make use of in any part of your application.

This can be accomplished by writing this piece of code into every component in your application:

<style lang="scss"> @import "./styles/_variables.scss";
</style>

But who has time for that?! We’re programmers, let’s do this programmatically.

Why?

You might be wondering why we would want to do something like this, especially if you’re just starting out in web development. Globals are bad, right? Why would we need this? What even are Sass variables? If you already know all of this, then you can skip down to the next section for the implementation.

Companies big and small tend to have redesigns at least every one-to-two years. If your code base is large, managed by many people, and you need to change the line-height everywhere from 1.1rem to 1.2rem, do you really want to have to go back into every module and change that value? A global variable becomes extraordinarily useful here. You decide what can be at the top-level and what needs to be inherited by other, smaller, pieces. This avoids spaghetti code in CSS and keeps your code DRY.

I once worked for a company that had a gigantic, sprawling codebase. A day before a major release, orders came down from above that we were changing our primary brand color. Because the codebase was set up well with these types of variables defined correctly, I had to change the color in one location, and it propagated through 4,000 files. That’s pretty powerful. I also didn’t have to pull an all-nighter to get the change through in time.

Styles are about design. Good design is, by nature, successful when it’s cohesive. A codebase that reuses common pieces of structure can look more united, and also tends to look more professional. If you have to redefine some base pieces of your application in every component, it will begin to break down, just like a phrase does in a classic game of telephone.

Global definitions can be self-checking for designers as well: “Wait, we have another tertiary button? Why?” Leaks in cohesive UI/UX announce themselves well in this model.

How?

The first thing we need is to have vue-cli 3 installed. Then we create our project:

npm install -g @vue/cli
# OR
yarn global add @vue/cli

# then run this to scaffold the project
vue create scss-loader-example

When we run this command, we’re going to make sure we use the template that has the Sass option:

? Please pick a preset: Manually select features
? Check the features needed for your project:
  ◉ Babel
  ◯ TypeScript
  ◯ Progressive Web App (PWA) Support
  ◯ Router
  ◉ Vuex
❯ ◉ CSS Pre-processors
  ◉ Linter / Formatter
  ◯ Unit Testing
  ◯ E2E Testing

The other options are up to you, but you need the CSS Pre-processors option checked. If you have an existing Vue CLI 3 project, have no fear! You can also run:

npm i node-sass sass-loader
# OR
yarn add node-sass sass-loader

First, let’s make a new folder within the src directory. I called mine styles. Inside of that, I created a _variables.scss file, like you would see in popular projects like bootstrap. For now, I just put a single variable inside of it to test:

$primary: purple;

Now, let’s create a file called vue.config.js at the root of the project at the same level as your package.json. In it, we’re going to define some configuration settings. You can read more about this file here.

Inside of it, we’ll add in that import statement that we saw earlier:

module.exports = {
  css: {
    loaderOptions: {
      sass: {
        data: `@import "@/styles/_variables.scss";`
      }
    }
  }
};

OK, a couple of key things to note here:

  • You will need to shut down and restart your local development server to make any of these changes take hold.
  • That @/ in the directory structure before styles will tell this configuration file to look within the src directory.
  • You don’t need the underscore in the name of the file to get this to work. This is a Sass naming convention.
  • The components you import into will need the lang="scss" (or sass, or less, or whatever preprocessor you’re using) attribute on the style tag in the .vue single file component. (See example below.)

Now, we can go into our default App.vue component and start using our global variable!

<style lang="scss">
#app { font-family: "Avenir", Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; //this is where we use the variable color: $primary; margin-top: 60px;
}
</style>

Here’s a working example! You can see the text in our app turn purple:

Shout out to Ives, who created CodeSandbox, for setting up a special configuration for us so we could see these changes in action in the browser. If you’d like to make changes to this sandbox, there’s a special Server Control Panel option in the left sidebar, where you can restart the server. Thanks, Ives!

And there you have it! You no longer have to do the repetitive task of @import-ing the same variables file throughout your entire Vue application. Now, if you need to refactor the design of your application, you can do it all in one place and it will propagate throughout your app. This is especially important for applications at scale.


Why Using reduce() to Sequentially Resolve Promises Works

Writing asynchronous JavaScript without using the Promise object is a lot like baking a cake with your eyes closed. It can be done, but it’s gonna be messy and you’ll probably end up burning yourself.

I won’t say it’s necessary, but you get the idea. It’s real nice. Sometimes, though, it needs a little help to solve some unique challenges, like when you’re trying to sequentially resolve a bunch of promises in order, one after the other. A trick like this is handy, for example, when you’re doing some sort of batch processing via AJAX. You want the server to process a bunch of things, but not all at once, so you space the processing out over time.

Ruling out packages that help make this task easier (like Caolan McMahon’s async library), the most commonly suggested solution for sequentially resolving promises is to use Array.prototype.reduce(). You might’ve heard of this one. Take a collection of things, and reduce them to a single value, like this:

let result = [1,2,5].reduce((accumulator, item) => {
  return accumulator + item;
}, 0); // <-- Our initial value.

console.log(result); // 8

But, when using reduce() for our purposes, the setup looks more like this:

let userIDs = [1,2,3];

userIDs.reduce( (previousPromise, nextID) => {
  return previousPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

Or, in a more modern format:

let userIDs = [1,2,3];

userIDs.reduce( async (previousPromise, nextID) => {
  await previousPromise;
  return methodThatReturnsAPromise(nextID);
}, Promise.resolve());

This is neat! But for the longest time, I just swallowed this solution and copied that chunk of code into my application because it “worked.” This post is me taking a stab at understanding two things:

  1. Why does this approach even work?
  2. Why can’t we use other Array methods to do the same thing?

Why does this even work?

Remember, the main purpose of reduce() is to “reduce” a bunch of things into one thing, and it does that by storing up the result in the accumulator as the loop runs. But that accumulator doesn’t have to be numeric. The loop can return whatever it wants (like a promise), and recycle that value through the callback every iteration. Notably, no matter what the accumulator value is, the loop itself never changes its behavior — including its pace of execution. It just keeps rolling through the collection as fast as the thread allows.

This is huge to understand because it probably goes against what you think is happening during this loop (at least, it did for me). When we use it to sequentially resolve promises, the reduce() loop isn’t actually slowing down at all. It’s completely synchronous, doing its normal thing as fast as it can, just like always.

Look at the following snippet and notice how the progress of the loop isn’t hindered at all by the promises returned in the callback.

function methodThatReturnsAPromise(nextID) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
      resolve();
    }, 1000);
  });
}

[1,2,3].reduce( (accumulatorPromise, nextID) => {
  console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
  return accumulatorPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

In our console:

"Loop! 11:28:06" "Loop! 11:28:06" "Loop! 11:28:06" "Resolve! 11:28:07" "Resolve! 11:28:08" "Resolve! 11:28:09"

The promises resolve in order as we expect, but the loop itself is quick, steady, and synchronous. After looking at the MDN polyfill for reduce(), this makes sense. There’s nothing asynchronous about a while() loop triggering the callback() over and over again, which is what’s happening under the hood:

while (k < len) {
  if (k in o) {
    value = callback(value, o[k], k, o);
  }
  k++;
}

With all that in mind, the real magic occurs in this piece right here:

return previousPromise.then(() => {
  return methodThatReturnsAPromise(nextID)
});

Each time our callback fires, we return a promise that resolves to another promise. And while reduce() doesn’t wait for any resolution to take place, the advantage it does provide is the ability to pass something back into the same callback after each run, a feature unique to reduce(). As a result, we’re able to build a chain of promises that resolve into more promises, making everything nice and sequential:

new Promise( (resolve, reject) => {
  // Promise #1
  resolve();
}).then( (result) => {
  // Promise #2
  return result;
}).then( (result) => {
  // Promise #3
  return result;
}); // ... and so on!

All of this should also reveal why we can’t just return a single, new promise each iteration. Because the loop runs synchronously, each promise will be fired immediately, instead of waiting for those created before it.

[1,2,3].reduce( (previousPromise, nextID) => {
  console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
      resolve(nextID);
    }, 1000);
  });
}, Promise.resolve());

In our console:

"Loop! 11:31:20" "Loop! 11:31:20" "Loop! 11:31:20" "Resolve! 11:31:21" "Resolve! 11:31:21" "Resolve! 11:31:21"

Is it possible to wait until all processing is finished before doing something else? Yes. The synchronous nature of reduce() doesn’t mean you can’t throw a party after every item has been completely processed. Look:

function methodThatReturnsAPromise(id) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Processing ${id}`);
      resolve(id);
    }, 1000);
  });
}

let result = [1,2,3].reduce( (accumulatorPromise, nextID) => {
  return accumulatorPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

result.then(e => {
  console.log("Resolution is complete! Let's party.")
});

Since all we’re returning in our callback is a chained promise, that’s all we get when the loop is finished: a promise. After that, we can handle it however we want, even long after reduce() has run its course.

Why won’t any other Array methods work?

Remember, under the hood of reduce(), we’re not waiting for our callback to complete before moving onto the next item. It’s completely synchronous. The same goes for all of these other methods:

  • map()
  • filter()
  • some()
  • every()

But reduce() is special.

We found that the reason reduce() works for us is because we’re able to return something right back to our same callback (namely, a promise), which we can then build upon by having it resolve into another promise. With all of these other methods, however, we just can’t pass an argument to our callback that was returned from our callback. Instead, each of those callback arguments are predetermined, making it impossible for us to leverage them for something like sequential promise resolution.

[1,2,3].map((item, [index], [array]) => [value]);
[1,2,3].filter((item, [index], [array]) => [boolean]);
[1,2,3].some((item, [index], [array]) => [boolean]);
[1,2,3].every((item, [index], [array]) => [boolean]);

I hope this helps!

At the very least, I hope this helps shed some light on why reduce() is uniquely qualified to handle promises in this way, and maybe give you a better understanding of how common Array methods operate under the hood. Did I miss something? Get something wrong? Let me know!


Why don’t we add a `lovely` element to HTML?

<person>, <subhead>, <location>, <logo>… It’s not hard to come up with a list of HTML elements that you think would be useful. So, why don’t we?

Bruce Lawson has a look. The conclusion is largely that we don’t really need to and perhaps shouldn’t.

By my count, we now have 124 HTML elements, many of which are unknown to many web authors, or regularly confused with each other—for example, the difference between <article> and <section>. This suggests to me that the cognitive load of learning all these different elements is getting too much.

Direct Link to Article


WordPress.com

Hey! Chris here, with a big thanks to WordPress, for not just their sponsorship here the last few months, but for being a great product for so many sites I’ve worked on over the years. I’ve been a web designer and developer for the better part of two decades, and it’s been a great career for me.

I’m all about learning. The more you know, the more you’re capable of doing and the more doors open for you, so to speak, for getting things done as a web worker. And yet it’s a dance. Just because you know how to do particular things doesn’t mean that you always should. Part of this job is knowing what you should do yourself and what you should outsource or rely on for a trustworthy service.

With that in mind, I think if you can build a site with WordPress.com, you should build your site on WordPress.com. Allow me to elaborate.

Do I know how to build a functional contact form from absolute scratch? I do! I can design the form, I can build the form with HTML and style it with CSS, I can enhance the form with JavaScript, I can process the form with a backend language and send the data where I need it. It’s a tremendous amount of work, which is fine, because hey, that’s the job sometimes. But it’s rare that I actually do all of that work.

Instead of doing everything from scratch when I need a form on a site I’m building, I often choose a form building service that does most of this work for me and leaves me with just the job of designing the form and telling it where I want the data collected to go. Or I might build the form myself but use some sort of library for processing the data. Or I might use a form framework on the front end but handle the data processing myself. It depends on the project! I want to make sure whatever time I spend working on it is the most valuable it can be, not doing something rote.

Part of the trick is understanding how to evaluate technology and choose things that serve your needs best. You’ll get that with experience. It’s also different for everyone. We all have different needs and different skills, so the technology choices you make will likely be different than what choices I make.

Here’s one choice that I found to be in many people’s best interest: if you don’t have to deal with hosting, security, and upgrading all the underlying software that powers a website…don’t! In other words, as I said, if you can use WordPress.com, do use WordPress.com.

This is an often-quoted fact, but it bears repeating: WordPress powers about a third of the Internet, which is a staggering figure. There are an awful lot of people happily running their sites on WordPress, and that number wouldn’t be nearly so high if WordPress weren’t flexible and very usable.

There are some sites that WordPress.com isn’t a good match for. Say you’re going to build the next big Fantasy Football app with real-time scores, charts and graphs on dashboards, and live chat rooms. That’s custom development work probably suited for different technology.

But say you want to have a personal portfolio site with a blog. Can WordPress.com do that? Heck yes, that’s bread and butter stuff. What if you want to sell products? Sure. What if you want to have a showcase for your photography? Absolutely. How about the homepage for a laundromat, restaurant, bakery, or coffeeshop? Check, check, check and check. A website for your conference? A place to publish a book chapter by chapter? A mini-site for your family? A road trip blog? Yes to all.

So, if you can build your site on WordPress.com, then I’m saying that you should, because what you’re doing is saving time, saving money, and, most importantly, avoiding a heaping pile of technical debt. You don’t deal with hosting, and your site will be fast without you ever having to think about it. You don’t deal with any software upgrades or weird incompatibilities. You just get a reliable system.

The longer I work in design and development, the more weight I put on just how valuable that reliability is and how dangerous technical debt is. I’ve seen too many sites fall off the face of the Earth because the people taking care of them couldn’t deal with the technical debt. Do yourself, your client and, heck, me a favor (seriously, I’ll sleep better) and build your site on WordPress.com.


Getting Started with Vue Plugins

In the last months, I’ve learned a lot about Vue. From building SEO-friendly SPAs to crafting killer blogs or playing with transitions and animations, I’ve experimented with the framework thoroughly.

But there’s been a missing piece throughout my learning: plugins.

Most folks working with Vue have either come to rely on plugins as part of their workflow or will certainly cross paths with plugins somewhere down the road. Whatever the case, they’re a great way to leverage existing code without having to constantly write from scratch.

Many of you have likely used jQuery and are accustomed to using (or making!) plugins to create anything from carousels and modals to responsive videos and type. We’re basically talking about the same thing here with Vue plugins.

So, you want to make one? I’m going to assume you’re nodding your head so we can get our hands dirty together with a step-by-step guide for writing a custom Vue plugin.

First, a little context…

Plugins aren’t something specific to Vue and — just like jQuery — you’ll find that there’s a wide variety of plugins that do many different things. By definition, they indicate that an interface is provided to allow for extensibility.

Brass tacks: they’re a way to plug global features into an app and extend them for your use.

The Vue documentation covers plugins in great detail and provides an excellent list of broad categories that plugins generally fall into:

  1. Add some global methods or properties.
  2. Add one or more global assets: directives/filters/transitions etc.
  3. Add some component options by global mixin.
  4. Add some Vue instance methods by attaching them to Vue.prototype.
  5. A library that provides an API of its own, while at the same time injecting some combination of the above.
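To make those categories concrete, here’s a minimal sketch of an install function that touches the first four; every name in it is made up for illustration:

const MyPlugin = {
  install(Vue, options) {
    // 1. a global method
    Vue.myGlobalMethod = () => { /* ... */ }

    // 2. a global asset, in this case a directive
    Vue.directive('my-directive', {
      bind(el, binding) { /* ... */ }
    })

    // 3. component options via a global mixin
    Vue.mixin({
      created() { /* ... */ }
    })

    // 4. an instance method on Vue.prototype
    Vue.prototype.$myMethod = () => { /* ... */ }
  }
}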

OK, OK. Enough prelude. Let’s write some code!

What we’re making

At Spektrum, Snipcart’s mother agency, our designs go through an approval process, as I’m sure is typical at most other shops and companies. We allow a client to comment and make suggestions on designs as they review them so that, ultimately, we get the green light to proceed and build the thing.

We generally use InVision for all this. The commenting system is a core component in InVision. It lets people click on any portion of the design and leave a comment for collaborators directly where that feedback makes sense. It’s pretty rad.

As cool as InVision is, I think we can do the same thing ourselves with a little Vue magic and come out with a plugin that anyone can use as well.

The good news here is they’re not that intimidating. A basic knowledge of Vue is all you need to start fiddling with plugins right away.

Step 1. Prepare the codebase

A Vue plugin should contain an install method that takes two parameters:

  1. The global Vue object
  2. An object incorporating user-defined options

Firing up a Vue project is super simple, thanks to Vue CLI 3. Once you have that installed, run the following in your command line:

$ vue create vue-comments-overlay
# Answer the few questions
$ cd vue-comments-overlay
$ npm run serve

This gives us the classic “Hello World” start we need to crank out a test app that will put our plugin to use.

Step 2. Create the plugin directory

Our plugin has to live somewhere in the project, so let’s create a directory where we can cram all our work, then navigate our command line to the new directory:

$ mkdir src/plugins
$ mkdir src/plugins/CommentsOverlay
$ cd src/plugins/CommentsOverlay

Step 3: Hook up the basic wiring

A Vue plugin is basically an object with an install function that gets executed whenever the application using it includes it with Vue.use().

The install function receives the global Vue object as a parameter and an options object:

// src/plugins/CommentsOverlay/index.js

export default {
  install(vue, opts){
    console.log('Installing the CommentsOverlay plugin!')
    // Fun will happen here
  }
}

Now, let’s plug this in our “Hello World” test app:

// src/main.js

import Vue from 'vue'
import App from './App.vue'
import CommentsOverlay from './plugins/CommentsOverlay' // import the plugin

Vue.use(CommentsOverlay) // put the plugin to use!

Vue.config.productionTip = false

new Vue({ render: createElement => createElement(App) }).$mount('#app')

Step 4: Provide support for options

We want the plugin to be configurable. This will allow anyone using it in their own app to tweak things up. It also makes our plugin more versatile.

We’ll make options the second argument of the install function. Let’s create the default options that will represent the base behavior of the plugin, i.e. how it operates when no custom option is specified:

// src/plugins/CommentsOverlay/index.js

const optionsDefaults = {
  // Retrieves the current logged in user that is posting a comment
  commenterSelector() {
    return {
      id: null,
      fullName: 'Anonymous',
      initials: '--',
      email: null
    }
  },
  data: {
    // Hash object of all elements that can be commented on
    targets: {},
    onCreate(created) {
      this.targets[created.targetId].comments.push(created)
    },
    onEdit(editted) {
      // This is obviously not necessary
      // It's there to illustrate what could be done in the callback of a remote call
      let comments = this.targets[editted.targetId].comments
      comments.splice(comments.indexOf(editted), 1, editted);
    },
    onRemove(removed) {
      let comments = this.targets[removed.targetId].comments
      comments.splice(comments.indexOf(removed), 1);
    }
  }
}

Then, we can merge the options that get passed into the install function on top of these defaults:

// src/plugins/CommentsOverlay/index.js

export default {
  install(vue, opts){
    // Merge options argument into options defaults
    const options = {
      ...optionsDefaults,
      ...opts
    }

    // ...
  }
}

Step 5: Create an instance for the commenting layer

One thing you want to avoid with this plugin is having its DOM and styles interfere with the app it is installed on. To minimize the chances of this happening, one way to go is making the plugin live in another root Vue instance, outside of the main app’s component tree.

Add the following to the install function:

// src/plugins/CommentsOverlay/index.js

export default {
  install(vue, opts){
    ...

    // Create plugin's root Vue instance
    const root = new Vue({
      data: { targets: options.data.targets },
      render: createElement => createElement(CommentsRootContainer)
    })

    // Mount root Vue instance on new div element added to body
    root.$mount(document.body.appendChild(document.createElement('div')))

    // Register data mutation handlers on root instance
    root.$on('create', options.data.onCreate)
    root.$on('edit', options.data.onEdit)
    root.$on('remove', options.data.onRemove)

    // Make the root instance available in all components
    vue.prototype.$commentsOverlay = root

    ...
  }
}

Essential bits in the snippet above:

  1. The app lives in a new div at the end of the body.
  2. The event handlers defined in the options object are hooked to the matching events on the root instance. This will make sense by the end of the tutorial, promise.
  3. The $commentsOverlay property added to Vue’s prototype exposes the root instance to all Vue components in the application.
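That last point means any component in the host app can now reach the plugin. A hypothetical usage, not code from the plugin itself:

// inside any component of the app using the plugin
export default {
  mounted() {
    // the plugin's root instance, exposed via Vue.prototype
    console.log(this.$commentsOverlay.targets)
  }
}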

Step 6: Make a custom directive

Finally, we need a way for apps using the plugin to tell it which element will have the comments functionality enabled. This is a case for a custom Vue directive. Since plugins have access to the global Vue object, they can define new directives.

Ours will be named comments-enabled, and it goes like this:

// src/plugins/CommentsOverlay/index.js

export default {
  install(vue, opts){
    ...

    // Register custom directive that enables commenting on any element
    vue.directive('comments-enabled', {
      bind(el, binding) {

        // Add this target entry in root instance's data
        root.$set(
          root.targets,
          binding.value,
          {
            id: binding.value,
            comments: [],
            getRect: () => el.getBoundingClientRect(),
          });

        el.addEventListener('click', (evt) => {
          root.$emit(`commentTargetClicked__${binding.value}`, {
            id: uuid(),
            commenter: options.commenterSelector(),
            clientX: evt.clientX,
            clientY: evt.clientY
          })
        })
      }
    })
  }
}

The directive does two things:

  1. It adds its target to the root instance’s data. The key defined for it is binding.value. It enables consumers to specify their own ID for target elements, like so: <img v-comments-enabled="imgFromDb.id" :src="imgFromDb.src" />.
  2. It registers a click event handler on the target element that, in turn, emits an event on the root instance for this particular target. We’ll get back to how to handle it later on.

The install function is now complete! Now we can move on to the commenting functionality and components to render.

Step 7: Establish a “Comments Root Container” component

We’re going to create a CommentsRootContainer and use it as the root component of the plugin’s UI. Let’s take a look at it:

<!-- src/plugins/CommentsOverlay/CommentsRootContainer.vue -->

<template>
  <div>
    <comments-overlay
      v-for="target in targets"
      :target="target"
      :key="target.id">
    </comments-overlay>
  </div>
</template>

<script>
import CommentsOverlay from "./CommentsOverlay";

export default {
  components: { CommentsOverlay },
  computed: {
    targets() {
      return this.$root.targets;
    }
  }
};
</script>

What’s this doing? We’ve basically created a wrapper that’s holding another component we’ve yet to make: CommentsOverlay. You can see where that component is being imported in the script and the values that are being requested inside the wrapper template (target and target.id). Note how the target computed property is derived from the root component’s data.

Now, the overlay component is where all the magic happens. Let’s get to it!

Step 8: Make magic with a “Comments Overlay” component

OK, I’m about to throw a lot of code at you, but we’ll be sure to walk through it:

<!-- src/plugins/CommentsOverlay/CommentsOverlay.vue -->

<template>
  <div class="comments-overlay">

    <div class="comments-overlay__container"
      v-for="comment in target.comments"
      :key="comment.id"
      :style="getCommentPostition(comment)">

      <button class="comments-overlay__indicator"
        v-if="editing != comment"
        @click="onIndicatorClick(comment)">
        {{ comment.commenter.initials }}
      </button>

      <div v-else class="comments-overlay__form">
        <p>{{ getCommentMetaString(comment) }}</p>
        <textarea ref="text" v-model="text" />

        <button @click="edit" :disabled="!text">Save</button>
        <button @click="cancel">Cancel</button>
        <button @click="remove">Remove</button>
      </div>
    </div>

    <div class="comments-overlay__form"
      v-if="this.creating"
      :style="getCommentPostition(this.creating)">

      <textarea ref="text" v-model="text" />

      <button @click="create" :disabled="!text">Save</button>
      <button @click="cancel">Cancel</button>
    </div>
  </div>
</template>

<script>
export default {
  props: ['target'],
  data() {
    return {
      text: null,
      editing: null,
      creating: null
    };
  },
  methods: {
    onTargetClick(payload) {
      this._resetState();

      const rect = this.target.getRect();
      this.creating = {
        id: payload.id,
        targetId: this.target.id,
        commenter: payload.commenter,
        ratioX: (payload.clientX - rect.left) / rect.width,
        ratioY: (payload.clientY - rect.top) / rect.height
      };
    },
    onIndicatorClick(comment) {
      this._resetState();

      this.text = comment.text;
      this.editing = comment;
    },
    getCommentPostition(comment) {
      const rect = this.target.getRect();
      const x = comment.ratioX * rect.width + rect.left;
      const y = comment.ratioY * rect.height + rect.top;
      return { left: `${x}px`, top: `${y}px` };
    },
    getCommentMetaString(comment) {
      return `${comment.commenter.fullName} - ${comment.timestamp.getMonth()}/${comment.timestamp.getDate()}/${comment.timestamp.getFullYear()}`;
    },
    edit() {
      this.editing.text = this.text;
      this.editing.timestamp = new Date();
      this._emit("edit", this.editing);
      this._resetState();
    },
    create() {
      this.creating.text = this.text;
      this.creating.timestamp = new Date();
      this._emit("create", this.creating);
      this._resetState();
    },
    cancel() {
      this._resetState();
    },
    remove() {
      this._emit("remove", this.editing);
      this._resetState();
    },
    _emit(evt, data) {
      this.$root.$emit(evt, data);
    },
    _resetState() {
      this.text = null;
      this.editing = null;
      this.creating = null;
    }
  },
  mounted() {
    this.$root.$on(`commentTargetClicked__${this.target.id}`, this.onTargetClick);
  },
  beforeDestroy() {
    this.$root.$off(`commentTargetClicked__${this.target.id}`, this.onTargetClick);
  }
};
</script>

I know, I know. A little daunting. But it’s basically only doing a few key things.

First off, the entire first part contained in the <template> tag establishes the markup for a comment popover that will display on the screen with a form to submit a comment. In other words, this is the HTML markup that renders our comments.

Next up, we write the scripts that power the way our comments behave. The component receives the full target object as a prop. This is where the comments array and the positioning info is stored.

Then, the magic. We’ve defined several methods that do important stuff when triggered:

  • Listens for a click
  • Renders a comment box and positions it where the click was executed
  • Captures user-submitted data, including the user’s name and the comment
  • Provides affordances to create, edit, remove, and cancel a comment

Lastly, the handler for the commentTargetClicked events we saw earlier is managed within the mounted and beforeDestroy hooks.

It’s worth noting that the root instance is used as the event bus. Even if this approach is often discouraged, I judged it reasonable in this context since the components aren’t publicly exposed and can be seen as a monolithic unit.

Aaaaaaand, we’re all set! After a bit of styling (I won’t expand on my dubious CSS skills), our plugin is ready to take user comments on target elements!

Demo time!

Live Demo

GitHub Repo

Getting acquainted with more Vue plugins

We spent the bulk of this post creating a Vue plugin but I want to bring this full circle to the reason we use plugins at all. I’ve compiled a short list of extremely popular Vue plugins to showcase all the wonderful things you gain access to when putting plugins to use.

  • Vue-router – If you’re building single-page applications, you’ll without a doubt need Vue-router. As the official router for Vue, it integrates deeply with its core to accomplish tasks like mapping components and nesting routes.
  • Vuex – Serving as a centralized store for all the components in an application, Vuex is a no-brainer if you wish to build large apps that stay maintainable.
  • Vee-validate – When building typical line of business applications, form validation can quickly become unmanageable if not handled with care. Vee-validate takes care of it all in a graceful manner. It uses directives, and it’s built with localization in mind.

I’ll limit myself to these plugins, but know that there are many others waiting to help Vue developers, like yourself!

And, hey! If you can’t find a plugin that serves your exact needs, you now have some hands-on experience crafting a custom plugin. 😀


HTML for Numeric Zip Codes

I just overheard this discussion on Twitter, kicked off by Dave.

It seems like zip codes are just numbers, right? So…

<input id="zip" name="zip" type="number">

The advantage there is getting free validation from the browser and triggering a more helpful number-based keyboard on mobile devices.

But Zach pointed out that type="number" is problematic for zip codes because zip codes can have leading zeros (e.g. a Boston zip code might be 02119). Filament Group also has a little lib for fixing this.
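You can see the problem in a couple of lines of JavaScript; a number simply has no way to hold onto a leading zero:

Number('02119')          // 2119
String(Number('02119'))  // "2119", one digit short of a valid Boston zip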

This is the perfect job for inputmode, as Jeremy suggests:

<input id="zip" name="zip" type="text" inputmode="numeric" pattern="^(?(^00000(|-0000))|(\d{5}(|-\d{4})))$">

But the support is pretty bad at the time of this writing.

A couple of people mentioned trying to hijack type="tel" for it, but that has its own downsides, like rejecting properly formatted 9-digit zip codes.

So, zip codes, while they look like numbers, are probably best treated as strings. Another option here is to leave it as a text input, but force numbers with pattern, as Pamela Fox documents:

<input id="zip" name="zip" type="text" pattern="[0-9]*">

As many have pointed out in the comments, it’s worth noting that numeric patterns for zip codes are best suited for the U.S., as the codes for many other countries contain both numbers and letters.


Sass Selector Combining

Brad Frost was asking about this the other day…

.c-btn {
  &__icon { ... }
}

I guess that’s technically “nesting” but the selectors come out flat:

.c-btn__icon { }

The question was whether you do that or just write out the whole selector instead, as you would with vanilla CSS. Brad’s post gets into all the pro’s and con’s of both ways.

To me, I’m firmly in the camp of not “nesting” because it makes searching for selectors so much harder. I absolutely live by being able to search my project for fully expanded class names and, ironically, just as Brad was posting that poll, I was stumped by a combined class like this and changed it in one of my own code bases.

Robin Rendle also notes the difficulty in searching as an issue with an example that has clearly gone too far!

Direct Link to Article
