Simple Named Grid Areas

I think of named grid areas in CSS Grid as bring-your-own syntactic sugar. You don’t absolutely need them (you could express grid placement in other ways), but they can make that placement more intuitive. And, hey, if I’m wrong about that, correct me in the comments.

Say you set up a 3-column grid:

.grid {
  display: grid;
  grid-gap: 1rem;
  grid-template-columns: 200px 1fr 1fr;
}

No rows defined there; those are implicit and will appear as needed. We could define them, we just aren’t, because like most situations in web design, we don’t care how tall the items are — the height will grow as needed to accommodate the content.

Now, how do we place something in that very top-left position in the grid? We could tell the grid to place it there like this:

.item {
  grid-column: 1 / 2; /* start at the first grid column line and end at the second */
}

That works, although that .item better be the first child of .grid. Otherwise, something else may implicitly be placed there and .item will kick down to the next open row. If we wanted to be super sure to place it in the top-left, we could do the row as well:

.item {
  grid-column: 1 / 2;
  grid-row: 1 / 2;
}

Now it will be in the top-left for sure, even if other items are explicitly placed there (they’ll just overlap). We can even shorten things up with the grid-area property:

.item {
  grid-area: 1 / 1 / 2 / 2;
}

All those 1’s and 2’s might be intuitive enough for now, but the numbers become a bit much in more complex grids involving both column and row placement.

Check this out: while we are defining the columns, we can also name grid areas with a separate property:

.grid {
  display: grid;
  grid-gap: 1rem;
  grid-template-columns: 200px 1fr 1fr;
  grid-template-areas:
    "pro a b"
    "pro c d";
}

Every quoted group in grid-template-areas is a row. Inside are names I just made up. Could be just about anything, as long as it makes sense to you. See how the word “pro” is repeated there on both rows? That’s important, as it’s saying that we could place a grid item where that value “pro” is and it will be in the first of the three columns and span both rows. Pretty intuitive, yeah?

We place it like this:

.pro-features {
  /* rather than */
  grid-area: 1 / 1 / 3 / 2;
  /* we can now do */
  grid-area: pro;
}
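The other named areas work the same way. A quick sketch with made-up class names:

.feature-a { grid-area: a; }
.feature-b { grid-area: b; }
.feature-c { grid-area: c; }
.feature-d { grid-area: d; }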

Here’s that simple example:

See the Pen Simple Named Grid Areas by Chris Coyier (@chriscoyier) on CodePen.

Want to get even more descriptive with a grid? Try drawing it in your CSS comments.
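The idea being that an ASCII sketch in a comment can document the template right next to the code, something like this:

.grid {
  /*  ┌─────┬───┬───┐
      │     │ a │ b │
      │ pro ├───┼───┤
      │     │ c │ d │
      └─────┴───┴───┘  */
  grid-template-areas:
    "pro a b"
    "pro c d";
}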


Symbolic Links for Easier Multi-Folder Local Development

You know how you open a “project” in a local code editor? I guess different editors have different terminology for it, but essentially what you are doing is opening a folder/directory and it shows you a sidebar full of files and folders you can navigate through and such.

Typically there is one parent folder, and everything else is within that folder. Right? Well, it doesn’t have to be! That’s where symbolic links come in.

Otherwise known as symlinks, they are like pointers to another place. While you don’t have to actually move the folder you are referencing, you can create a pointer to it that behaves as if you did.

You can create them right from the command line:

ln -s /path/to/original/ /path/to/link

You’ll get a link that looks like an “alias” on macOS. Ya know, the things you can make by right-clicking an item or going File > Make Alias. But they are different. In my experience, aliases tend not to work in code editors, but symlinks do.

Looks like an alias, but it’s really a symlink.

I was actually lazy (hey, I prefer GUIs for just about everything) and used Nick Zitzmann’s SymbolicLinker context menu plugin to help make the link I wanted (and to let me make other ones super easily).

Why bother? I’ve had a handful of occasions over the years, but here’s one that just came up for me. I’m working on a WordPress theme, and there is a WordPress Functionality Plugin that goes with it. Ideally, I’d have just my theme folder open in my code editor (no need to have the entire WordPress root there, that would just slow my editor and make searching a mess). But I’d also like to have that plugin open at the same time, so in case I’m calling functions and such that the plugin controls, I can see both. But these folders are in totally different places…

No matter, I can put a symlink to the plugin in the theme. (You may want to .gitignore it, depending on your deployment setup and such.) Now I can search and find things in both places like I want.
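Here’s roughly what that looks like from the command line (the paths are hypothetical; adjust for your own setup):

# link the plugin folder into the theme folder
ln -s ~/Sites/my-site/wp-content/plugins/my-functionality-plugin \
  ~/Sites/my-site/wp-content/themes/my-theme/functionality-plugin

# optionally keep the symlink out of the theme's repo
echo "functionality-plugin" >> ~/Sites/my-site/wp-content/themes/my-theme/.gitignore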

I know that some editors have their own concept of this, like VS Code’s Multi-root Workspaces and how you can Project > Add Folder to Project in Sublime. But symlinks are a way to do the same thing but in a cross-editor and cross-system way that everyone can use!


Subset Numerals so They’re as Awesome as the Rest of Your Content

You’re putting the finishing touches on your new million-dollar idea — your copy is perfect, your color scheme is dazzling, and you’ve found a glorious font pairing (who knew Baskerville and Raleway looked so great together? You did.) but there’s one problem: Raleway’s pesky lowercase numbers make your shopping cart look confusing and overwhelm the user.

This is a fairly mundane problem, but an issue that can make beautiful typefaces unusable if numbers are an important part of your site, like a store or an analytics dashboard. And this isn’t just an issue with lowercase numerals. Non-monospaced numerals can be equally distracting when glancing at a list of numbers.

We’re going to look at a few techniques to combat this problem, but first we need to find a font whose numerals we can use instead of our main body font. There’s no cut-and-dried way of finding your font twin. The most important characteristics to search for are the weight and width so that you can match it to that of your original font. If you intend to use numerals at multiple weights, try looking at fonts that have a wide range of weights to up your chances at matching your original. You may end up needing a different numeral font for each weight or mismatching the weights of the font pairs, but that’s fine because there are in fact no font police.

Here are a few Google Font pairings that match well enough to not be noticeable at small sizes:

Method 0: Wrap ‘em in spans

@import url('https://fonts.googleapis.com/css?family=Raleway:400|Nunito:300');

body {
  font-family: 'Raleway', sans-serif;
}

.numeral {
  font-family: 'Nunito', 'Raleway', sans-serif;
}

Your total comes to $<span class="numeral">15</span>.<span class="numeral">99</span>

This is not a good solution. Having to add to the markup is bad, and loading both fonts in their entirety is not great, but if you don’t have a lot of content and want a simple solution, then there’s no shame in it.

What we’d prefer to find is a CSS-only solution that isolates the numerals of the number font and loads them instead of (or before) the main font, without having to change the HTML. Read on.

How font-family works

The following methods rely on creating a @font-face declaration which only refers to our preferred subset of numerals, and references them in the font stack as normal:

body {
  font-family: 'Custom Numeral Font', 'Main Font', sans-serif;
}

By ordering the subsetted font first in your font-family declaration, the browser will default to it and will fallback to the subsequent fonts for glyphs that are not available in the first. This is a really important distinction — the browser is not only checking that the declared font is available (locally or imported via @font-face), but it is also checking that its character map contains the requested character and will pass onto the next font if it doesn’t, on a character-by-character basis. By the way, the spec for the font-matching algorithm is a surprisingly interesting read.

It’s important to note that the browser will prioritize the font family over the font weight and style, so if you subset the numerals for only a normal weight and then have a number inside a bold-style element, the browser will choose to show the normal-weight character from the numeral font rather than the bold-weight character of the main font. Basically, if you’re doing this, make sure you do it for all the font weights you’ll show numbers in.
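In other words, something like this sketch (the file names here are made up), so that both weights of the numeral font exist:

@font-face {
  font-family: 'Custom Numeral Font';
  src: url('numerals-regular.woff2') format('woff2');
  font-weight: normal;
}

@font-face {
  font-family: 'Custom Numeral Font';
  src: url('numerals-bold.woff2') format('woff2');
  font-weight: bold;
}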

Method 1: Font Squirrel custom subsetting

If you self-host your font files instead of serving them from a hosted service like Adobe Fonts or Google Fonts, then you can use the expert configuration of Font Squirrel’s Webfont Generator to create files that only contain the numeral subset. Read the font’s license agreement to make sure this type of manipulation is okay before proceeding.

Setting the Character Type for the replacement font to Numbers in the Font Squirrel interface.

Once you have the subsetted font files, you can use them as you normally would. Check out this article for more information about @font-face and file type browser support.

@font-face {
  font-family: 'Nunito';
  src: url('nunito-light-webfont.woff2') format('woff2'),
       url('nunito-light-webfont.woff') format('woff');
  font-weight: normal;
  font-style: normal;
}

body {
  font-family: 'Nunito', 'Raleway', sans-serif;
}

If you’re being performance-conscious, you can also subset your main font to remove its numeral glyphs and shave off a few extra bytes.

Method 2: @font-face unicode-range subsetting

The unicode-range property of @font-face is mostly used to declare the character set the font files contain in order to help the browser decide whether or not to download the files; a big potential performance boost for multi-language sites that use non-Latin alphabets. The flip-side is that unicode-range also restricts the @font-face declaration to the specified range, meaning that we can only use it to make certain portions of the font files available for use in the browser.

@font-face {
  font-family: 'Nunito';
  src: url('nunito-light-webfont.woff2') format('woff2'),
       url('nunito-light-webfont.woff') format('woff');
  font-weight: normal;
  font-style: normal;
  unicode-range: U+30-39; /* 0-9 */
}

body {
  font-family: 'Nunito', 'Raleway', sans-serif;
}

This is worse for performance than the previous method as the browser still has to download the whole font file to use the numerals, but preferable if the license agreement disallows manipulation of the files.

Sadly, we can’t use this method to subset fonts already loaded by an external service, but it can be used on local fonts:

@font-face {
  font-family: 'Times Numeral Roman';
  src: local('Times New Roman');
  unicode-range: U+30-39; /* 0-9 */
}

This is a neat way of tweaking particular characters of your main font, perhaps subsetting for just an ampersand or preferred curly quotes (in which case you’d have to give up the “Times Numeral Roman” pun), with no performance loss as the local font will just be ignored if it doesn’t exist. You can check common system font availability here. And you can become Queen of the Type Nerds by making a site that can only be appreciated properly if you have all its subsetted fonts downloaded locally, premium esoteric.

Support for unicode-range is pretty good, but note that the subset font will be used for all text if it’s not supported, so maybe don’t make it Papyrus. Or if you really want to use Papyrus, you can be sneaky and add another web-safe font first so that unsupported browsers will default to it instead of Papyrus:

@font-face {
  font-family: 'Backup Font';
  src: local('Arial');
  unicode-range: U+60; /* backtick because I can't think of a more innocuous character */
}

@font-face {
  font-family: 'Papyrus Ampersand';
  src: local('Papyrus');
  unicode-range: U+26; /* & */
}

body {
  font-family: 'Backup Font', 'Papyrus Ampersand', 'Main Font', sans-serif;
}

Method 3: Google Fonts text subsetting

The Google Fonts API comes with a few handy extra options to aid optimization by specifying only particular font weights, styles and alphabets (the subset parameter takes a list of alphabets like greek,cyrillic and not a unicode range, sadly), but there’s also a little-known “beta” parameter called text which is ostensibly for use in titles and logos but works equally well for our purpose:

@import url('https://fonts.googleapis.com/css?family=Raleway:400');
@import url('https://fonts.googleapis.com/css?family=Nunito:300&text=0123456789');

body {
  font-family: 'Nunito', 'Raleway', sans-serif;
}

So simple! So elegant!

The text parameter can take any UTF-8 characters, but make sure to URL encode them if they’re not alphanumeric. The only possible issue with this method is that we’re not creating a custom name with @font-face, so if the user already has the subset font on their system, it will use that font in its entirety.
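For instance, requesting just an ampersand and curly quotes would look something like this (%26, %E2%80%9C and %E2%80%9D are the URL-encoded characters):

@import url('https://fonts.googleapis.com/css?family=Nunito:300&text=%26%E2%80%9C%E2%80%9D');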

I haven’t been able to find any other hosted font services that offer this level of granularity for subsetting but do comment below if you come across one.



Sign Up vs. Signup

Anybody building a site that requires users to create accounts is going to face this language challenge. You’ll probably have this language strewn across your entire site, from prominent calls-to-action in your homepage hero, to persistent header buttons, to your documentation.

So which is correct? “Sign Up” or “Signup”? Let’s try to figure it out.

With some light internet grammar research, the term “sign up” is a verbal phrase. As in, “sign” is a verb (describes an action) and “sign up” is a verb plus a complement — participial phrase, best I can tell. That sounds about right to me.

My best guess before looking into this was that “signup” isn’t even a word at all, and more of a lazy internet mistake. Just like “frontend” isn’t a word. It’s either “front-end” (a compound adjective as in a front-end developer), or “front end” (as in, “Your job is to work on the front end.”).

I was wrong, though. “Signup” is a noun. Like a thing. As in, “Go up the hallway past the water fountain and you’ll see the signup on the wall.” Which could certainly be a digital thing as well. Seems to me it wouldn’t be wrong to call a form that collects a user’s name and email address a “signup form.”

“Sign-up” is almost definitely wrong, as it’s not a compound word or compound adjective.

The fact that “sign up” and “signup” are both legit words/phrases makes this a little tricky. Having a verbal phrase as a button seems like a solid choice, but I wouldn’t call it wrong to have a button that said “Signup” since the button presumably links directly to a form in which you can sign up and that’s the correct noun for it.

Let’s see what some popular websites do.

Twitter goes with “Sign Up” and “Log in.” We haven’t talked about the difference between “Log in” and “Login” yet, but the difference is very much the same. Verbal phrase vs. noun. The only thing weird about Twitter’s approach here is the capitalization of “Up” and the lowercase “in.” Twitter seems giant enough that they must have thought of this and decided this intentionally, so I’d love to understand why because it looks like a mistake to my eyes.

Facebook, like Twitter, goes with “Sign Up” and “Log In.”

Google goes with “Sign in” and “Create account.” It’s not terribly rare to see companies use the “Create” verb. Visiting Microsoft’s Azure site, they used the copy “Create your account today” complemented with a “Start free” button. Slack uses “Sign in” and “Get Started.”

I can see the appeal of going with symmetry. Zoom uses “SIGN IN” and “SIGN UP” with the use of all-caps giving a pass on having to decide which words are capitalized.

Figma goes the “Sign In” and “Sign up” route, almost having symmetry — but what’s up with the mismatched capitalization? I thought, if anything, they’d go with a lowercase “i” because the uppercase “I” can look like a lowercase “L” and maybe that’s slightly weird.

At CodePen, we rock the “Sign Up” and “Log In” and try to be super consistent through the entire site using those two phrases.

If you’re looking for a conclusion here, I’d say that it probably doesn’t matter all that much. There are so many variations out there that people are probably used to it and you aren’t losing customers over it. It’s not like many will know the literal definition of “Signup.” I personally like active verb phrases — like “Sign Up,” “Log In,” or “Sign In” — with no particular preference for capitalization.


CSS-Tricks Chronicle XXXIV

Hey gang, time for another broad update about various goings on as we tend to do occasionally. Some various happenings around here, appearances on other sites, upcoming conferences, and the like.

I’m speaking at a handful of conferences coming up!

At the end of this month, October 29th-30th, I’ll be speaking at JAMstack_conf. Ever since I went to a jQuery conference several million years ago (by my count), I’ve always had a special place in my heart for conferences with a tech-specific focus. Certainly this whole world of JAMstack and serverless can be pretty broad, but it’s more focused than a general web design conference.


In December, I’ll be at WordCamp US. I like getting to go to WordPress-specific events to help me stay current on that community. CSS-Tricks is, and always has been, a WordPress site, as are many other sites I manage. I like to keep my WordPress development chops up the best I can. I imagine the Gutenberg talk will be hot and heavy! I’ll be speaking as well, generally about front-end development.


Next Spring, March 4th-6th, I’ll be in Seattle for An Event Apart!


Over on ShopTalk, Dave and I have kicked off a series of shows we’re calling “How to Think Like a Front-End Developer.”

I’ve been fascinated by this idea for a while and have been collecting thoughts on it. I have my own ideas, but I want to contrast them with the ideas of other front-end developers much more accomplished than myself! My goal is to turn all this into a talk that I can give toward the end of this year and next year. This is partially inspired by some posts we’ve published here over the years:

…as well as other people’s work, of course, like Brad Frost and Dan Mall’s Designer/Developer Workflow, and Lara Schenck and Mandy Michael’s thoughts on front-end development. Not to mention seismic shifts in the front-end development landscape through New JavaScript and Serverless.

I’ve been collecting these articles the best I can.

The ShopTalk series is happening now! A number of episodes are already published:


Speaking of ShopTalk, a while back Dave and I mused about wanting to redesign the ShopTalk Show website. We did all this work on the back end making sure all the data from our 350+ episodes is super clean and easy to work with, then I slapped a design on top of it that is honestly pretty bad.

Dan Mall heard us talk about it and reached out to us to see if he could help. Not to do the work himself… that would be amazing, but Dan had an even better idea. Instead, we would all work together to find a newcomer to design and have them work under Dan’s direction and guidance to design the site. Here’s Dan’s intro post (and note that applications are now closed).

We’re currently in the process of narrowing down the applicants and interviewing finalists. We’re planning on being very public about the process, so not only will we hopefully be helping someone who could use a bit of a break into this industry, but we’ll also help anyone else who cares to watch it happen.


I’ve recently had the pleasure of being a guest on other shows.

First up, I was on the Script & Style Show with David Walsh and Todd Gardner.

I love that David has resurrected the name Script & Style. We did a site together quite a few years back with that same name!


I have a very short interview on Makerviews:

What one piece of advice would you give to other makers?

I’d say that you’re lucky. The most interesting people I know that seem to lead the most fulfilling, long, and interesting lives are those people who do interesting things, make interesting things, and generally just engage with life at a level deeper than just skating by or watching.


And my (third?) appearance on Thundernerds:


If you happen to live in Central Oregon, note that our BendJS meetups have kicked back up for the season. We’ve been having them right at our CodePen office and it’s been super fun.


I haven’t even gotten to CodePen stuff yet! Since my last chronicle, we’ve brought in a number of new employees, like Klare Frank, Cassidy Williams, and now Stephen Shaw. We’re always chugging away at polishing and maintaining CodePen, building new features, encouraging community, and everything else that running a social coding site requires.

Oh and hey! CodePen is now a registered trademark, so I can do this: CodePen®. One of our latest user-facing features is pinned items. Rest assured, we have loads of other features that are in development for y’all that are coming soon.

If you’re interested in the technology side of CodePen, we’ve dug into lots of topics lately on CodePen radio like:


Continuous Integration: The What, Why and How

Not long ago, I had a novice understanding of Continuous Integration (CI) and thought it seemed like an extra process that forces engineers to do extra work on already large projects. My team began to implement CI into projects and, after some hands-on experience, I realized its great benefits, not only to the company, but to me as an engineer! In this post, I will describe CI, the benefits I’ve discovered, and how to implement it quickly and for free.

CI and Continuous Delivery (CD) are usually discussed together. Covering both CI and CD in one post is a lot to write and read all at once, so we’ll only discuss CI here. Maybe I will cover CD in a future post. 😉


What is CI?

Continuous Integration, as I understand it, is a pattern of programming that combines testing, safety checks, and development practices so that code can be confidently and continuously pushed from a development branch to a production-ready branch.

Microsoft Word is an example of CI. Words are written into the program and checked against spelling and grammar algorithms to assert a document’s general readability and spelling.

Why CI should be used everywhere

We’ve already touched on this a bit, but the biggest benefit of CI that I see is that it saves a lot of money by making engineers more productive. Specifically, it provides quicker feedback loops, easier integration, and it reduces bottlenecks. Directly correlating CI to company savings is hard because SaaS costs scale as the user base changes. So, if a developer wants to sell CI to the business, the formula below can be utilized. Curious just how much it can save? My friend, David Inoa, created the following demo to help calculate the savings.

See the Pen Continuous Integration (CI) Company Cost Savings Estimator by David (@davidinoa) on CodePen.

What really excites me enough to scream from the rooftops is how CI can benefit you and me as developers!

For starters, CI will save you time. How much? We’re talking hours per week. How? Oh, do I want to tell you! CI automatically tests your code and lets you know if it is okay to be merged into a branch that goes to production. Think about how much time you currently spend testing code and coordinating with others to get it ready for production; it adds up fast.

Then there’s the way it helps prevent code fatigue. It sports tools like Greenkeeper, which can automatically set up — and even merge — pull requests following a code review. This keeps code up-to-date and allows developers to focus on what we really need to do. You know, like writing code or living life. Code updates within packages usually only need to be reviewed for major version updates, so there’s less need to track every minor release for breaking changes that require action.

CI takes a lot of the guesswork out of updating dependencies that otherwise would take a lot of research and testing.

No excuses, use CI!

When talking to developers, the conversation usually winds up something like:

“I would use CI but…[insert excuse].”

To me, that’s a cop out! CI can be free. It can also be easy. It’s true that the benefits of CI come with some costs, including monthly fees for tools like CircleCI or Greenkeeper. But that’s a drop in the bucket with the long-term savings it provides. It’s also true that it will take time to set things up. But it’s worth calling out that the power of CI can be used for free on open source projects. If you need or want to keep your code private and don’t want to pay for CI tools, then you really can build your own CI setup with a few great npm packages.

So, enough with the excuses and behold the power of CI!

What problems does CI solve?

Before digging in much further, we should cover the use cases for CI. It solves a lot of issues and comes in handy in many situations:

  • When more than one developer wants to merge into a production branch at once
  • When mistakes are not caught or cannot be fixed before deployment
  • When dependencies are out of date
  • When developers have to wait extended periods of time to merge code
  • When packages are dependent on other packages
  • When a package is updated and must be changed in multiple places

In all of these cases, CI tests updates and prevents bugs from being deployed.

Recommended CI tools

Let’s look at the high level parts used to create a CI feedback loop, with some quick code bits to get CI set up for any open source project today. We’ll break this down into digestible chunks.

Documentation

In order to get CI working for me right away, I usually set CI up to test my initial documentation for a project. Specifically, I use MarkdownLint and Write Good because they provide all the features and functionality I need to write tests for this part of the project.

The great news is that GitHub provides standard templates and there is a lot of content that can be copied to get documentation setup quickly. Read more about quickly setting up documentation and creating a documentation feedback loop.

I keep a package.json file at the root of the project and run a script command like this:

"grammar": "write-good *.md --no-passive", "markdownlint": "markdownlint *.md"

Those two lines allow me to start using CI. That’s it! I can now run CI to test grammar.
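Since CI services often look for a standard npm test script (more on that in the setup section below), it can help to chain these into one. A quick sketch:

"test": "npm run grammar && npm run markdownlint"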

At this point, I can move onto setting up CircleCI and Greenkeeper to help me make sure that packages are up to date. We’ll get to that in just a bit.

Unit testing

Unit tests are a method for testing small blocks (units) of code to ensure that the expected behavior of that block works as intended.

Unit tests provide a lot of help with CI. They define code quality and provide developers with feedback without having to push/merge/host code. Read more about unit tests and quickly setting up a unit test feedback loop.

Here is an example of a very basic unit test without using a library:

// A small function to test: adds 1 to a number
const addsOne = (num) => num + 1

const numPlus1 = addsOne(3) // expect 4 as the value
const stringNumPlus1 = addsOne('3') // a string sneaks in: '3' + 1 is '31', not 4

/**
 * console.assert
 * https://developer.mozilla.org/en-US/docs/Web/API/console/assert
 * @param test
 * @param string
 * @returns string if the test fails
 **/
console.assert(numPlus1 === 4, 'The variable `numPlus1` is not 4!')
console.assert(stringNumPlus1 === 4, 'The variable `stringNumPlus1` is not 4!')

Over time, it is nice to use libraries like Jest to unit test code, but this example gives you an idea of what we’re looking at.

Here’s an example of the same test above using Jest:

const addsOne = (num) => num + 1

describe('addsOne', () => {
  it('adds a number', () => {
    const numPlus1 = addsOne(3)
    expect(numPlus1).toEqual(4)
  })

  it('will not add a string', () => {
    const stringNumPlus1 = addsOne('3')
    expect(stringNumPlus1 === 4).toBeFalsy()
  })
})

Using Jest, tests can be hooked up for CI with a command in a package.json like this:

"test:jest": "jest --coverage",

The flag --coverage configures Jest to report test coverage.

Safety checks

Safety checks help communicate code and code quality. Documentation, document templates, linters, spell checkers, and type checkers are all safety checks. These tools can be automated to run during commits, in development, during CI, or even in a code editor.

Safety checks fall into more than one category of CI: feedback loops and testing. Below is a list of the types of safety checks I typically bake into a project.

All of these checks may seem like another layer of code abstraction or learning, so be gentle on yourself and others if this feels overwhelming. These tools have helped my own team bridge experience gaps, define shareable team patterns, and assist developers when they’re confused about what their code is doing.

  • Committing, merging, communicating: Tools like husky, commitizen, GitHub Templates, and Changelogs help keep CI running clean code and form a nice workflow for a collaborative team environment.
  • Defining code (type checkers): Tools like TypeScript define and communicate code interfaces — not only types!
  • Linting: This is the practice of ensuring that something matches defined standards and patterns. There’s a linter for nearly all programming languages and you’ve probably worked with common ones, like ESlint (JavaScript) and Stylelint (CSS) in other projects.
  • Writing and commenting: Write Good helps catch grammar errors in documentation. Tools like JSDoc, Doctrine, and TypeDoc assist in writing documentation and add useful hints in code editors. Both can compile into markdown documentation.

ESLint is a good example of how any of these types of tools can be implemented in CI. For example, this is all that’s needed in package.json to lint JavaScript:

"eslint": "eslint ."

Obviously, there are many options that allow you to configure a linter to conform to you and your team’s coding standards, but you can see how practical it can be to set up.
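For reference, a minimal configuration could look something like this .eslintrc.json sketch (the rule choices here are placeholders, not recommendations):

{
  "extends": "eslint:recommended",
  "env": {
    "browser": true,
    "es6": true
  },
  "rules": {
    "semi": ["error", "always"]
  }
}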

High level CI setup

Getting CI started for a repository often takes very little time, yet there are plenty of advanced configurations we can also put to use, if needed. Let’s look at a quick setup and then move into a more advanced configuration. Even the most basic setup is beneficial, saving time and improving code quality!

Two features that can save developers hours per week with simple CI are automatic dependency updates and build testing. Dependency updates are written about in more detail here.

Build testing refers to installing node_modules during CI, for example by running npm install and confirming that everything installs as expected. This is a simple task, but it does fail sometimes. Ensuring that node_modules installs as expected saves considerable time!

Quick CI Setup

CI can be set up automatically for both CircleCI and Travis! If a valid test command is already defined in the repository’s package.json, then CI can be implemented without any more configuration.

In a CI tool, like CircleCI or Travis, the repository can be searched for after logging in and authenticating. From there, follow the CI tool’s UI to start testing.

For JavaScript, CircleCI will look at the test script within a repository’s package.json to see if a valid test script is defined. If it is, then CircleCI will begin running CI automatically! Read more about setting up CircleCI automatically here.

Advanced configurations

If unit tests are unfinished, or if more configuration is needed, a .yml file can be added for a CI tool (like CircleCI) in which the runner’s execution steps are defined.

Below is how to set up a custom CircleCI configuration with JavaScript linting (again, using ESLint as an example).

First off, run this command:

mkdir .circleci && touch .circleci/config.yml

Then add the following to the generated file:

defaults: &defaults
  working_directory: ~/code
  docker:
    - image: circleci/node:10
  environment:
    NPM_CONFIG_LOGLEVEL: error # make npm commands less noisy
    JOBS: max # https://gist.github.com/ralphtheninja/f7c45bdee00784b41fed

version: 2
jobs:
  build:
    <<: *defaults
    steps:
      - checkout
      - run: npm i
      - run: npm run eslint:ci

After these steps are completed and after CircleCI has been configured in GitHub (more on that here), CircleCI will pick up .circleci/config.yml and lint JavaScript in a CI process when a pull request is submitted.
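Note that the config above calls npm run eslint:ci, which assumes a matching script exists in package.json, something like:

"eslint:ci": "eslint ."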

I created a folder with examples in this demo repository to show ideas for configuring CI with config.yml files, and you can reference it for your own project or use the files as a starting point.

There are even more CI tools that can be set up to help save developers more time, like auto-merging, auto-updating, monitoring, and much more!

Summary

We covered a lot here! To sum things up, setting up CI is very doable and can even be free of cost. With additional tooling (both paid and open source), we can have more time to code, and more time to write more tests for CI — or enjoy more life away from the screen!

Here are some demo repositories to help developers get setup fast or learn. Please feel free to reach out within the repositories with questions, ideas or improvements.


Demystifying JavaScript Testing

Many people have messaged me, confused about where to get started with testing. Just like everything else in software, we work hard to build abstractions to make our jobs easier. But that amount of abstraction evolves over time, until the only ones who really understand it are the ones who built the abstraction in the first place. Everyone else is left with taking the terms, APIs, and tools at face value and struggling to make things work.

One thing I believe about abstraction in code is that the abstraction is not magic — it’s code. Another thing I believe about abstraction in code is that it’s easier to learn by doing.

Imagine that a less seasoned engineer approaches you. They’re hungry to learn, they want to be confident in their code, and they’re ready to start testing. 👍 Ever prepared to learn from you, they’ve written down a list of terms, APIs, and concepts they’d like you to define for them:

  • Assertion
  • Testing Framework
  • The describe/it/beforeEach/afterEach/test functions
  • Mocks/Stubs/Test Doubles/Spies
  • Unit/Integration/End to end/Functional/Accessibility/Acceptance/Manual testing

So…

Could you rattle off definitions for that budding engineer? Can you explain the difference between an assertion library and a testing framework? Or, are they easier for you to identify than explain?

Here’s the point. The better you understand these terms and abstractions, the more effective you will be at teaching them. And if you can teach them, you’ll be more effective at using them, too.

Enter a teach-an-engineer-to-fish moment. Did you know that you can write your own assertion library and testing framework? We often think of these abstractions as beyond our capabilities, but they’re not. Each of the popular assertion libraries and frameworks started with a single line of code, followed by another and then another. You don’t need any tools to write a simple test.

Here’s an example:

const {sum} = require('../math')

const result = sum(3, 7)
const expected = 10

if (result !== expected) {
  throw new Error(`${result} is not equal to ${expected}`)
}

Put that in a module called test.js and run it with node test.js and, poof, you can start getting confident that the sum function from the math.js module is working as expected. Make that run on CI and you can get the confidence that it won’t break as changes are made to the codebase. 🏆

Let’s see what a failure would look like with this:

Terminal window showing an error indicating -4 is not equal to 10.

So apparently our sum function is subtracting rather than adding and we’ve been able to automatically detect that through this script. All we need to do now is fix the sum function, run our test script again and:

Terminal window showing that we ran our test script and no errors were logged.

Fantastic! The script exited without an error, so we know that the sum function is working. This is the essence of a testing framework. There’s a lot more to it (e.g. nicer error messages, better assertions, etc.), but this is a good starting point to understand the foundations.
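If you want to push the homemade approach one step further, here’s a sketch of a tiny test helper and assertion in the same spirit (the names test and expect are just mimicking the popular tools):

// A bare-bones "framework": report pass/fail instead of crashing on the first error
function test(title, callback) {
  try {
    callback()
    console.log(`✓ ${title}`)
  } catch (error) {
    console.error(`✗ ${title}`)
    console.error(error)
  }
}

// A bare-bones "assertion library"
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`${actual} is not equal to ${expected}`)
      }
    },
  }
}

const {sum} = require('../math')

test('sum adds numbers', () => {
  expect(sum(3, 7)).toBe(10)
})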

Once you understand how the abstractions work at a fundamental level, you’ll probably want to use them because, hey, you just learned to fish and now you can go fishing. And we have some pretty phenomenal fishing poles, uh, tools available to us. My favorite is the Jest testing platform. It’s amazingly capable, fully featured and allows me to write tests that give me the confidence I need to not break things as I change code.

I feel like fundamentals are so important that I included an entire module about it on TestingJavaScript.com. This is the place where you can learn the smart, efficient way to test any JavaScript application. I’m really happy with what I’ve created for you. I think it’ll help accelerate your understanding of testing tools and abstractions by giving you the chance to implement parts from scratch. The (hopeful) result? You can start writing tests that are maintainable and built to instill confidence in your code day after day. 🎣

The early bird sale is going on right now! 40% off every tier! The sale is going away in the next few days so grab this ASAP!

TestingJavaScript.com – Learn the smart, efficient way to test any JavaScript application.

P.S. Give this a try: tweet your answer to “What’s the difference between a testing framework and an assertion library?” In my course, I’ll not only explain it, we’ll build our own!


Hand roll charts with D3 like you actually know what you’re doing

Charts! My least favorite subject besides Social Studies. But you just won’t get very far in this industry before someone wants you to make a chart. I don’t know what it is with people and charts, but apparently we can’t have a civilization without a bar chart showing Maggie’s sales for last month so by ALL MEANS — let’s make a chart.

Yes, I know this is not how you would display this data. I’m trying to make a point here.

To prepare you for that impending “OMG I’m going to have to make a chart” existential crisis that, much like death, we like to pretend is never going to happen, I’m going to show you how to hand-roll your own scatter plot graph with D3.js. This article is heavy on the code side and your first glance at the finished code is going to trigger your “fight or flight” response. But if you can get through this article, I think you will be surprised at how well you understand D3 and how confident you are that you can go make some other chart that you would rather not make.

Before we do that, though, it’s important to talk about WHY you would ever want to roll your own chart.

Building vs. Buying

When you do have to chart, you will likely reach for something that comes “out of the box.” You would never ever hand-roll a chart. The same way you would never sit around and smash your thumb with a hammer; it’s rather painful and there are more productive ways to use your hammer. Charts are rather complex user interface items. It’s not like you’re center-aligning some text in a div here. Libraries like Chart.js or Kendo UI have pre-made charts that you can just point at your data. Developers have spent thousands of hours perfecting these charts. You would never ever build one of these yourself.

Or would you?

Charting libraries are fantastic, but they do impose a certain amount of restrictions on you…and sometimes they actually make it harder to do even the simple things. As Peter Parker’s uncle said before he over-acted his dying scene in Spiderman, “With great charting libraries comes great trade-off in flexibility.”

Toby never should have been Spiderman. FITE ME.

This is exactly the scenario I found myself in when my colleague, Jasmine Greenaway, and I decided that we could use charts to figure out who @horse_js is. In case you aren’t already a big @horse_js fan, it’s a Twitter parody account that quotes people out of context. It’s extremely awesome.

We pulled every tweet from @horse_js for the past two years. We stuck that in a Cosmos DB database and then created an Azure Function endpoint to expose the data.

And then, with a sinking feeling in our stomachs, we realized that we needed a chart. We wanted to be able to see what the data looked like as it occurred over time. We thought being able to see the data visually in a Time Series Analysis might help us identify some pattern or gain some insight about the twitter account. And indeed, it did.

We charted every tweet that @horse_js has posted in the last two years. When we look at that data on a scatter plot, it looks like this:

See the Pen wYxYNd by Burke Holland (@burkeholland) on CodePen.

Coincidentally, this is the thing we are going to build in this article.

Each tweet is displayed with the date on the x-axis, and the time of day on the y. I thought this would be easy to do with a charting library, but all the ones I tried weren’t really equipped to handle the scenario of a date across the x and a time on the y. I also couldn’t find any examples of people doing it online. Am I breaking new ground here? Am I a data visualization pioneer?

Probably. Definitely.

So, let’s take a look at how we can build this breathtaking scatter plot using D3.

Getting started with D3

Here’s the thing about D3: it looks pretty awful. I just want to get that out there so we can stop pretending like D3 code is fun to look at. It’s not. There’s no shame in saying that. Now that we’ve invited that elephant in the room to the tea party, allow me to insinuate that even though D3 code looks pretty bad, it’s actually not. There’s just a lot of it.

To get started, we need D3. I am using the CDN include for D3 5 for these examples. I’m also using Moment to work with the dates, which we’ll get to later.

<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.7.0/d3.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.22.2/moment.min.js"></script>

D3 works with SVG. That’s what it does. It basically marries SVG with data and provides some handy pre-built mechanisms for visualizing it — things such as axis. Or Axees? Axises? Whatever the plural of “axis” is. But for now, just know that it’s like jQuery for SVG.

So, the first thing we need is an SVG element to work with.

<svg id="chart"></svg>

OK. Now we’re ready to start D3’ing our way to data visualization infamy. The first thing we’re going to do is make our scatter plot a class. We want to make this thing as generic as possible so that we can re-use it with other sets of data. We’ll start with a constructor that takes two parameters. The first will be the class or id of the element we are about to work with (in our case, that’s #chart) and the second is an object that will allow us to pass in any parameters that might vary from chart-to-chart (e.g. data, width, etc.).

class ScatterPlot {
  constructor(el, options) { }
}

The chart code itself will go in a render function, which will also require the data set we’re working with to be passed.

class ScatterPlot {
  constructor(el, options) {
    this.render(options.data);
  }

  render(data) { }
}

The first thing we’ll do in our render method is set some size values and margins for our chart.

class ScatterPlot {
  constructor(el, options) {
    this.data = options.data || [];
    this.width = options.width || 500;
    this.height = options.height || 400;
    this.render();
  }

  render() {
    let margin = { top: 20, right: 20, bottom: 50, left: 60 };
    let height = this.height || 400;
    let width = (this.width || 500) - margin.right - margin.left;
    let data = this.data;
  }
}

I mentioned that D3 is like jQuery for SVG, and I think that analogy sticks. So you can see what I mean, let’s make a simple SVG drawing with D3.

For starters, you need to select the DOM element that SVG is going to work with. Once you do that, you can start appending things and setting their attributes. D3, just like jQuery, is built on the concept of chaining, so each function that you call returns an instance of the element on which you called it. In this manner, you can keep on adding elements and attributes until the cows come home.

For instance, let’s say we wanted to draw a square. With D3, we can draw a rectangle (in SVG that’s a rect), adding the necessary attributes along the way.
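It might look something like this (a sketch of the idea; the sizes and color are arbitrary):

let svg = d3.select('#chart')
  .attr('width', 100)
  .attr('height', 100);

// chain attribute setters to turn the rect into an 80x80 square
svg.append('rect')
  .attr('x', 10)
  .attr('y', 10)
  .attr('width', 80)
  .attr('height', 80)
  .attr('fill', 'rebeccapurple');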

See the Pen zmdpJZ by Burke Holland (@burkeholland) on CodePen.

NOW. At this point you will say, “But I don’t know SVG.” Well, I don’t either. But I do know how to Google and there is no shortage of articles on how to do pretty much anything in SVG.

So, how do we get from a rectangle to a chart? This is where D3 becomes way more than just “jQuery for drawing.”

First, let’s create a chart. We start with an empty SVG element in our markup. We use D3 to select that empty svg element (called #chart) and define its width and height as well as margins.

// create the chart
this.chart = d3.select(this.el)
  .attr('width', width + margin.right + margin.left)
  .attr('height', height + margin.top + margin.bottom);

And here’s what it looks like:

See the Pen EdpOqy by Burke Holland (@burkeholland) on CodePen.

AMAZING! Nothing there. If you open the dev tools, you’ll see that there is something there. It’s just an empty something. Kind of like my soul.

That’s your chart! Let’s go about putting some data in it. For that, we are going to need to define our x and y-axis.

That’s pretty easy in D3. You call the axisBottom method. Here, I am also formatting the tick marks with the right date format to display.

let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

I am also passing an “x” parameter to the axisBottom method. What is that? That is called a scale.

D3 scales

D3 has something called scales. Scales are just a way of telling D3 where to put your data and D3 has a lot of different types of scales. The most common kind would be linear — like a scale of data from 1 to 10. It also contains a scale just for time series data — which is what we need for this chart. We can use the scaleTime method to define a “scale” for our x-axis.

// define the x-axis
let minDateValue = d3.min(data, d => {
  return new Date(moment(d.created_at).format('MM-DD-YYYY'));
});
let maxDateValue = d3.max(data, d => {
  return new Date(moment(d.created_at).format('MM-DD-YYYY'));
});

let x = d3.scaleTime()
  .domain([minDateValue, maxDateValue])
  .range([0, width]);

let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

D3 scales use some terminology that is slightly intimidating. There are two main concepts to understand here: domains and ranges.

  • Domain: The range of possible values in your data set. In my case, I’m getting the minimum date from the array, and the maximum date from the array. Every other value in the data set falls between these two endpoints — so those “endpoints” define my domain.
  • Range: The range over which to display your data set. In other words, how spread out do you want your data to be? In our case, we want it constrained to the width of the chart, so we just pass width as the second parameter. If we passed a value like, say, 10000, our data would be spread out over 10,000 pixels. If we passed no value at all, it would draw all of the data on top of itself on the left-hand side of the chart… like the following image.
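If domains and ranges still feel abstract, here is a quick sketch with a simpler linear scale (separate from our chart):

// map data values 0-100 (domain) onto 0-500 pixels (range)
let score = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

score(50); // 250 — halfway through the domain lands halfway through the range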

The y-axis is built in the same way. Only, for it, we are going to be formatting our data for time, not date.

// define the y-axis
let minTimeValue = new Date().setHours(0, 0, 0, 0);
let maxTimeValue = new Date().setHours(23, 59, 59, 999);

let y = d3.scaleTime()
  .domain([minTimeValue, maxTimeValue])
  .nice(d3.timeDay)
  .range([height, 0]);

let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

The extra nice method call on the y scale tells the y-axis to format this time scale nicely. If we don’t include that, it won’t have a label for the top-most tick on the left-hand side because it only goes to 11:59:59 PM, rather than all the way to midnight. It’s a quirk, but we’re not making crap here. We need labels on all our ticks.

Now we’re ready to draw our axis to the chart. Remember that our chart has some margins on it. In order to properly position the items inside of our chart, we are going to create a grouping (g) element and set its width and height. Then, we can draw all of our elements in that container.

let main = this.chart.append('g')
  .attr('transform', `translate(${margin.left}, ${margin.top})`)
  .attr('width', width)
  .attr('height', height)
  .attr('class', 'main');

We’re drawing our container, accounting for margin and setting its width and height. Yes. I know. It’s tedious. But such is the state of laying things out in a browser. When was the last time you tried to horizontally and vertically center content in a div? Yeah, not so awesome prior to Flexbox and CSS Grid.

Now, we can draw our x-axis:

main.append('g')
  .attr('transform', `translate(0, ${height})`)
  .attr('class', 'main axis date')
  .call(xAxis);
We make a container element, and then “call” the xAxis that we defined earlier. D3 draws things starting at the top-left, so we use the transform attribute to offset the x-axis from the top so it appears at the bottom. If we didn’t do that, our chart would look like this…

By specifying the transform, we push it to the bottom. Now for the y-axis:

main.append('g')
  .attr('class', 'main axis date')
  .call(yAxis);

Let’s look at all the code we have so far, and then we’ll see what this outputs to the screen.

class ScatterPlot {
  constructor(el, options) {
    this.el = el;
    if (options) {
      this.data = options.data || [];
      this.tooltip = options.tooltip;
      this.pointClass = options.pointClass || '';
      this.width = options.width || 500;
      this.height = options.height || 400;
      this.render();
    }
  }

  render() {
    let margin = { top: 20, right: 15, bottom: 60, left: 60 };
    let height = this.height || 400;
    let width = (this.width || 500) - margin.right - margin.left;
    let data = this.data;

    // create the chart
    let chart = d3.select(this.el)
      .attr('width', width + margin.right + margin.left)
      .attr('height', height + margin.top + margin.bottom);

    // define the x-axis
    let minDateValue = d3.min(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });
    let maxDateValue = d3.max(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });

    let x = d3.scaleTime()
      .domain([minDateValue, maxDateValue])
      .range([0, width]);

    let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

    // define the y-axis
    let minTimeValue = new Date().setHours(0, 0, 0, 0);
    let maxTimeValue = new Date().setHours(23, 59, 59, 999);

    let y = d3.scaleTime()
      .domain([minTimeValue, maxTimeValue])
      .nice(d3.timeDay)
      .range([height, 0]);

    let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

    // define our content area
    let main = chart.append('g')
      .attr('transform', `translate(${margin.left}, ${margin.top})`)
      .attr('width', width)
      .attr('height', height)
      .attr('class', 'main');

    // draw the x-axis
    main.append('g')
      .attr('transform', `translate(0, ${height})`)
      .attr('class', 'main axis date')
      .call(xAxis);

    // draw the y-axis
    main.append('g')
      .attr('class', 'main axis date')
      .call(yAxis);
  }
}

See the Pen oaeybM by Burke Holland (@burkeholland) on CodePen.

We’ve got a chart! Call your friends! Call your parents! IMPOSSIBLE IS NOTHING!

Axis labels

Now let’s add some chart labels. By now you may have figured out that when it comes to D3, you are doing pretty much everything by hand. Adding axis labels is no different. All we are going to do is add an SVG text element, set its value and position it. That’s all.

For the x-axis, we can add the text label and position it using translate. We set its x position to the middle (width / 2) of the chart, plus the left-hand margin to make sure we are centered under just the chart. I’m also using a CSS class for axis-label that has text-anchor: middle to make sure our text is originating from the center of the text element.

// text label for the x-axis
chart.append("text")
  .attr("transform",
        "translate(" + ((width / 2) + margin.left) + ", " +
        (height + margin.top + margin.bottom) + ")")
  .attr('class', 'axis-label')
  .text("Date Of Tweet");

The y-axis is the same concept — a text element that we manually position. This one is positioned with absolute x and y attributes. This is because our transform is used to rotate the label, so we use the x and y properties to position it.

Remember: Once you rotate an element, x and y rotate with it. That means that when the text element is on its side like it is here, y now pushes it left and right and x pushes it up and down. Confused yet? It’s OK, you’re in great company.

// text label for the y-axis
chart.append("text")
  .attr("transform", "rotate(-90)")
  .attr("y", 10)
  .attr("x", 0 - ((height / 2) + margin.top + margin.bottom))
  .attr('class', 'axis-label')
  .text("Time of Tweet - CST (-6)");

See the Pen oaeybM by Burke Holland (@burkeholland) on CodePen.

Now, like I said — it’s a LOT of code. That’s undeniable. But it’s not super complex code. It’s like LEGO: LEGO blocks are simple, but you can build pretty complex things with them. What I’m trying to say is it’s a highly sophisticated interlocking brick system.

Now that we have a chart, it’s time to draw our data.

Drawing the data points

This is fairly straightforward. As usual, we create a grouping to put all our circles in. Then we loop over each item in our data set and draw an SVG circle. We have to set the position of each circle (cx and cy) based on the current data item’s date and time value. Lastly, we set its radius (r), which controls how big the circle is.

let circles = main.append('g');

data.forEach(item => {
  circles.append('svg:circle')
    .attr('class', this.pointClass)
    .attr('cx', d => {
      return x(new Date(item.created_at));
    })
    .attr('cy', d => {
      let today = new Date();
      let time = new Date(item.created_at);
      return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
    })
    .attr('r', 5);
});

When we set the cx and cy values, we use the scale (x or y) that we defined earlier. We pass that scale the date or time value of the current data item and the scale will give us back the correct position on the chart for this item.

And, my good friend, we have a real chart with some real data in it.

See the Pen VEzdrR by Burke Holland (@burkeholland) on CodePen.

Lastly, let’s add some animation to this chart. D3 has some nice easing functions that we can use here. What we do is define a transition on each one of our circles. Basically, anything that comes after the transition method gets animated. Since D3 draws everything from the top-left, we can set the x position first and then animate the y. The result is the dots look like they are falling into place. We can use D3’s nifty easeBounce easing function to make those dots bounce when they fall.

data.forEach(item => {
  circles.append('svg:circle')
    .attr('class', this.pointClass)
    .attr('cx', d => {
      return x(new Date(item.created_at));
    })
    .transition()
    .duration(Math.floor(Math.random() * (3000 - 2000) + 1000))
    .ease(d3.easeBounce)
    .attr('cy', d => {
      let today = new Date();
      let time = new Date(item.created_at);
      return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
    })
    .attr('r', 5);
});

OK, so one more time, all together now…

class ScatterPlot {
  constructor(el, options) {
    this.el = el;
    this.data = options.data || [];
    this.pointClass = options.pointClass || '';
    this.width = options.width || 960;
    this.height = options.height || 500;
    this.render();
  }

  render() {
    let margin = { top: 20, right: 20, bottom: 50, left: 60 };
    let height = this.height - margin.bottom - margin.top;
    let width = this.width - margin.right - margin.left;
    let data = this.data;

    // create the chart
    let chart = d3.select(this.el)
      .attr('width', width + margin.right + margin.left)
      .attr('height', height + margin.top + margin.bottom);

    // define the x-axis
    let minDateValue = d3.min(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });
    let maxDateValue = d3.max(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });

    let x = d3.scaleTime()
      .domain([minDateValue, maxDateValue])
      .range([0, width]);

    let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

    // define the y-axis
    let minTimeValue = new Date().setHours(0, 0, 0, 0);
    let maxTimeValue = new Date().setHours(23, 59, 59, 999);

    let y = d3.scaleTime()
      .domain([minTimeValue, maxTimeValue])
      .nice(d3.timeDay)
      .range([height, 0]);

    let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

    // define our content area
    let main = chart.append('g')
      .attr('transform', `translate(${margin.left}, ${margin.top})`)
      .attr('width', width)
      .attr('height', height)
      .attr('class', 'main');

    // draw the x-axis
    main.append('g')
      .attr('transform', `translate(0, ${height})`)
      .attr('class', 'main axis date')
      .call(xAxis);

    // draw the y-axis
    main.append('g')
      .attr('class', 'main axis date')
      .call(yAxis);

    // text label for the x-axis
    chart.append("text")
      .attr("transform",
            "translate(" + ((width / 2) + margin.left) + ", " +
            (height + margin.top + margin.bottom) + ")")
      .attr('class', 'axis-label')
      .text("Date Of Tweet");

    // text label for the y-axis
    chart.append("text")
      .attr("transform", "rotate(-90)")
      .attr("y", 10)
      .attr("x", 0 - ((height / 2) + margin.top + margin.bottom))
      .attr('class', 'axis-label')
      .text("Time of Tweet - CST (-6)");

    // draw the data points
    let circles = main.append('g');
    data.forEach(item => {
      circles.append('svg:circle')
        .attr('class', this.pointClass)
        .attr('cx', d => {
          return x(new Date(item.created_at));
        })
        .transition()
        .duration(Math.floor(Math.random() * (3000 - 2000) + 1000))
        .ease(d3.easeBounce)
        .attr('cy', d => {
          let today = new Date();
          let time = new Date(item.created_at);
          return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
        })
        .attr('r', 5);
    });
  }
}

We can now make a call for some data and render this chart…

// get the data
fetch('https://s3-us-west-2.amazonaws.com/s.cdpn.io/4548/time-series.json')
  .then(d => d.json())
  .then(data => {
    // massage the data a bit to get it in the right format
    let horseData = data.map(item => {
      return item.horse;
    });

    // create the chart
    let chart = new ScatterPlot('#chart', {
      data: horseData,
      width: 960
    });
  });

And here is the whole thing, complete with a call to our Azure Function returning the data from Cosmos DB. It’s a TON of data, so be patient while we chew up all your bandwidth.

See the Pen GYvGep by Burke Holland (@burkeholland) on CodePen.

If you made it this far, I…well, I’m impressed. D3 is not an easy thing to get into. It simply doesn’t look like it’s going to be any fun. BUT, no thumbs were smashed here, and we now have complete control of this chart. We can do anything we like with it.

Check out some of these additional resources for D3, and good luck with your chart. You can do it! Or you can’t. Either way, someone has to make a chart, and it might as well be you.


The post Hand roll charts with D3 like you actually know what you’re doing appeared first on CSS-Tricks.

Durable Functions: Fan Out Fan In Patterns

This post is a collaboration between myself and my awesome coworker, Maxime Rouiller.

Durable Functions? Wat. If you’re new to Durable, I suggest you start here with this post that covers all the essentials so that you can properly dive in. In this post, we’re going to dive into one particular use case so that you can see a Durable Function pattern at work!

Today, let’s talk about the Fan Out, Fan In pattern. We’ll do so by retrieving an open issue count from GitHub and then storing what we get. Here’s the repo where all the code lives that we’ll walk through in this post.

View Repo

About the Fan Out/Fan In Pattern

We briefly mentioned this pattern in the previous article, so let’s review. You’d likely reach for this pattern when you need to execute multiple functions in parallel and then perform some other task with those results. You can imagine that this pattern is useful for quite a lot of projects, because it’s pretty often that we have to do one thing based on data from a few other sources.

For example, let’s say you are a takeout restaurant with a ton of orders coming through. You might use this pattern to first get the order, then use that order to figure out prices for all the items, the availability of those items, and see if any of them have any sales or deals. Perhaps the sales/deals are not hosted in the same place as your prices because they are controlled by an outside sales firm. You might also need to find out what your delivery queue is like and who on your staff should get it based on their location.

That’s a lot of coordination! But you’d need to then aggregate all of that information to complete the order and process it. This is a simplified, contrived example of course, but you can see how useful it is to work on a few things concurrently so that they can then be used by one final function.

Here’s what that looks like, in abstract code and a visualization:

See the Pen Durable Functions: Pattern #2, Fan Out, Fan In by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')

module.exports = df(function*(ctx) {
  // items to process concurrently, added to an array
  const tasks = []
  const taskItems = yield ctx.df.callActivityAsync('fn1')
  taskItems.forEach(item => tasks.push(ctx.df.callActivityAsync('fn2', item)))

  // wait for all of the concurrent tasks to finish
  const results = yield ctx.df.Task.all(tasks)

  // send results to the last function for processing
  yield ctx.df.callActivityAsync('fn3', results)
})

Now that we see why we would want to use this pattern, let’s dive into a simplified example that explains how.

Setting up your environment to work with Durable Functions

First things first: we’ve got to get our development environment ready to work with Durable Functions. Let’s break that down.

GitHub Personal Access Token

To run this sample, you’ll need to create a personal access token in GitHub. Go under your account photo, open the dropdown, and select Settings, then Developer settings in the left sidebar. In the same sidebar on the next screen, click the Personal access tokens option.

Then a prompt will come up and you can click the Generate new token button. You should give your token a name that makes sense for this project. Like “Durable functions are better than burritos.” You know, something standard like that.

For the scopes/permissions option, I suggest selecting “repo,” which then allows you to click the Generate token button and copy the token to your clipboard. Please keep in mind that you should never commit your token. (It will be revoked if you do. Ask me why I know that.) If you need more info on creating tokens, there are further instructions here.

Functions CLI

First, we’ll install the latest version of the Azure Functions CLI. We can do so by running this in our terminal:

npm i -g azure-functions-core-tools@core --unsafe-perm true

Does the unsafe-perm flag freak you out? It did for me as well. Really, what it’s doing is preventing UID/GID switching when package scripts run, which is necessary because the package itself is a JavaScript wrapper around .NET. Installing with Homebrew, which doesn’t need the flag, is also an option, and there’s more information about that here.
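If you do go the Homebrew route, the install presumably looks something like this (this is the documented tap for the Functions tools, but double-check the project README in case it has moved):

brew tap azure/functions
brew install azure-functions-core-tools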

Optional: Setting up the project in VS Code

Totally not necessary, but I like working in VS Code with Azure Functions because it has great local debugging, which is typically a pain with serverless functions. If you haven’t already installed it, you can do so here.

Set up a Free Trial for Azure and Create a Storage Account

To run this sample, you’ll need to test drive a free trial for Azure. Go into the portal and sign in. You’ll make a new Blob Storage account and retrieve the keys. Once we have that all squared away, we’re ready to rock!
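If you’d rather stay in the terminal than click through the portal, a hypothetical CLI equivalent looks roughly like this (all names here are placeholders):

# create a resource group and a storage account, then grab the account's keys
az group create --name my-durable-group --location westus
az storage account create --name mydurablestorage --resource-group my-durable-group --sku Standard_LRS
az storage account keys list --name mydurablestorage --resource-group my-durable-group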

Setting up Our Durable Function

Let’s take a look at the repo we have set up. We’ll clone or fork it:

git clone https://github.com/Azure-Samples/durablefunctions-apiscraping-nodejs.git 

Here’s what that initial file structure is like.

file structure for the durable function repo

(This visualization was made from my CLI tool.)

In local.settings.json, change GitHubToken to the value you grabbed from GitHub earlier, and do the same for the two storage keys — paste in the keys from the storage account you set up earlier.
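For reference, local.settings.json follows the standard Azure Functions shape. The exact key names come from the repo, but it will look something like this (values are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<your storage connection string>",
    "GitHubToken": "<your GitHub personal access token>"
  }
}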

Then run:

func extensions install
npm i
func host start

And now we’re running locally!

Understanding the Orchestrator

As you can see, we have a number of folders within the FanOutFanInCrawler directory. The functions in the GetAllRepositoriesForOrganization, GetOpenedIssues, and SaveRepositories directories are the ones we’ll be coordinating.

Here’s what we’ll be doing:

  • The Orchestrator will kick off the GetAllRepositoriesForOrganization function, where we’ll pass in the organization name, retrieved from getInput() from the Orchestrator_HttpStart function
  • Since this is likely to be more than one repo, we’ll first create an empty array, then loop through all of the repos, run GetOpenedIssues for each one, and push those calls onto the array. Everything here fires concurrently because we don’t yield on the individual calls inside the loop
  • Then we’ll wait for all of the tasks to finish executing and finally call SaveRepositories which will store all of the results in Blob Storage

Since the other functions are fairly standard, let’s dig into that Orchestrator for a minute. If we look inside the Orchestrator directory, we can see it has a fairly traditional setup for a function with index.js and function.json files.
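The index.js file holds the orchestrator logic we’re about to walk through, while function.json declares the binding that marks it as an orchestrator. For a Durable orchestrator, that file typically looks something like this:

{
  "bindings": [
    {
      "name": "context",
      "type": "orchestrationTrigger",
      "direction": "in"
    }
  ]
}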

Generators

Before we dive into the Orchestrator, let’s take a very brief side tour into generators, because you won’t be able to understand the rest of the code without them.

A generator is not the only way to write this code! It could be accomplished with other asynchronous JavaScript patterns as well. It just so happens that this is a pretty clean and legible way to write it, so let’s look at it really fast.

function* generator(i) {
  yield i++;
  yield i++;
  yield i++;
}

var gen = generator(1);

console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3
console.log(gen.next()); // {value: undefined, done: true}

After the initial little asterisk following function*, you can begin to use the yield keyword. Calling a generator function does not execute the whole function in its entirety; an iterator object is returned instead. The next() method will walk over them one by one, and we’ll be given an object that tells us both the value and done — which will be a boolean of whether we’re done walking through all of the yield statements. You can see in the example above that for the last .next() call, an object is returned where done is true, letting us know we’ve iterated through all values.

Orchestrator code

We’ll start with the require statement we’ll need for this to work:

const df = require('durable-functions')

module.exports = df(function*(context) {
  // our orchestrator code will go here
})

It’s worth noting that the asterisk there is what makes this a generator function.

First, we’ll get the organization name from the Orchestrator_HttpStart function and get all the repos for that organization with GetAllRepositoriesForOrganization. Note we use yield within the repositories assignment to make the function perform in sequential order.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )
})

Then we’re going to create an empty array named output, create a for loop from the array we got containing all of the organization’s repos, and use that to push the issues into the array. Note that we don’t use yield here so that they’re all running concurrently instead of waiting one after another.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }
})

Finally, when all of these executions are done, we’re going to store the results and pass that in to the SaveRepositories function, which will save them to Blob Storage. Then we’ll return the unique ID of the instance (context.instanceId).

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }

  const results = yield context.df.Task.all(output)
  yield context.df.callActivityAsync('SaveRepositories', results)

  return context.instanceId
})

Now we’ve got all the steps we need to manage all of our functions with this single orchestrator!
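For completeness, the Orchestrator_HttpStart function that kicks everything off is a standard Durable client function. Here’s a rough sketch of that shape (the parameter names here are illustrative; check the repo for the real ones):

const df = require('durable-functions')

module.exports = async function (context, req) {
  const client = df.getClient(context)

  // whatever we pass as the third argument is what
  // context.df.getInput() hands back inside the orchestrator
  const instanceId = await client.startNew('Orchestrator', undefined, req.params.organizationName)

  // returns the status-check URLs for this orchestration instance
  return client.createCheckStatusResponse(context.bindingData.req, instanceId)
}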

Deploy

Now the fun part. Let’s deploy! 🚀

To deploy the components, Azure requires you to install the Azure CLI and log in with it.
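On macOS, that looks something like this (see the Azure docs for other platforms):

# install the Azure CLI
brew update && brew install azure-cli

# authenticate
az login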

First, you will need to provision the service. Look into the provision.ps1 file that’s provided to familiarize yourself with the resources we are going to create. Then, you can execute the file with the previously generated GitHub token like this:

.\provision.ps1 -githubToken <TOKEN> -resourceGroup <ResourceGroupName> -storageName <StorageAccountName> -functionName <FunctionName>

If you don’t want to install PowerShell, you can also take the commands within provision.ps1 and run them manually.
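As a hypothetical sketch of the kind of thing the script wraps (the real source of truth is provision.ps1 itself; names here are placeholders), the core resources boil down to a resource group, a storage account, and a function app:

az group create --name <ResourceGroupName> --location westus
az storage account create --name <StorageAccountName> --resource-group <ResourceGroupName> --sku Standard_LRS
az functionapp create --name <FunctionName> --resource-group <ResourceGroupName> \
  --storage-account <StorageAccountName> --consumption-plan-location westus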

And there we have it! Our Durable Function is up and running.

The post Durable Functions: Fan Out Fan In Patterns appeared first on CSS-Tricks.

Introducing GitHub Actions

It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests for you, so you’re not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment… all in one place.

Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help, too.

But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it’s not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm and deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit… you name it).

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more… the sky really is the limit.

You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms for open source possibilities and ecosystems.
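Concretely, that pointing happens through the uses attribute, which we’ll dig into below. It accepts a few different forms (only one per action; the alternates are commented out here, and the names are made up):

action "example" {
  uses = "owner/some-action-repo@master"  # someone else's repo, pinned to a ref
  # uses = "docker://alpine/git:latest"   # an image hosted on Docker Hub
  # uses = "./.github/my-action"          # a path in this repo containing a Dockerfile
}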

Setting up your first action

There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

A screenshot of the GitHub Actions beta site showing a large blue button to click to join the beta.
The GitHub Actions beta site.

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. I can already notice that I have a new tab on my repo, called Actions:

A screenshot of the sample repo showing the Actions tab in the menu.

If I click on the Actions tab, this screen shows:

the screen that shows when you first open the Actions tab

I click “Create a New Workflow,” and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

new workflow

Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

show all of the action options in the sidebar

There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it’s capable of so much more!

I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we’ll walk through together.

shows options for azure in the sidebar

At the top where you see the heading “GitHub Action for Azure,” there’s a “View source” link. That will take you directly to the repo that’s used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the “uses” option in the Actions panel.

Here’s a rundown of the options we’re provided:

  • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what’s creating the connection between them. This piece is abstracted away for you in the GUI, but you’ll see in the next section that, if you’re working in code, you’ll need to keep the references the same to have the chaining work.
  • Runs allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
  • Args: This is what you’d expect — it allows you to pass arguments to the container.
  • secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs one token to deploy, you’d probably use a secret here to pass that in.

Many of these actions have readmes that tell you what you need. The setup for “secrets” and “env” usually looks something like this:

action "deploy" { uses = ... secrets = [ "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET", ]
}

You can also string multiple actions together in this GUI. It’s very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.
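In the underlying file, that chaining is expressed with a needs attribute. Here’s a sketch with made-up action names: two actions fan out in parallel, and a third waits on both:

workflow "build and deploy" {
  on = "push"
  resolves = ["deploy"]
}

action "test" {
  uses = "./.github/test"
}

action "lint" {
  uses = "./.github/lint"
}

action "deploy" {
  uses = "./.github/deploy"
  needs = ["test", "lint"]  # waits for both; test and lint run in parallel
}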

Writing an action in code

So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo’s master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

Create the app services account

If you’re using other services, this part will change, but you do need an existing service in whatever platform you’re using in order to deploy there.

First you’ll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

brew update && brew install azure-cli

Then, we’ll log in to Azure by running:

az login

Now, we’ll create a Service Principal by running:

az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will pass us this bit of output that we’ll use in creating our action:

{ "appId": "APP_ID", "displayName": "ServicePrincipalName", "name": "http://ServicePrincipalName", "password": ..., "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

What’s in an action?

Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

workflow "Name of Workflow" { on = "push" resolves = ["deploy"]
} action "deploy" { uses = "actions/someaction" secrets = [ "TOKEN", ]
}

We can see that we kick off the workflow and specify that we want it to run on push (on = "push"). There are many other options you can use as well; the full list is here.

The resolves line beneath it, resolves = ["deploy"], is an array of the actions that will be chained following the workflow. It doesn’t specify the order; rather, it’s a full list of everything. You can see that we named the action that follows “deploy” — these strings need to match; that’s how they reference one another.

Next, we’ll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here’s a list of all of them). But you can also use another person’s repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)

We may need some secrets or environment variables defined here and we would use them like this:

action "Deploy Webapp" { uses = ... args = "run some code here and use a $ENV_VARIABLE_NAME" secrets = ["SECRET_NAME"] env = { ENV_VARIABLE_NAME = "myEnvVariable" }
}

Creating a custom action

What we’re going to do with our custom action is take the commands we usually run to deploy a web app to Azure and write them in such a way that we can just pass in a few values, so the action executes it all for us. The files look more complicated than they are. Really, we’re taking that first base Azure action you saw in the GUI and building on top of it.

In entrypoint.sh:

#!/bin/sh set -e echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}" echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv` git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git git push azure master

A couple of interesting things to note about this file:

  • set -e in a shell script will make sure that if anything blows up, the rest of the file doesn’t keep evaluating.
  • The lines following “Getting username/password” look a little tricky — really what they’re doing is extracting the username and password from Azure’s publishing profiles. We can then use them in the following line of code where we add the remote.
  • You might also note that in those lines we passed in -o tsv. This is something we did to format the output so we could pass it directly into an environment variable, as tsv strips out excess headers and the like.

Now we can work on our main.workflow file!

workflow "New workflow" { on = "push" resolves = ["Deploy to Azure"]
} action "Deploy to Azure" { uses = "./.github/azdeploy" secrets = ["SERVICE_PASS"] env = { SERVICE_PRINCIPAL="http://sdrasApp", TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47", APPID="sdrasMoonshine" }
}

The workflow piece should look familiar to you — it’s kicking off on push and resolves to the action, called “Deploy to Azure.”

uses is pointing to a path within the repo (./.github/azdeploy), which is where we housed the other file. We need to add a secret so we can store our password for the app. We called it SERVICE_PASS, and we’ll configure it by going into the repo’s settings and adding it there:

adding a secret in settings

Finally, we have all of the environment variables we’ll need to run the commands. We got all of these from the earlier section where we created our App Services account. The tenant from earlier becomes TENANT_ID, name becomes SERVICE_PRINCIPAL, and APPID is actually whatever you’d like to name it 🙂

You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will have to edit the env variables manually within the main.workflow file as well — once you stop using the GUI, it doesn’t work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful “Hello World” app that redeploys whenever we push to master 🎉

successful workflow showing green
Hello World app screenshot

Game changing

GitHub actions aren’t only about websites, though you can see how handy they are for them. It’s a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

Normally when you create a Dockerfile, you would have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point it at a git repo with an existing Dockerfile in it, or something that’s hosted on Docker directly.

You also don’t need to host the image anywhere, as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action, which means you don’t have to maintain a separate repo for those Dockerfiles.

All in all, it’s pretty exciting. Partially because of the flexibility: on the one hand, you can choose to have a lot of abstraction and create the workflow you need with the GUI and existing actions, and on the other, you can write the code yourself, building and fine-tuning anything you want within a container, and even chaining multiple reusable custom actions together. All in the same place you’re hosting your code.

The post Introducing GitHub Actions appeared first on CSS-Tricks.