Continuous Integration: The What, Why and How

Not long ago, I had a novice understanding of Continuous Integration (CI) and thought it seemed like an extra process that forces engineers to do extra work on already large projects. My team began to implement CI into projects and, after some hands-on experience, I realized its great benefits, not only to the company, but to me as an engineer! In this post, I will describe CI, the benefits I’ve discovered, and how to implement it quickly and for free.

CI and Continuous Delivery (CD) are usually discussed together. Covering both in a single post is a lot to write and read all at once, so we’ll only discuss CI here. Maybe I will cover CD in a future post. 😉


What is CI?

Continuous Integration, as I understand it, is a pattern of programming that combines testing, safety checks, and development practices so that code can be confidently and continuously pushed from a development branch to a production-ready branch.

Microsoft Word offers a rough analogy for CI. As words are written into the program, they are checked against spelling and grammar algorithms that assert the document’s general readability and spelling.

Why CI should be used everywhere

We’ve already touched on this a bit, but the biggest benefit of CI that I see is that it saves a lot of money by making engineers more productive. Specifically, it provides quicker feedback loops, easier integration, and it reduces bottlenecks. Directly correlating CI to company savings is hard because SaaS costs scale as the user base changes. So, if a developer wants to sell CI to the business, the calculator below can be utilized. Curious just how much it can save? My friend, David Inoa, created the following demo to help calculate the savings.

See the Pen Continuous Integration (CI) Company Cost Savings Estimator by David (@davidinoa) on CodePen.

What really excites me enough to shout from the rooftops is how CI can benefit you and me as developers!

For starters, CI will save you time. How much? We’re talking hours per week. How? Oh, do I want to tell you! CI automatically tests your code and lets you know if it is okay to be merged into a branch that goes to production. Think about how much time you would otherwise spend manually testing your code and coordinating with others to get it ready for production.

Then there’s the way it helps prevent code fatigue. It sports tools like Greenkeeper, which can automatically set up — and even merge — pull requests following a code review. This keeps dependencies up to date and allows developers to focus on what we really need to do. You know, like writing code or living life. Dependency updates usually only need human review for major version bumps, so there’s less need to track every minor release for breaking changes that require action.

CI takes a lot of the guesswork out of updating dependencies that otherwise would take a lot of research and testing.

No excuses, use CI!

When talking to developers, the conversation usually winds up something like:

“I would use CI but…[insert excuse].”

To me, that’s a cop out! CI can be free. It can also be easy. It’s true that the benefits of CI come with some costs, including monthly fees for tools like CircleCI or Greenkeeper. But that’s a drop in the bucket next to the long-term savings they provide. It’s also true that it will take time to set things up. But it’s worth calling out that the power of CI can be used for free on open source projects. If you need or want to keep your code private and don’t want to pay for CI tools, then you really can build your own CI setup with a few great npm packages.

So, enough with the excuses and behold the power of CI!

What problems does CI solve?

Before digging in much further, we should cover the use cases for CI. It solves a lot of issues and comes in handy in many situations:

  • When more than one developer wants to merge into a production branch at once
  • When mistakes are not caught or cannot be fixed before deployment
  • When dependencies are out of date
  • When developers have to wait extended periods of time to merge code
  • When packages are dependent on other packages
  • When a package is updated and must be changed in multiple places

CI tests updates and prevents bugs from being deployed.

Recommended CI tools

Let’s look at the high-level parts used to create a CI feedback loop, with some quick code bits to get CI set up for any open source project today. We’ll break this down into digestible chunks.

Documentation

In order to get CI working for me right away, I usually set it up to test a project’s initial documentation. Specifically, I use MarkdownLint and Write Good because they provide all the features and functionality I need to write tests for this part of the project.

The great news is that GitHub provides standard templates and there is a lot of content that can be copied to get documentation set up quickly. Read more about quickly setting up documentation and creating a documentation feedback loop.

I keep a package.json file at the root of the project and run a script command like this:

"grammar": "write-good *.md --no-passive", "markdownlint": "markdownlint *.md"

Those two lines allow me to start using CI. That’s it! I can now run CI to test grammar.
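If you’d rather give CI a single command to run, the two checks can be chained into one script. A minimal sketch (the test:docs name is my own invention, not a convention these tools require):

"scripts": {
  "grammar": "write-good *.md --no-passive",
  "markdownlint": "markdownlint *.md",
  "test:docs": "npm run grammar && npm run markdownlint"
}

Now npm run test:docs fails the build if either check reports a problem.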

At this point, I can move onto setting up CircleCI and Greenkeeper to help me make sure that packages are up to date. We’ll get to that in just a bit.

Unit testing

Unit tests are a method for testing small blocks (units) of code to ensure that the expected behavior of that block works as intended.

Unit tests provide a lot of help with CI. They define code quality and provide developers with feedback without having to push/merge/host code. Read more about unit tests and quickly setting up a unit test feedback loop.

Here is an example of a very basic unit test without using a library:

const addsOne = (num) => num + 1

// Add 1 to the number 3; we expect 4 as the value
const numPlus1 = addsOne(3)
// Add 1 to the string '3' to show what happens with the wrong type
const stringNumPlus1 = addsOne('3')

/**
 * console.assert
 * https://developer.mozilla.org/en-US/docs/Web/API/console/assert
 * @param test
 * @param string
 * @returns string if the test fails
 **/
console.assert(numPlus1 === 4, 'The variable `numPlus1` is not 4!')
console.assert(stringNumPlus1 === 4, 'The variable `stringNumPlus1` is not 4!')

Over time, it is nice to use libraries like Jest to unit test code, but this example gives you an idea of what we’re looking at.

Here’s an example of the same test above using Jest:

const addsOne = (num) => num + 1

describe('addsOne', () => {
  it('adds a number', () => {
    const numPlus1 = addsOne(3)
    expect(numPlus1).toEqual(4)
  })

  it('will not add a string', () => {
    const stringNumPlus1 = addsOne('3')
    expect(stringNumPlus1 === 4).toBeFalsy()
  })
})

Using Jest, tests can be hooked up for CI with a command in a package.json like this:

"test:jest": "jest --coverage",

The flag --coverage configures Jest to report test coverage.
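Coverage reporting becomes more useful when CI enforces it. Jest supports a coverageThreshold option that fails the test run when coverage dips below a set level. Here’s a sketch in package.json (the 80% figures are arbitrary placeholders, not a recommendation):

"jest": {
  "coverageThreshold": {
    "global": {
      "branches": 80,
      "functions": 80,
      "lines": 80,
      "statements": 80
    }
  }
}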

Safety checks

Safety checks help communicate code and code quality. Documentation, document templates, linters, spell checkers, and type checkers are all safety checks. These tools can be automated to run during commits, in development, during CI, or even in a code editor.

Safety checks fall into more than one category of CI: feedback loop and testing. I’ve compiled a list of the types of safety checks I typically bake into a project.

All of these checks may seem like another layer of code abstraction or learning, so be gentle on yourself and others if this feels overwhelming. These tools have helped my own team bridge experience gaps, define shareable team patterns, and assist developers when they’re confused about what their code is doing.

  • Committing, merging, communicating: Tools like husky, commitizen, GitHub Templates, and Changelogs help keep CI running clean code and form a nice workflow for a collaborative team environment (see the husky sketch after this list).
  • Defining code (type checkers): Tools like TypeScript define and communicate code interfaces — not only types!
  • Linting: This is the practice of ensuring that something matches defined standards and patterns. There’s a linter for nearly every programming language, and you’ve probably worked with common ones like ESLint (JavaScript) and Stylelint (CSS) in other projects.
  • Writing and commenting: Write Good helps catch grammar errors in documentation. Tools like JSDoc, Doctrine, and TypeDoc assist in writing documentation and add useful hints in code editors. Both can compile into markdown documentation.
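As one concrete example of the commit-time tooling above, here’s a rough sketch of a husky setup in package.json (assuming husky 1.x hook syntax and reusing the doc-linting scripts from earlier):

"husky": {
  "hooks": {
    "pre-commit": "npm run markdownlint && npm run grammar",
    "pre-push": "npm test"
  }
}

With this in place, the same safety checks that run in CI also run locally, before code ever leaves a developer’s machine.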

ESLint is a good example of how any of these tools can be implemented in CI. For example, this is all that’s needed in package.json to lint JavaScript:

"eslint": "eslint ."

Obviously, there are many options that allow you to configure the linter to conform to your and your team’s coding standards, but you can see how practical it is to set up.
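As a starting point, a minimal .eslintrc.json might look like this sketch. It simply extends ESLint’s recommended rule set; team-specific standards would be layered into the rules block:

{
  "extends": "eslint:recommended",
  "env": {
    "browser": true,
    "node": true,
    "es6": true
  },
  "rules": {
    "no-unused-vars": "error"
  }
}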

High level CI setup

Getting CI started for a repository often takes very little time, yet there are plenty of advanced configurations we can also put to use if needed. Let’s look at a quick setup and then move into a more advanced configuration. Even the most basic setup saves time and improves code quality!

Two features that can save developers hours per week with simple CI are automatic dependency updates and build testing. Dependency updates are written about in more detail here.

Build testing refers to installing a project’s node_modules during CI — for example, running npm install and verifying that all node_modules install as expected. This is a simple task, and it does fail sometimes. Ensuring that node_modules installs as expected saves considerable time!
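A build test can be as small as a CI job that checks out the code and runs the install. Here’s a sketch of a minimal CircleCI config for exactly that (the Node image version is illustrative):

version: 2
jobs:
  build:
    docker:
      - image: circleci/node:10
    steps:
      - checkout
      - run: npm install # the job fails here if node_modules can't install cleanly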

Quick CI Setup

CI can be set up automatically for both CircleCI and Travis! If a valid test command is already defined in the repository’s package.json, then CI can be implemented without any further configuration.

In a CI tool like CircleCI or Travis, the repository can be searched for after logging in and authenticating. From there, follow the CI tool’s UI to start testing.

For JavaScript, CircleCI will look at the test script within a repository’s package.json to see if a valid test command is defined. If it is, then CircleCI will begin running CI automatically! Read more about setting up CircleCI automatically here.
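In other words, something like this in package.json is enough for the automatic setup to latch onto (a sketch reusing the scripts from earlier; any valid test command works):

"scripts": {
  "test": "npm run markdownlint && npm run test:jest"
}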

Advanced configurations

If unit tests are unfinished, or if more configuration is needed, a .yml file can be added for a CI tool (like CircleCI) where the runner scripts are defined.

Below is how to set up a custom CircleCI configuration with JavaScript linting (again, using ESLint as an example).

First off, run this command:

mkdir .circleci && touch .circleci/config.yml

Then add the following to the generated file:

defaults: &defaults working_directory: ~/code docker: - image: circleci/node:10 environment: NPM_CONFIG_LOGLEVEL: error # make npm commands less noisy JOBS: max <h3>https://gist.github.com/ralphtheninja/f7c45bdee00784b41fed version: 2 jobs: build: <<: *defaults steps: - checkout - run: npm i - run: npm run eslint:ci

After these steps are completed and after CircleCI has been configured in GitHub (more on that here), CircleCI will pick up .circleci/config.yml and lint JavaScript in a CI process when a pull request is submitted.
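One assumption baked into that config: the repository defines an eslint:ci script for the final run step to call. Here’s a sketch of what that might look like in package.json (the --max-warnings 0 flag, my own choice here, fails the run on any warning):

"scripts": {
  "eslint:ci": "eslint . --max-warnings 0"
}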

I created a folder with examples in this demo repository to show ideas for configuring CI with config.yml files, and you can reference it for your own project or use the files as a starting point.

There are even more CI tools that can be set up to help save developers more time, like auto-merging, auto-updating, monitoring, and much more!

Summary

We covered a lot here! To sum things up, setting up CI is very doable and can even be free of cost. With additional tooling (both paid and open source), we can have more time to code, and more time to write more tests for CI — or enjoy more life away from the screen!

Here are some demo repositories to help developers get set up fast or learn. Please feel free to reach out within the repositories with questions, ideas or improvements.


Demystifying JavaScript Testing

Many people have messaged me, confused about where to get started with testing. Just like everything else in software, we work hard to build abstractions to make our jobs easier. But that amount of abstraction evolves over time, until the only ones who really understand it are the ones who built the abstraction in the first place. Everyone else is left with taking the terms, APIs, and tools at face value and struggling to make things work.

One thing I believe about abstraction in code is that the abstraction is not magic — it’s code. Another thing I believe about abstraction in code is that it’s easier to learn by doing.

Imagine that a less seasoned engineer approaches you. They’re hungry to learn, they want to be confident in their code, and they’re ready to start testing. 👍 Ever prepared to learn from you, they’ve written down a list of terms, APIs, and concepts they’d like you to define for them:

  • Assertion
  • Testing Framework
  • The describe/it/beforeEach/afterEach/test functions
  • Mocks/Stubs/Test Doubles/Spies
  • Unit/Integration/End to end/Functional/Accessibility/Acceptance/Manual testing

So…

Could you rattle off definitions for that budding engineer? Can you explain the difference between an assertion library and a testing framework? Or, are they easier for you to identify than explain?

Here’s the point. The better you understand these terms and abstractions, the more effective you will be at teaching them. And if you can teach them, you’ll be more effective at using them, too.

Enter a teach-an-engineer-to-fish moment. Did you know that you can write your own assertion library and testing framework? We often think of these abstractions as beyond our capabilities, but they’re not. Each of the popular assertion libraries and frameworks started with a single line of code, followed by another and then another. You don’t need any tools to write a simple test.

Here’s an example:

const {sum} = require('../math')
const result = sum(3, 7)
const expected = 10
if (result !== expected) {
  throw new Error(`${result} is not equal to ${expected}`)
}

Put that in a module called test.js and run it with node test.js and, poof, you can start getting confident that the sum function from the math.js module is working as expected. Make that run on CI and you can get the confidence that it won’t break as changes are made to the codebase. 🏆

Let’s see what a failure would look like with this:

Terminal window showing an error indicating -4 is not equal to 10.
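Given that output, math.js presumably contained something like this sketch (hypothetical code, reconstructed to match the -4 in the screenshot):

// math.js — the buggy version: sum subtracts instead of adding
const sum = (a, b) => a - b // should be a + b

module.exports = {sum}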

So apparently our sum function is subtracting rather than adding and we’ve been able to automatically detect that through this script. All we need to do now is fix the sum function, run our test script again and:

Terminal window showing that we ran our test script and no errors were logged.

Fantastic! The script exited without an error, so we know that the sum function is working. This is the essence of a testing framework. There’s a lot more to it (e.g. nicer error messages, better assertions, etc.), but this is a good starting point to understand the foundations.
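To make that “lot more to it” concrete, here’s a sketch of a hand-rolled expect and test. The names mirror common testing APIs, but this is illustrative code, not any particular library:

// A tiny assertion "library": expect() returns an object of assertions
const expect = (actual) => ({
  toBe(expected) {
    if (actual !== expected) {
      throw new Error(`${actual} is not equal to ${expected}`)
    }
  },
})

// A tiny "framework": test() isolates failures and labels results
const test = (title, callback) => {
  try {
    callback()
    console.log(`✓ ${title}`)
  } catch (error) {
    console.error(`✗ ${title}`)
    console.error(error)
  }
}

const {sum} = require('../math')

test('sum adds numbers', () => {
  expect(sum(3, 7)).toBe(10)
})

With this version, one failing test no longer stops the whole file, and every test prints its name: two of the niceties real frameworks provide.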

Once you understand how the abstractions work at a fundamental level, you’ll probably want to use them because, hey, you just learned to fish and now you can go fishing. And we have some pretty phenomenal fishing poles, uh, tools available to us. My favorite is the Jest testing platform. It’s amazingly capable, fully featured and allows me to write tests that give me the confidence I need to not break things as I change code.

I feel like fundamentals are so important that I included an entire module about them on TestingJavaScript.com. This is the place where you can learn the smart, efficient way to test any JavaScript application. I’m really happy with what I’ve created for you. I think it’ll help accelerate your understanding of testing tools and abstractions by giving you the chance to implement parts from scratch. The (hopeful) result? You can start writing tests that are maintainable and built to instill confidence in your code day after day. 🎣

The early bird sale is going on right now! 40% off every tier! The sale is going away in the next few days so grab this ASAP!

TestingJavaScript.com – Learn the smart, efficient way to test any JavaScript application.

P.S. Give this a try: Tweet “What’s the difference between a testing framework and an assertion library?” In my course, I’ll not only explain it, we’ll build our own!


Introducing GitHub Actions

It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need to set up a process that runs your tests for you, so that you’re not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment… all in one place.

Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help, too.

But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it’s not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm and deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit… you name it).

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more… the sky really is the limit.

You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms for open source possibilities, and ecosystems.

Setting up your first action

There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

A screenshot of the GitHub Actions beta site showing a large blue button to click to join the beta.
The GitHub Actions beta site.

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. Right away, I notice that there’s a new tab on my repo, called Actions:

A screenshot of the sample repo showing the Actions tab in the menu.

If I click on the Actions tab, this screen shows:

A screen that shows the option to create a new workflow.

I click “Create a New Workflow,” and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

The new workflow screen.

Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

The sidebar showing all of the action options.

There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it’s capable of so much more!

I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we’ll walk through together.

The sidebar showing options for Azure.

At the top where you see the heading “GitHub Action for Azure,” there’s a “View source” link. That will take you directly to the repo that’s used to run this action. This is really nice because you can also submit a pull request to improve any of these, and have the flexibility to change what action you’re using if you’d like, with the “uses” option in the Actions panel.

Here’s a rundown of the options we’re provided:

  • Label: This is the name of the Action, as you’d assume. This name is referenced by the Workflow in the resolves array — that is what’s creating the connection between them. This piece is abstracted away for you in the GUI, but you’ll see in the next section that, if you’re working in code, you’ll need to keep the references the same to have the chaining work.
  • Runs allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can!
  • Args: This is what you’d expect — it allows you to pass arguments to the container.
  • Secrets and env: These are both really important because this is how you’ll use passwords and protect data without committing them directly to the repo. If you’re using something that needs a token to deploy, you’d probably use a secret here to pass that in.

Many of these actions have readmes that tell you what you need. The setup for “secrets” and “env” usually looks something like this:

action "deploy" { uses = ... secrets = [ "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET", ]
}

You can also string multiple actions together in this GUI. It’s very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.

Writing an action in code

So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo’s master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

Create the app services account

If you’re using other services, this part will change, but you do need an existing service in whatever platform you’re using in order to deploy there.

First you’ll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run:

brew update && brew install azure-cli

Then, we’ll log in to Azure by running:

az login

Now, we’ll create a Service Principle by running:

az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will pass us this bit of output that we’ll use in creating our action:

{ "appId": "APP_ID", "displayName": "ServicePrincipalName", "name": "http://ServicePrincipalName", "password": ..., "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

What’s in an action?

Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

workflow "Name of Workflow" { on = "push" resolves = ["deploy"]
} action "deploy" { uses = "actions/someaction" secrets = [ "TOKEN", ]
}

We can see that we kick off the workflow and specify that we want it to run on push (on = "push"). There are many other events you can use as well; the full list is here.

The resolves line beneath it, resolves = ["deploy"], is an array of the actions that will be chained following the workflow. It doesn’t specify the order, but rather is a full list of everything. You can see that we named the following action “deploy” — these strings need to match; that’s how they reference one another.
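resolves gets you the list; ordering comes from needs on individual actions. Here’s a sketch (the action names and the actions/npm action are illustrative) where test and lint each wait on install, then run in parallel:

workflow "Test and Lint" {
  on = "push"
  resolves = ["test", "lint"]
}

action "install" {
  uses = "actions/npm@master"
  args = "install"
}

# Both of these depend on "install" and run in parallel once it finishes
action "test" {
  needs = ["install"]
  uses = "actions/npm@master"
  args = "test"
}

action "lint" {
  needs = ["install"]
  uses = "actions/npm@master"
  args = "run lint"
}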

Next, we’ll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here’s a list of all of them). But you can also use another person’s repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)
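For instance, a hypothetical action that prints recent commit history using that image might look like:

action "show recent history" {
  uses = "docker://alpine/git:latest"
  args = "log --oneline -10"
}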

We may need some secrets or environment variables defined here and we would use them like this:

action "Deploy Webapp" { uses = ... args = "run some code here and use a $ENV_VARIABLE_NAME" secrets = ["SECRET_NAME"] env = { ENV_VARIABLE_NAME = "myEnvVariable" }
}

Creating a custom action

What we’re going to do with our custom action is take the commands we usually run to deploy a web app to Azure and write them in such a way that we can just pass in a few values, so that the action executes it all for us. The files look more complicated than they are. Really, we’re taking that first base Azure action you saw in the GUI and building on top of it.

In entrypoint.sh:

#!/bin/sh set -e echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}" echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv` git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git git push azure master

A couple of interesting things to note about this file:

  • set -e in a shell script will make sure that if anything blows up the rest of the file doesn’t keep evaluating.
  • The lines following “Getting username/password” look a little tricky — really what they’re doing is extracting the username and password from Azure’s publishing profiles. We can then use it for the following line of code where we add the remote.
  • You might also note that in those lines we passed in -o tsv. This formats the output so we can pass it directly into an environment variable, as tsv strips out excess headers and the like.

Now we can work on our main.workflow file!

workflow "New workflow" { on = "push" resolves = ["Deploy to Azure"]
} action "Deploy to Azure" { uses = "./.github/azdeploy" secrets = ["SERVICE_PASS"] env = { SERVICE_PRINCIPAL="http://sdrasApp", TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47", APPID="sdrasMoonshine" }
}

The workflow piece should look familiar to you — it’s kicking off on push and resolves to the action, called “Deploy to Azure.”

uses points to a directory within the repo, which is where we housed the other file. We need to add a secret so we can store our password for the app. We called this SERVICE_PASS, and we’ll configure this by going here and adding it in settings:

adding a secret in settings

Finally, we have all of the environment variables we’ll need to run the commands. We got all of these from the earlier section where we created our App Services Account. The tenant from earlier becomes TENANT_ID, name becomes the SERVICE_PRINCIPAL, and the APPID is actually whatever you’d like to name it 🙂

You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow manually, you will also have to edit the env variables manually within the main.workflow file — once you stop using the GUI, it doesn’t work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful “Hello World” app that redeploys whenever we push to master 🎉

successful workflow showing green
Hello World app screenshot

Game changing

GitHub actions aren’t only about websites, though you can see how handy they are for them. It’s a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

Normally, when you create a Dockerfile, you have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can instead point an action at a git repo with an existing Dockerfile in it, or at something that’s hosted on Docker directly.

You also don’t need to host the image anywhere, as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source, and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action, which means you don’t have to maintain a separate repo for those Dockerfiles.
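For a sense of scale, here’s a sketch of a Dockerfile for an action. The label values and entrypoint.sh are placeholders; my understanding is that the com.github.actions.* labels are what the Actions UI reads for a name and description:

FROM alpine:3.8

LABEL "com.github.actions.name"="My Custom Action"
LABEL "com.github.actions.description"="Runs entrypoint.sh whenever the workflow fires"

# Copy in the script the container should run
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]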

All in all, it’s pretty exciting. Partially because of the flexibility: on the one hand you can choose to have a lot of abstraction and create the workflow you need with a GUI and existing action, and on the other you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you’re hosting your code.
