Introducing GitHub Actions

It’s a common situation: you create a site and it’s ready to go. It’s all on GitHub. But you’re not really done. You need to set up deployment. You need a process that runs your tests for you, so that you’re not manually running commands all the time. Ideally, every time you push to master, everything runs for you: the tests, the deployment… all in one place.

Previously, there were only a few options that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which help as well.

But now, enter GitHub Actions.

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But they’re not necessarily limited to that. They’re directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And there are already many options to choose from. For example, you can publish straight to npm or deploy to a variety of cloud services (Azure, AWS, Google Cloud, Zeit… you name it).

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more… the sky really is the limit.

You don’t need to configure or create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This opens a whole new can of worms of possibilities for open source and its ecosystems.

Setting up your first action

There are two ways you can set up an action: through the workflow GUI or by writing and committing the file by hand. We’ll start with the GUI because it’s so easy to understand, then move on to writing it by hand because that offers the most control.

First, we’ll sign up for the beta by clicking on the big blue button here. It might take a little bit for them to bring you into the beta, so hang tight.

[Screenshot: the GitHub Actions beta site, with the big blue button to join the beta.]

Now let’s create a repo. I made a small demo repo with a tiny Node.js sample site. I can already see that I have a new tab on my repo, called Actions:

[Screenshot: the sample repo, with the new Actions tab in the menu.]

If I click on the Actions tab, this screen shows:

[Screenshot: the initial Actions screen.]

I click “Create a New Workflow,” and then I’m shown the screen below. This tells me a few things. First, I’m creating a hidden folder called .github, and within it, I’m creating a file called main.workflow. If you were to create a workflow from scratch (which we’ll get into), you’d need to do the same.

[Screenshot: the new workflow GUI.]

Now, we see in this GUI that we’re kicking off a new workflow. If we draw a line from this to our first action, a sidebar comes up with a ton of options.

[Screenshot: the sidebar showing all of the action options.]

There are actions in here for npm, Filters, Google Cloud, Azure, Zeit, AWS, Docker Tags, Docker Registry, and Heroku. As mentioned earlier, you’re not limited to these options — it’s capable of so much more!

I work for Azure, so I’ll use that as an example, but each action provides you with the same options, which we’ll walk through together.

[Screenshot: the options for the Azure action in the sidebar.]

At the top, where you see the heading “GitHub Action for Azure,” there’s a “View source” link. That will take you directly to the repo that’s used to run this action. This is really nice because you can submit a pull request to improve any of these, and you have the flexibility to change which action you’re using, via the “uses” option in the Actions panel.

Here’s a rundown of the options we’re provided:

  • Label: This is the name of the action, as you’d assume. This name is referenced by the workflow in the resolves array — that’s what creates the connection between them. This piece is abstracted away for you in the GUI, but you’ll see in the next section that, if you’re working in code, you’ll need to keep the references the same for the chaining to work.
  • Runs: This allows you to override the entry point. This is great because if you’d like to run something like git in a container, you can! (See the sketch after this list.)
  • Args: This is what you’d expect — it allows you to pass arguments to the container.
  • Secrets and env: These are both really important because they’re how you use passwords and protect data without committing them directly to the repo. If you’re using something that needs a token to deploy, you’d probably use a secret here to pass it in.
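Here’s a tiny sketch of that entry-point override in action. The label and the git command are made up for illustration (docker://alpine/git is a real public image, though):

action "Show Latest Commit" {
  uses = "docker://alpine/git:latest"
  runs = "git"
  args = "log -1 --oneline"
}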

Many of these actions have readmes that tell you what you need. The setup for “secrets” and “env” usually looks something like this:

action "deploy" { uses = ... secrets = [ "THIS_IS_WHAT_YOU_NEED_TO_NAME_THE_SECRET", ]
}

You can also string multiple actions together in this GUI. It’s very easy to make things work one action at a time, or in parallel. This means you can have nicely running async code simply by chaining things together in the interface.
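In code, that chaining is expressed with needs. As a rough sketch (the action names and the actions/npm reference here are my assumptions, not from the demo repo), two actions can run in parallel while a third waits on both:

workflow "Check and Deploy" {
  on = "push"
  resolves = ["Deploy"]
}

action "Test" {
  uses = "actions/npm@master"
  args = "test"
}

action "Lint" {
  uses = "actions/npm@master"
  args = "run lint"
}

action "Deploy" {
  needs = ["Test", "Lint"]
  uses = "actions/npm@master"
  args = "run deploy"
}

Because “Test” and “Lint” don’t depend on each other, they run at the same time; “Deploy” only fires once both finish.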

Writing an action in code

So, what if none of the actions shown here are quite what we need? Luckily, writing actions is really pretty fun! I wrote an action to deploy a Node.js web app to Azure because that will let me deploy any time I push to the repo’s master branch. This was super fun because now I can reuse it for the rest of my web apps. Happy Sarah!

Create the app services account

If you’re using a different provider, this part will change, but whatever you’re using, you’ll need to create a service there to deploy to.

First, you’ll need to get your free Azure account. I like using the Azure CLI, so if you don’t already have that installed, you’d run (via Homebrew, on a Mac):

brew update && brew install azure-cli

Then, we’ll log in to Azure by running:

az login

Now, we’ll create a Service Principal by running:

az ad sp create-for-rbac --name ServicePrincipalName --password PASSWORD

It will hand us back this bit of output, which we’ll use in creating our action:

{ "appId": "APP_ID", "displayName": "ServicePrincipalName", "name": "http://ServicePrincipalName", "password": ..., "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

What’s in an action?

Here is a base example of a workflow and an action so that you can see the bones of what it’s made of:

workflow "Name of Workflow" { on = "push" resolves = ["deploy"]
} action "deploy" { uses = "actions/someaction" secrets = [ "TOKEN", ]
}

We can see that we kick off the workflow, and specify that we want it to run on push (on = "push"). There are many other options you can use as well; the full list is here.
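For instance (a hypothetical sketch, not part of our demo), the same shape can kick off on other events, such as whenever an issue is opened:

workflow "Issue Logger" {
  on = "issues"
  resolves = ["Log It"]
}

action "Log It" {
  uses = "docker://alpine:3.8"
  runs = "echo"
  args = "A new issue event arrived!"
}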

Back in our base example, the resolves = ["deploy"] line beneath it is an array of the actions that will be chained following the workflow. It doesn’t specify the order, but rather is a full list of everything. You can see that the action that follows is called “deploy” — these strings need to match; that’s how they reference one another.

Next, we’ll look at that action block. The first uses line is really interesting: right out of the gate, you can use any of the predefined actions we talked about earlier (here’s a list of all of them). But you can also use another person’s repo, or even files hosted on the Docker site. For example, if we wanted to execute git inside a container, we would use this one. I could do so with: uses = "docker://alpine/git:latest". (Shout out to Matt Colyer for pointing me in the right direction for the URL.)

We may need some secrets or environment variables defined here and we would use them like this:

action "Deploy Webapp" { uses = ... args = "run some code here and use a $ENV_VARIABLE_NAME" secrets = ["SECRET_NAME"] env = { ENV_VARIABLE_NAME = "myEnvVariable" }
}

Creating a custom action

What we’re going to do with our custom action is take the commands we usually run to deploy a web app to Azure, and write them in such a way that we can just pass in a few values and the action executes it all for us. The files look more complicated than they are. Really, we’re taking that first base Azure action you saw in the GUI and building on top of it.

In entrypoint.sh:

#!/bin/sh set -e echo "Login"
az login --service-principal --username "${SERVICE_PRINCIPAL}" --password "${SERVICE_PASS}" --tenant "${TENANT_ID}" echo "Creating resource group ${APPID}-group"
az group create -n ${APPID}-group -l westcentralus echo "Creating app service plan ${APPID}-plan"
az appservice plan create -g ${APPID}-group -n ${APPID}-plan --sku FREE echo "Creating webapp ${APPID}"
az webapp create -g ${APPID}-group -p ${APPID}-plan -n ${APPID} --deployment-local-git echo "Getting username/password for deployment"
DEPLOYUSER=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userName' -o tsv`
DEPLOYPASS=`az webapp deployment list-publishing-profiles -n ${APPID} -g ${APPID}-group --query '[0].userPWD' -o tsv` git remote add azure https://${DEPLOYUSER}:${DEPLOYPASS}@${APPID}.scm.azurewebsites.net/${APPID}.git git push azure master

A couple of interesting things to note about this file:

  • set -e in a shell script makes sure that if anything blows up, the rest of the file doesn’t keep executing. (See the sketch after this list.)
  • The lines following “Getting username/password” look a little tricky — really, what they’re doing is extracting the username and password from Azure’s publishing profiles. We then use them in the following lines, where we add the git remote.
  • You might also note that we passed -o tsv on those lines; that formats the output so we can store it directly in a variable, as tsv strips out excess headers, quoting, etc.
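If set -e is new to you, here’s a tiny standalone sketch of the behavior (just an illustration, not from the demo):

#!/bin/sh
set -e

echo "this line runs"
false              # this command fails (exits non-zero)...
echo "never runs"  # ...so set -e stops the script before this line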

Now we can work on our main.workflow file!

workflow "New workflow" { on = "push" resolves = ["Deploy to Azure"]
} action "Deploy to Azure" { uses = "./.github/azdeploy" secrets = ["SERVICE_PASS"] env = { SERVICE_PRINCIPAL="http://sdrasApp", TENANT_ID="72f988bf-86f1-41af-91ab-2d7cd011db47", APPID="sdrasMoonshine" }
}

The workflow piece should look familiar to you — it’s kicking off on push and resolves to the action, called “Deploy to Azure.”

uses points at a path within the repo, which is where we housed the other file. We need to add a secret to store our password for the app. We called this one SERVICE_PASS, and we’ll configure it by adding it in the repo’s Settings:

[Screenshot: adding a secret in the repo settings.]

Finally, we have all of the environment variables we’ll need to run the commands. We got these from the earlier section where we created our app services account. The tenant from that output becomes TENANT_ID, name becomes SERVICE_PRINCIPAL, and APPID is whatever you’d like to name the app 🙂
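One piece we haven’t shown here: because uses points at a local path, GitHub expects a Dockerfile inside ./.github/azdeploy alongside entrypoint.sh, and it builds that container for you. As a rough sketch of what that file might look like (the base image and labels are my assumptions, not copied from the demo repo):

FROM microsoft/azure-cli:latest

LABEL "com.github.actions.name"="Azure Web App Deploy"
LABEL "com.github.actions.description"="Deploys a Node.js web app to Azure"

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]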

You can use this action too! All of the code is open source at this repo. Just bear in mind that since we created the main.workflow file manually, you’ll also have to edit the env variables manually within that file — once you stop using the GUI, it doesn’t work the same way anymore.

Here you can see everything deploying nicely, turning green, and we have our wonderful “Hello World” app that redeploys whenever we push to master 🎉

[Screenshots: the successful workflow turning green, and the deployed “Hello World” app.]

Game changing

GitHub Actions aren’t only about websites, though you can see how handy they are for them. They’re a whole new way of thinking about how we deal with infrastructure, events, and even hosting. Consider Docker in this model.

Normally, when you create a Dockerfile, you have to write the Dockerfile, use Docker to build the image, and then push the image up somewhere so that it’s hosted for other people to download. In this paradigm, you can point at a Git repo with an existing Dockerfile in it, or at an image hosted on Docker Hub.

You also don’t need to host the image anywhere, as GitHub will build it for you on the fly. This keeps everything within the GitHub ecosystem, which is huge for open source and allows for forking and sharing so much more readily. You can also put the Dockerfile directly in your action, which means you don’t have to maintain a separate repo for those Dockerfiles.

All in all, it’s pretty exciting, partially because of the flexibility: on the one hand, you can choose a lot of abstraction and create the workflow you need with a GUI and an existing action; on the other, you can write the code yourself, building and fine-tuning anything you want within a container, and even chain multiple reusable custom actions together. All in the same place you’re hosting your code.


Helping a Beginner Understand Getting a Website Live

I got a great email from a fellow named Josh Long the other day. He is, in his words, “relatively new to web design” and was a bit stuck on the concept of getting a site live. I should say that I’m happy to get emails like this and I always read them, but I typically can’t offer tech support over email. If I can respond at all, I normally point people to other community resources.

In this case, it struck me what a perfect moment this is for Josh. He’s a little confused, but he knows enough to be asking a lot of questions and sorting through all this stuff. I figured this was a wonderful opportunity to dig into his questions, hopefully helping him and just maybe helping others in a similar situation.

Here’s one of the original paragraphs Josh sent me, completely unedited:

I’m relatively new to web design, but I’ve taken a few courses on HTML and CSS and I’ve done a Codecademy course on JavaScript. But, (jumping forward probably quite a while here!) after having fully designed and coded a HTML/CSS/JS website or webpage, I don’t fully understand the full process of going from a local site hosted with mamp/wamp to publishing a public site using wordpress(?) or some other host (is WordPress even a host?!) and also finding a server that’s suitable and some way of hosting images/videos or other content. (If that sounded like I didn’t know what half of those meant, it’s because unfortunately I don’t!.. but I’d really like to!)

Can you sense that enthusiasm? I love it.

We worked together a bit to refine many of his questions for this post, but they are still Josh’s words. Here we go!


What is a Domain Registrar? I get they are for registering domain names, but what’s the difference between them? How do you know which one is right for you? A quick search for “best domain hosts” on Google gave me 5 ads for companies who are domain registrars/hosts and 9 “Top 10” style pages that look as though they have some sort of affiliation with at least one company they’re suggesting. Am I just looking for the cheapest one?

You’re exactly right: domain registrars are for registering domain names. If you want joshlongisverycool.com, you’re going to have to buy it, and domain registrars are companies that help you do that.

[Screenshot: searching for a domain name.]

It’s a bummer that web search results are so saturated by ads and affiliate-link-saturated SEO-jacked pages. You’ll never get total honesty from that because those pages are full of links promoting whoever will pay them the most to send new customers. Heck, even Google themselves will sell you a domain name.

The truth is that you can’t go too wrong here. Domain names are a bit of a commodity and the hundreds of companies that will sell you one largely compete on marketing.

Some things to watch out for:

  • Some companies treat the domain as a loss leader, like a grocery store selling cheap milk in the hope you’ll buy more stuff while you’re there. The checkout process at any domain registrar will almost certainly try to sell you a bunch of extras. For example, they might try to sell you additional domain names or email hosting you probably don’t need. Just be careful.
  • Web hosts (which we’re getting to next) will often sell them to you along with hosting. That’s fine, I suppose, but I consider it a bit of a conflict of interest. Say you choose to move hosts one day. That hosting company has the wrong incentive to make that easy for you. If the domain is handled elsewhere, that registrar has the right incentive to help you make changes.

I hate to add to the noise for you, but here are some domain registrars I’ve used personally. None of them are paying for sponsorship here, nor are these affiliate links:

  • Hover
  • GoDaddy
  • Google Domains
  • Network Solutions
  • Amazon Route 53

Our own Sarah Drasner recommends looking at ZEIT domains, which are super interesting in that you buy and manage them entirely over the command line.

I might suggest, if you can see yourself owning several domain names in your life, keeping them consolidated at a single registrar. Managing domains isn’t something you’ll do very often, so it’s easy to lose track of which domains you registered with which registrar, not to mention how/where to change all the settings across different registrars.

I’d also suggest it’s OK to experiment here. That’s how all of us have learned. Pick one and give it a go.

As for web hosts, here are a few of the big traditional players:

  • WP Engine is a web host that focuses specifically on WordPress.

  • Media Temple has WordPress-specific hosting, and also a wider range of services, from very small and budget-friendly to huge and white-glove.
  • Bluehost has very inexpensive hosting choices, but essentially the same capabilities as those others.
Now, here are some other web hosts that are a little less traditional. Forgive the techy terms here — if they don’t mean anything to you, just ignore them.

  • Netlify does static site hosting, which is great for things like static site generators and JAMstack sites.
  • Zeit is a host where you interact with it only through the command line.
  • Digital Ocean has their own way of talking about hosting. They call their servers Droplets, which are kind of like virtual machines with extra features.
  • Heroku is great for hosting apps with a ready-to-use backend for things like Node, Ruby, Java, and Python.
  • Amazon Web Services (AWS) is a whole suite of products with specialized hosting focuses, which are for pretty advanced users. Microsoft Azure and Google Cloud are similar.

Again, I’d say it’s OK to, in a sense, make some mistakes here. If you aren’t hosting something particularly mission-critical, like your personal website, pick a host that seems to fit the bill and give it a go. Hopefully, it works out great — if not, you can move. Moving isn’t always super fun, but everybody ends up doing it, and you’ll learn as you go.

When you buy web hosting, that host is going to tell you how to use it.

WordPress and Craft CMS are two examples of CMSs (Content Management Systems). They don’t really have anything directly to do with that connection between working locally on a site and getting that site live. But they do rather complicate it.

The reason that you’d use a CMS at all is to make working on your site easier. Consider this site you’re looking at right now. There are tens of thousands of pages on this site. It would be untenable for each of them to be a hand-authored .html file.

Instead, a CMS allows us to craft all those pages by combining data and templates.

Let’s consider the technology behind WordPress, a CMS that works pretty good for CSS-Tricks. In order for WordPress to run, it needs:

1. PHP (the back-end language)
2. MySQL (the database)
3. Apache (the web server)

You can do all that locally!

I use Local by Flywheel for that (Mac and Windows), but there are a number of ways to get those technologies running: MAMP, Docker, Vagrant, etc.
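For example, if you went the Docker route, spinning up those three requirements locally might look something like this sketch (the container names and passwords are placeholders; mysql and wordpress are the official images on Docker Hub):

# Start a MySQL container for the database
docker run -d --name local-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=wordpress \
  mysql:5.7

# Start WordPress (PHP and Apache are baked into the official image), pointed at that database
docker run -d --name local-wp --link local-db \
  -e WORDPRESS_DB_HOST=local-db \
  -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=secret \
  -p 8080:80 \
  wordpress

# Then visit http://localhost:8080 to run the WordPress installer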

That’ll get you up and running locally with your CMS. That’s a beautiful thing. Having a good local development environment for your site is crucial. But it doesn’t help you get that site live.

We’ll go over getting all this to a live site a bit later, but you should know this: the web server that you run will need to run these same technologies. Looking at the WordPress example from above, your web server will also need to run PHP, MySQL, and Apache. Then you’ll need a system for getting all your files from your local computer to your web server like you would for any other site, but you’ll also probably need a system for dealing with the database. The database is a bit tricky, as it’s not a “flat file” like most of the rest of your site.

A CMS could be built from any set of technologies, not just the ones listed above. For example, see KeystoneJS. Instead of PHP, Keystone uses Node.js. Instead of MySQL for the database, it uses MongoDB. Instead of Apache, it uses Express. Just a different set of technologies, both of which you can get running locally and on a live web server.

A CMS could even have no database at all! Static site generators are like this. You get the site running locally, and they produce a set of flat files, which you move to your live server. A different way to do things, but absolutely still a CMS. What I always say is that the best CMS is one that is customized to your needs.


What is “Asset Hosting”? Are assets not content? What is the difference between a CMS and an asset hosting service? What does an asset host do?

Let’s define an asset: any “flat” file. As in, not dynamically generated like how a CMS might generate an HTML file. Images are a prime example of an “asset.” But it’s also things like CSS and JavaScript files, as well as videos and PDFs.

And before we get any further: you probably don’t need to worry about this right away. Your web host can host assets and that’s plenty fine for your early days on small sites.

One major reason people go with an asset host (probably more commonly referred to as a CDN or Content Delivery Network) is for a speed boost. Asset hosts are also servers, just like your web host’s web server, but they are designed for hosting those flat file assets super fast. So, not only do those assets get delivered to people looking at your site super fast, but your web server is relieved of that burden.

You could even think of something like YouTube as an asset host. That 100 MB video of a butterfly in your garden is a heavy load for your little web server, and potentially a problem if your outgoing bandwidth is capped like it often is. Uploading that video to YouTube puts your video into that whole social universe, but a big reason to do it other than the social stuff is that it’s hosting that video asset for you.


I’ve heard of “repositories”, but don’t really get what they are. I hear stuff like “just upload it to my Git Repository.” What the heck does that mean? I feel like a moron for asking this. What is Git? What is it for? Do I need to use it? Is it involved in the “local to live” process at all?

Sorry you got hit with a “just” there. There is an epidemic in technology conversations where people slip that in to make it seem like what they are about to say is easy and obvious when it could be neither, depending on who is reading.

But let’s get into talking about Git repositories.

Git is a specific form of version control. There are others, but Git is so dominant in the web industry that it’s hardly worth mentioning any others.

Let’s say you and I are working on a website together. We’ve purchased a domain and hosting and gotten the site live. We’ve shared the SFTP credentials, so we both have access to change the files on the live site. This could be quite dangerous!

[Screenshot: Coda, a code editor that lets you edit files directly on a server you’ve connected to over SFTP. Cool, but dangerous!]

Say we both edit a file and upload it over SFTP… which change wins? It’s whoever uploaded their file last. We have no idea who has done what. We’re overwriting each other’s work with no way of staying in sync, changes immediately affect the live site (which could break things), and we have no way of undoing changes in case we break things. That’s a situation so unacceptable that it’s really never done.

Instead, we work with version control, like Git. With Git, when we make changes, we commit them to a repository. A repository can be hosted anywhere, even on your local machine. But to make them really useful, they are hosted on the internet somewhere everyone has access to. You’ve surely seen GitHub, which hosts these repositories and adds a bunch of other features like issue tracking. Similar are GitLab and Bitbucket.

Now let’s say you and I are working on that same site, but we’ve set up a Git repository for it. When I make a change, I commit it to the repository. If you want to make a change as well, you have to pull my changes down, which merges them into your own copy of the code. Then you can push your changes up to the repository. Like anything, it gets more complicated, but that’s the gist of it.
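Day to day, that back-and-forth is just a few commands (the file name and commit message here are made up):

# Merge down whatever your collaborator pushed since your last pull
git pull origin master

# Record your own change locally
git add index.html
git commit -m "Update the headline"

# Share it back up to the shared repository
git push origin master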

But a Git repository isn’t the live website. You’re on your own for getting the files from a Git repository to a live site. Fortunately, that’s a situation that everyone faces, so there are lots of options. Good thing your last question is about this!


OK. So now with all that straight… where do you start with going from local to live? Where do you “upload” your HTML, CSS and JavaScript files? How do you link your shiny new domain name to those files and see it out in the wild? Which service is in charge of adding new content to your site, or updating it? Does it get really confusing if you have different companies for each service?

Let’s start with a very simple website on what I’d consider typical web hosting. Say you have just index.html, style.css, and script.js files on your local computer, which is your entire website. You’ve purchased a domain name and pointed the DNS settings at a web host. That host has given you SFTP credentials. You’ll use those credentials to log in with an app that allows SFTP connections.

Your host will also tell you which folder is the “public root” of your website. The files there will be out on the public internet for the world to see!
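If you’re curious what that upload looks like without a GUI app, the same thing can be done with the sftp command line tool (the host, user, and folder names below are hypothetical):

# Connect with the credentials your host gave you
sftp deployuser@example-host.com

# Once connected, move into the public root and upload the site
sftp> cd public_html
sftp> put index.html
sftp> put style.css
sftp> put script.js
sftp> exit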

You might hear people refer to the “live” website as a “production” site. When someone asks something like, “Did that bug make it to production?” they mean whether the bug is on the live website. “Development” is your local computer. You might also have a “staging” site, which is a clone of the live website on the same hardware/software as the live site, for testing.

Remember earlier when we talked about Git repositories? While the repositories themselves don’t directly help you get the files in them to your web server, most systems that help you with the local-to-live process work with your repositories.

The phrase “local-to-live” refers to deployment. When you have changes that you want to go out to your production website, you deploy them. That’s the process of moving your work from “development” to “production.”

One service that helps with this idea of deployment is Beanstalk. Beanstalk hosts your Git repository, plus you give it the SFTP credentials for your server — that way, it can move the files to your web server when you make commits. Cool, right? Say you wanted to host that Git repo elsewhere, though, like GitHub, Bitbucket, or GitLab. Check out DeployBot, which does the same thing, only it can connect to those sites as well. It probably comes as no surprise by now that there are lots of options here, ranging in price and complexity.

Let’s go back to our WordPress example.

1. You’ve got it running locally (on your computer) just perfectly and now want to go live with it.
2. You’ve bought a domain name from a registrar.
3. You’ve purchased hosting that meets WordPress’s requirements.
4. You’ve pointed the DNS of the domain name at the web host. (A way to double-check this is sketched after this list.)
5. You’ve verified it’s all working. (Easy way: upload an index.html file to the public root via SFTP and verify that it loads when you type the domain name into a browser.)
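That double-check of step 4 can be done with the dig command, which shows where a domain currently points (the domain and IP here are placeholders):

# Should print the IP address your web host gave you
dig +short yourdomain.com
# 203.0.113.10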

Now, you’ve still got some work to do:

1. Set up a Git repository for the site.
2. Set up a deployment service to move the files from the repository to the live site.
3. Configure/set up the live site as needed. For example, you’ll need a database on the live site. You’ll have to create that (your host will have instructions) and do things like run the WordPress installer and update configuration files.
4. If you have things in your local database that you want to move to the live site, you might be exporting and importing things. That can happen at the raw MySQL level (see the sketch after this list), with WordPress’s native import/export features, or with a fancy plugin like WP DB Migrate Pro.
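At the raw MySQL level, that last step’s export/import is often just two commands (the database names and users here are placeholders):

# On your local machine: dump the local database to a file
mysqldump -u root -p local_wordpress > wordpress.sql

# On the live server: load that file into the production database
mysql -u produser -p production_wordpress < wordpress.sql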

It’s a non-trivial amount of work! Sorry. This process is pretty similar for any site, though. It’s a matter of configuring and setting up your production web server exactly how it should be, then deploying files to it. Every site is a bit different, but you’ll get the hang of this whole dance.

It really is a big dance. I’ve only painted one picture for you here. I tried to pick one that was generic and broad enough that it shows the landscape of what needs to be done. But, at every step in this dance, there are different ways you can do things, different services you can pick, companies trying to help you at different pain points… it’s an ever-changing world.

Right now, Netlify is enjoying a lot of popularity because they are one of the only web hosts that actually helps you with deployment. They’ll watch your Git repositories and do deployment for you! Netlify is for static sites only, but that can be a whole world unto itself. ZEIT is also massively innovative in how it helps with deploying and hosting web projects, including directly connecting with GitHub.


Good luck!

I hope this was helpful. Remember, you aren’t alone in all this. Zillions of other developers have done this before you, and there is help to be found on the internet.

Oh, and remember: the best way to learn anything at all is to…

https://css-tricks.com/wp-content/uploads/2018/08/jbw.mp3
