Sign Up vs. Signup

Anybody building a site that requires users to create accounts is going to face this language challenge. You’ll probably have this language strewn across your entire site, from prominent calls-to-action in your homepage hero, to persistent header buttons, to your documentation.

So which is correct? “Sign Up” or “Signup”? Let’s try to figure it out.

With some light internet grammar research, I learned that “sign up” is a verbal phrase — more precisely, a phrasal verb, where “sign” is the verb (it describes an action) and “up” is the particle that completes it. That sounds about right to me.

My best guess before looking into this was that “signup” isn’t even a word at all, and more of a lazy internet mistake. Just like “frontend” isn’t a word. It’s either “front-end” (a compound adjective as in a front-end developer), or “front end” (as in, “Your job is to work on the front end.”).

I was wrong, though. “Signup” is a noun. Like a thing. As in, “Go up the hallway past the water fountain and you’ll see the signup on the wall.” Which could certainly be a digital thing as well. Seems to me it wouldn’t be wrong to call a form that collects a user’s name and email address a “signup form.”

“Sign-up” is almost definitely wrong, as it’s not a compound word or compound adjective.

The fact that “sign up” and “signup” are both legit words/phrases makes this a little tricky. Having a verbal phrase as a button seems like a solid choice, but I wouldn’t call it wrong to have a button that said “Signup” since the button presumably links directly to a form in which you can sign up, and that’s the correct noun for it.

Let’s see what some popular websites do.

Twitter goes with “Sign Up” and “Log in.” We haven’t talked about the difference between “Log in” and “Login” yet, but the difference is very much the same. Verbal phrase vs. noun. The only thing weird about Twitter’s approach here is the capitalization of “Up” and the lowercase “in.” Twitter seems giant enough that they must have thought of this and decided this intentionally, so I’d love to understand why because it looks like a mistake to my eyes.

Facebook, like Twitter, goes with “Sign Up” and “Log In.”

Google goes with “Sign in” and “Create account.” It’s not terribly rare to see companies use the “Create” verb. On Microsoft’s Azure site, the copy reads “Create your account today,” complemented with a “Start free” button. Slack uses “Sign in” and “Get Started.”

I can see the appeal of going with symmetry. Zoom uses “SIGN IN” and “SIGN UP” with the use of all-caps giving a pass on having to decide which words are capitalized.

Figma goes the “Sign In” and “Sign up” route, almost having symmetry — but what’s up with the mismatched capitalization? I thought, if anything, they’d go with a lowercase “i” because the uppercase “I” can look like a lowercase “L” and maybe that’s slightly weird.

At CodePen, we rock the “Sign Up” and “Log In” and try to be super consistent through the entire site using those two phrases.

If you’re looking for a conclusion here, I’d say that it probably doesn’t matter all that much. There are so many variations out there that people are probably used to it and you aren’t losing customers over it. It’s not like many will know the literal definition of “Signup.” I personally like active verb phrases — like “Sign Up,” “Log In,” or “Sign In” — with no particular preference for capitalization.

The post Sign Up vs. Signup appeared first on CSS-Tricks.

CSS-Tricks Chronicle XXXIV

Hey gang, time for another broad update about various goings on, as we tend to do occasionally: happenings around here, appearances on other sites, upcoming conferences, and the like.

I’m speaking at a handful of conferences coming up!

At the end of this month, October 29th-30th, I’ll be speaking at JAMstack_conf. Ever since I went to a jQuery conference several million years ago (by my count), I’ve always had a special place in my heart for conferences with a tech-specific focus. Certainly this whole world of JAMstack and serverless can be pretty broad, but it’s more focused than a general web design conference.


In December, I’ll be at WordCamp US. I like getting to go to WordPress-specific events to help me stay current on that community. CSS-Tricks is, and always has been, a WordPress site, as are many other sites I manage. I like to keep my WordPress development chops up the best I can. I imagine the Gutenberg talk will be hot and heavy! I’ll be speaking as well, generally about front-end development.


Next spring, March 4th-6th, I’ll be in Seattle for An Event Apart!


Over on ShopTalk, Dave and I have kicked off a series of shows we’re calling “How to Think Like a Front-End Developer.”

I’ve been fascinated by this idea for a while and have been collecting thoughts on it. I have my own ideas, but I want to contrast them with the ideas of other front-end developers much more accomplished than myself! My goal is to turn all this into a talk that I can give toward the end of this year and next year. This is partially inspired by some posts we’ve published here over the years:

…as well as other people’s work, of course, like Brad Frost and Dan Mall’s Designer/Developer Workflow, and Lara Schenck and Mandy Michael’s thoughts on front-end development. Not to mention seismic shifts in the front-end development landscape through New JavaScript and Serverless.

I’ve been collecting these articles the best I can.

The ShopTalk series is happening now! A number of episodes are already published:


Speaking of ShopTalk, a while back Dave and I mused about wanting to redesign the ShopTalk Show website. We did all this work on the back end making sure all the data from our 350+ episodes is super clean and easy to work with, then I slapped a design on top of it that is honestly pretty bad.

Dan Mall heard us talk about it and reached out to us to see if he could help. Not to do the work himself — that would be amazing, but Dan had an even better idea. Instead, we would all work together to find a newcomer to design the site, working under Dan’s direction and guidance. Here’s Dan’s intro post (and note that applications are now closed).

We’re currently in the process of narrowing down the applicants and interviewing finalists. We’re planning on being very public about the process, so not only will we hopefully be helping someone who could use a bit of a break into this industry, but we’ll also help anyone else who cares to watch it happen.


I’ve recently had the pleasure of being a guest on other shows.

First up, I was on the Script & Style Show with David Walsh and Todd Gardner

I love that David has resurrected the name Script & Style. We did a site together quite a few years back with that same name!


I have a very short interview on Makerviews:

What one piece of advice would you give to other makers?

I’d say that you’re lucky. The most interesting people I know that seem to lead the most fulfilling, long, and interesting lives are those people who do interesting things, make interesting things, and generally just engage with life at a level deeper than just skating by or watching.


And my (third?) appearance on Thundernerds:


If you happen to live in Central Oregon, note that our BendJS meetups have kicked back up for the season. We’ve been having them right at our CodePen office and it’s been super fun.


I haven’t even gotten to CodePen stuff yet! Since my last chronicle, we’ve brought in a number of new employees, like Klare Frank, Cassidy Williams, and now Stephen Shaw. We’re always chugging away at polishing and maintaining CodePen, building new features, encouraging community, and everything else that running a social coding site requires.

Oh and hey! CodePen is now a registered trademark, so I can do this: CodePen®. One of our latest user-facing features is pinned items. Rest assured, we have loads of other features that are in development for y’all that are coming soon.

If you’re interested in the technology side of CodePen, we’ve dug into lots of topics lately on CodePen radio like:

The post CSS-Tricks Chronicle XXXIV appeared first on CSS-Tricks.

Continuous Integration: The What, Why and How

Not long ago, I had a novice understanding of Continuous Integration (CI) and thought it seemed like an extra process that forces engineers to do extra work on already large projects. My team began to implement CI into projects and, after some hands-on experience, I realized its great benefits, not only to the company, but to me as an engineer! In this post, I will describe CI, the benefits I’ve discovered, and how to implement it quickly and for free.

CI and Continuous Delivery (CD) are usually discussed together. Covering both CI and CD in one post is a lot to write and read all at once, so we’ll only discuss CI here. Maybe I will cover CD in a future post. 😉

What is CI?

Continuous Integration, as I understand it, is a pattern of programming that combines testing, safety checks, and development practices to confidently push code from a development branch to a production-ready branch, continuously.

Microsoft Word is an example of CI. Words are written into the program and checked against spelling and grammar algorithms to assert a document’s general readability and spelling.

Why CI should be used everywhere

We’ve already touched on this a bit, but the biggest benefit of CI that I see is that it saves a lot of money by making engineers more productive. Specifically, it provides quicker feedback loops, easier integration, and fewer bottlenecks. Directly correlating CI to company savings is hard because SaaS costs scale as the user base changes. So, if a developer wants to sell CI to the business, the calculator below can be utilized. Curious just how much it can save? My friend, David Inoa, created the following demo to help calculate the savings.

See the Pen Continuous Integration (CI) Company Cost Savings Estimator by David (@davidinoa) on CodePen.

What really excites me enough to scream from the rooftops is how CI can benefit you and me as developers!

For starters, CI will save you time. How much? We’re talking hours per week. How? Oh, do I want to tell you! CI automatically tests your code and lets you know if it is okay to be merged into a branch that goes to production. The time you would otherwise spend testing your code by hand and coordinating with others to get it ready for production really adds up.

Then there’s the way it helps prevent code fatigue. It sports tools like Greenkeeper, which can automatically set up — and even merge — pull requests following a code review. This keeps code up-to-date and allows developers to focus on what we really need to do. You know, like writing code or living life. Code updates within packages usually only need to be reviewed for major version updates, so there’s less need to track every minor release for breaking changes that require action.

CI takes a lot of the guesswork out of updating dependencies that otherwise would take a lot of research and testing.

No excuses, use CI!

When talking to developers, the conversation usually winds up something like:

“I would use CI but…[insert excuse].”

To me, that’s a cop out! CI can be free. It can also be easy. It’s true that the benefits of CI come with some costs, including monthly fees for tools like CircleCI or Greenkeeper. But that’s a drop in the bucket with the long-term savings it provides. It’s also true that it will take time to set things up. But it’s worth calling out that the power of CI can be used for free on open source projects. If you need or want to keep your code private and don’t want to pay for CI tools, then you really can build your own CI setup with a few great npm packages.

So, enough with the excuses and behold the power of CI!

What problems does CI solve?

Before digging in much further, we should cover the use cases for CI. It solves a lot of issues and comes in handy in many situations:

  • When more than one developer wants to merge into a production branch at once
  • When mistakes are not caught or cannot be fixed before deployment
  • When dependencies are out of date
  • When developers have to wait extended periods of time to merge code
  • When packages are dependent on other packages
  • When a package is updated and must be changed in multiple places

In all of these cases, CI tests updates and prevents bugs from being deployed.

Recommended CI tools

Let’s look at the high-level parts used to create a CI feedback loop, with some quick code bits to get CI set up for any open source project today. We’ll break this down into digestible chunks.

Documentation

In order to get CI working for me right away, I usually set CI up to test my initial documentation for a project. Specifically, I use MarkdownLint and Write Good because they provide all the features and functionality I need to write tests for this part of the project.

The great news is that GitHub provides standard templates and there is a lot of content that can be copied to get documentation set up quickly. Read more about quickly setting up documentation and creating a documentation feedback loop.

I keep a package.json file at the root of the project and run a script command like this:

"grammar": "write-good *.md --no-passive",
"markdownlint": "markdownlint *.md"

Those two lines allow me to start using CI. That’s it! I can now run CI to test grammar.
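For context, here’s a minimal sketch of how those commands might sit inside a package.json scripts block — the combined test entry is my own assumption, added so a CI service has a single command to call:

{
  "scripts": {
    "grammar": "write-good *.md --no-passive",
    "markdownlint": "markdownlint *.md",
    "test": "npm run grammar && npm run markdownlint"
  }
}

Locally, npm run grammar or npm test gives the same feedback loop before CI ever sees the code.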

At this point, I can move onto setting up CircleCI and Greenkeeper to help me make sure that packages are up to date. We’ll get to that in just a bit.

Unit testing

Unit tests are a method for testing small blocks (units) of code to ensure that the expected behavior of that block works as intended.

Unit tests provide a lot of help with CI. They define code quality and provide developers with feedback without having to push/merge/host code. Read more about unit tests and quickly setting up a unit test feedback loop.

Here is an example of a very basic unit test without using a library:

// Function that adds 1 to a number
const addsOne = (num) => num + 1

const numPlus1 = addsOne(3)         // 3 + 1, expect 4 as the value
const stringNumPlus1 = addsOne('3') // '3' + 1 concatenates to '31', so this will not be 4

/**
 * console.assert
 * https://developer.mozilla.org/en-US/docs/Web/API/console/assert
 * @param test?
 * @param string
 * @returns string if the test fails
 **/
console.assert(numPlus1 === 4, 'The variable `numPlus1` is not 4!')
console.assert(stringNumPlus1 === 4, 'The variable `stringNumPlus1` is not 4!')

Over time, it is nice to use libraries like Jest to unit test code, but this example gives you an idea of what we’re looking at.

Here’s an example of the same test above using Jest:

const addsOne = (num) => num + 1

describe('addsOne', () => {
  it('adds a number', () => {
    const numPlus1 = addsOne(3)
    expect(numPlus1).toEqual(4)
  })
  it('will not add a string', () => {
    const stringNumPlus1 = addsOne('3')
    expect(stringNumPlus1 === 4).toBeFalsy();
  })
})

Using Jest, tests can be hooked up for CI with a command in a package.json like this:

"test:jest": "jest --coverage",

The flag --coverage configures Jest to report test coverage.
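If you also want CI to fail when coverage slips, Jest supports coverage thresholds. Here’s a hedged sketch of what that could look like in package.json — the specific percentages are placeholders I made up, not a recommendation:

"jest": {
  "coverageThreshold": {
    "global": {
      "branches": 80,
      "functions": 80,
      "lines": 80,
      "statements": 80
    }
  }
}

With that in place, the test:jest command exits non-zero when global coverage drops below those numbers, and the CI run fails along with it.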

Safety checks

Safety checks help communicate code and code quality. Documentation, document templates, linters, spell checkers, and type checkers are all safety checks. These tools can be automated to run during commits, in development, during CI, or even in a code editor.

Safety checks fall into more than one category of CI: feedback loop and testing. I’ve compiled a list of the types of safety checks I typically bake into a project.

All of these checks may seem like another layer of code abstraction or learning, so be gentle on yourself and others if this feels overwhelming. These tools have helped my own team bridge experience gaps, define shareable team patterns, and assist developers when they’re confused about what their code is doing.

  • Committing, merging, communicating: Tools like husky, commitizen, GitHub Templates, and Changelogs help keep CI running clean code and form a nice workflow for a collaborative team environment.
  • Defining code (type checkers): Tools like TypeScript define and communicate code interfaces — not only types!
  • Linting: This is the practice of ensuring that something matches defined standards and patterns. There’s a linter for nearly all programming languages and you’ve probably worked with common ones, like ESLint (JavaScript) and Stylelint (CSS), in other projects.
  • Writing and commenting: Write Good helps catch grammar errors in documentation. Tools like JSDoc, Doctrine, and TypeDoc assist in writing documentation and add useful hints in code editors. Both can compile into markdown documentation.

ESLint is a good example of how any of these types of tools can be implemented in CI. For example, this is all that’s needed in package.json to lint JavaScript:

"eslint": "eslint ."

Obviously, there are many options that allow you to configure a linter to conform to your and your team’s coding standards, but you can see how practical it can be to set up.
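As a rough sketch, a bare-bones .eslintrc.json that leans on the recommended rule set could look like the following — your real config will almost certainly differ:

{
  "extends": "eslint:recommended",
  "env": {
    "browser": true,
    "node": true,
    "es6": true
  },
  "rules": {
    "no-unused-vars": "warn"
  }
}

The eslint . script from above picks this file up automatically, so the same rules run on a laptop and in CI.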

High level CI setup

Getting CI started for a repository often takes very little time, yet there are plenty of advanced configurations we can also put to use, if needed. Let’s look at a quick setup and then move into a more advanced configuration. Even the most basic setup pays off in saved time and better code quality!

Two features that can save developers hours per week with simple CI are automatic dependency updates and build testing. Dependency updates are written about in more detail here.

Build testing refers to installing node_modules during CI by running an install — for example, npm install — and confirming that everything installs as expected. This is a simple task, but it does fail from time to time, so catching it in CI saves considerable time!

Quick CI Setup

CI can be set up automatically for both CircleCI and Travis! If a valid test command is already defined in the repository’s package.json, then CI can be implemented without any more configuration.

In a CI tool, like CircleCI or Travis, the repository can be searched for after logging in or authenticating. From there, follow the CI tool’s UI to start testing.

For JavaScript, CircleCI will look at the test script within a repository’s package.json to see if a valid one is defined. If it is, then CircleCI will begin running CI automatically! Read more about setting up CircleCI automatically here.

Advanced configurations

If unit tests are unfinished, or if more configuration is needed, a .yml file can be added for a CI tool (like CircleCI) in which the runner’s execution steps are defined.

Below is how to set up a custom CircleCI configuration with JavaScript linting (again, using ESLint as an example).

First off, run this command:

mkdir .circleci && touch .circleci/config.yml

Then add the following to the generated file:

defaults: &defaults
  working_directory: ~/code
  docker:
    - image: circleci/node:10
      environment:
        NPM_CONFIG_LOGLEVEL: error # make npm commands less noisy
        JOBS: max # https://gist.github.com/ralphtheninja/f7c45bdee00784b41fed

version: 2
jobs:
  build:
    <<: *defaults
    steps:
      - checkout
      - run: npm i
      - run: npm run eslint:ci

After these steps are completed and after CircleCI has been configured in GitHub (more on that here), CircleCI will pick up .circleci/config.yml and lint JavaScript in a CI process when a pull request is submitted.

I created a folder with examples in this demo repository to show ideas for configuring CI with config.yml files, and you can reference it for your own project or use the files as a starting point.

There are even more CI tools that can be set up to help save developers more time, like auto-merging, auto-updating, monitoring, and much more!

Summary

We covered a lot here! To sum things up, setting up CI is very doable and can even be free of cost. With additional tooling (both paid and open source), we can have more time to code, and more time to write more tests for CI — or enjoy more life away from the screen!

Here are some demo repositories to help developers get set up fast or learn. Please feel free to reach out within the repositories with questions, ideas, or improvements.

The post Continuous Integration: The What, Why and How appeared first on CSS-Tricks.

Demystifying JavaScript Testing

Many people have messaged me, confused about where to get started with testing. Just like everything else in software, we work hard to build abstractions to make our jobs easier. But that amount of abstraction evolves over time, until the only ones who really understand it are the ones who built the abstraction in the first place. Everyone else is left with taking the terms, APIs, and tools at face value and struggling to make things work.

One thing I believe about abstraction in code is that the abstraction is not magic — it’s code. Another thing I believe about abstraction in code is that it’s easier to learn by doing.

Imagine that a less seasoned engineer approaches you. They’re hungry to learn, they want to be confident in their code, and they’re ready to start testing. 👍 Ever prepared to learn from you, they’ve written down a list of terms, APIs, and concepts they’d like you to define for them:

  • Assertion
  • Testing Framework
  • The describe/it/beforeEach/afterEach/test functions
  • Mocks/Stubs/Test Doubles/Spies
  • Unit/Integration/End to end/Functional/Accessibility/Acceptance/Manual testing

So…

Could you rattle off definitions for that budding engineer? Can you explain the difference between an assertion library and a testing framework? Or, are they easier for you to identify than explain?

Here’s the point. The better you understand these terms and abstractions, the more effective you will be at teaching them. And if you can teach them, you’ll be more effective at using them, too.

Enter a teach-an-engineer-to-fish moment. Did you know that you can write your own assertion library and testing framework? We often think of these abstractions as beyond our capabilities, but they’re not. Each of the popular assertion libraries and frameworks started with a single line of code, followed by another and then another. You don’t need any tools to write a simple test.

Here’s an example:

const {sum} = require('../math')
const result = sum(3, 7)
const expected = 10
if (result !== expected) {
  throw new Error(`${result} is not equal to ${expected}`)
}

Put that in a module called test.js and run it with node test.js and, poof, you can start getting confident that the sum function from the math.js module is working as expected. Make that run on CI and you can get the confidence that it won’t break as changes are made to the codebase. 🏆
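Hooking that up to CI can be as small as pointing the standard test script at the file — a minimal sketch, assuming the test.js module from above:

{
  "scripts": {
    "test": "node test.js"
  }
}

Most CI setups run npm test by default, so a non-zero exit code from that script is enough to fail the build.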

Let’s see what a failure would look like with this:

Terminal window showing an error indicating -4 is not equal to 10.

So apparently our sum function is subtracting rather than adding and we’ve been able to automatically detect that through this script. All we need to do now is fix the sum function, run our test script again and:

Terminal window showing that we ran our test script and no errors were logged.

Fantastic! The script exited without an error, so we know that the sum function is working. This is the essence of a testing framework. There’s a lot more to it (e.g. nicer error messages, better assertions, etc.), but this is a good starting point to understand the foundations.
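To make the “you can build these yourself” point concrete, here’s a tiny, hedged sketch of a homemade assertion helper and test runner — toy code with names I picked, not the API of any real library:

// a minimal assertion "library"
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`${actual} is not equal to ${expected}`)
      }
    }
  }
}

// a minimal "framework": run a named test and report pass/fail
function test(title, callback) {
  try {
    callback()
    console.log(`✓ ${title}`)
  } catch (error) {
    console.error(`✗ ${title}`)
    console.error(error)
  }
}

// usage, with the same sum from math.js
const {sum} = require('../math')
test('sum adds numbers', () => {
  expect(sum(3, 7)).toBe(10)
})

Nicer error messages and more assertions are just more code layered on top of this same shape.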

Once you understand how the abstractions work at a fundamental level, you’ll probably want to use them because, hey, you just learned to fish and now you can go fishing. And we have some pretty phenomenal fishing poles, uh, tools available to us. My favorite is the Jest testing platform. It’s amazingly capable, fully featured and allows me to write tests that give me the confidence I need to not break things as I change code.

I feel like fundamentals are so important that I included an entire module about it on TestingJavaScript.com. This is the place where you can learn the smart, efficient way to test any JavaScript application. I’m really happy with what I’ve created for you. I think it’ll help accelerate your understanding of testing tools and abstractions by giving you the chance to implement parts from scratch. The (hopeful) result? You can start writing tests that are maintainable and built to instill confidence in your code day after day. 🎣

The early bird sale is going on right now! 40% off every tier! The sale is going away in the next few days so grab this ASAP!

TestingJavaScript.com – Learn the smart, efficient way to test any JavaScript application.

P.S. Give this a try: tweet your answer to “What’s the difference between a testing framework and an assertion library?” In my course, I’ll not only explain it, we’ll build our own!

The post Demystifying JavaScript Testing appeared first on CSS-Tricks.

Hand roll charts with D3 like you actually know what you’re doing

Charts! My least favorite subject besides Social Studies. But you just won’t get very far in this industry before someone wants you to make a chart. I don’t know what it is with people and charts, but apparently we can’t have a civilization without a bar chart showing Maggie’s sales for last month so by ALL MEANS — let’s make a chart.

Yes, I know this is not how you would display this data. I’m trying to make a point here.

To prepare you for that impending “OMG I’m going to have to make a chart” existential crisis that, much like death, we like to pretend is never going to happen, I’m going to show you how to hand-roll your own scatter plot graph with D3.js. This article is heavy on the code side and your first glance at the finished code is going to trigger your “fight or flight” response. But if you can get through this article, I think you will be surprised at how well you understand D3 and how confident you are that you can go make some other chart that you would rather not make.

Before we do that, though, it’s important to talk about WHY you would ever want to roll your own chart.

Building vs. Buying

When you do have to chart, you will likely reach for something that comes “out of the box.” You would never ever hand-roll a chart. The same way you would never sit around and smash your thumb with a hammer; it’s rather painful and there are more productive ways to use your hammer. Charts are rather complex user interface items. It’s not like you’re center-aligning some text in a div here. Libraries like Chart.js or Kendo UI have pre-made charts that you can just point at your data. Developers have spent thousands of hours perfecting these charts. You would never ever build one of these yourself.

Or would you?

Charting libraries are fantastic, but they do impose a certain amount of restrictions on you… and sometimes they actually make it harder to do even the simple things. As Peter Parker’s uncle said before he over-acted his dying scene in Spider-Man, “With great charting libraries, comes great trade-off in flexibility.”

Tobey never should have been Spider-Man. FITE ME.

This is exactly the scenario I found myself in when my colleague, Jasmine Greenaway, and I decided that we could use charts to figure out who @horse_js is. In case you aren’t already a big @horse_js fan, it’s a Twitter parody account that quotes people out of context. It’s extremely awesome.

We pulled every tweet from @horse_js for the past two years. We stuck that in a Cosmos DB database and then created an Azure Function endpoint to expose the data.

And then, with a sinking feeling in our stomachs, we realized that we needed a chart. We wanted to be able to see what the data looked like as it occurred over time. We thought being able to see the data visually in a Time Series Analysis might help us identify some pattern or gain some insight about the twitter account. And indeed, it did.

We charted every tweet that @horse_js has posted in the last two years. When we look at that data on a scatter plot, it looks like this:

See the Pen wYxYNd by Burke Holland (@burkeholland) on CodePen.

Coincidentally, this is the thing we are going to build in this article.

Each tweet is displayed with the date on the x-axis, and the time of day on the y. I thought this would be easy to do with a charting library, but all the ones I tried weren’t really equipped to handle the scenario of a date across the x and a time on the y. I also couldn’t find any examples of people doing it online. Am I breaking new ground here? Am I a data visualization pioneer?

Probably. Definitely.

So, let’s take a look at how we can build this breathtaking scatter plot using D3.

Getting started with D3

Here’s the thing about D3: it looks pretty awful. I just want to get that out there so we can stop pretending like D3 code is fun to look at. It’s not. There’s no shame in saying that. Now that we’ve invited that elephant in the room to the tea party, allow me to insinuate that even though D3 code looks pretty bad, it’s actually not. There’s just a lot of it.

To get started, we need D3. I am using the CDN include for D3 5 for these examples. I’m also using Moment to work with the dates, which we’ll get to later.

https://cdnjs.cloudflare.com/ajax/libs/d3/5.7.0/d3.min.js
https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.22.2/moment.min.js

D3 works with SVG. That’s what it does. It basically marries SVG with data and provides some handy pre-built mechanisms for visualizing it — things such as an axis. Or Axees? Axises? Whatever the plural of “axis” is. But for now, just know that it’s like jQuery for SVG.

So, the first thing we need is an SVG element to work with.

<svg id="chart"></svg>

OK. Now we’re ready to start D3’ing our way to data visualization infamy. The first thing we’re going to do is make our scatter plot a class. We want to make this thing as generic as possible so that we can re-use it with other sets of data. We’ll start with a constructor that takes two parameters. The first will be the class or id of the element we are about to work with (in our case that’s, #chart) and the second is an object that will allow us to pass in any parameters that might vary from chart-to-chart (e.g. data, width, etc.).

class ScatterPlot {
  constructor(el, options) {
  }
}

The chart code itself will go in a render function, which will also require the data set we’re working with to be passed.

class ScatterPlot {
  constructor(el, options) {
    this.render(options.data);
  }
  render(data) {
  }
}

The first thing we’ll do in our render method is set some size values and margins for our chart.

class ScatterPlot {
  constructor(el, options) {
    this.data = options.data || [];
    this.width = options.width || 500;
    this.height = options.height || 400;
    this.render();
  }
  render() {
    let margin = { top: 20, right: 20, bottom: 50, left: 60 };
    let height = this.height || 400;
    let width = (this.width || 500) - margin.right - margin.left;
    let data = this.data;
  }
}

I mentioned that D3 is like jQuery for SVG, and I think that analogy sticks. So you can see what I mean, let’s make a simple SVG drawing with D3.

For starters, you need to select the DOM element that SVG is going to work with. Once you do that, you can start appending things and setting their attributes. D3, just like jQuery, is built on the concept of chaining, so each function that you call returns an instance of the element on which you called it. In this manner, you can keep on adding elements and attributes until the cows come home.

For instance, let’s say we wanted to draw a square. With D3, we can draw a rectangle (in SVG that’s a rect), adding the necessary attributes along the way.

See the Pen zmdpJZ by Burke Holland (@burkeholland) on CodePen.
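Since the embedded Pen doesn’t show its code inline here, this is roughly what that looks like — a sketch where the element id and sizes are my own assumptions:

// grab the SVG, size it, then chain a rect and its attributes, jQuery-style
d3.select('#chart')
  .attr('width', 200)
  .attr('height', 200)
  .append('rect')
  .attr('x', 50)
  .attr('y', 50)
  .attr('width', 100)
  .attr('height', 100)
  .attr('fill', '#ff8c00');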

NOW. At this point you will say, “But I don’t know SVG.” Well, I don’t either. But I do know how to Google and there is no shortage of articles on how to do pretty much anything in SVG.

So, how do we get from a rectangle to a chart? This is where D3 becomes way more than just “jQuery for drawing.”

First, let’s create a chart. We start with an empty SVG element in our markup. We use D3 to select that empty svg element (called #chart) and define its width and height as well as margins.

// create the chart
this.chart = d3.select(this.el)
  .attr('width', width + margin.right + margin.left)
  .attr('height', height + margin.top + margin.bottom);

And here’s what it looks like:

See the Pen EdpOqy by Burke Holland (@burkeholland) on CodePen.

AMAZING! Nothing there. If you open the dev tools, you’ll see that there is something there. It’s just an empty something. Kind of like my soul.

That’s your chart! Let’s go about putting some data in it. For that, we are going to need to define our x and y-axis.

That’s pretty easy in D3. You call the axisBottom method. Here, I am also formatting the tick marks with the right date format to display.

let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

I am also passing an “x” parameter to the axisBottom method. What is that? That is called a scale.

D3 scales

D3 has something called scales. Scales are just a way of telling D3 where to put your data and D3 has a lot of different types of scales. The most common kind would be linear — like a scale of data from 1 to 10. It also contains a scale just for time series data — which is what we need for this chart. We can use the scaleTime method to define a “scale” for our x-axis.

// define the x-axis
let minDateValue = d3.min(data, d => {
  return new Date(moment(d.created_at).format('MM-DD-YYYY'));
});
let maxDateValue = d3.max(data, d => {
  return new Date(moment(d.created_at).format('MM-DD-YYYY'));
});

let x = d3.scaleTime()
  .domain([minDateValue, maxDateValue])
  .range([0, width]);

let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

D3 scales use some terminology that is slightly intimidating. There are two main concepts to understand here: domains and ranges.

  • Domain: The range of possible values in your data set. In my case, I’m getting the minimum date from the array, and the maximum date from the array. Every other value in the data set falls between these two endpoints — so those “endpoints” define my domain.
  • Range: The range over which to display your data set. In other words, how spread out do you want your data to be? In our case, we want it constrained to the width of the chart, so we just pass width as the second parameter. If we passed a value like, say, 10000, our data would spread out over 10,000 pixels. If we passed no value at all, it would draw all of the data on top of itself on the left-hand side of the chart… like the following image.

The y-axis is built in the same way. Only, for it, we are going to be formatting our data for time, not date.

// define y axis
let minTimeValue = new Date().setHours(0, 0, 0, 0);
let maxTimeValue = new Date().setHours(23, 59, 59, 999);

let y = d3.scaleTime()
  .domain([minTimeValue, maxTimeValue])
  .nice(d3.timeDay)
  .range([height, 0]);

let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

The extra nice method call on the y scale tells the y-axis to format this time scale nicely. If we don’t include that, it won’t have a label for the top-most tick on the left-hand side because it only goes to 11:59:59 PM, rather than all the way to midnight. It’s a quirk, but we’re not making crap here. We need labels on all our ticks.

Now we’re ready to draw our axis to the chart. Remember that our chart has some margins on it. In order to properly position the items inside of our chart, we are going to create a grouping (g) element and set its width and height. Then, we can draw all of our elements in that container.

let main = this.chart.append('g')
  .attr('transform', `translate(${margin.left}, ${margin.top})`)
  .attr('width', width)
  .attr('height', height)
  .attr('class', 'main');

We’re drawing our container, accounting for margin and setting its width and height. Yes. I know. It’s tedious. But such is the state of laying things out in a browser. When was the last time you tried to horizontally and vertically center content in a div? Yeah, not so awesome prior to Flexbox and CSS Grid.

Now, we can draw our x-axis:

main.append('g')
  .attr('transform', `translate(0, ${height})`)
  .attr('class', 'main axis date')
  .call(xAxis);

We make a container element, and then “call” the xAxis that we defined earlier. D3 draws things starting at the top-left, so we use the transform attribute to offset the x-axis from the top so it appears at the bottom. If we didn’t do that, our chart would look like this…

By specifying the transform, we push it to the bottom. Now for the y-axis:

main.append('g')
  .attr('class', 'main axis date')
  .call(yAxis);

Let’s look at all the code we have so far, and then we’ll see what this outputs to the screen.

class ScatterPlot {
  constructor(el, options) {
    this.el = el;
    if (options) {
      this.data = options.data || [];
      this.tooltip = options.tooltip;
      this.pointClass = options.pointClass || '';
      this.width = options.width || 500;
      this.height = options.height || 400;
      this.render();
    }
  }
  render() {
    let margin = { top: 20, right: 15, bottom: 60, left: 60 };
    let height = this.height || 400;
    let width = (this.width || 500) - margin.right - margin.left;
    let data = this.data;

    // create the chart
    let chart = d3.select(this.el)
      .attr('width', width + margin.right + margin.left)
      .attr('height', height + margin.top + margin.bottom);

    // define the x-axis
    let minDateValue = d3.min(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });
    let maxDateValue = d3.max(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });

    let x = d3.scaleTime()
      .domain([minDateValue, maxDateValue])
      .range([0, width]);

    let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

    // define y axis
    let minTimeValue = new Date().setHours(0, 0, 0, 0);
    let maxTimeValue = new Date().setHours(23, 59, 59, 999);

    let y = d3.scaleTime()
      .domain([minTimeValue, maxTimeValue])
      .nice(d3.timeDay)
      .range([height, 0]);

    let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

    // define our content area
    let main = chart.append('g')
      .attr('transform', `translate(${margin.left}, ${margin.top})`)
      .attr('width', width)
      .attr('height', height)
      .attr('class', 'main');

    // draw x axis
    main.append('g')
      .attr('transform', `translate(0, ${height})`)
      .attr('class', 'main axis date')
      .call(xAxis);

    // draw y axis
    main.append('g')
      .attr('class', 'main axis date')
      .call(yAxis);
  }
}

See the Pen oaeybM by Burke Holland (@burkeholland) on CodePen.

We’ve got a chart! Call your friends! Call your parents! IMPOSSIBLE IS NOTHING!

Axis labels

Now let’s add some chart labels. By now you may have figured out that when it comes to D3, you are doing pretty much everything by hand. Adding axis labels is no different. All we are going to do is add an SVG text element, set its value and position it. That’s all.

For the x-axis, we can add the text label and position it using translate. We set its x position to the middle of the chart (width / 2), then account for the left-hand margin to make sure we are centered under just the chart. I’m also using a CSS class, axis-label, that has text-anchor: middle to make sure our text is originating from the center of the text element.

// text label for the x-axis
chart.append("text")
  .attr("transform",
    "translate(" + ((width / 2) + margin.left) + " ," +
    (height + margin.top + margin.bottom) + ")")
  .attr('class', 'axis-label')
  .text("Date Of Tweet");

The y-axis is the same concept — a text element that we manually position. This one is positioned with absolute x and y attributes. This is because our transform is used to rotate the label, so we use the x and y properties to position it.

Remember: Once you rotate an element, x and y rotate with it. That means that when the text element is on its side like it is here, y now pushes it left and right and x pushes it up and down. Confused yet? It’s OK, you’re in great company.

// text label for the y-axis
chart.append("text")
  .attr("transform", "rotate(-90)")
  .attr("y", 10)
  .attr("x", 0 - ((height / 2) + margin.top + margin.bottom))
  .attr('class', 'axis-label')
  .text("Time of Tweet - CST (-6)");

See the Pen oaeybM by Burke Holland (@burkeholland) on CodePen.

Now, like I said — it’s a LOT of code. That’s undeniable. But it’s not super complex code. It’s like LEGO: LEGO blocks are simple, but you can build pretty complex things with them. What I’m trying to say is it’s a highly sophisticated interlocking brick system.

Now that we have a chart, it’s time to draw our data.

Drawing the data points

This is fairly straightforward. As usual, we create a grouping to put all our circles in. Then we loop over each item in our data set and draw an SVG circle. We have to set the position of each circle (cx and cy) based on the current data item’s date and time value. Lastly, we set its radius (r), which controls how big the circle is.

let circles = main.append('g');

data.forEach(item => {
  circles.append('svg:circle')
    .attr('class', this.pointClass)
    .attr('cx', d => {
      return x(new Date(item.created_at));
    })
    .attr('cy', d => {
      let today = new Date();
      let time = new Date(item.created_at);
      return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
    })
    .attr('r', 5);
});

When we set the cx and cy values, we use the scale (x or y) that we defined earlier. We pass that scale the date or time value of the current data item and the scale will give us back the correct position on the chart for this item.

And, my good friend, we have a real chart with some real data in it.

See the Pen VEzdrR by Burke Holland (@burkeholland) on CodePen.

Lastly, let’s add some animation to this chart. D3 has some nice easing functions that we can use here. What we do is define a transition on each one of our circles. Basically, anything that comes after the transition method gets animated. Since D3 draws everything from the top-left, we can set the x position first and then animate the y. The result is the dots look like they are falling into place. We can use D3’s nifty easeBounce easing function to make those dots bounce when they fall.

data.forEach(item => {
  circles.append('svg:circle')
    .attr('class', this.pointClass)
    .attr('cx', d => {
      return x(new Date(item.created_at));
    })
    .transition()
    .duration(Math.floor(Math.random() * (3000 - 2000) + 1000))
    .ease(d3.easeBounce)
    .attr('cy', d => {
      let today = new Date();
      let time = new Date(item.created_at);
      return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
    })
    .attr('r', 5);
});

OK, so one more time, all together now…

class ScatterPlot {
  constructor(el, options) {
    this.el = el;
    this.data = options.data || [];
    this.pointClass = options.pointClass || '';
    this.width = options.width || 960;
    this.height = options.height || 500;
    this.render();
  }
  render() {
    let margin = { top: 20, right: 20, bottom: 50, left: 60 };
    let height = this.height - margin.bottom - margin.top;
    let width = this.width - margin.right - margin.left;
    let data = this.data;

    // create the chart
    let chart = d3.select(this.el)
      .attr('width', width + margin.right + margin.left)
      .attr('height', height + margin.top + margin.bottom);

    // define the x-axis
    let minDateValue = d3.min(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });
    let maxDateValue = d3.max(data, d => {
      return new Date(moment(d.created_at).format('MM-DD-YYYY'));
    });

    let x = d3.scaleTime()
      .domain([minDateValue, maxDateValue])
      .range([0, width]);

    let xAxis = d3.axisBottom(x).tickFormat(d3.timeFormat('%b-%y'));

    // define y axis
    let minTimeValue = new Date().setHours(0, 0, 0, 0);
    let maxTimeValue = new Date().setHours(23, 59, 59, 999);

    let y = d3.scaleTime()
      .domain([minTimeValue, maxTimeValue])
      .nice(d3.timeDay)
      .range([height, 0]);

    let yAxis = d3.axisLeft(y).ticks(24).tickFormat(d3.timeFormat('%H:%M'));

    // define our content area
    let main = chart.append('g')
      .attr('transform', `translate(${margin.left}, ${margin.top})`)
      .attr('width', width)
      .attr('height', height)
      .attr('class', 'main');

    // draw x axis
    main.append('g')
      .attr('transform', `translate(0, ${height})`)
      .attr('class', 'main axis date')
      .call(xAxis);

    // draw y axis
    main.append('g')
      .attr('class', 'main axis date')
      .call(yAxis);

    // text label for the y axis
    chart.append("text")
      .attr("transform", "rotate(-90)")
      .attr("y", 10)
      .attr("x", 0 - ((height / 2) + margin.top + margin.bottom))
      .attr('class', 'axis-label')
      .text("Time of Tweet - CST (-6)");

    // draw the data points
    let circles = main.append('g');
    data.forEach(item => {
      circles.append('svg:circle')
        .attr('class', this.pointClass)
        .attr('cx', d => {
          return x(new Date(item.created_at));
        })
        .transition()
        .duration(Math.floor(Math.random() * (3000 - 2000) + 1000))
        .ease(d3.easeBounce)
        .attr('cy', d => {
          let today = new Date();
          let time = new Date(item.created_at);
          return y(today.setHours(time.getHours(), time.getMinutes(), time.getSeconds(), time.getMilliseconds()));
        })
        .attr('r', 5);
    });
  }
}

We can now make a call for some data and render this chart…

// get the data
let data = fetch('https://s3-us-west-2.amazonaws.com/s.cdpn.io/4548/time-series.json')
  .then(d => d.json())
  .then(data => {
    // massage the data a bit to get it in the right format
    let horseData = data.map(item => {
      return item.horse;
    })

    // create the chart
    let chart = new ScatterPlot('#chart', {
      data: horseData,
      width: 960
    });
  });

And here is the whole thing, complete with a call to our Azure Function returning the data from Cosmos DB. It’s a TON of data, so be patient while we chew up all your bandwidth.

See the Pen GYvGep by Burke Holland (@burkeholland) on CodePen.

If you made it this far, I…well, I’m impressed. D3 is not an easy thing to get into. It simply doesn’t look like it’s going to be any fun. BUT, no thumbs were smashed here, and we now have complete control of this chart. We can do anything we like with it.

Check out some of these additional resources for D3, and good luck with your chart. You can do it! Or you can’t. Either way, someone has to make a chart, and it might as well be you.

For your data and API:

More on D3:

The post Hand roll charts with D3 like you actually know what you’re doing appeared first on CSS-Tricks.

How to stop using console.log() and start using your browser’s debugger

Whenever I see someone really effectively debug JavaScript in the browser, they use the DevTools tooling to do it. Setting breakpoints and hopping over them and such. That, as opposed to sprinkling console.log() (and friends) statements all around your code.

Parag Zaveri wrote about the transition and it has clearly resonated with lots of folks! (7.5k claps on Medium as I write).

I know I have hangups about it…

  • Part of debugging is not just inspecting code once as-is; it’s inspecting stuff, making changes and then continuing to debug. If I spend a bunch of time setting up breakpoints, will they still be there after I’ve changed my code and refreshed? Answer: DevTools appears to do a pretty good job with that.
  • Looking at the console to see some output is one thing, but mucking about in the Sources panel is another. My code there might be transpiled, combined, and not quite look like my authored code, making things harder to find. Plus it’s a bit cramped in there, visually.

But yet! It’s so powerful. Setting a breakpoint (just by clicking a line number) means that I don’t have to litter my own code with extra junk, nor do I have to choose what to log. Every variable in local and global scope is available for me to inspect at that breakpoint. I learned in Parag’s article that you might not even need to manually set breakpoints. You can, for example, have it break whenever a click (or other) event fires. Plus, you can type in variable names you specifically want to watch for, so you don’t have to dig around looking for them. I’ll be trying to use the proper DevTools for debugging more often and seeing how it goes.

While we’re talking about debugging though… I’ve had this in my head for a few months: Why doesn’t JavaScript have log levels? Apparently, this is a very common concept in many other languages. You can write logging statements, but they will only log if the configuration says it should. That way, in development, you can get detailed logging, but log only more serious errors in production. I mention it because it could be nice to leave useful logging statements in the code, but not have them actually log if you set like console.level = "production"; or whatever. Or perhaps they could be compiled out during a build step.
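Nothing stops us from faking that in userland today, though. Here’s a hedged, minimal sketch of what a level-aware logger could look like — my own toy wrapper, not a proposal or an existing API:

// only log at or above the configured level
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

const logger = {
  level: 'debug', // flip to 'error' (or read a build/env flag) in production
  log(level, ...args) {
    if (LEVELS[level] >= LEVELS[this.level]) {
      console[level === 'debug' ? 'log' : level](...args);
    }
  },
  debug(...args) { this.log('debug', ...args); },
  info(...args) { this.log('info', ...args); },
  warn(...args) { this.log('warn', ...args); },
  error(...args) { this.log('error', ...args); }
};

logger.debug('detailed dev-only output'); // silenced when level is 'error'
logger.error('this always gets through');

A build step could go further and strip the debug calls out entirely, which is closer to what other languages’ logging frameworks do.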

Direct Link to ArticlePermalink

The post How to stop using console.log() and start using your browser’s debugger appeared first on CSS-Tricks.

Use Cases for Flexbox

I remember when I first started to work with flexbox that the world looked like flexible boxes to me. It’s not that I forgot how floats, inline-block, or any other layout mechanisms work, I just found myself reaching for flexbox by default.

Now that grid is here and I find myself working on projects where I can use it freely, I find myself reaching for grid by default for the most part. But it’s not that I forgot how flexbox works or feel that grid supersedes flexbox — it’s just that darn useful. Rachel puts it very well:

Asking whether your design should use Grid or Flexbox is a bit like asking if your design should use font-size or color. You should probably use both, as required. And, no-one is going to come to chase you if you use the wrong one.

Yes, they can both lay out some boxes, but they are different in nature and are designed for different use cases. Wrapping elements of uneven length is a big one, but Rachel goes into a bunch of different use cases in this article.

Direct Link to ArticlePermalink

The post Use Cases for Flexbox appeared first on CSS-Tricks.

Durable Functions: Fan Out Fan In Patterns

This post is a collaboration between myself and my awesome coworker, Maxime Rouiller.

Durable Functions? Wat. If you’re new to Durable, I suggest you start here with this post that covers all the essentials so that you can properly dive in. In this post, we’re going to dive into one particular use case so that you can see a Durable Function pattern at work!

Today, let’s talk about the Fan Out, Fan In pattern. We’ll do so by retrieving an open issue count from GitHub and then storing what we get. Here’s the repo where all the code lives that we’ll walk through in this post.

View Repo

About the Fan Out/Fan In Pattern

We briefly mentioned this pattern in the previous article, so let’s review. You’d likely reach for this pattern when you need to execute multiple functions in parallel and then perform some other task with those results. You can imagine that this pattern is useful for quite a lot of projects, because it’s pretty often that we have to do one thing based on data from a few other sources.

For example, let’s say you are a takeout restaurant with a ton of orders coming through. You might use this pattern to first get the order, then use that order to figure out prices for all the items, the availability of those items, and see if any of them have any sales or deals. Perhaps the sales/deals are not hosted in the same place as your prices because they are controlled by an outside sales firm. You might also need to find out what your delivery queue is like and who on your staff should get it based on their location.

That’s a lot of coordination! But you’d need to then aggregate all of that information to complete the order and process it. This is a simplified, contrived example of course, but you can see how useful it is to work on a few things concurrently so that they can then be used by one final function.

Here’s what that looks like, in abstract code and a visualization:

See the Pen Durable Functions: Pattern #2, Fan Out, Fan In by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')

module.exports = df(function*(ctx) {
  const tasks = []

  // items to process concurrently, added to an array
  const taskItems = yield ctx.df.callActivityAsync('fn1')
  taskItems.forEach(item => tasks.push(ctx.df.callActivityAsync('fn2', item)))

  yield ctx.df.Task.all(tasks)

  // send results to last function for processing
  yield ctx.df.callActivityAsync('fn3', tasks)
})

Now that we see why we would want to use this pattern, let’s dive in to a simplified example that explains how.

Setting up your environment to work with Durable Functions

First things first. We’ve got to get our development environment ready to work with Durable Functions. Let’s break that down.

GitHub Personal Access Token

To run this sample, you’ll need to create a personal access token in GitHub. Go under your account photo, open the dropdown, and select Settings, then Developer settings in the left sidebar. In the same sidebar on the next screen, click the Personal access tokens option.

Then a prompt will come up and you can click the Generate new token button. You should give your token a name that makes sense for this project. Like “Durable functions are better than burritos.” You know, something standard like that.

For the scopes/permission option, I suggest selecting “repos,” which then allows you to click the Generate token button and copy the token to your clipboard. Please keep in mind that you should never commit your token. (It will be revoked if you do. Ask me why I know that.) If you need more info on creating tokens, there are further instructions here.

Functions CLI

First, we’ll install the latest version of the Azure Functions CLI. We can do so by running this in our terminal:

npm i -g azure-functions-core-tools@core --unsafe-perm true

Does the unsafe perm flag freak you out? It did for me as well. Really what it’s doing is preventing UID/GID switching when package scripts run, which is necessary because the package itself is a JavaScript wrapper around .NET. Brew installing without such a flag is also available and more information about that is here.

Optional: Setting up the project in VS Code

Totally not necessary, but I like working in VS Code with Azure Functions because it has great local debugging, which is typically a pain with Serverless functions. If you haven’t already installed it, you can do so here:

Set up a Free Trial for Azure and Create a Storage Account

To run this sample, you’ll need to test drive a free trial for Azure. You can go into the portal and sign in from the lefthand corner. You’ll make a new Blob Storage account and retrieve the keys. Since we have that all squared away, we’re ready to rock!

Setting up Our Durable Function

Let’s take a look at the repo we have set up. We’ll clone or fork it:

git clone https://github.com/Azure-Samples/durablefunctions-apiscraping-nodejs.git 

Here’s what that initial file structure is like.

file structure for the durable function repo

(This visualization was made from my CLI tool.)

In local.settings.json, change GitHubToken to the value you grabbed from GitHub earlier, and do the same for the two storage keys — paste in the keys from the storage account you set up earlier.

Then run:

func extensions install
npm i
func host start

And now we’re running locally!

Understanding the Orchestrator

As you can see, we have a number of folders within the FanOutFanInCrawler directory. The functions in the directories listed GetAllRepositoriesForOrganization, GetAllOpenedIssues, and SaveRepositories are the functions that we will be coordinating.

Here’s what we’ll be doing:

  • The Orchestrator will kick off the GetAllRepositoriesForOrganization function, where we’ll pass in the organization name, retrieved from getInput() from the Orchestrator_HttpStart function
  • Since this is likely to be more than one repo, we’ll first create an empty array, then loop through all of the repos and run GetOpenedIssues, and push those onto the array. What we’re running here will all fire concurrently because it isn’t within the yield in the iterator
  • Then we’ll wait for all of the tasks to finish executing and finally call SaveRepositories which will store all of the results in Blob Storage

Since the other functions are fairly standard, let’s dig into that Orchestrator for a minute. If we look inside the Orchestrator directory, we can see it has a fairly traditional setup for a function with index.js and function.json files.

Generators

Before we dive into the Orchestrator, let’s take a very brief side tour into generators, because you won’t be able to understand the rest of the code without them.

A generator is not the only way to write this code! It could be accomplished with other asynchronous JavaScript patterns as well. It just so happens that this is a pretty clean and legible way to write it, so let’s look at it really fast.

function* generator(i) {
  yield i++;
  yield i++;
  yield i++;
}

var gen = generator(1);

console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3
console.log(gen.next()); // {value: undefined, done: true}

After the initial little asterisk following function*, you can begin to use the yield keyword. Calling a generator function does not execute the whole function in its entirety; an iterator object is returned instead. The next() method will walk over them one by one, and we’ll be given an object that tells us both the value and done — which will be a boolean of whether we’re done walking through all of the yield statements. You can see in the example above that for the last .next() call, an object is returned where done is true, letting us know we’ve iterated through all values.

Orchestrator code

We’ll start with the require statement we’ll need for this to work:

const df = require('durable-functions')

module.exports = df(function*(context) {
  // our orchestrator code will go here
})

It’s worth noting that the asterisk there will create an iterator function.

First, we’ll get the organization name from the Orchestrator_HttpStart function and get all the repos for that organization with GetAllRepositoriesForOrganization. Note we use yield within the repositories assignment to make the function perform in sequential order.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )
})

Then we’re going to create an empty array named output, create a for loop from the array we got containing all of the organization’s repos, and use that to push the issues into the array. Note that we don’t use yield here so that they’re all running concurrently instead of waiting one after another.

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }
})

Finally, when all of these executions are done, we’re going to store the results and pass that in to the SaveRepositories function, which will save them to Blob Storage. Then we’ll return the unique ID of the instance (context.instanceId).

const df = require('durable-functions')

module.exports = df(function*(context) {
  var organizationName = context.df.getInput()
  var repositories = yield context.df.callActivityAsync(
    'GetAllRepositoriesForOrganization',
    organizationName
  )

  var output = []
  for (var i = 0; i < repositories.length; i++) {
    output.push(
      context.df.callActivityAsync('GetOpenedIssues', repositories[i])
    )
  }

  const results = yield context.df.Task.all(output)
  yield context.df.callActivityAsync('SaveRepositories', results)

  return context.instanceId
})

Now we’ve got all the steps we need to manage all of our functions with this single orchestrator!

Deploy

Now the fun part. Let’s deploy! 🚀

To deploy components, Azure requires you to install the Azure CLI and login with it.

First, you will need to provision the service. Look into the provision.ps1 file that’s provided to familiarize yourself with the resources we are going to create. Then, you can execute the file with the previously generated GitHub token like this:

.\provision.ps1 -githubToken <TOKEN> -resourceGroup <ResourceGroupName> -storageName <StorageAccountName> -functionName <FunctionName>

If you don’t want to install PowerShell, you can also take the commands within provision.ps1 and run them manually.

And there we have it! Our Durable Function is up and running.

The post Durable Functions: Fan Out Fan In Patterns appeared first on CSS-Tricks.

Understanding the difference between grid-template and grid-auto

Ire Aderinokun:

Within a grid container, there are grid cells. Any cell positioned and sized using the grid-template-* properties forms part of the explicit grid. Any grid cell that is not positioned/sized using this property forms part of the implicit grid instead.

Understanding explicit grids and implicit grids is powerful. This is my quick take:

  • Explicit: you define a grid and place items exactly where you want them to go.
  • Implicit: you define a grid and let items fall into it as they can.

Grids can be both!

Direct Link to ArticlePermalink

The post Understanding the difference between grid-template and grid-auto appeared first on CSS-Tricks.