Avoiding those dang cannot read property of undefined errors

Uncaught TypeError: Cannot read property 'foo' of undefined. The dreaded error we all hit at some point in JavaScript development. Could be an empty state from an API that returns differently than you expected. Could be something else. We don’t know because the error itself is so general and broad.

I recently had an issue where certain environment variables weren’t being pulled in for one reason or another, causing all sorts of funkiness with that error staring me in the face. Whatever the cause, it can be a disastrous error if it’s left unaccounted for, so how can we prevent it in the first place?

Let’s figure it out.

Utility library

If you are already using a utility library in your project, there is a good chance that it includes a function for preventing this error. _.get in lodash or R.path in Ramda both allow safe access to nested object properties.
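
As a quick sketch of both (using the favorites object that appears later in this post; the 'none' argument is lodash’s optional default value):

_.get(favorites, 'video.movies[0]'); // 'Casablanca'
_.get(favorites, 'reading.books[0]', 'none'); // 'none'

R.path(['video', 'movies', 0], favorites); // 'Casablanca'
R.path(['reading', 'books', 0], favorites); // undefined
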
If you are already using a utility library, this is likely the simplest solution. If you are not using a utility library, read on!

Short-circuiting with &&

One interesting fact about logical operators in JavaScript is that they don’t always return a boolean. According to the spec, “the value produced by a && or || operator is not necessarily of type Boolean. The value produced will always be the value of one of the two operand expressions.”

In the case of the && operator, the first expression will be used if it is a “falsy” value. Otherwise, the second expression will be used. This means that the expression 0 && 1 will be evaluated as 0 (a falsy value), and the expression 2 && 3 will be evaluated as 3. If multiple && expressions are chained together, they will evaluate to either the first falsy value or the last value. For example, 1 && 2 && 3 && null && 4 will evaluate to null, and 1 && 2 && 3 will evaluate to 3.

How is this useful for safely accessing nested object properties? Logical operators in JavaScript will “short-circuit.” In the case of &&, this means that the expression will cease moving forward after it reaches its first falsy value.

const foo = false && destroyAllHumans();
console.log(foo); // false, and humanity is safe

In this example, destroyAllHumans is never called because the && operator stopped all evaluation after false.

This can be used to safely access nested properties.

const meals = {
  breakfast: null, // I skipped the most important meal of the day! 🙁
  lunch: {
    protein: 'Chicken',
    greens: 'Spinach',
  },
  dinner: {
    protein: 'Soy',
    greens: 'Kale',
  },
};

const breakfastProtein = meals.breakfast && meals.breakfast.protein; // null
const lunchProtein = meals.lunch && meals.lunch.protein; // 'Chicken'

Aside from its simplicity, one of the main advantages of this approach is its brevity when dealing with small chains. However, when accessing deeper objects, this can be quite verbose.

const favorites = {
  video: {
    movies: ['Casablanca', 'Citizen Kane', 'Gone With The Wind'],
    shows: ['The Simpsons', 'Arrested Development'],
    vlogs: null,
  },
  audio: {
    podcasts: ['Shop Talk Show', 'CodePen Radio'],
    audiobooks: null,
  },
  reading: null, // Just kidding -- I love to read
};

const favoriteMovie = favorites.video && favorites.video.movies && favorites.video.movies[0];
// 'Casablanca'
const favoriteVlog = favorites.video && favorites.video.vlogs && favorites.video.vlogs[0];
// null
​​// null

The more deeply nested an object is, the more unwieldy it gets.
The “Maybe Monad”

Oliver Steele came up with this method and goes through it in much more detail in his blog post, “Monads on the Cheap I: The Maybe Monad.” I will attempt to give a brief explanation here.

const favoriteBook = ((favorites.reading || {}).books || [])[0]; // undefined
const favoriteAudiobook = ((favorites.audio || {}).audiobooks || [])[0]; // undefined
const favoritePodcast = ((favorites.audio || {}).podcasts || [])[0]; // 'Shop Talk Show'

Similar to the short-circuit example above, this method works by checking if a value is falsy. If it is, it will attempt to access the next property on an empty object. In the example above, favorites.reading is null, so the books property is being accessed from an empty object. This results in undefined, so index 0 is likewise accessed from an empty array.

The advantage of this method over the && method is that it avoids repetition of property names. On deeper objects, this can be quite a significant advantage. The primary disadvantage would be readability — it is not a common pattern, and may take a reader a moment to parse out how it is working.
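
If this pattern shows up a lot, it can be wrapped in a small helper to help with readability. Here is a minimal sketch (getPath is a hypothetical name, not a built-in or library function):

const getPath = (object, keys) =>
  keys.reduce((acc, key) => (acc || {})[key], object);

getPath(favorites, ['audio', 'podcasts', 0]); // 'Shop Talk Show'
getPath(favorites, ['reading', 'books', 0]); // undefined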

try/catch

try...catch statements in JavaScript provide another way to safely access properties.

try {
  console.log(favorites.reading.magazines[0]);
} catch (error) {
  console.log("No magazines have been favorited.");
}

Unfortunately, in JavaScript, try...catch statements are not expressions. They do not evaluate to a value as they do in some languages. This prevents a concise try statement as a way of setting a variable.

One option is to use a let variable that is defined in the block above the try...catch.

let favoriteMagazine;
try {
  favoriteMagazine = favorites.reading.magazines[0];
} catch (error) {
  favoriteMagazine = null; /* any default can be used */
}

Although it’s verbose, this works for setting a single variable (that is, if the mutable variable doesn’t scare you off). However, issues can arise if they’re done in bulk.

let favoriteMagazine, favoriteMovie, favoriteShow;
try {
  favoriteMovie = favorites.video.movies[0];
  favoriteShow = favorites.video.shows[0];
  favoriteMagazine = favorites.reading.magazines[0];
} catch (error) {
  favoriteMagazine = null;
  favoriteMovie = null;
  favoriteShow = null;
}

console.log(favoriteMovie); // null
console.log(favoriteShow); // null
console.log(favoriteMagazine); // null

If any one of the attempts to access a property fails, all of them will fall back to their defaults.

An alternative is to wrap the try...catch in a reusable utility function.

const tryFn = (fn, fallback = null) => {
  try {
    return fn();
  } catch (error) {
    return fallback;
  }
};

const favoriteBook = tryFn(() => favorites.reading.book[0]); // null
const favoriteMovie = tryFn(() => favorites.video.movies[0]); // "Casablanca"

By wrapping the access to the object in a function, you can delay the “unsafe” code and pass it into a try...catch.

A major advantage of this method is how natural it is to access the property. As long as properties are wrapped in a function, they are safely accessed. A default value can also be specified in the case of a non-existent path.

Merge with a default object

By merging an object with a similarly shaped object of “defaults,” we can ensure that the path that we are trying to access is safe.

const defaults = {
  position: "static",
  background: "transparent",
  border: "none",
};

const settings = {
  border: "1px solid blue",
};

const merged = { ...defaults, ...settings };

console.log(merged);
/*
  {
    position: "static",
    background: "transparent",
    border: "1px solid blue"
  }
*/

Careful, though, because the entire nested object can be overwritten rather than a single property.

const defaults = {
  font: {
    family: "Helvetica",
    size: "12px",
    style: "normal",
  },
  color: "black",
};

const settings = {
  font: {
    size: "16px",
  }
};

const merged = {
  ...defaults,
  ...settings,
};

console.log(merged.font.size); // "16px"
console.log(merged.font.style); // undefined

Oh no! To fix this, we’ll need to similarly copy each of the nested objects.

const merged = {
  ...defaults,
  ...settings,
  font: {
    ...defaults.font,
    ...settings.font,
  },
};

console.log(merged.font.size); // "16px"
console.log(merged.font.style); // "normal"

Much better!

This pattern is common with plugins or components that accept a large settings object with included defaults.

A bonus about this approach is that, by writing a default object, we’re including documentation on how an object should look. Unfortunately, depending on the size and shape of the data, the “merging” can be littered with copying each nested object.
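
One way to cut down on that copying is a recursive merge. Here is a minimal sketch (deepMerge is a hypothetical helper that assumes plain, non-circular objects; a battle-tested option would be something like lodash’s merge):

const deepMerge = (defaults, overrides) => {
  const result = { ...defaults, ...overrides };
  Object.keys(result).forEach((key) => {
    const a = defaults[key];
    const b = overrides[key];
    // Recurse only when both sides are objects; otherwise the spread
    // above has already picked the override (or the default).
    if (a && b && typeof a === 'object' && typeof b === 'object') {
      result[key] = deepMerge(a, b);
    }
  });
  return result;
};

console.log(deepMerge(defaults, settings).font.style); // "normal"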

The future: optional chaining

There is currently a TC39 proposal for a feature called “optional chaining.” This new operator would look like this:

console.log(favorites?.video?.shows?.[0]); // 'The Simpsons'
console.log(favorites?.audio?.audiobooks?.[0]); // undefined

The ?. operator works by short-circuiting: if the left-hand side of the ?. operator evaluates to null or undefined, the entire expression will evaluate to undefined and the right-hand side will remain unevaluated. Note the bracket form ?.[0] used above, which is how optional chaining handles index access on a value that may be null, like audiobooks.

To have a custom default, we can use the || operator in the case of undefined.

console.log(favorites?.audio?.audiobooks?.[0] || "The Hobbit");


Which method should you use?

The answer, as you might have guessed, is that age-old answer… “it depends.” If the optional chaining operator has been added to the language and has the necessary browser support, it is likely the best option. If you are not from the future, however, there are more considerations to take into account. Are you using a utility library? How deeply nested is your object? Do you need to specify defaults? Different cases may warrant a different approach.


A Funny Thing Happened on the Way to the JavaScript

Around this time last year, I wrote an article about the JavaScript learning landscape. Within that article, you’ll find my grand plans to learn JavaScript — complete with a link to a CodePen Collection I started for tracking my progress, and it even got dozens of comments cheering me on.

Like most people, I was ambitious. It was a new year and I was excited to tackle a long-standing project. It was my development version of losing 30 pounds (which I also need to do). But, if you follow that link to the CodePen Collection, you’ll see that there’s nothing there. If you were to scour my hard drive or cloud storage, you’d see that there aren’t any JavaScript files or projects there, either.

Over the past year, I didn’t make any progress on one of my main goals. So, what the hell happened?

A Story as Old as Time

The internet is littered with similar tweets and blog posts. Inboxes are filled with TinyLetters of resolutions and there’s no shortage of YouTubers teaching anyone who will listen how to have their best year ever. But very few people follow through on their goals. This might be even more true in the design and development world, what with the plethora of new technologies, languages, libraries, and tools that hit the scene on a regular basis.

These stories all follow a similar path:

  1. Person determines major goal
  2. Person tells friends (or who knows how many CSS-Tricks visitors)
  3. Person gets distracted, overwhelmed, disinterested, or all three
  4. Goal is completely forgotten about after X amount of time
  5. Person apologizes and makes up excuses for friends (or, again, who knows how many CSS-Tricks visitors)

In my experience, it’s not the goal-setting or telling everyone about said goal that’s the problem. It’s step three above. When goals go off the rails, at least for me, it’s due to three main issues: distraction, stress, and lack of interest. Barring unforeseen life events, these three issues are responsible for all those unachieved goals that we struggle with.

In thinking about my goals for this year, I decided to start first with deconstructing why I couldn’t reach the one major goal I set for myself last year. So, let’s dig into those three issues and see if there’s a way to prevent any of them happening this time around.

Distraction

Distraction seems to be the big one here. We all have a lot going on. Between job and family responsibilities, other hobbies and hanging out with friends, it’s hard to fit in new projects. As necessary as they are, all those other interests and responsibilities are distractions when it comes to our goals.

The whole point of setting a goal is carving out time to work towards it. It’s about prioritizing the goal over other things. For me, I found myself letting all of those other distractions in life work their way into my day. It was all too easy to work through lunch instead of taking that time to tackle a chapter in a JavaScript book. I would get sucked into the latest Netflix series after the kids went to bed. I didn’t prioritize learning JavaScript and I had nothing to show for it at the end of the year.

Overcoming Distraction

The key here is to block out those distractions, which is easier said than done. We can’t simply ignore the needs of our families and careers, but we need to give ourselves time to focus without distractions. For me, I’m increasingly convinced that the solution is time blocking.

Time blocking is exactly what it sounds like: You block out specific periods of time on your calendar to focus on certain tasks. Time blocking allows you to prioritize what’s important. It doesn’t force you to sit down, crack open a book, or start coding, but it gives you the time to do it.

There are a ton of articles online that go into different time blocking methods, so it’s worth finding one that fits your style.

For me, I’m going to block out specific times throughout the week to focus on learning JavaScript in 2019. I’m trying to be realistic about how much time I can invest, weighing it against other obligations. Then I’m putting those time blocks on my shared family calendar to make it clear to everyone what I’m prioritizing. More importantly, I’m making it clear that this time is for focus, and to leave the other distractions at the door.

It can also be helpful to block smaller, but just as impactful, distractions on your phone and computer. Closing out browser tabs not related to your task, silencing notifications, and clearing your desk of otherwise distracting items should be part of the routine when you sit down to start working on your task. It’s easy to scroll through Twitter, Hacker News, or even CSS-Tricks and convince yourself that it’s time well spent (that last one usually is, though) but that time adds up and doesn’t always result in learning or growing your skills like you think it will. Cutting out those distractions and allowing yourself to focus on what you want to accomplish is a great way to, you know, actually accomplish your goals.

Stress

Last year’s post lays out a landscape full of interesting articles, books, podcasts, and courses. There is no lack of things to learn about and enough resources to keep anyone busy for way longer than just a year. And, when it comes to JavaScript, it seems like there’s always some new technique or framework that you need to learn.

Combine that with all of the ancillary topics you need to understand when learning JavaScript and you end up with one of those overwhelming developer roadmaps that Chris collected a while back.

I don’t care how smart you are, that’s intimidating as hell. Feeling overwhelmed on the web is commonplace. How do you think it feels as someone just starting out? Combine that with all the responsibilities and distractions from the last section and you have a killer recipe for burnout.

I had originally intended to work my way through Marijn Haverbeke’s Eloquent JavaScript as a first step towards learning the language. But I also mentioned all the podcasts, YouTube channels, and newsletters with which I was surrounding myself. The intention was to learn through immersion, but it quickly resulted in feeling stressed and overwhelmed. And when I felt overwhelmed, I quickly allowed all those distractions to pull my attention away from learning JavaScript.

Overcoming Stress

Just like when dealing with distraction, I think the key to dealing with stress is to focus on one or two things and cut out all the rest. Instead of fully immersing myself in the JavaScript world, I’m going to stick to just the book, work my way through that, and then find the next resource later down the road. I’m going to intentionally ignore as much of the JavaScript world as I can in order to get my bearings and only open myself up to the stress of the developer roadmap if, and when, I feel like I want to journey down that path.

Disinterest

Flipping through any programming book (at least for a beginner) causes most people’s eyes to glaze over. The code looks overly complex and it resembles a math textbook. I don’t know about you, but I hated math class and I found it hard to get excited about investing my free time in something that felt a lot like going back to high school.

But I know that learning JavaScript (and programming, in general) is a worthwhile pursuit and will let me tackle projects that I’ve long wanted to complete but haven’t had the chops to do. So, how can I get interested in what, at first glance, looks like such a boring task?

Overcoming Disinterest

I think the key here is to relate what I learn to some subject that I find fascinating.

I’ve been interested in data visualization for a long time. Blogs like Flowing Data are fascinating, and I’ve wanted to be able to create data visualizations of my own for years. And I know that JavaScript is increasingly a viable way to create those graphics. Tools like D3.js and p5.js are first-class frameworks for creating amazing visualizations — so why not learn the underlying language those tools use?

My plan to overcome disinterest is to work my way towards a project that I want to build. Go through all the basics, trudge through the muck, and then use the concepts learned along the way to understand more advanced tools, like D3.js.

Anytime you can align your learning to areas you find interesting, you’re more likely to be successful. I think that’s what was missing the first time around, so I’m setting up targets to aim for when learning JavaScript, things that will keep me interested enough to learn what I need to learn.

It’s a Hard Road

Learning is rarely easy. But, sometimes, it’s when it’s the hardest that it pays off the most.

I’m convinced that the more we can uncover our own mental roadblocks and deconstruct them, the better positioned we are to achieve our goals. For me, my mental roadblocks are distraction, stress, and disinterest. The three work together to keep me from my goals, but I’m putting plans into motion to overcome all three. Your roadblocks may differ, but you probably have ways of dealing with them, too.

I’d love to hear from everyone how they overcame their own challenges when learning a new skill. Leave a comment below telling me your story. Sharing it may help me, and others, finally achieve what we’ve always wanted, whether it’s learning JavaScript, digging into the latest framework, or running that marathon we’ve all been putting off for so long.


You might not need a loop

Ire Aderinokun has written a nifty piece on loops and when we might consider replacing them with another method, say .map() and .filter(). I particularly like what she has to say here:

As I mentioned earlier, loops are a great tool for a lot of cases, and the existence of these new methods doesn’t mean that loops shouldn’t be used at all.

I think these methods are great because they provide code that is in a way self-documenting. When we use the filter() method instead of a for loop, it is easier to understand at first glance what the purpose of the logic is.

However, these methods have very specific use cases and may be overkill if their full value isn’t being used. An example of this is the map() method, which can technically be used to replace almost any arbitrary loop. If in our first example, we only wanted to modify the original articles array and not create a new, modified, amazingArticles, using this method would be unnecessary. It’s important to use the method that suits each scenario, to make sure that we aren’t over- or under-performing.
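
To make that concrete, here is a rough sketch (with made-up data) of the same subset built both ways:

const articles = [
  { title: 'A Great Article', published: true },
  { title: 'A Rough Draft', published: false },
];

// With a loop
const published = [];
for (const article of articles) {
  if (article.published) {
    published.push(article);
  }
}

// With .filter(), the intent reads at a glance
const publishedArticles = articles.filter((article) => article.published);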

If you’re interested in digging more into this subject, Adan Giese wrote a great post about the .filter() method a short while ago that’s definitely worth checking out. Oh, and speaking of lots of different ways to approach loops, Chris compiled a list of options for looping over querySelectorAll NodeLists where forEach is just one of many options.



A Bunch of Options for Looping Over querySelectorAll NodeLists

A common need when writing vanilla JavaScript is to find a selection of elements in the DOM and loop over them. For example, finding instances of a button and attaching a click handler to them.

const buttons = document.querySelectorAll(".js-do-thing");
// There could be any number of these!
// I need to loop over them and attach a click handler.

There are SO MANY ways to go about it. Let’s go through them.

forEach

forEach is normally for arrays, and interestingly, what comes back from querySelectorAll is not an array but a NodeList. Fortunately, most modern browsers support using forEach on NodeLists anyway.

buttons.forEach((button) => {
  button.addEventListener('click', () => {
    console.log("forEach worked");
  });
});

If you’re worried that forEach might not work on your NodeList, you could spread it into an array first:

[...buttons].forEach((button) => {
  button.addEventListener('click', () => {
    console.log("spread forEach worked");
  });
});

But I’m not actually sure if that helps anything since it seems a bit unlikely there are browsers that support spreads but not forEach on NodeLists. Maybe it gets weird when transpiling gets involved, though I dunno. Either way, spreading is nice in case you want to use anything else array-specific, like .map(), .filter(), or .reduce().

A slightly older method is to jack into the array’s natural forEach with this little hack:

[].forEach.call(buttons, (button) => {
  button.addEventListener('click', () => {
    console.log("array forEach worked");
  });
});

Todd Motto once called out this method pretty hard though, so be advised. He recommended building your own method (updated for ES6):

const forEach = (array, callback, scope) => {
  for (var i = 0; i < array.length; i++) {
    callback.call(scope, i, array[i]);
  }
};

…which we would use like this:

forEach(buttons, (index, button) => {
  console.log("our own function worked");
});

for .. of

Browser support for for .. of loops looks pretty good and this seems like a super clean syntax to me:

for (const button of buttons) {
  button.addEventListener('click', () => {
    console.log("for .. of worked");
  });
}

Make an array right away

const buttons = Array.prototype.slice.apply(
  document.querySelectorAll(".js-do-thing")
);

Now you can use all the normal array functions.

buttons.forEach((button) => {
  console.log("apply worked");
});
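
Array.from() accomplishes the same thing a bit more directly, if your browser support allows for it:

const buttons = Array.from(document.querySelectorAll(".js-do-thing"));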

Old for loop

If you need maximum possible browser support, there is no shame in an ancient classic for loop:

for (let i = 0; i < buttons.length; ++i) {
  buttons[i].addEventListener('click', () => {
    console.log("for loop worked");
  });
}

Libraries

If you’re using jQuery, you don’t even have to bother….

$(".buttons").on("click", () => { console.log("jQuery works");
});

If you’re using a React/JSX setup, you don’t need to think about this kind of binding at all.

Lodash has a _.forEach as well, which presumably helps with older browsers.

_.forEach(buttons, (button, key) => {
  console.log("lodash worked");
});

Here’s a Pen with all these options in it.


Demystifying JavaScript Testing

Many people have messaged me, confused about where to get started with testing. Just like everything else in software, we work hard to build abstractions to make our jobs easier. But that amount of abstraction evolves over time, until the only ones who really understand it are the ones who built the abstraction in the first place. Everyone else is left with taking the terms, APIs, and tools at face value and struggling to make things work.

One thing I believe about abstraction in code is that the abstraction is not magic — it’s code. Another thing I believe about abstraction in code is that it’s easier to learn by doing.

Imagine that a less seasoned engineer approaches you. They’re hungry to learn, they want to be confident in their code, and they’re ready to start testing. 👍 Ever prepared to learn from you, they’ve written down a list of terms, APIs, and concepts they’d like you to define for them:

  • Assertion
  • Testing Framework
  • The describe/it/beforeEach/afterEach/test functions
  • Mocks/Stubs/Test Doubles/Spies
  • Unit/Integration/End to end/Functional/Accessibility/Acceptance/Manual testing

So…

Could you rattle off definitions for that budding engineer? Can you explain the difference between an assertion library and a testing framework? Or, are they easier for you to identify than explain?

Here’s the point. The better you understand these terms and abstractions, the more effective you will be at teaching them. And if you can teach them, you’ll be more effective at using them, too.

Enter a teach-an-engineer-to-fish moment. Did you know that you can write your own assertion library and testing framework? We often think of these abstractions as beyond our capabilities, but they’re not. Each of the popular assertion libraries and frameworks started with a single line of code, followed by another and then another. You don’t need any tools to write a simple test.

Here’s an example:

const {sum} = require('../math')
const result = sum(3, 7)
const expected = 10
if (result !== expected) {
  throw new Error(`${result} is not equal to ${expected}`)
}

Put that in a module called test.js and run it with node test.js and, poof, you can start getting confident that the sum function from the math.js module is working as expected. Make that run on CI and you can get the confidence that it won’t break as changes are made to the codebase. 🏆

Let’s see what a failure would look like with this:

Terminal window showing an error indicating -4 is not equal to 10.

So apparently our sum function is subtracting rather than adding and we’ve been able to automatically detect that through this script. All we need to do now is fix the sum function, run our test script again and:

Terminal window showing that we ran our test script and no errors were logged.

Fantastic! The script exited without an error, so we know that the sum function is working. This is the essence of a testing framework. There’s a lot more to it (e.g. nicer error messages, better assertions, etc.), but this is a good starting point to understand the foundations.
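
As a rough sketch of where that could head, here is a tiny expect-style assertion helper grown out of the same check (a simplified riff in the spirit of Jest’s API, not the real thing):

const expect = (actual) => ({
  toBe(expected) {
    if (actual !== expected) {
      throw new Error(`${actual} is not equal to ${expected}`)
    }
  },
})

const {sum} = require('../math')
expect(sum(3, 7)).toBe(10)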

Once you understand how the abstractions work at a fundamental level, you’ll probably want to use them because, hey, you just learned to fish and now you can go fishing. And we have some pretty phenomenal fishing poles, uh, tools available to us. My favorite is the Jest testing platform. It’s amazingly capable, fully featured and allows me to write tests that give me the confidence I need to not break things as I change code.

I feel like fundamentals are so important that I included an entire module about it on TestingJavaScript.com. This is the place where you can learn the smart, efficient way to test any JavaScript application. I’m really happy with what I’ve created for you. I think it’ll help accelerate your understanding of testing tools and abstractions by giving you the chance to implement parts from scratch. The (hopeful) result? You can start writing tests that are maintainable and built to instill confidence in your code day after day. 🎣

The early bird sale is going on right now! 40% off every tier! The sale is going away in the next few days so grab this ASAP!

TestingJavaScript.com – Learn the smart, efficient way to test any JavaScript application.

P.S. Give this a try: Tweet what’s the difference between a testing framework and an assertion library? In my course, I’ll not only explain it, we’ll build our own!


How to stop using console.log() and start using your browser’s debugger

Whenever I see someone really effectively debug JavaScript in the browser, they use the DevTools tooling to do it. Setting breakpoints and hopping over them and such. That, as opposed to sprinkling console.log() (and friends) statements all around your code.

Parag Zaveri wrote about the transition and it has clearly resonated with lots of folks! (7.5k claps on Medium as I write).

I know I have hangups about it…

  • Part of debugging is not just inspecting code once as-is; it’s inspecting stuff, making changes and then continuing to debug. If I spend a bunch of time setting up breakpoints, will they still be there after I’ve changed my code and refreshed? Answer: DevTools appears to do a pretty good job with that.
  • Looking at the console to see some output is one thing, but mucking about in the Sources panel is another. My code there might be transpiled, combined, and not quite look like my authored code, making things harder to find. Plus it’s a bit cramped in there, visually.

But yet! It’s so powerful. Setting a breakpoint (just by clicking a line number) means that I don’t have to litter my own code with extra junk, nor do I have to choose what to log. Every variable in local and global scope is available for me to look at that breakpoint. I learned in Parag’s article that you might not even need to manually set breakpoints. You can, for example, have it break whenever a click (or other) event fires. Plus, you can type in variable names you specifically want to watch for, so you don’t have to dig around looking for them. I’ll be trying to use the proper DevTools for debugging more often and seeing how it goes.

While we’re talking about debugging though… I’ve had this in my head for a few months: Why doesn’t JavaScript have log levels? Apparently, this is a very common concept in many other languages. You can write logging statements, but they will only log if the configuration says it should. That way, in development, you can get detailed logging, but log only more serious errors in production. I mention it because it could be nice to leave useful logging statements in the code, but not have them actually log if you set like console.level = "production"; or whatever. Or perhaps they could be compiled out during a build step.
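
As a rough sketch of the idea (createLogger and its level names are hypothetical, not a real console API):

const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

const createLogger = (level = 'info') => {
  const threshold = LEVELS[level];
  const logAt = (name, method) => (...args) => {
    // Only log when the call's level clears the configured threshold.
    if (LEVELS[name] >= threshold) {
      console[method](...args);
    }
  };
  return {
    debug: logAt('debug', 'log'),
    info: logAt('info', 'info'),
    warn: logAt('warn', 'warn'),
    error: logAt('error', 'error'),
  };
};

const logger = createLogger('error'); // e.g. in production
logger.debug('verbose details'); // silenced
logger.error('something broke'); // logged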



Why Using reduce() to Sequentially Resolve Promises Works

Writing asynchronous JavaScript without using the Promise object is a lot like baking a cake with your eyes closed. It can be done, but it’s gonna be messy and you’ll probably end up burning yourself.

I won’t say it’s necessary, but you get the idea. It’s real nice. Sometimes, though, it needs a little help to solve some unique challenges, like when you’re trying to sequentially resolve a bunch of promises in order, one after the other. A trick like this is handy, for example, when you’re doing some sort of batch processing via AJAX. You want the server to process a bunch of things, but not all at once, so you space the processing out over time.

Ruling out packages that help make this task easier (like Caolan McMahon’s async library), the most commonly suggested solution for sequentially resolving promises is to use Array.prototype.reduce(). You might’ve heard of this one. Take a collection of things, and reduce them to a single value, like this:

let result = [1,2,5].reduce((accumulator, item) => {
  return accumulator + item;
}, 0); // <-- Our initial value.

console.log(result); // 8

But, when using reduce() for our purposes, the setup looks more like this:

let userIDs = [1,2,3];

userIDs.reduce((previousPromise, nextID) => {
  return previousPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

Or, in a more modern format:

let userIDs = [1,2,3];

userIDs.reduce(async (previousPromise, nextID) => {
  await previousPromise;
  return methodThatReturnsAPromise(nextID);
}, Promise.resolve());

This is neat! But for the longest time, I just swallowed this solution and copied that chunk of code into my application because it “worked.” This post is me taking a stab at understanding two things:

  1. Why does this approach even work?
  2. Why can’t we use other Array methods to do the same thing?

Why does this even work?

Remember, the main purpose of reduce() is to “reduce” a bunch of things into one thing, and it does that by storing up the result in the accumulator as the loop runs. But that accumulator doesn’t have to be numeric. The loop can return whatever it wants (like a promise), and recycle that value through the callback every iteration. Notably, no matter what the accumulator value is, the loop itself never changes its behavior — including its pace of execution. It just keeps rolling through the collection as fast as the thread allows.

This is huge to understand because it probably goes against what you think is happening during this loop (at least, it did for me). When we use it to sequentially resolve promises, the reduce() loop isn’t actually slowing down at all. It’s completely synchronous, doing its normal thing as fast as it can, just like always.

Look at the following snippet and notice how the progress of the loop isn’t hindered at all by the promises returned in the callback.

function methodThatReturnsAPromise(nextID) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
      resolve();
    }, 1000);
  });
}

[1,2,3].reduce((accumulatorPromise, nextID) => {
  console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
  return accumulatorPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

In our console:

"Loop! 11:28:06"
"Loop! 11:28:06"
"Loop! 11:28:06"
"Resolve! 11:28:07"
"Resolve! 11:28:08"
"Resolve! 11:28:09"

The promises resolve in order as we expect, but the loop itself is quick, steady, and synchronous. After looking at the MDN polyfill for reduce(), this makes sense. There’s nothing asynchronous about a while() loop triggering the callback() over and over again, which is what’s happening under the hood:

while (k < len) {
  if (k in o) {
    value = callback(value, o[k], k, o);
  }
  k++;
}

With all that in mind, the real magic occurs in this piece right here:

return previousPromise.then(() => {
  return methodThatReturnsAPromise(nextID)
});

Each time our callback fires, we return a promise that resolves to another promise. And while reduce() doesn’t wait for any resolution to take place, the advantage it does provide is the ability to pass something back into the same callback after each run, a feature unique to reduce(). As a result, we’re able to build a chain of promises that resolve into more promises, making everything nice and sequential:

new Promise((resolve, reject) => {
  // Promise #1
  resolve();
}).then((result) => {
  // Promise #2
  return result;
}).then((result) => {
  // Promise #3
  return result;
}); // ... and so on!

All of this should also reveal why we can’t just return a single, new promise each iteration. Because the loop runs synchronously, each promise will be fired immediately, instead of waiting for those created before it.

[1,2,3].reduce((previousPromise, nextID) => {
  console.log(`Loop! ${dayjs().format('hh:mm:ss')}`);
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Resolve! ${dayjs().format('hh:mm:ss')}`);
      resolve(nextID);
    }, 1000);
  });
}, Promise.resolve());

In our console:

"Loop! 11:31:20"
"Loop! 11:31:20"
"Loop! 11:31:20"
"Resolve! 11:31:21"
"Resolve! 11:31:21"
"Resolve! 11:31:21"

Is it possible to wait until all processing is finished before doing something else? Yes. The synchronous nature of reduce() doesn’t mean you can’t throw a party after every item has been completely processed. Look:

function methodThatReturnsAPromise(id) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      console.log(`Processing ${id}`);
      resolve(id);
    }, 1000);
  });
}

let result = [1,2,3].reduce((accumulatorPromise, nextID) => {
  return accumulatorPromise.then(() => {
    return methodThatReturnsAPromise(nextID);
  });
}, Promise.resolve());

result.then(e => {
  console.log("Resolution is complete! Let's party.")
});

Since all we’re returning in our callback is a chained promise, that’s all we get when the loop is finished: a promise. After that, we can handle it however we want, even long after reduce() has run its course.

Why won’t any other Array methods work?

Remember, under the hood of reduce(), we’re not waiting for our callback to complete before moving onto the next item. It’s completely synchronous. The same goes for the other iteration methods, like map(), filter(), some(), and every().

But reduce() is special.

We found that the reason reduce() works for us is because we’re able to return something right back to our same callback (namely, a promise), which we can then build upon by having it resolve into another promise. With all of these other methods, however, we just can’t pass an argument to our callback that was returned from our callback. Instead, each of those callback arguments are predetermined, making it impossible for us to leverage them for something like sequential promise resolution.

[1,2,3].map((item, [index, array]) => [value]);
[1,2,3].filter((item, [index, array]) => [boolean]);
[1,2,3].some((item, [index, array]) => [boolean]);
[1,2,3].every((item, [index, array]) => [boolean]);

I hope this helps!

At the very least, I hope this helps shed some light on why reduce() is uniquely qualified to handle promises in this way, and maybe give you a better understanding of how common Array methods operate under the hood. Did I miss something? Get something wrong? Let me know!


Moving Backgrounds With Mouse Position

Let’s say you wanted to move the background-position on an element as you mouse over it to give the design a little pizzazz. You have an element like this:

<div class="module" id="module"></div>

And you toss a background on it:

.module {
  background-image: url(big-image.jpg);
}

You can adjust the background-position in JavaScript like this:

const el = document.querySelector("#module");

el.addEventListener("mousemove", (e) => {
  el.style.backgroundPositionX = -e.offsetX + "px";
  el.style.backgroundPositionY = -e.offsetY + "px";
});

See the Pen Move a background with mouse by Chris Coyier (@chriscoyier) on CodePen.

Or, you could update CSS custom properties in the JavaScript instead:

const el = document.querySelector("#module");

el.addEventListener("mousemove", (e) => {
  el.style.setProperty('--x', -e.offsetX + "px");
  el.style.setProperty('--y', -e.offsetY + "px");
});

.module {
  --x: 0px;
  --y: 0px;
  background-image: url(large-image.jpg);
  background-position: var(--x) var(--y);
}

See the Pen Move a background with mouse by Chris Coyier (@chriscoyier) on CodePen.

Here’s an example that moves the background directly in JavaScript, but with a transition applied so it slides to the new position rather than jerking around the first time you enter:

See the Pen Movable Background Ad by Chris Coyier (@chriscoyier) on CodePen.

Or, you could move an actual element instead (rather than the background-position). You’d do this if there is some kind of content or interactivity on the sliding element. Here’s an example of that, which sets CSS custom properties again, but then actually moves the element via a CSS translate() and a calc() to temper the speed.

See the Pen Hotjar Moving Heatmap Ad by Chris Coyier (@chriscoyier) on CodePen.
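
If you can’t open the Pen, here is a rough sketch of that element-moving variant (the .mover child element and the divide-by-2 damping are stand-ins for the demo’s specifics):

const wrapper = document.querySelector('#module');
const mover = wrapper.querySelector('.mover');

wrapper.addEventListener('mousemove', (e) => {
  // Dividing by 2 tempers the speed, like the calc() in the demo.
  mover.style.transform = `translate(${-e.offsetX / 2}px, ${-e.offsetY / 2}px)`;
});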

I’m sure there are loads of other ways to do this — a moving SVG viewBox, scripts controlling a canvas, WebGL… who knows! If you have some fancier ways to handle this, link ’em up in the comments.


New mobile Chrome feature would disable scripts on slow connections

This is a possible upcoming feature for mobile Chrome:

If a Data Saver user is on a 2G-speed or slower network according to the NetInfo API, Chrome disables scripts and sends an intervention header on every resource request. Users are shown a UI at the bottom of the screen indicating the page has been modified to save data. Users can enable scripts on the page by tapping “Show original” in the UI.

And the people shout: progressive enhancement!

Jeremy Keith:

An excellent idea for people in low-bandwidth situations: automatically disable JavaScript. As long as the site is built with progressive enhancement, there’s no problem (and if not, the user is presented with the choice to enable scripts).

Power to the people!

George Burduli reports:

This is huge news for developing countries where mobile data packets may cost a lot and are not affordable to all. Enabling NoScript by default will make sure that users don’t burn through their data without knowledge. The feature will probably be available in Chrome 69, which will also come with the new Material Design refresh.


Level up your .filter game

.filter is a built-in array iteration method that accepts a predicate which is called against each of its values, and returns a subset of all values that return a truthy value.

That is a lot to unpack in one statement! Let’s take a look at that statement piece-by-piece.

  • “Built-in” simply means that it is part of the language—you don’t need to add any libraries to get access to this functionality.
  • “Iteration methods” accept a function that is run against each item of the array. Both .map and .reduce are other examples of iteration methods.
  • A “predicate” is a function that returns a boolean.
  • A “truthy value” is any value that evaluates to true when coerced to a boolean. Almost all values are truthy, with the exceptions of: undefined, null, false, 0, NaN, or "" (empty string).

To see .filter in action, let’s take a look at this array of restaurants.

const restaurants = [
  { name: "Dan's Hamburgers", price: 'cheap', cuisine: 'burger' },
  { name: "Austin's Pizza", price: 'cheap', cuisine: 'pizza' },
  { name: "Via 313", price: 'moderate', cuisine: 'pizza' },
  { name: "Bufalina", price: 'expensive', cuisine: 'pizza' },
  { name: "P. Terry's", price: 'cheap', cuisine: 'burger' },
  { name: "Hopdoddy", price: 'expensive', cuisine: 'burger' },
  { name: "Whataburger", price: 'moderate', cuisine: 'burger' },
  { name: "Chuy's", cuisine: 'tex-mex', price: 'moderate' },
  { name: "Taquerias Arandina", cuisine: 'tex-mex', price: 'cheap' },
  { name: "El Alma", cuisine: 'tex-mex', price: 'expensive' },
  { name: "Maudie's", cuisine: 'tex-mex', price: 'moderate' },
];

That’s a lot of information. I’m currently in the mood for a burger, so let’s filter that array down a bit.

const isBurger = ({cuisine}) => cuisine === 'burger';
const burgerJoints = restaurants.filter(isBurger);

isBurger is the predicate, and burgerJoints is a new array which is a subset of restaurants. It is important to note that restaurants remained unchanged from the .filter.

Here is a simple example of two lists being rendered—one of the original restaurants array, and one of the filtered burgerJoints array.

See the Pen .filter – isBurger by Adam Giese (@AdamGiese) on CodePen.

Negating Predicates

For every predicate there is an equal and opposite negated predicate.

A predicate is a function that returns a boolean. Since there are only two possible boolean values, that means it is easy to “flip” the value of a predicate.

A few hours have passed since I’ve eaten my burger, and now I’m hungry again. This time, I want to filter out burgers to try something new. One option is to write a new isNotBurger predicate from scratch.

const isBurger = ({cuisine}) => cuisine === 'burger';
const isNotBurger = ({cuisine}) => cuisine !== 'burger';

However, look at the amount of similarities between the two predicates. This is not very DRY code. Another option is to call the isBurger predicate and flip the result.

const isBurger = ({cuisine}) => cuisine === 'burger';
const isNotBurger = restaurant => !isBurger(restaurant);

This is better! If the definition of a burger changes, you will only need to change the logic in one place. However, what if we have a number of predicates that we’d like to negate? Since this is something that we’d likely want to do often, it may be a good idea to write a negate function.

const negate = predicate => function() {
  return !predicate.apply(null, arguments);
};

const isBurger = ({cuisine}) => cuisine === 'burger';
const isNotBurger = negate(isBurger);

const isPizza = ({cuisine}) => cuisine === 'pizza';
const isNotPizza = negate(isPizza);

You may have some questions.

What is .apply?

MDN:

apply() method calls a function with a given this value, and arguments provided as an array (or an array-like object).

What is arguments?

MDN:

The arguments object is a local variable available within all (non-arrow) functions. You can refer to a function’s arguments within the function by using the arguments object.

Why return an old-school function instead of a newer, cooler arrow function?

In this case, returning a traditional function is necessary because the arguments object is only available on traditional functions.

Returning Predicates

As we saw with our negate function, it is easy for a function to return a new function in JavaScript. This can be useful for writing “predicate creators.” For example, let’s look back at our isBurger and isPizza predicates.

const isBurger = ({cuisine}) => cuisine === 'burger';
const isPizza = ({cuisine}) => cuisine === 'pizza';

These two predicates share the same logic; they only differ in comparisons. So, we can wrap the shared logic in an isCuisine function.

const isCuisine = comparison => ({cuisine}) => cuisine === comparison;
const isBurger = isCuisine('burger');
const isPizza = isCuisine('pizza');

This is great! Now, what if we want to start checking the price?

const isPrice = comparison => ({price}) => price === comparison;
const isCheap = isPrice('cheap');
const isExpensive = isPrice('expensive');

Now the isCheap and isExpensive are DRY, and isPizza and isBurger are DRY—but isPrice and isCuisine share their logic! Luckily, there are no rules for how many functions deep you can return.

const isKeyEqualToValue = key => value => object => object[key] === value;

// these can be rewritten
const isCuisine = isKeyEqualToValue('cuisine');
const isPrice = isKeyEqualToValue('price');

// these don't need to change
const isBurger = isCuisine('burger');
const isPizza = isCuisine('pizza');
const isCheap = isPrice('cheap');
const isExpensive = isPrice('expensive');

This, to me, is the beauty of arrow functions. In a single line, you can elegantly create a third-order function. isKeyEqualToValue is a function that returns the function isPrice which returns the function isCheap.

See how easy it is to create multiple filtered lists from the original restaurants array?

See the Pen .filter – returning predicates by Adam Giese (@AdamGiese) on CodePen.

Composing Predicates

We can now filter our array by burgers or by a cheap price… but what if you want cheap burgers? One option is to simply chain two filters together.

const cheapBurgers = restaurants.filter(isCheap).filter(isBurger);

Another option is to “compose” the two predicates into a single one.

const isCheapBurger = restaurant => isCheap(restaurant) && isBurger(restaurant);
const isCheapPizza = restaurant => isCheap(restaurant) && isPizza(restaurant);

Look at all of that repeated code. We can definitely wrap this into a new function!

const both = (predicate1, predicate2) => value =>
  predicate1(value) && predicate2(value);

const isCheapBurger = both(isCheap, isBurger);
const isCheapPizza = both(isCheap, isPizza);

const cheapBurgers = restaurants.filter(isCheapBurger);
const cheapPizza = restaurants.filter(isCheapPizza);

What if you are fine with either pizza or burgers?

const either = (predicate1, predicate2) => value =>
  predicate1(value) || predicate2(value);

const isDelicious = either(isBurger, isPizza);
const deliciousFood = restaurants.filter(isDelicious);

This is a step in the right direction, but what if you have more than two foods you’d like to include? This isn’t a very scalable approach. There are two built-in array methods that come in handy here. .every and .some both accept a predicate and return a boolean. .every checks if each member of an array passes a predicate, while .some checks to see if any member of an array passes a predicate.

const isDelicious = restaurant =>
  [isPizza, isBurger, isBbq].some(predicate => predicate(restaurant));

const isCheapAndDelicious = restaurant =>
  [isDelicious, isCheap].every(predicate => predicate(restaurant));

And, as always, let’s wrap them up into some useful abstraction.

const isEvery = predicates => value =>
  predicates.every(predicate => predicate(value));

const isAny = predicates => value =>
  predicates.some(predicate => predicate(value));

const isDelicious = isAny([isBurger, isPizza, isBbq]);
const isCheapAndDelicious = isEvery([isCheap, isDelicious]);

isEvery and isAny both accept an array of predicates and return a single predicate.

Since all of these predicates are easily created by higher order functions, it isn’t too difficult to create and apply these predicates based on a user’s interaction. Taking all of the lessons we have learned, here is an example of an app that searches restaurants by applying filters based on button clicks.

See the Pen .filter – dynamic filters by Adam Giese (@AdamGiese) on CodePen.
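
The wiring in that demo boils down to something like this sketch (activeFilters stands in for whatever predicates the clicked buttons have toggled on):

const activeFilters = [isCheap, isBurger];
const visibleRestaurants = restaurants.filter(isEvery(activeFilters));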

Wrapping up

Filters are an essential part of JavaScript development. Whether you’re sorting out bad data from an API response or responding to user interactions, there are countless times when you would want a subset of an array’s values. I hope this overview helped with ways that you can manipulate predicates to write more readable and maintainable code.
