The Low Hanging Fruit of Web Performance

I kicked off a really poppin’ Twitter thread the other day.

So, I decided to round up all the ideas (both my own and yours) around that in a post over on the Media Temple blog.

These are the things we dive into in that post:

  1. Reduce Requests
  2. Optimize Assets
  3. Make sure you’re gzipping
  4. Make sure you’re browser caching
  5. Use a CDN
  6. Lazy Load and Defer Loading of Things
  7. Use responsive images (or at least use reasonable sizes)
  8. Mind Your Fonts
  9. Good Hosting / HTTP2 / PHP7
  10. Turbolinks



Browser painting and considerations for web performance

The process of a web browser turning HTML, CSS, and JavaScript into a finished visual representation is quite complex and involves a good bit of magic. Here’s a simplified set of steps the browser goes through:

  1. Browser creates the DOM and CSSOM.
  2. Browser creates the render tree, where the DOM and styles from the CSSOM are taken into account (display: none elements are avoided).
  3. Browser computes the geometry of the layout and its elements based on the render tree.
  4. Browser paints pixel by pixel to create the visual representation we see on the screen.

In this article, I’d like to focus on the last part: painting.

All of those steps combined are a lot of work for a browser to do on load… and actually, not just on load, but any time the DOM (or CSSOM) changes. That’s why many web developers tend to partially solve this with some sort of front-end framework, such as React, which, among its other advantages, can help optimize DOM changes to avoid unnecessary recalculation or re-rendering.

You may have heard terms such as state, component rendering, or immutability. All of those have something to do with optimizing DOM changes, or in other words, only making changes to the DOM when necessary.

To give an example, the state of a web application may change, and that would lead to a change in UI. However, certain (or many) components are not affected by this change. What React helps to do is limit the writing to the DOM for elements that are actually affected by a change in state and ultimately limit the rendering to the smallest part of the web application possible:

DOM/CSSOM → render tree → layout → painting

However, browser painting is special in its own way, as it can happen even without any changes to the DOM and/or CSSOM.

Example of page performance summary

The diagram above was generated using Chrome’s Performance panel in DevTools (more on that later) and shows how much time each task took in the browser during the recorded period (0-7.12s) after reloading a page. As you can see, painting takes up a significant share, and that’s not automatically a bad thing. In this particular example, the increased painting is caused by a combination of animated GIFs on the page and canvas drawing (at 60fps), neither of which changes the DOM or its styles, yet both still trigger painting.

Another good example of a feature that can cause painting without any outside intervention is the CSS animation property, which, compared to animated GIFs or canvas, is probably more common on the web. An animation is usually triggered by user input, like hover, but thanks to the animation and @keyframes rules, we can create quite complex animations that run constantly on the page without much effort, which is pretty amazing.

What some might not realize is that those animations can easily get out of hand, constantly triggering painting, and that can cost us a lot of processing power. Of course, there are some rules that can be used to avoid painting. The most obvious is limiting manipulation of elements to the CSS transform and opacity properties, which by default don’t trigger paint, unless some special circumstances are in place, such as animating an SVG path.
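To make that concrete, here’s a minimal sketch (the class name and timing are made up) of an animation that sticks to transform and opacity, which browsers can usually handle on the compositor without repainting:

/* A hypothetical pulsing badge: only transform and opacity change,
   so the browser can typically composite it without repainting. */
.badge {
  animation: pulse 2s ease-in-out infinite;
}

@keyframes pulse {
  0%, 100% { transform: scale(1); opacity: 1; }
  50% { transform: scale(1.1); opacity: 0.6; }
}

Animating something like width, top, or background-position instead would force layout and/or paint on every frame.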

Paint flashing

You likely know that Chrome has DevTools. What you might not know about is a little shortcut (Shift+Cmd+P on Mac or Control+Shift+P on PC) which can be used inside DevTools to bring up a little search bar and command menu.

Command Menu

I started digging around in it and, apart from many other useful and incredibly interesting options, the render panel caught my attention.

Render Panel

At first sight, you can see some interesting options that can be very helpful when it comes to debugging animation on the web. With paint flashing turned on, I checked a project where most of the animation is done with the transform property, which, as we covered, doesn’t cause painting. Painting was present where one would expect it to be, like changes in text color on hover, but that’s not much of a concern given its small area and the fact that it only appears on hover of the element. To sum it up, you can always find something to improve, even if you wrote the code yesterday…

But one thing was a slap in the face.

It doesn’t matter how experienced or careful you are, you can — and most likely will — make a mistake. We’re just people and some would argue that fixing your own bugs is most of the job when it comes to development. However, for a bug to be fixable, we need to be aware of it… and that’s exactly where the render panel helps.

Case study

Let’s take a closer look at the actual issue. The design came in with a request for a noisy background, the kind of effect old TVs had when there was no signal.

GIFs are known to have many issues, performance certainly being one of them, so I definitely couldn’t use one for a whole-page background. If you’d like to read more on why to avoid GIFs, here is a good resource with a bunch of reasons.

Using JavaScript is definitely an option in this case. Displaying or hiding elements with a slightly moved background was the first thing that came to my mind, and using canvas could help too. However, all of this seemed a little overkill for simply having a background. I decided to go for a CSS-only approach.

My solution was to take a small “noisy” PNG image as a background-image, enable background-repeat, and throw it over a one-color background. How did I achieve the noise effect? With an infinite CSS animation, setting background-position to a different value over a period of 200 milliseconds. Here’s how that turned out:

See the Pen MXoddr by Georgy Marchuk (@gmrchk) on CodePen.

Can you guess the problem? It seemed like quite an elegant solution to me, and I was excited that I’d made it through without a crappy GIF and without a single line of JavaScript. Just simple CSS, which is well optimized in browsers these days.

Well, paint flashing showed something completely different. A layer the size of the window was constantly repainting, without the user even doing anything. You can see the paint flashing in the demo above if you enable it in the render panel (note that paint flashing doesn’t show up in the embedded Pen).

Without paint flashing (left) vs. with paint flashing (right)

That certainly doesn’t play well with the performance of the website and drains laptop batteries like there’s no tomorrow.

CPU usage for the animation done with background-position (top) and transform (bottom)

All of this CPU usage could have been avoided by replacing the changes to background-position with changes to transform or opacity.

See the Pen XYOYGm by Georgy Marchuk (@gmrchk) on CodePen.
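The embedded Pen isn’t visible here, but the gist of the fix is to move an oversized noise layer around with transform instead of animating background-position. A rough sketch of the idea (the selector, image path, and offsets are assumptions, not the exact code from the demo):

/* An oversized noise layer nudged around with transform,
   which the browser can composite instead of repainting every frame. */
.noise::before {
  content: "";
  position: fixed;
  top: -50%;
  left: -50%;
  width: 200%;
  height: 200%;
  background: url("noise.png"); /* small repeating "noise" tile */
  animation: jitter 0.2s steps(1) infinite;
}

@keyframes jitter {
  0% { transform: translate(0, 0); }
  25% { transform: translate(-5%, 3%); }
  50% { transform: translate(4%, -6%); }
  75% { transform: translate(-3%, -2%); }
  100% { transform: translate(0, 0); }
}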

The problem

I’ve been doing web development for a while and I knew very well that animating a background is never a good idea. This felt like a rookie mistake. People make mistakes… but that’s not the whole story. The website was all laggy and uncomfortable to navigate. How did I miss it?

Something that certainly plays a big role is the fact that I am (and you may be as well) a little spoiled when it comes to development equipment. I have a nice, powerful computer for work and access to speedy internet. Unless we write some really crappy code, anything we write runs quite smoothly in our eyes. But that’s not always the case for our users.

A similar problem applies to many other things — like display size. To exaggerate a little: while we develop on a 27” 4K display and get designs made primarily for 1920×1080, our visitors mainly come in on 1366×768 laptops and have a completely different workflow when it comes to using a computer.

Conclusion

While this article started off as a piece about painting, its main topic is really much more about being mindful of the impact our code has on the painting process or performance in general. While painting serves as a good example of something that can be problematic and easily missed, it’s more of a disconnect between developer and user that is the issue.

The web is a place of many environments, and the developer’s environment is often far different from the user’s. While there is no need to change our ways or switch to slower computers, it definitely helps to see our work the way others see it from time to time. My suggestion: when you come home from work and have a little free time, try pulling out your old computer and checking your work there, to get a little closer to what your users experience.

If you don’t have that kind of computer around, tools like the render panel can turn out to be awfully handy.


Slow Websites

The web has grown bigger. Both in expansiveness and weight. Nick Heer’s “The Bullshit Web”:

The average internet connection in the United States is about six times as fast as it was just ten years ago, but instead of making it faster to browse the same types of websites, we’re simply occupying that extra bandwidth with more stuff.

Nick clearly explains what he means by bullshit, and one can see a connection to Brad Frost’s similarly framed argument. Nick talks about how each incremental interaction is a choice and connects the cruft of the web to the rise and adoption of frameworks like AMP.

Ethan Marcotte paints things in a different light by looking at business incentive:

…ultimately, the web’s performance problem is a problem of profitability. If we’re going to talk about bloated pages, we should do so in context: in the context of a web where digital advertising revenue is cratering for publishers, but is positively flourishing for Facebook and Google. We should look at the underlying structural issues that incentivize a company to include heavy advertising scripts and pesky overlays, or examine the market challenges that force a publisher to adopt something like AMP.

In other words, the way we talk about slow websites needs to be much, much broader. If we can do that, then we’ll have a sharper understanding of where—and how—the web can be faster.

It’s a systemic, state-of-the-industry problem that breeds slow websites. The cultural fight to fix it is perhaps just as important as the technical fights. Not that there isn’t a lot to learn and deal with on a technical level.

Addy Osmani wrote up a deep dive (a 20-minute read, according to Medium) that explores the cost of JavaScript to overall web performance. Everyone seems to agree JavaScript is the biggest problem area for slow websites. It’s not preachy but rather a set of well-explained principles to follow in this era where the use of JavaScript is trending up:

  • To stay fast, only load JavaScript needed for the current page.
  • Embrace performance budgets and learn to live within them.
  • Learn how to audit and trim your JavaScript bundles.
  • Every interaction is the start of a new ‘Time-to-Interactive’; consider optimizations in this context.
  • If client-side JavaScript isn’t benefiting the user experience, ask yourself if it’s really necessary.


The web can be anything we want it to be

I really enjoyed this chat between Bruce Lawson and Mustafa Kurtuldu where they talked about browser support and the health of the web. Bruce expands upon a lot of the thoughts in a post he wrote last year called World Wide Web, Not Wealthy Western Web where he writes:

…across the world, regardless of disposable income, regardless of hardware or network speed, people want to consume the same kinds of goods and services. And if your websites are made for the whole world, not just the wealthy Western world, then the next 4 billion people might consume the stuff that your organization makes.

Another highlight is where Bruce also mentions that, as web developers, we might think that we’ve all moved on from jQuery as a community, and yet there are still millions of websites that depend upon jQuery to function properly. It’s an interesting anecdote and relevant to recent discussions about React making a run at being the next thing to replace jQuery.

However! The most interesting part of this particular discussion, for me at least, is where they talk about Flash and the impact it had on the design of CSS3 and HTML5. They both argue that despite Flash’s shortcomings and accessibility issues, it happened to show us all that the web can be much more than just a place to store some hypertext and that ultimately it can be anything we want it to be.



Hey hey `font-display`

Y’all know about font-display? It’s pretty great. It’s a CSS property that you can use within @font-face blocks to control how, visually, that font loads. Font loading is really pretty damn complicated. Here’s a guide from Zach Leatherman to prove it, which includes over 10 font loading strategies, including strategies that involve critical inline CSS of subsets of fonts combined with loading the rest of the fonts later through JavaScript. It ain’t no walk in the park.

Using font-display is kinda like a walk in the park though. It’s just a single line of CSS. It doesn’t solve everything that Zach’s more exotic demos do, but it can go a long way with that one line. It’s notable to bring up right now, as support has improved a lot lately. It’s now in Firefox 58+, Chrome 60+, Safari 11.1+, iOS 11.3+, and Chrome on Android 64+. Pretty good.
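For reference, here’s what that single line looks like in context (the font name and file paths are placeholders):

@font-face {
  font-family: "My Custom Font";
  src: url("/fonts/my-custom-font.woff2") format("woff2"),
       url("/fonts/my-custom-font.woff") format("woff");
  /* show a fallback font right away, swap in the custom font once it loads */
  font-display: swap;
}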

What do you get from it? The ability to control how your text displays while custom fonts load. Some further reading:

  • font-display for the Masses by Jeremy Wagner
  • If you really dislike FOUT, font-display: optional might be your jam by me

Reminder:

    FOUT = Flash of Unstyled Text
    FOIT = Flash of Invisible Text

    Neither is great. In a perfect world, our custom fonts just show up immediately. But since that’s not a practical possibility, we pick based on our priorities.

    The best resource out there about it is Monica Dinculescu’s explainer page:

I’d summarize those value choices like this:

    • If you’re OK with FOUT, you’re probably best off with font-display: swap; which will display a fallback font fairly fast, but swap in your custom font when it loads.
    • If you’re OK with FOIT, you’re probably best off with font-display: block; which is fairly similar to current browser behavior, where it shows nothing as it waits for the custom font, but will eventually fall back.
    • If you only want the custom font to show at all if it’s there immediately, font-display: optional; is what you want. It’ll still load in the background and be there next page load probably.

    Those are some pretty decent options for a single line of CSS. But again, remember if you’re running a major text-heavy site with custom fonts, Zach’s guide can help you do more.

I’d almost go out on a limb and say: every @font-face block out there should have a font-display property, with the only caveat being that you’re doing something exotic and, for some reason, want the browser’s default behavior.

    Wanna hear something quite unfortunate? We already mentioned font-display: block;. Wouldn’t you think it, uh, well, blocked the rendering of text until the custom font loads? It doesn’t. It’s still got a swap period. It would be the perfect thing for something like icon fonts where the icon (probably) has no meaning unless the custom font loads. Alas, there is no font-display solution for that.

    And, hey gosh, wouldn’t it be nice if Google Fonts allowed us to use it?


    Monitoring unused CSS by unleashing the raw power of the DevTools Protocol

    From Johnny’s dev blog:

    The challenge: Calculate the real percentage of unused CSS

    Our goal is to create a script that will measure the percentage of unused CSS of this page. Notice that the user can interact with the page and navigate using the different tabs.

    DevTools can be used to measure the amount of unused CSS in the page using the Coverage tab. Notice that the percentage of unused CSS after the page loads is ~55%, but after clicking on each of the tabs, more CSS rules are applied and the percentage drops down to just ~15%.

    That’s why I’m so skeptical of anything that attempts to measure “unused CSS.” This is an incredibly simple demo (all it does is click some tabs) and the amount of unused CSS changes dramatically.

    If you are looking for accurate data on how much unused CSS is in your codebase, in an automated fashion, you’ll need to visit every single URL on your site and trigger every possible event on every element and continue doing that until things stop changing. Then do that for every possible state a user could be in—in every possible browser.

    Here’s another incredibly exotic way I’ve heard of it being done:

    1. Wait a random amount of time after the page loads
    2. Loop through all the selectors in the CSSOM
    3. Put a querySelector on them and see if it finds anything or not
    4. Report those findings back to a central database
    5. Run this for enough time on a random set of visitors (or all visitors) that you’re certain is a solid amount of data representing everywhere on your site
    6. Take your set of selectors that never matched anything and add a tiny 1px transparent GIF background image to them
    7. Run that modified CSS for an equal amount of time
    8. Check your server logs to make sure those images were never requested. If they were, you were wrong about that selector being unused, so remove it from the list
9. At the end of all that, you have a set of selectors in your CSS that are very likely to be unused.
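For illustration, here’s a rough sketch of the selector-matching part of that idea: loop over the CSSOM and check each selector with querySelector. The reporting endpoint is hypothetical, and cross-origin stylesheets and nested rules are skipped for simplicity.

// Collect selectors that don't currently match anything on the page.
function findUnmatchedSelectors() {
  const unmatched = [];
  for (const sheet of Array.from(document.styleSheets)) {
    let rules;
    try {
      rules = Array.from(sheet.cssRules); // throws for cross-origin stylesheets
    } catch (e) {
      continue;
    }
    for (const rule of rules) {
      if (!(rule instanceof CSSStyleRule)) continue; // skip @media, @keyframes, etc.
      for (const selector of rule.selectorText.split(',')) {
        try {
          if (!document.querySelector(selector)) unmatched.push(selector.trim());
        } catch (e) {
          // invalid or unsupported selectors throw; ignore them
        }
      }
    }
  }
  return unmatched;
}

// Wait a random amount of time after load, then report the findings
// to a hypothetical collection endpoint.
addEventListener('load', () => {
  setTimeout(() => {
    navigator.sendBeacon('/css-usage', JSON.stringify(findUnmatchedSelectors()));
  }, Math.random() * 10000);
});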

    Clever, but highly unlikely that anyone is using either of these methods in a consistent and useful way.

    I’m a little scared for tools like Lighthouse that claim to audit your unused CSS telling you to “remove unused rules from stylesheets to reduce unnecessary bytes consumed by network activity.” The chances seem dangerously high that someone runs this, finds this so-called unused CSS and deletes it only to discover it wasn’t really unused.




    Front-End Performance Checklist

    Vitaly Friedman swings wide with a massive list of performance considerations. It’s a well-considered mix of old tactics (cutting the mustard, progressive enhancement, etc.) and newer considerations (tree shaking, prefetching, etc.). I like the inclusion of a quick wins section since so much can be done for little effort; it’s important to do those things before getting buried in more difficult performance tasks.

Speaking of considering performance, Philip Walton recently dug into what interactive actually means, in a world where we throw around acronyms like TTI.




    Breaking Down the Performance API

JavaScript’s Performance API is worth knowing because it hands over tools to accurately measure the performance of web pages, something we have long been measuring in other ways that never became easy or precise enough.

    That said, it isn’t as easy to get started with the API as it is to actually use it. Although I’ve seen extensions of it covered here and there in other posts, the big picture that ties everything together is hard to find.

One look at any document explaining the global performance interface (the access point for the Performance API) and you’ll be bombarded with a slew of other specifications, including the High Resolution Time API, the Performance Timeline API, and the Navigation Timing API, among what feels like many, many others. It’s enough to make it more than a little confusing what exactly the API is measuring and, more importantly, easy to overlook the specific goodies we get with it.

    Here’s an illustration of how all these pieces fit together. This can be super confusing, so having a visual can help clarify what we’re talking about.

    The Performance API includes the Performance Timeline API and, together, they constitute a wide range of methods that fetch useful metrics on Web page performance.

    Let’s dig in, shall we?

    High Resolution Time API

    The performance interface is a part of the High Resolution Time API.

    “What is High Resolution Time?” you might ask. That’s a key concept we can’t overlook.

    A time based on the Date is accurate to the millisecond. A high resolution time, on the other hand, is precise up to fractions of milliseconds. That’s pretty darn precise, making it more ideal for yielding accurate measurements of time.

It’s worth pointing out that a high resolution time measured by the user agent (UA) doesn’t change with any changes in system time because it is taken from a global, monotonically increasing clock created by the UA. The time always increases and can’t be forced backwards. That becomes a useful constraint for time measurement.
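A quick way to see the difference in your console (the printed values are just examples):

// Date is a wall-clock time in whole milliseconds; it can jump if the system clock changes.
console.log(Date.now()); // e.g. 1534241823345

// performance.now() is a high resolution time in fractional milliseconds,
// measured from the page's time origin by a monotonic clock.
console.log(performance.now()); // e.g. 1023.1149999992707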

    Every time measurement measured in the Performance API is a high resolution time. Not only does that make it a super precise way to measure performance but it’s also what makes the API a part of the High Resolution Time API and why we see the two often mentioned together.

    Performance Timeline API

    The Performance Timeline API is an extension of the Performance API. That means that where the Performance API is part of the High Resolution Time API, the Performance Timeline API is part of the Performance API.

    Or, to put it more succinctly:

High Resolution Time API
└── Performance API
    └── Performance Timeline API

The Performance Timeline API gives us access to almost all of the measurements and values we can possibly get from the whole of the Performance API itself. That’s a lot of information at our fingertips with a single API, and it’s why the diagram at the start of this article shows them nearly on the same plane as one another.

    There are many extensions of the Performance API. Each one returns performance-related entries and all of them can be accessed and even filtered through Performance Timeline, making this a must-learn API for anyone who wants to get started with performance measurements. They are so closely related and complementary that it makes sense to be familiar with both.

    The following are three methods of the Performance Timeline API that are included in the performance interface:

    • getEntries()
    • getEntriesByName()
    • getEntriesByType()

    Each method returns a list of (optionally filtered) performance entries gathered from all of the other extensions of the Performance API and we’ll get more acquainted with them as we go.

    Another key interface included in the API is PerformanceObserver. It watches for a new entry in a given list of performance entries, and notifies of the same. Pretty handy for real-time monitoring!
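Here’s a minimal sketch of it in action, logging paint and resource entries as the browser records them:

// Log paint and resource entries as they are recorded.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.entryType}: ${entry.name} @ ${entry.startTime.toFixed(1)}ms`);
  }
});

observer.observe({ entryTypes: ['paint', 'resource'] });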

    The Performance Entries

    The things we measure with the Performance API are referred to as “entries” and they all offer a lot of insight into Web performance.

    Curious what they are? MDN has a full list that will likely get updated as new items are released, but this is what we currently have:

• frame (Frame Timing API): Measures frames, which represent a loop of the amount of work a browser needs to do to process things like DOM events, resizing, scrolling and CSS animations.
• mark (User Timing API): Creates a timestamp in the performance timeline that provides values for a name, start time and duration.
• measure (User Timing API): Similar to mark in that it lives on the timeline, but instead of a single point it measures the time between two marks, under a name you give it.
• navigation (Navigation Timing API): Provides context for the load operation, such as the types of events that occur.
• paint (Paint Timing API): Reports moments when pixels are rendered on the screen, such as the first paint, first paint with content, the start time and total duration.
• resource (Resource Timing API): Measures the latency of dependencies for rendering the screen, like images, scripts and stylesheets. This is where caching makes a difference!

Let’s look at a few examples that illustrate how each API looks in use. To learn more in depth about them, you can check out the specifications linked up above. The Frame Timing API is still in the works.

    Paint Timing API, conveniently, has already been covered thoroughly on CSS-Tricks, but here’s an example of pulling the timestamp for when painting begins:

// Time when the page began to render
console.log(performance.getEntriesByType('paint')[0].startTime)

    The User Timing API can measure the performance for developer scripts. For example, say you have code that validates an uploaded file. We can measure how long that takes to execute:

// Time to console-print "hello"
// We could also make use of "performance.measure()" to measure the time
// instead of calculating the difference between the marks in the last line.
performance.mark('hello-start')
console.log('hello')
performance.mark('hello-end')
var start = performance.getEntriesByName('hello-start')[0]
var end = performance.getEntriesByName('hello-end')[0]
console.info(`Time taken to say hello: ${end.startTime - start.startTime}`)

The Navigation Timing API shows metrics for loading the current page, including metrics from when the previous page was unloaded. We can measure, with a ton of precision, exactly how long the current page takes to load:

    // Time to complete DOM content loaded event
    var navEntry = performance.getEntriesByType('navigation')[0]
    console.log(navEntry.domContentLoadedEventEnd - navEntry.domContentLoadedEventStart)

    The Resource Timing API is similar to Navigation Timing API in that it measures load times, except it measures all the metrics for loading the requested resources of a current page, rather than the current page itself. For instance, we can measure how long it takes an image hosted on another server, such as a CDN, to load on the page:

// Response time of resources
performance.getEntriesByType('resource').forEach((r) => {
  console.log(`response time for ${r.name}: ${r.responseEnd - r.responseStart}`);
});

    The Navigation Anomaly

    Wanna hear an interesting tidbit about the Navigation Timing API?

    It was conceived before the Performance Timeline API. That’s why, although you can access some navigation metrics using the Performance Timeline API (by filtering the navigation entry type), the Navigation Timing API itself has two interfaces that are directly extended from the Performance API:

    • performance.timing
    • performance.navigation

    All the metrics provided by performance.navigation can be provided by navigation entries of the Performance Timeline API. As for the metrics you fetch from performance.timing, however, only some are accessible from the Performance Timeline API.

    As a result, we use performance.timing to get the navigation metrics for the current page instead of using the Performance Timeline API via performance.getEntriesByType("navigation"):

// Time from the start of navigation to the end of the current page's load event
// (read just after the load event completes, when loadEventEnd has been set)
addEventListener('load', () => setTimeout(() => {
  const t = performance.timing;
  console.log(t.loadEventEnd - t.navigationStart);
}, 0));

    Let’s Wrap This Up

    I’d say your best bet for getting started with the Performance API is to begin by familiarizing yourself with all the performance entry types and their attributes. This will get you quickly acquainted with the end results of all the APIs—and the power this API provides for measuring performance.

    As a second course of action, get to know how the Performance Timeline API probes into all those available metrics. As we covered, the two are closely related and the interplay between the two can open up interesting and helpful methods of measurement.

At that point, you can make a move toward mastering the fine art of putting the other extended APIs to use. That’s where everything comes together and you finally get to see the full picture of how all of these APIs, methods and entries are interconnected.



    Comparing Novel vs. Tried and True Image Formats

    Popular image file formats such as JPG, PNG, and GIF have been around for a long time. They are relatively efficient and web developers have introduced many optimization solutions to further compress their size. However, the era of JPGs, PNGs, and GIFs may be coming to an end as newer, more efficient image file formats aim to take their place.

    We’re going to explore these newer file formats in this post along with an analysis of how they stack up against one another and the previous formats. We will also cover optimization techniques to improve the delivery of your images.

    Why do we need new image formats at all?

    Aside from image quality, the most noticeable difference between older and newer image formats is file size. New formats use algorithms that are more efficient at compressing data, so the file sizes can be much smaller. In the context of web development, smaller files mean faster load times, which translates into lower bounce rates, more traffic, and more conversions. All good things that we often preach.

    As with most technological innovations, the rollout of new image formats will be gradual as browsers consider and adopt their standards. In the meantime, we as web developers will have to accommodate users with varying levels of support. Thankfully, Can I Use is already on top of that and reporting on browser support for specific image formats.

    The New Stuff

    As we wander into a new frontier of image file formats, we’ll have lots of format choices. Here are a few candidates that are already popping up and making cases to replace the existing standard bearers.

    WebP

    WebP was developed by Google as an alternative to JPG and can be up to 80 percent smaller than JPEGs containing the same image.

    WebP browser support is improving all the time. Opera and Chrome currently support it. Firefox announced plans to implement it. For now, Internet Explorer and Safari are the holdouts. Large companies with tons of influence like Google and Facebook are currently experimenting with the format and it already makes up about 95 percent of the images on eBay’s homepage. YouTube also uses WebP for large thumbnails.

    If you’re using a CMS like WordPress or Joomla, there are extensions to help you easily implement support for WebP, such as Optimus and Cache Enabler for WordPress and Joomla’s own supported extension. These will not break your website for browsers that don’t support the format so long as you provide PNG or JPG fallbacks. As a result, browsers that support the newer formats will see a performance boost while others get the standard experience. Considering that browser support for WebP is growing, it’s a great opportunity to save on latency.

    This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 23, Opera 12, Firefox No, IE No, Edge No, Safari No

Mobile / Tablet: iOS Safari No, Opera Mobile 11.1, Opera Mini all, Android 4.2-4.3, Android Chrome 62, Android Firefox No

    HEIF

    High efficiency image files (or HEIF) actually bear the extension HEIC (.heic), which stands for high efficiency image container, but the two acronyms are being used interchangeably. Earlier this year, Apple announced that its newest line of products will support HEIF format by default.

    On top of smaller file sizes, HEIF offers more versatility than other formats since it can support both still images and image sequences. Therefore, it’s possible to store burst photos, focal stacks, exposure stacks, images captured from video and other image collections in a single file. HEIF also supports transparency, 3D, and 4K.

    In addition to images, HEIF files can hold image properties, thumbnails, metadata and auxiliary data such as depth maps and audio. Image derivations can be stored as well thanks to non-destructive editing operations. That means cropping, rotations, and other alterations can be undone at any time. Imagine all of your image variations contained in a single file!

    Apple is doing everything it can to make the transition as seamless as possible. For example, when users share HEIF files with apps that do not support the format, Apple will automatically convert the image to a more compatible format such as JPG.

    There is no browser support for HEIF at the time of this writing.

    This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome No, Opera No, Firefox No, IE No, Edge No, Safari No

Mobile / Tablet: iOS Safari No, Opera Mobile No, Opera Mini No, Android No, Android Chrome No, Android Firefox No

    That being said, the file format offers impressive file savings for both video and images. This is becoming increasingly important as our devices become stronger and are able to take higher quality images and videos, thus resulting in a greater need for efficient media files.

    FLIF

Free Lossless Image Format (or FLIF) uses a compression algorithm that results in files that are 14-74 percent smaller than older formats without sacrificing quality (i.e. lossless). Therefore, FLIF is a great fit for any type of image or animation.

The FLIF homepage claims that FLIF files are 43 percent smaller on average than typical PNG files. The graph below illustrates how FLIF compares to other formats in this regard.

    FLIF often winds up being the most efficient format in tests.

FLIF takes advantage of something called meta-adaptive near-zero integer arithmetic coding, or (appropriately) MANIAC. FLIF also supports progressive interlacing so that images appear whole as soon as they begin downloading, a feature that has been shown to reduce web page bounce rates.

    The potential of FLIF is very exciting, but there is no browser support at the moment nor does it look like any browsers are currently considering adding it. While creators of the format are working hard on achieving native support for popular web browsers and image editing tools, developers can access the FLIF source code and snag a polyfill solution to test it out.

    The Existing Stuff

    As mentioned earlier, we’re likely still years away from the new formats completely taking over. In some cases, it might be better to stick with the tried and true. Let’s review what formats we’re talking about and discuss how they’ve stuck around for so long.

    JPG

    As the ruling standard for most digital cameras and photo sharing devices, JPG is the most frequently used image format on the internet. W3Techs reports that nearly three-quarters of all websites use JPG files. Similarly, most popular photo editing software save images as JPG files by default.

    JPG is named after Joint Photographic Experts Group, the organization that developed the technology; hence why JPG is alternatively called JPEG. You may see these acronyms used interchangeably.

    The format dates all the way back to 1992, and was created to facilitate lossy compression of bitmap images. Lossy compression is an irreversible process that relies on inexact approximations. The idea was to allow developers to adjust compression ratios to achieve their desired balance between file size and image quality.

    The JPG format is terrific for captured photos; however, as the name implies, lossy compression comes with a reduction in image quality. Quality degrades further each time an image is edited and re-saved, which is why developers are taught to refrain from resizing images multiple times.

    GIF

    GIF is short for graphics interchange format. It depends on a compression algorithm called LZW, which doesn’t degrade image quality. The GIF format lacks the color support of JPG and PNG, but it has stuck around nonetheless thanks to its ability to render animations by bundling multiple images into a single file. Images stored inside a GIF file can render in succession to create a short movie-like effect. GIFs can be configured to display image sequences a set number of times or loop infinitely.

    Image courtesy of Giphy.com

    PNG

    The good old portable network graphic (PNG) was originally conceptualized as the successor to the GIF format and debuted in 1996. It was designed specifically for representing images on the web. In terms of popularity, PNG is a close runner-up to JPG. W3Techs claims that 72 percent of websites use this format. Unlike JPG, PNG images are capable of lossless compression (meaning no image quality is lost).

    Another advantage over JPG is that PNG supports transparency and opacity. Since large photos tend to look superior in the JPG format, the PNG format is typically used for non-complex graphics and illustrations.

    Comparing the transparency support of JPG (left) and PNG (right).

    Ways to Improve Image Optimization and Delivery

    There are a few vital things to consider when optimizing images for the web because any file format—including the new ones—can end up adding yet another layer of complexity. Images typically account for the bulk of the bytes on a web page, so image optimization is considered low-hanging fruit for improving a website’s performance. The Google Dev Guide has a comprehensive article on the topic, but here is a condensed list of tips for speeding up your image delivery.

    Implement Support for New Image Formats

    Since newer formats like WebP aren’t yet universally supported, you must configure your applications so that they serve up the appropriate resources to your users.

    You must be able to detect which formats the client supports and deliver the best option. In the case of WebP, there are a few ways to do this.
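One common approach, sketched below, is to test for WebP support on the client by decoding a tiny WebP image and branching on the result; the class names and the use of a data URI here are assumptions, not a prescribed method. On the markup side, a picture element with a type="image/webp" source and a JPG/PNG fallback accomplishes the same thing without JavaScript.

// Detect WebP support by trying to decode a tiny base64-encoded WebP image.
function supportsWebP() {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(img.width > 0 && img.height > 0);
    img.onerror = () => resolve(false);
    img.src = 'data:image/webp;base64,UklGRiIAAABXRUJQVlA4IBYAAAAwAQCdASoBAAEADsD+JaQAA3AAAAAA';
  });
}

supportsWebP().then((ok) => {
  // e.g. let CSS choose between .webp and .no-webp background images
  document.documentElement.classList.add(ok ? 'webp' : 'no-webp');
});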

    Invest in a CDN

    A content delivery network (CDN) accelerates the delivery of images by caching them on their network of edge servers. Therefore, when visitors come to your website, they get routed to the nearest edge server instead of the origin server. This can produce massive time savings especially if your users are far from your origin server.

    We have a whole post on the topic to help understand how CDNs work and how to leverage them for your projects.

    Use CSS Instead of Images

Because older browsers didn’t support CSS features like drop shadows and rounded corners, veteran web developers are used to displaying certain elements like buttons as images. Remember the days when displaying a custom font required making images for headlines? These practices are still out in the wild, but they are terribly inefficient approaches. Instead, use CSS whenever you can.

    Check Your Image Cache Settings

    For image files that don’t change very often, you can utilize HTTP caching directives to improve load times for your regular visitors. That way, when someone visits your website for the first time, their browser will cache the image so that it doesn’t have to be downloaded again on subsequent visits. This practice can also save you money by reducing bandwidth costs.

    Of course, improper caching can cause problems. Adding a fingerprint, such as a timestamp, to your images can help prevent caching conflicts. Fortunately, most web development platforms do this automatically.
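As a sketch of what that can look like on the server (assuming a Node/Express setup and fingerprinted image filenames):

const express = require('express');
const app = express();

// Serve images with a long-lived Cache-Control header.
// "immutable" is reasonable here because the filenames are fingerprinted,
// so a changed image gets a new URL instead of a stale cache hit.
app.use('/img', express.static('img', {
  maxAge: '365d',
  immutable: true,
}));

app.listen(3000);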

    Resize Images for Different Devices

Figuring out how to best accommodate mobile devices with smaller screens is an ongoing process. Some developers don’t even bother and simply serve the same image files to all users, but this approach wastes your bandwidth and your mobile visitors’ time. Consider using srcset so that the browser determines which image size it should deliver based on the client’s screen dimensions.

    Image Compression Tests

It’s always interesting to see the size differences each image format produces. In this article, we’re comparing lossless and lossy image formats side by side. Of course, that’s not common practice, since a lossy file will often be smaller than a lossless one simply because image quality is sacrificed to achieve the smaller size.

    In any case, choosing between lossless and lossy image formats should be based on how image intensive your site is and how fast it already runs. For example, an e-commerce shop may be comfortable with a slightly degraded image in exchange for faster load times while a photographer website is likely the opposite in order to showcase talent.

    To compare the sizes of each of the six image formats mentioned in this article, we began with three JPG images and converted them into each of the other formats. Here are the performance results.

    • Image 1
    • Image 2
    • Image 3

    As previously mentioned, the results below vary significantly due to lossless/lossy image formats. For instance, PNG and FLIF images are both lossless, therefore resulting in larger image files.

Format    Image 1    Image 2    Image 3
WebP      1.8 MB     293 KB     1.6 MB
HEIF      1.2 MB     342 KB     1.1 MB
FLIF      7.4 MB     2.5 MB     6.6 MB
JPG       3.9 MB     1.3 MB     3.5 MB
GIF       6.3 MB     3.9 MB     6.7 MB
PNG       13.2 MB    5 MB       12.5 MB

    According to the results above, HEIF images were smaller overall than any other format. However, due to their lack of support, it currently isn’t possible to integrate the HEIF format into web applications. WebP came in at a fairly close second and does offer ways to work around the less-than-ideal amount of browser support. For users who are using Chrome or Opera, WebP images will certainly help accelerate delivery.

As for the lossless image formats, PNG is significantly larger than its lossy JPG counterpart. However, when optimized with FLIF, savings of about 50 percent were realized. This makes FLIF a great alternative for those who require high-quality images at a smaller file size. That said, FLIF isn’t currently supported by any web browser, similar to HEIF.


    Conclusion

    The old image formats will likely still be around for many years to come, but more developers will embrace the newer formats once they realize the size-saving benefits.

Cameras, mobile devices, and gadgets in general are becoming more and more sophisticated, meaning the images and videos they capture are higher quality and take up more space. New formats must be adopted to mitigate this, and it looks like we have some extremely promising options to look forward to, even if it will take some time to see them officially adopted.

