Hey hey `font-display`

Y’all know about font-display? It’s pretty great. It’s a CSS property that you can use within @font-face blocks to control how, visually, that font loads. Font loading is really pretty damn complicated. Here’s a guide from Zach Leatherman to prove it, which includes over 10 font loading strategies, including strategies that involve critical inline CSS of subsets of fonts combined with loading the rest of the fonts later through JavaScript. It ain’t no walk in the park.

Using font-display is kinda like a walk in the park though. It’s just a single line of CSS. It doesn’t solve everything that Zach’s more exotic demos do, but it can go a long way with that one line. It’s notable to bring up right now, as support has improved a lot lately. It’s now in Firefox 58+, Chrome 60+, Safari 11.1+, iOS 11.3+, and Chrome on Android 64+. Pretty good.

What do you get from it? The ability to control how your text displays while the custom font loads. These posts dig deeper:

  • font-display for the Masses by Jeremy Wagner
  • If you really dislike FOUT, font-display: optional might be your jam by me

Reminder:

    FOUT = Flash of Unstyled Text
    FOIT = Flash of Invisible Text

    Neither is great. In a perfect world, our custom fonts just show up immediately. But since that’s not a practical possibility, we pick based on our priorities.

    The best resource out there about it is Monica Dinculescu’s explainer page:

I’d summarize the values like this:

    • If you’re OK with FOUT, you’re probably best off with font-display: swap; which will display a fallback font fairly fast, but swap in your custom font when it loads.
    • If you’re OK with FOIT, you’re probably best off with font-display: block; which is fairly similar to current browser behavior, where it shows nothing as it waits for the custom font, but will eventually fall back.
• If you only want the custom font to show at all when it’s available immediately, font-display: optional; is what you want. The font will still load in the background and will likely be there on the next page load. (A minimal @font-face sketch follows this list.)
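For instance, opting into FOUT is one extra declaration inside the @font-face block (the family name and font URLs here are placeholders):

@font-face {
  font-family: "My Custom Font";
  src: url("/fonts/my-custom-font.woff2") format("woff2"),
       url("/fonts/my-custom-font.woff") format("woff");
  font-display: swap; /* show a fallback right away, swap in the custom font when it loads */
}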

    Those are some pretty decent options for a single line of CSS. But again, remember if you’re running a major text-heavy site with custom fonts, Zach’s guide can help you do more.

I’d almost go out on a limb and say: every @font-face block out there should have a font-display property. The only caveat is if you’re doing something exotic and, for some reason, want the browser’s default behavior.

    Wanna hear something quite unfortunate? We already mentioned font-display: block;. Wouldn’t you think it, uh, well, blocked the rendering of text until the custom font loads? It doesn’t. It’s still got a swap period. It would be the perfect thing for something like icon fonts where the icon (probably) has no meaning unless the custom font loads. Alas, there is no font-display solution for that.

    And, hey gosh, wouldn’t it be nice if Google Fonts allowed us to use it?


    Monitoring unused CSS by unleashing the raw power of the DevTools Protocol

    From Johnny’s dev blog:

    The challenge: Calculate the real percentage of unused CSS

    Our goal is to create a script that will measure the percentage of unused CSS of this page. Notice that the user can interact with the page and navigate using the different tabs.

    DevTools can be used to measure the amount of unused CSS in the page using the Coverage tab. Notice that the percentage of unused CSS after the page loads is ~55%, but after clicking on each of the tabs, more CSS rules are applied and the percentage drops down to just ~15%.
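That coverage data is exactly what the DevTools Protocol exposes to scripts. As a rough sketch of the idea (using Puppeteer as the protocol client; the URL and interactions are placeholders, and Johnny’s actual script is more involved):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.coverage.startCSSCoverage();
  await page.goto('https://example.com');
  // ...click the tabs and otherwise interact with the page here...
  const coverage = await page.coverage.stopCSSCoverage();

  // Total bytes of CSS versus the bytes whose rules were actually applied
  let usedBytes = 0;
  let totalBytes = 0;
  for (const entry of coverage) {
    totalBytes += entry.text.length;
    for (const range of entry.ranges) {
      usedBytes += range.end - range.start;
    }
  }
  console.log(`Unused CSS: ${(100 * (1 - usedBytes / totalBytes)).toFixed(1)}%`);

  await browser.close();
})();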

    That’s why I’m so skeptical of anything that attempts to measure “unused CSS.” This is an incredibly simple demo (all it does is click some tabs) and the amount of unused CSS changes dramatically.

    If you are looking for accurate data on how much unused CSS is in your codebase, in an automated fashion, you’ll need to visit every single URL on your site and trigger every possible event on every element and continue doing that until things stop changing. Then do that for every possible state a user could be in—in every possible browser.

    Here’s another incredibly exotic way I’ve heard of it being done:

    1. Wait a random amount of time after the page loads
    2. Loop through all the selectors in the CSSOM
3. Run document.querySelector on each selector and see if it finds anything or not (see the sketch after this list)
    4. Report those findings back to a central database
    5. Run this for enough time on a random set of visitors (or all visitors) that you’re certain is a solid amount of data representing everywhere on your site
    6. Take your set of selectors that never matched anything and add a tiny 1px transparent GIF background image to them
    7. Run that modified CSS for an equal amount of time
    8. Check your server logs to make sure those images were never requested. If they were, you were wrong about that selector being unused, so remove it from the list
9. At the end of all that, you have a set of selectors in your CSS that are very likely to be unused.
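Here’s what steps 2 and 3 might look like; a rough, illustrative sketch only, since selectors with pseudo-elements or state-based pseudo-classes like :hover will report as unmatched even when they matter:

const unmatched = [];
for (const sheet of document.styleSheets) {
  let rules;
  try {
    rules = sheet.cssRules; // throws for cross-origin stylesheets
  } catch (e) {
    continue;
  }
  for (const rule of rules) {
    // Only test plain style rules; skip @media, @keyframes, etc.
    if (rule instanceof CSSStyleRule && !document.querySelector(rule.selectorText)) {
      unmatched.push(rule.selectorText);
    }
  }
}
console.log(unmatched); // candidates for "unused", not proof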

    Clever, but highly unlikely that anyone is using either of these methods in a consistent and useful way.

I’m a little scared for tools like Lighthouse that claim to audit your unused CSS, telling you to “remove unused rules from stylesheets to reduce unnecessary bytes consumed by network activity.” The chances seem dangerously high that someone runs this, finds this so-called unused CSS, deletes it, and only later discovers it wasn’t really unused.




    Front-End Performance Checklist

    Vitaly Friedman swings wide with a massive list of performance considerations. It’s a well-considered mix of old tactics (cutting the mustard, progressive enhancement, etc.) and newer considerations (tree shaking, prefetching, etc.). I like the inclusion of a quick wins section since so much can be done for little effort; it’s important to do those things before getting buried in more difficult performance tasks.

Speaking of considering performance, Philip Walton recently dug into what interactive actually means, in a world where we throw around acronyms like TTI (Time to Interactive).




    Breaking Down the Performance API

JavaScript’s Performance API is a boon because it hands us tools to accurately measure the performance of web pages, something we’ve long been doing without it ever becoming easy or precise enough.

    That said, it isn’t as easy to get started with the API as it is to actually use it. Although I’ve seen extensions of it covered here and there in other posts, the big picture that ties everything together is hard to find.

One look at any document explaining the global performance interface (the access point for the Performance API) and you’ll be bombarded with a slew of other specifications, including the High Resolution Time API, the Performance Timeline API and the Navigation Timing API, among what feels like many, many others. It’s enough to make the overarching concept more than a little confusing as to what exactly the API is measuring but, more importantly, makes it easy to overlook the specific goodies that we get with it.

    Here’s an illustration of how all these pieces fit together. This can be super confusing, so having a visual can help clarify what we’re talking about.

    The Performance API includes the Performance Timeline API and, together, they constitute a wide range of methods that fetch useful metrics on Web page performance.

    Let’s dig in, shall we?

    High Resolution Time API

    The performance interface is a part of the High Resolution Time API.

    “What is High Resolution Time?” you might ask. That’s a key concept we can’t overlook.

A time based on Date is accurate only to the millisecond. A high resolution time, on the other hand, is precise to fractions of a millisecond. That’s pretty darn precise, making it far better suited for yielding accurate measurements of time.

It’s worth pointing out that a high resolution time measured by the user agent (UA) doesn’t change with any changes in system time because it is taken from a global, monotonically increasing clock created by the UA. The time always increases and can never be forced backward. That becomes a useful constraint for time measurement.

    Every time measurement measured in the Performance API is a high resolution time. Not only does that make it a super precise way to measure performance but it’s also what makes the API a part of the High Resolution Time API and why we see the two often mentioned together.
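You can see the difference in resolution by comparing the two clocks in a console (the exact values will differ on every run):

console.log(Date.now());        // e.g. 1516364734059, whole milliseconds since the epoch
console.log(performance.now()); // e.g. 4537.214999, fractional milliseconds since the page's time origin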

    Performance Timeline API

    The Performance Timeline API is an extension of the Performance API. That means that where the Performance API is part of the High Resolution Time API, the Performance Timeline API is part of the Performance API.

    Or, to put it more succinctly:

High Resolution Time API
└── Performance API
    └── Performance Timeline API

The Performance Timeline API gives us access to almost all of the measurements and values we can possibly get from the whole of the Performance API itself. That’s a lot of information at our fingertips with a single API, and it’s why the diagram at the start of this article shows them nearly on the same plane as one another.

    There are many extensions of the Performance API. Each one returns performance-related entries and all of them can be accessed and even filtered through Performance Timeline, making this a must-learn API for anyone who wants to get started with performance measurements. They are so closely related and complementary that it makes sense to be familiar with both.

    The following are three methods of the Performance Timeline API that are included in the performance interface:

    • getEntries()
    • getEntriesByName()
    • getEntriesByType()

    Each method returns a list of (optionally filtered) performance entries gathered from all of the other extensions of the Performance API and we’ll get more acquainted with them as we go.
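As a quick illustration of the three (the entry name passed to getEntriesByName() depends on what the page has actually recorded):

performance.getEntries();                    // every entry on the timeline
performance.getEntriesByType('resource');    // only resource entries
performance.getEntriesByName('first-paint'); // only entries with that exact name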

Another key interface included in the API is PerformanceObserver. It watches a performance entry list for new entries of given types and notifies us when they are recorded. Pretty handy for real-time monitoring!
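A minimal sketch of it in action; the entry types observed here are just examples:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.entryType}: ${entry.name} at ${entry.startTime}`);
  }
});
observer.observe({ entryTypes: ['paint', 'resource'] });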

    The Performance Entries

    The things we measure with the Performance API are referred to as “entries” and they all offer a lot of insight into Web performance.

    Curious what they are? MDN has a full list that will likely get updated as new items are released, but this is what we currently have:

| Entry | What it Measures | Parent API |
| --- | --- | --- |
| frame | Measures frames, which represent a loop of the amount of work a browser needs to do to process things like DOM events, resizing, scrolling and CSS animations. | Frame Timing API |
| mark | Creates a timestamp in the performance timeline that provides values for a name, start time and duration. | User Timing API |
| measure | Similar to mark in that they are points on the timeline, but they are named for you and placed between marks. Basically, they’re a midpoint between marks with no custom name value. | User Timing API |
| navigation | Provides context for the load operation, such as the types of events that occur. | Navigation Timing API |
| paint | Reports moments when pixels are rendered on the screen, such as the first paint, first paint with content, the start time and total duration. | Paint Timing API |
| resource | Measures the latency of dependencies for rendering the screen, like images, scripts and stylesheets. This is where caching makes a difference! | Resource Timing API |

    Let’s look at a few examples that illustrate how each API looks in use. To learn more in depth about them, you can check out the specifications linked up in the table above. The Frame Timing API is still in the works.

    Paint Timing API, conveniently, has already been covered thoroughly on CSS-Tricks, but here’s an example of pulling the timestamp for when painting begins:

// Time when the page began to render
console.log(performance.getEntriesByType('paint')[0].startTime)

    The User Timing API can measure the performance for developer scripts. For example, say you have code that validates an uploaded file. We can measure how long that takes to execute:

// Time to console-print "hello"
// We could also use performance.measure() to measure the time
// instead of calculating the difference between the marks in the last line.
performance.mark('sayHelloStart') // the mark names here are our own choice
console.log('hello')
performance.mark('sayHelloEnd')
var marks = performance.getEntriesByType('mark')
console.info(`Time taken to say hello: ${marks[1].startTime - marks[0].startTime}`)

The Navigation Timing API shows metrics for loading the current page, including metrics from when the previous page was being unloaded. We can measure, with a ton of precision, exactly how long the current page takes to load:

    // Time to complete DOM content loaded event
    var navEntry = performance.getEntriesByType('navigation')[0]
    console.log(navEntry.domContentLoadedEventEnd - navEntry.domContentLoadedEventStart)

The Resource Timing API is similar to the Navigation Timing API in that it measures load times, except it measures all the metrics for loading the requested resources of the current page, rather than the current page itself. For instance, we can measure how long it takes an image hosted on another server, such as a CDN, to load on the page:

    // Response time of resources
performance.getEntriesByType('resource').forEach((r) => {
  console.log(`response time for ${r.name}: ${r.responseEnd - r.responseStart}`);
});

    The Navigation Anomaly

    Wanna hear an interesting tidbit about the Navigation Timing API?

    It was conceived before the Performance Timeline API. That’s why, although you can access some navigation metrics using the Performance Timeline API (by filtering the navigation entry type), the Navigation Timing API itself has two interfaces that are directly extended from the Performance API:

    • performance.timing
    • performance.navigation

    All the metrics provided by performance.navigation can be provided by navigation entries of the Performance Timeline API. As for the metrics you fetch from performance.timing, however, only some are accessible from the Performance Timeline API.

    As a result, we use performance.timing to get the navigation metrics for the current page instead of using the Performance Timeline API via performance.getEntriesByType("navigation"):

// Time from the start of navigation to the end of the current page's load event
addEventListener('load', () => {
  const t = performance.timing
  console.log(t.loadEventEnd - t.navigationStart)
})

    Let’s Wrap This Up

    I’d say your best bet for getting started with the Performance API is to begin by familiarizing yourself with all the performance entry types and their attributes. This will get you quickly acquainted with the end results of all the APIs—and the power this API provides for measuring performance.

    As a second course of action, get to know how the Performance Timeline API probes into all those available metrics. As we covered, the two are closely related and the interplay between the two can open up interesting and helpful methods of measurement.

At that point, you can make a move toward mastering the fine art of putting the other extended APIs to use. That’s where everything comes together and you finally get to see the full picture of how all of these APIs, methods and entries are interconnected.



    Comparing Novel vs. Tried and True Image Formats

    Popular image file formats such as JPG, PNG, and GIF have been around for a long time. They are relatively efficient and web developers have introduced many optimization solutions to further compress their size. However, the era of JPGs, PNGs, and GIFs may be coming to an end as newer, more efficient image file formats aim to take their place.

    We’re going to explore these newer file formats in this post along with an analysis of how they stack up against one another and the previous formats. We will also cover optimization techniques to improve the delivery of your images.

    Why do we need new image formats at all?

    Aside from image quality, the most noticeable difference between older and newer image formats is file size. New formats use algorithms that are more efficient at compressing data, so the file sizes can be much smaller. In the context of web development, smaller files mean faster load times, which translates into lower bounce rates, more traffic, and more conversions. All good things that we often preach.

    As with most technological innovations, the rollout of new image formats will be gradual as browsers consider and adopt their standards. In the meantime, we as web developers will have to accommodate users with varying levels of support. Thankfully, Can I Use is already on top of that and reporting on browser support for specific image formats.

    The New Stuff

    As we wander into a new frontier of image file formats, we’ll have lots of format choices. Here are a few candidates that are already popping up and making cases to replace the existing standard bearers.

    WebP

    WebP was developed by Google as an alternative to JPG and can be up to 80 percent smaller than JPEGs containing the same image.

    WebP browser support is improving all the time. Opera and Chrome currently support it. Firefox announced plans to implement it. For now, Internet Explorer and Safari are the holdouts. Large companies with tons of influence like Google and Facebook are currently experimenting with the format and it already makes up about 95 percent of the images on eBay’s homepage. YouTube also uses WebP for large thumbnails.

    If you’re using a CMS like WordPress or Joomla, there are extensions to help you easily implement support for WebP, such as Optimus and Cache Enabler for WordPress and Joomla’s own supported extension. These will not break your website for browsers that don’t support the format so long as you provide PNG or JPG fallbacks. As a result, browsers that support the newer formats will see a performance boost while others get the standard experience. Considering that browser support for WebP is growing, it’s a great opportunity to save on latency.

    This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

    Desktop

| Chrome | Opera | Firefox | IE | Edge | Safari |
| --- | --- | --- | --- | --- | --- |
| 23 | 12 | No | No | No | No |

    Mobile / Tablet

| iOS Safari | Opera Mobile | Opera Mini | Android | Android Chrome | Android Firefox |
| --- | --- | --- | --- | --- | --- |
| No | 11.1 | all | 4.2-4.3 | 62 | No |

    HEIF

    High efficiency image files (or HEIF) actually bear the extension HEIC (.heic), which stands for high efficiency image container, but the two acronyms are being used interchangeably. Earlier this year, Apple announced that its newest line of products will support HEIF format by default.

    On top of smaller file sizes, HEIF offers more versatility than other formats since it can support both still images and image sequences. Therefore, it’s possible to store burst photos, focal stacks, exposure stacks, images captured from video and other image collections in a single file. HEIF also supports transparency, 3D, and 4K.

    In addition to images, HEIF files can hold image properties, thumbnails, metadata and auxiliary data such as depth maps and audio. Image derivations can be stored as well thanks to non-destructive editing operations. That means cropping, rotations, and other alterations can be undone at any time. Imagine all of your image variations contained in a single file!

    Apple is doing everything it can to make the transition as seamless as possible. For example, when users share HEIF files with apps that do not support the format, Apple will automatically convert the image to a more compatible format such as JPG.

    There is no browser support for HEIF at the time of this writing.

    This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

    Desktop

| Chrome | Opera | Firefox | IE | Edge | Safari |
| --- | --- | --- | --- | --- | --- |
| No | No | No | No | No | No |

    Mobile / Tablet

| iOS Safari | Opera Mobile | Opera Mini | Android | Android Chrome | Android Firefox |
| --- | --- | --- | --- | --- | --- |
| No | No | No | No | No | No |

That being said, the file format offers impressive file savings for both video and images. This is becoming increasingly important as our devices grow more capable of capturing higher quality images and videos, resulting in a greater need for efficient media files.

    FLIF

Free Lossless Image Format (or FLIF) uses a compression algorithm that results in files that are 14-74 percent smaller than older formats without sacrificing quality (i.e. lossless). That makes FLIF a great fit for any type of image or animation.

The FLIF homepage claims that FLIF files are 43 percent smaller on average than typical PNG files. The graph below illustrates how FLIF compares to other formats in this regard.

    FLIF often winds up being the most efficient format in tests.

FLIF takes advantage of something called meta-adaptive near-zero integer arithmetic coding, or (appropriately) MANIAC. FLIF also supports progressive interlacing so that images appear whole as soon as they begin downloading, a feature that has been shown to reduce web page bounce rates.

    The potential of FLIF is very exciting, but there is no browser support at the moment nor does it look like any browsers are currently considering adding it. While creators of the format are working hard on achieving native support for popular web browsers and image editing tools, developers can access the FLIF source code and snag a polyfill solution to test it out.

    The Existing Stuff

    As mentioned earlier, we’re likely still years away from the new formats completely taking over. In some cases, it might be better to stick with the tried and true. Let’s review what formats we’re talking about and discuss how they’ve stuck around for so long.

    JPG

    As the ruling standard for most digital cameras and photo sharing devices, JPG is the most frequently used image format on the internet. W3Techs reports that nearly three-quarters of all websites use JPG files. Similarly, most popular photo editing software save images as JPG files by default.

    JPG is named after Joint Photographic Experts Group, the organization that developed the technology; hence why JPG is alternatively called JPEG. You may see these acronyms used interchangeably.

    The format dates all the way back to 1992, and was created to facilitate lossy compression of bitmap images. Lossy compression is an irreversible process that relies on inexact approximations. The idea was to allow developers to adjust compression ratios to achieve their desired balance between file size and image quality.

The JPG format is terrific for captured photos; however, as the name implies, lossy compression comes with a reduction in image quality. Quality degrades further each time an image is edited and re-saved, which is why developers are taught to refrain from re-saving images multiple times.

    GIF

    GIF is short for graphics interchange format. It depends on a compression algorithm called LZW, which doesn’t degrade image quality. The GIF format lacks the color support of JPG and PNG, but it has stuck around nonetheless thanks to its ability to render animations by bundling multiple images into a single file. Images stored inside a GIF file can render in succession to create a short movie-like effect. GIFs can be configured to display image sequences a set number of times or loop infinitely.

    Image courtesy of Giphy.com

    PNG

    The good old portable network graphic (PNG) was originally conceptualized as the successor to the GIF format and debuted in 1996. It was designed specifically for representing images on the web. In terms of popularity, PNG is a close runner-up to JPG. W3Techs claims that 72 percent of websites use this format. Unlike JPG, PNG images are capable of lossless compression (meaning no image quality is lost).

    Another advantage over JPG is that PNG supports transparency and opacity. Since large photos tend to look superior in the JPG format, the PNG format is typically used for non-complex graphics and illustrations.

    Comparing the transparency support of JPG (left) and PNG (right).

    Ways to Improve Image Optimization and Delivery

    There are a few vital things to consider when optimizing images for the web because any file format—including the new ones—can end up adding yet another layer of complexity. Images typically account for the bulk of the bytes on a web page, so image optimization is considered low-hanging fruit for improving a website’s performance. The Google Dev Guide has a comprehensive article on the topic, but here is a condensed list of tips for speeding up your image delivery.

    Implement Support for New Image Formats

    Since newer formats like WebP aren’t yet universally supported, you must configure your applications so that they serve up the appropriate resources to your users.

    You must be able to detect which formats the client supports and deliver the best option. In the case of WebP, there are a few ways to do this.
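One way that requires no scripting at all is the <picture> element, where the browser picks the first source it supports (the file names here are placeholders):

<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="A description of the photo">
</picture>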

    Invest in a CDN

A content delivery network (CDN) accelerates the delivery of images by caching them across its network of edge servers. When visitors come to your website, they get routed to the nearest edge server instead of the origin server. This can produce massive time savings, especially if your users are far from your origin server.

    We have a whole post on the topic to help understand how CDNs work and how to leverage them for your projects.

    Use CSS Instead of Images

    Because older browsers didn’t support image shadows and rounded corners, veteran web developers are used to displaying certain elements like buttons as images. Remember the days when displaying a custom font required making images for headlines? These practices are still out in the wild, but are terribly inefficient approaches. Instead, use CSS whenever you can.

    Check Your Image Cache Settings

    For image files that don’t change very often, you can utilize HTTP caching directives to improve load times for your regular visitors. That way, when someone visits your website for the first time, their browser will cache the image so that it doesn’t have to be downloaded again on subsequent visits. This practice can also save you money by reducing bandwidth costs.

    Of course, improper caching can cause problems. Adding a fingerprint, such as a timestamp, to your images can help prevent caching conflicts. Fortunately, most web development platforms do this automatically.
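As a sketch, a long-lived caching policy for fingerprinted images could be expressed with a response header like this (the one-year max-age is a common choice, not a requirement):

Cache-Control: public, max-age=31536000, immutable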

    Resize Images for Different Devices

Figuring out how to best accommodate mobile devices with smaller screens is an ongoing process. Some developers don’t even bother and simply offer the same image files to all users, but this approach wastes bandwidth and your mobile visitors’ time. Consider using srcset so that the browser determines which image size it should deliver based on the client’s screen dimensions.
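A minimal sketch (the file names, widths, and breakpoint are placeholders):

<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 400px, 800px"
     alt="A description of the photo">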

    Image Compression Tests

It’s always interesting to see the size differences each image format provides. In the case of this article, we’re comparing lossless and lossy image formats together. Of course, that’s not common practice, since lossy files will often be smaller than lossless ones because image quality is sacrificed to produce a smaller file.

    In any case, choosing between lossless and lossy image formats should be based on how image intensive your site is and how fast it already runs. For example, an e-commerce shop may be comfortable with a slightly degraded image in exchange for faster load times while a photographer website is likely the opposite in order to showcase talent.

    To compare the sizes of each of the six image formats mentioned in this article, we began with three JPG images and converted them into each of the other formats. Here are the performance results.

    • Image 1
    • Image 2
    • Image 3

    As previously mentioned, the results below vary significantly due to lossless/lossy image formats. For instance, PNG and FLIF images are both lossless, therefore resulting in larger image files.

| Format | Image 1 Size | Image 2 Size | Image 3 Size |
| --- | --- | --- | --- |
| WebP | 1.8 MB | 293 KB | 1.6 MB |
| HEIF | 1.2 MB | 342 KB | 1.1 MB |
| FLIF | 7.4 MB | 2.5 MB | 6.6 MB |
| JPG | 3.9 MB | 1.3 MB | 3.5 MB |
| GIF | 6.3 MB | 3.9 MB | 6.7 MB |
| PNG | 13.2 MB | 5 MB | 12.5 MB |

    According to the results above, HEIF images were smaller overall than any other format. However, due to their lack of support, it currently isn’t possible to integrate the HEIF format into web applications. WebP came in at a fairly close second and does offer ways to work around the less-than-ideal amount of browser support. For users who are using Chrome or Opera, WebP images will certainly help accelerate delivery.

As for the lossless image formats, PNG is significantly larger than its lossy JPG counterpart. However, when optimized with FLIF, savings of about 50 percent were realized. This makes FLIF a great alternative for those who require high-quality images at smaller file sizes. That said, like HEIF, FLIF isn’t supported by any web browsers yet.


    Conclusion

    The old image formats will likely still be around for many years to come, but more developers will embrace the newer formats once they realize the size-saving benefits.

Cameras, mobile devices and gadgets in general are becoming more and more sophisticated, meaning the images and videos they capture are higher quality and take up more space. New formats must be adopted to mitigate this, and it looks like we have some extremely promising options to look forward to, even if it will take some time to see them officially adopted.

