Planning for Responsive Images

The first time I made an image responsive, it was as simple as coding these four lines:

img {
  max-width: 100%;
  height: auto; /* default */
}

Though that worked for me as a developer, it wasn’t the best for the audience. What happens if the image in the src attribute is heavy? On high-end developer devices (like mine with 16GB of RAM), few or no performance problems occur. But on low-end devices? It’s another story.

image at multiple screen sizes

The above illustration isn’t detailed enough. I’m from Nigeria, and if your product serves users in Africa, that picture alone shouldn’t guide your decisions. Consider device affordability instead:

Nowadays, the lowest-priced iPhone sells for around $300. The average African consumer can’t afford that, even though the iPhone is often treated as the benchmark for a fast device.

That’s all the business analysis you need to understand that a CSS max-width rule alone doesn’t cut it for responsive images. What would, you ask? Let me first explain what goes into an image.

Nuances of images

Images are appealing to users but are a painstaking challenge for us developers who must consider the following factors:

  • Format
  • Disk size
  • Render dimension (layout width and height in the browser)
  • Original dimension (original width and height)
  • Aspect ratio

So, how do we pick the right parameters and deftly mix and match them to deliver an optimal experience for our audience? The answer, in turn, depends on the answers to these questions:

  • Are the images created dynamically by the user or statically by a design team?
  • If the width and height of the image are changed disproportionately, would that affect the quality?
  • Are all the images rendered at the same width and height? When rendered, must they have a specific aspect ratio or one that’s entirely different?
  • What must be considered when presenting the images on different viewports?

Jot down your answers. They will not only help you understand your images — their sources, technical requirements and such — but also enable you to make the right choices in delivery.

Provisional strategies for image delivery

Image delivery has evolved from a simple addition of URLs to the src attribute to complex scenarios. Before delving into them, let’s talk about the multiple options for presenting images so that you can devise a strategy on how and when to deliver and render yours.

First, identify the sources of the images. That way, the number of obscure edge cases can be reduced and the images can be handled as efficiently as possible.

In general, images are either:

  • Dynamic: Images uploaded by the audience or generated by other events in the system.
  • Static: Images created for the website by a photographer, a designer, or you (the developer).

Let’s dig into the strategy for each of these types of images.

Strategy for dynamic images

Static images are fairly easy to work with. On the other hand, dynamic images are tricky and prone to problems. What can be done to mitigate their dynamic nature and make them more predictable like static images? Two things: validation and intelligent cropping.

Validation

Set out a few rules for the audience on what is acceptable and what is not. Nowadays, we can validate all the properties of an image, namely:

  • Format
  • Disk size
  • Dimension
  • Aspect ratio

Note: An image’s render dimension is determined at render time, so there’s nothing for us to validate there.

After validation, a predictable set of images emerges, which is easier to consume.
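
As a rough sketch (assuming uploads arrive in the browser as File objects, and that the limits below are hypothetical placeholders rather than recommendations), such validation might look like this:

// A minimal validation sketch. The limits are hypothetical placeholders;
// swap in whatever rules your product actually needs.
var RULES = {
  formats: ['image/jpeg', 'image/png', 'image/webp'],
  maxBytes: 2 * 1024 * 1024, // disk size: 2 MB
  minWidth: 700,
  minHeight: 700,
  aspectRatios: [1 / 1, 1 / 2] // width divided by height
};

function validateImage(file) {
  return new Promise(function (resolve, reject) {
    if (RULES.formats.indexOf(file.type) === -1) {
      return reject(new Error('Unsupported format: ' + file.type));
    }
    if (file.size > RULES.maxBytes) {
      return reject(new Error('File is too heavy'));
    }

    // Dimensions and aspect ratio require decoding the image first.
    var img = new Image();
    img.onload = function () {
      URL.revokeObjectURL(img.src);
      var ratio = img.naturalWidth / img.naturalHeight;
      if (img.naturalWidth < RULES.minWidth || img.naturalHeight < RULES.minHeight) {
        return reject(new Error('Image is too small'));
      }
      var ratioOk = RULES.aspectRatios.some(function (r) {
        return Math.abs(r - ratio) < 0.01;
      });
      if (!ratioOk) {
        return reject(new Error('Unexpected aspect ratio'));
      }
      resolve(file);
    };
    img.onerror = function () {
      reject(new Error('Not a decodable image'));
    };
    img.src = URL.createObjectURL(file);
  });
}

Running every upload through a gate like this is what turns “anything the audience throws at us” into that predictable set of images.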

Intelligent Cropping

Another strategy for handling dynamic images is to crop them intelligently to avoid deleting important content and refocus on (or re-center) the primary content. That’s hard to do. However, you can take advantage of the artificial intelligence offered by open-source tools or SaaS companies that specialize in image management. An example is in the upcoming sections.


Once a strategy has been nailed down for dynamic images, create a rule table with all the layout options for the images. Below is an example. It’s even worth looking into analytics to determine the most important devices and viewport sizes.

Browser Viewport   HP Laptop   PS4 Slim   Camera Lens / Aspect Ratio
< 300px            100vw       100vw      100vw / 1:2
300px – 699px      100vw       100vw      100vw / 1:1
700px – 999px      50vw        50vw       50vw / 1:1
> 999px            33vw        33vw       100vw / 1:2

The bare (sub-optimal) minimum

Now set aside the complexities of responsiveness and just do what we do best — simple HTML markup with maximum-width CSS.

The following code renders a few images:

<main>
  <figure>
    <img src="https://res.cloudinary.com/...w700/ps4-slim.jpg" alt="PS4 Slim">
  </figure>
  <figure>
    <img src="https://res.cloudinary.com/...w700/x-box-one-s.jpg" alt="X Box One S">
  </figure>
  <!-- More images -->
  <figure>
    <img src="https://res.cloudinary.com/...w700/tv.jpg" alt="Tv">
  </figure>
</main>

Note: The ellipsis (…) in the image URL stands in for the folder, dimension, and cropping strategy, which are too much detail to include here, hence the truncation to focus on what matters now. For the complete version, see the CodePen example below.

This is the shortest CSS example on the Internet that makes images responsive:

/* The parent container */
main {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
}

img {
  max-width: 100%;
}

If the images do not have a uniform width and height, swap max-width for object-fit: cover (along with an explicit width and height) so that each image fills its box without being distorted.
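
A minimal sketch of that idea (assuming each image should fill its grid cell) could be:

img {
  /* Fill the grid cell and crop any overflow instead of distorting the image */
  width: 100%;
  height: 100%;
  object-fit: cover;
}

object-fit needs an explicit box to fill, which is why the width and height are set alongside it.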

Jo Franchetti’s blog post on common responsive layouts with CSS Grid explains how the value of grid-template-columns makes the entire layout adaptive (responsive).

See the Pen Grid Gallery by Chris Nwamba (@codebeast) on CodePen.

The above is not what we are looking for, however, because…

  • the image size and weight are the same on both high-end and low-end devices, and
  • we might want to be stricter with the image width instead of setting a 300px minimum and letting it grow.

Well, this section covers “the bare minimum” so that’s it.

Layout variations

The worst thing that can happen to an image layout is mismanagement of expectations. Because images might have varying dimensions (width and height), we must specify how to render the images.

Should we intelligently crop all the images to a uniform dimension? Should we retain the aspect ratio for a viewport and alter the ratio for a different one? The ball is in our court.

In the case of images in a grid, such as those in the example above with different aspect ratios, we can apply the technique of art direction to render the images. Art direction can help achieve exactly that.

For details on resolution switching and art direction in responsive images, read Jason Grigsby’s series. Another informative reference is Eric Portis’s Responsive Images Guide, parts 1, 2, and 3.

See the code example below.

<main>
  <figure>
    <picture>
      <source media="(min-width: 900px)" srcset="https://res.cloudinary.com/.../c_fill,g_auto,h_1400,w_700/camera-lens.jpg">
      <img src="https://res.cloudinary.com/.../c_fill,g_auto,h_700,w_700/camera-lens.jpg" alt="Camera lens">
    </picture>
  </figure>
  <figure>
    <picture>
      <source media="(min-width: 700px)" srcset="https://res.cloudinary.com/.../c_fill,g_auto,h_1000,w_1000/ps4-pro.jpg">
      <img src="https://res.cloudinary.com/.../c_fill,g_auto,h_700,w_700/ps4-pro.jpg" alt="PS4 Pro">
    </picture>
  </figure>
</main>

Instead of always rendering a single 700px-wide image, we render the 700px x 700px version only until the viewport width exceeds the breakpoints (700px for the PS4 Pro, 900px for the camera lens). If the viewport is larger, then the following rendering occurs:

  • Camera lens images are rendered as portrait images of 700px in width and 1400px in height (700px x 1400px).
  • PS4 Pro images are rendered at 1000px x 1000px.

Art direction

By cropping images to make them responsive, we might inadvertently delete the primary content, like the face of the subject. As mentioned previously, open-source AI tools can help crop intelligently and refocus on the primary objects of images. In addition, Nadav Soferman’s post on smart cropping is a useful starting guide.
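
As a hedged illustration, the Cloudinary URLs used throughout this post already lean on this idea: c_fill crops to the requested box, and g_auto (or g_face, when the subject is a person) asks the service to keep the detected subject in frame. The file name below is hypothetical, and the ellipsis stands in for the account and folder segments, as in the earlier examples:

<!-- c_fill crops to the exact 700x700 box; g_face keeps the detected face in frame.
     "portrait.jpg" is a hypothetical file name used only for illustration. -->
<img src="https://res.cloudinary.com/.../c_fill,g_face,h_700,w_700/portrait.jpg" alt="Portrait of the subject">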

Strict grid and spanning

The first example on responsive images in this post is a flexible one. At a minimum of 300px width, grid items automagically flow into place according to the viewport width. Terrific.

On the other hand, we might want to apply a stricter rule to the grid items based on the design specifications. In that case, media queries come in handy.

Alternatively, we can leverage the grid-span capability to create grid items of varied widths and heights:

@media (min-width: 700px) {
  main {
    display: grid;
    grid-template-columns: repeat(2, 1fr);
  }
}

@media (min-width: 900px) {
  main {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }

  figure:nth-child(3) {
    grid-row: span 2;
  }

  figure:nth-child(4) {
    grid-column: span 2;
    grid-row: span 2;
  }
}

For an image that is a 1000px x 1000px square on a wide viewport, we can span it across two grid cells in both the row and column directions. The image that switches to a portrait orientation (700px x 1400px) on a wider viewport can span two rows.

See the Pen Grid Gallery [Art Direction] by Chris Nwamba (@codebeast) on CodePen.

Progressive optimization

Blind optimization is as lame as no optimization. Don’t focus on optimization without predefining the appropriate measurements. And don’t optimize if the optimization is not backed by data.

Nonetheless, ample room exists for optimization in the above examples. We started with the bare minimum, showed you some cool tricks, and now we have a working, responsive grid. The next question to ask is, “If the page contains 20-100 images, how good will the user experience be?”

Here’s the answer: we must ensure that, when numerous images are rendered, their sizes fit the devices that render them. To accomplish that, we specify the URLs of several candidate images instead of one, and the browser picks the most appropriate one based on the viewport and the criteria we declare. This technique is called resolution switching in responsive images. See this code example:

<img srcset="https://res.cloudinary.com/.../h_300,w_300/v1548054527/ps4.jpg 300w,
             https://res.cloudinary.com/.../h_700,w_700/v1548054527/ps4.jpg 700w,
             https://res.cloudinary.com/.../h_1000,w_1000/v1548054527/ps4.jpg 1000w"
     sizes="(max-width: 700px) 100vw, (max-width: 900px) 50vw, 33vw"
     src="https://res.cloudinary.com/.../h_700,w_700/v1548054527/ps4.jpg"
     alt="PS4 Slim">

Harry Roberts’s tweet intuitively explains what happens.

When I first tried resolution switching, I got confused and tweeted about it.

Hats off to Jason Grigsby for the clarification in his replies.

Thanks to resolution switching, if the browser is resized, then it downloads the right image for the right viewport; hence small images for small phones (good on CPU and RAM) and larger images for larger viewports.

The above table shows that the browser downloads the same image (blue rectangle) with different disk sizes (red rectangle).

See the Pen Grid Gallery [Optimized] by Chris Nwamba (@codebeast) on CodePen.

Cloudinary’s open-source and free Responsive Image Breakpoints Generator is extremely useful for adapting website images to multiple screen sizes. However, in many cases, setting srcset and sizes alone would suffice.

Conclusion

This article aims to offer simple yet effective guidelines for setting up responsive images and layouts in light of the many (and potentially confusing) options available. Do familiarize yourself with CSS Grid, art direction, and resolution switching, and you’ll be a ninja in short order. Keep practicing!


“the closest thing web standards have to a golden rule”

The internet’s own Mat Marquis plucks this choice quote from the HTML Design Principles spec:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

And then he applies the idea to putting images on websites in 2019.



How do you figure?

Scott O’Hara digs into the <figure> and <figcaption> elements. Gotta love a good ol’ HTML deep dive.

I use these on just about every blog post here on CSS-Tricks, and as I’ve suspected, I’ve basically been doing it wrong forever. My original thinking was that a figcaption was just as good as the alt attribute. I generally use it to describe the image.

<figure>
  <img src="starry-night.jpg" alt="">
  <figcaption>The Starry Night, a famous painting by Vincent van Gogh</figcaption>
</figure>

I intentionally left off the alt text, because the figcaption is saying what I would want to say in the alt text and I thought duplicating it would be annoying (to a screen reader user) and unnecessary. Scott says that’s bad: the empty alt text makes the image entirely undiscoverable by some screen readers, so the figure ends up describing nothing.

The correct answer, I think, is to do more work:

<figure>
  <img src="starry-night.jpg" alt="An abstract painting with a weird squiggly tree thing in front of a swirling starry nighttime sky.">
  <figcaption>The Starry Night, a famous painting by Vincent van Gogh</figcaption>
</figure>

It’s a good goal, and I should do better about this. It’s just laziness that gets in the way, and laziness that makes me wish there were a pattern that allowed me to write a description once that worked for both. Maybe something like what Nino Ross Rodriguez just shared today, where artificial intelligence can take some of the lift. But that’s kinda not the point here. The point is that you can’t write it once, because <figcaption> and alt do different things.



Using Artificial Intelligence to Generate Alt Text on Images

Web developers and content editors alike often forget or ignore one of the most important parts of making a website accessible and SEO performant: image alt text. You know, that seemingly small image attribute that describes an image:

<img src="/cute/sloth/image.jpg" alt="A brown baby sloth staring straight into the camera with a tongue sticking out.">

A brown baby sloth staring straight into the camera with a tongue sticking out.
📷 Credit: Huffington Post

If you regularly publish content on the web, then you know it can be tedious trying to come up with descriptive text. Sure, 5-10 images is doable. But what if we are talking about hundreds or thousands of images? Do you have the resources for that?

Let’s look at some possibilities for automatically generating alt text for images with the use of computer vision and image recognition services from the likes of Google, IBM, and Microsoft. They have the resources!

Reminder: What is alt text good for?

Often overlooked during web development and content entry, the alt attribute is a small bit of HTML code that describes an image that appears on a page. It’s so inconspicuous that it may not appear to have any impact on the average user, but it has very important uses indeed:

  • Web Accessibility for Screen Readers: Imagine a page with lots of images where not a single one contains alt text. A user browsing with a screen reader would only hear the word “image” blurted out, and that’s not very helpful. Great, there’s an image, but what is it? Including alt enables screen readers to help the visually impaired “see” what’s there and have a better understanding of the content of the page. They say a picture is worth a thousand words — that’s a thousand words of context a user could be missing.
  • Display text if an image does not load: The World Wide Web seems infallible and, like New York City, never sleeps, but flaky and faulty connections are a real thing and, when that happens, images tend not to load properly and “break.” Alt text is a safeguard in that it displays on the page in place of the “broken” image, providing users with content as a fallback.
  • SEO performance: Alt text on images contributes to SEO performance as well. Though it doesn’t exactly help a site or page skyrocket to the top of the search results, it is one factor to keep in mind for SEO performance.

Knowing how important these things are, hopefully you’ll be able to include proper alt text during development and content entry. But are your archives in good shape? Trying to come up with a detailed description for a large backlog of images can be a daunting task, especially if you’re working on tight deadlines or have to squeeze it in between other projects.

What if there was a way to apply alt text as an image is uploaded? And! What if there was a way to check the page for missing alt tags and automagically fill them in for us?

There are available solutions!

Computer vision (or image recognition) has actually been offered for quite some time now. Companies like Google, IBM and Microsoft have their own APIs publicly available so that developers can tap into those capabilities and use them to identify images as well as the content in them.

There are developers who have already utilized these services and created their own plugins to generate alt text. Take Sarah Drasner’s generator, for example, which demonstrates how Azure’s Computer Vision API can be used to create alt text for any image via upload or URL. Pretty awesome!

See the Pen Dynamically Generated Alt Text with Azure’s Computer Vision API by Sarah Drasner (@sdras) on CodePen.

There’s also Automatic Alternative Text by Jacob Peattie, which is a WordPress plugin that uses the same Computer Vision API. It’s basically an addition to the workflow that allows the user to upload an image and have alt text generated automatically.

Tools like these generally help speed up the process of content management, editing, and maintenance. Even the effort of thinking up descriptive text has been minimized and passed to the machine!

Getting Your Hands Dirty With AI

I have played around with a few AI services and am confident in saying that Microsoft Azure’s Computer Vision produces the best results. The services offered by Google and IBM certainly have their perks and can still identify images and return proper results, but Microsoft’s is so good and so accurate that it’s not worth settling for something else, at least in my opinion.

Creating your own image recognition plugin is pretty straightforward. First, head over to Microsoft Azure Computer Vision. You’ll need to log in or create an account in order to grab an API key for the plugin.

Once you’re on the dashboard, search and select Computer Vision and fill in the necessary details.

Starting out

Wait for the platform to finish spinning up an instance of your computer vision. The API keys for development will be available once it’s done.

Keys: Also known as the Subscription Key in the official documentation

Let the interesting and tricky parts begin! I will be using vanilla JavaScript for the sake of demonstration. For other languages, you can check out the documentation. Below is a straight-up copy-and-paste of the code; just swap in your own values for the placeholders.

var request = new XMLHttpRequest();

request.open('POST', 'https://[LOCATION]/vision/v1.0/describe?maxCandidates=1&language=en', true);
request.setRequestHeader('Content-Type', 'application/json');
request.setRequestHeader('Ocp-Apim-Subscription-Key', '[SUBSCRIPTION_KEY]');

request.onload = function () {
  var resp = request.responseText;

  if (request.status >= 200 && request.status < 400) {
    // Success!
    console.log('Success!');
  } else {
    // We reached our target server, but it returned an error
    console.error('Error!');
  }

  console.log(JSON.parse(resp));
};

request.onerror = function (e) {
  console.log(e);
};

request.send(JSON.stringify({ "url": "[IMAGE_URL]" }));
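
For those who prefer promises, here is a sketch of the same call using the fetch API instead of XMLHttpRequest; the bracketed placeholders are the same as above:

// The same describe request, sketched with fetch().
fetch('https://[LOCATION]/vision/v1.0/describe?maxCandidates=1&language=en', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '[SUBSCRIPTION_KEY]'
  },
  body: JSON.stringify({ url: '[IMAGE_URL]' })
})
  .then(function (response) {
    if (!response.ok) {
      // We reached our target server, but it returned an error
      throw new Error('Error! Status: ' + response.status);
    }
    return response.json();
  })
  .then(function (data) {
    console.log('Success!');
    console.log(data);
  })
  .catch(function (e) {
    console.log(e);
  });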

Alright, let’s run through some key terminology of the AI service.

  • Location: This is the subscription location of the service that was selected prior to getting the subscription keys. If you can’t remember the location for some reason, go to the Overview screen and find it under Endpoint (Overview > Endpoint).
  • Subscription Key: This is the key that unlocks the service for our plugin use and can be obtained under Keys. There are two of them, but it doesn’t matter which one is used.
  • Image URL: This is the path for the image that’s getting the alt text. Take note that the images that are sent to the API must meet specific requirements (a small pre-flight check is sketched after this list):
    • File type must be JPEG, PNG, GIF, or BMP
    • File size must be less than 4MB
    • Dimensions should be greater than 50px by 50px
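
A small pre-flight check mirroring those requirements might look like this (it assumes the file is available as a File or Blob and that its pixel dimensions are already known):

// A pre-flight check mirroring the API requirements listed above.
function meetsVisionApiRequirements(file, width, height) {
  var allowedTypes = ['image/jpeg', 'image/png', 'image/gif', 'image/bmp'];
  return allowedTypes.indexOf(file.type) !== -1 &&
    file.size < 4 * 1024 * 1024 && // less than 4 MB
    width > 50 && height > 50;     // greater than 50px by 50px
}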

Easy peasy

Thanks to big companies opening their services and APIs to developers, it’s now relatively easy for anyone to utilize computer vision. As a simple demonstration, I uploaded the image below to Microsoft Azure’s Computer Vision API.

Possible alt text: a hand holding a cellphone

The service returned the following details:

{
  "description": {
    "tags": [
      "person", "holding", "cellphone", "phone", "hand", "screen",
      "looking", "camera", "small", "held", "someone", "man",
      "using", "orange", "display", "blue"
    ],
    "captions": [
      {
        "text": "a hand holding a cellphone",
        "confidence": 0.9583763512737793
      }
    ]
  },
  "requestId": "31084ce4-94fe-4776-bb31-448d9b83c730",
  "metadata": {
    "width": 920,
    "height": 613,
    "format": "Jpeg"
  }
}

From there, you could pick out the alt text that could potentially be used for an image. How you build upon this capability is your business:

  • You could create a CMS plugin and add it to the content workflow, where the alt text is generated when an image is uploaded and saved in the CMS.
  • You could write a JavaScript plugin that adds alt text on-the-fly, after an image has been loaded with notably missing alt text (a sketch follows this list).
  • You could author a browser extension that adds alt text to images on any website when it finds images with it missing.
  • You could write code that scours your existing database or repo of content for any missing alt text and updates them or opens pull requests for suggested changes.
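
Here is a minimal sketch of that second idea. It assumes the describe call from earlier is wrapped in a hypothetical helper named describeImage() that resolves with the parsed API response:

// A sketch: fill in missing alt text once the page has loaded.
// describeImage(url) is a hypothetical wrapper around the describe request
// shown earlier; it should resolve with the parsed JSON response.
function fillMissingAltText() {
  var images = document.querySelectorAll('img:not([alt])');

  images.forEach(function (img) {
    describeImage(img.src).then(function (data) {
      var caption = data.description.captions[0];

      // Only trust reasonably confident captions (0.5 is an arbitrary cutoff).
      if (caption && caption.confidence > 0.5) {
        img.alt = caption.text;
      }
    });
  });
}

document.addEventListener('DOMContentLoaded', fillMissingAltText);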

Take note that these services are not 100% accurate. They do sometimes return a low confidence rating and a description that is not at all aligned with the subject matter. But these platforms are constantly learning and improving. After all, Rome wasn’t built in a day.


Extinct & Endangered

I’ve been watching a lot of nature documentaries lately. I like how you can either pay super close attention to them or use them as background TV. I was a massive fan of the original Blue Planet, so it’s been cool watching the Blue Planet II episodes drop recently, as one example. A typical nature documentary will always have a little “look how bad we’re screwing up the environment” twist, which is the perfect time and place for such a message.

Speaking of perfect time and place, why not remind ourselves of all the endangered animals out there with placeholder images! That’s what Endangered Species Placeholders is. It’s like PlaceKitten, but for environmental good.

I also just came across this free icon set of extinct animals. 😢

