Blobs!

I was recently a guest editor for an issue of Bizarro Devs. It’s a great newsletter! Go sign up! I put in a bunch of links around blobs. Like those weird squishy random shapes that are so “in” right now. Here are those links as well. I’m always a fan of publishing stuff I write 😉

Blobs! Blobs are in! Blobs are — ahem — a bit bizarre. I’ll bask in a design like this annual report cover by Matt Pamer all day. I enjoy watching a design trend like this manifest itself in design tooling and become applied in lots of creative and crafty different ways.

We could start with <svg> and draw our own blob using the Pen tool that is pretty much stock in every vector design application. I’m a cheater though, and would probably wind up checking The Noun Project for some blob examples and stealing the SVG from there. But sadly, there isn’t much there, at least as far as blobs go.

Thank god for… (wait for it)… THE BLOBMAKER:

Once we have a blob, it’s just begging to be moved around. Monica Dinculescu shows how to do just that with pure CSS and liberal use of various CSS transforms in a keyframe animation:

See the Pen CSS only morphing blob by Monica Dinculescu (@notwaldorf) on CodePen.

Or we can use a JavaScript library like KUTE.js to get all morph-y, like Heartbeat has done here:

See the Pen Morphing shapes with KUTE.js by Heartbeat.UA (@hbagency) on CodePen.

A library like Greensock could help moving and morphing the blobs around. Greensock even has a plugin that is probably the most powerful morphing tool out there. This Pen uses Greensock, but adds some native SVG filters so that the blobs squish into each other satisfyingly. We could call it the gooey effect:

See the Pen SVG blob mask by ATCOM (@Atcom) on CodePen.

We’ve only looked at SVG so far, but don’t rule out <canvas>! Liam Egan has made this canvas-based blob downright jiggly:

See the Pen Blob by Liam Egan (@shubniggurath) on CodePen.

Why not add a little physics to the party, like gravity, and let them blobs get squishy that way! Hakim El Hattab got it done here:

See the Pen Blob by Hakim El Hattab (@hakimel) on CodePen.

And blobs don’t have to be alone! Blobs that are squished together are like fluid. You might get a kick out of Peeke Kuepers’ article Simulating blobs of fluid.


Creating Your Own Gravity and Space Simulator

Space is vast. Space is awesome. Space is difficult to understand — or so people tend to think. But in this tutorial I am going to show you that this is not the case. Quite the contrary; the laws that govern the motion of the stars, planets, asteroids and even entire galaxies are incredibly simple. You could argue that if our Universe was created by a developer, she sure was concerned about writing clean code that would be easy to maintain and scale.

What we are going to do is create a simulation of the inner region of our solar system using nothing but plain old JavaScript. It will be a gravitational n-body simulation where every mass feels the gravity of all the other masses being simulated. To spice things up, I will also show how you can enable users of your simulator to add planets of their own to the simulation with nothing but a little bit of mouse drag action, and in doing so, cause all sorts of cosmic mayhem. A gravity or space simulator would not be worthy of its name without motion trails, so I will show you how to create some fancy looking trails, too, in addition to some other shenanigans that will make the simulator a little bit more fun for the average user.

See the Pen Gravity Simulator Tutorial by Darrell Huffman (@thehappykoala) on CodePen.

You will find the complete source code for this project in the Pen above. There is nothing fancy going on there. No bundling of modules, or transpilation of TypeScript or JSX into JavaScript; just HTML markup, CSS, and a healthy dose of JavaScript.

I came up with the idea for this while working on a project that is close to my heart, namely Harmony of the Spheres. Harmony of the Spheres is open source and very much a work in progress, so if you enjoy this tutorial and got your appetite for all things space and physics related going, check out the repository and fire away a pull request if you find a bug or have a cool new feature that you would like to see implemented.

For this tutorial, it is assumed that you have a basic grasp of JavaScript and the syntax and features that were introduced with ES6. Also, if you are able to draw a rectangle onto a canvas element, that would help, too. If you are not yet in possession of this knowledge, I suggest you head over to MDN and start reading up on ES6 classes, arrow functions, shorthand notation for defining key-value pairs for object literals and const and let. If you are not quite sure how to set up a canvas animation, go check out the documentation on the Canvas API on MDN.

Part 1: Writing a Gravitational N-Body Algorithm

To achieve the goal outlined above, we are going to draw on numerical integration, which is an approach to solving gravitational n-body problems where you take the positions and velocities of all objects at a given time (T), calculate the gravitational force they exert on each other and update their velocities and positions at time (T + dt, dt being shorthand for delta time), or in other words, the change in time between iterations. Repeating this process, we can trace the trajectories of a set of masses through space and time.

We will use a Cartesian coordinate system for our simulation. The Cartesian coordinate system is based on three mutually perpendicular coordinate axes: the x-axis, the y-axis, and the z-axis. The three axes intersect at the point called the origin, where x, y and z are equal to 0. An object in a Cartesian space has a unique position that is defined by its x, y and z values. The benefit of using the Cartesian coordinate system for our simulation is that the Canvas API, with which we will visualize our simulation, uses it, too.

For the purpose of writing an algorithm for solving the gravitational n-body problem, it is necessary to have an understanding of what is meant by velocity and acceleration. Velocity is the change in position of an object with time, while acceleration is the change in an object’s velocity with time. Newton’s first law of motion stipulates that every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external force. The Earth does not move in a straight line, but orbits the Sun, so clearly it is accelerating, but what is causing this acceleration? As you have probably guessed, given the subject matter of this tutorial, the answer is the gravitational forces exerted on Earth by the Sun, the other planets in our solar system and every other celestial object in the Universe.

Before we discuss gravity, let us write some pseudo code for updating the positions and velocities of a set of masses in Cartesian space. We store our masses as objects in an array where each object represents a mass with x, y and z position and velocity vectors. Velocity vectors are prefixed with a v — v for velocity!

const updatePositionVectors = (masses, dt) => {
  const massesLen = masses.length;
  for (let i = 0; i < massesLen; i++) {
    const massI = masses[i];
    massI.x += massI.vx * dt;
    massI.y += massI.vy * dt;
    massI.z += massI.vz * dt;
  }
};

const updateVelocityVectors = (masses, dt) => {
  const massesLen = masses.length;
  for (let i = 0; i < massesLen; i++) {
    const massI = masses[i];
    massI.vx += massI.ax * dt;
    massI.vy += massI.ay * dt;
    massI.vz += massI.az * dt;
  }
};

Looking at the code above, we can see that — as outlined in our discussion on numerical integration — every time we advance the simulation by a given time step, dt, we update the velocities of the masses being simulated and, with those velocities, we update the positions of the masses. The relationship between position and velocity is also made clear in the code above, as we can see that in one step of our simulation, the change in, for example, the x position vector of our mass is equal to the product of the mass’s x velocity vector and dt. Similarly, we can make out the relationship between velocity and acceleration.

How, then, do we get the x, y and z acceleration vectors for a mass so that we can calculate the change in its velocity vectors? To get the contribution of massJ to the x acceleration vector of massI, we need to calculate the gravitational force exerted by massJ on massI, and then, to obtain the x acceleration vector, we simply calculate the product of this force and the distance between the two masses on the x axis. To get the y and z acceleration vectors, we follow the same procedure. Now we just have to figure out how to calculate the gravitational force exerted by massJ on massI to be able to write some more pseudo code. The formula we are interested in looks like this:

f = (g * massJ.m) / (dSq * (dSq + s)^(1/2))

The formula above tells us that the gravitational force exerted by massJ on massI is equal to the product of the gravitational constant (g) and the mass of massJ (massJ.m) divided by the product of the sum of the squares of the distance between massI and massJ on the x, y and z axes (dSq) and the square root of dSq + s, where s is what is referred to as a softening constant (softeningConstant). Including a softening constant in our gravity calculations prevents a situation where the gravitational force exerted by massJ becomes infinite because it is too close to massI. This “bug,” if you will, in the Newtonian theory of gravity arises because Newtonian gravity treats masses as point objects, which they are not in reality. Moving on: to get the net acceleration of massI along, for example, the x axis, we simply sum the acceleration induced on it by every other mass in the simulation.

Let us transform the above into code for updating the acceleration vectors of all the masses in the simulation.

const updateAccelerationVectors = (masses, g, softeningConstant) => {
  const massesLen = masses.length;
  for (let i = 0; i < massesLen; i++) {
    let ax = 0;
    let ay = 0;
    let az = 0;

    const massI = masses[i];

    for (let j = 0; j < massesLen; j++) {
      if (i !== j) {
        const massJ = masses[j];

        const dx = massJ.x - massI.x;
        const dy = massJ.y - massI.y;
        const dz = massJ.z - massI.z;

        const distSq = dx * dx + dy * dy + dz * dz;

        const f = (g * massJ.m) / (distSq * Math.sqrt(distSq + softeningConstant));

        ax += dx * f;
        ay += dy * f;
        az += dz * f;
      }
    }

    massI.ax = ax;
    massI.ay = ay;
    massI.az = az;
  }
};

We iterate over all the masses in the simulation, and for every mass we calculate the contribution to its acceleration by the other masses in a nested loop and increment the acceleration vectors accordingly. Once we are out of the nested loop, we update the acceleration vectors of massI, which we can then use to calculate its new velocity vectors! Whowie. That was a lot. We now know how to update the position, velocity and acceleration vectors of n bodies in a gravity simulation using numerical integration.

But wait; there is something missing. That is right, we have talked about distance, mass and time, but we have never specified what units we ought to use for these quantities. As long as we are consistent, the choice is arbitrary, but generally speaking, it is a good idea to go for units that are suitable for the scales under consideration, so as to avoid awkwardly long numbers. In the context of our solar system, scientists tend to use astronomical units for distance, solar masses for mass and years for time. Adopting this set of units, the value of the gravitational constant (g in the formula for calculating the gravitational force exerted by massJ on massI) is 39.5. For the position and velocity vectors of the Sun and planets of the inner solar system — Mercury, Venus, Earth and Mars — we turn to NASA JPL’s HORIZONS Web-Interface where we change the output setting to vector tables and the units to astronomical units and days. For whatever reason, Horizons does not serve vectors with years as the unit of time, so we have to multiply the velocity vectors by 365.25, the number of days in a year, to obtain velocity vectors that are consistent with our choice of years as the unit of time.
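To make that last conversion concrete, here is a minimal sketch; the helper name and the shape of the body object are my own, chosen to match the mass objects used later in this tutorial:

// Convert HORIZONS velocity vectors from AU/day to AU/year
const DAYS_PER_YEAR = 365.25;

const toYearlyVelocities = body => ({
  ...body,
  vx: body.vx * DAYS_PER_YEAR,
  vy: body.vy * DAYS_PER_YEAR,
  vz: body.vz * DAYS_PER_YEAR
});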

To think that, with the simple equations and laws discussed above, we can calculate the motion of every galaxy, star, planet and moon contained within this dazzling cosmic panorama captured by the Hubble Telescope is nothing short of awe-inspiring. It is not for nothing that Newton’s theory of gravity is referred to as “Newton’s law of universal gravitation.”

A JavaScript class seems like an excellent way of encapsulating the methods we wrote above together with the data on the masses and the constants we need for our simulation, so let us do some refactoring:

class nBodyProblem {
  constructor(params) {
    this.g = params.g;
    this.dt = params.dt;
    this.softeningConstant = params.softeningConstant;

    this.masses = params.masses;
  }

  updatePositionVectors() {
    const massesLen = this.masses.length;
    for (let i = 0; i < massesLen; i++) {
      const massI = this.masses[i];
      massI.x += massI.vx * this.dt;
      massI.y += massI.vy * this.dt;
      massI.z += massI.vz * this.dt;
    }
    return this;
  }

  updateVelocityVectors() {
    const massesLen = this.masses.length;
    for (let i = 0; i < massesLen; i++) {
      const massI = this.masses[i];
      massI.vx += massI.ax * this.dt;
      massI.vy += massI.ay * this.dt;
      massI.vz += massI.az * this.dt;
    }
  }

  updateAccelerationVectors() {
    const massesLen = this.masses.length;
    for (let i = 0; i < massesLen; i++) {
      let ax = 0;
      let ay = 0;
      let az = 0;

      const massI = this.masses[i];

      for (let j = 0; j < massesLen; j++) {
        if (i !== j) {
          const massJ = this.masses[j];

          const dx = massJ.x - massI.x;
          const dy = massJ.y - massI.y;
          const dz = massJ.z - massI.z;

          const distSq = dx * dx + dy * dy + dz * dz;

          const f = (this.g * massJ.m) / (distSq * Math.sqrt(distSq + this.softeningConstant));

          ax += dx * f;
          ay += dy * f;
          az += dz * f;
        }
      }

      massI.ax = ax;
      massI.ay = ay;
      massI.az = az;
    }
    return this;
  }
}

That looks much nicer! Let us create an instance of this class. To do so, we need to specify three constants, namely the gravitational constant (g), the time step of the simulation (dt) and the softening constant (softeningConstant). We also need to populate an array with mass objects. Once we have all of those, we can create an instance of the nBodyProblem class, which we will call the innerSolarSystem, since, well, our simulation is going to be of the inner solar system!

const g = 39.5;
const dt = 0.008; // 0.008 years is equal to 2.92 days
const softeningConstant = 0.15;

const masses = [
  {
    name: "Sun",
    // We use solar masses as the unit of mass, so the mass of the Sun is exactly 1
    m: 1,
    x: -1.50324727873647e-6,
    y: -3.93762725944737e-6,
    z: -4.86567877183925e-8,
    vx: 3.1669325898331e-5,
    vy: -6.85489559263319e-6,
    vz: -7.90076642683254e-7
  }
  // Mercury, Venus, Earth and Mars data can be found in the pen for this tutorial
];

const innerSolarSystem = new nBodyProblem({
  g,
  dt,
  masses: JSON.parse(JSON.stringify(masses)),
  softeningConstant
});

At this moment, you are probably looking at how I instantiated the nBodyProblem class and asking yourself what is up with the JSON parsing and stringifying nonsense. The reason I went about passing the data contained in the masses array to the nBodyProblem constructor in this way is that we want our users to be able to reset the simulation. If we passed the masses array itself to the constructor, and then set the masses property of the instance back to that same array when the user clicks the reset button, the simulation would not actually be reset; the state of the masses from the end of the previous run would still be there, and so would any masses the user had added. To solve this problem, we pass a clone of the masses array when we instantiate the nBodyProblem class or reset the simulation, so as to avoid modifying the original masses array, which we need to keep pristine and untouched. The easiest way of cloning it is to simply parse a stringified version of it.
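Here is the pitfall in miniature (a toy example of mine, not code from the simulator):

// Assigning an array only copies a reference, so mutations leak back
const original = [{ x: 0 }];
const byReference = original;
byReference[0].x = 42; // original[0].x is now 42, too

// Parsing a stringified version yields a deep, independent clone
const cloned = JSON.parse(JSON.stringify(original));
cloned[0].x = 7; // original[0].x is still 42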

Okay, moving on: to advance the simulation by one step, we simply call:

innerSolarSystem
  .updatePositionVectors()
  .updateAccelerationVectors()
  .updateVelocityVectors();

Congratulations. You are now one step closer to collecting a Nobel prize in physics!

Part 2: Creating a Visual Manifestation for our Masses

We could represent our masses with cute little circles created with the Canvas API’s arc method, but that would look kind of dull, and we would not get a sense of the trajectories of our masses through space and time, so let us write a JavaScript class that will be our template for how our masses manifest themselves visually. It will create a circle that leaves a predetermined number of smaller and faded circles where it has been before, which conveys a sense of motion and direction to the user. The farther you get from the current position of the mass, the smaller and more faded out the circles will become. In this way, we will have created a pretty looking motion trail for our masses.

The constructor accepts three arguments, namely the drawing context for our canvas element (ctx), the length of the motion trail (trailLength) that represents the number of previous positions of our mass that the trail will visualize and finally the radius (radius) of the circle that represents the current position of our mass. In the constructor we will also initialize an empty array that we will call positions, which will — quelle surprise — store the current and previous positions of the mass that are included in the motion trail.

At this point, our manifestation class looks like this:

class Manifestation {
  constructor(ctx, trailLength, radius) {
    this.ctx = ctx;

    this.trailLength = trailLength;
    this.radius = radius;

    this.positions = [];
  }
}

How do we go about populating the positions array with positions and making sure that we do not store more positions than the number specified by the trailLength property? The answer is that we add a method to our class that accepts the x and y coordinates of the mass’s position as arguments and stores them in an object in the array using the array push method, which appends an element to an array. This means that the current position of the mass will be the last element in the positions array. To make sure we do not store more positions than specified when we instantiated the class, we check if the length of the positions array is greater than the trailLength property. If it is, we use the array shift method to remove the first element of the positions array, which represents the oldest stored position.

class Manifestation {
  constructor() {
    /* The code for the constructor outlined above */
  }

  storePosition(x, y) {
    this.positions.push({ x, y });

    if (this.positions.length > this.trailLength) this.positions.shift();
  }
}

Okay, let us write a method that draws our motion trail. As you have probably guessed, it will accept two arguments, namely the x and y positions of the mass we are drawing the trail for. The first thing we need to do is to store the new position in the positions array and discard any superfluous positions stored in it. Then we iterate over the positions array and draw a circle for every position and voilà, we have ourselves a motion trail! But it does not look very nice, and I promised you that our trail would be pretty with circles that would become increasingly smaller and faded out according to how close they were to the current position of our mass in time.

What we need is, clearly, a scale factor whose size depends on how far away the position we are drawing is from the current position of our mass in time! An excellent way of obtaining an appropriate scale factor, for our intents and purposes, is to simply divide the index (i) of the circle being drawn by the length of the positions array. For example, if the number of elements allowed in the positions array is 25, element number 23 in that array will get a scale factor of 23 / 25, which gives us 0.92. Element number 5, on the other hand, will get a scale factor of 5 / 25, which gives us 0.2; the scale factor decreases the further we get from the current position of our mass, which is the relationship we want! Do note that we need a condition that makes sure that if the circle being drawn represents the current position, the scale factor is set to 1, as we do not want that circle to be either faded or smaller, for that matter. With all this in mind, let us write the code for the draw method of our Manifestation class.

class Manifestation {
  constructor() {
    /* The code for the constructor outlined above */
  }

  storePosition() {
    /* The code for the storePosition method discussed above */
  }

  draw(x, y) {
    this.storePosition(x, y);

    const positionsLen = this.positions.length;

    for (let i = 0; i < positionsLen; i++) {
      let transparency;
      let circleScaleFactor;

      const scaleFactor = i / positionsLen;

      if (i === positionsLen - 1) {
        transparency = 1;
        circleScaleFactor = 1;
      } else {
        transparency = scaleFactor / 2;
        circleScaleFactor = scaleFactor;
      }

      this.ctx.beginPath();
      this.ctx.arc(
        this.positions[i].x,
        this.positions[i].y,
        circleScaleFactor * this.radius,
        0,
        2 * Math.PI
      );
      this.ctx.fillStyle = `rgba(0, 12, 153, ${transparency})`;
      this.ctx.fill();
    }
  }
}

Part 3: Visualizing Our Simulation

Let us write some canvas boilerplate and bind it together with the gravitational n-body algorithm and the motion trails, so that we can get an animation of our inner solar system simulation up and running. As mentioned in the introduction to this tutorial, I do not discuss the Canvas API in any great depth, as this is not an introductory tutorial on the Canvas API, so if you find yourself looking rather bemused and/or perplexed, make haste and change this state of affairs by heading over to MDN’s documentation on the subject.

Before we continue, though, here is the HTML markup for our simulator:

<section id="controls-wrapper"> <label>Mass of Added Planet</label> <select id="masses-list"> <option value="0.000003003">Earth</option> <option value="0.0009543">Jupiter</option> <option value="1">Sun</option> <option value="0.1">Red Dwarf Star</option> </select> <button id="clear-masses">Reset</button>
</section>
<canvas id="canvas"></canvas>

Now, we turn to the interesting part: the JavaScript. We start by getting a reference to the canvas element and then we proceed by getting its drawing context. Next, we set the dimensions of our canvas element. When it comes to canvas animations on the web, I do not spare any expenses in terms of screen real estate, so let us set the width and height properties of the canvas element to the width and height of the browser window, respectively. You will notice that I have drawn on a peculiar syntax for setting the width and height of the canvas element in that I have declared, in one statement, that the width variable is equal to the width property of the canvas element which, in turn, is equal to the width of the window. Some developers frown upon the use of this syntax, but I find it to be semantically beautiful. If you do not feel the same way, you can deconstruct that statement into two statements. Generally speaking, do whatever you feel most comfortable with, or if you find yourself collaborating with others, what the team has agreed on.

const canvas = document.querySelector("#canvas");
const ctx = canvas.getContext("2d");

const width = (canvas.width = window.innerWidth);
const height = (canvas.height = window.innerHeight);

At this point, we are going to declare some constants for our animation. More specifically, there are three of them. The first is the radius (radius) of the circle, which represents the current position of a mass, in pixels. The second is the length of our motion trail (trailLength), which is the number of previous positions that it includes. Last, but not least, we have the scale (scale) constant, which represents the number of pixels per astronomical unit; Earth is one astronomical unit from the Sun, so if we did not introduce this scale factor, our inner solar system would look very claustrophobic, to say the least.

const scale = 70;
const radius = 4;
const trailLength = 35;

Let us now turn to the visual manifestations of the masses we are simulating. We have written a class that encapsulates their behavior, but how do we instantiate and work with these manifestations in our code? The most convenient and elegant way would be to populate every element of the masses array we are simulating with an instance of the Manifestation class, so let us write a simple method that iterates over these masses and does just that, which we then invoke.

const populateManifestations = masses => {
  masses.forEach(
    mass =>
      (mass["manifestation"] = new Manifestation(ctx, trailLength, radius))
  );
};

populateManifestations(innerSolarSystem.masses);

Our simulator is meant to be a playful affair, so it is only to be expected that users will spawn masses left and right and that, after a minute or so, the inner solar system will look like an unrecognizable cosmic mess, which is why I think it would be decent of us to provide them with the ability to reset the simulation. To achieve this goal, we start by attaching an event listener to the reset button, and then we write a callback for this event listener that sets the value of the masses property of the innerSolarSystem object to a clone of the masses array. As the clone does not include the manifestations of our masses, we call the populateManifestations method to make sure that our users have something to look at after having reset the simulation.

document.querySelector("#clear-masses").addEventListener(
  "click",
  () => {
    innerSolarSystem.masses = JSON.parse(JSON.stringify(masses));
    populateManifestations(innerSolarSystem.masses);
  },
  false
);

Okay, enough setting things up. Let us breathe some life into the inner solar system by writing a method that, with the help of the requestAnimationFrame API, will run 60 steps of our simulation a second and animate the results with motion trails and labels for the planets of the inner solar system and the Sun.

The first thing this method does is advance the inner solar system by one step, which it does by updating the position, acceleration and velocity vectors of its masses. Then we prepare the canvas element for the next animation cycle by clearing it of what was drawn in the preceding cycle, using the Canvas API’s clearRect method.

Next, we iterate over the masses array and invoke the draw method of each mass manifestation. Moreover, if the mass being drawn has a name, we draw it onto the canvas, so that the user can see where the original planets are after things have gone haywire. Looking at the code in the loop, you will probably notice that we are not setting, for example, the value of the mass’s x coordinate on the canvas to massI.x times scale, and that we are in fact setting it to the width of the viewport divided by two plus massI.x times scale. Why is this? The answer is that the origin (x = 0, y = 0) of the canvas coordinate system is set to the top left corner of the canvas element, so to center our simulation on the canvas where it is clearly visible to the user, we must include this offset.

After the loop, at the end of the animate method, we call requestAnimationFrame with the animate method as the callback, and then the whole process discussed above is repeated again, creating yet another frame — and run in quick succession, these frames have brought the inner solar system to life. But wait, we have missed something! If you were to run the code I have walked you through thus far, you would not see anything at all. Fortunately, all we have to do to change this sad state of affairs is to proverbially give the inner solar system a kick in its rear end (no, I am not going to fall for the temptation of inserting a Uranus joke here; grow up!) by invoking the animate method!

const animate = () => {
  innerSolarSystem
    .updatePositionVectors()
    .updateAccelerationVectors()
    .updateVelocityVectors();

  ctx.clearRect(0, 0, width, height);

  const massesLen = innerSolarSystem.masses.length;

  for (let i = 0; i < massesLen; i++) {
    const massI = innerSolarSystem.masses[i];

    const x = width / 2 + massI.x * scale;
    const y = height / 2 + massI.y * scale;

    massI.manifestation.draw(x, y);

    if (massI.name) {
      ctx.font = "14px Arial";
      ctx.fillText(massI.name, x + 12, y + 4);
      ctx.fill();
    }
  }

  requestAnimationFrame(animate);
};

animate();
Our visualization of Mercury, Venus, Earth and Mars going about their day-to-day business of running circles around the sun. Looks pretty neat.

Woah! We have now gotten to the point where our simulation is animated, with the masses represented by dainty little blue circles stalked by marvelous looking motion trails. That is pretty cool in itself, if you were to ask me; but I did promise to also show how you can enable the user to add masses of their own to the simulation with a little bit of mouse drag action, so we are not done quite yet!

Part 4: Adding Masses with the Mouse

The idea here is that the user should be able to press down on the mouse button and draw a line by dragging it; the line will start where the user pressed down and end at the current position of the mouse cursor. When the user releases the mouse button, a new mass is spawned at the position of the screen where the user pressed down the mouse button, and the direction the mass will move is determined by the direction of the line; the length of the line determines the velocity vectors of the mass. So, how do we go about implementing this? Let us run through what we need to do, step by step. The code for steps one through six goes above the animate method, while the code for step seven is a small addition to the animate method.

1. We need two variables that will store the x and y coordinates where the user pressed down the mouse button on the screen.

let mousePressX = 0;
let mousePressY = 0;

2. We need two variables that store the current x and y coordinates of the mouse cursor on the screen.

let currentMouseX = 0;
let currentMouseY = 0;

3. We need one variable that keeps track of whether the mouse is being dragged or not. The mouse is being dragged in the time that passes from when the user presses down the mouse button to the point where they release it.

let dragging = false;

4. We need to attach a mousedown listener to the canvas element that logs the x and y coordinates of where the mouse was pressed down and sets the dragging variable to true.

canvas.addEventListener( "mousedown", e => { mousePressX = e.clientX; mousePressY = e.clientY; dragging = true; }, false
);

5. We need to attach a mousemove listener to the canvas element that logs the current x and y coordinates of the mouse cursor.

canvas.addEventListener( "mousemove", e => { currentMouseX = e.clientX; currentMouseY = e.clientY; }, false
);

6. We need to attach a mouseup listener to the canvas element that sets the dragging variable to false, and pushes a new object representing a mass into the innerSolarSystem.masses array, where the x and y position vectors are the point where the user pressed down the mouse button divided by the value of the scale variable.

If we did not divide these vectors by the scale variable, the added masses would end up way out in the solar system, which is not what we want. The z position vector is set to zero and so is the z velocity vector. The x velocity vector is set to the x coordinate where the mouse was released minus the x coordinate where the mouse was pressed down, and then you divide this number by 35. I will be honest and admit that 35 is a magical number that just happens to give you reasonable velocities when you add masses with the mouse to the inner solar system. The same procedure applies for the y velocity vector. The mass (m) of the mass we are adding is set by the user with a select element that we have populated with the masses of some famous celestial objects in the HTML markup. Last, but not least, we populate the object representing our mass with an instance of the Manifestation class so that the user can see it on the screen!

const massesList = document.querySelector("#masses-list");

canvas.addEventListener(
  "mouseup",
  e => {
    const x = (mousePressX - width / 2) / scale;
    const y = (mousePressY - height / 2) / scale;
    const z = 0;
    const vx = (e.clientX - mousePressX) / 35;
    const vy = (e.clientY - mousePressY) / 35;
    const vz = 0;

    innerSolarSystem.masses.push({
      m: parseFloat(massesList.value),
      x,
      y,
      z,
      vx,
      vy,
      vz,
      manifestation: new Manifestation(ctx, trailLength, radius)
    });

    dragging = false;
  },
  false
);

7. In the animate function, after the loop where we draw our manifestations and before we call requestAnimationFrame, we check if the mouse is being dragged. If that is the case, we draw a line between the position where the mouse was pressed down and the mouse cursor’s current position.

const animate = () => {
  // Preceding code in the animate method down to and including the loop where we draw our mass manifestations

  if (dragging) {
    ctx.beginPath();
    ctx.moveTo(mousePressX, mousePressY);
    ctx.lineTo(currentMouseX, currentMouseY);
    ctx.strokeStyle = "red";
    ctx.stroke();
  }

  requestAnimationFrame(animate);
};
The inner solar system is about to get a lot more interesting — we can now add masses to our simulation!

Adding masses to the simulation with your mouse is no more difficult than that! Now, grab your mouse and unleash some mayhem on the inner solar system.

Part 5: Fencing off the Inner Solar System

As you will probably have noticed after adding some masses to the simulation, celestial objects are very shenanigan-prone in that they have a tendency to dance their way out of the viewport, especially if the added masses are very massive or have too high a velocity, which is kind of annoying. The natural solution to this problem is, of course, to fence off the inner solar system so that if a mass reaches the edge of the viewport, it will bounce back in! Implementing this functionality sounds like quite a project, but fortunately it is a rather simple affair. At the end of the loop where we iterate over the masses and draw them in the animate method, we insert two conditions: one that checks if our mass is outside the bounds of the viewport on the x axis, and another that does the same check for the y axis. If the position of our mass is outside of the viewport on the x axis, we reverse its x velocity vector so that it bounces back into the viewport, and the same logic applies for the y axis. With these two conditions, the animate method will look like so:

const animate = () => {
  // Advance the simulation by one step; clear the canvas

  for (let i = 0; i < massesLen; i++) {
    // Preceding loop code

    if (x < radius || x > width - radius) massI.vx = -massI.vx;

    if (y < radius || y > height - radius) massI.vy = -massI.vy;
  }

  requestAnimationFrame(animate);
};
Absolute madness! Venus, you silly planet, what are you doing out there?! You are supposed to be orbiting the Sun!

Ping, pong! It is almost as though we are playing a game of cosmic billiards with all those masses bouncing off the fence that we have built for the inner solar system!

Concluding Remarks

People have a tendency to think of orbital mechanics — which is what we have played around with in this tutorial — as something that is beyond the understanding of mere mortals such as yours truly. Truth, though, is that orbital mechanics follows a very simple and elegant set of rules, as this tutorial is a testament to. With a little bit of JavaScript and high-school mathematics and physics, we have reconstructed the inner solar system to a reasonable degree of accuracy, and gone beyond that to make things a little bit more spicy and, therefore, more interesting. With this simulator, you can answer silly what-if questions along the lines of, “What would happen if I flung a star with the mass of the Sun into our inner solar system?” or develop a feeling for Kepler’s laws of planetary motion by, for example, observing the relationship between the distance of a mass from the Sun and its velocity.

I sure had fun writing this tutorial, and it is my sincere hope that you had as much fun reading it!


Creating your own meme generator

Almost every time a new meme pops up in my Twitter feed, I think of a witty version to create. I’m not alone in this. Memes are often a way to acknowledge a shared experience or idea. In a variation of the “Is this a pigeon” meme that has been making the rounds online, designer Daryl Ginn joked about the elementary nature of most applications that say they use artificial intelligence.

Several people replied to his tweet saying something along the lines of “replace this with this.” Daryl’s version got them thinking about other possible variations. Platforms like imgFlip exist to make meme generation fast and easy. However, there is only so much customization they can allow. For many memes, creating new versions can only be done by people with Photoshop knowledge. But it doesn’t have to be so! For memes that require more than Impact text slapped on an image, a meme generator can be created using the HTML Canvas API. In this tutorial, we’re going to make a generator for the #saltbae meme.

But first…

Let’s look at some fun interactive meme examples!

The website pablo.life allows you to create your own Kanye West TLOP album cover by changing the text and image.

This is one of my favorites:

The digital agency R/GA created the Straight Outta Somewhere campaign where users “show the world where they’re from by uploading their own photo and filling in the blank after ‘Straight Outta ____.'” Users can download and share the meme.

Developer Isaac Hepworth created the Trump Executive Order Generator.

Spotify collaborated with Migos to create a range of downloadable Valentine’s Day cards that can be customized by changing names.

Let’s build our own meme generator!

Now, the tutorial. In a popular version of the #saltbae meme, instead of salt, Salt Bae (whose name is Nusret Gökçe) sprinkles something other than salt.

Loading an image

The first thing we have to do is load the original image onto the canvas. You can load an image one of two ways: from a URL or from one that exists in the DOM using the <img> tag but is hidden.

Here’s how we do it with a hidden image tag:

<canvas id="canvas" width="1024" height="1024"> Canvas requires a browser that supports HTML5.
</canvas>
<img crossOrigin="Anonymous" id="salt-bae" src="http://res.cloudinary.com/dlwnmz6lr/image/upload/v1520011253/170203-salt-bae-mn-1530_060e5898cdcf7b58f97126d3cfbfdf71.nbcnews-ux-2880-1000_kllh1d.jpg"/>

I’m hosting the image on Cloudinary and added the crossOrigin attribute so we don’t run into any CORS issues.

function drawImage(text) {
  const canvas = document.getElementById('canvas');
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  const img = document.getElementById('salt-bae');
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
}

window.onload = function() {
  drawImage();
}

We’re using the canvas drawImage function to draw the image to the canvas. It can be used to draw videos or parts of an image as well. The method provides different ways to do this. We’re drawing the image by indicating the position and the width and height of the image.

ctx.drawImage(img, x, y, width, height);
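For reference, drawImage comes in three forms in the Canvas API; the parameter names below follow MDN’s conventions (s for the source rectangle, d for the destination):

// Draw at (dx, dy) at the image's natural size
ctx.drawImage(img, dx, dy);

// Scale the image into a dWidth × dHeight box (the form used above)
ctx.drawImage(img, dx, dy, dWidth, dHeight);

// Crop a source rectangle out of the image, then scale it into the destination rectangle
ctx.drawImage(img, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);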

Alternatively, we could load the image from a URL:

function loadAndDrawImage(src) {
  // Create an image object. (Not part of the DOM)
  const image = new Image();

  // After the image has loaded, draw it to the canvas
  image.onload = () => {
    // draw image
  };

  // Then set the source of the image that we want to load
  image.src = src;
}

Now we load in an image to replace the salt Salt Bae is throwing. First, we load the image using one of the techniques I mentioned earlier, then we draw it to the screen like we did with the Salt Bae base image.

function getRandomInt(min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  // The maximum is exclusive and the minimum is inclusive
  return Math.floor(Math.random() * (max - min)) + min;
}

function drawBackgroundImage(canvas, ctx) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  const img = document.getElementById('salt-bae');
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
}

function getRandomImageSize(min, max, width, height) {
  const ratio = width / height; // Used for aspect ratio
  width = getRandomInt(min, max);
  height = width / ratio;
  return { width, height };
}

function drawSalt(src, canvas, ctx) {
  // Create an image object. (Not part of the DOM)
  const image = new Image();
  image.src = src;

  // After the image has loaded, draw it to the canvas
  image.onload = function() {
    for (let i = 0; i < 8; i++) {
      const randomX = getRandomInt(10, canvas.width / 2);
      const randomY = getRandomInt(canvas.height - 300, canvas.height);
      const dimensions = getRandomImageSize(20, 100, image.width, image.height);
      ctx.drawImage(image, randomX, randomY, dimensions.width, dimensions.height);
    }
  }

  return image;
}

onload = function() {
  const canvas = document.getElementById('canvas');
  const ctx = canvas.getContext('2d');
  drawBackgroundImage(canvas, ctx);
  const saltImage = drawSalt('http://res.cloudinary.com/dlwnmz6lr/image/upload/v1526005050/chadwick-boseman-inspired-workout-program-wide_phczey.webp', canvas, ctx);
};

Now we can let users sprinkle something other than salt.

Uploading an image

We’re going to add a button that triggers an image upload and includes an event listener to listen for a change.

<input type="file" class="upload-image">`
function updateImage(file, img){ img.src = URL.createObjectURL(file);
} onload = function() { const canvas = document.getElementById('canvas'); const ctx = canvas.getContext('2d'); drawBackgroundImage(canvas, ctx); const saltImage = drawSalt('http://res.cloudinary.com/dlwnmz6lr/image/upload/v1526005050/chadwick-boseman-inspired-workout-program-wide_phczey.webp', canvas, ctx); const input = document.querySelector("input[type='file']"); /* * Add event listener to the input to listen for changes to its selected * value, i.e when files are selected */ input.addEventListener('change', function() { drawBackgroundImage(canvas, ctx); // clear canvas and re-draw updateImage(this.files[0], saltImage); });
};

URL.createObjectURL() creates a DOMString containing a URL representing the object given in the parameter which, in this case, is the uploaded file.
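One bit of housekeeping worth adding (my addition, not part of the original code): object URLs hold on to memory until the document is unloaded, so they can be released once the image has loaded. addEventListener is used here so we do not clobber the onload handler that drawSalt assigned:

// Release the object URL once the browser has decoded the image
img.addEventListener('load', function() {
  URL.revokeObjectURL(this.src);
});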

We can even up the game a little bit, like providing some default options. I’ve added a few emojis you can play around with as a starting point.

Downloading the final image

Once the new meme has been generated, we want users to be able to download and share it. The typical way of doing this is by opening the canvas in a new tab using the toDataURL method, but the user would have to right-click to save the image from that tab, and that’s not very convenient.

So, instead, we can take advantage of the download attribute added to links in HTML5. We create a link that, on click, sets the download attribute to the result of canvas.toDataURL. The toDataURL() method “returns a data URI containing a representation of the image in the format specified.”
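As an aside, toDataURL accepts an optional MIME type and, for lossy formats, a quality argument (both standard Canvas API):

canvas.toDataURL(); // defaults to "image/png"
canvas.toDataURL('image/jpeg', 0.9); // JPEG encoded at 90% quality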

function addLink() {
  var link = document.createElement('a');
  link.innerHTML = 'Download!';
  link.addEventListener('click', function(e) {
    link.href = canvas.toDataURL();
    link.download = "salt-bae.png";
  }, false);
  link.className = "instruction";
  document.querySelectorAll('section')[1].appendChild(link);
}

Well that’s it! Our meme generator is done.

Some cool links

  • Darius Kazemi has been making a bunch of Twitter bots that generate memes.
  • Vox Media has a meme generator called meme that’s open source.

Meme away!


Manipulating Pixels Using Canvas

Modern browsers support playing video via the <video> element. Most browsers also have access to webcams via the MediaDevices.getUserMedia() API. But even with those two things combined, we can’t really access and manipulate those pixels directly.

Fortunately, browsers have a Canvas API that allows us to draw graphics using JavaScript. We can actually draw images to the <canvas> from the video itself, which gives us the ability to manipulate and play with those pixels.

Everything you learn here about how to manipulate pixels will give you a foundation to work with images and videos of any kind or any source, not just canvas.

Adding an image to canvas

Before we start playing with video, let’s look at adding an image to canvas.

<img id="SourceImage" src="image.jpg">
<div class="video-container"> <canvas id="Canvas" class="video"></canvas>
</div>

We created an image element that represents the image that is going to be drawn on the canvas. Alternatively we could use the Image object in JavaScript.

var canvas;
var context;

function init() {
  var image = document.getElementById('SourceImage');
  canvas = document.getElementById('Canvas');
  context = canvas.getContext('2d');

  drawImage(image);

  // Or
  // var image = new Image();
  // image.onload = function () {
  //   drawImage(image);
  // }
  // image.src = 'image.jpg';
}

function drawImage(image) {
  // Set the canvas to the same width and height as the image
  canvas.width = image.width;
  canvas.height = image.height;

  context.drawImage(image, 0, 0);
}

window.addEventListener('load', init);

The code above draws the whole image onto the canvas.

See the Pen Paint image on canvas by Welling Guzman (@wellingguzman) on CodePen.

Now we can start playing with those pixels!

Updating the image data

The image data on the canvas allows us to manipulate and change the pixels.

Calling getImageData on the drawing context returns an ImageData object with three properties — width, height and data — all of which represent those things based on the original image. All these properties are read-only. The one we care about is data, a one-dimensional array represented by a Uint8ClampedArray object, containing the data of each pixel in RGBA format.

Although the data property is read-only, that doesn’t mean we cannot change its values. It means we cannot assign another array to the property.

// Get the canvas image data
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);

imageData.data = new Uint8ClampedArray(); // WRONG
imageData.data[1] = 0; // CORRECT

What values does the Uint8ClampedArray object represent, you may ask. Here is the description from MDN:

The Uint8ClampedArray typed array represents an array of 8-bit unsigned integers clamped to 0-255; if you specified a value that is out of the range of [0,255], 0 or 255 will be set instead; if you specify a non-integer, the nearest integer will be set. The contents are initialized to 0. Once established, you can reference elements in the array using the object’s methods, or using standard array index syntax (that is, using bracket notation)

In short, this array stores values ranging from 0 to 255 in each position, making it a perfect fit for the RGBA format, where each channel is represented by a value from 0 to 255.
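You can verify the clamping behavior described above in a couple of lines:

var a = new Uint8ClampedArray(3);
a[0] = 300;  // out of range high: clamped to 255
a[1] = -20;  // out of range low: clamped to 0
a[2] = 25.7; // non-integer: rounded to the nearest integer, 26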

RGBA colors

Colors can be represented in RGBA format, which is a combination of Red, Green and Blue. The A represents the alpha value, which is the opacity of the color.

Each position in the array represents a color (pixel) channel value.

  • 1st position is the Red value
  • 2nd position is the Green value
  • 3rd position is the Blue value
  • 4th position is the Alpha value
  • 5th position is the next pixel Red value
  • 6th position is the next pixel Green value
  • 7th position is the next pixel Blue value
  • 8th position is the next pixel Alpha value
  • And so on…

If you have a 2×2 image, then we have a 16-position array (4 pixels × 4 values each).

The 2×2 image zoomed up close

The array will be represented as shown below:

// The four pixels of the 2×2 image, one per line:
[
  255, 0, 0, 255,     // RED
  0, 255, 0, 255,     // GREEN
  0, 0, 255, 255,     // BLUE
  255, 255, 255, 255  // WHITE
]
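One practical consequence of this layout is that the array index of any pixel’s Red value can be computed from its (x, y) coordinates. The helper below is my own addition, not part of the Canvas API:

// Each row holds width * 4 values; each pixel holds 4
function pixelIndex(x, y, width) {
  return (y * width + x) * 4; // add 1, 2 or 3 for Green, Blue or Alpha
}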

Changing the pixel data

One of the quickest things we can do is set all pixels to white by changing all RGBA values to 255.

// Use a button to trigger the "effect"
var button = document.getElementById('Button');

button.addEventListener('click', onClick);

function changeToWhite(data) {
  for (var i = 0; i < data.length; i++) {
    data[i] = 255;
  }
}

function onClick() {
  var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
  changeToWhite(imageData.data);

  // Update the canvas with the new data
  context.putImageData(imageData, 0, 0);
}

The data is passed by reference, which means any modification we make to it will change the value of the argument passed.

Inverting colors

A nice effect that doesn’t require much calculation is inverting the colors of an image.

Inverting a color value can be done using the XOR operator (^) or the formula 255 - value (where value must be between 0 and 255).

function invertColors(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] = data[i] ^ 255; // Invert Red
    data[i+1] = data[i+1] ^ 255; // Invert Green
    data[i+2] = data[i+2] ^ 255; // Invert Blue
  }
}

function onClick() {
  var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
  invertColors(imageData.data);

  // Update the canvas with the new data
  context.putImageData(imageData, 0, 0);
}

We are incrementing the loop by 4 instead of 1 as we did before, so we can move from pixel to pixel, with each pixel filling four elements in the array.

The alpha value has no effect on inverting colors, so we skip it.

Brightness and contrast

Adjusting the brightness of an image can be done using the following formula: newValue = currentValue + 255 * (brightness / 100).

  • brightness must be between -100 and 100
  • currentValue is the current light value of either Red, Green or Blue.
  • newValue is the resulting color value after the brightness is applied

Adjusting the contrast of an image can be done with this formula:

factor = (259 * (contrast + 255)) / (255 * (259 - contrast))
color = GetPixelColor(x, y)
newRed = Truncate(factor * (Red(color) - 128) + 128)
newGreen = Truncate(factor * (Green(color) - 128) + 128)
newBlue = Truncate(factor * (Blue(color) - 128) + 128)

The main calculation is getting the contrast factor that will be applied to each color value. Truncate is a function that makes sure the value stays between 0 and 255.

Let’s write these functions into JavaScript:

function applyBrightness(data, brightness) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] += 255 * (brightness / 100);
    data[i+1] += 255 * (brightness / 100);
    data[i+2] += 255 * (brightness / 100);
  }
}

function truncateColor(value) {
  if (value < 0) {
    value = 0;
  } else if (value > 255) {
    value = 255;
  }
  return value;
}

function applyContrast(data, contrast) {
  var factor = (259.0 * (contrast + 255.0)) / (255.0 * (259.0 - contrast));

  for (var i = 0; i < data.length; i += 4) {
    data[i] = truncateColor(factor * (data[i] - 128.0) + 128.0);
    data[i+1] = truncateColor(factor * (data[i+1] - 128.0) + 128.0);
    data[i+2] = truncateColor(factor * (data[i+2] - 128.0) + 128.0);
  }
}

In this case you don’t strictly need the truncateColor function, as Uint8ClampedArray will clamp these values for you, but for the sake of translating the algorithm we added it in.

One thing to keep in mind is that, if you apply a brightness or contrast, there is no way back to the previous state, as the image data is overwritten. The original image data must be stored separately for reference if we want to reset to the original state. Keeping the image variable accessible to other functions is helpful, as you can use it to redraw the canvas with the original image.

var image = document.getElementById('SourceImage');

function redrawImage() {
  context.drawImage(image, 0, 0);
}

Using videos

To make it work with videos, we are going to take our initial image script and HTML code and make some small changes.

HTML

Swap the image element for a video element by replacing this line:

<img id="SourceImage" src="image.jpg">

…with this:

<video id="SourceVideo" src="video.mp4"></video>

JavaScript

Replace this line:

var image = document.getElementById('SourceImage');

…with this:

var video = document.getElementById('SourceVideo');

To start working with the video, we have to wait until the video can be played.

video.addEventListener('canplay', function () {
  // Set the canvas to the same width and height as the video
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  // Play the video
  video.play();

  // Start drawing the frames
  drawFrame(video);
});

The canplay event is triggered when enough data is available for the media to be played, at least for a couple of frames.

We cannot really see the video playing on the canvas because we are only displaying the first frame. We must execute drawFrame every n milliseconds to keep up with the video frame rate.

Inside drawFrame we call drawFrame again every 10ms.

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  setTimeout(function () {
    drawFrame(video);
  }, 10);
}

After we execute drawFrame, we create a loop that executes drawFrame every 10ms — enough time to keep the video in sync on the canvas.
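As an aside, a timer is not the only way to pace the loop. Here is a variation of drawFrame (my own, not from the original code) that lets the browser schedule redraws with requestAnimationFrame, which syncs with the display’s refresh rate and pauses in background tabs:

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  // The browser calls back before its next repaint, typically 60 times a second
  requestAnimationFrame(function () {
    drawFrame(video);
  });
}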

Adding the effect to the video

We can use the same function we created before for inverting colors:

function invertColors(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] = data[i] ^ 255; // Invert Red
    data[i+1] = data[i+1] ^ 255; // Invert Green
    data[i+2] = data[i+2] ^ 255; // Invert Blue
  }
}

And add it into the drawFrame function:

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
  invertColors(imageData.data);
  context.putImageData(imageData, 0, 0);

  setTimeout(function () {
    drawFrame(video);
  }, 10);
}

We can add a button and toggle the effects:

function drawFrame(video) {
  context.drawImage(video, 0, 0);

  if (applyEffect) {
    var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
    invertColors(imageData.data);
    context.putImageData(imageData, 0, 0);
  }

  setTimeout(function () {
    drawFrame(video);
  }, 10);
}

Using the camera

We are going to keep the same code we used for the video. The only difference is that we are going to change the video source from a file to the camera stream using MediaDevices.getUserMedia.

navigator.mediaDevices.getUserMedia is the newer API that deprecates the old navigator.getUserMedia(). The old version still has browser support, and some browsers do not yet support the new one, so we may have to resort to a polyfill to make sure the browser supports one of them.

First, remove the src attribute from the video element:

<video id="SourceVideo"><code></pre> <pre rel="JavaScript"><code class="language-javascript">// Set the source of the video to the camera stream
function initCamera(stream) { video.src = window.URL.createObjectURL(stream);
} if (navigator.mediaDevices.getUserMedia) { navigator.mediaDevices.getUserMedia({video: true, audio: false}) .then(initCamera) .catch(console.error) );
}

Live Demo

Effects

Everything we’ve covered so far is the foundation we need in order to create different effects on a video or image. There are a lot of different effects that we can use by transforming each color independently.

Grayscale

Converting a color to grayscale can be done in different ways using different formulas/techniques. To avoid getting too deep into the subject, I will show you five formulas, based on the GIMP desaturate tool and Luma:

Gray = 0.21R + 0.72G + 0.07B // Luminosity
Gray = (R + G + B) ÷ 3 // Average Brightness
Gray = 0.299R + 0.587G + 0.114B // rec601 standard
Gray = 0.2126R + 0.7152G + 0.0722B // ITU-R BT.709 standard
Gray = 0.2627R + 0.6780G + 0.0593B // ITU-R BT.2100 standard

What we want to find using these formulas is the brightness intensity level of each pixel color. The value will range from 0 (black) to 255 (white). These values will create a grayscale (black and white) effect.

This means that the brightest color will be closest to 255 and the darkest color closest to 0.
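To make this concrete, here is a sketch of a grayscale pass in the same style as the earlier effects, using the first (luminosity) formula; the function itself is my addition, not code from the original:

function applyGrayscale(data) {
  for (var i = 0; i < data.length; i += 4) {
    // Brightness intensity of the pixel, per the luminosity formula
    var gray = 0.21 * data[i] + 0.72 * data[i+1] + 0.07 * data[i+2];
    data[i] = data[i+1] = data[i+2] = gray;
  }
}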

Live Demo

Duotones

The difference between the duotone effect and the grayscale effect is the two colors being used. In grayscale you have a gradient from black to white, while in duotone you can have a gradient from any color to any other color, blue to pink as an example.

Using the intensity value from the grayscale step, we can map each pixel to a color from the gradient values.

We need to create a gradient from ColorA to ColorB.

function createGradient(colorA, colorB) {
  // Values of the gradient from colorA to colorB
  var gradient = [];

  // The maximum color value is 255
  var maxValue = 255;

  // Convert the hex color values to RGB objects
  var from = getRGBColor(colorA);
  var to = getRGBColor(colorB);

  // Creates 256 colors from Color A to Color B
  for (var i = 0; i <= maxValue; i++) {
    // intensityB goes from 0 to 255, while intensityA goes from 255 to 0.
    // As intensityA decreases and intensityB increases, Color A starts solid
    // and slowly transforms into Color B. Put another way, the transparency
    // of Color A increases while the transparency of Color B decreases.
    var intensityB = i;
    var intensityA = maxValue - intensityB;

    // The formula below combines the two colors based on their intensity:
    // (intensityA * ColorA + intensityB * ColorB) / maxValue
    gradient[i] = {
      r: (intensityA * from.r + intensityB * to.r) / maxValue,
      g: (intensityA * from.g + intensityB * to.g) / maxValue,
      b: (intensityA * from.b + intensityB * to.b) / maxValue
    };
  }

  return gradient;
}

// Helper function to convert 6-digit hex values to an RGB color object
function getRGBColor(hex) {
  var colorValue;

  if (hex[0] === '#') {
    hex = hex.substr(1);
  }

  colorValue = parseInt(hex, 16);

  return {
    r: colorValue >> 16,
    g: (colorValue >> 8) & 255,
    b: colorValue & 255
  };
}

In short, we are creating an array of color values that starts at Color A at full intensity and gradually shifts to Color B at full intensity.

From #0096ff to #ff00f0
Zoomed representation of the color transition
var gradients = [
  {r: 32, g: 144, b: 254},
  {r: 41, g: 125, b: 253},
  {r: 65, g: 112, b: 251},
  {r: 91, g: 96, b: 250},
  {r: 118, g: 81, b: 248},
  {r: 145, g: 65, b: 246},
  {r: 172, g: 49, b: 245},
  {r: 197, g: 34, b: 244},
  {r: 220, g: 21, b: 242},
  {r: 241, g: 22, b: 242},
];

Above is an example of a gradient of 10 color values from #0096ff to #ff00f0.

Grayscale representation of the color transition

Now that we have the grayscale representation of the image, we can use it to map each pixel to the duotone gradient values.

The duotone gradient has 256 colors, and the grayscale also has 256 values ranging from black (0) to white (255). That means each grayscale color value maps directly to a gradient element index.

var gradientColors = createGradient('#0096ff', '#ff00f0');
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
applyGradient(imageData.data);
// render the result back to the canvas
context.putImageData(imageData, 0, 0);

function applyGradient(data) {
  for (var i = 0; i < data.length; i += 4) {
    // Get each channel's color value (this assumes the grayscale effect has
    // already been applied, so all three channels hold the same intensity)
    var redValue = data[i];
    var greenValue = data[i + 1];
    var blueValue = data[i + 2];

    // Map the color values to a gradient index, replacing the grayscale
    // color value with a color from the duotone gradient
    data[i] = gradientColors[redValue].r;
    data[i + 1] = gradientColors[greenValue].g;
    data[i + 2] = gradientColors[blueValue].b;
    data[i + 3] = 255;
  }
}

Live Demo

Conclusion

This topic can be explored in much more depth and with many more effects. The homework for you is to find different algorithms you can apply to these skeleton examples.

Knowing how the pixels are structured on a canvas will allow you to create an unlimited number of effects, such as sepia, color blending, a green screen effect, image flickering/glitching, etc.
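To get you started, here is one possible take on the sepia effect, a sketch using the commonly cited sepia matrix coefficients and the same imageData plumbing as before; applySepia is a hypothetical helper, not part of any API.

// Hedged sketch: a classic sepia approximation.
// Each output channel is a weighted mix of the input channels.
function applySepia(data) {
  for (var i = 0; i < data.length; i += 4) {
    var r = data[i];
    var g = data[i + 1];
    var b = data[i + 2];

    // clamp so bright pixels don't overflow past 255
    data[i]     = Math.min(255, 0.393 * r + 0.769 * g + 0.189 * b);
    data[i + 1] = Math.min(255, 0.349 * r + 0.686 * g + 0.168 * b);
    data[i + 2] = Math.min(255, 0.272 * r + 0.534 * g + 0.131 * b);
  }
}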

You can even create effects on the fly without using an image or a video:

The post Manipulating Pixels Using Canvas appeared first on CSS-Tricks.

Animate Images and Videos with curtains.js

While browsing the latest award-winning websites, you may notice a lot of fancy image distortion animations or neat 3D effects. Most of them are created with WebGL, an API allowing GPU-accelerated image processing effects and animations. They also tend to use libraries built on top of WebGL such as three.js or pixi.js. Both are very powerful tools for creating 3D and 2D scenes, respectively.

But, you should keep in mind that those libraries were not originally designed to create slideshows or animate DOM elements. There is a library designed just for that, though, and we’re going to cover how to use it here in this post.

WebGL, CSS Positioning, and Responsiveness

Say you’re working with a library like three.js or pixi.js and you want to use it to create interactions, like mouseover and scroll events on elements. You might run into trouble! How do you position your WebGL elements relative to the document and other DOM elements? How would you handle responsiveness?

This is exactly what I had in mind when creating curtains.js.

Curtains.js allows you to create planes containing images and videos (in WebGL we will call them textures) that act like plain HTML elements, with position and size defined by CSS rules. But these planes can be enhanced with the endless possibilities of WebGL and shaders.

Wait, shaders?

Shaders are small programs, written in GLSL, that tell your graphics card how to draw things: the vertex shader positions the vertices of the geometry, while the fragment shader decides the color of each rendered pixel. If you want to learn more about them, a great place to start is The Book of Shaders.

Now that you get the idea, let’s create our first plane!

Setup of a basic plane

To display our first plane, we will need a bit of HTML, CSS, and some JavaScript to create the plane. Then our shaders will animate it.

HTML

The HTML will be really simple here. We will create a <div> that will hold our canvas, and a <div> that will hold our image.

<body>
  <!-- div that will hold our WebGL canvas -->
  <div id="canvas"></div>

  <!-- div used to create our plane -->
  <div class="plane">
    <!-- image that will be used as a texture by our plane -->
    <img src="path/to/my-image.jpg" />
  </div>
</body>

CSS

We will use CSS to make sure the <div> that wraps the canvas is bigger than our plane, and we'll apply any size to the plane div. (Our WebGL plane will have the exact same size and position as this div.)

body {
  /* make the body fit our viewport */
  position: relative;
  width: 100%;
  height: 100vh;
  margin: 0;
  /* hide scrollbars */
  overflow: hidden;
}

#canvas {
  /* make the canvas wrapper fit the document */
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}

.plane {
  /* define the size of your plane */
  width: 80%;
  max-width: 1400px;
  height: 80vh;
  position: relative;
  top: 10vh;
  margin: 0 auto;
}

.plane img {
  /* hide the img element */
  display: none;
}

JavaScript

There’s a bit more work in the JavaScript. We need to instantiate our WebGL context, create a plane with uniform parameters, and use it.

window.onload = function() {
  // pass the id of the div that will wrap the canvas to set up our WebGL context and append the canvas to our wrapper
  var webGLCurtain = new Curtains("canvas");

  // get our plane element
  var planeElement = document.getElementsByClassName("plane")[0];

  // set our initial parameters (basic uniforms)
  var params = {
    vertexShaderID: "plane-vs", // our vertex shader ID
    fragmentShaderID: "plane-fs", // our fragment shader ID
    uniforms: {
      time: {
        name: "uTime", // uniform name that will be passed to our shaders
        type: "1f", // this means our uniform is a float
        value: 0,
      },
    }
  }

  // create our plane mesh
  var plane = webGLCurtain.addPlane(planeElement, params);

  // use the onRender method of our plane fired at each requestAnimationFrame call
  plane.onRender(function() {
    plane.uniforms.time.value++; // update our time uniform value
  });
}

Shaders

We need to write the vertex shader. It won’t be doing much except positioning our plane based on the model view and projection matrices and passing the varyings to the fragment shader:

<!-- vertex shader -->
<script id="plane-vs" type="x-shader/x-vertex">
  #ifdef GL_ES
  precision mediump float;
  #endif

  // those are the mandatory attributes that the lib sets
  attribute vec3 aVertexPosition;
  attribute vec2 aTextureCoord;

  // those are mandatory uniforms that the lib sets and that contain our model view and projection matrix
  uniform mat4 uMVMatrix;
  uniform mat4 uPMatrix;

  // if you want to pass your vertex and texture coords to the fragment shader
  varying vec3 vVertexPosition;
  varying vec2 vTextureCoord;

  void main() {
    // get the vertex position from its attribute
    vec3 vertexPosition = aVertexPosition;

    // set its position based on projection and model view matrix
    gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0);

    // set the varyings
    vTextureCoord = aTextureCoord;
    vVertexPosition = vertexPosition;
  }
</script>

Now our fragment shader. This is where we will add a little displacement effect based on our time uniform and the texture coordinates.

<!-- fragment shader -->
<script id="plane-fs" type="x-shader/x-fragment">
  #ifdef GL_ES
  precision mediump float;
  #endif

  // get our varyings
  varying vec3 vVertexPosition;
  varying vec2 vTextureCoord;

  // the uniform we declared inside our javascript
  uniform float uTime;

  // our texture sampler (this is the lib default name, but it could be changed)
  uniform sampler2D uSampler0;

  void main() {
    // get our texture coords
    vec2 textureCoord = vTextureCoord;

    // displace our pixels along both axis based on our time uniform and texture UVs
    // this will create a kind of water surface effect
    // try to comment a line or change the constants to see how it changes the effect
    // reminder: texture coords are ranging from 0.0 to 1.0 on both axis
    const float PI = 3.141592;

    textureCoord.x += (
      sin(textureCoord.x * 10.0 + ((uTime * (PI / 3.0)) * 0.031))
      + sin(textureCoord.y * 10.0 + ((uTime * (PI / 2.489)) * 0.017))
    ) * 0.0075;

    textureCoord.y += (
      sin(textureCoord.y * 20.0 + ((uTime * (PI / 2.023)) * 0.023))
      + sin(textureCoord.x * 20.0 + ((uTime * (PI / 3.1254)) * 0.037))
    ) * 0.0125;

    gl_FragColor = texture2D(uSampler0, textureCoord);
  }
</script>

Et voilà! You’re all done, and if everything went well, you should be seeing something like this.

See the Pen curtains.js basic plane by Martin Laxenaire (@martinlaxenaire) on CodePen.

Adding 3D and interactions

Alright, that’s pretty cool so far, but we started this post talking about 3D and interactions, so let’s look at how we could add those in.

About vertices

To add a 3D effect we would have to change the plane vertices position inside the vertex shader. However, in our first example, we did not specify how many vertices our plane should have, so it was created with a default geometry containing six vertices forming two triangles:

In order to get decent 3D animations, we would need more triangles, thus more vertices:

This plane has five segments along its width and five segments along its height. As a result, we have 50 triangles and 150 total vertices.
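If you want to double-check those numbers, the arithmetic is simple: each quad formed by the segments is split into two triangles of three vertices each. Here is a tiny sketch (planeVertexCount is just an illustrative helper, not part of the library):

// each of the widthSegments * heightSegments quads yields
// 2 triangles, i.e. 6 vertices
function planeVertexCount(widthSegments, heightSegments) {
  return widthSegments * heightSegments * 6;
}

console.log(planeVertexCount(5, 5));   // 150 vertices (50 triangles)
console.log(planeVertexCount(20, 20)); // 2400 vertices, as in the refactored code below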

Refactoring our JavaScript

Fortunately, it is easy to specify our plane definition, as it can be set inside our initial parameters.

We are also going to listen to mouse position to add a bit of interaction. To do it properly, we will have to wait for the plane to be ready, convert our mouse document coordinates to our WebGL clip space coordinates and send them to the shaders as a uniform.
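For reference, converting document coordinates to clip space conceptually looks like the sketch below. This is only an illustration of the math, since curtains.js gives us a plane.mouseToPlaneCoords() method that handles it for us; documentToClipSpace is a hypothetical helper.

// illustration only: map mouse coordinates inside an element's bounding rect
// to WebGL clip space, where both axes range from -1 to 1 and Y points up
function documentToClipSpace(mouseX, mouseY, boundingRect) {
  return {
    x: ((mouseX - boundingRect.left) / boundingRect.width) * 2 - 1,
    y: 1 - ((mouseY - boundingRect.top) / boundingRect.height) * 2
  };
}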

// we are using window onload event here but this is not mandatory
window.onload = function() {
  // track the mouse position to send it to the shaders
  var mousePosition = {
    x: 0,
    y: 0,
  };

  // pass the id of the div that will wrap the canvas to set up our WebGL context and append the canvas to our wrapper
  var webGLCurtain = new Curtains("canvas");

  // get our plane element
  var planeElement = document.getElementsByClassName("plane")[0];

  // set our initial parameters (basic uniforms)
  var params = {
    vertexShaderID: "plane-vs", // our vertex shader ID
    fragmentShaderID: "plane-fs", // our fragment shader ID
    widthSegments: 20,
    heightSegments: 20, // we now have 20*20*6 = 2400 vertices!
    uniforms: {
      time: {
        name: "uTime", // uniform name that will be passed to our shaders
        type: "1f", // this means our uniform is a float
        value: 0,
      },
      mousePosition: { // our mouse position
        name: "uMousePosition",
        type: "2f", // notice this is a length 2 array of floats
        value: [mousePosition.x, mousePosition.y],
      },
      mouseStrength: { // the strength of the effect (we will attenuate it if the mouse stops moving)
        name: "uMouseStrength", // uniform name that will be passed to our shaders
        type: "1f", // this means our uniform is a float
        value: 0,
      },
    }
  }

  // create our plane mesh
  var plane = webGLCurtain.addPlane(planeElement, params);

  // once our plane is ready, we can start listening to mouse/touch events and update its uniforms
  plane.onReady(function() {
    // set a field of view of 35 to exaggerate the perspective
    // we could have done it directly in the initial params
    plane.setPerspective(35);

    // listen to our mouse/touch events on the whole document
    // we will pass the plane as the second argument of our function
    // we could be handling multiple planes that way
    document.body.addEventListener("mousemove", function(e) {
      handleMovement(e, plane);
    });

    document.body.addEventListener("touchmove", function(e) {
      handleMovement(e, plane);
    });

  }).onRender(function() {
    // update our time uniform value
    plane.uniforms.time.value++;

    // continually decrease mouse strength
    plane.uniforms.mouseStrength.value = Math.max(0, plane.uniforms.mouseStrength.value - 0.0075);
  });

  // handle the mouse move event
  function handleMovement(e, plane) {
    // touch event
    if (e.targetTouches) {
      mousePosition.x = e.targetTouches[0].clientX;
      mousePosition.y = e.targetTouches[0].clientY;
    }
    // mouse event
    else {
      mousePosition.x = e.clientX;
      mousePosition.y = e.clientY;
    }

    // convert our mouse/touch position to coordinates relative to the vertices of the plane
    var mouseCoords = plane.mouseToPlaneCoords(mousePosition.x, mousePosition.y);

    // update our mouse position uniform
    plane.uniforms.mousePosition.value = [mouseCoords.x, mouseCoords.y];

    // reassign mouse strength
    plane.uniforms.mouseStrength.value = 1;
  }
}

Now that our JavaScript is done, we have to rewrite our shaders so that they’ll use our mouse position uniform.

Refactoring the shaders

Let’s look at our vertex shader first. We have three uniforms that we could use for our effect:

  1. the time which is constantly increasing
  2. the mouse position
  3. our mouse strength, which is constantly decreasing until the next mouse move

We will use all three of them to create a kind of 3D ripple effect.

<script id="plane-vs" type="x-shader/x-vertex"> #ifdef GL_ES precision mediump float; #endif // those are the mandatory attributes that the lib sets attribute vec3 aVertexPosition; attribute vec2 aTextureCoord; // those are mandatory uniforms that the lib sets and that contain our model view and projection matrix uniform mat4 uMVMatrix; uniform mat4 uPMatrix; // our time uniform uniform float uTime; // our mouse position uniform uniform vec2 uMousePosition; // our mouse strength uniform float uMouseStrength; // if you want to pass your vertex and texture coords to the fragment shader varying vec3 vVertexPosition; varying vec2 vTextureCoord; void main() { vec3 vertexPosition = aVertexPosition; // get the distance between our vertex and the mouse position float distanceFromMouse = distance(uMousePosition, vec2(vertexPosition.x, vertexPosition.y)); // this will define how close the ripples will be from each other. The bigger the number, the more ripples you'll get float rippleFactor = 6.0; // calculate our ripple effect float rippleEffect = cos(rippleFactor * (distanceFromMouse - (uTime / 120.0))); // calculate our distortion effect float distortionEffect = rippleEffect * uMouseStrength; // apply it to our vertex position vertexPosition += distortionEffect / 15.0; gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0); // varyings vTextureCoord = aTextureCoord; vVertexPosition = vertexPosition; }
</script>

As for the fragment shader, we are going to keep it simple. We are going to fake lights and shadows based on each vertex position:

<script id="plane-fs" type="x-shader/x-fragment"> #ifdef GL_ES precision mediump float; #endif // get our varyings varying vec3 vVertexPosition; varying vec2 vTextureCoord; // our texture sampler (this is the lib default name, but it could be changed) uniform sampler2D uSampler0; void main() { // get our texture coords vec2 textureCoords = vTextureCoord; // apply our texture vec4 finalColor = texture2D(uSampler0, textureCoords); // fake shadows based on vertex position along Z axis finalColor.rgb -= clamp(-vVertexPosition.z, 0.0, 1.0); // fake lights based on vertex position along Z axis finalColor.rgb += clamp(vVertexPosition.z, 0.0, 1.0); // handling premultiplied alpha (useful if we were using a png with transparency) finalColor = vec4(finalColor.rgb * finalColor.a, finalColor.a); gl_FragColor = finalColor; }
</script>

And there you go!

See the Pen curtains.js ripple effect example by Martin Laxenaire (@martinlaxenaire) on CodePen.

With these two simple examples, we’ve seen how to create a plane and interact with it.

Videos and displacement shaders

Our last example will create a basic fullscreen video slideshow using a displacement shader to enhance the transitions.

Displacement shader concept

The displacement shader will create a nice distortion effect. It will be written inside our fragment shader using a grayscale picture and will offset the pixel coordinates of the videos based on the texture RGB values. Here’s the image we will be using:

The effect will be calculated based on each pixel RGB value, with a black pixel being [0, 0, 0] and a white pixel [1, 1, 1] (GLSL equivalent for [255, 255, 255]). To simplify, we will use only the red channel value, as with a grayscale image red, green and blue are always equal.
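To make the idea concrete before we jump into GLSL, here is a toy CPU-side sketch of the same principle, using the pixel arrays we worked with earlier in this post. The real effect runs in the fragment shader shown below, so treat this, and the made-up displacePixels helper, purely as an illustration.

// src and map are the Uint8ClampedArrays of the source frame and the
// displacement image; out receives the displaced pixels
function displacePixels(src, map, out, width, height, strength) {
  for (var i = 0; i < out.length; i += 4) {
    var x = (i / 4) % width;
    var y = Math.floor(i / 4 / width);

    // red channel normalized to 0..1, just like in GLSL
    var red = map[i] / 255;

    // shift the source lookup by an offset proportional to the red value
    var offsetX = Math.min(width - 1, x + Math.round(red * strength));
    var j = (y * width + offsetX) * 4;

    out[i] = src[j];
    out[i + 1] = src[j + 1];
    out[i + 2] = src[j + 2];
    out[i + 3] = 255;
  }
}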

You can try to create your own grayscale image (it works great with geometric shapes) to get your own unique transition effect.

Multiple textures and videos

A plane can have more than one texture simply by adding multiple image tags. This time, instead of images we want to use videos. We just have to replace the <img /> tags with <video /> ones. However, there are two things to know when it comes to video:

  • The video will always fit the exact size of the plane, which means your plane has to have the same width/height ratio as your video. This is not a big deal though, because it is easy to handle with CSS.
  • On mobile devices, we can’t autoplay videos without a user gesture, like a click event. It is therefore safer to add an “enter site” button to display and launch our videos.

HTML

The HTML is still pretty straightforward. We will create our canvas div wrapper, our plane div containing the textures and a button to trigger the video autoplay. Just notice the use of the data-sampler attribute on the image and video tags—it will be useful inside our fragment shader.

<body>
  <div id="canvas"></div>

  <!-- this div will handle the fullscreen video sizes and positions -->
  <div class="plane-wrapper">
    <div class="plane">
      <!-- notice here we are using the data-sampler attribute to name our sampler uniforms -->
      <img src="path/to/displacement.jpg" data-sampler="displacement" />
      <video src="path/to/video.mp4" data-sampler="firstTexture"></video>
      <video src="path/to/video-2.mp4" data-sampler="secondTexture"></video>
    </div>
  </div>

  <div id="enter-site-wrapper">
    <span id="enter-site">
      Click to enter site
    </span>
  </div>
</body>

CSS

The stylesheet will handle a few things: display the button and hide the canvas before the user has entered the site, then size and position our plane-wrapper div to handle fullscreen responsive videos.

@media screen {
  body {
    margin: 0;
    font-size: 18px;
    font-family: 'PT Sans', Verdana, sans-serif;
    background: #212121;
    line-height: 1.4;
    height: 100vh;
    width: 100vw;
    overflow: hidden;
  }

  /*** canvas ***/
  #canvas {
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    left: 0;
    z-index: 10;
    /* hide the canvas until the user clicks the button */
    opacity: 0;
    transition: opacity 0.5s ease-in;
  }

  /* display the canvas */
  .video-started #canvas {
    opacity: 1;
  }

  .plane-wrapper {
    position: absolute;
    /* center our plane wrapper */
    left: 50%;
    top: 50%;
    transform: translate(-50%, -50%);
    z-index: 15;
  }

  .plane {
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    left: 0;
    /* tell the user the plane can be clicked */
    cursor: pointer;
  }

  /* hide the original image and videos */
  .plane img,
  .plane video {
    display: none;
  }

  /* center the button */
  #enter-site-wrapper {
    display: flex;
    justify-content: center;
    align-items: center;
    align-content: center;
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    left: 0;
    z-index: 30;
    /* hide the button until everything is ready */
    opacity: 0;
    transition: opacity 0.5s ease-in;
  }

  /* show the button */
  .curtains-ready #enter-site-wrapper {
    opacity: 1;
  }

  /* hide the button after the click event */
  .curtains-ready.video-started #enter-site-wrapper {
    opacity: 0;
    pointer-events: none;
  }

  #enter-site {
    padding: 20px;
    color: white;
    background: #ee6557;
    max-width: 200px;
    text-align: center;
    cursor: pointer;
  }
}

/* fullscreen video responsive */
@media screen and (max-aspect-ratio: 1920/1080) {
  .plane-wrapper {
    height: 100vh;
    width: 177vh;
  }
}

@media screen and (min-aspect-ratio: 1920/1080) {
  .plane-wrapper {
    width: 100vw;
    height: 56.25vw;
  }
}

JavaScript

As for the JavaScript, we will proceed like this:

  • Set a couple variables to store our slideshow state
  • Create the Curtains object and add the plane to it
  • When the plane is ready, listen to a click event to start our videos playback (notice the use of the playVideos() method). Add another click event to switch between the two videos.
  • Update our transition timer uniform inside the onRender() method
window.onload = function() {
  // here we will handle which texture is visible and the timer to transition between images
  var activeTexture = 1;
  var transitionTimer = 0;

  // set up our WebGL context and append the canvas to our wrapper
  var webGLCurtain = new Curtains("canvas");

  // get our plane element
  var planeElements = document.getElementsByClassName("plane");

  // some basic parameters
  var params = {
    vertexShaderID: "plane-vs",
    fragmentShaderID: "plane-fs",
    imageCover: false, // our displacement texture has to fit the plane
    uniforms: {
      transitionTimer: {
        name: "uTransitionTimer",
        type: "1f",
        value: 0,
      },
    },
  }

  // create our plane
  var plane = webGLCurtain.addPlane(planeElements[0], params);

  plane.onReady(function() {
    // display the button
    document.body.classList.add("curtains-ready");

    // when our plane is ready we add a click event listener that will switch the active texture value
    planeElements[0].addEventListener("click", function() {
      if (activeTexture == 1) {
        activeTexture = 2;
      }
      else {
        activeTexture = 1;
      }
    });

    // click to play the videos
    document.getElementById("enter-site").addEventListener("click", function() {
      // display canvas and hide the button
      document.body.classList.add("video-started");

      // play our videos
      plane.playVideos();
    }, false);

  }).onRender(function() {
    // increase or decrease our timer based on the active texture value
    // at 60fps this should last one second
    if (activeTexture == 2) {
      transitionTimer = Math.min(60, transitionTimer + 1);
    }
    else {
      transitionTimer = Math.max(0, transitionTimer - 1);
    }

    // update our transition timer uniform
    plane.uniforms.transitionTimer.value = transitionTimer;
  });
}

Shaders

This is where all the magic will occur. Like in our first example, the vertex shader won’t do much and you’ll have to focus on the fragment shader that will create a “dive in” effect:

<script id="plane-vs" type="x-shader/x-vertex"> #ifdef GL_ES precision mediump float; #endif // default mandatory variables attribute vec3 aVertexPosition; attribute vec2 aTextureCoord; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; // varyings varying vec3 vVertexPosition; varying vec2 vTextureCoord; // custom uniforms uniform float uTransitionTimer; void main() { vec3 vertexPosition = aVertexPosition; gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0); // varyings vTextureCoord = aTextureCoord; vVertexPosition = vertexPosition; }
</script> <script id="plane-fs" type="x-shader/x-fragment"> #ifdef GL_ES precision mediump float; #endif varying vec3 vVertexPosition; varying vec2 vTextureCoord; // custom uniforms uniform float uTransitionTimer; // our textures samplers // notice how it matches our data-sampler attributes uniform sampler2D firstTexture; uniform sampler2D secondTexture; uniform sampler2D displacement; void main( void ) { // our texture coords vec2 textureCoords = vec2(vTextureCoord.x, vTextureCoord.y); // our displacement texture vec4 displacementTexture = texture2D(displacement, textureCoords); // our displacement factor is a float varying from 1 to 0 based on the timer float displacementFactor = 1.0 - (cos(uTransitionTimer / (60.0 / 3.141592)) + 1.0) / 2.0; // the effect factor will tell which way we want to displace our pixels // the farther from the center of the videos, the stronger it will be vec2 effectFactor = vec2((textureCoords.x - 0.5) * 0.75, (textureCoords.y - 0.5) * 0.75); // calculate our displaced coordinates to our first video vec2 firstDisplacementCoords = vec2(textureCoords.x - displacementFactor * (displacementTexture.r * effectFactor.x), textureCoords.y - displacementFactor * (displacementTexture.r * effectFactor.y)); // opposite displacement effect on the second video vec2 secondDisplacementCoords = vec2(textureCoords.x - (1.0 - displacementFactor) * (displacementTexture.r * effectFactor.x), textureCoords.y - (1.0 - displacementFactor) * (displacementTexture.r * effectFactor.y)); // apply the textures vec4 firstDistortedColor = texture2D(firstTexture, firstDisplacementCoords); vec4 secondDistortedColor = texture2D(secondTexture, secondDisplacementCoords); // blend both textures based on our displacement factor vec4 finalColor = mix(firstDistortedColor, secondDistortedColor, displacementFactor); // handling premultiplied alpha finalColor = vec4(finalColor.rgb * finalColor.a, finalColor.a); // apply our shader gl_FragColor = finalColor; }
</script>

Here’s our little video slideshow with a cool transition effect:

See the Pen curtains.js video slideshow by Martin Laxenaire (@martinlaxenaire) on CodePen.

This example is a great way to show you how to create a slideshow with curtains.js: you might want to use images instead of videos, change the displacement texture, modify the fragment shader or even add more slides…

Going deeper

We’ve just scratched the surface of what’s possible with curtains.js. You could try creating multiple planes with a cool mouseover effect for your article thumbnails, for example. The possibilities are almost endless.

If you want to see more examples covering all these basic usages, you can check out the library’s website or the GitHub repo.

The post Animate Images and Videos with curtains.js appeared first on CSS-Tricks.