Chapter 5. Autonomous Agents
Life is a journey, not a destination.
—Ralph Waldo Emerson
So far, I’ve been demonstrating inanimate objects, lifeless shapes sitting on the canvas that flop around when affected by forces in their environment. But this is The Nature of Code. What if I could breathe life into those shapes? What if those shapes could live by their own rules? Can shapes have hopes and dreams and fears? These sorts of questions are the domain of this chapter. They’re what separate unthinking objects from something much more interesting: autonomous agents.
Forces from Within
An autonomous agent is an entity that makes its own choices about how to act in its environment, without any influence from a leader or global plan. In this book, acting typically means moving. For example, instead of simulating a particle that’s passively drawn toward or repelled by another shape because of a force like gravity, I’d now like to design an entity that has the ability—or even the “desire”—to make decisions about its movements. It could decide to move toward or away from a target, like a moth drawn to a flame or a small fish evading a predator.
The switch from inanimate objects to autonomous agents is a significant conceptual leap, but the codebase will barely change. The desire for an autonomous agent to move is just another force, like the force of gravity or the force of the wind. It’s just that now the force is coming from within.
Here are three key components of autonomous agents to keep in mind as I build this chapter’s examples:
- An autonomous agent has a limited ability to perceive its environment. It makes sense that a living, breathing being should be aware of its environment. However, this awareness doesn’t refer to just the external environment but also to the agent’s internal state—its position, velocity, and potentially other properties or even simulated emotions. Throughout the chapter, I’ll explore ways agents can take their own state into account when making decisions. I’ll also cover programming techniques for objects to store references to other objects and therefore “perceive” their surroundings. It’s important to consider the word limited here. Are you designing an all-knowing circle that flies around a canvas, aware of everything else in that canvas? Or are you creating a shape that can examine other shapes only within 15 pixels of itself? Of course, there’s no right answer to this question; it all depends on what you want. I’ll explore several possibilities throughout this chapter, but in general, limitations are good for creating a simulation that feels more “natural.” An insect, for example, may be aware of only the sights and smells that immediately surround it. To model a real-world creature, you could study the exact science of these limitations. Luckily, I can just make stuff up and try it out.
- An autonomous agent processes the information from its environment and calculates an action. This will be the easy part, as the action is a force. The environment might tell the agent that there’s a big, scary-looking shark swimming right at it, and the action will be a powerful force in the opposite direction.
- An autonomous agent has no leader. This third principle is something I care a little less about, depending on the context. For example, if you’re designing a system for which it makes sense to have a leader barking commands at various entities, then that’s what you’ll want to implement. Nevertheless, many of the chapter’s examples will have no leader for an important reason: toward the end of this chapter, I’ll examine group behaviors and look at designing collections of autonomous agents that exhibit the properties of complex systems. These are intelligent and structured group dynamics that emerge not from a leader, but from the local interactions of the elements themselves.
I could start my exploration of autonomous agents in many places. Artificial simulations of ant and termite colonies are fantastic demonstrations of systems of agents, for example. For more on this topic, I encourage you to read Turtles, Termites, and Traffic Jams by Mitchel Resnick (Bradford Books, 1997). However, I want to begin by examining agent behaviors that build on the work in the first four chapters of this book: modeling motion with vectors and forces. And so I’ll return to the book’s ever-changing hero class—once Walker, then Mover, then Particle—and give it yet another incarnation.
Vehicles and Steering
In the late 1980s, computer scientist Craig Reynolds developed algorithmic steering behaviors for animated characters. These behaviors allowed individual elements to navigate their digital environments in a lifelike manner, with strategies for fleeing, wandering, arriving, pursuing, evading, and more. Later, in his 1999 paper “Steering Behaviors for Autonomous Characters,” Reynolds uses the word vehicle to describe his autonomous agents. I’ll follow suit, calling my autonomous agent class Vehicle:
class Vehicle {
  constructor() {
    this.position = createVector();
    this.velocity = createVector();
    this.acceleration = createVector();
  }
  /* What else do I need to add? */
}
Like the Mover and Particle classes before it, the Vehicle class’s motion is controlled through its position, velocity, and acceleration vectors. This will make the steering behaviors of a single autonomous agent straightforward to implement. Yet by building a system of multiple vehicles that steer themselves according to simple, locally based rules, surprising levels of complexity emerge. The most famous example is Reynolds’s boids model for flocking or swarming behavior, which I’ll demonstrate in Example 5.11.
Why Vehicles?
In his book Vehicles: Experiments in Synthetic Psychology (Bradford Books, 1986), Italian neuroscientist and cyberneticist Valentino Braitenberg describes a series of hypothetical vehicles with simple internal structures, writing, “This is an exercise in fictional science, or science fiction, if you like that better.” Braitenberg argues that his extraordinarily simple mechanical vehicles manifest behaviors such as fear, aggression, love, foresight, and optimism. Reynolds took his inspiration from Braitenberg, and I’ll take mine from Reynolds.
Reynolds describes the motion of idealized vehicles—idealized because he wasn’t concerned with their actual engineering, but rather started with the assumption that they work and respond to the rules defined. These vehicles have three layers:
- Action selection: A vehicle has a goal (or goals) and can choose an action (or a combination of actions) based on that goal. This is essentially where I left off in the discussion of autonomous agents. The vehicle takes a look at its environment and selects an action based on a desire: “I see a zombie marching toward me. Since I don’t want my brains to be eaten, I’m going to flee from the zombie.” The goal is to keep one’s brains, and the action is to flee. Reynolds’s paper describes many goals and associated actions, such as seeking a target, avoiding an obstacle, and following a path. In a moment, I’ll start building out these examples with p5.js code.
- Steering: Once an action has been selected, the vehicle has to calculate its next move. That next move will be a force—more specifically, a steering force. Luckily, Reynolds has developed a simple steering force formula that I’ll use throughout the examples in this chapter: steering force = desired velocity – current velocity. I’ll get into the details of this formula and why it works so effectively in the next section.
- Locomotion: For the most part, I’m going to ignore this third layer. In the case of fleeing from zombies, the locomotion could be described as “left foot, right foot, left foot, right foot, as fast as you can.” In a canvas, however, a rectangle, circle, or triangle’s actual movement across a window is irrelevant, given that the motion is all an illusion in the first place. This isn’t to say that you should ignore locomotion entirely, however. You’ll find great value in thinking about the locomotive design of your vehicle and how you choose to animate it. The examples in this chapter will remain visually bare; a good exercise would be to elaborate on the animation style. For example, could you add spinning wheels, oscillating paddles, or shuffling legs?
Ultimately, the most important layer for you to consider is the first one, action selection. What are the elements of your system, and what are their goals? In this chapter, I’m going to cover a series of steering behaviors (that is, actions): seeking, fleeing, following a path, following a flow field, flocking with your neighbors, and so on. As I’ve said in other chapters, however, the point isn’t that you should use these exact behaviors in all your projects. Rather, the point is to show you how to model a steering behavior—any steering behavior—in code, and to provide a foundation for designing and developing your own vehicles with new and exciting goals and behaviors.
What’s more, even though the examples in this chapter are highly literal (follow that pixel!), you should allow yourself to think more abstractly (like Braitenberg). What would it mean for your vehicle to have “love” as its goal or “fear” as its driving force? Finally (and I’ll address this in “Combining Behaviors”), you won’t get very far by developing simulations with only one action. Yes, the first example’s action will be to seek a target. But by being creative—by making these steering behaviors your own—it will all come down to mixing and matching multiple actions within the same vehicle. View the coming examples not as singular behaviors to be emulated, but as pieces of a larger puzzle that you’ll eventually assemble.
The Steering Force
What exactly is a steering force? To answer, consider the following scenario: a vehicle with a current velocity is seeking a target. For fun, let’s think of the vehicle as a bug-like creature that desires to savor a delicious strawberry, as in Figure 5.1.
The vehicle’s goal and subsequent action is to seek the target. Thinking back to Chapter 2, you might begin by making the target an attractor and applying a gravitational force that pulls the vehicle to the target. This would be a perfectly reasonable solution, but conceptually it’s not what I’m looking for here.
I don’t want to simply calculate a force that pushes the vehicle toward its target; rather, I want to ask the vehicle to make an intelligent decision to steer toward the target based on its perception of its own state (its speed and the direction in which it’s currently moving) and its environment (the location of the target). The vehicle should consider how it desires to move (a vector pointing to the target), compare that goal with how it’s currently moving (its velocity), and apply a force accordingly. That’s exactly what Reynolds’s steering force formula says:

steering force = desired velocity – current velocity

Or, as you might write in p5.js:

let steer = p5.Vector.sub(desired, velocity);
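To see the formula in action with concrete numbers, here’s a minimal sketch in plain JavaScript, using arrays as a stand-in for p5.Vector (an assumption of mine so it runs outside p5.js):

```javascript
// Plain-array stand-in for p5.Vector.sub (assumption: running outside p5.js).
function sub(a, b) {
  return [a[0] - b[0], a[1] - b[1]];
}

// Suppose the vehicle wants to be moving at (3, 4)
// but is currently moving at (5, 0).
let desired = [3, 4];
let velocity = [5, 0];

// steering force = desired velocity - current velocity
let steer = sub(desired, velocity);
// steer is [-2, 4]: ease off horizontally, start moving vertically.
```

The force is exactly the correction needed to turn the current motion into the desired motion.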
The current velocity isn’t a problem: the Vehicle class already has a variable for that. However, the desired velocity has to be calculated. Take a look at Figure 5.2. If the vehicle’s goal is defined as seeking the target, then its desired velocity is a vector that points from its current position to the target position.
Assuming a p5.Vector called target defining the target’s position, I then have this:
let desired = p5.Vector.sub(target, position);
There’s more to the story, however. What if the canvas is high resolution and the target is thousands of pixels away? Sure, the vehicle might desire to teleport itself instantly to the target position with a massive velocity, but this won’t make for an effective animation. I’ll restate the desire as follows:
The vehicle desires to move toward the target at the maximum possible speed.
In other words, the desired vector should point from the vehicle’s current position to the target position, with a magnitude equal to the maximum speed of the vehicle, as shown in Figure 5.3.
The concept of maximum speed was introduced in Chapter 1 to ensure that a mover’s speed remained within a reasonable range. However, I didn’t always use it in the subsequent chapters. In Chapter 2, other forces such as friction and drag kept the speed in check, while in Chapter 3, oscillation was caused by opposing forces that kept the speed limited. In this chapter, maximum speed is a key parameter for controlling the behavior of a steering agent, so I’ll include it in all the examples.
While I encourage you to consider how other forces such as friction and drag could be combined with steering behaviors, I’m going to focus only on steering forces for the time being. As such, I can include the concept of maximum speed as a limiting factor in the force calculation. First, I need to add a property to the Vehicle class setting the maximum speed:
class Vehicle {
  constructor() {
    this.position = createVector();
    this.velocity = createVector();
    this.acceleration = createVector();
    // Maximum speed
    this.maxspeed = ????;
  }
}
Then, in the desired velocity calculation, I’ll scale according to maximum speed:
let desired = p5.Vector.sub(target, this.position);
desired.setMag(this.maxspeed);
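The setMag() call combines two steps: normalize the vector to length 1, then scale it. A plain-JavaScript stand-in (the helper name here is mine; p5.js provides this as a method on p5.Vector) makes the arithmetic visible:

```javascript
// Stand-in for p5.Vector's setMag(): normalize, then scale to length n.
function setMag(v, n) {
  let m = Math.hypot(v[0], v[1]);
  return [(v[0] / m) * n, (v[1] / m) * n];
}

// A target 50 pixels away (a 3-4-5 triangle scaled by 10) and a max speed of 8
let desired = [30, 40];
let maxspeed = 8;
desired = setMag(desired, maxspeed);
// desired is now [4.8, 6.4]: the same direction, but with magnitude 8
```

Whatever the distance to the target, the resulting vector always has the maximum speed as its length.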
Putting this all together, I can now write a method called seek() that receives a p5.Vector target and calculates a steering force toward that target:
seek(target) {
  // Calculate the desired velocity to target at max speed.
  let desired = p5.Vector.sub(target, this.position);
  desired.setMag(this.maxspeed);
  // Reynolds’s formula for steering force
  let steer = p5.Vector.sub(desired, this.velocity);
  // Use the physics model and apply the force to the object’s acceleration.
  this.applyForce(steer);
}
Notice that I finish the method by passing the steering force into applyForce(). This assumes that the code is built on top of the foundation I developed in Chapter 2.
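To check that the pieces fit together, here’s a self-contained, p5-free sketch of the same seek loop. The tiny Vec class and MiniVehicle name are stand-ins of my own, not part of p5.js, and mass is assumed to be 1 so the force adds directly to velocity:

```javascript
// Bare-bones vector stand-in for p5.Vector (an assumption for running without p5.js).
class Vec {
  constructor(x, y) { this.x = x; this.y = y; }
  add(v) { this.x += v.x; this.y += v.y; }
  static sub(a, b) { return new Vec(a.x - b.x, a.y - b.y); }
  mag() { return Math.hypot(this.x, this.y); }
  setMag(n) { let m = this.mag() || 1; this.x *= n / m; this.y *= n / m; }
  limit(max) { if (this.mag() > max) this.setMag(max); }
}

class MiniVehicle {
  constructor(x, y) {
    this.position = new Vec(x, y);
    this.velocity = new Vec(0, 0);
    this.maxspeed = 8;
    this.maxforce = 0.2;
  }
  // One frame of seeking plus a bare-bones physics update.
  seek(target) {
    let desired = Vec.sub(target, this.position);
    desired.setMag(this.maxspeed);
    let steer = Vec.sub(desired, this.velocity);
    steer.limit(this.maxforce);
    // Mass is assumed to be 1, so the force adds directly to velocity.
    this.velocity.add(steer);
    this.velocity.limit(this.maxspeed);
    this.position.add(this.velocity);
  }
}

let vehicle = new MiniVehicle(0, 0);
let target = new Vec(300, 200);
let startDistance = Vec.sub(target, vehicle.position).mag();
for (let frame = 0; frame < 100; frame++) {
  vehicle.seek(target);
}
let endDistance = Vec.sub(target, vehicle.position).mag();
// The vehicle ends up closer to the target than it started
// (it may be oscillating back and forth past it).
```

Running the loop frame by frame like this is a handy way to reason about a steering behavior before wiring it into a sketch.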
To see why Reynolds’s steering formula works so well, take a look at Figure 5.4. It shows what the steering force looks like relative to the vehicle and target positions.
This force looks quite different from gravitational attraction. Remember one of the principles of autonomous agents: an autonomous agent has a limited ability to perceive its environment, including its own state. Here’s that ability, subtly but powerfully embedded into Reynolds’s steering formula. In the case of gravitational attraction, the force pulling an object toward another is the same regardless of how that object is moving. But here, the vehicle is actively aware of its own velocity, and its steering force compensates accordingly. This adds a lifelike quality to the simulation, as the way in which the vehicle moves toward the target depends on its own understanding of its current motion.
In all this excitement, I’ve missed one last step. What sort of vehicle is this? Is it a super-sleek race car with amazing handling? Or a large city bus that needs a lot of advance notice to turn? A graceful panda or a lumbering elephant? The example code, as it stands, has no feature to account for this variation in steering ability. For that, I need to limit the magnitude of the steering force. I’ll call this limit the maximum force (or maxforce for short):
class Vehicle {
  constructor() {
    this.position = createVector();
    this.velocity = createVector();
    this.acceleration = createVector();
    // Maximum speed
    this.maxspeed = ????;
    // Now I also have a maximum force.
    this.maxforce = ????;
  }
}
Now I just need to impose that limit before applying the steering force:
seek(target) {
  let desired = p5.Vector.sub(target, this.position);
  desired.setMag(this.maxspeed);
  let steer = p5.Vector.sub(desired, this.velocity);
  // Limit the magnitude of the steering force.
  steer.limit(this.maxforce);
  this.applyForce(steer);
}
Limiting the steering force brings up an important point: the goal isn’t to get the vehicle to the target as fast as possible. If it were, I would just say, “Set position equal to target,” and the vehicle would instantly teleport to that location! Instead, as Reynolds puts it, the goal is to move the vehicle in a “lifelike and improvisational manner.”
I’m trying to make the vehicle appear to be steering its way to the target, so it’s up to me to play with the forces and variables of the system to simulate a given behavior. For example, a large maximum steering force would result in a very different path than a small one (see Figure 5.5). One isn’t inherently better or worse than the other; it depends on the desired effect. (And of course, these values need not be fixed and could change based on other conditions. Perhaps a vehicle has an energy property: the higher the energy, the better it can steer.)
Here’s the full Vehicle class, incorporating the rest of the elements from the Chapter 2 Mover class.
Example 5.1: Seeking a Target
class Vehicle {
  constructor(x, y) {
    this.position = createVector(x, y);
    this.velocity = createVector(0, 0);
    this.acceleration = createVector(0, 0);
    // Additional variable for size
    this.r = 6.0;
    // Arbitrary values for max speed and force; try varying these!
    this.maxspeed = 8;
    this.maxforce = 0.2;
  }

  // Standard update function
  update() {
    this.velocity.add(this.acceleration);
    this.velocity.limit(this.maxspeed);
    this.position.add(this.velocity);
    this.acceleration.mult(0);
  }

  // Newton’s second law (skipping the math)
  applyForce(force) {
    this.acceleration.add(force);
  }

  // The seek steering force algorithm
  seek(target) {
    let desired = p5.Vector.sub(target, this.position);
    desired.setMag(this.maxspeed);
    let steer = p5.Vector.sub(desired, this.velocity);
    steer.limit(this.maxforce);
    this.applyForce(steer);
  }

  show() {
    // The vehicle is a triangle pointing in the direction of velocity.
    let angle = this.velocity.heading();
    fill(127);
    stroke(0);
    push();
    translate(this.position.x, this.position.y);
    rotate(angle);
    beginShape();
    vertex(this.r * 2, 0);
    vertex(-this.r * 2, -this.r);
    vertex(-this.r * 2, this.r);
    endShape(CLOSE);
    pop();
  }
}
Note that, unlike the circles used to represent movers and particles in previous chapters, the Vehicle object is drawn as a triangle, defined as three custom vertices set with beginShape() and endShape(). This allows the vehicle to be represented in a way that indicates its direction, determined using the heading() method, as demonstrated in Chapter 3.
Exercise 5.1
Implement a fleeing steering behavior (the desired velocity is the same as seek, but pointed in the opposite direction).
Exercise 5.2
Create a sketch in which a vehicle’s maximum force and maximum speed don’t remain constant but vary according to environmental factors.
Exercise 5.3
Implement a seeking behavior with a moving target, often referred to as pursuit. In this case, your desired vector won’t point toward the object’s current position, but rather its future position as extrapolated from its current velocity. You’ll see this ability for a vehicle to “predict the future” in later examples. The solution is covered in the “Pursue and Evade” video on the Coding Train website.
The Arrive Behavior
After working for a bit with the seeking behavior, you’re probably asking yourself, “What if I want the vehicle to slow down as it approaches the target?” Before I can even begin to answer this question, I should explain why the seek behavior causes the vehicle to fly past the target in the first place, forcing it to turn around and go back. Consider the brain of a seeking vehicle. What is it thinking at each frame of the animation?
- I want to go as fast as possible toward the target.
- I want to go as fast as possible toward the target.
- I want to go as fast as possible toward the target.
- I want to go as fast as possible toward the target.
- I want to go as fast as possible toward the target.
- and so on . . .
The vehicle is so gosh darn excited about getting to the target that it doesn’t bother to make any intelligent decisions about its speed. No matter the distance to the target, it always wants to go as fast as possible. When the vehicle is very close, it will therefore end up overshooting the target (see Figure 5.6, top).
In some cases, this is the desired behavior. (Consider a puppy going after its favorite toy: it’s not slowing down, no matter how close it gets!) However, in many other cases (a car pulling into a parking spot, a bee landing on a flower), the vehicle’s thought process needs to consider its speed relative to the distance from its target (see Figure 5.6, bottom). For example:
- I’m very far away. I want to go as fast as possible toward the target.
- I’m somewhat far away. I still want to go as fast as possible toward the target.
- I’m getting close. I want to go more slowly toward the target.
- I’m almost there. I want to go very slowly toward the target.
- I’m there. I want to stop!
How can you implement this arriving behavior in code? Think back to the seek() method. Which part of the code sets the magnitude of the desired velocity?
let desired = p5.Vector.sub(target, this.position);
desired.setMag(this.maxspeed);
This always sets the magnitude of the desired vector to maxspeed, as in Figure 5.7.
What if instead the desired velocity’s magnitude were equal to half the distance?
let desired = p5.Vector.sub(target, this.position);
desired.mult(0.5);
I’d still want to limit the magnitude of desired to no more than the maximum speed, to keep vehicles that are very far away from going ridiculously fast (Figure 5.8).
While this change nicely demonstrates the goal of tying the desired speed to the distance from the target, it’s not a particularly good solution. After all, 10 pixels away is rather close, and a desired speed of 5 is rather large. Something like a desired velocity with a magnitude equal to 5 percent of the distance might work better:
let desired = p5.Vector.sub(target, this.position);
desired.mult(0.05);
Reynolds describes an even more sophisticated approach. Imagine a circle around the target with a given radius r. If the vehicle is within that circle, it gradually slows down—from the maximum speed at the very edge of the circle to zero speed at the target (Figure 5.9).
In other words, if the distance from the target is less than r, the desired speed ranges from 0 to the maximum speed mapped according to that distance.
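The “mapped according to that distance” step is just linear interpolation. Here’s a quick numeric check, with a plain-JavaScript version of p5.js’s map() reimplemented only so the sketch is self-contained:

```javascript
// Stand-in for p5.js's map(): linearly re-map value from one range to another.
function map(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
}

let maxspeed = 8;
let r = 100; // radius of the slowing circle around the target

// Inside the circle, desired speed scales linearly with distance.
let speedAtEdge = map(100, 0, r, 0, maxspeed); // at the edge -> full speed, 8
let speedHalfway = map(50, 0, r, 0, maxspeed); // halfway in  -> half speed, 4
let speedAtTarget = map(0, 0, r, 0, maxspeed); // at the target -> stop, 0
```

The radius of 100 here mirrors the hardcoded value in Example 5.2; in practice it’s another parameter worth tuning.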
Example 5.2: Arriving at a Target
arrive(target) {
  let desired = p5.Vector.sub(target, this.position);
  // The distance is the magnitude of the vector pointing from the position to the target.
  let d = desired.mag();
  // If we are closer than 100 pixels . . .
  if (d < 100) {
    // . . . set the magnitude according to how close we are.
    let m = map(d, 0, 100, 0, this.maxspeed);
    desired.setMag(m);
  } else {
    // Otherwise, proceed at maximum speed.
    desired.setMag(this.maxspeed);
  }
  // The usual steering = desired – velocity
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxforce);
  this.applyForce(steer);
}
The arrive behavior is a great demonstration of an autonomous agent’s perception of the environment—including its own state. This model differs from the inanimate forces of Chapter 2: a celestial body attracted to another body doesn’t know it is experiencing gravity, whereas a cheetah chasing its prey knows it’s chasing.
The key is in the way the forces are calculated. For instance, in the gravitational attraction sketch (Example 2.6), the force always points directly from the object to the target—the exact direction of the desired velocity. Here, by contrast, the vehicle perceives its distance to the target and adjusts its desired speed accordingly, slowing as it gets closer. The force on the vehicle itself is therefore based not just on the desired velocity but also on the desired velocity relative to its current velocity. The vehicle accounts for its own state as part of its assessment of the environment.
Put another way, the magic of Reynolds’s desired minus velocity equation is that it essentially makes the steering force a manifestation of the current velocity’s error: “I’m supposed to be going this fast in this direction, but I’m actually going this fast in another direction. My error is the difference between where I want to go and where I’m currently going.” Sometimes this can lead to seemingly unexpected results, as in Figure 5.10.
In this example of the arrive behavior, the vehicle is moving too fast toward the target. The steering force, or error, tells it to slow down by actually pointing in the opposite direction, away from the target. By contrast, with gravitational attraction, you would never have a force pointing away from the target, no matter how close the target is. Taking the error and applying it as a steering force results in more dynamic, lifelike simulations.
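The claim that the steering force can point away from the target is easy to verify numerically. This sketch uses made-up numbers and plain values in place of p5.Vector, with motion along a single axis for simplicity:

```javascript
// The vehicle is 20 pixels from the target (inside a slowing radius of 100),
// already moving straight at it at full speed along the x-axis.
let maxspeed = 8;
let slowingRadius = 100;
let distance = 20;
let velocityX = 8;

// Arrive scales the desired speed by distance: (20 / 100) * 8 = 1.6
let desiredSpeed = (distance / slowingRadius) * maxspeed;

// steering = desired - velocity, along the x-axis
let steerX = desiredSpeed - velocityX;
// steerX is -6.4: the force points AWAY from the target, braking the vehicle.
```

A gravitational force in the same configuration would be +20-something pixels of pull toward the target; the steering force instead encodes the error between the desired and actual motion.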
Your Own Behaviors
The first two examples I’ve covered—seek and arrive—boil down to calculating a single vector for each behavior: the desired velocity. In fact, every single one of Reynolds’s steering behaviors follows this same pattern. In this chapter, I’m going to walk through more of Reynolds’s behaviors—flow-field following, path following, and flocking. First, however, I want to emphasize again that these are examples—demonstrations of common steering behaviors that are useful in procedural animation. They aren’t the be-all and end-all of what you can do. As long as you can come up with a vector that describes a vehicle’s desired velocity, you’ve created your own steering behavior.
For example, let’s see how Reynolds defines the desired velocity for his wandering behavior:
Wandering is a type of random steering which has some long-term order: the steering direction on one frame is related to the steering direction on the next frame. This produces more interesting motion than, for example, simply generating a random steering direction each frame.
For Reynolds, the goal of wandering isn’t random motion, but rather a sense of moving in one direction for a little while, wandering off in the next direction for a little bit, and so on. Figure 5.11 illustrates how Reynolds calculates a target to seek in order to achieve such an effect.
First, the vehicle predicts its future position as a fixed distance in front of it (in the direction of its current velocity). Then it draws a circle with radius r centered on that position and picks a random point along the circumference of the circle. That point, which moves randomly around the circle for each frame of animation, is the vehicle’s target, so its desired velocity points in that direction.
Sounds absurd, right? Or, at the very least, a bit arbitrary. In fact, this is a clever and thoughtful solution—it uses randomness to drive a vehicle’s steering, but constrains that randomness along the path of a circle to keep the vehicle’s movement from appearing jittery and, well, totally random.
The seemingly random and arbitrary nature of this solution should drive home the point I’m trying to make: these are made-up behaviors, even if they’re inspired by real-life motion. You can just as easily concoct another elaborate scenario to compute a desired velocity. And you should!
Exercise 5.4
Write the code for Reynolds’s wandering behavior. Use polar coordinates to calculate the vehicle’s target along a circular path.
To give another example, say I want to create a steering behavior called stay within walls. To define the desired velocity, I’ll make a rule: if a vehicle comes within a distance d of a wall, that vehicle desires to move at maximum speed in the opposite direction of the wall (see Figure 5.12).
If I define the walls of the space as the edges of a canvas and an offset distance equal to 25, I can write the code for this with a series of if statements.
Example 5.3: “Stay Within Walls” Steering Behavior
// The method receives an offset from the edges.
boundaries(offset) {
  // Start with a null desired velocity.
  let desired = null;
  // Make a desired velocity that retains the y-direction of the vehicle
  // but points the x-direction directly away from the canvas edges.
  if (this.position.x < offset) {
    desired = createVector(this.maxspeed, this.velocity.y);
  } else if (this.position.x > width - offset) {
    desired = createVector(-this.maxspeed, this.velocity.y);
  }
  if (this.position.y < offset) {
    desired = createVector(this.velocity.x, this.maxspeed);
  } else if (this.position.y > height - offset) {
    desired = createVector(this.velocity.x, -this.maxspeed);
  }
  // If the desired velocity is non-null, apply steering.
  if (desired !== null) {
    desired.normalize();
    desired.mult(this.maxspeed);
    let steer = p5.Vector.sub(desired, this.velocity);
    steer.limit(this.maxforce);
    this.applyForce(steer);
  }
}
In this boundaries() method, you might be wondering why I set the desired velocity to null at the outset. Why not just set desired to a vector of 0? Remember, the steering force equals the desired velocity minus the current velocity! If the vehicle desires to move at 0 velocity, the resulting force would slow the vehicle to a stop. By initializing desired to null and checking that it’s non-null before applying the steering force, the vehicle won’t be affected at all when it’s comfortably away from the edges of the canvas.
Exercise 5.5
Come up with your own arbitrary scheme for calculating a desired velocity.
Flow Fields
Another one of Reynolds’s steering behaviors is flow-field following. But what is a flow field? Think of the canvas as a grid (Figure 5.13). In each cell of the grid lives an arrow pointing in a certain direction—you know, a vector. As a vehicle moves around the canvas, it asks, “Hey, what arrow is beneath me? That’s my desired velocity!”
Reynolds’s own flow-field example involves the vehicle looking ahead to its future position and following the vector at that spot. For simplicity’s sake, however, I’ll instead have the vehicle follow the vector at its current position.
Before I can write the additional code for the Vehicle class to follow a flow field, I first need a class that describes the flow field. Since a flow field is essentially a grid of vectors, a 2D array is a convenient data structure to represent it, as I can reference each element with two indices, the cell’s column and row in the grid. If you aren’t familiar with 2D arrays, I suggest reviewing my video tutorial on “2D Arrays in JavaScript”.
class FlowField {
  constructor() {
    // Resolution of the grid relative to the canvas width and height in pixels
    this.resolution = ????;
    // How many columns and how many rows are in the grid?
    this.cols = ????;
    this.rows = ????;
    // The field will be a 2D array of vectors.
    this.field = new Array(this.cols);
    for (let i = 0; i < this.cols; i++) {
      this.field[i] = new Array(this.rows);
    }
  }
}
How should I fill in the missing values? Let’s say I have a canvas that’s 200 pixels wide by 200 pixels high. In theory, I could make a flow field that has a vector for every single pixel, meaning 40,000 vectors total (200 × 200 = 40,000). This isn’t a terribly unreasonable number, but in this context, one vector per pixel is overkill. I can easily get by with, say, one vector every 10 pixels (20 × 20 = 400 vectors). My resolution variable sets the size of each cell in pixels. Then I can calculate the number of columns and rows based on the size of the canvas divided by the resolution:
constructor() {
  this.resolution = 10;
  // The total number of columns equals the width divided by the resolution.
  this.cols = floor(width / this.resolution);
  // The total number of rows equals the height divided by the resolution.
  this.rows = floor(height / this.resolution);
  // The field will be a 2D array of vectors.
  this.field = new Array(this.cols);
  for (let i = 0; i < this.cols; i++) {
    this.field[i] = new Array(this.rows);
  }
}
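A quick sanity check of that arithmetic for the hypothetical 200×200 canvas, using Math.floor in place of p5.js’s floor():

```javascript
// Grid dimensions for a 200x200 canvas at a resolution of 10 pixels per cell.
let width = 200;
let height = 200;
let resolution = 10;

let cols = Math.floor(width / resolution);  // 20 columns
let rows = Math.floor(height / resolution); // 20 rows

// 20 x 20 = 400 vectors instead of 40,000 (one per pixel).
let total = cols * rows;
```

The floor() call matters when the canvas size isn’t an exact multiple of the resolution; the leftover strip of pixels simply falls outside the last cell.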
Now that I’ve set up the data structure for the flow field, it’s time to compute the flow field’s vectors. How do I do that? However I want! Perhaps I’d like every vector in the flow field pointing to the right (Figure 5.14).
For that, I can just set each vector to (1, 0).
// Use a nested loop to hit every column and every row of the flow field.
for (let i = 0; i < this.cols; i++) {
  for (let j = 0; j < this.rows; j++) {
    // Arbitrary decision to make each vector point to the right
    this.field[i][j] = createVector(1, 0);
  }
}
Maybe I’d prefer the vectors to point in random directions (Figure 5.15). Easy. Just use the p5.Vector class’s random2D() method to assign each vector:
for (let i = 0; i < this.cols; i++) {
  for (let j = 0; j < this.rows; j++) {
    // A random vector
    this.field[i][j] = p5.Vector.random2D();
  }
}
What about using 2D Perlin noise (Figure 5.16)? Just map each noise value to an angle from 0 to 2π and create a vector from that angle:
let xoff = 0;
for (let i = 0; i < this.cols; i++) {
  let yoff = 0;
  for (let j = 0; j < this.rows; j++) {
    // Map the 2D noise value to an angle between 0 and 2π.
    let angle = map(noise(xoff, yoff), 0, 1, 0, TWO_PI);
    this.field[i][j] = p5.Vector.fromAngle(angle);
    yoff += 0.1;
  }
  xoff += 0.1;
}
Now I’m getting somewhere. Calculating the direction of the vectors by using Perlin noise is a great way to simulate a variety of natural effects, such as irregular gusts of wind or the meandering path of a river. I’ll note, however, that this noise mapping generates a field that prefers flowing left. Since Perlin noise has a Gaussian-like distribution, angles near π are more likely to be selected. For Figure 5.16, I used a range of 0 to 4π to counteract this tendency, similarly to the way I widened the range of angles in Chapter 4 to represent a range of angles for spinning confetti particles. Ultimately, of course, there’s no one correct way to calculate the vectors of a flow field; it’s up to you to decide what you’re looking to simulate.
Exercise 5.6
Write the code to calculate a flow field so that the vectors swirl in circles around the center of the canvas.
// Inside the nested loop over every column i and row j:
let x = i * width / cols;
let y = j * height / rows;
// A vector pointing from the cell toward the center of the canvas...
flowfield[i][j] = createVector(width / 2 - x, height / 2 - y);
// ...rotated 90 degrees so the field swirls around the center.
flowfield[i][j].rotate(PI / 2);
Now that I have a 2D array storing the flow-field vectors, I need a way for the vehicle to look up its desired velocity. For that, I simply divide the vehicle’s x- and y-position by the resolution of the grid. This gives me the indices of the desired vector in the 2D array. For example, if the resolution is 10 and the vehicle is at (100, 50), I’ll want to look up column 10 and row 5:
let column = floor(this.position.x / this.resolution);
let row = floor(this.position.y / this.resolution);
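To make that arithmetic concrete, here's a quick framework-free check of the same example, with Math.floor standing in for p5.js's floor():

```javascript
// A resolution of 10 means each grid cell is 10 pixels on a side.
let resolution = 10;
// A hypothetical vehicle position
let position = { x: 100, y: 50 };

// Divide each coordinate by the resolution and floor to get grid indices.
let column = Math.floor(position.x / resolution);
let row = Math.floor(position.y / resolution);
console.log(column, row); // → 10 5
```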
Because a vehicle could theoretically wander off the p5.js canvas, employing the constrain() function helps ensure that I don’t look outside the bounds of the flow-field array. Here’s a method called lookup(), which I’ll add to the FlowField class, that receives a vector (the position of the vehicle) and returns the corresponding flow-field vector for that position:
lookup(position) {
  // Use constrain() to keep the indices within the bounds of the array.
  let column = constrain(floor(position.x / this.resolution), 0, this.cols - 1);
  let row = constrain(floor(position.y / this.resolution), 0, this.rows - 1);
  // Use copy() to ensure that a copy of the vector is returned.
  return this.field[column][row].copy();
}
Before moving on to the Vehicle class, let’s look at the FlowField class code all together, this time using Perlin noise to compute the vector directions:
class FlowField {
  constructor(r) {
    this.resolution = r;
    // Determine the number of columns and rows.
    this.cols = floor(width / this.resolution);
    this.rows = floor(height / this.resolution);
    // A flow field is a 2D array of vectors. The example includes a
    // separate function to create that array.
    this.field = new Array(this.cols);
    for (let i = 0; i < this.cols; i++) {
      this.field[i] = new Array(this.rows);
    }
    this.init();
  }

  // The init() function fills the 2D array with vectors.
  init() {
    // Reseed noise for a new flow field each time.
    noiseSeed(random(10000));
    let xoff = 0;
    for (let i = 0; i < this.cols; i++) {
      let yoff = 0;
      for (let j = 0; j < this.rows; j++) {
        // In this example, use Perlin noise to create the vectors.
        let angle = map(noise(xoff, yoff), 0, 1, 0, TWO_PI);
        this.field[i][j] = p5.Vector.fromAngle(angle);
        yoff += 0.1;
      }
      xoff += 0.1;
    }
  }

  // A function to return a vector based on a position
  lookup(position) {
    let column = constrain(floor(position.x / this.resolution), 0, this.cols - 1);
    let row = constrain(floor(position.y / this.resolution), 0, this.rows - 1);
    return this.field[column][row].copy();
  }
}
Now let’s assume there’s a FlowField object called flow. Using that object’s lookup() method, a vehicle can then retrieve a desired velocity from the flow field and use Reynolds’s steering formula to calculate a force.
Example 5.4: Flow-Field Following
follow(flow) {
  // What is the vector at that spot in the flow field?
  let desired = flow.lookup(this.position);
  desired.setMag(this.maxspeed);
  // Steering is desired minus velocity.
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxforce);
  this.applyForce(steer);
}
Notice that lookup() is a method of the FlowField class, rather than of Vehicle. While you certainly could place lookup() within the Vehicle class instead, from my perspective, placing it in FlowField aligns best with the OOP principle of encapsulation. The lookup task, which retrieves a vector based on a position from the flow field, is inherently tied to the data of the FlowField object.
You may also notice some familiar elements from Chapter 4, such as the use of an array of vehicles. Although the vehicles here operate independently, this is a great first step toward thinking about the group behaviors that I’ll introduce later in this chapter.
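Before moving on, it may help to see the desired-minus-velocity calculation at the heart of follow() in isolation. The following is a framework-free sketch in which plain objects and small helper functions stand in for p5.Vector's setMag(), sub(), and limit(); the specific numbers are arbitrary:

```javascript
// Minimal stand-ins for the p5.Vector operations used by follow()
function setMag(v, mag) {
  let m = Math.sqrt(v.x * v.x + v.y * v.y);
  return { x: (v.x / m) * mag, y: (v.y / m) * mag };
}
function sub(a, b) {
  return { x: a.x - b.x, y: a.y - b.y };
}
function limit(v, max) {
  let m = Math.sqrt(v.x * v.x + v.y * v.y);
  return m > max ? setMag(v, max) : v;
}

// Suppose the flow field's vector at the vehicle's cell points right,
// scaled to a maxspeed of 4...
let desired = setMag({ x: 1, y: 0 }, 4);
// ...while the vehicle is currently moving straight down.
let velocity = { x: 0, y: 3 };

// Steering is desired minus velocity, clamped to a maxforce of 0.5.
let steer = limit(sub(desired, velocity), 0.5);
// steer ≈ { x: 0.4, y: -0.3 }: a gentle nudge right and upward
```

The clamped result points in the same direction as the full correction (4, −3) but at a magnitude the vehicle is allowed to apply in one frame.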
Exercise 5.7
Adapt the flow-field example so the vectors change over time. (Hint: Try using the third dimension of Perlin noise!)
Exercise 5.8
Can you create a flow field from an image? For example, try having the vectors point from dark to light colors (or vice versa).
Path Following
The next steering behavior formulated by Reynolds that I’d like to explore is path following. But let me quickly clarify something first: the behavior here is path following, not path finding. Pathfinding refers to an algorithm that solves for the shortest distance between two points, often in a maze. With path following, a predefined route, or path, already exists, and the vehicle simply tries to follow it.
In this section, I will work through the algorithm, including the corresponding mathematics and code. However, before doing so, it’s important to cover a key concept in vector math that I skipped over in Chapter 1: the dot product. I haven’t needed it yet, but it’s necessary here and likely will prove quite useful for you beyond just this example.
The Dot Product
Remember all the vector math covered in Chapter 1? Add, subtract, multiply, and divide? Figure 5.17 has a recap of some of these operations.
Notice that multiplication involves multiplying a vector by a scalar value. This makes sense; when you want a vector to be twice as large (but facing the same direction), multiply it by 2. When you want it to be half the size, multiply it by 0.5. However, several other multiplication-like operations involve a pair of vectors that are useful in certain scenarios—the dot product, the cross product, and something called the Hadamard product. For now, I’m going to focus on the dot product.
Assume two vectors, A and B, with components (ax, ay) and (bx, by). The formula for the dot product (represented by the · character) is as follows:

A · B = (ax × bx) + (ay × by)

Crucially, the result of the dot product is a scalar value (a single number) and not a vector, even though the inputs are two vectors. For example, say you have these two vectors:

A = (−3, 5)
B = (10, 1)

Their dot product is shown here:

A · B = (−3 × 10) + (5 × 1) = −30 + 5 = −25
In p5.js, this translates to the following:
let a = createVector(-3, 5);
let b = createVector(10, 1);
// The p5.Vector class includes a function to calculate the dot product.
let n = a.dot(b);
If you look in the guts of the p5.Vector source code, you’ll find a pretty simple implementation of this dot() method:
dot(v) {
  // For 2D vectors, z is 0.
  return this.x * v.x + this.y * v.y + this.z * v.z;
}
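As a quick sanity check, here's a framework-free version of that same calculation applied to the example vectors from earlier, with plain objects standing in for p5.Vector:

```javascript
// Component formula for the dot product (z defaults to 0 for 2D vectors)
function dot(a, b) {
  return a.x * b.x + a.y * b.y + (a.z || 0) * (b.z || 0);
}

let a = { x: -3, y: 5 };
let b = { x: 10, y: 1 };
console.log(dot(a, b)); // → -25, matching (-3 × 10) + (5 × 1)
```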
This formula is simple enough, but why is the dot product necessary, and when is it useful in coding? Well, one of the more common uses of the dot product is to find the angle between two vectors. In fact, the dot product can also be expressed as shown here:

A · B = ||A|| × ||B|| × cos(θ)

In other words, the dot product of A and B is equal to the magnitude of A times the magnitude of B times the cosine of theta (with theta being the angle between the two vectors A and B).
The two dot-product formulas can be derived from each other with trigonometry, but I’m happy not to follow that path and instead just operate on the following assumption: