Though I’ve been interested of late in the idea of random walks, I was introduced to the concept when, more than two decades ago, I stumbled across Conway’s Game of Life, the cellular automaton devised by John Conway in 1970. A cellular automaton is a grid of cells in which each cell has a value that depends on the values of its neighbours. The automaton simulates the evolution of the grid as the cells’ values change dynamically.
Langton’s ant, devised by the computer scientist Christopher Langton in 1986, was another popular cellular automaton and one of my favourites, too. One 2001 conference paper described it as “a simple … system with a surprisingly complex behaviour.” Here, a (virtual) ant starts off at some cell on the grid and moves step by step between the four neighbouring squares (diagonal squares aren’t accessible). Three rules govern its movement:
(i) A cell can be either black or white in colour;
(ii) If the square is white when the ant moves into it, the colour is flipped, and the ant turns 90º clockwise and moves forward;
(iii) If the square is black, the colour is flipped, and the ant turns 90º counter-clockwise and moves forward.
As the ant moves across the grid in this way, the first hundred or so steps produce a symmetric pattern before chaos ensues. For the next 9,900 or so steps, an image devoid of any pattern comes into view. But after around 10,000 steps, there’s magic: the ant suddenly settles into a repetitive 104-step pattern, building a ‘highway’ diagonally across the grid that it continues until the end of time. You can run your own simulation and check.
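Because the rules are deterministic, the whole run can be reproduced in a few lines. Here is a minimal sketch in Python; the coordinate conventions (a sparse set of black cells, the ant starting at the origin facing north) are my own choices, not part of the original formulation:

```python
def simulate_ant(steps):
    """Run Langton's ant for `steps` steps on an initially all-white grid."""
    black = set()            # cells currently black; every other cell is white
    x, y = 0, 0              # ant starts at the origin...
    dx, dy = 0, 1            # ...facing "north"
    for _ in range(steps):
        if (x, y) in black:
            black.discard((x, y))    # black square: flip to white...
            dx, dy = -dy, dx         # ...turn 90 degrees counter-clockwise
        else:
            black.add((x, y))        # white square: flip to black...
            dx, dy = dy, -dx         # ...turn 90 degrees clockwise
        x, y = x + dx, y + dy        # move one square forward
    return (x, y), black

position, black_cells = simulate_ant(11_000)
```

Comparing the ant's position at runs 104 steps apart, once it is past the 10,000-step mark, shows the fixed displacement of the repetitive 104-step pattern.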
The march of Langton’s ant before the repetitive portion has been described as a pseudorandom walk: a walk whose pattern appears random even though every step is fully determined by the rules. In a truly random walk, by contrast, the length of each step is fixed and the direction of each step is chosen at random from a fixed number of options.
If it sounds simple, it is, but you might be forgiven for thinking it’s only a mathematical flight of fancy. Random walks have applications in numerous areas, including econometrics, finance, biology, chemistry, and quantum physics.
Specific variants of the random walk behave in ways that closely match the properties of some complex system evolving in time. For example, in a Gaussian random walk, the direction of each step is random and the length of each step is sampled randomly from a Gaussian distribution (the classic example of a bell curve). Experts use the evolution of this walk to evaluate the risk exposure of investment portfolios.
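As a rough sketch (a toy construction of my own, not a portfolio-risk model), a two-dimensional Gaussian random walk can be simulated by drawing a uniformly random direction and a Gaussian-distributed step length at each step:

```python
import math
import random

def gaussian_walk(n_steps, seed=None):
    """2-D random walk: uniform random direction, Gaussian step length."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        angle = rng.uniform(0.0, 2.0 * math.pi)   # direction chosen at random
        length = abs(rng.gauss(0.0, 1.0))         # step length sampled from a Gaussian
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

path = gaussian_walk(500, seed=1)
```

Plotting the returned points traces out the walk's meandering path.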
The Lévy flight is a random walk with a small change: instead of the step length being determined by a random pick from the Gaussian distribution, it comes from a distribution with a heavier tail. One common example is the gamma distribution. It can be tweaked with two parameters called κ (kappa) and θ (theta) to produce different plots on a graph, all with the same general properties. In the examples shown below, focus on the orange line (κ = 2, θ = 2): it shows a gamma distribution whose tail is heavier than the Gaussian’s.
Put another way, the distribution produces mostly small values with the occasional large one. A Lévy flight is a random walk whose step lengths are sampled randomly from this distribution, and as a result it takes a few large steps and many small steps. Research has shown that the foraging paths of animals searching for scarce food can be modelled as Lévy flights: the large steps correspond to the long journeys between food sources located far apart, and the short steps to gathering the food spread over a small area at each source.
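The ‘few large, many small’ property is easy to see in numbers. This quick check (the 5× median cut-off for a ‘large’ step is an arbitrary choice of mine) samples step lengths from the gamma distribution with κ = 2 and θ = 2:

```python
import random
import statistics

rng = random.Random(42)
# Step lengths for a Levy-style flight, drawn from a gamma(kappa=2, theta=2) distribution
steps = [rng.gammavariate(2.0, 2.0) for _ in range(100_000)]

median = statistics.median(steps)
large = sum(1 for s in steps if s > 5 * median)   # steps more than 5x the typical one
print(f"median step: {median:.2f}")
print(f"largest step: {max(steps):.2f}")
print(f"fraction of large steps: {large / len(steps):.4%}")
```

Only a small fraction of the steps turn out to be large, yet the largest ones dwarf the median step.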
Perhaps the most famous ‘example’ of a random walk is Brownian motion, though it isn’t a perfect example. Brownian motion can describe, say, the path of a single atom over time through a gas of billions of atoms, and it is modelled using a Lévy process. Whereas a random walk proceeds in discrete steps, a Lévy process is continuous in time; in other respects the two are alike. The motion itself refers to the atom’s journey over some period of time as it frequently bumps into other atoms (how frequently depends on the gas’s density) and shifts its path in random ways.
Brownian motion in particular uses a type of Lévy process called the Wiener process, where the path evolves according to the following rules:
(i) Each increment of the process is independent of other (non-overlapping) increments;
(ii) How much the process changes over a period of time depends only on the duration of the period;
(iii) Increments in the process are randomly sampled from a Gaussian distribution;
(iv) The process has a statistical mean equal to zero;
(v) The process’s covariance between any two time points is equal to the smaller of the two variances at those points (variance denotes how far the value of a variable has spread out over time).
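These rules can be checked numerically. In the sketch below (the path counts and step sizes are arbitrary choices of mine), Wiener paths are built from independent Gaussian increments, and the sample mean, variance, and covariance are estimated across many runs:

```python
import random
import statistics

def wiener_path(n=100, rng=random):
    """A Wiener path on [0, 1] built from n independent Gaussian increments."""
    dt = 1.0 / n
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += rng.gauss(0.0, dt ** 0.5)   # rules (i)-(iii): independent Gaussian increments
        path.append(w)
    return path

rng = random.Random(7)
paths = [wiener_path(rng=rng) for _ in range(4000)]
w_half = [p[50] for p in paths]     # W(0.5) across many runs
w_one = [p[100] for p in paths]     # W(1.0) across many runs

mean_half = statistics.mean(w_half)       # rule (iv): should be near 0
var_half = statistics.variance(w_half)    # Var(W_t) = t, so near 0.5
mean_one = statistics.mean(w_one)
cov = sum((a - mean_half) * (b - mean_one)
          for a, b in zip(w_half, w_one)) / (len(paths) - 1)
# rule (v): Cov(W_0.5, W_1) should be near min(0.5, 1.0) = 0.5
```

With enough paths, the three estimates land near 0, 0.5, and 0.5 respectively.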
The path of the atom in the gas follows a Wiener process and is thus Brownian motion. The Wiener process has a wealth of applications across both the pure and the applied sciences. Just to name one: say there is a small particle — e.g. an ion — trapped in a cell. It can’t escape the cell except through a small opening. The Wiener process, which models the Brownian motion of the ion through the cell, can be used to estimate the average amount of time the ion will need to reach the opening and escape.
Like random walks, Wiener processes can also be tweaked to produce models for different conditions. One example is the Brownian bridge, which arises when a Wiener process is pinned to fixed values at both the start and the end of an interval. A different, more visual way to put this is in terms of a graph: say the process starts at a fixed point at time 0 and must arrive at another fixed point at time 1. A Wiener process on the interval [0, 1] conditioned this way is a ‘bridge’ that connects the two endpoints along a path that otherwise follows Brownian motion.
By analogy, a random bridge in the interval [0, 1] will be a random walk based on the Gaussian distribution between 0 and 1; a gamma random bridge in the interval [0, 1] will be a random walk based on the gamma distribution between 0 and 1; and so on. (This said, a Wiener process and a random walk are distinct: a Wiener process will play out the same way if the underlying grid is rotated by an arbitrary angle but a random walk won’t.)
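One standard construction turns an ordinary Wiener path into a Brownian bridge by subtracting a straight-line correction, B_t = W_t - t * W_1, which pins both ends at zero (the step count and seed below are my own arbitrary choices):

```python
import random

def brownian_bridge(n=1000, seed=0):
    """Brownian bridge on [0, 1]: a Wiener path pinned to 0 at both ends."""
    rng = random.Random(seed)
    dt = 1.0 / n
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += rng.gauss(0.0, dt ** 0.5)   # ordinary Wiener increments
        path.append(w)
    w1 = path[-1]
    # Pin the endpoint: B_t = W_t - t * W_1, so that B_0 = B_1 = 0
    return [w_t - (i / n) * w1 for i, w_t in enumerate(path)]

bridge = brownian_bridge()
```

Between the two pinned endpoints, the path still wanders like Brownian motion.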
It’s a wonder of mathematics that it can discern recurring behaviours in such highly noisy systems and with its finite tools distil from them glimpses into their future. According to a 2020 preprint paper on arXiv, “Various random-walk-related models can be applied in different fields, which is of great significance to downstream tasks such as link prediction, recommendation, computer vision, semi-supervised learning, and network embedding.”
Random walks turn up out in the universe as well, when some basic conditions are met. In 2004, researchers estimated the Brownian velocity of the supermassive black hole at the Milky Way’s centre to be less than 1 km/s.
For a more mathematical example, in a ‘conventional’ random walk, after N steps the walker’s distance from the origin will be comparable to the square root of N. Further, it takes on average S² steps to travel a distance of S from the origin. For a long time, researchers believed this so-called S → S² scaling law could model almost any process in which a physical entity was moving from one location to another. The law captured the notion of how much a given distribution would spread out over time.
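The scaling law is easy to check empirically. This sketch (the sample sizes are my own arbitrary choices) measures the root-mean-square distance of a one-dimensional ±1 walk from the origin; quadrupling the number of steps should roughly double that distance:

```python
import random
import statistics

def rms_distance(n_steps, n_walks=2000, seed=0):
    """Root-mean-square distance from the origin after n_steps of a +/-1 walk."""
    rng = random.Random(seed)
    squared = []
    for _ in range(n_walks):
        pos = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        squared.append(pos * pos)
    return statistics.mean(squared) ** 0.5

r_100 = rms_distance(100)   # should be close to sqrt(100) = 10
r_400 = rms_distance(400)   # should be close to sqrt(400) = 20
```

The ratio r_400 / r_100 comes out near 2, as the square-root law predicts.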
One of the earliest deviations from this law came from fractals, where there is an S → S^β law but with β always greater than 2, implying that the walk needs more steps to cover a given distance and so spreads out more slowly than an ordinary random walk. (Factoid: a random walk on a fractal also gives rise to a fractal.)
For yet another example, random walks have a famously deep connection to resistor networks: electric circuits where a bunch of resistors are connected in some configuration, plus a power source and a grounding. Researchers have found that the effective resistance between any two points in the circuit is proportional to the ‘commute time’ of a random walk: the average time the walker takes to travel from one point to the other and back again.
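A small simulation illustrates the idea. Take a chain of three nodes joined by two unit resistors: the effective resistance between the two ends is 2 ohms, the network has m = 2 edges, and a classic result says the expected commute time between the ends equals 2 * m * R = 8 steps. The sketch below (the graph and trial counts are my own choices) estimates it by direct simulation:

```python
import random

def commute_time(adj, u, v, trials=20000, seed=0):
    """Average number of steps for a random walk to go from u to v and back to u."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        node, target, steps = u, v, 0
        while True:
            node = rng.choice(adj[node])   # hop to a uniformly random neighbour
            steps += 1
            if node == target:
                if target == u:            # round trip complete
                    break
                target = u                 # reached v; now head back to u
        total += steps
    return total / trials

# Chain 0 - 1 - 2 with two unit resistors: R(0, 2) = 2, m = 2 edges,
# so the expected commute time between the ends is 2 * m * R = 8
adj = {0: [1], 1: [0, 2], 2: [1]}
ct = commute_time(adj, 0, 2)
```

The same check works for any pair: between nodes 0 and 1, R = 1, so the expected commute time is 2 * 2 * 1 = 4 steps.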
The resistor model speaks to a beautiful thing random walks bring to light: the influence an underlying structure exerts on a stochastic process — one governed entirely by chance — playing out on that structure, its inherent geometry imposing unexpected limits on the randomness and keeping it from wheeling away into chaos. At each step the random walk makes an unpredictable choice but the big picture in which these steps are individual strokes is a picture of predictability, to some degree at least.
Flip this conclusion on its head and an even more captivating notion emerges: that though two random walks may resemble each other in their statistical properties, they can still be very different journeys.