Fibonacci spirals and the mind of plants
16 Dec 2018 · Last Edit: 21 Dec 2018

Plants are bimbos. Beautiful, but stupid – at least that’s what most people think. But in this post I want to talk about why I think plants are much more intelligent than people give them credit for. You’ve probably seen the pretty spiral patterns that emerge in many plants when leaves or seeds grow outwards (see figure below). I find them very interesting, and I want to convince you that plants really do need intelligence to be able to create them.
*(figure)*^{1}
A well-known fun fact about these spirals is their connection to the Fibonacci numbers. These are the numbers of the Fibonacci sequence $1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, \dots$ (every number is the sum of the two preceding ones). In plants, these spirals almost always^{2} come in a Fibonacci number of arms (see the figure below).
*(figure)*^{3}
The spirals can be found everywhere in the plant universe, so it can hardly be a coincidence that they always appear in Fibonacci multiplicities; there must be some reason why evolution favours this arrangement over others. To understand why the seeds are not arranged in a different way, let’s try to visualize what other patterns would look like. Regardless of whether it’s leaves, seeds or something else, the spirals only appear when single bits are created at some center point and then pushed outwards by newer bits growing from the same center. Strictly speaking, they don’t grow from a point, but are created on the edge of the meristem, at a fixed distance from the center point. So, within this constraint, what other patterns could the plant produce? If each seed were simply created at a random point on the edge of the meristem and then grew outwards at a fixed speed, it would look something like this:
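As a rough sketch of what such a random arrangement could look like (my own illustration, not code from the original notebook): each seed is born at a uniformly random angle on the meristem edge and is then pushed outwards at a fixed speed by the seeds born after it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
# Each seed is born at a uniformly random angle on the edge of the meristem (radius 1)
angles = rng.uniform(0, 2 * np.pi, size=n)
# Seed i has since been pushed outwards one unit per newer seed, so the
# oldest seed (i = 0) is farthest out and the newest sits at radius 1
radii = 1.0 + np.arange(n)[::-1]

# Cartesian coordinates, ready for e.g. plt.scatter(xs, ys)
xs = radii * np.cos(angles)
ys = radii * np.sin(angles)
```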
It looks kind of weird, and definitely not as pretty as the spirals. But prettiness is probably not what evolution was selecting for. I think it is reasonable to assume some kind of energy efficiency to be the driving force behind this phenomenon, as is the case in many (or maybe all^{4}) cases of natural selection. But why would it be more efficient to arrange the seeds or leaves in this special manner?
One theory for why leaves arranged in Fibonacci spirals are beneficial is that this is the arrangement that lets the most sunlight hit each leaf: the leaves are placed at such an angle to each other that a newer leaf never fully blocks the light of an older one. From the plant’s perspective, the simplest thing (much easier than the random arrangement above) would be to have everything grow out in one place, like this:
Here, every new leaf would block all the light of the previous one. To fix this problem, the plant does what is probably the second simplest thing, and quite possibly the best solution: it puts each new seed at a fixed angle relative to the last one, and that’s all:
The fixed angle in this case, the one that plants actually use and that results in the Fibonacci numbers, is the so-called golden angle: [ \theta = \frac{2\pi}{\phi^2} = \pi (3 - \sqrt{5}) \approx 137.5^{\circ}, ] with $\phi = \frac{1 + \sqrt{5}}{2}$ being the golden ratio. By the way, this has nothing to do with this kind of spiral, although that one is also closely related to the golden ratio and the Fibonacci numbers! If we zoom out a bit, the spirals become obvious:
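This fixed-angle rule is simple enough to sketch in a few lines (my own sketch of the rule described above, not the notebook’s code); it also lets us check numerically that the two closed forms of the golden angle agree:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2           # golden ratio
golden_angle = 2 * np.pi / phi**2    # ~2.39996 rad, ~137.5 degrees
# phi^2 = phi + 1 = (3 + sqrt 5)/2, hence 2*pi/phi^2 = pi*(3 - sqrt 5):
assert abs(golden_angle - np.pi * (3 - np.sqrt(5))) < 1e-12

# Seed k is placed at k times the golden angle; older seeds sit farther out
n = 500
k = np.arange(n)
theta = (k * golden_angle) % (2 * np.pi)
r = n - k
# plt.scatter(r * np.cos(theta), r * np.sin(theta)) reveals the spiral arms
```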
Now let’s just accept that this is the angle the plant “wants” to produce. How does it do that? Try to put yourself in the shoes of a plant; or better, of a plant-design god. If you wanted to build a mechanical flower, how would you do it? Probably with some kind of mechanism that turns by $\theta$ every time it places a new seed or leaf. For this, there needs to be some kind of memory where the value of $\theta$ and the previous angle are stored (at least approximately, given their irrationality). Then there needs to be some kind of processing unit that adds $\theta$ to the previous angle and converts the sum into electrical signals that turn a motor to the right position to place the new seed.
This seems far too complicated. Is the value of $\theta$ really engraved into these plants’ DNA? Do plants have some neural-network-like computation system that adds the angle from the DNA to the angle of the previous seed and then uses the result to decide where the next seed should go? This does not scream “energy efficiency” as hoped.
It would be very convenient if this magically perfect angle could somehow be deduced from the laws of physics or maths, without the need for complicated computation! Then the plant would just have to find the right law and let it do the calculation. If a simple mathematical formula can be found that, through iteration, produces the golden angle, in a way that could realistically resemble what goes on in actual plants, then I think it is safe to say that this formula, and not some memory or processing unit, is the key to the spiral patterns formed on plants.
To find that formula or algorithm, let’s first clarify what the result should be: the angle at which the newly placed seed has the largest possible space around it. This seemed like a very straightforward problem (or so I thought).
The simplest way I could think of to get this angle is to take a weighted mean of the angles of all previous seeds, weighted so that the newest angles count the most, and then invert that angle (add $\pi$) to get the spot opposite this mean angle. The new seed is placed exactly there. A naïve (and completely wrong) idea is to pretend angles were ordinary numbers and use the following equation for the weighted mean: \begin{align} \theta_{j+1} &= \frac{\sum_{i=1}^{j}{\theta_i i}}{\sum_{i=1}^j{i}} + \pi \mod 2 \pi \\ &= \frac{2}{j(j+1)} \sum_{i=1}^{j}{\theta_i i} + \pi \mod 2 \pi \end{align}
Let’s write some Python code to test whether this works:
import numpy as np
import matplotlib.pyplot as plt

N = 5
theta = np.zeros(N)
for i in range(1, N):
    mean = 2 / (i * (i + 1)) * (theta[:i] * (np.r_[:i] + 1)).sum()  # Calculate the weighted "mean" angle
    theta[i] = (mean + np.pi) % (2*np.pi)  # Invert angle (add pi) and wrap mod 2pi
print(theta)
fig = plt.figure(tight_layout=True)
ax = fig.add_subplot(111, projection='polar')
ax.plot(theta[:-1], N - np.r_[:N-1], 'o', label="Seeds")
ax.plot(theta[-1], 1, 'rx', label="Next Seed")
plt.legend()
The output reads:
[0. 3.14159265 5.23598776 0.52359878 5.55014702]
The algorithm quickly returns wrong results. For example, if it were to calculate the mean of the two angles $1$° and $359$° (equally weighted), its answer would be $\frac{1^\circ + 359^\circ}{2} = 180$°. Inverting that gives $360$°, i.e. $0$°: right between the two seeds, and about as far from optimal as possible. The algorithm fails to account for the modular nature of angles.
So how can a proper mean angle be calculated? Because we are essentially dealing with 2D coordinates, the electrical engineer in me is already screaming “complex numbers”. Why not take the mean of the seeds as complex numbers and then take the angle of the result as the mean angle? The mean can be weighted by simply scaling the magnitudes of the complex numbers:
\begin{align} \theta_{j+1} &= \arg\left(\frac{\sum_{i=1}^j{s_i i}}{\sum_{i=1}^j{i}}\right) + \pi \mod 2 \pi \\&= \arg\left(\sum_{i=1}^j{s_i i}\right) + \pi \mod 2 \pi, \end{align}
$s_i$ being the complex coordinate of seed $i$.
This actually works quite well:
import numpy as np
from cmath import rect

seeds = np.array([rect(3, np.pi/2), rect(2, 6)])
sum_ = (seeds * (np.r_[:2] + 1)).sum()
mean = sum_ * 2 / (2 * (2 + 1))  # Multiplying a complex number by a real number does not change the argument
next_seed = rect(1, np.angle(sum_) + np.pi)
fig = plt.figure(tight_layout=True)
ax = fig.add_subplot(111, projection='polar')
ax.plot(np.angle(seeds), np.abs(seeds), "o", label="Seeds")
ax.plot(np.angle(sum_), np.abs(sum_), "x", label="Weighted sum")
ax.plot(np.angle(mean), np.abs(mean), "x", label="Weighted mean")
ax.plot(np.angle(next_seed), np.abs(next_seed), "x", label="Next seed")
plt.legend()
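As a quick standalone sanity check (my addition, not from the original notebook), the complex mean handles the 1°/359° example that broke the naïve mean of angles:

```python
import numpy as np

a = np.deg2rad([1.0, 359.0])  # two seeds, almost at the same spot

naive = np.rad2deg(a.mean())                           # 180 degrees: clearly wrong
circular = np.rad2deg(np.angle(np.exp(1j * a).sum()))  # ~0 degrees: the true mean direction

print(naive, circular)
```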
When testing it out, one obvious problem with this algorithm becomes clear. If we place the first seed at $1 + 0i$, the mean is the same ($1 + 0i$), giving an optimal angle of $180$°. But the weighted mean of these two seeds then still lies on the real axis, so the next “optimal” angle is $0$° again, and so on. To try to break this symmetry, we’ll add some random noise, amplified by a parameter $x$, when placing each new seed:
import numpy as np
from cmath import rect

def complex_seeds(x):
    seeds = np.array([rect(1, 0)])
    for i in range(1, 500):
        seeds += seeds / np.abs(seeds)  # Growing: push every seed one unit outwards
        sum_ = (seeds * (np.r_[:i] + 1)).sum()
        next_seed = rect(1, np.angle(sum_) + np.pi + np.random.rand()*x)
        seeds = np.hstack((seeds, next_seed))
    return seeds
fig = plt.figure(figsize=(12, 12), dpi=200, tight_layout=True)
seeds = complex_seeds(0)
ax = fig.add_subplot(221, projection='polar')
ax.plot(np.angle(seeds), np.abs(seeds), ".", label="x = 0")
ax.legend()
seeds = complex_seeds(1e-3)
ax = fig.add_subplot(222, projection='polar')
ax.plot(np.angle(seeds), np.abs(seeds), ".", label="x = 0.001")
ax.legend()
seeds = complex_seeds(1e-2)
ax = fig.add_subplot(223, projection='polar')
ax.plot(np.angle(seeds), np.abs(seeds), ".", label="x = 0.01")
ax.legend()
seeds = complex_seeds(1e-1)
ax = fig.add_subplot(224, projection='polar')
ax.plot(np.angle(seeds), np.abs(seeds), ".", label="x = 0.1")
ax.legend()
OK, this looks awful. It’s time for a new plan. Because we know the seed gets placed on the edge of the meristem at the spot with the most room^{5}, a good idea may be to construct a sort of energy function $E(\theta)$ along the meristem edge that we then try to minimize. This function must be high at angle $\theta$ if there is a seed close to the point $r e^{i \theta}$, with $r = 1$ being the radius of the meristem. I thought a sum of Gaussian bell curves could do exactly what we want: each bell corresponds to one existing seed, scaled by how far away that seed is:
[ E(\theta) = \frac{1}{N^a} \sum_{i=1}^{N}{i^a e^{-(N - i + 1)^b (\theta - \theta_i)^2}}, ]
using the parameter $a$ to scale the height of each bell, so that it gets smaller the farther away the seed is (low $i$), and $b$ to change the width of the bell, so that it gets wider the closer the seed is to the stem.
In Python, this function can be implemented as follows:
import numpy as np

x = np.linspace(0, 2*np.pi, 1000)  # Minimization resolution = 1000 (take the best of 1000 candidate thetas)

def energy(seeds, x, a, b):
    N = len(seeds)
    i = np.r_['1,2', 1 : N+1].T
    phi = np.angle(seeds)[np.newaxis].T
    # Copy the whole function shifted by +-2pi to emulate modularity
    E = 1/N**a * (np.sum(i**a * np.exp(-(N - i + 1)**b * (x - phi)**2), axis=0)
                  + np.sum(i**a * np.exp(-(N - i + 1)**b * (x + 2*np.pi - phi)**2), axis=0)
                  + np.sum(i**a * np.exp(-(N - i + 1)**b * (x - 2*np.pi - phi)**2), axis=0))
    return E
Let’s see how it performs on the same points where we used the complex-mean method before (using $a = 1$ and $b = 1$):
seeds = np.array([rect(3, np.pi/2), rect(2, 6)])
E = energy(seeds, x, a=1, b=1)
angle = x[np.argmin(E)]  # Minimize by just taking the coordinate of the lowest energy value (resolution = 1000)
next_seed = rect(1, angle)
print(f"Next angle: {(np.angle(next_seed) + 2*np.pi) % (2*np.pi)} rad")
fig = plt.figure(figsize=(12, 12), dpi=200, tight_layout=True)
ax = fig.add_subplot(211, projection='polar')
ax.plot(np.angle(seeds), np.abs(seeds), "o", label="Seeds")
ax.plot(x, E, label="Energy")
ax.plot(np.angle(next_seed), np.abs(next_seed), "x", label="Next seed")
ax = fig.add_subplot(212)
ax.set_xlabel("$\\theta$")
ax.plot((np.angle(seeds) + 2*np.pi) % (2*np.pi), np.abs(seeds), "o")
ax.plot(x, E)
ax.plot((np.angle(next_seed) + 2*np.pi) % (2*np.pi), np.abs(next_seed), "x")
fig.legend(borderaxespad=4)
This prints Next angle: 3.377447957913351 rad
and returns the following figure:
This actually looks quite promising. But we still have to determine the parameters $a$ and $b$ that make the angle converge to the golden angle after enough iterations. This is easily done by experimentation. The key is that the function should describe a stable system, so that iteration drives it towards the golden angle. The parameters shouldn’t need to be too precise, because then they would again have to be “saved” somewhere, which is counterproductive to our preconditions. If the energy function is stable enough, it should still produce the golden angle even if the parameters are changed by a small amount.
So let’s try all combinations $a, b \in \{0, 1, 2, 3, 4\}$ and see what the resulting flower would look like:
def seeds(a, b):
    seeds = np.array([rect(1, 0)])
    for i in range(200):
        seeds += seeds / np.abs(seeds)  # Growing
        angle = x[np.argmin(energy(seeds, x, a, b))]
        seeds = np.hstack((seeds, rect(1, angle)))
    return seeds
fig = plt.figure(figsize=(12, 12), dpi=200, tight_layout=True)
for a in range(5):
    for b in range(5):
        s = seeds(a, b)
        ax = fig.add_subplot(5, 5, 5*a + b + 1, projection='polar')
        ax.yaxis.set_major_formatter(plt.NullFormatter())
        ax.xaxis.set_major_formatter(plt.NullFormatter())
        if b == 0:
            ax.annotate(f"$a = {a}$",
                        xy=(0, .9),
                        xycoords='axes fraction',
                        size=14,
                        bbox=dict(boxstyle="round", fc=(1, 1, 1), alpha=.5))
        if a == 0:
            ax.annotate(f"$b = {b}$",
                        xy=(.7, .9),
                        xycoords='axes fraction',
                        size=14,
                        bbox=dict(boxstyle="round", fc=(1, 1, 1), alpha=.5))
        ax.plot(np.angle(s), np.abs(s), ".")
These definitely look much better than the previous attempt. Especially $b = 1$ and $b = 2$ with a high value for $a$ look promising. Let’s also visualize the final energy function that decides where the next seed would go:
fig = plt.figure(figsize=(12, 12), dpi=200, tight_layout=True)
for a in range(5):
    for b in range(5):
        s = seeds(a, b)
        ax = fig.add_subplot(5, 5, 5*a + b + 1, projection='polar')
        ax.xaxis.set_major_formatter(plt.NullFormatter())
        ax.plot(x, energy(s, x, a, b))
Lastly, let’s also plot the relative angles between each seed and the previous seed, and see if they approach the golden angle (the red line is the golden angle):
fig, ax = plt.subplots(5, 5, sharex=True, sharey=True, figsize=(12, 12), dpi=200, tight_layout=True)
for a in range(5):
    for b in range(5):
        s = seeds(a, b)
        angles = (np.diff(np.angle(s)) + 2*np.pi) % (2*np.pi)
        ax[a, b].set_xlabel("Iteration")
        ax[a, b].set_ylabel("$\\theta$")
        ax[a, b].axhline(np.pi*(3 - 5**.5), color='r')
        ax[a, b].plot(angles)
It seems that if $b$ is around 1 and $a$ isn’t too small, the minimum-energy angle approaches the golden angle after enough iteration steps. So let’s use this to find the golden angle!
from textwrap import dedent as dd

a = 4
b = 1
seeds = np.array([rect(1, 0)])
for i in range(2, 500):
    seeds += seeds / np.abs(seeds)
    angle = x[np.argmin(energy(seeds, x, a=a, b=b))]
    seeds = np.hstack((seeds, rect(1, angle)))
E = energy(seeds, x, a=a, b=b)  # Energy at the end of the simulation
angles = (np.diff(np.angle(seeds)) + 2*np.pi) % (2*np.pi)
fa = angles[int(len(angles)/3):].mean()  # Final angle: mean angle of the last 2/3 of the simulation
print(dd(f"""\
    Final angle: {fa:.2f}
    Error: {abs(fa - np.pi*(3 - 5**.5)) / np.pi*(3 - 5**.5) * 100 :.2f}%"""))
fig = plt.figure(figsize=(12, 12), dpi=200, tight_layout=True)
ax = fig.add_subplot(221, projection='polar')
ax.set_title("Seeds\n")
ax.plot(np.angle(seeds), np.abs(seeds), ".")
ax = fig.add_subplot(222, projection='polar')
ax.set_title("Energy at the end (t=500)\n")
ax.plot(x, E)
ax = fig.add_subplot(212)
ax.set_ylabel("$\\theta$")
ax.set_xlabel("Iteration (t)")
ax.axhline(np.pi*(3 - 5**.5), color='r', label="Golden angle")
ax.plot(angles, label="$\\theta_t - \\theta_{t-1} \\mod 2\\pi$")
ax.legend()
Final angle: 2.47
Error: 1.59%
The value is okay, but as seen in the simulation, the system is not actually very stable: towards the end it deviates significantly from the golden angle. There seems to be more than one stable point, and as a result we see some bifurcation. In the plots above, where the minimum-energy angle is plotted against the iteration, we can see that the choice of $b$ has a great impact on which stable point we land on. For $b = 1$, it looks like the angle approaches the golden angle, but it actually settles at an angle slightly larger, which is why the spirals for $b = 1$ look different from what we expect. For $b = 3$ or $b = 4$, the angle finds very stable points far away from the golden angle, so those spirals also look very different from ones created with the golden angle.
It seems we are not done yet! Maybe it would be better to have a function without parameters: the less information needed, the better! Because I still think Gaussian functions are an elegant way to model how much room there is at a point on the meristem, I kept the energy function as a sum of Gaussians. This is the second attempt at an energy function:
[ E(\theta) = \frac{1}{N} \sum_{i=0}^{N-1}{\frac{1}{1 - \frac{i}{N}} e^{-(\theta - \theta_i)^2}} ]
In the previous function, $b$ specified how the width of the bell should change with the distance from the meristem. I thought it would be a good idea to have the bell be broader if the seed was near the stem, but conversely this also meant that it would get much narrower as the seed moved farther away. That does not seem like it should really be the case, so I simply removed the whole width term from the exponential. The bells are still scaled so that seeds farther from the origin (lower $i$) are smaller, but without the parameter $a$, and in a hyperbolic manner instead of a polynomial one.
Let’s check how well it performs on the same two points that, so far, only the naïve mean of angles couldn’t handle:
# Instead of np.linspace: concentrate points around x0 to provide a better resolution in the meaningful region
def curvedspace(a, b, x0, num=50):
    s = np.linspace(np.cbrt(a - x0), np.cbrt(b - x0), num)
    return s**3 + x0

x = curvedspace(0, 2*np.pi, np.pi*(3 - 5**.5), 1000)

def energy(seeds, x):
    N = len(seeds)
    Z = N
    i = np.r_['0,2', N-Z:N].T
    phi = (np.angle(seeds)[-Z:][np.newaxis].T + 2*np.pi) % (2*np.pi)
    s = (1 - i/N)
    E = (np.sum(np.exp(-(x - phi)**2) / s, axis=0)
         + np.sum(np.exp(-(x + 2*np.pi - phi)**2) / s, axis=0)
         + np.sum(np.exp(-(x + 4*np.pi - phi)**2) / s, axis=0)
         + np.sum(np.exp(-(x - 2*np.pi - phi)**2) / s, axis=0)
         + np.sum(np.exp(-(x - 4*np.pi - phi)**2) / s, axis=0)) / N
    return E
seeds = np.array([rect(3, np.pi/2), rect(2, 6)])
E = energy(seeds, x)
angle = x[np.argmin(E)]  # Minimize by just taking the coordinate of the lowest energy value (resolution = 1000)
next_seed = rect(1, angle)
print(f"Next angle: {(np.angle(next_seed) + 2*np.pi) % (2*np.pi)} rad")
fig = plt.figure(figsize=(12, 12), dpi=200, tight_layout=True)
ax = fig.add_subplot(211, projection='polar')
ax.plot(np.angle(seeds), np.abs(seeds), "o", label="Seeds")
ax.plot(x, E, label="Energy")
ax.plot(np.angle(next_seed), np.abs(next_seed), "x", label="Next seed")
ax = fig.add_subplot(212)
ax.set_xlabel("$\\theta$")
ax.plot((np.angle(seeds) + 2*np.pi) % (2*np.pi), np.abs(seeds), "o")
ax.plot(x, E)
ax.plot((np.angle(next_seed) + 2*np.pi) % (2*np.pi), np.abs(next_seed), "x")
fig.legend(borderaxespad=4)
This prints Next angle: 3.6987772510159513 rad
and returns the following figure:
Alright, that is pretty much what we would have expected, and not too different from the last attempt. Because there are no parameters, let’s directly run a simulation and see if we get the golden angle!
from matplotlib.animation import FuncAnimation

a = np.full(500, np.nan)
i = np.arange(500)
seeds = np.array([rect(1, 0)])
fig = plt.figure(figsize=(12, 12), dpi=200, tight_layout=True)
grid = plt.GridSpec(7, 2, wspace=0.4, hspace=0.3)
ax0 = fig.add_subplot(grid[:3, 0], projection='polar')
ax0.set_ylim(0, 500)
ax0.set_title("Seeds\n")
ln0, = ax0.plot(np.angle(seeds), np.abs(seeds), ".")
ax1 = fig.add_subplot(grid[:3, 1], projection='polar')
ax1.set_ylim(0, 3)
ax1.set_title("Energy\n")
ln1, = ax1.plot(x, 3*np.ones(x.shape))
ax3 = fig.add_subplot(grid[3, :])
ax3.axis('off')
ax2 = fig.add_subplot(grid[4:, :])
ax2.set_ylim(-.5, 2*np.pi + .5)
ax2.axhline(np.pi*(3 - 5**.5), color='r', label="Golden angle")
ax2.set_ylabel("$\\theta$")
ax2.set_xlabel("Iteration (t)")
ln2, = ax2.plot(i, np.zeros(500), label="$\\theta_t - \\theta_{t-1} \\mod 2\\pi$")
ax2.legend()
def draw(t):
    global a, seeds
    E = np.exp(1j * x) * energy(seeds, x)
    seeds += seeds / np.abs(seeds)  # Growing
    angle = x[np.argmin(energy(seeds, x))]
    seeds = np.hstack((seeds, rect(1, angle)))
    ln0.set_data(np.angle(seeds), np.abs(seeds))
    ln1.set_data(np.angle(E), np.abs(E))
    angle = (angle - np.angle(seeds)[-2] + 2*np.pi) % (2*np.pi)
    average = ((np.diff(np.angle(seeds)) + 2*np.pi) % (2*np.pi)).mean()
    ax3.clear()
    ax3.axis('off')
    ax3.text(.5, 0, dd(f"""\
        Minimum Energy Angle: {angle:12.9f}
        Average Angle: {average:12.9f}
        Golden Angle: {np.pi*(3 - 5**.5):12.9f}
        Difference: {abs(average - np.pi*(3 - 5**.5)) / np.pi*(3 - 5**.5) * 100:12.9f}%
        π ≈ {average/(3 - 5**.5):12.9f}"""),
             ha='center',
             family='monospace')
    a[t] = angle
    ln2.set_data(i, a)
    return ln0, ln1, ln2
anim = FuncAnimation(fig, draw, 500, interval=50, blit=True)
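For reference, the whole parameter-free scheme also fits into a compact, non-animated sketch (my own condensed version of the simulation above; the uniform grid, the iteration count and the variable names are arbitrary choices of mine):

```python
import numpy as np
from cmath import rect

x = np.linspace(0, 2 * np.pi, 1000)  # candidate angles for the new seed

def energy(seeds, x):
    """Parameter-free energy: one Gaussian per seed, newer seeds weigh more."""
    N = len(seeds)
    i = np.arange(N)[:, None]
    phi = np.angle(seeds)[:, None] % (2 * np.pi)
    w = 1 / (1 - i / N)  # hyperbolic weight: largest for the newest seed (i = N - 1)
    # Repeat the Gaussians shifted by +-2pi to emulate the angle's modularity
    return sum(np.sum(w * np.exp(-(x + s * 2 * np.pi - phi)**2), axis=0)
               for s in (-1, 0, 1)) / N

seeds = np.array([rect(1, 0)])
for _ in range(200):
    seeds += seeds / np.abs(seeds)          # every existing seed drifts outwards
    angle = x[np.argmin(energy(seeds, x))]  # place the new seed where energy is lowest
    seeds = np.hstack((seeds, rect(1, angle)))

rel = (np.diff(np.angle(seeds)) + 2 * np.pi) % (2 * np.pi)
final = rel[len(rel) // 3:].mean()  # should settle near the golden angle, ~2.400 rad
```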
It works!^{6} The difference between the average minimum-energy angle and the golden angle keeps getting smaller, and there do not seem to be the same bifurcation issues as before. Because the only two mathematical functions used here are the Gaussian $e^{-x^2}$ and the hyperbolic decay $\frac{1}{x}$, and these functions are not uncommon in natural phenomena, it seems possible that this, or a similar function, is how plants “calculate” the golden angle. But how does the mere existence of such a formula help the plant?
One theory for how the seeds know in which direction to grow, i.e. where the most space is, has to do with the distribution of growth hormones on the meristem. This distribution can probably be modelled by a function similar to our energy function, so that the place with the most growth hormone (or where the energy function has its lowest value) is the place where the next seed begins to form. Of course there is no mathematical formula engraved in the plant’s DNA, and there are no processing cells that calculate this distribution; it simply arises naturally from how the hormones are consumed. If a seed forms, it uses up much of the growth hormone at that spot on the meristem, so the next seed will form somewhere else, wherever the largest amount of growth hormone remains. If you were to plot this distribution, it would probably look very similar to the (inverted) energy function we have found.
Of course the plant does not know this. Evolution does not know this. Evolution just selected, by trial and error, for energy-efficient spirals, and its solution happens to be something that can also be expressed mathematically. The plant doesn’t care; it just used evolution as a tool to maximize efficiency, and evolution came up with a solution. But now we can measure this solution: if we take a plant and measure the angle between two leaves, we get the golden angle, without having had to calculate it ourselves. The plant didn’t calculate it either, because it has no processing abilities. But someone had to calculate it! The calculator here was evolution by natural selection. Evolution can, through the simple process of a binary answer (reproduce or don’t reproduce) and constant iteration, calculate the golden angle (or anything else, for that matter). So even if plants are still pretty dumb, they have access to a cross-generational super-brain.
I was inspired to write this post by ViHart’s amazing videos on these spirals^{7}. If you found this post even a tiny bit interesting, do check them out, they’re much better.
Download the Jupyter Notebook file for this post
Footnotes

- Swinton, J., Ochu, E., & MSI Turing’s Sunflower Consortium (2016). Novel Fibonacci and non-Fibonacci structure in the sunflower: results of a citizen science experiment. *Royal Society Open Science*, 3(5), 160091. ↩
- Douady, S., & Couder, Y. (1996). Phyllotaxis as a dynamical self organizing process. Part I: The spiral modes resulting from time-periodic iterations. *Journal of Theoretical Biology*, 178(3), 255-273. ↩
- If there is an issue with playing the animation, try this link. ↩
- ViHart’s videos on the topic: link
  There’s also a great video by Mathologer here ↩