One of the major challenges in building a system that can increase in complexity as it runs is figuring out how to transfer complex structures in a lower level space into simple structures in a higher level space while still maintaining the essential qualities that the complex lower level structure represents. Without this jump in abstraction, the system will simply asymptotically approach a maximum level of complexity determined by available resources (energy, computation, memory, time, …).

I’ve shown how particles can be modeled to form surfaces and surface patches by endowing them with local surface properties à la differential geometry and using that information to filter how particles connect to each other. There still remains a question as to what happens next. Given a million particles, the system (as described in a previous post) can be run in realtime on a decent laptop, which will likely produce beautiful images and structures, but still only reach a particular level of complexity whose products are all qualitatively the same. Even with 100 million particles, the complexity threshold will only rise a level or two above where it currently sits. To really jump ahead, a completely different spatial logic is required, one that operates on a higher, more abstract level.

The differential particle system deals with local surface properties and local connectivity but can’t say anything about the overall topological properties of the surfaces it’s producing such as genus or 2-manifold-ness. These kinds of properties are more efficiently calculated at the coarser level of the surface patch. Thus, if each surface patch produced by the differential particles is itself turned into a “topological” particle, we can start to figure out how the next higher level might work.

The differential particles exist in the usual Euclidean 3D space and interact with each other according to principles derived from this underlying geometry based on distance, local coordinate frames, and local surface properties. They are discrete entities representing continuous properties. Topological particles on the other hand are discrete entities representing discrete properties. They interact with each other in a completely different manner than differential particles. In the space that topological particles exist in, distance is backgrounded in favor of patterns of connectivity.

In previous posts, I’ve talked about how hyperbolic tilings (particularly the (5,4) tiling) can be used to represent 2-manifold structures. The structure of hyperbolic tilings provides a natural scaffolding for constructing surfaces from a topological perspective for a number of reasons. First, all genus-2 and higher surfaces can be decomposed into pentagonal tiles. Second, the structure that comes for free with hyperbolic tilings enables the efficient encoding of the algorithms governing how the topological particles interact.

The particles themselves live on the vertices of the hyperbolic tiling and can only interact with other particles on edges that they are directly linked to. As a result, topological particles can only interact with four other particles. Since topological particles are abstractions of surface patches, this means that surface patches should themselves be quadrilateral in nature. But what kind of information do topological particles contain aside from connectivity, and what rules govern their interactions?

The essential property we’re after with the topological particles is genus. Once we can talk about genus in an efficient and hierarchically consistent manner, we can start building the next level up from topological particles. The genus of a network is determined by its Euler characteristic, which is given by \(\chi = V - E + F\). Genus is related to the Euler characteristic by \(\chi = 2 - 2g\).
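As a quick sanity check, this bookkeeping is simple enough to sketch in a few lines of Python (the function names are mine, purely for illustration):

```python
def euler_characteristic(V, E, F):
    """Euler characteristic of a closed surface mesh: chi = V - E + F."""
    return V - E + F

def genus(V, E, F):
    """Genus of a closed orientable surface: chi = 2 - 2g, so g = (2 - chi) / 2."""
    return (2 - euler_characteristic(V, E, F)) // 2

# A cube is topologically a sphere: 8 vertices, 12 edges, 6 faces -> genus 0.
assert genus(8, 12, 6) == 0

# A torus built from 4 quadrilaterals: 4 vertices, 8 edges, 4 faces -> genus 1.
assert genus(4, 8, 4) == 1
```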

If we look at a torus, which has a genus of 1 and an Euler characteristic of 0, we see it can be divided up into 4 quadrilaterals. The quadrilaterals on the outer shell have positive curvature while the inner ones have both positive and negative curvature in orthogonal directions. By comparison, a sphere has positive curvature everywhere and a genus of 0. The fundamental difference between a torus and a sphere is that some surface patches on the torus have negative curvature, inducing a change in genus. For higher genus structures, it’s the patterns of connectivity between positive and negative curvature regions that determine genus. All of this points to the idea that the topological particles simply need to know whether they represent surface patches with positive or negative curvature in their two principal directions.

To sum up, topological particles can connect to 4 other particles along 2 principal directions. Each principal direction has associated with it a value indicating either positive or negative curvature. For simplicity, we will simply assign +1 for positive curvature and -1 for negative curvature.

Now, if we think about how these particles interact at a very basic level, we can further constrain things by requiring particles to connect only if the principal directions linked by an edge have the same principal curvature. While this limits a lot of possibilities in principle, it doesn’t actually limit what kinds of topological structures we can make, and it vastly simplifies the way the particles operate, to great benefit. Under this constraint as well, there can be no particles with negative curvature in both directions since they would form an unbounded surface whose normals are solely on the interior. Thus, we can only have particles with (+1, +1), (+1, -1), and (-1, +1) curvatures.
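A minimal sketch of this constraint in Python might look like the following; the class and function names are hypothetical, and only the sign-matching rule described above is encoded:

```python
from dataclasses import dataclass, field

# (-1, -1) is excluded: it would form an unbounded, inward-facing surface.
ALLOWED_CURVATURES = {(+1, +1), (+1, -1), (-1, +1)}

@dataclass
class TopologicalParticle:
    # Curvature sign along each of the two principal directions (u, v).
    curvature: tuple
    # Up to four neighbors: two per principal direction (0 = u, 1 = v).
    neighbors: dict = field(default_factory=dict)

    def __post_init__(self):
        assert self.curvature in ALLOWED_CURVATURES

def can_connect(a, a_dir, b, b_dir):
    """An edge may link two particles only if the principal directions it
    joins carry the same curvature sign."""
    return a.curvature[a_dir] == b.curvature[b_dir]

p = TopologicalParticle((+1, +1))
q = TopologicalParticle((+1, -1))
assert can_connect(p, 0, q, 0)       # +1 meets +1: allowed
assert not can_connect(p, 1, q, 1)   # +1 meets -1: rejected
```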

The next concern is how a pattern of topological particles accumulates and transforms into different structures. The most basic structures are particle pairs, of which there are 4 kinds: (+1, +1) x (+1, +1), (+1, +1) x (+1, -1), and two different forms of (+1, -1) x (+1, -1). These are represented in the image above with black indicating positive curvature and red negative curvature.

When placed into the hyperbolic tiling, the particles look like the above image, which has 4 particles. In the above image, it appears that there are only 3 edges linking the particles, which raises the question as to how the surface will close since the tiling doesn’t wrap around but continues infinitely in all directions. The answer comes from looking at the particle pairs. Whenever an edge is connected, the edge opposite it (in the same principal direction) must also be connected. In the diagram above, the edges on the outside wrap around to the next particle on the same hyperbolic line. It’s easiest to think of the tiles on a particular hyperbolic line as cyclical, with only one cycle shown in the image. The one constraint is that there must be at least 2 particles since a particle cannot connect to itself.

After many, many hours of playing with topological particle tiles with the goal of having the particles interact in a way that is not deterministic but does favor an increase in genus, I came up with the following rules:

- When a particle is placed in the hyperbolic tiling, it marks its neighbors with a score indicating how strong a preference that location has for attaching to the next particle. Neighbors with negative curvature are considered the most “reactive” and have a score of 2 while positive curvature gives a score of 1. Scoring is only done for unconnected edges. Edges that connect back around are left out.
- All particles are attached to the highest scoring (most reactive) site that they can be matched to. If there are sites which tie, one is chosen at random.
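The two rules above can be sketched as a small scoring function; this is an illustrative reading of the rules, not the actual implementation:

```python
import random

def score_site(curvature_sign):
    """Negative-curvature sites are the most reactive (score 2 vs. 1)."""
    return 2 if curvature_sign == -1 else 1

def pick_attachment_site(open_sites, rng=random):
    """open_sites: list of (site_id, curvature_sign) for unconnected edges.
    Returns the highest-scoring site, breaking ties at random."""
    if not open_sites:
        return None
    best = max(score_site(sign) for _, sign in open_sites)
    candidates = [sid for sid, sign in open_sites if score_site(sign) == best]
    return rng.choice(candidates)

sites = [("a", +1), ("b", -1), ("c", +1)]
assert pick_attachment_site(sites) == "b"  # the lone negative-curvature site wins
```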

There are a lot of possibilities for further refinement, including using the location of the generators of genus and measures of local complexity to tweak the scoring of reactivity. Also, there need to be some rules for determining how things break apart. Likely this will involve a combination of overall size and genus generators. The only one of these I’ve explored so far is the measure of complexity.

The diagram above shows how this would work. On the left is a 4-tile setup; on the right, a 6-tile setup. The blue lines indicate how the left-most tile/particle gets reflected through the tiling. The pattern on the left is highly regular. If the same diagram were made for each particle, they would be identical, indicating that the combinatorics around the tiling are symmetric. On the right, once the two extra tiles are added, one of the branches projecting from the left-most tile is cut off, generating an asymmetrical combinatorial pattern. One measure of complexity or asymmetry would be to look at a neighborhood around existing particles and determine their relative combinatorics, which could then be used as an input to determine reactivity. The input could be used to regulate symmetry versus asymmetry.

So what does an example interaction between two networks of topological particles look like? Above are two pairs of topological particles with their reaction sites marked by circles. Each pair of particles has four open sites that can react with each other. Since they all have the same level of reactivity, one is chosen at random. One possible result is below:


These are clips of how the particle systems described in previous posts move in space. It’s all very raw, but I wanted to get something out there. The larger structures suffer from convergence problems creating oscillations, twisting, foldings, and other distortions while the small networks converge almost immediately into their designated shapes. These appear around the 6:05 mark.


In order to build a complex and dynamic spatial system, there need to be entities operating at multiple temporal and spatial scales. As mentioned in the previous post, the extremes of the spatial scale are local and global objects, which are typically particles and fields. It’s straightforward to build something where particles are affected by fields, but without intermediate structure formation, there is no opportunity for more complex dynamics to emerge. In morphogenetic processes, cells differentiate, cluster, striate, extend, etc. as interconnected and interwoven groups. Such patterns can be represented by meshes or networks of particles connected with edges that impart some form of behavioral change in the particles. The question is, how can the formation of such structures be described generally in terms of the patterns of formation but be given precise and specific behaviors that differentiate types? In other words, what are the primitive events and conditions that can describe the formation of a wide range of network structures?

The primitive operation here is that of connecting two particles together. (image) What is the condition that enables two particles to associate with each other? For simplicity, let’s assume that all particles are of the same kind and that any particle can connect to any other particle under the proper conditions. Let’s further specify that the network structures that emerge should be surfaces and not volumetric in nature.

There isn’t much literature out there on how to connect particles together. Probably the best place to look is point cloud research, which describes how data sets made up of particles in space can be turned into surfaces. The problem with a lot of point cloud techniques is that they are often expensive and rely on optimization techniques that don’t map well to a real-time dynamic system. Typically, the first instinct when thinking about this problem is to make the capacity for particles to connect purely distance-based. The problem with this approach is that particles can easily crowd around each other, producing overlapping edges and degenerate mesh faces. Preferably, the particles would have a fairly regular spacing appropriate for the curvature of the surface.

To correct for this problem, a little bit of extra structure is needed to filter the conditions under which connections can be made. For the case of surfaces, what’s needed is some kind of device that tracks connection density and will only allow connections if the density criteria are met. This device could equally be called valency if it’s thought about like chemical bonds.

If we define the connection density as determining how many connections can be made within a given interval and set the number to 1, we can represent the connection criteria by a set of intervals around the particle in the particle’s plane. Each particle has a normal and two planar directions that uniquely determine its local coordinate system. If we say each particle can have up to 5 connections, that sets a minimum spacing of 72 degrees between each connection. If a connection is attempted and it’s within 72 degrees of another connection, it will be rejected.
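As a sketch, the 72-degree filter might look like the following, working with directions already projected into the particle's tangent plane (a simplifying assumption; the real system works with 3D local coordinate frames):

```python
import math

MAX_CONNECTIONS = 5
MIN_ANGLE = 2 * math.pi / MAX_CONNECTIONS  # 72 degrees

def angle_in_plane(direction):
    """Angle of a 2D direction (x, y) in the particle's tangent plane."""
    return math.atan2(direction[1], direction[0]) % (2 * math.pi)

def accepts_connection(existing_angles, new_direction):
    """Reject a candidate connection if it falls within 72 degrees of an
    existing one, measured in the particle's tangent plane."""
    if len(existing_angles) >= MAX_CONNECTIONS:
        return False
    theta = angle_in_plane(new_direction)
    for phi in existing_angles:
        diff = abs(theta - phi) % (2 * math.pi)
        if min(diff, 2 * math.pi - diff) < MIN_ANGLE:
            return False
    return True

angles = [0.0, math.radians(90)]
# 180 degrees away from both existing connections: accepted.
assert accepts_connection(angles, (-1.0, 0.0))
# Only 30 degrees from an existing connection: rejected.
assert not accepts_connection(angles, (math.cos(math.radians(30)), math.sin(math.radians(30))))
```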

Once connections are made, new properties and forces are applied to the particles. When a connection is formed, the particles on either end are likely not in the ideal locations to form a coherent surface, so we use the connections between particles to move them into some sort of optimized position and orientation. For example, in the simple case of forming a spherical shape, the particles will act on each other through the connections in order to form a surface of constant positive curvature.

Probably the most widespread technique for optimizing the structure of a mesh network is some form of spring or spring-electric simulation, where each edge is a spring and, in the spring-electric case, each node applies a repulsive electric field to every other node. While this can give some interesting results, there is a certain uniformity to the shapes. Instead, each particle is given principal curvature values that define the local surface patch the particle represents. When particles are connected to each other, they attempt to place and orient the connected particle according to the local surface patch.

The particles started out unconnected but connected themselves according to the connection filtering process described above. The light blue lines are the particle normals and the pink dots are the target locations the particles are trying to reach.

Since each particle in this simulation has a curvature of (1, 1), they will attempt to curve into a sphere-like shape.

The corner and then edge particles move faster because they have anisotropic constraints.

Currently, self-intersection is not prevented. When the edges come within range of each other, they form new connections as seen in the middle of this image.

The new connections further warp the form as it “inflates”.

A view from the inside. The pink lines show target movements. The (black) particles on one end of the line are trying to reach the pink dots on the other end.

There are four broad categories of spatial processes

- Purely Global
- Global <-> Local
- Local <-> Local
- Purely Local

and two general categories of spatial primitives

- Field
- Particle

Of course, the spatial primitives can be organized into composite structures such as meshes where particles are linked together in a network.

Fields represent global information over a defined N-dimensional range of space while particles represent local information located at an infinitesimal point in space. Information moves from a local to global context when particles interact with a field. For example, stigmergic behavior can be generated by having particles leave traces in a field that represents pheromone concentrations over an environment. In stigmergic systems, information flows in a positive feedback loop from local behavior to global concentrations, which in turn affect local behavior.
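A minimal stigmergy sketch, assuming a simple 2D grid field with uniform evaporation (the class and parameter names here are hypothetical):

```python
import numpy as np

class PheromoneField:
    """A 2D grid field that particles write into (deposit) and read from
    (sample), closing the local-to-global-to-local feedback loop."""
    def __init__(self, size, evaporation=0.99):
        self.grid = np.zeros((size, size))
        self.evaporation = evaporation

    def deposit(self, x, y, amount=1.0):
        # A particle writes local information into the global field.
        self.grid[int(y) % self.grid.shape[0], int(x) % self.grid.shape[1]] += amount

    def sample(self, x, y):
        # A particle reads the aggregate (intensive) state at its location.
        return self.grid[int(y) % self.grid.shape[0], int(x) % self.grid.shape[1]]

    def step(self):
        # Global decay: the field forgets which particles contributed what.
        self.grid *= self.evaporation

field = PheromoneField(64)
field.deposit(10, 10)
field.step()
assert field.sample(10, 10) == 0.99
```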

The simplest kinds of information flows are those that are internal to an object be it global as in the evolution of a field according to its dynamical equations or local as in the movement of a particle according to kinematics. In both cases, the state of the spatial system is affected but not in a way that adds any new information or structure. Existing information is simply transformed and displaced.

The next level of complexity in terms of spatially directed pattern formation occurs when information transfers between local and global contexts. In such instances, particles are essentially reading from and writing into fields and altering their internal state accordingly. When information is written into a field by a particle, it loses its specificity (that of being attached to a particular entity) and becomes part of an *intensive* process as opposed to an *extensive* one. While limiting in the sense that once the information is in the field the actions that contributed to the state of the field can no longer be distinguished, it does open up an efficient communication channel between many local objects since each can read the aggregate state of the local entities as it is mediated by the field. Communication between individual local objects is not possible through a field, however, because the number of channels required is prohibitive. Instead, this has to be done through local to local flows.

Local to local interactions provide the most fine-grained spatial information flows but are also the most computationally intensive. If there are N objects in a space, there are N(N-1)/2 pairwise channels of interaction. For local to global flows there are only N channels of interaction. While incredibly useful in terms of composing complex spatial patterns, the computational cost of local to local interactions can make them impractical.
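The difference in channel counts is easy to make concrete:

```python
def pairwise_channels(n):
    """Every local object paired with every other: n(n-1)/2 channels."""
    return n * (n - 1) // 2

def field_channels(n):
    """Local-to-global: each object talks only to the field."""
    return n

# At a thousand objects, the gap is already three orders of magnitude.
assert pairwise_channels(1000) == 499_500
assert field_channels(1000) == 1000
```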

The classic example of local to local interactions is the N-body problem, which occurs in gravitational and electric field simulations among others. Both of these problems deal with particles generating fields that affect every other particle. To simplify the computational complexity of the problem, the local fields of each particle are accumulated into a hierarchy of fields that can then be precisely applied to each particle, bringing the total runtime down from O(N^2) to O(N log N) or better. The Fast Multipole Method is the best-known technique of this kind.

While having every particle able to communicate efficiently with every other particle might be desirable in the ideal case, practically speaking it’s often good enough to have particles communicate with each other within a particular range or with the nearest N particles. Such a system can be implemented by binning particles according to their location and then operating over the binned particles to realize local to local communication channels. The fastest algorithm I’ve found for spatial binning and neighborhood calculation is Query Sphere Indexing.

Query sphere indexing is a form of spatial hashing whereby a spatial location is given an integer value that can be used as a key to look up the position quickly from a table. In the query sphere technique, the space is divided into a finite number of discrete cells and each cell is given an integer hash value based on its coordinate. Any particle falling within a cell is binned to it and associated with the cell’s integer hash value.

The speed of the query sphere technique comes from precomputing as much information as possible about neighborhoods. When the data structure is created, a table of relative coordinates in terms of hash indices is calculated for all the possible distances and ordered according to nearest to furthest offset. For example, if you want to know how many objects are within radius 1 of a point in space, the offset lookup table will provide a list of cell offsets that can be added to the query location efficiently and all of the cells within radius 1 retrieved for processing.
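A rough Python sketch of the idea, with none of the integer and bit-manipulation optimizations of the actual Query Sphere algorithm, might look like this:

```python
import math
from collections import defaultdict

class SpatialHash:
    """Cell-based binning in the spirit of query sphere indexing: positions
    are hashed into integer cells, and neighborhood queries walk a
    precomputed table of cell offsets sorted from nearest to furthest."""
    def __init__(self, cell_size, max_radius_cells=2):
        self.cell_size = cell_size
        self.bins = defaultdict(list)
        # Precompute relative cell offsets, ordered nearest to furthest.
        self.offsets = sorted(
            ((dx, dy, dz)
             for dx in range(-max_radius_cells, max_radius_cells + 1)
             for dy in range(-max_radius_cells, max_radius_cells + 1)
             for dz in range(-max_radius_cells, max_radius_cells + 1)),
            key=lambda o: o[0]**2 + o[1]**2 + o[2]**2)

    def _cell(self, p):
        return tuple(int(math.floor(c / self.cell_size)) for c in p)

    def insert(self, p):
        self.bins[self._cell(p)].append(p)

    def query(self, p, radius):
        cx, cy, cz = self._cell(p)
        r2 = radius * radius
        reach2 = 3 * math.ceil(radius / self.cell_size) ** 2
        found = []
        for dx, dy, dz in self.offsets:
            if dx*dx + dy*dy + dz*dz > reach2:
                break  # offsets are sorted, so everything past here is too far
            for q in self.bins.get((cx + dx, cy + dy, cz + dz), ()):
                if sum((a - b)**2 for a, b in zip(p, q)) <= r2:
                    found.append(q)
        return found

h = SpatialHash(1.0)
h.insert((0.5, 0.5, 0.5))
h.insert((0.6, 0.5, 0.5))
h.insert((5.0, 5.0, 5.0))
assert len(h.query((0.5, 0.5, 0.5), 0.5)) == 2  # the far point is never visited
```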

The calculations in the Query Sphere paper are incredibly efficient since they’re all operating on integers using addition and bit manipulation. In a version of the algorithm I’ve implemented, for example, I can hash 302,500 (550×550) points at 33fps and query thousands of neighborhoods for nearest neighbors. As a result, local to local communication is incredibly efficient and, used judiciously, can add a lot more detail and complex pattern formation to the evolution of a spatial system. For instance, once local to local communication becomes efficient, it becomes possible to describe the formation of aggregate structures such as meshes in a straightforward manner, which can in turn introduce new behaviors and trajectories into the system.

The cubes in the image above are the bins holding particles. The pink dots are query locations with lines connecting the nearest 16 neighbors within a particular radius.

The Transversion Automaton consists of a central unit sphere (in grey) whose center is a point on the surface being generated and two surface coordinate (UV-coordinate) generating lines. At each step, the automaton inverts the lines through the central unit sphere to generate the UV osculating circles. These circles represent the instantaneous curvature at the surface point in the principal directions. Next, the automaton is diffused through space along both principal circles and the locations of the generating lines are adjusted to properly account for the curvature values at the next location.

Above are frames capturing the automaton’s motion around one of the cycles of the torus. The total movement through the nine frames is half a rotation (180 degrees), over which the curvature of the direction given by the red line changes from 1 (spherical) on the outside to -1 (hyperbolic) on the inside. Notice that in frame 5, the circle generated by inverting the red line into the central unit sphere is actually a line, or a circle of infinite radius. This is because at the top of the torus, the lateral curvature is zero or Euclidean, which makes sense since for the curvature to change from 1 to -1 it has to pass through 0.

These images show different points of view on the set of circles generated in one cycle of the torus. Notice that there are two lines indicating the two locations of Euclidean curvature. These images were generated simply by accumulating all of the circles and drawing them together.

Overall, this iteration of the Transversion Automaton is much simpler to control and move through space. The logic of its structure is now super clear. One of the aspects I appreciate most is how its internal structure dictates its own transformation. In other words, it’s a self-modifying spatial automaton. All it does is read off a couple of curvature values and it’s able to move itself and adjust its relative weights accordingly. The next step is to figure out how to get the Transversion Automaton to operate in a panelizing mode where it traces out surface patches instead of entire cycles. This is the only way it will be able to generate more complex surfaces.

In a regular hyperbolic tiling, each tile is exactly the same with the tile located at the center of the Poincaré disc called the fundamental polygon. By virtue of the expansive nature of hyperbolic space, there are an infinite number of regular tilings of the hyperbolic plane since the polygons can simply be resized until the rate at which they fill space matches the rate at which the hyperbolic plane expands. Thus, the key value to calculate when generating regular hyperbolic tilings is the radius of the fundamental polygon.

The radius is determined by the angle of parallelism of the edges of the fundamental polygon. The angle of parallelism essentially says how close to Euclidean space a particular tiling is: the closer it is to 90 degrees, the closer to Euclidean the tiling is. For example, in the image above, the fundamental polygon on the far left is for the (3, 7) tiling. Notice how the radius of the pink circle is large and the arc that’s visible approaches a straight line. The angle that the line tangent to vertex D makes with the horizontal axis is the angle of parallelism. As D moves toward the top of the Poincaré disc circle, this angle approaches 90 degrees. If we changed the tiling to (3, 6), we would in fact have a 90 degree angle of parallelism and a Euclidean tiling. As the fundamental polygon increases in number of sides and number of polygons meeting at a vertex, the angle of parallelism decreases toward 0 as both values approach infinity.

The diagrams above show how the angle of parallelism varies with the number of polygons incident to a vertex (left) and the number of sides (right). Calculating the location of the fundamental polygon’s vertices is equivalent to calculating the location of the circles that the polygon’s arcs lie on. Since they’re all equidistant from the origin, we just need to figure out how far away the circle is. To do this, we use right triangles AEF and ABF along with the fact that angles \(\angle{}BAF = \frac{\pi}{p}\) and \(\angle{}ABF = \frac{\pi}{q}\) are known to derive the fact that:

- \(\sin(\angle{}EBF) = \frac{\overline{EF}}{\overline{BF}}\) and \(\sin(\angle{}BAF) = \frac{\overline{EF}}{\overline{AF}}\)
- \(1+\overline{BF}^2 = \overline{AF}^2\)
- \(\angle{}EBF = \frac{\pi}{2} - \frac{\pi}{q}\)
- \(\angle{}BAF = \frac{\pi}{p}\)
- \(\sqrt{\overline{AF}^2-1} \cdot \sin(\frac{\pi}{2} - \frac{\pi}{q}) = \overline{AF} \cdot \sin(\frac{\pi}{p})\)
- \(\overline{AF} = \sqrt{\frac{\sin^2(\frac{\pi}{2}- \frac{\pi}{q})}{\sin^2(\frac{\pi}{2}- \frac{\pi}{q}) - \sin^2(\frac{\pi}{p})}}\)
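The final expression can be evaluated directly. A small sketch in Python, valid whenever \(\frac{1}{p} + \frac{1}{q} < \frac{1}{2}\), i.e. for genuinely hyperbolic tilings (at (3, 6) the denominator vanishes, matching the Euclidean limit described above):

```python
import math

def circle_center_distance(p, q):
    """Distance AF from the disc center to the center of the circle carrying
    one arc of the fundamental polygon of the (p, q) hyperbolic tiling,
    following the derivation above."""
    a = math.sin(math.pi / 2 - math.pi / q)  # equals cos(pi/q)
    b = math.sin(math.pi / p)
    return math.sqrt(a * a / (a * a - b * b))

# The (5, 4) tiling used for the topological particles:
print(circle_center_distance(5, 4))
```

The distance always comes out greater than 1, as it must: the circles carrying the polygon's arcs are orthogonal to the unit disc and so are centered outside it.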

The motivation for constructing a transversion-based spatial automaton was to construct a surface in 3D Euclidean space from as little local geometric information as possible. It was the automaton’s role to expand the information into a fully specified surface of arbitrary complexity. Essentially, this is a problem of differential geometry with a little bit of analysis (particularly complex harmonic functions) thrown in.

The original design was essentially a straight translation into Geometric Algebra (GA) elements of the standard approaches differential geometry uses to analyze local surface properties, particularly curvature. The idea was to have two spheres, each representing one of the two principal surface coordinates, and to relate the size of the spheres to instantaneous curvature. By inverting a differential surface element into the two spheres, the Transversion Machine incrementally constructed patches of the surface.

While the design worked, it was unwieldy and suffered from some interpolation issues that led to closed paths not actually closing like they should. The primary issue was figuring out how to update the automaton on each step. Essentially, this involved rotating and translating the entire machine to be centered at the new surface point in order to continue extending the surface, while simultaneously warping the two spheres with a transversion operation such that their radii corresponded to the new local curvature values. In the end, there were just too many loosely connected transformations to keep in sync, so the mechanism felt cobbled together and inelegant. Furthermore, it wasn’t at all clear how the input curvature values parameterizing the Transversion Machine mapped to its internal transformations.

In the back of my mind, I kept thinking that there had to be another way to organize things such that it just worked, every element was tightly integrated, and the input parameters mapped in a straightforward manner to the controls of the machine. Fortunately, inspiration struck while I was working on understanding hyperbolic translations.

Hyperbolic translations are just like Euclidean translations in that they move along a line. The difference is that lines in hyperbolic space are actually Euclidean circles when the Poincaré model of hyperbolic geometry is used. The one exception is for lines that pass through the center of the unit disc. These hyperbolic lines are circles of infinite radius and thus also Euclidean lines. I then wondered how this fact could be used to redesign the Transversion Machine since it also needs to be able to smoothly move between rounds, such as circles and spheres, and flats, such as lines and planes. This kind of behavior allows it to smoothly model changes in curvature from positive to flat to negative. The only question is how to make such a mapping.

One of the unique features of GA is that many objects come in both real and imaginary forms. For instance, spheres are called **real** if they have a positive radius and **imaginary** if they have a negative radius. While it may seem strange for a sphere to have a negative radius, it has profound consequences for the interpretation of a particular arrangement of geometric objects. For instance, when two spheres intersect, they generate a real circle. If they don’t intersect, they generate an imaginary circle. Importantly, the magnitude of the imaginary circle indicates how far apart the two spheres are.

Since the curvature of a surface varies from positive through zero to negative values, I wondered if some way of transforming spheres between real and imaginary states could be used to implement the Transversion Machine. As a first test, I placed a line tangent to both a real and imaginary sphere colocated in space and with radii of the same magnitude but of opposite sign. The two spheres are given by \(S_{real} = o - \frac{\infty}{2}\) and \(S_{imaginary} = o + \frac{\infty}{2}\).
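These two spheres can be checked numerically. A sketch using plain numpy with the conformal metric (the basis ordering and sign conventions here are my assumptions; GA libraries differ on these):

```python
import numpy as np

# Basis order (e1, e2, e3, o, inf) with metric e_i.e_i = 1, o.o = inf.inf = 0,
# and o.inf = -1: the conformal model of 3D Euclidean space.
G = np.zeros((5, 5))
G[0, 0] = G[1, 1] = G[2, 2] = 1.0
G[3, 4] = G[4, 3] = -1.0

def inner(a, b):
    return a @ G @ b

o   = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
inf = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

S_real      = o - inf / 2  # real unit sphere at the origin
S_imaginary = o + inf / 2  # imaginary unit sphere at the origin

# For a dual sphere S = C - (r^2/2) inf, the inner product S.S recovers r^2.
assert inner(S_real, S_real) == 1.0         # r^2 = +1: real
assert inner(S_imaginary, S_imaginary) == -1.0  # r^2 = -1: imaginary
```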

On the left is a single instance of inverting a line by a sphere. The pink circle is generated by the real sphere while the blue is generated by the imaginary sphere. The inversion is described by \(Circle = SLS^{-1}\). What’s curious is how the imaginary and real spheres result in exactly the same shape but on opposite sides of the center of the inversion sphere.

When I saw this image, my first thought was that I could map the inversion sphere’s radius to surface curvature with the convenient mapping of imaginary spheres describing negative curvature (hyperbolic patches). The only question then is how to smoothly vary the spheres.

First, however, I did the easy thing and varied the position of the line. The image on the right shows what happens when the line is varied from the center of the inversion sphere to the right. When the line is in the center, the inversion is an identity operation mapping the line to itself. As it moves to the right, the resulting circles vary in radius, smoothly shrinking to a point as the line moves infinitely far away. Notice that the line stays exclusively on the right-hand side of the inversion sphere’s center. If it were on the left, the real and imaginary circles would swap sides too. This is the answer to the question of how to vary the size of the curvature spheres: we can simply move a plane (line) in space to generate the appropriately sized sphere. What’s really nice about this setup is that it’s basically a spatial slider for controlling curvature. It generates exactly the kind of geometry we need and takes precisely the kind of input we have, a scalar curvature value. We can map this curvature value to displacement along an axis running through the center of the inversion sphere. As a further bonus, the current point on the surface that the Transversion Machine is operating from is located at the center of the inversion sphere.

In the end, we don’t actually need the imaginary sphere, but it’s nice to know how it operates with respect to spherical inversion for future problems.

The second problem in the original design was interpolating the motion of the Transversion Machine according to how the surface bends at each point. The transformation describing the motion is a rigid body motion through space that rotates the machine around a circle. The transformation is given by \(V = exp(\infty\rfloor{}\frac{L}{2})\).

The image above shows the action of the circular rigid body motion. The diagrams are slightly skewed to show the axis of the circle since it comes out of the page. The axis is a dual line, which is a bivector inducing the motions in the images. Of course, the motion needed by the Transversion Machine is not arbitrary. It needs to be scaled such that it moves the Transversion Machine around the circle in a fixed number of steps. To do this, we have to scale the circle such that its norm is equal to the distance travelled by each step around the circle. The following algorithm generates a motion around the circle in N steps from a line \(L\) and an inversion sphere \(S\):

- Generate the circle that will move the Transversion Machine: \(C = SLS^{-1}\).
- Normalize the circle: \(C_n = \frac{C}{\sqrt{C*C}}\).
- Calculate the versor representing the transformation and scale by the length of the circular arc: \(V = exp(-\infty\rfloor{}\frac{C_n}{2}\frac{2\pi{}R}{N})\).
- Transform the Transversion Machine inversion sphere and coordinate axis line: \(S = VSV^{-1}, L = VLV^{-1}\).

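To make the stepping behavior concrete without a full conformal GA implementation, here is a plain linear-algebra stand-in (an assumption of this sketch, not the versor machinery above): rotating a point about the circle's axis by \(\frac{2\pi}{N}\) per step traverses the circle in exactly N equal steps.

```python
import numpy as np

def rotation_about_axis(axis, angle):
    # Rodrigues' formula: rotation matrix for a given axis and angle.
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def circular_motion_steps(center, axis, start, N):
    # Move a point around the circle defined by (center, axis) in N equal steps.
    R = rotation_about_axis(axis, 2 * np.pi / N)
    c = np.asarray(center, dtype=float)
    p = np.asarray(start, dtype=float)
    path = [p]
    for _ in range(N):
        p = c + R @ (p - c)
        path.append(p)
    return path

# Eight equal steps around a unit circle in the e1-e2 plane.
path = circular_motion_steps(center=[0, 0, 0], axis=[0, 0, 1],
                             start=[1, 0, 0], N=8)
```

After N steps the point returns to its start, and each step covers the same arc, which is exactly the normalization the versor form has to achieve.
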
Interpolating the state of the automaton across the tiling is subject to the constraint that any given arc through the tiling represents a complete cycle around some part of the surface and must therefore be a harmonic function of the transversion state.

Ideally, the interpolation scheme chosen will evenly sample the space over which the interpolation is performed in order to describe the most surface with the fewest interpolation points. In the case of the transversion automaton, the interpolation space is hyperbolic. As a result, our familiar Euclidean intuition about what constitutes evenly spaced samples doesn’t apply. The most straightforward way to interpolate hyperbolically is to work with a mapping of hyperbolic space into Euclidean space and develop the geometry of interpolation through that mapping. In this case, the mapping is the Poincaré disc model of the hyperbolic plane.

In the conformal model of Geometric Algebra (GA), the usual Euclidean axes are augmented with two additional axes representing the point at infinity and the point at the origin. In other words, the point at infinity is explicitly represented in the model of Euclidean space. As a result, new symmetries appear that enable exotic transformations smoothly interpolating between circles and lines or spheres and planes, since lines are essentially circles of infinite radius and planes are spheres of infinite radius. Furthermore, translations and rotations collapse into a single representation since translations are really just rotations about the infinite point.

In GA, the particular representation of infinity is fungible, and depending on which representation is chosen, a particular geometry will arise. For Euclidean geometry, infinity is taken as a point. For spherical and hyperbolic geometry, infinity is typically taken as the unit sphere, where the only difference is that spherical geometry is given by the **imaginary dual unit sphere** and hyperbolic geometry by the **real dual unit sphere**. Imaginary spheres have a negative squared radius while real spheres have a positive one. In the Poincaré disc model of hyperbolic space, infinity is likewise represented by a unit sphere in 3D and by a unit circle in 2D. The beauty of all this is that all of our previously acquired knowledge about working with spatial transformations in the conformal model of Euclidean space still applies to hyperbolic space. All that needs to be done is to swap the usual \(\infty\) with \(o - \frac{\infty}{2}\).

For example, given two points \(p\) and \(q\), the line through them is given by \(L = p\wedge{}q\wedge{}\infty\). The hyperbolic line through them is given by \(L = p\wedge{}q\wedge{}(o - \frac{\infty}{2})\). Of course, the hyperbolic line is not a line in the Euclidean sense but instead a circular arc.

The reason for this difference is that hyperbolic space expands as you move away from the center, so the mapping to Euclidean space squishes it toward the unit circle, bending the line into a circle.

Translations in hyperbolic space also occur along circular arcs and, from a Euclidean point of view, move quickly through the center of the disc and slowly toward the perimeter. In Euclidean space, translations along lines are described by \(exp(-\infty{}\rfloor{}\frac{L}{2})\), with the equivalent hyperbolic expression being \(exp(-e\rfloor{}\frac{L}{2})\) where \(e = o - \frac{\infty}{2}\).

The real trick with hyperbolic translations is getting them into a form that is useful for interpolation. Essentially, we want to calculate a hyperbolic translation that will move between two points in a fixed number of steps. The form of the translation given above won’t do this because the line is not normalized and thus scales the distance covered by the translation by an arbitrary factor. To calculate a hyperbolic translation that moves between two points in N steps, the term \(-e\rfloor{}L\) has to be normalized and then scaled by half the hyperbolic distance between the points, divided by the number of steps N.

Hyperbolic distance can be calculated in GA by first normalizing the given points in terms of hyperbolic space. The condition for normalization is \(-e\cdot{}p = -1\). The distance between two points is then \(d(p, q) = acosh(1 - \frac{-p}{e\cdot{}p}\cdot{}\frac{-q}{e\cdot{}q})\). The algorithm for computing the interpolating translation is:

- Construct a translation bivector between \(p\) and \(q\) as \(T = e\rfloor{}(p\wedge{}q\wedge{}e)\).
- Normalize it \(T_n = \frac{T}{\sqrt{T*T}}\).
- Calculate the distance between \(p\) and \(q\) using \(d(p, q) = acosh(1 - \frac{-p}{e\cdot{}p}\cdot{}\frac{-q}{e\cdot{}q})\).
- Scale the normalized bivector and exponentiate it \(V = exp(T_n\frac{d}{2N})\) where N is the number of steps to take between the points.

V is then the versor used to translate \(p\) into \(q\) over N steps.

In the image above, the edges of the pentagon are drawn by interpolating between the vertices in 20 steps. The pink points are midpoints while the blue points divide the edges into quarters. The pink and blue points were calculated with a hyperbolic lerp constructed from the hyperbolic translation algorithm.

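The same N-step hyperbolic translation can be sketched without GA by working directly in the Poincaré disc with complex coordinates (an equivalent formulation, not the versor implementation used here): Möbius-translate \(p\) to the origin, where geodesics through the origin are Euclidean segments traversed with tanh spacing, then map back.

```python
from math import atanh, tanh

def mobius_to_origin(p, z):
    # Mobius transformation of the disc sending p to 0.
    return (z - p) / (1 - p.conjugate() * z)

def mobius_from_origin(p, w):
    # Inverse transformation sending 0 back to p.
    return (w + p) / (1 + p.conjugate() * w)

def hyperbolic_distance(p, q):
    # Poincare-disc distance between two points in the unit disc.
    return 2 * atanh(abs(mobius_to_origin(p, q)))

def hyperbolic_lerp(p, q, t):
    # Geodesic interpolation: move p to the origin, where the geodesic to
    # q' is a Euclidean segment traversed with tanh spacing, then map back.
    w = mobius_to_origin(p, q)
    r = abs(w)
    if r == 0:
        return p
    w_t = w / r * tanh(t * atanh(r))
    return mobius_from_origin(p, w_t)

# A 20-step walk from p to q: equal hyperbolic distance per step.
p, q, N = 0.1 + 0.2j, 0.7 - 0.1j, 20
steps = [hyperbolic_lerp(p, q, k / N) for k in range(N + 1)]
```

The midpoint (t = 0.5) sits at equal hyperbolic distance from both endpoints, which is the property the pink points in the pentagon image rely on.
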
The Meet operation in Geometric Algebra (GA) calculates the intersection of two subspaces. In set theoretical terms, the intersection is given by the relationship \(M = A\cap{}B\) where M is the intersection (or Meet). Frequently in GA papers and books, you’ll find the elegant looking equation \(M = A\cap{}B = (B^{*}\wedge{}A^{*})^{*}\) where \(A^{*}\) indicates the dual of A. Unfortunately, calculating Meet is not so simple.

Related to the intersection of two subspaces, we can define a second quantity, the difference between their union and intersection, which is called the symmetric difference or delta product. Using this product, the intersection can be rewritten as \(A\cap{}B = \frac{A + B - A\Delta{}B}{2}\). In Geometric Algebra, the delta product of blades is defined as the highest grade part of their geometric product: \(A\Delta{}B = \langle{AB}\rangle{}_{max}\). The grade operation in the delta product stems from the way the geometric product automatically eliminates dependent factors. Furthermore, the grade of the Meet can equally be formulated in terms of the delta product as \(grade(A\cap{}B) = \frac{grade(A) + grade(B) - grade(A\Delta{}B)}{2}\).

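A minimal sketch of the delta product, assuming basis blades encoded as bitmasks over an orthonormal Euclidean basis (the conformal case adds a different metric, but the grade relationship works the same way):

```python
def popcount(x):
    return bin(x).count("1")

def reorder_sign(a, b):
    # Sign picked up when sorting the product of two basis blades.
    a >>= 1
    swaps = 0
    while a:
        swaps += popcount(a & b)
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B):
    # Geometric product of multivectors stored as {bitmask: coefficient},
    # assuming an orthonormal Euclidean basis (shared vectors contract to +1).
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            k = ba ^ bb
            out[k] = out.get(k, 0.0) + reorder_sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def delta(A, B):
    # Delta product: the highest-grade part of the geometric product.
    P = gp(A, B)
    top = max(popcount(k) for k in P)
    return {k: v for k, v in P.items() if popcount(k) == top}

# A = e1^e2 (bits 011) and B = e2^e3 (bits 110) share the factor e2.
A, B = {0b011: 1.0}, {0b110: 1.0}
D = delta(A, B)  # e1^e3: everything outside the shared e2
meet_grade = (2 + 2 - popcount(next(iter(D)))) // 2  # grade-1 meet: a line
```

The shared factor e2 contracts away in the geometric product, so the delta product keeps exactly the parts of A and B outside their intersection, and the grade formula predicts a grade-1 Meet.
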
Through the delta product, we now have a proper way of expressing how the two subspaces A and B relate to each other and can formulate an algorithm to compute the Meet. In the diagrams above, notice that the delta product includes both A and B but not their intersection, while the dual of the delta product includes both the surrounding space and the intersection. We can take advantage of this fact by computing the Meet as the projection of the dual delta product into the subspace A: since the intersection is by definition a subspace of A, whatever projects into A from the dual delta product is the intersection.

Since the grade of the meet is not necessarily equal to the grade of \((A\Delta{}B)^{*}\), we must first factor the dual delta product into an orthogonal basis before projecting it into A. Factoring a blade essentially involves probing the blade with a candidate basis vector and iteratively taking the basis vectors out of the blade until only one is left. To start the process, a good candidate basis vector must be selected. The specific factorization doesn’t matter, so we can simply select the basis element of the blade with the largest magnitude and go from there. Because we are dealing with a blade, we can project the vectors of the chosen element into the blade and iteratively remove those projections. Given a blade B of grade k, the algorithm looks like this:

- Calculate the norm of B: \(s = \Vert{}B\Vert{}\)
- Find the basis element E in B with the largest coordinate
- Determine the vectors \(e_{i}\) of E
- Normalize B: \(B_{c} = B/s\)
- For each \(e_{i}\) except one:
  - Project \(e_{i}\) onto \(B_{c}\): \(f_{i} = (e_{i}\rfloor{}B_{c})B_{c}^{-1}\)
  - Normalize \(f_{i}\)
  - Remove \(f_{i}\) from \(B_{c}\): \(B_{c} \leftarrow{} f_{i}^{-1}\rfloor{}B_{c}\)
- Set the last factor to the remaining \(B_{c}\), normalized: \(f_{k} = B_{c}/\Vert{}B_{c}\Vert{}\)

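The factoring steps above can be sketched with basis blades encoded as bitmasks over an orthonormal Euclidean basis (an assumption of this sketch, not the conformal setting). `factorize` returns unit vector factors plus the overall scale, so wedging the factors back together and rescaling reproduces the blade.

```python
from math import sqrt

def popcount(x):
    return bin(x).count("1")

def reorder_sign(a, b):
    # Sign picked up when sorting the product of two basis blades.
    a >>= 1
    swaps = 0
    while a:
        swaps += popcount(a & b)
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B):
    # Geometric product over {bitmask: coefficient} multivectors.
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            k = ba ^ bb
            out[k] = out.get(k, 0.0) + reorder_sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def op(A, B):
    # Outer product: keep only terms with no shared basis vectors.
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            if ba & bb == 0:
                k = ba ^ bb
                out[k] = out.get(k, 0.0) + reorder_sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def reverse(A):
    return {k: v * (-1) ** (popcount(k) * (popcount(k) - 1) // 2)
            for k, v in A.items()}

def scale(A, s):
    return {k: v * s for k, v in A.items()}

def norm(A):
    return sqrt(abs(gp(A, reverse(A)).get(0, 0.0)))

def inverse(A):
    return scale(reverse(A), 1.0 / gp(A, reverse(A))[0])

def lcont(A, B):
    # Left contraction of homogeneous A onto B via grade selection.
    g = popcount(next(iter(B))) - popcount(next(iter(A)))
    return {k: v for k, v in gp(A, B).items() if popcount(k) == g}

def factorize(B):
    # Factor a k-blade into k unit vectors plus an overall scale s.
    s = norm(B)
    Bc = scale(B, 1.0 / s)
    E = max(B, key=lambda k: abs(B[k]))  # basis element, largest coordinate
    bits = [1 << i for i in range(E.bit_length()) if E >> i & 1]
    factors = []
    for e in bits[:-1]:                           # every vector of E except one
        f = gp(lcont({e: 1.0}, Bc), inverse(Bc))  # project e_i onto Bc
        f = scale(f, 1.0 / norm(f))               # normalize the factor
        factors.append(f)
        Bc = lcont(f, Bc)                         # remove it from the blade
    factors.append(scale(Bc, 1.0 / norm(Bc)))     # last factor is Bc itself
    return factors, s

# Example: B = (e1 + e2) ^ e3 = e1 e3 + e2 e3
factors, s = factorize({0b101: 1.0, 0b110: 1.0})
```
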
The meet can be taken between two blades A and B of different grades and here we’ll assume blade A is of equal or greater grade than blade B. Given blades A and B:

- Compute the dual delta product: \(S = (A\Delta{}B)^{*}\)
- Apply the factoring algorithm to S to get the factors \(s_{i}\)
- Compute the grade of the meet: \(grade(A\cap{}B) = \frac{grade(A) + grade(B) - grade(A\Delta{}B)}{2}\)
- Set the meet result M to 1: \(M = 1\)
- For each \(s_{i}\):
  - Project \(s_{i}\) into A: \(p_{i} = (s_{i}\rfloor{}A)\rfloor{}A^{-1}\)
  - If the projection \(p_{i}\) is not zero, accumulate it into M: \(M \leftarrow{} M\wedge{}p_{i}\)
  - If \(grade(M)\) is equal to that computed in step 3, break the loop

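Putting the delta product, factoring, and projection steps together, here is a minimal sketch of the meet for blades in ordinary 3D Euclidean GA, using a bitmask encoding over an orthonormal basis (an assumption of the sketch; the circle example below lives in the conformal model). The dual is taken against the pseudoscalar \(e_1e_2e_3\), and the result is only defined up to scale.

```python
from math import sqrt

def popcount(x):
    return bin(x).count("1")

def reorder_sign(a, b):
    # Sign picked up when sorting the product of two basis blades.
    a >>= 1
    swaps = 0
    while a:
        swaps += popcount(a & b)
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B):
    # Geometric product over {bitmask: coefficient} multivectors.
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            k = ba ^ bb
            out[k] = out.get(k, 0.0) + reorder_sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def grade(A):
    return popcount(next(iter(A)))  # A is assumed homogeneous

def op(A, B):
    # Outer product via grade selection of the geometric product.
    g = grade(A) + grade(B)
    return {k: v for k, v in gp(A, B).items() if popcount(k) == g}

def reverse(A):
    return {k: v * (-1) ** (popcount(k) * (popcount(k) - 1) // 2)
            for k, v in A.items()}

def scale(A, s):
    return {k: v * s for k, v in A.items()}

def norm(A):
    return sqrt(abs(gp(A, reverse(A)).get(0, 0.0)))

def inverse(A):
    return scale(reverse(A), 1.0 / gp(A, reverse(A))[0])

def lcont(A, B):
    # Left contraction via grade selection.
    return {k: v for k, v in gp(A, B).items()
            if popcount(k) == grade(B) - grade(A)}

def delta(A, B):
    # Delta product: highest-grade part of the geometric product.
    P = gp(A, B)
    top = max(popcount(k) for k in P)
    return {k: v for k, v in P.items() if popcount(k) == top}

def factorize(B):
    # Factor a blade into unit vectors (the factoring algorithm above).
    Bc = scale(B, 1.0 / norm(B))
    E = max(B, key=lambda k: abs(B[k]))
    bits = [1 << i for i in range(E.bit_length()) if E >> i & 1]
    factors = []
    for e in bits[:-1]:
        f = gp(lcont({e: 1.0}, Bc), inverse(Bc))  # project e_i onto Bc
        f = scale(f, 1.0 / norm(f))
        factors.append(f)
        Bc = lcont(f, Bc)                         # remove the factor
    factors.append(scale(Bc, 1.0 / norm(Bc)))
    return factors

def meet(A, B, I_inv):
    # Meet via the dual delta product, for grade(A) >= grade(B).
    target = (grade(A) + grade(B) - grade(delta(A, B))) // 2
    M, A_inv = {0: 1.0}, inverse(A)
    for s in factorize(gp(delta(A, B), I_inv)):   # factor the dual delta
        p = lcont(lcont(s, A), A_inv)             # project the factor into A
        if p:
            M = op(M, p)
        if grade(M) == target:
            break
    return M

# Planes through the origin: A = e1^e2 and B = e2^e3 meet in the e2 axis.
I_inv = {0b111: -1.0}                             # (e1 e2 e3)^-1 = -e1 e2 e3
M = meet({0b011: 1.0}, {0b110: 1.0}, I_inv)
```
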
A simple example of the meet operation in use is the case of circle intersections. If we take two circles C1 and C2 centered at (1, 0, 0) and (-1, 0, 0) respectively, each with a radius of 2, the meet operation will return a point pair. In other words, the meet gives back a single object describing the two intersection points of the circles.

- \(C1 = oe_1e_2 - oe_2\infty + 1.5*e_1e_2\infty\)
- \(C2 = oe_1e_2 + oe_2\infty + 1.5*e_1e_2\infty\)
- \(S = C1\Delta{}C2 = -2*oe_1 - 3*e_1\infty\)
- \(S^* = -2*oe_2e_3 - 3*e_2e_3\infty\)
- \(s_1 = e_2, s_2 = e_3, s_3 = -\frac{2}{\sqrt{12}}o - \frac{3}{\sqrt{12}}\infty\)
- \(C1\cap{}C2 = \frac{2}{\sqrt{12}}oe_2 - \frac{3}{\sqrt{12}}e_2\infty\)

The result of the meet as calculated above is a point pair. The points can be extracted by a simple equation that extracts the point pair’s axis and then pushes out a distance along this axis from its center point. The equation is given by:

\(P = p_{-}\wedge{}p_{+}\) where P is the point pair and

\(p_{\pm} = \frac{P\mp\sqrt{P^2}}{-\infty\rfloor{}P}\) are the points.

For the case above, the points are located at \(p_{\pm} = o \mp \sqrt{3}e_2 + 1.5*\infty\).

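As a plain Euclidean sanity check of the worked example (assuming the circles lie in the \(e_1e_2\) plane), intersecting the two circles directly gives the same points \((0, \pm\sqrt{3})\), whose conformal weight \(\frac{|p|^2}{2} = 1.5\) matches the \(1.5*\infty\) term above.

```python
from math import sqrt

def circle_intersection_2d(c1, r1, c2, r2):
    # Intersection of two circles with centers on the x-axis, as in the
    # example: centers (1, 0) and (-1, 0), both with radius 2.
    (x1, _), (x2, _) = c1, c2
    # x-coordinate of the radical line, then the two symmetric y values.
    x = (r1 ** 2 - r2 ** 2 + x2 ** 2 - x1 ** 2) / (2 * (x2 - x1))
    y = sqrt(r1 ** 2 - (x - x1) ** 2)
    return (x, y), (x, -y)

p_plus, p_minus = circle_intersection_2d((1, 0), 2, (-1, 0), 2)
```
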
* see Geometric Algebra for Computer Science by Dorst et al., p. 536
