Polar^m @ YCAM (Yamaguchi, Japan)

For the last 2 weeks, I’ve been in Yamaguchi, Japan working on an exhibition called Polar^m at YCAM with Marko Peljhan, Carsten Nicolai, and crew.  The show opened November 13 and will run through February.

The project is based around the physical processes of radiation and electromagnetic waves, using Geiger counters, high-frequency radios, a cloud chamber, and a host of other devices to create an immersive landscape of invisible and ephemeral processes.  The sensing apparatus feeds data into a network of robotic, sonic, visual, and spatial experiences that form multi-layered feedback networks, since the physical processes involved penetrate space and matter so readily.  There is no direct viewer interaction except through whatever effect the viewer has on the particles and electromagnetic waves being sensed.

The entire process was an amazing experience, and the working conditions were second to none.  The space is incredible in terms of size, equipment, and functionality, and the staff is capable and creative.  I’m looking forward to the next time I get to do something there.

Posted in Computational Composition, Exhibitions

Folding and Difference

Continuing with the Fluids and Clusters idea, I’ve extended the simulation to include some more concepts from Topobiology.  I was particularly struck by the description of the cortical tractor model of neural tube formation.  In essence, the model describes a process whereby a boundary between two different cell types causes a change in behavior compared to cells not on the boundary.  The boundary cells generate attractive forces on interior cells that pull them toward the boundary, both extending and bending the boundary itself, as in the image below.

There are a range of other effects having to do with morphoregulatory molecules, of which there are a variety of families.  Some operate directly on the cells, some indirectly through the extracellular matrix, and others act mostly as conduits of molecular signals.

In extending my previous simulations, I have first of all replaced my hand-rolled dynamics code with the Bullet Physics Library.  The first thing I noticed once that was in place was how much of a difference a sophisticated dynamics integrator makes.  Instead of naively gathering forces and simulating one cell at a time as I was before, Bullet incorporates a modular, pipelined physics simulation that eliminates some artifacts of my naive dynamics code.  Second, Bullet has a wide range of constraints that can be attached between objects, making it straightforward to implement some basic morphoregulatory actions.

As in the previous work, there are two kinds of molecules with different fields of interaction and attraction/repulsion characteristics.  The additional factor is a capacity for particles to attach to each other if they come within a certain distance.  The result is a particular kind of folding action dependent on the distribution of particle types.
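As a rough sketch of that attachment rule (a stand-in, not the actual code — in the real system a new attachment would become a Bullet constraint between rigid bodies rather than a bare index pair, and the threshold here is invented):

```python
import itertools

ATTACH_DIST = 0.5  # hypothetical attachment threshold

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_attachments(positions, bonds):
    """Bond any unbonded pair of particles that comes within ATTACH_DIST."""
    for i, j in itertools.combinations(range(len(positions)), 2):
        if (i, j) not in bonds and dist(positions[i], positions[j]) < ATTACH_DIST:
            bonds.add((i, j))  # in Bullet: attach a constraint between bodies i and j
    return bonds

# Two nearby particles attach; the distant one stays free.
bonds = find_attachments([(0.0, 0.0), (0.3, 0.0), (5.0, 5.0)], set())
assert bonds == {(0, 1)}
```

Run once per simulation step after integration, the accumulated bonds act as the scaffold along which the folding develops.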

Posted in Computational Composition, Research Literature, Software Engineering

Fluids and Clusters

I’ve been going through Gerald Edelman’s Topobiology book and mapping the descriptions of morphogenetic operations to particle systems as an attempt to come at the design of TopoSynth from another angle.  As a first pass, I’ve been looking at how different cell types can affect each other and direct each other’s behaviors.

In order to map the more interesting but also more complex ideas of how morphoregulatory molecules and cell clusters interact to unfold the epigenetic processes, I’ve done some investigations into how heterogeneous clusters of cells with slightly different behavioral models interact.  The models are dead simple but provide some really interesting results; they can definitely be extended, but for now simplicity is best.

Essentially, I modeled each cell as a particle with a local sensing neighborhood.  Each cell reacts in some fashion to the presence of other cells in its neighborhood.  Here, I used a simplified Lennard-Jones potential model to describe how cells attract and repel each other.  The only parameter to the model is a ratio describing the radius of the repelling area relative to that of the entire sensing area.  In other words, a ratio of 1/3 indicates that the inner third of the sensing area will repel neighbors and the outer two-thirds will attract them.  To see how cells with different ratios interact, I mapped out a grid of 6×6 ratios and ran the simulations with 2025 particles, most being of one type and a handful of another type with double the sensing radius.
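A minimal sketch of that pair rule (a piecewise stand-in for the simplified Lennard-Jones profile — the names and exact force shape are my own, not the simulation’s):

```python
def pair_force(distance, sense_radius, ratio):
    """Signed radial force between two cells.

    Inside ratio * sense_radius the cells repel (negative sign pushes
    apart); between there and sense_radius they attract; beyond the
    sensing area there is no interaction.
    """
    if distance >= sense_radius or distance <= 0.0:
        return 0.0
    boundary = ratio * sense_radius
    if distance < boundary:
        # Repulsion grows the deeper the cells overlap.
        return -(boundary - distance) / boundary
    # Attraction ramps up through the outer shell.
    return (distance - boundary) / (sense_radius - boundary)

# A ratio of 1/3: the inner third of the sensing area repels.
assert pair_force(0.1, 1.0, 1.0 / 3.0) < 0    # inside inner third: repel
assert pair_force(0.6, 1.0, 1.0 / 3.0) > 0    # outer two-thirds: attract
assert pair_force(1.5, 1.0, 1.0 / 3.0) == 0   # outside sensing area: nothing
```

Summing this force over every neighbor in a cell’s sensing area gives the per-step displacement, and varying only the ratio produces the grid of behaviors in the chart.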

In the chart, the columns correspond to the majority particle and the rows to the minority particle (the one with the bigger radius).  The visualizations show particle densities, with lines showing the paths of the minority particles.  In general, when the ratios are similar, the two types of particles mix well, with the size of the particle cloud correlated to the size of the ratio.  When the ratios are quite different from each other, the particles tend to separate, with the dynamics of separation dependent on which particle type is repelling the other.

Some detailed views:

The goal now is to figure out an equally simple mechanism to model changes in behavior that generate functional clusters of particles in order to model large scale behaviors such as the formation of epithelium and mesenchyme.

Posted in Computational Composition, Computational Technique

TopoSynth Experiments

I’ve been playing with TopoSynth some more recently, trying to wrap my head around how to set up the generating rules.  There’s a lot to chew on, particularly in how dynamics gets mapped into the system.  Right now, I’m trying to figure out how the geometric forms can become more fluid.  As it stands, there isn’t any kind of dynamics driving the relationships between vertices and edges once they’re created, which is certainly going to need to change.

In some pre-TopoSynth work, I used a spring-electric model to restructure the mesh as it’s modified, but I’ve never been quite satisfied with the results.  I’m sure there’s a place for it in TopoSynth, but in some as-yet-undetermined capacity alongside other dynamics models.

Posted in Computational Composition

GTC Conference Highlights: Part 2


HOOMD-blue

HOOMD-blue is a general-purpose many-particle dynamics simulator, aimed particularly at heterogeneous nano-particle physics.  HOOMD stands for Highly Optimized Object-oriented Many-particle Dynamics.  Essentially, it’s a Python-based DSL for such simulations.  What’s impressive about it is the range of simulation types it covers and the way very different physical situations can be described by mixing and matching different force calculations and particle types.  It’s optimized for the new Fermi architecture, so high-end hardware is a must, and simulations are of course far from real-time, but what’s interesting is that all of their simulations are self-organizing.  In other words, the simulations start out in a random configuration and eventually reach a highly organized spatio-temporal equilibrium.

Shadie: A Domain-Specific Language for Volume Rendering

Continuing with the Python DSL theme of the conference, a project called Shadie was presented that extends Python for volume rendering.  The goal of the project is to enable physicists working in radiation oncology, who don’t want to (nor have the time to) deal with the low-level programming of volume visualizations, to write custom visualizations of their data.  In volume visualization, the transfer function mapping volume data to a color on the screen is critical and highly dependent on both the data and the desired output.  Often, one needs free parameters that can be manipulated during the visualization process to focus in on particular details.  Simple volume programs are easy to write, but once you add lighting and shading effects along with data conditioners, it quickly gets rather complicated.
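To make the transfer-function idea concrete, here’s a minimal sketch — not Shadie code; the function names and parameters are invented for illustration — of mapping sampled densities to RGBA and compositing them front-to-back along a ray:

```python
def transfer(density, threshold=0.3, gain=2.0):
    """Map a sampled density to RGBA (hypothetical piecewise transfer).

    threshold and gain are the kind of free parameters one wants to
    tweak live while exploring a dataset.
    """
    if density < threshold:
        return (0.0, 0.0, 0.0, 0.0)               # low densities stay transparent
    a = min(1.0, (density - threshold) * gain)
    return (a, 0.5 * a, 1.0 - a, a)               # tint shifts with density

def composite(samples):
    """Front-to-back alpha compositing of the samples along one ray."""
    r = g = b = alpha = 0.0
    for s in samples:
        sr, sg, sb, sa = transfer(s)
        r += (1.0 - alpha) * sr * sa
        g += (1.0 - alpha) * sg * sa
        b += (1.0 - alpha) * sb * sa
        alpha += (1.0 - alpha) * sa
        if alpha > 0.99:                          # early ray termination
            break
    return (r, g, b, alpha)

assert composite([0.0, 0.1]) == (0.0, 0.0, 0.0, 0.0)  # empty ray: transparent
assert composite([0.9])[3] > 0.0                      # dense sample contributes
```

Even in this toy form, all the complexity lives in `transfer` — which is exactly the part a DSL like Shadie lets the physicist write without touching the rest of the pipeline.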

Shadie addresses these problems by providing an interface that focuses solely on how rays intersect a dataset and how the intersected data is transferred into color information.  The questions at the end of the presentation seemed to really question the usefulness of such a project, as the questioners couldn’t comprehend why one would need a DSL for volume visualization instead of writing the shaders directly.  In my opinion, these people were close-minded programmers who have the technical chops to write these kinds of programs themselves and can’t fathom how others who aren’t so savvy would be unproductive in a straight GLSL environment.  What I appreciate about Shadie is how it focuses on the problem at hand and enables a wide variety of problems to be solved and explored with minimal knowledge on the user’s end.

Adobe and Pixel Bender

On the last day, I attended 2 Adobe presentations: one on Pixel Bender and the other on GPU computation in Adobe products and the lessons learned.  For me, the Pixel Bender presentation was wonderful since I’ve been doing a lot of code generation work recently along the same lines as what Pixel Bender does.  I’ve been trying to figure out what they’re doing under the hood, so it was great to be able to ask all of the questions I’ve had about implementation details.  Apparently they started out using LLVM/Clang but, because of the size of the binaries, the length of build times, and spotty 64-bit code generation support, eventually moved to their own lexing, parsing, and code generation system using lex/yacc and custom generation backends that support GLSL as well as multi-threaded SSE CPU targets.

There has been some impressive work done with Pixel Bender and it seems incredibly robust.  The development system they have in place is equally impressive, automatically running through 21k+ tests when new checkins are made to the repository so that the development team knows instantly when something breaks or performance degrades.  They also do what’s called ‘fuzz testing’ to address the practically infinite possibility space a language like Pixel Bender has.  Fuzz testing takes a Pixel Bender file and randomly changes characters before running it through the testing system that verifies parser errors, intermediate representation consistency, etc.  The idea is that it’s much more difficult to handle input that is almost correct than input that is clearly incorrect.
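A toy version of that fuzzing loop (the names are hypothetical, and Python’s own `compile` stands in here for the Pixel Bender toolchain):

```python
import random

def fuzz(source, n_mutations=3, rng=None):
    """Randomly replace a few characters in a source string."""
    rng = rng or random.Random(0)
    chars = list(source)
    for _ in range(n_mutations):
        i = rng.randrange(len(chars))
        chars[i] = chr(rng.randrange(32, 127))    # any printable ASCII character
    return "".join(chars)

def check_compiler(compile_fn, source, trials=100):
    """Feed many fuzzed variants to compile_fn; it must either accept
    the input or reject it with a clean SyntaxError -- never crash."""
    for seed in range(trials):
        mutated = fuzz(source, rng=random.Random(seed))
        try:
            compile_fn(mutated)                   # accepted: fine
        except SyntaxError:
            pass                                  # clean rejection: also fine
        # Any other exception escapes -- that's the bug fuzzing finds.

# Python's own compiler stands in for the Pixel Bender compiler here.
check_compiler(lambda src: compile(src, "<fuzz>", "exec"), "x = 1 + 2\n")
```

Because most mutations leave the file almost valid, this exercises exactly the near-miss inputs that are hardest for a parser to reject gracefully.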

The second presentation provided some good tidbits on how to structure the development process and manage the complexity of supporting multiple operating systems, graphics card drivers, and graphics cards.  On the development team, someone always has a machine at the low end and a machine at the high end of what is supported, so that the full range of user experience is accounted for on the development side.  If all the devs have high-end machines, it’s unlikely that performance issues on older hardware will be worked on, because they don’t enter the developers’ consciousness on a daily basis.  Also, Adobe keeps a blacklist of driver and hardware versions to preemptively notify users that their system configuration has bugs and will not properly run the software.

To handle cross-platform GPU computation, Adobe is also betting on OpenCL.  While they didn’t say when their products would ship with OpenCL code, it’s likely to be early next year.  Adobe is part of the Khronos consortium and is actively pushing hardware and driver companies to produce stable and efficient OpenCL-enabled systems.

Posted in Computational Technique, Software Engineering

GTC Conference Highlights: Part 1

The GTC conference is an Nvidia-run and -sponsored high-end computation conference dedicated to graphics hardware.  There is a wide variety of talks covering a number of interesting and relevant areas, from graphics to quantum computation to biological electrostatics.  I’m enjoying this much more than any SIGGRAPH I’ve been to because it’s focused on making the technologies relevant, so you get a lot of interesting information and people are willing to share.


iRay

iRay is a product of Mental Images, a subsidiary of Nvidia.  It’s a fully photorealistic ray tracer that takes full advantage of the GPU.  The only other fully photorealistic renderer out there is Maxwell Render, but it’s CPU-based.  They’ve shown some impressive demos during the conference, including a cloud-based, fully interactive ray trace of complicated interior shots powered by a 32-Tesla compute cluster.

The best part about it is that there are no tweaky parameters to set related to how different approximations to the rendering are done because there are no actual approximations.  All you supply is the geometry, BRDFs, and a camera location.


cl.oquence

Another impressive project, which had a poster, was a Python-based OpenCL programming interface called cl.oquence.  What makes cl.oquence different from other scripting interfaces to GPU computation, such as PyCUDA or pyopencl, is that it is not a direct mapping of low-level OpenCL functions; instead, it provides an interesting high-level interface that allows for, among other things, higher-order functions and type inference.  It’s written as an extension within Python and uses Python syntax.  The backend, however, parses the Python code, inferring types and constant, local, and global memory usage as the OpenCL kernel is constructed.  It’s a beautiful example of how existing scripting languages can easily become DSLs (Domain-Specific Languages), allowing the programmer a continuum of programming paradigms within existing computational infrastructure.

GPU Fast Multipole Method

In the scientific computing realm, there was a presentation about bioelectrostatics and the use of the Fast Multipole Method (FMM) and its GPU implementation.  The presenter, Lorena Barba, and her research group have been at the forefront of this method, publishing the only open-source implementations of this highly complicated algorithm.  The talk touched on some of the upcoming research using these techniques, such as the design of nanoscale biosensors.  There’s a GPU Gems article coming out about the GPU FMM implementation, and the code accompanying the article is up in a repository on Google Code.

Posted in Computational Technique, Software Engineering

LuaAV3 Pre-Release on Its Way

In preparation for a workshop at UCSB, we’ve been pushing hard to get the next major release of LuaAV out the door.  This release represents a consolidation and standardization of the source code, making it substantially cleaner and easier to extend in the future.  We also have quite a bit of documentation now!

Much of the work is under the hood, but there are quite a lot of API changes that will break scripts written for previous versions of LuaAV.  Fortunately, the changes can be easily accounted for so porting old scripts shouldn’t be too difficult.  On the API front, the OpenGL module has been completely re-written and now incorporates an additional high-level interface for Textures, Shaders, Meshes, etc.  For an overview of what it looks like now, see the graphics tutorial on the LuaAV site.

Other improvements include porting the image and video modules to native frameworks on OS X so that we can build a full 64-bit version, the addition of an OpenCL module, and the pervasive use of a Lattice data structure throughout the graphics and audio systems for easy mapping of data across modalities.

The major new addition for this release is a redesign of the audio API to look more SuperCollider-like.  Audio definitions are composed in Lua, then translated into bytecode and JIT-compiled with LLVM.  The audio system now handles multi-channel audio to boot.  A brief tutorial on the audio system has been prepared for the workshop.

We’re just about done with the OS X version.  Next up will be resuscitating the Linux version and writing some more tutorials, documentation, and examples.

Posted in Computational Composition, Software Engineering

TopoSynth: Incorporating Cosm

I’ve finally completed all of the basic topological operations for TopoSynth and am now moving on to treating the dynamics of the system.  This last rule involved figuring out how to connect two extended vertices together when they collide.

For example, in the above image, there are 2 vertices that have been progressively extended, each leaving a tail behind.  The vertices being extended are at the tips.  If these vertices happen to collide (or any other vertices, for that matter), I want a way to merge them so as to create a handle and thus preserve the 2-manifold nature of the mesh.  Simply connecting them with an edge isn’t a solution, since that would either break the 2-manifold or create an extremely awkward face.  Instead, what I do is a multi-step process of deleting each vertex, which leaves 2 faces behind.  This is a kind of reverse-stellate operation.  Then, I join the 2 faces together with a pipe, creating a seamless connection between the previously disjoint sections of the mesh.

As an illustration of the delete-vertex operation, imagine we want to delete vertex 93 in the middle of the mesh on the left.  When the operation is done, vertex 93 will be gone and a new face consisting of all of the vertices connected to vertex 93 by an edge will be left behind.  This operation is easily carried out by circulating around all of the face vertices of vertex 93 and successively adding the face vertices not connected by an edge to vertex 93 to the new face.  To finish, simply delete the vertex, all of its face vertices, and the edges and face vertices that link them to vertex 93.  The face vertices in dark blue circles are the ones that will be deleted in addition to vertex 93’s face vertices.
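The delete-vertex step can be sketched with a much simpler structure — each vertex storing an ordered ring of its neighbors instead of full face-vertex lists.  The names and representation here are my own, not the DLFL implementation:

```python
def delete_vertex(mesh, v):
    """Remove vertex v, leaving a face made of its ring of neighbors.

    mesh maps each vertex to an ordered list of its neighbors -- a
    simplification of circulating the DLFL's face vertices.  Returns
    the boundary of the new face left behind.
    """
    ring = mesh.pop(v)                               # neighbors become the new face
    for u in ring:
        mesh[u] = [w for w in mesh[u] if w != v]     # drop the edges into v
    return ring

# A vertex 93 surrounded by a square ring of neighbors.
mesh = {
    93: [1, 2, 3, 4],
    1: [2, 93, 4], 2: [3, 93, 1], 3: [4, 93, 2], 4: [1, 93, 3],
}
face = delete_vertex(mesh, 93)
assert face == [1, 2, 3, 4]                          # the face left behind
assert all(93 not in ring for ring in mesh.values()) # no dangling edges
```

Doing this at each of the two colliding tips leaves the two faces that the pipe then joins.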


One of the problems that came up when testing this out was how to direct the vertices near enough to each other to connect them in a nice way.  There’s the typical procedural methods using splines to model the transition, but in TopoSynth, mesh vertices are more like a kind of super-particle that contain their own ways of interacting with the environment.  Eventually, I’m going to be incorporating fields and other forces into the system, so I decided to integrate some previous particle dynamics work we’ve done in Cosm.  Cosm already incorporates basic dynamics and most importantly collision detection.  This allows me to direct the topological operations based on particle movements and collision events.

I incorporated Cosm into TopoSynth by replacing the notion of a vertex with a particle, aka a cosm.nav (a nav being a directed point in space).

A cosm.nav consists of a position in world space and an orientation defined by a quaternion.  This is why vertices are not drawn as points, but instead as circles with a perpendicular line.  The line depicts the direction the nav is facing while the circle indicates the xy-plane of the local coordinate system.
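For reference, the facing direction drawn as the perpendicular line is just the local z axis rotated by the nav’s quaternion.  A sketch (assuming a unit quaternion stored as (w, x, y, z); the function name is my own):

```python
def facing_direction(q):
    """Rotate the local z axis (0, 0, 1) by unit quaternion q = (w, x, y, z).

    This is the third column of the quaternion's rotation matrix,
    i.e. the direction the nav is facing.
    """
    w, x, y, z = q
    return (2 * (x * z + w * y),
            2 * (y * z - w * x),
            1 - 2 * (x * x + y * y))

# The identity quaternion leaves the z axis alone...
assert facing_direction((1, 0, 0, 0)) == (0, 0, 1)
# ...and a half-turn about x flips it.
assert facing_direction((0, 1, 0, 0)) == (0, 0, -1)
```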

Right now, I’m trying to get more control over the dynamics (e.g. how one vertex meets another while both are moving), thinking over how rule definitions for dynamics and behavior will be described, and working out what the overall control flow of the system needs to be to keep both the mesh and the elements the rules operate on valid.  For example, if a rule deletes two vertices, that rule gets killed, but what happens to other rules that might refer to those vertices in some way?  Another big challenge is figuring out how to provide a high-level interface for placing rules on the mesh without having to assign them individually by hand.  There are a lot of approaches to play around with, and now that the nitty-gritty of the topological operations is taken care of, the fun part of figuring out how to compose with the system begins.  Here’s a first render of an example, perhaps a little hard to read, but I’ll post it anyway.  Expect more soon…

Posted in Computational Composition, Computational Technique, Software Engineering

TopoSynth: Topological Structure

DLFL Background

As I had mentioned in the last post, the topological inspiration for TopoSynth is derived from TopMod and particularly the doubly-linked face list (DLFL) data structure.  The DLFL makes it particularly straightforward to always maintain a 2-manifold mesh structure as mesh transformation operations can all be expressed in terms of 4 primitive operations:

  1. Insert vertex
  2. Remove vertex
  3. Insert edge
  4. Remove edge

Any other operation can be constructed in terms of these primitive operations.
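A bare-bones sketch of what that primitive interface might look like (this is my own skeleton, not TopMod’s code — the real operations also maintain the face and face-vertex bookkeeping that keeps the mesh 2-manifold):

```python
class Mesh:
    """Skeleton of the four DLFL primitives over plain vertex/edge sets."""

    def __init__(self):
        self.vertices = set()
        self.edges = set()   # each edge is a frozenset of two vertices

    def insert_vertex(self, v):
        self.vertices.add(v)

    def remove_vertex(self, v):
        self.vertices.discard(v)
        # Removing a vertex also removes every edge incident to it.
        self.edges = {e for e in self.edges if v not in e}

    def insert_edge(self, v1, v2):
        self.edges.add(frozenset((v1, v2)))

    def remove_edge(self, v1, v2):
        self.edges.discard(frozenset((v1, v2)))

m = Mesh()
for v in (1, 2, 3):
    m.insert_vertex(v)
m.insert_edge(1, 2)
m.insert_edge(2, 3)
m.remove_vertex(2)
assert m.vertices == {1, 3}
assert m.edges == set()      # vertex removal took its edges with it
```

Every higher-level sculpting operation then reduces to a sequence of these four calls.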

A lot of mesh representation structures maintain double representations of edges, typically called half-edges, where each half-edge is opposite in direction to its other half (e.g. v1->v2 and v2->v1).  The idea here is that each half-edge belongs to a different face.  To get to the other face, one simply moves from one half-edge to the other.  In contrast, the DLFL maintains a single edge for any pair of connected vertices, saving memory.  The difference compared to the half-edge representation is that the vertices at either end of an edge belong to different faces.  In fact, the trickiest conceptual leap to make in understanding the DLFL is that there are actually two different types of vertices: actual vertices and face vertices that shadow the actual vertices.  When I say that an edge connects vertices on different faces, I mean face vertices, not actual vertices, since actual vertices effectively belong to many faces whereas face vertices belong to a single face.  Face vertices are basically a per-face proxy for actual vertices.  Thus, if a mesh is made of quadrilaterals, each vertex will belong to 4 faces and have 4 face vertices.

Here, each actual vertex is given a unique number and the black face vertices are clustered together.  Edges are the dotted blue lines.  Notice how they connect face vertices across faces.  This is the key attribute of the DLFL.  Each face in the DLFL also maintains a doubly-linked list of its boundary face vertices.  The direction of the pink and grey arrows indicates the sense of the face boundary.  Pink arrows point in the direction of the blue edges while grey arrows do not.

For the DLFL to work properly, the mesh must always be a 2-manifold geometric complex.  This means that not only must every edge belong to two faces, but the orientations of edge-abutting faces must be opposite in direction.  One way to think about it: when circulating a face, if traversing from v1->v2 is given the value +1, then traversing from v2->v1 on the abutting face is given the value -1, and v1->v2 + v2->v1 sums to 0.  If all such traversals sum to 0, then the mesh is a geometric complex.  Effectively this means that face normals must all point either outward or inward; there can’t be a mixture.  The four primitive operations listed above are implemented such that these conditions always hold.
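The traversal-sum condition is easy to check on a plain face list.  A sketch (faces given as vertex cycles, not DLFL structures; the function name is invented):

```python
from collections import Counter

def is_oriented_complex(faces):
    """Check the traversal-sum condition on a list of vertex cycles.

    In a consistently oriented 2-manifold, every directed traversal
    v1->v2 occurs exactly once and is matched by exactly one v2->v1
    traversal on the abutting face, so each pair cancels to 0.
    """
    directed = Counter()
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):
            directed[(a, b)] += 1
    return all(n == 1 and directed.get((b, a), 0) == 1
               for (a, b), n in directed.items())

# A tetrahedron with all faces wound consistently passes...
tet = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 3, 2)]
assert is_oriented_complex(tet)

# ...and the same mesh with one face flipped fails the check.
bad = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 2, 3)]
assert not is_oriented_complex(bad)
```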

TopoSynth Operations

TopoSynth operations, as in TopMod, are implemented over the primitive operations such that the 2-manifold condition is always maintained.  Unlike in TopMod, however, the operations are not simply applied as mesh sculpting events one at a time, but instead are micro programs attached to mesh elements such as vertices, faces, and edges.  In this sense, they are akin to rules in StructureSynth with the addition of a more elaborate context sensitivity.  StructureSynth is context sensitive in the sense that rules terminate under certain global conditions related to primitive size (is the rule producing visible shapes) and total number of primitives (has the max number of shapes been generated).  TopoSynth rules have an additional context sensitivity related to their immediate spatial neighborhood.  For example, a rule can be written such that a face is pulled in a particular direction, but when it approaches another face, it attaches itself, creating a handle.

The more involved TopoSynth operations are built on a combination of spatial partitioning and standard mesh operations.  The basic operations include face extrude, vertex extend, stellate, etc.  How these are built into TopoSynth operations is the subject of another post.  Below are the basic extension and extrusion operations.  Extend pulls a vertex in a direction, splitting each edge that links to the vertex in half and forming a ring of edges to connect the new midpoints.  Extrude pulls a face out in a direction, creating a new ring of faces.
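As a rough illustration of extrude on bare coordinates (a sketch with invented names — the real operation is of course expressed through the four DLFL primitives):

```python
def extrude_face(face, offset):
    """Extrude a face along an offset vector.

    face is a list of coordinate tuples in boundary order.  Returns the
    lifted copy of the face plus the ring of quads connecting the old
    boundary to the new one.
    """
    lifted = [tuple(c + o for c, o in zip(v, offset)) for v in face]
    ring = [
        (face[i], face[(i + 1) % len(face)],
         lifted[(i + 1) % len(face)], lifted[i])
        for i in range(len(face))
    ]
    return lifted, ring

quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
top, ring = extrude_face(quad, (0, 0, 1))
assert top[0] == (0, 0, 1)   # boundary lifted along the offset
assert len(ring) == 4        # one new quad per original edge
```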

Posted in Computational Composition, Computational Technique, Software Engineering


Ever since I found out about the excellent Structure Synth program, I wondered what it would be like to apply the same concept to topological operations.  Structure Synth works by writing rules that describe a pattern of transformations.  The transformations can be spatial or they can operate on the colors.  When the interpreter processes a rule, it applies the transformations to a terminal node, which can be a primitive shape like a box or sphere, or more interestingly, another rule.

The beauty of Structure Synth, and of similar systems like Context Free, is that simple rules can end up building complex and surprising forms by virtue of how they are connected.


While these structures can be quite stunning, there are quite a number of reasons why one would want to move beyond a haphazard collection of objects in space to something more reified, such as 2-manifold meshes or at least a guarantee of non-intersecting objects.  Instead of just repeating primitive shapes under transformation, I’d like to see how this could work by operating on mesh vertices themselves.

As a paradigm for operating on the mesh, topological operations are going to be both the most interesting and generally applicable.  One need only look at what the topological meshing software TopMod is capable of to imagine the power of topological processes.


After doing some research, it seems I’m not the only one with this idea.  In a paper on grammar-based approaches to pattern making, the author of Structure Synth mentions as much when speculating about possible future directions.

I’ve been working on TopoSynth for about a week now and have a pretty good handle on how it’s going to work.  The most difficult part was finally coming to a clear understanding of how TopMod works and getting to the point where I know enough to easily write my own topological operations.  For background research, there’s the still-in-draft TopMod book, but truth be told, nothing is better than stepping through the source code or, better yet, writing your own version.  I ended up writing my own take on TopMod in Lua, which is what I’m using to prototype TopoSynth.  I’ll be posting my findings shortly, but first here’s a teaser rendered with PBRT:

Posted in Computational Composition, Computational Technique