GTC Conference Highlights: Part 2

HOOMD-blue

HOOMD-blue (Highly Optimized Object-oriented Many-particle Dynamics) is a general-purpose many-particle dynamics simulation toolkit, aimed particularly at heterogeneous nano-particle physics.  Essentially, it's a Python-based DSL for such simulations.  What's impressive about it is the range of simulation types it covers and the way very different physical situations can be described by mixing and matching different force calculations and particle types.  It's optimized for the new Fermi architecture, so high-end hardware is a must, and simulations are of course far from real-time, but what's most interesting is that all of their simulations are self-organizing.  In other words, the simulations start out in a random configuration and eventually reach a highly organized spatio-temporal equilibrium.
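
To give a feel for the mix-and-match style, here is a minimal sketch in the spirit of HOOMD-blue's classic Python scripting interface. The module and argument names follow the interface as I recall it from that era and may differ between releases, so treat this as illustrative rather than copy-paste ready.

```python
# Sketch of a HOOMD-blue style script: random initial configuration,
# a swappable pair force, and a standard integrator. Names follow the
# classic hoomd_script interface and may vary by version.
from hoomd_script import *

# Start from a random configuration -- the simulation self-organizes from here.
init.create_random(N=1000, phi_p=0.2)

# Mix and match force calculations: here, a Lennard-Jones pair potential.
lj = pair.lj(r_cut=2.5)
lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)

# Integrate all particles forward in time at constant temperature.
integrate.mode_standard(dt=0.005)
integrate.nvt(group=group.all(), T=1.2, tau=0.5)

run(10000)
```

Swapping the physics means swapping the `pair.lj` line for a different force object and adjusting the particle types, which is exactly what makes such different physical situations expressible in the same framework.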

Shadie: A Domain-Specific Language for Volume Rendering

Continuing with the Python DSL theme of the conference, a project called Shadie was presented that extends Python for volume rendering.  The goal of the project is to let physicists working in radiation oncology, who don't want to (nor have the time to) deal with the low-level programming of volume visualization, write custom visualizations of their data.  In volume visualization, the transfer function mapping volume data to on-screen color is critical and highly dependent on both the data and the desired output.  Often, one needs free parameters that can be manipulated during the visualization process to focus on particular details.  Simple volume programs are easy to write, but once you add lighting and shading effects along with data conditioners, it quickly gets rather complicated.
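
To make the role of a transfer function concrete, here is a tiny NumPy sketch of one, mapping scalar density samples to RGBA. The function and parameter names are my own illustration, not Shadie's API; `center` and `width` stand in for the kind of free parameters one tweaks interactively.

```python
import numpy as np

def transfer(density, center=0.5, width=0.2):
    """Map scalar volume samples to RGBA.

    `center` and `width` are the free parameters one manipulates
    during visualization to focus on particular details.
    """
    # Opacity ramps up inside a window around `center`.
    alpha = np.clip((density - (center - width)) / (2 * width), 0.0, 1.0)
    # A simple color ramp from cool blue to warm red.
    r = density
    g = 0.2 * np.ones_like(density)
    b = 1.0 - density
    return np.stack([r, g, b, alpha], axis=-1)
```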

Shadie addresses these problems by providing an interface that focuses solely on how rays intersect a dataset and how the intersected data is transferred into color information.  The questions at the end of the presentation seemed to really doubt the usefulness of the project, as the questioners couldn't comprehend why one would need a DSL for volume visualization instead of writing GLSL directly.  In my opinion, these people were close-minded programmers who have the technical chops to write these kinds of programs themselves and can't fathom how unproductive others who aren't so savvy would be in a straight GLSL environment.  What I appreciate about Shadie is how it focuses on the problem at hand and enables a wide variety of problems to be solved and explored with minimal knowledge on the user's end.
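
The "rays intersect a dataset, intersected data becomes color" model boils down to ray marching with compositing. Below is a hedged sketch of that core loop in Python (again, my illustration rather than Shadie's syntax), reusing a transfer function like the one above:

```python
import numpy as np

def march_ray(volume_sample, origin, direction, transfer,
              step=0.01, n_steps=256):
    """Front-to-back compositing along a single ray.

    `volume_sample(p)` returns the scalar value at point `p`;
    `transfer` maps that value to RGBA (see the sketch above).
    """
    color = np.zeros(3)
    alpha = 0.0
    for i in range(n_steps):
        p = origin + (i * step) * direction
        r, g, b, a = transfer(volume_sample(p))
        # Standard front-to-back alpha compositing.
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination
            break
    return color, alpha
```

A DSL user only writes the transfer logic; the marching, compositing, and GPU mapping are the framework's job, which is precisely what spares the physicists the low-level work.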

Adobe and Pixel Bender

On the last day, I attended two Adobe presentations: one on Pixel Bender and the other on GPU computation in Adobe products and the lessons learned.  For me, the Pixel Bender presentation was wonderful since I've recently been doing a lot of code generation work along the same lines as what Pixel Bender does.  I've been trying to figure out what they're doing under the hood, so it was wonderful to be able to ask all of the questions I've had about implementation details.  Apparently they started out using LLVM/Clang, but because of the size of the binaries, the length of build times, and spotty 64-bit code generation support, they eventually moved to their own lexing, parsing, and code generation system built on lex/yacc, with custom backends that target GLSL as well as multi-threaded SSE CPU code.
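
As a toy illustration of that lex → parse → code-gen structure (emphatically not Adobe's implementation), here is a minimal Python pipeline that compiles arithmetic expressions into a GLSL expression string:

```python
import re

# Toy lexer: numbers, identifiers, and the operators the parser handles.
TOKEN = re.compile(r'\s*(?:(\d+(?:\.\d+)?)|([A-Za-z_]\w*)|([+*()]))')

def lex(src):
    out, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f'bad input at {pos}')
        num, name, op = m.groups()
        out.append(('num', num) if num else ('name', name) if name else ('op', op))
        pos = m.end()
    return out

def parse(tokens):
    # Recursive-descent parser producing nested tuples as the IR.
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == ('op', '+'):
            rhs, i = term(i + 1)
            node = ('+', node, rhs)
        return node, i

    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == ('op', '*'):
            rhs, i = atom(i + 1)
            node = ('*', node, rhs)
        return node, i

    def atom(i):
        kind, val = tokens[i]
        if kind in ('num', 'name'):
            return (kind, val), i + 1
        if (kind, val) == ('op', '('):
            node, i = expr(i + 1)
            return node, i + 1  # skip the closing ')'
        raise SyntaxError(f'unexpected token {val!r}')

    node, _ = expr(0)
    return node

def emit_glsl(node):
    # Code-gen backend: walk the IR and emit GLSL source text.
    kind = node[0]
    if kind == 'num':
        v = node[1]
        return v if '.' in v else v + '.0'  # GLSL float literals
    if kind == 'name':
        return node[1]
    op, lhs, rhs = node
    return f'({emit_glsl(lhs)} {op} {emit_glsl(rhs)})'

print(emit_glsl(parse(lex('gain * (x + 0.5)'))))
# -> (gain * (x + 0.5))
```

Targeting a second backend, say SSE-flavored C, would mean writing another `emit_*` walker over the same IR, which is the appeal of owning the whole pipeline.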

There has been some impressive work done with Pixel Bender, and it seems incredibly robust.  The development system they have in place is equally impressive, automatically running through 21k+ tests whenever new check-ins are made to the repository so that the development team knows instantly when something breaks or performance degrades.  They also do what's called 'fuzz testing' to address the practically infinite input space a language like Pixel Bender has.  Fuzz testing takes a Pixel Bender file and randomly changes characters before running it through the testing system, which verifies parser errors, intermediate representation consistency, etc.  The idea is that input that is almost correct is much more difficult to handle than input that is clearly incorrect.
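
The character-mutation step of that kind of fuzzing is simple to sketch. Here is a hypothetical mutator in Python; the driver described in the comment is my own framing of how such a harness would use it:

```python
import random
import string

def fuzz(source, n_mutations=3, seed=None):
    """Randomly corrupt a few characters of a kernel source file.

    The mutants are 'almost correct' programs -- precisely the hard
    cases a compiler front end has to reject gracefully.
    """
    rng = random.Random(seed)
    chars = list(source)
    for _ in range(n_mutations):
        i = rng.randrange(len(chars))
        chars[i] = rng.choice(string.printable)
    return ''.join(chars)

# A test driver would feed each mutant through the compiler and assert
# that it either compiles cleanly or fails with a proper parser error,
# never a crash or an inconsistent intermediate representation.
```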

The second presentation provided some good tidbits on how to structure the development process and manage the complexity of supporting multiple operating systems, graphics card drivers, and graphics cards.  On the development team, someone always has a machine at the low end and a machine at the high end of what is supported, so that the full range of user experience is accounted for on the development side.  If all the devs have high-end machines, then it's unlikely that performance issues on older hardware will be worked on, because they never enter a developer's consciousness on a daily basis.  Also, Adobe keeps a blacklist of driver and hardware versions so it can preemptively notify users whose system configuration has known bugs and will not properly run the software.
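
The blacklist mechanism might look something like the following sketch; the entries and lookup are entirely made up, just to show the shape of the idea:

```python
# Hypothetical driver blacklist check; vendor names and version strings
# here are invented for illustration.
BLACKLIST = {
    # (vendor, driver version) pairs known to misbehave.
    ('ExampleVendor', '8.17.12.5896'),
    ('ExampleVendor', '8.15.11.8593'),
}

def check_gpu(vendor, driver_version):
    if (vendor, driver_version) in BLACKLIST:
        print('Warning: your graphics driver has known issues with this '
              'software; GPU acceleration has been disabled.')
        return False
    return True
```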

To handle cross-platform GPU computation, Adobe is also betting on OpenCL.  While they didn’t say when their products would ship with OpenCL code, it’s likely to be early next year.  Adobe is part of the Khronos consortium and is actively pushing hardware and driver companies to produce stable and efficient OpenCL-enabled systems.
