Monday, November 28, 2011

Sprinkle behind the scenes

This summer a friend and I released a physics puzzler for iOS and Android based on fluid simulation. It started as a really small project almost a year ago, but it grew along the way and has been really well received on both platforms.

Last year I posted movie clips from a fluid simulator I was working on, and the fluid in Sprinkle is basically the same algorithm, but in 2D and with lots of optimizations. The simulator is particle-based, but not traditional SPH. Instead, pressure and velocity are solved with a regular "sequential impulse" solver. It's quite a mindjob to work out the constraint formulation, but it's worth the effort, since you get unconditionally stable fluid that is reasonably fast and interacts seamlessly with rigid bodies.
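To give a feel for the general idea, here is a simplified sketch of a single pairwise constraint being solved with sequential impulses: a velocity error along the pair normal, a Baumgarte-style bias to correct overlap, and an accumulated impulse that is clamped so the constraint can only push. The actual constraint formulation in the game is more involved, and the types, names and numbers below are just illustration.

```cpp
#include <algorithm>

// Simplified sketch of a sequential impulse pass over fluid particle pairs.
// Unit particle mass is assumed; all names and constants are illustrative.
struct Particle { float x, y, vx, vy; };

struct Pair {
    int a, b;           // particle indices
    float nx, ny;       // unit vector from a to b
    float overlap;      // how far the pair is inside its rest distance (>= 0)
    float accumulated;  // accumulated impulse, used for clamping
};

void solvePairs(Particle* p, Pair* pairs, int pairCount, int iterations, float dt)
{
    const float beta = 0.5f; // Baumgarte factor: how much overlap to remove per step

    for (int it = 0; it < iterations; ++it) {
        for (int i = 0; i < pairCount; ++i) {
            Pair& c = pairs[i];
            Particle& a = p[c.a];
            Particle& b = p[c.b];

            // Relative velocity along the pair normal (positive = separating)
            float vn = (b.vx - a.vx) * c.nx + (b.vy - a.vy) * c.ny;

            // Impulse that drives the pair towards the desired separating velocity
            float target = beta * c.overlap / dt;
            float lambda = (target - vn) * 0.5f; // 0.5 = effective mass of two unit masses

            // Clamp the accumulated impulse so the constraint only pushes, never pulls
            float old = c.accumulated;
            c.accumulated = std::max(0.0f, old + lambda);
            lambda = c.accumulated - old;

            // Apply equal and opposite impulses
            a.vx -= lambda * c.nx; a.vy -= lambda * c.ny;
            b.vx += lambda * c.nx; b.vy += lambda * c.ny;
        }
    }
}
```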

The most computationally intensive operation is neighbor finding. I'm using a pretty standard spatial hashing technique, with a twist. Each particle gets assigned the quadrant within its cell that it belongs to, and a table is used to search neighboring cells, so only four cells need to be searched for each particle instead of nine. For this to work, the cell size needs to be at least four times the particle radius. I also do all neighbor finding on quantized 8-bit positions within each cell, and cell positions are also 8-bit quantized. This is to reduce data size and cache impact.
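As a rough sketch of the quadrant trick (names and layout here are purely for illustration): with a cell size of at least four particle radii, a particle in, say, the lower-left quadrant of its cell cannot reach anything in the cells to the right of or above it, so only its own cell plus three neighbors ever need to be visited.

```cpp
#include <cmath>

struct CellOffset { int dx, dy; };

// For each quadrant (0 = lower-left, 1 = lower-right, 2 = upper-left, 3 = upper-right),
// the three neighboring cells to search in addition to the home cell.
static const CellOffset kQuadrantTable[4][3] = {
    { {-1, 0}, {0, -1}, {-1, -1} },  // lower-left
    { { 1, 0}, {0, -1}, { 1, -1} },  // lower-right
    { {-1, 0}, {0,  1}, {-1,  1} },  // upper-left
    { { 1, 0}, {0,  1}, { 1,  1} },  // upper-right
};

// Fills in the four cells that can contain neighbors of a particle at (px, py).
void gatherCells(float px, float py, float cellSize, int cells[4][2])
{
    int cx = (int)std::floor(px / cellSize);
    int cy = (int)std::floor(py / cellSize);

    // The fractional position within the cell decides the quadrant
    float fx = px / cellSize - cx;
    float fy = py / cellSize - cy;
    int quadrant = (fx < 0.5f ? 0 : 1) + (fy < 0.5f ? 0 : 2);

    cells[0][0] = cx; cells[0][1] = cy;  // home cell
    for (int i = 0; i < 3; ++i) {
        cells[i + 1][0] = cx + kQuadrantTable[quadrant][i].dx;
        cells[i + 1][1] = cy + kQuadrantTable[quadrant][i].dy;
    }
}
```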

The fluid step does two iterations per frame to get the desired incompressibility, but neighbors are only computed once. There is also a maximum number of neighbors per particle, and all pair-wise information is stored directly in each particle, duplicated per pair, so it can be traversed linearly. Particle positions are stored in a separate list to minimize cache impact when updated.
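A sketch of what such a layout can look like (the field names and the neighbor cap are assumptions for illustration):

```cpp
// Illustrative data layout: a hard cap on neighbors, pair data duplicated into each
// particle so the solver can walk memory linearly, and positions kept in their own
// tightly packed array since they are what gets written every update.
enum { kMaxNeighbors = 16 }; // illustrative cap, not the actual number

struct Neighbor {
    unsigned short index;    // the other particle
    float nx, ny;            // cached direction to the neighbor
    float dist;              // cached distance
};

struct FluidParticle {
    float vx, vy;
    int neighborCount;
    Neighbor neighbors[kMaxNeighbors]; // duplicated per pair, traversed linearly
};

struct Position { float x, y; };       // stored in a separate array
```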

For rigid bodies, the excellent Box2D engine is used, loosely integrated with the fluid. Box2D is also set up to do two velocity iterations per frame, but the rigid/fluid solver only kicks in once per frame, so the overall update goes something like: rigid -> rigid -> fluid -> fluid -> rigid/fluid. Ideally I should have integrated the fluid solver more tightly, so the update loop would be rigid -> fluid -> rigid/fluid -> rigid -> fluid -> rigid/fluid, but it turned out to be unnecessary for the type of scenarios we wanted to create.
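The per-frame update order then looks roughly like this (the function names are hypothetical stand-ins for the actual Box2D and fluid solver calls):

```cpp
// Hypothetical stand-ins for the actual Box2D step and fluid solver entry points
void rigidBodyIteration(float dt);
void fluidIteration(float dt);
void rigidFluidCoupling(float dt);

void stepFrame(float dt)
{
    rigidBodyIteration(dt);   // rigid
    rigidBodyIteration(dt);   // rigid
    fluidIteration(dt);       // fluid
    fluidIteration(dt);       // fluid
    rigidFluidCoupling(dt);   // rigid/fluid, only once per frame
}
```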

There is a maximum of 600 particles simulated at the same time, so each particle has a lifetime of about 3.5 seconds, after which it disappears and gets recycled. An important part of level design was of course to create levels that drain naturally and do not rely on pools of water.
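A fixed pool with a hard lifetime can be recycled with a simple swap-and-pop scheme, something along these lines (a minimal sketch, all names are illustrative):

```cpp
enum { kMaxParticles = 600 };
static const float kLifeTime = 3.5f; // seconds

struct PooledParticle {
    float x, y, vx, vy;
    float age;
};

struct ParticlePool {
    PooledParticle particles[kMaxParticles];
    int activeCount;

    ParticlePool() : activeCount(0) {}

    void update(float dt)
    {
        for (int i = 0; i < activeCount; ) {
            particles[i].age += dt;
            if (particles[i].age > kLifeTime) {
                // Swap-and-pop: overwrite the dead particle with the last active one
                particles[i] = particles[--activeCount];
            } else {
                ++i;
            }
        }
    }

    bool spawn(float x, float y, float vx, float vy)
    {
        if (activeCount >= kMaxParticles)
            return false; // pool exhausted, the emitter has to wait
        PooledParticle& p = particles[activeCount++];
        p.x = x; p.y = y; p.vx = vx; p.vy = vy; p.age = 0.0f;
        return true;
    }
};
```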


In addition to the 600 simulated particles, there are 240 simplified particles, which only follow ballistic trajectories until they hit something. These are rendered separately in a brighter color to look more like foam and splashes. I spawn these "decoration" particles based on certain criteria, for example when fluid is hitting something, or is moving upwards. Full collision detection is still being done on these particles, in very much the same way as for the regular particles, but they are instantly removed as soon as they hit something. The particle collision detection is performed first on the fluid cells, using standard Box2D queries, and then for each particle using custom methods. Each rigid body shape has a dual representation used only for fluid collisions, and internal edges between convexes in a concave shape are filtered out in a preprocessing step to avoid "edge bumps". This was really crucial for any level that includes a ramp or a slide, which are highly concave.
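The spawning of decoration particles boils down to a small predicate per simulated particle, something like the sketch below. The actual criteria are tuned in the game; the names and thresholds here are made up.

```cpp
#include <cstdlib>

// Illustrative spawn criterion for "decoration" particles: spawn when a fluid
// particle just hit something or is moving upwards, with a random chance so the
// decoration budget is not blown in a single frame. Thresholds are made-up values.
struct SimParticle { float x, y, vx, vy; bool hitSomethingThisFrame; };

bool shouldSpawnDecoration(const SimParticle& p)
{
    const float kUpwardSpeed = 2.0f;  // illustrative
    const float kSpawnChance = 0.25f; // illustrative

    bool qualifies = p.hitSomethingThisFrame || p.vy > kUpwardSpeed;
    return qualifies && ((float)std::rand() / RAND_MAX) < kSpawnChance;
}
```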

For rendering, a dynamic particle mesh is built on the CPU every frame and rendered by the GPU using a simple refraction shader. On Android, I use double-buffered dynamic VBOs to update the particle mesh, while on iOS it seems faster to just use a vertex array. I guess for these shared-memory architectures most of the old graphics wisdom is no longer applicable. Particles are stretched, aligned, resized, colored, etc. based on their physical properties, so it's quite sophisticated for being a particle renderer. If there is one thing that really stands out about Sprinkle I'd say it's how smooth the water looks, considering it's only 600 simulated particles.
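For the Android path, the double-buffered VBO update looks conceptually like the sketch below (OpenGL ES 2; the vertex format and attribute handling are simplified assumptions, not the actual renderer):

```cpp
#include <GLES2/gl2.h>

struct ParticleVertex { float x, y, u, v; };

struct ParticleMeshVBO {
    GLuint vbo[2];
    int    current;

    void init()
    {
        current = 0;
        glGenBuffers(2, vbo);
    }

    // Upload this frame's CPU-built mesh into the buffer the GPU is not reading from
    void upload(const ParticleVertex* vertices, int vertexCount)
    {
        current ^= 1;
        glBindBuffer(GL_ARRAY_BUFFER, vbo[current]);
        // Reallocate and fill, so we never wait on the buffer used last frame
        glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(ParticleVertex), vertices,
                     GL_DYNAMIC_DRAW);
    }

    void draw(int vertexCount, GLuint posAttrib, GLuint uvAttrib)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo[current]);
        glEnableVertexAttribArray(posAttrib);
        glEnableVertexAttribArray(uvAttrib);
        glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(ParticleVertex),
                              (const void*)0);
        glVertexAttribPointer(uvAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(ParticleVertex),
                              (const void*)(2 * sizeof(float)));
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }
};
```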

The fluid simulation is done on a separate thread, and so is the particle mesh generation, so it scales quite well onto multiple cores. For Tegra 3, which is a quad-core architecture, we also added a smoke simulation on a separate thread, but that will be another blog post.

Getting the game to run at 60 FPS on a mobile device was really quite a challenge. The 0.8-1.2 GHz ARM processors found in mobile devices today are indeed quite fast, but they are still a long way behind a regular desktop PC. For a frame time of 16 ms on an iPad 1 I had to target about 1.5 ms on my desktop PC, so it's roughly a factor of ten.

Thursday, March 31, 2011

Impressions of the green robot

I've been working on a mobile, physics-based game over the last five months (I'll post stuff about the project very soon) and today I started toying around with Android and porting the game. Honestly, I'm not really sure what to think yet. Some things are better than iOS development and other things are quite annoying. I really appreciate having command line tools for everything, and the ability to log in to the device and do maintenance. Especially the ability to log in and run a shell command on the device via a command line tool on the host. That way you can run scripts on your development machine that coordinate things on both the device and the host at the same time. Awesome!

When it comes to development tools I think command line tools are far superior to graphical user interfaces in most cases (except for debuggers). I'm pretty happy with Visual Studio, but that's probably because I've been more or less forced to use it every day for the last ten years. Nothing beats having good command line tools and the ability to script everything the way you want.

Being dependent on Java definitely sucks. They have really tried to work around it in the latest releases of the NDK, but it's still there, and you really do notice it. A lot. For a game programmer who keeps all his stuff in C++ this is no worse than Apple's stupid fascination with Objective-C though. A couple of revisions from now, the Android NDK will probably be pretty complete, while iOS will always be knee-deep in Objective-C, forcing us to write horrible wrappers.

Android documentation is bad at best and non-existent everywhere else, and the whole development procedure is very far from streamlined, with a gazillion tools and configuration files to tie everything together. Note though that I'm talking about writing native C++ apps using OpenGL ES 2 here, not the Java crap you see in all the tutorials. (By the way, the NDK compiler did not support C++ exceptions until very recently. I talked about exactly this in my previous blog post.)

Asset management is the part I like the least about Android so far. You throw your files in the asset folder, and they automatically get compressed into a zipped bundle. Then you can access resources from this bundle using the NDK, but not quite the way you'd expect. On iOS this works beautifully: the path is simply translated into a location on the device and then you can use fopen, fseek or whatever you like. On Android the tools automatically compress stuff into the bundle based on the file suffix (oh please..), and there doesn't seem to be any way of accessing compressed data from the NDK unless you write your own virtual file system. Solution? Add a .mp3 suffix to all the files! Seriously...

Monday, March 14, 2011

General wisdom

I'm following quite a few game programming blogs, and whenever there is a post about a lifehack or general wisdom that can help me simplify my work I'm all ears. So, I thought I'd share some of my own experiences:

Automate everything that can be automated. Especially project file generation. Editing project files is a real energy drainer, and even though IDEs are trying to make the process smooth, it never is. This really becomes a big problem once multiple platforms come into the picture. Personally I have a Python script that takes a few input parameters, scans the source tree and outputs a nice project file for Visual Studio, or a makefile. You have to bite the bullet every time Visual Studio changes its project file format, but it's so worth it. I also have similar scripts for documentation, distribution and in some cases code generation. Writing the scripts takes a while, but they can be reused, you get better at writing them every time you do it, and it's more fun than doing dull monkeywork over and over again.

Minimize external library dependencies. People are way too eager to include external libraries and middleware in their projects. I think it is very common that the use of libraries and middleware ends up costing way more than it would have cost to just write the code yourself. Only include an external library in your project if it: 1) Solves one specific task extremely well. 2) Can be trusted to do so. 3) Puts you in control of all memory and IO operations. 4) Can easily be included as source code.

Keep everything in the same project. This ties into the last criterion for using external libraries above. I want all third party libraries to be part of the source tree. Not a dynamic library, not a static library, not even a separate project in Visual Studio, just plain source code in a separate folder. This is important, because it simplifies cross-platform development, especially when automatically generating project files. It also completely takes away the problems with conflicting runtimes for static libraries, mismatching PDBs, etc. It's all going into the same binary anyway, so just put your files in the same project and be done with it.

Refactor code by first adding the new and then removing the old. I used to do it the other way around for a long time, ripping out what's ugly and leaving the code base in a broken state until the replacement code was in place. Yes, it sounds kind of obvious in retrospect, but it took me a long time to actually implement this behavior in practice. The only practical problem I've experienced is naming clashes. I usually add a suffix to the replacement code while developing and then remove it once the original code is gone. As an example, if you want to replace your Vector3, create a new one called Vector3New, gradually move your code base over to using Vector3New while continuously testing, and when you're done, remove the original Vector3 and do a search/replace from Vector3New to Vector3.
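In code, the suffix trick is as dumb as it sounds, which is exactly why it works:

```cpp
// The old implementation stays in place while the replacement is phased in.
struct Vector3 {
    float x, y, z;
};

// The replacement lives alongside it under a temporary name, so everything keeps
// building and can be tested continuously during the migration.
struct Vector3New {
    float v[3];
    float& operator[](int i)       { return v[i]; }
    float  operator[](int i) const { return v[i]; }
};

// When nothing references Vector3 anymore: delete it and search/replace
// Vector3New -> Vector3.
```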

Don't over-structure your code. This one is really hard. People often talk about code bases lacking structure, but I think it's a much worse and more common problem that a code base has inappropriate structure, or just too much of it. Consider this - given two implementations of some algorithm, where one is a couple of large messy functions in a single file and the other is fifteen files with a ton of inherited classes, abstract interfaces, visitors and decorators. Assuming neither of them suits your current needs, which one would you rather refactor? My point is that you shouldn't try to structure something until you know all the requirements. Not to save time when first building it, but because it's a pain in the ass to restructure something that already has structure. You can compare it to building a house. Would you rather start with a pile of building material or first disassemble an existing building? To me that's a no-brainer, even if the pile happens to be quite messy. Hence, never define an abstract interface with only one implementation, never write a manager that manages a single object, etc. Just start out writing your desired functionality in the simplest possible way, then structure it if and when the need arises.

Stay away from modern C++ features and standard libraries. I've tried introducing bits and pieces from STL, boost, exceptions and RTTI throughout the years, but every time I do, something comes out and bites me right in the butt. Buggy implementations, compiler bugs, missing features, restrictions on memory alignment, etc. This is depressing and discouraging, but it's the sad truth we have to deal with. If you want your code to be truly portable without the hassle (not just in theory, but in practice) you'll have to stick to a very small subset of the C++ standard. In my experience it's better to just accept this and design for it rather than putting up a fight.

Use naming prefixes rather than namespaces. I was advocating namespaces for a long time, but now I've switched sides completely and use prefixes for everything. I kind of agree that prefixes are ugly, but they have two obvious benefits that just make them worth it. A) You can search your code base for all instances of a particular class or function, and B) they make forward declarations as easy as they should be. With namespaces, especially nested ones, forward declarations are just painful, to the point where you tend to not use them at all, leaving you with ridiculous build times. I usually don't even forward declare classes at the top any more, but rather inline them where needed, like: "class PfxSomeClass* myFunction(class PfxSomeOtherClass& param)".
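To make the two points concrete (the "Pfx" prefix and the namespace layout are just placeholders):

```cpp
// With nested namespaces, a forward declaration means reopening the namespaces,
// which quickly gets tedious enough that people include the full header instead.
namespace pfx { namespace render { class SomeOtherClass; } }
pfx::render::SomeOtherClass* makeOtherNamespaced();

// With a prefix, the forward declaration can be written inline, right in the
// signature that needs it, and "PfxSomeOtherClass" is trivially greppable.
class PfxSomeClass* myFunction(class PfxSomeOtherClass& param);
```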