Tuesday, March 30, 2010

Thoughts on shape casting

I implemented a different type of shape casting yesterday. The most common way, to my knowledge at least, is to do it with conservative advancement:

1) Run the GJK algorithm to find the closest point to the origin in configuration space.
2) Use the distance and angle to compute a safe distance to advance (cast) the configuration space object towards the origin, in the cast direction.
3) Repeat until the distance is below a certain threshold, or until the direction to the origin points away from the cast direction (in which case it is a miss).
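
In code, the loop looks roughly like this. This is just a sketch of the idea, not my actual implementation: gjkClosestPoint stands in for whatever GJK query you have, and the Vec3 helpers are the bare minimum to make it read.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static float length(const Vec3& a) { return std::sqrt(dot(a, a)); }

    // Hypothetical GJK query: the closest point on the CSO to the origin,
    // after the CSO has been translated by t * castDir.
    Vec3 gjkClosestPoint(const void* cso, const Vec3& castDir, float t);

    // Conservative advancement. castDir is assumed to be unit length.
    // On a hit, writes the cast distance to outDist and returns true.
    bool shapeCastCA(const void* cso, const Vec3& castDir, float maxDist, float& outDist)
    {
        const float epsilon = 1.0e-4f;
        float t = 0.0f;
        for (int iter = 0; iter < 32; ++iter)
        {
            Vec3 p = gjkClosestPoint(cso, castDir, t);
            float dist = length(p);
            if (dist < epsilon) { outDist = t; return true; }  // touching: hit
            // How fast the advance closes the gap towards the origin.
            float approach = -dot(castDir, p) / dist;
            if (approach <= 0.0f) return false;                // pointing away: miss
            t += dist / approach;                              // safe step, cannot overshoot
            if (t > maxDist) return false;                     // out of range: miss
        }
        outDist = t;  // iteration cap reached, accept the current estimate
        return true;
    }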

Even though this is an iterative method relying on GJK, which is itself iterative, it is actually pretty fast, because it converges quickly, typically in two or three iterations, and you can warm-start GJK with the simplex from the previous iteration.

However, there is a completely different method, which I thought I came up with, but after some googling it seems Jay Stelly has mentioned similar methods before on the Bullet and MollyRocket forums. The method uses MPR on a swept CSO. The idea is to sweep the CSO in the cast direction far enough to enclose the origin, and then raycast from the origin to find the distance to the surface.

1) Construct the swept CSO. This may sound complicated, but since support functions are additive it's as easy as adding a vector to the existing support function (see the sketch after this list). The resulting shape will be a swept version of the original CSO. The direction of the vector should be the sweep direction, and the length should be "long enough" to pass the origin. As shown later, the exact distance doesn't really matter much, so just keep it reasonable to avoid numerical issues. I use the distance from the geometric center of the CSO to the origin.

2) Use MPR, or XenoCollide, to find the distance from the origin to the surface of the swept CSO, in the sweep direction. If the origin is not inside the swept CSO, we know there is a miss.

3) Subtract the distance found in step 2) from the distance computed in step 1).

4) That's it! The difference in step 3 is the final sweep distance. Imagine a line going from the original CSO through the origin to the point computed using MPR in step 2). The line has the same direction as the sweep, and the origin divides it into two parts: the first part, from the original CSO to the origin, and the second part, from the origin to the surface of the swept CSO. You know the total length of the line (determined in step 1) and you know the length of the second part (computed in step 2), so the first part must be the difference. I know, I know, a picture would be ideal, but I'm lazy. Bug me about it and I might post one :)
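
To make step 1) concrete, here is a sketch of the swept support function, reusing the Vec3 helpers from the sketch above. supportCSO stands in for the existing support mapping. Strictly speaking the vector is added only when the query direction points along the sweep, since the swept CSO is the Minkowski sum of the CSO and the segment from the origin to sweepLen * sweepDir.

    // Support mapping of the original CSO (assumed to exist elsewhere).
    Vec3 supportCSO(const void* cso, const Vec3& dir);

    // Support mapping of the swept CSO: the Minkowski sum of the CSO and the
    // segment [0, sweepLen * sweepDir]. The segment contributes its far
    // endpoint whenever the query direction points along the sweep.
    Vec3 supportSweptCSO(const void* cso, const Vec3& dir,
                         const Vec3& sweepDir, float sweepLen)
    {
        Vec3 s = supportCSO(cso, dir);
        if (dot(dir, sweepDir) > 0.0f)
        {
            s.x += sweepDir.x * sweepLen;
            s.y += sweepDir.y * sweepLen;
            s.z += sweepDir.z * sweepLen;
        }
        return s;
    }

Steps 2) through 4) then reduce to running MPR on this support function from the origin in the sweep direction, followed by the subtraction: castDistance = sweepLen - mprDistance.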

Although I haven't done any side-by-side comparisons yet, this second approach seems much more efficient to me, since it is much simpler, less prone to numerical precision issues and (yet to be proved) faster. So if this is a known method, how come everybody is still talking about conservative advancement?

[EDIT]

I quickly profiled the new method against the old one (conservative advancement). Each session is 10 000 sweep queries with different angles, computing closest points, sweep distance and contact normal.


Box - Box (old)                            Min 10 ms    Avg 16 ms     Max 30 ms
Box - Box (new)                            Min 11 ms    Avg 14 ms     Max 16 ms
Cylinder - Cylinder (old)                  Min 14 ms    Avg 40 ms     Max 99 ms
Cylinder - Cylinder (new)                  Min 20 ms    Avg 35 ms     Max 53 ms
Polyhedron - Polyhedron 128 verts (old)    Min 99 ms    Avg 130 ms    Max 252 ms
Polyhedron - Polyhedron 128 verts (new)    Min 98 ms    Avg 130 ms    Max 185 ms

It turns out that conservative advancement is slightly faster in the best case, slightly slower on average, but much slower in the worst case, which makes sense, since it is doubly iterative. I have not noticed any differences in robustness so far, but I haven't stressed it either.


Sunday, March 21, 2010

Greetings from the inside!

So after spending a few weeks outside the CSO, I got around to stepping inside and experimenting with a number of different penetration depth algorithms this week. However, I failed to come up with an alternative to EPA that can actually guarantee a global minimum separation. These are the ones I tried:

1) Use MPR to raycast the inside of the CSO in a direction estimated from the geometric center of the CSO, use the resulting normal as the new direction and iterate until convergence (see the sketch after this list). This works in 95% of all cases and is really fast. The downside is that there is no way to guarantee or measure whether this really is the global minimum.

2) Same method as in 1) but with multiple search directions. I was for a while under the impression that two local minima had to be at least 90 degrees apart, so casting six rays should be safe (of which three can be optimized away). This works in 99% of all cases, but there is still that annoying one percent where it finds a local minimum.

3) Same method as in 1) but trace the edges of the last portal. After the initial surface point has been found, tilt the search direction in the direction of each edge, so that it falls outside the last portal, choose the best direction and iterate until convergence. This way we can sample the neighboring triangles and search those directions. Again, there is a 99% success rate. Unfortunately, in some cases all neighboring triangles will lean towards the initial triangle, so we just end up at the same local minimum again.

4) Broken EPA. I tried using EPA without tracking connectivity and without keeping the polytope convex. Simply a triangle soup, where the closest triangle is split and valid triangles (the ones that are portals) are kept. It doesn't work.

5) Full EPA. I used the one found in Bullet as inspiration and it works really well in all cases. I'm really impressed by the Bullet implementation by Nathanael Presson. EPA is an awfully complicated algorithm to implement, but it seems very stable and quite fast.
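
To make 1) concrete, here is roughly what that loop looks like, reusing the Vec3 helpers from the post above. mprSurface is a stand-in for an MPR query that raycasts from the origin and returns the exit point and surface normal; this is a sketch of the idea, not my actual implementation.

    static Vec3 normalize(const Vec3& a)
    {
        float invLen = 1.0f / length(a);
        return Vec3{ a.x * invLen, a.y * invLen, a.z * invLen };
    }

    // Hypothetical MPR query: raycast from the origin (assumed to be inside
    // the CSO) in direction dir; returns the exit point and unit surface normal.
    bool mprSurface(const void* cso, const Vec3& dir, Vec3& outPoint, Vec3& outNormal);

    // Method 1): follow the surface normal until it lines up with the ray
    // direction. Fast, but may settle in a local minimum, as discussed above.
    bool mprPenetration(const void* cso, Vec3 dir, Vec3& outNormal, float& outDepth)
    {
        dir = normalize(dir);  // initial guess, estimated from the geometric center
        for (int iter = 0; iter < 16; ++iter)
        {
            Vec3 p, n;
            if (!mprSurface(cso, dir, p, n)) return false;
            if (dot(n, dir) > 0.9999f)       // normal and ray direction agree
            {
                outNormal = n;
                outDepth = dot(p, n);        // depth along the surface normal
                return true;
            }
            dir = n;                          // search again along the normal
        }
        return false;
    }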

So all this fuss just to conclude that EPA is the way to go? I'm still contemplating whether there is a way to combine MPR and EPA to make an early out and just find the surface using MPR. It might not be possible.

I'm still not using EPA for all penetrations. In most cases using MPR with the last known separation axis works excellently, and even just sampling and projecting support mappings on the last axis works really well. Physics is surprisingly forgiving about something, for once.
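
That last trick is almost embarrassingly small. A sketch, with supportCSO from the sketches in the post above, and assuming the cached axis is unit length:

    // Approximate penetration depth: project the support mapping of the CSO
    // onto the last known separation axis. This is an upper bound that is
    // exact when the axis happens to be the true minimum separation direction.
    float approxPenetrationDepth(const void* cso, const Vec3& lastAxis)
    {
        return dot(supportCSO(cso, lastAxis), lastAxis);
    }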

Tuesday, March 16, 2010

Trapped in configuration space

I've spent the last couple of weeks tinkering with GJK and ended up rewriting it from scratch. Ever since I saw Casey's video about SAT-GJK a couple of years ago I've wanted to try it, so I finally did. Now, Casey's video only deals with binary overlap status, not closest points, but it was relatively easy to extend his approach and even do a linear sweep operation. I especially like how his method does not include any divisions or square roots, which makes it very robust. I must add though that it's not straightforward to get a robust implementation that works on both polyhedra and quadric objects. Especially the termination criteria needed a lot of attention.
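
For reference, here is the rough skeleton of such an overlap test, with the Vec3 helpers from the sketches in the posts above. This is just the well-known outer loop, not my implementation: supportA and supportB stand in for the shapes' support mappings, and all the difficulty hides inside doSimplex, which is only declared here.

    // Hypothetical support mappings of the two shapes.
    Vec3 supportA(const void* a, const Vec3& dir);
    Vec3 supportB(const void* b, const Vec3& dir);

    // Support mapping of the CSO (Minkowski difference A - B).
    Vec3 supportDiff(const void* a, const void* b, const Vec3& dir)
    {
        Vec3 sa = supportA(a, dir);
        Vec3 sb = supportB(b, Vec3{ -dir.x, -dir.y, -dir.z });
        return Vec3{ sa.x - sb.x, sa.y - sb.y, sa.z - sb.z };
    }

    struct Simplex { Vec3 pts[4]; int count; };

    // The Voronoi-region case analysis: reduces the simplex to the feature
    // closest to the origin and picks a new search direction. Returns true
    // when the simplex encloses the origin. This is where the hard work lives.
    bool doSimplex(Simplex& s, Vec3& dir);

    // Binary overlap test. Note that the outer loop needs no divisions or
    // square roots; every decision can be made with dot and cross products.
    bool gjkOverlap(const void* a, const void* b)
    {
        Vec3 dir = { 1.0f, 0.0f, 0.0f };  // any nonzero start direction
        Simplex s;
        s.pts[0] = supportDiff(a, b, dir);
        s.count = 1;
        dir = Vec3{ -s.pts[0].x, -s.pts[0].y, -s.pts[0].z };  // search towards the origin
        while (true)
        {
            Vec3 p = supportDiff(a, b, dir);
            if (dot(p, dir) < 0.0f) return false;  // no support past the origin: separated
            s.pts[s.count++] = p;
            if (doSimplex(s, dir)) return true;    // origin enclosed: overlap
        }
    }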

I knew from the beginning that decent visualization tools would be required to pull off a new GJK implementation, and it's certainly not trivial to visualize all the information needed. There are the two objects, the configuration space object, the simplex, which comes in four flavors, normals, closest points, etc. Here's a screenshot from my test bed in the middle of a GJK distance calculation. The algorithm can be stepped and visualized at both the add and the reduce step of each iteration:

[Screenshot: the test bed mid-way through a GJK distance calculation]

On the left is the configuration space object; the red triangle is the current simplex, with closest point and normal. The corresponding object space closest points are also visualized on the two objects. I could have spent even more time on visualization, and I might just do that, because next I want to try out a couple of ideas for penetration depth computation.

The new implementation is both more robust and faster than the old one from Meqon, which uses Johnson's distance algorithm from the original paper. I'm not sure how it compares to the Bullet implementation and its geometric distance algorithm. Part of the performance increase is due to a more efficient caching scheme. The old version only supported an initial 1-simplex, seeded with the previous frame's closest points, while the new one warm-starts with the whole previous simplex. This is a bit more complicated, since the simplex can deform badly due to motion, but it really pays off, and most of the time GJK terminates immediately after the first iteration.
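
To make the idea concrete, here is a rough sketch of what such a cache could look like, reusing the Simplex struct from the skeleton above. GjkCache and featurePoint are made-up names, and real shapes would need some way to identify support features; the point is only that the whole simplex is stored as feature ids and rebuilt from fresh positions next frame.

    // Cached simplex from the previous frame, stored as support feature ids.
    struct GjkCache
    {
        int idA[4];
        int idB[4];
        int count;
    };

    // Hypothetical: recompute the current position of a cached support
    // feature (the shapes have typically moved since last frame).
    Vec3 featurePoint(const void* shape, int id);

    // Rebuild the warm-start simplex from last frame's feature ids before
    // running GJK. Most of the time the first iteration then terminates.
    void warmStart(const GjkCache& cache, const void* a, const void* b, Simplex& s)
    {
        s.count = cache.count;
        for (int i = 0; i < cache.count; ++i)
        {
            Vec3 pa = featurePoint(a, cache.idA[i]);
            Vec3 pb = featurePoint(b, cache.idB[i]);
            s.pts[i] = Vec3{ pa.x - pb.x, pa.y - pb.y, pa.z - pb.z };
        }
    }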

Here are some performance comparisons:

Scene with 16-vert polyhedra    Old 22.5 ms    New 17.1 ms    New+caching 9.3 ms
Scene with cylinders            Old  2.6 ms    New  2.5 ms    New+caching 1.7 ms
Scene with boxes                Old 10.7 ms    New  8.8 ms    New+caching 6.3 ms
Scene with mixed objects        Old 19.3 ms    New 13.9 ms    New+caching 6.9 ms

So, depending on the scenario and how much motion there is, the new version ranges from slightly faster to almost three times faster. More importantly, though, the new version is more numerically robust.

I find it quite fascinating how Casey's approach with caching is very similar to the old Lin-Canny closest feature algorithm. Both methods are based on Voronoi regions, but Casey's operates in configuration space, while Lin-Canny uses object space. Casey's approach uses implicit Voronoi regions determined by support functions, while Lin-Canny uses explicit pre-computed Voronoi regions, but the underlying principle, that the closest distance is always found within the Voronoi regions of a particular feature pair, is the same.

What currently keeps me awake at night is whether there might be an implicit configuration space counterpart to V-Clip, which would then work for penetrating objects as well. The only implicit penetration depth method I'm aware of that has been proven to work is EPA, but it's so awkward I'm not even going to try implementing it. Intuitively I feel like there must be a simpler way to compute exact penetration depth that doesn't involve dynamic data structures.