I've been struggling with a decision about whether to implement a scene graph in my game. I have some use cases that call for such a tool, but I haven't been able to work through some of the implementation details.
Some background: I'm writing a space-shooter-type game targeted at mobile platforms (primarily Android), and my code is almost entirely C++. I'm not using any middleware; the rendering and physics engines in particular are my own creations. My physics engine updates the locations of objects based on forces and impulses. I have no animation system as of yet, but I may add one at some point (which may or may not bear on this discussion).
First, I'll describe a good use case. I would like to have a boss that is made up of several discrete parts, each of which can be damaged or destroyed independently. For example, I might have a boss with an arm that can receive damage independently of the rest of the boss entity. When the arm is destroyed, a fire particle effect located at the boss's shoulder could indicate that the arm is now destroyed.
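One way to model that use case is to give each part its own hit points plus an attach point expressed in the boss's (parent) space, converting to world space only when an effect needs to spawn. This is purely a sketch; the names (`Boss`, `Part`, `applyDamage`) and the 2D math are my assumptions, not anything from the question:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

struct Part {
    float hp;
    Vec2 localOffset;       // attach point in the boss's (parent) space
    bool destroyed = false;
};

struct Boss {
    Vec2 position;          // world-space position of the boss
    float rotation;         // world-space orientation, radians
    std::vector<Part> parts;

    // Convert a parent-space point to world space.
    Vec2 toWorld(const Vec2& local) const {
        float c = std::cos(rotation), s = std::sin(rotation);
        return { position.x + c * local.x - s * local.y,
                 position.y + s * local.x + c * local.y };
    }

    // Damage one part; when it dies, report the world-space point
    // where a fire effect should spawn (e.g. the shoulder).
    bool applyDamage(std::size_t i, float dmg, Vec2& effectPos) {
        Part& p = parts[i];
        p.hp -= dmg;
        if (p.hp <= 0.f && !p.destroyed) {
            p.destroyed = true;
            effectPos = toWorld(p.localOffset);
            return true;
        }
        return false;
    }
};
```

The key point is that parts live in parent space for gameplay bookkeeping, and world space is derived on demand rather than stored per part.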
For now, I have decided to try solving such problems with constraints in my physics engine that keep compound objects together. One such constraint allows zero degrees of freedom and is essentially a transformation matrix. This is really an attempt to work around the problem, described below, that turned me off scene graphs in the first place.
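A zero-degrees-of-freedom constraint of this kind is often called a weld: after each physics step, the child body's world transform is re-derived from the parent's. A minimal sketch, with all names (`Body`, `WeldConstraint`, `solve`) being my own illustrative assumptions:

```cpp
#include <cassert>
#include <cmath>

struct Body {
    float x = 0.f, y = 0.f;   // world-space position
    float angle = 0.f;        // world-space orientation, radians
};

struct WeldConstraint {
    Body* parent;
    Body* child;
    float localX, localY;     // child's offset in the parent's local space
    float localAngle;         // child's orientation relative to the parent

    // Called after the integration step: snap the child back onto its
    // parent-relative attachment (zero degrees of freedom).
    void solve() {
        float c = std::cos(parent->angle), s = std::sin(parent->angle);
        child->x = parent->x + c * localX - s * localY;
        child->y = parent->y + s * localX + c * localY;
        child->angle = parent->angle + localAngle;
    }
};
```

In effect the constraint *is* the local-to-world transform, applied once per step, which is why it can stand in for a scene-graph edge.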
The primary reason I turned away from scene graphs is that I could not find an efficient way to keep nested objects (objects that inherit a transformation from their parent) in both the physics world and the rendering scene. The physics world needs objects in world space (or at least all in the same space), while the rendering scene needs objects in parent space. Tracking locations in both spaces might help (and may be inevitable), but it raises its own concerns, not least performance.
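Tracking both spaces at once can be as simple as storing the local (parent-space) position alongside a derived world-space copy, with one function for each direction of synchronization. A translation-only sketch, assuming hypothetical names (`Node`, `updateWorld`, `updateLocalFromWorld`):

```cpp
#include <cassert>

struct Node {
    Node* parent = nullptr;
    float localX = 0.f, localY = 0.f;   // parent space (what the renderer wants)
    float worldX = 0.f, worldY = 0.f;   // world space (what physics wants)

    // Propagate local -> world, assuming the parent's world position is current.
    void updateWorld() {
        if (parent) {
            worldX = parent->worldX + localX;
            worldY = parent->worldY + localY;
        } else {
            worldX = localX;
            worldY = localY;
        }
    }

    // After physics moves this node in world space, recover parent space.
    void updateLocalFromWorld() {
        if (parent) {
            localX = worldX - parent->worldX;
            localY = worldY - parent->worldY;
        } else {
            localX = worldX;
            localY = worldY;
        }
    }
};
```

The cost is one update pass per frame in whichever direction changed, which is worth measuring before assuming it is too slow.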
However, given use cases like the one described above, I think that being able to work in parent space will become very important, and forcing my physics engine to maintain these relationships through constraints is going to become problematic.
Given the use case and predicament described above, should I use a graph structure to pass transformations from one object to another? If so, how should my physics engine calculate new locations and perform intersection tests for objects in different spaces?
Answer
Have you actually tried a hierarchical graph and measured the performance?
Have you investigated simple physics engines to see how they handle the problem? Even a 2D engine that supports linkages between objects would help guide you in a proven direction.
I would not try to run your physics in multiple spaces; the complexity would be daunting. Run the physics in world space, and add functionality that uses the transforms of your hierarchy to move local-space objects out to world space and back. Your constraints, of necessity, must be in local space relative to the parent object.
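Moving points out to world space and back amounts to walking the parent chain, applying each transform on the way out and inverting each on the way in. A sketch of that idea, with 2D rotation plus translation; the `Transform` type and function names are my assumptions:

```cpp
#include <cassert>
#include <cmath>

struct Transform {
    const Transform* parent = nullptr;
    float x = 0.f, y = 0.f;   // translation relative to parent
    float angle = 0.f;        // rotation relative to parent, radians

    // Local -> world: apply this transform, then each ancestor's.
    void localToWorld(float lx, float ly, float& wx, float& wy) const {
        float c = std::cos(angle), s = std::sin(angle);
        float px = x + c * lx - s * ly;
        float py = y + s * lx + c * ly;
        if (parent) parent->localToWorld(px, py, wx, wy);
        else { wx = px; wy = py; }
    }

    // World -> local: undo the ancestors first, then this transform.
    void worldToLocal(float wx, float wy, float& lx, float& ly) const {
        float px, py;
        if (parent) parent->worldToLocal(wx, wy, px, py);
        else { px = wx; py = wy; }
        float c = std::cos(angle), s = std::sin(angle);
        float dx = px - x, dy = py - y;
        lx =  c * dx + s * dy;
        ly = -s * dx + c * dy;
    }
};
```

With helpers like these, the physics integrator and intersection tests only ever see world-space coordinates; parent space exists solely at the boundary where gameplay and rendering talk to the hierarchy.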
As a side note, even a flat array of objects is a "scene graph," just a very simple one. Don't be afraid to organize data in ways that solve your problems, and especially don't decide that the performance of some data organization is a problem before you have actually measured it.