Wednesday, March 28, 2018

unity - How do you animate/collide against a tessellated mesh?


Now that I have implemented tessellation for my mesh, I am trying to understand how I can leverage the generated primitives.


Example:


I have the following track mesh, generated procedurally. It consists of quads; in the first picture you can see one of them highlighted with the mouse.


[Image: the procedurally generated track mesh, with one quad highlighted]



In the following picture, the shader applies some tessellation; you can see that a single quad has now generated 16 quads.


[Image: the same track mesh with tessellation applied, each original quad subdivided into 16]


Question (two-fold):


Now that I have more control points for each of my quads, suppose I want to deform one of them, for instance, to look like the following:


[Image: the track mesh with one tessellated quad deformed]




  1. How is one supposed to apply such a transformation?



    • basically, what code-behind / shader code infrastructure is required to achieve such a thing.





  2. How can collisions be checked against that deformed mesh?



    • from what I've understood, a mesh collider would be impractical, since rebuilding it is a heavy operation that effectively rules out real-time updates





Answer




In general, you should avoid asking two separate questions in one post on StackExchange. I'll focus on your second question as it's the meatier of the two, and give a quick overview of the first:



How is one supposed to apply such a transformation?



There are a few places you can do this:




  • Domain Shader



    • interpolate vertex from the tessellator using hull information


    • modify vertex position <--- here

    • return tessellated vertex




  • Vertex Shader



    • process input vertex

    • modify position <--- or here

    • transform position to clip space


    • return transformed vertex




  • Geometry Shader



    • process input primitive...

    • write vertex position <--- or even here

    • emit vertex

    • next vertex...


    • output primitive




In any of these marked places, you can replace the position read from the input vertex/buffer, or computed by your interpolation function, with a new position of your choosing. Typically you'd calculate the new position as a function of the original vertex parameters (e.g. looking up a value from a displacement map texture and offsetting the vertex along its normal by the sampled distance), along with any other variables you like (e.g. time, intensity parameters...).


Check out tutorials on vertex displacement for code examples in your shader dialect of choice.
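To make that concrete, here is a minimal sketch in Unity's surface shader dialect. Assumptions on my part: the property names (_DispTex, _Displacement, _Tess) and the fixed tessellation factor are placeholders, not anything from your project. With the tessellate: directive, Unity generates the hull/domain stages for you and calls the disp vertex modifier once per generated vertex - i.e. at the "modify vertex position" spot marked above.

    Shader "Custom/TessellatedDisplacement"
    {
        Properties
        {
            _MainTex ("Albedo", 2D) = "white" {}
            _DispTex ("Displacement Map", 2D) = "black" {}
            _Displacement ("Displacement", Range(0, 1)) = 0.3
            _Tess ("Tessellation", Range(1, 32)) = 4
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }

            CGPROGRAM
            // disp runs per tessellated vertex (inside the generated domain stage);
            // tessFixed supplies a constant tessellation factor per patch.
            #pragma surface surf Lambert addshadow vertex:disp tessellate:tessFixed
            #pragma target 4.6

            sampler2D _MainTex;
            sampler2D _DispTex;
            float _Displacement;
            float _Tess;

            struct appdata
            {
                float4 vertex : POSITION;
                float4 tangent : TANGENT;
                float3 normal : NORMAL;
                float2 texcoord : TEXCOORD0;
            };

            float4 tessFixed()
            {
                return _Tess;
            }

            // Offset each generated vertex along its normal by the value
            // sampled from the displacement map.
            void disp(inout appdata v)
            {
                float d = tex2Dlod(_DispTex, float4(v.texcoord.xy, 0, 0)).r;
                v.vertex.xyz += v.normal * d * _Displacement;
            }

            struct Input
            {
                float2 uv_MainTex;
            };

            void surf(Input IN, inout SurfaceOutput o)
            {
                o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }

The same idea carries over to a hand-written hull/domain/vertex shader: wherever you have the vertex in hand, add your offset before the position is transformed to clip space.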



How can collisions be checked against that deformed mesh?



Generally speaking, you don't.



Mesh collisions are already expensive, to the extent that games will often use one mesh for visual display and a separate, lower-detail mesh (or collection of primitives) for collision - even when we're not applying dynamic tessellation to increase the visual detail further.


Wherever you can, I'd recommend making your collision geometry just detailed enough for "plausible" physics behaviour, and keeping it constant.


There are some cases where that's not practical, however. Depending on what you need, we'll come up with different solutions. Here are a few examples of the kinds of things we might do:




  • We need objects to follow a dynamically moving surface, like the roiling waves of a dynamically tessellated ocean.


    Here, we'll often attach invisible "floater" objects to the bodies that need buoyancy. These will typically have a simple shape we can work with analytically, like a sphere, and we'll place multiple floaters with different positions/sizes/densities to approximate the shape of a ship's hull.


    Each physics step, we can query the depth of each floater below the water's surface by evaluating the same wave height function used in our dynamic tessellation / vertex displacement shader. (So we're not reading back the triangles from the GPU, just performing the same math with the same inputs to get the same result, but for a few point samples instead of the entire water surface mesh.) Based on this depth, and possibly a surface normal, we can compute the buoyancy forces to apply to our parent body (see the C# sketch after this list).


    This method isn't rigorously accurate, but for a fluid surface with some "slosh" it's plausible enough, and it's very flexible across a variety of collider shapes/configurations and surface behaviours.





  • We need visually-correct collisions specifically for content we're looking at (e.g. correct snap-to-surface for content under the cursor, or on-screen character feet, or visible particles bouncing off a surface instead of passing through it)


    Here, we can render the tessellated mesh to a depth texture and query it to find the exact collision point along a ray from the camera. For content like particles, we can simulate their physics and collisions wholly on the GPU this way, saving the expensive synchronization & readback to the CPU for collision-handling there (a minimal depth-sampling sketch follows this list).


    The downside is that this works only for what the camera can see, and it does not correctly handle physics behind occluders. For off-screen objects we can fall back on a simpler collision model that might not capture all the detail, but this is usually forgivable since the errors happen out of sight.




  • We need precise matching between the rendered polygons and logical collisions everywhere (DANGER)!!


    In this case, there's not a lot we can do but duplicate the tessellation & displacement logic CPU-side, and use it to re-generate the collision mesh when it changes (sketched below). As you say, this is expensive, so we should avoid it wherever possible, and minimize how frequently / how many polygons' worth of collision data we update this way. Dividing the mesh into chunks may be one way to do that - localizing updates to just one chunk at a time.
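For the first case (buoyancy against a displaced water surface), a rough C# sketch of the floater idea follows. The wave parameters and the WaveHeight() function are placeholders standing in for whatever your displacement shader actually computes - the point is that the CPU evaluates the same formula, with the same inputs, for just a handful of points.

    using UnityEngine;

    public class Floater : MonoBehaviour
    {
        public Rigidbody body;            // the buoyant body this floater belongs to
        public float radius = 0.5f;       // sphere radius approximating the submerged volume
        public float buoyancyStrength = 10f;

        // Keep these in sync with the water material's properties.
        public float waveAmplitude = 0.5f;
        public float waveLength = 4f;
        public float waveSpeed = 1f;

        // Same math as the vertex displacement shader, evaluated for one point.
        float WaveHeight(float x, float z, float time)
        {
            float k = 2f * Mathf.PI / waveLength;
            return waveAmplitude * Mathf.Sin(k * (x + z) + waveSpeed * time);
        }

        void FixedUpdate()
        {
            Vector3 p = transform.position;
            float surfaceY = WaveHeight(p.x, p.z, Time.time);

            // How deep this floater sits below the displaced surface, clamped to its diameter.
            float depth = Mathf.Clamp(surfaceY - p.y + radius, 0f, 2f * radius);
            if (depth <= 0f) return;

            // Upward force proportional to submersion, applied at the floater's position
            // so several floaters produce a plausible torque on the parent body.
            body.AddForceAtPosition(Vector3.up * buoyancyStrength * depth, p, ForceMode.Force);
        }
    }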


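For the second case, the key ingredient is reading back the depth that the tessellated, displaced geometry actually rendered. A minimal sketch for Unity's built-in render pipeline is below; it assumes the camera has DepthTextureMode.Depth enabled so _CameraDepthTexture is populated, and you would run it via Graphics.Blit (or do the same sampling directly inside a particle/decal shader). The world-space hit point can then be reconstructed from the linear depth and the camera ray for that pixel.

    Shader "Hidden/SceneDepthProbe"
    {
        SubShader
        {
            Cull Off ZWrite Off ZTest Always
            Pass
            {
                CGPROGRAM
                #pragma vertex vert_img
                #pragma fragment frag
                #include "UnityCG.cginc"

                UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

                float4 frag(v2f_img i) : SV_Target
                {
                    // Raw device depth at this screen position...
                    float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
                    // ...converted to linear eye-space depth (view-space Z).
                    float eyeDepth = LinearEyeDepth(raw);
                    return float4(eyeDepth, 0, 0, 1);
                }
                ENDCG
            }
        }
    }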


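For the third case, a rough C# sketch of a per-chunk collider rebuild is below. DisplaceVertex() is a placeholder that must mirror the displacement your shader applies; the chunk only re-bakes its MeshCollider when something marks it dirty, so the cost is localized and paid only when that chunk actually changes.

    using UnityEngine;

    [RequireComponent(typeof(MeshFilter), typeof(MeshCollider))]
    public class ChunkColliderUpdater : MonoBehaviour
    {
        MeshFilter meshFilter;
        MeshCollider meshCollider;
        Mesh collisionMesh;
        Vector3[] baseVertices;   // undisplaced vertices of this chunk
        Vector3[] workVertices;   // scratch buffer for displaced positions
        bool dirty;

        void Awake()
        {
            meshFilter = GetComponent<MeshFilter>();
            meshCollider = GetComponent<MeshCollider>();

            // Work on a copy so the render mesh is left untouched.
            collisionMesh = Instantiate(meshFilter.sharedMesh);
            baseVertices = collisionMesh.vertices;
            workVertices = new Vector3[baseVertices.Length];
        }

        public void MarkDirty() { dirty = true; }

        void LateUpdate()
        {
            if (!dirty) return;
            dirty = false;

            for (int i = 0; i < baseVertices.Length; i++)
                workVertices[i] = DisplaceVertex(baseVertices[i]);

            collisionMesh.vertices = workVertices;
            collisionMesh.RecalculateBounds();

            // Re-assigning the sharedMesh forces the physics engine to re-bake
            // this chunk's collision data.
            meshCollider.sharedMesh = null;
            meshCollider.sharedMesh = collisionMesh;
        }

        // Placeholder: must match the displacement applied in the shader.
        Vector3 DisplaceVertex(Vector3 v)
        {
            return v + Vector3.up * Mathf.Sin(v.x + v.z);
        }
    }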
