I have been looking at shaders found at shadertoy.com, and most of the cool ones have noise and raymarching in common. I do not understand the source code at all, but I really want to. How do these shaders work, and how does the raymarch algorithm work? I've searched all over and can't find anything on the topic.
Thanks
Answer
It's probably easiest to understand by contrast with raytracing.
To render a primitive with raytracing, you need a function that, given the primitive and input ray, tells you exactly where that ray hits the primitive. Then you can test the ray against all relevant primitives, and pick the closest intersection. CPUs are good at this.
With raymarching, you don't have such a simple ray intersection function. Given a point on the ray, you can estimate how close the point is to the surface, but you don't know exactly how far you need to extend that ray to hit the surface.
So, you "march" one step at a time:
1. Start at the "beginning" of the ray - the near plane for scene rendering, or the intersection with the bounding volume if it's just one object in the scene. (P0 in the diagram below)
2. Evaluate your distance function to get an estimate for how close you are to the surface. (The largest circle in the diagram)
3. Move forward along the ray according to your estimate. The move should be conservatively short, so you're confident you won't tunnel through the surface anywhere.
4. Now you have a new point (P1 below) - get a new estimate and repeat.
5. Continue getting estimates and stepping forward until you get within a threshold distance of the surface, or you hit your maximum step count. (P4 below)
Now you have the depth of the surface. From nearby samples you can infer things like normals and ambient occlusion, and use this data to light & colour the pixel.
Example diagram from GPU Gems 2, chapter 8
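To make the loop concrete, here's a minimal sketch in Shadertoy-style GLSL. Everything in it is an assumption for illustration - the scene function `sceneSDF` (a single sphere standing in for a real scene), the constants, and the helper names aren't from any particular shader:

```glsl
// Hypothetical scene: a signed distance function (SDF) for a unit sphere
// at the origin. Returns the distance from p to the nearest surface
// (negative inside the surface).
float sceneSDF(vec3 p) {
    return length(p) - 1.0;
}

// March along the ray origin + t * dir, using the SDF value as a safe step.
// Returns the distance t to the surface, or -1.0 on a miss.
float raymarch(vec3 origin, vec3 dir) {
    float t = 0.0;                    // start at the "beginning" of the ray (P0)
    for (int i = 0; i < 100; i++) {   // maximum step count
        vec3 p = origin + t * dir;    // current point on the ray (P1, P2, ...)
        float d = sceneSDF(p);        // estimate of distance to the surface
        if (d < 0.001) return t;      // within threshold: treat as a hit (P4)
        t += d;                       // step by the estimate - short enough not to tunnel
        if (t > 100.0) break;         // ray has left the scene
    }
    return -1.0;                      // miss
}

// Once you have a hit, estimate the surface normal from nearby SDF samples
// (central differences along each axis).
vec3 estimateNormal(vec3 p) {
    const float e = 0.001;
    return normalize(vec3(
        sceneSDF(p + vec3(e, 0.0, 0.0)) - sceneSDF(p - vec3(e, 0.0, 0.0)),
        sceneSDF(p + vec3(0.0, e, 0.0)) - sceneSDF(p - vec3(0.0, e, 0.0)),
        sceneSDF(p + vec3(0.0, 0.0, e)) - sceneSDF(p - vec3(0.0, 0.0, e))));
}
```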
Because each ray is independent and uses (generally) only local information at each step, the technique is ripe for parallelizing on GPUs. Often, the entire scene is drawn as just two triangles covering the screen. After rasterizing these, each pixel passed to the fragment shader represents a single ray. The fragment shader marches that ray until it reaches the surface, then returns the result (often just the depth value, for texturing & shading in a separate full-screen pass).
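In Shadertoy terms, that per-pixel pass looks roughly like this. The sketch reuses the hypothetical `raymarch` and `estimateNormal` helpers above; the camera position and light direction are arbitrary choices for illustration:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // Map this pixel to a ray: origin at a fixed camera, direction through
    // the pixel on a virtual image plane.
    vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.y;
    vec3 origin = vec3(0.0, 0.0, -3.0);
    vec3 dir = normalize(vec3(uv, 2.0));

    float t = raymarch(origin, dir);   // one independent ray per fragment
    if (t < 0.0) {
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);   // miss: background colour
        return;
    }

    // Hit: recover the point and normal, then do simple diffuse lighting.
    vec3 p = origin + t * dir;
    vec3 n = estimateNormal(p);
    float diffuse = max(dot(n, normalize(vec3(1.0, 1.0, -1.0))), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}
```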
The exact steps depend a lot on the particular effect you're trying to achieve. Raymarching techniques are used with...
- heightfields to simulate surface displacement on traditional rasterized geometry (parallax occlusion mapping)
- scene depth buffers for things like screenspace reflections
- volume textures for visualizing 3D-sampled datasets (often scientific/medical)
- implicit functions for rendering things like fractals
- procedural distance fields as in Iñigo Quilez's work (see the sketch below).
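On that last point, procedural distance fields are usually built by combining primitive SDFs; taking the min of two distance functions unions their shapes, and the marching loop needs no changes. Here's a small sketch (the sphere and box formulas follow Quilez's published SDF articles; the scene layout is made up) that could stand in for the `sceneSDF` above:

```glsl
float sdSphere(vec3 p, float r) {
    return length(p) - r;
}

float sdBox(vec3 p, vec3 halfExtents) {
    vec3 q = abs(p) - halfExtents;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// Union: the distance to the nearest of the two shapes.
float sceneSDF(vec3 p) {
    float sphere = sdSphere(p - vec3(-1.2, 0.0, 0.0), 1.0);
    float box    = sdBox(p - vec3(1.2, 0.0, 0.0), vec3(0.8));
    return min(sphere, box);
}
```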
Raymarching is also used with blending at each step (often using fixed steps instead of estimating a distance each time) for rendering volumetric translucency, as in this example from Wikipedia.
This has become a popular way to render detailed clouds in realtime.
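A fixed-step volumetric march accumulates colour and opacity along the ray instead of stopping at a surface. Here's a hedged sketch; `densityAt` is a made-up stand-in for a real volume texture or noise lookup:

```glsl
// Hypothetical density field - a cloud renderer would sample a volume
// texture or procedural noise here instead.
float densityAt(vec3 p) {
    return max(0.0, 1.0 - length(p));   // a soft spherical blob
}

vec4 marchVolume(vec3 origin, vec3 dir) {
    const float STEP_SIZE = 0.1;        // fixed step, no distance estimate
    vec3 colour = vec3(0.0);
    float transmittance = 1.0;          // fraction of light still visible

    for (int i = 0; i < 64; i++) {
        vec3 p = origin + float(i) * STEP_SIZE * dir;
        float absorbed = densityAt(p) * STEP_SIZE;
        // Front-to-back blending: nearer samples occlude farther ones.
        colour += transmittance * absorbed * vec3(1.0);   // white medium
        transmittance *= 1.0 - absorbed;
        if (transmittance < 0.01) break;  // effectively opaque - stop early
    }
    return vec4(colour, 1.0 - transmittance);
}
```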
Even Interior Mapping, a way of simulating interior room detail behind building windows, could be considered a form of raymarching, where the ray is stepped from the point it enters the window to the closest wall, floor/ceiling, or furniture plane.
If there's a specific type of raymarching effect you're interested in, you can probably get more detailed answers by asking a new question with specific examples. As a family, the technique is too diverse to cover everything in one short answer. ;) I hope this gives you a framework for understanding what's happening under the hood in these shaders.