I'm fairly new to Unity and am having a bit of trouble understanding the rendering pipeline.
I need to render the scene to four "windows," so that the viewport shows a 2x2 grid of the scene, each with a slightly different projection matrix.
My first thought was to have a component that would create four RenderTextures and then, in OnPreRender(), iterate through them, setting the main camera's targetTexture property to each texture in turn and rendering the camera's view to that texture (pseudocode):
public class CompositingEffect : MonoBehaviour
{
    private Material m_material;
    private List&lt;RenderTexture&gt; m_textures;

    void OnEnable()
    {
        if (null == m_textures)
        {
            m_textures = new List&lt;RenderTexture&gt;();
            for (var i = 0; i &lt; 4; ++i)
            {
                m_textures.Add(new RenderTexture(512, 512, 24));
            }
        }
        if (null == m_material)
        {
            m_material = new Material(Shader.Find("Custom/CompositingEffectShader"));
        }
    }

    void OnPreRender()
    {
        var cam = Camera.current;
        foreach (var texture in m_textures)
        {
            // adjust cam's projection matrix here...
            cam.targetTexture = texture;
            cam.Render();
        }
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Debug.Assert(
            null != m_material
            &amp;&amp; null != m_textures
            &amp;&amp; m_textures.Count == 4, "Texture list not initialized"
        );
        for (var i = 0; i &lt; m_textures.Count; ++i)
        {
            var x = i % 2;
            var y = i / 2;
            var uniformName = string.Format("Texture{0}{1}", x, y);
            m_material.SetTexture(uniformName, m_textures[i]);
        }
        Graphics.Blit(source, destination, m_material);
    }
}
The image effect material m_material would then draw the 2x2 grid to the frame buffer in a single pass.
Is this a good approach? I suspect (perhaps incorrectly) that using an image effect for the last step would actually cause an additional scene render that never gets used.
The reason for this is that the signature of OnRenderImage()

void OnRenderImage(RenderTexture source, RenderTexture destination)

implies that source is already a valid image when this is called. That image has to come from somewhere, and it seems that it would already contain the full-resolution rendered scene. Since I am already rendering the scene in the previous step, I'd like to avoid this additional render.
Am I interpreting the pipeline correctly? If so, how can I address this?
Thanks!
Edit
Can I disable the main camera, then do everything manually in my component as mentioned above? If the camera is disabled, to what should I attach my component? Perhaps the camera's parent gameObject?
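
For what it's worth, a minimal untested sketch of that disabled-camera idea might look like the following. The component and field names here (ManualQuadRenderer, sourceCamera) are invented for illustration; the component sits on the camera's parent GameObject and drives the four renders itself, so no on-screen render ever happens that isn't used:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ManualQuadRenderer : MonoBehaviour
{
    public Camera sourceCamera; // the (disabled) camera used for rendering
    private List<RenderTexture> m_textures;

    void OnEnable()
    {
        sourceCamera.enabled = false; // we call Render() ourselves
        m_textures = new List<RenderTexture>();
        for (var i = 0; i < 4; ++i)
        {
            m_textures.Add(new RenderTexture(512, 512, 24));
        }
    }

    void LateUpdate()
    {
        foreach (var texture in m_textures)
        {
            // adjust sourceCamera.projectionMatrix per quadrant here...
            sourceCamera.targetTexture = texture;
            sourceCamera.Render();
        }
        sourceCamera.targetTexture = null;
    }
}
```

A second, enabled camera (or a blit from a script) would still be needed to composite the four textures to the screen.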
Answer
I do it this way:
- Create several cameras attached to the same parent GameObject. Then, in the editor, create a RenderTexture for each camera and assign it to that camera's Target Texture property;
- When we want to move the Main Camera, we move the parent object instead;
- The Main Camera stays near the other cameras, but its Target Texture is null;
- Next, create a shader that takes all the textures as uniforms and mixes them;
- Create a new material, assign our shader to it, then put the RenderTextures into the shader's uniforms;
Create a script on the Main Camera and put code like this in it:
public Material effectMaterial;
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
Graphics.Blit(null, destination, effectMaterial);
}
Attach our material to the effectMaterial property of the script component.
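
If you prefer to wire the uniforms from code instead of the Inspector, a hedged sketch (the field names and the Texture{x}{y} uniform naming are assumptions carried over from the question's snippet, and the uniform names must match whatever your shader declares):

```csharp
using UnityEngine;

public class CompositingSetup : MonoBehaviour
{
    public Material effectMaterial;
    public RenderTexture[] cameraTextures; // the four per-camera targets

    void Start()
    {
        for (var i = 0; i < cameraTextures.Length; ++i)
        {
            var x = i % 2;
            var y = i / 2;
            // bind each texture to its grid-position uniform, e.g. "Texture01"
            effectMaterial.SetTexture(string.Format("Texture{0}{1}", x, y),
                                      cameraTextures[i]);
        }
    }
}
```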
Hope that helps!