In my vertex shader I have calculated the world space position:
o.worldSpacePosition = mul(unity_ObjectToWorld, vertIn.vertex);
How do I convert that world-space position into uv-space coordinates in my fragment shader?
float4 frag(VertexShaderOutput i) : COLOR // SV_Target0
{
    // float2 uv = i.worldSpacePosition ???
}
I have tried
float4 mvp = UNITY_MATRIX_MVP[3];
float2 uv = mvp.xy / mvp.w;
but that does not appear to work, as the uv is always (0, 0).
EDIT:
I am drawing a mesh on the screen in world-space coordinates, and it needs to look up into a screen-space texture.
The way I am currently solving this is by calculating the uvs for each vertex of the mesh using the following calculation:
camera.WorldToViewportPoint(vertices[i]);
Unity uses normalised coordinates for the viewport [0,1] rather than [-1,1].
This works fine but I wasn't sure of the behaviour if part of the polygon is off the screen. I presume I should clamp the uv?
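For context, here's roughly what my current approach looks like (the component and field names are just placeholders, and it assumes the mesh vertices are already in world space as described above):
// Rough sketch of my current workaround: bake screen-space uvs into the mesh
// whenever the camera or mesh moves. "ScreenSpaceUvBaker" and "targetCamera"
// are placeholder names.
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class ScreenSpaceUvBaker : MonoBehaviour
{
    public Camera targetCamera;

    void LateUpdate()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices; // already in world space in my case
        Vector2[] uvs = new Vector2[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            // WorldToViewportPoint returns coordinates normalised to [0,1],
            // so they can be used directly as uvs into a screen-space texture.
            Vector3 viewportPos = targetCamera.WorldToViewportPoint(vertices[i]);
            uvs[i] = new Vector2(viewportPos.x, viewportPos.y);
        }

        mesh.uv = uvs;
    }
}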
Screen space might not be the correct term for the uv coordinates. Perhaps texture space is a more appropriate term?
Answer
Do I understand correctly that you want an effect somewhat like this?
(If I'm way off, you may need to edit your question to include more description of what you're trying to do and why)
Here I have a monochrome blue background image, and a cube with a material that displays a colour version of the same image, using screenspace UV coordinates. (Textures courtesy of Kenney)
Because both textures are looked up in screenspace, they line up no matter how the cube is positioned or rotated in the world. All that matters is which screen pixels it covers.
To do this in a vertex-fragment shader, you'd add a screen position property to your VertexShaderOutput:
struct v2f
{
    float4 vertex : SV_POSITION;
    float3 screenPos : TEXCOORD0;
};
then in the vertex shader, we'll use it to store our projected vertex coordinates:
v2f vert (appdata v)
{
    v2f o;
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
    o.screenPos = o.vertex.xyw;
    // Correct flip when rendering with a flipped projection matrix.
    // (I've observed this differing between the Unity scene & game views)
    o.screenPos.y *= _ProjectionParams.x;
    return o;
}
Note here I've discarded the z component and stored the w component of the projected vertex in its place. This will probably just get padded out to a full float4 width anyway, but I wanted to show that we only need the three components, in case you have an extra float you need to pack in for whatever reason.
It may look a bit weird to be outputting the same projected vertex data twice, but the SV_POSITION semantic means the vertex float4 gets treated a bit specially and doesn't have the same values by the time the interpolator's done its work and it reaches the fragment shader. There might be more efficient ways to work with this - if anyone knows them please post in the comments below!
Lastly, in the fragment shader, we construct our screenspace position like this:
fixed4 frag (v2f i) : SV_Target
{
    // Note: screenPos.z here actually holds the clip-space w (see the note above).
    float2 screenUV = (i.screenPos.xy / i.screenPos.z) * 0.5f + 0.5f;
    // ...rest of your shader goes here.
}
The division by z (really vertex.w) here is called a perspective divide. It needs to be done per-fragment if you're using a perspective camera where polygons can be tilted relative to the camera, to prevent artifacts where surfaces appear to crease & slide along their triangle edges.
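If you'd rather not hand-write that remap, UnityCG.cginc also provides a ComputeScreenPos helper, which folds the 0.5 scale-and-offset and the _ProjectionParams.x flip into a float4 for you. A rough sketch of the same shader using it (with the stock appdata_base input) would look something like this:
// Rough alternative using Unity's ComputeScreenPos helper from UnityCG.cginc.
// It bakes the *0.5 + 0.5 remap and the projection flip into screenPos,
// so the fragment shader only needs the divide by w.
#include "UnityCG.cginc"

struct v2f
{
    float4 vertex : SV_POSITION;
    float4 screenPos : TEXCOORD0; // float4 here, straight from ComputeScreenPos
};

v2f vert (appdata_base v)
{
    v2f o;
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
    o.screenPos = ComputeScreenPos(o.vertex);
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    float2 screenUV = i.screenPos.xy / i.screenPos.w;
    return fixed4(screenUV, 0, 1); // visualises the screen uv; swap in your texture lookup
}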
If you're using a Surface Shader, you'd modify your Input struct to include the screenPos property:
struct Input {
    float2 uv_MainTex;
    float4 screenPos;
};
Then in the shader you'd calculate your screenspace UV using:
float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
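From there it's an ordinary texture sample. A minimal surf function using it might look something like this (_ScreenTex is just a placeholder name for whatever screen-space texture you're sampling, and it assumes #pragma surface surf Standard):
// Minimal sketch of using the screen-space uv inside a surface shader.
// _ScreenTex is a placeholder property for the screen-space texture.
sampler2D _ScreenTex;

void surf (Input IN, inout SurfaceOutputStandard o)
{
    float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
    fixed4 c = tex2D(_ScreenTex, screenUV);
    o.Albedo = c.rgb;
    o.Alpha = c.a;
}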