Monday, July 10, 2017

unity - Any way to combine instantiated sprite renderers into one texture so I can apply into a plane at runtime?


So I am procedurally generating a tile set in Unity by instantiating different tiles that only have Transform and SpriteRenderer components. I managed to pack the sprites using the Unity sprite packer by assigning them a packing tag, which reduces the draw calls drastically.



But I'm still having to deal with a great number of game objects in the scene when the camera zooms out, which increases the batching overhead and gives me problems when I try to apply lighting to them, for example.


So I am trying to find a way to combine all these sprite renderers into a single texture, so I can then apply it to a plane and only have to deal with one plane GameObject for the map. But I can't figure out how to do this. I can't really use the mesh-combining approach, since these are sprites, not meshes. I tried taking a screenshot at runtime, but it comes out a bit blurry. Any idea how I can accomplish this? Thank you for your help.


This is an example of the procedural generation of the tile set:



Answer



Here's one way to handle it:



  • Make a texture that records which tile to draw at each cell of the grid, 1 pixel = 1 tile (so if you make a 4096 square texture, you can have over 16 million tiles in your map)

    • Ensure it's stored uncompressed, with point filtering and no mipmaps (a C# sketch of this setup follows the list)




  • Write a custom shader that reads this texture for each tile, then looks up the appropriate tile from your tileset texture.

  • Render your map as a single quad.
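
Here is a rough C# sketch of that setup, assuming a material that uses the shader shown further down. The class and member names (TileIndexMapBuilder, mapMaterial, tiles) are just illustrative, not part of the original answer:

    using UnityEngine;

    public class TileIndexMapBuilder : MonoBehaviour
    {
        public Material mapMaterial;      // material using the tilemap shader below
        public int mapWidth = 16;
        public int mapHeight = 16;
        public int tilesetColumns = 20;   // tileset size in tiles
        public int tilesetRows = 13;

        // tiles[x, y] holds the (column, row) of each cell's tile in the tileset.
        public Vector2Int[,] tiles;

        public Texture2D BuildIndexTexture()
        {
            // Uncompressed RGBA32, no mipmaps, point filtering: one pixel per tile.
            // linear = true so the index values aren't gamma-converted on read.
            var tex = new Texture2D(mapWidth, mapHeight, TextureFormat.RGBA32, false, true);
            tex.filterMode = FilterMode.Point;
            tex.wrapMode = TextureWrapMode.Clamp;

            for (int y = 0; y < mapHeight; y++)
            {
                for (int x = 0; x < mapWidth; x++)
                {
                    // R/G = base tile column/row, B/A = overlay tile column/row.
                    // The overlay is left at (0, 0) here for simplicity.
                    Vector2Int t = tiles[x, y];
                    tex.SetPixel(x, y, new Color32((byte)t.x, (byte)t.y, 0, 0));
                }
            }
            tex.Apply();

            // Feed the shader: the index map plus the map/tileset sizes in tiles.
            mapMaterial.SetTexture("_MainTex", tex);
            mapMaterial.SetVector("_Size", new Vector4(mapWidth, mapHeight, tilesetColumns, tilesetRows));
            return tex;
        }
    }

The resulting material can then be assigned to a single quad (e.g. GameObject.CreatePrimitive(PrimitiveType.Quad)) that stands in for the whole map.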


Example showing how a single quad can render a complex tiled map.

For this example I used a free RPG tileset by Kenney and a 16x16 map for simplicity.


Here's the shader I used, built on Unity's standard surface shader so I could apply lighting to it:


Shader "Custom/TilemapShader" {
Properties {
_MainTex ("Map", 2D) = "white" {}
_Tileset("Tileset", 2D) = "white" {}

_Size("Map & Tileset Size", Vector) = (16, 16, 20, 13)
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 200

CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows


// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0

sampler2D _MainTex;
sampler2D _Tileset;

struct Input {
float2 uv_MainTex;
};


float4 _Size;
// _Size.xy is the size of the map in tiles.
// _Size.zw is the size of the tileset in tiles.

void surf (Input IN, inout SurfaceOutputStandard o) {

// Scale up the uvs into tile space.
float2 mapLocation = IN.uv_MainTex * _Size.xy;

// Get this pixel's position within the tile we're drawing.

float2 inTileUV = frac(mapLocation);

// Clamp the texel we read from.
// ("Point" filtering does this too, but its math is slightly
// different, which can show up as a shimmer between tiles)
mapLocation = (mapLocation - inTileUV) / _Size.xy;

// Slightly inset tiles so they don't bleed at the edges.
inTileUV = inTileUV * 126.0f / 128.0f + 1.0f / 128.0f;


// Calculate texture sampling gradients using original UV.
// Otherwise jumps at tile borders make the hardware apply filtering
// that lets adjactent tiles bleed in at the edges.
float4 grad = float4(ddx(IN.uv_MainTex), ddy(IN.uv_MainTex));

// Read our map texture to determine what tiles go here.
float4 tileIndex = tex2D(_MainTex, mapLocation);

// Generate UV offsets into our tileset texture.
tileIndex = (tileIndex * 255.0f + inTileUV.xyxy )/_Size.zwzw ;


// Sample two tiles from our tileset: one base tile and one overlay.
// (Hey, we have room for two indices, might as well use it!)
fixed4 base = tex2Dgrad(_Tileset, tileIndex.xy, grad.xy, grad.zw);
fixed4 over = tex2Dgrad(_Tileset, tileIndex.zw, grad.xy, grad.zw);

// Combine overlaid tile with base using standard alpha blending.
fixed4 c = lerp(base, over, over.a);

// Put all this into terms the Unity lighting model understands.

o.Albedo = c.rgb;
o.Metallic = 0.0f;
o.Smoothness = 0.0f;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}


Note that a bunch of the fixups I do there are so that texture filtering works correctly in tileset space. If you're using pixel art tiles with nearest filtering and no mipmaps, you can gut a bunch of that math.


Also, those divisions will be more efficient if you replace them with constants for your map & tileset sizes - I've exposed them as variables here to be more flexible during development & iteration.


If you need to scroll the map over a wider area than you can/want to store in a texture all at once, you can try one of two approaches:




  • use one quad & index map for each "chunk" of the map, and recycle/reposition chunks of the map as they scroll off-screen. This requires multiple draw calls, but you can clamp the worst case.




  • use one quad for everything shown on-screen, and incrementally update the tile index map as the camera moves around. You can use texture wrapping to pan the map across the quad, so you only have to update a row/column at a time instead of regenerating the whole map (see the sketch after this list).
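
Here is a rough sketch of that second approach, assuming the index texture was created with wrapMode set to TextureWrapMode.Repeat and that panning is done through the material's _MainTex offset (a surface shader's uv_MainTex already applies that offset). Again, the names are illustrative:

    using UnityEngine;

    public class ScrollingTileMap : MonoBehaviour
    {
        public Material mapMaterial;
        public Texture2D indexTexture;   // the per-cell index map, wrap mode = Repeat
        int scrollColumns;               // how far we've panned, in whole tiles

        // Call when the camera has moved one tile to the right.
        // newColumn must have indexTexture.height entries.
        public void ScrollRightOneColumn(Color32[] newColumn)
        {
            // The column that just scrolled off the left edge is reused for the
            // new content appearing on the right, thanks to texture wrapping.
            int column = scrollColumns % indexTexture.width;
            indexTexture.SetPixels32(column, 0, 1, indexTexture.height, newColumn);
            indexTexture.Apply();

            scrollColumns++;

            // Pan the quad's sampling window by one tile.
            float offsetX = (float)scrollColumns / indexTexture.width;
            mapMaterial.SetTextureOffset("_MainTex", new Vector2(offsetX, 0f));
        }
    }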





If you go this route, you'll probably also want to write a script that generates the tile index map from a source file (like something you created in Tiled), or a custom editor script that helps you modify the tile map visually. Picking precise colour values pixel by pixel to populate the index map was by far the slowest part of putting this example together. ;)
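
For instance, if you export a layer from Tiled as CSV, a loader along these lines could convert it into the per-cell indices consumed by the index-texture sketch earlier. This is a hypothetical helper, and it assumes Tiled's usual conventions (1-based global tile IDs, 0 = empty, rows listed top to bottom, tiles laid out left-to-right in a single tileset):

    using UnityEngine;

    public static class TiledCsvLoader
    {
        public static Vector2Int[,] Load(string csvText, int tilesetColumns)
        {
            string[] rows = csvText.Trim().Split('\n');
            int height = rows.Length;
            int width = rows[0].Split(new[] { ',' }, System.StringSplitOptions.RemoveEmptyEntries).Length;
            var tiles = new Vector2Int[width, height];

            for (int row = 0; row < height; row++)
            {
                string[] cells = rows[row].Split(new[] { ',' }, System.StringSplitOptions.RemoveEmptyEntries);
                for (int x = 0; x < width; x++)
                {
                    int gid = int.Parse(cells[x].Trim());
                    int id = Mathf.Max(gid - 1, 0);    // Tiled IDs are 1-based; 0 means empty
                    int y = height - 1 - row;          // Tiled rows are top-down; texture rows are bottom-up
                    tiles[x, y] = new Vector2Int(id % tilesetColumns, id / tilesetColumns);
                }
            }
            return tiles;
        }
    }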

