I have a large 2D tilemap, ranging anywhere from hundreds to thousands of tiles in size. The map is made up of 8x8 chunks. I am currently using A*, but it isn't efficient enough because I may have as many as a hundred objects pathfinding simultaneously. The path doesn't have to be the absolute shortest, but a certain degree of shortness is expected.
I've looked here and it seems that HPA* might work for me, since my map is already broken into smaller chunks. However, because the map is highly dynamic (let's assume that at least two chunks are modified per second), there may be better alternatives.
Also, D*/D* Lite (mentioned in the link) might work, but how can it be optimized for a chunk-based grid?
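One chunk-level idea I've been considering (a rough sketch, not tied to D* Lite specifically; all names here are illustrative): record which chunks each agent's cached path crosses, so that when a chunk is modified, only the agents whose paths touch that chunk need to replan, instead of replanning all hundred agents.

```python
def chunk_of(tile, chunk_size=8):
    """Map an (x, y) tile to its chunk coordinate (8x8 chunks, as in my map)."""
    return (tile[0] // chunk_size, tile[1] // chunk_size)

class PathCache:
    """Sketch: index cached paths by the chunks they cross, so a chunk
    edit invalidates only the affected agents."""

    def __init__(self):
        self.paths = {}     # agent_id -> list of (x, y) tiles
        self.by_chunk = {}  # chunk coord -> set of agent ids crossing it

    def store(self, agent_id, path):
        self.paths[agent_id] = path
        for tile in path:
            self.by_chunk.setdefault(chunk_of(tile), set()).add(agent_id)

    def invalidate_chunk(self, chunk):
        """Called when a chunk is modified; returns the agents to replan."""
        dirty = self.by_chunk.pop(chunk, set())
        for agent_id in dirty:
            # Drop the stale path and unregister it from other chunks.
            for tile in self.paths.pop(agent_id, []):
                self.by_chunk.get(chunk_of(tile), set()).discard(agent_id)
        return dirty
```

With two chunk edits per second, this would keep most agents' paths untouched each tick, as long as paths tend to cross few chunks.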
My question isn't particularly about HPA* or D*, but rather about finding an optimal algorithm for a large and dynamic, chunk-based, tilemap.
EDIT: The chunks contain procedurally generated content. Most of the time this means a chunk is empty except for about 10-20% of its tiles, but sometimes this number can go up to 90% (in the case of generated "blobs"). The chunks will contain near-arbitrary data, and two chunks will rarely be the same.
Furthermore, each agent has an independent path; however, because the map is dynamic, it may contain man-made corridors and long passageways that many agents will often end up sharing.
Most goals will be arbitrarily located, but there will be some predefined goals (locations) as well. For those predefined locations, I am considering a vector field (sort of like a heat map) that agents can follow to approach the goal. However, doing this for every goal would be terribly inefficient, especially since each generated field would be used only once (or, in better cases, maybe a few times).
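To make the vector-field idea concrete, here is a minimal sketch of what I mean (names are illustrative): a single BFS outward from the goal fills the whole field, and every agent then follows it for free by looking up the next tile from its current position.

```python
from collections import deque

def build_flow_field(grid, goal):
    """BFS out from `goal`, recording for each reachable tile the
    neighbouring tile that is one step closer to the goal.
    `grid` is a 2D list of booleans (True = walkable);
    `goal` is an (x, y) tuple."""
    w, h = len(grid[0]), len(grid)
    field = {goal: goal}        # tile -> next tile toward the goal
    frontier = deque([goal])
    while frontier:
        x, y = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] \
                    and (nx, ny) not in field:
                field[(nx, ny)] = (x, y)  # step back toward the goal
                frontier.append((nx, ny))
    return field
```

Each agent's per-tick work is then just `pos = field[pos]`, which is why this scales so well when many agents share one goal; the cost is the one-time BFS over the whole map, which is exactly what makes it wasteful for one-off goals.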