To create this algorithm, I first looked at how NavMeshes work.
We can’t add every possible point on the mesh, since those are effectively infinite, so instead we collapse each polygon into a single node and connect nodes whose polygons share an edge. Once the pathfinder has processed this graph, we can interpolate the details.
Simplifying this to 1D is straightforward: instead of a NavMesh of all connected faces on a surface, we model each surface as a polyline. And instead of connecting the nodes of bordering faces, we connect two surfaces if a jump could reasonably be made from one to the other.
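As a rough sketch of that data model (names and shapes are my own illustration, not the project's actual types), each surface collapses to one node holding its polyline endpoints, and edges record jump reachability:

```python
from dataclasses import dataclass, field

# Hypothetical 1D analogue of a NavMesh: each walkable surface collapses to a
# single node, and edges represent jumps rather than shared polygon edges.
@dataclass
class Surface:
    left: tuple[float, float]   # leftmost point of the polyline (x, y)
    right: tuple[float, float]  # rightmost point of the polyline (x, y)
    neighbors: list[int] = field(default_factory=list)  # jump-reachable surfaces

def connect(surfaces, a, b):
    """Record that a jump can reasonably be made between surfaces a and b."""
    surfaces[a].neighbors.append(b)
    surfaces[b].neighbors.append(a)

# A ground platform and a ledge up and to the right, joined by one jump.
surfaces = [Surface((0, 0), (4, 0)), Surface((6, 1), (9, 1))]
connect(surfaces, 0, 1)
```

Any graph search (A*, Dijkstra) can then run over these nodes exactly as it would over NavMesh faces.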
Due to the surface detection method I chose, we must first determine the world bounds. This is straightforward because every Collider exposes a bounds property, so we can loop over all Colliders in the scene and Encapsulate their bounds into one.
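The encapsulation loop amounts to growing one axis-aligned box until it covers every collider. A minimal stand-in for Unity's `Bounds.Encapsulate` (the names below are assumptions, not Unity API):

```python
# Grow one AABB, stored as ((min_x, min_y), (max_x, max_y)), to cover another.
def encapsulate(world, bounds):
    (wmin, wmax), (bmin, bmax) = world, bounds
    return (tuple(min(a, b) for a, b in zip(wmin, bmin)),
            tuple(max(a, b) for a, b in zip(wmax, bmax)))

def world_bounds(all_bounds):
    """Fold every collider's bounds into a single world AABB."""
    world = all_bounds[0]
    for b in all_bounds[1:]:
        world = encapsulate(world, b)
    return world

# Two colliders: one at the origin, one up and to the right.
print(world_bounds([((0, 0), (2, 1)), ((5, 3), (8, 6))]))  # ((0, 0), (8, 6))
```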
Once we know the world bounds, we can begin raycasting to probe for surfaces. When we hit something that isn’t part of a known surface yet, and that is of a walkable angle, we do a flood-fill using more raycasts, offsetting each by the surface’s tangent. Since raycasting is an expensive operation, the first set of raycasts should be much coarser, but no wider than the smallest walkable surface. The second pass doesn’t have to be very fine either, since it’s only an approximation that tells the pathfinder where it can go: if the pathfinder does actually evaluate a path there, it will do so using physics simulation, guaranteeing validity when the path is run at runtime.
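The flood-fill itself is an ordinary frontier expansion, just with raycasts as the membership test. A toy sketch, assuming a hypothetical `raycast_down(x)` standing in for the physics query and stepping only along ±x (the 1D tangent):

```python
# Toy flat surface spanning x = 0..9; a real version would query the physics
# engine instead of this lookup table.
HEIGHTS = {x: 0.0 for x in range(0, 10)}

def raycast_down(x):
    """Stand-in for a physics raycast: (hit_y, walkable) or None on a miss."""
    return (HEIGHTS[x], True) if x in HEIGHTS else None

def flood_fill_surface(start_x, step=1):
    """Expand from one confirmed hit until probes stop landing on ground."""
    xs = {start_x}
    frontier = [start_x]
    while frontier:
        x = frontier.pop()
        for nx in (x - step, x + step):
            if nx not in xs and raycast_down(nx) is not None:
                xs.add(nx)
                frontier.append(nx)
    return min(xs), max(xs)

print(flood_fill_surface(4))  # (0, 9)
```

Widening `step` is exactly the coarseness trade-off described above: fewer raycasts, at the risk of skipping over surfaces narrower than one step.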
As surfaces are modelled as single nodes in a graph, the connections between them come from jumps. We can cache the character’s jump arc, but we must raycast along it to confirm each jump can actually be made, and to know which surface it lands on. Since a surface must be a continuous walkable area, we can cull jumps that land on the same surface they started on. This also saves pathfinding time by preventing connections that would never be taken: from the algorithm’s perspective such a jump has no discernible benefit, yet by definition it has a cost greater than zero. To simplify even further, we can record only the start, end, and airborne time of each jump; then, in the “fill in the details” step (the equivalent of reconstructing the path from NavMesh faces), we copy over the precalculated jump arc and update its timestamps.
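A toy version of that jump pass, under an assumed constant gravity and a step-simulated arc (the surface layout and all names are illustrative): simulate the arc, find the landing surface, keep only start, end, and airborne time, and flag same-surface landings for culling.

```python
GRAVITY = -10.0  # assumed constant downward acceleration

def jump_arc(start, vx, vy, surfaces, dt=0.05, max_t=3.0):
    """Step a ballistic arc; return (start, end, air_time, surface_index)
    for the first surface crossed on the way down, or None."""
    t = 0.0
    while t < max_t:
        t += dt
        x = start[0] + vx * t
        y = start[1] + vy * t + 0.5 * GRAVITY * t * t
        for i, (x0, x1, sy) in enumerate(surfaces):
            if x0 <= x <= x1 and y <= sy:  # dropped onto a surface
                return (start, (x, sy), t, i)
    return None

surfaces = [(0.0, 4.0, 0.0), (6.0, 9.0, 0.0)]  # (x_min, x_max, y) per surface
hit = jump_arc((3.0, 0.0), vx=4.0, vy=5.0, surfaces=surfaces)
# Cull the connection if the landing surface is the one we started on.
started_on = 0
cull = hit is not None and hit[3] == started_on
```

A real implementation would replace the point-in-span check with raycasts against the actual colliders, but the cached `(start, end, air_time)` triple is all the pathfinder needs until the detail-filling step.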
This is versatile enough that I’m including it in my platforming toolchain project. In theory, it can be applied to any game in which AIs must jump while pathfinding. Its best fit is a platformer, where it can drive NPCs, model player speedruns, and help speed up the level design process. I see plenty of potential in combat-focused RPGs as well, since abilities that impart movement can be modelled much like jumps.
Since surfaces can be generated anywhere, it may be possible to boost pathfinding performance by scanning only a smaller bounds rather than the entire world. However, this no longer guarantees a path will be found, since some paths may leave the selected bounds. The same idea could be applied to AI with limited environment knowledge, generating surface representations on the fly.
As it stands, the algorithm depends heavily on raycasts and capsule-casts, which are infamously slow. Reducing the number of casts needed to validate a surface or jump would directly reduce the time taken to generate a mesh. One possibility is coarser jump validation, waiting several physics frames between each raycast.
As previously mentioned, the algorithm currently requires frequent raycasts, which in Unity is not a thread-safe operation. It could become one if the world were approximated as grid cells, possibly with a marching squares pass to approximate normals. Since the pathfinder only needs to validate physics once it has a candidate path, this grid can be quite coarse, allowing for fast startup times. Furthermore, Unity has recently added a Jobs-style way to request raycasts asynchronously (RaycastCommand), which further suggests moving the sweeper and pathfinder into Jobs as well.
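To illustrate the grid idea, here is a crude sketch: sample the world once into an occupancy grid, then estimate a surface normal at any solid cell from the directions of its empty neighbours (a much simpler stand-in for marching squares; everything here is illustrative, not Unity API). Reading the grid is pure data access, so it is trivially thread-safe.

```python
import math

GRID = [  # 1 = solid, 0 = empty; row 0 is the top of the world
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]

def solid(r, c):
    if not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])):
        return True  # treat everything outside the scanned bounds as solid
    return GRID[r][c] == 1

def normal_at(r, c):
    """Average the directions toward empty 4-neighbours (world y points up)."""
    nx = (0 if solid(r, c + 1) else 1) - (0 if solid(r, c - 1) else 1)
    ny = (0 if solid(r - 1, c) else 1) - (0 if solid(r + 1, c) else 1)
    length = math.hypot(nx, ny) or 1.0
    return (nx / length, ny / length)

print(normal_at(2, 1))  # flat ground: normal points straight up, (0.0, 1.0)
```

Walkability then reduces to a threshold on the normal's y component, and the coarse errors this introduces are caught later by the physics-validated path evaluation.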
Full project is available on my GitHub.