An influence map is a grid-like data structure in which each cell carries information about its surroundings. This information is typically expressed in terms of threat or safety: cover or line of sight on a target, squad cohesion or separation, the range of potential attacks, or even incoming attacks that demand emergency evasion. This makes influence maps useful for tactical reasoning.
For example, for two characters with different attack ranges, it is easy to set up a tethering strategy, in which the longer-range character tries to stay within its own effective attack range while staying out of its opponent’s. Each character would add two influences: the opponent’s attack range as a “threat” influence, and its own attack range (also centered on the opponent) as a “safe” influence.
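A minimal sketch of how those two influences could be scored per cell, assuming a linear falloff for the threat and a ring-shaped falloff for the safe zone; the function name, signature, and ring width are illustrative, not from any particular implementation:

```cpp
#include <algorithm>
#include <cmath>

// Positive values mark safety, negative values mark threat.
// The parameters and the ring width below are assumed tuning values.
float TetherInfluence(float cellX, float cellY,
                      float opponentX, float opponentY,
                      float opponentRange, float myRange)
{
    float d = std::hypot(cellX - opponentX, cellY - opponentY);

    // Threat influence: the opponent's attack range, strongest at the
    // opponent and fading linearly to zero at its edge.
    float threat = std::max(0.0f, 1.0f - d / opponentRange);

    // Safe influence: a ring at my own attack range, also centered on
    // the opponent, marking cells where I can hit without being hit.
    float ringWidth = myRange * 0.25f; // assumed tuning value
    float safe = std::max(0.0f, 1.0f - std::abs(d - myRange) / ringWidth);

    return safe - threat;
}
```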

Implementation
While strategic decision-making is a proven application for influence maps, it is not the one I wish to focus on. Instead, I wish to create an agent that behaves like an ant, leaving behind a dissipating pheromone trail. Other agents then attempt to follow this trail while it lasts. Whether or not a trail is present, agents also perform a random walk, which allows them to fork off into new trails, smooths existing trails over time, and helps prevent bunching.
When visualizing an influence map, values are often displayed as color graphics based on each cell’s normalized value. Influences also often have different shapes, multipliers, and scales. This gave me the idea to build the influence map on graphics hardware: the influence map is a texture, and each influence provider draws a stamp texture into it, using shaders and matrices to vary its parameters. This eliminates the need to pre-cache calculations such as square roots for radial falloff, and even opens the possibility of a single stamp being a source of both threat and safety.
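As a sketch, here is what one such stamp evaluation might look like per texel, written as plain C++ mirroring the fragment shader’s math; the transform layout, names, and linear falloff are assumptions for illustration:

```cpp
#include <algorithm>
#include <cmath>

// Affine transform (rotation, scale, translation) from map texel
// coordinates into the stamp's local space, where the stamp covers
// the unit disc around the origin.
struct StampTransform { float m[2][3]; };

float EvaluateStamp(const StampTransform& toStamp,
                    float texelX, float texelY,
                    float multiplier)
{
    float lx = toStamp.m[0][0] * texelX + toStamp.m[0][1] * texelY + toStamp.m[0][2];
    float ly = toStamp.m[1][0] * texelX + toStamp.m[1][1] * texelY + toStamp.m[1][2];

    // Radial falloff computed on the fly; no precomputed square-root
    // tables are needed, since the GPU evaluates this per fragment.
    float falloff = std::max(0.0f, 1.0f - std::sqrt(lx * lx + ly * ly));

    // A signed multiplier lets one stamp act as threat (negative) or
    // safety (positive).
    return multiplier * falloff;
}
```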
My implementation uses Unreal, so I took advantage of Runtime Virtual Textures for this. If you are setting up a similar system, make sure the RVT Volume’s bounds line up with the surface you are using to preview it.
The next step was to accumulate a trail and dissipate it over time. For this, I set up a shader that took the RVT containing the stamps and added it into an accumulator, while also blurring the accumulator’s previous frame (the current and previous frames live in two separate Render Textures that swap roles each frame). The blur weights were tuned to sum to about 0.98, so the trails slowly fade rather than growing in intensity indefinitely.
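A CPU-side sketch of that accumulate-and-dissipate pass, assuming a 3×3 Gaussian kernel rescaled so its weights sum to 0.98; the actual shader’s kernel shape and buffer handling may differ:

```cpp
#include <algorithm>
#include <vector>

struct Grid {
    int w = 0, h = 0;
    std::vector<float> v; // w * h texels

    float at(int x, int y) const {
        // Clamp to the edge, like a clamped texture sampler.
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return v[y * w + x];
    }
};

// One frame of the trail update: blur-and-decay the previous accumulator,
// then add the freshly stamped influences on top. The caller swaps prev
// and next each frame (the ping-pong between the two Render Textures).
void AccumulatePass(const Grid& stamps, const Grid& prev, Grid& next)
{
    // 3x3 Gaussian kernel rescaled so its weights sum to ~0.98:
    // a single pass both spreads the trail and slowly fades it.
    static const float kKernel[3][3] = {
        {1, 2, 1},
        {2, 4, 2},
        {1, 2, 1},
    };
    const float kScale = 0.98f / 16.0f;

    for (int y = 0; y < next.h; ++y) {
        for (int x = 0; x < next.w; ++x) {
            float blurred = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    blurred += kKernel[dy + 1][dx + 1] * prev.at(x + dx, y + dy);
            next.v[y * next.w + x] = blurred * kScale + stamps.at(x, y);
        }
    }
}
```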
The trail itself is encoded as color: each agent’s velocity is remapped from [-1, 1] to [0, 1] and written into the red and green channels of its stamp. However, blurring the trail with this encoding distorts the decoded velocity, since fading toward black reads back as negative on both axes. I solved this by also writing a constant blue value, which fades at the same rate and can therefore be used as a reference to recover the original values of the red and green channels.
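A sketch of that encode/decode pair (names illustrative). Because blur and decay scale all three channels identically, dividing red and green by blue recovers the stored direction no matter how far the trail has faded:

```cpp
struct Color { float r, g, b; };

// Written into an agent's stamp: velocity remapped from [-1, 1] to
// [0, 1] in red/green, plus a constant reference value in blue.
Color EncodeVelocity(float vx, float vy)
{
    return { vx * 0.5f + 0.5f, vy * 0.5f + 0.5f, 1.0f };
}

// Read back from the accumulator after any number of blur/decay passes.
// Returns false where the trail has faded to nothing.
bool DecodeVelocity(const Color& c, float& outVx, float& outVy)
{
    if (c.b < 1e-4f)
        return false;
    outVx = (c.r / c.b) * 2.0f - 1.0f;
    outVy = (c.g / c.b) * 2.0f - 1.0f;
    return true;
}
```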
The last step was to steer agents to align with the trail. To do this, each agent samples the accumulator at a fixed distance ahead along its current heading and decodes the color back into a velocity. However, this introduced an issue where agents would clump up in vortex-shaped attractor fields, and would only break out if the random walk carried them far enough over multiple frames. This could likely be solved by adding a non-accumulating buffer for separation, which could also easily double as obstacle avoidance.
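A sketch of that per-frame steering step. SampleAccumulator stands in for the actual texture read plus the decode from the previous listing, and the tuning parameters are assumptions:

```cpp
#include <cmath>
#include <cstdlib>

// Texture read + decode from the previous listing; returns false where
// the trail has fully faded. (Stand-in declaration for illustration.)
bool SampleAccumulator(float x, float y, float& outVx, float& outVy);

struct Agent { float x, y, heading; }; // heading in radians

void SteerAgent(Agent& a, float lookAhead, float alignWeight, float jitter)
{
    // Probe the map a fixed distance ahead along the current heading.
    float aheadX = a.x + std::cos(a.heading) * lookAhead;
    float aheadY = a.y + std::sin(a.heading) * lookAhead;

    float tx, ty;
    if (SampleAccumulator(aheadX, aheadY, tx, ty)) {
        // Turn partway toward the trail direction at the probe point.
        const float kTwoPi = 6.28318530f;
        float delta = std::remainder(std::atan2(ty, tx) - a.heading, kTwoPi);
        a.heading += delta * alignWeight;
    }

    // Random walk, applied whether or not a trail was found: it forks
    // new trails, smooths old ones, and discourages bunching.
    a.heading += (std::rand() / (float)RAND_MAX - 0.5f) * jitter;
}
```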
Further steps
This approach to influence maps somewhat resembles flocking, except the only steering force is alignment. If flocking were fully implemented using influence maps as a layer of indirect communication, and the agents ran as particles on the GPU, I would expect the number of supported agents to be limited only by GPU throughput.
This experiment also blends influence maps with flow fields. Because the cells hold vectors rather than scalars, flow information exists only in areas that have already been travelled, rather than being propagated across the entire grid; the map acts like a shared memory between agents. Steering from this data may also have an advantage in believability over cached A*, since pathfinders often use data they shouldn’t have, such as knowledge of tiles outside their line of sight.