This project was an assignment from my game AI class to create an agent that will wander around a space while avoiding collisions with static obstacles as well as other agents.
At its core, a context map is a way of assigning values to a fixed set of directions and picking the most favorable one. This can be layered on top of the traditional steering-component approach, communicating with the movement system using either an XY vector or a heading and speed.
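A minimal sketch of that idea in Python (class and method names are my own, purely illustrative): a fixed number of evenly spaced direction slots, each accumulating influence, with the most favorable one winning.

```python
import math

class ContextMap:
    """Sketch of a context map: fixed direction slots, pick the best."""

    def __init__(self, num_slots=8):
        self.num_slots = num_slots
        self.values = [0.0] * num_slots

    def heading_for_slot(self, slot):
        # Slot 0 points along +X; angles increase counter-clockwise.
        return 2.0 * math.pi * slot / self.num_slots

    def add_influence(self, slot, amount):
        # Behaviors accumulate positive (interest) or negative (danger) values.
        self.values[slot] += amount

    def best_heading(self):
        # Pick the most favorable direction.
        best = max(range(self.num_slots), key=lambda i: self.values[i])
        return self.heading_for_slot(best)
```

The slot count is the map's angular resolution; eight slots is a common starting point before tuning.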
Sudden changes to the context map’s values can result in an unnaturally sharp turn, which is why context steering typically uses weighted blending instead of swapping out discrete behaviors. In practice this means ensuring all piecewise influence functions are continuous with no sharp jumps, and varying weights based on criteria such as distance, urgency, and perceived threat.
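One way to get such a continuous piecewise weight (a sketch, not the post's exact function) is a smoothstep falloff over distance: full influence inside an inner radius, none beyond an outer radius, and a smooth blend in between.

```python
def smooth_weight(distance, inner, outer):
    """Continuous piecewise weight: 1 inside `inner`, 0 beyond `outer`,
    smoothstep blend in between, so the map's values never jump."""
    if distance <= inner:
        return 1.0
    if distance >= outer:
        return 0.0
    t = (outer - distance) / (outer - inner)
    return t * t * (3.0 - 2.0 * t)  # classic smoothstep
```

Because the function and its slope are continuous at both thresholds, an obstacle entering or leaving range never causes a sudden jump in the map.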
For my implementation, I’m steering towards the highest-value direction, which is smoothed by the movement component. This could result in accelerating into a wall, so I take the context map’s value at the current heading to determine target speed, normalize it, and treat negative values as a command to stop rather than to avoid. I’m also using continuous piecewise functions for obstacle avoidance.
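That speed rule can be sketched as follows (function and parameter names are hypothetical): normalize the map's value at the current heading against some maximum, and clamp negative values to a full stop.

```python
def target_speed(value_at_heading, max_value, max_speed):
    """Map the context value at the current heading to a target speed.
    Negative values mean stop rather than avoid."""
    if value_at_heading <= 0.0:
        return 0.0
    # Normalize to [0, 1], then scale by the agent's top speed.
    return max_speed * min(1.0, value_at_heading / max_value)
```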
When writing AI agents, I tend to think of them as little robots. In this case, robotics would approach the problem by probing for distance, either with time-delay like sonar or lidar, or something mechanical like a whisker switch.
Extending this to games, distance probing is easily accomplished through raycasts. Whisker switches can be simulated using overlap tests or shapecasts, but these are much more expensive. Raycast readings map naturally to a negative influence that grows as the hit distance shrinks, making the steering consider directions with a hit less preferable.
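A sketch of that mapping (assuming the raycast reports a hit distance, or `None` on a miss): a hit at the probe's tip contributes almost nothing, while a hit right in front of the agent contributes the maximal negative influence.

```python
def danger_from_raycast(hit_distance, probe_length):
    """Convert a raycast result into a negative influence for the slot
    the probe was cast along. `None` means the ray hit nothing."""
    if hit_distance is None or hit_distance >= probe_length:
        return 0.0
    # Closer hits produce stronger danger, linearly scaled to [-1, 0).
    return -(1.0 - hit_distance / probe_length)
```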
However, small objects can easily slip between raycasts without being consistently detected. One option is increasing the number of raycasts, but this becomes expensive when many agents are active at once. Instead, the context map handles small dynamic objects such as other agents by calculating the heading to the target, then applying a negative influence at that angle that is smoothly blended away across neighboring directions.
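That angular blend might look like this sketch (names and the linear falloff are my own assumptions): write the strongest danger into the slot facing the other agent, fading out over a configurable angular spread.

```python
import math

def apply_agent_danger(values, agent_angle, strength, spread):
    """Write a negative influence at the slot facing another agent,
    smoothly blended into neighboring slots by angular distance."""
    n = len(values)
    for i in range(n):
        slot_angle = 2.0 * math.pi * i / n
        # Smallest signed angular difference between slot and agent.
        diff = math.atan2(math.sin(slot_angle - agent_angle),
                          math.cos(slot_angle - agent_angle))
        # Linear falloff to zero at `spread` radians away.
        falloff = max(0.0, 1.0 - abs(diff) / spread)
        values[i] -= strength * falloff
    return values
```

Because the falloff is continuous in angle, an agent drifting across slot boundaries never causes a discontinuous jump in the map.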
Advantages and disadvantages
This method produces believable wandering while avoiding collisions, but requires a lot of tuning. If weights are improperly set, an agent may not steer away from a wall in time, or may steer into the wall. It may also jitter away from the wall too sharply, or if the wander weight is too high it may spin in place instead of exploring.
The tight coupling between values only worsens when adjustments are made. For example, if movement speed is increased, the range or weight of the distance probes must also increase, trading off between response time and response intensity. Unless extra preventative measures are taken, this chain reaction continues and the entire system must be rebalanced.
This implementation considers steering towards a negative value as a command to stop rather than a command to run away. It might be interesting to have this configurable, especially for something like a character dodging a dangerous ability in an action game.
One of the biggest advantages of using context maps is the ability to shape influences. For example, an enemy in an action game might approach the player straight on, but then blend that off into strafing once at close range. They might also try to dodge impending danger at an angle rather than running straight away.
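That approach-then-strafe shaping could be sketched like this (a hypothetical example, not from the project): interest peaks straight at the target when far away, and blends toward the perpendicular directions up close.

```python
import math

def approach_or_strafe(values, target_angle, distance, strafe_range):
    """Influence shaping sketch: prefer heading straight at the target
    when far away, blending toward perpendicular (strafing) up close."""
    n = len(values)
    # blend = 1 when far, 0 when right next to the target.
    blend = max(0.0, min(1.0, distance / strafe_range))
    for i in range(n):
        slot_angle = 2.0 * math.pi * i / n
        diff = math.atan2(math.sin(slot_angle - target_angle),
                          math.cos(slot_angle - target_angle))
        toward = math.cos(diff)        # peaks straight at the target
        strafe = abs(math.sin(diff))   # peaks perpendicular to it
        values[i] += blend * toward + (1.0 - blend) * strafe
    return values
```

Dodging at an angle falls out of the same idea: shape the danger influence so the straight-away direction is not the only attractive escape.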
This approach is also locked to moving in the discrete directions of the context map. For grid-based games (both hexagonal and square) this may even be useful, but for a game with free motion it is undesirable. To solve this, one could increase the context map's resolution, at a cost in processing power, or interpolate the target heading based on the gradient of the context map's values.
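One common form of that interpolation (a sketch under my own assumptions, not necessarily what the project does) is fitting a parabola through the winning slot and its two neighbors to estimate where the true peak lies between slots.

```python
import math

def interpolated_heading(values):
    """Refine the winning slot's heading to sub-slot resolution by
    fitting a parabola through it and its two circular neighbors."""
    n = len(values)
    i = max(range(n), key=lambda k: values[k])
    left, mid, right = values[i - 1], values[i], values[(i + 1) % n]
    denom = left - 2.0 * mid + right
    # Vertex of the parabola, as an offset in [-0.5, 0.5] slots.
    offset = 0.0 if denom == 0.0 else 0.5 * (left - right) / denom
    return 2.0 * math.pi * ((i + offset) % n) / n
```

This recovers smooth headings from a coarse map without paying for more slots.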
Code is available in this project on my GitHub. (tag v1-obstacle-avoidance)
See also the continuation of this project.