Flocking and Context Maps

In my last post, I built an AI that wandered around an environment while avoiding collisions. The next assignment was to do flocking instead of wandering, while still avoiding collisions. Since I’ve already implemented flocking, I’ll focus instead on how this project differs from its previous iteration, and from other implementations of the flocking algorithm.

Framework improvements

The first major improvement was to change ContextMapSteering from using inheritance to composition. The class I used for wandering had to be broken into its component behaviors, which was fine since its internals were already fairly disconnected. This decision made building and tuning new AI very easy, and often codeless.

Component behaviors also allowed common values to be standardized, like weight, distance falloff, and shaping function. This in turn made automatic training possible (more on that later). It also made it possible to visualize the effects a single component behavior has, and to know an agent’s general behavior at a glance, both of which make tuning more intuitive.
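To illustrate the composition idea, here’s a rough sketch in Python (my actual implementation is C# for Unity; the names ComponentBehavior, fill_map, and the linear falloff are simplifications I made up for this example):

```python
from dataclasses import dataclass
from typing import Callable, List
import math

SLOTS = 16  # number of directions sampled around the agent

@dataclass
class ComponentBehavior:
    """One steering concern (e.g. cohesion, obstacle avoidance)."""
    weight: float                    # standardized across all behaviors
    falloff: float                   # distance beyond which influence fades to 0
    shape: Callable[[float], float]  # shaping function over angular offset

    def fill_map(self, angle_to_target: float, distance: float) -> List[float]:
        # Scale the whole map by the weight and a linear distance falloff.
        scale = self.weight * max(0.0, 1.0 - distance / self.falloff)
        return [scale * self.shape(slot_angle(i) - angle_to_target)
                for i in range(SLOTS)]

def slot_angle(i: int) -> float:
    return 2 * math.pi * i / SLOTS

def combine(maps: List[List[float]]) -> List[float]:
    # Composition: the steering host just sums its components' maps.
    return [sum(col) for col in zip(*maps)]
```

The host then picks the slot with the highest combined value as its steering direction, which is why standardized weights make tuning (and visualizing) each component in isolation so convenient.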

Shaping functions are one of the most powerful tools available to context maps, so I implemented them as well. The shape preview updates live, which made tuning toward desirable behavior much faster! It also gave insight into why everything was running so slowly: ManualShape uses AnimationCurve.Evaluate(), which runs significantly slower than CircularShape’s single Mathf.Cos() call. I don’t know how the internals of AnimationCurve work, but it appears to make multiple trig/sqrt calls per evaluation.

I later added the Triangle Shape function, which is good for moving towards or away from a stimulus at an angle.
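The two shapes can be sketched like so (a Python approximation of the idea, not my Unity code; the exact curves and parameter names in my project differ):

```python
import math

def circular_shape(offset: float) -> float:
    """Smooth falloff: 1 directly toward the stimulus, -1 directly away."""
    return math.cos(offset)

def triangle_shape(offset: float, peak: float) -> float:
    """Peaks at +/- `peak` radians off the stimulus direction, which is
    useful for moving toward or away from a stimulus at an angle."""
    offset = abs(offset) % (2 * math.pi)
    if offset > math.pi:
        offset = 2 * math.pi - offset  # fold into [0, pi]
    return max(0.0, 1.0 - abs(offset - peak) / peak)
```

With a peak of, say, 45 degrees, the triangle shape is zero straight at the stimulus and strongest at a diagonal, producing that angled approach/retreat behavior.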

After reading over the documentation some more, I found a function that returns the closest point on a collider. This makes it possible to do away with the inconsistent reaction provided by raycasting, to generalize reactions against static and dynamic obstacles, and to shape those reactions.
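Unity handles arbitrary collider types for you, but the core idea is easy to show with a toy axis-aligned box in Python (my own stand-in example, not how Unity implements it):

```python
def closest_point_on_aabb(point, box_min, box_max):
    """Clamp each coordinate of `point` into the box's extent.
    If `point` is inside the box, it is returned unchanged."""
    return tuple(min(max(p, lo), hi)
                 for p, lo, hi in zip(point, box_min, box_max))
```

The vector from that closest point back to the agent then gives a consistent direction to feed into a danger map, regardless of the obstacle’s shape, which is what makes this approach so much more uniform than raycasting.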

Finally, I overhauled the debug info for context maps. The option to draw gizmos in the scene is still present, but disabled by default to prevent visual clutter. Instead this can be found in the Inspector under the Preview foldout. This is available for the entire ContextMapSteering, summarizing the current state of the agent just like the gizmos, and on individual component behaviors, which further speeds up the tuning process.

Automatic training

Since tuning the values is typically a very time-consuming step, I decided to make a tool to automatically home in on the ideal weights and parameters. The implementation I chose is a genetic algorithm. First, the agents are scored: time spent near other agents earns points, while collisions take points away. Then all but the best-performing agents are cut (an adjustable ratio), and the remaining best performers are duplicated and mutated slightly. The mutation rate decays a bit every generation, allowing for finer and finer tuning.
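A single generation step can be sketched roughly like this (a simplified Python stand-in for the real logic; the genome layout, Gaussian mutation, and parameter names are assumptions for illustration):

```python
import random

def evolve(population, scores, survivor_ratio=0.25,
           mutation_rate=0.1, decay=0.95):
    """One generation: cull, clone the survivors, mutate the clones.
    `population` is a list of genomes (lists of floats, e.g. behavior
    weights), `scores` their fitness (higher is better)."""
    ranked = sorted(zip(scores, population), key=lambda p: p[0], reverse=True)
    keep = max(1, int(len(population) * survivor_ratio))
    survivors = [genome for _, genome in ranked[:keep]]
    next_gen = []
    while len(next_gen) < len(population):
        parent = random.choice(survivors)
        child = [g + random.gauss(0.0, mutation_rate) for g in parent]
        next_gen.append(child)
    # The rate decays each generation, so early search is broad
    # and later generations fine-tune.
    return next_gen, mutation_rate * decay
```

Running this in a loop with fresh scores each generation is the whole training loop; everything interesting lives in the scoring.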

The Trainer script handles the culling and respawning, and relies on the Scatterer script to place agents in the world, as well as the Flocking Scoring System to pick which agents move forward to the next generation.

While running the training program, I noticed some interesting emergent behaviors evolve over time. At first, the agents learned to strongly distrust other agents, reacting sharply to any nearby collider. I suspect this was due to the wide potential variation between agents and the severe penalty on collision, which made neighbors unpredictable and risky to be around. However, as they improved, they began to ease off the sharp reaction, since they could trust their neighbors to be less erratic and to handle collisions more reliably.

Further development

Unfortunately, my implementation and training were rushed, so I don’t have the data saved, but I plan to run this experiment again once I improve the trainer script. In particular, I’d like to change the randomness from linear to logarithmic, graph performance over time, and save snapshots of the best performers from each generation. I’d also like to be able to pick which attributes are being tuned using reflection and a custom editor.

A major issue I ran into was performance. I suspect this was partly due to too many calls to AnimationCurve.Evaluate(), so I will be transitioning away from that. The flocking neighborhood updating every frame seemed to have a bigger impact, however. This could be fixed by staggering neighborhood updates so only a few happen each frame, or by using trigger colliders to take advantage of Unity’s physics system optimizations.
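The staggering idea is simple enough to sketch (illustrative Python, not my Unity code; the group count is an arbitrary assumption):

```python
def staggered_update(agents, frame, groups=8):
    """Update only the agents whose index falls in this frame's group,
    so one full neighborhood refresh is spread across `groups` frames."""
    for i, agent in enumerate(agents):
        if i % groups == frame % groups:
            agent.update_neighborhood()
```

Each agent’s neighborhood is then at most `groups` frames stale, which flocking tolerates well since neighbors move only a little per frame.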

This is more of an irritation, but after splitting up the component behaviors, I found the weights ended up too far apart in the Inspector for intuitive manual tuning. This could be fixed by using a custom Editor to show them together on the context map steering host, possibly alongside miniature preview windows.

Code is available in this project on my GitHub (tag v2-flocking).
