Why ‘hierarchical tasks’ are bad for realism
Getting the behavior of the inhabitants in the life-simulation part of my game right meant, at first, hours of searching for just the right logical elements.
I started off with my beloved tool Behavior Designer, a behavior tree addon for Unity. It already offers a lot of the features needed to make an AI reason about its surroundings, such as tasks that let it see, hear, or navigate using A* or other means.
This was a great starting point, since you can literally implement a basic walking AI within seconds.
However, all that functionality for interacting with the game world didn’t help me at all if the AI couldn’t decide what to do in a realistic manner.
Out of the box, Behavior Designer only offers a simple ‘conditional hierarchy’ approach to decision making. This meant that, at any point in game time, I had to tell the AI what’s best for it by stacking the possible tasks in a fixed hierarchy:
Can’t shower and already ate enough? Sit on the couch, even though right now playing guitar might be better!
That’s not a realistic approach at all. In real life, you would consider every possible task, or at least every reasonably possible one, and choose whichever offers the greatest benefit at that time. Think of it more as parallel decision making, where the hierarchy of tasks is determined by the value they currently offer the AI, not fixed in advance.
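To make the contrast concrete, here is a minimal sketch (the real project is C#/Unity with Behavior Designer; all names and scores below are made up for illustration). The hierarchical approach hard-codes the order of the checks, while the utility approach recomputes the ranking from the current scores every time:

```python
def pick_hierarchical(ai):
    # Fixed stacking order: the order of the checks decides,
    # regardless of what would pay off most right now.
    if ai["can_shower"]:
        return "Shower"
    if ai["is_hungry"]:
        return "Eat"
    return "SitOnCouch"  # fallback, even if guitar would be better

def pick_by_utility(scores):
    # Parallel decision making: score every candidate and take
    # the one with the greatest benefit at this moment.
    return max(scores, key=scores.get)

ai = {"can_shower": False, "is_hungry": False}
scores = {"SitOnCouch": 0.3, "PlayGuitar": 0.7}
hier = pick_hierarchical(ai)    # "SitOnCouch"
best = pick_by_utility(scores)  # "PlayGuitar"
```

The hierarchical version can only ever pick what its author ranked first; the utility version lets the ranking emerge from the situation.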
So I needed to implement a utility-based AI.
Implementing a Utility AI
Behavior Designer already comes with a utility selector task, but it only calls getUtility() on each child task and picks the one with the highest value. All the structure behind that was left to me, so I made this:
The user first decides what the AI actually needs. In the picture I left out many needs, but usually these would be things like hunger, bladder, fun, social, and so on. Each need is then assigned a weight curve, so the utility depends on both the need’s current value and the weight at that value. For example, increasing hunger by 20 when it is already nearly full (meaning the AI isn’t really hungry) is far less important than increasing bladder from -80 to -60, since right now the AI needs to pee much more urgently than it needs to eat.
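That hunger-versus-bladder comparison can be sketched with a single response curve. This is purely illustrative (the actual curves in the game are hand-tuned assets, and this linear curve is an assumption of mine): needs run from -100 (desperate) to +100 (fully satisfied), and lower values get higher weights.

```python
def weight(value):
    # Hypothetical weight curve: 1.0 at value -100 (desperate),
    # 0.0 at value +100 (fully satisfied).
    return (100 - value) / 200

def utility_of_change(value, delta):
    # Marginal benefit of raising a need from `value` by `delta`,
    # weighted by how urgent the need currently is.
    return weight(value) * delta

# Hunger already near full: +20 barely matters.
eat = utility_of_change(80, 20)    # 0.1 * 20 = 2.0
# Bladder at -80: the same +20 matters a lot.
pee = utility_of_change(-80, 20)   # 0.9 * 20 = 18.0
```

With the same raw change of +20, the weight curve makes peeing nine times as valuable as eating, which is exactly the effect described above.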
Actions are the things the AI can actually do to increase the value of its needs, such as eating or sleeping. Each action can be assigned the needs it affects and by how much it changes their values.
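Combining the two ideas, an action’s total utility can be sketched as the weighted sum of the changes it makes to each need it touches. Again, all numbers and names here are made up for illustration:

```python
def weight(value):
    # Same illustrative weight curve as above: lower (more urgent)
    # need values get higher weights.
    return (100 - value) / 200

# Hypothetical action definitions: action -> {need: change in value}.
ACTIONS = {
    "Eat":   {"hunger": 40, "bladder": -10},  # eating also fills the bladder a bit
    "Sleep": {"energy": 80},
}

def action_utility(name, needs):
    # Sum the weighted benefit over every need the action touches.
    return sum(weight(needs[n]) * delta
               for n, delta in ACTIONS[name].items())

needs = {"hunger": -50, "bladder": 20, "energy": 60}
# Eat:   0.75 * 40 + 0.4 * -10 = 26.0
# Sleep: 0.2 * 80             = 16.0
```

Note that an action can also carry negative deltas, which naturally penalizes it when the affected need is already urgent.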
If you want to know more about how these weight curves should look, I recommend reading this article, and also take a look at this picture (remember, a higher value means the need is more satisfied):
Telling the behavior tree our most personal needs
Above, I already mentioned that the behavior tree’s utility selector calls getUtility(); so it was just a matter of implementing a task that implements getUtility() and sources its utility from the AI_NEED_ACTION MonoBehaviour.
First, let’s take a look at how this is actually hooked up in the behavior tree:
As you can see, the Utility selector will select the XOA Evaluator with the highest utility value and run it.
This meant that each evaluator would have to predict the increase in utility from the action it is tied to, and also stay active until that action completes, even if another task becomes more valuable in the meantime. Otherwise, the AI would constantly interrupt whatever it is doing to start new actions, and that wouldn’t be good.
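The commitment idea can be sketched like this (the real evaluators are C# behavior-tree tasks; the class names and the infinity trick below are my own illustration, not the actual implementation): while an evaluator is running, it reports an unbeatable utility so the selector can’t switch away mid-action.

```python
class Evaluator:
    # Sketch of an evaluator tied to one action: it predicts the
    # action's utility gain, and once started, stays selected
    # until the action is finished.
    def __init__(self, name, predicted_gain):
        self.name = name
        self.predicted_gain = predicted_gain
        self.running = False

    def get_utility(self):
        # While running, report an effectively infinite utility so
        # a newly attractive task can't interrupt the current one.
        return float("inf") if self.running else self.predicted_gain

def select(evaluators):
    best = max(evaluators, key=lambda e: e.get_utility())
    best.running = True
    return best

eat, sleep = Evaluator("Eat", 10), Evaluator("Sleep", 5)
first = select([eat, sleep])          # picks Eat (10 > 5)
sleep.predicted_gain = 50             # Sleep becomes more valuable...
second = select([eat, sleep])         # ...but Eat is still running
```

A real implementation would clear the running flag on completion (and probably allow interruption for emergencies), but this captures why the evaluator, not the selector, owns the decision to stop.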
Overall, I think this is one of those cases where the actual code is extremely simple but has an amazing effect on how the AI behaves.
The results this logic gives you are amazing and immediately noticeable. For testing, I created the main needs hunger, sleep, and thirst, and assigned four tasks: Eat, Work, Sleep, and Pee. For all the utility curves and action values, I just went by feel, that is, by how I felt those things actually impact me during the day. The AI immediately settled into a day-night cycle of eating, working, peeing, and sleeping. That’s far too regular for the final game, which will probably get some randomness thrown into the mix, but for what it was, it really amazed me how such small things made the AI’s behavior so realistic.
The AI has since received countless updates. One pretty interesting addition is smart objects, objects that tell the AI how they’re used, which I will go into detail about in my next update!