December 4, 2013

Simon's Ant On The Beach


So here at Candango Games we are working on this horror game for PC which is bound to be announced soon – it's the game I'm studying the building blocks of horror for – and one of the main features I wanted it to have was a good and immersive stealth system. And we all know the single most important thing in making great stealth is great AI.

Looking to achieve that, I came up with a heuristic approach to designing the AI's architecture. The following paragraphs explain how this design problem was approached.

Some theories in Cognition imply that we don't see Reality for what it is; instead, what we "see" is a virtualized "copy" of Reality created and processed in our minds. The same line of thought is demonstrated in Plato's ancient Allegory of the Cave.

That's also how we naturally approach the idea of AI at first thought: we make the agents acquire and process information about the world to make decisions based on that information. It makes complete sense: we're emulating the way we think of things, copying it into our artificial people, and in fact what we're trying to do with an agent is make it act the way a real human being would, so that's not wrong at all.

But there's one thing to observe in the specific case of Video-Games: their equivalent of the "real world" is already virtual. It's already made of pure information, so why should we re-virtualize it into the agent's brain and then try to make sense out of it? Why try to bring the world into the agent's brain and not look at it from the other side and bring the agent's brain into the world instead? So this is how I decided to do it: the world will "think", be "intelligent" and have knowledge. The world, not the agents.

I decided to do it this way because it proved better for our purpose of making AI for interesting stealth gameplay dynamics. Imagine the following gameplay sequence:

You're being chased inside a house. Running down a corridor you see an open door leading to a bedroom, so you run inside and lock the door. The chasers start to bash the lock and you know they'll soon break through and get into the bedroom, so you look around and you see a closed window, a bed and a closet. You open the window and then hide under the bed. The door breaks open and one of the chasers comes inside; he runs to the open window and looks outside, then he shouts to the others that you went outside, jumps out of the window and disappears into the dark. You have escaped, this time.

That's what can be done with the basic heuristics of "bringing the brain to the world", with extremely simple code. Like, Pacman-simple. The interesting detail is that the agent (the chaser) is completely unaware of anything that happened: he doesn't know what the window is, or what it means, or where it leads to, not even that it leads somewhere. The agent is completely oblivious to all those concepts, things and ideas. All he did the entire time was follow simple waypoints and scripts around.

Now picture the following events:

You're there at the computer desk, like, game-making, and then suddenly you find yourself in the kitchen, face in the open refrigerator, looking for something you're not sure what, or coming back to the desk with a cup of coffee you barely remember wanting or even pouring.

How did that happen? Did you make an informed and thoughtful decision to get up and do that stuff? Where did those actions originate from?

What happens is that it's not the agent that's "intelligent", it's the window. The brain is in the world, not in the agent. The window tells the agent what to do, where to look, where to go, what to say to the other agents, and how to portray "his" decision to the player. When the agent enters the bedroom, the whole environment tells him to do things. The bed the player is hiding under tells him to look under it. The closet tells him to look inside it. The recently-opened window tells him to go outside. After receiving all those tasks, the agent proceeds to make a Utility-based Decision using the priority of each task.

By interacting with the window, the player raises its importance, which increases the priority of the tasks the window gives, making the agent pick its task over the others. That's controlled by the window's scripting. For example, if the player was inside the closet and made no noise, the closet could instead reduce the priority of its task, leading the agent to look under the bed first and giving the player a chance to escape; it's all up to the Game Designer to tweak and decide how each piece of the environment should work.
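
To make that concrete, here's a minimal plain-C# sketch of the idea. It's not the actual game code, and every class and member name (Task, ITaskSource, OfferTasks, Window, Bed, Agent) is made up for illustration: pieces of the environment are task sources that offer weighted tasks, and the agent just runs the best offer.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A task offered to an agent by some piece of the world.
public class Task
{
    public string Name;
    public float Priority;
    public Action<Agent> Execute;   // e.g. walk to a waypoint, play an animation
}

// Anything in the world (or attached to the agent) that can hand out tasks.
public interface ITaskSource
{
    IEnumerable<Task> OfferTasks(Agent agent);
}

// The window is the thing that "knows" what an open window means.
public class Window : ITaskSource
{
    public bool RecentlyOpened;     // raised when the player interacts with it

    public IEnumerable<Task> OfferTasks(Agent agent)
    {
        float priority = RecentlyOpened ? 0.9f : 0.2f;
        yield return new Task
        {
            Name = "SearchOutsideWindow",
            Priority = priority,
            Execute = a => Console.WriteLine($"{a.Name} looks out the window and jumps outside.")
        };
    }
}

public class Bed : ITaskSource
{
    public IEnumerable<Task> OfferTasks(Agent agent)
    {
        yield return new Task
        {
            Name = "LookUnderBed",
            Priority = 0.5f,
            Execute = a => Console.WriteLine($"{a.Name} checks under the bed.")
        };
    }
}

// The agent itself stays dumb: it just runs the highest-priority task on offer.
public class Agent
{
    public string Name;
    public float Health = 1f;        // extra state other task sources may inspect
    public bool InImmediateDanger;   // (used by the attached/internal sources later)

    public void Act(IEnumerable<ITaskSource> environment)
    {
        Task best = environment.SelectMany(s => s.OfferTasks(this))
                                .OrderByDescending(t => t.Priority)
                                .FirstOrDefault();
        if (best != null) best.Execute(this);
    }
}

public static class BedroomDemo
{
    public static void Main()
    {
        var chaser = new Agent { Name = "Chaser" };
        var bedroom = new List<ITaskSource> { new Bed(), new Window { RecentlyOpened = true } };
        chaser.Act(bedroom);         // the open window "wins" and drives the behavior
    }
}
```

In that sketch the open window simply offers a better-weighted task than the bed, so the chaser "decides" to jump outside without knowing anything about what a window is.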




But not all tasks come from external sources. Things internal or attached to the agent can also give them tasks. For example, if the agent has a medic pack and he's hurt, the medic pack (attached) will tell him to use it on himself, and give the task a weight relative to how gravely wounded the agent is. At the same time, the agent's damage system (internal) will tell him to run away from what's hurting him. The likely course of action for the agent in that situation, based on the weights of each task, is to run and seek safety, then treat the wound, then proceed to keep fighting or to keep running away. That's easily achieved by making the medic pack consider that it can't be used under immediate danger, since stopping to treat the wounds would leave the agent vulnerable to more attacks, so it reduces the priority of its task if the situation isn't suited for it.
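
Continuing the speculative sketch from above (and reusing its hypothetical Task, ITaskSource and Agent types, including the Health and InImmediateDanger fields), the medic pack and the damage system are just two more task sources, one attached to the agent and one internal to it:

```csharp
// Attached source: wants to be used when the agent is hurt, but lowers its own
// priority while the situation is still dangerous.
public class MedicPack : ITaskSource
{
    public IEnumerable<Task> OfferTasks(Agent agent)
    {
        float priority = 1f - agent.Health;            // the worse the wound, the higher the weight
        if (agent.InImmediateDanger) priority *= 0.2f; // don't stop to bandage under fire

        yield return new Task
        {
            Name = "HealSelf",
            Priority = priority,
            Execute = a => a.Health = 1f
        };
    }
}

// Internal source: pushes the agent away from whatever is hurting it.
public class DamageSystem : ITaskSource
{
    public IEnumerable<Task> OfferTasks(Agent agent)
    {
        float priority = agent.InImmediateDanger ? 1f - agent.Health * 0.5f : 0f;

        yield return new Task
        {
            Name = "SeekSafety",
            Priority = priority,
            Execute = a => a.InImmediateDanger = false
        };
    }
}
```

With a badly hurt agent still in danger, SeekSafety outweighs HealSelf; once the danger flag clears, the heal task wins the next decision, which is exactly the run-then-bandage sequence described above.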

These are the very basics of the heuristic approach, and I was very happy with it. We can make the stealth system behave credibly and seem complex while still being very simple in code, and the system is versatile enough to allow tasks to be added and improved easily. For example, we can create a new task source for a car or a locker or a ladder and never have to touch the agent's programming, or we can improve or tweak the systems that control a single task or environmental brain piece without directly affecting anything else. And the behavior possibilities we studied seem very interesting for creating an engaging experience – and, for the specifics of a horror game, some moments of uncanny displays of intelligence for the player to witness.

I then considered adding a text-parsing system to see how it could be exploited by these heuristics. A textual instruction of "put the blue ball into the green box" given to a friendly NPC would be identified by the respective green box and blue ball, which would then turn it into tasks telling the agent how to grab the ball and where to release it, making it all seem as if the agent understood what the player told him to do.
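
Again, just a speculative sketch on top of the same hypothetical types from the earlier snippets: the parsed sentence is broadcast to the world, the objects that recognize their own names in it emit the tasks, and the agent never "understands" the sentence at all.

```csharp
// The ball and the box each check whether the instruction mentions them,
// and if so they offer the agent their part of the job.
public class BlueBall : ITaskSource
{
    public string Instruction;  // e.g. "put the blue ball into the green box"

    public IEnumerable<Task> OfferTasks(Agent agent)
    {
        if (Instruction != null && Instruction.Contains("blue ball"))
            yield return new Task
            {
                Name = "PickUpBlueBall",
                Priority = 1.0f,   // grabbing comes first
                Execute = a => Console.WriteLine($"{a.Name} grabs the blue ball.")
            };
    }
}

public class GreenBox : ITaskSource
{
    public string Instruction;

    public IEnumerable<Task> OfferTasks(Agent agent)
    {
        if (Instruction != null && Instruction.Contains("green box"))
            yield return new Task
            {
                Name = "DropIntoGreenBox",
                Priority = 0.9f,   // slightly lower, so it runs after the pick-up
                Execute = a => Console.WriteLine($"{a.Name} drops it into the green box.")
            };
    }
}
```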

All well and good so far. Then the raw Game-Design phase ended and I got to the point of actually making it in code. The execution phase. This is what my system looks like at this point:


"Behavior is always an interaction of an agent with its environment".


As I started programming that system in Unity, I took a bit of time every day to research the basics of AI. Not only video-game AI, but AI in general. I wanted to make sure I wasn't just wasting time reinventing the wheel, after all.

I started to stumble upon some very interesting concepts and half-century-old studies and experiments. Among them, the "Frame of Reference" property, experimented with in the Heider and Simmel animated film (1944) and exemplified in Herbert Simon's "The Sciences of the Artificial" (1969) by the anecdote "An Ant On The Beach", and the Kuleshov Experiment (the original, from around 1919), which was possibly not considered related at the time. That's where things started to surprise me (and deprive me of sleep).

Finding out that what I was working on came from the opposite side of those studies and met them in the middle of the road was very eye-opening. I had come at it from a different perspective: making the world "intelligent" instead of the agents would be an efficient thing to do, because we virtualize the world we live in, while the world an agent lives in is already virtual. But that was the base of something that could be used to do much more, and I wasn't seeing it.

What all these studies have in common is this: a simple thing combined with a complex thing creates a complex output that our minds then interpret as even more complex and intelligent than it actually is.

Of course it's Game Design 101 that the point of the AI is to make the agents seem complex and intelligent by using a diversity of simple illusionist tricks. But the interesting part is not that basic approach; it's the pattern of the Simple vs. Complex formula repeated in all those experiments. The fact that the same building blocks apply to each of them even though their specifics differ.

After seeing how present that aspect was in each of those experiments and studies, I started to look for similar things everywhere else, because maybe it was there in more places and I just didn't know. Most importantly, I was looking for it in the gaps of our AI system. And then I started asking many questions about design problems and trying to answer them using that same system and that same formula.
  • If the tasks are created under a modular system and can look into the agents' information to calculate and weigh behaviors to give them as tasks, why not increase the amount and detail of the information they have? Give them personalities, backgrounds, moods, feelings, social dynamics, and then have tasks consider that information too instead of only health and inventory?
A behavior where a character born in the jungle knows where to find water and food while one from the city doesn't can be just a simple background check done by the task source that sends them after the food and water. If the character is from here, give this task; if not, don't give it.
  • Why am I only considering the concrete side of the world and not using abstract stuff as well? The things that are there but cannot be seen? Things like drama, suspense, comedy? Why only create behavior originated by things and not by people and ideas? Why not create tasks originated by groups of things, by science and plot scenes?
We can make dynamic landmark plot scenes to spice up the sea of emergent behaviors. Scenes that pick from the characters available according to how each one's situation and personality fits each character role required (or optional) for the scene to happen.
Scenes can be scheduled, tied to a place, or triggered by a situation or by a certain point of the global story arc. For example, to create a generic zombie-movie scene where one character has been bitten or hurt and the other characters argue whether they should help him, kill him, or leave him behind: it doesn't matter who's hurt or who's in the scene or not; the task source (a plot scene) just has to evaluate which character fits each scene role best and assign tasks for them to "act" in each role, then let everything happen naturally through their utility-based decisions. As another example, we could have the final climax scene pick whichever character makes sense and has been outside the player's watch in key situations to be revealed as the killer of the thriller plot.
  • Why not use these dynamic plot scenes and propagate their effects to further events and change the plot naturally?
Say the player sneaks around and flattens the tires of an NPC's car: he can't show up to the scheduled plot scene "The Party", so another NPC gets to dance with the common love interest (because the dance scene will replace him with the next NPC on the list who fits the role), and then the future love-triangle scenes swap the two characters between the roles of who's the boyfriend and who's the other guy, leading to further outcomes later.
Or if a character taking on the Leader role of a group lost in the desert starts to lose his sanity or fall into despair, the abstract idea / social dynamic "Team Leadership" chooses another NPC to take on the role and the responsibility of being the leader. Similarly, if the group is unhappy with the leader's decisions, the group's social dynamics can spawn a new dynamic plot scene where they fight over it and then disband into two groups.
  • But then why stop at that? Why not let the plot scenes detect when the player himself has picked up a role and then adapt the scene to account for that?
The scene picks a character for a second leader role (or leaves it open for a while to see if the player fills it), others to support each side, and others to argue that it's better to stick together and stop fighting.
  • But characters can also be fighting while still running away from the zombies and while arguing with each other about who's the leader or whether the hurt guy is going to be left behind, so why not improve the decision system and make the characters capable of multitasking?

Agents have "physresources" that let them pick multiple tasks at once (and weigh decisions based on groups of tasks rather than only individual ones): concentration limits, arms, legs, mouth, eyes... A minimal sketch of this follows the list below.

    • If "behavior is always an interaction of an agent with its environment", why not extend that meaning of the term behavior to encompass personality and mood? Don't we behave differently at home and at work or in social events, or with friends or strangers? If the place is happy don't we become happier and if the place is serious we act serious in accordance to it? Isn't the same true for an event or situation?

The Current Model

After further consideration of many questions and aspects, without any big increase in code complexity, but simply by better exploring that same original heuristic made for stealth AI while keeping all the basics of the system intact, the model finally arrived at this:


    "Everything that happens once can never happen again. But everything that
    happens 
    twice will surely happen a third time". Patterns can always go further to reach more stuff.

