Strategy Pattern for AI
Creating realistic computer opponents is, in many games, more of an art form than a science. In puzzle games we can sometimes analyse the game to determine how best to play, perhaps capping the depth to which the AI searches for solutions to reduce its difficulty as an opponent. But how do we create a sophisticated opponent in less mechanical games?
Strategies
A method I've used before involves the AI selecting between a number of competing strategies (in the design pattern sense as well as the "game plan" sense).
It is usually possible to describe a variety of different strategies for an AI player. Just a few might be (see the sketch after this list):
- Patrol
- Watch
- Raise the alarm
- Take Cover
- Pursue
- Snipe
- Camp
- Flank
- Fire
- Blind-fire
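To make this concrete, here is a rough sketch of the shape this can take. The actor methods used - walk_towards(), next_waypoint(), nearest_cover(), at() and crouch() - are hypothetical stand-ins for whatever the game engine actually provides:
```python
from abc import ABC, abstractmethod


class Strategy(ABC):
    """One interchangeable behaviour an AI actor can adopt."""

    def __init__(self, actor):
        self.actor = actor

    @abstractmethod
    def update(self, dt):
        """Advance the strategy by dt seconds; must be cheap."""

    def finished(self):
        """True once the strategy has run its course (many never do)."""
        return False

    def successor(self):
        """A finished strategy may nominate what should follow it."""
        return None


class PatrolStrategy(Strategy):
    """Walk a route between waypoints."""

    def update(self, dt):
        self.actor.walk_towards(self.actor.next_waypoint(), dt)


class TakeCoverStrategy(Strategy):
    """Head for the nearest cover point and crouch behind it."""

    def update(self, dt):
        cover = self.actor.nearest_cover()
        if self.actor.at(cover):
            self.actor.crouch()
        else:
            self.actor.walk_towards(cover, dt)


class AIController:
    """Owns the current strategy and delegates each tick to it."""

    def __init__(self, actor):
        self.actor = actor
        self.strategy = PatrolStrategy(actor)

    def update(self, dt):
        self.strategy.update(dt)
```
Because every strategy sits behind the same tiny interface, swapping behaviour is just an assignment, e.g. controller.strategy = TakeCoverStrategy(actor).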
Even driving games might have AI strategies, to the extent that drivers can be said to be driving aggressively or defensively. Perhaps if the car is damaged, the driver might drive gingerly and seek out the pit lane.
Each strategy is intended to be simple, mechanistic, and easy to code. Strategies mustn't require a big timeslice to constantly re-evaluate the situation, because we want many AIs to be able to run at the same time. The "strategy" an AI has adopted may control many aspects of its behaviour - invariably the actions it takes, but perhaps also which animations are played and which phrases it says.
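Continuing the sketch, a single strategy can bundle the actions, animation and voice lines together, while keeping its per-tick work trivial. play_animation(), say(), can_see_player(), shoot_at() and player_position() are again hypothetical actor methods, and the animation name and barks are invented:
```python
import random


class SnipeStrategy(Strategy):
    """Hold a position and take occasional shots; cheap to update."""

    def __init__(self, actor):
        super().__init__(actor)
        # Adopting the strategy also sets the dressing: animation and bark.
        actor.play_animation('crouch_aim')
        actor.say(random.choice(["Target sighted.", "I have a shot."]))

    def update(self, dt):
        # No searching or planning per tick - just a quick check and a shot.
        if self.actor.can_see_player():
            self.actor.shoot_at(self.actor.player_position())
```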
Note that activities like pathfinding and puzzle-solving aren't strategies - though some strategies might invoke these methods.
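For example, a pursue strategy might call out to the pathfinder without being one itself. In this sketch, find_path() is a hypothetical query into the game's navigation code, and actor.position and target.position are assumed attributes:
```python
class PursueStrategy(Strategy):
    """Chase a target along a path supplied by the pathfinder."""

    def __init__(self, actor, target):
        super().__init__(actor)
        # Plan the route once up front rather than re-searching every tick.
        self.path = find_path(actor.position, target.position)

    def update(self, dt):
        if not self.path:
            return
        self.actor.walk_towards(self.path[0], dt)
        if self.actor.at(self.path[0]):
            self.path.pop(0)

    def finished(self):
        return not self.path
```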
Choosing which strategy to adopt
Some strategies - running to a point, for example - eventually finish, and the AI would then select a suitable successor strategy or reconsider.
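Such a strategy can report that it has finished and optionally nominate what should come next. A sketch, again using hypothetical actor methods:
```python
class RunToPointStrategy(Strategy):
    """Run to a destination, then hand over to a successor."""

    def __init__(self, actor, destination, then=None):
        super().__init__(actor)
        self.destination = destination
        self.then = then  # a strategy class to adopt on arrival, if any

    def update(self, dt):
        self.actor.run_towards(self.destination, dt)

    def finished(self):
        return self.actor.at(self.destination)

    def successor(self):
        # e.g. RunToPointStrategy(actor, cover_point, then=TakeCoverStrategy)
        return self.then(self.actor) if self.then else None
```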
However, every so often the strategy in use is reconsidered anyway, based on new tactical information - for example, the player hides, takes cover, or climbs a tree. This can be infrequent, because players will interpret any latency in reacting to the tactical situation as the human quality of reaction time (immediate reaction in a computer opponent is jarringly unnatural). An enemy that is alert may react sooner than an enemy that is taken by surprise.
It is important that strategies do not change willy-nilly. A new strategy should only be selected when there is either no tactical information or genuinely new tactical information; otherwise an AI that has been running in for the kill might bizarrely appear to stop and camp.
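Putting those rules together, the controller might only reconsider after a short, alertness-dependent delay, and only act on genuinely new information. In this sketch, actor.alertness, actor.assess_tactics() and the reaction times are all invented for illustration:
```python
class AIController:
    """Runs the current strategy, reconsidering it only occasionally."""

    # An alert enemy reacts sooner than one taken by surprise
    # (times in seconds, purely illustrative).
    REACTION_TIME = {'alert': 0.3, 'surprised': 1.5}

    def __init__(self, actor):
        self.actor = actor
        self.strategy = PatrolStrategy(actor)
        self.last_tactics = None
        self.reconsider_in = self.REACTION_TIME['surprised']

    def update(self, dt):
        self.strategy.update(dt)

        if self.strategy.finished():
            # A finished strategy may nominate what follows; otherwise
            # fall back to a full reconsideration.
            self.strategy = (self.strategy.successor()
                             or self.choose_strategy(self.last_tactics))

        # Reconsider only every so often - the delay reads as reaction time.
        self.reconsider_in -= dt
        if self.reconsider_in > 0:
            return
        self.reconsider_in = self.REACTION_TIME[self.actor.alertness]

        tactics = self.actor.assess_tactics()  # e.g. 'player in cover'
        if tactics != self.last_tactics:
            # Only genuinely new tactical information justifies a change.
            self.last_tactics = tactics
            self.strategy = self.choose_strategy(tactics)

    def choose_strategy(self, tactics):
        """Map the tactical picture to a strategy; game-specific."""
        raise NotImplementedError
```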
Ideally strategies will be sophisticated in their own right - something I dislike in computer games is when an enemy "patrols" by walking to a point, stopping, looking around, walking back, stopping, looking around, and repeating. In real life, people ordered to patrol are much less deterministic than this. They might sit in a good spot most of the time and occasionally take a random wander. They might look around and behind themselves more often, rather than staring vacantly ahead. So these strategies might be more granular - a guard who is, in general, patrolling might actually have several patrolling strategies that he swaps between.
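One way to express that is to make the patrol itself a parent strategy that swaps between finer-grained sub-strategies. HoldSpotStrategy, WanderStrategy and LookAroundStrategy below are hypothetical placeholders, and the weights are purely illustrative:
```python
import random


class PatrolStrategy(Strategy):
    """Patrol by swapping between finer-grained sub-strategies."""

    SUB_STRATEGIES = [
        (HoldSpotStrategy, 0.6),     # sit tight in a good spot
        (WanderStrategy, 0.25),      # take a random wander
        (LookAroundStrategy, 0.15),  # check around and behind
    ]

    def __init__(self, actor):
        super().__init__(actor)
        self._pick()

    def _pick(self):
        classes, weights = zip(*self.SUB_STRATEGIES)
        self.sub = random.choices(classes, weights)[0](self.actor)

    def update(self, dt):
        self.sub.update(dt)
        if self.sub.finished():
            self._pick()
```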
An enhancement might be for AI characters to introspect the strategies of those near them, or to call out the strategies they are adopting, and adapt their own choice of strategy accordingly. This would allow groups of AIs to work together.
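A sketch of what that could look like, filling in the choose_strategy() stub from the controller above: nearby_allies() and the controller attribute on each ally are hypothetical hooks, and FireStrategy and FlankStrategy are further strategy classes along the lines of those already shown:
```python
class AIController:
    ...  # rest of the controller as sketched above

    def choose_strategy(self, tactics):
        # If an ally nearby is already firing, flank instead, so the
        # pair behave like a team rather than two identical loners.
        allies = self.actor.nearby_allies()
        if any(isinstance(a.controller.strategy, FireStrategy)
               for a in allies):
            return FlankStrategy(self.actor)
        return FireStrategy(self.actor)
```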
Example Code
This technique was used in my Pyweek 10 game, Bamboo Warrior - the code for this is in aicontroller.py if you'd like to see an example (Warning - this is Pyweek code and is not as clear as it could be).