Developing computer opponents for turn-based games was one of the earliest applications of artificial intelligence (AI) by computer scientists. Many of these early computer opponents make use of a static evaluation function that takes a proposed game state as input and returns a numeric value indicating how favorable that position is.
By applying the static evaluation function to each new game state that is reachable from the current game state, the computer opponent can select the alternative with the best numeric value. The performance of computer opponents can be improved by looking ahead two or more moves before applying the static evaluation function.
An algorithm like minimax is used to combine the results of the “look ahead” moves (recognizing the fact that each player will be looking for the best move possible). Another task confronting opponents in combat or sports games is path finding through the game objects displayed on the screen.
Nothing exposes a computer opponent's weaknesses faster than watching it take the long way around an obstacle when an obviously shorter path is available. Heuristic search techniques like A* (pronounced A star) can be used to avoid time-consuming trial-and-error searching for paths between game positions.
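A minimal A* sketch on a 4-connected grid, assuming 0 marks open cells and 1 marks obstacles, with Manhattan distance as the admissible heuristic:

```python
import heapq

def a_star(grid, start, goal):
    """Return a shortest open-cell path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    frontier = [(h(start), 0, start, [start])]  # (f, g, position, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((r, c)), g + 1,
                                (r, c), path + [(r, c)]))
    return None  # no route exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (0, 2))  # routes around the wall of 1s
```

The heuristic steers the search toward the goal, so far fewer positions are expanded than a blind trial-and-error search would visit.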
Sometimes the positions of objects for a particular game state are known in advance and all routes can be precomputed and stored in the game database. This is not an AI technique, but it definitely is one way to speed up game play.
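When the obstacles never move, the precomputation idea reduces to a simple lookup table; the waypoints below are invented for illustration:

```python
# Routes computed once, offline, for a map whose obstacles are fixed.
PRECOMPUTED_ROUTES = {
    # (start, goal) -> list of waypoints
    ((0, 0), (0, 2)): [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)],
}

def route(start, goal):
    """O(1) lookup replaces a run-time path search."""
    return PRECOMPUTED_ROUTES.get((start, goal))
```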
Pursuit of opponents and running away from opponents can often be handled by heuristic techniques based on the distances between player pieces on the game map. If the distance is getting bigger, a pursuit heuristic might be to alter direction to make the distance smaller.
If the distance is getting smaller, an evasion heuristic might call for a computer-controlled object to do whatever is needed to increase the distance from the pursuing piece. This type of heuristic is fairly effective for simple objects such as rocks or missiles.
For slightly more intelligent objects (such as birds or spaceships), some random behavior might be added every few moves to make the computer-controlled pieces seem more realistic. Movement patterns might also be used to govern the behavior of patrolling guards or prowling animals.
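The distance-based heuristics above can be sketched as follows, assuming pieces move one unit per turn on an open 2-D map; the `jitter_every` parameter is an invented knob for the occasional random behavior:

```python
import random

def pursue(hunter, target, turn, jitter_every=5):
    """Step the hunter one unit toward the target, with a random
    sidestep every few turns to look less mechanical."""
    if turn % jitter_every == 0:
        return (hunter[0] + random.choice((-1, 0, 1)),
                hunter[1] + random.choice((-1, 0, 1)))
    dx = (target[0] > hunter[0]) - (target[0] < hunter[0])  # sign of the gap
    dy = (target[1] > hunter[1]) - (target[1] < hunter[1])
    return (hunter[0] + dx, hunter[1] + dy)

def evade(runner, hunter):
    """Step the runner one unit directly away from the hunter."""
    dx = (runner[0] > hunter[0]) - (runner[0] < hunter[0])
    dy = (runner[1] > hunter[1]) - (runner[1] < hunter[1])
    return (runner[0] + dx, runner[1] + dy)
```

A simple rock or missile would use only the deterministic branch; the random sidestep is the kind of touch that makes a bird or spaceship seem less mechanical.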
For important game elements, such as computer-controlled characters in role-playing games or interactive fiction, more advanced AI techniques might be used. Finite-state machines or production systems are good for modeling event-driven computing systems.
Finite-state machines only require knowledge of the current game state and the most recent user input or game event to determine the next state of computation. This requires careful analysis of the system states and their relationships, but it can be a very effective way to make a computer-controlled character appear to react intelligently to each game situation.
Finite-state machines can be represented graphically or by using state transition tables. One way to avoid repetitive behavior is to give a computer-controlled character some means of remembering previous actions and varying its behavior from time to time.
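A minimal finite-state machine sketch for a patrolling guard, using a state transition table; the state and event names ("patrol", "noise", and so on) are invented here for illustration:

```python
# The table maps (current state, event) -> next state.
TRANSITIONS = {
    ("patrol",      "noise"):   "investigate",
    ("patrol",      "spotted"): "attack",
    ("investigate", "spotted"): "attack",
    ("investigate", "lost"):    "patrol",
    ("attack",      "lost"):    "investigate",
}

def next_state(state, event):
    """Only the current state and latest event determine the next state;
    unrecognized (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ("noise", "spotted", "lost"):
    state = next_state(state, event)
# patrol -> investigate -> attack -> investigate
```

The table is a direct transcription of the graphical diagram: one row per arrow, which makes the character's full behavior easy to audit for dead ends or missing reactions.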
Many people would insist that an intelligent computer system should be capable of learning from its mistakes. Neural networks and genetic algorithms used in computer games allow computer opponents to improve their performance based on past game experiences. Even early checkers-playing programs made use of numerical learning techniques similar to these approaches to improve game play.
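The checkers-style numerical learning mentioned above can be sketched as simple weight tuning: the static evaluation is a weighted sum of board features, and the weights are nudged after each game toward values that would have predicted the actual outcome. The feature names and update rule here are illustrative assumptions, not a specific published method:

```python
def evaluate(features, weights):
    """Static evaluation as a weighted sum of board features."""
    return sum(f * w for f, w in zip(features, weights))

def update_weights(weights, features, predicted, outcome, rate=0.01):
    """Shift each weight to reduce the gap between the predicted score
    and the actual game result (+1 for a win, -1 for a loss)."""
    error = outcome - predicted
    return [w + rate * error * f for w, f in zip(weights, features)]

weights = [0.5, 0.5]      # e.g. piece count, mobility (illustrative)
features = [2.0, 1.0]     # feature values of a position from a lost game
predicted = evaluate(features, weights)   # 0.5*2 + 0.5*1 = 1.5
weights = update_weights(weights, features, predicted, outcome=-1.0)
# the loss pulls the weights down, so the same position now scores lower
```

Repeating this over many games gradually shapes the evaluation function, which is the spirit of the learning used in early checkers programs; neural networks and genetic algorithms generalize the same idea with richer representations and search over the weight space.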