Adaptive AI Engine for RTS Games

Discussing the theory and practice

Archive for July, 2009

Message Based Systems

Posted by Ogail on July 30, 2009

Message Based Systems

  • Messaging is not a decision-making structure; it is a communication structure
  • Messaging systems help improve CPU utilization
  • Messaging serves as a secondary system that resides below the underlying decision structure of your game
  • Characteristics of AI systems that are good candidates for messaging systems:
    • AI-controlled characters are most often created to be reactive
    • The AI layer is high-level, so it communicates with many other engines in the game
  • Messaging System Framework:
    • Message Object
    • Message Pump
    • Client Handlers


  • Code Skeleton:
    • Message: stores the individual information required for a message
    • MessagePump: central message router; acts as the post office of the system
    • Client Handlers: run code to handle any given incoming message (a minimal sketch appears after this list)
  • Pros of messaging systems:
    • Optimizes client-side code, since states themselves don’t need to care about state transitions
    • Decoupling class-level communication improves the design
  • Cons:
    • Additional memory footprint
    • If polling is required, it is awkward to implement and adds overhead
  • Extension:
    • Message priority:
      • Take care to avoid starvation
    • Message arbitration:
      • Spot messages that can be fixed on the fly:
        • Message redundancy, message collisions, starving messages
    • Atomic and extended message types:
      • Periodic message
      • Debugging messages
      • Confirmation messages
      • Immediate messages
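To make the code skeleton above concrete, here is a minimal, hedged C++ sketch of a message pump with registered client handlers. The class names (Message, MessagePump, MessageType) and fields are illustrative assumptions, not the book’s exact code.

    // Hedged sketch: a central message pump that routes Message objects to
    // client handlers registered per message type.
    #include <functional>
    #include <map>
    #include <queue>
    #include <vector>

    enum class MessageType { Damage, PowerUpSpawned, EnemySpotted };

    struct Message {
        MessageType type;   // what happened
        int sender;         // id of the sending object
        int target;         // id of the intended receiver (-1 = broadcast)
    };

    class MessagePump {
    public:
        using Handler = std::function<void(const Message&)>;

        // Client handlers register interest in a message type.
        void Register(MessageType type, Handler handler) {
            m_handlers[type].push_back(std::move(handler));
        }

        // Senders post messages; the pump routes them like a post office.
        void Post(const Message& msg) { m_queue.push(msg); }

        // Called once per AI tick: drain the queue and dispatch to handlers.
        void Dispatch() {
            while (!m_queue.empty()) {
                Message msg = m_queue.front();
                m_queue.pop();
                for (auto& handler : m_handlers[msg.type]) handler(msg);
            }
        }

    private:
        std::queue<Message> m_queue;
        std::map<MessageType, std::vector<Handler>> m_handlers;
    };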

Posted in AI Game Engine Programming | 3 Comments »

Learning – Artificial Intelligence for Games – A Summary

Posted by merothehero on July 28, 2009

Learning

A summary of selected topics in the chapter

From the Book: Artificial Intelligence for Games – 2006

Chapter 7

Author: Ian Millington

I. Introduction

Online or Offline

  • The majority of learning is done offline (either between levels of the game or back at the development studio)
  • Online learning happens while the game is being played

Intra-Behavior Learning Algorithm

  • The simplest kind of learning; it changes a small part of the character’s behavior
  • Good examples: learning how to evade, learning to find cover points in a room
  • Limitation example: if a character keeps failing to jump over a wall, this kind of learning won’t tell it to use the stairs instead

Inter-Behavior Learning Algorithm

  • The character learns “new” things (not in a specific scope)
  • Example: A Character that learns to tie a rope across the street to stop an escaping motorbike
  • Unfortunately this kind of AI is almost pure fantasy.
  • It can be approximated by choosing between a range of different behaviors, but each distinct behavior still has to be implemented by the developer
  • It’s not economical to learn everything; basic movement systems, for example, are easier and faster to implement directly
  • Developers are experimenting with replacing the ordinary decision-making systems (FSMs, FuSMs, scripting, decision trees, and Markov systems) with learning techniques

Warning

  • The biggest problem with learning is reproducibility and quality control.
  • Learning the wrong thing is a big problem that studios have run into in practice
  • The more flexible your learning is, the less control you have over your gameplay
  • The normal solution is to limit the kinds of things that can be learned in a game

Over-Learning (Over-fitting)

  • Over-fitting happens when the AI learns only the specific situations or experiences it has seen, whereas we normally want it to be able to generalize what it learns.
  • Neural networks, for example, can over-fit during learning if they are wrongly parameterized or if the network is too large for the learning task at hand. (From my reading, over-fitting is often attributed to the training getting stuck at local maximum points.)

The Balance of Effort

  • Learning is done to decrease the implementation work, because you don’t need to handle every event or trick the character encounters; instead it will learn to handle them itself.
  • Unfortunately, learning often increases the implementation work, because it requires a lot of effort: presenting data in the correct way, making sure the results are valid, and testing the AI to avoid it learning the wrong things.
II. Parameter Modification

Introduction

  • Often not even considered a “learning” technique.
  • The simplest learning technique is to calculate the value of one or more parameters, such as cost functions for pathfinding or probabilities in decision making.
  • Most commonly, learning parameters like these is done offline, but it can usually be controlled when performed online.
  • For each value of the parameter there is some energy value (or fitness value) which represents how good that value of the parameter is for the game.
  • “Fitness landscape” is the term used when we’re looking for a maximum, while “energy landscape” is used when we’re looking for a minimum; they are just different terminologies for the same idea.

Getting the score out of the parameter values

  • If the score can be generated from some function or formula, and it’s a simple mathematical formula, then we can differentiate it and find the best values explicitly. In this case there’s no need for parameter optimization.
  • In most cases this formula doesn’t exist; the only way to find the value is to run the game, see how it performs, and produce a fitness or energy score for the parameters.
  • We should have a mechanism to determine the score for a set of parameters quickly, such as allowing the game to run at high speed without rendering to the screen, or using a set of heuristics that generate a score based on some assessment criteria without ever running the game. If there is no way to determine the score other than running the game with the player, then all the techniques in this chapter are unlikely to be practical.

Hill Climbing

  • An Algorithm that tries to work out in what direction to change the parameter in order to improve its score.
  • It often fails to find the optimal solution if the parameter scores are noisy (the landscape is bumpy).
  • Its performance is O(n) in time per step, where n is the number of parameters, and O(1) in memory. A minimal sketch follows.
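As a concrete illustration, here is a hedged C++ sketch of basic hill climbing over a parameter vector. The toy fitness function and step size are assumptions made for the example; in a game, the fitness would come from running or simulating a match with those parameters.

    #include <cstddef>
    #include <vector>

    // Toy fitness with a single peak at (2, -1); a real game would instead score
    // a played (or fast-simulated) match using the candidate parameters.
    double Fitness(const std::vector<double>& p) {
        double dx = p[0] - 2.0, dy = p[1] + 1.0;
        return -(dx * dx + dy * dy);
    }

    // Basic hill climbing: nudge each parameter up or down and keep any change
    // that improves the fitness score.
    std::vector<double> HillClimb(std::vector<double> params, double step, int iterations) {
        double best = Fitness(params);
        for (int i = 0; i < iterations; ++i) {
            for (std::size_t p = 0; p < params.size(); ++p) {
                const double deltas[2] = { +step, -step };
                for (double delta : deltas) {
                    params[p] += delta;
                    double score = Fitness(params);
                    if (score > best) { best = score; break; }  // keep the improvement
                    params[p] -= delta;                          // otherwise revert
                }
            }
        }
        return params;  // best parameters found (possibly only a local maximum)
    }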

Extensions for Hill Climbing

  • There are several extensions that help hill climbing cope with noisier score landscapes:
    • Momentum: if the search is consistently improving in one direction, it should continue in that direction for a little while, even if it seems that things aren’t improving anymore. It takes longer to reach the best parameter value, but it doesn’t get stuck as easily in a local maximum on its way to the peak.
    • Adaptive resolution: when a parameter is far from its best value, taking small steps is slow, but if the steps are too large the best value may never be reached. Adaptive resolution solves this by making long jumps early and small jumps later on: as long as the score keeps improving the steps grow, and when it stops improving the steps shrink.
    • Multiple trials: hill climbing is very dependent on the initial guess, so running the algorithm from several different starting locations will usually find the optimum after a number of trials. However, this extension is not suitable for online learning, because the player doesn’t expect the AI to suddenly get worse (which happens when hill climbing restarts from a fresh, possibly poor, initial value).
  • In most problems we can’t even recognize the best solution when we find it. For example, in an RTS game looking for the best split of resources between construction and research, even if we ran 200 trials we couldn’t guarantee that the best score had been reached; there’s no formula that tells us this is the best solution, so we must just choose the most satisfying solution found, even if it might not be the true optimum.

Annealing

  • Similar in idea to hill climbing; it will be covered later due to time constraints.
III. Action Prediction

N-Gram Predictors

  • N-Grams are used in various statistical techniques and are not limited to prediction; they also have applications in the analysis of human languages
  • A 3-Gram, for example, keeps track of the number of times each sequence of 3 choices is seen. For prediction, the first 2 choices form the window, and the probability of each third choice is the proportion of times that choice followed the window.
  • Example: in the string “LRRLRLLLRRLRLRR” the 3-Gram probabilities are:

    Window   P(R)   P(L)
    LL       1/2    1/2
    LR       3/5    2/5
    RL       3/4    1/4
    RR       0/2    2/2
  • Increasing the window size makes each prediction more specific, but past a point prediction accuracy drops, because more data is needed to cover the larger number of possible windows.
  • As the sequence of actions gets more random, the window size needs to be reduced.
  • In predictions with more than 2 choices, the minimum window size needs to be increased a little; very large window sizes, however, give much poorer prediction results.
  • There are mathematical models that can tell you how well an N-Gram predictor will predict a sequence, and they are used to tune the optimal window size.
  • A hash table could be used for better memory performance, especially for a large number of choices. A minimal sketch follows.
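Here is a hedged C++ sketch of an N-Gram predictor over a two-choice (“L”/“R”) action stream; the class name and API are illustrative assumptions. Feeding it “LRRLRLLLRRLRLRR” with N = 3 reproduces the counts in the table above.

    #include <map>
    #include <string>

    class NGramPredictor {
    public:
        explicit NGramPredictor(int n) : m_n(n) {}

        // Register one new action, updating the counts for the current window.
        void Register(char action) {
            if ((int)m_history.size() == m_n - 1)
                ++m_counts[m_history][action];            // window -> action count
            m_history.push_back(action);
            if ((int)m_history.size() > m_n - 1)
                m_history.erase(m_history.begin());        // slide the window
        }

        // How many times the current window has been seen (used for confidence).
        int WindowCount() const {
            auto it = m_counts.find(m_history);
            if (it == m_counts.end()) return 0;
            int total = 0;
            for (const auto& entry : it->second) total += entry.second;
            return total;
        }

        // Predict the action seen most often after the current window ('?' if unseen).
        char Predict() const {
            auto it = m_counts.find(m_history);
            if (it == m_counts.end()) return '?';
            char best = '?';
            int bestCount = 0;
            for (const auto& entry : it->second)
                if (entry.second > bestCount) { best = entry.first; bestCount = entry.second; }
            return best;
        }

    private:
        int m_n;                                            // the N in N-Gram
        std::string m_history;                              // last n-1 actions (the window)
        std::map<std::string, std::map<char, int>> m_counts;
    };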

Hierarchical N-Grams

  • Several N-Gram algorithms work in parallel, each with an increasingly large window size; a hierarchical 3-Gram has regular 1-Gram, 2-Gram, and 3-Gram algorithms working on the same data.
  • When the sequence “LRR” is passed to a hierarchical 3-Gram, it gets registered in the 3-Gram, the “RR” portion gets registered in the 2-Gram, and “R” gets registered in the 1-Gram.
  • When a prediction is requested, the algorithm first looks up the window actions in the 3-Gram; if there have been sufficient examples of that window, it uses the 3-Gram to generate its prediction. Otherwise it looks at the 2-Gram, and likewise the prediction may fall back to the 1-Gram. If none of the N-Grams have sufficient examples, the algorithm returns no prediction or a random prediction.
  • There is no single correct threshold value for the number of entries required for confidence; to some extent it needs to be found by trial and error. In online learning it’s common for the AI to make decisions based on very sketchy information, so the confidence threshold is small (3 or 4).
  • The algorithm is O(n) in memory and O(n) in time, where n is the highest-order N-Gram used.
  • The most widespread application of N-Grams is in combat games. A fallback sketch follows, building on the predictor above.
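Building on the NGramPredictor sketch above, this hedged snippet shows the fallback logic: try the highest-order N-Gram first and only fall back to lower orders when the current window hasn’t been seen often enough. The threshold value is an illustrative assumption.

    #include <vector>

    class HierarchicalNGram {
    public:
        HierarchicalNGram(int maxN, int confidenceThreshold)
            : m_threshold(confidenceThreshold) {
            for (int n = maxN; n >= 1; --n) m_levels.emplace_back(n);  // e.g. 3, 2, 1
        }

        void Register(char action) {
            for (auto& level : m_levels) level.Register(action);       // feed every level
        }

        char Predict() const {
            for (const auto& level : m_levels)          // highest-order N-Gram first
                if (level.WindowCount() >= m_threshold)
                    return level.Predict();
            return '?';                                  // no confident prediction available
        }

    private:
        int m_threshold;
        std::vector<NGramPredictor> m_levels;
    };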
IV. Decision Learning

Weak Vs Strong Supervision

  • In order to improve performance, we need to provide feedback to the learning algorithm. This feedback is called “supervision.”
  • Strong supervision takes the form of a set of correct answers; these correct answers are often provided by a human player. The developer may play the game for a while and have the AI watch. The AI keeps track of the sets of observations and the decisions that the human player makes. It can then learn to act in the same way.
  • Weak supervision doesn’t require a set of correct answers. Instead, some feedback is given as to how good its action choices are.
  • Strong supervision is easier to implement and get right, but it is less flexible: it requires somebody to teach the algorithm what is right and wrong. Weak supervision can learn right and wrong for itself, but is much more difficult to get right.
  • Each of the remaining learning algorithms in this chapter works with this kind of model. It has access to observations, and it returns a single action to take next. It is supervised either weakly or strongly.
V. Decision Tree Learning

ID3 Algorithm

  • An algorithm for generating decision trees from a set of example observation–action pairs
  • Example: The Input could be the following :

Healthy – In Cover – With Ammo: Attack

Hurt – In Cover – With Ammo: Attack

Healthy – In Cover – Empty: Defend

Hurt – In Cover – Empty: Defend

Hurt – Exposed – With Ammo: Defend

  • The output will be the following decision tree (sketched below, since the original figure isn’t reproduced here):
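This is a hedged reconstruction of the tree ID3 would build from the examples above, written as nested ifs. Ammo is the first split because it separates the examples best, and health never appears because it carries no information in this data set; the enum names are illustrative.

    enum class Health { Healthy, Hurt };
    enum class Cover  { InCover, Exposed };
    enum class Ammo   { WithAmmo, Empty };
    enum class Action { Attack, Defend };

    Action Decide(Health /*health*/, Cover cover, Ammo ammo) {
        if (ammo == Ammo::Empty)     return Action::Defend;  // both "Empty" examples defend
        if (cover == Cover::InCover) return Action::Attack;  // with ammo and in cover: attack
        return Action::Defend;                               // with ammo but exposed: defend
    }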

  • The details of the ID3 algorithm will be discussed later
VI. Reinforcement Learning

Introduction

  • Reinforcement learning is the name given to a range of techniques for learning based on experience.
  • In its most general form a reinforcement learning algorithm has three components: an exploration strategy for trying out different actions in the game, a reinforcement function that gives feedback on how good each action is, and a learning rule that links the two together. Each element has several different implementations and optimizations, depending on the application.
  • One reinforcement learning technique is Q-learning, which is a good starting point: it is easy to implement and can be tuned without a deep understanding of its theoretical properties.

Q-Learning – Introduction

  • Q-learning treats the game world as a state machine. At any point in time, the algorithm is in some state. The state should encode all the relevant details about the character’s environment and internal data.
  • In a game the states are made up of many factors: position, proximity of the enemy, health level, and so on. Q-learning doesn’t need to understand the components of a state. As far as the algorithm is concerned they can just be an integer value: the state number.
  • Q-learning is known as a model-free algorithm because it doesn’t try to build a model of how the world works; it simply treats everything as states. Algorithms that are not model-free try to reconstruct what is happening in the game from the states they visit. Model-free algorithms, such as Q-learning, tend to be significantly easier to implement.
  • These four elements—the start state, the action taken, the reinforcement value, and the resulting state—are called the experience tuple, often written as (s, a, r, s′).

How It Works

  • The experience tuple is split into two sections. The first two elements (the state and action) are used to look up a Q-value in the store. The second two elements (the reinforcement value and the new state) are used to update the Q-value based on how good the action was and how good it will be in the next state.
  • The update is handled by the Q-learning rule:

Q(s, a) = (1 − α) Q(s, a) + α (r + γ max Q(s′, a′))

Where α is the learning rate, and γ is the discount rate. Both are parameters of the algorithm.

  • The first component Q(s, a) is simply the current Q-value for the state and action.
  • The r value is the new reinforcement from the experience tuple.
  • The γ max Q(s′, a′) term looks at the new state from the experience tuple: it considers all possible actions that could be taken from that state and chooses the highest corresponding Q-value. This helps bring the success (i.e., the Q-value) of a later action back to earlier actions: if the next state is a good one, then this state should share some of its glory.
  • The discount parameter controls how much the Q-value of the current state and action depends on the Q-value of the state it leads to. A very high discount will be a large attraction to good states, and a very low discount will only give value to states that are near to success. Discount rates should be in the range [0, 1]. A value greater than 1 can lead to ever-growing Q-values, and the learning algorithm will never converge on the best solution.
  • So, in summary, the Q-value is a blend between its current value and a new value, which combines the reinforcement for the action and the quality of the state the action led to.

Exploration Strategy

  • It’s the policy for selecting which actions to take in any given state.
  • The basic Q-learning exploration strategy is partially random: most of the time, the algorithm selects the action with the highest Q-value from the current state; the remainder of the time, it selects a random action. The degree of randomness is controlled by a parameter. A minimal sketch of this strategy and the update rule follows.
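As a concrete illustration of the update rule and the partially random exploration strategy, here is a hedged C++ sketch of tabular Q-learning with epsilon-greedy action selection; the state/action encoding and parameter names are illustrative assumptions.

    #include <cstdlib>
    #include <vector>

    class QLearner {
    public:
        QLearner(int states, int actions, float alpha, float gamma, float epsilon)
            : m_actions(actions), m_alpha(alpha), m_gamma(gamma), m_epsilon(epsilon),
              m_q(states, std::vector<float>(actions, 0.0f)) {}

        // Exploration strategy: mostly pick the best-known action, sometimes a random one.
        int SelectAction(int state) const {
            if (static_cast<float>(std::rand()) / RAND_MAX < m_epsilon)
                return std::rand() % m_actions;          // explore
            return BestAction(state);                     // exploit
        }

        // Apply the Q-learning rule to one experience tuple (s, a, r, s').
        void Update(int s, int a, float r, int sNext) {
            float best = m_q[sNext][BestAction(sNext)];
            m_q[s][a] = (1.0f - m_alpha) * m_q[s][a] + m_alpha * (r + m_gamma * best);
        }

    private:
        int BestAction(int state) const {
            int best = 0;
            for (int a = 1; a < m_actions; ++a)
                if (m_q[state][a] > m_q[state][best]) best = a;
            return best;
        }

        int m_actions;
        float m_alpha, m_gamma, m_epsilon;
        std::vector<std::vector<float>> m_q;   // Q-value store indexed [state][action]
    };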

Convergence and Ending

  • If the problem always stays the same, and rewards are consistent (which they often aren’t if they rely on random events in the game), then the Q-values will eventually converge. Further running of the learning algorithm will not change any of the Q-values. At this point the algorithm has learned the problem completely. For very small toy problems this is achievable in a few thousand iterations, but in real problems it can take a vast number of iterations. In a practical application of Q-learning, there won’t be nearly enough time to reach convergence, so the Q-values will be used before they have settled down. It is common to begin acting under the influence of the learned values before learning is complete.

Performance

  • O(i) in time, where i is the number of iterations of learning
  • O(as) in memory, where a is the number of actions and s is the number of states, when arrays are used
  • Using a hash table decreases the memory requirement; however, it increases execution time

Tuning the Parameters

  • Will be discussed later

Choosing Rewards

  • Will be discussed later

Temporal-Difference learning

  • Will be discussed later

Applications and Limits

  • Reinforcement learning is most suitable for offline learning. It works well for problems with lots of different interacting components, such as optimizing the behavior of a group of characters or finding sequences of order-dependent actions. Its main strength is its ability to seamlessly handle uncertainty. This allows us to simplify the states exposed to it; we don’t have to tell the algorithm everything.
  • It is not suitable for problems where there is an easy way to see how close a solution is (we can use some kind of planning here), where there are too many states, or where the strategies that are successful change over time (i.e., it requires a good degree of stability to work).
  • It’s strong in board games and turn-based strategy games
  • It’s possible to use a neural network as the storage medium for Q-values, but the author is not aware of any developers who have used this combination successfully.
VII. Artificial Neural Networks

Introduction

  • Developers who have experimented with neural networks for large-scale behavior control have been left in no doubt of the approach’s weaknesses. The combined hype and disappointment has clouded the issue: AI-savvy hobbyists can’t understand why the industry isn’t using them more widely, and developers often see them as useless and a dead end.
  • It is a huge subject, full of different kinds of networks and learning algorithms specialized for very small sets of tasks. Very little neural network theory is applicable to games, however.
  • The focus will be on only one type of neural network, the multi-layer perceptron, with one particular learning rule, the backpropagation algorithm; the author chose it because it is the most common type. A minimal feedforward sketch follows.
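As a rough illustration of what a multi-layer perceptron computes, here is a hedged C++ sketch of a feedforward pass through one layer of perceptrons; the sigmoid activation and the layout are illustrative assumptions, and the backpropagation training step is not shown.

    #include <cmath>
    #include <vector>

    struct Layer {
        // weights[j][i] is the weight from input i to perceptron j; one bias per perceptron.
        std::vector<std::vector<float>> weights;
        std::vector<float> bias;

        std::vector<float> FeedForward(const std::vector<float>& inputs) const {
            std::vector<float> outputs(weights.size());
            for (std::size_t j = 0; j < weights.size(); ++j) {
                float sum = bias[j];
                for (std::size_t i = 0; i < inputs.size(); ++i)
                    sum += weights[j][i] * inputs[i];
                outputs[j] = 1.0f / (1.0f + std::exp(-sum));   // sigmoid activation
            }
            return outputs;
        }
    };
    // A multi-layer perceptron is just several Layers applied in sequence; the O(nw)
    // cost quoted in the Performance section below comes from n perceptrons with
    // w inputs each.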

Architecture

  • Will be discussed later

Algorithm

  • Will be discussed later

Implementation

  • Will be discussed later

Performance

  • O(nw) in memory, where n is the number of perceptrons and w is the number of inputs per perceptron.
  • In time, the performance is also O(nw).

Other Approaches: Radial Basis Function

  • Will be discussed later

Other Approaches: Weakly Supervised Learning

  • Will be discussed later

Other Approaches: Hebbian Learning

  • Will be discussed later

Posted in AI for Games | 2 Comments »

Location-Based Information Systems

Posted by Ogail on July 27, 2009

Location-Based Information Systems

 

  • LBI can be thought of as very specialized perception data, which can also include embedded logic or lower-level intelligence
  • Categories of discussing LBI:
    • Influence maps
    • Smart terrain
    • Terrain analysis
  • Influence maps (IM)
    • It refers to using a simple array of data where each element represents information about a specific world position
    • The accuracy of the IM is directly proportional to its memory size
    • Examples of uses:
      • Storing the locations of resources, so that when you want to expand your base you can choose the best direction for expansion and also know where to build defenses (used in high-level planning)
      • An IM that keeps the number of units that have been killed in each grid square; this can be used in pathfinding so that units stop taking paths through areas with a high mortality score
    • In complex systems it’s recommended to have a low-detail global IM and another high-detail local IM that represents a specific area of the game
    • High-level approach: game characters interact with each other by consulting some large list of game characters and determining whether interactions are required or necessary
      • This is not how people do things in the real world; people have limited senses and can only react to things in their immediate area
    • Keeping track of player locations in an IM allows for a more low-level approach to game character interaction
      • This adds a realistic feel to the AI characters in the game
    • In fact, the IM is the central repository through which the other two LBI techniques communicate with the rest of the game (a minimal occupance-map sketch follows this list)
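Here is a hedged C++ sketch of the simplest kind of influence map, an occupance map: a grid of counters indexed by world cell. The grid dimensions and the team-sign convention are illustrative assumptions.

    #include <algorithm>
    #include <vector>

    class OccupanceMap {
    public:
        OccupanceMap(int width, int height) : m_width(width), m_grid(width * height, 0) {}

        void Clear() { std::fill(m_grid.begin(), m_grid.end(), 0); }

        // Stamp a unit into a grid cell: +1 for player A units, -1 for player B units.
        void AddUnit(int x, int y, int teamSign) { m_grid[y * m_width + x] += teamSign; }

        // Query the localized population value for a cell.
        int At(int x, int y) const { return m_grid[y * m_width + x]; }

    private:
        int m_width;
        std::vector<int> m_grid;
    };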
  • Smart Terrain:
    • This notion was coined by the creator of The Sims
    • This technique places the logic and behavior data for how to use various world features and game objects in the objects themselves
    • This technique bundles all the interaction animations, sounds, and any other special data that a character would need to use the object, thus making the object self-contained
    • This makes adding new objects to the game much easier
  • Terrain Analysis (TA):
    • TA is a family of methods that use IM techniques to provide AI systems with increased strategic information about maps, especially randomly generated maps that haven’t had the benefit of ever being touched by a level designer
    • TA methods are best described as specialized pattern recognition within an IM data array
    • TA systems use neural nets or fuzzy logic systems that can algorithmically find patterns within the IM array
  • How these techniques are used:
    • Tracking various populations within the game
    • Keeping track of the number of specific game objects within a certain area

       

    • Simple occupance data can be used to:
      • Help with obstacle avoidance
      • Give rough estimates for various game perceptions
      • Support any task that requires quick access to localized population data
    • This technique is also used with fog-of-war line-of-sight systems, to uncover areas of the map that are visually within line of sight of any of your units
    • Ground Control:
      • The term IM historically refers to techniques derived from the field of thermodynamics and from field analysis in general
      • This can quickly determine which player has control over which part of the game map
      • The algorithm for doing this is as follows:
        • Zero out the entire map
        • Assign each grid square a value based on its team-specific control
          • i.e., a positive value for player A units and a negative value for player B units
        • For each square in the map, add up the values of its surrounding squares, scale the sum by some amount, and add that value to the square’s own value
        • Repeat this a few times to disperse the influence outward until you reach a stable state (a minimal sketch follows)
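A hedged C++ sketch of one dispersion pass of the ground-control algorithm above; the 4-neighbour spread and the scale factor are illustrative assumptions.

    #include <vector>

    // One relaxation pass: each cell absorbs a scaled sum of its neighbours' influence.
    void DisperseInfluence(std::vector<float>& grid, int width, int height, float scale) {
        std::vector<float> next = grid;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float sum = 0.0f;
                if (x > 0)          sum += grid[y * width + (x - 1)];
                if (x < width - 1)  sum += grid[y * width + (x + 1)];
                if (y > 0)          sum += grid[(y - 1) * width + x];
                if (y < height - 1) sum += grid[(y + 1) * width + x];
                next[y * width + x] += scale * sum;
            }
        }
        grid = next;
    }

    // Usage sketch: zero the map, stamp +1 for player A units and -1 for player B
    // units, then call DisperseInfluence(grid, w, h, 0.25f) a few times; squares
    // near zero afterwards mark contested ground between the two armies.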
    • Pathfinding System Helper Data:
      • When provided with additional information about a specific area, the pathfinder can smooth the solution through a tricky part of the map by taking a shortcut, or allow the AI-controlled character to use certain map features (like a ladder or a teleporter)
      • The provided data could be:
        • Passability (relative to terrain features – hills, cliffs – and terrain type – sea, land)
        • Which player currently controls the areas of the map you want to traverse
    • Danger Signification:
      • IMs are used to keep track of areas where bad things have happened over a period of time
      • Better yet, the AI-controlled side can respond to the danger area by sending an attack group to investigate what’s causing the danger there
    • Rough Battlefield Planning:
      • By using the ground-control method, you can look for areas with zero or near-zero values to find the places where armies have met and are fighting for control
      • From this you can know:
        • Each army’s position relative to the other
        • The relative sizes of each army
        • How to force the attack direction
        • The chances of winning, used to initiate charges or retreats
        • How to send reinforcements more intelligently
        • How to coordinate attacks on multiple fronts more clearly
    • Simple Terrain Analysis:
      • This includes somewhat more mathematical determinations, such as cover, visibility, and height factors
      • Cover: how much a given position is open to attack from a given angle
      • Visibility: lighting concerns and line-of-sight issues
      • The best areas of cover become sniping spots
      • Areas with low visibility might be used as sneaky backdoors to other map areas
    • Advanced Terrain Analysis:
      • In RTS games, finding choke points adds the ability to set up ambushes there
      • IMs are used to determine the best places to build a town and defensive towers
      • Towns should be built with more preplanning, such as: keeping crowding under control, maximizing future growth, allowing maintenance
      • Building the base against impassable terrain removes lines of attack
      • Building walls to redirect units through a kill zone (illustrated with a picture in the original post)


 

  • Determining important map areas using IM (powerup location density, cover info, choke points)
  • Implemented IM types:
    • Occupance-based IM: tracks where a given game object is in the map
    • Control-based IM: shows areas of control around each game object
    • Bit-wise IM: splits each IM element’s value into bitwise data components
    • Danger measurement
    • Providing pathfinding for more complex interactive objects
    • Can help when searching for objects
    • Keeping track of the asteroids’ movement to predict their next positions and shoot them down
  • A “dirty rectangles” scheme, borrowed from graphics drawing, can be used to update only the parts of the IM that changed
  • Tip:
    • SmartObjects: each object in the game knows how to interact with the ship via an Interact() method, so adding new objects is easy
  • Pros of LBI:
    • LBI tends to simplify the perception search space by lowering the resolution of data that the AI needs when making decisions
    • Easy to debug and develop
    • Generic interface
    • Centralizing AI data
  • Cons of LBI:
    • Expensive in size (memory)
    • TA is performance expensive
  • Optimizations:
    • Using dirty-rectangle updates in the occupance IM
    • Multilevel IMs:
      • IMs that have several levels, where every higher level requires additional performance but supplies more detail
  • Design Considerations:
    • Types of Solutions: LBI is suitable for the tactical and strategic levels
    • Agent Reactivity:

      • It makes your agents more proactive
      • Keep in mind that LBI systems are secondary systems, so agent reactivity depends on the primary AI technique
    • System Realism: adds realism to the game
    • Genre and Platform: RTS games are the main genre for these techniques

    • Development Limitations: allows a modular implementation

Posted in AI Game Engine Programming | Leave a Comment »

Scripting Systems

Posted by Ogail on July 23, 2009

Scripting Systems

 

  • The previous techniques are coded by programmers; so how can a gamer add something new?
  • Scripting: using simplified programming language to build AI elements, logic or behavior
  • Scripting Languages could be:
    • Very Simple (predicate language): contains general rules with a keyword and value
    • Full-fledged: e.g., Neverwinter Nights uses a Java-based language for its scripting
  • Game designers/end users use the scripting system
  • Design issues in creating scripting systems:
    • The kinds of functionality that the language will require
      • Example in shooter game:
        • Simple script: defines where enemies will spawn and their parameters
        • More complex: defines how enemies move
        • More complex: creates new enemies
          • The scripter can define:
            • A set of attributes: body type, speed, duration, armor level, etc.
            • A set of behaviors: attack method, movement style and pattern, etc.
      • The system could be implemented so that it exposes several values, and the game side determines which values to use in response to the current player state
    • Will the runtime engine interpret the scripts, or will they be precompiled into some kind of bytecode?
      • Interpreted scripts:
        • tend to be slower and larger
        • are flexible: you can enter a new script from an in-game console
        • are simpler to implement
      • CPU power will be an important factor in making this choice
    • How big will the scripts be?
      • Take care of the amount of memory assigned to them
      • Also, if the scripts are to be loaded while the game is running, take care of the memory bandwidth you are dealing with
    • Who will be using the scripting language?
  • Scripting System Components:
    • Parser: loads the script files and breaks them into known tokens
    • Tokens:
      • Token name
      • Execute(): invoked by the parser when it finds the token
        • Executing involves scanning the file for any additional parameters the token expects and then sending a game message with this data
    • In-Game Callbacks:
      • Functions that respond to the messages sent by a token’s Execute()
      • This is where you actually bind real game variables to the incoming script data
    • Script files
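To make the components above concrete, here is a hedged C++ sketch of a whole-word token parser with registered in-game callbacks; the token names, the single-parameter convention, and the callback mechanism are illustrative assumptions, not the book’s exact code.

    #include <functional>
    #include <map>
    #include <sstream>
    #include <string>

    class ScriptParser {
    public:
        using Callback = std::function<void(const std::string& param)>;

        // Register a token name together with the in-game callback that handles it.
        void RegisterToken(const std::string& name, Callback cb) {
            m_tokens[name] = std::move(cb);
        }

        // Scan the script; when a known token is found, read its parameter and
        // invoke the callback (standing in for "sending a game message").
        void Parse(const std::string& script) {
            std::istringstream in(script);
            std::string word;
            while (in >> word) {
                auto it = m_tokens.find(word);
                if (it != m_tokens.end()) {
                    std::string param;
                    in >> param;              // one parameter per token, for simplicity
                    it->second(param);
                }
            }
        }

    private:
        std::map<std::string, Callback> m_tokens;
    };

    // Usage sketch (hypothetical token): parser.RegisterToken("SpawnEnemy",
    // [](const std::string& type) { /* bind to real game data here */ });
    // parser.Parse(fileContents);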
  • Implementation Notes (Simple scripting language):
    • To initialize the system, you register the parser with all tokens
    • Problems to consider:
      • Capital vs. small letters (case sensitivity)
      • The substring problem (e.g., matching “out” inside “Shout”)

 

  • Performance of the AI with this system:
    • Startup will be somewhat delayed because of the script loading operation
  • Additions:
    • Comments
    • Parameterized tokens
    • Block signifier (recursive tokens)
  • Advantages of the Lua programming language:
    • Fast, with a small memory footprint, and easy to learn
    • Procedural syntax with dynamic types
    • It can be interpreted from source or run from compiled bytecode
    • Has automatic memory management
    • Has easy to use APIs
  • Some language basic features:
    • Very simple scoping
    • Dynamic Typing
    • Tables
    • Control Structures
    • The Stack
  • Lua Language Fundamentals
  • Pros of Scripting Systems:
    • Rapid Prototyping:
      • Main concepts are held in scripts, so advanced logic can be added easily
    • Increased Accessibility:
      • Many game designers and end users can alter logic easily
    • Speed of Tuning:
    • Provides a means of user extensibility:
      • Providing end users with the tools used to create the scripts adds extensibility
    • Scales Well:
      • As the number of things under control of the scripting system increases, the real power of scripting makes itself known
  • Cons of Scripting Systems:
    • Speed of Execution:
      • Any interpreted system is going to be slower than programs that have been compiled into native code
      • Bytecode has the drawback of being unreadable, but it is faster
      • You can develop functions that require speed in the compiled language while using script to develop the parts of the game that aren’t as performance sensitive
    • Debugging Can be Difficult:
      • Custom scripting languages don’t have the usual level of debugging tools:
        • Dedicated debuggers, profilers, asserts, internal error checking, syntax checking
        • Non-technical people who write scripts tend to lack some “common sense” debugging techniques; these include binary bug hunting (disabling half the script at a time to isolate a problem)
    • Looking Scripted:
      • When players play the game a lot, your scripted behaviors get discovered!
    • The Question of How Much Power:
      • The answer lies in the type of game you are working on, the level of functionality you need in your scripts, and the level of organization and control you want over the content in your game
    • You are now maintaining Two Systems!
    • Too much functionality in the scripting system can be a problem as well
  • Extensions to this method:
    • Completely Custom Languages:
      • Creating a custom scripting system using the “Lex & Yacc” route
        • Create a lexer specification with Lex and a grammar file for Yacc; Yacc then generates the necessary C code to parse files using the specified grammar
    • Built-In Debugging Tool:
    • Smart IDE for Writing Scripts
    • Automatic Integration With The Game
      • Ways to implement this:
        • Recognize that a change happened and rebuild the whole project to take the changes into account
        • Add a request system, so that any time a new widget is requested it is added to a special list; the programmer can then check this list, find out what functionality the scripters require, and implement it
    • Self-Modifying Scripts:
      • Scripts could keep track of behaviors that work and don’t work
      • The system could append additional rules onto its scripts (or remove them) as the consequence of some event during the game
      • Genetic Algorithms:
        • which strive to use genetically derived methods to perfect algorithms for finding solutions to problems
      • Genetic Programming:
        • Deals with using genetic methods to actually write whole programs to solve problems
      • Here the system is searching for the ideal script in which to perform its job instead of the ideal parameters to use within script
      • “To the victor go the spoils”
  • Optimizations:
    • Performance of scripting system relies on:
      • Level of its use within the game
      • Functionality of scripting language
      • How structured your overall system is
  • Design Considerations:
    • Types of Solutions: it’s preferred for the strategic level and can work at the tactical level as well
    • Agent Reactivity: gives you a vast range of reactivity
    • System Realism:
      • Well-designed scripted systems have the ability to perform realistic, unique behavior

Posted in AI Game Engine Programming | Leave a Comment »

Fuzzy State Machines (FuSMs)

Posted by Ogail on July 19, 2009

Fuzzy State Machines (FuSMs)

 

  • FuSMs are based on fuzzy logic but don’t really implement true fuzzy logic
  • They are useful for systems that could be in more than one state at the same time
  • Similarities between FSMs and FuSMs (related approaches):
    • FSMs with prioritized transitions:
      • Calculate an activation level for each state and go to the state with the highest activation
      • This adds fuzziness to the FSM
    • Probabilistic FSMs:
      • Probabilities are assigned to each transition and can then be changed during the game
    • Markov Models:
      • Transition logic is completely probability based
    • Actual Fuzzy Logic Systems:
      • When there are many fuzzy variables, a problem called combinatorial explosion arises
        • This problem can be mitigated using the Combs method, but accuracy is decreased
  • Example of fuzzy problem in games: shooting while moving!

 


 

  • Some states should be computed as digital (on/off) rather than fuzzy (like an attack state), so some states are calculated in a fuzzy fashion and others in a digital fashion
  • Structure of FuSM:
    • FuSMState Class: Enter(), Exit(), Update(), CalculateActivation()
    • FuSMMachine Class
    • FuSMAIControl Class
  • The states get perception variables from the FuSMAIControl class to calculate their activation levels
  • Activation level: a measure of how fully the state needs to be active in response to the perceptions
  • In bigger games it’s recommended to split the perception system out of the control class into a separate system
  • Each fuzzy state has no information about the other states; each state is only concerned with the perception checks that directly deal with itself
  • m_powerUpNear is not needed in the FuSM, but it is necessary in the FSM so it can fire the GetPowerUp state (a minimal update-loop sketch follows)
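A hedged C++ sketch of the FuSM update loop implied by the structure above: every state computes its own activation from the perception data, and every state with a non-zero activation runs. The method signatures are illustrative assumptions.

    #include <vector>

    class FuSMState {
    public:
        virtual ~FuSMState() = default;
        virtual float CalculateActivation() = 0;      // 0 = inactive, 1 = fully active
        virtual void  Enter()  {}
        virtual void  Exit()   {}
        virtual void  Update(float activation) = 0;   // scale behavior by activation
    };

    class FuSMMachine {
    public:
        void AddState(FuSMState* state) { m_states.push_back(state); }

        // Unlike an FSM, several states can be active at once, each at its own level.
        void UpdateMachine() {
            for (FuSMState* state : m_states) {
                float activation = state->CalculateActivation();
                if (activation > 0.0f) state->Update(activation);
            }
        }

    private:
        std::vector<FuSMState*> m_states;
    };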


 

  • Pros of FuSM Systems:
    • Used in parallel independent systems
    • No transition events between states
    • Adding a new state is easier, because each state is only concerned with itself!
    • Allows a wider range of behavioral personality to be exhibited by the AI system
    • No oscillation problem
  • Cons of FuSM Systems:
    • FuSMs are not as general a problem solver as FSMs
    • A bad FuSM can still lead to the oscillation problem; this can be solved by making totally opposite states in your machine mutually exclusive in their activation
      • Such states should not be strongly activated at the same time
  • Extensions to this paradigm:
    • FuSMs with a limited number of concurrently active states
      • Fuzzy states can be tagged with subtypes, and the highest-priority state wins for that particular subtype category
    • An FuSM used as a support system for a character:
      • Used in facial expressions (333)
    • An FuSM used as a single state in large FSM
    • Hierarchical FuSMs:
      • Categorizing FuSMs into subtypes
      • An FSM where each state is a FuSM; this becomes in effect a fuzzy system that can switch out its entire fuzzy state set based on game events or perception changes
    • Data-Driven FuSMs
  • Design Considerations:
    • Types of Solutions:
      • Used in strategic and tactical types
      • Could be used in RTS games to combine the output of several fuzzy states such as resource gathering, diplomacy, combat, and defense
    • Agent Reactivity: very reactive
    • System Realism:
      • Seems realistic
      • By adjusting the current behavior rather than completely changing it, this adds realism to the game AI
    • Genre:
      • FuSMs could be used as the perception system of a game
    • Platform: there are no restrictions at all!
    • Development Limitations:
      • Easy to understand and implement, but there’s no support for dynamic production

    • Entertainment Limitations: flexible and able to meet the entertainment requirements

Posted in AI Game Engine Programming | 9 Comments »

Finite State Machines (FSM)

Posted by Ogail on July 17, 2009

Finite State Machines (FSM)

  • FSM is a data structure used in AI contexts
  • Contents of a state machine: machine states, several input conditions, and a transition function
  • How transition checking is done in the classic model (271)
  • Most game FSMs are coded using the Moore model (272)
    • Where you put your actions inside the states instead of on the transitions between states (Sit/Stand Up example)
    • Moore model (actions in states) vs. the Mealy model (actions on transitions)
  • PAC-MAN FSM Example:


  • FSM Skeletal Code:
    • FSMState class: the basic state in the system
    • FSMMachine class: houses all states and acts as state machine
    • FSMAIControl class: houses state machine and game specific code (such as perception data)
  • The FSMState class:
    • Each state should have following functions:
      • Enter(): always run when you enter the state, to do initializations
      • Exit(): run when leaving the state, to clean up and run any code on the way out (Mealy style)
      • Update(): the main function, called on every AI processing tick (Moore style)
      • Init(): resets the state
      • CheckTransition(): returns the enumeration value of the next state to run (a skeletal sketch follows this list)
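Here is a hedged skeletal C++ sketch of the FSMState and FSMMachine classes described above; the state enumeration and the update/transition flow are illustrative assumptions, not the book’s exact code.

    #include <map>

    enum class StateID { None, Idle, Attack, Flee };

    class FSMState {
    public:
        explicit FSMState(StateID id) : m_id(id) {}
        virtual ~FSMState() = default;
        StateID Id() const { return m_id; }

        virtual void    Enter()  {}                  // initialization on state entry
        virtual void    Exit()   {}                  // cleanup when leaving the state
        virtual void    Update() = 0;                // main per-tick behavior (Moore style)
        virtual StateID CheckTransition() = 0;       // which state should run next

    private:
        StateID m_id;
    };

    class FSMMachine {
    public:
        void AddState(FSMState* state) { m_states[state->Id()] = state; }
        void SetCurrent(StateID id)    { m_current = m_states[id]; m_current->Enter(); }

        // Run the current state, then switch if its transition check asks for it.
        void UpdateMachine() {
            if (!m_current) return;
            m_current->Update();
            StateID next = m_current->CheckTransition();
            if (next != m_current->Id()) {
                m_current->Exit();
                SetCurrent(next);
            }
        }

    private:
        std::map<StateID, FSMState*> m_states;
        FSMState* m_current = nullptr;
    };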
  • FSM Pros:
    • Easy to implement and debug; keeps the code centralized
  • FSM Cons:
    • State oscillation


  • Extensions to FSM:
    • Hierarchical FSMs:
      • An FSM whose state contains another FSM; such a state is called a “super state”
    • Message and Event-Based FSMs:
      • Using events as triggers rather than a polling model
    • FSMs with Fuzzy Transitions:
      • Adding more calculations than simple comparisons to the state transitions
      • Most of the transition computations are done in the state itself rather than in the control class
    • Stack-Based FSMs:
      • Use a stack rather than a single current state, so that you can interrupt the system and come back to the previous state when the interruption is done (Attack/Take Cover example)
    • Multiple Concurrent FSMs:
      • Multiple FSMs across multiple characters
      • Multiple FSMs for the same character
      • This is handled by a manager (which could be a general FSM) that coordinates between the FSMs
      • RTS game example: an AI opponent could need separate decision state machines for resource management, research, combat, and so on. These FSMs might communicate through an observer of some kind (which could be an FSM that uses the output of these FSMs as transition conditions)
      • Drawbacks of this approach: oscillation and data overwriting
    • Data-Driven FSM:
      • Motivation: adding new AI by nonprogrammers
      • Techniques to do this:
        • Scripted FSMs: easy for programmers, hard for designers
        • Visual editor: easy for designers, hard for programmers
    • Inertial FSM:
      • If a state has been activated, it stays activated for some time, or new states have to overcome some inertia before they can fire in the first place. This can be done in the states themselves or in the perceptions that fire the states
  • Optimizations to FSM:
    • Load Balancing Both FSM and Perceptions
      • Refers to spreading the computation over time to lessen the immediate load on the processor. This is done by:
        • Making the system run at a scheduled rate (e.g., every 2 seconds)
        • Having a system that gives better results when given more time
    • Level of Detail (LOD) AI Systems:
      • Ignoring some AI details when they are not needed
      • Teleport example (Start 304)
    • Shared Data Structures:
      • If there is a computation that is done by many states, it can be computed once and then sent to the shared control, so any other state can take the result without re-computing it
      • Blackboard paradigm (End 304)

You can separate the state-based portion of your code from the AI code

Posted in AI Game Engine Programming | 2 Comments »

Adaptive AI Engine for RTS Games

Posted by Ogail on July 15, 2009

  • Technical words that need explanation:
    • AI Opponent: the computer player that plays against the human
    • Human Opponent: the human player playing against the computer
    • RTS Game: Real-Time Strategy game; that’s the technical name for strategy games (such as Red Alert)
  • Preface:
    • This project is in the field of Artificial Intelligence, specifically in the Machine Learning area
    • A minor goal of the project is to make the machine (the computer) able to predict the human’s future decisions
    • The project aims to provide this facility (prediction) to the AI opponent in strategy games
  • What problems do we want to solve:
    • The AI opponent’s weaknesses are easily detected
      • In most commercial complex major games, human players only need a couple of games to spot the opponent’s weaknesses and to exploit them in upcoming games. This results in an ineffective, boring game
    • The AI opponent falls into the same trap repeatedly
    • Most AI opponents don’t know the answers to questions like:
      • Does the AI opponent know the safe map locations for getting out of kill zones?
      • Does the AI opponent know whether the human opponent rushes?
      • Does the human opponent rely on units that require certain resources?
      • Does the human opponent frequently build a number of critical structures in a poorly defended place?
      • Are the human player’s attacks balanced or not?
  • Why this project is useful?
    • Currently there are automated machines (robots) that are used in wars instead of humans. An army commander could use the AI engine to:
      • Test his plans
      • Share other commanders’ experience with the plan
      • Discover flaws in the plan and resolve them
    • The problems considered here have not been solved to date in commercial RTS games (according to Alex Champandard)
  • Other Issues:
  • What’s the expected output from the project?
    • AI Library for RTS Games:
      • A commercial library that is able to compete in the Game AI Industry
    • 2 Publications:
      • Publish 2 papers that summarize the new things we’ve added
  • How to test the AI Engine:
    • By solving the previous problems, then playing a game against the computer and verifying that it is able to learn
  • Motivations:
    • We are interested in this project for the following reasons:
      • AI is our area of interest in computer science
      • In the future, we are planning to work in the game development industry, especially on AI engines

      • We love playing strategy games; developing this project will be interesting and fun!
      • We wish to add something new to this world

Posted in Orientation | Leave a Comment »