Adaptive AI Engine for RTS Games

Discussing the theory and practice

Archive for the ‘AI Game Engine Programming’ Category

Summary of AI Game Engine Programming

On-line Planning for Resource Production in Real-Time Strategy Games - Part 2

Posted by ferasferas on November 2, 2010

The RTS Resource Production Domain

Human players typically have no difficulty selecting actions that at least achieve a particular set of resource goals. However, it is much more difficult, and requires much more expertise, to find close to minimal makespan plans.

As an example, consider the seemingly simple problem of collecting a large amount of gold starting with a single peasant and “townhall”. One could simply collect gold repeatedly with the single peasant, which would eventually achieve the goal, but the resulting plan is far from minimal: a player can instead choose to execute one or more actions from the game’s action set. Each action produces a certain amount of products, but also consumes certain amounts of other resources, and requires that some preconditions are met before it can be executed.

Actions are durative: they take a certain amount of time to finish, upon which their products are added to the game state.

In RTS games, resource-production actions are usually deterministic, and the preconditions, effects, and durations of each action are usually given or can be easily discovered through game-play. For certain actions, where a unit has to travel to a destination for an action to take place, the duration of the action will vary due to the spatial properties of the game map. However, for simplicity we assume a constant duration for each instance of an action. Averaged over the many actions taken during a game, this turns out to be a reasonable assumption.

Game State Representation at time t :

(1) For each resource Ri, the amount ri possessed by the agent.

(2) A list of actions Ai, i = 1, . . . , m, currently being executed, along with the start and end times “ts” and “te” for each action (ts < t < te).

Projected Game State :

The projected game state, denoted Proj(S), is the state in which all actions currently executing in S have terminated. It is timestamped with t = the maximum te over all executing actions, has resources updated according to the effects of the actions Ai, and has no actions being executed.

Goal :

The player must reach a certain resource goal, G = {R1 ≥ g1 , . . . , Rn ≥ gn }, defined as constraints on the resources, from the current game state. In essence, the player must determine a plan, which is a list of actions ((A1 , t1s , t1e ), . . . , (Ak , tks , tke )), where Ai is an action that starts at time tis and ends at time tie.
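The state, projection, and goal test above can be sketched in Python. This is a minimal illustration, not the paper's implementation; the class and field names are my own.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutingAction:
    name: str
    t_start: float
    t_end: float
    effects: dict          # resource -> amount added when the action completes

@dataclass
class GameState:
    t: float
    resources: dict        # Ri -> ri, amounts currently possessed
    executing: list = field(default_factory=list)   # ExecutingAction list

def project(state: GameState) -> GameState:
    """Proj(S): advance to the time all executing actions have finished,
    applying each action's end effects to the resource totals."""
    resources = dict(state.resources)
    t = state.t
    for a in state.executing:
        t = max(t, a.t_end)
        for r, amount in a.effects.items():
            resources[r] = resources.get(r, 0) + amount
    return GameState(t=t, resources=resources, executing=[])

def goal_reached(state: GameState, goal: dict) -> bool:
    """G = {R1 >= g1, ..., Rn >= gn}: every constraint must hold."""
    return all(state.resources.get(r, 0) >= g for r, g in goal.items())
```

For example, a state with 100 gold and one in-flight collect-gold action producing 100 more projects to a state with 200 gold, timestamped at that action's end time.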


Four Resource Tags

Require: An action requires a certain amount of a resource if it needs to be present throughout the execution of the action.

For example, the collect-gold action requires the presence of the townhall. In this case, the same townhall can be used by concurrent collect-gold actions, since the townhall is not “locked up” by them. Thus, the require tag allows for sharing of resources.


Borrow: An action borrows a certain amount of a resource if it requires that the resource amount be “locked up” throughout the execution of the action, so that no other action is allowed to borrow those resources during its execution. After the action has completed, the resource amount is freed up for use by other actions.

For example, the collect-gold action borrows a peasant. During the execution of the collect-gold action, the borrowed peasant may not be borrowed by another action. After the collect-gold action is finished, the peasant becomes available again and can be used for other actions. Therefore, to allow concurrent collect-gold actions, multiple peasants must be used.

Consume: An action consumes a certain amount of a resource at the start of its execution, and this amount is deducted from the game state. Since the game state must obey the constraint that every resource value is non-negative, the inferred precondition of the action is that this resource amount must be present at the start of the action.

For example, the build-barracks action consumes 700 units of gold and 450 units of wood.

Produce: An action produces a certain amount of a resource at the end of its execution, and this amount is added to the game state.
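The four tags can be captured in a small action specification, along with a check of whether an action may start. This is a sketch under my own naming assumptions, not the paper's code; the key point is that require only tests presence, borrow tests what is not already locked up, and consume tests what can be deducted.

```python
from dataclasses import dataclass, field

@dataclass
class ActionSpec:
    name: str
    duration: float
    require: dict = field(default_factory=dict)   # must be present; sharable
    borrow: dict = field(default_factory=dict)    # locked up for the duration
    consume: dict = field(default_factory=dict)   # deducted at the start
    produce: dict = field(default_factory=dict)   # added at the end

def can_start(action, available, borrowed):
    """available: total resource amounts; borrowed: amounts currently locked up."""
    for r, n in action.require.items():           # require: presence is enough
        if available.get(r, 0) < n:
            return False
    for r, n in action.borrow.items():            # borrow: must be free to lock
        if available.get(r, 0) - borrowed.get(r, 0) < n:
            return False
    for r, n in action.consume.items():           # consume: must be deductible
        if available.get(r, 0) < n:
            return False
    return True
```

With this, two concurrent collect-gold actions can share one townhall (require) but need two peasants (borrow), matching the examples above.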

Resources are Divided into two Classes:

-Consumable Resources: are those consumed by actions, such as gold, wood, and supply (a peasant or footman cannot be built unless there is unused supply).

-Renewable Resources: are those that are required or borrowed by actions, such as peasants, town halls, and barracks.

Model Domain Characteristics :

1- The dependency structure between resources is such that, if the initial state has a townhall and a peasant (and assuming the world map has enough consumable resources like gold and wood), there always exists a plan for any resource goal. Conversely, if such a state cannot be reached, then no such plan exists.

2- The amount of renewable resources in a problem never decreases, since no unit is destroyed in our scenarios.

3- By the Wargus action specification, all effects at the start of an action are subtractive, while all effects at the end of an action are additive.

4- Again by the Wargus action specification, for each resource there is exactly one action that produces it. This property implies that every plan that produces the goal resources from a given game state must contain the same set of actions (though possibly not in the same sequence).


Posted in AI Game Engine Programming, Papers Summaries, Planning, Technical Documents | 1 Comment »

Steering Behaviors (SB)

Posted by Ogail on August 17, 2009

Steering Behaviors (SB)

  • Sometimes in games, creatures do not require intelligence; you just want them to behave in a natural way. This is where you look to steering behaviors
  • Steering techniques tend to be low level and concerned only with which direction a character should steer
  • Characters that use SB are called autonomous agents meaning that they are typically set in motion and don’t require additional help from programmer or scripted elements in order to perform within the game world
  • SB are only concerned with movement so autonomous here appears in that area only
  • Reynolds called his original creatures boids, and he set up 3 simple rules that can model complex flocking behavior
  • These rules are:
    • Alignment: your head should always be in the same direction with the rest of the group
    • Cohesion: force of keeping the group together; tight and focused
    • Separation:
      • Could be seen as the opposite of cohesion. This gives group members a “personal space bubble” so they don’t crowd each other too much and every individual has room to operate

 

  • Emergent behavior is something like the holy grail of game programming, where combinations of simple local rules lead to very complex global behaviors
  • Because every boid responds only to local stimuli, over time the group can drift toward a chaotic state. But negative feedback from the main controller brings the overall group back into a more ordered state
  • Predicting the group’s behavior even one minute ahead is impossible!
  • When combining behaviors, take into account the following:
    • The combination could lead to meaningless behavior
    • Some behaviors could cancel other behaviors (antagonistic behaviors)
  • Most popular combining behaviors are:
    • Simple Weighted:
      • Sum up all individuals steering forces
      • Apply a weight multiplier to each force vector before adding it into the running sum
        • This enables you to control the effect of each steering behavior on the overall system
      • The problem here is how to find suitable weights that lead to balance
      • Expensive in CPU time, because this calculation is performed every game loop
    • Prioritized Sum:
      • Behaviors are stored in prioritized list
      • When we update each one we sum the result into accumulated steering force
      • The magnitude of each incoming force is subtracted from the maximum force (set by the programmer)
        • When the maximum amount is exceeded, we truncate the sum and stop updating
      • Advantages:
        • Here we only update exactly the number of allowed behaviors
        • We order the behaviors here from most important to least important
    • Prioritized Dither:
      • All the given behaviors are stored in priority order
      • Each behavior is assigned a probability value
      • When the manager goes to update the behavior collection, it rolls the dice and tests against each behavior’s probability value
        • If it passes, the behavior is updated
        • If the behavior does something (sends back a steering vector), that result is used
        • Otherwise, the next behavior in the collection is tested
      • Advantages:
        • For accurate results, a very high probability should be assigned to a behavior
  • Pros of Steering Behaviors:
    • Ease of implementation
    • Predictable
    • Inexpensive to execute on the CPU
  • Cons of Steering Behaviors:
    • Combinatorial complexity
    • Short-sightedness
    • Deadlocking problems
    • Large tuning requirements
    • They look robotic if misused
    • Not great at modeling idle states
  • Extension to Paradigm:
    • Layered Steering:
      • Each steering behavior can itself contain another system of steering behaviors
      • The layers tend to go from general to specific
    • Learning Behaviors:
      • Use learning in:
        • Altering the weights and probabilities of behaviors
        • Teaching the agent when to use a specific behavior
      • Genetic algorithms (GAs) have been used successfully in this area
      • There are 2 papers useful in that topic:
        • Competition, Coevolution and the Game of Tag
        • Evolution of Corridor Following Behavior in a Noisy World
    • Common Steering Behaviors:
      • Approach
      • Pursuit
      • Evade
      • Arrive
      • Avoid
      • Wander
      • Pursuit with Offset
      • Patrol
      • Match Target Speed
      • Hide
      • Interpose
      • Orbit
      • Flow Field Following
      • Queuing
    • Data-Driven Steering Behaviors:
      • This is great for game designers, because with one mouse click a new behavior can be added to your system
  • Optimizations:
    • Load Balance:
      • Update certain agents in certain loops
        • Take care here if you are using flocking groups: the whole group should be updated together
      • Update agent under condition
        • For example, if we have a leader and a following group, the group is updated only when the leader changes his state
      • Using less costly combination methods for certain percentage of time
        • In order to partially bring down CPU cost without losing all the accuracy in your game
    • Priority/Weight Adjustments:
      • Assign the highest priority to the least expensive behaviors you have
      • The less cost behaviors could be checked first
  • Design Considerations:
    • Genre: preferred to be used with tactical types
    • Agent Reactivity: very reactive and able to be adjusted
    • System Realism: suitable for realism
    • Genre:
      • Genres that require emergent, organic movement from AI controlled agents
      • Genres that have groups and flocking behaviors exist
    • Development Limitations:
      • Verify the emergent behaviors produced by the system
  • Note:

Always check that your problem is really simplified: break it down to the simplest behaviors you can.
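The two most common combination methods above, simple weighted sum with truncation and prioritized dither, can be sketched as follows. This is an illustrative 2D version with made-up behavior callables, not code from the book.

```python
import random

def truncate(force, max_force):
    """Clamp a 2D force vector to the maximum magnitude set by the programmer."""
    mag = (force[0] ** 2 + force[1] ** 2) ** 0.5
    if mag > max_force:
        scale = max_force / mag
        return [force[0] * scale, force[1] * scale]
    return force

def simple_weighted(behaviors, weights, max_force):
    """Sum every behavior's force, weighted, then truncate the result."""
    total = [0.0, 0.0]
    for behavior, w in zip(behaviors, weights):
        fx, fy = behavior()
        total[0] += w * fx
        total[1] += w * fy
    return truncate(total, max_force)

def prioritized_dither(behaviors_with_prob):
    """Behaviors in priority order, each with a probability value.
    Roll the dice; if a chosen behavior returns a non-zero force, use it."""
    for behavior, prob in behaviors_with_prob:
        if random.random() < prob:
            force = behavior()
            if force != (0.0, 0.0):
                return force
    return (0.0, 0.0)
```

Simple weighting evaluates every behavior each loop (the CPU-cost issue noted above), while dither skips behaviors probabilistically in exchange for accuracy.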

Posted in AI Game Engine Programming | Leave a Comment »

Common AI Development Concerns

Posted by Ogail on August 11, 2009

Common AI Development Concerns

  • Designing Issues:
  • Data-Driven AI System:
    • Data-Driven primitive Levels: animations, behaviors and strategies
    • Designers could build animations themselves
    • Data-Driven system doesn’t improve AI engine but supplies its degree of organization
    • Be sure that you are not driving data for areas that are better with code solutions
    • Rule of Thumb Here:
      • Create reusable, simple primitives that allow users to build more complex objects
  • The One-Track-Mind Syndrome:
    • The problem here is that the AI programmer applies one AI technique to all AI problems in the game
    • The most common error here: “State machines are all you need!”
    • Every AI technique is suitable for particular input and under particular game conditions
    • You can use an FSM for the basic layout of the game, a FuSM for the main short-term decision layer, a planner to run your pathfinding and long-term decision-making layer, and a scripting system to drive your animations
  • Level of Detail (LOD):
    • LOD might entail (handle AI details between various levels):
      • Off-screen and faraway: characters are completely nonexistent to the human player
      • Off-screen and close: characters can’t be seen but the human player might still hear them
      • Very far-off: characters are visible as a pixel or two
      • Far-off: characters are visible as solid colors but no real details yet (you can differentiate between truck and car)
      • Medium: this distance would be your true area of sight
      • Close: anything closer than medium
      • Interaction: the character is interacting directly with the player someway
    • If characters are off-screen and faraway, then we can forget about real obstacle avoidance
    • There could be LOD systems for your engine that determines the current level of detail
    • Areas under human sight could be updated thirty times per second, while far-away areas could be updated two or three times
    • Scientist example (page 647)
    • LOD in RTS Game (page 648)
      • Buildings and mining will be statistically determined (without moving peons)
  • Helper AI:
    • Some areas still benefit from AI techniques as:
      • User Interface
      • AI Advisors
      • Automated Testing Systems
        • Testing Areas:
          • Limits Testing: testing values around the limits of the system’s capabilities
          • Random Testing: uses completely random input to the system
          • Smart Testing: real-game play techniques are applied
  • General AI Thinking:
    • Determine the goals of your AI controller
    • Brainstorm about how to solve these problems
    • Talk with AI staff about each proposed solution you have got
    • Try the solutions in a small prototype (like AIsteroids)
      • Here you have 80% of the solution the rest 20% will be discovered in the code
    • Think in fuzzy with your colleagues
  • Always differentiate between good programming and programming for sake of goodness
  • Consider novelty in your AI engines (like MOHAA, where if you throw a grenade at enemies they pick it up and throw it back)

Things a stupid AI does: rule-bound enemies, bad pathfinding, non-contextual enemies, oblivious enemies
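The LOD tiers listed above boil down to choosing an update rate per distance band. A minimal sketch, with thresholds and rates entirely made up for illustration:

```python
def lod_level(on_screen, distance):
    """Map visibility and distance to one of the LOD tiers described above.
    Thresholds are hypothetical tuning values."""
    if not on_screen:
        return "offscreen_far" if distance > 50.0 else "offscreen_close"
    if distance > 40.0:
        return "very_far"      # a pixel or two
    if distance > 20.0:
        return "far"           # solid colors, no detail
    if distance > 5.0:
        return "medium"        # true area of sight
    if distance > 1.0:
        return "close"
    return "interaction"

# Example update budgets: full-rate AI only where the player can see it
UPDATES_PER_SECOND = {
    "offscreen_far": 0,        # statistically simulated, no real avoidance
    "offscreen_close": 1,      # audible only
    "very_far": 2,
    "far": 3,
    "medium": 30,
    "close": 30,
    "interaction": 30,
}
```

An LOD manager would query `lod_level` each frame and schedule agent updates accordingly.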

Posted in AI Game Engine Programming | Leave a Comment »

Distributed AI

Posted by Ogail on August 10, 2009

Distributed AI

  • The first rule of ALL game programming: Keep it Simple, Stupid
  • The purpose of the distributed method is to simplify overall AI creation and maintenance by spreading AI tasks into modular as well as layered systems working with each other
  • Several AI Engine Layers:
    • Perception/Event Layer: Filters incoming sense data for relevance and various other factors
    • Behavior Layer: Determines the specifics of how to perform a given action
    • Animation Layer: Determines which animation to play to fit game state
    • Motion Layer: Handling aspects like pathfinding, collisions and avoidance
    • Short-Term Decision Making Layer:
      • The narrow-view intelligence for the AI entity, primarily concerned with just the entity
    • Long-Term Decision Making Layer:
      • Handles wide-view intelligence issues, like planning or team-based considerations
    • Location-Based Information Layer:
      • Includes information transmitted to the entity from influence maps, smart terrains

     

  • Perception and Events Layer:
  • Why create a central perception system:
    • Prevents game values from being calculated several times within a single game loop
    • Supports debugging and tracking the system
  • Perception systems work well with message-based systems

     

  • Behavior Layer:
    • This layer is a candidate for data-driven systems
    • The more content designers put in this layer, the more virtually “calculation-free” personality and intelligence your characters will exhibit
    • Scripts shouldn’t contain math; they should contain the sense of style that characters’ behaviors require to seem realistic

     

  • Animation Layer:
  • Choosing right animation to play is not a trivial task
  • Scripted systems here save calculation time

     

  • Motion Layer
    • Basic movement, pathfinding and obstacle avoidance algorithms stated here
    • You can adjust this layer for every character you have:
      • Smart character: knows when to use teleport at a suitable time
      • Weak character: lets the obstacle pass first and then passes
      • Strong character: moves the obstacle out of the way!

     

  • Short-Term Decision Making:
    • This layer is specific to the character, based on its attributes, its current perceptions, or its recent experience
    • A character might be almost dead so his goal now is to get health power up or run away
    • A character’s weapon is empty, so its goal now is to hide, reload the weapon, and appear again
    • Most games develop the short-term layer in a state-based manner

 

  • Long-Term Decision Making:
    • Long-term decisions are likely to be made outside any one unit
    • In RTS games this layer is usually developed as a FuSM
    • This layer contain the planning algorithms and high strategic decisions
    • In RTS games, the AI opponent can use a full fuzzy logic system to “best guess” the action to take under the current info

 

  • Location-Based Information Layer:
    • This layer is like blackboard architecture for creating AI Engines
    • Uses of LBI Systems:
      • LBI can help the long-term layer determine weak defensive areas
      • Knowing where military forces intersect on the map
      • Discovering valuable ambush areas on the map
      • Pathfinding algorithms use LBI to avoid kill zones or badly designed areas of the map
    • Triggers could be added to this layer to help simplify other layers (like “be careful, an enemy is near you!”)

     

Quote: “reaching for insect intelligence first, and then go on”
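The layered pipeline described above, with a perception layer filtering events once and the decision layers consuming the filtered view, can be sketched like this. All class names, event fields, and thresholds are assumptions for illustration, not the book's code.

```python
class PerceptionLayer:
    """Filters incoming sense data for relevance (distance here)."""
    def __init__(self, sight_range):
        self.sight_range = sight_range

    def update(self, entity, events):
        # keep only events close enough to matter to this entity
        return [e for e in events if e["distance"] <= self.sight_range]

class ShortTermLayer:
    """Narrow-view intelligence: react to what this entity perceives now."""
    def update(self, entity, events):
        threats = [e for e in events if e["type"] == "enemy"]
        if threats and entity["health"] < 20:
            entity["goal"] = "flee"
        elif threats:
            entity["goal"] = "fight"
        else:
            entity["goal"] = "idle"
        return events

def tick(entity, layers, events):
    """Run the layers in order; each may filter or augment the event list."""
    for layer in layers:
        events = layer.update(entity, events)
    return entity
```

The point of the central perception layer is visible here: the short-term layer never re-queries the world, so distance checks happen once per loop and the filtered view is easy to log for debugging.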

Posted in AI Game Engine Programming | Leave a Comment »

Other Techniques of Note

Posted by Ogail on August 9, 2009

Other Techniques of Note

  • Artificial Life (ALife):
    • ALife is about searching for “governing principles” of life
    • Rules that can’t be broken, and can be used without fail to understand aspects of nature and predict outcomes based on hard equations
    • Newton’s laws supported Descartes’ clockwork universe, in which God set up everything like a clock. God’s only contribution to the universe was to set everything in motion, and from there the laws of science took hold and have governed every sequence of events since that time
    • Since the chances of finding any governing principles directly are so limited, we construct our own “life simulations,” and in doing so find out more about how life in general operates
    • ALife is a field of study that hopes to understand natural life better by attempting to recreate biological phenomena in virtual computer environments
    • One main tenet of alife is that life is simply an emergent property of nature following some very simple rules over and over again
    • An emergent property refers to a trait or behavior exhibited by a creature that reaches beyond the capabilities of its constituent parts (Question)
  • ALife Usage in Games:
    ALife was used in Black & White and the Creatures games
  • Artificial Life Disciplines:
    • Cellular Automata (CA):
      • CAs are a group of algorithms that show a stunning amount of complex behavior with the very simplest of rules
      • Cells example (page 522)
      • When to use CAs:
        • Simulating mold growth
        • City building
    • Self-Organizing Behavior and Flocking:
      • Large groups of creatures can organize their movements quickly and easily, with what appears to be a unified mind
      • Investigations here aimed to produce algorithms that allow us to replicate this kind of behavior using simple rules
      • The most famous research in this area was Reynolds’ boids research: using a few key concepts you can achieve a remarkable simulation of many types of flocking movement
        • These simple concepts are:
          • Separation to avoid crowding
          • Alignment to the average group heading
          • Positional cohesion of the group
    • Genetic Algorithms: can model the idea of evolution
  • Pros of ALife:
    • Emergent behavior:
      • ALife is one of the best ways we currently have of creating emergent situations
      • Emergent behavior appears in games where the AI opponent has simple actions that can be combined in different ways, leading to a wealth of different final behaviors
    • Behavior reuse:
      • ALife forces developers to build games out of building blocks, assembling gameplay until it can be expressed in simple rules. In fact, many ALife games are simple to build
  • Cons of ALife:
    • Emergent behavior: the emergent outcome might not be entertaining!
    • Tuning issues: small changes in game parameters can destroy the emergent behavior completely
  • Some games use ALife with creatures to create actual ecologies within the world instead of random spawn points
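The canonical CA example is Conway's Game of Life: each cell lives or dies based only on its eight neighbors, yet gliders, oscillators, and other complex structures emerge. A compact sparse-grid implementation:

```python
from collections import Counter

def step(live_cells):
    """One Game of Life generation over a sparse set of (x, y) live cells.
    Rules: a dead cell with exactly 3 live neighbors is born; a live cell
    with 2 or 3 live neighbors survives; everything else is dead."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live_cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}
```

For example, the "blinker" (three cells in a row) flips between horizontal and vertical every generation, which is exactly the kind of ordered behavior emerging from trivial local rules that the mold-growth and city-building uses above rely on.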

     

  • Planning Algorithms:
  • Planning is deciding upon a course of action before acting
  • The formula that planning algorithms follow:
    • Break the abilities of the AI into distinct operators
    • Design your AI character and game environment in terms of states
    • Construct either:
      • Tree: that shows transition connections between states and lists the transition operators
      • Rules embedded in each state: that details which operators are available
    • Then, the AI forms a plan within local working memory by applying these transitions to a copy of its current state, searching for the best actions for the behavior it wants
  • Here you know your start state and end state; planning finds the best string of operations to get you to the end
  • Example: planning applied to pathfinding (page 525)
  • Because planning is costly, we’ve traditionally used hardcoded plans (scripts, FSMs, …), which usually produce the right behaviors
  • Usage in Games:
    • Used in pathfinding algorithms
    • Used in attack/defense in RTS games: when the AI sees that the enemy creates a unit of type X, it (the AI opponent) searches its technology tree for the path to the anti-unit for that unit and starts creating it
    • In FPS games: a human enters a room for a power-up and the AI should kill him (page 527)
      • This is a serious AI player that mimic humans
    • Make AI anticipate ambushes
  • Some Planning Techniques:
    • A*
    • Means-Ends Analysis (MEA):
      • Combines forward and backward searching of the tree, and tries to minimize unnecessary search
    • Patch Recalculation:
      • Take a broken plan and pass it to a function that patches the hole in the plan (by coming up with new planning steps)
      • This method is used when an event breaks the plan; it is most useful for long plans
      • This planning technique provides a way of keeping plans up to date without having to start from zero
    • Minimax:
      • Assumes your opponent is going to work against you every chance he can get
      • Reinforcement learning could be combined with this technique, resulting in more challenging behaviors
  • Pros of Planning:
    • Planning algorithms provide very intelligent-looking behavior
    • Planning is generic and could be developed once and used in many contexts
    • Planning can be implemented hierarchically (See example page 528)
  • Cons of Planning:
    • Planning is computationally complex
    • Human players are unpredictable, so you should be careful with the plan depth
      • Balance the speed and flexibility of long plans against having short-range plans that avoid gaffes
      • For long plans you could use patch recalculation to adapt them to the situation
    • Planning can make the AI seem sluggish or unreactive if plans are too monolithic or take too long to adapt to new situations
  • Areas for Exploitation within Games:
    • Planning is used when creating strategic AI systems that require many steps to achieve goals
    • The most obvious candidate for planning is RTS games
    • In FPS games, it could be used to set ambushes and traps
    • In racing games, to take critical corners cleanly rather than just “speed up” the car
    • In fighting games, to choose damaging combinations (body attack, then kick to the head, …)
    • In football games, to confuse the human player or waste time until the match ends (to win, of course!)
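The formula above (break abilities into operators, apply them to a copy of the state, search for a sequence that reaches the goal) can be shown with a minimal breadth-first state-space planner. The toy operators and fact names are mine, not from the book or Wargus.

```python
from collections import deque

def plan(start, goal, operators):
    """start: frozenset of facts; goal: set of facts that must all hold;
    operators: name -> (preconditions, add_effects), both sets of facts.
    Returns the shortest operator sequence reaching the goal, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                 # every goal fact is satisfied
            return steps
        for name, (pre, add) in operators.items():
            if pre <= state:              # operator applicable in this state
                nxt = frozenset(state | add)
                if nxt not in seen:       # apply to a *copy* of the state
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Toy tech-tree operators for the RTS counter-unit example above
operators = {
    "build-barracks": ({"townhall"}, {"barracks"}),
    "train-footman":  ({"barracks"}, {"footman"}),
}
```

BFS keeps the sketch simple; a real engine would use A* with a heuristic and action costs, as noted in the techniques list above.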

 

  • Production Systems:
  • Production systems are also referred to as expert systems
  • Production systems are rule-based systems that strive to encompass expert knowledge within a specific area
  • Example of this is hardcoded conditional if-then branches within your AI engine to make decisions
  • In 1969, Allen Newell and Herbert Simon released the theory of the General Problem Solver (GPS), based on how the human mind operates (means-ends analysis)
  • GPS tends to store expert knowledge about a highly specific problem
  • Production system is separated into four parts:
    • Global database
    • Production rules: serve as actual if-then statements
    • Rules/Situation Matcher: decides which rule to apply next to move the database closer to your goal
    • Conflict resolution function (for use with rule collisions)
  • Production systems use forward-chaining and backward-chaining inference
  • Production systems could be used as planning systems or learning devices
  • John Laird and his team successfully interfaced Soar with both Descent 3 and Quake 2, creating competent, non-scripted opponents for each game. Using a system of more than 700 rules, they created a Quake bot that can move through game levels and use all weapons and level elements (such as bonus pads and teleporters). It can anticipate human actions, create custom travel routes to maximize the amount of power-ups it collects, and perform intelligent ambushes and human-like behaviors
  • Pros of production systems:
    • Generic algorithm; production systems are data independent
    • Research: tons of research has been done on production systems; fast matching algorithms like RETE and TRETE (a stateless RETE) speed up matching
    • Goal directed
    • Highly reactive (with a good set of rules)
  • Cons of Production Systems:
    • Computationally expensive, especially in games with large rule sets or non-arbitrary match conflict resolution
  • Areas for Exploitation within Games:
    • Written in a data-driven way, so new rules and perceptions can be added easily to the game world
    • This offers highly extendible and reusable systems
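The four-part structure above can be shown with a tiny forward-chaining engine: a global database of facts, if-then production rules, a matcher, and rule order as a trivial conflict-resolution policy. The rules and fact names are illustrative, and a real system would use RETE-style matching rather than this naive loop.

```python
def forward_chain(facts, rules):
    """facts: initial global database (set of facts);
    rules: ordered list of (preconditions, conclusion) pairs.
    Repeatedly fires the first applicable rule (list order = conflict
    resolution) until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, conclusion in rules:
            if pre <= facts and conclusion not in facts:
                facts.add(conclusion)     # fire: assert the conclusion
                changed = True
                break                     # re-match from the top
    return facts

# Hypothetical combat rules, highest priority first
combat_rules = [
    ({"enemy_visible", "low_health"}, "retreat"),
    ({"enemy_visible"}, "engage"),
    ({"townhall"}, "can_build_barracks"),
    ({"can_build_barracks"}, "can_train_footman"),
]
```

Note how chained rules fire in sequence: `townhall` eventually yields `can_train_footman` without that link being stated anywhere explicitly.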

     

  • Decision Trees:
  • Here you mainly convert each if-else statement into a node within a tree
  • The root of the tree could be the question that the tree answers
  • There are 2 types of decision trees used:
    • Classification Trees: trying to determine “type” of player the human is behaving like (aggressive, defensive)
    • Regression Trees: adjusting AI behaviors (too hard, too easy) based on the human player’s difficulty
  • The differences between using Decision Trees (DT) and NN are:
    • NNs are a black box; their weights can’t be changed or understood directly, whereas DTs are fully understandable, descriptive, and easy to change
    • DTs can only comprehend hard comparisons with a binary outcome, whereas NNs can handle noisy data or data with gaps and strange jumps in behavior
    • Output from a DT is always a discrete value, whereas output from a NN can be a continuous value
    • DTs consider a single variable at a time (called monothetic), whereas NNs can handle multiple variables at a time (called polythetic). So when a combination of variables is used to make one decision, NNs are more suitable
    • NNs are much more accurate than DTs. The relative error on a traditional data set might be 10 times or more for a DT over a back-propagated NN

  • Pros of Decision Trees:
    • Easy to use, understand and change
    • Many algorithms exist for designing, training, debugging, tuning and optimizing decision trees
    • DTs are suitable for use in games and make more sense than NNs
    • Complexity can be managed using hierarchical decision trees
  • Cons of Decision Trees:
    • Binary DTs deal only with distinct states, don’t scale well, and are difficult to maintain or extend once they reach a certain level of complexity
    • There is a direct trade-off between the accuracy and the size of your tree
      • So if you need specific and accurate outputs, go to NNs or other techniques
    • Non-binary trees are hard to construct, alter, and use
  • Areas of Exploitation within Games:
    • Using classification to perform simple player modeling
    • Data-driving the trees, allowing nodes to be added and removed (used in Black & White to record the high-level thinking that your avatar did about its environment, given its experiences and training, about which actions to take at any given time)
    • Can be simply used by game level designers
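A tiny hand-built classification tree makes the points above concrete: each internal node is one hard, monothetic comparison, and every path ends in a discrete label. The attributes and thresholds here are hypothetical player-modeling stats, not from the book.

```python
class Node:
    """A decision-tree node: either a test (attr <= threshold) or a leaf label."""
    def __init__(self, attr=None, threshold=None, left=None, right=None, label=None):
        self.attr, self.threshold = attr, threshold
        self.left, self.right, self.label = left, right, label

    def classify(self, sample):
        if self.label is not None:
            return self.label                          # leaf: discrete output
        branch = self.left if sample[self.attr] <= self.threshold else self.right
        return branch.classify(sample)

# "What type of player is this?" from observed stats (made-up thresholds)
player_model = Node("attacks_per_min", 2.0,
                    left=Node(label="defensive"),
                    right=Node("retreats_per_min", 1.0,
                               left=Node(label="aggressive"),
                               right=Node(label="hit-and-run")))
```

Because the tree is just nested comparisons, a designer can read it, tune a threshold, or graft on a node, which is exactly the transparency advantage over a NN listed above.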

 

  • Fuzzy Logic:
  • Fuzzy logic is a far more advanced system, complete with its own logical proof and methods
  • The Sony PalmTop is reported to use decision trees based on fuzzy logic to classify handwritten Kanji characters
  • Fuzzy logic could be used in “Fuzzy-Logical Production System” which follows all the rules of regular production systems, but performs all its inference using fuzzy logic instead
  • The general inference process has three (optionally four) steps (page 537):
    • Fuzzification: a membership function is applied to the input to determine its degree of membership
    • Inference: usually MIN or PRODUCT is used on the inferred rules
    • Composition: usually MAX or SUM is used
    • Defuzzification: converting the fuzzy output to a crisp number (common methods: Centroid and Maximum)
  • Fuzzy systems are used in pattern recognition, financial systems and heavy data analysis
  • They are not implemented in systems that require a heavy amount of realism
  • Fuzzy set membership describes to what extent something already is true
  • Pros: extends Boolean logic in a new manner
  • Cons: computationally expensive (the Combs method helps combat this problem)
  • Areas of Exploitation within Games:
    • Dealing with unknown or partially known information (like player modeling)
    • In online games, it could be used as a “helper AI” that plays the game for you temporarily (if you are AFK for a while)
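The fuzzify / infer (MIN) / compose (MAX) / defuzzify pipeline above can be sketched for a single decision. The membership functions and rule set are invented for illustration; real systems defuzzify to a crisp number (e.g., by centroid) rather than picking a winning label as done here for brevity.

```python
def falling(x, lo, hi):
    """Shoulder membership function: 1 below lo, 0 above hi, linear between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def combat_decision(health, ammo):
    # Fuzzification: degree of membership of each input in its fuzzy set
    low_health = falling(health, 20.0, 80.0)        # 1 when badly hurt
    high_ammo = 1.0 - falling(ammo, 10.0, 60.0)     # rising shoulder

    # Inference with MIN: a rule is only as strong as its weakest antecedent
    attack_strength = min(1.0 - low_health, high_ammo)  # healthy AND armed
    flee_strength = low_health                          # hurt -> flee

    # Composition with MAX, then "defuzzify" by taking the strongest output
    return "attack" if attack_strength >= flee_strength else "flee"
```

Notice that nothing is a hard threshold: a bot at 50 health is *partly* hurt, and the decision flips smoothly as the memberships cross, which is what makes fuzzy systems good for player modeling with partially known information.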

Posted in AI Game Engine Programming | 1 Comment »

Neural Networks (NN)

Posted by Ogail on August 7, 2009

Neural Networks (NN)

  • NNs are sometimes called Artificial Neural Nets (ANNs). They try to behave like the human brain
  • Neural Nets in Nature:
    • Animal brains are large clusters of interconnected nerve cells called neurons
    • The human brain is composed of about 100 billion neurons. Each neuron has a number of connections to other neurons, both coming in and going out (humans have about 10,000 connections per neuron)
    • Connections coming in are called dendrites and those going out are called axons
    • Actually, neurons are not directly connected; rather, the dendrites of one neuron come very close to the axons of another (0.01 micron), and the space between them is called the synaptic gap, or synapse
    • Working Mechanism of NN:
      • A charge is transmitted to the neuron
      • If this charge gets too large (above a certain threshold), the neuron fires the collected energy down its axon; this is called an action potential
    • When a neuron fires a lot in a specific situation, it learns that it should fire in that situation. The opposite is also applicable
    • This introduces inhibition and excitation
  • What we want to emulate:
    • Take input, recognize patterns within the input, and make decisions based on those patterns
  • The human brain works at about 100 Hz, a fraction of the speed of modern computers
  • Because of the symbolic way our brains store knowledge, we can employ many levels of efficiency that allow parallel processing
  • Notes:
    • NNs tend to classify and recognize patterns within input
    • NNs without direction restriction are called recurrent networks
      • In this system, info can go from input to output and back again, allowing feedback within the system
      • To facilitate this feature, a number of state variables are associated with neurons
    • Most AI programmers use FF (Feed Forward) systems because they are easier to understand
    • A recurrent system can be simulated using multiple FF systems
    • Sparsely connected systems can be simulated using fully connected ones where some connection weights are near zero
    • NNs could be used as a basis for player modeling (so the AI system can predict player actions)
    • NNs could learn during live gameplay, allowing adaptive techniques
  • Artificial Neural Nets Overview:
  • Neuron Structure:
    • The value associated with a neuron is the sum of all input values multiplied by their connection weights, added to the neuron’s bias value
    • Bias refers to the inhibitory or excitatory effect the neuron has within the network
    • The axon of a neuron is represented by its output value (optionally filtered through an activation function)
  • Neural Network Structure:
    • In the following diagram:
      • circles are neurons (nodes)
      • Lines between nodes are connections between them
      • Neurons in the 1st column represent the input layer
      • Neurons in the 2nd column represent the hidden layer
        • As this layer gets bigger, more patterns are recognized
        • Perceptron: what you get when you map the input layer directly to the output layer without a hidden layer
          • These are used for simple linear pattern recognition
      • Neurons in the 3rd column represent the output layer
        • Contains the actual categories that the network is trying to impose on its inputs
      • Each connection has an associated value (weight) and a direction
        • The weight is biologically equivalent to the strength of the connection between 2 nodes
    • Nodes Connectivity:
      • Fully connected: each node is connected to every node in the next layer
      • Sparsely connected: the opposite of fully connected
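The neuron and layer structure described above can be sketched directly (a minimal sketch; the 2-3-1 topology, the weights, and the sigmoid activation are made-up examples):

```python
import math

def sigmoid(x):
    """A common activation function: squashes any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Sum of inputs times connection weights, plus bias, through the activation."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def layer(inputs, weight_rows, biases):
    """A fully connected layer: one neuron per row of weights."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# One forward pass through a tiny 2-input, 3-hidden, 1-output network
hidden = layer([0.5, -0.2],
               [[0.1, 0.4], [-0.3, 0.8], [0.6, -0.1]],
               [0.0, 0.1, -0.2])
output = layer(hidden, [[0.5, -0.5, 0.2]], [0.0])
```

A sparsely connected net falls out of the same code by setting some weights to zero.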
  • NN in World Business:
    • US Postal system: uses heavily trained NNs for handwriting recognition when reading addresses on mail
    • Predicting weather, judging credit card fraud, voice recognition, diagnosing diseases, filtering internet sites
    • In Games:
      • Used in pattern recognition and prediction:
        • Patterns are recognized to take decisions
        • Enemy patterns are recognized and stored to be used in prediction
      • Example:
        • NN can be trained to become a black box for potentially expensive operations like animation selection (what animation the basketball player should perform right now given state of game, his skill, surrounding player, the spread point, difficulty level of game)
  • Steps for adding an NN to your Game:
    • Set up your network
    • Train it using specially prepared data
    • Use it in live games
  • How to Set Up Your Network:
    • Structure:
      • Refers to both type (FF or recurrent) and organization (how many nodes, how many hidden layers)
      • Input Layer:
        • The number of variables the NN can categorize or pattern match on determines the # of input nodes
        • Abstract variables that represent combinations or calculations based on simpler variables are better suited to NNs (i.e. in AIsteroid we could have a “danger” abstract variable calculated from others)
        • The fewer you can get away with, the better
        • The more nodes you include in an NN, the larger the search space becomes that the NN is slogging through to arrive at a suitable solution
      • Hidden Layer:
        • There are no real guidelines about how many hidden layers to use (but 1 layer is recommended)
        • A common practice is to use a medium # of hidden nodes (two times the # of input nodes), then go a few up or down and compare performance until you see it tapering off
        • Some Criteria in determining # of hidden nodes:
          • # of training cases & complexity of the function being solved,
          • amount of noise (variance) in the outputs
        • An NN is capable of encapsulating a nonlinear function that maps the inputs to the outputs
        • A perceptron can only find linear correlations between input and output, so add a nonlinear activation function (AF) on the nodes to give the network nonlinearity
        • Any nonlinear function is suitable except polynomials. For backpropagation learning the AF must be differentiable, and it helps if the function is bounded
      • Output Layer:
        • The # of output nodes = the # of outputs required from the NN
        • For continuous output, the level of the neuron’s activation (NA) tells you how much to do
          • The NN could tell you to turn right or turn left, and the NA would tell you how much to turn
        • Activation is calculated using an activation function
        • Common activation functions:
          • Step function, hyperbolic tangent, logistic sigmoid functions, Gaussian function
    • Learning Mechanism:
      • After setting up the NN you should determine how you want to train it
      • Common Types of NN Learning:
        • Supervised:
          • Involves having training data that consists of input/output pairs
          • Backpropagation Method:
            • Here you feed input into the NN, and adjust the weights of the network if there is a discrepancy between the output of the NN and the expected output given in the training data
            • Training continues until a certain level of accuracy is achieved
            • It’s called backpropagation because you adjust the network parameters from back to front
          • Reinforcement Learning:
            • Here outputs are not given to the algorithm, but the network is rewarded (or its behavior reinforced) when it performs well
            • Some implementations also punish the system when it performs poorly!
        • Unsupervised Learning:
          • Involves statistically looking at the output and adjusting the weights accordingly
          • Perturbation Learning:
            • Similar to simulated annealing
            • Here you test your NN, then adjust some values a small amount and try it again. If you get better performance, you keep going by repeating the process; otherwise you revert to your last network settings
          • Using Genetic Algorithms to adjust the weight values of the NN
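As a concrete taste of supervised training, here is the single-layer case: the delta rule adjusts weights by the discrepancy between the network's output and the expected output, the same idea backpropagation applies layer by layer. The OR example, learning rate, and epoch count are illustrative:

```python
import random

def train_perceptron(samples, epochs=100, lr=0.5):
    """Delta-rule training, the single-layer case of backpropagation:
    feed the input forward, and if the output disagrees with the expected
    output from the training data, nudge each weight by the error."""
    random.seed(1)  # fixed seed so the sketch is repeatable
    w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1.0 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0.0
            err = target - out  # the discrepancy drives the adjustment
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical OR is linearly separable, so a perceptron can learn it exactly
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Training continues for a fixed number of epochs here; a real implementation would stop once the accuracy threshold is reached, as the notes describe.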
    • Creating Training Data:
      • This task is more important for supervised learning
      • Recording:
        • Record a human performing the same tasks you are trying to teach your system, then use that data
        • The preceding technique is great because it can be used to build a human-level performance system
      • Using Programs:
        • Another way is to write a program that generates reasonable input scenarios and have a human say which output should come up
        • This method is time consuming for the person involved
        • This method is suitable for discrete outputs and for small output values
        • Enhancement:
          • You could generate random input/output pairs, validate them, and store the winners
        • It’s recommended to have at least 10 times as many training cases as input units (10N)
  • An NN is about determining the pattern between input and output, whereas a GA optimizes a set of numbers to maximize some fitness function
  • NNs are primarily used for:
    • regression: finding a function that fits all the data points within some tolerance
    • classification: classifying given input into output
  • Functions types: under-fitting, fit and over-fitting
  • Example of using NN:
    • Feature that needs AI: building an NN to help your AI enemy evade bullets shot by the player by sidestepping out of the way
    • Inputs to NN: enemy facing direction, enemy position and player position
    • Output of NN: the NN will determine a movement vector for the enemy
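For the evasion example above, the raw game state has to be turned into normalized inputs before it can be fed to a network. One possible encoding (the function name, input choice, and normalization are assumptions, not from the book):

```python
import math

def encode_inputs(enemy_pos, enemy_facing, player_pos, max_dist=100.0):
    """Turn raw positions into two normalized NN inputs: relative bearing
    to the player in [-1, 1] and clamped distance in [0, 1]."""
    dx = player_pos[0] - enemy_pos[0]
    dy = player_pos[1] - enemy_pos[1]
    dist = math.hypot(dx, dy)
    # Bearing relative to where the enemy is facing; 0 means dead ahead
    rel = (math.atan2(dy, dx) - enemy_facing + math.pi) % (2 * math.pi) - math.pi
    return [rel / math.pi, min(dist / max_dist, 1.0)]

# Enemy at the origin facing along +x, player at (30, 40): distance 50
inputs = encode_inputs((0, 0), 0.0, (30, 40))
```

Feeding the net relative, normalized quantities instead of raw coordinates keeps the input count low, which the notes above recommend.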

  • Most of the success of an NN is based on capturing good data
  • Optimizations:
    • Optimizing NNs generally means optimizing the training phase, because most NN training is done offline
    • Lessening the training time to construct a viable network design, and creating highly effective, relevant training data
    • Some points to consider:
      • If your network seems to be stuck in a local minimum, where the error becomes stable but is still higher than the threshold
        • You might be using too little training data
        • Your hidden layer might be too small
      • If your training seems unstable (meaning the error seems to jump all over the place)
        • You might have too many hidden layers
        • Network has essentially been given too much room to experiment within
      • Over-fitting:
        • Your training set might be too small
        • You might have run too many training iterations with the data
      • Under-fitting:
        • You might have a large amount of very noisy training data
        • You might not train for enough iterations
      • If the error seems to be oscillating between values
        • You may be using too large a learning rate, or momentum might be too high
        • Gradient descent is a greedy algorithm and will perform poorly if the step size is too high
          • Here you can use Newton’s method (but it’s more costly)
  • Pros of Neural Net-Based Systems:
    • NNs are a great way to find abstract relationships between input conditions
    • NNs can approximate solutions to very complex math functions
      • Here you save CPU time
      • It has been mathematically proven that an NN with at least one hidden layer and a nonlinear activation function can accurately approximate almost any finite-dimensional vector function on a compact set
    • Can derive meaning from nonlinear or imprecise data
    • Running a trained NN takes a fraction of the CPU time that trial-and-error methods take
    • Humans can make sense of them (can understand NNs)
  • Cons of Neural Net-Based Systems:
    • Determining how to train an NN is usually costly
      • Here we’ve mainly shifted the problem from how to solve it ourselves to how to teach the NN to solve it
    • An NN could learn bad relationships, so the output is not as expected
      • This problem can be caused by feeding arbitrary, numerous or bogus inputs to the network
    • An NN is a mathematical black box and thus hard or even impossible to debug
    • All input fields must be numeric
    • NNs are difficult to implement
      • Due to the high number of factors that must be determined without guidelines or rules
      • These factors are: network structure, input/output choices, activation function, learning rate, training data issues, weight initialization
    • Over-fitting is very common
    • NNs sometimes suffer from a phenomenon called catastrophic unlearning
      • This occurs when an NN is given additional training data that completely undoes all previous learning
    • In complex learning scenarios, lots of training data and CPU time are required for training
    • NNs larger than a thousand nodes are not very stable
      • The curse of dimensionality seems to cause the learning ability of large nets to implode somewhat
      • The network cycles forever, varying its weights without ever getting closer to a solution!
  • Extensions to Paradigm:
  • Other types of NN:
    • Simple recurrent Networks:
      • Here the hidden layer of the network is also connected back to the input layer, with each connection having a weight of 1
      • Could be used in sequence prediction
    • Hopfield nets:
      • Used to mimic associative memory within the brain
      • These networks allow entire “patterns” to be stored and then recalled by the system
      • Useful in image recognition
      • The # of nodes needed to store information can be calculated directly
    • Committee of Machines:
      • Multiple NNs are trained on the same data but with different initializations
      • During usage, all the nets run on the input data and the best output from these NNs is chosen
    • Self-Organizing Maps (SOM):
      • Useful for classification tasks
      • Used to visualize relationships and classes within large, highly dimensioned inputs
      • Used in games in player modeling:
        • By taking a # of dimensions of behavior into account, giving the AI system a better picture of the kind of player the human tends toward
      • Uses an unsupervised learning technique called competitive learning
  • Other Types of NN Learning:
    • Reinforcement Learning:
      • It’s called learning with a critic
      • Here the network outputs values and the supervisor simply declares whether the result was a good one
      • Here it seems to be working unsupervised (because the critic could be the outer environment)
    • Unsupervised Learning:
      • Genetic Algorithms are used to find the best weights on the connections between neurons
      • Perturbation Learning, where weights are iteratively and randomly adjusted and tested for improvement
      • Competitive Techniques, where a # of neurons “compete” for the right to learn by having their weights adjusted within the net
    • Problems for which the output can’t be known ahead of time; here the main job of the network is to classify, cluster, find relationships, and compress the input data in specific areas (SOMs are an example of this)
  • Design Considerations:
    • Types of solutions:
      • Great when you have a simple, modular system that maps inputs to outputs (in surprising, nonlinear, black-box ways)
      • They are suitable for the tactical level
        • They can’t be used in diplomacy systems in RTS games
      • You can use it when you have specific inputs (incoming ball, player’s position, player skills) and specific output (each of possible catch animations to get the ball)
    • Agent Reactivity:
      • Contributes to faster reactivity for the agent
    • System Realism:
      • It gives the system realism
      • Take care when using NNs, because in some games their faults come across as “stupid AI” rather than “a human mistake”
    • Development Limitations:
      • This is the biggest concern: an NN requires investment and energy to develop
      • Also, if you use an NN for online learning, it’s much harder to develop

Posted in AI Game Engine Programming | Leave a Comment »

Genetic Algorithms (GA)

Posted by Ogail on August 2, 2009

Genetic Algorithms (GA)

 

  • GAs can solve problems where finding a solution takes too long
  • GA techniques use the principle of evolution to search for solutions to algorithmic problems
  • The process in nature works as follows:
    • Species need to be able to reproduce
      • Reproduction is simply executing the encoded rules necessary to build an organism
      • These rules are stored in strings of DNA called chromosomes
    • Chromosomes are built up from small modular sequences called genes
      • Genes are various permutations of four basic nucleotides (T, A, C, G); each gene holds info about the setting, or alleles, for a particular trait
    • When two parents reproduce, their DNA is split and the child’s DNA comes half from each of them. This is called crossover or genetic recombination
    • The problem of bad traits and good traits
      • The quality measure of any one creature’s mix of traits is called fitness
    • When a gene within a child organism somehow changes so that it’s completely new and can’t be traced to either parent, this is called mutation
  • The process can be thought of as “evolutionary search”: we are still searching across the field of all possible solutions to a given game problem, but we are going to search using a method like the one found in the evolutionary process
  • The method is split into two parts: evolving a solution & using a solution
  • Evolving a solution is usually performed during production of the game, but some games (the Creatures series, Black & White) include GA learning in the shipped game, in a limited way
  • GA are computationally expensive
  • GA belongs to class of stochastic search (discussed in appendix)
  • The problem of local maximum versus global maximum solutions
  • GAs separate the algorithm from the problem representation, which enables them to easily find solutions in systems of mixed variable types (discrete and continuous variables)
  • A common data structure used here is the string, though any data structure can be used if:
    • Each individual can encode a complete solution
    • Genetic operations can be constructed for the data structure
  • Basic Genetic Method:
    • Initialize:
      • Initialize a starting population of individuals
      • These individuals can be generated randomly or with promising values from previous knowledge
      • Size of population depends on: resources, how much time is devoted, and what seems to work
    • Evaluation:
      • Evaluate each individual’s success within the problem space
      • Each individual is evaluated using a fitness function that returns a value indicating the overall performance of this individual
      • A fitness test that requires each individual to play for 5 minutes (to try getting any actions), given a population of 100, would thus require 8.33 hours per generation
    • Generation:
      • Generate new individuals using reproduction
      • Selection is another important part of the genetic process
      • If you select only the best genes you can get stuck at a local maximum, but mixing in randomly selected genes keeps diversity up and generates good solutions
      • There are three common methods to generate children:
        • Crossover (sexual reproduction)
        • Mutation (genetic variation)
        • Elitism: taking the most fit genomes from the last generation and carrying them into the next
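The Initialize/Evaluate/Generate loop above, with elitism, crossover and mutation, can be sketched as follows; the one-max toy fitness, population size, and rates are illustrative choices:

```python
import random

def evolve(fitness, genome_len, pop_size=30, generations=40, elite=2,
           crossover_rate=0.7, mutation_rate=0.05):
    """Initialize -> Evaluate -> Generate, repeated for a fixed number of
    generations. Binary genomes, single-point crossover, bit-flip mutation,
    and elitism, as in the notes above."""
    random.seed(0)  # fixed seed so the sketch is repeatable
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)   # Evaluate
        nxt = scored[:elite]                              # elitism
        while len(nxt) < pop_size:                        # Generate
            p1, p2 = random.sample(scored[:pop_size // 2], 2)
            if random.random() < crossover_rate:          # single-point crossover
                cut = random.randrange(1, genome_len)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]                      # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: "one-max", maximize the number of 1 bits in the genome
best = evolve(fitness=sum, genome_len=16)
```

In a real game the fitness function would run each genome through the game itself, which is where the hours-per-generation cost noted above comes from.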
  • Representing the Problem:
    • Gene and Genome:
      • Determine the structure of the genes in your problem: which traits are included, their alleles (on/off, real value), their ranges, and whether any of these traits depend on each other
      • This part of the representation is the most important
      • Advantages (many good) and disadvantages (longer, unwanted) of big search space for GA – P: 447
      • Example: Gene and Genome of Pac-Man player (447)
      • GAs are good for problems where you can’t formulate a good way of determining solutions
    • Fitness Function:
      • Pac-Man fitness example (499)
      • Your fitness function is really the essence of what your GA is trying to optimize, so take care in its design
      • Fitness data should be scaled to prevent premature convergence and stagnation
        • Premature convergence occurs when exceptional individuals dominate the early generations, so that later offspring are no better than their parents
        • Stagnation occurs when many individuals have similarly high numerical fitness
      • Some of ways scaling the data:
        • Sigma Truncation:
          • F’ = F – (F^ – c * sigma), where:
            • F’ is the scaled fitness value
            • F^ is the average fitness
            • sigma is the population standard deviation
            • c is a reasonable multiplier (usually between 1 and 3)
          • Negative results are set to 0
        • Rank Scaling:
          • Replaces each fitness score with its position in the sorted order of fitness scores
        • Sharing Scaling:
          • Only similar individuals are scaled
          • The number of genes that are shared among many genomes is recorded
          • Genomes are then grouped by how many shared genes they have
          • The fitness score of each genome is scaled by the number of other genomes in its sharing group
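Sigma truncation and rank scaling can be coded directly from the descriptions above (a minimal sketch; the sample fitness values are made up):

```python
import statistics

def sigma_truncation(fitness, c=2.0):
    """F' = F - (average - c * sigma); negative results are set to 0."""
    avg = statistics.mean(fitness)
    sigma = statistics.pstdev(fitness)
    return [max(0.0, f - (avg - c * sigma)) for f in fitness]

def rank_scaling(fitness):
    """Replace each fitness score with its position in the sorted order."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    ranks = [0] * len(fitness)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank   # worst individual gets rank 1
    return ranks

scores = [10.0, 12.0, 11.0, 40.0]   # one exceptional individual
scaled = sigma_truncation(scores)
ranks = rank_scaling(scores)        # the outlier's lead shrinks to one rank step
```

Rank scaling is the blunter tool: the outlier's huge raw lead becomes a single rank step, which directly fights premature convergence.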
    • Reproduction:
      • There are two main reproduction types:
        • Generational reproduction
          • Using the last generation as a tool to create the next, either by copying directly (elitism), crossover, or mutation; the original generation is completely replaced
        • Steady State:
          • A few individuals created via crossover or mutation replace specific individuals in the original generation, but the main body of the population remains unchanged
          • Who is replaced: the worst, random individuals, the most similar, or the parents
      • Elitism Technique:
        • Advantages: ensures that we don’t lose the best individual in any population (alters the selection routine)
        • Disadvantages: lessens diversity & speeds up convergence
      • Selection Techniques:
        • Roulette Wheel Selection:
          • Genomes that have the highest scores will have the largest chance of being selected
            • Such a genome has the biggest slice of the roulette wheel
          • A high-fitness individual may be selected multiple times
          • It’s random chance; thus, the fittest individual is not guaranteed to be selected
            • For this reason elitism is a common practice in GA genome selection
        • Stochastic Universal Selection:
          • The roulette wheel has pointers (as shown in the figure) equal to the number of wanted offspring; after one spin, the individuals under the pointers are chosen
          • This technique has the advantage of keeping genetic diversity high
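Both wheel-based techniques can be sketched like so; the three-genome example with fitness scores 1, 1 and 8 is illustrative:

```python
import random

def roulette_select(population, scores):
    """One spin: each genome's chance is proportional to its slice of the wheel."""
    pick = random.uniform(0, sum(scores))
    running = 0.0
    for genome, score in zip(population, scores):
        running += score
        if running >= pick:
            return genome
    return population[-1]

def stochastic_universal_select(population, scores, n):
    """One spin, n equally spaced pointers; the pointed-at individuals win."""
    step = sum(scores) / n
    start = random.uniform(0, step)
    chosen, running, i = [], scores[0], 0
    for pointer in (start + k * step for k in range(n)):
        while running < pointer:
            i += 1
            running += scores[i]
        chosen.append(population[i])
    return chosen

# Fitness 8 of 10 belongs to 'c', so SUS hands it 8 of the 10 pointers
random.seed(42)
parents = stochastic_universal_select(['a', 'b', 'c'], [1, 1, 8], n=10)
```

Note how SUS guarantees the weak individuals 'a' and 'b' one pointer each, which is the diversity advantage mentioned above; plain roulette could miss them entirely.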

 

  • Tournament Selection:
    • A number of individuals (k) are randomly drawn from the pool
    • The highest-fitness individual goes to the next generation
    • Then all go back into the pool (or the selected individual can be deleted)
    • The previous steps are repeated several times
    • Notes:
      • If the tournament size is larger, weak individuals have a smaller chance of being selected
      • A 1-way tournament is equivalent to random selection
      • This technique works fine on parallel architectures
  • After choosing individuals for crossover:
    • Generate a random number (between 0 and 1) for each pair
    • If this number is less than a set crossover rate (say 0.7), the crossover operation is applied and 2 offspring are generated
    • Otherwise the pair is copied into the next generation without alteration
    • Suggestion: each crossover technique has a crossover operator value, so when this value appears, that technique is used
  • Which crossover operator applied to a pair depends on several things:
    • The type of variable structure your genomes are using
    • Healthy dose of experimentation
  • Some of the binary crossover operators are the following:
    • Single Point Crossover:
      • A position is randomly chosen somewhere along the length of the genome, and then all the genes after this position are swapped between the parents

 

  • Two Point Crossover:
    • Same as single point except that only the genes between the 2 points are swapped
  • Uniform Crossover (Every Point):
    • It could be named multipoint crossover; it’s like 2-point but with multiple points
    • This technique can act like a mutation
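The three binary crossover operators can be sketched as follows (parents of all-0s and all-1s make the swapped spans easy to see):

```python
import random

def single_point(p1, p2):
    """Pick one cut position; swap everything after it between the parents."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point(p1, p2):
    """Pick two cut positions; swap only the genes between them."""
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

def uniform_crossover(p1, p2):
    """Decide per gene, with a coin flip, which parent it comes from."""
    c1, c2 = [], []
    for g1, g2 in zip(p1, p2):
        if random.random() < 0.5:
            g1, g2 = g2, g1
        c1.append(g1)
        c2.append(g2)
    return c1, c2

random.seed(3)
c1, c2 = single_point([0] * 8, [1] * 8)  # the swapped tails mirror each other
```

Whichever operator is used, the two children between them always carry exactly the genes the two parents had; nothing is created or lost, only recombined.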
  • Some of the continuous value variable crossover operators:
    • Discrete Crossover: Swaps the variable values between individuals
    • Intermediate Crossover:
      • Offspring variable values are calculated as offspring = parent1 + a * (parent2 – parent1)
      • The scaling factor a is chosen randomly from (-d, d+1). Normal intermediate crossover uses d=0
    • Line Crossover: Same as intermediate crossover but all variables have the same scale factor
  • Some of the order specific crossover operators:
    • Partially Mapped Crossover (PMX):
      • You map a substring from one parent to the other; then during crossover, when you find a mapped gene, swap it with its mapped value
      • Example is the figure below
    • Order-Based Crossover:
      • As in the figure, choose several random genes from P1 and impose the order they are found in onto the same genes within P2, swapping values as needed
    • Position-Based Crossover:

 

 

  • After crossover, mutation occurs:
  • Mutation means applying an operator to genes in the offspring
  • The rate of mutation, or the chance that a gene will be mutated, can vary widely depending on the problem
  • Many academic papers discuss this issue (455)
  • The specific type of mutation operator to apply is related to the structure of the genome
  • Common mutation operators for order-specific genomes:
    • Exchange: swapping a gene with another
    • Displacement: select 2 random positions within a genome to define a substring, remove it, then re-insert it elsewhere
    • Insertion: works like displacement but on one gene. This is the recommended technique
  • Non-Order specific operators include the following:
    • Binary Mutation: flip a bit within the genome
    • Real-value Mutation: Add delta value to the gene
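A sketch of the mutation operators above; the rates and example genomes are illustrative:

```python
import random

def binary_mutation(genome, rate=0.05):
    """Non-order-specific: flip each bit with probability `rate`."""
    return [g ^ 1 if random.random() < rate else g for g in genome]

def real_value_mutation(genome, rate=0.05, delta=0.1):
    """Non-order-specific: add a small random delta to a gene."""
    return [g + random.uniform(-delta, delta) if random.random() < rate else g
            for g in genome]

def insertion_mutation(genome):
    """Order-specific: remove one gene and re-insert it at a random spot."""
    g = list(genome)
    gene = g.pop(random.randrange(len(g)))
    g.insert(random.randrange(len(g) + 1), gene)
    return g

random.seed(7)
reordered = insertion_mutation([1, 2, 3, 4, 5])  # same genes, new order
```

Note that the order-specific operator never changes gene values, only positions, which is what keeps permutation-style genomes (tours, build orders) valid after mutation.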
  • Remember, GAs are all about experimentation and finding out which operators to use
  • With GAs, the more time you put into them, the better the results you will receive
  • Highly scaled game-time problems: when game time is sped up during evolution, some calculations, like collision checks, can be missed (473)
  • GAs are a brute-force method that can find solutions in very difficult or computationally expensive areas of game AI
  • Areas of using GA:
    • When you have a number of parameters that interact in highly nonlinear or surprising ways
    • When you have many local maximums and are searching for the best one (car physics parameters example)
    • Solutions that involve discontinuous output (needs explanation, page 474)
    • When complementing traditional techniques
    • When actual computation of a decision might be too costly (like trying to substitute in heavy math function)
  • Pros of Genetic Algorithms:
    • Easy to set up
    • Easy to start getting results
    • GAs are often a very strong optimization algorithm, meaning that you can find near-optimal solutions
    • They tend to find global maximum solutions rather than finding local maximum solutions
    • They can operate in parallel
  • Cons of Genetic Algorithms:
    • Time Consuming
    • GA performs evolution offline
    • Hit-or-miss performance: getting the best combination requires time and experimentation
    • Weak definition of success and failure:
      • It’s hard to know where the flaw in your structure is
      • It’s hard to tell the difference between buggy code and un-evolved code
    • An optimal solution is not guaranteed
    • Tough to tune, and even tougher to add functionality
  • Extensions to the Paradigm:
    • Ant Colony Algorithms:
      • Collective intelligence: ants help each other, so together they seem intelligent
      • Ants put marks on trails that are near food so other ants go there
      • Here we are building solutions based on the successes of the entire population rather than the success of one individual
    • Co-evolution (478):
      • Concept of cooperative and competitive evolution
      • In cooperative evolution, fitness increases when 2 creatures work together; in competitive evolution, one increases while the other decreases
      • This type of GA could be used in RTS civilization games
    • Self-Adapting GAs:
      • During the evolution process, the crossover and mutation rates are influenced and changed
      • It also tries to apply the most suitable operators to genomes
    • Genetic Programming:
      • Here we are evolving a program (code) rather than parameters
      • If you use data-driven game AI (scripting), where your data is a small set of program instructions that represent behavior, this technique can be used to evolve AI character scripts instead of having to create them
  • Design Considerations:
    • Types of Solutions: more suitable for the tactical level
    • Agent Reactivity: supports reactivity for your system
    • System Realism: because of its nature of optimized solutions, it sometimes behaves in a non-realistic manner
    • Genre: could be used in RTS Games in: building order, finding working tactics (such as rushing)
    • Platform: that’s not an issue because the work is mostly done offline
    • Development Limitations:
      • That’s really our concern, because:
        • GAs are not debuggable in any real sense
        • Do you really have the time and patience to find an optimal solution with different parameters, operators, and gene designs?
        • What if your feedback requires changes to the game? That’s a hard thing to do
    • Entertainment Limitations:
      • Game difficulty levels can be handled by a separate GA for each level, each with a separate fitness function
  • Appendix:
    • Stochastic optimization (SO) methods are optimization algorithms which incorporate probabilistic (random) elements, either in the problem data (the objective function, the constraints, etc.), in the algorithm itself (through random parameter values, random choices, etc.), or in both. The concept contrasts with deterministic optimization methods, where the values of the objective function are assumed to be exact, and the computation is completely determined by the values sampled so far

Posted in AI Game Engine Programming | 4 Comments »

Message Based Systems

Posted by Ogail on July 30, 2009

Message Based Systems

  • Messaging is not a decision-making structure; it’s a communication structure
  • Messaging systems help with CPU utilization
  • Messaging serves as a secondary system that resides below the underlying decision structure of your game
  • Characteristics of candidate AI systems that should use Messaging Systems:
    • AI-controlled characters are most often created to be reactive
    • The AI level is very high, so it communicates with many of the game’s other engines
  • Messaging System Framework:
    • Message Object
    • Message Pump
    • Client Handlers


  • Code Skeleton:
    • Message: stores the individual info requirements of a message
    • MessagePump: central message router; acts as the post office of the system
    • Client Handlers: run code to accommodate any given incoming message
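The skeleton above might look like this in code (a minimal sketch; any class or method names beyond Message/MessagePump are assumptions):

```python
from collections import deque

class Message:
    """Stores the individual info requirements of a message."""
    def __init__(self, msg_type, sender, payload=None):
        self.msg_type = msg_type
        self.sender = sender
        self.payload = payload

class MessagePump:
    """Central message router -- the 'post office' of the system."""
    def __init__(self):
        self.handlers = {}      # msg_type -> list of client handlers
        self.queue = deque()

    def register(self, msg_type, handler):
        self.handlers.setdefault(msg_type, []).append(handler)

    def post(self, message):
        self.queue.append(message)   # clients post instead of being polled

    def update(self):
        """Called once per frame: deliver queued messages to client handlers."""
        while self.queue:
            msg = self.queue.popleft()
            for handler in self.handlers.get(msg.msg_type, []):
                handler(msg)

# A guard reacts to an "alarm" message instead of checking for one every frame
pump = MessagePump()
heard = []
pump.register("alarm", lambda m: heard.append(m.payload))
pump.post(Message("alarm", sender="watchtower", payload=(10, 4)))
pump.update()
```

The queue is what buys the CPU savings mentioned above: clients sit idle until the pump hands them something, rather than polling every frame.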
  • Pros of messaging systems:
    • Optimizes client-side code, where states themselves don’t care about state transitions
    • Decoupling class-level communication improves the design
  • Cons:
    • Additional memory footprint
    • If polling behavior is required, it’s hard to implement and adds overhead
  • Extension:
    • Message priority:
      • Beware of starvation
    • Message arbitration:
      • Sees messages that could be fixed on the fly:
        • Message redundancy, message collision, starving messages
    • Atomic and extended messages type:
      • Periodic message
      • Debugging messages
      • Confirmation messages
      • Immediate messages

Posted in AI Game Engine Programming | 3 Comments »

Location-Based Information Systems

Posted by Ogail on July 27, 2009

Location-Based Information Systems

 

  • LBI can be thought of as very specialized perception data, which also includes embedded logic or lower-level intelligence
  • Categories of discussing LBI:
    • Influence maps
    • Smart terrain
    • Terrain analysis
  • Influence maps (IM)
    • It refers to using a simple array of data where each element represents data about a specific world position
    • The accuracy of the IM is directly proportional to the memory size
    • Examples of uses:
      • Stores the locations of resources, so when you want to expand your territory you can choose the best direction to expand and also know where to build defenses (used in high-level planning)
      • An IM that keeps the number of units that have been killed in each grid square. This can be used in pathfinding so that units will not continue to take paths into areas with a high mortality score
    • In complex systems it’s recommended to have a low-detail global IM and another high-detail local IM that represents a specific area in the game
    • High-Level Approach: game characters interact with each other; each character consults some large list of game characters and determines whether interactions are required or necessary
      • This is not how people do things in the real world; people have limited senses and can only react to things in their immediate areas
    • Keeping track of player locations in an IM allows for a more low-level approach to game character interaction
      • This adds a realistic taste to the AI characters in the game
    • In fact, IM is the central repository for the other two LBI techniques to communicate with the rest of the game
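As a concrete illustration of the IM examples above, here is a minimal sketch of a grid-based influence map that records unit deaths per cell and feeds that into a pathfinding step cost. All names, grid dimensions, and weights are illustrative assumptions:

```python
# Minimal influence-map sketch: a 2D grid storing a "mortality" count
# per cell; a pathfinder can add this to its movement cost so units
# avoid squares where many units have died.
W, H = 8, 8
mortality = [[0] * W for _ in range(H)]

def record_death(x, y):
    # Called by the game whenever a unit dies at (x, y).
    mortality[y][x] += 1

def step_cost(x, y, base_cost=1.0, danger_weight=0.5):
    # Higher mortality -> more expensive square for the pathfinder.
    return base_cost + danger_weight * mortality[y][x]

record_death(3, 2)
record_death(3, 2)
print(step_cost(3, 2))  # 1.0 + 0.5 * 2 = 2.0
print(step_cost(0, 0))  # 1.0
```

The same array doubles as the “central repository”: danger signification, occupance queries, and terrain analysis can all read from it.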
  • Smart Terrain:
    • The notion was coined by the creator of The Sims
    • This technique places the logic and behavior data for how to use various world features and game objects in the objects themselves
    • It bundles all the interaction animations, sounds, and any other special data that a character needs to use the object, making the object self-contained
    • This makes adding new objects to the game straightforward
  • Terrain Analysis (TA):
    • TA is a family of methods that use IM techniques to give AI systems richer strategic information about maps, especially randomly generated maps that have never been touched by a level designer
    • TA methods are best described as specialized pattern recognition over an IM data array
    • TA systems use neural nets or fuzzy-logic systems to algorithmically find patterns within the IM array
  • How these techniques are used:
    • Tracking various populations within the game
    • Keeping track of the number of specific game objects within a certain area


    • Simple occupance data can be used to:
      • Help with obstacle avoidance
      • Give rough estimates of various game perception data
      • Serve any task that requires quick access to localized population data
    • This technique is also used with fog-of-war line-of-sight systems to uncover areas of the map that are visible to any of your units
    • Ground Control:
      • The term IM historically refers to techniques derived from thermodynamics and field analysis in general
      • This can quickly determine which player has control over which part of the game map
      • Algorithm for doing this is as following:
        • Zero out the entire map
        • Assign each grid square a value based on its team-specific control
          • i.e. a positive value for player A units and a negative value for player B units
        • For each square, add up the values of the surrounding squares, scale the sum by some amount, and add it to the square’s value
        • Repeat this a few times to disperse the influence until the map reaches a stable state
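The ground-control steps above can be sketched as follows. The grid size, decay factor, and iteration count are illustrative assumptions, not values from the book:

```python
# Ground-control sketch: seed each cell with +1 per player-A unit and
# -1 per player-B unit, then repeatedly spread influence into
# neighbouring cells; the sign of a cell then indicates which side
# controls it, and near-zero cells mark the contested frontier.
W, H = 5, 5

def propagate(grid, iterations=4, decay=0.25):
    for _ in range(iterations):
        nxt = [row[:] for row in grid]
        for y in range(H):
            for x in range(W):
                # Sum the four orthogonal neighbours, scaled by decay.
                spill = 0.0
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < W and 0 <= ny < H:
                        spill += grid[ny][nx]
                nxt[y][x] = grid[y][x] + decay * spill
        grid = nxt
    return grid

influence = [[0.0] * W for _ in range(H)]
influence[0][0] = +1.0   # a player-A unit
influence[4][4] = -1.0   # a player-B unit
result = propagate(influence)
# Cells near (0,0) end up positive, cells near (4,4) negative,
# and the midpoint stays near zero -- the contested area.
```

In practice the iteration count and decay trade accuracy against CPU time, so they are worth tuning per map size.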
    • Pathfinding System Helper Data:
      • When provided with additional information about a specific area, the pathfinder can smooth the solution through a tricky part of the map by offering a shortcut, or allow the AI-controlled character to use map features (like a ladder or a teleporter)
      • The provided data could be:
        • Passability (relative to terrain features – hills, cliffs – and terrain type – sea, land)
        • Which player currently controls the areas of the map you want to traverse
    • Danger Signification:
      • IMs are used to keep track of areas where bad things have happened over a period of time
      • Better yet, the AI-controlled side can respond to the danger area by sending an attack group to investigate what’s causing it
    • Rough Battlefield Planning:
      • Using the ground-control method you can look for areas with zero or near-zero values to find places where armies have met and will fight for control
      • From this you can:
        • Know each army’s position relative to the others
        • Know the relative size of each army
        • Force an attack direction
        • Estimate the chances of winning, to initiate charges or retreats
        • Send reinforcements more intelligently
        • Coordinate attacks on multiple fronts more cleanly
    • Simple Terrain Analysis:
      • This includes somewhat more mathematical determinations such as cover, visibility, and height factors
      • Cover: how open a given position is to attack from a given angle
      • Visibility: lighting concerns and line-of-sight issues
      • The best areas of cover become sniping spots
      • Areas with low visibility can serve as sneaky back doors into other map areas
    • Advanced Terrain Analysis:
      • In RTS games, finding choke points adds the ability to stage ambushes there
      • IMs are used to determine the best places to build a town and defensive towers
      • Towns should be built with preplanning: keep crowding under control, maximize future growth, allow maintenance
      • Building a base against impassable terrain removes a line of attack
      • Building walls to redirect units through a kill zone, as explained in the picture below:
[Figure: walls redirecting units through a kill zone]
  • Determining important map areas using IM (powerup location density, cover info, choke points)
  • Implemented IM types:
    • Occupance-based IM: tracks where a given game object is on the map
    • Control-based IM: shows the area of control around each game object
    • Bit-wise IM: splits each IM element’s value into bitwise data components
    • Danger measurement
    • Providing pathfinding help for more complex interactive objects
    • Helping in searching for objects
    • Keeping track of the asteroids’ movement to predict their next positions and shoot them down
  • A “dirty rectangles” scheme borrowed from graphics drawing (update only the cells that changed)
  • Tip:
    • SmartObjects: each object in the game knows how to interact with the ship via an Interact method, so adding new objects is easy
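The SmartObjects tip can be sketched as a common interface behind which each object hides its own interaction logic. The class names and agent fields here are illustrative, not from the book’s demo:

```python
# Smart-object sketch: the agent calls the same interact() on every
# object; each object carries its own effect, so new object types
# need no changes to the agent code.
class SmartObject:
    def interact(self, agent):
        raise NotImplementedError

class HealthPack(SmartObject):
    def interact(self, agent):
        # Heal the agent, capped at full health.
        agent["hp"] = min(100, agent["hp"] + 25)

class Teleporter(SmartObject):
    def __init__(self, dest):
        self.dest = dest
    def interact(self, agent):
        # Move the agent to this teleporter's destination.
        agent["pos"] = self.dest

agent = {"hp": 60, "pos": (0, 0)}
for obj in (HealthPack(), Teleporter((5, 7))):
    obj.interact(agent)   # agent-side code is identical for every object
print(agent)  # {'hp': 85, 'pos': (5, 7)}
```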
  • Pros of LBI:
    • LBI tends to simplify the perception search space by lowering the resolution of the data the AI needs when making decisions
    • Easy to debug and develop
    • Generic interface
    • Centralizes AI data
  • Cons of LBI:
    • Size expensive
    • TA is performance expensive
  • Optimizations:
    • Using dirty drawing rectangles in occupance IM
    • Multilevel IM:
      • IMs with multiple levels, where each higher level requires additional performance but supplies more detail
  • Design Considerations:
    • Types of Solutions: LBI is suitable for the tactical and strategic levels
    • Agent Reactivity:
      • It makes your agents proactive
      • Take into consideration that LBI systems are secondary systems, so agent reactivity depends on the primary AI technique
    • System Realism: adds realism to the game
    • Genre and Platform: RTS games are the main genre
    • Development Limitations: allows modular implementation

Posted in AI Game Engine Programming | Leave a Comment »

Scripting Systems

Posted by Ogail on July 23, 2009


  • Previous techniques are coded by programmers; so how can a gamer add something new?
  • Scripting: using simplified programming language to build AI elements, logic or behavior
  • Scripting Languages could be:
    • Very simple (a predicate language): contains general rules, each a keyword and a value
    • Full-fledged: e.g. Neverwinter Nights uses a Java-like language for its scripting
  • Game designers/end users use the scripting system
  • Design issues in creating scripting systems:
    • The kinds of functionality that the language will require
      • Example in shooter game:
        • Simple script: defines where enemies will spawn and their parameters
        • More complex: defines how enemies move
        • More complex: creates new enemies
          • The scripter can define:
            • A set of attributes: body type, speed, duration, armor level, etc.
            • A set of behaviors: attack method, movement style and pattern, etc.
      • The system could be implemented so that a script holds several values and the game side determines which to use in response to the current player state
    • Will the runtime engine interpret the scripts, or will they be precompiled into some kind of bytecode?
      • Interpreted scripts:
        • Tend to be slower and larger
        • Are flexible: you can enter a new script from an in-game console
        • Are simpler to implement
      • CPU power will be an important issue in determining the choice here
    • How big will the scripts be?
      • Take care of the computer memory assigned to them
      • Also, if the scripts are loaded during gameplay, take care of the memory bandwidth you are dealing with
    • Who will be using the scripting language?
  • Scripting System Components:
    • Parser: loads the script files and breaks them into tokens
    • Tokens:
      • Token name
      • Execute(): invoked by the parser when it finds the token
        • Executing involves scanning the file for any additional parameters the token expects, then sending a game message with this data
    • In-game callbacks:
      • Functions that respond to the messages sent by a token’s Execute()
      • This is where you actually bind real game variables to incoming script data
    • Script files
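The parser/token/callback pipeline above can be sketched in a few lines. Everything here (class names, the `spawn` token, parameter counts) is an illustrative assumption, not the book’s actual implementation:

```python
# Minimal scripting-system sketch: the parser matches registered token
# names, and each token's execute() consumes its parameters and fires
# an in-game callback bound to real game data.
class Token:
    def __init__(self, name, n_params, callback):
        self.name, self.n_params, self.callback = name, n_params, callback

    def execute(self, words, i):
        # Consume this token's parameters and notify the game.
        params = words[i + 1 : i + 1 + self.n_params]
        self.callback(params)
        return i + 1 + self.n_params

class Parser:
    def __init__(self):
        self.tokens = {}

    def register(self, token):
        self.tokens[token.name.lower()] = token   # case-insensitive lookup

    def parse(self, script):
        # Splitting on whole words also sidesteps the substring
        # problem ("Shout" will never match a token named "out").
        words = script.split()
        i = 0
        while i < len(words):
            tok = self.tokens.get(words[i].lower())
            i = tok.execute(words, i) if tok else i + 1

spawned = []
parser = Parser()
parser.register(Token("spawn", 2, lambda p: spawned.append((p[0], int(p[1])))))
parser.parse("SPAWN orc 3 spawn goblin 5")
print(spawned)  # [('orc', 3), ('goblin', 5)]
```

The `spawned` list stands in for the in-game callback side, where real game variables would be bound to the incoming script data.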
  • Implementation Notes (Simple scripting language):
    • To initialize the system, register the parser with all tokens
    • Problems to consider:
      • Capital/small letters (case sensitivity)
      • The substring problem (e.g. “Shout” contains “out”)

 

  • Performance of the AI with this system:
    • Loading at startup will be somewhat delayed by the script-loading operation
  • Additions:
    • Comments
    • Parameterized tokens
    • Block signifier (recursive tokens)
  • Lua Advantages:
    • Fast, has a small memory footprint, and is easy to learn
    • Procedural syntax with dynamic types
    • Can be interpreted from script or compiled to bytecode
    • Has automatic memory management
    • Has easy to use APIs
  • Some language basic features:
    • Very simple scoping
    • Dynamic Typing
    • Tables
    • Control Structures
    • The Stack
  • Lua Language Fundamentals
  • Pros of Scripting Systems:
    • Rapid Prototyping:
      • Main concepts will be held in scripts so advanced logic could be added easily
    • Increased Accessibility:
      • Many game designers and end users could alter logic easily
    • Speed of Tuning:
    • Provides means of user extensibility:
      • Providing end users with tools used in creating the scripts add extensibility
    • Scales Well:
      • As the number of things under control of the scripting system increases, the real power of scripting makes itself known
  • Cons of Scripting Systems:
    • Speed of Execution:
      • Any interpreted system is going to be slower than programs compiled into native code
      • Bytecode is faster but unreadable
      • You can develop speed-critical functions in the compiled language while using scripts for the parts of the game that aren’t as performance sensitive
    • Debugging Can be Difficult:
      • Custom scripting languages lack the usual level of debugging tools:
        • Dedicated debuggers, profilers, asserts, internal error checking, syntax checking
        • Non-technical people who write scripts also tend to lack “common-sense debugging techniques” such as binary bug hunting
    • Looking Scripted:
      • When players play the game enough, your scripted behaviors get discovered!
    • The Question of How Much Power:
      • Answer of this question lies in the type of the game you are working on, level of functionality you need in your scripts to perform the things you need, and the level of organization and control you want of the content in your game
    • You are now maintaining Two Systems!
    • Too much functionality in the scripting system can be a problem as well
  • Extensions to this method:
    • Completely Custom Languages:
      • Creating a custom scripted system via the “Lex &amp; Yacc” route
        • Define the tokens with Lex and the grammar with Yacc; Yacc then generates the C code needed to parse files under that grammar
    • Built-In Debugging Tool:
    • Smart IDE for Writing Scripts
    • Automatic Integration With The Game
      • Ways to implement this:
        • Recognize that some change happened and then rebuild the whole project to incorporate the changes
        • Add a request system so that, any time a new widget is requested, it’s added to a special list the programmer can consult to find out what functionality scripters require, and then implement it
    • Self-Modifying Scripts:
      • Scripts could keep track of behaviors that work and that don’t work
      • The system could append additional rules onto its scripts (or remove them) as the consequence of some event during the game
      • Genetic Algorithms:
        • which strive to use genetically derived methods to perfect algorithms for finding solutions to problems
      • Genetic Programming:
        • Deals with using genetic methods to actually write whole programs to solve problems
      • Here the system is searching for the ideal script in which to perform its job instead of the ideal parameters to use within script
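The simplest form of the self-modifying idea above, short of full genetic programming, is a rule store that tracks each scripted rule’s success rate and prunes rules that keep failing. This is an illustrative sketch; the class name, thresholds, and rule names are assumptions:

```python
# Self-modifying script store sketch: track (wins, tries) per rule and
# remove rules that have been tried enough and rarely succeed.
class RuleBook:
    def __init__(self):
        self.rules = {}        # rule name -> (wins, tries)

    def add(self, name):
        self.rules[name] = (0, 0)

    def report(self, name, success):
        # Called after the game observes a rule's outcome.
        wins, tries = self.rules[name]
        self.rules[name] = (wins + (1 if success else 0), tries + 1)

    def prune(self, min_tries=5, min_rate=0.2):
        # Drop rules tried at least min_tries times with a low win rate.
        self.rules = {n: (w, t) for n, (w, t) in self.rules.items()
                      if t < min_tries or w / t >= min_rate}

book = RuleBook()
book.add("rush")
book.add("turtle")
for _ in range(5):
    book.report("rush", False)    # rushing keeps failing
    book.report("turtle", True)   # turtling keeps working
book.prune()
print(sorted(book.rules))  # ['turtle']
```

A genetic-programming variant would go further, mutating and recombining the scripts themselves rather than just keeping or dropping them.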
  • Optimizations:
    • Performance of scripting system relies on:
      • Level of its use within the game
      • Functionality of scripting language
      • How structured your overall system is
  • Design Considerations:
    • Types of Solutions: best suited to the strategic level, and can work at the tactical level as well
    • Agent Reactivity: gives you a vast range of reactivity
    • System Realism:
      • Well-designed scripted systems can perform realistic, unique behavior

Posted in AI Game Engine Programming | Leave a Comment »