Adaptive AI Engine for RTS Games

Discussing the theory and practice

Archive for October, 2009

Case-Based Planning – A Framework for planning from experience (Part I)

Posted by Ogail on October 31, 2009

  • Abstract:
    • Presenting planning as a task supported by dynamic memory
    • Case-Based Planning: integrating models of memory, learning and planning into one system
    • Successful plans are stored in memory, indexed by the goals they satisfy and the problems they avoid
    • Failures are also stored, so the planner is able to anticipate and avoid future plan failures
    • CBP aims to improve planning in three areas:
      • Failure avoidance
      • Plan repair
      • Plan reuse
  • Case-Based Planning (CBP) is the idea of planning as remembering
  • CBP differs from rule-based planning in that it rests on the notion that new plans should be based on the planner’s knowledge of what has succeeded and failed in the past
  • A plan for a set of goals is not built up piece by piece from the individual plans for each goal. Instead, it is constructed by modifying a plan from memory that already satisfies, or partially satisfies, many if not all of the goals
  • CBP is concerned with previous planning errors in order to avoid them
  • The CBP storage mechanism must make it possible to avoid past failures and reuse successful plans
  • CBP Differs from other planning techniques in:
    • Initial plan building
    • Reaction to plan failure
    • Vocabulary for describing and storing plans

Building a Plan

  • CBP must start building a plan for a set of goals by considering how they will interact
  • The CB approach to finding an initial plan is to anticipate problems so the planner can find plans that avoid them
  • The previous planning mechanism was the create-and-debug paradigm
  • The main difference between them is that the create-and-debug paradigm creates a plan and debugs failures only after they arise, whereas CBP anticipates them

Debugging Failed Plans

  • A CBPer should be able to recognize failed plans and mark them as failed
  • A plan fails when it doesn't satisfy some goal that it was designed to deal with
  • An exception failure is different from a planning failure: it occurs when an unexpected event arises. The response also differs: for a plan failure the planner alters the plan, while for an exception failure the planner alters its understanding of the world
  • A planner must respond to plan failure by building causal explanations of why the failure occurred and then use these explanations to access replanning strategies designed for that general class of situation
  • A planner must respond to exception failure by using the explanations to add new inference rules that will allow it to anticipate the problem that it previously was unable to foresee. The planner should ask first “What went wrong with the plan?” and then ask “What went wrong with planning?”
  • The planner has to repair its expectations about the world when those expectations lead to plans that fail
  • A CBPer responds to planning failures by repairing both the faulty plan and its knowledge base that allowed it to build the plan incorrectly

Storing Plans for Later Reuse

  • The basic vocabulary of plan indexing is necessarily the vocabulary of the planner's domain and of the goals' domain
  • Plans must be stored by descriptions of the negative goal interactions they avoid (umbrella example)

Learning from Planning

  • Learning by remembering differs from theories of concept learning and explanation-driven programming
  • CBP Learning is divided into 3 types:
    • Plan Learning
      • It’s the creation and storage of new plans as the result of planning for situations that the planner has never encountered before
      • The planner should build a new plan and decide what features are best for indexing it in memory
    • Exception Learning:
      • It's more complex than plan learning and closely linked to the indexing of plans in memory
      • It involves learning the features in a domain that are predictive of negative interactions between plan steps
      • This is used to anticipate particular problems and then to search plans in memory designed to avoid them
    • Critic Learning:
      • Learning the repairs that have to be made if those problems arise again in different circumstances

The Structure of Case-Based Planning

  • Old theories concentrate on simulating behavior but don't explain why it arises. They are called "non-explanation explanations"
  • Another look at human behavior is to ask what function this behavior serves

Building it from the Bottom

Why Case-Based?

Plan Retrieval

  • A planner must know:
    • Its initial planning situation
    • The states that are currently true
    • The goals it needs to satisfy
  • Several plans could satisfy the same goal for different situations
  • When the planner wants to retrieve a plan, it probably will not find an exact plan that satisfies the goals it requires, so it looks for plans with similar goals as a starting point
  • This similarity could be expressed by:
    • Grouping similar goals into sets
    • Building them in ISA hierarchy
      • I've found a paper talking about representing an ISA hierarchy using sets rather than trees. This paper states that trees have some problems in identifying similarities and solves these problems using sets
    • Dynamically evaluating goal similarity on the basis of individual features
    • Similarity Matrix
  • If a planner has a set of plans that partially satisfy its goals, how does it choose the best matching plan out of the group?
  • The abstraction hierarchy is used to determine the similarity between plans, whereas the value hierarchy is used to determine the relative utilities of different plans with respect to a set of goals
  • Goals that are easier to incorporate into existing plans are less important than those that are more difficult to satisfy (buildings example 10)
  • In order to find the best matching plan for a set of goals, a planner needs to know three things (see the sketch after this list):
    • A memory of plans indexed by the goals they satisfy
    • A similarity matrix for judging the similarity of goals, which is required for determining how close a plan comes to satisfying a set of goals
    • A value hierarchy of goals used to judge the relative utilities of plans with respect to a set of goals
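
A minimal sketch of how these three pieces of knowledge might combine during retrieval, assuming a toy RTS domain; the goals, similarity values, utilities, and plan library below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: rank stored plans against a set of target goals using a
# goal-similarity matrix and a value hierarchy (relative goal utilities).

GOAL_SIMILARITY = {                      # similarity matrix (1.0 = identical goals)
    ("defend_base", "build_tower"): 0.6,
    ("gather_gold", "gather_wood"): 0.5,
}
GOAL_VALUE = {"defend_base": 3.0, "gather_gold": 1.0, "gather_wood": 1.0}   # value hierarchy

def similarity(g1, g2):
    if g1 == g2:
        return 1.0
    return GOAL_SIMILARITY.get((g1, g2), GOAL_SIMILARITY.get((g2, g1), 0.0))

def score(plan_goals, target_goals):
    # For each target goal, take its best match among the goals the plan satisfies,
    # weighted by that goal's relative utility from the value hierarchy.
    return sum(GOAL_VALUE.get(g, 1.0) *
               max((similarity(g, pg) for pg in plan_goals), default=0.0)
               for g in target_goals)

def retrieve(plan_memory, target_goals):
    # plan_memory: plan name -> set of goals it satisfies (the goal index)
    return max(plan_memory, key=lambda p: score(plan_memory[p], target_goals))

plans = {"turtle_defense": {"defend_base", "build_tower"},
         "boom_economy": {"gather_gold", "gather_wood"}}
print(retrieve(plans, {"defend_base", "gather_gold"}))    # -> turtle_defense
```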

Plan Modification

  • Plan modification is about modifying the retrieved plan so that it satisfies the goals it does not yet satisfy
  • To alter old plans to meet new goals the planner needs:
    • Set of modification rules
      • These rules are sets of steps that can be added to a particular plan to achieve a particular goal
      • These rules can simply be the modifications needed to alter an existing plan to achieve a particular goal
    • Critics with knowledge of goal specific requirements
      • This information will let the planner tailor the general modifications of a plan to the specific needs of the items required to achieve particular goals
    • General plan specifications
      • This is needed so the planner doesn't violate the goals of the overall plan when it modifies it to satisfy a particular goal
  • For now, the RETRIEVER finds a good plan and the MODIFIER makes it better (a minimal sketch follows)
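
A minimal sketch of the MODIFIER idea under simplifying assumptions: modification rules are just lists of steps keyed by goal, the general plan specification check is a stand-in, and goal-specific critics are omitted. All names are hypothetical.

```python
# Hypothetical sketch: apply modification rules so a retrieved plan covers goals
# it does not yet satisfy, while checking a (placeholder) plan specification.

MODIFICATION_RULES = {
    # goal -> steps that can be spliced into a plan to achieve it
    "defend_base": ["build_barracks", "train_soldiers", "set_patrol"],
    "gather_gold": ["build_mine", "assign_workers"],
}

def violates_specification(plan):
    # Stand-in for the "general plan specifications": here we only forbid duplicate steps.
    return len(plan) != len(set(plan))

def modify(plan, satisfied_goals, target_goals):
    plan = list(plan)
    for goal in target_goals - satisfied_goals:
        candidate = plan + MODIFICATION_RULES.get(goal, [])
        if not violates_specification(candidate):
            plan = candidate
    return plan

base_plan = ["build_mine", "assign_workers"]
print(modify(base_plan, {"gather_gold"}, {"gather_gold", "defend_base"}))
```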

Plan Storage

  • To place new plans in memory, the STORER needs to index them under the same features that the RETRIEVER uses to find them:
    • Goals they satisfy
    • Situations in which they are appropriate

Plan Repair

  • Given that the planner is going to make mistakes, we have to give it some mechanism for repairing the faulty plans it builds. This mechanism will be called the REPAIRER
  • Input to a REPAIRER:
    • Faulty plan
    • Some description of the fault itself
      • The desired state that it has failed to achieve
      • The undesired state that it has caused to come about
  • How can the planner get this information?
    • Run the plan and examine the results
    • Run a simulation of the plan and use its results to diagnose errors
    • Ask an outside source whether the plan will do what it wants it to do
  • The REPAIRER is going to have some vocabulary for describing plan failures that can be used to index methods for repairing the plan itself
  • The relationship between problems and repair is like the relationship between goals and plans
  • Plans are indexed under problems they solve and repair methods are indexed under the types of problems they resolve
  • A plan repairer needs access to two types of knowledge:
    • Vocabulary for describing plan failures
    • A set of repair methods that correspond to those descriptions (see the sketch after this list)
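
A minimal sketch of that idea: repair methods indexed under failure descriptions, mirroring the way plans are indexed under the goals they satisfy. The failure names and repair strategies are hypothetical placeholders.

```python
# Hypothetical sketch: the REPAIRER looks up repair strategies by failure description.

REPAIR_METHODS = {
    "precondition_unmet": ["insert_setup_step", "reorder_steps"],
    "resource_exhausted": ["add_resource_gathering", "remove_optional_steps"],
}

def apply_strategy(plan, strategy):
    # Placeholder: a real repairer would transform the plan; here we just record the attempt.
    return plan + [f"<{strategy}>"]

def repair(plan, failure_description):
    for strategy in REPAIR_METHODS.get(failure_description, []):
        candidate = apply_strategy(plan, strategy)
        if candidate is not None:       # a real repairer would test the candidate here
            return candidate
    return None                         # no known repair for this kind of failure

print(repair(["build_barracks", "train_soldiers"], "precondition_unmet"))
```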

Learning from Failure

  • There is a problem of how to anticipate that a problem will arise again
  • The fact that a plan solves a particular problem is useless unless the planner can anticipate that the problem will arise
  • To decide which features in a situation are to blame for failure, the ASSIGNER needs to be able to describe the causes of failure. The more extensive its vocabulary for this description, the more exact its credit assignment will be
  • During anticipation, if one of the features that previously caused a failure is strongly present in the current situation, then the plan designed to avoid that failure is used

Problem Anticipation

  • The job of an anticipator is to look at the planner’s goals and the current situation that surrounds them and decide if there is anything in the situation that is predictive of a problem before any other planning is done
  • The whole point of the anticipator is to provide:
    • Information about problems that have to be avoided
    • Information that will be used by retriever to find plan that does so
  • To anticipate a problem on the basis of surface features, the anticipator needs the base of information built by the assigner

Learning from Planning

  • A CBPer learns by correctly indexing its planning experience in memory
  • CBP learning theory covers:
    • Learning new plans
    • Learning new problems
    • Learning new solutions
  • A CBPer learns new plans, the features that predict failures and past repairs to faulty plans that it can reuse

Posted in Case-Based Planning | 2 Comments »

Summary of Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches

Posted by Ogail on October 26, 2009

  • Abstract:
    • Case-based reasoning is a recent approach to problem solving and learning that has got a lot of attention over the last few years. Originating in the US, the basic idea and underlying theories have spread to other continents, and we are now within a period of highly active research in case-based reasoning in Europe, as well. This paper gives an overview of the foundational issues related to case-based reasoning, describes some of the leading methodological approaches within the field, and exemplifies the current state through pointers to some systems. Initially, a general framework is defined, to which the subsequent descriptions and discussions will refer. The framework is influenced by recent methodologies for knowledge level descriptions of intelligent systems. The methods for case retrieval, reuse, solution testing, and learning are summarized, and their actual realization is discussed in the light of a few example systems that represent different CBR approaches. We also discuss the role of case-based methods as one type of reasoning and learning method within an integrated system architecture.
  • Introduction:
    • CBR is able to utilize the specific knowledge of previously experienced, concrete problem situations (cases).
    • Why read this paper? (why these papers in particular?):
      • Initially specify a general descriptive framework to which the subsequent method descriptions will refer
      • We put a strong emphasis on the methodological issues of case-based reasoning, and less on a discussion of suitable application types and on the advantages of CBR over rule-based systems
      • We strive to maintain a neutral view of existing CBR approaches, unbiased by a particular ‘school’
    • What is case-based reasoning? Basically: To solve a new problem by remembering a previous similar situation and by reusing information and knowledge of that situation
    • Part of the foundation for the case-based approach is its psychological plausibility (presentation tip)
    • Case-based reasoning and analogy are sometimes used as synonyms
    • Problem solving is not necessarily the finding of a concrete solution to an application problem, it may be any problem put forth by the user. For example, to justify or criticize a solution proposed by the user, to interpret a problem situation, to generate a set of possible solutions, or generate expectations in observable data are also problem solving situations (presentation tip: what is the relation between problem solving and CBR? Problem solving means solving a problem, and this doesn't do that)
    • The learning approach of case-based reasoning is sometimes referred to as case-based learning. This term is sometimes also used synonymously with example-based learning, and may therefore point to classical induction and other generalization-driven learning methods. Hence, we will here use the term case-based reasoning both for the problem solving and learning part, and explicitly state which part we talk about whenever necessary (you wrote that your project includes learning; in what sense is CBR learning?)
    • When a problem is successfully solved, the experience is retained in order to solve similar problems in the future. When an attempt to solve a problem fails, the reason for the failure is identified and remembered in order to avoid the same mistake in the future
    • How the paper is structured:
      • Brief historical overview of CBR field
      • Describing CBR framework and methods
      • Sections 4-8: discuss representation issues and methods related to the four main tasks of case-based reasoning
      • In chapter 9 we look at CBR in relation to integrated architectures and multistrategy problem solving and learning
  • History of CBR:
    • Several applications have been developed that helped in CBR's development
    • Read this section from the paper (pages 4-5)
  • Fundamentals of CBR Methods:
    • Central tasks that all case-based reasoning methods have to deal with are:
      • To identify the current problem situation
      • Find a past case similar to the new one
      • Use that case to suggest a solution to the current problem
      • Evaluate the proposed solution
      • Update the system by learning from this experience.
    • Classification of Terms Related to CBR:
      • Exemplar-Based Reasoning:
        • Solving a problem is a classification task, i.e. finding the right class for the unclassified exemplar. The class of the most similar past case becomes the solution to the classification problem. The set of classes constitutes the set of possible solutions. Modification of a solution found is therefore outside the scope of this method
        • Instance-Based Reasoning:
          • This is a specialization of exemplar-based reasoning into a highly syntactic CBR-approach
          • The representation of the instances is usually simple (e.g. feature vectors), since a major focus is to study automated learning with no user in the loop
          • Basically, this is a non-generalization approach to the concept learning problem addressed by classical, inductive machine learning methods
        • Memory-Based Reasoning:
          • Adds the parallel processing abilities to one of the 2 previous approaches
        • Case-Based Reasoning:
          • A typical case is usually assumed to have a certain degree of richness of information contained in it, and a certain complexity with respect to its internal organization
          • That is, a feature vector holding some values and a corresponding class is not what we would call a typical case description
          • They are able to modify, or adapt, a retrieved solution when applied in a different problem solving context
          • Paradigmatic case-based methods also utilize general background knowledge – although its richness, degree of explicit representation, and role within the CBR processes vary
        • Analogy-Based Reasoning:
          • ABR is often used to characterize methods that solve new problems based on past cases from a different domain, while typical case-based methods focus on indexing and matching strategies for single-domain cases
          • Research on analogy reasoning is therefore a subfield concerned with mechanisms for identification and utilization of cross-domain analogies
          • Mapping problem: Finding a way to transfer, or map, the solution of an identified analogue (called source or base) to the present problem (called target)
    • Description of the Framework:
      • Process model of CBR cycle (Process Cycle View):
        • Here we identify the main subprocesses of a CBR cycle, their interdependencies and products
      • Task method structure for CBR (Task Oriented View):
        • Here a task decomposition and related problem solving methods are described
    • CBR Cycle (a minimal sketch of this loop follows the list):
      • RETRIEVE the most similar case or cases
      • REUSE the information and knowledge in that case(s) to solve the problem
      • REVISE the proposed solution
      • RETAIN the parts of this experience likely to be useful for future problem solving
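
A minimal sketch of the four REs as a loop; the case representation, similarity measure, and evaluation function are placeholders of my own and are not taken from the paper.

```python
# Hypothetical sketch of the CBR cycle: RETRIEVE, REUSE, REVISE, RETAIN.
from dataclasses import dataclass

@dataclass
class Case:
    problem: dict
    solution: str

case_base = [Case({"symptom": "overheating"}, "add_cooling")]

def similarity(p1, p2):
    shared = set(p1.items()) & set(p2.items())
    return len(shared) / max(len(p1), len(p2), 1)

def cbr_cycle(new_problem, evaluate):
    retrieved = max(case_base, key=lambda c: similarity(c.problem, new_problem))  # RETRIEVE
    proposed = retrieved.solution                                                 # REUSE (no adaptation here)
    revised = proposed if evaluate(new_problem, proposed) else None               # REVISE
    if revised is not None:
        case_base.append(Case(new_problem, revised))                              # RETAIN
    return revised

print(cbr_cycle({"symptom": "overheating", "load": "high"}, lambda p, s: True))
```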

  • A Hierarchy of CBR Tasks:
    • At the knowledge level, a system is viewed as an agent which has goals, and means to achieve its goals
    • A system description can be made from three perspectives: Tasks, methods and domain knowledge models.
    • Tasks are set up by the goals of the system, and a task is performed by applying one or more methods. For a method to be able to accomplish a task, it needs knowledge about the general application domain as well as information about the current problem and its context.


  • CBR Problem Areas:
    • Knowledge representation
    • Retrieval methods
    • Reuse methods
    • Revise methods
    • Retain methods
  • Representation of Cases:
    • Problem Definition:
      • The representation problem in CBR is primarily the problem of
        • Deciding what to store in a case
        • Finding an appropriate structure for describing case contents
        • Deciding how the case memory should be organized and indexed for effective retrieval and reuse
        • How to integrate the case memory structure into a model of general domain knowledge, to the extent that such knowledge is incorporated
    • Common Case-Memory Models:
      • Dynamic Memory Model:
        • The case memory in this model is a hierarchical structure of what is called ‘episodic memory organization packets’ or generalized episodes
        • The basic idea is to organize specific cases which share similar properties under a more general structure (a generalized episode)
        • A generalized episode (GE) contains three different types of objects:
          • Norms: are features common to all cases indexed under a GE
          • Cases:
          • Indices:
            • Are features which discriminate between a GE’s cases
            • An index may point to a more specific generalized episode, or directly to a case.
            • An index is composed of two terms: An index name and an index value
            • An index value may only point to a single case or a single generalized episode
          • Similarity assessment criteria (to match a case) can in turn be used to guide the search – for example by identifying which indexes to follow first if there is a choice to be made
          • During storing of a new case, when a feature of the new case matches a feature of an existing case, a generalized episode is created. The two cases are then discriminated by indexing them under different indices below this generalized episode (the same applies to GEs). See the sketch after this list
          • This is the reason for naming it dynamic
          • In CASEY, failure features are also stored within each case
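
A minimal sketch of a generalized episode under simplifying assumptions: norms are a feature dictionary, indices map (name, value) pairs to either a case or a more specific GE, and the re-indexing performed on a collision is heavily simplified compared to the real dynamic memory model.

```python
# Hypothetical sketch of a generalized episode (GE) with norms and discriminating indices.

class GeneralizedEpisode:
    def __init__(self, norms):
        self.norms = norms        # features common to all cases indexed below this GE
        self.indices = {}         # (index name, index value) -> case name or sub-GE

    def insert(self, case_name, features):
        for name, value in features.items():
            if self.norms.get(name) == value:
                continue                          # shared with the norms, not discriminating
            key = (name, value)
            entry = self.indices.get(key)
            if entry is None:
                self.indices[key] = case_name
            elif isinstance(entry, GeneralizedEpisode):
                entry.insert(case_name, features)
            else:
                # Two cases share this feature: promote it into a new, more specific GE
                # and index both cases below it (simplified re-discrimination).
                sub = GeneralizedEpisode({**self.norms, name: value})
                sub.indices[("id", entry)] = entry
                sub.indices[("id", case_name)] = case_name
                self.indices[key] = sub

root = GeneralizedEpisode(norms={"domain": "rts"})
root.insert("case1", {"domain": "rts", "map": "islands"})
root.insert("case2", {"domain": "rts", "map": "islands", "race": "orc"})
```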

  • Disadvantages:
    • The indexing scheme is redundant, since there are multiple paths to a particular case or GE. This is illustrated in the figure by the indexing of case1
  • Advantages:
    • The same algorithm is used for searching, retrieving and adding new cases
    • Human readable
  • Category and Exemplar Model:
    • Here cases are also referred to as exemplars
    • The psychological and philosophical basis of this method is the view that ‘real world’, natural concepts should be defined extensionally. Further, different features are assigned different importances in describing a case’s membership to a category. Any attempt to generalize a set of cases should – if attempted at all – be done very cautiously
    • Structure of memory:
      • The case memory is embedded in a network structure of categories, cases, and index pointers.
      • Each case is associated with a category.
      • An index may point to a case or a category. The indices are of three kinds:
        • Remindings:
          • Feature links pointing from problem descriptors (features) to cases or categories
        • Exemplar Links:
          • Case links pointing from categories to their associated cases
        • Difference Links:
          • Links pointing from cases to neighboring cases that differ in only one or a small number of features
      • A feature is, generally, described by a name and a value.
      • A category’s exemplars are sorted according to their degree of prototypicality in the category
      • Here the categories are inter-linked within a semantic network, which also contains the features and intermediate states (e.g. subclasses of goal concepts) referred to by other terms (see the sketch below)
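
A minimal sketch of these structures; the categories, cases, features, and prototypicality values below are invented for illustration.

```python
# Hypothetical sketch of the category-exemplar memory: exemplars sorted by
# prototypicality inside a category, reminding links from features, and a
# difference link between two near-identical cases.
from collections import defaultdict

categories = {"rush_opening": []}      # category -> [(prototypicality, exemplar), ...]
remindings = defaultdict(list)         # feature -> cases it is a reminding of
difference_links = {}                  # case -> (neighbor case, differing feature)

def add_exemplar(category, case_name, features, prototypicality):
    categories[category].append((prototypicality, case_name))
    categories[category].sort(reverse=True)        # most prototypical exemplar first
    for feature in features:
        remindings[feature].append(case_name)      # reminding (feature) links

add_exemplar("rush_opening", "case_a", {"early_barracks", "few_workers"}, 0.9)
add_exemplar("rush_opening", "case_b", {"early_barracks", "tower_rush"}, 0.6)
difference_links["case_b"] = ("case_a", "tower_rush")   # differs in a single feature
print(categories, dict(remindings), difference_links)
```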

  • Advantages:
    • Supports background knowledge
  • Disadvantages:
    • Complex calculations are required to retrieve a case
  • Case Retrieval:
    • Retrieval Criteria:
      • Syntactical similarities among problem descriptors
        • Syntactic similarity assessment – sometimes referred to as a “knowledge-poor” approach – has its advantage in domains where general domain knowledge is very difficult or impossible to acquire
      • Other approach retrieves based on deeper semantical similarities
        • semantical oriented approaches – referred to as “knowledge-intensive” – are able to use the contextual meaning of a problem description in its matching, for domains where general domain knowledge is available
    • Identify Feature:
      • Knowledge model: knowledge of cases and general knowledge
    • Initially Match:
      • This task is divided into 2 subtasks:
        • An initial matching process: which retrieves a set of plausible candidates
        • A more elaborate process of selecting the best one among these
      • Similarity Assessment Techniques:
        • Trying to understand the problem more deeply, and using the goals, constraints, etc. in the similarity assessment
        • Weigh the problem descriptors according to their importance for characterizing the problem
    • Case Reuse:
      • Here we take care of two aspects:
        • Differences between past and current case
        • What part of retrieved case can be transferred to the new case
    • Adaptation:
      • Differences between transformational reuse and derivational reuse
      • Transformational reuse does not look at how a problem is solved but focuses on the equivalence of solutions; this requires a strong domain-dependent model in the form of transformational operators {T} plus a control regime to organize the operators' application
      • Derivational reuse looks at how the problem was solved in the retrieved case. The retrieved case holds information about the method used for solving the retrieved problem including a justification of the operators used, subgoals considered, alternatives generated, failed search paths, etc.
    • Case Revision:
      • At the start of this task the learned case is in an intermediate stage and marked as a non-evaluated case
      • The case should be evaluated using any of the techniques stated before. A strong simulator could help here
      • Any new solution is passed to the repair module, which revises it
    • Retainment:
      • Extract:
        • Here we are concerned with extracting the new info in the solution (could be an entirely new case, could be an explanation, could be a method, etc.)
      • Indexes:
        • Determining indices is actually a knowledge acquisition problem, and should be analyzed as part of the domain knowledge analysis and modeling step
        • A trivial solution to the problem is of course to use all input features as indices. This is the approach of syntax-based methods within instance-based and memory-based reasoning. In the memory-based method of CBR-Talk [Stanfill-86], for example, relevant features are determined by matching, in parallel, all cases in the case-base, and filtering out features that belong to cases with few features in common with the problem case
        • The indices could be adapted according to which features were used to extract the solution
    • Integrated Approaches:
      • Sometimes we can integrate a model-based method to solve problems that the CBR system fails on
      • In BOLERO, the rule-based level contains domain knowledge (how to deduce plausible diagnoses from patient facts) while the meta-level contains strategic knowledge (it plans, from all possible goals, which are likely to be successfully achieved). The case-based planner is therefore used to control the space searched by the rule-based level, achieving a form of speed-up learning
  • Conclusion:
    • The development trends of CBR methods can be grouped around four main topics:
      • Integration with other learning methods
      • Integration with other reasoning components
      • Incorporation into massive parallel processing
      • Method advances by focusing on new cognitive aspects.
  • Questions:
    • What are the differences between EBR and IBR?
    • That is, a feature vector holding some values and a corresponding class is not what we would call a typical case description!! So what exactly is a case, then? (Page 7)

Posted in Case-Based Reasoning | 2 Comments »

Our Project’s Research Current Situation – 25 October 2009

Posted by merothehero on October 25, 2009

We have come to the most crucial part of our project's life cycle. It's the CREATIVE part, concerned with creating new techniques or developing existing ones using one or more approaches.

It's now clear to us that offline learning (learning done by the company developing the game, before the game is released) is done through Evolutionary Algorithms, and also through Case-Based Planning (based on case-based reasoning) by learning from simulation. At the moment we do not wish to enter this field; our main goal is Real-Time Learning.

On the other hand, online learning is done through Reinforcement Learning as well as Case-Based Planning. Most of the online learning papers perform the learning just after the game is finished; only a few do Real-Time Learning (learning while playing).

It seems that the most widely used approaches are Reinforcement Learning and Case-Based Reasoning. Genetic Algorithms are almost exclusively used for offline learning.

More than a week ago, we divided ourselves into two groups: Saqr and I are studying Reinforcement Learning and trying to implement it, while Abdurrahman and A. Atta are studying Case-Based Planning and trying to implement it.

Saqr and I are currently:

  • Mastering RL through the ultimate Reinforcement Learning book “RL : An Intro”
  • Understanding Eric Kok’s Paper “Adaptive Reinforcement Learning agents in RTS Games – 2008” as well as its implementation
  • Understanding the paper “Transfer Learning in RTS Games using Hybrid CBR/RL”
  • Implementing a simulation for a Mini-RTS Game to be able to test our Techniques on it.

While Abdurrahman and A. Atta are currently:

  • Understanding more Case-Based Planning.
  • Understanding more CBP papers and their implementation.
  • Starting to think about a design for a platform with CBP to be used with Learning in BosWars (Our Open Source Game)

We have till the 11th of November to complete at least 80% of these Tasks. So may ALLAH be with us!

Posted in Project Plan and Tasks | 3 Comments »

Reinforcement Learning – A Fast Overview

Posted by merothehero on October 25, 2009

Reinforcement Learning is one of the Machine Learning techniques. It may be considered a combination of supervised and unsupervised learning; it may also be considered an approach to machine intelligence that solves problems using both dynamic programming and supervised learning.

Reinforcement learning appeals to many researchers because of its generality. In RL, the computer is simply given a goal to achieve. The computer then learns how to achieve that goal by trial-and-error interactions with its environment. Thus, many researchers are pursuing this form of machine intelligence and are excited about the possibility of solving problems that have been previously unsolvable.

The Reinforcement Learner starts in a certain state and can choose, from a list of actions, an action that transfers it to another state. With each new state it reaches, the Reinforcement Learner gains a reward or a punishment. The rewards and punishments affect the values of the states and the actions. The aim of the Reinforcement Learner is to reach the state with the highest value by gaining more rewards in the long run. The Reinforcement Learner uses a certain policy for choosing its actions and assigning values to its states.

One of the most crucial problems in RL is the exploration-exploitation problem: whether to try a new, unguaranteed action (which could be a better one) or to select the best known action (which might not be the REAL best one). There are many algorithms used to solve this.
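
One of the simplest answers is epsilon-greedy action selection; a minimal sketch follows (the action values are made up):

```python
# Epsilon-greedy: with a small probability explore a random action,
# otherwise exploit the best-known action.
import random

def epsilon_greedy(action_values, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(action_values))        # explore
    return max(action_values, key=action_values.get)     # exploit

q = {"attack": 0.4, "expand": 0.7, "defend": 0.1}
print(epsilon_greedy(q))   # usually "expand", occasionally a random alternative
```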

Another Problem is what’s called the credit assignment problem where we should determine what rewards should be given to which behaviors.

There are 3 main Reinforcement Learning Approaches:

Dynamic Programming

1- Needs a complete model of the environment.

2- Uses bootstrapping (Learn their estimates on the basis of other estimates or learn a guess from a guess!)

Monte-Carlo

1- Needs only sample experience, not a complete model of the environment

2-doesn’t use bootstrapping

3- Rewards are given only after a complete episode (an episode may be a game in chess or a lap in a racing game)

Temporal-Difference Learning

1- You need not wait until the end of the episode (this is very useful because some applications have very long episodes (such as our domain) and some have no episodes at all)

2- Uses bootstrapping (learns a guess from the next guess without waiting for an actual outcome)

3- Does not require a model of the environment, unlike dynamic programming.

4-It’s considered to be a combination of dynamic programming and Monte-Carlo
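
A minimal sketch of tabular TD(0) value estimation on a made-up chain of states, illustrating the bootstrapped update that does not wait for the end of an episode:

```python
# TD(0): update a state's value toward reward + discounted value of the next state.
from collections import defaultdict
import random

values = defaultdict(float)      # state -> estimated value
alpha, gamma = 0.1, 0.9          # learning rate, discount factor

def td0_update(state, reward, next_state):
    # Learn a guess from the next guess: no complete episode or environment model needed.
    values[state] += alpha * (reward + gamma * values[next_state] - values[state])

# Toy random walk over states 0..4; reward 1 only when reaching state 4.
for _ in range(1000):
    s = 0
    while s != 4:
        s_next = min(s + random.choice([1, 2]), 4)
        td0_update(s, 1.0 if s_next == 4 else 0.0, s_next)
        s = s_next
print({s: round(v, 2) for s, v in sorted(values.items())})
```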

We will talk later about Monte-Carlo and Temporal-Difference Learning in Detail!

Posted in Reinforcement Learning | 5 Comments »

On-Line Case-Based Plan Adaptation for RTS Games

Posted by Ahmad Atta on October 22, 2009

Abstract

After reading about Case-Based Reasoning, the second step we should take is understanding Case-Based Planning. My reference in this post will be the paper by Neha Sugandh, Santiago Ontanon, and Ashwin Ram titled "On-Line Case-Based Plan Adaptation for Real-Time Strategy Games". This paper presents techniques that allow the system to create and adapt plans in an efficient and effective manner while performing the task.

Case-Based Planning in WARGUS

In this section we will briefly describe the Darmok system, which applies the plan adaptation techniques. Darmok was designed to play the game of WARGUS. In order to achieve this, Darmok learns behaviors from expert demonstrations, and then uses case-based planning to play the game, reusing the learnt behaviors. Figure 2 of the paper shows an overview of this case-based planning approach.

Darmok System

Basically, we divide the process into two main stages:

• Behavior acquisition:

During this first stage, an expert plays a game of WARGUS and the trace of that game is stored. Then, the expert annotates the trace explaining the goals he was pursuing with the actions he took while playing. Using those annotations, a set of behaviors are extracted from the trace and stored as a set of cases.

• Behavior Execution:

The execution engine consists of several modules that together maintain a current plan to win the game (a minimal sketch of one execution cycle follows the list).

  1. The Plan Execution module: Executes the current plan, and updates its state (marking which actions succeeded or failed).
  2. The Plan Expansion module: Identifies open goals (goals that do not yet have an assigned behavior) in the current plan and expands them.
  3. The Behavior Retrieval module: Given an open goal and the current game state, it retrieves the most appropriate behavior to fulfill the open goal.
  4. The Plan Adaptation module (the focus of this paper): Adapts the retrieved plans according to the current game state.
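
To make the control flow concrete, here is a hypothetical sketch of how one cycle through these four modules might look; the data structures and module bodies are placeholders of mine, not the actual Darmok implementation.

```python
# Hypothetical sketch of one execution-engine cycle (placeholder modules).

plan = {"win_game": None}        # goal -> assigned behavior (None = open goal)

def plan_execution(plan, state):        # 1. execute current behaviors, update their status
    pass                                # (omitted: sending basic actions to the game engine)

def plan_expansion(plan):               # 2. identify open goals
    return [g for g, behavior in plan.items() if behavior is None]

def behavior_retrieval(goal, state):    # 3. pick the most appropriate stored behavior
    return f"behavior_for_{goal}"       # placeholder retrieval

def plan_adaptation(behavior, state):   # 4. adapt the retrieved behavior to the current state
    return behavior                     # placeholder: returned unchanged

def execution_cycle(plan, state):
    plan_execution(plan, state)
    for goal in plan_expansion(plan):
        plan[goal] = plan_adaptation(behavior_retrieval(goal, state), state)

execution_cycle(plan, {"cycle": 0})
print(plan)    # {'win_game': 'behavior_for_win_game'}
```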

————————–

Posted in Case-Based Planning | Leave a Comment »

Online Case-based planning (Part 2)

Posted by Ahmad Atta on October 22, 2009

Plan Representation Language

1. A behavior has two main parts:

  • The declarative part has the purpose of providing information to the system about the intended use of the behavior. The declarative part of a behavior consists of three parts: a goal, a set of preconditions, and a set of alive conditions.
  • The procedural part contains the generation module to generate behaviors. The procedural part of a behavior consists of 1) Executable code (basic actions) and 2) Subgoal (that need to be further expanded).

2. The state of a behavior can be:

  • pending (when it still has not started execution)
  • executing
  • succeeded
  • failed.

3. When a goal still doesn’t have an assigned behavior, we say that the goal is open.

4. A goal that has a behavior assigned and where the behavior has failed is also considered to be open.

5. Open goals can be either ready or waiting.

Run-Time Plan Expansion and Execution

1- A partial plan tree in our framework is represented as a tree consisting of two types of nodes: goals and behaviors. In the remainder of this paper we will refer to the partial plan tree simply as the “plan”.

2- Initially, the plan consists of a single goal: “win the game”.

3- The plan expansion module asks the behavior generation module to generate a behavior for that goal.

4- That behavior might have several subgoals, for which the plan expansion module will again ask the behavior module to generate a behavior for it.

5- That behavior is sent to the behavior adaptation module, and then inserted into the current plan, marked as pending.

6- The plan execution module has two main functionalities: check for basic actions that can be sent to the game engine and check the status of plans that are in execution:

7- For each pending behavior, the execution module evaluates the preconditions, and as soon as they are met, the behavior starts its execution.

8- Basic actions that are ready, with all their preconditions satisfied, are sent to WARGUS to be executed. If the preconditions are not satisfied, the behavior is sent back to the adaptation module to see if the plan can be repaired. If it cannot, then the behavior is marked as failed.

9- Whenever a basic action succeeds or fails, the execution module updates the status of the behavior that contained it. When a basic action fails, the behavior is marked as failed, and thus its corresponding goal is open again.

10- If the alive conditions of an executing behavior are not satisfied, the behavior is marked as failed. If the success conditions of a behavior are satisfied, the behavior is marked as succeeded.

11- Finally, if a behavior is about to be executed and the current game state has changed since the time the behavior generation module generated it, the behavior is handed back to the plan adaptation module to make sure that the plan is adequate for the current game state.

On-Line Case-Based Plan Adaptation

Plans are composed of four basic types of elements:

  1. Actions, which are the basic actions that can be executed.
  2. Parallel plans which consist of component plans which can be executed in parallel.
  3. Sequential plans which consist of component plans which need to be executed in sequence.
  4. Sub-goal plans requiring further expansion.

We can deduce dependencies between different plans using their preconditions and success conditions. We specifically consider only plans which are completely expanded and do not contain a sub-goal which further needs to be expanded. We generate a plan dependency graph using the preconditions and success conditions of the actions. This plan dependency graph informs the plan adaptation process.

Plan Dependency Graph Generation

1- Each action has preconditions and success conditions, and each goal has only a set of success conditions.

2- Further, every plan has a root node that is always a subgoal.

3- The plan dependency graph generation begins by:

  • Taking the success conditions of the root of the plan.
  • Finding out which actions in the plan contribute to the achievement of these success conditions.
  • These actions are called direct sub-plans for the sub-goal.

4- All direct sub-plans are added to the plan dependency graph. Then the plan dependency graph generator analyzes the preconditions of each of these direct sub-plans.

5- Let B1 be an action in the plan which contributes to satisfying the preconditions of a direct sub-plan D1. Then, it adds B1 to the plan dependency graph and forms a directed edge from B1 to D1. This directed edge can be considered as a dependency between B1 and D1.

6- This is done for each of the direct sub-plans. Further, this is repeated for each of the actions which are added to the graph. This process results in the formation of a plan dependency graph with directed edges between actions, representing that one action contributes to the achievement of the preconditions of another action (a minimal sketch of this process follows).
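
A minimal sketch of this backward sweep under simplifying assumptions: conditions are plain sets and condition matching is reduced to set intersection (the paper's ps-matcher, described next, is rule-based).

```python
# Hypothetical sketch: build a plan dependency graph by working backwards from the
# root goal's success conditions; add an edge B1 -> D1 whenever action B1
# contributes to a precondition of an already-added action D1.

def build_dependency_graph(actions, root_success_conditions):
    # actions: name -> {"pre": set of preconditions, "succ": set of success conditions}
    edges = set()
    direct = [a for a, c in actions.items() if c["succ"] & root_success_conditions]
    added, frontier = set(direct), list(direct)       # direct sub-plans of the root goal
    while frontier:
        target = frontier.pop()
        for name, cond in actions.items():
            if name != target and cond["succ"] & actions[target]["pre"]:
                edges.add((name, target))             # name contributes to target's preconditions
                if name not in added:
                    added.add(name)
                    frontier.append(name)
    return added, edges

actions = {
    "build_barracks": {"pre": {"have_gold"},     "succ": {"have_barracks"}},
    "train_soldiers": {"pre": {"have_barracks"}, "succ": {"have_army"}},
    "gather_gold":    {"pre": set(),             "succ": {"have_gold"}},
}
print(build_dependency_graph(actions, {"have_army"}))
```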

Precondition-Success Condition Matcher (ps-matcher)

  • In our system, we have developed a rule-based ps-matcher that incorporates a collection of rules for the appropriate condition matching. For example, our system has six different conditions which test the existence of units or unit types.
  • In some cases it is not clear whether a relation exists or not. However it is necessary for our system to capture all of the dependencies, even if some non-existing dependencies are included. Clearly, executing extra actions is better than missing some needed actions.

The plan adaptation after the creation of the plan dependency graph has two sub-processes: elimination of unnecessary actions, and insertion of extra actions.

————————-

Posted in Case-Based Planning | Leave a Comment »

Introduction to Pattern Recognition

Posted by Ogail on October 15, 2009

  • A model is anything that has a specific description in mathematical form
  • Segmentation is an operation to isolate objects from each other in images
  • Thinking business-wise, you can ignore errors that are acceptable to customers (salmon / sea bass example)
  • Pattern classification is perhaps the most important subfield of decision theory
  • A feature vector is a vector from feature space that contains the actual values of the current model
  • When selecting features in feature vector don’t choose redundant features
  • Though, our satisfaction would be premature because the central aim of designing a classifier is to suggest actions when presented with novel patterns. This is the issue of generalization
  • Sometimes the classifier is over-tuned to the training samples; one solution here is to supply the classifier with more training samples to obtain a better estimate
  • Complex classifiers could be simplified if the process doesn’t require this level of complexity
  • Assuming that we somehow manage to optimize this trade off, can we then predict how well our system will generalize to new patterns? These are some of the central problems in statistical pattern recognition
  • (“Entities are not to be multiplied without necessity”). Decisions based on overly complex models often lead to lower accuracy of the classifier
  • There is no general-purpose pattern recognition device (no GPS, in the sense of Simon and Newell's General Problem Solver)
  • Each recognition technique is suitable for a specific domain of problem:
    • Patterns that have statistical properties are more suitable using statistical pattern recognition
    • Patterns with noise are more suitable for neural network pattern recognition
    • If the model consists of some set of crisp logical rules, then we employ the methods of syntactic pattern recognition
  • A central aspect in PR problem is to achieve good representation
    • Features of each sample are close to its category
    • Simplify number of features
    • Robust features: relatively insensitive to noise or other errors
    • In practical applications we may need the classifier to act quickly, or use few electronic components, memory or processing steps
  • A central technique, when we have insufficient training data, is to incorporate knowledge of the problem domain. Indeed the less the training data the more important is such knowledge (analysis by synthesis)
  • What about classifying patterns depending on their function (chair example)?
  • In acts of associative memory, the system takes in a pattern and emits another pattern which is representative of a general group of patterns.

 

Sub Problems of Pattern Recognition

  • an ideal feature extractor would yield a representation that makes the job of the classifier trivial
  • How do we know which features are most promising? Are there ways to automatically learn which features are best for the classifier? How many shall we use?
  • We define noise in very general terms: any property of the sensed pattern due not to the true underlying model but instead to randomness in the world or the sensors
  • While an overly complex model may allow perfect classification of the training samples, it is unlikely to give good classification of novel patterns — a situation known as overfitting
  • One of the most important areas of research in statistical pattern classification is determining how to adjust the complexity of the model — not so simple that it cannot explain the differences between the categories, yet not so complex as to give poor classification on novel patterns
  • Model selection is concerned with how we are to know when to reject a class of models and try another one
  • How can we make model selection automated?
  • Prior knowledge is knowledge about the problem, known beforehand, that helps in classification
  • Occlusion!
  • How should we train a classifier or use one when some features are missing?
  • This is the problem of subsets and supersets — formally part of mereology, the study of part/whole relationships. It is closely related to that of prior knowledge and segmentation. In short, how do we recognize or group together the “proper” number of elements — neither too few nor too many
  • Regarding invariance: here we try to build a classifier that is invariant to transformations such as rotation
  • How might we insure that our pattern recognizer is invariant to such complex changes?
  • Evidence pooling is the idea of having several classifiers used to categorize one sample. How can we resolve the problem of disagreeing classifiers?
    • How should a “super” classifier pool the evidence from the component recognizers to achieve the best decision?
  • Costs and Risks
    • How do we incorporate knowledge about such risks and how will they affect our classification decision?
    • Can we estimate the total risk and thus tell whether our classifier is acceptable even before we field it?
  • Computational Complexity:
    • What is the tradeoff between computational ease and performance?
    • How can we optimize within such constraints?
  • Throughout this book, we shall see again and again how methods of learning relate to these central problems, and are essential in the building of classifiers.

 

Learning and Adaptation

  • Learning refers to some form of algorithm for reducing the error on a set of training data
  • Learning Forms:
    • Supervised Learning:
      • How can we be sure that a particular learning algorithm is powerful enough to learn the solution to a given problem and that it will be stable to parameter variations?
      • How can we insure that the learning algorithm appropriately favors “simple” solutions rather than complicated ones
    • Unsupervised Learning (Clustering):
      • System forms clusters or “natural groupings” of the input patterns
    • Reinforcement Learning (learning with a critic):
      • This is analogous to a critic who merely states that something is right or wrong, but does not say specifically how it is wrong
  • Team reading: Summary of chapters
  • Questions:
    • What's hypothesis testing?

Posted in Pattern Classification | Leave a Comment »

Fundamentals of CBR Methods

Posted by Ahmad Atta on October 14, 2009

Posted in Case-Based Reasoning | 2 Comments »

Main Activities BosWars AI

Posted by Ogail on October 12, 2009

Hello all,

Find below the main activities in the BosWars AI.

Checking Units Activ

 

Each Second Activ

Making Units Activ

Managing Forces Activ

Managing Resources Activ

Posted in BosWars | 1 Comment »

Class Diagram of AI Code in BosWars

Posted by Ogail on October 12, 2009

Hello all,

Ahmed Atta has created the class diagram of the AI code in BosWars; find the diagram below.

AI Code Class Diagram

Posted in BosWars | Leave a Comment »