Adaptive AI Engine for RTS Games

Discussing the theory and practice

Archive for the ‘Technical Documents’ Category

Contains summaries of the books we read

Case-Based Planner Architecture

Posted by MHesham on December 4, 2010

Kristian Hammond. 1994. Case-Based Planning: A Framework for Planning from Experience. In Cognitive Science Journal.

After reading the first 20 pages of the paper, I felt that an architecture for the planning framework was needed. The diagram below represents the architecture of a basic Case-Based Planner, inferred from the author's description in the paper.

[Figure: CBP Architecture]

Thanks to Abdelrahman for the architecture review.

Posted in Case-Based Planning | Leave a Comment »

Paper Read: Case-Based Reasoning- Foundational Issues, Methodological Variations, and System Approaches

Posted by MHesham on November 28, 2010

Agnar Aamodt, Enric Plaza. 1994. Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. In AI COMMUNICATIONS.

The paper presents:

  • An overview of the field in terms of its underlying foundations, its current state of the art, and future trends.
  • A description of CBR principles, methods, and systems within an analytical scheme.


  • Case Based Reasoning (CBR):
    • CBR is a problem-solving paradigm that, instead of relying solely on general knowledge of the problem domain, is able to utilize the specific knowledge captured in previously experienced cases.
    • A new problem is solved by finding a similar past case, and reusing it in the new problem situation.
    • CBR is an approach to incremental learning, since new experience is retained each time a problem has been solved, making it available for future problems.
  • Case:
    • A case is specific knowledge of a previously experienced, concrete problem situation.
    • In CBR terms, it denotes a problem situation.
    • A previously experienced situation, which has been captured and learned in a way that it can be reused in solving future problems.
    • A previous experience is known as a retained case, solved case, or past case.
    • A new problem to be solved is called a new case or an unsolved case.
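The distinction between retained (solved) cases and new (unsolved) cases can be sketched as a small data structure. This is my own minimal illustration, not a representation from the paper; the field names are invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    """A concrete problem situation: descriptive features plus,
    once solved, the solution and its observed outcome."""
    features: dict
    solution: Optional[str] = None   # filled in when the case is solved
    outcome: Optional[bool] = None   # did the solution work when applied?

    @property
    def is_solved(self) -> bool:
        return self.solution is not None

# A retained (past) case versus a new (unsolved) case
past_case = Case({"fever": "high", "cough": True}, solution="flu", outcome=True)
new_case = Case({"fever": "high", "cough": True})
```

A real CBR system would attach much richer content (context, explanations, links to other cases), but the solved/unsolved distinction above is the core of the vocabulary.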

Case-based problem solving

  • Reasoning by re-using past cases is a powerful and frequently applied way for humans to solve problems.
  • Several studies have given evidence for the dominating role of specific, previously experienced situations in human problem solving.
  • It has been shown that humans use past cases as models when learning to solve problems, particularly in early learning.
  • Other results indicate that the use of past cases is a predominant problem-solving method among experts in many domains as well.
  • Problem solving is not necessarily the finding of a concrete solution to an application problem.
  • The problem posed by the user may instead be to criticize a user solution, generate a set of solutions, interpret a problem situation, or generate expectations from observable data.

Learning in Case-based reasoning

  • CBR is tightly coupled to learning; it is often regarded as a subfield of machine learning.
  • CBR denotes a machine learning paradigm that enables sustained and incremental learning by updating the case-base after a problem has been solved.
  • Learning in CBR occurs as a natural by product of problem solving.
  • When an attempt to solve a problem fails, the reason for the failure is identified and remembered in order to avoid the same mistake in the future.
  • CBR favors learning from experience, because it is easier to learn by retaining a concrete problem-solving experience than by generalizing from it.
  • Effective learning in CBR requires well-worked-out methods in order to:
    • Extract relevant knowledge from the experience.
    • Integrate a case into an existing knowledge structure.
    • Index the case for later matching with a similar case.

Combining cases with other knowledge

  • Human learning and problem solving in general are processes that involve the representation and utilization of several types of knowledge, and the combination of several reasoning methods.

Fundamentals of case-based reasoning methods

  • Central tasks to all CBR methods:
    • Identify the current problem situation.
    • Find a past case similar to the new one.
    • Use the case to suggest a solution to the current problem.
    • Evaluate the proposed solution.
    • Update the system by learning from this experience.

Main types of CBR methods

  • The CBR paradigm covers a range of different methods for organizing, retrieving, and indexing the knowledge retained in past cases.
  • Exemplar-based reasoning:
    • The term is derived from a concept view (conceptual modeling) called “exemplar view”
    • Exemplar is a specific instance of a certain category, which is used to represent the category.
    • A concept is defined extensionally, as the set of its exemplars.
    • CBR methods that learn concepts are called exemplar-based.
    • Solving a problem is a classification task: finding the right class for the unclassified exemplar.
    • The class of the most similar past case becomes the solution for the classification problem.
  • Instance-based reasoning:
    • A specialization of exemplar-based reasoning into a highly syntactic CBR approach.
    • To overcome the lack of guidance from general background knowledge, a large number of instances is used to come close to the concept definition.
  • Memory-based reasoning:
    • This approach emphasizes a collection of cases as a large memory, and reasoning as a process of accessing and searching this memory.
    • The utilization of parallel processing techniques is a characteristic of this method.
  • Case-based reasoning:
    • Case-based reasoning is sometimes used as a generic term for such systems. Typical case-based reasoning methods have some characteristics that distinguish them from the other methods:
      • A typical case is usually assumed to have a certain degree of richness of information contained in it.
      • A certain complexity with respect to its internal organization.
      • Case-based methods are able to modify, or adapt a retrieved solution when applied in different problem solving context.
  • Analogy-based reasoning:
    • Analogy-based reasoning is sometimes used as a synonym for typical case-based reasoning.
    • The term can also be used to characterize methods that solve new problems based on past cases from a different domain, while typical case-based methods focus on past cases from the same domain.
    • Research on analogy reasoning is therefore a subfield concerned with mechanisms for the identification and utilization of cross-domain analogies.
    • The major focus of study has been on the reuse of a past case, that is, the mapping of a solution from an identified analogue (source) to the present problem (target).
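The exemplar-based view described above, where the class of the most similar past case becomes the answer, reduces to a nearest-neighbour classification over stored exemplars. A rough sketch (my own illustration; simple feature overlap stands in for a real similarity measure):

```python
def overlap(a: dict, b: dict) -> int:
    """Count the features that have identical values in both cases."""
    return sum(1 for k, v in a.items() if b.get(k) == v)

def classify(new_features: dict, exemplars: list) -> str:
    """Return the class of the most similar stored exemplar."""
    best = max(exemplars, key=lambda ex: overlap(new_features, ex["features"]))
    return best["class"]

exemplars = [
    {"features": {"wings": True,  "flies": True},  "class": "bird"},
    {"features": {"wings": False, "flies": False}, "class": "mammal"},
]
result = classify({"wings": True, "flies": True}, exemplars)   # -> "bird"
```

Instance-based and memory-based reasoning refine the same idea: the former compensates for the weak similarity measure with many instances, the latter with massively parallel search over the case memory.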

A descriptive framework

The description of CBR methods and systems has two main parts:

  • A process model of the CBR cycle
    • A dynamic model that identifies the main sub-processes of a CBR cycle, their interdependencies, and their products.
  • A task-method structure for CBR
    • A task-oriented view, where a task decomposition and related problem solving methods are described.

The CBR Cycle

  1. Retrieve the most similar case or cases.
  2. Reuse the information and knowledge in that case to solve the problem.
  3. Revise the proposed solution.
  4. Retain the parts of this experience likely to be useful for future problem solving.


  • An initial description of a problem defines a new case.
  • This new case is used to retrieve a case from the previous cases.
  • The new case and the retrieved case are combined into a solved case through reuse, which forms an initial solution to the problem.
  • During revise, the solution is tested for success, e.g., by being applied in the real environment, and repaired if it fails.
  • During retain, the useful experience is kept for future use, and the case base is updated with the newly learned case or with a modification of some existing cases.
  • General domain-dependent knowledge supports the CBR process; this support varies from very weak (possibly none) to very strong, depending on the CBR method used.
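The four-step cycle above can be sketched as a control loop. This is only a schematic outline (the helper functions are placeholders for the method-specific subtasks, not anything prescribed by the paper):

```python
def cbr_cycle(problem, case_base, retrieve, reuse, revise, retain):
    """One pass through the Retrieve/Reuse/Revise/Retain cycle.
    retrieve/reuse/revise are supplied per CBR method; retain
    updates the case base so the experience is available later."""
    retrieved = retrieve(problem, case_base)   # most similar past case
    proposed = reuse(problem, retrieved)       # initial solution
    confirmed = revise(problem, proposed)      # test in the environment, repair
    retain(case_base, problem, confirmed)      # learn for future problems
    return confirmed

# Toy instantiation: exact-match retrieval, identity reuse, no repair needed
case_base = [{"problem": "p1", "solution": "s1"}]
result = cbr_cycle(
    "p1", case_base,
    retrieve=lambda p, cb: next(c for c in cb if c["problem"] == p),
    reuse=lambda p, c: c["solution"],
    revise=lambda p, s: s,
    retain=lambda cb, p, s: cb.append({"problem": p, "solution": s}),
)
```

Note that retain runs even on this trivially solved problem: the case base grows (or is reorganized) on every pass, which is exactly the "learning as a by-product of problem solving" point made earlier.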

A hierarchy of CBR tasks

  • This is a task-oriented view of the CBR model, which describes the detailed mechanisms from the CBR perspective, while the process model gives a global, external view of what is happening.
  • A system is viewed as an agent which has goals, and means to achieve these goals.
  • A description of the system can be made from three perspectives: tasks, methods, and the domain knowledge model.
  • Tasks are set up by the system's goals, and a task is performed by applying one or more methods.
  • For a method to accomplish a task, it needs general knowledge about the application domain as well as information about the current problem and its context.
  • A method is an algorithm that identifies and controls the execution of subtasks, accessing and utilizing the information needed to accomplish the task.

CBR Problem Areas

  • In AI, there are no universal CBR methods suitable for all application domains.
  • The challenge in CBR is to come up with methods that are suited for problem solving and learning in a particular subject domain (e.g., medicine) and for a particular application environment (e.g., heart failure diagnosis).
  • The core problems addressed by CBR research can be grouped into five areas; a solution to them constitutes a CBR method:
    • Knowledge representation.
    • Retrieval methods.
    • Reuse methods.
    • Revise methods.
    • Retain methods.

Representation of Cases

  • A case-based reasoner is heavily dependent on the structure and content of its collection of cases, referred to as the case memory.
  • Problem solving is achieved by recalling a previous experience suitable for solving the current new problem, so the case matching and retrieval processes need to be both effective and efficient.
  • Problems in case representation:
    • Deciding what to store in a case.
    • Finding an appropriate structure for describing the case content.
    • Deciding how the case memory (case-base) should be organized and indexed for fast retrieval and reuse.
    • How to integrate the case memory into a model of general domain knowledge.

The Dynamic Memory Model

  • In the dynamic memory model, the case memory is organized in a hierarchical structure of ‘episodic memory organization packets’, also known as generalized episodes.
  • The basic idea is to organize specific cases which share similar properties under a more general structure (a generalized episode).
  • A generalized episode (GE) contains three different types of objects:
    • Norms: features common to all cases indexed under the GE.
    • Indices: features which discriminate between the GE's cases.
    • Cases
  • An index may point to a more specific GE or to a case.


  • Case retrieval
    • When a new case description is given, the input case structure is pushed down the network structure starting from the root.
    • When one or more features of a case match the norms of a GE, the case is further discriminated based on the remaining features.
      • The case with the most features in common with the input case is used.
    • In summary:
      • Case retrieval is done by finding the GE with the norms in common with problem description.
      • Indices under the GE are then traversed in order to find the case which contains most of the additional problem features not found in the GE norms.
  • Case retaining
    • It uses the same search method as retrieval.
    • When storing a new case, if a feature of the new case matches a feature of an existing case, a GE is created, and the two cases are discriminated under the created GE with different indices.
    • So if, during case storage, two cases or GEs end up under the same index, a new GE is created for them.
    • In summary:
      • Storing a new case is done as the case retrieval, with the additional process of dynamically creating GEs.
  • So the memory structure is dynamic in the sense that similar features of two cases are generalized in a GE, and the cases are indexed under the GE by their different features.
  • The index structure may lead to an explosive growth of indices as the number of cases increases.
  • The primary role of a generalized episode is an indexing structure for matching and retrieval of cases.
  • This knowledge organization structure is suitable for learning general and specific knowledge, and it is a simplified model of human reasoning and learning.
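A minimal sketch of a GE network and the retrieval walk described above (my own simplification; a real dynamic memory would also match against norms, generalize on insertion, and handle the index explosion just mentioned):

```python
class GE:
    """Generalized episode: norms shared by every case indexed under it,
    plus indices mapping a discriminating (feature, value) pair to a
    sub-GE or to a concrete case."""
    def __init__(self, norms):
        self.norms = norms
        self.indices = {}

def retrieve(ge, description):
    """Push a case description down the GE network, following the
    indices that match its features, until a concrete case is reached."""
    for feature, value in description.items():
        child = ge.indices.get((feature, value))
        if isinstance(child, GE):
            return retrieve(child, description)   # descend to a more specific GE
        if child is not None:
            return child                          # reached a stored case
    return None

root = GE(norms={"domain": "cooking"})
root.indices[("dish", "stir-fry")] = {"dish": "stir-fry", "plan": "wok recipe"}
found = retrieve(root, {"dish": "stir-fry", "spicy": True})
```

Storage would run the same walk and, on a collision under one index, create a fresh GE whose norms are the shared features, which is what makes the structure "dynamic".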

The Category and Exemplar Model

  • Cases are referred to as exemplars.
  • The view behind this method is that ‘real world’, natural concepts should be defined extensionally.
  • Different features are assigned different importance in describing a case's membership of a category.
  • The case memory is structured as a network of categories, cases, and index pointers.
  • Each case is associated with a category, and each case is stored according to the strength of its prototypicality within its category.
  • Features are key-value pair.
  • Indices are of three kinds:
    • Feature links: pointing from a feature to cases or categories.
    • Difference links: pointing from a case to its neighbors that differ in only a small number of features.
    • Case links: pointing from a category to its associated case (called exemplar links).


  • In this organization, the categories are linked within a semantic network, which also contains features and intermediate states.
  • Case retrieval
    • Finding a case in the case base that matches an input description is done by combining the input features of the problem case into a pointer to the case or category that shares most of these features.
    • If a pointer points to a category, the links to its strongest prototypical exemplars are traversed, and these cases are returned.
    • General domain knowledge is used to match cases semantically.
  • Case retaining
    • A new case is stored by searching for a matching case.
    • If a case is found with only minor differences from the input case, the new case may not be retained, or the two cases may be merged.
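The retain decision above, store the new case unless a near-identical exemplar already exists, can be sketched as follows (my own illustration; the difference threshold and the flat case list are simplifications of the category network):

```python
def differing_features(a: dict, b: dict) -> int:
    """Number of features on which two cases disagree."""
    return sum(1 for k in set(a) | set(b) if a.get(k) != b.get(k))

def retain(case_base: list, new_case: dict, threshold: int = 1) -> bool:
    """Store the new case only if no existing case is near-identical
    (i.e. differs in at most `threshold` features); return True if stored."""
    for case in case_base:
        if differing_features(case, new_case) <= threshold:
            return False        # minor difference only: do not retain
    case_base.append(new_case)
    return True

cases = [{"color": "red", "size": "big", "class": "A"}]
dropped = retain(cases, {"color": "red", "size": "big", "class": "B"})    # near-duplicate
stored = retain(cases, {"color": "blue", "size": "small", "class": "B"})  # genuinely new
```

The merge alternative mentioned in the bullet would replace the `return False` branch with an update of the existing exemplar.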

Case Retrieval

  • The retrieval task starts with a problem description, and ends when a best matching previous case has been found.
  • Subtasks of case retrieval:
    • Identify Features: it is about coming up with a set of relevant problem descriptors.
    • Initially match: return a set of cases that are sufficiently similar to the new case, given a threshold for case similarity.
    • Search
    • Select: works on the initial set of cases to find out the best match.
  • Some case-based reasoners retrieve a previous case based on superficial (apparent), syntactic similarities among problem descriptors (features). Others retrieve previous cases based on features that have deeper, semantic similarities.
  • Semantic similarities:
    • In order to retrieve a case based on semantic similarities and the relative importance of features, an extensive body of general domain knowledge is required to determine how two cases match and how strong the match is.
    • This is referred to as the "knowledge-intensive" approach; it is able to use the contextual meaning of the problem description in matching.
    • Suited to domains where general domain knowledge is available.
  • Syntactic similarities:
    • This is referred to as the "knowledge-poor" approach.
    • Suited to domains where general domain knowledge is very difficult or impossible to acquire.
  • It is important to know the purpose of the retrieval task. If the retrieved case is to be adapted for reuse, this can be accounted for in the retrieval method.
  • Identify Features
    • This can be done by simply noticing the input features, but a more elaborate approach is to understand the problem within its context.
    • Unknown features may be disregarded, or an explanation of them may be requested from the user.
    • Understanding a problem involves:
      • Filtering out noisy problem features (ignoring ineffective features).
      • Inferring relevant problem features (the values of effective features).
      • Checking whether the feature values make sense within the problem context.
      • Expecting feature values not provided as input.
    • Inferring the values of features not provided can be done by using the general knowledge model, or by retrieving a similar case from the case base and using its features as the expected features.
      • An example of such feature expectation: a previous case-1 has values for features A, B, and C. I have inferred values for A and B in the new case; if the new case and case-1 match on A and B, I can infer that the value of C in the new case is as in case-1.
  • Initially Match
    • The task of finding a good match is first about finding a set of good candidate cases, and then, from this set, finding the best matching one (the Select task).
    • The idea of matching is to use the input features as indices into the case memory, in a direct or indirect way (there should be a mapping function).
    • The 3 principles of indexing:
      • Following a direct index pointer from the input features to the case base (direct mapping).
      • Searching an index structure.
      • Searching in a model of general domain knowledge (the search is guided by the underlying domain model).
    • Some systems combine the three principles of indexing to achieve a certain goal.
    • Cases may be retrieved directly from input features, or from features inferred from the input.
    • Cases with the most matching features are good candidates, but sometimes cases matching only a fraction of the features may be retrieved, depending on the retrieval strategy.
    • If a case is retrieved on the basis of a subset of features, a relevance test should be done. This test determines whether the retrieved solution is of the same type as the expected solution for the new problem.
    • Similarity assessment can be made more knowledge-intensive by understanding the problem in its context more deeply and using the goals, constraints, etc. to guide the matching.
    • Another option is to weight the features according to their importance in characterizing the problem.
  • Select
    • The best match is found by evaluating the degree of initial match more closely.
    • This is done by attempting to generate explanations that justify non-identical features, based on the knowledge in the semantic network.
    • If a match is not strong enough, the difference links between cases are followed to reach a better matching case.
    • The typical selection process is to generate consequences and expectations from each retrieved case, and to attempt to evaluate the consequences and justify the expectations.
    • This generation, evaluation, and justification of consequences and expectations is based on a model of general domain knowledge, or done by asking the user.
    • Rankings are made for the cases using some metric or ranking criteria.
    • Knowledge-intensive selection methods generate explanations that support this ranking process.
    • The case with the strongest explanation for being similar to the new case is selected as a solution to the new problem.
    • Other properties of the case that are considered in some CBR systems:
      • Prototypicality of the case within its assigned class.
      • Difference links to related cases.
      • Relative importance and discriminatory strengths of features.
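The feature-weighting option mentioned above, combined with a ranking step for Select, can be sketched like this (a knowledge-poor toy of my own; knowledge-intensive selection would replace the score with generated explanations):

```python
def weighted_similarity(query: dict, case: dict, weights: dict) -> float:
    """Score a candidate case: the sum of the weights of the features whose
    values match the query, normalized by the total feature weight."""
    total = sum(weights.values())
    matched = sum(w for f, w in weights.items()
                  if f in query and query.get(f) == case.get(f))
    return matched / total if total else 0.0

def select(query: dict, candidates: list, weights: dict) -> dict:
    """Rank the initially matched candidates and return the best one."""
    return max(candidates, key=lambda c: weighted_similarity(query, c, weights))

weights = {"symptom": 3.0, "age_group": 1.0}    # importance per feature
candidates = [
    {"symptom": "chest pain", "age_group": "adult", "diagnosis": "angina"},
    {"symptom": "headache",   "age_group": "adult", "diagnosis": "migraine"},
]
best = select({"symptom": "chest pain", "age_group": "child"}, candidates, weights)
```

Here the heavily weighted symptom match outranks the mismatched age group, which is the point of weighting features by their importance in characterizing the problem.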

Case Reuse

  • The reuse of the retrieved case solution in the context of the new case focuses on two aspects:
    • What parts of the retrieved case should be transferred to the new case.
    • The differences between the past and the current case.
  • Copy
    • In a simple classification task, the differences are abstracted away (ignored), and the similarities are considered relevant.
    • The solution class of the retrieved case is transferred to the new case as its solution.
    • Other systems adapt the retrieved case to the new case context by taking the differences into account and adapting the solution to compensate for them.
  • Adapt
    • There are two ways to reuse past cases:
      • Transformational reuse
        • Reuses the past case solution.
        • It does not look at how the problem was solved; it focuses on the equivalence of solutions.
        • The past case solution is not directly a solution: there exists some knowledge, in the form of transformational operators {T}, that when applied to the past case solution transforms it into a solution for the new case.
        • This requires a strong domain-dependent model in the form of the {T} operators.
      • Derivational reuse
        • Reuses the past method that constructed the solution.
        • It looks at how the problem was solved in the retrieved case.
        • The retrieved case holds information about the method used to solve the retrieved problem, including the operators used, sub-goals considered, alternatives generated, failed search paths, etc.
        • Derivational reuse re-instantiates the retrieved method in the new case and replays the old plan in the new case context.
        • During replay, the successful alternatives, paths, and operators are explored first, while unsuccessful ones are avoided.
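Transformational reuse can be sketched in a couple of lines: the {T} operators are functions applied to the old solution. The operator and the plans below are invented for illustration; in practice the {T} model is exactly the hard, domain-dependent part:

```python
def transformational_reuse(past_solution: list, transforms: list) -> list:
    """Apply a sequence of transformation operators {T} to turn the past
    case's solution into a solution for the new case."""
    solution = list(past_solution)      # copy: keep the retained case intact
    for t in transforms:
        solution = t(solution)
    return solution

# Past case solved "build 2 barracks"; the new case needs 3
past_plan = ["build barracks", "build barracks"]
transforms = [lambda plan: plan + ["build barracks"]]   # adjust the quantity
new_plan = transformational_reuse(past_plan, transforms)
```

Derivational reuse would instead store and replay the *trace* that produced `past_plan` (sub-goals, operators, failed branches), re-running it against the new problem rather than editing the finished solution.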

Case Revision

  • When a case solution generated by the reuse phase is not correct, an opportunity for learning from failure arises.
  • The revision task is divided into two subtasks:
    • Evaluate the case solution generated by reuse.
    • If it succeeds, learn from and retain it (a separate phase called Retain).
    • If it fails, repair the case solution using domain-specific knowledge.
  • Evaluate Solution
    • Takes the result of applying the reused case in the real environment.
    • Applying a case is a task outside the CBR cycle; this can be done by asking a teacher or by performing the task directly in the real world.
    • The results of applying the suggested solution may take time to appear, depending on the type of application:
      • In medical CBR diagnosis, a case result may appear after hours or even months, so the case must be marked as unevaluated so that it can be evaluated later, when its results are available.
      • Alternatively, a solution may be applied to a model of the domain, using a simulation program.
  • Repair Fault
    • Involves detecting the errors of the current solution and retrieving or generating explanations for them.
    • Failure detection in the CHEF system:
      • Causal knowledge is used to generate explanations of why goals of the solution plan were not achieved.
      • CHEF learns the general situations that will cause failures, using an explanation-based learning technique.
      • These learned general situations are used during the reuse phase to predict the shortcomings of using a certain solution plan.
      • This allows for early detection of errors, so that they can be predicted, avoided, and handled.
    • Solution repair task:
      • Uses the explanations to modify the solution in such a way that the failures do not occur.
    • Solution repair in the CHEF system:
      • A failed plan is corrected by a repair module that adds steps to the plan to ensure that the causes of the errors will not occur.
      • The repair module possesses domain knowledge about how to compensate for or disable the causes of errors in the domain.
    • The revised plan can be retained directly if it is confirmed successful by the revise module, or it can be repaired and revised again.
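The evaluate-then-repair loop can be outlined as follows. This is a schematic of the control flow only (the toy `evaluate` and `repair` functions are mine, not CHEF's causal machinery):

```python
def revise(solution, evaluate, repair, max_attempts=3):
    """Evaluate a proposed solution; on failure, use the failure explanation
    to repair it and re-evaluate, up to max_attempts times."""
    for _ in range(max_attempts):
        ok, failure = evaluate(solution)     # apply in the (simulated) world
        if ok:
            return solution, True
        solution = repair(solution, failure) # explanation-driven repair
    return solution, False

# Toy domain: a baking plan fails as long as it lacks a "preheat" step
evaluate = lambda plan: ("preheat" in plan, "cold oven")
repair = lambda plan, failure: ["preheat"] + plan
revised, success = revise(["bake"], evaluate, repair)
```

A CHEF-style system would additionally generalize "cold oven" into a failure situation to be predicted and avoided during future reuse, rather than just patching this one plan.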

Case Retainment – Learning

  • This is the process of incorporating what is useful to retain from the new problem-solving episode into the existing knowledge.
  • Learning from the success or failure of the proposed solution is triggered by the result of the evaluation and possible repairs.
  • It involves selecting which information from the case to retain, in what form to retain it, how to index the case for later retrieval for similar problems, and how to integrate the new case into the memory structure.
  • Extract
    • In CBR, the case base is updated no matter how the problem was solved.
    • If it was solved by the use of a previous case, a new case may be built, or the case used for the solution may be generalized to subsume the new case.
    • If the problem was solved without using a previous case, perhaps by asking the user, an entirely new case should be created.
    • It is important to decide what to use as the source of learning:
      • Relevant problem descriptors (features) and problem solutions are good candidates.
      • Sometimes it is desirable to include in the new case an explanation of why the solution is a solution for the problem.
      • In some systems:
        • Explanations are included in the retained case, and they are used in later modification of the solution.
        • The explanation structure is used to search the model for other states that explain the input data of the new case, and to look for the causes of these states as the answer to the new problem.
        • This focuses the search and speeds up the explanation process, compared to searching the entire domain model.
      • The last thing that can be extracted for learning is the problem-solving method itself.
    • Failures recognized by the revise task can be extracted and retained, either as separate failure cases or within total problem cases.
    • When a failure is encountered, the system is reminded of a similar failure case and uses it to improve its understanding of the failure's causes.
  • Index
    • Indexing is a central and much-studied problem in case-based reasoning.
    • It is about deciding what types of indices to use for future retrieval, and how to structure the search space of indices.
    • Direct indices skip the structuring of the index search space; however, the types of indices used must still be identified.
    • A trivial solution is to use all the input features as indices.
    • In memory-based reasoning, all the cases are matched in parallel on the relevant features, and cases with few features in common are filtered out.
  • Integrate
    • This is the final step of updating the knowledge base with the new case knowledge.
    • Modifying the indexing of the current cases improves the CBR system and allows it to become a better similarity assessor.
    • Index strengths, or the importance of features relevant to the case, are adjusted according to the success or failure of using the case to solve the input problem.
    • The association between a case and the features judged relevant to it is strengthened, while the association with features that lead to unsuccessful cases is weakened.
    • In PATDEX system:
      • There is a relevance matrix that links features to the diagnoses (problem solutions) for which they are relevant. The matrix holds the weights of these links, which are strengthened or weakened based on feedback of success or failure.
    • In knowledge intensive-CBR approaches:
      • Learning can take place within the general conceptual knowledge model.
      • The system incrementally refines and extends its general domain knowledge model.
    • The case just learned can be tested by re-entering the initial problem into the system and checking whether the system behaves as expected.
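A PATDEX-style relevance update can be sketched as a dictionary of feature-to-diagnosis weights. The update rule below (additive step, clamped to [0, 1], neutral start at 0.5) is my own simplification, not PATDEX's actual formula:

```python
def update_relevance(matrix, features, diagnosis, success, step=0.1):
    """Strengthen (on success) or weaken (on failure) the weighted links
    between the features used and the proposed diagnosis."""
    for f in features:
        key = (f, diagnosis)
        w = matrix.get(key, 0.5)                # links start at a neutral weight
        w = w + step if success else w - step
        matrix[key] = min(1.0, max(0.0, w))     # clamp to [0, 1]

matrix = {}
update_relevance(matrix, ["fever", "cough"], "flu", success=True)
update_relevance(matrix, ["fever"], "flu", success=True)
```

After these two episodes the fever-flu link is stronger than the cough-flu link, so fever will count more in future similarity assessments, which is the "better similarity assessor" effect described above.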

Integrated approaches

  • Most CBR systems use general domain knowledge in addition to the knowledge represented by cases.
  • The overall architecture of a CBR system has to determine the interactions between the CBR method and the other components.
  • CASEY system:
    • It integrates model-based causal reasoning into diagnosis. When the case-based method fails to find a correct solution, the model-based method is executed to find a solution, which is then stored as a new case for future use. Because the model-based method is complex and slow, the case-based method represents a speed-up in reasoning whenever it succeeds.
  • BOLERO system:
    • It integrates a rule-based system (base-level) with a case-based planner (meta-level).
    • The base-level contains domain knowledge, while the meta-level contains strategic knowledge.
    • The case-based planner is used to control the space searched by the base-level, achieving a speed-up.
    • Control is passed to the meta-level whenever new information becomes known to the base-level, assuming the system will be able to deduce a better strategic plan based on that information.
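The CASEY-style integration is essentially a fallback pattern: try the fast case-based method, fall back to the slow model-based one, and retain the result so the slow path is paid only once per problem. A minimal sketch (the solver lambdas are stand-ins, not CASEY's actual components):

```python
def solve(problem, case_base, cbr_solve, model_based_solve):
    """Try the fast case-based method first; if it fails, fall back to the
    slower model-based method and retain its answer as a new case."""
    solution = cbr_solve(problem, case_base)
    if solution is None:
        solution = model_based_solve(problem)    # slow but first-principles
        case_base.append({"problem": problem, "solution": solution})
    return solution

case_base = [{"problem": "p1", "solution": "s1"}]
cbr = lambda p, cb: next((c["solution"] for c in cb if c["problem"] == p), None)
model = lambda p: p.upper()    # stand-in for slow model-based reasoning
first = solve("p2", case_base, cbr, model)    # miss: model-based, then retained
second = solve("p2", case_base, cbr, model)   # hit: answered from the case base
```

The second call never touches the model-based solver, which is exactly the claimed speed-up.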

Example applications and tools

  • Applications
    • CBR is used because the knowledge needed to perform the task resides in the heads of a few people.
    • There is no theory, and very few generally applicable schemes, for doing this job.
    • Building up experience in the form of previously successful and unsuccessful situations is important.

Posted in Papers Summaries | Leave a Comment »

On-line Planning for Resource Production in Real-Time Strategy Games -part5

Posted by ferasferas on November 5, 2010

Heuristic Scheduler

The previous component (the sequential planner) finds a sequential plan. However, to accurately estimate the utility of any renewable resources, we need to reschedule actions to allow concurrency and decrease the makespan of the found plan. We do this with a heuristic scheduling procedure that traverses the found action sequence in order. For each action Ai, the procedure moves the start time of Ai to the earliest possible time such that its preconditions are still satisfied.

Assume that Ai starts at time ts. Let R+(ts) be the resource state at time ts after the effects of all actions that end at time ts are added to the game state, and R−(ts) the resource state before these effects are added.

Obviously, the preconditions of Ai are satisfied by R+(ts). If they are also satisfied by R−(ts), the satisfaction of Ai's preconditions is not due to any of the actions that end at time ts, so we can move Ai to start earlier than ts, at the previous decision epoch (a time at which an action starts or ends). This is repeated until the preconditions of Ai are satisfied by some R+(ts) but not by R−(ts), i.e., until their satisfaction is due to the actions that end at time ts.

The plan is now rescheduled such that action Ai starts at time ts, and we proceed to attempt to reschedule the next action in the sequential plan. The pseudo-code for this procedure is given in Algorithm 3.
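A simplified Python sketch of this rescheduling idea (not the paper's Algorithm 3: I model only production effects and threshold preconditions, and omit the consume/borrow bookkeeping the soundness argument below relies on):

```python
def level_at(t, actions, initial, inclusive):
    """Resource amounts available at time t. With inclusive=True this is
    R+(t) (effects of actions ending exactly at t included); otherwise R-(t)."""
    level = dict(initial)
    for a in actions:
        end = a["start"] + a["dur"]
        if end < t or (inclusive and end == t):
            for r, amt in a["produce"].items():
                level[r] = level.get(r, 0) + amt
    return level

def satisfied(pre, level):
    return all(level.get(r, 0) >= amt for r, amt in pre.items())

def reschedule(plan, initial):
    """Traverse the plan in order; while an action's preconditions hold in
    R-(ts) (i.e. are not due to actions ending exactly at ts), move its
    start back to the previous decision epoch."""
    for a in plan:
        others = [b for b in plan if b is not a]
        while a["start"] > 0 and satisfied(
                a["pre"], level_at(a["start"], others, initial, False)):
            epochs = [0]
            for b in others:                      # candidate decision epochs
                for t in (b["start"], b["start"] + b["dur"]):
                    if t < a["start"]:
                        epochs.append(t)
            a["start"] = max(epochs)              # previous decision epoch
    return plan

plan = [
    {"name": "collect-gold-1", "start": 0,  "dur": 10, "pre": {},           "produce": {"gold": 50}},
    {"name": "barracks",       "start": 10, "dur": 20, "pre": {"gold": 50}, "produce": {}},
    {"name": "collect-gold-2", "start": 30, "dur": 10, "pre": {},           "produce": {"gold": 50}},
]
scheduled = reschedule(plan, {"gold": 0})
```

In this toy plan the second gold-collection slides from t=30 back to t=0, running concurrently with the first and cutting the makespan from 40 to 30, while the barracks action stays at t=10 because its precondition is only satisfied by the action ending exactly there.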

We can show that this procedure is sound using the following informal argument. We need to ensure that, when we reschedule an action, every action between the new start time and the old start time remains executable.

When processing each action, we can always schedule it before a previous action if they do not consume or borrow the same resource:

1- Consider a pair of actions A and B, with A before B, in a valid plan in which both consume or borrow resource R, and assume we are about to reschedule B. First, if A and B are adjacent, the state before A must have enough of R for both A and B to execute. This means that at that state A and B could be executed concurrently, or B could be scheduled before A, if possible.

2- On the other hand, if A and B are separated by actions that produce R in order to satisfy B's preconditions, then our procedure will not shift B before the effects of those actions. If A and B are separated by actions that do not produce R, this effectively reduces to the adjacent case. Thus, the procedure is sound.

Future Work

While this procedure is not guaranteed to produce a plan with the optimal makespan, it is fast (at most quadratic in the number of actions) and performs well in practice. Investigating more complex scheduling algorithms for this problem is a topic of future work.

Posted in AI for Games, Papers Summaries, Planning, Technical Documents | Leave a Comment »

On-line Planning for Resource Production in Real-Time Strategy Games -part4

Posted by ferasferas on November 4, 2010

Sequential Planner

The first component of our online planner is a sequential planner which outputs a sequential plan to achieve the goal from the given initial state.

In principle, any off-the-shelf sequential planner can be used in this step. However, given the specific properties of our domain discussed earlier, a simple sequential planner based on means-ends analysis suffices.

MEA operates by selecting a subgoal to solve which will decrease the difference between the initial state and the goal state, and then executing the necessary actions to solve the sub-goal. Then from the new state which satisfies the subgoal the process is recursively applied until we reach the goal state. Notice that if one subgoal becomes unsolved while solving another, it will be resolved later, in the recursive step.


The pseudo-code is given in Algorithm 2. For simplicity this pseudo-code assumes that there is no resource that is both produced and consumed by a single action. It is straightforward to lift this assumption while still maintaining the polynomial-time guarantee outlined below.

First, means-ends analysis repeatedly picks an unsatisfied sub-goal Ri ≥ gi , and constructs a sub-plan Plan which satisfies it.

We first solve all renewable resource goals before any non-renewable resource goals (renewable resource goals, once solved, stay solved, by the monotonic increase property of renewable resources). With this ordering, every action added to the plan is necessary to solve the final goal.

Further, if we choose any other ordering of goals that necessitates revisiting previously solved goals, we will only generate permutations of the set of actions produced by the “canonical” ordering. This is because, if we revisit a goal, it must be because some actions used to solve that goal were “used up” by the preconditions of some other goal. Thus, we are effectively permuting the sequence of necessary actions if we choose a different ordering.

Since the plan found by MEA has the minimal set of actions, it consumes the minimal set of resources necessary to reach the goal. Finally, because each step of means-ends analysis adds at least one useful action to the plan, its running time is bounded by the minimum number of actions to the goal.
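The MEA procedure described above can be sketched as follows. This is a hedged illustration under my own assumptions about the data layout, exploiting the domain property that each resource has exactly one producing action; it ignores borrow/require tags and, as noted below, would loop forever on cyclic dependencies from a bad initial state.

```python
# Hedged sketch of means-ends analysis for resource goals.
# produces_for: {resource: (producing_action, amount_per_execution)}
# consumes: {action: {resource: amount_consumed_at_start}}

def mea(state, goal, produces_for, consumes):
    """state: {resource: amount}; goal: {resource: required_amount}.
    Returns a sequential list of actions reaching the goal."""
    plan = []
    state = dict(state)

    def achieve(resource, amount):
        while state.get(resource, 0) < amount:
            action, out = produces_for[resource]
            # First satisfy the action's own resource preconditions.
            for res, amt in consumes.get(action, {}).items():
                achieve(res, amt)
            # Execute: deduct consumed resources, add the product.
            for res, amt in consumes.get(action, {}).items():
                state[res] -= amt
            state[resource] = state.get(resource, 0) + out
            plan.append(action)

    for resource, amount in goal.items():
        achieve(resource, amount)
    return plan
```

For instance, with build-barracks consuming 700 gold and collect-gold producing 100 per execution, a barracks goal expands into seven collect-gold actions followed by build-barracks.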

Notice that if the dependency graph between resources contains cycles, it is possible for means-ends analysis to get stuck in an infinite loop for certain initial states.

For example, in Wargus, collecting gold requires a townhall and borrows a peasant, while building a townhall or a peasant consumes certain amounts of gold. The presence of these cycles means there is a possibility that there is no plan to achieve a goal in some cases.

However, we can easily extend our algorithm to detect such cases if they happen. Further, as we have noted above, if the initial game state contains a peasant and a townhall, we can guarantee that there is always a plan no matter what the goal state is.
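Detecting such cases amounts to finding a cycle in the resource dependency graph. A minimal sketch (my own, not the paper's extension) using depth-first search with the usual three-color marking:

```python
# Hedged sketch: detect cycles in the resource dependency graph
# (resource -> resources its producing action consumes or borrows).
# A cycle means MEA could loop forever from some initial states.

def has_cycle(deps):
    """deps: {resource: set_of_resources_its_producer_needs}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {r: WHITE for r in deps}

    def visit(r):
        color[r] = GRAY
        for d in deps.get(r, ()):
            if color.get(d, WHITE) == GRAY:
                return True          # back edge: cycle found
            if color.get(d, WHITE) == WHITE and d in deps and visit(d):
                return True
        color[r] = BLACK
        return False

    return any(color[r] == WHITE and visit(r) for r in deps)
```

On the Wargus example above (gold needs a townhall and a peasant, both of which cost gold) this reports a cycle, while an acyclic chain like barracks-needs-gold does not.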

Posted in AI for Games, Papers Summaries, Planning, Technical Documents | Leave a Comment »

On-line Planning for Resource Production in Real-Time Strategy Games -part3

Posted by ferasferas on November 4, 2010

Quick Revision

Some key properties of our domain are:

1. Actions have durations.

2. There are multiple units, so actions can be executed concurrently.

3. Units and buildings can be created as the game progresses.

4. Many actions involve numeric fluents.

5. Solution plans typically involve a large number of actions compared to most standard planning benchmarks, and

6. In our setting, the planner must find a plan in real-time.

Planning Architecture

As the game proceeds, the world map and the planner’s action models may change, requiring fast re-planning with respect to the changed environment. In fact, a changing environment may make it impossible for the agent to execute a plan that was constructed offline ahead of time. Therefore, a planner which takes a long time to plan, or does not provide a bound on its planning time, is undesirable for our domain, even if it may otherwise return the optimal plan. Instead, we aim for a planner which finds a good plan quickly while scaling with the number of resources and the number of actions in the plan.

Characteristics of the Online-Planner:

1- To adapt to changing goals and environments, the planner re-plans every few game cycles with the current goal and game state.

2- To find a new plan, it carries out a bounded search over possible intermediate goals. The set of possible intermediate goals includes all states that have an extra renewable resource of every type. For each such goal, the planner employs means-ends analysis followed by a heuristic scheduling process to generate a plan to reach the overall goal via the intermediate goal.

3- To select an action to be executed, the planner chooses the plan with the smallest makespan. If this plan has any action that is executable at the current game state, that action (or actions) is started.


Notice that the plans generated by the planner are usually not completely executed: when the planner replans at the next decision epoch using the game state at that point, it may not obtain a suffix of the plan it found at the current epoch. However, constructing such plans is valuable because they help in action selection at the current step.


Algorithm :

The pseudo-code of the main loop of the algorithm is shown in Algorithm 1. Every few game “cycles” (the time unit in Stratagus), the client checks if there are some available actions that can be executed. If so, it calls the planner to find plans which satisfy our goal from the current state. One plan will aim to satisfy the overall resource goal without any intermediate goals, while the others will aim to satisfy each of the intermediate goals which explicitly creates additional renewable resources. The plan with the shortest makespan is preferred, and actions in the plan that should start now are executed.

The planner consists of two main components:

A sequential planner that uses means-ends analysis to find a plan from game state S to goal G with the minimum number of actions, MEA(S, G), and

A heuristic scheduler which reorders actions in a sequential plan to allow concurrency and decrease its makespan, Schedule(Plan).
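The decision-epoch loop that glues these two components together can be sketched as follows. This is a hedged illustration, not the paper's Algorithm 1: the MEA and Schedule signatures, and the way intermediate goals are encoded (the overall goal plus one extra renewable resource of each type), are my own assumptions.

```python
# Hedged sketch of one decision epoch: plan for the overall goal and for
# each intermediate goal that adds one extra renewable resource, then act
# on the plan with the smallest makespan.

def decision_epoch(state, goal, renewable_types, MEA, Schedule, now):
    """Returns the actions whose scheduled start time has arrived."""
    candidates = [goal] + [
        dict(goal, **{r: state.get(r, 0) + 1}) for r in renewable_types
    ]
    best = None
    for g in candidates:
        plan = Schedule(MEA(state, g))        # list of (action, ts, te)
        makespan = max((te for _, _, te in plan), default=0)
        if best is None or makespan < best[0]:
            best = (makespan, plan)
    # Start every action scheduled to begin at the current game cycle.
    return [a for a, ts, _ in best[1] if ts <= now]
```

With stub MEA/Schedule components, only the actions whose start time equals the current cycle are returned; the rest of the plan is discarded at the next epoch, exactly as described above.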

Next, we discuss the sequential planner and the heuristic scheduler components.

Posted in AI for Games, Papers Summaries, Planning, Technical Documents | Leave a Comment »

On-line Planning for Resource Production in Real-Time Strategy Games -part2

Posted by ferasferas on November 2, 2010

The RTS Resource Production Domain

Human players typically have no difficulty selecting actions that at least achieve a particular set of resource goals. However, it is much more difficult, and requires much more expertise, to find close to minimal makespan plans.

As an example, consider the seemingly simple problem of collecting a large amount of gold starting with a single peasant and “townhall”. One could simply repeatedly collect gold with the single peasant, which would eventually achieve the goal, but this is far from optimal. Instead, a player can choose to execute one or more actions drawn from the game’s action set. Each action produces a certain amount of products, but also consumes certain amounts of other resources, and requires that some preconditions are met before it can be executed.

Actions are Durative: they take a certain amount of time to finish, upon which the products are added to the game state.

In RTS games, resource-production actions are usually deterministic, and the preconditions, effects, and durations of each action are usually given or can be easily discovered through game-play. For certain actions, where a unit has to travel to a destination for the action to take place, the duration will vary due to the spatial properties of the game map. However, for simplicity we assume a constant duration for each instance of an action; averaged over the many actions taken during a game, this turns out to be a reasonable assumption.

Game State Representation at time t :

( 1 ) For each resource Ri , the amount ri possessed by the agent.

( 2 ) A list of actions Ai, i = 1, . . . , m, currently being executed, along with the start and end times ts and te for each action (ts < t < te).

Projected Game State and Goal:

The projected game state, denoted Proj(S), is the state obtained once all actions currently executing in S have terminated. It is timestamped with t equal to the maximum te over all executing actions, has resources updated according to the effects of the actions Ai, and has no actions being executed.

The player’s objective is to reach a certain resource goal, G = {R1 ≥ g1 , . . . , Rn ≥ gn }, defined as constraints on the resources, from the current game state. In essence, the player must determine a plan, which is a list of actions ((A1 , t1s , t1e ), . . . , (Ak , tks , tke )), where Ai is an action that starts at time tis and ends at time tie.
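Computing Proj(S) is mechanical once the game state representation is fixed. A minimal sketch, under my own assumed state layout (the paper does not give code): advance the timestamp past every executing action and apply its end-of-action effects.

```python
# Hedged sketch of the projected game state Proj(S): let every executing
# action finish and apply its end-of-action (additive) effects.

def project(state):
    """state: {'t': time, 'resources': {...}, 'executing': [(effects, ts, te)]}
    where effects maps resource -> amount added when the action ends at te."""
    resources = dict(state["resources"])
    t = state["t"]
    for effects, ts, te in state["executing"]:
        t = max(t, te)                      # timestamp = max te of all actions
        for res, amount in effects.items():
            resources[res] = resources.get(res, 0) + amount
    return {"t": t, "resources": resources, "executing": []}
```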

Four Resource Tags

Require: An action requires a certain amount of a resource if it needs to be present throughout the execution of the action.

For example, the collect-gold action requires the presence of a townhall. In this case, the same townhall can be used for concurrent collect-gold actions, as the townhall is not “locked up” by them. Thus, the require tag allows for sharing of resources.

Borrow: An action borrows a certain amount of a resource if it requires that the resource amount be “locked up” throughout the execution of the action, so that no other action is allowed to borrow those resources during its execution. After the action has completed, the resource amount is freed up for use by other actions.

For example, the collect-gold action borrows a peasant. During the execution of the collect-gold action, the borrowed peasant may not be borrowed by another action. After the collect-gold action is finished, the peasant becomes available again and can be used for other actions. Therefore, to allow concurrent collect-gold actions, multiple peasants must be used.

Consume: An action consumes a certain amount of a resource at the start of its execution: this amount is deducted from the game state. Since the game state must obey the constraint that every resource value is non-negative, the inferred precondition of the action is that this resource amount must be present at the start of the action.

For example, the build-barracks action consumes 700 units of gold and 450 units of wood.

Produce: An action produces a certain amount of a resource at the end of its execution: this amount is added to the game state.

Resources are Divided into two Classes:

-Consumable Resources: those consumed by actions, such as gold, wood, and supply (a peasant or footman cannot be built unless there is unused supply).

-Renewable Resources: those required or borrowed by actions, such as peasants, town halls, and barracks.
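The four tags above suggest a natural action specification. A hedged sketch of such a structure: the gold/wood costs of build-barracks come from the text, while the durations, the collect-gold yield of 100, and all field names are my own illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hedged sketch of an action specification carrying the four resource tags.
@dataclass
class Action:
    name: str
    duration: int
    requires: dict = field(default_factory=dict)  # shared, e.g. townhall
    borrows: dict = field(default_factory=dict)   # locked up, e.g. peasant
    consumes: dict = field(default_factory=dict)  # deducted at start
    produces: dict = field(default_factory=dict)  # added at end

collect_gold = Action("collect-gold", 300,
                      requires={"townhall": 1},
                      borrows={"peasant": 1},
                      produces={"gold": 100})

build_barracks = Action("build-barracks", 200,
                        borrows={"peasant": 1},
                        consumes={"gold": 700, "wood": 450},
                        produces={"barracks": 1})
```

Under this encoding, the consumable/renewable split falls out directly: a resource appearing in some action's consumes is consumable, while one appearing only in requires or borrows is renewable.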

Model Domain Characteristics :

1- The dependency structure between resources is such that, if the initial state has a townhall and a peasant (and assuming the world map has enough consumable resources like gold and wood), there always exists a plan for any resource goal. Further, if such a state cannot be reached, then no plan exists.

2- The amount of renewable resources in a problem never decreases, since no unit is destroyed in our scenarios.

3- By Wargus action specification, all effects at the start of an action are subtractive effects, while all effects at the end of an action are additive effects.

4- Again by the Wargus action specification, for each resource there is exactly one action that produces it. This property implies that every plan that produces the goal resource from a game state must contain the same set of actions (though possibly not in the same sequence).

Posted in AI Game Engine Programming, Papers Summaries, Planning, Technical Documents | 1 Comment »

Paper read: Call for AI Research in RTS Games

Posted by MHesham on November 2, 2010

Michael Buro. 2004. Call for AI Research in RTS Games. In Proceedings of the AAAI Workshop on AI in Games.

The paper discusses AI challenges in real-time strategy games and presents a research agenda aimed at improving AI performance in these computer games.

RTS Games and AI Research

The current AI performance in commercial RTS games is poor. The main reasons that AI research in RTS games is lagging behind that in related fields, such as classic board games, are:

  • RTS games feature hundreds or thousands of interacting objects, incomplete information, and fast-paced micro actions. World-class game AI systems, on the other hand, exist for turn-based perfect-information games (e.g. chess).
  • Video game companies create games under severe time constraints, and don’t have the resources to engage in AI research.
  • Multi-player games do not require world-class AI performance to be commercially successful, as long as players are more interested in playing on-line against other humans.
  • RTS games are complex, which means that it is not easy to set-up an RTS game infrastructure to conduct AI experiments.

In the domains where the human abilities to abstract, generalize, learn, adapt, and reason shine, current RTS game AI fails.

Motivations for AI research in RTS games

  • RTS games constitute well-defined environments to conduct experiments in, and offer straightforward, objective ways of measuring performance.
  • RTS games can be customized to focus on specific aspects, such as local fights, scouting, resource management.
  • Strong game AI will make a difference in future commercial RTS games, because graphics improvements are beginning to saturate.
  • The current state of AI in RTS games is so poor that there are many low-hanging fruits waiting to be picked. Examples include game interfaces that alleviate tedious tasks, such as concentrating fire in combat.
  • Progress in AI research in RTS games is of interest to the military, which uses battle simulations in training programs and pursues research in autonomous weapon systems.

Research Agenda

The main goal behind the proposed research agenda is not to increase the entertainment of RTS games but to develop the strongest RTS game AI possible.

In order to repeat the success of AI in classic games, the following keys are required:

  • Adversarial planning under uncertainty
    • Because of the huge number of actions that can be taken at any time and the implied complexity of RTS games, the agent cannot plan at this level, but must reason at a more abstract level.
    • The agent has to search in the abstract world space and translate found solutions back into the original space.
    • All high-level decisions, such as what to build and when to attack, are based on the abstract search space augmented by beliefs about the abstract world.
    • Because the environment is hostile and dynamic, adversarial real-time planning approaches need to be investigated; otherwise there is no hope for RTS game AI to defeat a human at the commander level.
  • Learning and opponent modeling
    • One of the biggest shortcomings of current RTS game AI systems is their inability to learn quickly. Human players need only a couple of games against an AI agent to discover and exploit the weaknesses in its strategy. New, efficient machine learning techniques have to be developed to tackle this problem.
    • AI would be able to discover the human player bias toward a certain unit types and strategy, and use this information to adapt its plan.
  • Spatial and temporal reasoning
    • Understanding the importance of static terrain features, like choke points, and dynamic spatial properties, such as visibility and enemy influence, is crucial for generating a successful plan.
    • The temporal relationship among various actions is to be understood well by the playing agent.
    • Combining terrain knowledge and simple heuristics about actions is sufficient to pick the best course of action.

Because AI is not as good as humans at planning, learning, and reasoning, it can at least help humans play RTS games. There are numerous ways of improving game performance which can be easily integrated into RTS game interfaces. For example, when attacking a group of units, the player has to concentrate fire or intercept fleeing units. This can be handled by AI systems that take over this low-level unit management (micromanagement) and let the human concentrate on high-level decisions.

The need for an open source test-bed

Before this vision can become reality, the necessary infrastructure has to be developed. RTS game companies are reluctant to add interfaces to their products which would allow researchers to play RTS games remotely and to gauge the playing strength of RTS AI systems by means of tournaments. Therefore a free-software RTS game was developed, called ORTS (Open-source RTS).

Posted in Papers Summaries | Tagged: , , , , , , , | 2 Comments »

Introducing – “On-line Planning for Resource Production in RTS”

Posted by ferasferas on October 31, 2010

On-line Planning for Resource Production in Real-Time Strategy Games

Hei Chan, Alan Fern, Soumya Ray, Nick Wilson and Chris Ventura

School of Electrical Engineering and Computer Science

Oregon State University

Corvallis, OR 97330


Goal :

Develop an action-selection mechanism for producing a certain amount of resources as fast as possible.

Planner :

A computationally efficient “action-selection” mechanism which, at each epoch, creates a possibly sub-optimal concurrent plan from the current state to the goal, then begins executing the set of initial actions.

How it’s done :

The plan is formed via a combination of MEA (means-ends analysis), scheduling, and a bounded search over sub-goals that are not required for goal achievement but may improve the makespan.

Two Key problem domains :

-Resource production and tactical battles.

In Resource Production: the player has to produce (or gather) various raw materials, buildings, and civilian and military units to improve their economic and military power.

Tactical Battles: a player uses military units to gain territory and defeat enemy units (offensive or defensive actions).

“Winning the Resource Production race is often a key factor in overall success”.

Uses :

1- either in computer opponent A.I.

2- A human can use it to specify the resources needed, and the planner will find the best way to obtain them. “RTS resource production is interesting from a pure A.I. planning perspective as it encompasses a number of challenging issues”.

First, resource production involves temporal actions with numeric effects.

Second, performing well in this task requires highly concurrent activity.

Third, the real-time constraints of the problem require that action selection be computationally efficient in a practical sense.

Why? :

Most existing planners are :

1- unable to handle temporal and numeric domains.

2- simply too inefficient to be useful.

3- produce highly Sub-Optimal plans.

The planning component used by online planner is based on a combination of means-ends analysis (MEA) and scheduling.

Given an initial state and resource goal, MEA is used to compute a sequential plan that reaches the goal using the minimum number of actions and resources, in the sense that any valid plan must include all of the actions in the MEA plan.

Importantly, the special structure of our domain guarantees that MEA will produce such a plan and do so efficiently (linear time in the plan length).

Given the minimum sequential plan, we then reschedule those actions, allowing for concurrency, in an attempt to minimize the makespan. This scheduling step is computationally hard; however, we have found that simple worst-case quadratic-time heuristic methods work quite well. Thus the MEA step and the scheduling step are both low-order polynomial operations in the minimum number of actions required to achieve the goal, allowing for real-time efficiency.


Posted in AI for Games, Case-Based Planning, Case-Based Reasoning, Papers Summaries, Planning | 3 Comments »

Paper read: An Integrated Agent for Real-Time Strategy Games (2008)

Posted by MHesham on October 27, 2010

Josh McCoy and Michael Mateas. 2008. An Integrated Agent for Playing Real-Time Strategy Games. In Proceedings of the 23rd National Conference on Artificial Intelligence.

The paper presents a real time strategy (RTS) AI agent that integrates multiple specialist components to play a complete game. The idea is to partition the problem space into domains of competence seen in expert human players, and use the expert domain knowledge of human players in each domain to play a complete game.


RTS games provide a rich and challenging domain for autonomous agent research. In games like Warcraft and Starcraft, the player has to build up armies to defeat the enemy while defending his own base. In RTS games one has to make real-time decisions that directly or indirectly affect the environment; this imposes a complexity that makes playing an RTS game a big challenge for an AI agent.

RTS games contain a large number of unique objects and actions. Domain objects include units and buildings with different capabilities and attributes, researches and upgrades for these units and buildings, and resources to be gathered. Domain actions include unit and building construction, choosing which research to do for each unit and building, resource management, and utilizing unit capabilities during battle.

Actions in RTS games occur at multiple levels:

  1. High level strategic decisions: which type of unit and building to produce, which enemy to attack.
  2. Intermediate (Medium) level tactical decisions: how to deploy a group of units across the map.
  3. Low-level micromanagement decisions: individual unit actions.

The combination of these three levels of decisions makes it hard to use the game-tree search techniques that have proven successful for games like chess. To illustrate the complexity, a typical RTS player must engage in multiple simultaneous real-time tasks: in the middle of a game, a player may be holding an attack on the enemy base while upgrading his army and, at the same time, taking care of resource management; it is not strange to find him also defending his own base from an attack at the rear. To make things more complex, the RTS game environment incorporates incomplete information (i.e. a semi-observable environment) through the use of “fog of war”, which hides most of the map; this requires the player to repeatedly send scouts across the map to learn the current state of the enemy.

These attributes of the RTS domain require an agent architecture that incorporates human-level decision making about multiple simultaneous tasks at multiple levels of abstraction, and combines reasoning with real-time activity.

The SORTS agent is capable of playing a complete RTS game, including the use of high-level strategy. While SORTS is an impressive agent, there are improvements to be added. The agent developed in this paper adds a reactive planning language capable of more tightly coordinating asynchronous unit actions in micromanagement tasks, decomposes the agent into more distinct modules, and incorporates expert human knowledge.

Related Work

Current research on RTS AI agents tends to focus on either low-level micromanagement or high-level strategy, leaving the tactics and micromanagement to the individual units’ built-in AI. High-level strategy and micromanagement are both important for RTS play; the failure to build an integrated agent able to combine all the AI decision levels in RTS has resulted in agents unable to play at a level competitive with human players.

A number of researchers have focused on applying a single algorithm to a single perspective of the game: Monte Carlo planning for micromanagement, the Planning Domain Definition Language (PDDL) to explore the tactical decisions involved in build orders, and relational Markov Decision Processes (MDPs) to generalize strategic plans. All of these made local improvements, but were never integrated into a single agent to play a complete game. Evolutionary learning on tactical decisions and case-based reasoning over human traces have made it possible for an agent to play a complete game. However, these methods were implemented as a single component concerned with high-level strategy and limited tactics, leaving the micromanagement to the individual units’ built-in AI.

SORTS is an agent capable of playing a complete RTS game, incorporating high-level strategy. Unit micromanagement is handled using FSMs. To enable a larger amount of tactical coordination, the military and resource FSMs are coordinated by global coordinators. Simple learning is used in these global coordinators to enhance the performance of the agent.

While the SORTS agent is impressive, capable of playing a complete game integrating multiple modules, there are a number of improvements to be made. The agent proposed in the paper adds the use of reactive planning language capable of coordinating asynchronous unit actions in unit micromanagement.

Expert RTS Play

Expert RTS players and the RTS community have developed standard strategies, tactics, and micromanagement techniques. In chess, part of expert play is choosing techniques at multiple levels of abstraction in response to recognized opponent strategy, tactics, and micromanagement, and then improvising with these techniques. However, RTS play far exceeds chess play in complexity.

There are general rules of thumb in RTS play. The “behind on economy” strategy, as I understand it, is about producing troops based on your current economy: the more resources you have, the more troops you can train; however, this strategy guarantees a loss when tried against an expert player. A rule of thumb can be violated based on the situation (e.g. available resources on the map, distance between player and enemy, etc.). As an example, the “Probe Stop” strategy is about halting economic expansion in favor of putting all available income into military production, which results in a temporary spike in military strength; used unwisely, this strategy will result in a complete loss if the produced troops die early.

When we talk about high-level strategy, we find that the player has to develop and deploy strategies which coordinate the style of the economic build-up, the base layout, and the offensive and defensive style. A well-known strategy in “Warcraft 2” is the “Knight’s rush”: the knight is a heavy unit in the middle of the game’s tech-tree, so the player focuses on making the minimum necessary upgrades and buildings to produce knights, and as soon as they are available performs a fast rush to take out the enemy. This strategy trades early defense for later offensive power; the risk is that in the early game the player has no defensive structures or units.

The player decides his high-level strategy at the beginning of the game based on information such as map size, number of opponents, and the state of resources on the map. However, the player must be ready to switch his strategy based on new information gathered through map scouting.

When we talk about medium-level tactics, we find ourselves talking about deployment and grouping decisions. Unlike micromanagement, tactics involve coordinating groups of units to do a specific task. One common tactic in “Warcraft 3” is coordinating units to block an enemy retreat using area-effect attacks, or to block terrain bottlenecks using units (e.g. standing on a bridge that allows only a few units to pass at a time). Tactical decisions require knowledge of common tactics and their counter-tactics.

When we talk about low-level micromanagement, we find that expert human players have developed micromanagement techniques applicable to nearly all RTS games. The “dancing” technique is a specific use of ranged units in which a group of ranged units attacks simultaneously, then “dances” back during the “cooldown” (i.e. the time a unit needs after each attack before it can perform a new one). Dancing allows weak ranged units to stay away from the melee battle area during cooldown. We call “dancing” a micromanagement technique because it involves detailed control of individual unit moves. When micromanagement is absent, units respond to high-level directives such as “Attack” using only their simple built-in behaviors (e.g. path finding, obstacle avoidance, etc.).
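The per-unit logic behind dancing is small enough to sketch. This is my own illustrative model (the unit dictionary, the 1D positions, and the order names are assumptions, not the paper's): fire when the attack is off cooldown and the enemy is in range, otherwise retreat while the attack recharges.

```python
# Hedged sketch of the "dancing" micro behavior for one ranged unit:
# fire when the attack is off cooldown, dance back while it recharges.

def dance_step(unit, enemy_pos, now):
    """unit: dict with 'pos', 'range', 'cooldown', 'last_attack'.
    Returns the order to issue this tick (illustrative order names)."""
    ready = now - unit["last_attack"] >= unit["cooldown"]
    in_range = abs(unit["pos"] - enemy_pos) <= unit["range"]
    if ready and in_range:
        unit["last_attack"] = now
        return "attack"
    if ready:
        return "advance"          # close to within attack range
    return "retreat"              # cooling down: dance back from melee
```

Running this every tick for each unit in a group yields the synchronized fire-and-fall-back rhythm described above.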

In an RTS game, the map is partially hidden by the “fog of war”; this requires the player to send scouts across the map to find the enemy base, learn the nature of the enemy’s economic build-up (which reveals the strategy the enemy is likely to follow), and learn the physical layout of the base (whether it is heavily defended).


The software framework of the developed agent consists of ABL (A Behavior Language), a reactive planning language, connected to the Wargus RTS engine.

Agent Architecture

The agent is composed of distinct managers, each of which is responsible for one or more of the major tasks mentioned in the Expert RTS Play section. The agent consists of the strategy, production, income, tactics, and recon managers.

By factoring the agent based on the expert play tasks, it is easier to modify individual managers and to measure the effect of each manager on overall performance when increasing or decreasing its competence.

Strategy Manager

The strategy manager is responsible for high-level strategic decisions. The first task is to determine the proper initial order in which to construct buildings and units.

The InitialStrategy module utilizes the recon manager to find the distance to the enemy base; this distance is used to choose the proper strategy. If the enemy base is close, a rush strategy is applicable, in which one barracks is built and some units are produced without building a second farm. This gives the agent the advantage of defending against early enemy attacks, and also the potential to make an early attack (aka rush attack). If the enemy base is far, there is time to build a robust economy and produce a huge military before engaging in battle.

The TierStrategy module has the highest-priority recurring task in the strategy manager. At each of the three tiers in Wargus, the TierStrategy responsibilities include: maintaining unit cap control with regard to production capacity, constructing units and buildings superior to those of the opponent, and attacking when the agent has a military unit advantage.

TierStrategy starts making decisions after the initial build order controlled by InitialStrategy is complete. A primary responsibility of TierStrategy is to determine which kind of building or unit to produce during the game past the initial build order. TierStrategy is also responsible for determining when to attack, given the number of military units controlled by the agent versus the opponent.

Income Manager

The income manager is responsible for the details of controlling workers who gather resources, releasing workers for construction and repair tasks, and maintaining the wood-to-gold ratio set by the strategy manager.

Production Manager

The production manager is responsible for constructing units and buildings. It has modules that serve three priority queues: a unit construction queue, a building construction queue, and a queue for repeated cycles of unit and building construction.

The production manager also applies what is called “resource locking”: the resources required for a building are subtracted virtually from the current physical resources, because time passes between the moment the construction decision is taken and the moment the worker reaches his destination to start building.
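A minimal sketch of resource locking, under my own assumed ledger structure (the paper does not show code): track a virtual "locked" amount alongside the engine's physical resources, so the same gold cannot be promised to two buildings while a worker is still walking to the build site.

```python
# Hedged sketch of "resource locking": deduct a building's cost virtually
# at decision time, then physically once construction actually begins.

class ResourceLedger:
    def __init__(self, resources):
        self.physical = dict(resources)  # what the game engine reports
        self.locked = {}                 # virtually reserved amounts

    def available(self, res):
        return self.physical.get(res, 0) - self.locked.get(res, 0)

    def try_lock(self, cost):
        """cost: {resource: amount}. Lock all or nothing."""
        if any(self.available(r) < a for r, a in cost.items()):
            return False
        for r, a in cost.items():
            self.locked[r] = self.locked.get(r, 0) + a
        return True

    def commit(self, cost):
        """Construction actually started: spend the locked resources."""
        for r, a in cost.items():
            self.locked[r] -= a
            self.physical[r] -= a
```

For example, with 1000 gold on hand, locking the 700-gold barracks cost immediately makes a second barracks order fail, even though the engine still reports 1000 gold until the worker breaks ground.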

Tactics Manager

The tactics manager takes care of unit tasks pertaining to multi-unit military conflicts. There are three modules: the first assigns military units to groups, the second keeps military units on task by making sure they don’t go off course, and the third removes slain units from military groups.

The tactics manager provides an interface for high-level control of the military groups to be used by the strategy manager. All basic military unit commands are made available to the strategy manager (e.g. move, attack, stand ground, patrol), as are more abstract commands (e.g. attack enemy base).
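One way an abstract command can be layered on basic ones is by expanding it into per-unit basic orders; a minimal sketch, with all names and the order tuple format being assumptions:

```python
# Illustrative sketch of the tactics manager's interface: the abstract
# "attack enemy base" command expands into basic attack-move orders,
# one per unit in the group.

def attack_enemy_base(group: list, enemy_base_pos: tuple) -> list:
    """Expand the abstract command into per-unit basic orders."""
    return [("attack_move", unit, enemy_base_pos) for unit in group]
```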

Recon Manager

The recon manager is responsible for providing the other managers with aggregate information (e.g. the number of workers and military units the opponent has). Most current academic and commercial RTS AIs assume perfect information (i.e. they ignore the "fog of war"), which makes the environment fully observable. The developed agent removes this perfect-information assumption, so the recon manager must perform reconnaissance tasks (e.g. sending scouts across the map to gather information).
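Without perfect information, the recon manager can only aggregate what scouts have actually observed. A toy sketch of such aggregation (observation format and names are assumptions):

```python
# Illustrative sketch of recon under fog of war: counts are estimates
# built from scout sightings, with duplicate sightings of the same
# unit counted once.

def estimate_enemy_counts(observations: list) -> dict:
    """Aggregate scout reports into per-unit-type counts.

    Each observation is e.g. {"unit_id": 7, "type": "footman"}.
    """
    seen = {}
    for obs in observations:
        seen.setdefault(obs["type"], set()).add(obs["unit_id"])
    return {utype: len(ids) for utype, ids in seen.items()}
```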

Managers Interdependency

This section describes the relations between the managers, and how the individual managers' competencies are integrated to play a complete game. The paper then discusses the effect of removing a given manager from the system. The results were logical and could be anticipated by rule of thumb.


This section shows that the integrated agent performed well against two scripted agents: Soldier's rush and Knight's rush. Each script was tested on a medium and a large map. The agent played 15 games for each combination of map and scripted opponent.


Many of the losses suffered by the developed agent were due to the lack of sophistication in the tactics manager. Specifically, the tactics manager fails to concentrate military units in an area, in either offensive or defensive situations. When many parallel decisions are made elsewhere in the agent, small delays are introduced when sending commands to units, causing units to trickle towards an engagement and be easily defeated. Unit formation management is left as future work.

Posted in Papers Summaries, Uncategorized | Leave a Comment »

Situation Assessment Empirical Analysis in IStrategizer

Posted by Ogail on May 14, 2010

  • In this article we’ll introduce the idea of situation assessment in IStrategizer.
  • Situation Assessment is a process of gathering high-level information using low-level data to help in better decision making.
  • We’ll use three terms here:
    • Situation: a high-level representation of the state of the world.
    • Shallow features: raw features that can be computed directly from the game state; situations can be predicted from them.
    • Deep features: features derived from a situation; shallow features by themselves are not strong enough for selecting a strategy in a game, so additional deep features are needed.
  • For example, shallow features, like the ratio of a player’s resources to those of the opponent, are by themselves not very suggestive of an appropriate strategy. However, deeper features, like knowing whether a path or a barrier exists between the player and its opponent, can help in choosing between a rush and a tunneling strategy.
  • We have four situations:
    • Note that for each set of shallow features below, we have a set of deep features that helps us pick the most suitable set of plans to choose from.
    • Base Development Situation:
      • Shallow Features:
        • Basic player infrastructure is not available.
        • Advanced player infrastructure is not available.
        • Near resources for player base are not available.
        • Player has a lot of resources that are not consumed well in building the army.
      • Deep Features:
        • Basic player infrastructure is not available.
          • What’s the missing building exactly? Is it farm? Town Hall? Lumber Mill…
        • Advanced player infrastructure is not available.
          • What’s the advanced building you would like to upgrade to? Get new units (e.g. dragons)? Upgrade unit attributes?
        • Near resources for player base are not available.
          • What’s the resource that is not available?
          • Where’s the proposed place to gather from?
        • Player has a lot of resources that are not consumed well in building the army.
          • Which units would you like to produce more of, and hence which of their buildings to construct?
          • What’s the proposed direction for expanding your empire?
    • Build Army Situation:
      • Shallow Features:
        • Player has enough resources and infrastructure to construct an army.
        • Opponent is attacking the player or constructing an army, and the player does not have enough units to defend himself.
      • Deep Features:
        • Player has enough resources and infrastructure to construct an army.
          • What’s the most available resource to construct an army?
          • From history, which units does my opponent prefer?
          • What is the nature of the map (e.g. islands, no paths in between)?
          • How does the opponent defend his base?
        • Opponent is attacking the player or constructing an army, and the player does not have enough units to defend himself.
          • What are the most available resources and buildings to construct an army?
          • What are anti-units for the current opponent units?
    • Attack Situation:
      • Shallow Features:
        • Player has enough units to attack opponent.
        • Player’s resources are depleted and he wants to capture the opponent’s resources.
        • Player wants to expand his empire and the enemy occupies a nearby area.
        • Enemy is near the player’s base.
      • Deep Features:
        • Player has enough units to attack the opponent.
          • What is the nature of the enemy’s base and of the current player units, so as to utilize them for attacking? (e.g. a backdoor exists, the enemy has no long-range units, …)
        • Player’s resources are depleted and he wants to capture the opponent’s resources.
          • What is the nature of the enemy’s base and of the current player units, so as to utilize them for attacking? (e.g. a backdoor exists, the enemy has no long-range units, …)
        • Player wants to expand his empire and the enemy occupies a nearby area.
          • What is the nature of the enemy’s base and of the current player units, so as to utilize them for attacking? (e.g. a backdoor exists, the enemy has no long-range units, …)
        • Enemy is near the player’s base.
          • What is the nature of the enemy’s base and of the current player units, so as to utilize them for attacking? (e.g. a backdoor exists, the enemy has no long-range units, …)
        • Let us say more about the deep features used in this assessment.
          • Nature of the player’s army:
            • Ranged.
            • Anti-air attack.
            • Air attack.
            • Strong units.
            • Massive number of units.
          • Nature of the enemy’s base:
            • Not defended at all.
            • Has backdoors.
            • Has an air attack defense.
            • Has an anti air attack defense.
            • Has a lot of units inside.
            • Has weak points (i.e. important buildings that are not defended well)
          • So, based on the combination of these two natures, we can pick a suitable attack strategy. Example:
            • Nature of the player’s army: air attack.
            • Nature of the enemy’s base: has backdoors, has weak points.
            • Air Attack Strategy is used!
    • Defense Situation:
      • Shallow Features:
        • Player base is not defended.
        • Enemy is attacking player or constructing an army.
      • Deep Features:
        • What is the nature of the map with respect to defending the player’s base? (e.g. surrounded by woods)
        • Where are the backdoors in the player’s city?
        • Where are the most important regions in my city?
        • From history, which units does the opponent prefer to attack with?
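The shallow/deep feature distinction described above can be sketched as a toy decision function: a shallow feature (resource ratio) is computed directly from the game state, while a deep feature (whether a land path to the enemy exists) is derived and can override what the shallow feature alone would suggest. Feature names and the decision rule are illustrative assumptions:

```python
# Toy sketch of shallow vs. deep features in situation assessment.
# The resource ratio is a shallow feature; path existence is a deep
# (derived) feature that gates the strategies the ratio suggests.

def choose_strategy(player_resources: int, enemy_resources: int,
                    path_to_enemy_exists: bool) -> str:
    resource_ratio = player_resources / max(enemy_resources, 1)  # shallow
    if not path_to_enemy_exists:   # deep feature: barrier between bases
        return "tunneling"
    return "rush" if resource_ratio >= 1.0 else "defend"
```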

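The attack-situation example above (an air-attack army against a base with backdoors and weak points) amounts to a small lookup over the two natures. The rule set below is an illustrative subset, not the system's actual knowledge base:

```python
# Sketch of picking an attack strategy from the combination of the
# player's army nature and the enemy's base nature. The rules and
# nature labels are assumed for illustration.

def pick_attack_strategy(army_nature: set, base_nature: set) -> str:
    if "air_attack" in army_nature and "anti_air_defense" not in base_nature:
        return "air attack strategy"
    if "has_backdoors" in base_nature and "ranged" in army_nature:
        return "backdoor raid strategy"
    if "massive_numbers" in army_nature:
        return "frontal assault strategy"
    return "harass and scout strategy"
```

For the example in the text, an air-attack army against a base with backdoors and weak points (and no anti-air defense) yields the air attack strategy.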
Posted in Technical Documents | 2 Comments »