Adaptive AI Engine for RTS Games

Discussing the theory and practice

Archive for October, 2010

Introducing – “On-line Planning for Resource Production in RTS”

Posted by ferasferas on October 31, 2010

On-line Planning for Resource Production in Real-Time Strategy Games

Hei Chan, Alan Fern, Soumya Ray, Nick Wilson and Chris Ventura

School of Electrical Engineering and Computer Science

Oregon State University

Corvallis, OR 97330

{chanhe,afern,sray,wilsonic,ventura}@eecs.oregonstate.edu

Goal:

Develop an action-selection mechanism that produces a required quantity of resources as quickly as possible.

Planner:

A computationally efficient “action-selection” mechanism which, at each decision epoch, creates a possibly sub-optimal concurrent plan from the current state to the goal and then begins executing the plan’s initial actions.

How it’s done:

Plans are formed via a combination of means-ends analysis (MEA), scheduling, and a bounded search over sub-goals that are not required for goal achievement but may improve the makespan.

Two key problem domains:

- Resource production and tactical battles.

In resource production, the player has to produce (or gather) various raw materials, buildings, and civilian and military units to improve their economic and military power.

In tactical battles, a player uses military units to gain territory and defeat enemy units (through offensive or defensive actions).

“Winning the Resource Production race is often a key factor in overall success”.

Uses:

1- In a computer opponent’s AI.

2- A human can specify the resources needed, and the planner will find the best way to obtain them.

“RTS resource production is interesting from a pure A.I. Planning perspective as it encompasses a number of challenging issues.”

First, resource production involves temporal actions with numeric effects.

Second, performing well in this task requires highly concurrent activity.

Third, the real-time constraints of the problem require that action selection be computationally efficient in a practical sense.

Why?

Most existing planners are:

1- unable to handle temporal and numeric domains,

2- simply too inefficient to be useful, or

3- prone to producing highly sub-optimal plans.

The planning component used by the online planner is based on a combination of means-ends analysis (MEA) and scheduling.

Given an initial state and a resource goal, MEA is used to compute a sequential plan that reaches the goal using the minimum number of actions and resources, in the sense that any valid plan must include all of the actions in the MEA plan.

Importantly, the special structure of our domain guarantees that MEA will produce such a plan and do so efficiently (linear time in the plan length).

Given the minimum sequential plan, we then reschedule those actions, allowing for concurrency, in an attempt to minimize the makespan. This scheduling step is computationally hard; however, we have found that simple worst-case quadratic-time heuristic methods work quite well. Thus the MEA step and the scheduling step are both low-order polynomial operations in the minimum number of actions required to achieve the goal, allowing for real-time efficiency.
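
To make the MEA step concrete, here is a minimal Python sketch. It is not the authors’ implementation: the action set, costs, and the renewable/consumable split are illustrative assumptions. The sketch backward-chains from the resource goal and emits a sequential plan containing only actions that any valid plan must include; the scheduling step, omitted here, would then assign start times to exploit concurrency and shrink the makespan.

```python
from collections import Counter

# Illustrative Wargus-like actions: name -> (consumes, requires, produces).
# "requires" resources (workers, buildings) are borrowed, not used up;
# "consumes" resources (gold) are spent. Durations are omitted: MEA only
# sequences actions; the scheduler would use durations to set start times.
ACTIONS = {
    "collect_gold":   ({}, {"peasant": 1, "townhall": 1}, {"gold": 100}),
    "build_barracks": ({"gold": 700}, {"peasant": 1}, {"barracks": 1}),
    "train_footman":  ({"gold": 600}, {"barracks": 1}, {"footman": 1}),
}
PRODUCER = {res: name for name, act in ACTIONS.items() for res in act[2]}

def mea(goal, have, plan):
    """Backward-chain: for each unsatisfied goal resource, first satisfy the
    producing action's own requirements, then emit the action itself."""
    for res, qty in goal.items():
        shortfall = qty - have[res]
        if shortfall <= 0:
            continue
        consumes, requires, produces = ACTIONS[PRODUCER[res]]
        n = -(-shortfall // produces[res])      # ceil division
        sub = Counter()
        for r, q in requires.items():
            sub[r] = max(sub[r], q)             # borrowed once, then reused
        for r, q in consumes.items():
            sub[r] += n * q                     # spent by every repetition
        mea(sub, have, plan)                    # plan prerequisites first
        plan.extend([PRODUCER[res]] * n)
        for r, q in consumes.items():
            have[r] -= n * q
        have[res] += n * produces[res]

plan = []
mea({"footman": 2}, Counter({"townhall": 1, "peasant": 1}), plan)
print(plan)  # gold collection and a barracks first, then two training actions
```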

References:

MEA (means-ends analysis): http://en.wikipedia.org/wiki/Means-ends_analysis

Posted in AI for Games, Case-Based Planning, Case-Based Reasoning, Papers Summaries, Planning | 3 Comments »

Paper read: An Integrated Agent for Real-Time Strategy Games (2008)

Posted by MHesham on October 27, 2010

Josh McCoy and Michael Mateas. 2008. An Integrated Agent for Playing Real-Time Strategy Games. In Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI 2008).

The paper presents a real-time strategy (RTS) AI agent that integrates multiple specialist components to play a complete game. The idea is to partition the problem space into the domains of competence seen in expert human players, and to use expert human knowledge in each domain to play a complete game.

Introduction

RTS games provide a rich and challenging domain for autonomous agent research. In games like Warcraft and StarCraft, the player has to build up armies to defeat the enemy while defending his own base. In RTS games one has to make real-time decisions that directly or indirectly affect the environment, which imposes a complexity that makes playing an RTS game a big challenge for an AI agent.

An RTS game contains a large number of unique objects and actions. Domain objects include units, buildings with different capabilities and attributes, research and upgrades for these units and buildings, and resources to be gathered. Domain actions include unit and building construction, deciding what kind of research to perform for each unit and building, resource management, and utilizing unit capabilities during battle.

Actions in RTS games occur at multiple levels:

  1. High-level strategic decisions: which types of units and buildings to produce, which enemy to attack.
  2. Intermediate (medium) level tactical decisions: how to deploy a group of units across the map.
  3. Low-level micromanagement decisions: individual unit actions.

The combination of these three levels of decisions makes it hard to use the game-tree search techniques that have proven successful for games like chess. To illustrate the complexity, a typical RTS player must engage in multiple simultaneous real-time tasks: in the middle of a game, a player may be mounting an attack on the enemy base while researching upgrades for his army, at the same time taking care of resource management, and it is not strange to find him also defending his own base from an attack in the rear. To make it more complex, the RTS game environment incorporates incomplete information (i.e., a partially observable environment) through the use of the “fog of war”, which hides most of the map; this requires the player to repeatedly send scouts across the map to learn the current state of the enemy.

These attributes of the RTS domain require an agent architecture that incorporates human-level decision making about multiple simultaneous tasks at multiple levels of abstraction, and combines reasoning with real-time activity.

The SORTS agent is capable of playing a complete RTS game, including the use of high-level strategy. While SORTS is an impressive agent, there are improvements to be made. The agent developed in this paper adds a reactive planning language capable of more tightly coordinating asynchronous unit actions in unit micromanagement tasks, decomposes the agent into more distinct modules, and incorporates expert human knowledge.

Related Work

Current research on RTS AI agents tends to focus on either low-level micromanagement or high-level strategy, leaving the tactics and micromanagement to the individual units’ built-in AI. High-level strategy and micromanagement are both important for RTS play; the failure to build an integrated agent able to combine all the AI decision levels in RTS has resulted in agents unable to play at a level competitive with human players.

A number of researchers have focused on applying a single algorithm to a single aspect of the game: Monte Carlo planning for micromanagement, the Planning Domain Definition Language (PDDL) to explore the tactical decisions involved in build orders, and relational Markov decision processes (MDPs) to generalize strategic plans. All of these made local improvements, but were never integrated into a single agent that plays a complete game. There has also been evolutionary learning of tactical decisions, and case-based reasoning over human traces, which makes it possible for an agent to play a complete game. However, these methods were implemented as single components concerned with high-level strategy and limited tactics, leaving micromanagement to the individual units’ built-in AI.

SORTS is an agent capable of playing a complete RTS game, incorporating high-level strategy. Unit micromanagement is handled using finite state machines (FSMs). To enable a larger amount of tactical coordination, the military and resource FSMs are coordinated by global coordinators, and simple learning is used in these global coordinators to enhance the agent’s performance.

While the SORTS agent is impressive, capable of playing a complete game by integrating multiple modules, there are a number of improvements to be made. The agent proposed in the paper adds a reactive planning language capable of coordinating asynchronous unit actions in unit micromanagement.

Expert RTS Play

Expert RTS players and the RTS community have developed standard strategies, tactics, and micromanagement techniques. As in chess, part of expert play is to choose techniques at multiple levels of abstraction in response to the recognized opponent strategy, tactics, and micromanagement, and then to improvise on these techniques. However, RTS play far exceeds chess in complexity.

We find general rules of thumb in RTS play. Being “behind on economy”, as I understand it, refers to producing troops based only on your current economy: the more resources you have, the more troops you can train, and falling behind on economy guarantees a loss against an expert player. A rule of thumb can nevertheless be violated depending on the situation (e.g., available resources on the map, distance between player and enemy, etc.). As an example, the “Probe Stop” strategy halts economic expansion in favor of putting all available income into military production, which results in a temporary spike in military strength; used unwisely, this strategy leads to a complete loss if the produced troops die early.

When we talk about high-level strategy, we find that the player has to develop and deploy strategies which coordinate the style of the economic build-up, the base layout, and the offensive and defensive style. A well-known strategy in Warcraft 2 is the “Knight’s rush”: the knight is a heavy unit in the middle of the game’s tech tree, so the player focuses on making the minimum necessary upgrades and buildings to produce knights, and as soon as they are available performs a fast rush to take out the enemy. This strategy trades early defense for later offensive power; the cost is that in the early game the player has no defensive structures or units.

A player decides his high-level strategy at the beginning of the game based on information such as the map size, the number of opponents, and the state of resources on the map. However, the player must be ready to switch strategies based on new information gathered through map scouting.

When we talk about medium-level tactics, we find ourselves talking about deployment and grouping decisions. Unlike micromanagement, tactics involve coordinating groups of units to do a specific task. One common tactic found in Warcraft 3 is coordinating units to block an enemy retreat using area-effect attacks, or blocking terrain bottlenecks with units (e.g., standing on a bridge that allows only a few units to pass at a time). Tactical decisions require knowledge of common tactics and their counter-tactics.

When we talk about low-level micromanagement, we find that expert human players have developed micromanagement techniques applicable to nearly all RTS games. The “dancing” technique is a specific use of ranged units in which a group of ranged units fires a ranged attack simultaneously, then “dances” back during the “cooldown” (i.e., the time a unit needs after each attack before it can attack again). This dancing allows weak ranged units to stay away from the melee battle area during cooldown. We call dancing a micromanagement technique because it involves detailed control of individual unit moves. When micromanagement is absent, units respond to high-level directives such as “Attack” using only their simple built-in behaviors (e.g., pathfinding, obstacle avoidance, etc.).

In an RTS game, the map is only partially revealed because of the “fog of war”; this requires the player to send scouts across the map to find the enemy base position, learn the nature of the enemy’s economic build-up (which reveals the strategy the enemy is likely to follow), and learn the physical layout of the base (e.g., whether it is heavily defended).

Framework

The software framework of the developed agent consists of ABL (A Behavior Language), a reactive planning language, connected to the Wargus RTS engine.

Agent Architecture

The agent is composed of distinct managers, each of which is responsible for one or more of the major tasks mentioned in the Expert RTS Play section. The agent consists of strategy, income, production, tactics, and recon managers.

By factoring the agent according to expert play tasks, it is easier to modify individual managers and to measure the effect each manager has on overall performance when its competence is increased or decreased.

Strategy Manager

The strategy manager is responsible for high-level strategic decisions. Its first task is to determine the proper initial order in which to construct buildings and units.

The InitialStrategy module queries the recon manager for the distance to the enemy base, and this distance is used to choose the proper strategy. If the enemy base is close, then a rush-attack strategy is applicable, in which one barracks is built and some units are produced without building a second farm. This gives the agent the ability to defend against early enemy attacks, and also the potential to make an early attack of its own (a.k.a. a rush attack). If the enemy base is far away, then there is time to build a robust economy and produce a large military before engaging in battle.
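
As a rough illustration, the InitialStrategy distance test might look like the following sketch; the threshold value and the build orders are assumptions made for illustration, not values from the paper.

```python
# Hypothetical sketch of the InitialStrategy decision; RUSH_DISTANCE and the
# build orders are illustrative assumptions, not values from the paper.
RUSH_DISTANCE = 40  # map tiles (assumed threshold)

def choose_initial_build_order(distance_to_enemy: int) -> list[str]:
    if distance_to_enemy <= RUSH_DISTANCE:
        # Close enemy: one barracks and early units, delaying the second farm,
        # so we can defend early pressure or rush the enemy ourselves.
        return ["farm", "barracks", "footman", "footman", "footman"]
    # Distant enemy: there is time to grow a robust economy before the military.
    return ["farm", "peasant", "peasant", "farm", "barracks", "lumber_mill"]
```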

The TierStrategy module owns the highest-priority recurring task in the strategy manager. At each of the three tiers in Wargus, TierStrategy’s responsibilities include maintaining unit-cap control with regard to production capacity, constructing units and buildings superior to those of the opponent, and attacking when the agent has a military advantage.

TierStrategy starts making decisions after the initial build order controlled by InitialStrategy is complete. A primary responsibility of TierStrategy is to determine which kinds of buildings or units to produce during the game past the initial build order. TierStrategy is also responsible for determining when to attack, given the number of military units controlled by the agent versus the opponent.

Income Manager

The income manager is responsible for the details of controlling workers who gather resources, releasing workers for construction and repair tasks, and maintaining the gold-to-wood ratio set by the strategy manager.

Production Manager

The production manager is responsible for constructing units and buildings. It has modules that serve three priority queues: one for unit construction, one for building construction, and one for repeated cycles of unit and building construction.

The production manager also applies what is called “resource locking”: the resources required for a building are subtracted virtually from the current physical resources, because time passes between the moment the construction decision is taken and the moment the worker reaches the destination and starts building.
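
A minimal sketch of resource locking, assuming a simple ledger of physical and reserved amounts (the class and method names are illustrative, not from the paper):

```python
class ResourceLedger:
    """Tracks physical resources plus virtual reservations ("locks") so two
    construction decisions cannot spend the same gold while a worker is
    still walking to the build site."""

    def __init__(self, gold: int, wood: int):
        self.physical = {"gold": gold, "wood": wood}
        self.locked = {"gold": 0, "wood": 0}

    def available(self, res: str) -> int:
        return self.physical[res] - self.locked[res]

    def try_lock(self, cost: dict) -> bool:
        """Reserve `cost` when the construction decision is made."""
        if all(self.available(r) >= q for r, q in cost.items()):
            for r, q in cost.items():
                self.locked[r] += q
            return True
        return False

    def commit(self, cost: dict):
        """Spend the reservation once construction actually begins."""
        for r, q in cost.items():
            self.locked[r] -= q
            self.physical[r] -= q
```

For example, `try_lock({"gold": 700})` would be called when the build decision is made, and `commit({"gold": 700})` only when the worker arrives and construction starts; in between, that gold is invisible to other production decisions.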

Tactics Manager

The tactics manager takes care of unit tasks pertaining to multi-unit military conflicts. There are three modules: the first assigns military units to groups, the second keeps military units on task by making sure they do not go off course, and the third removes slain units from military groups.

The tactics manager provides an interface for the high-level control of military groups, to be used by the strategy manager. All basic military unit commands are made available to the strategy manager (e.g., move, attack, stand ground, patrol, etc.), along with more abstract commands (e.g., attack enemy base).

Recon Manager

The recon manager is responsible for providing the other managers with aggregate information (e.g., the number of workers and military units the opponent has). Most current academic and commercial RTS AIs assume perfect information (i.e., they ignore the “fog of war”), which makes the environment fully observable. The developed agent removes this perfect-information assumption, allowing the recon manager to carry out reconnaissance tasks (e.g., sending scouts across the map to gather information).

Managers Interdependency

This section describes the relations between the managers, and how the individual managers’ competencies are integrated to play a complete game. The paper then examines the effect of removing certain managers from the system; the results were logical and can be anticipated using rules of thumb.

Results

This section shows that the integrated agent performed well against two scripted agents: Soldier’s rush and Knight’s rush. Each script was tested on two map sizes, medium and large, and the agent played 15 games for each combination of map and scripted opponent.


Many of the losses suffered by the developed agent were due to the lack of sophistication in the tactics manager. Specifically, the tactics manager fails to concentrate military units in one area in either offensive or defensive situations. When many parallel decisions are made elsewhere in the agent, small delays are introduced in sending commands to units, causing units to trickle toward the engagement and be easily defeated. Unit formation management is left as future work.

Posted in Papers Summaries, Uncategorized | Leave a Comment »

Decision Making Levels in RTS Games

Posted by MHesham on October 27, 2010

RTS games provide a rich and challenging domain for autonomous agent research. In RTS games one has to make real-time decisions that directly or indirectly affect the environment, which imposes a complexity that makes playing an RTS game a big challenge for an AI agent.

An RTS game contains a large number of unique objects and actions. Domain objects include units, buildings with different capabilities and attributes, research and upgrades for these units and buildings, and resources to be gathered. Domain actions include unit and building construction, deciding what kind of research to perform for each unit and building, resource management, and utilizing unit capabilities during battle.

Actions in RTS games occur at multiple levels:

  1. High-level strategic decisions
  2. Intermediate (medium) level tactical decisions
  3. Low-level micromanagement decisions

The most stunning part of AI is the lack of standards. You will find each AI book or paper author describing the decision-making levels in RTS games from his own perspective; this is the nature of the AI field. Some authors treat tactics and micromanagement as one thing, while others name the medium level “tactics” and the low level “micromanagement”. Each time you read about decision-making levels in RTS, expect different namings to pop up and leave you feeling hazy.

The need for standard terms is obvious. Still, we can agree on the concepts behind each level of decision making in RTS games. Below we describe each level, supported with examples, so that the reader gets a clear picture of the decision-making hierarchy regardless of naming and terminology.

High-Level AI (Strategy)

We can think of the high-level strategic AI as the general of a real army. High-level plans usually require actions at many different levels of the AI to complete (e.g., build base, train units, set income ratio, attack enemy, request information, etc.). Perception at this level is built on information from the lower levels to determine what the enemies are doing. Given all this precious feedback, the army general (in our situation, the player, whether human or AI agent) is able to deal with threats or take strategic decisions. In this way the high-level strategy affects everything from the individual soldiers to the entire economic system.

We find that the player has to develop and deploy strategies which coordinate the style of the economic build-up, the base layout, and the offensive and defensive style. A well-known strategy in Warcraft 2 is the “Knight’s rush”: the knight is a heavy unit in the middle of the game’s tech tree, so the player focuses on making the minimum necessary upgrades and buildings to produce knights, and as soon as they are available performs a fast rush with them to take out the enemy. This strategy trades early defense for later offensive power; the cost is that in the early game the player has no defensive structures or units.

A player decides his high-level strategy at the beginning of the game based on information such as the map size, the number of opponents, and the state of resources on the map. However, the player must be ready to switch strategies based on new information gathered through map scouting.

Medium-Level AI (Tactics)

Some games, like Total Annihilation, use what are called “commanders” to control groups of units. In other games, the player uses commanders to group units into fighting elements and control them in a large-war sense.

When we talk about medium-level AI, we find ourselves talking about deployment and grouping decisions. Unlike micromanagement, tactics involve coordinating groups of units to do a specific task. One common tactic found in Warcraft 3 is coordinating units to block an enemy retreat using area-effect attacks, or blocking terrain bottlenecks with units (e.g., standing on a bridge that allows only a few units to pass at a time). This can be considered medium-level AI because it requires more than individual unit actions, yet it is not fully high-level strategy. Tactical decisions require knowledge of common tactics and their counter-tactics.

A simple example is a commander choosing a new destination for a group of units (medium level), while the individual units decide how to stay in formation and use the terrain features to get there (low level). By thinking this way, you can write a high-level system that covers large troop movements and a lower-level system that gets units over and around the map. The part of the system that moves units across the map does not have to worry about keeping the long-range units behind the short-range ones.

Low-level AI (Micromanagement)

Micromanagement, in RTS game terms, is defined as small, detailed gameplay commands, most commonly commands such as moving units or using a unit’s special abilities during combat. Micromanaging units in an RTS game is essentially the task of giving orders to individual units. The ultimate goal of micromanagement is to win while losing as few units as possible.

When we talk about human players employing micromanagement, we find that expert human players have developed micromanagement techniques applicable to nearly all RTS games. The “dancing” technique is a specific use of ranged units in which a group of ranged units fires a ranged attack simultaneously, then “dances” back during the “cooldown” (i.e., the time a unit needs after each attack before it can attack again). This dancing allows weak ranged units to stay away from the melee battle area during cooldown. We call dancing a micromanagement technique because it involves detailed control of individual unit moves. When micromanagement is absent, units respond to high-level directives such as “Attack” using only their simple built-in behaviors (e.g., pathfinding, obstacle avoidance, etc.).
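
To make the idea concrete, here is a minimal Python sketch of one AI tick of dancing. The engine interface is assumed to be passed in as attack/move callbacks, and the unit fields and numbers are illustrative, not taken from any particular game.

```python
from dataclasses import dataclass

@dataclass
class RangedUnit:
    pos: tuple                  # (x, y) position
    attack_range: float = 5.0   # tiles (assumed)
    cooldown: float = 1.5       # seconds between attacks (assumed)
    last_attack: float = -999.0

def away_from(threat, pos, dist=3.0):
    """A point `dist` tiles from `pos`, directly away from `threat`."""
    dx, dy = pos[0] - threat[0], pos[1] - threat[1]
    n = max((dx * dx + dy * dy) ** 0.5, 1e-6)
    return (pos[0] + dist * dx / n, pos[1] + dist * dy / n)

def dance_step(units, target_pos, now, attack, move):
    """One AI tick: units off cooldown and in range fire; the rest fall back."""
    for u in units:
        dx, dy = u.pos[0] - target_pos[0], u.pos[1] - target_pos[1]
        in_range = (dx * dx + dy * dy) ** 0.5 <= u.attack_range
        if now - u.last_attack >= u.cooldown and in_range:
            attack(u, target_pos)       # engine attack command (assumed)
            u.last_attack = now
        else:
            move(u, away_from(target_pos, u.pos))  # kite back while reloading
```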

When we talk about scripted AI employing micromanagement, we can recall the archer behavior in the Age of Empires games: the computer sends in many weak projectile units, which then retreat, shoot, and retreat again. This very simple micromanagement behavior makes these weak units very effective, because they strike and make the guards spread in all directions.

Decision Making Hierarchy


To support this hierarchy, let us consider a complete example. The general decides that attacking player #3 is the best course of action (high level) after asking the “Recon Commander” about the state of the enemy. The “Troops Commander” (medium level) then asks the “Production Commander” to produce the necessary troops (soldiers and archers). When troop construction is finished, the Troops Commander divides the troops into two groups and orders the first to attack from the west and the second to attack from the east. As always, the low-level micromanagement, pathfinding, and avoidance AI gets all those units across the map to their destinations.
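
As a self-contained toy of this flow (every name here is hypothetical, and the “units” are just integers), the three levels might be wired together like this:

```python
def general(recon_report):
    """High level: choose whom to attack from scouting information."""
    weakest = min(recon_report, key=lambda p: recon_report[p]["army_size"])
    return ("attack", weakest)

def troops_commander(order, troops):
    """Medium level: split the army and assign approach directions."""
    _, target = order
    half = len(troops) // 2
    return ([(u, target, "west") for u in troops[:half]] +
            [(u, target, "east") for u in troops[half:]])

def unit_ai(unit, target, side):
    """Low level: each unit pathfinds to its own destination."""
    return f"unit {unit}: pathfind toward {target}, approaching from the {side}"

report = {"player2": {"army_size": 30}, "player3": {"army_size": 12}}
for unit, target, side in troops_commander(general(report), list(range(4))):
    print(unit_ai(unit, target, side))
```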

Notable Conclusion

The medium-level AI is worth research and work, because it is usually lacking in most games due to its complexity, whether in creation or in tuning. High-level goals can be fairly direct and simple (e.g., “Attack player #3”), stripped of all the details required to accomplish the attack; the entire plan is three words. Low-level goals are also straightforward, involving very atomic behaviors and local, small-scale perceptions (e.g., “Attack unit with id 3 at position (10, 30)”). In contrast, the commanders of the medium-level AI require a large collection of feedback information from many sources, and must combine all these perceptions into short- and medium-range plans that coordinate group movements and resource allocation.

References

  • AI Game Engine Programming, 2nd edition, by Brian Schwab
  • A CBR/RL System for Learning Micromanagement in RTS Games – 2009
  • An Integrated Agent for Playing Real-Time Strategy Games – 2008

Posted in RTS Games Concepts | Leave a Comment »

11-10-2010 Meeting Minutes

Posted by Ogail on October 11, 2010

Agenda:
• What we’ve done?
• How we’ve done it?
• What you can do?
• What’s next?

What we’ve done

Developed an agent that is capable of:
1. Learning from human demonstration.
2. Building a plan and adapting it online.
3. Assessing the current situation and reacting based on it.
4. Learning from its failures.
5. Encapsulating its learnt knowledge in a portable case base.

How we’ve done it

1. Reading about general game AI:
a. AI Game Engine Programming.
b. Artificial intelligence for games.
c. Programming game AI by example.

2. Reading about latest research in RTS AI:
a. All papers reside in the project repository in the “Adaptation and Opponent Modeling” folder.

3. Reading about machine planning:
a. Machine planning papers reside in the project repository under the “CBR/CBP” folder.

4. Reading about machine learning:
a. Reinforcement Learning: An Introduction.

5. Understanding Stratagus code:
a. Open the code and enjoy.

The minimal requirements are:
1. Reading Santi’s papers.
2. Reading about machine learning.
3. Understanding Stratagus code.

What you can do

1. Enhance the current engine (60% theory, 40% code):
a. Human demonstration feature:
i. Adding parallel plan extraction.
ii. Adjusting the attack learning method.
b. Planning feature:
i. Adding parallel plan execution.
c. Situation assessment:
i. Converting it from static rules into generated decision trees.
d. Learning:
i. Needs intensive testing and tuning of parameters.
2. Modularize the engine (20% theory, 80% code):
a. Make the middle layer generic for any RTS game.
b. Make the middle-layer configuration scripted (or otherwise external).
c. Modularize the algorithms used, so we can reuse them in any suitable context.
d. Develop the engine in API form.
3. Knowledge visualization (N/A):
a. Develop a tool to visualize the agent’s knowledge and summarize how it will react while playing. This tool will let us investigate the agent’s knowledge deeply.
4. Tactics planning and learning.
5. Propose other approaches for planning and learning.
6. Parallelized AI:
a. Some processing in the engine is done sequentially where it could be done in parallel; a distributed/parallel API (e.g., OpenCL) could be used to parallelize the agent’s processing.

What’s next?
Tasks are divided as follows:
1. Muhamad Hesham will read about General Game AI.
2. Magdy Medhat will read about latest research in RTS Games AI.
3. Mohamed Abdelmonem will read about machine planning.
4. Islam Farid will read about machine learning (especially Reinforcement Learning).

We’ve also agreed on the following:
1. We’ll first enhance the engine (i.e. develop feature #1) while we are reading and building our knowledge.
2. Then, we’ll start developing the learning and planning in tactics level.

How will we organize the engine development?
Every member of the team will be involved in the development of a specific feature together with either Abdelrahman or Omar, alongside his readings. For now, Magdy Medhat will work on the first feature.

Final note: we’ll post tasks on the blog, and you can track the results there.

Posted in Meeting Minutes | 1 Comment »

What’s done, what’s next?

Posted by Ogail on October 9, 2010

Good evening,
In this post we give a brief overview of our past research and future plans.

During the last year, we were concerned with the fields of planning, learning, and knowledge sharing in RTS (Real-Time Strategy) games.

A- We started our work with our graduation project (find it in this post), entitled “Adaptive Intelligent Agent for RTS Games” (2010). We applied a novel planning technique (Online Case-Based Planning) and a machine learning approach (Reinforcement Learning) in order to achieve artificial intelligence that approaches human behavior. We did our best during this project on both the research and development sides. Below are the research-related activities we carried out:

1- Doing research using a number of papers, theses, and books, as follows:

Papers encouraging research in this area:

RTS Games and Real–Time AI Research – 2003
Call for AI Research in RTS Games – 2004

Papers adopting Case Based Planning:

Case-Based Planning and Execution for RTS Games – 2007
On-Line Case-Based Plan Adaptation for RTS Games- 2008
Learning from Human Demonstrations for Real-Time Case-Based Planning – 2008
Situation Assessment for Plan Retrieval in RTS Games – 2009
On-Line Case based Planning – 2010

Papers adopting Evolutionary Algorithms & Dynamic Scripting:

Co-evolution in Hierarchical AI for Strategy Games – after 2004
Co-evolving Real-Time Strategy Game Playing Influence Map Trees with genetic algorithms
Improving Adaptive Game AI With Evolutionary Learning – 2004
Automatically Acquiring Domain Knowledge For Adaptive Game AI using Evolutionary Learning – 2005

Papers adopting Reinforcement Learning & Dynamic Scripting:

Concurrent Hierarchical Reinforcement Learning – 2005
Hierarchical Reinforcement Learning in Computer Games – After 2006
Goal-Directed Hierarchical Dynamic Scripting for RTS Games – 2006
Hierarchical Reinforcement Learning with Deictic Representation in a Computer Game – After 2006
Monte Carlo Planning in RTS Games – After 2004
Establishing an Evaluation Function for RTS games – After 2005
Learning Unit Values in Wargus Using Temporal Differences – 2005
Adaptive reinforcement learning agents in RTS games – 2008

Papers adopting Hybrid CBR/RL approaches :

Transfer Learning in Real-Time Strategy Games Using Hybrid CBR-RL – 2007
Learning continuous action models in a RTS Environment – 2008

Related Books:

AI Game Engine Programming
AI for Games

2- Developing an AI-Engine for RTS Games

We used an RTS game engine named “Stratagus” to develop our AI game engine. The game that we used as a test-bed is a clone of the well-known Warcraft 2 game.

3- Maintaining the project blog

https://rtsairesearch.wordpress.com/


4- Maintaining the project repository.

5- Maintaining our own blogs:

  • OmarsBrain.wordpress.com (Omar Enayet)
  • AbdelrahmanOgail.wordpress.com (Abdelrahman Al-Ogail)

B- Our next step was publishing a paper entitled “Intelligent Online Case-Based Planning Agent Model in RTS Games” at ISDA 2010. Find it in this post.

Concerning our future plans, we look forward to achieving the following long-term goals:

1- Adding new theory in the area of “simulation of human behavior”.
2- Developing a commercial AI engine for RTS games specifically and for games in general. We have already started, and we have considerable experience in game development.
3- Participating in related contests around the world for AI engines in RTS games (such as RoboCup, the AAAI StarCraft Competition, and the ORTS Competition).
4- Establishing a major research group in this field in Egypt and becoming pioneers in it worldwide.

However, our short-term goal is enhancing the current engine, which will be able to plan and learn efficiently when playing against static AI, and using it as a test-bed to publish a number of papers. Some of these papers are related to:

1- Introducing the whole agent model and theory at an AI-related conference.
2- Introducing the whole AI game engine from a game-industry point of view at a game-industry conference.
3- More details and testing concerning the hybridization of Online Case-Based Planning and Reinforcement Learning (the topic of our last paper).
4- Knowledge representation for plans and experience in RTS games.
5- Enhancing the agent’s situation assessment algorithm.
6- Comparing case-based reasoning to reinforcement learning.

Other long-term paper topics include:
1- Including different planning algorithms/systems, letting the agent use them, and making an intensive comparison between these planning systems.
2- Including different learning algorithms/systems, letting the agent use them, and making an intensive comparison between these learning systems.
3- Multi-agent AI: machine collaboration with other machines, or machine collaboration with human players.
4- Knowledge (gaming experience) sharing.
5- Opponent modeling.

Posted in Orientation | Leave a Comment »

I-Strategizer Project Documentation – Version 1.0

Posted by Ogail on October 9, 2010

We’ve written up the last year of research in a single document. It acts as a reference manual for most of our research and development, and serves as a general orientation document for anyone who would like to start researching in this area. Download it from the link below:

I-Strategizer Documentation Version 1.0

Posted in Orientation | Leave a Comment »

Intelligent Online Case-Based Planning Agent Model for Real-Time Strategy Games

Posted by Ogail on October 9, 2010

Hi all,

We’ve recently published a paper titled “Intelligent Online Case-Based Planning Agent Model for Real-Time Strategy Games” at the 10th International Conference on Intelligent Systems Design and Applications (ISDA 2010).

Posted in Publications | Leave a Comment »