3. Programmer: Scoring / Weighting

EAI Programmer's description:

I would like to describe the currently implemented scoring system in JPortal.

What Scoring?

The latest AI implementation in JPortal goes like this (a rough code sketch follows the list):

  1. there is a current game in progress

  2. calculate the current score of that game, then create a virtual copy of it, and on that virtual game:

  3. try every possible move there is

  4. for each such try, calculate the score again

  5. if the score is higher than the initial one, it is a good move

  6. take the highest score found

  7. apply the best moves to the real game
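
To make steps 2 to 7 concrete, here is a minimal sketch of that search loop. All of the names in it (the generic Game/Move type parameters, the score, copy and apply helpers) are placeholders for illustration and are not taken from the real JPortal code:

    import java.util.Collections;
    import java.util.List;
    import java.util.function.BiConsumer;
    import java.util.function.Function;

    // Hypothetical sketch only; G stands for a game state, M for a move.
    class MoveSearchSketch<G, M>
    {
        // scoreFn scores a game state, copyFn clones one, applyFn plays moves on a clone.
        List<M> findBestMoves(G realGame,
                              List<List<M>> possibleMoveSets,
                              Function<G, Integer> scoreFn,
                              Function<G, G> copyFn,
                              BiConsumer<G, List<M>> applyFn)
        {
            int bestScore = scoreFn.apply(realGame);       // 2. score of the current game
            List<M> bestMoves = Collections.emptyList();   // nothing better found yet

            for (List<M> tryMoves : possibleMoveSets)      // 3. try all possibilities
            {
                G virtualGame = copyFn.apply(realGame);    // virtual game, real game untouched
                applyFn.accept(virtualGame, tryMoves);
                int tryScore = scoreFn.apply(virtualGame); // 4. score the try
                if (tryScore > bestScore)                  // 5./6. keep only the highest score
                {
                    bestScore = tryScore;
                    bestMoves = tryMoves;
                }
            }
            return bestMoves;                              // 7. these get applied to the real game
        }
    }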

The scoring is a crucial part of the new AI: bad scoring will result in a bad AI.

When does JPortal score

Two part Score

The score of a game situation is always generated twofold: the current situation of the game is scored from two points of view.

  1. The player's (that is, the AI's)

  2. The opponent's

The score of the game is then:
SCORE = playerScore - opponentScore;

Obviously a positive number is a good score for the player, and a negative number is a bad one. Both of these scores must be calculated with the same algorithm to ensure they are comparable.

SubScores

The implementation of the scoring can be found in the package "csa.jportal.ai.enhancedAI.weighting". The class "Weighting" is used to interface all scoring methods. The scoring itself is divided into configurable subScorings. This might sound complicated but is rather simple.

I implemented different scoring "elements" for different aspects of the game. There are implementations to score the library, the battlefield, the hand, the lands, the graveyard, and the player's health.

There is a scoring class for each of the above; all of them implement the interface "Scorable":

    public interface Scorable
    {
        public String getName();
        public int getScore();
        public int computeScore();
        public void setData(VirtualMatch vMatch, int playerNo);
    }

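To illustrate the pattern, here is what a minimal Scorable implementation for the library subscore could look like. Only the interface itself is from JPortal; the getLibrarySize() accessor on VirtualMatch is an assumption made for this example:

    // Hypothetical example implementation of one subscore.
    // getLibrarySize() is an assumed accessor, not necessarily the real VirtualMatch API.
    public class LibraryScore implements Scorable
    {
        private VirtualMatch vMatch;
        private int playerNo;
        private int score;

        public String getName() { return "SCORE LIBRARY"; }
        public int getScore()   { return score; }

        public void setData(VirtualMatch vMatch, int playerNo)
        {
            this.vMatch = vMatch;
            this.playerNo = playerNo;
        }

        public int computeScore()
        {
            // libraryScore (see below): score = lib.size()
            score = vMatch.getLibrarySize(playerNo); // assumed accessor
            return score;
        }
    }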

Each of these subscores can also have a weight; as of now the weighting is:
    public int[] scoreWeighting =
    {
        // SCORE LIBRARY
        1,
        // SCORE BATTLEFIELD
        3,
        // SCORE HAND
        2,
        // SCORE LAND
        2,
        // SCORE GRAVE
        1,
        // SCORE HEALTH
        4,
    };

The weighted sum of all subScores is the final score.
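
Put together, the aggregation for one player's point of view could look roughly like this. This is only a sketch of the idea, not the actual Weighting class, and it assumes that a weight simply multiplies its subscore:

    // Sketch: weighted sum of all subscores for one player's point of view.
    // Assumes subScores[i] belongs to scoreWeighting[i].
    public class WeightingSketch
    {
        public static int totalScore(Scorable[] subScores, int[] scoreWeighting,
                                     VirtualMatch vMatch, int playerNo)
        {
            int total = 0;
            for (int i = 0; i < subScores.length; i++)
            {
                subScores[i].setData(vMatch, playerNo);
                total += scoreWeighting[i] * subScores[i].computeScore();
            }
            return total;
        }
    }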

The following calculations are currently implemented:

  1. battlefield
    score = sum(AllCreatures: Power, Toughness, Number of abilities) + sum(NonCreatures: ManaCost)

  2. graveyard
    score = deckSize - (graveSize-roundsplayed) - (sum(AllCreatures: Power, Toughness, Number of abilities)/2)

  3. handScore
    landHand = if (landsSize-7 <0) if (round <8)

  4. healthScore (see the sketch after this list)
    if (health >= 40) score = 80 + (health-40);
    else if(health >= 10) score = health*2;
    else if(health < 10) score = 20 - 4*(10-health);
    if (health < 0) score -= 10000;

  5. libraryScore
    score = lib.size();

  6. landScore
    score = (colorsHave * 10) /colorsNeeded;
    if (lands.size() < 10) score += lands.size();
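
As an example, the healthScore rule above translates almost literally into code. This is just the piecewise function on its own, without the Scorable plumbing around it:

    // healthScore as a plain function of the player's current health.
    public class HealthScoreSketch
    {
        public static int healthScore(int health)
        {
            int score;
            if (health >= 40)
                score = 80 + (health - 40);
            else if (health >= 10)
                score = health * 2;
            else
                score = 20 - 4 * (10 - health);  // health < 10
            if (health < 0)
                score -= 10000;                  // heavy penalty once health drops below zero
            return score;
        }
    }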

Configurable

The above is the "hardwired" scoring, which is the default scoring (and which might change if I see it needs tweaking). This is the internal scoring. You can also configure an EnhancedAI to use "external" scoring, meaning scoring scripts that are interpreted at runtime. This is quite a bit slower, but it is as configurable as you want.

The scoring can be configured in the Configure AI window using the following scheme (a hypothetical formula example follows the list):

  1. you can build a scoring formula; the formula must fill the variable "score" with an integer number (you have access to a "VirtualMatch", which gives you all information about the game situation to be scored)

  2. the formula must have a name and a weighting

  3. you can build a "Formular Collection" with any number of the formulas you have created

  4. you can assign that formula collection to an AI, and the AI will use it as its weighting. (I also implemented the above "internal scoring" as an external scoring, as an example.)
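
To give an idea of what such an external formula might contain, here is a tiny hypothetical example based on the library subscore. The actual script syntax and the getLibrarySize() accessor are guesses for illustration; only the requirement that the script fills the variable "score" comes from the design above:

    // Hypothetical external scoring formula, e.g. name "Library", weight 1.
    // getLibrarySize() is an assumed VirtualMatch accessor.
    score = vMatch.getLibrarySize(playerNo);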