Fitness Value Optimization for Disc Set in Board Game through Evolutionary Learning

Evolution in Networks and Computer Communications
© 2011 by IJCA Journal
Number 3 - Article 1
Year of Publication: 2011
Dharm Singh
Chirag S Thaker
Sanjay M Shah


The field of Artificial Intelligence research has long incorporated a series of novel and distinct approaches, including neural networks, fuzzy logic, and genetic algorithms, and applied them to a variety of problem-solving domains. Machine learning techniques such as evolutionary learning, neural networks, and reinforcement learning are difficult to apply to board games on their own, because they require an extremely large number of computations, a number that tends to grow exponentially with the search depth needed to find better moves. Many board game researchers find that a machine learning approach based on evolutionary learning, combined with an optimization method such as a genetic algorithm, gives better results for building robust, artificially intelligent game-playing programs. In board games, the squares of the board play a vital role: the game's positional topography is explored to assign each square a relative weight according to its position. In game-playing programs, these weight assignments are derived from quality search, acquaintance with the rules, and game-playing experience. When the move search reaches the end of the game tree, the optimized evaluation function values obtained are used to assess the "goodness" of a board position. This paper takes the game of Reversi as its object game and exploits the board's symmetry to develop an evolutionary game-playing program, studying the impact of evolution on the weight values for particular disc sets across the weight-value landscape. The results collected for these disc sets confirm the efficacy of the genetic algorithm as an evolutionary optimization instrument.

The first two sections introduce the game and its search space. The next section discusses the history of game-program development and the phases of game play. Sections five and six cover the implementation of the game of Reversi and the collected results, respectively. The last two sections present the conclusion and references.
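To make the approach concrete, the sketch below illustrates the two ideas the abstract combines: exploiting the Reversi board's 8-fold symmetry so that only ten distinct square weights need to be evolved (one per square class in a single octant of the 8x8 board), and using those weights in a linear evaluation function. This is a hypothetical illustration, not the authors' implementation; in particular, where the paper derives fitness from game play, this standalone sketch substitutes a toy fitness (distance to a hand-tuned target profile) so it can run on its own. All names (`expand`, `evolve`, `OCTANT`, `TARGET`) are invented for this example.

```python
import random

SIZE = 8
# Index of each quadrant cell (row <= col) into the 10-gene chromosome;
# the full board is recovered from one octant by the board's symmetry.
OCTANT = [[0, 1, 2, 3],
          [1, 4, 5, 6],
          [2, 5, 7, 8],
          [3, 6, 8, 9]]

def expand(genes):
    """Expand 10 octant weights into a full 8x8 weight table."""
    board = [[0.0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            qr = r if r < 4 else SIZE - 1 - r   # fold vertically
            qc = c if c < 4 else SIZE - 1 - c   # fold horizontally
            if qr > qc:
                qr, qc = qc, qr                 # fold across the diagonal
            board[r][c] = genes[OCTANT[qr][qc]]
    return board

def evaluate(position, weights):
    """Evaluation function: weighted disc sum (+1 our disc, -1 opponent's)."""
    return sum(weights[r][c] * position[r][c]
               for r in range(SIZE) for c in range(SIZE))

# Toy stand-in fitness: closeness of the evolved octant to a classic
# hand-tuned profile (corners valuable, corner-adjacent squares penalized).
# In the paper, fitness would instead come from played games.
TARGET = [100, -20, 10, 5, -50, -2, -2, 1, 1, 1]

def fitness(genes):
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def evolve(pop_size=40, generations=200, seed=1):
    """Minimal generational GA: tournament selection, uniform crossover,
    Gaussian mutation, over 10-gene real-valued chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-100, 100) for _ in range(10)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = [rng.choice(pair) + rng.gauss(0, 1.0)  # crossover + mutate
                     for pair in zip(p1, p2)]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

Because every square's weight is looked up through the octant index, symmetric squares (e.g., the four corners) always share a single evolved gene, which shrinks the weight-value landscape the genetic algorithm must search.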


  1. S. Schiffel and M. Thielscher. A multiagent semantics for the game description language. In Proc. of the Int'l Conf. on Agents and Artificial Intelligence, Porto, 2009. Springer LNCS.
  2. T. Srinivasan, P. J. S. Srikanth, K. Praveen, and L. Harish Subramaniam. "AI Game Playing Approach for Fast Processor Allocation in Hypercube Systems using Veitch diagram (AIPA)". IADIS International Conference on Applied Computing 2005, vol. 1, Feb. 2005, pp. 65-72.
  3. Thomas P. Runarsson and Simon M. Lucas. Co-evolution versus self-play temporal difference learning for acquiring position evaluation in small-board Go. IEEE Transactions on Evolutionary Computation, 9:628-640, 2005.
  4. Yannakakis, G., Levine, J., and Hallam, J. (2004). An evolutionary approach for interactive computer games. In Evolutionary Computation, 2004. CEC2004. Congress on Evolutionary Computation, volume 1, pages 986-993, Piscataway, NJ. IEEE.
  5. A. Hauptman and M. Sipper. Evolution of an efficient search algorithm for the Mate-in-N problem in chess. In Proceedings of the 2007 European Conference on Genetic Programming, pages 78-89. Springer, Valencia, Spain, 2007.
  6. P. Aksenov. Genetic algorithms for optimising chess position scoring. Master's Thesis, University of Joensuu, Finland, 2004.
  7. Y. Bjornsson and T. A. Marsland. Multi-cut alpha-beta pruning in game-tree search. Theoretical Computer Science, 252(1-2):177-196, 2001.
  8. O. David-Tabibi, A. Felner, and N. S. Netanyahu. Blockage detection in pawn endings. In Computers and Games CG 2004, eds. H. J. van den Herik, Y. Bjornsson, and N. S. Netanyahu, pages 187-201. Springer-Verlag, 2006.
  9. Rosenbloom, P. (1982). A world championship level Othello program. Artificial Intelligence, 19:279-320.
  10. Lee, K.-F., and Mahajan, S. (1990). The development of a world class Othello program. Artificial Intelligence, 43:21-36.
  11. Hong, J.-H. and Cho, S.-B. (2004). Evolution of emergent behaviors for shooting game characters in Robocode. In Evolutionary Computation, 2004. CEC2004. Congress on Evolutionary Computation, volume 1, pages 634-638, Piscataway, NJ. IEEE.
  12. Billman, D., and Shaman, D. (1990). Strategy knowledge and strategy change in skilled performance: A study of the game Othello. American Journal of Psychology, 103:145-166.
  13. Matt Gilgenbach. Fun game AI design for beginners. In Steve Rabin, editor, AI Game Programming Wisdom 3, 2006.
  14. J. Clune. Heuristic evaluation functions for general game playing. In Proc. of AAAI, pages 1134-1139, 2007.
  15. Holland, J. H. (1975). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence. Ann Arbor, MI: University of Michigan Press.
  16. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley.
  17. R. M. Axelrod. The Evolution of Cooperation. Basic Books, New York, 1984.
  18. Jörg Denzinger, Kevin Loose, Darryl Gates, and John Buchanan. Dealing with parameterized actions in behavior testing of commercial computer games. In Proceedings of the IEEE 2005 Symposium on Computational Intelligence and Games (CIG), pages 37-43, 2005.
  19. D. Fogel. "Using evolutionary programming to create networks that are capable of playing tic-tac-toe". In Proceedings of the IEEE International Conference on Neural Networks, San Francisco: IEEE, pp. 875-880, 1993.
  20. M. Muller. "Computer Go". Artificial Intelligence, vol. 134, pp. 145-179, 2002.
  21. Dharm Singh, Chirag S. Thaker, and Sanjay M. Shah. Quality of state improvisation through evaluation function optimization in genetic application learning. IEEE Xplore identifier 10.1109/ETNCC.2011.5958494, pp. 93-97.