An evaluation function, also known as a heuristic evaluation function or static evaluation function, is a function used by game-playing computer programs to estimate the value or goodness of a position (usually at a leaf or terminal node) in a game tree.[1] Most of the time, the value is either a real number or a quantized integer, often in nths of the value of a playing piece such as a stone in go or a pawn in chess, where n may be tenths, hundredths or another convenient fraction; sometimes, however, the value is an array of three values in the unit interval, representing the win, draw, and loss percentages of the position.

There do not exist analytical or theoretical models for evaluation functions for unsolved games, nor are such functions entirely ad hoc. The composition of evaluation functions is determined empirically by inserting a candidate function into an automaton and evaluating its subsequent performance. For several games, such as chess, shogi and go, a significant body of evidence now exists as to the general composition of effective evaluation functions.

Games in which game-playing computer programs employ evaluation functions include chess,[2] go,[2] shogi (Japanese chess),[2] othello, hex, backgammon,[3] and checkers.[4][5] In addition, with the advent of programs such as MuZero, computer programs also use evaluation functions to play video games, such as those from the Atari 2600.[6] Some games, like tic-tac-toe, are strongly solved and do not require search or evaluation because a discrete solution tree is available.


A tree of such evaluations is usually part of a search algorithm, such as Monte Carlo tree search or a minimax algorithm like alpha–beta search. The value is presumed to represent the relative probability of winning if the game tree were expanded from that node to the end of the game. The function looks only at the current position (i.e. what spaces the pieces are on and their relationship to each other) and does not take into account the history of the position or explore possible moves forward of the node (hence the term static). This implies that for dynamic positions where tactical threats exist, the evaluation function will not be an accurate assessment of the position. These positions are termed non-quiescent; they require at least a limited kind of search extension called quiescence search to resolve threats before evaluation. Some values returned by evaluation functions are absolute rather than heuristic, if a win, loss or draw occurs at the node.
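The division of labor between search and static evaluation can be sketched as follows. The game interface (`moves`, `make`, `evaluate`) and the toy game are illustrative assumptions; a real engine would add alpha–beta pruning and quiescence search on top of this skeleton:

```python
# Minimal negamax search over a toy game: the static evaluation function
# is called only at leaf nodes (depth 0 or no legal moves), and interior
# values are backed up by maximizing over the negated child values.

def negamax(position, depth, moves, make, evaluate):
    """Value of `position` from the side to move's point of view."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)      # static evaluation at the leaf
    return max(-negamax(make(position, m), depth - 1, moves, make, evaluate)
               for m in legal)

# Toy game: a position is an integer, each move adds or subtracts 1, and
# the "evaluation" is the integer itself from the mover's point of view.
moves = lambda p: [1, -1]
make = lambda p, m: p + m
evaluate = lambda p: p
print(negamax(0, 2, moves, make, evaluate))  # 0
```

Note that the evaluation is defined from the side to move's viewpoint, which is why each child value is negated as it is backed up the tree.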

There is an intricate relationship between search and knowledge in the evaluation function. Deeper search favors less near-term tactical factors and more subtle long-horizon positional motifs in the evaluation. There is also a trade-off between efficacy of encoded knowledge and computational complexity: computing detailed knowledge may take so much time that performance decreases, so approximations to exact knowledge are often better. Because the evaluation function depends on the nominal depth of search as well as the extensions and reductions employed in the search, there is no generic or stand-alone formulation for an evaluation function. An evaluation function which works well in one application will usually need to be substantially re-tuned or re-trained to work effectively in another application.

In chess


In computer chess, the output of an evaluation function is typically an integer, and the units of the evaluation function are typically referred to as pawns. The term 'pawn' refers to the value when the player has one more pawn than the opponent in a position, as explained in Chess piece relative value. The integer 1 usually represents some fraction of a pawn; commonly used in computer chess are centipawns, which are a hundredth of a pawn. Larger evaluations indicate a material imbalance or positional advantage, or that a win of material is imminent. Very large evaluations may indicate that checkmate is imminent. An evaluation function also implicitly encodes the value of the right to move, which can vary from a small fraction of a pawn to a win or loss.
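The centipawn convention can be illustrated with a small helper of the kind chess interfaces use to display integer scores in pawn units; the exact display format here is an assumption, not a standard:

```python
def format_centipawns(cp: int) -> str:
    """Format an integer centipawn score the way chess GUIs commonly
    display it: in pawn units, signed, with two decimal places."""
    return f"{cp / 100:+.2f}"

# 150 centipawns is an advantage of one and a half pawns.
print(format_centipawns(150))   # +1.50
print(format_centipawns(-25))   # -0.25
```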

Handcrafted evaluation functions


Historically in computer chess, the terms of an evaluation function are constructed (i.e. handcrafted) by the engine developer, as opposed to discovered through training neural networks. The general approach is to construct the function as a linear combination of various weighted terms determined to influence the value of a position. However, not all terms in a handcrafted evaluation function are linear: some, such as king safety and pawn structure, are nonlinear. Each term may be considered to be composed of first-order factors (those that depend only on the space and any piece on it), second-order factors (the space in relation to other spaces), and nth-order factors (dependencies on the history of the position).

A handcrafted evaluation function typically consists of a material balance term that usually dominates the evaluation. The conventional values used for material are queen = 9, rook = 5, knight or bishop = 3, and pawn = 1; the king is assigned an arbitrarily large value, usually larger than the total value of all the other pieces.[1] In addition, it typically has a set of positional terms usually totaling no more than the value of a pawn, though in some positions the positional terms can grow much larger, such as when checkmate is imminent. Handcrafted evaluation functions typically contain dozens to hundreds of individual terms.
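A minimal sketch of the dominant material-balance term, using the conventional values just listed; the piece-count representation of a position is a simplifying assumption:

```python
# Material balance in pawn units using the conventional piece values
# (Q = 9, R = 5, B = N = 3, P = 1). The king is omitted since both sides
# always have exactly one.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(white_counts, black_counts):
    """White's material minus black's, in pawns."""
    return sum(v * (white_counts.get(p, 0) - black_counts.get(p, 0))
               for p, v in PIECE_VALUES.items())

# White is up "the exchange" (rook for knight): 5 - 3 = +2 pawns.
white = {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1}
black = {"P": 8, "N": 3, "B": 2, "R": 1, "Q": 1}
print(material_balance(white, black))  # 2
```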

In practice, effective handcrafted evaluation functions are not created by expanding the list of evaluated parameters, but by carefully tuning or training the weights of a modest set of parameters, such as those described above, relative to each other. Toward this end, positions from various databases are employed, such as from master games, engine games, Lichess games, or even from self-play, as in reinforcement learning.

Example


An example handcrafted evaluation function for chess might look like the following:

  • c1 · material + c2 · mobility + c3 · king safety + c4 · center control + c5 · pawn structure + c6 · king tropism + ...

Each of the terms is a weight multiplied by a difference factor: the value of white's material or positional terms minus black's.

  • The material term is obtained by assigning a value in pawn-units to each of the pieces.
  • Mobility is the number of legal moves available to a player, or alternatively the sum of the number of spaces attacked or defended by each piece, including spaces occupied by friendly or opposing pieces. Effective mobility, or the number of "safe" spaces a piece may move to, may also be taken into account.
  • King safety is a set of bonuses and penalties assessed for the location of the king and the configuration of pawns and pieces adjacent to or in front of the king, and opposing pieces bearing on spaces around the king.
  • Center control is derived from how many pawns and pieces occupy or bear on the four center spaces and sometimes the 12 spaces of the extended center.
  • Pawn structure is a set of penalties and bonuses for various strengths and weaknesses in pawn structure, such as penalties for doubled and isolated pawns.
  • King tropism is a bonus for closeness (or penalty for distance) of certain pieces, especially queens and knights, to the opposing king.
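The weighted sum above can be sketched as follows; the factor functions and weights are hypothetical placeholders for the kinds of terms described in the list:

```python
# Sketch of a handcrafted evaluation as a weighted linear combination.
# Each factor function returns (white's value - black's value); the
# weights would be tuned empirically against game databases in practice.

def evaluate(position, factors, weights):
    """Linear-combination evaluation: sum of weight * (white - black)."""
    return sum(w * f(position) for f, w in zip(factors, weights))

# Toy position: the factor functions just read precomputed differences.
position = {"material": 1.0, "mobility": 4, "king_safety": -0.5}
factors = [lambda p: p["material"],
           lambda p: p["mobility"],
           lambda p: p["king_safety"]]
weights = [1.0, 0.1, 0.5]    # e.g. a tenth of a pawn per extra move
print(round(evaluate(position, factors, weights), 2))  # 1.15
```

Keeping the combination linear makes the weights easy to tune independently, which is one reason this form dominated handcrafted evaluation for decades.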

Neural networks


While neural networks have been used in the evaluation functions of chess engines since the late 1980s,[7][8] they did not become popular in computer chess until the late 2010s, as the hardware needed to train neural networks was not strong enough at the time, and fast training algorithms, network topologies and architectures had not yet been developed. Initially, neural-network-based evaluation functions generally consisted of one neural network for the entire evaluation function, with input features selected from the board and an output that is an integer, normalized to the centipawn scale so that a value of 100 is roughly equivalent to a material advantage of a pawn. The parameters in neural networks are typically trained using reinforcement learning or supervised learning. More recently, evaluation functions in computer chess have started to use multiple neural networks, with each neural network trained for a specific part of the evaluation, such as pawn structure or endgames. This allows for hybrid approaches where an evaluation function consists of both neural networks and handcrafted terms.

Deep neural networks have been used, albeit infrequently, in computer chess after Matthew Lai's Giraffe[9] in 2015 and DeepMind's AlphaZero in 2017 demonstrated the feasibility of deep neural networks in evaluation functions. The distributed computing project Leela Chess Zero was started shortly after to attempt to replicate the results of DeepMind's AlphaZero paper. Apart from the size of the networks, the neural networks used in AlphaZero and Leela Chess Zero also differ from those used in traditional chess engines in that they have two outputs, one for evaluation (the value head) and one for move ordering (the policy head), rather than only one output for evaluation.[10] In addition, while it is possible to set the output of the value head of Leela's neural network to a real number to approximate the centipawn scale used in traditional chess engines, by default the output is the win-draw-loss percentages, a vector of three values each from the unit interval.[10] Since deep neural networks are very large, engines using them in their evaluation function usually require a graphics processing unit in order to calculate the evaluation function efficiently.
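The relationship between a win/draw/loss vector and a single scalar score can be illustrated as follows. The expected-score definition is standard, but the logistic centipawn mapping and its scale constant are illustrative assumptions, not Leela's actual conversion formula:

```python
import math

def expected_score(w, d, l):
    """Expected game score from (win, draw, loss) probabilities:
    a win counts 1, a draw 0.5, a loss 0."""
    return w + 0.5 * d

def score_to_centipawns(score, scale=400):
    """Map an expected score in (0, 1) to a centipawn-like value via an
    inverse logistic curve; `scale` is an illustrative constant."""
    return scale * math.log10(score / (1 - score))

wdl = (0.55, 0.30, 0.15)              # win/draw/loss vector
s = expected_score(*wdl)              # 0.70
print(round(score_to_centipawns(s)))  # 147
```

The WDL form carries strictly more information than the scalar: two positions with the same expected score can have very different drawishness, which matters for practical play.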

Piece-square tables


An important technique in evaluation since at least the early 1990s is the use of piece-square tables (also called piece-value tables).[11][12] Each table is a set of 64 values corresponding to the squares of the chessboard. The most basic implementation of piece-square tables consists of a separate table for each type of piece per player, which in chess results in 12 piece-square tables in total. More complex variants are used in computer chess, one of the most prominent being the king-piece-square table, used in Stockfish, Komodo Dragon, Ethereal, and many other engines, where each table considers the position of every type of piece in relation to the player's king, rather than the position of every type of piece alone. The values in the tables are bonuses/penalties for the location of each piece on each space, and encode a composite of many subtle factors difficult to quantify analytically. In handcrafted evaluation functions, there are sometimes two sets of tables: one for the opening/middlegame and one for the endgame; positions of the middlegame are interpolated between the two.[13]
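A basic piece-square table lookup might be sketched as follows; the knight table values are illustrative (centralized knights score higher than rim knights), not those of any particular engine:

```python
# A piece-square table: 64 bonuses/penalties in centipawns, indexed by
# square. A full implementation would hold one such table per piece type
# per side (12 tables in chess).

KNIGHT_PST = [
    -50, -40, -30, -30, -30, -30, -40, -50,
    -40, -20,   0,   5,   5,   0, -20, -40,
    -30,   0,  10,  15,  15,  10,   0, -30,
    -30,   5,  15,  20,  20,  15,   5, -30,
    -30,   5,  15,  20,  20,  15,   5, -30,
    -30,   0,  10,  15,  15,  10,   0, -30,
    -40, -20,   0,   5,   5,   0, -20, -40,
    -50, -40, -30, -30, -30, -30, -40, -50,
]

def square_index(file, rank):
    """0-based square index from 0-based file (a=0) and rank (1st=0)."""
    return rank * 8 + file

# A knight on e4 (file 4, rank 3) gets a +20 centipawn bonus, while a
# knight on a1 (0, 0) is penalized -50 ("a knight on the rim is dim").
print(KNIGHT_PST[square_index(4, 3)])  # 20
print(KNIGHT_PST[square_index(0, 0)])  # -50
```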

Originally developed in computer shogi in 2018 by Yu Nasu,[14][15] the most common evaluation function used in computer chess today[citation needed] is the efficiently updatable neural network, or NNUE for short, a sparse and shallow neural network that has only piece-square tables as the inputs into the neural network.[16] In fact, the most basic NNUE architecture is simply the 12 piece-square tables described above: a neural network with only one layer and no activation functions. An efficiently updatable neural network architecture, using king-piece-square tables as its inputs, was first ported to chess in a Stockfish derivative called Stockfish NNUE, publicly released on May 30, 2020,[17] and was adopted by many other engines before eventually being incorporated into the official Stockfish engine on August 6, 2020.[18][19]
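The "efficiently updatable" idea can be sketched in its most basic single-layer form: the running sum of piece-square values is adjusted when a piece moves, rather than being recomputed over the whole board. The toy table and board representation below are assumptions:

```python
# In the most basic (single-layer, no-activation) NNUE-style setup, the
# evaluation is a sum of piece-square values. Making a move updates the
# running sum by subtracting the moved piece's old entry and adding its
# new one: O(1) work instead of re-summing over every piece.

PST = {"N": [sq % 8 for sq in range(64)]}   # toy table: bonus = file index

def full_evaluate(pieces, pst):
    """Recompute the evaluation from scratch: sum over all pieces."""
    return sum(pst[kind][sq] for kind, sq in pieces)

def update(value, kind, from_sq, to_sq, pst):
    """Incrementally adjust the evaluation after moving one piece."""
    return value - pst[kind][from_sq] + pst[kind][to_sq]

pieces = [("N", 1), ("N", 6)]       # knights on b1 and g1
v = full_evaluate(pieces, PST)      # 1 + 6 = 7
v = update(v, "N", 1, 18, PST)      # Nb1-c3: square 18 is on file c (2)
print(v)                            # 7 - 1 + 2 = 8

# The incremental result matches a from-scratch recomputation.
assert v == full_evaluate([("N", 18), ("N", 6)], PST)
```

Real NNUE networks apply the same trick to a much wider first layer (the accumulator) of a king-piece-square input encoding, followed by a few small dense layers.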

Endgame tablebases


Chess engines frequently use endgame tablebases in their evaluation function, as tablebases contain exact game-theoretic values and thus allow the engine to play perfectly in positions with few enough pieces.

In Go


Historically, evaluation functions in computer Go took into account territory controlled, the influence of stones, the number of prisoners, and the life and death of groups on the board. However, modern Go-playing computer programs, such as AlphaGo, Leela Zero, Fine Art, and KataGo, largely use deep neural networks in their evaluation functions, and output a win/draw/loss percentage rather than a value in number of stones.

References

  1. ^ a b Shannon, Claude (1950). Programming a Computer for Playing Chess (PDF). Philosophical Magazine. Ser. 7, vol. 41. Retrieved 12 December 2021.
  2. ^ a b c Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (7 December 2018). "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play". Science. 362 (6419): 1140–1144. Bibcode:2018Sci...362.1140S. doi:10.1126/science.aar6404. PMID 30523106.
  3. ^ Tesauro, Gerald (March 1995). "Temporal Difference Learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68. doi:10.1145/203330.203343. S2CID 8763243. Retrieved 1 November 2013.
  4. ^ Schaeffer, J.; Burch, N.; Björnsson, Y.; Kishimoto, A.; Müller, M.; Lake, R.; Lu, P.; Sutphen, S. (2007). "Checkers is Solved" (PDF). Science. 317 (5844): 1518–22. doi:10.1126/science.1144079. PMID 17641166. S2CID 10274228.
  5. ^ Schaeffer, J.; Björnsson, Y.; Burch, N.; Kishimoto, A.; Müller, M.; Lake, R.; Lu, P.; Sutphen, S. "Solving Checkers" (PDF). Proceedings of the 2005 International Joint Conferences on Artificial Intelligence Organization.
  6. ^ Schrittwieser, Julian; Antonoglou, Ioannis; Hubert, Thomas; Simonyan, Karen; Sifre, Laurent; Schmitt, Simon; Guez, Arthur; Lockhart, Edward; Hassabis, Demis; Graepel, Thore; Lillicrap, Timothy (2020). "Mastering Atari, Go, chess and shogi by planning with a learned model". Nature. 588 (7839): 604–609. arXiv:1911.08265. Bibcode:2020Natur.588..604S. doi:10.1038/s41586-020-03051-4. PMID 33361790. S2CID 208158225.
  7. ^ Thrun, Sebastian (1995). Learning to Play the Game of Chess (PDF). MIT Press. Retrieved 12 December 2021.
  8. ^ Levinson, Robert (1989). A Self-Learning, Pattern-Oriented Chess Program. Vol. 12. ICCA Journal.
  9. ^ Lai, Matthew (4 September 2015). Giraffe: Using Deep Reinforcement Learning to Play Chess. arXiv:1509.01549v1.
  10. ^ a b "Neural network topology". lczero.org. Retrieved 12 December 2021.
  11. ^ Beal, Don; Smith, Martin C. Learning Piece-Square Values using Temporal Differences. Vol. 22. ICCA Journal.
  12. ^ Jun Nagashima; Masahumi Taketoshi; Yoichiro Kajihara; Tsuyoshi Hashimoto; Hiroyuki Iida (2002). An Efficient Use of Piece-Square Tables in Computer Shogi. Information Processing Society of Japan.
  13. ^ Stockfish Evaluation Guide. Retrieved 12 December 2021.
  14. ^ Yu Nasu (28 April 2018). "Efficiently Updatable Neural-Network-based Evaluation Function for computer Shogi" (PDF) (in Japanese).
  15. ^ Yu Nasu (28 April 2018). "Efficiently Updatable Neural-Network-based Evaluation Function for computer Shogi (Unofficial English Translation)" (PDF). GitHub.
  16. ^ Gary Linscott (30 April 2021). "NNUE". GitHub. Retrieved 12 December 2020.
  17. ^ Noda, Hisayori (30 May 2020). "Release stockfish-nnue-2020-05-30". GitHub. Retrieved 12 December 2021.
  18. ^ "Introducing NNUE Evaluation". 6 August 2020.
  19. ^ Joost VandeVondele (25 July 2020). "official-stockfish / Stockfish, NNUE merge". GitHub.
  • Slate, D and Atkin, L., 1983, "Chess 4.5, the Northwestern University Chess Program" in Chess Skill in Man and Machine 2nd Ed., pp. 93–100. Springer-Verlag, New York, NY.
  • Ebeling, Carl, 1987, All the Right Moves: A VLSI Architecture for Chess (ACM Distinguished Dissertation), pp. 56–86. MIT Press, Cambridge, MA