Short version below.

When implementing a minimax algorithm, the goal is usually to find the best possible position on the game board for the player called max after some number of moves. In some games, like tic-tac-toe, the game tree (a graph of all legal moves) is small enough that minimax can search it exhaustively. More complex games like chess have a game tree far too large to search exhaustively.

A simple version of minimax just walks the game tree, evaluating every legal move in the current position before going deeper and evaluating the possible replies to those moves. To find an optimal winning move, minimax only needs to search until a winning state has been found. If implemented as a breadth-first search, it will find the win that takes the fewest moves.
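As a minimal sketch of that idea: iterative deepening explores the game tree one depth at a time, like a breadth-first search, so the first depth at which max has a forced win is the length of the shortest forced win. This is illustrated on a tiny Nim variant (take 1 or 2 sticks; whoever takes the last stick wins); the game and all function names here are illustrative, not from the original answer.

```python
def can_force_win(sticks, max_to_move, depth):
    """Can max force a win within `depth` plies of this tiny Nim game?"""
    if sticks == 0:
        # The previous player took the last stick and won.
        return not max_to_move
    if depth == 0:
        return False  # ran out of search depth without a decided game
    moves = [m for m in (1, 2) if m <= sticks]
    if max_to_move:
        # Max needs just one move that forces a win.
        return any(can_force_win(sticks - m, False, depth - 1) for m in moves)
    # Min is assumed optimal: every min reply must still lose for min.
    return all(can_force_win(sticks - m, True, depth - 1) for m in moves)

def shortest_forced_win(sticks, depth_limit=20):
    """Smallest number of plies in which max can force a win, or None."""
    for d in range(1, depth_limit + 1):
        if can_force_win(sticks, True, d):
            return d
    return None
```

With 2 sticks max wins immediately (`shortest_forced_win(2)` is 1), with 4 sticks max needs 3 plies, and with 3 sticks max cannot force a win at all against optimal play.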

In the case where min has a forced win, a truly optimal move doesn't exist. If min is not an optimal player, "optimal" can instead be defined as the move most likely to cause min to make an error that lets max force a win. That move is not necessarily the one that delays the loss the longest.
As an example, consider a position in some game where max has two moves, A and B. Move A leads to a loss in 100 moves and move B to a loss in one move. Naively, move A is better, but suppose that in this game every legal continuation after move A leads to a loss, while move B leads to a position where min has hundreds of legal moves and only one of them wins. Albeit a bit extreme, this example shows that optimality is hard to define in a losing position. Put simply: is a very complex loss in 6 moves worse than an obvious loss in 20?

You did define a version of optimality, however, and it can be implemented. Since you are only considering optimal moves, an exhaustive search has to be performed, so there is no reason to score any positions other than wins, losses, and draws. The method I would use is to assign each terminal state a score whose magnitude is much larger than the maximum possible number of moves, e.g. a loss is -100,000, a win is 100,000, and a draw is 0. Then maintain a variable holding the depth of the search, i.e. the number of moves needed to reach the state, and add that depth to the base score. A loss in 20 moves then scores -99,980 and a draw in 15 moves scores 15.

A magnitude of 100,000 is excessive for most games; it just has to be large enough that a loss, a win, and a draw can never be confused with each other, since otherwise a draw in 100,001 moves would look better than a win in 1. Note that the depth should only be added for losses and draws: adding it for wins would give a win in 10 a score of 100,010 and a win in 20 a score of 100,020, making the slower win look better.
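The scoring rule above can be written down directly. The constants and the helper name below are illustrative, not from the original answer; the point is only that the depth is added for losses and draws and never for wins:

```python
WIN, LOSS, DRAW = 100_000, -100_000, 0

def score_terminal(outcome, depth):
    """Score a terminal position found `depth` moves into the search."""
    if outcome == "win":
        return WIN            # no depth added: a win in 20 moves must
                              # not outscore a win in 10
    if outcome == "loss":
        return LOSS + depth   # loss in 20 moves -> -99,980
    return DRAW + depth       # draw in 15 moves -> 15
```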

Short version:

- Use breadth-first search.
- For winning positions: terminate the minimax when a win is found.
- For losses and draws: search the whole game tree and give the position a score of 0 + MTP for draws and L + MTP for losses.

L is a large negative number (e.g. -100,000) and MTP is the number of moves needed to reach the position.
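Putting the pieces together, here is one possible sketch of the whole scheme as a minimax on tic-tac-toe. It searches depth-first rather than breadth-first, so wins are scored LARGE - depth as the depth-first analogue of stopping at the shallowest win; losses and draws get the depth added as described above. All names here are illustrative.

```python
LARGE = 100_000

def lines(board):
    """All rows, columns and diagonals of a 9-character board string."""
    rows = [board[i:i + 3] for i in (0, 3, 6)]
    cols = [board[i::3] for i in (0, 1, 2)]
    diags = [board[0::4], board[2:7:2]]
    return rows + cols + diags

def winner(board):
    for line in lines(board):
        if line[0] != "." and line.count(line[0]) == 3:
            return line[0]
    return None

def minimax(board, player, max_player, depth=0):
    """Best achievable score for max_player from this position."""
    w = winner(board)
    if w == max_player:
        return LARGE - depth      # quicker wins score higher
    if w is not None:
        return -LARGE + depth     # among losses, later is better
    if "." not in board:
        return depth              # among draws, later is better
    nxt = "O" if player == "X" else "X"
    scores = []
    for i, c in enumerate(board):
        if c == ".":
            child = board[:i] + player + board[i + 1:]
            scores.append(minimax(child, nxt, max_player, depth + 1))
    return max(scores) if player == max_player else min(scores)
```

From a position where X can win on the next move, this returns 99,999 (a win one move deep); from the empty board it returns 9, the score of a draw reached after all nine moves, matching the known result that optimal tic-tac-toe is a draw.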

Why couldn't you just add a couple if else statements with the logic and get where you needed to go? – hisairnessag3 – 2017-12-08T23:22:26.463