Before answering how to stop the evaluation phase and begin exploiting its results, one must first answer when to stop it: the point at which the balance the project stakeholder wants between quality and cost is struck. You won't always find that in books discussing pure research, so your question is an excellent one.
The algorithm the authors discuss on that page (101) is based on the policy improvement theorem on page 78. The endless loop introduced by the pseudo-code line "Repeat forever (for each episode)" is worse than useless in a data center if it is never terminated, unless the system is multi-agent, exploits multiple threads, processes, virtual hosts, hardware accelerators, cores, or hosts, and the improvements are accessed for exploitation independently or symbiotically under some scheme.
In a deployed robot, an endless loop often has a legitimate use case. "Repeat until shutdown" might be appropriate in a production algorithm or hardware embodiment if the robot's goal is, "Keep the living room clean." One must always place this theory in context when taking pure research and considering the applied research that may stem from it.
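As a minimal sketch of the distinction, the book's unbounded loop can be given either a data-center-style episode budget or a robot-style shutdown condition. The `evaluate`, `improve`, and `shutdown_requested` callables here are hypothetical placeholders, not anything from the book:

```python
def policy_iteration(evaluate, improve, shutdown_requested, max_episodes=10_000):
    """Sketch: the 'Repeat forever (for each episode)' loop with explicit
    termination. evaluate/improve/shutdown_requested are assumed callables."""
    for episode in range(max_episodes):   # bounded budget for a data center
        if shutdown_requested():          # 'Repeat until shutdown' for a robot
            break
        evaluate(episode)                 # policy evaluation for this episode
        improve(episode)                  # policy improvement step
    return episode                        # episodes actually run before stopping
```

Which of the two exits dominates is a product decision, not an algorithmic one.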
In real product and service development environments, how the balance is struck between quality of action and the cost of determining it depends upon the problem size, the expectations of the user or business, and the architecture of the computational resources you have. Consider some of these factors in more detail:
- Maximum number of rounds of evaluation-exploitation cycles
- Requirements for precision in terms of optimality
- Requirements for reliability in terms of completion
- Distribution of the number of branches from nodes
- Distribution of lengths of possible action traversal sequences to the goal
- Average cost (in time and energy) of each evaluation
- Average cost (in time and energy) of each exploitation
In a single-thread, single-core, von Neumann architecture, as is sometimes the case in an embedded environment, evaluation and exploitation are time sliced. In such a case, evaluation should stop and exploitation should begin when the expected return from further evaluation drops below its expected cost, based on some estimation of both. This is a function of the above factors, although not a linear one.
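The time-sliced stopping rule above reduces to a one-line comparison once return and cost are expressed in common units. The conversion factor `value_per_unit` is an assumption for illustration; the text only says "some estimation of return and cost":

```python
def should_continue(expected_improvement, marginal_cost, value_per_unit=1.0):
    """Sketch of the stopping rule: keep evaluating only while the estimated
    value of further improvement exceeds the marginal cost of obtaining it.
    value_per_unit (hypothetical) converts improvement into cost units."""
    return expected_improvement * value_per_unit > marginal_cost
```

Everything hard lives in estimating `expected_improvement`, which is where the factors listed above enter.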
We have considered training an LSTM network to determine the function in the epoch domain (roughly related to the time domain), although it is low on our priority list.
In an embedded process, a function that approximates the return on further evaluation cost can be constructed, based on statistics gathered up to that point in the current learning run or over a longer period of operations. The function should be fast and inexpensive. In each cycle within the evaluation phase, the function can be evaluated and compared against a configurable probability threshold. Its configured value can be an educated guess based on the perceived value of further path exploration.
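One cheap way to build such an approximator, sketched here with hypothetical names, is to track the fraction of recent evaluation cycles that actually improved the policy and compare it against the configured threshold:

```python
from collections import deque

class EvaluationStopper:
    """Sketch of a fast, inexpensive stop predicate for the evaluation phase.
    threshold and window are the configurable educated guesses from the text."""

    def __init__(self, threshold=0.05, window=50):
        self.threshold = threshold          # minimum acceptable improvement rate
        self.recent = deque(maxlen=window)  # per-cycle improvements, bounded memory

    def record(self, improvement):
        """Call once per evaluation cycle with the measured improvement."""
        self.recent.append(improvement)

    def should_stop(self):
        """True when enough statistics exist and the estimated probability of
        further improvement has fallen below the threshold."""
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data to judge yet
        p_improve = sum(1 for d in self.recent if d > 0) / len(self.recent)
        return p_improve < self.threshold
```

A `deque` with `maxlen` keeps both the memory footprint and the per-cycle cost constant, which matters in an embedded process.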
In a simulation environment, where more computing resources allow parallel processes, exploiting OS or hardware facilities for parallelism, the time slicing is either opaque or nonexistent, respectively. In those cases, continuous improvement may be unbounded if the state-action graph is not finite.