This may be an evolving answer, because the question is, in some sense, a (useful) rabbit hole. I apologize if I don't go deeply into meta-games per se, as it's a little outside of my scope, which is non-chance games of perfect information, but I think it's worthwhile to think about the underlying problem of indeterminacy in relation to games in general.
Bounded Rationality* is a useful concept because it presupposes a condition of computational intractability. Computational intractability can be introduced into games in at least two forms:
- Hidden Information
- Randomness ("quantum" indeterminacy)
[For more details on my use of "quantum" in regards to randomness, see Deterministic Games.]
The underlying purpose of game theory is to determine "optimal" strategies for any given problem. I put "optimal" in quotes because optimality is a spectrum, and becomes subjective under computational intractability.
Thus, we cannot know if AlphaGo plays optimally, only that it played more optimally than Lee Sedol in 4 out of 5 games.
This is distinct from strongly solved games such as tic-tac-toe, where we can know with total certainty that a choice is optimal, because the problem of tic-tac-toe is computationally tractable.
Part of the confusion may be semantic, because the concepts are subtle and profound, and require language, what TS Eliot might have called "the intolerable wrestle with words and meanings." (For instance, I used hidden information above to avoid having to distinguish between incomplete and imperfect information.)
- Perfect Play is generally defined as a strategy that leads to the best possible outcome for a participant, regardless of the choices of the opponent.
Thus minimax is of central importance, and provided the foundation for game theory.
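To make minimax concrete on a tractable game, here is a minimal sketch (all names are illustrative, not from any particular library) that values tic-tac-toe positions by exhaustive search, which is exactly why we can call the game strongly solved:

```python
# A minimal minimax sketch for tic-tac-toe (illustrative names).
# Board: tuple of 9 cells, each 'X', 'O', or None. 'X' moves first.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` with `player` to move:
    +1 if X can force a win, -1 if O can, 0 if best play draws."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: a draw
    nxt = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + (player,) + board[i + 1:], nxt)
              for i in moves]
    # X maximizes the value, O minimizes it.
    return max(values) if player == 'X' else min(values)

# The whole game tree fits in memory, so the value of the empty
# board is knowable with total certainty: tic-tac-toe is a draw.
print(minimax((None,) * 9, 'X'))  # 0
```

The same recursion applied to Go would require traversing a tree far beyond any computer, which is precisely the tractability gap separating "strongly solved" from "plays well in practice."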
Even in games with incomplete information, whether "deterministic" (Battleship) or involving "quantum indeterminacy" (the Prisoner's Dilemma), there are optimal strategies. For simultaneous games such as the Dilemma and its numerous extensions, minimax is used. In Battleship, there are at least three strategies of increasing optimality, and although there doesn't appear to be a strategy that can yield P > .5, a player who employs a more optimal strategy will win in aggregate. Even Rock, Paper, Scissors seems to have an optimal strategy, which blows my mind, and carries the caveat that I need to look into it more.
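On the Rock, Paper, Scissors point: as I understand it, the optimal strategy is the uniformly random mixed strategy (play each move with probability 1/3), which is the game's equilibrium. A quick sketch (function names are mine, for illustration) shows why it is "unexploitable": its expected payoff is zero against any opponent distribution whatsoever.

```python
# Expected payoff of mixed strategies in Rock, Paper, Scissors.
# Payoffs from the row player's perspective: +1 win, 0 tie, -1 loss.
PAYOFF = {
    ('R', 'R'): 0, ('R', 'P'): -1, ('R', 'S'): 1,
    ('P', 'R'): 1, ('P', 'P'): 0,  ('P', 'S'): -1,
    ('S', 'R'): -1, ('S', 'P'): 1, ('S', 'S'): 0,
}

def expected_payoff(mine, theirs):
    """Expected payoff when both players use mixed strategies,
    each given as a {move: probability} dict."""
    return sum(p * q * PAYOFF[(m, t)]
               for m, p in mine.items()
               for t, q in theirs.items())

uniform = {'R': 1 / 3, 'P': 1 / 3, 'S': 1 / 3}

# Against any opponent mix, uniform play expects exactly zero:
for biased in ({'R': 1.0, 'P': 0.0, 'S': 0.0},
               {'R': 0.5, 'P': 0.3, 'S': 0.2}):
    print(round(expected_payoff(uniform, biased), 10))  # 0.0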
- Thus, perfect play, as defined, is certainly achievable, but does not necessarily connote (objectively) optimal choices. This is a little confusing, because "perfect" implies objectivity, a condition only possible for tractable problems.
It is also important to note that there may not be a "winning" strategy in the sense of being better off than the opponent, and in this condition, perfect or optimal play is mitigation of loss.
*In terms of incomplete information games specifically, I think there's a case for extending the concept of Bounded Rationality to include information that cannot be observed or "known".
Colloquially, this would include the "unknowns" (both known and unknown) and the "unknowable" (quantum indeterminacy and superpositions).