In this talk, we will consider the computational complexity of solving stochastic games with mean-payoff objectives, where ($\epsilon$-)optimal strategies may require infinite memory. Instead of identifying special classes in which simple strategies are sufficient to play $\epsilon$-optimally, we ask what can be achieved with (and against) finite-memory strategies up to a given bound on the memory. We show NP-hardness for approximating zero-sum values, already with respect to memoryless strategies and for 1-player reachability games.
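For reference (this is the standard definition, not spelled out above), the mean-payoff of a play with rewards $r_0 r_1 r_2 \dots$ is its long-run average reward, e.g.

$$\mathrm{MP}(r_0 r_1 r_2 \dots) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} r_i,$$

where the choice between $\liminf$ and $\limsup$ is a modelling convention.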
We show that one can decide in polynomial space, given a game, a memory bound $b$, a non-negative error $\epsilon$, and a value $v$, whether there exists a strategy that uses at most $b$ memory modes and achieves a value of at least $v-\epsilon$ against every opponent strategy that also uses at most $b$ memory modes. Furthermore, if $\epsilon>0$, we show that the complexity can be reduced to FNP[NP], i.e., to the second level of the polynomial hierarchy.
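In symbols, writing $\Sigma_b$ and $T_b$ for the sets of strategies of the maximiser and minimiser that use at most $b$ memory modes (this notation is ours, for illustration only), the decision problem asks whether

$$\exists \sigma \in \Sigma_b \ \forall \tau \in T_b:\ \mathbb{E}^{\sigma,\tau}\big[\mathrm{MP}\big] \;\geq\; v - \epsilon.$$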
Our results generalise easily to partial-information games and to other objectives, such as parity, which yields several complexity results for special classes of games. In the talk, we will focus on a well-known connection between stochastic games with discounted-payoff objectives and those with mean-payoff objectives, and use it to show that approximating the unrestricted value of mean-payoff games, i.e., the value when the players are not restricted to bounded-memory strategies, can also be done in FNP[NP].
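The connection in question is the classical fact that the values of discounted games converge to the mean-payoff value as the discount factor approaches $1$; schematically, in our notation,

$$\mathrm{val}(G) \;=\; \lim_{\lambda \to 1^{-}} \mathrm{val}_\lambda(G),$$

where $\mathrm{val}_\lambda(G)$ denotes the value of $G$ under the $\lambda$-discounted-payoff objective.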
This talk is based on joint work with Rasmus Ibsen-Jensen and Patrick Totzke, accepted at LICS 2024.