Program equilibrium

Program equilibrium is a game-theoretic solution concept for a scenario in which players submit computer programs to play the game on their behalf and the programs can read each other's source code. The term was introduced by Moshe Tennenholtz in 2004. The same setting had previously been studied by R. Preston McAfee, J. V. Howard and Ariel Rubinstein.

Setting and definition
The program equilibrium literature considers the following setting. Consider a normal-form game as a base game. For simplicity, consider a two-player game in which $$S_1$$ and $$S_2$$ are the sets of available strategies and $$u_1$$ and $$u_2$$ are the players' utility functions. Then we construct a new (normal-form) program game in which each player $$i$$ chooses a computer program $$p_i$$. The payoff (utility) for the players is then determined as follows. Each player's program $$p_i$$ is run with the other program $$p_{-i}$$ as input and outputs a strategy $$s_i$$ for Player $$i$$. For convenience, one often also assumes that programs can access their own source code. Finally, the utilities for the players are given by $$u_i(s_1,s_2)$$ for $$i=1,2$$, i.e., by applying the utility functions of the base game to the chosen strategies.

One has to further deal with the possibility that one of the programs $$p_i$$ doesn't halt. One way to deal with this is to restrict both players' sets of available programs to prevent non-halting programs.

A program equilibrium is a pair of programs $$(p_1,p_2)$$ that constitute a Nash equilibrium of the program game. In other words, $$(p_1,p_2)$$ is a program equilibrium if neither player $$i$$ can deviate to an alternative program $$p_i'$$ such that their utility is higher in $$(p_i',p_{-i})$$ than in $$(p_1,p_2)$$.
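As a minimal illustration, the payoff evaluation of a program game can be sketched in Python. All names here are hypothetical, and programs are modeled as callables that receive the opponent's program directly rather than its source code, which simplifies the setting described above.

```python
# Sketch of a program game (all names hypothetical). Programs are modeled as
# callables taking (opponent_program, own_program) and returning a base-game
# strategy; the base game supplies the utility functions u1, u2.
def play_program_game(p1, p2, u1, u2):
    s1 = p1(p2, p1)  # Player 1's program reads Player 2's program (and itself)
    s2 = p2(p1, p2)
    return u1(s1, s2), u2(s1, s2)

# Example base game: Matching Pennies (payoffs assumed for illustration).
u1 = lambda s1, s2: 1 if s1 == s2 else -1
u2 = lambda s1, s2: -u1(s1, s2)

always_heads = lambda opp, me: "H"
# Copycat simulates the opponent; in general such mutual simulation need not halt.
copycat = lambda opp, me: opp(me, opp)

print(play_program_game(always_heads, copycat, u1, u2))  # → (1, -1)
```

Here the copycat program exploits access to the opponent by simulating it, which is exactly the kind of behavior (and the attendant halting concerns) the restrictions above are meant to control.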

Instead of programs, some authors have the players submit other kinds of objects, such as logical formulas specifying what action to play depending on an encoding of the logical formula submitted by the opponent.

Different mechanisms for achieving cooperative program equilibrium in the Prisoner's Dilemma
Various authors have proposed ways to achieve cooperative program equilibrium in the Prisoner's Dilemma.

Cooperation based on syntactic comparison
Multiple authors have independently proposed the following program for the Prisoner's Dilemma:

algorithm CliqueBot(opponent_program):
    if opponent_program == this_program then
        return Cooperate
    else
        return Defect

If both players submit this program, then the condition in the if-statement evaluates to true in the execution of both programs. As a result, both programs cooperate. Moreover, (CliqueBot,CliqueBot) is an equilibrium: if either player deviates to some other program $$p_i$$ that differs from CliqueBot, then the opponent will defect. Deviating to $$p_i$$ can therefore at best yield the payoff of mutual defection, which is worse than the payoff of mutual cooperation.
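A runnable sketch of this syntactic comparison, with programs encoded as Python source strings (the encoding and names are illustrative, not from the literature):

```python
# CliqueBot as a source string: each "program" defines
# play(opponent_source, own_source) -> "C" (cooperate) or "D" (defect).
CLIQUEBOT = '''
def play(opponent_source, own_source):
    # Cooperate only with an exact syntactic copy of this very program.
    return "C" if opponent_source == own_source else "D"
'''

def run(program_source, opponent_source):
    # Execute the program's source and call its play function,
    # passing it the opponent's source and its own source.
    namespace = {}
    exec(program_source, namespace)
    return namespace["play"](opponent_source, program_source)

print(run(CLIQUEBOT, CLIQUEBOT))        # → C  (exact copies cooperate)
print(run(CLIQUEBOT, CLIQUEBOT + " "))  # → D  (one extra character breaks cooperation)
```

The last line previews the fragility of syntactic comparison: a single extra character makes the equality check fail.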

This approach has been criticized for being fragile. If the players fail to coordinate on the exact source code they submit (for example, if one player adds an extra space character), both programs will defect. The development of the techniques below is in part motivated by this fragility issue.

Proof-based cooperation
Another approach is based on letting each player's program try to prove something about the opponent's program or about how the two programs relate. One example of such a program is the following:

algorithm FairBot(opponent_program):
    if there is a proof that opponent_program(this_program) = Cooperate then
        return Cooperate
    else
        return Defect

Using Löb's theorem it can be shown that when both players submit this program, they cooperate against each other. Moreover, if one player were to instead submit a program that defects against the above program, then (assuming the proof system used is consistent) the if-condition would evaluate to false and the above program would defect. Therefore, (FairBot,FairBot) is a program equilibrium as well.
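The Löb-style argument for mutual cooperation can be sketched as follows, where $$\square$$ denotes provability in the fixed proof system. This is the standard argument from the literature on proof-based cooperation:

```latex
% Let C abbreviate the statement "FairBot(FairBot) = Cooperate".
% By FairBot's construction, C holds if and only if there is a proof
% that the opponent (also FairBot) cooperates, so the system proves:
\vdash C \leftrightarrow \square C
% In particular, \vdash \square C \to C. Löb's theorem states that
% \vdash \square C \to C implies \vdash C, hence:
\vdash C
% So C is provable, FairBot's proof search succeeds, and both cooperate.
```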

Cooperating with ε-grounded simulation
Another proposed program is the following:

algorithm $$\epsilon$$GroundedFairBot(opponent_program):
    with probability $$\epsilon$$:
        return Cooperate
    return opponent_program(this_program)

Here $$\epsilon$$ is a small positive number.

If both players submit this program, then they terminate almost surely and cooperate. The number of nested simulations before termination follows a geometric distribution with expected value $$1/\epsilon$$. Moreover, if both players submit this program, neither can profitably deviate, assuming $$\epsilon$$ is sufficiently small: defecting with probability $$\Delta$$ would cause the opponent to defect with probability $$(1-\epsilon)\Delta$$.
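The mutual-simulation dynamic can be sketched in Python (the names and the value of $$\epsilon$$ are illustrative). Whenever the recursion terminates, which it does almost surely, the grounded Cooperate propagates back up through every level of simulation:

```python
import random

EPS = 0.3  # illustrative value; the argument only needs a small positive epsilon

def eps_grounded_fairbot(opponent, rng):
    # "Grounding" step: with probability EPS, cooperate outright.
    if rng.random() < EPS:
        return "C"
    # Otherwise simulate the opponent playing against this program.
    return opponent(eps_grounded_fairbot, rng)

# Self-play: the recursion depth is geometric with mean 1/EPS, and the
# grounded "C" is returned unchanged by every level of simulation.
move = eps_grounded_fairbot(eps_grounded_fairbot, random.Random(0))
print(move)  # → C
```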

Folk theorem
We here give a theorem that characterizes what payoffs can be achieved in program equilibrium.

The theorem uses the following terminology: A pair of payoffs $$(v_1,v_2)$$ is called feasible if there is a pair of (potentially mixed) strategies $$(s_1,s_2)$$ such that $$u_i(s_1,s_2)=v_i$$ for both players $$i$$. That is, a pair of payoffs is called feasible if it is achieved in some strategy profile. A payoff $$v_i$$ is called individually rational if it is at least that player's minimax payoff; that is, if $$v_i \geq \min_{\sigma_{-i}}\max_{s_{i}} u_i(\sigma_{-i},s_{i})$$, where the minimum is taken over all mixed strategies $$\sigma_{-i}$$ for Player $$-i$$.
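For concreteness, the minimax payoff in the definition above can be computed numerically. The sketch below uses assumed Prisoner's Dilemma payoffs (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection); since defection is dominant there, Player 1's minimax payoff is the mutual-defection payoff:

```python
# Player 1's payoffs in an assumed Prisoner's Dilemma (not from the text):
# u1[(own_move, opponent_move)]
u1 = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def minimax_payoff(u, grid=1001):
    # min over the opponent's mixed strategies (q = probability of "C"),
    # max over the player's own pure strategies; pure strategies suffice
    # for the inner max because the expected payoff is linear in q.
    best = float("inf")
    for k in range(grid):
        q = k / (grid - 1)
        inner = max(q * u[(s, "C")] + (1 - q) * u[(s, "D")] for s in ("C", "D"))
        best = min(best, inner)
    return best

print(minimax_payoff(u1))  # → 1.0 (attained when the opponent always defects)
```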

Theorem (folk theorem for program equilibrium): Let G be a base game. Let $$(v_1,v_2)$$ be a pair of real-valued payoffs. Then the following two claims are equivalent:
 * The payoffs $$(v_1,v_2)$$ are feasible and individually rational.
 * There is a program equilibrium $$(p_1,p_2)$$ that achieves payoffs $$(v_1,v_2)$$.

The result is referred to as a folk theorem in reference to the so-called folk theorems for repeated games, which impose the same conditions on equilibrium payoffs $$(v_1,v_2)$$.