1 Introduction
The Iterated Prisoner’s Dilemma (IPD) is a common model in game theory, frequently used to understand the evolution of cooperative behaviour from complex dynamics [15]. This manuscript uses the Axelrod library [30, 49], open source software for conducting IPD research with reproducibility as a principal goal. Written in the Python programming language, to date the library contains source code contributed by over 50 individuals from a variety of geographic locations and technical backgrounds. The library is supported by a comprehensive test suite that covers all the intended behaviours of all of the strategies in the library, as well as the features that conduct matches, tournaments, and population dynamics.
The library is continuously developed and, as of version 3.0.0, contains over 200 strategies, many from the scientific literature, including classic strategies like Win Stay Lose Shift [46] and previous tournament winners such as OmegaTFT [51], Adaptive Pavlov [34], and ZDGTFT2 [53].
Since Robert Axelrod’s seminal tournament [12], a number of IPD tournaments have been undertaken; these are summarised in Table 1. Further to the work described in [30], a regular set of standard, noisy [19] and probabilistic ending [13] tournaments is carried out as more strategies are added to the Axelrod library. Details and results are available at http://axelrodtournament.readthedocs.io. This work presents a detailed analysis of a tournament with 176 strategies (details given in Section 3).
Year  Reference  Number of Strategies  Type  Source Code 

1979  [12]  13  Standard  Not immediately available 
1979  [13]  64  Standard  Available in FORTRAN 
1991  [19]  13  Noisy  Not immediately available 
2002  [52]  16  Wildlife  Not applicable 
2005  [29]  223  Varied  Not available 
2012  [53]  13  Standard  Not fully available 
2016  [30]  129  Standard  Fully available 
In this work we describe how collections of strategies in the Axelrod library have been used to train new strategies specifically to win IPD tournaments. These strategies are trained using generic strategy archetypes based on e.g. finite state machines, arriving at particularly effective parameter choices through evolutionary or particle swarm algorithms. There are several previous publications that use evolutionary algorithms to evolve IPD strategies in various circumstances [3, 4, 7, 9, 10, 17, 21, 39, 54, 59]. See also [24] for a strategy trained to win against a collection of well-known IPD opponents and see [22] for a prior use of particle swarm algorithms. Our results are unique in that we are able to train against a large and diverse collection of strategies available from the scientific literature. Crucially, the software used in this work is openly available and can be used to train strategies in the future in a reliable manner, with confidence that the opponent strategies are correctly implemented, tested and documented. Moreover, as of the time of writing, we claim that this work contains the best performing strategies for the Iterated Prisoner’s Dilemma.
2 The Strategy Archetypes
The Axelrod library now contains many parametrised strategies trained using machine learning methods. Most are deterministic, use many rounds of memory, and perform extremely well in tournaments, as will be discussed in Section 3. Training of these strategies is discussed in Section 4. These strategies can encode a variety of other strategies, including classic strategies like Tit For Tat [14], handshake strategies, and grudging strategies that always defect after an opponent defection.
2.1 LookerUp
The LookerUp strategy is based on a lookup table and encodes a set of deterministic responses based on the opponent’s first moves, the opponent’s recent moves, and the player’s recent moves; the number of rounds of each is a parameter of the archetype. If the table keys on the opponent’s initial moves then the player has infinite memory depth, otherwise its depth equals the number of recent rounds used. This is illustrated diagrammatically in Figure 1.
Training of this strategy corresponds to finding maps from partial histories to actions, either a cooperation or a defection. Although various parameter combinations have been tried, the best performance at the time of training was obtained using two rounds of each type of history. A strategy called EvolvedLookerUp2_2_2 is among the top strategies in the library.
This archetype can also be used to train deterministic memory-n strategies by using no initial rounds. For a single round of memory, the resulting strategy cooperates if the last round was mutual cooperation and defects otherwise, known as Grim or Grudger.
Two strategies in the library, Winner12 and Winner21, from [40], are based on lookup tables of this type. The strategy Winner12 emerged in fewer than 10 generations of training in our framework using a score-maximizing objective. Strategies nearly identical to Winner21 arise from training with a Moran process objective.
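As an illustration, a lookup-table player of this kind can be sketched in a few lines of Python. This is not the library's implementation; the key structure (opponent's first moves, opponent's recent moves, player's recent moves), the cooperate-first opening, and the example policy are assumptions of the sketch.

```python
import itertools

C, D = "C", "D"

def make_lookup_table(policy, n_first=2, n_recent=2):
    """Enumerate every (opponent first moves, opponent recent moves,
    player recent moves) key and record the policy's response."""
    keys = itertools.product(
        itertools.product((C, D), repeat=n_first),
        itertools.product((C, D), repeat=n_recent),
        itertools.product((C, D), repeat=n_recent),
    )
    return {key: policy(key) for key in keys}

def lookerup_move(table, my_history, opp_history, n_first=2, n_recent=2):
    """Look up the next move; cooperate until enough history exists."""
    if len(my_history) < max(n_first, n_recent):
        return C
    key = (
        tuple(opp_history[:n_first]),
        tuple(opp_history[-n_recent:]),
        tuple(my_history[-n_recent:]),
    )
    return table[key]

# A hypothetical hand-written policy: defect only if the opponent
# defected in both of its last two rounds.
table = make_lookup_table(lambda key: D if key[1] == (D, D) else C)
print(len(table))  # 2^2 * 2^2 * 2^2 = 64 entries
```

Training replaces the hand-written policy with table entries found by the search algorithm.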
2.2 Gambler
Gambler is a stochastic variant of LookerUp. Instead of deterministically encoded moves, the lookup table emits probabilities which are used to choose cooperation or defection. This is illustrated diagrammatically in Figure 2.
Training of this strategy corresponds to finding maps from histories to a probability of cooperation. The library includes a strategy of this type that is mostly deterministic, with 52 of its 64 probabilities being 0 or 1. At one time this strategy outperformed EvolvedLookerUp2_2_2.
This strategy type can be used to train arbitrary-memory strategies. A memory-one strategy called PSOGamblerMem1 was trained. Though it performs well in standard tournaments (see Table 2) it does not outperform the longer-memory strategies, and is bested by a similar strategy that also uses the first round of play: PSOGambler_1_1_1.
These strategies are trained with a particle swarm algorithm rather than an evolutionary algorithm (though the latter would also suffice). Particle swarm algorithms have been used to train IPD strategies previously [22].
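A minimal sketch of the stochastic lookup, assuming the same key structure as the LookerUp sketch (the helper names here are illustrative, not the library's API):

```python
import random

C, D = "C", "D"

def gambler_move(prob_table, my_history, opp_history,
                 n_first=2, n_recent=2, rng=None):
    """Cooperate with the probability stored for the current history key."""
    rng = rng or random
    if len(my_history) < max(n_first, n_recent):
        return C
    key = (tuple(opp_history[:n_first]),
           tuple(opp_history[-n_recent:]),
           tuple(my_history[-n_recent:]))
    p_cooperate = prob_table[key]
    return C if rng.random() < p_cooperate else D

# A table whose entries are all 0 or 1 recovers a deterministic
# LookerUp strategy; intermediate values make the play stochastic.
key = ((D, D), (D, D), (C, C))
print(gambler_move({key: 0.0}, [C, C], [D, D]))  # always defects here
```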
2.3 ANN: Single Hidden Layer Artificial Neural Network
Strategies based on artificial neural networks use a variety of features computed from the history of play:
- Opponent's first move is C
- Opponent's first move is D
- Opponent's second move is C
- Opponent's second move is D
- Player's previous move is C
- Player's previous move is D
- Player's second previous move is C
- Player's second previous move is D
- Opponent's previous move is C
- Opponent's previous move is D
- Opponent's second previous move is C
- Opponent's second previous move is D
- Total opponent cooperations
- Total opponent defections
- Total player cooperations
- Total player defections
- Round number
These are then input into a feed-forward neural network with a single hidden layer of user-supplied width. This is illustrated diagrammatically in Figure 3.
Training of this strategy corresponds to finding the parameters of the neural network. A hidden layer with just five nodes performs quite well in both deterministic and noisy tournaments. The output of the ANN used in this work is deterministic; a stochastic variant that outputs probabilities rather than exact moves could easily be created.
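The feature computation and a forward pass can be sketched as follows. The tanh activation, the weight layout, and the sign-threshold output are assumptions of this sketch; the trained networks' actual weights are not reproduced.

```python
import math

C, D = "C", "D"

def first_is(history, i, move):
    """1.0 if the i-th move from the start was `move`, else 0.0."""
    return 1.0 if len(history) > i and history[i] == move else 0.0

def last_is(history, i, move):
    """1.0 if the i-th move from the end was `move` (1 = previous)."""
    return 1.0 if len(history) >= i and history[-i] == move else 0.0

def features(me, opp):
    """The 17 features listed above, in order."""
    return [
        first_is(opp, 0, C), first_is(opp, 0, D),
        first_is(opp, 1, C), first_is(opp, 1, D),
        last_is(me, 1, C), last_is(me, 1, D),
        last_is(me, 2, C), last_is(me, 2, D),
        last_is(opp, 1, C), last_is(opp, 1, D),
        last_is(opp, 2, C), last_is(opp, 2, D),
        float(opp.count(C)), float(opp.count(D)),
        float(me.count(C)), float(me.count(D)),
        float(len(me)),  # round number
    ]

def ann_move(w_hidden, b_hidden, w_out, me, opp):
    """Single hidden layer; deterministic output: C iff activation > 0."""
    x = features(me, opp)
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return C if sum(w * h for w, h in zip(w_out, hidden)) > 0 else D
```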
2.4 Finite State Machines
Strategies based on finite state machines are deterministic and computationally efficient. In each round of play the strategy selects an action based on the current state and the opponent’s last action, transitioning to a new state for the next round. This is illustrated diagrammatically in Figure 4.
Training this strategy corresponds to finding mappings of states and histories to an action and a state. Figure 5 shows two of the trained finite state machines. The layout of state nodes is kept the same between Figures 5(a) and 5(b) to highlight the effect of different training environments. Note also that two of the 16 states are not used; this is also an outcome of the training process.
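To make the encoding concrete, here is a minimal sketch (not the library's implementation); as a check, Tit For Tat and Grudger are written as one- and two-state machines.

```python
C, D = "C", "D"

def fsm_play(transitions, state, first_action, opp_moves):
    """Generate this player's moves against a fixed opponent sequence.
    `transitions` maps (state, opponent's last action) to
    (next state, action to play)."""
    moves = [first_action]
    for opp_action in opp_moves[:-1]:
        state, action = transitions[(state, opp_action)]
        moves.append(action)
    return moves

# Tit For Tat as a one-state machine: echo the opponent's last move.
TIT_FOR_TAT = {(0, C): (0, C), (0, D): (0, D)}

# Grudger as a two-state machine: defect forever after any defection.
GRUDGER = {(0, C): (0, C), (0, D): (1, D), (1, C): (1, D), (1, D): (1, D)}

print(fsm_play(TIT_FOR_TAT, 0, C, [C, D, C, D]))  # ['C', 'C', 'D', 'C']
```

Training searches over the transition table itself; unused states, as in the 16-state machines above, simply never appear on the right-hand side of any reachable transition.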
2.5 Hidden Markov Models
A variant of the finite state machine strategies are hidden Markov models (HMMs). Like the strategies based on finite state machines, these strategies encode an internal state. However, they use probabilistic transitions to other states based on the prior round of play, and cooperate or defect with various probabilities in each state. This is shown diagrammatically in Figure 6. Training this strategy corresponds to finding mappings of states and histories to probabilities of cooperating, as well as probabilities for the next internal state.
2.6 Meta Strategies
There are several strategies based on ensemble methods that are common in machine learning, called Meta strategies. These strategies are composed of a team of other strategies. In each round, each member of the team is polled for its desired next move. The ensemble then selects the next move based on a rule, such as the consensus vote in the case of MetaMajority or the best individual performance in the case of MetaWinner. These strategies were among the best in the library before the inclusion of those trained by reinforcement learning. The library contains strategies built from teams of all the deterministic players, all the memory-one players, and some others.
Because these strategies inherit many of the properties of the strategies on which they are based, including using knowledge of the match length to defect on the last round(s) of play, not all of them were included in the results of this paper. These strategies do not typically outperform the trained strategies described above.
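A consensus rule of the MetaMajority kind can be sketched as follows; the tie-break toward cooperation is an assumption of this sketch, not a documented rule of the library.

```python
from collections import Counter

C, D = "C", "D"

def meta_majority(team_moves):
    """Poll the team and play the most popular action
    (cooperating on a tie, by assumption)."""
    counts = Counter(team_moves)
    return C if counts[C] >= counts[D] else D

print(meta_majority([C, C, D]))  # C
```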
3 Results
This section presents the results of a large IPD tournament with strategies from the Axelrod library, including some additional parametrized strategies (e.g. various parameter choices for Generous Tit For Tat [24]). These are listed in Appendix A.
All strategies in the tournament follow a simple set of rules in accordance with earlier tournaments:
- Players are unaware of the number of turns in a match.
- Players carry no acquired state between matches.
- Players cannot observe the outcome of other matches.
- Players cannot identify their opponent by any label or identifier.
- Players cannot manipulate or inspect their opponents in any way.
Any strategy that does not follow these rules, such as a strategy that defects on the last round of play, was omitted from the tournament presented here (but not necessarily from the training pool).
A total of 176 strategies are included, of which 53 are stochastic. Section 3.1 is concerned with the standard tournament of 200 turns, whereas Section 3.2 discusses a tournament with 5% noise. Due to the inherent stochasticity of these IPD tournaments, the tournaments were repeated 50,000 times. This allows for a detailed and confident analysis of the performance of strategies. To illustrate the results considered, Figure 6(a) shows the distribution of the mean score per turn of Tit For Tat over all the repetitions. Similarly, Figure 6(b) shows the rank of Tit For Tat in each repetition (we note that it never wins a tournament). Finally, Figure 6(c) shows the number of opponents beaten in any given tournament: Tit For Tat does not win any match (this is due to the fact that it will either draw with mutual cooperation or be the second to defect).
The utilities used, and thus the specific Prisoner's Dilemma being played, are the standard values: mutual cooperation scores 3 to each player, mutual defection scores 1 to each, and a defector scores 5 against a cooperator, who scores 0:

    (R, S, T, P) = (3, 0, 5, 1)    (1)
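In code, a match is scored per these utilities as follows (a sketch for illustration, not the library's scoring API):

```python
C, D = "C", "D"

# Payoffs (mine, theirs) for each pair of simultaneous actions:
# mutual cooperation 3, mutual defection 1, temptation 5, sucker 0.
PAYOFFS = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def mean_score_per_turn(my_moves, opp_moves):
    """Mean payoff per turn for the first player, as in the tables below."""
    total = sum(PAYOFFS[(mine, theirs)][0]
                for mine, theirs in zip(my_moves, opp_moves))
    return total / len(my_moves)

print(mean_score_per_turn([C, C, D], [C, C, C]))  # (3 + 3 + 5) / 3
```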
All data generated for this work is archived and available at [31].
3.1 Standard Tournament
The top 11 performing strategies by median payoff are all strategies trained to maximize total payoff against a subset of the strategies (Table 2). The next strategy is Derived Belief Strategy (DBS) [11], which actively analyzes the opponent and responds accordingly. The next three strategies are Winner12, based on a lookup table; Fool Me Once [49], a grudging strategy that defects indefinitely after the second defection; and Omega Tit For Tat [29].
mean  std  min  5%  25%  50%  75%  95%  max  

EvolvedLookerUp2_2_2  2.955  0.010  2.915  2.937  2.948  2.956  2.963  2.971  2.989 
Evolved HMM 5  2.954  0.014  2.903  2.931  2.945  2.954  2.964  2.977  3.007 
Evolved FSM 16  2.952  0.013  2.900  2.930  2.943  2.953  2.962  2.973  2.993 
PSO Gambler 2_2_2  2.938  0.013  2.884  2.914  2.930  2.940  2.948  2.957  2.972 
Evolved FSM 16 Noise 05  2.919  0.013  2.874  2.898  2.910  2.919  2.928  2.939  2.965 
PSO Gambler 1_1_1  2.912  0.023  2.805  2.874  2.896  2.912  2.928  2.950  3.012 
Evolved ANN 5  2.912  0.010  2.871  2.894  2.905  2.912  2.919  2.928  2.945 
Evolved FSM 4  2.910  0.012  2.867  2.889  2.901  2.910  2.918  2.929  2.943 
Evolved ANN  2.907  0.010  2.865  2.890  2.900  2.908  2.914  2.923  2.942 
PSO Gambler Mem1  2.901  0.025  2.783  2.858  2.884  2.901  2.919  2.942  2.994 
Evolved ANN 5 Noise 05  2.864  0.008  2.830  2.850  2.858  2.865  2.870  2.877  2.891 
DBS  2.857  0.009  2.823  2.842  2.851  2.857  2.863  2.872  2.899 
Winner12  2.849  0.008  2.820  2.836  2.844  2.850  2.855  2.862  2.874 
Fool Me Once  2.844  0.008  2.818  2.830  2.838  2.844  2.850  2.857  2.882 
Omega TFT: 3, 8  2.841  0.011  2.800  2.822  2.833  2.841  2.849  2.859  2.882 
For completeness, violin plots showing the distribution of the scores of each strategy (again ranked by median score) are shown in Figure 8.
Pairwise payoff results are given as a heatmap (Figure 9) which shows that many strategies achieve mutual cooperation (obtaining a score of 3). The top performing strategies never defect first yet are able to exploit weaker strategies that attempt to defect.
The strategies that win the most matches (Table 3) are Defector [15] and Aggravater [49], followed by handshaking and zero determinant strategies [47]. This includes two handshaking strategies that were the result of training to maximize Moran process fixation (TF1 and TF2). No strategies were trained specifically to win matches. None of the top scoring strategies appear in the top 15 list of strategies ranked by match wins. This can be seen in Figure 10 where the distribution of the number of wins of each strategy is shown.
mean  std  min  5%  25%  50%  75%  95%  max  

Aggravater  161.595  0.862  160  160.0  161.0  162.0  162.0  163.0  163 
Defector  161.605  0.864  160  160.0  161.0  162.0  162.0  163.0  163 
CS  159.646  1.005  155  158.0  159.0  160.0  160.0  161.0  161 
ZDExtort4  150.598  2.662  138  146.0  149.0  151.0  152.0  155.0  162 
Handshake  149.552  1.754  142  147.0  148.0  150.0  151.0  152.0  154 
ZDExtort2  146.094  3.445  129  140.0  144.0  146.0  148.0  152.0  160 
ZDExtort2 v2  146.291  3.425  131  141.0  144.0  146.0  149.0  152.0  160 
Winner21  139.946  1.225  136  138.0  139.0  140.0  141.0  142.0  143 
TF2  138.240  1.700  130  135.0  137.0  138.0  139.0  141.0  143 
TF1  135.692  1.408  130  133.0  135.0  136.0  137.0  138.0  140 
Naive Prober: 0.1  136.016  2.504  127  132.0  134.0  136.0  138.0  140.0  147 
Feld: 1.0, 0.5, 200  136.087  1.696  130  133.0  135.0  136.0  137.0  139.0  144 
Joss: 0.9  136.015  2.503  126  132.0  134.0  136.0  138.0  140.0  146 
Predator  133.718  1.385  129  131.0  133.0  134.0  135.0  136.0  138 
SolutionB5  125.843  1.509  120  123.0  125.0  126.0  127.0  128.0  131 
The number of wins of the top strategies of Table 2 are shown in Table 4. It is evident that although these strategies score highly they do not win many matches: the strategy among them with the most wins is Evolved FSM 16, which won at most 60 matches in a given tournament.
mean  std  min  5%  25%  50%  75%  95%  max  

EvolvedLookerUp2_2_2  48.259  1.336  43  46.0  47.0  48.0  49.0  50.0  53 
Evolved HMM 5  41.358  1.221  36  39.0  41.0  41.0  42.0  43.0  45 
Evolved FSM 16  56.978  1.099  51  55.0  56.0  57.0  58.0  59.0  60 
PSO Gambler 2_2_2  40.692  1.089  36  39.0  40.0  41.0  41.0  42.0  45 
Evolved FSM 16 Noise 05  40.070  1.673  34  37.0  39.0  40.0  41.0  43.0  47 
PSO Gambler 1_1_1  45.005  1.595  38  42.0  44.0  45.0  46.0  48.0  51 
Evolved ANN 5  43.224  0.674  41  42.0  43.0  43.0  44.0  44.0  47 
Evolved FSM 4  37.227  0.951  34  36.0  37.0  37.0  38.0  39.0  41 
Evolved ANN  43.100  1.021  40  42.0  42.0  43.0  44.0  45.0  48 
PSO Gambler Mem1  43.444  1.837  34  40.0  42.0  43.0  45.0  46.0  51 
Evolved ANN 5 Noise 05  33.711  1.125  30  32.0  33.0  34.0  34.0  35.0  38 
DBS  32.329  1.198  28  30.0  32.0  32.0  33.0  34.0  38 
Winner12  40.179  1.037  36  39.0  39.0  40.0  41.0  42.0  44 
Fool Me Once  50.121  0.422  48  50.0  50.0  50.0  50.0  51.0  52 
Omega TFT: 3, 8  35.157  0.859  32  34.0  35.0  35.0  36.0  37.0  39 
Finally, Table 5 and Figure 11 show the ranks (based on median score) of each strategy over the repeated tournaments. Whilst there is some stochasticity, the top three strategies almost always rank in the top three. For example, the worst that EvolvedLookerUp2_2_2 ranks in any tournament is 8th.
mean  std  min  5%  25%  50%  75%  95%  max  

EvolvedLookerUp2_2_2  2.173  1.070  1  1.0  1.0  2.0  3.0  4.0  8 
Evolved HMM 5  2.321  1.275  1  1.0  1.0  2.0  3.0  5.0  10 
Evolved FSM 16  2.489  1.299  1  1.0  1.0  2.0  3.0  5.0  10 
PSO Gambler 2_2_2  3.961  1.525  1  2.0  3.0  4.0  5.0  7.0  10 
Evolved FSM 16 Noise 05  6.300  1.688  1  4.0  5.0  6.0  7.0  9.0  11 
PSO Gambler 1_1_1  7.082  2.499  1  3.0  5.0  7.0  9.0  10.0  17 
Evolved ANN 5  7.287  1.523  2  5.0  6.0  7.0  8.0  10.0  11 
Evolved FSM 4  7.527  1.631  2  5.0  6.0  8.0  9.0  10.0  12 
Evolved ANN  7.901  1.450  2  5.0  7.0  8.0  9.0  10.0  12 
PSO Gambler Mem1  8.222  2.535  1  4.0  6.0  9.0  10.0  12.0  20 
Evolved ANN 5 Noise 05  11.362  0.872  8  10.0  11.0  11.0  12.0  13.0  16 
DBS  12.197  1.125  9  11.0  11.0  12.0  13.0  14.0  16 
Winner12  13.221  1.137  9  11.0  12.0  13.0  14.0  15.0  17 
Fool Me Once  13.960  1.083  9  12.0  13.0  14.0  15.0  15.0  17 
Omega TFT: 3, 8  14.275  1.301  9  12.0  13.0  15.0  15.0  16.0  19 
Figure 12 shows the rate of cooperation in each round for the top three strategies. The opponents in these figures are ordered by median score. It is evident that the high performing strategies share a common thread: against the top strategies they do not defect first and achieve mutual cooperation. Against the lower-ranking strategies they also do not defect first (a mean cooperation rate of 1 in the first round) but have learned to retaliate quickly.
3.2 Noisy Tournament
Noisy tournaments, in which there is a 5% chance that each action is flipped, are now described. As shown in Table 6 and Figure 13, the best performing strategies by median payoff are DBS, designed to account for noise, followed by two strategies trained in the presence of noise and three trained strategies trained without noise. One of the strategies trained with noise (PSO Gambler) actually performs less well than some of the other high ranking strategies, including Spiteful TFT (TFT, but defects indefinitely if the opponent defects twice consecutively) and OmegaTFT (also designed to handle noise). While DBS is the clear winner, it comes at roughly a sixfold increase in run time over Evolved FSM 16 Noise 05.
mean  std  min  5%  25%  50%  75%  95%  max  

DBS  2.573  0.025  2.474  2.533  2.556  2.573  2.589  2.614  2.675 
Evolved ANN 5 Noise 05  2.534  0.025  2.418  2.492  2.517  2.534  2.551  2.575  2.629 
Evolved FSM 16 Noise 05  2.515  0.031  2.374  2.464  2.494  2.515  2.536  2.565  2.642 
Evolved ANN 5  2.410  0.030  2.273  2.359  2.389  2.410  2.430  2.459  2.536 
Evolved FSM 4  2.393  0.027  2.286  2.348  2.374  2.393  2.411  2.437  2.505 
Evolved HMM 5  2.392  0.026  2.289  2.348  2.374  2.392  2.409  2.435  2.493 
Level Punisher  2.388  0.025  2.281  2.347  2.372  2.389  2.405  2.429  2.503 
Omega TFT: 3, 8  2.387  0.026  2.270  2.344  2.370  2.388  2.405  2.430  2.498 
Spiteful Tit For Tat  2.383  0.030  2.259  2.334  2.363  2.383  2.403  2.432  2.517 
Evolved FSM 16  2.375  0.029  2.239  2.326  2.355  2.375  2.395  2.423  2.507 
PSO Gambler 2_2_2 Noise 05  2.371  0.029  2.250  2.323  2.352  2.371  2.390  2.418  2.480 
Adaptive  2.369  0.038  2.217  2.306  2.344  2.369  2.395  2.431  2.524 
Evolved ANN  2.365  0.022  2.270  2.329  2.351  2.366  2.380  2.401  2.483 
Math Constant Hunter  2.344  0.022  2.257  2.308  2.329  2.344  2.359  2.382  2.445 
Gradual  2.341  0.021  2.248  2.306  2.327  2.341  2.355  2.376  2.429 
Recalling Table 2, the strategies trained in the presence of noise are also among the best performers in the absence of noise. As shown in Figure 14 the cluster of mutually cooperative strategies is broken by the noise at 5%. A similar collection of players excels at winning matches but again they have a poor total payoff.
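The noise model used here, in which each intended action is flipped independently with probability 0.05, can be sketched as:

```python
import random

C, D = "C", "D"

def with_noise(action, noise=0.05, rng=random):
    """Flip the intended action with probability `noise`."""
    return (D if action == C else C) if rng.random() < noise else action

# Over many rounds, roughly 5% of actions are flipped.
rng = random.Random(0)
flipped = sum(with_noise(C, rng=rng) == D for _ in range(10_000))
print(flipped / 10_000)
```

Under this model even two mutually cooperative strategies occasionally see defections, which is why the mutual-cooperation cluster of Figure 9 breaks up in Figure 14.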
As shown in Table 7 and Figure 15, the strategies tallying the most wins are somewhat similar to those of the standard tournaments, with Defector, the handshaking CollectiveStrategy [36], and Aggravater appearing as the top three again.
mean  std  min  5%  25%  50%  75%  95%  max  

Aggravater  156.654  3.328  141  151.0  154.0  157.0  159.0  162.0  170 
CS  156.875  3.265  144  151.0  155.0  157.0  159.0  162.0  169 
Defector  157.324  3.262  144  152.0  155.0  157.0  160.0  163.0  170 
Grudger  155.590  3.303  143  150.0  153.0  156.0  158.0  161.0  168 
Retaliate 3: 0.05  155.382  3.306  141  150.0  153.0  155.0  158.0  161.0  169 
Retaliate 2: 0.08  155.365  3.320  140  150.0  153.0  155.0  158.0  161.0  169 
MEM2  155.052  3.349  140  149.0  153.0  155.0  157.0  160.0  169 
HTfT  155.298  3.344  141  150.0  153.0  155.0  158.0  161.0  168 
Retaliate: 0.1  155.370  3.314  139  150.0  153.0  155.0  158.0  161.0  168 
Spiteful Tit For Tat  155.030  3.326  133  150.0  153.0  155.0  157.0  160.0  167 
Punisher  153.281  3.375  140  148.0  151.0  153.0  156.0  159.0  167 
2TfT  152.823  3.429  138  147.0  151.0  153.0  155.0  158.0  165 
TF3  153.031  3.327  138  148.0  151.0  153.0  155.0  158.0  166 
Fool Me Once  152.817  3.344  138  147.0  151.0  153.0  155.0  158.0  166 
Predator  151.406  3.403  138  146.0  149.0  151.0  154.0  157.0  165 
As shown in Table 8, the top ranking strategies win a larger number of matches in the presence of noise. For example, Spiteful Tit For Tat [38] won almost all of its matches (167) in one tournament.
mean  std  min  5%  25%  50%  75%  95%  max  

DBS  102.545  3.671  87  97.0  100.0  103.0  105.0  109.0  118 
Evolved ANN 5 Noise 05  75.026  4.226  57  68.0  72.0  75.0  78.0  82.0  93 
Evolved FSM 16 Noise 05  88.699  3.864  74  82.0  86.0  89.0  91.0  95.0  104 
Evolved ANN 5  137.878  4.350  118  131.0  135.0  138.0  141.0  145.0  156 
Evolved FSM 4  74.250  2.694  64  70.0  72.0  74.0  76.0  79.0  85 
Evolved HMM 5  88.189  2.774  77  84.0  86.0  88.0  90.0  93.0  99 
Level Punisher  94.263  4.789  75  86.0  91.0  94.0  97.0  102.0  116 
Omega TFT: 3, 8  131.655  4.302  112  125.0  129.0  132.0  135.0  139.0  150 
Spiteful Tit For Tat  155.030  3.326  133  150.0  153.0  155.0  157.0  160.0  167 
Evolved FSM 16  103.288  3.631  89  97.0  101.0  103.0  106.0  109.0  118 
PSO Gambler 2_2_2 Noise 05  90.515  4.012  75  84.0  88.0  90.0  93.0  97.0  109 
Adaptive  101.898  4.899  83  94.0  99.0  102.0  105.0  110.0  124 
Evolved ANN  138.514  3.401  125  133.0  136.0  139.0  141.0  144.0  153 
Math Constant Hunter  93.010  3.254  79  88.0  91.0  93.0  95.0  98.0  107 
Gradual  101.899  2.870  91  97.0  100.0  102.0  104.0  107.0  114 
Finally, Table 9 and Figure 16 show the ranks (based on median score) of each strategy over the repeated tournaments. We see that the stochasticity of the ranks understandably increases relative to the standard tournament. The top three strategies are an exception: for example, the DBS strategy never ranks lower than third and ranks first in at least 75% of the repetitions. The two strategies trained for noisy tournaments rank in the top three 95% of the time.
mean  std  min  5%  25%  50%  75%  95%  max  

DBS  1.205  0.468  1  1.000  1.0  1.0  1.0  2.0  3 
Evolved ANN 5 Noise 05  2.184  0.629  1  1.000  2.0  2.0  3.0  3.0  5 
Evolved FSM 16 Noise 05  2.626  0.618  1  1.000  2.0  3.0  3.0  3.0  9 
Evolved ANN 5  6.371  2.786  2  4.000  4.0  5.0  8.0  12.0  31 
Evolved FSM 4  7.919  3.175  3  4.000  5.0  7.0  10.0  14.0  33 
Evolved HMM 5  7.996  3.110  3  4.000  6.0  7.0  10.0  14.0  26 
Level Punisher  8.337  3.083  3  4.000  6.0  8.0  10.0  14.0  26 
Omega TFT: 3, 8  8.510  3.249  3  4.000  6.0  8.0  11.0  14.0  32 
Spiteful Tit For Tat  9.159  3.772  3  4.000  6.0  9.0  12.0  16.0  40 
Evolved FSM 16  10.218  4.099  3  4.975  7.0  10.0  13.0  17.0  56 
PSO Gambler 2_2_2 Noise 05  10.760  4.102  3  5.000  8.0  10.0  13.0  18.0  47 
Evolved ANN  11.346  3.252  3  6.000  9.0  11.0  13.0  17.0  32 
Adaptive  11.420  5.739  3  4.000  7.0  11.0  14.0  21.0  63 
Math Constant Hunter  14.668  3.788  3  9.000  12.0  15.0  17.0  21.0  43 
Gradual  15.163  3.672  4  10.000  13.0  15.0  17.0  21.0  49 
Figure 17 shows the rate of cooperation in each round for the top three strategies (in the absence of noise), and just as for the top performing strategies in the standard tournament (Figure 12), it is evident that these strategies never defect first and learn to quickly punish poorer strategies.
4 Methods
The trained strategies (marked in Appendix A) were trained using reinforcement learning algorithms. The ideas of reinforcement learning can be attributed to the original work of [55], in which it was proposed that computers could learn by taking random actions, sampled from a distribution that favours actions with high rewards. Two particular algorithms were used here.
The Particle Swarm Algorithm is implemented using the pyswarm library: https://pypi.python.org/pypi/pyswarm. This algorithm was used only to train the Gambler archetype.
All other strategies were trained using evolutionary algorithms. The evolutionary algorithms used standard techniques, varying strategies by mutation and crossover, and evaluating the performance against each opponent for many repetitions. The best performing strategies in each generation are persisted, variants created, and objective functions computed again.
The default parameters for this procedure are:
- a population size of 40 individuals (kept constant across the generations);
- a mutation rate of 10%;
- 10 individuals kept from one generation to the next;
- a total of 500 generations.
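A generic loop matching these defaults can be sketched as follows; the bit-string genome and the toy objective stand in for a strategy encoding and the tournament payoff, which are far more expensive to evaluate.

```python
import random

def evolve(objective, genome_length, rng, population_size=40,
           mutation_rate=0.10, elite_size=10, generations=500):
    """Maximize `objective` over bit-string genomes by mutation,
    crossover, and elitism, using the default parameters listed above."""
    def mutate(genome):
        return [bit ^ 1 if rng.random() < mutation_rate else bit
                for bit in genome]

    def crossover(a, b):
        cut = rng.randrange(1, genome_length)
        return a[:cut] + b[cut:]

    population = [[rng.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=objective, reverse=True)
        elites = population[:elite_size]          # persisted unchanged
        children = [mutate(crossover(*rng.sample(elites, 2)))
                    for _ in range(population_size - elite_size)]
        population = elites + children
    return max(population, key=objective)

# Toy objective: number of 1-bits (a stand-in for mean tournament payoff).
rng = random.Random(0)
best = evolve(lambda g: sum(g), genome_length=16, rng=rng, generations=50)
```

With elitism the best genome found so far is never lost, so the objective value of the returned individual is non-decreasing across generations.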
All implementations of these algorithms are archived at [25]. This software is (similarly to the Axelrod library) available on GitHub: https://github.com/AxelrodPython/axelroddojo. There are objective functions for:
- total or mean payoff,
- total or mean payoff difference (unused in this work),
- total Moran process wins (fixation probability); this led to the strategies named TF1, TF2, TF3 listed in Appendix A.
5 Discussion
The tournament results indicate that trained strategies are generally better than human-designed strategies at maximizing payoff against a diverse set of opponents. An evolutionary algorithm produces strategies, based on multiple generic archetypes, that are able to achieve a higher average score than any other known opponent in a standard tournament. Most of the trained strategies use multiple rounds of the history of play (some using all of it) and outperform memory-one strategies from the literature. Interestingly, a trained memory-one strategy produced by a particle swarm algorithm performs well, better than human-designed strategies such as Win Stay Lose Shift and zero determinant strategies (which enforce a payoff difference rather than maximize total payoff). The generic structure of the trained strategies did not appear to be critical for the standard tournament: strategies based on lookup tables, finite state machines, neural networks, and stochastic variants all performed well. Single hidden layer neural networks (Section 2.3) performed well in both noisy and standard tournaments, though these had some aspect of human involvement in the selection of features. This is in line with the other strategies, where some human decisions are made regarding the structure. For the LookerUp and Gambler archetypes (Sections 2.1 and 2.2) a decision has to be made regarding the number of rounds of history and initial play that are to be used. In contrast, the finite state machines and hidden Markov models (Sections 2.4 and 2.5) required only a choice of the number of states, and the training algorithm can eliminate unneeded states in the case of finite state machines (evidenced by the unconnected nodes in the diagrams for the included representations).
Many strategies can be represented by multiple archetypes; however, some archetypes encode the patterns present in the data more efficiently than others. The fact that the LookerUp archetype does best in the standard tournament indicates that it represents an efficient reduction of dimension, which in turn makes its training more efficient. In particular, the first rounds of play are valuable bits of information. For the noisy tournament, however, the weaker performance of the lookup tables indicates that some features of the data are not captured by them, while they are by the neural networks and the finite state machines, allowing the latter to adapt better to the noisy environment. Intuitively, a noisy environment can significantly affect a lookup table based on the last two rounds of play, since these action pairs compete with probing defections, apologies, and retaliations. Accordingly, it is not surprising that additional parameter space is needed to adapt to a noisy environment.
In opposition to historical tournament results and community folklore, our results show that complex strategies can be very effective for the IPD. Designing complex strategies by hand appears to be difficult: of all the human-designed strategies in the library, only DBS consistently performs well, and it is substantially more complex than traditional tournament winners like TFT, OmegaTFT, and zero determinant strategies. Furthermore, dealing with noise is difficult for most strategies. Two strategies designed specifically to account for noise, DBS and OmegaTFT, perform well; of these, only DBS performs better than the trained strategies, and only in some noisy contexts. Empirically we find that DBS (with its default parameters) does not win tournaments at 1% noise. However, DBS has a parameter that accounts for the expected amount of noise, and a follow-up study with various noise levels could make a more complete study of the performance of DBS and strategies trained at various noise levels.
The strategies trained to maximize their average score are generally cooperative and do not defect first. Maximizing for individual performance across a collection of opponents leads to mutual cooperation despite the fact that mutual cooperation is an unstable evolutionary equilibrium for the prisoner’s dilemma. Specifically it is noted that the reinforcement learning process for maximizing payoff does not lead to exploitative zero determinant strategies, which may also be a result of the collection of training strategies, of which several retaliate harshly. Training with the objective of maximizing payoff difference may produce strategies more like zero determinant strategies.
For the trained strategies utilizing look up tables we generally found those that incorporate one or more of the initial rounds of play outperformed those that did not. The strategies based on neural networks and finite state machines also are able to condition throughout a match on the first rounds of play. Accordingly, we conclude that first impressions matter in the IPD. The best strategies are nice (never defecting first) and the impact of the first rounds of play could be further investigated with the Axelrod library in future work by e.g. forcing all strategies to defect on the first round.
Finally, we note that as the library grows, the top performing strategies sometimes shuffle, and are not retrained automatically. Most of the strategies were trained on an earlier version of the library (v2.2.0: [48]) that did not include DBS and several other opponents. The precise parameters that are optimal will depend on the pool of opponents. Moreover we have not extensively trained strategies to determine the minimum parameter spaces that are sufficient – neural networks with fewer nodes and features and finite state machines with fewer states may suffice. See [6] for discussion of resource availability for IPD strategies.
Acknowledgements
This work was performed using the computational facilities of the Advanced Research Computing @ Cardiff (ARCCA) Division, Cardiff University.
A variety of software libraries were used in this work, including Matplotlib [27], NumPy [60], and pandas [41].
References
 [1] Christoph Adami and Arend Hintze. Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything. Nature Communications, 4(1):2193, 2013.
 [2] Eckhart Arnold. Coopsim v0.9.9 beta 6. https://github.com/jecki/CoopSim/, 2015.
 [3] Daniel Ashlock. Training function stacks to play the iterated prisoner’s dilemma. In Computational Intelligence and Games, 2006 IEEE Symposium on, pages 111–118. IEEE, 2006.
 [4] Daniel Ashlock, Joseph Alexander Brown, and Philip Hingston. Multiple opponent optimization of prisoner’s dilemma playing agents. IEEE Transactions on Computational Intelligence and AI in Games, 7(1):53–65, 2015.
 [5] Daniel Ashlock and Eun-Youn Kim. Fingerprinting: Visualization and automatic analysis of prisoner's dilemma strategies. IEEE Transactions on Evolutionary Computation, 12(5):647–659, 2008.
 [6] Daniel Ashlock and Eun-Youn Kim. The impact of varying resources available to iterated prisoner's dilemma agents. In Foundations of Computational Intelligence (FOCI), 2013 IEEE Symposium on, pages 60–67. IEEE, 2013.
 [7] Wendy Ashlock and Daniel Ashlock. Changes in prisoner's dilemma strategies over evolutionary time with different population sizes. pages 1001–1008, 2006.
 [8] Wendy Ashlock and Daniel Ashlock. Changes in prisoner’s dilemma strategies over evolutionary time with different population sizes. In Evolutionary Computation, 2006. CEC 2006. IEEE Congress on, pages 297–304. IEEE, 2006.
 [9] Wendy Ashlock and Daniel Ashlock. Shaped prisoner’s dilemma automata. In Computational Intelligence and Games (CIG), 2014 IEEE Conference on, pages 1–8. IEEE, 2014.
 [10] Wendy Ashlock, Jeffrey Tsang, and Daniel Ashlock. The evolution of exploitation. In Foundations of Computational Intelligence (FOCI), 2014 IEEE Symposium on, pages 135–142. IEEE, 2014.
 [11] Tsz-Chiu Au and Dana Nau. Accident or intention: that is the question (in the noisy iterated prisoner's dilemma). In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, pages 561–568. ACM, 2006.
 [12] R. Axelrod. Effective Choice in the Prisoner’s Dilemma. Journal of Conflict Resolution, 24(1):3–25, 1980.
 [13] R. Axelrod. More Effective Choice in the Prisoner’s Dilemma. Journal of Conflict Resolution, 24(3):379–403, 1980.
 [14] Robert Axelrod. Effective choice in the prisoner’s dilemma. Journal of conflict resolution, 24(1):3–25, 1980.
 [15] Robert M Axelrod. The evolution of cooperation. Basic books, 2006.
 [16] Jeffrey S Banks and Rangarajan K Sundaram. Repeated games, finite automata, and complexity. Games and Economic Behavior, 2(2):97–117, 1990.
 [17] Lee-Ann Barlow and Daniel Ashlock. Varying decision inputs in prisoner's dilemma. In Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 2015 IEEE Conference on, pages 1–8. IEEE, 2015.
 [18] Bruno Beaufils, Jean-Paul Delahaye, and Philippe Mathieu. Our meeting with gradual, a good strategy for the iterated prisoner's dilemma. In Proceedings of the Fifth International Workshop on the Synthesis and Simulation of Living Systems, pages 202–209, 1997.
 [19] Jonathan Bendor, Roderick M Kramer, and Suzanne Stout. When in doubt…: Cooperation in a noisy prisoner’s dilemma. Journal of Conflict Resolution, 35(4):691–719, 1991.
 [20] Andre LC Carvalho, Honovan P Rocha, Felipe T Amaral, and Frederico G Guimaraes. Iterated prisoner's dilemma: An extended analysis. 2013.
 [21] David B Fogel. Evolving behaviors in the iterated prisoner’s dilemma. Evolutionary Computation, 1(1):77–97, 1993.
 [22] Nelis Franken and Andries Petrus Engelbrecht. Particle swarm optimization approaches to coevolve strategies for the iterated prisoner’s dilemma. IEEE Transactions on Evolutionary Computation, 9(6):562–579, 2005.
 [23] Marcus R Frean. The prisoner’s dilemma without synchrony. Proceedings of the Royal Society of London B: Biological Sciences, 257(1348):75–79, 1994.
 [24] Marco Gaudesi, Elio Piccolo, Giovanni Squillero, and Alberto Tonda. Exploiting evolutionary modeling to prevail in iterated prisoner’s dilemma tournaments. IEEE Transactions on Computational Intelligence and AI in Games, 8(3):288–300, 2016.
 [25] Marc Harper, Vince Knight, Martin Jones, and Georgios Koutsovoulos. Axelrod-python/axelrod-dojo: v0.0.2. https://doi.org/10.5281/zenodo.832282, July 2017.
 [26] Christian Hilbe, Martin A Nowak, and Arne Traulsen. Adaptive dynamics of extortion and compliance. PloS one, 8(11):e77886, 2013.
 [27] John D Hunter. Matplotlib: A 2d graphics environment. Computing In Science & Engineering, 9(3):90–95, 2007.
 [28] Muhammad Imran, Rathiah Hashim, and Noor Elaiza Abd Khalid. An overview of particle swarm optimization variants. Procedia Engineering, 53:491–496, 2013.
 [29] Graham Kendall, Xin Yao, and Siang Yew Chong. The iterated prisoners’ dilemma: 20 years on, volume 4. World Scientific, 2007.
 [30] Vincent Knight, Owen Campbell, Marc Harper, Karol Langner, James Campbell, Thomas Campbell, Alex Carney, Martin Chorley, Cameron DavidsonPilon, Kristian Glass, et al. An open framework for the reproducible study of the iterated prisoner’s dilemma. Journal of Open Research Software, 4(1), 2016.
 [31] Vincent Knight and Marc Harper. Data for: Reinforcement Learning Produces Dominant Strategies for the Iterated Prisoner’s Dilemma. https://doi.org/10.5281/zenodo.832287, July 2017.
 [32] David Kraines and Vivian Kraines. Pavlov and the prisoner’s dilemma. Theory and decision, 26(1):47–79, 1989.
 [33] Steven Kuhn. Prisoner’s dilemma. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2017 edition, 2017.
 [34] Jiawei Li. How to design a strategy to win an ipd tournament. The iterated prisoner’s dilemma, 20:89–104, 2007.
 [35] Jiawei Li, Philip Hingston, and Graham Kendall. Engineering design of strategies for winning iterated prisoner's dilemma competitions. IEEE Transactions on Computational Intelligence and AI in Games, 3(4):348–360, 2011.
 [36] Jiawei Li and Graham Kendall. A strategy with novel evolutionary features for the iterated prisoner’s dilemma. Evolutionary Computation, 17(2):257–274, 2009.
 [37] Jiawei Li and Graham Kendall. The effect of memory size on the evolutionary stability of strategies in iterated prisoner's dilemma. X(X):1–8, 2014.
 [38] LIFL. Prison. http://www.lifl.fr/IPD/ipd.frame.html, 2008.
 [39] Robert E Marks. Niche strategies: the prisoner's dilemma computer tournaments revisited. Journal of Evolutionary Economics, 1989.
 [40] Philippe Mathieu and Jean-Paul Delahaye. New winning strategies for the iterated prisoner's dilemma (extended abstract). 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), pages 1665–1666, 2015.
 [41] Wes McKinney et al. Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference, volume 445, pages 51–56. van der Voort S, Millman J, 2010.
 [42] Shashi Mittal and Kalyanmoy Deb. Optimal strategies of the iterated prisoner’s dilemma problem for multiple conflicting objectives. IEEE Transactions on Evolutionary Computation, 13(3):554–565, 2009.
 [43] David E Moriarty, Alan C Schultz, and John J Grefenstette. Evolutionary algorithms for reinforcement learning. J. Artif. Intell. Res.(JAIR), 11:241–276, 1999.
 [44] John H Nachbar. Evolution in the finitely repeated prisoner’s dilemma. Journal of Economic Behavior & Organization, 19(3):307–326, 1992.
 [45] M Nowak and K Sigmund. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma game. Nature, 364(6432):56–58, 1993.
 [46] Martin Nowak and Karl Sigmund. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the prisoner's dilemma game. Nature, 364(6432):56, 1993.
 [47] William H Press and Freeman J Dyson. Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent. Proceedings of the National Academy of Sciences of the United States of America, 109(26):10409–13, 2012.
 [48] The Axelrod project developers. Axelrod-python/axelrod: v2.2.0. https://doi.org/10.5281/zenodo.211828, December 2016.
 [49] The Axelrod project developers. Axelrod-python/axelrod: v2.13.0. https://doi.org/10.5281/zenodo.801749, June 2017.
 [50] Arthur J Robson. Efficiency in evolutionary games: Darwin, nash and the secret handshake. Journal of theoretical Biology, 144(3):379–396, 1990.
 [51] Wolfgang Slany and Wolfgang Kienreich. On some winning strategies for the iterated prisoner's dilemma, or, Mr. Nice Guy and the Cosa Nostra. The Iterated Prisoners' Dilemma: 20 Years on, 4:171, 2007.
 [52] D W Stephens, C M McLinn, and J R Stevens. Discounting and reciprocity in an Iterated Prisoner’s Dilemma. Science (New York, N.Y.), 298(5601):2216–2218, 2002.
 [53] Alexander J. Stewart and Joshua B. Plotkin. Extortion and cooperation in the prisoner’s dilemma. Proceedings of the National Academy of Sciences, 109(26):10134–10135, 2012.
 [54] Takahiko Sudo, Kazushi Goto, Yusuke Nojima, and Hisao Ishibuchi. Effects of ensemble action selection with different usage of player’s memory resource on the evolution of cooperative strategies for iterated prisoner’s dilemma game. In Evolutionary Computation (CEC), 2015 IEEE Congress on, pages 1505–1512. IEEE, 2015.
 [55] Alan M Turing. Computing machinery and intelligence. Mind, 59(236):433–460, 1950.
 [56] E Tzafestas. Toward adaptive cooperative behavior. From Animals to Animats: Proceedings of the 6th International Conference on the Simulation of Adaptive Behavior (SAB-2000), 2:334–340, 2000.
 [57] Unknown. www.prisonersdilemma.com. http://www.prisonersdilemma.com/, 2017.
 [58] Pieter van den Berg and Franz J Weissing. The importance of mechanisms for the evolution of cooperation. In Proc. R. Soc. B, volume 282, page 20151382. The Royal Society, 2015.
 [59] Vassilis Vassiliades and Chris Christodoulou. Multiagent reinforcement learning in the iterated prisoner’s dilemma: fast cooperation through evolved payoffs. In Neural Networks (IJCNN), The 2010 International Joint Conference on, pages 1–8. IEEE, 2010.
 [60] Stéfan van der Walt, S Chris Colbert, and Gael Varoquaux. The numpy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2):22–30, 2011.
 [61] Jianzhong Wu and Robert Axelrod. How to cope with noise in the iterated prisoner’s dilemma. Journal of Conflict resolution, 39(1):183–189, 1995.
Appendix A List of players
The players used for this study are from Axelrod version 2.13.0 [49].

 Deterministic  Memory depth: ∞. [49]

 Deterministic  Memory depth: ∞. [49]

 Deterministic  Memory depth: ∞. [49]

ALLCorALLD  Stochastic  Memory depth: 1. [49]

Adaptive  Deterministic  Memory depth: ∞. [35]

Adaptive Pavlov 2006  Deterministic  Memory depth: ∞. [29]

Adaptive Pavlov 2011  Deterministic  Memory depth: ∞. [35]

Adaptive Tit For Tat: 0.5  Deterministic  Memory depth: ∞. [56]

Aggravater  Deterministic  Memory depth: ∞. [49]

Alternator Hunter  Deterministic  Memory depth: ∞. [49]

Anti Tit For Tat  Deterministic  Memory depth: 1. [26]

AntiCycler  Deterministic  Memory depth: ∞. [49]

Appeaser  Deterministic  Memory depth: ∞. [49]

Arrogant QLearner  Stochastic  Memory depth: ∞. [49]

Average Copier  Stochastic  Memory depth: ∞. [49]

Better and Better  Stochastic  Memory depth: ∞. [38]

Bully  Deterministic  Memory depth: 1. [44]

Calculator  Stochastic  Memory depth: ∞. [38]

Cautious QLearner  Stochastic  Memory depth: ∞. [49]

CollectiveStrategy (CS)  Deterministic  Memory depth: ∞. [36]

Contrite Tit For Tat (CTfT)  Deterministic  Memory depth: 3. [61]

Cooperator Hunter  Deterministic  Memory depth: ∞. [49]

Cycle Hunter  Deterministic  Memory depth: ∞. [49]

Cycler CCCCCD  Deterministic  Memory depth: 5. [49]

Cycler CCCD  Deterministic  Memory depth: 3. [49]

Cycler CCCDCD  Deterministic  Memory depth: 5. [49]

Cycler CCD  Deterministic  Memory depth: 2. [42]

Cycler DC  Deterministic  Memory depth: 1. [49]

Cycler DDC  Deterministic  Memory depth: 2. [42]

DBS: 0.75, 3, 4, 3, 5  Deterministic  Memory depth: ∞. [11]

Davis: 10  Deterministic  Memory depth: ∞. [14]

Defector Hunter  Deterministic  Memory depth: ∞. [49]

Desperate  Stochastic  Memory depth: 1. [58]

DoubleResurrection  Deterministic  Memory depth: 5. [2]

Doubler  Deterministic  Memory depth: ∞. [38]

Dynamic Two Tits For Tat  Stochastic  Memory depth: 2. [49]

Eatherley  Stochastic  Memory depth: ∞. [13]

Eventual Cycle Hunter  Deterministic  Memory depth: ∞. [49]

Evolved ANN  Deterministic  Memory depth: ∞. [49]

Evolved ANN 5  Deterministic  Memory depth: ∞. [49]

Evolved ANN 5 Noise 05  Deterministic  Memory depth: ∞. [49]

Evolved FSM 16  Deterministic  Memory depth: 16. [49]

Evolved FSM 16 Noise 05  Deterministic  Memory depth: 16. [49]

Evolved FSM 4  Deterministic  Memory depth: 4. [49]

Evolved HMM 5  Stochastic  Memory depth: 5. [49]

EvolvedLookerUp1_1_1  Deterministic  Memory depth: ∞. [49]

EvolvedLookerUp2_2_2  Deterministic  Memory depth: ∞. [49]

Feld: 1.0, 0.5, 200  Stochastic  Memory depth: 200. [14]

Firm But Fair  Stochastic  Memory depth: 1. [23]

Fool Me Forever  Deterministic  Memory depth: ∞. [49]

Fool Me Once  Deterministic  Memory depth: ∞. [49]

Forgetful Fool Me Once: 0.05  Stochastic  Memory depth: ∞. [49]

Forgetful Grudger  Deterministic  Memory depth: 10. [49]

Forgiver  Deterministic  Memory depth: ∞. [49]

Forgiving Tit For Tat (FTfT)  Deterministic  Memory depth: ∞. [49]

Fortress3  Deterministic  Memory depth: 3. [8]

Fortress4  Deterministic  Memory depth: 4. [8]

GTFT: 0.1  Stochastic  Memory depth: 1.

GTFT: 0.3  Stochastic  Memory depth: 1.

GTFT: 0.7  Stochastic  Memory depth: 1.

GTFT: 0.9  Stochastic  Memory depth: 1.

General Soft Grudger: n=1,d=4,c=2  Deterministic  Memory depth: ∞. [49]

Gradual  Deterministic  Memory depth: ∞. [18]

Gradual Killer: (’D’, ’D’, ’D’, ’D’, ’D’, ’C’, ’C’)  Deterministic  Memory depth: ∞. [38]

Grofman  Stochastic  Memory depth: ∞. [14]

GrudgerAlternator  Deterministic  Memory depth: ∞. [38]

Grumpy: Nice, 10, 10  Deterministic  Memory depth: ∞. [49]

Handshake  Deterministic  Memory depth: ∞. [50]

Hard Go By Majority  Deterministic  Memory depth: ∞. [42]

Hard Go By Majority: 10  Deterministic  Memory depth: 10. [49]

Hard Go By Majority: 20  Deterministic  Memory depth: 20. [49]

Hard Go By Majority: 40  Deterministic  Memory depth: 40. [49]

Hard Go By Majority: 5  Deterministic  Memory depth: 5. [49]

Hard Prober  Deterministic  Memory depth: ∞. [38]

Hard Tit For 2 Tats (HTf2T)  Deterministic  Memory depth: 3. [53]

Hard Tit For Tat (HTfT)  Deterministic  Memory depth: 3. [57]

Hesitant QLearner  Stochastic  Memory depth: ∞. [49]

Hopeless  Stochastic  Memory depth: 1. [58]

Inverse  Stochastic  Memory depth: ∞. [49]

Inverse Punisher  Deterministic  Memory depth: ∞. [49]

Level Punisher  Deterministic  Memory depth: ∞. [2]

Limited Retaliate 2: 0.08, 15  Deterministic  Memory depth: ∞. [49]

Limited Retaliate 3: 0.05, 20  Deterministic  Memory depth: ∞. [49]

Limited Retaliate: 0.1, 20  Deterministic  Memory depth: ∞. [49]

MEM2  Deterministic  Memory depth: ∞. [37]

Math Constant Hunter  Deterministic  Memory depth: ∞. [49]

Meta Hunter Aggressive: 7 players  Deterministic  Memory depth: ∞. [49]

Meta Hunter: 6 players  Deterministic  Memory depth: ∞. [49]

Meta Mixer: 173 players  Stochastic  Memory depth: ∞. [49]

Naive Prober: 0.1  Stochastic  Memory depth: 1. [35]

Negation  Stochastic  Memory depth: 1. [57]

Nice Average Copier  Stochastic  Memory depth: ∞. [49]

Nydegger  Deterministic  Memory depth: 3. [14]

Omega TFT: 3, 8  Deterministic  Memory depth: ∞. [29]

Once Bitten  Deterministic  Memory depth: 12. [49]

Opposite Grudger  Deterministic  Memory depth: ∞. [49]

PSO Gambler 1_1_1  Stochastic  Memory depth: ∞. [49]

PSO Gambler 2_2_2  Stochastic  Memory depth: ∞. [49]

PSO Gambler 2_2_2 Noise 05  Stochastic  Memory depth: ∞. [49]

PSO Gambler Mem1  Stochastic  Memory depth: 1. [49]

Predator  Deterministic  Memory depth: 9. [8]

Prober  Deterministic  Memory depth: ∞. [35]

Prober 2  Deterministic  Memory depth: ∞. [38]

Prober 3  Deterministic  Memory depth: ∞. [38]

Prober 4  Deterministic  Memory depth: ∞. [38]

Pun1  Deterministic  Memory depth: 2. [7]

Punisher  Deterministic  Memory depth: ∞. [49]

Raider  Deterministic  Memory depth: 3. [10]

Random Hunter  Deterministic  Memory depth: ∞. [49]

Random: 0.1  Stochastic  Memory depth: 0.

Random: 0.3  Stochastic  Memory depth: 0.

Random: 0.7  Stochastic  Memory depth: 0.

Random: 0.9  Stochastic  Memory depth: 0.

Remorseful Prober: 0.1  Stochastic  Memory depth: 2. [35]

Resurrection  Deterministic  Memory depth: 5. [2]

Retaliate 2: 0.08  Deterministic  Memory depth: ∞. [49]

Retaliate 3: 0.05  Deterministic  Memory depth: ∞. [49]

Retaliate: 0.1  Deterministic  Memory depth: ∞. [49]

Revised Downing: True  Deterministic  Memory depth: ∞. [14]

Ripoff  Deterministic  Memory depth: 2. [5]

Risky QLearner  Stochastic  Memory depth: ∞. [49]

SelfSteem  Stochastic  Memory depth: ∞. [20]

ShortMem  Deterministic  Memory depth: 10. [20]

Shubik  Deterministic  Memory depth: ∞. [14]

Slow Tit For Two Tats  Deterministic  Memory depth: 2. [49]

Slow Tit For Two Tats 2  Deterministic  Memory depth: 2. [38]

Sneaky Tit For Tat  Deterministic  Memory depth: ∞. [49]

Soft Go By Majority: 10  Deterministic  Memory depth: 10. [49]

Soft Go By Majority: 20  Deterministic  Memory depth: 20. [49]

Soft Go By Majority: 40  Deterministic  Memory depth: 40. [49]

Soft Go By Majority: 5  Deterministic  Memory depth: 5. [49]

Soft Grudger  Deterministic  Memory depth: 6. [35]

Soft Joss: 0.9  Stochastic  Memory depth: 1. [38]

SolutionB1  Deterministic  Memory depth: 3. [4]

SolutionB5  Deterministic  Memory depth: 5. [4]

Spiteful Tit For Tat  Deterministic  Memory depth: ∞. [38]

Stochastic Cooperator  Stochastic  Memory depth: 1. [1]

Stochastic WSLS: 0.05  Stochastic  Memory depth: 1. [49]

TF1  Deterministic  Memory depth: ∞. [49]

TF2  Deterministic  Memory depth: ∞. [49]

TF3  Deterministic  Memory depth: ∞. [49]

Tester  Deterministic  Memory depth: ∞. [13]

ThueMorse  Deterministic  Memory depth: ∞. [49]

ThueMorseInverse  Deterministic  Memory depth: ∞. [49]

Thumper  Deterministic  Memory depth: 2. [5]

Tit For 2 Tats (Tf2T)  Deterministic  Memory depth: 2. [15]

Tit For Tat (TfT)  Deterministic  Memory depth: 1. [14]

Tricky Cooperator  Deterministic  Memory depth: 10. [49]

Tricky Defector  Deterministic  Memory depth: ∞. [49]

Tullock: 11  Stochastic  Memory depth: 11. [14]

Two Tits For Tat (2TfT)  Deterministic  Memory depth: 2. [15]

VeryBad  Deterministic  Memory depth: ∞. [20]

Willing  Stochastic  Memory depth: 1. [58]

Win-Shift Lose-Stay: D (WShLSt)  Deterministic  Memory depth: 1. [35]

Winner12  Deterministic  Memory depth: 2. [40]

Winner21  Deterministic  Memory depth: 2. [40]

Worse and Worse  Stochastic  Memory depth: ∞. [38]

Worse and Worse 2  Stochastic  Memory depth: ∞. [38]

Worse and Worse 3  Stochastic  Memory depth: ∞. [38]

ZD-Extort-2 v2: 0.125, 0.5, 1  Stochastic  Memory depth: 1. [33]

ZD-Extort-2: 0.1111111111111111, 0.5  Stochastic  Memory depth: 1. [53]

ZD-Extort-4: 0.23529411764705882, 0.25, 1  Stochastic  Memory depth: 1. [49]

ZD-GEN-2: 0.125, 0.5, 3  Stochastic  Memory depth: 1. [33]

ZD-GTFT-2: 0.25, 0.5  Stochastic  Memory depth: 1. [53]

ZD-SET-2: 0.25, 0.0, 2  Stochastic  Memory depth: 1. [33]