“Turn state evidence so we can lock up your partner for ten years and you’ll get off scot-free.” But there’s a hitch:
“If you both confess, we’ll lock you both up for six years.”
Czerniawska uses the model in her slide show, Storm Clouds Ahead, here.
The consultancy equivalent of the prisoner’s dilemma gives four choices:
- Neither firm A nor firm B acts, so everyone loses
- Firm A acts but firm B does not, so firm A damages its own reputation and firm B freeloads on firm A’s effort
- Firm B acts but firm A does not, so firm B damages its own reputation and firm A freeloads on firm B’s effort
- Both firm A and firm B act, so everyone wins
The two parties in the dilemma could be either:
- Client and consultant working together, or
- Two consultancy firms working on one client project
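To make the four outcomes above concrete, here is a minimal payoff sketch in Python. The numbers are illustrative assumptions of my own, not figures from Czerniawska; they only preserve the ordering that makes this a prisoner’s dilemma (freeloading pays best, mutual action next, mutual inaction next, acting alone worst).

```python
# A minimal sketch of the consultancy dilemma's payoff structure.
# The numbers are illustrative assumptions, not figures from the slide show;
# they just keep the ordering: freeload > both act > neither acts > act alone.

ACT, FREELOAD = "act", "freeload"

# (firm A's move, firm B's move) -> (payoff to A, payoff to B)
PAYOFFS = {
    (FREELOAD, FREELOAD): (1, 1),  # neither firm acts, so everyone loses
    (ACT,      FREELOAD): (0, 5),  # A damages its own reputation, B freeloads
    (FREELOAD, ACT):      (5, 0),  # B damages its own reputation, A freeloads
    (ACT,      ACT):      (3, 3),  # both firms act, so everyone wins
}

for (move_a, move_b), (pay_a, pay_b) in PAYOFFS.items():
    print(f"A {move_a:>8}, B {move_b:>8} -> A gets {pay_a}, B gets {pay_b}")
```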
However, it’s an overrated theory. To begin with, the prisoners aren’t working blind: there’s nothing to stop them conferring beforehand and agreeing to cooperate. But even if they did meet and agree to cooperate, they could still stab each other in the back afterwards.
In a repeated context, theory and computer models show that collaboration works, which is what Robert Axelrod wrote about in his book The Evolution of Cooperation. He represented cooperative situations using the iterated prisoner’s dilemma. In the one-shot prisoner’s dilemma the prisoners face their dilemma just once, whereas in real life dilemmas tend to repeat, so I might realise that my cooperation today could be an incentive for your cooperation tomorrow. To test this, Axelrod invited game theory experts to submit computer programs to play the iterated prisoner’s dilemma against each other. The winner was tit-for-tat: the strategy of starting with cooperation and thereafter doing whatever the other player did on the previous move. When the game is repeated indefinitely, with no predetermined number of iterations, this strategy tends towards cooperation and therefore a win-win outcome. So it sounds like a suitable theory to model the consultant-client relationships that I’m looking at.
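As a small sanity check on that claim, here is a sketch of the iterated prisoner’s dilemma in Python — my own illustration, not Axelrod’s tournament code — pitting tit-for-tat against itself and against a player who always defects, using commonly quoted illustrative payoffs:

```python
# A small sketch of the iterated prisoner's dilemma: tit-for-tat versus
# an always-defect player. Illustrative payoffs, not Axelrod's actual code.

COOPERATE, DEFECT = "C", "D"

# (my move, their move) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=200):
    """Return the total scores of two strategies over a number of rounds."""
    history_a, history_b = [], []   # each records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # settles into mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # tit-for-tat only loses the first round: (199, 204)
```

Against another tit-for-tat player the game settles into permanent cooperation, while against a permanent defector tit-for-tat gives away only the opening round before retaliating.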
Unfortunately, if the prisoners know how many iterations there will be, say five, then on the fifth iteration there’s no incentive to cooperate, because you know it’s the end of the game. And in that case, why cooperate on the fourth iteration if you plan not to cooperate on the last? Or, for that matter, why cooperate on the iteration before that? This is called backward induction, and it forces mutual defection. So the prisoner’s dilemma model has a number of flaws.
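To see how that unravelling works, here is a minimal backward-induction sketch, again my own illustration with assumed payoffs: because defecting is the best reply in each round whatever the opponent does, and equilibrium play in the remaining rounds doesn’t depend on what happens now, the argument propagates from the known last round all the way back to the first.

```python
# A minimal backward-induction sketch for a fixed, known number of rounds.
# Illustrative payoffs; my own example, not from the post.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def backward_induction(rounds):
    """Work out equilibrium play from the last round back to the first."""
    plan = []
    future = 0  # equilibrium value of the rounds already solved
    for _ in range(rounds):  # start at the final round, then unwind
        # My best reply this round, for either move the opponent might make.
        # The continuation value is the same either way, so defection
        # dominates exactly as it does in the one-shot game.
        best_vs_c = max("CD", key=lambda me: PAYOFF[(me, "C")] + future)
        best_vs_d = max("CD", key=lambda me: PAYOFF[(me, "D")] + future)
        assert best_vs_c == best_vs_d == "D"
        plan.append("D")
        future += PAYOFF[("D", "D")]  # both sides reason the same way
    return list(reversed(plan))

print(backward_induction(5))  # ['D', 'D', 'D', 'D', 'D'] -- cooperation never starts
```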
A possibly better model is the stag hunt. Rousseau proposed this game, which involves two or more hunters collaborating to chase down a stag. One person alone cannot catch the stag; it requires collaboration. However, any individual could go off and do their own thing and hunt rabbits instead, settling for a smaller but more certain prize.
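For comparison, here is the same kind of payoff sketch for a two-player stag hunt, again with assumed illustrative numbers. The key difference from the prisoner’s dilemma is that mutual cooperation is now a stable outcome: if I expect you to hunt the stag, my best reply is to join you; hunting rabbits is only the safer choice if I doubt you’ll turn up.

```python
# A two-player stag hunt with illustrative payoffs (my own numbers).
# Unlike the prisoner's dilemma, hunting the stag together is stable:
# neither hunter gains by switching to rabbits on their own.

STAG, RABBIT = "stag", "rabbit"

# (my choice, their choice) -> my payoff
PAYOFF = {
    (STAG,   STAG):   4,  # the stag needs both of us; the biggest prize
    (STAG,   RABBIT): 0,  # I wait for help that never comes
    (RABBIT, STAG):   2,  # I catch rabbits alone; smaller but certain
    (RABBIT, RABBIT): 2,  # we both settle for rabbits
}

def best_reply(their_choice):
    """My payoff-maximising choice given what I expect the other hunter to do."""
    return max((STAG, RABBIT), key=lambda mine: PAYOFF[(mine, their_choice)])

print(best_reply(STAG))    # 'stag'   -> cooperation is self-reinforcing
print(best_reply(RABBIT))  # 'rabbit' -> but so is going it alone
```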
Perhaps I should apply this model to clients and consultants on collaborative IT projects.