
The Return of Karl Popper: Is Social Science Really Different Than Natural Science?

Social scientists have contended for much of the last century that we cannot approach the study of human behavior with the same tools we would use to study the natural world.  This is hogwash.  And Karl Popper, the great 20th-century philosopher, would agree with me.  Humans are animals: they are made up of chemicals and cells, their behavior is determined by a complex interaction of chemical processes, and their lives are a network of cause-and-effect relations with other animals (some of which we’d call human).  If we are ever going to get a solid grasp on our own behavior, we’ll need the items from the large and well-developed toolbox of natural science.

Falsifiability and Objective Reality

Karl Popper believed that a theory is scientific if and only if it is falsifiable.  Most scientists would agree with this statement, and in fact would be shocked by anyone who didn’t.  (This may be why natural scientists and social scientists don’t tend to hang out together!)  But, falsifiability presupposes a belief in an Enlightenment-style “objective” reality beyond that of our own minds.  In much of social science, especially in sociology and psychology, there is a powerful belief in a post-modern relativism that rejects an objective reality. That is to say, they don’t believe that there is a truth that is inherent to the objects of study themselves.

Without a belief in an objective truth outside the mind of the observer, it is impossible even to discuss what it would mean to falsify a statement.  Therefore, you cannot use Popper’s falsifiability criterion as a demarcation line for what is and isn’t science.  And so all sorts of unfalsifiable statements and theories get let in, because who is to say what is and is not objectively true?  This is not good science.  But, it defines much of what passes for science in the world of social inquiry.

Now, it must be said that the truth is likely somewhere in the middle.  But, most relativists base their relativism on what I’d consider a false understanding of some basic ideas.  One of the more common things I hear from a hardcore relativist who wants to impress me (knowing that I’m a mathematician) is an appeal to quantum physics.  It usually goes something like this: “Hey, man, you know that every time we observe a particle we change its state.  So, everything is relative.  Our presence changes what we observe.  There is a reflexivity between us and the object.  There is no way to know what’s what if every time we observe something, that something changes.  Reality can be and is manipulated.”

OK, true.  But, it’s missing the point.  When we say we change a particle’s state, we aren’t saying the particle didn’t have a state to begin with.  It was simply a state we can’t directly observe.  That isn’t relativism in the strictest sense.  Sure, if I observe it and you observe it, we’ll see different things, but that doesn’t mean the particle didn’t have an objective state before we each changed it.  True relativism would hold that the particle has no state until someone observes it.  But, that isn’t how it is.

Think of a particle as a coin.  Suppose I spin that coin on a table.  While it is spinning, is it heads or tails?  You might say neither, or, more accurately, you could say both.  It’s 50% heads and 50% tails.  That’s its objective state.  The trouble is, suppose we can’t see a coin spinning.  Suppose we can only see coins when they are heads or tails with 100% probability – that is, when they are flat on the table.  Then if I want to observe the coin, I have to slam my hand down on top of it to stop it from spinning.  Say it lands heads up.

When I leave, suppose it pops back up and begins spinning again.  Then you come by and slam your hand down on it.  Now it lands on tails.  We’ve each seen the same coin in a different state.  I claim the coin is heads, you claim the coin is tails, but neither of us realizes that it is both.
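To make the analogy concrete, here is a toy sketch in Python.  The state representation and function names are mine, purely for illustration:

```python
import random

# A toy model of the spinning-coin analogy: the coin has an objective
# state (a 50/50 mixture), but each observation collapses it to a single
# definite face. Two observers can get different answers without the
# underlying state being "relative".

def observe(state):
    """Collapse a probabilistic state {'heads': p, 'tails': 1 - p} to one face."""
    return 'heads' if random.random() < state['heads'] else 'tails'

coin = {'heads': 0.5, 'tails': 0.5}  # the coin's objective (mixed) state

print(observe(coin))  # maybe 'heads'
print(observe(coin))  # maybe 'tails' -- a different answer, same coin

# Averaging many observations recovers the objective 50/50 description,
# even though no single observation ever reveals it.
samples = [observe(coin) for _ in range(10_000)]
print(samples.count('heads') / len(samples))  # ~0.5
```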

Many things in the natural world are of this form.  That doesn’t mean we can’t study them objectively, and infer what we can, correct for our own “observation” errors, etc.  Physics is going just fine with far crazier objects than humans could ever hope to be.  There is no reason we have to pretend that studying human behavior is somehow so much harder than studying particle physics.

Popper’s 3 Worlds

There is no doubt that the reality outside our minds is not always in line with the perceived reality we hold within our minds.  Many natural scientists, in light of this, are Cartesian dualists without even knowing it.  That is, they accept that there are two worlds: the world of material things that we study, and the world of the human mind.  They accept that these don’t always jibe, and that each one influences the other.  But, keeping them separate, at least heuristically, is seen as useful.

In social science there is a stronger emphasis on the effects of culture on the mind.  And this is seen as a feedback loop from the mind to itself.

Karl Popper goes one step further with his heuristic and suggests that the universe is really made up of three worlds: the objective world of objects; the world of the mind; and the world of human-created ideas as manifested in books, paintings, blogs, etc.  The third world includes culture and so brings it out of the second world, thus stripping it of some of its recursive properties.

What I like about the idea of the third world is that it allows for this world to undergo its own evolution (the way the other two obviously do).  And because of this we can study it (largely) independently, using the tools of evolutionary research such as game theory.

One More Time

I contend that Social Science is a proper subset of Natural Science – in fact, a subset of Biology, most specifically Human Biology.  To say that we cannot study Social Science with the same tools we use in Natural Science because we are ourselves of the type we are studying is an act of strange logic.  If we follow that strange logic further, then we shouldn’t study any animals the way we study the natural world, because we are animals.  We shouldn’t study chemistry as we study the natural world, because we are made up of molecules.  And we shouldn’t study physics the way we study the natural world, because we are nothing more than a collection of atoms.

There is always some sense of recursion in any attempt we make to study the natural world.  Our brain is made up of cells which use electricity to fire information back and forth.  Whenever we are trying to understand electricity, we are using electricity to understand it!

Popper dealt with this problem in what I consider a most reasonable way, the Popeye way:  Science is what it is, and that’s all that it is.  Scientific statements can only be falsified, not verified.   And the world (including the world of human interaction) has an objective component that must be sought after as truth in its own right.  It isn’t perfect, but it gets the job done.

The only way we’ll ever make headway into the realm of human behavior is if we are comfortable approaching the human animal the way we approach the study of all animals – with science.


Parochial Altruism and War: A Game Theoretic Analysis

[Image: North America during the Pleistocene]

War, what is it good for?  Apparently, altruism.  In a paper published in Science, Samuel Bowles and Jung-Kyoo Choi took a game-theoretic approach to studying the evolutionary roots of both altruism and parochialism.  They concluded that neither would likely have evolved alone; instead they co-evolved, together forming a powerful combination in the survival kit of our Pleistocene and early Holocene ancestors.

Abstract

Altruism–benefiting fellow group members at a cost to oneself–and parochialism–hostility toward individuals not of one’s own ethnic, racial, or other group–are common human behaviors.  The intersection of the two–which we term “parochial altruism”–is puzzling from an evolutionary perspective because altruistic or parochial behavior reduces one’s payoffs by comparison to what one would gain by eschewing these behaviors.  But parochial altruism could have evolved if parochialism promoted intergroup hostilities and the combination of altruism and parochialism contributed to success in these conflicts.  Our game-theoretic analysis and agent-based simulations show that under conditions likely to have been experienced by late Pleistocene and early Holocene humans, neither parochialism nor altruism would have been viable singly, but by promoting group conflict, they could have evolved jointly.

Background

Even Darwin noted that war was a powerful tool “used” by evolution to increase altruism and solidarity toward one’s own group members.  But, two major questions have lingered.

  1. What is the process by which war became common enough to support the evolution of altruism in this context?
  2. What is the likelihood that altruism itself (conditioned on group membership) contributed to the high levels of lethal intergroup conflict among humans?

Neither of these questions had been analyzed thoroughly enough, which was one of the reasons the authors undertook their study.  As they put it:

The empirical importance of both altruism and hostility to members of other groups is well established.  Experimental and other evidence demonstrates that individuals often willingly give to strangers, reward good deeds, and punish individuals who violate social norms, even at a substantial personal cost (4), while favoring fellow group members over “outsiders” in the choice of friends, exchange partners, and other associates and in the allocation of valued resources (5).

They cite an example from Papua New Guinea: there exists strong favoritism toward one’s own linguistic group in giving to others, and a higher tendency to punish those from different linguistic groups.

They use the term Parochial Altruism in reference to a person who engages in hostile and aggressive behavior toward another group: this person incurs a mortal risk, and therefore a fitness loss, versus those who refrain from such aggression.

Given that Parochial Altruism exists, and assuming that neither Parochialism nor Altruism alone would have survived a selection process that favored some other trait with higher payoffs, how DID Parochial Altruism evolve?

A Solution

One possibility is that, since our ancestors lived in a hostile environment where resources were scarce, Parochial Altruism could have evolved and thrived because groups with high numbers of Parochial Altruists would have been more able to engage in aggressive action and “win” on behalf of their groups.

The two most important correlates of tribal warfare are natural disasters and resource scarcity.  The Pleistocene and early Holocene (roughly 125,000 to 10,000 years ago) are known to have been times of substantial volatility.  They also coincide with the most significant periods of human evolution.

Could Parochial Altruism have evolved in such a climate?

The Game

Bowles and Choi’s model consists of four types of players.

  1. PA:  Parochial Altruists
  2. TA: Tolerant Altruists
  3. PN: Parochial Non-Altruists
  4. TN: Tolerant Non-Altruists

Note that Parochials of both types are hostile toward other groups.  But, ONLY Parochial Altruists will engage in combat.  This is because PN’s won’t risk death for the benefit of others.

Their model has two types of selection acting at once: intra-group selection, which favors TN’s and tends to eliminate PA’s; and inter-group selection, which favors PA’s via selective extinction.

In a purely risk-vs.-reward scenario, it makes little sense to be a PA.  While there exist two benefits to winning a war (namely: 1. a greater chance of future survival; 2. the opportunity to reproduce, thereby replacing those PA’s lost in war), the mortality risk incurred by war “offsets this direct benefit by a wide margin.”  Therefore, each PA would be better off adopting a different strategy in terms of their own reproductive fitness.  This confirms that PA’s are, indeed, altruistic according to the traditional meaning of the term.

3 Stage Game

The game runs in three stages.  In stage one, when two groups A and B meet, there is a probability that they will interact hostilely.  If they do not, the game ends.  If their interaction is hostile, they move on to stage two.

In stage two, given that their interaction is hostile, there is a new probability that A and B will go to war.  If they don’t, the game ends there.  If they do, they move on to stage three.

In stage three, the groups are at war, and the group with the higher number of PA’s has a higher probability of winning.  Suppose this group is A, so that A is more likely to win a war against the PA-deficient group B.  Given that A is stronger (i.e., has more PA’s), there are two outcomes: A and B draw, and both groups simply lose a certain number of fighters (PA’s); or A wins, still losing a certain number of fighters, but also gaining a number of replicas that make up for that loss.

From B’s perspective, given that B is weaker (has fewer PA’s), there is only Draw or Lose.  B could get lucky and draw, losing only some PA’s.  But, there is a higher likelihood of a loss, in which case B loses both fighters (PA’s) and civilians (made up of the other types).

In the paper they are quite explicit about what these probabilities are and why they chose them.  But, the point is that not every encounter with another group is hostile, not every hostile interaction results in war, and any war is more likely to be won by the side with the larger number of PA’s.
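To make the structure concrete, here is a minimal sketch in Python.  The probabilities and the contest function are placeholders of my choosing, NOT the calibrated values from Bowles and Choi’s paper; only the three-stage skeleton follows their description:

```python
import random

# A minimal sketch of the three-stage encounter described above.
# P_HOSTILE, P_WAR, P_DRAW, and the contest function are all assumptions
# for illustration, not the paper's values.

P_HOSTILE = 0.25  # stage 1: chance a meeting turns hostile (assumed)
P_WAR = 0.5       # stage 2: chance hostility escalates to war (assumed)
P_DRAW = 0.2      # stage 3: chance a war ends in a draw (assumed)

def encounter(pa_a, pa_b):
    """One meeting between groups with pa_a and pa_b Parochial Altruists."""
    if random.random() > P_HOSTILE:
        return 'peace'                 # stage 1: no hostility, game over
    if random.random() > P_WAR:
        return 'hostile, no war'       # stage 2: hostility, but no war
    if random.random() < P_DRAW:
        return 'draw'                  # stage 3: both sides lose fighters
    # The PA-richer group is more likely to win the war.
    p_a_wins = pa_a / (pa_a + pa_b)    # simple contest function (assumed)
    return 'A wins' if random.random() < p_a_wins else 'B wins'

# Group A has three times as many PAs, so it wins most wars that occur.
results = [encounter(pa_a=30, pa_b=10) for _ in range(10_000)]
for outcome in ('peace', 'hostile, no war', 'draw', 'A wins', 'B wins'):
    print(f'{outcome}: {results.count(outcome)}')
```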

Conclusion

They ran this game through a number of iterations accounting for hundreds of generations.  They found that transitions from quite tolerant non-altruistic (read: peaceful) groups to bellicose parochial altruistic groups can happen very rapidly–in about 200 generations, or about 5,000 years.

The markedly higher reproductive success of predominantly parochial altruist groups when interacting with groups with fewer parochial altruists could therefore explain the rapid range expansions that are thought to be common among some late Pleistocene human groups, and thus may partly explain the still puzzling second great hominid diaspora that swept from Africa as far as Australia in the course of no more than 10 millennia.

From an evolutionary perspective, this study helps explain why group boundaries have such a profound effect on human behavior.

In conclusion they add:

We have explained how Homo sapiens could have become a warlike yet altruistic species.  But there is no evidence that the hypothetical alleles in our model exist, or that, were they to exist, they could be expressed in the complex behaviors involved in helping others and engaging in lethal conflict.  Thus, we have not shown that a warlike genetic predisposition exists, only that should one exist, it might have coevolved with altruism and warfare in the way that we have described.

They make a good closing point.  Theoretical (i.e., mathematical) biology doesn’t “prove” that certain things are true.  It tests the validity of certain hypotheses and ideas, thereby opening up further possibilities for empirical research.

References:

Choi, Jung-Kyoo, and Samuel Bowles. 2007. The Coevolution of Parochial Altruism and War. Science 318, no. 5850 (October 26): 636-640. doi:10.1126/science.1144237.

Altruism against Predation

This is a review of a recent article, “Cooperation in defence against a predator,” in the Journal of Theoretical Biology by Jozsef Garay of the Research Group of Theoretical Biology and Ecology of the Hungarian Academy of Sciences.  Here’s the abstract:

The origin and the evolutionary stability of cooperation between unrelated individuals is one of the key problems of evolutionary biology.  In this paper, a cooperative defense game against a predator is introduced which is based on Hamilton’s selfish herd theory and Eshel’s survival game models.  Cooperation is altruistic in the sense that the individual, which is not the target of the predator, helps the members of the group attacked by the predator, and during defensive action the helper individual may also die in any attack.  In order to decrease the long-term predation risk, this individual has to carry out a high-risk action.  Here I show that this kind of cooperative behaviour can evolve in small groups.  The reason for the emergence of cooperation is that if the predator does not kill a mate of a cooperative individual, then the survival probability of the cooperative individual will increase in two cases.  If the mate is non-cooperative, then–according to the dilution effect, the predator confusion effect and the higher predator vigilance–the survival probability of the cooperative individual increases.  The second case is when the mate is cooperative, because a cooperative individual has a further gain, the active help in defence during further predator attacks.  Thus, if an individual can increase the survival rate of its mates (no matter whether the mate is cooperative or not), then its own predation risk will decrease.

Hamilton’s “selfish herd” theory (1971) claims that predation risk is lowered when animals huddle in groups, and is lowest for those in the middle of the “herd.”  Buffalo are a good example.  We call them selfish since, if one buffalo is attacked, the others don’t generally help it; they just run.  But, as a group they are safer in large numbers.

The trouble with such a theory is that it doesn’t explain how altruistic behavior (helping out a fellow group member at a risk to oneself) would develop.  Garay’s paper aims to make sense of how this is possible.

His argument hinges on a game-theoretic model showing that although in the short term a non-altruistic strategy confers a better survival rate, in the long term the altruistic strategy does.

If there are only 2 animals, A and B, and we assume that a predator can only realistically attack one at a time, then the probability of A being attacked is 1/2 in a single round of predation.  So, in a one-shot game, if B is attacked, A’s best strategy is to cut and run, since helping B may result in injury or death.

But, if the same game is played over and over (that is, if they run the risk of being attacked often, as is the case in real life), then A’s best strategy is to help out B.  This might seem incongruous, but it isn’t.

If B dies in the first attack, then A’s probability of being attacked in the next round is 100%!  If A has a less-than-100% chance of dying while helping B in the first attack, then he is better off helping B, since on the next round he’ll still have only a 50% chance of being attacked.  50% is certainly better than 100%!

The above is true even if B NEVER helps out A.  That is, if A is the only altruistic one in the (2-animal) group, then it is still to his advantage to continue to be altruistic.  But, if B is also altruistic (helps A when A is attacked), then this is all the better for A.  Also, B would then enjoy the same benefits as A.
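Here is a small Monte Carlo sketch of that argument in Python.  All the parameter values are assumptions of mine, not numbers from Garay’s paper; the point is only that helping can dominate defecting once the game repeats:

```python
import random

# A Monte Carlo sketch of the two-animal repeated predation game. All
# parameter values below are assumptions for illustration. The point:
# keeping your mate alive halves your chance of being the predator's
# target in every later round, which can outweigh the immediate risk
# of helping.

Q_UNAIDED = 0.5   # attacked animal dies with this probability if unaided
Q_AIDED = 0.2     # ... and with this probability if its mate helps
COST_HELP = 0.1   # helper dies with this probability during the defense
ROUNDS = 10

def a_survives(a_helps):
    """Simulate one lifetime; return True if A is alive after all rounds."""
    a_alive, b_alive = True, True
    for _ in range(ROUNDS):
        if not a_alive:
            return False
        if (not b_alive) or random.random() < 0.5:
            # A is the target. B never helps (A is the lone altruist here).
            a_alive = random.random() >= Q_UNAIDED
        elif a_helps:
            b_alive = random.random() >= Q_AIDED
            a_alive = random.random() >= COST_HELP
        else:
            b_alive = random.random() >= Q_UNAIDED
    return a_alive

trials = 100_000
print('A helps B:    ', sum(a_survives(True) for _ in range(trials)) / trials)
print('A never helps:', sum(a_survives(False) for _ in range(trials)) / trials)
```

With these (assumed) numbers, the altruistic A survives the full run several times more often than the defecting A, even though B never returns the favor.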

What is interesting to me is that this argument doesn’t hinge on kin selection at all.  Kin selection is the idea that an individual is far more likely to come to the aid of another individual who shares a large portion of their genome (like children, siblings, etc.) than to a total (genetic) stranger.  But, here, the two don’t need to be related at all.  The risk of predation is enough to “glue” them to one another.

References:

Garay, Jozsef. 2009. “Cooperation in defence against a predator.” Journal of Theoretical Biology 257: 45-51.

Hamilton, W.D. 1971. “Geometry for the selfish herd.” Journal of Theoretical Biology 31: 295-311.

Wilson, E.O. 1975. Sociobiology. The Belknap Press of Harvard University Press, Cambridge, MA.

Predicting Preferences is Rational

In a review of a paper by Camerer and Fehr on social neuroeconomics, Benoit Hardy-Vallée of Natural Rationality makes the point:

So basically, we have enough evidence to justify a model of rational agents as entertaining social preferences. As I argue in a forthcoming paper (let me know if you want to have a copy), these findings will have normative impact, especially for game-theoretic situations: if a rational agent anticipates other agents’ strategies, she had better anticipate that they have social preferences. For instance, one might argue that in the Ultimatum Game, it is rational to make a fair offer.
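(Recall the Ultimatum Game: a proposer offers a split of a pie, and a responder either accepts or rejects; a rejection leaves both with nothing.)  One standard way to formalize social preferences is Fehr and Schmidt’s inequity-aversion utility – my choice of formalization here, not one taken from the review – and a few lines of Python show why a selfish proposer facing an inequity-averse responder makes a nearly fair offer:

```python
# Why a fair offer can be rational in the Ultimatum Game once agents have
# social preferences. The formalization is Fehr-Schmidt inequity aversion,
# chosen for illustration; the parameter values are assumptions.

def fs_utility(own, other, alpha=2.0, beta=0.5):
    """Fehr-Schmidt utility: material payoff minus disutility from inequity.
    alpha: aversion to being behind; beta: aversion to being ahead."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

PIE = 10.0

def responder_accepts(offer):
    # Rejection gives both players 0, hence utility 0; accept only if
    # doing so is at least as good despite the disadvantageous inequity.
    return fs_utility(offer, PIE - offer) >= 0

def best_offer():
    # A purely selfish proposer offers the least the responder will accept.
    for cents in range(0, int(PIE * 100) + 1):
        offer = cents / 100
        if responder_accepts(offer):
            return offer
    return None

print(best_offer())  # 4.0 out of 10: close to an even split, not a penny
```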

Peter Pan was an Economist: MSc in Behavioral Economics at the University of Nottingham

In the far distant future when I finish my Master’s in Mathematics here at Portland State, I’m considering getting a second Master’s in Economics.  A particularly interesting program is the one at the University of Nottingham in Behavioral Economics.

Behavioral Economics consists of much of what I like most: game theory, empirical research, an honest assessment of how people actually think, and an openness to biological approaches to understanding why people do what they do.

Here’s a post by a guy who just finished up at Nottingham with an MSc in Applied Economics talking about his dissertation.

Neuroeconomics: Decision Making and the Brain

A new book edited by Paul Glimcher, Colin Camerer, Ernst Fehr, and Russell Poldrack.  When my money tree starts fruiting I’ll buy it and give a review.

Poliheuristic Theory: An Introduction

This post is part of a series of posts I’m working on covering some basic models in Decision Theory.  For my previous post on Cognitive models, click here.

Decision theory in foreign policy analysis has been characterized by a split between two different, and at times rival, models of human behavior (James and Zhang, 2005).  The first is the classical model of Rational Choice theory, a theory that takes as its starting point the end of the decision making process and attempts to figure out why the choice was finally made.  The second is the Cognitive approach, a theory that focuses on the “how” questions of decision making and attempts to reconstruct why the end outcome occurred.  In an attempt to integrate these two different, but useful, approaches, Alex Mintz created Poliheuristic Theory (PH) (Mintz 2005).

The model consists of two stages (Mintz, 2005).  The first is the “heuristic” stage.  In this stage the actor uses heuristics, or simple tools of thought, to limit the number of choices available to him.  This is similar in character to Robert Axelrod’s Schema Theory (1973) and other cognitive approaches (Simon, 1955; Tversky and Kahneman, 1981, 1986, and 1991), including Prospect Theory (Levy, 1997; Tversky and Kahneman, 1992).   The second stage is the “evaluation” or “calculation” stage.  Here the actor makes calculations based on the given information garnered from the first stage.

The components of the first stage culminate in a “decision matrix” that has a number of parts: alternatives, dimensions, implications, ratings, and weights.  The alternatives are the choices available to the actor.  The dimensions are the relevant criteria used to evaluate the different alternatives.  The implications are what happens at each pairing of the alternatives with the dimensions.  Ratings can be given to each of the implications to aid in analysis for the researcher.  The weights are the relative “importance level” of the different dimensions.

The second stage takes the information given in the first stage and analyzes the different outcomes associated with the given values.  This stage resembles classical Rational Choice Theory (Arrow, 1959), and uses the Expected Utility Principle to justify which options are the most viable.
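As a toy illustration, here is the two-stage procedure in Python.  The alternatives, dimensions, ratings, and weights are invented for the example; the stage-one rule is the noncompensatory elimination usually associated with PH theory’s first stage:

```python
# A toy walk-through of the two PH stages. All names and numbers are
# invented; only the two-stage structure (heuristic elimination, then
# expected-utility calculation) follows the theory described above.

# The decision matrix: a rating (0-10) for each alternative on each dimension.
alternatives = {
    'do nothing':      {'political': 2, 'military': 5, 'economic': 8},
    'sanctions':       {'political': 6, 'military': 6, 'economic': 4},
    'air strikes':     {'political': 5, 'military': 8, 'economic': 6},
    'ground invasion': {'political': 1, 'military': 9, 'economic': 2},
}
weights = {'political': 0.5, 'military': 0.3, 'economic': 0.2}

# Stage 1 (heuristic): a noncompensatory rule. Any alternative that is
# unacceptable on the key dimension is dropped, whatever its other merits.
KEY_DIMENSION, THRESHOLD = 'political', 4
survivors = {name: dims for name, dims in alternatives.items()
             if dims[KEY_DIMENSION] >= THRESHOLD}

# Stage 2 (calculation): an expected-utility-style weighted sum over the
# alternatives that survived stage 1.
def weighted_value(dims):
    return sum(weights[d] * rating for d, rating in dims.items())

choice = max(survivors, key=lambda name: weighted_value(survivors[name]))
print(sorted(survivors))   # ['air strikes', 'sanctions']
print(choice)              # 'air strikes'
```

Note that “do nothing” is eliminated in stage one even though it scores best on the economic dimension; no amount of strength elsewhere compensates for failure on the key dimension, which is exactly what distinguishes the heuristic stage from a pure expected-utility calculation.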

The theory has been successfully applied to Presidential decisions, including Bill Clinton’s bombing of Kosovo (Redd, 2005) and Jimmy Carter’s decisions during the Iranian hostage crisis (Brulé, 2005), and even to autocratic regimes (Kinne, 2005).

For readers with knowledge of Game Theory, there may seem to be a number of similarities between it and the PH model.  But, these similarities are only surface-deep.  Among them is the use of a decision matrix in the PH model.  Game Theory employs a similar matrix, called the “strategic form” of the game (Gates and Humes, 1997; Gintis, 2000; Rapoport, 1966).  Another is the rating of implications in PH theory with numerical values that can then be assessed mathematically.  Game Theory uses the same device to make calculation easier–and possible.  But, the differences are more important.  PH theory is an attempt to understand both why and how a particular actor came to a decision.  Game Theory, by contrast, is interested in the dynamics of interactions between and among actors in a given situation.  Game Theory does have methods of evaluating why a choice was made based on expected value, but not how it was made.

However, PH theory and Game Theory are compatible and may, if used together, provide a powerful method of analyzing the decisions of actors in relation to other actors.  Game Theory can illuminate the potential outcomes resulting from the interaction of the players, and PH theory can explain how and why any particular player would make the choices they make.  This is an advantage over the traditional pairing of Rational Choice Theory with Game Theory, which presumes players are bound by a careful analysis of their expected utility when, in fact, many people do not behave this way in the “real” world (Camerer, 2003).  It is also an advantage over using only the cognitive approach, which tends to shun rational choice thinking as too unrealistic when, in fact, once an actor (in the PH version) has narrowed down their list of actions, they are more likely to make a choice based on the expected utility principle (Redd, 2005).

Poliheuristic Theory is a welcome addition to the literature, and toolbox, of Political Scientists, Economists, and Social Scientists generally.  It bridges the gap between the two primary perspectives in the field, Rational Choice and the Cognitive Models, and provides an easy-to-use framework for analysis.  Further research may prove that it will work well with Game Theory as a positive approach to understanding why decisions were made in complex interactions among agents.

References

  • Arrow, Kenneth J. 1959. “Rational Choice Functions and Orderings.” Economica, New Series, Vol. 26, No. 102, 121-127.
  • Axelrod, Robert. 1973. “Schema Theory: An Information Processing Model of Perception and Cognition.” The American Political Science Review, Vol. 67, No. 4, 1248-1266.
  • Brulé, David J. 2005. “Explaining and Forecasting Leaders’ Decisions: A Poliheuristic Analysis of the Iran Hostage Rescue Decision.” International Studies Perspectives. 6, 99-113.
  • Camerer, Colin F. 2003. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.
  • Gates, Scott and Humes, Brian D. 1997. Games, Information, and Politics: Applying Game Theoretic Models to Political Science. University of Michigan Press.
  • Gintis, Herbert. 2000. Game Theory Evolving: A Problem-Centered Introduction to Modeling Strategic Interaction. Princeton University Press.
  • James, Patrick and Zhang, Enyu. 2005. “Chinese Choices: A Poliheuristic Analysis of Foreign Policy Crises.” Foreign Policy Analysis. 1, 31-5
  • Kinne, Brandon J. 2005. “Decision Making in Autocratic Regimes: A Poliheuristic Perspective.” International Studies Perspectives. 6, 114-128.
  • Levy, Jack S. 1997. “Prospect Theory, Rational Choice, and International Relations.” International Studies Quarterly. Vol. 41, No. 1, 87-112.
  • Mintz, Alex. 2005. “Applied Decision Analysis: Utilizing Poliheuristic Theory to Explain and Predict Foreign Policy and National Security Decisions.” International Studies Perspectives. 6, 94-98.
  • Rapoport, Anatol. 1966. Two-Person Game Theory. Dover Publications, Inc.
  • Redd, Steven B. 2005. “The Influence of Advisers and Decision Strategies on Foreign Policy Choices: President Clinton’s Decision to Use Force in Kosovo.” International Studies Perspectives. 6, 129-150.
  • Simon, Herbert A. 1955. “A Behavioral Model of Rational Choice.” The Quarterly Journal of Economics, Vol. 69, No. 1, 99-118.
  • Tversky, Amos and Kahneman, Daniel. 1981. “The Framing of Decisions and the Psychology of Choice.” Science, New Series, Vol. 211, No. 4481, 453-458.
  • Tversky, Amos and Kahneman, Daniel. 1986. “Rational Choice and the Framing of Decisions.” The Journal of Business, Vol. 59, No. 4, Part 2: The Behavioral Foundations of Economic Theory, S251-S278.
  • Tversky, Amos and Kahneman, Daniel. 1991. “Loss Aversion in Riskless Choice: A Reference-Dependent Model.” The Quarterly Journal of Economics, Vol. 106, No. 4, 1039-1061.
  • Tversky, Amos and Kahneman, Daniel. 1992. “Advances in Prospect Theory: Cumulative Representation of Uncertainty.” Journal of Risk and Uncertainty. 5, 297-323.