Season 1, Episode 4: Josh Greene

Joshua Greene is a professor in the Harvard Department of Psychology, where he runs the Moral Cognition Lab. He received his bachelor’s degree from Harvard and then a PhD in philosophy at Princeton, where he was mentored by many bright lights of analytic philosophy, including Peter Singer, who served on his committee. After a post-doc in a cognitive neuroscience lab, Greene returned to Harvard to start the Moral Cognition Lab, which studies moral judgment from both descriptive and normative angles, drawing on psychology and philosophy.

The best blog on the internet, SlateStarCodex, says of Josh:

"My own table was moderated by a Harvard philosophy professor [in fact, Josh transitioned to psychology] who obviously deserved it. He clearly and fluidly expressed the moral principles behind each of the suggestions, encouraged us to debate them reasonably, and then led us to what seemed in hindsight, the obviously correct answer. I'd already read one of his books, but I'm planning to order more. Meanwhile, according to my girlfriend, other tables did everything short of come to blows."

Recording this episode felt similar. To see Josh’s training in action, jump to the end, where he does a brilliant job of passing the ideological Turing Test.

Below are some notes to set the context of our debate.

Greene’s Take on Our Moral Intuitions

Greene’s book Moral Tribes is one of the most accessible treatments of consequentialist morality; I would recommend it, along with Scott Alexander’s now-archived Consequentialism FAQ, as the place to start. For a shorter introduction to Greene’s framework, watch his EA Global talk from 2015.

The basic idea is that the dual-process theory of the mind pioneered by Kahneman and Tversky applies to moral cognition as well. Greene uses the metaphor of a camera:

The moral brain is like a dual-mode camera with both automatic settings (such as “portrait” or “landscape”) and a manual mode. Automatic settings are efficient but inflexible. Manual mode is flexible but inefficient. The moral brain’s automatic settings are the moral emotions [...], the gut-level instincts that enable cooperation within personal relationships and small groups. Manual mode, in contrast, is a general capacity for practical reasoning that can be used to solve moral problems, as well as other practical problems. -Moral Tribes

We have fast, automatic gut reactions such as guilt, compassion, gratitude or anger. These are handled by our System 1.

But we can also second-guess these reactions with considered judgment: in “manual mode”, we make deliberate decisions based on the data we get from our fast moral machinery.

If you’re not familiar with dual-process theory, a good place to start is Thinking, Fast and Slow or Rationality: From AI to Zombies. There is also an excellent short (though a bit more technical) description by Paul Christiano, The Monkey and the Machine.

Moral Bugs

Both modes, automatic and manual, have their bugs.

We talk about his early research on moral cognition with Jonathan Baron – in my opinion, a hugely underappreciated psychologist. Baron’s book Thinking and Deciding is an excellent introduction to decision making informed by cognitive biases and covers inconsistencies in moral reasoning (for LessWrong readers, think the academic equivalent of the Sequences).

Baron and Greene studied a phenomenon called scope insensitivity. Josh describes it in our interview:

You can ask people things like, "How much would you pay to save 10,000 birds?" But the problem was people are wildly insensitive to quantity, which is something that is highly relevant to effective altruism. If you ask people how much to save 10,000 birds, they go, oh, this much. And if you say, "How much to save 20,000 birds?" they say the same thing.

In fact, they say the same thing regardless of the number of birds: people offer about the same amount to save 1,000 birds as to save 100,000. (For a compressed description, see Eliezer Yudkowsky’s essays Scope Insensitivity and One Life Against the World.)
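To make the inconsistency concrete, here is a minimal sketch in Python, using made-up willingness-to-pay numbers (not data from Baron and Greene’s studies), contrasting the roughly flat answers people give with what a scope-consistent, per-bird valuation would imply:

```python
# Illustrative only: hypothetical willingness-to-pay (WTP) figures, not survey data.
reported_wtp = {1_000: 80, 10_000: 80, 100_000: 80}  # answers stay flat as scope grows

# Per-bird value implied by the smallest scenario.
per_bird_value = reported_wtp[1_000] / 1_000

for birds, wtp in reported_wtp.items():
    consistent_wtp = per_bird_value * birds  # what scaling linearly with scope would imply
    print(f"{birds:>7,} birds: reported ${wtp}, scope-consistent ${consistent_wtp:,.0f}")
```

The reported column barely moves while the scope-consistent column grows a hundredfold; that gap is the whole point of the essays above.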

This is a serious problem. Consider the case of Viktor Zhdanov, who lobbied the WHO to start the smallpox eradication campaign. In the twentieth century alone, smallpox killed 300–500 million people – a far larger toll than all of that century’s wars combined.

Zhdanov’s interventions clearly saved at least tens of millions of lives. But he would probably have gotten the same (perhaps even greater) emotional reward had he worked as a doctor in a local Ukrainian hospital. The same goes for Bill Gates: adopting an African child would register emotionally about as strongly as saving millions of African children from a preventable disease. (For the origins of Gates’s focus on global health, listen to our episode with Larry Summers.)

To quote from One Life Against the World:

Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.

But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.

Because of this evolutionary baggage, we shouldn’t put much trust in our moral intuitions when it comes to big global problems. Aiding our intuitions with consequentialist means-ends analysis (for example, by comparing interventions in terms of their impact on Quality-Adjusted Life Years) will often feel wrong, but it gives us a consistency that our brains cannot achieve on their own.
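As a rough illustration of what such a comparison looks like in practice, here is a minimal sketch with two hypothetical interventions and made-up cost and QALY figures (none of these numbers come from the episode or from Greene’s work):

```python
# Hypothetical interventions with made-up costs and QALY estimates,
# purely to illustrate how a cost-per-QALY comparison works.
interventions = [
    {"name": "insecticide-treated bed nets", "cost": 1_000_000, "qalys": 12_000},
    {"name": "new wing for a local clinic",  "cost": 1_000_000, "qalys": 300},
]

for item in interventions:
    item["cost_per_qaly"] = item["cost"] / item["qalys"]

# Rank by cost-effectiveness: fewer dollars per QALY is better.
for item in sorted(interventions, key=lambda x: x["cost_per_qaly"]):
    print(f"{item['name']:<32} ${item['cost_per_qaly']:>8,.0f} per QALY")
```

The specific numbers don’t matter; what matters is that the manual-mode calculation stays consistent as the stakes scale, which our automatic settings do not.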