Moral Tribes to Moral Lives: Interview with Harvard’s Joshua Greene

Joshua Greene, who visited the Yale Humanist Community on February 5th, is a Princeton-trained philosopher, Harvard professor, and author. He uses the tools of psychology and neuroscience to tackle abstract questions about morality and ethics. In one famous experiment, he showed that we use different parts of our brains to think about moral choices with identical outcomes, depending on how those choices are framed.


Check out Joshua Greene’s new book “Moral Tribes: Emotion, Reason, and the Gap Between Us and Them”

Greene’s book Moral Tribes examines how our ethical intuitions steer our lives in the real world – rightly and wrongly. Questions that interest him include:

  • How can diverse groups of people reach consensus on difficult moral issues?
  • What situations trigger our brains to rely on instinct as we work through moral decisions?
  • How can we fight past our natural biases to solve problems with objective reasoning?

I sat down with Greene to ask him about his recent work, his thoughts on how it might be applied, and how his research has influenced the way he lives.

(If you’d like a summary of Moral Tribes, I highly recommend Robert Wright’s review of the book.)


The last thing of yours that I read was the slide show from your presentation at an artificial-intelligence conference in Puerto Rico (which also included Elon Musk, Skype founder Jaan Tallinn, and dozens of computer scientists). What led you there?

When I was on my book tour, I gave a talk at Google, where I met this guy named Demis Hassabis, who co-founded a company called DeepMind. Their goal is to develop a general artificial intelligence that can solve a lot of different problems – “strong AI”, it’s called – and we found out we had a lot of mutual interests. He suggested that I come to this conference as a representative of the world of people thinking about ethics and cognitive science.


Did you learn a lot?

Dr. Joshua Greene, Associate Professor of Psychology at Harvard University

I did! It was an amazing conference. And I read some of the work before I got there, so in a way many of the revelations happened then.

I was very impressed with the book The Second Machine Age (by Erik Brynjolfsson and Andrew McAfee), which is about the economic and social impact of increasingly sophisticated artificial intelligence. I also read Nick Bostrom’s book Superintelligence, which is about long-term and existential questions: “What happens when the machines get smarter than us? Should we be worried about this? If we should, how can we make sure things go well instead of badly?”

A lot of the dreams of the 1950s… it seems like they’re starting to come true, and this could be either a wonderful thing or a terrible thing. So I’ve been thinking about this a lot, and I think a lot of other people are as well.


Do you plan to explore the subject through formal research?

Maybe. You know, with research there are always two broad questions: “Is this interesting and important?” and “What can I add?” And I know that I’m not going to be a full-blown ace computer scientist anytime soon… or ever.

If anything, I’ll be looking to artificial intelligence for inspiration in tackling my own questions about how flexible, abstract thinking works in a general way. One strategy is to look at the thoughts we already have – but we can’t just slice and dice humans to find out how they think. An advantage of artificial intelligence is that you have complete control: The question is whether you can actually create the behavior you want to examine.


I often hear academics in Ivy League universities referred to as a “tribe” of their own. What are some tribal patterns you notice in the thinking of that community? Is there anything you’d like to change about the way the system “thinks”?

Well, I actually like academia! There are different kinds of problems in different quarters, but I feel more at home in my bespectacled academic tribe than I do anywhere else.

There are certainly trade-offs: If you want people to have great ideas but don’t know what those ideas are, you have to let a thousand flowers bloom and nine hundred ninety-nine of them wilt and die. It’s easy to look around at academia or science and say “Look at that thing that didn’t work! Look at this pile of inbred jargon!” But that’s how we eventually produce things that are new and valuable.

Still, some kinds of training can be very parochial; philosophy has, in the past, fallen prey to this. When I was in grad school, the first thing you’d do was learn a bunch of formal logic, and then you’d take philosophy classes. And the idea was that you can be a fully productive philosopher if all you know is what other philosophers have said. But I think that idea has not really worked out.

If I were reinventing the philosophy curriculum, I’d say that you need to do three things:

  1. You need to develop deep knowledge of an area outside philosophy.
  2. Then develop deep knowledge of another area that has nothing in common with the first.
  3. And then you’ll have to learn the tools and get a grounding in the history of ideas and learn how to think clearly.

This would give you more ability to make connections and uncover new knowledge. But philosophy has limited itself by just operating within a framework that says: “To be a good philosopher, all you need to do is know philosophy.”


Who are some philosophers, living or dead, who you think exemplify this notion of expertise outside of pure philosophy?

There’s a whole group of philosophers who really live this. I’m a member of the Moral Psychology Research Group, which was founded by John Doris, Stephen Stich, Gil Harman, Shaun Nichols, Walter Sinnott-Armstrong and a few others.

Outside of my own circle, people like Daniel Dennett and Jerry Fodor have combined their philosophical acumen with detailed knowledge of other fields and made something valuable out of it. Peter Singer as well – he’s a worldly ethicist who is deeply engaged in real-world moral issues.

And then there are people like Steven Pinker, who’s not a philosopher by training, but is a model for how philosophy could be. He thinks incredibly clearly, has an enormous knowledge base, and has written some brilliant books that connect a lot of dots. You come away from a book like The Better Angels of Our Nature and you have this broad understanding that you didn’t have before. You can see the forest, not just the trees.


You’ve been studying moral cognition for a long time. In the course of learning how we naturally think about problems, which techniques have you found useful for improving your own cognition?

Useful techniques… I’d like to think there’s a role for imagination in a lot of these things. Someone gives an example to illustrate some point, and you say “what if you changed the example this way, or that way?” Would you draw a different conclusion? What does it mean if you would or wouldn’t?

What I want to know is: you twiddle these knobs and you get different psychological effects, but what is moving those effects around? And the feeling you’re being led to – why do you have it? How does it work? What is it sensitive to? This can be very useful for reasoning through moral dilemmas.


Do you have any moral beliefs that you’ve changed in the last few years? Or old beliefs that you’ve put into practice for the first time?

I haven’t changed my overall philosophy, and because my overall philosophy is broadly utilitarian, most of the interesting questions are empirical. If I change my mind, it will be because I become aware of some new piece of information that brings my attention to a new issue or changes my perceived likelihood of doing good or doing harm with some action.

One example: A lot of people have been influenced by the effective altruism movement. For a long time, I had given to Oxfam thinking that was the best way to do good with my money. That still may be true, but people like the folks at GiveWell have done analyses and said “we think the best thing to do with your money is to give it to the Against Malaria Foundation and other specialty charities that can document exactly how much good they do.”

I still give to Oxfam, but I also give to the portfolio of charities that GiveWell recommends. Not a great philosophical transformation, but using new information to change my decisions.

One more thing I’ve thought about is abortion. This came in the course of writing part of the book – I hadn’t fully appreciated how well one can, from a certain perspective, create a good pro-life argument. If your having an abortion is preventing somebody from existing who, odds are, would have a pretty good life, then it’s pretty hard from a consequentialist perspective to ignore that. Trouble is, the same position also applies to abstinence, and to anything besides pumping out as many children as you can […] but I think the ethics of abortion is not as straightforward as a lot of liberals think, or as I had once thought myself.


Most people who write books for the public aren’t just trying to prove they’re right, but are trying to make people’s lives better. Have you done much work studying the practical aspects of applying your theory? That is, not just figuring out how we should talk about morality, but figuring out which techniques within a conversation will open people up to being more agreeable?

One thing that’s come out in my research, and other people’s, is the importance of trust. Part of what makes problems like the Israeli-Palestinian conflict so palpably tragic is that it’s not hard to imagine a situation in which both sides are much better off than they are now.

So the question is, whoever we are, how can we get from where we are now to this better outcome? It’s wishful thinking to think we could both make it to the Promised Land if we’d just trust each other, because in most cases, both sides have good reason not to trust each other! Still, you’ll never get anything done without some kind of trust. That trust has to be built from within, and I have some thoughts on how to do that. It’s a long, hard task – but ultimately possible.


Aaron Gertler (Yale University)

Aaron is a member of the class of 2015 at Yale University. After he graduates, he hopes to live his life in a way that makes the lives of other people significantly better, unless he gets distracted by his dream of becoming a famous DJ/novelist/crime-fighter. His interests include electronic music, applied psychology, instrumental rationality, and effective altruism. If his beliefs are inaccurate, you should tell him so as directly as possible. You can follow him on Twitter @aarongertler, and he also writes for his own blog.
