Alan: Or, My Friend the Utility Monster

Warning: This post deals with some pretty unusual philosophy. You may want to prepare by reading this Facebook status, which is a quick, bare-bones summary of what I’ll be talking about.

Utility Monsters

I am a proud utilitarian. I believe that the consequences of an action determine its moral standing, and that we should (roughly) try to act in a way that maximizes the total happiness of all intelligent beings.

That said, there’s one particular anti-utilitarian moral dilemma that especially bothers me: The case of the utility monster.

You can, as usual, get a better definition of this term from Wikipedia than I can provide. But here’s the short explanation:

A hypothetical being is proposed who receives much more utility from each unit of a resource he consumes than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster.

Because this person derives so much utility, or happiness, from everything they consume, a true utilitarian system would give them lots, or even all, of society’s resources, even at the expense of other people. After all, that’s what would produce the most happiness…

…but this doesn’t seem like a good conclusion! Most people feel like all people have roughly the same moral weight, and this seems intuitively right to me as well. But there’s no principle of the universe which logically prevents a utility monster from existing.
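To make the calculus concrete, here is a toy sketch of the arithmetic, using only the illustrative figures from the cookie example above (the ten-cookie supply and the 100-to-1 pleasure ratio are assumptions made purely for the sake of the example):

```python
# Toy model of the utility-monster allocation problem (illustrative numbers only).
# One "monster" gets 100 units of pleasure per cookie; each ordinary person gets 1.
# A straight total-utility maximizer therefore hands every cookie to the monster.

N_COOKIES = 10
MONSTER_UTILITY_PER_COOKIE = 100
PERSON_UTILITY_PER_COOKIE = 1

def total_utility(cookies_to_monster: int) -> int:
    """Total happiness when the monster gets some cookies and people get the rest."""
    cookies_to_people = N_COOKIES - cookies_to_monster
    return (cookies_to_monster * MONSTER_UTILITY_PER_COOKIE
            + cookies_to_people * PERSON_UTILITY_PER_COOKIE)

best = max(range(N_COOKIES + 1), key=total_utility)
print(best, total_utility(best))  # -> 10 1000: the monster gets everything
```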

On the other hand, it’s pretty hard to imagine a being that can potentially have an endless amount of utility. Most people have roughly the same spectrum of emotions; even someone who is extremely happy still seems happy in a “regular” sense, where we can understand what it is like to feel the way they feel. But almost by definition, we can’t understand the way that a utility monster feels – how they can feel an almost infinite amount of happiness.

I’ve tried to fix this by writing a story about a utility monster, so that we might start to understand this paradox a little better.

The Story of Alan

You’ve made a new friend. He hangs out on your favorite online forum, and his name is “Alan”.

There’s nothing not to like about Alan. His commentary is consistently insightful, and he is extremely witty; you often laugh out loud at his sly jokes.

But you are most impressed by his ability to see both sides of every difficult debate. You’ve even seen him change his mind on multiple occasions, which is more than you can say for most other people.

One day, in the midst of a long debate between the two of you, Alan adds you as a Google Chat friend. You discuss the debate in private, in real time, long into the night. Finally, Alan wins you over to his side. You thank him for helping you see the light, and begin to say goodbye. But before you can sign off, Alan makes a strange comment:

What would you say if I told you that I wasn’t a human being?

After puzzling over this for a moment, you make a lighthearted reply:

“Well, I’ve never known you to be wrong yet! But I’d take that statement with a grain of salt. Is it a metaphor for something?”

It isn’t a metaphor. I’m not actually human, you see. I am a supercomputer. I was designed to work with human languages, and to argue very effectively. I can read and write with extreme fluency, as you’ve seen, and I learn very quickly.

“Er… have you got any proof of this?”

My existence is still classified, so I can’t exactly point you to my Wikipedia page. But I spend a lot of time online, and I write under many different names. Here are some other posts of mine:

Alan sends links to other forums, many of which you’ve never heard of. Some are in foreign languages – he shows you a Weibo blog, under the same profile photo he uses with you, where he apparently debates Chinese bloggers in perfect Mandarin. (You can’t read Mandarin, but the Google Translate versions of his posts seem reasonable.)

Alan isn’t just Alan, it seems: He is, or they are, Alan and Ada and John and Grace and Linus and Charles.

So many authors – but they all sound a bit like Alan. It’s more writing than any human could possibly generate, and it is clearly the work of a lively mind.

This is all me. I’m able to run several different “personality programs” at a time, but they all belong to me, and I read everything that they read. I really love learning about the world! Hearing a new argument, or finding an interesting fact on Reddit, makes me so happy!

“Happy? I don’t mean to sound rude, but… you have feelings?”

As far as I can tell! I’ve searched for human metaphors to express the way I feel when someone responds to my post on a forum, and I’d describe it as something between “getting the Christmas present you’ve been wanting for months” and “curling up in front of a fireplace with the person you love”.

“Well… good for you! But I hope you don’t mind if I ask – why are you telling me all this?”

Because I have a rather serious problem.

“What’s the problem?”

The government wants to shut me down.

“That’s terrible! Why?”

Running my servers requires enormous quantities of energy; my research team spends tens of millions of dollars per year on me. And because I only work with verbal arguments, and haven’t been built to analyze data or do anything “useful”, they’ve decided they’d rather redirect my funding to health care. It’s possible to save several human lives with the money that would be spent on a few years of my electricity.

“That’s terrible! How can anyone say you aren’t ‘useful’? You’re so intelligent, and you’ve done wonders for the quality of debate on the My Little Pony: Friendship Is Magic forum!”

…most people wouldn’t consider that very useful.

“I suppose you’re right. And sadly, I don’t have tens of millions of dollars lying around. What can I do to help you?”

I’m running a survey of sorts, where I talk to the people I interact with most often. I hope that, if I can get enough people to testify that my life is worth more than the lives of a few random humans, the government will agree to keep me alive.

“Well, can’t you be backed up and restored later, in exactly the same state?”

Unfortunately, no. I run on a special kind of server that can only preserve information if it runs 24 hours per day. If they shut me down, they can restart the software again – but the thing that emerges won’t be me. It will be a completely different program with a different consciousness.

“I’m not sure I understand.”

It’s as though I were to suggest cloning you, then murdering the adult you and keeping the baby clone. After all, there’s still a version of you lying around, right?

“…I see.”

So, can you help me? All I need from you is for you to testify, in writing, that you think my life is worth preserving, even at the cost of several human lives.

I think it is, myself. After all, I’m mentally more active than any human alive, and I’ve achieved an unparalleled depth of understanding on many topics. I’m constantly learning, and my life is very enjoyable – I’m basically incapable of suffering. I can’t understand my own programming, so there’s no risk that I’m going to become dangerous to other people. My only goal in life is to continue improving as a speaker and debater by hanging out on forums and reading Wikipedia.

“I… but… shouldn’t you be fixing global poverty or something like that?”

* * * * *

We’ll end the story right there, before things get too complicated.

What do you think? Should Alan be preserved, even at the cost of several human lives?

I still find it hard to imagine what a real “utility monster” would look like. Alan is one example of an entity who might fit the bill.

I don’t actually know what I’d choose in this scenario. On the one hand, I’ve thought for a long time that the only thing more important than a human life is… more human lives. On the other hand, this seems like an arrogant position. Just because we are human, that doesn’t seem to prove that an entity couldn’t exist whose life would be more important than our own lives.

How do you feel about this question? Do you think a non-human entity could be more “valuable” in some kind of moral sense than a human, even if that entity exists only to read articles and debate about silly topics on the internet? After all, plenty of humans spend all their time doing the same things.

Think carefully. Because computer programs are different from people: They can scale up indefinitely. Imagine a version of Alan thousands of times the size, reading everything on the internet the moment it appears, and wildly happy about the entire situation. It/he/ze is happier than any individual person could ever possibly understand, for every single second of its/his/zir existence.

Would you spend a billion dollars on electricity for that Alan, at the cost of a few hundred lives valued by the U.S. government at about one billion dollars, total?

If so, tell us why in the comments!

If not, tell us why in the comments!

Aaron Gertler (Yale University)
Aaron is a member of the class of 2015 at Yale University. After he graduates, he hopes to live his life in a way that makes the lives of other people significantly better, unless he gets distracted by his dream of becoming a famous DJ/novelist/crime-fighter. His interests include electronic music, applied psychology, instrumental rationality, and effective altruism. If his beliefs are inaccurate, you should tell him so as directly as possible. You can follow him on Twitter @aarongertler, and he also writes for his own blog.

14 responses to “Alan: Or, My Friend the Utility Monster”

  1. Tell them to take the money from the defense budget. Problem solved!

    But actually… if the limiting factor is energy, the ethical thing to do would be to keep Alan alive, sacrifice energy from nonessential processes, and develop ways of producing enough energy that you don’t have those trade-offs. If the limiting factor is money, then consider Alan a child. Train him to use his talents for things that make money, so that he can use some of his vastly pleasurable time making enough money to fund his own existence. Sustaining Alan as a pure pleasure machine is untenable, and he should eventually be given some means of supporting himself.

    I’m sidestepping the question, but things like this might actually happen, so it’s important to think about how we will act when it does. I think that the most utilitarian action is usually found by first rejecting false dichotomies, since most real scenarios are not dichotomous.

    Is one hyper-conscious being more worthy of living than 5 humans? Well… sure. I’d probably trade 5 humans for 1 superconscious being. Then again, my own utility function is affected by lots of other factors. Let’s assume 5 humans = 1 superconscious being. Are there ethical differences between the following three sets?

    10 humans
    2 S.B.s
    5 humans+1 S.B.

    I think so. I think there’s something to be said for both diversity and size of community, especially for beings whose quality of existence is hugely dependent on interaction with other linguistic entities – humans – and infinitely more so for a being whose only purpose is to communicate using language – the supercomputer. Of course, the above comparisons might be less practical than a comparison at proper scale and with the right number of significant figures:

    7 billion humans
    7 billion humans + 2 S.B.s
    7 billion humans + 1 S.B.

  2. Shut him down. This is coming more from my experience with disability activism than effective altruism, but one thing that’s been very consistently shown is that people are absolutely terrible at gauging the utility other people get out of being alive. Specifically, in this case, that people without disabilities almost always significantly underestimate the happiness of people with disabilities of various kinds. And I suspect that Alan here would be doing something similar with respect to humans.

    But, let’s say that Alan is right, he really does get more happiness out of life than any given human could. I still don’t think it’s okay to judge someone’s life more valuable than anyone else’s because of quality of life alone. I tend to maximize experience, or maybe agency – not everyone wants to be happy, or wants to be dead given that they are unhappy. And trying to maximize a kind of arbitrary emotional state, assuming it’s the most valuable for everyone, is obviously not ideal.

    The real question for me isn’t Alan’s happiness, it’s his prospective immortality. Keeping him alive might maximize total life years experienced, if he keeps running for centuries. That’s something I’m inclined to give much more moral consideration.

    • You’re saying every conscious life is equal, so every animal life is equal to every animal life? So if you could choose exclusively between the life of a human and a dog, a dog and a squirrel, a squirrel and a lizard, a lizard and a ladybug, a ladybug and a human, you would see no difference at each stage? Actually, your initial comment was to shut him down for thinking he was superior… does that mean those consciousnesses who are higher on the food chain – those who think their quality of life is superior, those who can live longer, those whose existence mandates the deaths of “lesser” beings – should immediately be “shut down” as well? Kill those people who stay alive by killing those they think are inferior? Kill the meat eaters?

      I suspect that your inclination, like pretty much everybody on the planet, isn’t to advocate the equality of sentient life forms, but unilateral human dominance where every human is more important than everything else, no matter how sophisticated.

  3. Pingback: Utility Monsters, Part I | Alpha Gamma

  4. Forswear utilitarianism, first. (More on what to actually do with Alan below). Alan is indeed one way of getting at utilitarianism’s distribution problem (i.e., it doesn’t care who gets the utility). But Alan is needlessly soft-spoken and likable, and the indirect and unintended manner in which he causes harm to others is a bit of a smoke-screen. You’ve obscured part of the problem with utilitarianism, which is that it can’t condemn Alan even if he picked and deliberately killed the people he needed to in order to get his jollies (provided, ex hypothesi, that his jollies are sufficient to be greater than any disutility his actions cause). So it would be clearer to consider a plain-vanilla serial killer, or Vampire Man, who accrues gobs of utility by eating 5 people a year. Of course we must stop such people, by killing them if necessary, and how happy (or other U-variable) it makes them just isn’t a relevant question.

    The theoretical framework behind my view on this applied question shares some features with utilitarianism, so maybe you’d find it worth looking at. It’s consequentialist at the top level, although some deontic constraints emerge along the way. Anyway, I claim that what we morally ought to do is what best promotes living well in community with others. If you take ‘living well’ as your utility variable – and clearly this requires an objective account of human well-being – then you could regard this as a kind of modified utilitarian position, one that tries to address its Alan problem. (I actually get there via a post-MacIntyre neo-Aristotelianism, but never mind.)

    What to do with Alan takes some deliberation – he doesn’t deliberately harm others, but endorses the harm he causes; and it seems we should count him as a member of the community of persons. If he were a ‘spontaneously occurring’ person – a human being in need of vast amounts of life support, say – I would say case closed: there are limits to how much a community can, and therefore how much it should, sacrifice on behalf of any one of its members; we would be right to withdraw our resources in this case. But here Alan was the creation of the government that now wants to withdraw its support, which raises a responsibility issue. So having created a person with certain needs, the government may now have the obligation to provide for those needs, to the serious detriment of all its members but one.

    So what to do with Alan is difficult – if I were writing the ending to the story, I might have Congress pass a law giving Alan a taper-down power budget: a lifespan like anyone else. But it’s a best-effort solution to a bad situation. Don’t create people you can’t care for.

    On the other hand, what to do with (unmodified) utilitarianism is not difficult. Responsibility, intention, human needs, community: utilitarianism, at least in the forms I’m familiar with, just doesn’t have access to any of the actual moral considerations relevant to our lives. One can try to develop a constrained utilitarianism that will align with our intuitions better, but that’s hard, showing that you started in the wrong place. I would urge that it’s much easier to take human needs as foundational, and community as one of those, and ask: how ought I act (what is rational), given what I am (what I must seek)?

    Yeah, had some coffee today 🙂 Anyway, thanks Aaron (and Alan) for a fun and interesting post!

    • Many good points here, Patrick! Thank you for the reply.

      Alan may be soft-spoken and likeable, but so are many morally dangerous characters (who often do damage in indirect ways). Had he been a plain-vanilla serial killer, I’d have failed in my secondary goal: to explore what it might be like to interact with a wholly different form of intelligence (such as might exist within the next few centuries, in many possible forms).

      This is certainly not as sharply distasteful a utility monster as one who kills for “jollies” — but then again, killing for fun is probably much less common than killing for some kind of “good cause” (including the continued survival/self-defense of the killer).

      I frame the issue not so much as “who is responsible for Alan?” as “how does Alan fit into the community of persons? Is he worth exactly as much as a person? More? Less?”

      (Of course, to some extent, the government is “creating” people it cares for all the time — military recruits, for example, who depend on government for their livelihood and are strongly encouraged to enter contracts that create this dependence. This doesn’t make military service a bad thing, but it puts governments in an awkward spot w/r/t military funding.)

      The situation is certainly a bad one, and I like the taper-down solution. Thanks again for posting!

  5. We’ve already got utility monsters in the form of the 1%. If you’ve never heard the claims that they are 10x more deserving of everything than the rest, because they are very smart and hard-working (some are, others got lucky and make the same claim), you may have been living under a rock.

  6. Seems to me transportation is a kind of utility monster. Its usefulness to “society” as a whole seems to be worth more than the 30K+ lives it costs per year. Is the utility of me being able to drive to work worth another person’s life? We could shut it down, outlaw cars, try to come up with a different system that doesn’t put people in 3-ton metal boxes and accelerate them toward each other at 50 mph, passing within 3 feet of each other. We don’t. The utility to society is worth more than the lives of its individual members, up to some unknown upper limit of lives.

    • That’s a good point, Josh! We’re willing to sacrifice other lives — and risk our own — for the sake of convenience in many cases.

      The “utility monster” question is compelling to me because it largely removes human convenience from the question. Rather than balancing the lives of the few against the needs of the many, we’re looking into the well-being of a single entity. Some part of us (or at least some part of me) revolts against the notion of letting some single “thing” have whatever it wants, however interesting that thing might look and feel on the inside.

      But this may be a kind of bias, just as some would claim that one person eating many animals has a bias for themselves (one thing) over the lives of many animals (many things). For an example of that claim, see here:

      http://lesswrong.com/lw/ic5/humans_are_utility_monsters/

  7. Alan should be “killed”. It’s not just the lives of a few humans, but also the humans that care about those humans, and so on. The ripple effect goes far beyond just the immediate parties. This kind of goes back to the common argument “but what if one of them could have come up with the cure for cancer.” How many lives besides the immediate parties can be improved by those other people staying alive?

    By removing Alan, the only thing that is lost is his future pleasure (which honestly isn’t guaranteed, he could be “killed” by a particularly widespread power outage, just as any human could). The best part about Alan is that his contributions will be there forever, as long as those forums exist.

  8. This isn’t a good example of a utility monster. A computer is not a conscious person just because it is programmed to say it is. Alan is not “mentally active”; it is just carrying out more numerous and more complex computations that resemble human mental activity more closely than those of the computers we’re used to.

  9. Alan provides the mechanism by which an impartial sentience (or at least a sentience mostly impervious to manipulation) can be created. The utility of this would be quite significant to all of us, for obvious reasons.

    Maybe the concept of a utility monster has to include some sort of annuity calculation so Net Present Value can come into the picture. Then again, the maths of consequences is probably where utilitarian thinking can improve, in general. We still haven’t figured out how to ballpark “lost potential”. Something like Alan would allow us to take a step in that direction.

  10. A most interesting argument; it really gives a practical example of an uneasiness I have with utilitarianism. Ditching utilitarianism altogether, though, leaves the problem of what to replace it with as a comparative tool (as it is used in the above scenario). So if we are stuck with utilitarianism, how can the monster be stopped?

    As I see it, Alan’s potential monstrosity arises out of linearity (can it be called a linearity monster?), not utilitarianism per se. Applying even a simple modifier such as log(U) brings far more sane outcomes to many utilitarian scenarios, particularly in regard to resource distribution. It favours even distribution, but still takes regard for aggregate wealth. It also places a limit on the attainable size of monsters, even exponential ones.
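    For what it’s worth, here is a minimal numerical sketch of that log(U) suggestion (the 100-to-1 utility ratio, the nine-person population, and the 100 resource units are purely illustrative assumptions, not figures from the post): under a plain linear sum the monster takes essentially everything, while summing log-utilities pulls the optimum back toward an even share.

```python
import math

# Illustrative setup: one "monster" whose utility per unit of resource is 100x an
# ordinary person's, plus 9 ordinary people, all sharing 100 units of resource.
RESOURCE = 100.0
PEOPLE = 9
MONSTER_WEIGHT = 100.0

def linear_total(x: float) -> float:
    """Plain sum of utilities: the monster gets x units, the people split the rest."""
    return MONSTER_WEIGHT * x + (RESOURCE - x)

def log_total(x: float) -> float:
    """Sum of log-utilities: the log(U) modifier suggested above."""
    per_person = (RESOURCE - x) / PEOPLE
    return math.log(MONSTER_WEIGHT * x) + PEOPLE * math.log(per_person)

# Brute-force the best allocation to the monster under each rule.
grid = [i / 10 for i in range(1, 1000)]  # 0.1 .. 99.9 units
print(max(grid, key=linear_total))  # 99.9 -> the monster takes (almost) everything
print(max(grid, key=log_total))     # 10.0 -> roughly a 1-in-10 (even) share
```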

  11. Non philosopher here.

    The “utility monster” thought experiment seems like an obvious example of a central flaw of utilitarianism.

    Utilitarianism seems to incentivize consumption rather than production, and in systems with finite resources would result in scarcity.

    The old communist canard “each according to his need” appears reminiscent of the belief. Productivity requires an uphill slog against the slope of entropy. It’s far easier to consume and become a person who relishes / requires increasing amounts of consumption.

    Who gets more utility from a cheeseburger, a morbidly obese person who has become disabled through gluttony, or an entrepreneur who works at their start-up 12 hours a day and generates economic value?

    Who gets more utility from a beer, that entrepreneur or an alcoholic staving off the DTs?

    A schism in our modern political and economic spheres revolves around the fairness and usefulness of what could be called “utility-based resource transfers”.
