Westworld As Prediction


(Draft, to be published in Ultimate Westworld and Philosophy. Do not cite, comments very welcome.)

Call a machine which looks and acts exactly like a human being an android. Assuming that AI and robotics continue to develop as they have been, we will be able to make androids sooner rather than later. Given this, one of the most pressing moral questions of our generation, and of those that follow us, is how we ought to treat the androids we create.

This is a topic of much science fiction, and one of the many interesting questions posed by Westworld. In episode two, the question is dramatically presented when William and Logan, newly arrived at the park, are having dinner. A host comes up to them, encouraging them to go on a treasure hunt. William, then a newcomer to the park, is unable to conceptualize what’s in front of him as a mere machine, and politely refuses, respectfully listening to the host’s spiel even though he’s not interested. The more seasoned Logan, with complete indifference, stabs the man in the hand with a fork to get him to leave. When androids come, should we be Williams (at least, young Williams) or is it morally okay to be Logans?

This fascinating question is not the one I propose to explore directly in this essay (though I will say some things about it). Instead of asking how we should treat androids, I will ask how we will treat androids, and try my hand at predicting the future.

And my prediction is, basically, that places like Westworld will arise in human society. Harnessing work from philosophy, anthropology, and computer science, as well as some basic facts about economics, I will argue that, having developed androids, we will exploit them for our purposes, but that their very indistinguishability from real humans will mean we won’t be able to do so with a clear conscience. I’ll then suggest that, to assuage our conscience, we’ll keep the exploited androids out of sight and out of mind, just as we keep the millions of animals we use for food locked away in factory farms. But exploited androids kept out of the way for humans’ use is just what Westworld presents to us. So Westworld is a dark prediction of our future.

Cheery, I know. And maybe I’m wrong. Hopefully I’m wrong. But a set of plausible claims makes it seem to me that my conclusion, if far from certain, must be taken seriously. I will first set out the claims as premises, before spending the rest of the essay giving reasons to think they are true.

The economic premise. If we develop androids, we will do so in order to have them do work (including emotional work) for us, and we will exploit them.

The computer science premise. If we develop androids, our ability to fine-tune them to prevent behaviours like the exhibition of pain and suffering will be extremely limited.

The anthropological premise. We will be inclined to treat these androids as if they were alive.

The moral premise. We ought to treat these androids as if they were alive, and in particular we ought not to cause them to suffer.

The factory farm premise. When we have beings who work for us, whose behaviours we can’t control, and which we’re both inclined to treat as alive and which we ought to treat as alive, we lock them away in order not to see the suffering imposed on them by the work we make them do.

These together lead to:

The Conclusion. We will lock androids away in order not to see the suffering imposed on them by the work we make them do. But this is Westworld. So Westworld is our future.

The economic premise gives us that we’ll exploit androids, which will lead to (at least) the exhibition of suffering and displeasure on the androids’ part. The computer science premise gives us that we’ll be unable to code out this exhibition of suffering and displeasure. The anthropological premise gives us that we’ll be inclined to treat them as alive, and thus to be sympathetic to their exhibitions of suffering and displeasure. The moral premise tells us that if you have something acting as if it is suffering, you ought to relieve that suffering, even if you’re uncertain whether it really is suffering. Together, these mean that androids will be like factory-farmed animals, and so it’s reasonable to think they will be treated as such, locked away from human view. That is the conclusion.

The economic premise

The economic premise says that the very raison d’être of androids is to serve humans, and, indeed, to be exploited by them.

You could deny this. Surely, you might think, the reason for developing androids is just like the proverbial reason for climbing Everest — because it’s there to be done. AI isn’t primarily about serving humans, this thought goes; it’s about nerds making the best chess-playing machine for the nerdy sake of it.

Maybe this was true a couple of decades ago, but it is no longer. The reason people care so much about self-driving cars is precisely that they threaten, by promising to revolutionize, a core area of human labour. Drivers risk being rendered obsolete by machines that don’t make mistakes or take breaks, and the CEO of Uber looks with wonder at the possibility of removing messy, inconsistent, expensive humans from his business.

The point generalises. Most populations are aging in ways that we don’t know how to deal with. Health insurance and pensions are not enough to pay for the treatment a sick, aging population will require, and so we’re faced with old people with a range of conditions that need medical attention, and not enough doctors and nurses to treat them. We’re faced, moreover, with an epidemic of loneliness, which research suggests causes physical pain. Android doctors, nurses, and companions offer solutions to these problems, but they are solutions which fundamentally depend on the idea that androids serve humans.

That’s not so bad yet, though. But here is a crucial point: for androids to be economically viable, we’ll have to exploit them. People will have built them and paid for their development, and they’ll want the highest possible return on their investment.

One of the main roles for androids will be emotional labour: the labour of caring, of simply being with the sick, the lonely, and the bereaved. Someone to see a person through a hospice, to spend time with the [insert your own description of peak annoyingness here], or with the bereaved widow. And that costs. It’s hard just to be there while someone dies, to look after the lonely, to spend time with the annoying.

That’s why it’s vital that people who do these jobs have time off, are offered counselling, and are able to decompress after a shift and come home to family and friends, to rest from their emotional labour. But if we were to give the same to androids (time off, counselling, friends and family to support them), then, once you factor in the research, development, and production that has gone into them, the androids could well end up costing as much as humans. Then there would be no economic benefit to introducing androids, and so the androids probably wouldn’t get built. What we should expect instead is that the androids will be made to work longer hours, skip breaks, and so on, so that the manufacturers really get their money’s worth, without having to worry about complaints about violating human rights.

So androids, if they’re to be economically viable, will have to be exploited: made to bear more of the costs, especially the emotional costs, that caring for people demands. That’s premise one.

(You might already be objecting — ‘wait, these are machines. Speaking of exploitation here makes as much sense as speaking of the exploitation of a car wash’ — hold that thought for a paragraph or two.)

The computer science premise

That’s not enough to really get us worried though. Because after all, one might think, the great thing about beings which we create is that we can create them as we wish. Westworld, to some extent, would lead us to believe this. It gives us a picture of Arnold tinkering with his hosts, changing their algorithms in subtle ways to achieve subtle effects.

Given this, there’s an obvious response to the worrying thought above: we simply program androids not to tire out, not to feel the brunt of the emotional labour they carry out. An apt analogy: just as we build engines out of parts that don’t wear out the way puny human arms do, so we would build our humanoids not to wear out mentally the way puny human minds do.

But androids won’t be steam engines, and won’t be designed like steam engines. And that leads me to my second point: there’s little reason, given what we know about AI as it’s practiced at the moment, to think that we’ll have such fine-grained control over which properties our androids have and which they lack. Westworld gives us a world in which we can dial a host’s intelligence or bravery up or down with a touchscreen; that is unrealistic.

In order to make this point, it’s necessary to take a little detour into the philosophy and computer science of artificial intelligence as it’s currently practiced.

If you’ve done any coding, you might be under the impression that in developing AI we’ll simply write a program that spells out what the android should do if it finds itself in a situation of a given type. For example, you might imagine it would look something like this:

if (HeadTapped() == True)
    if (InsectFlying(Self.Head) == True)
        Swat()
    else if (ProjectileNearby(Self.Head) == True)
        LookForAttackers()

In English, this spells out what the humanoid does when it finds itself in a particular situation, the situation of having its head tapped. If it has been tapped, it checks whether it was an insect or some projectile that did the tapping, and on that basis it performs some action.

On this model, we explicitly hard-code behaviour into our androids in the form of a computer program. And so we can choose not to hard-code certain other behaviours into the program. In particular, we could avoid introducing any routines that dealt with pain or unhappiness at all. It simply wouldn’t be part of the design of the android to feel pain, or to feel exploited.

The thing is, that’s not how AI works at the moment, and it’s very unlikely that an android, which, remember, was defined as something which acts and behaves exactly like a human, could be coded like that. One of the problems, as you might imagine, is that hard-coding all the possibilities of human action in statements like the one above is just not plausible. What if it wasn’t a projectile or an insect, but a leaf or a floating plastic bag or a strand of its own hair? Or what if the humanoid were a Jain, so that swatting the insect wasn’t an option? Thinking about it, you realize that human action is much too open-ended to be written down as a set of routines like the ones above.

This is confirmed by looking at how current AI works. AlphaGo, for example, the program which famously beat human players at the ancient game of Go, doesn’t have all the possible situations it could find itself in hard-coded into its program. Instead, like other famous AIs, it figures out what to do by itself. This is the crucial point. We don’t say: do this when you find yourself in situation X and want to get to situation Y (where X might be being bothered by an insect, and Y relief from the bothersome insect). Instead, we feed the AI training data in the form of inputs and outputs, Xs and Ys, and let it work out a function that will take it from situation X to situation Y. So what we would do, roughly, is show the AI a bunch of humans interacting with, in this case, buzzing insects, and try to get it to work out for itself how to behave as the humans do, which is to say how to produce the appropriate human-like output to the input of being assailed by an insect.
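To make the contrast with the hard-coded picture concrete, here is a minimal sketch of the learning approach in Python. The library choice (scikit-learn), the feature names, and the labels are my own illustrative inventions, not anything from the show or from real robotics; the point is only that behaviour is fitted from examples rather than written out as rules.

# A toy sketch of learning behaviour from examples rather than hard-coding it.
# The features, labels, and numbers here are invented purely for illustration.
from sklearn.neural_network import MLPClassifier

# Each situation is described by three invented features:
# [head_tapped, insect_nearby, projectile_nearby]
situations = [
    [1, 1, 0],   # tapped, insect buzzing    -> the human swatted
    [1, 0, 1],   # tapped, something thrown  -> the human looked for attackers
    [1, 0, 0],   # tapped, cause unclear     -> the human ignored it
    [0, 0, 0],   # nothing happening         -> the human ignored it
]
observed_reactions = ["swat", "look_for_attackers", "ignore", "ignore"]

# Fit a small neural network to approximate the humans' input-output behaviour.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(situations, observed_reactions)

# The learned mapping lives in the network's weights, not in legible rules:
# there is no single line we could delete to switch off a particular behaviour.
print(model.predict([[1, 1, 0]]))   # likely ['swat']

And because the resulting policy is a tangle of learned weights rather than legible rules, forbidding one specific behaviour (“never exhibit distress”) is not a matter of deleting a routine.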

So why and how does the AI do what it does? That’s not for us to know. You can think of it as a black box that takes in input and produces output, a black box encoding a function that, through learning, increasingly approximates the behaviour of humans as found in the training data.

What that means is that these AIs are going to be, in a sense, uncontrollable. We won’t be able to tell the AI what to do and what not to do, and so in particular it seems we won’t be able to tell it not to exhibit pain-like behaviour in exploitation-like situations.

(You might think that with suitable ingenuity in framing the training data we’ll be able to avoid these problems. After all, self-driving cars are taught from human drivers, but they don’t pick up all the behaviours of human drivers: they don’t text at red lights, and so on. Granting this, we should still not be optimistic about our ability to control what androids will be like. Consider, for example, the striking fact that Google Translate’s neural system developed, without being explicitly designed to, its own internal shared representation (a kind of interlingua) that let it translate between language pairs it had never been directly trained on. At a certain level of sophistication, AI is massively outside our control.)

The key point is that the behaviour of androids is not likely to be something we can pick and choose. And that means that pain behaviour and exploitation behaviour are not going to be things we have control over.

But still, you might think, surely even if this is so, the humanoids will still be machines. It doesn’t make sense to speak of a machine being exploited or suffering, so we shouldn’t be worried even if our humanoids do show signs of suffering. But this isn’t so, and seeing why leads us to the next two premises.

The moral premise

If it doesn’t make sense to speak of machines as suffering, then we don’t need to worry. But we do need to worry. In this section, I’ll argue that we morally ought to treat androids as we treat humans.

In order to make this point, I want to present a variation on a famous argument in the philosophy of religion, Pascal’s Wager. Pascal’s Wager is this: you should believe in God, or at least try to believe in God, because if you don’t and God exists, you’ll miss out on an infinity of happiness in heaven (while if you don’t and God doesn’t exist, you get the, let’s face it, mild pleasures of a sinful life). On the other hand, if you do believe and God exists, then although you’ll have to act in certain ways on earth (going to church, refraining from coveting thy neighbour’s ass, and so on), that hassle will be more than repaid by the infinity of happiness in heaven. So, if there’s even a slight chance that God exists, you should believe.

Without getting too bogged down in what’s known as decision theory, the idea is this: weigh each possible outcome of an action by how likely it is and by how good or bad it would be. A small but real chance of something enormously good or enormously bad can outweigh a near-certain but trivial gain or loss.

Now imagine you’re Logan and you’re wondering whether to jam a fork into the host’s hand. If you do, and it’s not alive, then no harm done, and you’ve had a bit of sadistic fun. If it is alive, then you’ve done something horrible: inflicted suffering on something that can feel pain. That’s really bad. Any decent person feels very bad if they hurt an innocent other. On the other hand, if you don’t do it, you forgo the small sadistic thrill, but you also run no risk of doing something very bad. So you shouldn’t do it, even if you’re almost sure the host is indeed a lifeless machine. The mere chance that it could be a living, suffering thing means you shouldn’t risk it.
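To see the shape of this reasoning, here is a toy expected-value calculation. The probability and the utilities are numbers I’ve invented purely for illustration; nothing in the argument hangs on them, only on the suffering outcome being vastly worse than the sadistic fun is good.

# Toy expected-value calculation for Logan's choice.
# The probability and utilities below are invented for illustration only.
p_host_can_suffer = 0.01      # grant even a 1% chance the host really suffers

fun_of_stabbing = 1           # small sadistic thrill if it's just a machine
harm_of_torturing = -10_000   # gravity of hurting a being that feels pain

expected_value_stab = ((1 - p_host_can_suffer) * fun_of_stabbing
                       + p_host_can_suffer * harm_of_torturing)   # = -99.01
expected_value_refrain = 0.0  # no thrill, but no risk either

print(expected_value_stab, expected_value_refrain)  # refraining wins easily

So long as the chance of real suffering isn’t vanishingly small relative to the gravity of the harm, refraining comes out ahead; that is the Wager-style point, transposed to hosts.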

That shows, I think, that we should treat androids like humans if we think there’s even a slight chance that they are capable of suffering. But now consider our exploited android nurses: when they complain about their long hours and emotional exhaustion, we should be very worried that we are in fact doing something very wrong by shortening their breaks, even if we think it’s very unlikely that there’s a ghost in the machine.

However, remember I’m interested in what will happen, not what should happen. People do things they shouldn’t do all the time, so it could well be that we will all be Logans, even if we think we should be Williams.

The anthropological premise

But there is in fact strong evidence from anthropology that, quite apart from whether we should treat androids like creatures with minds, we are already inclined to, and indeed have been treating far simpler machines that way for decades.

To see this, I want to discuss the work of MIT’s Sherry Turkle. In her book Alone Together she reports work she has done over decades with children and robotic animals. The results are fascinating. One important finding is that children don’t operate with a binary alive/not-alive distinction, one which simply puts humans and animals on one side and toys such as Furbies and Tamagotchis on the other. Rather, for children, the notion of aliveness is gradable and interest-sensitive: stripped of the jargon, this means that one thing can be more alive than another, and that something can be alive relative to a certain purpose. Here are some quotes from the children:

“Well, I love it. It’s more alive than a Tamagotchi because it sleeps with me. It likes to sleep with me.”

“I really like to take care of it. . . . It’s as alive as you can be if you don’t eat. . . . It’s not like an animal kind of alive.” (Alone Together, 28)

For much more discussion, I refer the reader to Turkle’s book. But for now, we can ask: well, so what? What consequences should these facts about children and Furbies have for androids?

You might think these findings are unimportant: just because children treat robotic animals this way doesn’t mean that adults will treat humanoids in a similar way. Two things should be noted. First, this objection requires much more caution than it might seem, for the obvious reason that, in the future, children will be among those who interact with humanoids. Indeed, if humanoids are disproportionately found in the caring services, then to the extent that children use those services, we will be faced with the problem of child-humanoid behaviour on a large scale.

The second thing to note is that it’s not only children who do this. Even if they aren’t as ingenuous or as philosophically surprising about it, adults adopt similar attitudes. Turkle tells us of elderly Japanese people who care for robotic, dog-like creatures, and of young professionals who sincerely say they’d be happy for a robot to replace their boyfriend. It could be that we haven’t come to view life as explicitly gradable in this way only because, up to now, most robotic animal-like things have been aimed at children and the elderly, people who, frankly, tend not to be listened to.

I’m thus somewhat tempted to predict that once humanoids come along, adults will treat them as children treated their Furbies: attributing some degree of life to them. Adults are working with a gradable concept of life, just as children are; it just hasn’t become apparent yet. Moreover, if we’re assuming that these humanoids will be much more sophisticated than Furbies, then it’s plausible that adults will consider them more alive, perhaps much more alive, than any human has so far considered any humanoid robot.

But this means that we not only should be, but will be, moved by their suffering and exploitation. The crucial question now is: how will we deal with these feelings?

The factory farm premise

This might seem all for the good. If we ought to treat humanoids as if they had minds, and we’re by nature disposed to, then things are good, right? What is normatively demanded of us and what we’re instinctively inclined to do coincide, a rare and happy occurrence.

Well, not really. Remember why we’re going to have humanoids in the first place: to do things for us. The whole raison d’être of these beings, which we ought to treat well and will be disposed to treat well, is to be of use to us. That will require, I argued, that we treat them badly. If we need so many people to function as nurses for our aging society, it’ll do no good to introduce a bunch of humanoids who then need to be treated exactly as well as humans. So the economic imperative which will lead to the humanoids’ creation will also lead to their being treated badly. How, then, will we resolve the tension between our morals and instincts on the one hand, and economics on the other?

This is where things start to look less nice. There are already creatures whom we mostly believe ought to be treated well, and whom we are instinctively inclined to treat well, but whose economic raison d’être turns on their being treated badly: animals, and more particularly animals used for food.

Without going into all the gruesome details, consider these facts, which only scratch the surface, culled from Jonathan Safran Foer’s popular account of the morality of food, Eating Animals. The typical chicken you find in a supermarket comes from an animal kept in a cage whose floor area is about the size of the piece of paper you’re reading (smaller than your laptop screen, probably, and only a few times bigger than your smartphone). These cages are stacked high, indoors, and the chickens go from there to their death, never seeing the outside, not to mention their chicken families.

Or consider the eggs you eat. These come from hens who are tricked into believing it’s spring: they are held first in a shed kept entirely dark, on a near-starvation diet (to mimic winter), then in a shed kept lit sixteen hours a day. The hen thinks it’s spring and starts to lay. Once they’re done, it’s back to artificial winter and then to artificial spring again. In that way, they produce three hundred more eggs than they would in nature, and after their first year, when their yield lessens, they are killed. Moreover, the ‘husbands’ of these egg-layers, obviously no use for laying and not raised for food, are simply killed, on the order of 250 million a year.

Make no mistake: walking down pretty much any animal-product aisle of a supermarket, you are walking among the remnants of mass torture that is hard to conceive. And we do find it hard to conceive. Chances are, if you’re a meat eater, reading this paragraph will make no difference to your consumption. If you’re not, it’s considered bad form to judge, at least overtly, those who do eat meat or animal products.

And now let’s return to Westworld. We’re faced with a tension. Human nature and moral reasoning will incline us to treat these humanoids as alive: “alive enough”, at least, to be nurses or companions. Economics will require that they be exploited. That means longer hours in the ward and fewer breaks; that means having them date the really boring people who [insert your idea of undateable boredom here]. That means making them suffer. How to reconcile these facts? How to live with treating these alive-enough creatures badly? Well, if the development of factory farming is a worthy analogy, as I have suggested it is, the thing to do will be to hide it all from view. Don’t display the suffering we’ll inevitably cause these creatures; keep it hidden away.

A place away from day-to-day life, where humanoid creatures are used by humans. Sound familiar? As a reader of this volume, I would hope it does, because I’ve essentially just described Westworld to you. And so my grim conclusion is that Westworld is our future.