Thursday 10 June 2010

Mortality makes us human? Ugh.

Roger Scruton argues against the transhumanists:
But there is truth in the view that hope springs eternal in the human breast, and false hope is no exception. In the world that we are now entering there is a striking new source of false hope, in the “trans-humanism” of people like Ray Kurzweil, Max More and their followers. The transhumanists believe that we will replace ourselves with immortal cyborgs, who will emerge from the discarded shell of humanity like the blessed souls from the grave in some medieval Last Judgement.

The transhumanists don’t worry about Huxley’s Brave New World: they don’t believe that the old-fashioned virtues and emotions lamented by Huxley have much of a future in any case. The important thing, they tell us, is the promise of increasing power, increasing scope, increasing ability to vanquish the long-term enemies of mankind, such as disease, ageing, incapacity and death.

But to whom are they addressing their argument? If it is addressed to you and me, why should we consider it? Why should we be working for a future in which creatures like us won't exist, and in which human happiness as we know it will no longer be obtainable? And are those things that spilled from Pandora's box really our enemies – greater enemies, that is, than the false hope that wars with them? We rational beings depend for our fulfilment upon love and friendship. Our happiness is of a piece with our freedom, and cannot be separated from the constraints that make freedom possible – real, concrete freedom, as opposed to the abstract freedom of the utopians. Everything deep in us depends upon our mortal condition, and while we can solve our problems and live in peace with our neighbours we can do so only through compromise and sacrifice. We are not, and cannot be, the kind of posthuman cyborgs that rejoice in eternal life, if life it is. [emphasis added] We are led by love, friendship and desire; by tenderness for young life and reverence for old. We live, or ought to live, by the rule of forgiveness, in a world where hurts are acknowledged and faults confessed to. All our reasoning is predicated upon those basic conditions, and one of the most important uses of pessimism is to warn us against destroying them. The soul-less optimism of the transhumanists reminds us that we should be gloomy, since our happiness depends on it.
(HT: ALD)

Yes, it would be wrong to be overoptimistic about the coming of the singularity, but that's not the pessimism Scruton's calling for. He wants pessimism about the idea that massively extended lifespan or immortality could be consistent with being human.

Let's take the limit case then. At what point does expanded lifespan cease making us human? Once we've doubled average life expectancy? Surely not: we've already more than doubled it since the Middle Ages. Would our medieval ancestors not recognize us as human? Of course they would. Once we've increased it by a century? Two centuries? Lazarus Long certainly seems human to me.

I can agree that if immortality is achieved through uploads, we might well not recognize those descendants as human. But various cyborg life-extending implants? Why not? Aren't folks with pacemakers cyborgs too? I count them as human. How about cochlear implants? Future versions of cochlear implants that give normal folks augmented hearing? Future artificial organs that can be replaced indefinitely? Uploads seem a category shift; the rest seems pretty much on the continuum we're already on.

I can't see anything about the human condition that requires short lifespans. Joy in the young, reverence for the old, love, friendship, happiness: I can't see these doing anything but strengthening with longer time horizons. I can buy arguments that uploads wouldn't be human, but I'd definitely prefer being uploaded moments prior to death to not being uploaded at all.

I'm far less an "immortality through kids" guy than "immortality WITH them".

14 comments:

  1. Even an uploaded consciousness need not be inhuman if it could then be placed into a manufactured or cloned body. And if we're going down the sci-fi path (inevitable given the scenario above), then I'd argue that Asimov's Andrew Martin was eventually every bit as human as you or I. In many ways he was better than the humans he sought to emulate; sadly, there are some pretty crappy people out there.

  2. Lats: My worry is more that uploading, as best I understand it, destroys the original form. So even if the neural simulation works perfectly, the current instance of me is destroyed when the simulation starts. By contrast, giving me some hardware upgrades modifies but doesn't destroy the current instance of me. That's why I wouldn't upload other than at a near-death point. And why I wouldn't go anywhere near a Star Trek transporter unless the alternative were near-certain death: the current instance of me is destroyed, with a replica picking up where I left off.

  3. I'm very suspicious of the idea that a perfect copy of you isn't you. If there's continuity of experience and functional equivalence, I don't see how you'd say it's not you without resorting to some sort of Cartesian soul.

  4. So, if you had a perfect copy of you standing beside you, and a madman were going to shoot one of the two of you, you'd be indifferent? Really?

  5. I'd think the self-preservation mechanism (please don't kill me) kicks in for both copies. Of course, I guess the fact that both copies exist at the same time means that one is not a perfect copy of the other anyway; they started diverging at the point of copying.

  6. I'm sure I wouldn't be indifferent at the time for the reasons Duncan mentions.

    If I knew at some point in the future a perfect copy of me would be made and one would be killed, I'd be indifferent as to whether it was the one with more continuity of physical components.

    Of course, that's me standing back and considering it dispassionately. I think that's the right attitude, but we're all folk mental essentialists and a mistaken view of psychology might kick in in the heat of the moment.

    @Duncan: yes, but greater self-preservation for the instance of the self in which my consciousness resides than for the duplicate of it.

    @Brad: I just can't bite the bullet on that one. And even starting to think about the rate at which I discount other mes' (how do you do first person plural possessive for clones anyway?) utility relative to current me's utility makes my brain hurt.

  8. Eric: I think it's natural to feel that way intuitively. I don't see any good argument to the effect that continuity of atoms (as distinct from continuity of experience and functional equivalence) is relevant for identity. If one atom is as good as any other, why does it matter?

  9. @Brad: Because I know that, even if me' continued exactly as I would have, current me would cease enjoying experiences.

  10. What's the relevant difference between you' and current you? Physical continuity seems just as arbitrary as something like your name.

    I'm saying there's no inner core which makes you you; the self is emergent from a bunch of physical stuff, and I don't think it matters whether we replace all that stuff with equivalent but numerically distinct stuff.

    I'm sure this is a question Robin Hanson would answer to our mutual satisfaction.

  11. Hanson would be indifferent as to whether me or me' is shot. In either case, ME gets smaller. (Maybe all caps is the best way of pluralizing first person in cloning-talk.)

    Hanson would further ask what the difference is between you being shot and you' continuing, and you going to sleep and you waking up.

    And I'd start squirming.

  12. If Hanson agrees with me, I'm claiming victory. :P

    It is really counter-intuitive, but I'm pretty sure it's right. We shouldn't expect our intuitions to match the world all that much, and we should discard those we find wanting.

    I do hear Oakeshott and Hayek yelling in my ear when I say that, though.

  13. I simply cannot imagine a world in which I'd be indifferent between me and me' being shot. Or one in which I wouldn't be willing to pay reasonably large sums to ensure that me' is shot instead of me.

    Count me with Homer on this one, should I get the magic hammock.

  14. Really interesting discussion on the nature of individuality and consciousness. I have to agree with Eric on this one, though: I'm pretty sure I'd prefer me 2.0 to be the one taking the bullet, rather than me 1.0. No doubt me 2.0 would feel the same about me 1.0, though, so it would probably be a matter of chance as to which copy survived. Ideally we'd both do something devilishly clever, subdue the madman, and wrest his gun away from him, thus ensuring mutual survival. Yes, that's how it would play out...
