The Terminal Experiment Q8: AI and Immortality vs Obsolescence


One thought that occurred to me after reading this novel is that AI minds are potentially immortal. That is, as long as they receive power and have a network in which to live, they can exist forever. As humans, I doubt we'd ever really understand how knowledge of one's own immortality would affect a mind.

And although an AI mind might potentially be immortal, it would not be immune to obsolescence. What happens when StarkGPT v2.5 meets StarkGPT v9.5? Would AI minds fear obsolescence as we fear death? Will there be competition among AI minds, or cooperation? Could they cannibalize and consume one another?

Throwing this out for discussion!

Comments

  • 0

    Oh yes, totally! Not wishing to advertise and all that, but one of the subplots in The Liminal Zone was exactly this idea: Scribe class personas feel superior to the older Sapling class, and both are rather disdainful of the original Stele class... Conversely, the older ones opined that coding shortcuts had been taken with the newer ones, leaving them insufficiently flexible to adapt to new situations.

    I didn't assume immortality, but rather that an ailment broadly similar to dementia would eventually overcome all of them.

  • 1

    @Apocryphal Not sure what you mean by immortality here. It seems like humans know (more or less) that they will cease to exist, and any AI that was at a similar level would also. The time-scale might be different, but the cessation would be the same.

    @RichardAbbott said:
    I didn't assume immortality, but rather that an ailment broadly similar to dementia would eventually overcome all of them.

    This is a good idea.

  • 1

    @BarnerCobblewood Humans will die of old age - they are mortal. But there's no reason an AI would ever die of old age; AI is effectively immortal. That doesn't mean it can't cease to exist, but its end is not guaranteed if it is alive out there on the net and able to move from one place of memory to another.

  • 0

    All this suddenly reminded me of a trilogy I read years ago - Omnivore, Orn, and OX, from the late '60s to early '70s, by Piers Anthony, sometimes collected into a single volume as Of Man and Manta. Like a lot of Piers Anthony, his male-female relationships are kind of weak and male-dominated, but the ideas he is trying to explore are interesting. Now, the relevant bit is when he likens artificial life to Conway's Game of Life (https://conwaylife.com/) - most patterns disperse and dwindle away, but a few are
    a) self-sustaining - the pattern maintains its shape over a series of iterations and moves across the playing area, or
    b) self-replicating, where a stable pattern (a "glider gun") spawns periodic children (usually called gliders) which then zip off elsewhere.

    Assuming this has anything to do with either AI or virtual humans, it provides a framework where a dynamic object which is constantly in flux can persist coherently, spin off new "individuals" or else simply fizzle out.
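
    The self-sustaining case (a) can be sketched in a few lines of Python. This is just an illustrative toy - the sparse-set representation and the `step`/`glider` names are my own choices, not anything from the novel - but it shows the point: the glider reproduces its own shape every four generations, displaced one cell diagonally, so the "individual" persists even though no particular cell does.

    ```python
    from collections import Counter

    def step(live):
        """Advance one Game of Life generation; `live` is a set of (x, y) cells."""
        # Count how many live neighbours each cell has.
        neighbours = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next generation if it has exactly 3 live
        # neighbours, or if it is alive now and has exactly 2.
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    # The glider: a 5-cell pattern that regains its shape every 4
    # generations, shifted one cell down and one cell right.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

    state = glider
    for _ in range(4):
        state = step(state)

    # After 4 steps, the same shape, one cell diagonally over.
    assert state == {(x + 1, y + 1) for (x, y) in glider}
    ```

    A dynamic object that is pure process, yet coherent over time - which is roughly the claim being made about AI minds above.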

  • 1
    @Apocryphal I think you're saying that AI is not embodied, but it seems to me that it is embodied in the hardware that supports it. Likewise biological living beings refresh and renew their physical bodies, until they cannot. When that fails the body can no longer support life and they die. When the hardware fails, so does the AI. Why is this not analogous with the death of a biological being?

    It seems to me that death, as an aspect of actuality, is intimately bound up with self as an aspect of actuality, and not bound up with matter and energy. When we say someone has died, it is their self that died, again whether they themselves know it or not.
  • 1

    I’m saying that AI is not fixed to its body.

    What prevents an AI that lives on the internet from moving to new hardware? Nothing that I can see.
    When my hardware gives out - my mind will give out too. I can’t move my mind to new hardware. When I can, maybe my perception of mortality will change - but for now it’s stuck. AI, on the other hand, may have no such restriction. Even if it did - what’s to say the hardware will give out? It’s not made from cells that fail to regenerate.

  • 1
    edited June 13
    I think that I get what you're saying. What I'm saying is that you sound like you are thinking of AI as an enduring self, not as, say, a transient arrangement of transistors that is entirely ephemeral, and so nothing other than a manifestation of a bodily state that has no existence of its own. Think of a rainbow. Does it have a body?
  • 0
    > @BarnerCobblewood said:
    > I think that I get what you're saying. What I'm saying is that you sound like you are thinking of AI as an enduring self, not as, say, a transient arrangement of transistors that is entirely ephemeral,

    Correct. It’s software, not hardware. It does need hardware to interact with the world at large, but it doesn’t need any specific piece of hardware - it can move from one bank of computers to another, given a connection.


    >and so nothing other than a manifestation of a bodily state that has no existence of its own.

    I would say it does have an existence- or it would if we actually had AI which could self determine (which in my view we don’t yet, just complex algorithms), but that existence is not tied to a specific physical body.


    >Think of a rainbow. Does it have a body?

    A rainbow doesn’t have a body, but it does have an existence. (It does have a physical manifestation in the form of water droplets and photons, but I wouldn’t call this a body). I do see AI as being rather like this, yes.
  • 2

    Would an AI that is not limited by a physical constraint, but still has to face its ultimate end in the heat death of the universe, have an existential crisis if it really began to dwell on that? While human life is so short, there would still be an ultimate end coming for an AI - and if that AI has a massively large intellectual capacity, maybe it would suffer an equally massive existential crisis!

  • 1

    It's an interesting question. No reason to think that an AI will have the same perception of time that we do.

  • 0
    > @Apocryphal said:
    > A rainbow doesn’t have a body, but it does have an existence. (It does have a physical manifestation in the form of water droplets and photons, but I wouldn’t call this a body). I do see AI as being rather like this, yes.

    At my house we often see rainbows in the late afternoon from my front porch. On days when the sun is in the same position, we are in the same place, and the rain is falling in the east, we see an identical rainbow in exactly the same place. I don't think that makes that rainbow an enduring thing, or that it returns. It doesn't come and go because of comfort and discomfort. Or I guess it's possible rainbows exist everywhere all the time, even when they don't have the physical requirements to manifest at any particular time, because all rainbows are identical. I get how this might be similar to the existence of mathematical entities (AI). But...

    I don't think this is the way that the kind of self you seem to be attributing to AI exists, and it's not at all clear to me that we can just apply categories that make sense for one mode of existence to another mode of existence. It makes sense to speak of death for living beings, but it doesn't make sense (to me) to say that all things that do not die are immortal. Like a rainbow.
  • 1

    I guess the rainbow is like the tree in the forest... On any given day when there's both sun and showers, you could in principle identify the exact locations someone would have to be in to see it, but you don't know for sure unless you get yourself there.

  • 2
    edited June 16

    The trouble with a rainbow is that although you can see it and run toward it, you can never actually reach it. And that may be another thing rainbows have in common with AI.
