The Terminal Experiment Q7: Scanned Minds

1

The use of Peter's scanned mind to create AI seems to me to be different from generative AI in one specific sense: it creates an AI copy of a living human being, which allows for a direct comparison of how the two minds would evolve and react to external events given a common starting point. In other words, such an AI could be compared to a very similar human mind. What are the implications of this?

Comments

  • 0

    Yes, I agree - completely different from generative AI, which basically (this is a simplistic view which @NeilNjae may well prefer to correct) learns language skills and facts from a vast compendium of existing knowledge. My own feeling about the state of the art is that it mimics intelligence by being very versatile at language.

    That said, I now use an AI assistant extensively while coding and preparing website material (though not while writing or generating peripheral material for that). I find it an excellent tool for coding help, so long as you use it cautiously and with a decent amount of prior knowledge. I don't know how well a complete novice coder would do. Here's a geeky example:

    "Can you give me some sample code for calling such-and-such API?"
    "Sure, here it is [code snippet follows]"
    "Hmmm. Shouldn't that be an async call with the await keyword?"
    "Yes you're right, here's the revised code..."

    Now, I happen to know about async/await because of previous work, and just needed the syntax of the surrounding bits to save me looking up other documentation, but if I'd just used the first version as stated then it wouldn't have worked.
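
    To make that concrete, here's a minimal Python sketch of the kind of before-and-after involved. The client class and method names are entirely made up for illustration - the point is just the missing await.

    import asyncio

    class FakeClient:
        """Stand-in for whatever async API client the assistant was asked about."""
        async def get_profile(self, user_id):
            await asyncio.sleep(0)  # pretend to do some I/O
            return {"id": user_id, "name": "example"}

    async def fetch_profile(client, user_id):
        # First draft (wrong for an async API):
        #     profile = client.get_profile(user_id)
        # ...which returns a coroutine object rather than the data.
        profile = await client.get_profile(user_id)  # revised: actually await the call
        return profile

    if __name__ == "__main__":
        print(asyncio.run(fetch_profile(FakeClient(), 42)))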

    Contrastingly, if Sawyer's idea is valid - that a complete person with all prior experience and whatnot could be uploaded and used as the starting point for further development - then that virtual individual is in a much better position to assess options (assuming that other technical problems could be overcome).

    Now, I do think it's only a matter of time before AI assistants get more authentically intelligent and aware (whatever those two words mean) and become personas in their own right. I can imagine an interesting plot where a "proper" AI individual interacts with one of Sawyer's uploaded individuals - could others tell the difference or not? A kind of new take on the Turing test?

  • 1

    I'm no expert on how LLMs work, so can't speak too much to that, but Sawyer's version of AI is what I believe the end goal would be: direct copies that could be iterated on. Imagine a world in which we could "spin up" a direct copy of a person's brain in some way and then start turning on and off different parts of their virtual mind to figure out what changes were made. We would learn so much about the brain that way. Then again, to get to that point we would already have made such massive (almost fictional) advances in our knowledge of the brain that I wonder if we would be interested in that any more.

    There's also the ethical question of whether a perfect copy of an individual's brain would in actuality BE that person or not, and whether it would be right to perform these types of experiments even if they were only done "digitally". Truly, I struggle to see how we would ever get to that level of accuracy in this area - it feels a bit like being able to perfectly model the weather - too many variables to control for.

  • 1

    Did I miss something? I can't recall any discussion of actually creating AI in the book. If anything the sims were a shortcut, and no one seemed to treat them as "Artificial." Mimesis is different from manufacturing.

    Maybe we're talking about different things. Can intelligence be separated from sentience? Sentience entails selfhood, whether or not perceived by the sentient being itself. I saw the story here as more grounded in the old Socratic/Platonic idea that a philosopher's ideas (brainwaves) are sentient beings like children, which seems to me at best an indication of defective thinking about what things are. But obviously a minority opinion.

  • 1

    I'm not sure about that. Currently, intelligences are born with bodies, they grow with the bodies, and they die with the bodies. This is natural intelligence. Artificial intelligence would be any other kind of intelligence - an intelligence that wasn't born, or that didn't inhabit a body. The sims in the book have their origins as scanned human minds, but they take on their own personalities and existences once separated from their origin.

    Is intelligence separate from sentience? I believe so. Plants have a degree of sentience - they can detect light and grow toward it, for example. But they have no intelligence.

    And I'm not sure that sentience entails selfhood. A drone bee or ant is certainly sentient, but perhaps has little concept of the self. What about colony organisms like fungi and corals?

  • 1
    @Apocryphal interesting point. For me sentience indicates responses that suggest comfort and discomfort, which entail that there is a self there, even if it is not aware of itself.
  • 1

    @BarnerCobblewood said:
    @Apocryphal interesting point. For me sentience indicates responses that suggest comfort and discomfort, which entail that there is a self there, even if it is not aware of itself.

    That's pretty much synonymous with "life": bacteria will move from areas of "discomfort" to "comfort". If "sentience" has a useful meaning, it needs to be something more specific than that.

    Most AI researchers get around problems of these definitions by ignoring them. The Weak AI approach is simply to judge tools as being useful or not.

    The definition of "intelligence" is constantly shifting, too. One aphorism from AI research is, "If we don't know how to do some task, it's artificial intelligence; if we do know how to do it, it's just programming." And also "intelligence" is following a "god of the gaps" type argument. There have been plenty of things we've taken as defining intelligence or humanity, and then find examples (natural or artificial) that replicate them (e.g. tool making, or chess playing). That just prompts us to re-define the essential nature of intelligence.

    Personally, I think the current obsession with large language models is a combination of hype, wishful thinking, and the Eliza effect of anthropomorphising things that use words.

  • 1
    > @NeilNjae said:
    > (Quote)
    > Personally, I think the current obsession with large language models is a combination of hype, wishful thinking, and the Eliza effect of anthropomorphising things that use words.

    Do ‘large language models’ represent much of what’s currently being sold as ‘AI’? I’m thinking things like ChatGPT or midjourney, or whatever is driving autonomous trucks. Are these dead-end routes to real AI? I’m not really up on my terminology when it comes to the field, so I don’t know what ‘large language models’ are vs. ‘Generative AI’ or whatever else is out there.
  • 0

    One of the interesting things to me is the shift in AI from lots of separate algorithms, all easily surfaced for individual use, to more generic models where the actual mechanism is submerged from the user. So back in the day (and we're talking some 25 years ago) we would code, customise and work with various types of clustering models, supervised neural networks, fuzzy logic, genetic algorithms and all. I suppose that these are still in play under the surface, but the actual algorithms are hidden behind an interface which simply accepts textual information in written or spoken form.

    Personally I liked dealing with the algorithms individually, as each has strong and weak points and is suited to dealing with particular kinds of problems. On the other hand, I totally get that for general-purpose use, the interface has to be accessible to generic users with little prior technical knowledge.

    Now, I don't know that any single one of these algorithms could in and of itself be souped up to what @Apocryphal called real AI, but I can see how particular features of the overall package of intelligence could use one or other. So genetic algorithms are often good for finding an optimal solution to a poorly specified problem, clustering models are good at classifying input signals into inferred real-world objects, fuzzy logic has some mileage in trying to understand the key driving factors behind a decision, and so on. But I don't know nearly enough about the overall architecture of a learning system to know more than that - e.g. how you put together different bits and pieces for an autonomous vehicle.
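
    For anyone who hasn't met one, here's a toy sketch of the general shape of a genetic algorithm in Python. The bit-string encoding, fitness function and parameters are invented purely for illustration - real problems (like that shoe-leather one below) have far messier encodings.

    import random

    # Toy genetic algorithm: evolve a bit-string towards all 1s.
    LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 100, 0.05

    def fitness(genome):
        return sum(genome)  # count of 1s - higher is better

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, LENGTH)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == LENGTH:
            break
        parents = population[:POP_SIZE // 2]  # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print(generation, fitness(population[0]))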

  • 1
    > @NeilNjae said:
    > (Quote)
    > That's pretty much synonymous with "life": bacteria will move from areas of "discomfort" to "comfort". If "sentience" has a useful meaning, it needs to be something more specific than that.

    I think you've understood my point. Sentience and life are intimate. When we talk about death, ask questions about it, we are asking questions of life. I'm wondering, though, if sentience is not used to express this aspect of life, which is a complex phenomenon, what word would you use?
  • 1

    @Apocryphal said:

    @NeilNjae said:
    (Quote)
    Personally, I think the current obsession with large language models is a combination of hype, wishful thinking, and the Eliza effect of anthropomorphising things that use words.

    Do ‘large language models’ represent much of what’s currently being sold as ‘AI’? I’m thinking things like ChatGPT or midjourney, or whatever is driving autonomous trucks. Are these dead-end routes to real AI? I’m not really up on my terminology when it comes to the field, so I don’t know what ‘large language models’ are vs. ‘Generative AI’ or whatever else is out there.

    A "language model" is something that is a representation of language. You can use it to understand or generate language. The "large language models" underlying things like ChatGPT use neural networks as an underlying technology and are much better models of language than what we had before. ChatGPT generates text by starting from your prompt then using the language model to predict the next word, which it generates. It then uses prompt + generated word to generate the next word, and so on, until it's generated the full response.

    Things like Midjourney also use neural networks, to represent a "model" of images, connected to a model of descriptions. When you ask it to generate a new image, it starts with some random noise, upscales that into the full image, then works out the description of it. It then adjusts the random-noise seed to nudge it in the direction of the supplied description, then generates a new image. Repeat until you get something good.

    Autonomous vehicles don't do the generation task. They use a whole collection of techniques to collect and process sensor data, build a model of the world, then pick an appropriate next action.
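
    Schematically, that's a sense / model / act loop, something like the following sketch - where every function is a placeholder, not anything a real driving stack would do:

    def read_sensors():
        return {"camera": "frame...", "lidar": "points...", "speed": 12.0}

    def update_world_model(model, sensor_data):
        model["obstacle_ahead"] = sensor_data["speed"] > 10.0  # made-up rule
        return model

    def choose_action(model):
        return "brake" if model.get("obstacle_ahead") else "maintain speed"

    world_model = {}
    for _ in range(3):  # a real vehicle loops continuously
        world_model = update_world_model(world_model, read_sensors())
        print(choose_action(world_model))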

    @RichardAbbott said:
    One of the interesting things to me is the shift in AI from lots of separate algorithms, all easily surfaced for individual use, to more generic models where the actual mechanism is submerged from the user. So back in the day (and we're talking some 25 years ago) we would code, customise and work with various types of clustering models, supervised neural networks, fuzzy logic, genetic algorithms and all. I suppose that these are still in play under the surface, but the actual algorithms are hidden behind an interface which simply accepts textual information in written or spoken form.

    There's a couple of things here. One is how neural networks have absorbed most other machine learning and artificial intelligence approaches. The increase in available data and processing ability has allowed us to just throw compute and data at a problem until the neural network gets good enough. Wasteful, but more effective than any hand-crafting of solutions.

    The other is the current excitement around text-based systems, for things like question answering, summarisation, and text generation. Things like ChatGPT are tools for end users, with no user-alterable parts inside. (You can adjust the underlying models, but it's a black art.)

  • 2

    @BarnerCobblewood said:

    I think you've understood my point. Sentience and life are intimate. When we talk about death, ask questions about it, we are asking questions of life. I'm wondering, though, if sentience is not used to express this aspect of life, which is a complex phenomenon, what word would you use?

    I don't. The whole thing is an ill-defined mess. Philosophers of mind use introspection to build castles in the air about what mental phenomena are. But things like qualia don't exist in any observable, physical form. They can't be the subject of empirical science (at least, at the moment). Don't get distracted by the noise; work on producing something useful (and safe, and non-discriminatory, and socially good, and...). Those are things you can point to and measure.

  • 1

    @NeilNjae said:
    I don't. The whole thing is an ill-defined mess. Philosophers of mind use introspection to build castles in the air about what mental phenomena are. But things like qualia don't exist in any observable, physical form. They can't be the subject of empirical science (at least, at the moment). Don't get distracted by the noise; work on producing something useful (and safe, and non-discriminatory, and socially good, and...). Those are things you can point to and measure.

    This makes almost perfect sense to me - I think there is a problem lurking in the bracketed list, which shows that observable physical forms (quanta?) can't be so neatly isolated from what I think you mean by qualia. But I agree that they don't exist at all in the same way. If you have time I'd like to know what qualia are / mean / refer to in this context. A few uncontroversial examples would suffice.

  • 0

    @NeilNjae said:
    There's a couple of things here. One is how neural networks have absorbed most other machine learning and artificial intelligence approaches. The increase in available data and processing ability has allowed us to just throw compute and data at a problem until the neural network gets good enough. Wasteful, but more effective than any hand-crafting of solutions.

    The other is the current excitement around text-based systems, for things like question answering, summarisation, and text generation. Things like ChatGPT are tools for end users, with no user-alterable parts inside. (You can adjust the underlying models, but it's a black art.)

    Thanks for this - I suspected it had developed this way but have been away from active involvement for too long to be sure. Pity really - the little firm I worked for made a nice living out of 6-week pilot projects to try to identify what kind of bespoke network / whatever we would use for their problem, and it led to a lot of job variety! One of the most fun was trying to set up a system to optimise cutting of shoe-leather pieces from a cow hide (yes, for real - it's a super-hard problem for several reasons). We never got very far with that but were able to experiment with neural networks and genetic algorithms, and the problem itself was a lot of fun to tackle. Of course, it was in the day when all the processing happened on a single machine so you were severely constrained compared to modern systems.

  • 1

    So this means that AI doesn’t really think like we do. We would conceive of a plot, then assemble the words in such a way as to reveal the plot. But AI arrives at a plot by following a word-tree, picking one word at a time without knowing the plot, but arriving at one by process of elimination. Is that correct?

  • 0

    @Apocryphal said:
    So this means that AI doesn’t really think like we do. We would conceive of a plot, then assemble the words in such a way as to reveal the plot. But AI arrives at a plot by following a word-tree, picking one word at a time without knowing the plot, but arriving at one by process of elimination. Is that correct?

    With the proviso that the original seed request, modified by subsequent clarifications, adjusts the prior probabilities of what word follows. So if I say "write a children's story about some woodland animals having a picnic" or instead "write a thriller set in London where an angry man with nothing to lose breaks up a people-trafficking gang", then the mesh of probabilities branches in completely different directions and (hopefully) leads to totally different results.

    In fact, for fun I tried it out... but with just a plan rather than the whole thing... results follow...

  • 0

    Prompt 1: "write a 250-word plan for a children's story about some woodland animals having a picnic"
    Response 1
    Title: The Woodland Picnic Adventure
    Summary: A group of woodland animals—Benny the Badger, Rosie the Rabbit, Felix the Fox, and Tilly the Tortoise—decide to have a grand picnic in the meadow. Each brings something special, but unexpected challenges arise, teaching them about teamwork, friendship, and appreciating nature.
    Beginning:
    The woodland friends excitedly plan their picnic. Benny brings honey, Rosie picks wild berries, Felix packs freshly baked acorn muffins, and Tilly carries a blanket. As they set off, the sun shines warmly, and the birds chirp cheerfully.
    Middle:
    They find the perfect spot near the bubbling brook, but trouble arises! A gust of wind scatters the picnic supplies, and Felix’s muffins roll downhill. Benny’s honey attracts a swarm of bees. Rosie tries to rescue the muffins but gets tangled in vines, and Tilly struggles to lay out the blanket.
    The animals realize they must work together. Benny gently guides the bees away, Rosie untangles herself with Felix’s help, and Tilly steadies the picnic setup.
    Ending:
    With teamwork, they restore their picnic. The food is delicious, and they enjoy laughter, stories, and the beauty of the woodland. As the sun sets, they agree to make the picnic a seasonal tradition, cherishing the lessons they learned.
    Themes:

    • Friendship & Teamwork: Helping each other through challenges.
    • Problem-Solving: Finding solutions together.
    • Nature Appreciation: Enjoying the beauty around them.
      Would you like me to refine any details or expand on a particular scene?
  • 0

    Prompt 2: "write a 250-word plan for a thriller set in London where an angry man with nothing to lose breaks up a people-trafficking gang
    Response 2
    Title: Shadow of Vengeance
    Summary: Haunted by his past and desperate for justice, former detective Jack Mercer uncovers a brutal people-trafficking ring operating in London’s underground. With no family left and nothing to lose, he takes the law into his own hands, plunging into a world of danger and deception.
    Beginning:
    Jack Mercer, once a respected detective, is now a broken man after his wife’s disappearance. While searching for answers, he stumbles upon evidence linking her fate to a powerful trafficking gang operating in the city’s shadows. With his former colleagues unwilling to act, Jack embarks on a relentless mission to dismantle the network himself.
    Middle:
    Jack infiltrates the gang’s operations, moving through abandoned warehouses, hidden clubs, and corrupt networks. He faces brutal enforcers, exposes crooked officials, and gathers intelligence from unlikely allies—including a journalist risking everything to break the story. As he gets closer, the gang’s leader, Victor Kane, tightens security, turning London into a battlefield.
    Ending:
    Cornered but unrelenting, Jack orchestrates the takedown—freeing victims and exposing Kane’s empire. A final confrontation erupts in the gang’s stronghold, where Jack must decide between vengeance and justice. As sirens close in, he walks away, leaving behind a shattered empire and a city that will never forget his fight.
    Themes:

    • Justice vs. Revenge – The price of doing what’s right.
    • Corruption & Power – How crime thrives in the shadows.
    • Redemption – Finding purpose in tragedy.
      Would you like me to expand on any part?
  • 0

    PS that was Windows CoPilot's response, and no I'm not planning to take either story into any more detail :)

  • 1
    edited June 16

    @Apocryphal said:
    So this means that AI doesn’t really think like we do. We would conceive of a plot, then assemble the words in such a way as to reveal the plot. But AI arrives at a plot by following a word-tree, picking one word at a time without knowing the plot, but arriving at one by process of elimination. Is that correct?

    I'm not sure that either of these is true. I'd like to hear from @NeilNjae about this.

    Anyway this is where I am at with thinking about this stuff:

    When discussing AI, we need to be more careful in our language. There are several technologies in use that might work in quite different ways - for example some AIs generate arrangements of pixels, or perhaps better, data that is congruent with hardware that displays pixels. Many LLMs (e.g. ChatGPT) use and arrange tokens (not words) congruent with strings of text, and what those are is rather unclear to me. This is not a word-tree. There is nothing tree-like that is already there that such an LLM traverses. I don't know "what" is there. Maybe @NeilNjae can help.

    Anyway, you are using two ideas: plot, and word. Leaving aside the significance of plot, the idea of word is quite complex.

    In English we introduce many spaces into written text, and most readers think those spaces delimit words, which correspond with things. Written English didn't always have these spaces, nor was spelling at all standardised. Readers also tend to think words can be recognised because they are always composed of the same letters. This is too simple an idea - see the 4-page PDF review by a scholar at Northwestern, downloadable at

    https://www.researchgate.net/publication/272651938_Saenger_Paul_Henry_Space_between_Words_The_Origins_of_Silent_Reading_Stanford_Calif_Stanford_Univ_Pr_Figurae_Reading_Medieval_Culture_1997_480p_4950_alk_paper_ISBN_0-8047-2653-1_LC_96-35088

    for a review that discusses the issues in play. Written language is nowhere near the level of phenomenological complexity of the languages it represents. It's not the entirety of language; it's already a massive reduction of language. At the moment, software like LLMs processes text representations of language, not what the word "language" seems to refer to.

    As I understand it, LLMs like ChatGPT seemingly have no capacity to deal with what words refer to, or with whether there are or are not such things. That doesn't mean they won't later, just that they don't presently. Some experts disagree with this - it's a point of contention. However, I think it is difficult to demonstrate that understanding (meaning processes such as connecting words with referents) is an aspect of how LLMs work. Whatever we mean when we talk about the intelligence of this software, it is not biological, let alone human.

    Returning to plot, I think the process of conceiving of a plot, then assembling words to express it, is so distinct from the process of generating a string of tokens that there is not much use in trying to use one as an analogy for understanding the other. One thing it does reveal, though, is that if plot is to be found (not yet in evidence), it is not "in" the words, yet not demonstrable apart from them.

    Edited: Because I spoke about English, when I mean texts that were produced in the British Isles.

  • 1

    Hmm, yes. Much of which seems to be to say: the things we are currently calling Artificial Intelligence are not at all intelligent. They still remain tools - sets of rules or algorithms - written by human intelligences with the purpose of speeding up or taking the drudge out of time-consuming tasks. In some cases, maybe improving the efficiency or precision of things humans do. But nothing, as of yet, that's actually intelligent.
    Is that correct?

    However, the scanned minds in the story really were intelligent. The question posed above is about the implications behind intelligence based on a scanned mind (which is perhaps wholly impossible?), versus a built intelligence. It sounds like a built intelligence might be totally alien to us in how it thinks. Would a scanned mind be more human?

  • 1

    On the other hand, it's hard to see how anyone could read @RichardAbbott 's two examples and not conclude that the AI in question knew what words meant, what muffins were, and that muffins could roll, lol.

  • 2

    @Apocryphal said:
    So this means that AI doesn’t really think like we do. We would conceive of a plot, then assemble the words in such a way as to reveal the plot. But AI arrives at a plot by following a word-tree, picking one word at a time without knowing the plot, but arriving at one by process of elimination. Is that correct?

    LLMs don't "think" like humans.

    Yes, LLMs generate text by creating one word at a time. But which word to generate is determined by the context of that word. That includes all the words that have come before, the prompt, and what the LLM "knows" about how words go together from processing all the training data.

    That context could be argued to contain the "plot" that the machine will elucidate. The trouble is, the context appears as a bunch of activations in a very large neural network, basically a large set of numbers. No-one has any real idea how those numbers encode any particular piece of knowledge or context. The best we can do is ask the LLM questions about how it produced some response. Do the answers to those questions mean anything? I'm far from convinced.

    @BarnerCobblewood said:

    Anyway, you are using two ideas: plot, and word. Leaving aside the significance of plot, the idea of word is quite complex.

    "Word" doesn't have a clear definition. We've all heard the amusing comments of German having a single, long word for something that would be a phrase in English. Agglutinative languages like Finnish mark tense and case with affixes, where English uses function words. In English, the "apostrophe-s" used to show possession behaves like a function word, though it has the form of a suffix (look up "clitic").

    Whatever we mean when we talk about the intelligence of this software, it is not biological, let alone human.

    ...and there's the rub. What do we mean when we talk about "intelligence"?

    Another AI aphorism.

    Birds and aeroplanes move through the air, but in different ways. They both fly.

    Fish and submarines move through the water, but in different ways. Fish swim, but submarines don't.

    Humans and computers process information, but in different ways. Humans think. Do computers?

    @Apocryphal said:
    Hmm, yes. Much of which seems to be to say - the things we are currently calling Artificial Intelligence are not at all intelligent. They still remain tools - sets of rules or algorithms - written by human intelligences with purpose of speeding up or taking the drudge out of time-consuming tasks. In some cases, maybe improving the efficiency or precision of things humans do. But nothing, as of yet, that's actually intelligent.
    Is that correct?

    That's my view. Some AI advocates in Silicon Valley take the view that we're just about to create Artificial General Intelligence, a machine that's as intelligent as a person in all functional ways.

    Definitions are so murky that we don't know.

  • 2

    @BarnerCobblewood said:

    This makes almost perfect sense to me - I think there is a problem lurking in the bracketed list, which shows that observable physical forms (quanta?) can't be so neatly isolated from what I think you mean by qualia. But I agree that they don't exist at all in the same way. If you have time I'd like to know what qualia are / mean / refer to in this context. A few uncontroversial examples would suffice.

    I'm not using "qualia" in any controversial way. My point is that we have no way (at present) to connect a subjective experience of something to an organic state in a brain, and certainly can't relate any of that to a collection of activations in a large language model.

    Without that empirical base, I suspect our understanding of minds and intelligence is about as accurate as Pythagoras's understanding of force and momentum.

  • 1
    Yes, thanks for lending us your expertise!
  • 0

    @NeilNjae said:

    @Apocryphal said:
    Hmm, yes. Much of which seems to be to say - the things we are currently calling Artificial Intelligence are not at all intelligent. They still remain tools - sets of rules or algorithms - written by human intelligences with purpose of speeding up or taking the drudge out of time-consuming tasks. In some cases, maybe improving the efficiency or precision of things humans do. But nothing, as of yet, that's actually intelligent.
    Is that correct?

    That's my view. Some AI advocates in Silicon Valley take the view that we're just about to create Artificial General Intelligence, a machine that's as intelligent as a person in all functional ways.

    My own feeling is that when many people talk about intelligence or sentience, what they actually mean is humanness. Both in the AI sphere and the other-species sphere (cetacea, corvids, colony insects and so on) there is a tendency to draw a line somewhere - "machines or animals will never be able to do xyz, but people can" - but then we find that machines or animals do in fact do just that, and we quietly drop the condition. @NeilNjae expressed this rather more eloquently as

    The definition of "intelligence" is constantly shifting, too. One aphorism from AI research is, "If we don't know how to do some task, it's artificial intelligence; if we do know how to do it, it's just programming." And also "intelligence" is following a "god of the gaps" type argument. There have been plenty of things we've taken as defining intelligence or humanity, and then find examples (natural or artificial) that replicate them (e.g. tool making, or chess playing). That just prompts us to re-define the essential nature of intelligence.

    All of which makes me think that collectively as a species we're not so interested in defining or discovering intelligence elsewhere, but rather in engaging in a constantly shifting quest to make ourselves unique.

  • 2

    I came across this article on the disagreements about "what is artificial general intelligence?" : https://arstechnica.com/ai/2025/07/agi-may-be-impossible-to-define-and-thats-a-multibillion-dollar-problem/

  • 0

    That's a nice read, thanks for sending the link.

    I was intrigued by this statement:

    Modern language models can pass some limited versions of the test not because they "think" like humans, but because they're exceptionally capable at creating highly plausible human-sounding outputs.

    Again this relates to a common theme in the article: that it all depends what "kind" of human is being assumed as the basis for comparison. So for example, folk with particular kinds of autism don't (so far as one can tell) have a capacity for emotional empathy with other people. However, they are very adept at imitating what they see and understand as the social norms around them, and so can merge successfully into situations that would otherwise be very exposing.

    Now, is this different from being "exceptionally capable at creating highly plausible human-sounding outputs"? It seems so, as in one case we're talking about software programs and in the other about people, but it's something of a moot point whether there is an actual difference.

  • 1

    That pretty much confirms what I was thinking - that despite all the hype, there is no actual AI at the moment, and we might never even see a true machine intelligence (however that gets defined). What we have now should probably not be called AI at all - we should think of it as advanced automation.
