BSFA nominee: “Crystal Nights”

Today’s story: “Crystal Nights” by Greg Egan. Read here, or listen here. Today’s opinion roundup starts with Karen Burnham:

“Crystal Nights” covers enough ground for any ten short stories. Actions have consequences here too, but messing with the nature of the universe is more than simply metaphorical. A rich dot-com-style billionaire sinks a considerable portion of his fortune into developing the fastest computer ever. And he keeps the technology all to himself. (Egan may not be familiar with how computer geniuses become billionaires – they can be obsessive geniuses, but usually if they keep the things they do secret they don’t become the rich kind.) He hires a team of people to put together a complete simulation of a universe inside the computer. His plan is to evolve an intelligent lifeform inside the computer that will then be able to help him in the inevitable war of super-intelligences that he just *knows* is coming. (Again, I’m just not sure that people this unstable really run billion-dollar software companies.) He repeatedly tweaks the design of the universe to keep evolution going in the way he wants it to: towards abstract thought, towards spoken and written language, towards sophisticated mathematics. Entire species evolve and go extinct in a heartbeat. He’s literally playing God. Let’s stop for a moment and reflect on the implications of intelligent design. What if someone has designed us, and the world, to achieve an evolutionary outcome? Given all the incredible pain, misery, and suffering that goes on in the world, how fucked up would that entity have to be? Egan presents us with the answer to that question in this story’s protagonist.

Eventually the billionaire talks to his creations directly, telling one of them essentially what he wants and why. He lays out the choices: help him, or he’ll regretfully have to destroy them and start over. He leaves an “Easter Egg” for them on their Moon, in the form of a monolith straight out of 2001. Through this interface, they can interact, in the most limited possible way, with our universe. We’ve all read “Frankenstein,” and we know what happens to unethical creator figures. The computer beings find a third way and forge their own destiny, and we can’t help but cheer. This review may seem spoiler-ridden, but there’s so much more going on in this story than my bare-bones summary can begin to cover. Egan is one of my all-time favorite authors, especially when he’s using hard sf to examine ethical propositions. Here he’s in excellent form. All the world-building and the descriptions of the artificial simulation and the computerized evolutionary process are fascinating. This one substantial story is probably worth the price of the issue alone, and I’ll be keeping it in mind come awards time.

Kimberley Lundstrom at The Fix:

In Greg Egan’s “Crystal Nights,” wealthy tech entrepreneur Daniel Cliff has a vision. He wants to create true AI, functioning at a human level, through a carefully controlled evolutionary process. Because Daniel has the money and the will, he does succeed with his project, but the end result is not what he expected.

“Crystal Nights” is an interesting take on the themes first explored by Mary Shelley in Frankenstein. Unlike most stories of manmade intelligence, Egan focuses not on the plight of the creatures or the effect on society of their existence, but on the motivations of their creator and on how the creatures and their actions affect him. Although Daniel is not the most sympathetic of characters, his dreams and his flaws are quite recognizable, and therefore compelling.

Steve Redwood:

Reading Greg Egan’s Crystal Nights, at times I felt the same sense of wonder and excitement I used to feel decades ago when I first came across SF. I’ve never read any of Egan’s books; all I knew was that he was a ‘hard’ SF writer, and I was mentally prepared to be bored, as I don’t even really understand how a light bulb functions (I assume there’s an alert homunculus inside with a candle)! And there were a lot of details in the story I simply knew nothing about, starting from the very beginning with FLOPS ratings, which apparently are quite the opposite of flops! But though the precise workings of the computational (and, later, subatomic physics) developments were a mystery, their effects were clear, and the creation (following a cruel natural selection process imposed by a creator not in himself cruel – an interesting touch) of AI in the Phites, and their progress, is every bit as intense and exciting – and real – as any detective story or thriller, or indeed as the history of the universe itself from the Big Bang to… well, I won’t reveal that. Read the story; be thrilled. If fuzzy (yes, yes, it’s a poor pun!) me got so much out of it, readers with a scientific background will get so much more: this is a master-class in how to avoid info-dump. And don’t go expecting a hackneyed updating of the Frankenstein myth; this is a classic in its own right.

Martin’s take:

This is a typical Egan story. Some good stuff about artificial life let down by the total implausibility of the characters. At least it has got some cool bits in it. […] could have done with being a bit more abstract

Best SF:

Back to the SF. Huzzah! It’s Greg Egan, which is good. And it’s Egan in good form, which is even better news. He follows one driven scientist whose discovery of a means of creating computational power previously only dreamt of enables him to explore the limits of just what can be created inside silicon. He creates powerful simulations, in which the building blocks of life are created, and in which he encourages his creations to develop sentience through setting environmental challenges.

The processing power enables him to develop sophisticated creatures quite rapidly, but this does require him to play god with those he creates, discarding those headed into evolutionary dead-ends. Fortunately, he is able to recognise the point at which those he has created are sentient enough to feel sadness, and then it becomes more of a challenge, encouraging them to grow through direct intervention.

As his creations develop apace it becomes clear that he has succeeded beyond his wildest dreams, although a nightmare unfolds as they are able to make the leap from creatures living in a computer simulation to ones which can manipulate the world outside.

Top quality.

And finally, my original thoughts:

Charles Stross with the lobsters filed off. This is a story about evolving AI by Darwinian selection — crab-shaped AI with control of their own physiology, in fact — and the ethical pitfalls thereof. As with Beckett’s story, in fact, the deeply felt and convincingly articulated ethical concern for other forms of sentience is one of the most satisfying aspects of the story. It comes in this story from the author, not the protagonist; Daniel Cliff thinks himself not an unkind god, just one who is prepared to make some sacrifices, cause some suffering, to promote the development of the kind of intelligence he wants. The story accelerates nicely, in a “Sandkings” direction, with some welcome flashes of wit (how Daniel made his money, for instance, or what the crabs find when they reach their simulated moon), and an ending that is apt, if not completely satisfying.

As you may guess, I haven’t actually read “Microcosmic God”. But did my opinion of the story change on a re-read? I’ll tell you later…

29 thoughts on “BSFA nominee: ‘Crystal Nights’”

  1. My complaint about the Chiang was “where are the humans?”; my complaint with the Egan is “where are the believable humans?” For me, Cliff sinks the story. Egan is very good at the speculation but poor at the realism and what you identify as wit strikes me as the opposite (Egan has never been a witty writer, witness the awful attempts at humour in Teranesia).

  2. ‘Crystal Nights’… Good fun, I thought — and I agree with Niall about the humour. But I also agree with Martin that the story is let down by cartoonish characters, which means I can’t take it quite as seriously as Egan perhaps intended.

  3. My problem with “Crystal Nights” is the way it structurally faffs around at the beginning. We are introduced to Julie Dehghani as the POV character for the first section, and then she just gets dropped. What was the point of that? Why not start with Cliff as POV?

  4. Tony: The point of switching POV, I think, is to align us with Dehghani’s outrage. She finds the route by which Cliff is proposing to create AI unacceptable; we are encouraged to accept her viewpoint as correct; so when the story switches to Cliff, in theory we should greet him with scepticism. I mean, it helps that Dehghani is right, but “Crystal Nights” as a whole seems to assume that rights for AI will be, if not a hard sell, then at least not an intuitive sell. I still think the story’s clear passion on this issue is its greatest virtue.

    Re-reading it immediately after “Exhalation”, I felt quite a resonance with Chiang’s story. I suspect the fact that they’re associated in my mind is one reason I find myself sympathetic to reading “Exhalation” as being about a created world, rather than an alternate cosmology: “Crystal Nights” looks at another created world, also inhabited by beings with the ability to observe and manipulate their own bodies, from the outside.

    Unfortunately, this also makes me rather less satisfied with the story than I was first time around. The introduction of the Thought Police feels too much like a hand-wave. We go from visceral Darwinian mechanisms of selection to abstractions like “The Thought Police identified and nurtured the seeds of writing, mathematics, and natural science”. How? It’s a similar story when the Phites invent their “boosted” language: Cliff and his experts can’t replicate it outside Sapphire because, we are told, the Phites “had experienced the effects of thousands of small experimental changes”, and that subjective experience can’t be understood from the outside.

    I find myself wanting, in other words, to see the Phites’ side of the story. The use of the monolith helps, oddly enough (as well as being amusing), in that it signals a way to imagine the conceptual breakthrough the Phites are approaching. Now, you could argue that that’s outside Egan’s concern here, that what he’s aiming for is precisely to convince us we should care about the Phites without having the sort of direct understanding of them that providing their point of view would allow. Fair enough, though to me it seems that it could also throw Cliff’s inhumanity into sharper relief.

    What I am saying, perhaps, is that my reservations about the story have more to do with the lack of relationships than the poor characterization per se. That Cliff exists in a vacuum, or in as much of a fantasyland as Sapphire, is a point made by the ending, which is perhaps one reason I found that element of the story more satisfying this time around. But that, in turn, makes him easier to dislike, makes the positions he holds more extreme and less pernicious than I suspect they are in real life. I find the length of the story dissipates its force somewhat, too; although there the obvious comparison is not with Chiang’s story, but with Rickert’s …

  5. Actually, on reading that sequence again, I think it is Cliff’s POV – it just sometimes seems like Dehghani’s. I agree that she needs to be there to have the debate, so that the reader will pick up on further clues that Cliff is of dubious morality and self-deluding. But she’s built up to be quite important to the story, and then dropped out of it like a stone, with only a single mention of her subsequently (and a reference back to this original scene). I feel she needs to return more substantially, to give some balance to the story.

  6. Like everybody, it seems, I liked the premise of the story: The Game of Life to the Nth power. And I agree with Niall that the passion of the story for AI rights is its strongest attribute — that and the path by which this passion is conveyed. Cliff’s perspective initially is not absurdly unethical or unreasonable, and he’s not without a human conscience. One of the points of the story seems to me to be that when it comes to AI, it is easy – especially for a layperson – to start from what seems like a reasonable position, and realize it has become unreasonable only after having done unforgivable harm.

    But Cliff’s humanity in that regard makes his selective insanity and/or murky reasons that much harder to accept elsewhere in the story, and thus I again find myself in the Martin camp. This is a good story that could have been a very good story with better characterization and perhaps a slightly less glib ending.

  7. Tony: yes, you’re quite right: “She ran a hand over her face; whatever else she was thinking, there was no denying that she was tempted,” for instance. Though I think the function of that section remains the same. As to whether she should return or not: well, perhaps, although I think she’d need to have a clearer motivation established in order for her return to be anything other than a repeat of her initial outrage.

    Matt: yes, I agree with all of that, I think, except possibly the comment about the ending, which struck me as much less glib, more earned, second time around. That may just be because I knew it was coming, and so it felt more appropriate.

  8. Though I think the function of that section remains the same

    The function remains the same but I agree with Tony that it really unbalances the structure.

  9. I told you yesterday I’d comment after reading the story, but today I got a couple of paragraphs in before realizing I’d read it during my hunt for potential Hugo nominees and completely forgotten about it. Not exactly a ringing endorsement.

    Like everyone else here, I’m impressed by the neatness of what Egan does, how cleanly and effortlessly he builds Sapphire from first principles. Unlike everyone else, I wasn’t bothered by Daniel, mainly because I was too busy gnawing at my real problem with the story. As neat as Egan’s SFnal construction is, it rests on an assumption that he tries to slip past us in Julie Dehghani’s speech at the beginning, and which remains unquestioned throughout the story – that the Phites are capable of suffering. Pain and pleasure are the motivators that drive us on all levels of our existence, but they are rooted in biology and emotion. It’s difficult for me to imagine what their software analogues would even be, nor do I see why they would be necessary if you can just program clearly-defined imperatives into your artificial creations’ brains. An important moment in the story comes when Daniel realizes the Phites feel grief for those who die, but that implies that he’s somehow programmed in the capacity for grief, and I just don’t see how that’s possible. I’m not saying Egan couldn’t have persuaded me on this point, but he never tried.

    So, basically, “Crystal Nights” fails for me the way so much fiction about AI fails – because it tries to imagine an AI just like us and assumes that this is the natural end result of AI design, whereas I firmly believe that an artificial intelligence will be something completely alien (I’m also not sure why we’d want to create an artificial human when the real ones are so readily available). Just because a thing is self-aware doesn’t mean it’s like us, and Egan’s assumption that it is – which to my mind isn’t necessary for the point he’s trying to make – is entirely unpersuasive.

  10. Abigail: I’m a little frustrated by your criticisms above. First, they seem to miss the use of evolution for generating the Phites, which strongly indicates emotions were not hard-coded into them, nor does the text require that such a capacity was. The Phites have a simulated biological heritage. Not a false one, but a real history which has contextualized their anatomy, their psychology and their personal histories. You can be skeptical of the validity of qualia in computational/simulation environments and you hint at that as well. I guess it just feels to me like your central paragraph puts half a foot into at least four different objections that are largely not mutually compatible, while the assumptions Egan is working from (legitimacy of qualia as computationally simulated, etc.) are fairly bog-standard established tropes and don’t require rigorous derivation any more than a space elevator appearing in the story would.

  11. I agree that an AI constructed by us will be completely alien. It lacks glands. It’s not a product of biology.

    However, the Phites are. They have evolved, just like us. Their properties haven’t been programmed (although their selection has been directed to some extent), but they have emerged. That makes them completely different from AIs in the traditional sense.

  12. Matt: yes, I agree with all of that, I think, except possibly the comment about the ending, which struck me as much less glib, more earned, second time around. That may just be because I knew it was coming, and so it felt more appropriate.

    I don’t disagree that the ending is appropriate — it dovetails cleverly with both the story’s themes and Cliff’s character. But to me, the ending makes it too easy to read Cliff as an isolated case of insanity, someone who just doesn’t get it and is thus easy to shrug off. This in turn makes it too easy to shrug off the larger questions of the story as issues that won’t matter to anyone who is even marginally self-aware. “We must think about AI rights because if we don’t, unscrupulous lunatic dot-com billionaires will get there first and God only knows what will happen then because life finds a way” — basically, AI meets Jurassic Park — is not necessarily untrue, but it is less interesting to me than the story we’d get if we could see Cliff as representing more than his highly individual and peculiar map of dysfunctions. (Although it’s certainly possible that Egan meant the story to be closer to the former, and again, even as that it’s a good story.)

  13. Abigail said:

    nor do I see why they would be necessary if you can just program clearly-defined imperatives into your artificial creations’ brains.

    I think the fundamental premise upon which this story rests is that you cannot just program clearly-defined imperatives into an AI’s brain. This premise differs sharply from the standard SFnal depiction of AI, but I think the failure of good old-fashioned AI suggests that a more biological model is in fact the best way to go.

  14. Jason, Denni, Ted: You’re right that Egan posits an evolutionary path towards entirely human-seeming Phites, but I’m still unpersuaded by it. For one thing, pain and pleasure go about as far back up our evolutionary family tree as it’s possible to go – maybe not as far as single-celled organisms, but the minute you get even the most basic central nervous system, there they are. For another, it seems fairly obvious to me that the early Phites would have been programmed with evaluation functions, the way AIs are today – +10 points for finding food, -10 points for not having shelter, +1000 for having sex – which are the clearly defined imperatives I was talking about (though obviously, as the complexity of the Phites’ minds grows towards consciousness, these functions would become a great deal fuzzier). The basic Phites are nothing more than an algorithm for accumulating the highest number of points possible, and I just don’t see how that translates into suffering and feeling joy. By Egan’s reasoning, anyone who plays a game of Spore is committing a crime against humanity.

    Again, I think Egan could have persuaded me that the Phites have emotion or something analogous to it, but his choice to just ignore the question renders the story inert as far as I’m concerned.
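    For concreteness, a minimal Python sketch of the kind of evaluation function described above; the point values, agent fields, and selection rule are illustrative assumptions, not anything specified in the story.

        import random

        # Hypothetical evaluation function of the sort described above:
        # early artificial creatures are scored on survival outcomes, and
        # selection keeps the highest scorers. Nothing here models pain
        # or pleasure; there is only a number to be maximised.
        def evaluate(agent):
            score = 10 * agent["food_found"]       # +10 per unit of food
            if not agent["has_shelter"]:
                score -= 10                        # -10 for lacking shelter
            score += 1000 * agent["matings"]       # +1000 for reproducing
            return score

        def select(population, survivors=50):
            # Keep the top scorers for the next generation.
            return sorted(population, key=evaluate, reverse=True)[:survivors]

        population = [{"food_found": random.randint(0, 5),
                       "has_shelter": random.random() < 0.5,
                       "matings": random.randint(0, 2)}
                      for _ in range(100)]
        next_generation = select(population)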

  15. Matt:

    This in turn makes it too easy to shrug off the larger questions of the story as issues that won’t matter to anyone who is even marginally self-aware.

    Ah, I see what you mean now. Yes, that’s a good point.

    Abigail:

    The basic Phites are nothing more than an algorithm for accumulating the highest number of points possible, and I just don’t see how that translates into suffering and feeling joy.

    I feel like I’m missing something here. Life on Earth evolved pain and joy; why could life in a simulated environment not do the same?

    When you speak of “basic Phites”, are you assuming that they started with some kind of macro-organism? I don’t think the story makes that explicit, and it seems to me at least plausible that they started from a much more basic level.

  16. Macro-organism and microorganism are meaningless terms since we’re not actually talking about organisms but programs. The Phites start out as algorithms. These algorithms rewrite themselves in a process that, yes, borrows the terms and rules of evolution because that’s a proven method for developing complex structures from simple building blocks, but it doesn’t mean that all the biological terms can just be ported over to the virtual world.

    You say that life on Earth evolved pain and joy. My point is that they evolved very early on because once you get a sufficiently complicated organism you need a way to encourage it to act in ways that further its survival and propagation and to discourage self-destructive action. That’s because a proto-slug can’t look at the score to see if it’s doing what it ought to be doing. A score-maximizing algorithm – which is what every AI program ultimately is – can do just that, which makes pain and pleasure superfluous.
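    As a toy illustration of that distinction (the actions and payoffs below are invented, not from the story): a program can simply compare the numbers, where an organism needs felt proxies.

        # An agent that can read its own score directly needs no aversive
        # sensation to avoid the fire; it just picks whichever action the
        # evaluation function rates highest.
        ACTIONS = {"eat": +10, "seek_shelter": +5, "touch_fire": -50}

        def choose(actions):
            # Greedy score maximisation: direct comparison of outcomes,
            # which a proto-slug cannot do but a program trivially can.
            return max(actions, key=actions.get)

        assert choose(ACTIONS) == "eat"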

  17. The Phites start out as algorithms

    But — and this is where I still feel I’m missing something — so do we. So did life on Earth. How is “10 GATHER FOOD. 20 WHEN FOOD = 3 COPY 10 TO 30. 30 GOTO 10” fundamentally different to “ACTGTTTGGATATCCCTGGATAC…”? Both are sets of instructions — programs — that do not, in themselves, require emotional responses of any kind. But it seems to me that once you reach a certain level of program complexity it’s not unreasonable to theorise that there won’t be much practical difference between “I’m hungry” and “I am not maximising my score because I am not acquiring sufficient food.” It will still be a state you want to avoid.
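    For the curious, the toy program quoted there translates into runnable Python roughly as follows; the behaviour is only what the comment sketches, nothing from the story itself.

        # The comment's toy forager made runnable: gather food, and once
        # three units are stockpiled, spawn a copy. Like the DNA string it
        # is compared to, nothing in these instructions mentions feeling.
        def forager(steps=20):
            food, copies = 0, 0
            for _ in range(steps):
                food += 1            # 10 GATHER FOOD
                if food == 3:        # 20 WHEN FOOD = 3 ...
                    copies += 1      # ... COPY (spawn a new forager)
                    food = 0
            return copies            # 30 GOTO 10 (the loop repeats)

        print(forager())  # 6 copies after 20 steps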

    How is “10 GATHER FOOD. 20 WHEN FOOD = 3 COPY 10 TO 30. 30 GOTO 10” fundamentally different to “ACTGTTTGGATATCCCTGGATAC…”?

    Because a proto-slug doesn’t know how to keep score, and a computer does (a computer can know something without being aware of it – come on, I know you read Blindsight). My point is that responses such as pleasure and pain are nature’s way of ensuring adherence to basic imperatives in organisms even if they lack the complexity to comprehend that 10 < 20. A computer program has that comprehension built in, and therefore doesn’t need the positive and negative stimulus provided by pleasure and pain.

    And yes, at a certain level of complexity the difference between a mind driven by a mathematical function and one driven by biology might not be significant, but that’s not really my point. We’re told, without any explanation, that the Phites suffer, and I simply don’t see how that’s possible or even necessary.

  19. The Phites, as I understand it, are simulated biological beings. Their biology is different from human biology, but they are the equivalent of flesh and blood in their world, and they live and die not by looking at scores but by the laws of physics in their simulated universe, just like we humans do in ours. They’re no more algorithms than humans simulated inside a computer would be algorithms. (Or do you think that simulated humans couldn’t, in principle, be conscious or experience emotion? If that’s the case, that’s a whole different — and to my mind unsubstantiated — argument.)

    As for the whole “why not just look at scores?” question: the system is designed to create conscious, social beings. It stands to reason, then, that they would evolve into social, emotional beings. Egan goes to great lengths to show us how Daniel sculpts the world of the Phites to guide their evolution towards empathy, language, civilization, and all the things that humans care about, because he wants them to essentially think for him, to solve his very human problems and concerns. I really don’t see how, when all these parallels are made so clear, it’s supposed to be a mystery that these creatures grieve and suffer. Why do you think emotions evolved? I don’t know, but I assume that they somehow figured in the whole fitness cabal that produced modern Homo sapiens, and presumably, then, it is exceedingly likely that evolution, when so specifically steered towards creating a human-like outcome, would produce suffering as well.

    None of the characters in this story know how to create AI. The approach that eventually succeeds does so by closely replicating the only known conditions that have produced intelligence. I don’t think the story “tries to imagine an AI just like us and assumes that this is the natural end result of AI design”. I think it explains to us in explicit terms that the characters don’t know much more about AI than we do, and so set out to make artificial intelligence by a process close to the one that produced natural intelligence. And it’s uncontroversial that the natural end result of such a process is, if not just like us, then at least very much so.

  20. The Phites, as I understand it, are simulated biological beings. Their biology is different from human biology, but they are the equivalent of flesh and blood in their world

    You contradict yourself right there. The Phites don’t have a biology. They’re programs, running inside a simulated environment governed by rules analogous to the laws of physics and biology in the real world, but that doesn’t make them biological, nor does it mean that their responses, and the responses of their environment to them, aren’t governed by pure mathematics.

    Nor do I see why it’s inevitable that evolution should end up with human-like intelligence and psychology, especially when, as we’re told, the Phites start out with a very different skill set than our far-flung evolutionary ancestors, such as the ability to edit their own base code.

  21. Abigail: And yes, at a certain level of complexity the difference between a mind driven by a mathematical function and one driven by biology might not be significant, but that’s not really my point. We’re told, without any explanation, that the Phites suffer, and I simply don’t see how that’s possible or even necessary.

    I think that’s one of the points I found most fascinating – if you turn the question around and ask about whoever/whatever may have designed *our* universe, as so many people think some God-like being did. If there’s intelligent design behind our universe, why all the pain and suffering? What does that say about the ethics of a being who would design such a system?

    Daniel builds a system that will result in pain and suffering; as you say, that may not be necessary. But he does, and we’re meant to understand that as morally wrong. I think the story works best if you then turn that understanding around and ask about our Creator, and what that implies about him/her/them, if he/she/they exist.

  22. Abigail: I don’t think that’s a contradiction. Suppose we had a simulation that ran simulated humans and a simulated Earth. (Can we agree that that’s at least theoretically possible?) Ultimately, there would be a program running them, sure, and ultimately, they would be data. But in the simulated world, which, since this is a thought experiment, is simulated to perfect detail, they are biological. From that point of view, nothing could be more natural. I see no reason that in principle, we couldn’t all be living inside The Matrix (just as I see no reason to actually believe that). There would be no empirically demonstrable way of determining that. That would not make us any less biological, any less real or thinking or feeling, from our perspective. Hell, there’s nothing wrong with viewing ourselves as simply data being run by the huge program that is the universe, governed by the algorithm that is Quantum Mechanics (or whatever is more fundamental than that) and fed by the input that was the starting state of the universe, whatever that was. I simply don’t understand your objection to the simulated beings being, well, biological as seen from their simulated world — and hence, subject to the same kind of consciousness that we have.

    It seems like you have a more fundamental problem with the whole idea of consciousness created in a simulation, but you’re not making it explicit.

    Nor do I see why it’s inevitable that evolution should end up with human-like intelligence and psychology, especially when, as we’re told, the Phites start out with a very different skill set than our far-flung evolutionary ancestors, such as the ability to edit their own base code.

    I think that, with the suspension of disbelief that any SF story warrants, and with the repeated references to how the main characters guide evolution towards a human-like outcome, it is exceedingly likely. And they do, eventually, grow out of their human-like psychology — this is the boosting that Daniel doesn’t understand.

  23. Because a proto-slug doesn’t know how to keep score, and a computer does (a computer can know something without being aware of it – come on, I know you read Blindsight). My point is that responses such as pleasure and pain are nature’s way of ensuring adherence to basic imperatives in organisms even if they lack the complexity to comprehend that 10 < 20.

    Would you say that a computer program containing the statement “if (x < 20)” actually knows something about numbers? I would say that it doesn’t. Certainly, programmers will say things like “the program knows the values of x at this point,” but they are not using the word “know” in the same sense as when they say “I know that ten is less than twenty.”

    I imagine that in the source code for a Phite, a line like “if (x < 20)” doesn’t represent anything like “if hungry, go seek food.” It might represent something like “if total input to this neuron is below certain threshold, then do not pass signal to next neuron.” The Phite’s experience of hunger would be an epiphenomenon rising out of millions of such lines of code. For the Phite to know that ten is less than twenty, the way you or I do, there might have to be billions of lines of such code. But before it reaches that level of complexity, those lines of code would probably give rise to epiphenomena like pleasure and pain.
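    A literal toy rendering of that “if (x < 20)” example, for concreteness; the threshold and input values are arbitrary.

        # One simulated threshold unit of the kind described above. Nothing
        # at this level represents hunger; on this account, hunger would be
        # an epiphenomenon of millions of such units interacting.
        def neuron(inputs, threshold=20):
            if sum(inputs) < threshold:
                return 0.0   # total input below threshold: do not fire
            return 1.0       # at or above threshold: pass the signal on

        neuron([5, 7, 4])   # 16 < 20, returns 0.0: stays silent
        neuron([9, 8, 6])   # 23 >= 20, returns 1.0: fires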

  24. Karen:

    I think the burden of determining the morality of a creator who creates through evolution falls on those who believe that evolution was intelligently driven. Since I don’t think that’s the case (and don’t even believe that the existence of a creator mandates the belief that they had a hand in every step in our evolution) that’s not really an issue for me, but it is something the story touches on.

    Simen:

    Your example of simulated humans in a simulated world (which, no, I don’t believe is possible even within the world of the story we’re discussing) is irrelevant because the Phites aren’t meant to be human. They start out as organisms very different to our evolutionary ancestors and become even more different further down the line (such as when they stop dying). There’s no reason for their evolution to parallel ours exactly.

    We evolved pleasure and pain because they are useful survival tools. As I’ve said repeatedly, the Phites have their own survival tools in the form of an evaluation function, and therefore have no need for pleasure and pain. I don’t have a problem with the idea of machine consciousness, but it makes no sense to me to claim that that consciousness would be identical to ours (nor do I see the Phites’ escape as anything but an entirely human act – you or I would try to do exactly the same thing). Saying ‘but that’s the way the story is’ doesn’t make it any more likely or sensible, and especially given the rigorousness with which Egan tried to construct his world I think this leap in his reasoning is a major flaw.

    Ted:

    What I was trying to say is that a computer can attach a true/false value to the statement X < 20 without understanding numbers or anything else. I ran into a language snag because we don’t really have a word for knowledge without comprehension.

    I agree, and in fact have said, that once a program became complex enough to achieve awareness it would probably attach a meaning to negative evaluation function results that would be analogous to pain, but we’re told that Phites experience this sensation at their most basic level, and that doesn’t work for me.

  25. Your example of simulated humans in a simulated world (which, no, I don’t believe is possible even within the world of the story we’re discussing) is irrelevant because the Phites aren’t meant to be human.

    It was an example of simulated biology. You disagreed that the Phites were, in some sense, biological, and I pointed out that humans, if simulated, would still be biological enough to think, feel, suffer and take pleasure in things, so I don’t really think that holds up as an argument. You’re right that they’re not meant to be human, but the process by which their evolution is coaxed into becoming very human-like despite differing starting conditions is described in some detail. Egan describes how Daniel runs millions of simulations, intervenes all the time, rewinds history to try a new path, all to get the near-human results he wants. We see them develop a human-like theory of mind in response to environmental stimuli, we see them develop language in response to environmental stimuli. When they develop culture, we see how, by some hand-wavy means that Egan glosses over, their human creator directly implants human memes into their culture so as to keep them human enough for his purposes. How much more could you want, save an explicit statement such as “The Phites’ world was sculpted carefully to produce the most human-like psychology”?

    We evolved pleasure and pain because they are useful survival tools. As I’ve said repeatedly, the Phites have their own survival tools in the form of an evaluation function, and therefore have no need for pleasure and pain.

    I think you’re the one making a leap here. There’s no evidence in the actual text that the Phites run on an evaluation function any more than humans do. In fact, it’s mentioned several times in the text that they run on simulated brains, and since we hear about “neural anatomy” we can assume they have some equivalent of a nervous system. The most obvious and satisfying interpretation of what Egan actually writes is that their brains, though simulated, are analogous to human brains. (Which, again, I see as a result of careful trial-and-error on the part of the experimenters, who want the Phites to do thinking for humans who can’t think as fast or as intelligently as them. They select, essentially, for human psychology and that’s what they get.)

    I also think we can’t see the Phites’ escape as necessarily a human act. We have no reliable guide to the psychology of the boosted; the humans in the story only understand what the boosted decide to say to them in the language they understand, and the “inside man”, Primo, turns out to be a traitor. I don’t think we can say we know his motivations.
