Question 5

The Culture interferes with the Empire of Azad for its own good (though not the good of the current rulers and social system), to the extent of bringing about its downfall. Is it morally right for it to interfere in this way? If so, what justifies the decision?

Comments

  • 0

    I suppose the Culture would argue the case on the basis of inequality - between the three genders for one thing (expressed very poignantly by the female player Trinev), but also between different social and cultural groups. I don't think that the Culture could reasonably argue on the basis of existential threat, as there was never a real threat.

    I suspect the rather dubious justification would also be made that no interference actually took place. One Culture person took part in a competition, and was protected at a few points by a couple of others. The Empire collapsed on itself once the central player - the Emperor - had not only been defeated but also killed. There was no robustness of succession, and the game play had also exposed rival factions. But equally, I'm not sure anyone would be entirely convinced by claims of Culture innocence...

  • 1

    Should NATO have intervened in Kosovo, or Yugoslavia, or Syria, or Iraq? None of those was an existential threat to NATO countries, but the humanitarian imperative was clear. Should the UK and US governments press China to implement more human rights? Should the BBC World Service continue to broadcast programmes that encourage democracy among populations without democratic representation?

    There's a clear moral argument that "the west" should interfere in other countries to improve the well-being of the people in those places. There are counter-arguments about whether the western lifestyle is actually an improvement, and pragmatic arguments about whether the interventions will actually achieve the long-term goals of improving well-being.

    In this book, Azad was a pretty evil regime. Whatever comes after it is likely to be better for the people of Azad (and those they conquered, or might otherwise have conquered in future). I'd like to see what the Culture's plan was for what happens after the collapse of the Azad regime, and how successful that was.

  • 1

    As an interesting (perhaps) aside, a biblical scholar on another forum I attend tells me that 'morality' is what God has decreed via the Bible, and that ethics are what humans have conceived of, because on some level they struggle to accept the Bible as correct. Is that an interesting distinction?

    Does it make it more interesting to think of the Culture minds as God?

  • 1

    @Apocryphal said:
    As an interesting (perhaps) aside, a biblical scholar on another forum I attend tells me that 'morality' is what God has decreed via the Bible, and that ethics are what humans have conceived of, because on some level they struggle to accept the Bible as correct. Is that an interesting distinction?

    It's interesting that biblical scholars think so. I don't think it's that relevant for anyone else.

    Does it make it more interesting to think of the Culture minds as God?

    To a great extent, they are. In fact, they really are the loving God of Christianity. I've not thought of it that way before. Going back to the Utopia idea, is the Culture an embodiment of Christian heaven?

  • 1
    Arguably the Minds can be thought of as angels. Hmmm...
  • 2

    @dr_mitch said:
    The Culture interferes with the Empire of Azad for its own good (though not the good of the current rulers and social system), to the extent of bringing about its downfall. Is it morally right for it to interfere in this way? If so, what justifies the decision?

    Moral right can only be defined within a system of morality. The system of morality used by the Culture would say it was indeed morally right. The system used in the Empire would dispute this. The only answers are subjective. This lesson in moral relativism is brought to you by someone who thinks moral relativism is a bankrupt moral system, of no use to anyone but clever philosophers.

  • 1

    Destabilizing the Empire will (we are to assume) decrease the suffering of many people, both those who live under its regime and those who might have become targets of its conquest. The morality of The Culture is all about decreasing suffering.

    For the record, so am I.

  • 1

    @Michael_S_Miller said:
    Destabilizing the Empire will (we are to assume) decrease the suffering of many people, both those who live under its regime and those who might have become targets of its conquest. The morality of The Culture is all about decreasing suffering.

    For the record, so am I.

    As am I! Unfortunately you state as a given something which is in doubt. Destabilizing the Empire may cause the Empire to fall. Other things happen as a consequence, including definitely lots of people suffering and dying. It may lead to a more enlightened government, it may lead to a slew of smaller governments who will probably squabble and war, or it *may* lead to a successor state as bad as the Empire or worse. Might I point out the 'Arab Spring' as an example? What will happen will be chaotic, dangerous, and messy, causing a lot of pain and heartbreak over the short term for sure. We don't know about the long term. Blithe assumptions are not a good thing to make when you topple governments.

  • 1
    @clash_bowley that's *precisely* what I was trying to get at with this question. I assume the Minds have thought about it considerably, but even the Minds are not infallible.
  • 2

    You are absolutely right. We don't know what happened, and we're given very little information about what happens to the remains of the Empire after the events of the book. It's possible that the Minds who pulled off this years-long con to manipulate Gurgeh and the Empire, and who had personnel in place to follow up (Za is ready to lead a guerrilla army), still screw up the aftermath. It's definitely possible.

    In some sense it's playing the odds. If the Culture had done nothing, the odds are very high, near-certain, that the Empire would have kept inflicting suffering on its own people and imposing it on others (quite apart from the description of the slums, the aside about the people who valued their library and surrendered to save it, only to have it defragmented like an old hard drive, was chilling). The odds of a better, less-suffering-producing system taking root after the revolution are pretty good, I'd say. Not perfect, but pretty good.
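
    If it helps to make "playing the odds" concrete, here's a throwaway sketch in Python with numbers I've invented on the spot (nothing from the book); the only point is the shape of the expected-value comparison, not the figures:

    ```python
    # Toy illustration only: the probabilities and "suffering" scores are invented.
    # Each option is a list of (probability, relative suffering) outcomes.
    do_nothing = [(1.0, 100)]   # the Empire carries on much as before
    intervene = [
        (0.6, 30),              # a less cruel successor takes root
        (0.3, 80),              # squabbling successor states, messy
        (0.1, 150),             # something even nastier emerges
    ]

    def expected_suffering(outcomes):
        """Probability-weighted average of the suffering scores."""
        return sum(p * cost for p, cost in outcomes)

    print(expected_suffering(do_nothing))  # 100.0
    print(expected_suffering(intervene))   # 0.6*30 + 0.3*80 + 0.1*150 = 57.0
    ```

    The calculation is only ever as good as the probabilities fed into it, of course, which is exactly where the Minds could be wrong.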

    @dr_mitch points out that the Minds are not infallible. That's true. Nothing is. We don't live in heaven, even in The Culture. That doesn't mean we shouldn't try to make our time in this world better.

    (Is it the sign of a good science fiction book when discussion of it turns to philosophy as much as the book itself?)

  • 1
    I suspect the Minds take a long view as well. Will things be better in the immediate aftermath? Probably not. Will things be better in, say, a hundred years? Probably yes.

    And there is a case for benevolent intervention. To me the disturbing thing is the way it *is* a game to Contact, and the accompanying smugness that comes after the elegant success of the plans as foreseen.

    I was hoping for this sort of philosophical discussion.
  • 1
    > @dr_mitch said:
    > I suspect the Minds take a long view as well. Will things be better in the immediate aftermath? Probably not. Will things be better in, say, a hundred years? Probably yes.

    Your comments reminded me of the rather late introduction into Asimov's writing of the Zeroth Law of Robotics: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm", implemented only by the most advanced models, who were able to make the extrapolation from individual humans to humanity in the abstract (and without getting bogged down in endless analysis of consequences).

    Just as with the Minds in the Culture that we are discussing, there is the haunting possibility that short-term or geographically localised harm might be triggered in order to achieve a rosy future. The original Three Laws explicitly prohibited this sort of trade-off between short-term harm and long-term good, by referring always to single individuals.

    On a contemporary note, self-driving cars have to reckon with this kind of ruthless choice - do you avoid the mother pushing her baby in a buggy on the pavement, or the older couple on the traffic crossing? (This is often called the "trolley problem" in the relevant literature.) At the present state of the art, the decisions are of course coded by developers, rather than autonomously decided by the software. Currently, firms building the software often crowd-source opinions in order to arrive at a socially acceptable consensus.
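
    To make "coded by developers" concrete, here's a deliberately crude, purely hypothetical sketch in Python. No real self-driving stack is written like this, and the categories and weights are ones I've made up, but the point stands: someone has to write the priorities down somewhere.

    ```python
    # Purely hypothetical: invented categories and weights, for illustration only.
    PRIORITY = {"child": 3, "adult_pedestrian": 2, "vehicle_occupant": 1}

    def harm_score(road_users):
        """Total 'harm' of endangering this group, using the hand-coded weights."""
        return sum(PRIORITY.get(user, 1) for user in road_users)

    def choose_manoeuvre(options):
        """Pick the manoeuvre whose endangered group has the lowest harm score."""
        return min(options, key=lambda name: harm_score(options[name]))

    options = {
        "swerve_left": ["adult_pedestrian"],   # score 2
        "brake_straight": ["child"],           # score 3
        "swerve_right": ["vehicle_occupant"],  # score 1
    }
    print(choose_manoeuvre(options))  # "swerve_right" under these made-up weights
    ```

    The crowd-sourcing mentioned above is essentially an attempt to get weights like these from the public rather than from an individual engineer.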
  • 0
    Funnily enough, earlier this year I reread Asimov's robot novels, finishing with Robots and Empire, where the Zeroth Law was introduced.

    In terms of self-driving cars, from my understanding the big problem is whether the car should risk harming those *in* the car if by doing so it saves, for example, a pedestrian. No easy answers, are there?

    My point of view with self-driving cars is that there are *thousands* of vehicle accident fatalities each year in the UK alone, and we kind of take that for granted. They could drastically reduce that number. Still, an accident caused by a machine, rightly or wrongly, feels different to an accident caused by a person.
  • 1

    @dr_mitch said:
    In terms of self-driving cars, from my understanding the big problem is whether the car should risk harming those in the car if by doing so it saves, for example, a pedestrian. No easy answers, are there?

    My point of view with self-driving cars is that there are thousands of vehicle accident fatalities each year in the UK alone, and we kind of take that for granted. They could drastically reduce that number. Still, an accident caused by a machine, rightly or wrongly, feels different to an accident caused by a person.

    My day job is AI research. This is an old problem, and one we're already dealing with. Airbus planes will override the pilot's control inputs if those inputs are dangerous. This has prevented some accidents. It's also caused others, when the plane got it wrong. A contributing cause to an Air France crash a few years ago was that the automation realised the aircraft was outside its competence zone and handed full control to the pilots. The pilots were so unused to that, they didn't understand what was happening and so didn't fly the plane correctly in the new control regime.

    And it's not just transport. If an AI system mis-diagnoses an X-ray, who's at fault? Is anyone? (The overseeing radiographer? The hospital administrator who approved the system's deployment? The software developer who created the system? The archivist who assembled the dataset the system was trained on?) If the system and a human expert disagree, which do you believe? What if the human has a higher error rate than the machine; does that change your answer?
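
    To put rough numbers on the "which do you believe" question, here's a back-of-envelope sketch with error rates I've made up, and with the strong (and often false) assumption that the human's and the machine's mistakes are independent:

    ```python
    # Back-of-envelope only: invented error rates, independence assumed.
    machine_error = 0.02   # machine wrong 2% of the time
    human_error = 0.05     # human wrong 5% of the time

    # On a yes/no diagnosis, a disagreement means exactly one of them is wrong.
    p_machine_right = (1 - machine_error) * human_error   # 0.049
    p_human_right = (1 - human_error) * machine_error     # 0.019

    # Probability the machine is the right one, given that they disagree.
    print(p_machine_right / (p_machine_right + p_human_right))  # ~0.72
    ```

    So under those made-up numbers you'd back the machine in roughly 72% of disagreements, but that does nothing to settle who carries the blame in the other 28%.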

    The law for driving currently (AFAIK) is based around what a reasonable person would do. Let's say you're driving down the road and a child jumps out in front of you; you swerve to avoid them, but hit and kill another person. Chances are, you won't be held liable for that death (so long as you were driving sensibly). But as @dr_mitch says, if a machine-driven car does the same, the public outcry could be enormous.

  • 0

    @NeilNjae said:
    My day job is AI research.

    How very cool. Back in the day - and we're talking about the days when you brought all the data to a desktop and did the algorithmic stuff there - I used to write AI code. In subsequent jobs I could never spend as much time on it as back then, but I still follow the trends. These days, being out of the s/w game, I just tinker with Alexa skills at home to keep my hand in!

  • 2

    @clash_bowley said:

    As am I! Unfortunately you state as a given something which is in doubt. Destabilizing the Empire may cause the Empire to fall. Other things happen as a consequence, including definitely lots of people suffering and dying. It may lead to a more enlightened government, it may lead to a slew of smaller governments who will probably squabble and war, or it *may* lead to a successor state as bad as the Empire or worse. Might I point out the 'Arab Spring' as an example? What will happen will be chaotic, dangerous, and messy, causing a lot of pain and heartbreak over the short term for sure. We don't know about the long term. Blithe assumptions are not a good thing to make when you topple governments.

    Another point from my delayed read. While touring the city, Flere-Imsaho says that Contact hasn't already intervened in Azad because all the interventions it's modelled have resulted in more suffering than letting Azad continue as it is. We readers can therefore assume that the Culture is aware of the risks of intervention, and that Gurgeh's intervention (and whatever follow-up there may be) is likely to result in a net benefit.

  • 1
    I think that's what Contact has calculated. But there's a risk they could be wrong.

    One of the later Culture novels, Look to Windward, is about the consequences should they get it wrong. Come to think of it, Use of Weapons also involves careless interventions by a former Culture operative gone rogue, but it's a while since I read that one.

    On the other hand, I'd also argue that Excession features a society where the Culture should do more to intervene somehow, but doesn't.
  • 1

    @NeilNjae said:

    @clash_bowley said:

    As am I! Unfortunately you state as a given something which is in doubt. Destabilizing the Empire may cause the Empire to fall. Other things happen as a consequence, including definitely lots of people suffering and dying. It may lead to a more enlightened government, it may lead to a slew of smaller governments who will probably squabble and war, or it *may* lead to a successor state as bad as the Empire or worse. Might I point out the 'Arab Spring' as an example? What will happen will be chaotic, dangerous, and messy, causing a lot of pain and heartbreak over the short term for sure. We don't know about the long term. Blithe assumptions are not a good thing to make when you topple governments.

    Another point from my delayed read. While touring the city, Flere-Imsaho says that Contact hasn't already intervened in Azad because all the interventions it's modelled have resulted in more suffering than letting Azad continue as it is. We readers can therefore assume that the Culture is aware of the risks of intervention, and that Gurgeh's intervention (and whatever follow-up there may be) is likely to result in a net benefit.

    That's a really interesting point which I had missed. Presumably getting Gurgeh to do it - which is surely a high-risk plan - has the advantages of a) plausible deniability if things go pear-shaped, and b) a way of getting an agent into the Empire without alerting their no doubt formidable defences that an attack is under way.

  • 1
    Actually, yes, that's it @RichardAbbott. It's a relatively _safe_ way to intervene if things go wrong.
  • 2

    I agree. I don't think it is so much safe for the Azadi as it is relatively safe for the Culture. The Minds may calculate that this is the least painful way, but it will not be painless, and they may be completely wrong - there are a LOT of variables, after all.

  • 2

    @clash_bowley said:
    I agree. I don't think it is so much safe for the Azadi as it is relatively safe for the Culture. The Minds may calculate that this is the least painful way, but it will not be painless, and they may be completely wrong - there are a LOT of variables, after all.

    Chance, unpredictability, and free will are all strong themes of the book. I don't think anything is certain in the outcome of the fall of the Azad empire.

  • 2

    @Michael_S_Miller said:
    (Is it the sign of a good science fiction book when discussion of it turns to philosophy as much as the book itself?)

    Yes, I think it’s a sign of a good book in general (not just in science fiction) if discussion turns to the book’s underlying philosophy (and also the Culture’s hidden, underlying philosophy) as much as to the explicit content of the book.

    I brought up Rousseau in another thread precisely because I’m thinking about political philosophy, among other kinds of philosophy, in relation to this book. The Culture appears to follow some kind of individualist anarchism with all physical needs met by a stateless collective of Mind(s). That combination kind of blows my mind.
