Sunday, June 28, 2015


Oh hmm, this will be difficult.  ECHOPRAXIA (second book in the BLINDSIGHT series, second half of the FIREFALL omnibus) is a well-written book; it's a thoughtful and astonishingly well-researched book; it's a worthy sequel to Blindsight (except in one way, which we will discuss), but....while it's well worth reading, I think its philosophical conclusions, to the extent that I understand them, are about as wrong as any can be, and I can't deny I find this off-putting.

Now normally that wouldn't stop me liking a book, but unfortunately the conclusions here are so antipathetic that, to me, it's not like reading a book about the stupidity of reading books; it is reading a book about how stupid it is that we should imagine we are conscious, reading books, and that it is wrong to imagine that we gain any benefit from doing so - at which point, well, really, why bother?

Spoiler alert

To take only one aspect: the text of BLINDSIGHT is a narrative voice that is not only shown to be unreliable by ECHOPRAXIA - it's shown that it must not be reliable - because it is not a narrative, intended to convey information, but a hacking attempt on a human brain. Consequently its surface accuracy as a depiction of what actually happened to the Theseus is about as likely to be true as a message about how I used to own lots of money in a Nigerian account, but now need to transfer it to yours, is as an account of actual conditions in Nigerian banking. It must be true to the extent that the recipient cannot disprove it enough to avoid being hacked, but because we're repeatedly told how rubbish consciousness is at determining such things, that's what, exactly - a 1% level of truth based on what can be deduced within the solar system about the Theseus?  A simulacrum of 2% of the narrator? 99% true, but the 1% is everything crucial to conceal, because that's what you'd conceal?

[A commenter rightly remarks that we knew, in BLINDSIGHT, that the narrator was unreliable, and this is so. But of the three known levels of unreliability - (1) the narrator is honestly mistaken but reports what he knows, (2) the narrator is dishonestly motivated, tells the truth as far as it goes, and conceals detail later to be revealed outside the narration, and (3) the narrator's entire message is a construct of an alien intelligence with a purpose other than its surface, whose surface may therefore contain any irrelevant level of truth commensurate with (i) verisimilitude and (ii) the effort expended in construction (a lie is an energy-intensive mechanism) - I maintain that (3) is such a 'strong' level that, were an analogy for it encountered in normal life, the surface of the text would cease to be of interest. (This message is a code, yes, but what about the bit around it? Well, it's basically noise or 'packing'.)]

If I'm reading a book the second half of which tells me that the book's account of the first half of the events it depicts [I read this as the two-volume omnibus] is at best a lie, I think it fair to expect that the second half should tell me what did happen.

ECHOPRAXIA doesn't. It shows me what's happening on Earth and latterly in near-Sol space, but not anything about the people I thought I was reading about in BLINDSIGHT. [Except that one of them wasn't suffering from advanced anti-epilepsy surgery but from advanced anti-deliberate-zombie-virus-attack surgery.]

Amusingly (I wasn't amused, I was semi-angered, but I can see the case...) BLINDSIGHT is not now a novel that teaches us something by depicting characters and a world; it is a thing that teaches us something 'while disguised as a novel about characters and a world'. BLINDSIGHT is to a novel what the message constituting the text of BLINDSIGHT (in ECHOPRAXIA) is to a truthful account by Siri of events on the Theseus: namely, a simulacrum of that with a polemic / reprogramming intent, aimed at the recipients not as an entertainment, or a message, but as a weaponised lesson.

Again this hits the buffers of my problems with Peter Watts's critique of consciousness. When the main character makes [he thinks] a major choice at last, a choice the omniscient narrative seems to tell me is free, how am I expected to evaluate this? The entirety of the book is saying differently: that his choice can be neither free nor unpredicted by an entity operating at the frankly ludicrous levels of intelligence (albeit unconscious intelligence) of the atomic slime mold that is seemingly (because, as I will explain below, things are muddled) orchestrating things. Whether I am then supposed to be glad that he is thwarted in his supposedly free aim, or not, I cannot tell - and therefore at the last I find it very hard to care.  Because, well, wtf.

On believability

Once you start writing god-like intelligences into a book, you need them at least to have an aim that, if not understandable, can be mapped by the general reader to 'good' in some sense. Otherwise you have an omnipotent evil following a bad aim - which can only be uselessly (but bravely) resisted - or an omnipotent irrelevance pushing a might-as-well-as-not destiny, at both of which we can only, finally, shrug; for if it's that 'intelligent' and wants 'wot we no not wot', then chances are it will get it. While too powerful an actor makes all the cast irrelevant, it is still possible to cast a narrative where the interest is in the principle of parsimony - the subtle manner in which the overmind/god/AI/aliens move their pawn(s) to produce the end result X.  [Just as human-level powerful actors in drama are dramatic in how they succeed, not in whether they succeed. A Holmes story: how will he solve it!]

But in ECHOPRAXIA, what the Portia entity, or the Angels of the Asteroids, or God 'wants' is so incomprehensible, and its/their plans to accomplish it - whatever it is - so at once 'omnicognisant' and 'un-omnipotent'. It's a mind that can (without being a mind) work out how to build its local agency backwards out of a quantum information channel, depositing it atom by atom (in a way glossed over that I do not find remotely believable), but it can't give its agency, once it's built, any goal that could not have been carried out by something less unbelievable but slightly slower (and what *is the fucking hurry, God?*) or less humane but far faster (why is something that is not a consciousness concerned about the cost in suffering to human consciousnesses - if it even is? Why not nova the Oort Cloud super-Jovian and fry every conscious entity in the Solar System, if consciousness is a threat?).

Assuming - my best reading on one read - that the atom-pattern entity sent down the matter-stream from the Oort Cloud, which builds itself into the slime-mold Portia, is backed up in its aims by the at-a-distance reprogramming of Siri's father: what actual benefit for the expenditure does this gain the entities in the Oort Cloud?  The removal of consciousness from humanity helps them how? And this is the same issue I had with Blindsight. Simply, I think consciousness is useful - I don't care if Peter Watts thinks it shouldn't be: if it wasn't, it wouldn't be selected for. Where is the natural, real 'unconscious species' of hive-humans? Where is the zombie subspecies that mimics consciousness outside of its hermit-shell houses (and once inside is either dormant or breeding, for anything else is a waste of energy)? We don't see these in reality, because the book's thesis is flawed. [I'm copyrighting 'hermit zombies', though; I can get a story out of that.]  But even if consciousness was an evolutionary flaw in every respect, why would God / the aliens care?

The future depicted remains quasi-possible, right up to the bit with the back-engineered quantum teleportation - but it is working hard to cease to interest me, for it is self-evidently one in which humans of my subspecies - those who imagine they have conscious minds, and imagine that the health and well-being of conscious entities is the only point of action that has a value - are stupid and quaint.  I can be told I'm worthless without paying for it.

Besides my ego: the real problem with that is, where do you stop? Human characters, human hives, vampires, the Oort Cloud Angels, the slime-mold entity, de, dum, de, dum, de, dum, God. If intelligence is a bug in entropy, isn't it even more likely that universes are a glitch in nothingness?  Why should any level of the chain of being care for any other? Why should it matter that God be reconciled to Man, if neither the actions of God nor those of Man are meaningful? What does it matter if there *is* a God that builds *universes* if nothing in or of universes matters?  Why is a universe of ants better than a universe of not-ants?  Why is a universe better than a not-universe?

I know why I think a universe with a model of itself is better than one without: it's because *I* am a model of myself in a model of the universe, and it's nice.

An infuriating book that's head and shoulders above a lot of SF - but one which I'm finding it very hard to recommend, and which I'll probably only read once more myself, as a fairness check on this review.

[Added material in brackets after online discussion.  The discussion also made me realise I do still want to read anything else set in this world, in the hope that the unclear becomes clear, and the mechanistic becomes conscious.]
