Thursday, June 25, 2015

BLINDSIGHT - On the Conscious Fallacy as an Issue In Fiction

I've been reading Peter Watts's FIREFALL (which collects BLINDSIGHT and ECHOPRAXIA): I'm (I think; Kindles make judging progress through omnibuses non-self-evident) nearly through BLINDSIGHT.  [I was right; I was literally 10 pages from the end, and have now finished the first book.]

I enjoyed it; it's good - I'm very interested to read the next one, but....

I find the existential angst about the fallacy/utility of consciousness theme too weak to bear the load the author places on it.  Unfortunately this seems to be what the book's 'about' - in the sense that it's not about a first-contact team from an interesting future Earth, led by a vampire.

In our world a lot of people seem to have got het up just because some/most/all of their actual 'conscious' decision-making can be demonstrably traced to nerve impulses sent before 'the decision' is taken by the conscious mind.

But I *am* my neurology, my glands, my history.  My 'higher' brain observes and models and turns all that into *I*.  *I'm* good with that.  What does it matter if my hand starts reaching for cake before I *know* I've decided to want it?  Some people act as if they've never worked anywhere that needs fact-checking.  Consciousness lets us second-guess our neurology, by modelling at a higher level.  It is built-in extelligence.  It's the *route* to exterior add-ons (look something up, ask around, research!).

I can second-guess.  Some people don't believe in free will, but I do.  My body wants to do X (buy chocolate); in fact I'm actually walking to the sweet shop - it's only slightly out of my way.  Modelling what I will do, because of what I already *want*, I have time to weigh the immediate want (CHOCOLATE) against non-immediate wants which are not the concern of the 'lower' brain (NOT BEING FATTER).  The freedom to 'weigh' wants in a modelled, observable arena ('the mind'), rather than by a blind process assigning different values to them (instinct will always favour the quick fix that doesn't carry an evolutionary downside: CHOCOLATE doesn't kill you quickly enough to stop you breeding), seems to me the evolutionary benefit of consciousness.  A fairly clear one.

And that's before we get to whether the mind might exert a non-temporal causal effect on the lower brain - we *know* now that non-contradictory causation can elide in time.  It's not absolutely, certainly, wishful thinking to think that my body may start to do what my brain hasn't decided yet (but will) *and* still leave the causation with the brain.  Imagine the evolutionary advantage of being able to retro-plan, even if it just shaved 'reaction' times!  [And no one seems to have considered that 'conscious choice' creates 'habit-subroutines'; habit-subroutines exceed the speed of, but do not generally contradict, consciousness's route - which doesn't need time travel but still gives 'in model' decision-making primacy.]

The Consciousness vs. Non-Consciousness issue skews other problems with the set-up - in my view.

In the same way the vampire argues Consciousness is an energy-intensive mistake loop in evolutionary neurology, any human could argue the same about [supposed] vampire intelligence.  More cogently: *they* went extinct.

Not every predator has to out-think its prey; predators only have to be more intelligent than the average of their prey species *and* keep the lead.  Watts's vampires (recreated from prehistoric DNA) would need to be a bell-curve ahead of human intelligence, but that wouldn't mean you couldn't find human experts who could outperform any specific vampire - particularly if you have a human population by definition 10x the maximum predator population.  So apart from being a nice conceit for the novel, why would you put a vampire in charge of humans - why not just an intelligent human?

Watts's vampires' intelligence is a given, but sadly, not especially believable (unlike their biology, which is a lovely fictional working-out by the author!).  It's like that of Niven's Protectors - more in the 'told about' than the showing - with what they *actually* do, in plots written for non-Protectors involving non-Protectors, often being, well, stupid.  [The vampire in this one stupidly gets [SPOILERED].]

Maybe the reason for the vampires' excess intelligence is the same as the reason for conscious rather than unconscious 'entities'.  Pretty good, but not absolute.

I don't believe a Non-Conscious builder entity would be 'threatened' by Conscious language - I think it would ignore it, flagging it 'low interest to productive methodology'.  If a potential competitor does a wildly wasteful thing, that's good.  Don't waste energy investigating the wildly wasteful thing.

Of course there's a whole book to go, and the author may be BLINDSIGHTing me on his intentions.  I'll let you know.  It's worth reading, though.

- In fact, why isn't the ship's AI running the mission better?  It ought to be the best intellect on board.  Why does something 50x a human need a 10x-a-human intermediary?  (The answer YOU DISLIKE TAKING ORDERS FROM A MACHINE isn't a brilliant one when the other option is a *%^&ing vampire!)


Hob said...

I still haven't read Blindsight and this is a good reminder to do so. I have encountered the kind of argument you're talking about elsewhere, and you've done a good job explaining why I don't find it as earth-shaking as some others do. It seems like it's taking the basic notion of the subconscious, which was upsetting to people in Freud's time but has been pretty well integrated into psychology by now, and making it sound fresher by using more modern technological analogies and by positing a way to measure such things objectively. And it's demonstrably untrue that if you adopt a point of view that doesn't give so much weight to the ego, you'll inevitably go in a scary philosophical direction that interferes with what most of us think of as normal life; Buddhists have managed okay. (And in fiction, Greg Bear for instance has done a lot with the notion that the mind is a collective of semi-autonomous processes, while still keeping a basically humanist point of view.)

As for Niven's Protectors, they're actually one of my favorite fictional treatments of super-intelligence (at least in the first book) because he's very up front about their minds not being any more purely logical than ours-- their goals are heavily influenced by emotion, they're just very good at coming up with rationalizations and working out technical problems.

Anyway, sorry this is so wordy, it's just a subject that really interests me.

Site Owner said...

The Protectors in 'Protector' are fine (their intellects have their own evolutionary limitations, just different ones from humanity's); it's actually the ones in some later Niven - and some dreadful stories in the late Man-Kzin Wars collections - that rile me a bit, for 'stupidity which passes for intellect by authorial fiat'.


Simon BJ