A couple of months ago I flagged this article in Slate, a discussion of consciousness with a Buddhist, and then never blogged about it ("OaO: We may not be timely, but at least we're unreadably opaque"™). Anyway, the interviewee argues for exploring the origins of consciousness outside of the actual physical workings of the brain, and tries to identify methods for testing this hypothesis experimentally.
I come down pretty solidly on the side of consciousness being neither more nor less than the physical workings of the brain, if only because otherwise I'm pretty sure the First Law of Thermodynamics gets violated (though people a lot smarter than I am don't think this is a problem for Cartesian Dualism, so who knows). But it also struck me, reading the article linked above, that this proposition should be testable as well. If human consciousness derives from the firing of neurons and chemical signals sent back and forth between emitters and receptors, then what's so special about this particular collection of coordinated automata that consciousness derives from it? Shouldn't we therefore expect consciousness to arise from any other sufficiently complex collection of coordinated automata? Here's that question stated another way: is an anthill conscious?
I argue pretty much constantly that we, as conscious beings, have a somewhat over-inflated sense of what it actually means to be conscious (well, not you, of course. I happen to know that you are extremely humble about your own consciousness. But other people. They're totally arrogant jackasses about it. They're all like, "Look at me, I'm all conscious and shit, blah blah me me blah"). The one-sentence version: consciousness is the sum of 4 billion years of mistakes made by evolution, now available to you in a convenient, fast-acting brain. As much agency as you give, e.g., a flower in "deciding" to use a bee to spread its genes is as much agency as you're giving yourself in your "decision" as to what to have for lunch today (fine, that was two sentences).
This view of consciousness makes you and me seem like zombies, and it's probably the most common classical argument for Cartesian Dualism. If all we are is that series of neural connections, then where does the meta- come from? How can it be that a rush of chemicals secreted from somewhere makes me feel bad? How is it that I can think about the way that I think? How did I just do that internal diagnostic to make sure I'm not a zombie (it came back negative, by the way. I am not a zombie)? Obviously I'm not going to be able to answer this point in a blog post, given that it's an argument as old as humanity, but I do propose that it is a testable proposition, and that the answer to it is the same as the answer to the question, "is an anthill conscious?"
An anthill is a cooperating amalgam of automata, just like a brain. An individual ant is nigh literally as dumb as a post, but it can dig, look for food, and leave or follow a chemical trail. An anthill will respond to stimuli if you step on it or start having a picnic nearby. So the operative question is, how does the anthill feel? If you want to follow this proposition, "networks form consciousness," down the rabbit hole, there are all manner of other networks to consider: beehives, colonies of bacteria, actual computer networks, and of course humanity itself. Further down this rabbit hole is the idea that in addition to your own consciousness, you're the equivalent of a neuron in the network of the consciousness of collective humanity. Further still, well, how would you get a message out to that consciousness that you've become aware of your part in it? Further still...well, I start to get lost, myself.
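If you want to see the "cooperating amalgam of automata" idea in miniature, here's a toy sketch of it in Python. Everything in it is invented for illustration (the grid size, the two rules, the pheromone bookkeeping are all my assumptions, not anything from the article): each ant knows only "follow the strongest scent, else wander" and "haul food home, scenting the trail," yet the colony as a whole ends up "knowing" a route to the food.

```python
import random

def toward(pos, target):
    """One Manhattan step from pos toward target."""
    (x, y), (tx, ty) = pos, target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return pos

def simulate(grid=8, nest=(0, 0), food=(7, 7), n_ants=10, steps=500, seed=1):
    """Toy colony: dumb-as-a-post ants, emergent collective foraging.

    Returns (food deliveries made, pheromone map). All parameters are
    arbitrary illustration values, not a model of real ant behavior.
    """
    rng = random.Random(seed)
    pheromone = {}                                    # cell -> scent strength
    ants = [{"pos": nest, "carrying": False} for _ in range(n_ants)]
    delivered = 0
    for _ in range(steps):
        for ant in ants:
            if ant["carrying"]:
                # Rule 2: haul food toward the nest, scenting the trail.
                pheromone[ant["pos"]] = pheromone.get(ant["pos"], 0) + 1
                ant["pos"] = toward(ant["pos"], nest)
                if ant["pos"] == nest:
                    ant["carrying"] = False
                    delivered += 1
            else:
                # Rule 1: step to the smelliest neighbor, or wander randomly.
                x, y = ant["pos"]
                nbrs = [(x + dx, y + dy)
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < grid and 0 <= y + dy < grid]
                scented = [c for c in nbrs if pheromone.get(c, 0) > 0]
                ant["pos"] = (max(scented, key=lambda c: pheromone[c])
                              if scented else rng.choice(nbrs))
                if ant["pos"] == food:
                    ant["carrying"] = True
    return delivered, pheromone
```

No individual ant has any idea where the food is, but run it long enough and the pheromone map holds that information anyway; the "knowledge" lives in the network, not in any node. Which is, of course, exactly the move the consciousness argument above is making about neurons.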
Next: Further!
3 comments:
If you have not done so already, I think you may want to read Human Traces by Sebastian Faulks. He's pretty much right there with you on evolution and consciousness, but he also thinks (well, one of his characters thinks) that the evolutionary mutation that made human consciousness possible is also what made human madness possible. I find the book a bit too long and the characters a bit too distant from me, but the philosophy/psychology/fin de siècle Europe bits are pretty good reads.
My word verification begins: EIEIO!!!
So I like knew this guy, let's just call him "Rene". Anyhow, he was a total Cartesian Duelist. I mean, you even suggested that Mind is not substantially different from Matter and he'd get all pistols-at-dawn, Glock! Glock!
He had already shot seven zombies. At least he *said* they were zombies. According to him you can tell the zombies because they're the ones who are always going, "Brains. Braaains. Neu-ro-bi-ol-o-gy. Brains!"
Anyhow, I had him over at my flat one day. My hope was that I could temper his hard edge with a little Searle-style property dualism. THIS WAS A TERRIBLE IDEA! I started talking about cellular automata and he was just getting, well, *antsy*. No sooner did I venture the phrase "Strong Emergence" than he went for his gun. I bolted. I ran into my Chinese Room and locked the door behind me.
Now we all know that the Chinese Room is not completely bulletproof. Still, it offered me some protection, and more importantly it has display terminals inside and out. So while he railed outside the door with fuming invective and small-caliber rounds, I worked the keyboard, running a program to pop up on the outside monitor. I waited for him to glance at the screen and then hit him with it. It was a variant of the Color Phi hack of the visual system. I tell you, his lateral geniculate nucleus must have been running Microsoft, 'cause that bitch buffer overflowed and soon I got root on his neocortex -- pwned that mfer's supervenient ass. Color Phi lets you hijack temporal perception, so I just made him think he had already killed me, and he left.
Well, that's why I never talk about consciousness anymore. Except when I do.