I flagged this article in Slate, a discussion on consciousness with a Buddhist, a couple of months ago and then never blogged about it ("OaO: We may not be timely, but at least we're unreadably opaque"TM). Anyway, the interviewee argues for exploring the origins of consciousness outside of the actual physical workings of the brain, and attempts to identify methods for experimentally testing this hypothesis.
I come down pretty solidly on the side of consciousness being neither more nor less than the physical workings of the brain, if only because otherwise I'm pretty sure the first law of thermodynamics gets violated (though people lots smarter than I am don't think this is a problem for Cartesian Dualism, so who knows). But it also struck me, reading the linked article above, that this proposition should be testable as well. If human consciousness derives from the firing of neurons and chemical signals sent back and forth between emitters and receptors, then what's so special about this particular collection of coordinated automata that consciousness derives from it? Shouldn't we therefore expect consciousness to arise from any other sufficiently complex collection of coordinated automata, or at least from some of them? Here's that question stated another way: is an anthill conscious?
I argue pretty much constantly that we, as conscious beings, have a somewhat over-inflated sense of what it actually means to be conscious (well, not you of course. I happen to know that you are extremely humble about your own consciousness. But other people. They're totally arrogant jackasses about it. They're all like, "Look at me, I'm all conscious and shit, blah blah me me blah"). The one-sentence version: consciousness is the sum of 4 billion years of mistakes made by evolution, now available to you in a convenient, fast-acting brain. As much agency as you give, e.g., a flower in "deciding" to use a bee to spread its genes is as much agency as you're giving yourself in your "decision" about what to have for lunch today (fine, that was two sentences).
This view of consciousness makes you and me seem like zombies, and it's probably the most common classical argument for Cartesian Dualism. If all we are is that series of neural connections, then where does the meta- come from? How can it be that a rush of chemicals secreted from somewhere makes me feel bad? How is it that I can think about the way that I think? How did I just do that internal diagnostic to make sure I'm not a zombie (it came back negative, by the way. I am not a zombie)? Obviously I'm not going to be able to answer this point in a blog post, given that it's an argument as old as humanity, but I do propose that it is a testable proposition, and that the answer to it is the same as the answer to the question, "is an anthill conscious?"
An anthill is a cooperating amalgam of automata, just like a brain. An individual ant is nigh literally as dumb as a post, but it can dig, look for food, and leave or follow a chemical trail. An anthill will respond to stimuli if you step on it or start having a picnic nearby. So the operative question is: how does the anthill feel? If you want to follow this proposition, "networks form consciousness," down the rabbit hole, there are all manner of other networks to consider: beehives, colonies of bacteria, actual computer networks, and of course humanity itself. Further down this rabbit hole is the idea that, in addition to having your own consciousness, you're the equivalent of a neuron in the network consciousness of collective humanity. Further still: well, how would you get a message out to that consciousness that you've become aware of your part in it? Further still...well, I start to get lost, myself.
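If you want to see just how far "dumb automata, smart colony" can go, here's a toy sketch (my own illustration, not anything from the Slate piece) of the classic double-bridge experiment with ant pheromone trails: each simulated ant picks a path purely in proportion to the pheromone already on it, and shorter paths get reinforced faster because round trips finish sooner. No individual ant "decides" anything, yet the colony reliably converges on the short path.

```python
import random

def double_bridge(n_trips=2000, short_len=1, long_len=2, evap=0.01, seed=42):
    """Toy double-bridge experiment. Each ant picks a path with probability
    proportional to its pheromone, then deposits pheromone inversely
    proportional to the path's length (shorter trips finish sooner, so
    short paths accumulate pheromone faster). Evaporation keeps the
    values bounded. All numbers here are made up for illustration."""
    rng = random.Random(seed)
    pher = {"short": 1.0, "long": 1.0}  # start with no preference
    for _ in range(n_trips):
        total = pher["short"] + pher["long"]
        path = "short" if rng.random() < pher["short"] / total else "long"
        # deposit: 1/length, so the short path gets twice the reinforcement
        pher[path] += 1.0 / (short_len if path == "short" else long_len)
        for p in pher:  # evaporation
            pher[p] *= (1 - evap)
    return pher

p = double_bridge()
print(p["short"] > p["long"])  # the colony "chose" the short path
```

The point of the sketch is that the colony-level "preference" lives nowhere in any single ant; it's a property of the feedback loop between the ants and their trail, which is exactly the kind of thing the "networks form consciousness" proposition is gesturing at.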