Tuesday, January 30, 2007
I seem to be writing what is largely a philosophy blog these days. This is problematic insofar as I have read nearly no philosophy in my life. The list pretty much begins and ends with Plato. As such, I tend to run roughshod over concepts and thought experiments that other people have already written, published, and forgotten about; I instead remain blissfully ignorant of the intellectual plagiarism I may or may not be committing. I plan to solve this problem by majoring in Philosophy in my next life, though that will of course create several other problems, such as how I'm going to feed myself in that life (That was not a dig at Sam. I was taking the piss out of Socrates there. Ha ha Socrates. You've totally been punked). Fortunately, I am friends with people who have read Philosophy, and they, intentionally or un-, draw my attention to books, ideas, or thought experiments that I would never otherwise have heard of. Here is the most recent example: Periapse's comment (What? You haven't read it yet? Sadly, we can now no longer be friends) led me to Searle's thought experiment The Chinese Room, about which I will blog, starting now.
The Chinese Room is an argument that humans are not merely computational machines; it posits that a computer can act human (pass the Turing Test, in AI-speak) without either being conscious or understanding meaning in a human-like way. It goes like this: there's a room with a computer terminal on the outside. The terminal is capable of carrying on a conversation in Chinese, such that a Chinese speaker can walk up to it, start typing, and get a fluent conversation back. The Chinese speaker thinks, "Aha! An artificially intelligent computer that speaks Chinese! How amazing!" Unbeknownst to the speaker, however, inside the room sits Searle, at another terminal with an elaborate rulebook in his hand. When someone comes up to the terminal and starts typing, he consults his rulebook and types back whatever it tells him to. He doesn't speak Chinese or understand what he's typing, nor does the rulebook define anything for him--he just types back the symbols that correspond to the symbols he receives, thereby carrying on a perfect conversation without ever understanding a word of it.
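To make the setup concrete, here's a minimal sketch of the room as pure symbol-shuffling. The two-entry rulebook is my own toy invention (Searle's is presumably somewhat longer); the point is just that nothing in the function ever consults meaning.

```python
# A minimal sketch of the Chinese Room as symbol manipulation.
# The rulebook contents are invented for illustration, not Searle's.

rulebook = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会!",    # "Do you speak Chinese?" -> "Of course!"
}

def chinese_room(message: str) -> str:
    """Return the rulebook's response without ever 'understanding' it.

    The operator just matches incoming symbols to outgoing symbols;
    no meaning is consulted anywhere in this function.
    """
    return rulebook.get(message, "请再说一遍.")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗?"))  # fluent output, zero comprehension
```

Swap in a big enough rulebook and you get the terminal outside the room; the function itself is no closer to understanding Chinese.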
It so happens that after college I won a Parshull-Dimm scholarship and went to the remote island chain of Chai-Neesrüm to study an isolated tribe of Pacific Islanders. The Chais and the Neesrüms are unique in that they communicate through very subtle changes in their facial expressions, the same type you make unconsciously while speaking--raising the brow, flexing the jaw muscles, softening the eyes, and so on. In the process of doing this, they make sounds with their vocal cords, but this is completely superfluous to their communication with each other, and they're only dimly aware that they do it--it's just a strange byproduct of their facial movements, and for some reason they can't seem to stop doing it. What I discovered as soon as I arrived in Chai-Neesrüm is that the sounds they make perfectly resemble spoken English. I stepped off the plane and the matriarch of the village came up to me and, while subtly lowering her eyes and shifting her chin to the left, voiced a series of sounds identical to the English phrase, "Dude! You rock like Slayer!" I said, "Uh...Thanks!" but of course, without realizing it, raised my eyebrow and turned the corners of my mouth upwards. This, of course, meant something totally different to her than it did to me, and...let's just say that I spent the next three months thinking that we were discussing Star Trek: The Next Generation, while they thought I was giving them an incredibly detailed, narrative-driven recipe for gazpacho. Needless to say, the Parshull-Dimm committee absolutely ate up my report, and the subsequent fame in anthropological circles has sustained me to this day.
Being that, to review, I've read only the philosophical equivalent of Fun With Dick And Jane, I'm sure somebody has already made the critique of The Chinese Room that I'm about to make--just because it's not on the Wiki doesn't mean somebody hasn't written, published, and forgotten about it already. But here it is anyway: Searle, in this case the person sitting inside the Chinese Room doing the translation, is a conscious, Strongly (un-)Artificially Intelligent entity. The Chinese Room doesn't prove that he isn't; it proves that this intelligence doesn't derive from his ability (or lack of ability) to understand written Chinese. Humans, as it happens, do all manner of tasks to which we don't attach, or don't agree on, special imbued meaning. We respond to light, pain, pressure, sound, and/or temperature in extremely elaborate and varied ways. We respond to food by breaking it down and digesting it. We respond to air by taking it into our bodies, absorbing the oxygen in it, replacing it with carbon dioxide, and expelling it again. The shared meaning, or lack thereof, of these acts does not define the strong AI we happen to also possess. I wrote the above example mostly to amuse people who otherwise have absolutely no idea what I'm talking about at this point, but also, hopefully, to make this idea a little clearer. Each side in the Chai-Neesrüm dialog could reasonably conclude that the other was a fluent speaker of his or her dialect, but the entire time neither party understood the meaning the other was taking from the "conversation." Each side instead interpreted the byproducts of the other's communication--byproducts which we could reasonably conclude were entirely mechanical, entirely unconscious--as having meaning. You can argue all day about whether "meaning" is really "exchanged" in this conversation, but that's not the interesting point. The interesting point is that both parties are still Strong AIs, and that "meaning" and "understanding" have no bearing on it.
This seems to me to have all sorts of interesting ramifications not just for AI but for our own non-artificial intelligence, especially as to where that intelligence "resides," so to speak. You're already completely confused, though, so I'll leave that until the next post.
Next: Confusion is nothing new!
Wednesday, January 24, 2007
Ghost in the Machine, Schmost in the Machine
I flagged this article in Slate, a discussion on consciousness with a Buddhist, a couple of months ago and then never blogged about it ("OaO: We may not be timely, but at least we're unreadably opaque"™). Anyway, the interviewee argues for exploring the origins of consciousness outside of the actual physical workings of the brain, and attempts to identify methods for experimentally testing this hypothesis.
I come down pretty solidly on the side of consciousness being neither more nor less than the physical workings of the brain, if only because otherwise I'm pretty sure The First Law of Thermodynamics gets violated (though people lots smarter than I don't think this is a problem for Cartesian Dualism, so who knows). But it also struck me, reading the linked article above, that this proposition should be testable as well. If human consciousness derives from the firing of neurons and chemical signals sent back and forth between emitters and receptors, then what's so special about this particular collection of coordinated automata that consciousness derives from it? Shouldn't we therefore expect consciousnesses to arise from any, or at least other, sufficiently complex collection(s) of coordinated automata? Here's that question stated another way: is an anthill conscious?
I argue pretty much constantly that we, as conscious beings, have a somewhat over-inflated sense of what it actually means to be conscious (well, not you, of course. I happen to know that you are extremely humble about your own consciousness. But other people. They're totally arrogant jackasses about it. They're all like, "Look at me, I'm all conscious and shit, blah blah me me blah"). The one-sentence version: consciousness is the sum of 4 billion years of mistakes made by evolution, now available to you in a convenient, fast-acting brain. As much agency as you give, e.g., a flower in "deciding" to use a bee to spread its genes is as much agency as you're giving yourself in your "decision" as to what to have for lunch today (fine, that was two sentences).
This view of consciousness makes you and me seem like zombies, and it's probably the most common classical argument for Cartesian Dualism. If all we are is that series of neural connections, then where does the meta- come from? How can it be that a rush of chemicals secreted from somewhere makes me feel bad? How is it that I can think about the way that I think? How did I just do that internal diagnostic to make sure I'm not a zombie (it came back negative, by the way. I am not a zombie)? Obviously I'm not going to be able to answer this point in a blog post, being that it's an argument as old as humanity, but I do propose that it is a testable proposition, and that the answer to it is the same as the answer to the question, "is an anthill conscious?"
An anthill is a cooperating amalgam of automata, just like a brain. An individual ant is nigh literally as dumb as a post, but it can dig, look for food, and leave or follow a chemical trail. An anthill will respond to a stimulus if you step on it or start having a picnic nearby. So the operative question is, how does the anthill feel? If you want to follow this proposition, "networks form consciousness," down the rabbit hole, there are all manner of other networks to consider: beehives, colonies of bacteria, actual computer networks, and of course humanity itself. Further down this rabbit hole is the idea that, in addition to having your own consciousness, you're the equivalent of a neuron in the network of the consciousness of collective humanity. Further still, well, how would you get a message out to that consciousness that you've become aware of your part in it? Further still...well, I start to get lost, myself.
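To see how little each individual automaton needs to know, here's a toy simulation (rules, names, and numbers all invented for the sake of the sketch; this is not real myrmecology): a hundred ants, each following three dumb local rules, collectively "respond" to a picnic.

```python
# A minimal sketch of the "cooperating amalgam of automata" idea.
import random

GRID = 20            # toy world size
N_ANTS = 100         # number of dumb automata
PICNIC = (10, 10)    # where the disturbance lands

def step(ant, alarm):
    """One ant's entire behavioral repertoire: three local rules."""
    x, y = ant
    if alarm and abs(x - alarm[0]) + abs(y - alarm[1]) < 6:
        # Rule 1: if the alarm signal is close, move directly away from it.
        x += 1 if x >= alarm[0] else -1
        y += 1 if y >= alarm[1] else -1
    else:
        # Rule 2: otherwise, wander at random.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        x, y = x + dx, y + dy
    # Rule 3: stay on the grid.
    return max(0, min(GRID, x)), max(0, min(GRID, y))

ants = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_ANTS)]
for t in range(60):
    alarm = PICNIC if t >= 30 else None   # the picnic arrives at t = 30
    ants = [step(a, alarm) for a in ants]

# No single ant "knows" anything about picnics, but the colony as a
# whole has vacated the area: a collective response built from local rules.
near = sum(1 for x, y in ants if abs(x - PICNIC[0]) + abs(y - PICNIC[1]) < 6)
print(f"ants still near the picnic: {near} of {N_ANTS}")
```

Whether anything it is like to be that colony exists anywhere in those thirty-odd lines is, of course, exactly the question.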
Next: Further!
Thursday, January 11, 2007
The Market Is Open
I am pleased to announce the newest member of the Hermeneutic Blog Circle, Slouching Towards Agalmia. STA was opened by a former manager of mine now trading under the name Periapse, of whom I have previously blogged, and whom I have long hoped would start blogging. STA is to be manned, Freedom From Blog style, by multiple authors, one of whom is...wait for it...me. Together, hopefully, we will help you make sense of the much.
Next: More is more!