If, in addition, it turned out that talking about this supposed marble had enormously useful explanatory power in my life, and if, on top of that, all my friends had similar cardboard boxes and all of them spoke ceaselessly — and wholly unskeptically — about the “marbles” inside
And thus it is with this notion of “I”. Because it encapsulates so neatly and so efficiently for us what we perceive to be truly important aspects of causality in the world, we cannot help attributing reality to our “I” and to those of other people — indeed, the highest possible level of reality.
The Size of the Strange Loop that Constitutes a Self
One more time, let’s go back and talk about mosquitoes and dogs. Do they have anything like an “I” symbol? In Chapter 1, when I spoke of “small souls” and “large souls”, I said that this is not a black-and-white matter but one of degree. We thus have to ask, is there a strange loop — a sophisticated level-crossing feedback loop — inside a mosquito’s head? Does a mosquito have a rich, symbolic representation of itself, including representations of its desires and of entities that threaten those desires, and does it have a representation of itself in comparison with other selves? Could a mosquito think a thought even vaguely reminiscent of “I can smile just like Hopalong Cassidy!” — for example, “I can bite just like Buzzaround Betty!”? I think the answer to these and similar questions is quite obviously, “No way in the world!” (thanks to the incredibly spartan symbol repertoire of a mosquito brain, barely larger than the symbol repertoire of a flush toilet or a thermostat), and accordingly, I have no qualms about dismissing the idea of there being a strange loop of selfhood in as tiny and swattable a brain as that of a mosquito.
On the other hand, where dogs are concerned, I find, not surprisingly, much more reason to think that there are at least the rudiments of such a loop in there. Not only do dogs have brains that house many rather subtle categories (such as “UPS truck” or “things I can pick up in the house and walk around with in my mouth without being punished”), but they also seem to have some rudimentary understanding of their own desires and the desires of others, whether those others are other dogs or human beings. A dog often knows when its master is unhappy with it, and wags its tail in the hope of restoring good feelings. Nonetheless, a dog, saliently lacking an arbitrarily extensible concept repertoire and therefore possessing only a rudimentary episodic memory (and of course totally lacking any permanent storehouse of imagined future events strung out along a mental timeline, let alone counterfactual scenarios hovering around the past, the present, and even the future), necessarily has a self-representation far simpler than that of an adult human, and for that reason a dog has a far smaller soul.
The Supposed Selves of Robot Vehicles
I was most impressed when I read about “Stanley”, a robot vehicle developed at the Stanford Artificial Intelligence Laboratory that not too long ago drove all by itself across the Nevada desert, relying just on its laser rangefinders, its television camera, and GPS navigation. I could not help asking myself, “How much of an ‘I’ does Stanley have?”
In an interview shortly after the triumphant desert crossing, one gung-ho industrialist, the director of research and development at Intel (keep in mind that Intel manufactured the computer hardware on board Stanley), bluntly proclaimed: “Deep Blue [IBM’s chess machine that defeated world champion Garry Kasparov in 1997] was just processing power. It didn’t think. Stanley thinks.”
Well, with all due respect for the remarkable collective accomplishment that Stanley represents, I can only comment that this remark constitutes shameless, unadulterated, and naïve hype. I see things very differently. If and when Stanley ever acquires the ability to form limitlessly snowballing categories such as those in the list that opened this chapter,