At one point, Stanley’s video camera picked up another robot vehicle ahead of it (this was H1, a rival vehicle from Carnegie Mellon University), and eventually Stanley pulled around H1 and left it in its dust. (By the way, I am carefully avoiding the pronoun “he” in this text, although it was par for the course in journalistic references to Stanley, and perhaps at the AI Lab as well, given that the vehicle had been given a human name. Unfortunately, such linguistic sloppiness is the first slide down a slippery slope, one that soon ends in full anthropomorphism.) One can see this event taking place on the videotape made by that camera, and it is the climax of the whole story. At this crucial moment, did Stanley recognize the other vehicle as being “like me”? Did Stanley think, as it gaily whipped by H1, “There but for the grace of God go I”, or perhaps “Aha, gotcha!”? Come to think of it, why did I write that Stanley “gaily whipped by” H1?
What would it take for a robot vehicle to think such thoughts or have such feelings? Would it suffice if Stanley’s rigidly mounted TV camera could swivel around to point back at the vehicle, so that Stanley thereby acquired visual imagery of itself? Of course not. That may be one indispensable step in the long process of acquiring an “I”, but as we know from the case of chickens and cockroaches, perception of a body part does not a self make.
A Counterfactual Stanley
What is lacking in Stanley that would endow it with an “I”, and what does not seem to be part of the research program for developers of self-driving vehicles, is a deep understanding of its place in the world. By this I do not mean, of course, the vehicle’s location on the earth’s surface, which is given to it down to the centimeter by GPS; I mean a rich representation of the vehicle’s own actions and its relations to other vehicles, a rich representation of its goals and its “hopes”. This would require the vehicle to have a full episodic memory of thousands of experiences it had had, as well as an episodic projectory (what it would expect to happen in its “life”, what it would hope for, and what it would fear), and an episodic subjunctory, detailing its thoughts about near misses it had had and about what would most likely have happened had things gone some other way.
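Lest “projectory” and “subjunctory” seem hopelessly airy, here is a toy sketch, in Python, of what such a three-part episodic store might look like if one naively rendered it as a bare data structure. It is purely illustrative: every name in it is my own hypothetical invention, and it has, needless to say, nothing to do with Stanley’s actual software.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Episode:
        """One remembered, anticipated, or merely imagined stretch of experience."""
        description: str               # e.g. "passed H1 on a dusty straightaway"
        outcome: Optional[str] = None  # how it actually turned out (None if never lived)
        valence: float = 0.0           # crude stand-in for hope (+) or fear (-)

    @dataclass
    class EpisodicSelfModel:
        """Toy three-part store: memory, 'projectory', and 'subjunctory'."""
        memory: List[Episode] = field(default_factory=list)      # experiences actually had
        projectory: List[Episode] = field(default_factory=list)  # expected, hoped-for, feared futures
        subjunctory: List[Episode] = field(default_factory=list) # near misses replayed otherwise

        def remember(self, episode: Episode) -> None:
            self.memory.append(episode)

        def imagine_otherwise(self, past: Episode, alternative: str) -> None:
            # what would most likely have happened had things gone some other way
            self.subjunctory.append(Episode(description=alternative, valence=past.valence))

The poverty of this sketch is exactly the point: even stuffed with thousands of entries, such a table of episodes is still just a table, and nothing in it yet amounts to an “I”.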
Thus, Stanley the Robot Steamer would have to be able to think to itself such hypothetical future thoughts as, “Gee, I wonder if H1 will deliberately swerve out in front of me and prevent me from passing it, or even knock me off the road into the ditch down there! That’s what I would do, if I were in its shoes!”
The feedback loop inside Stanley’s computational machinery is good enough to guide it down a long dusty road punctuated by potholes and lined with scraggly saguaros and tumbleweeds. I salute it! But if one has set one’s sights not just on driving but on thinking and consciousness, then Stanley’s feedback loop is not strange enough — not anywhere close. Humanity still has a long way to go before it will collectively have wrought an artificial “I”.
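For contrast, here is the humdrum sort of feedback loop just saluted, stripped to a caricature: a generic proportional steering correction, written in Python purely for illustration and bearing no relation to Stanley’s actual (far more sophisticated) control software. The loop measures an error in the world and corrects it, over and over; at no point does it bend back on a representation of itself.

    def steering_correction(lateral_error_m: float, gain: float = 0.8) -> float:
        """Steer back toward the centerline, in proportion to how far off we are."""
        return -gain * lateral_error_m

    # Close the loop: each correction feeds back into the next sensed error.
    # (Toy "plant": the correction is assumed to act directly on the error.)
    error = 0.50  # start half a meter off-center
    for tick in range(6):
        error += steering_correction(error)
        print(f"tick {tick}: off-center by {error:+.3f} m")

Run it and the error simply shrinks, tick after tick. A loop like this is all correction and no self-reference, which is precisely why it is not anywhere close to strange.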
CHAPTER 14
The Inert Sponges inside our Heads
WHY, you might be wondering, do I call the lifelong loop of a human being’s self-representation, as described in the preceding chapter, a “strange loop”?