“Only to me. What I told you before about feeling like a freak—”
“You’re not, and you know it. I doubt if the topic will come up again.”
“I’ll tell her about it, someday. Just not now. Particularly since I have arranged some lengthy sessions with Dr. Snaresbrook.” He glanced at his watch. “The first one will be starting soon. The main reason I am doing this is that I am determined to speed up the AI work.”
“How?”
“I want to improve my approach to the research. Right now all that I am doing is going through the material from the backup data bank we brought back from Mexico. But these are mostly notes and questions about work in progress. What I need to do is locate the real memories and the results of the research based upon them. At the present time it has been slow and infuriating work.”
“In what way?”
“I was, am, are…” Brian smiled wryly. “I guess there is no correct syntax to express it. What I mean is the…”
“Are there any results of your accessing these memories?”
“Early days yet. We are still trying to find a way to make connections that I can reliably activate at will. The CPU is a machine — and I’m not — and we interface badly at the best of times. It is like a bad phone connection at other times. You know, both people talking at once and nothing coming across. Or I just simply cannot make sense of what is getting through. Have to stop all input and go back to square A. Frustrating, I can tell you. But I’m going to lick it. It can only improve. I hope.”
Ben walked Brian over to the Megalobe clinic and left him outside Dr. Snaresbrook’s office. He watched him enter, then stood there for some time, deep in thought. There was plenty to think about.
The session went well. Brian could now access the CPU at will, using it to extract specific memories. The system was functioning better — although sometimes he would retrieve fragments of knowledge that were hard to comprehend. It was as though they came as suggestions from someone else rather than from his own memories. Occasionally, when he accessed a memory of his earlier, adult self, he would find himself losing track of his own thoughts. When he regained control he found it hard to recall how it had felt.
The probing certainly was saving a great deal of time in his research and, as the novelty began to wear off, Brian’s thoughts returned to the most serious problems that still beset him on the AI. All the different bugs that led to failures — to breakdowns in which the machine would end up at one extreme of behavior or another.
“Brian — are you there?”
“What — ?”
“Welcome back. I asked you the same question three times. You were wandering, weren’t you?”
“Sorry. It just seems so intractable and there is nothing in the notes to help me out. What I need is to have a part of my mind that is watching itself without the rest of the mind knowing what is happening. Something that would help keep the system’s control circuitry in balance. That’s not particularly hard when the system itself is stable, not changing or learning very much — but nothing seems to work when the system learns new ways to learn. What I need is some system, some sort of separate submind that can maintain a measure of control.”
“Sounds very Freudian.”
“I beg your pardon?”
“Like the theories of Sigmund Freud.”
“I don’t recall anyone with that name in any AI research.”
“Easy enough to see why. He was a psychiatrist working in the 1890s, before there were any computers. When he first proposed his theories — about how the mind is made of a number of different agencies — he gave them names like id, ego, superego, censor and so on. It is understood that every normal person is constantly dealing, unconsciously, with all sorts of conflicts, contradictions, and incompatible goals. That’s why I thought you might get some feedback if you were to study Freud’s theories of mind.”
“Sounds fine to me. Let’s do it now, download all the Freudian theories into my memory banks.”