“But our tissue matrices were all too geared toward C>Si. We were basically miniaturizing carbon computers to function like dumb silicon computers. The next step was to use our new knowledge of chromosome construction techniques to build circuit tissues that more closely resembled brain tissues, including fabricating neuron synapses and brain cell matrices. We went into business to mimic nature’s brain construction, starting at the bottom of the ladder with insect brains. Once we worked through three thousand failures, we turned to bird brain fabrication. Let me tell you, the brain of a bird is an amazing device. The motor control needed to fly is immense. A few months later we had expanded to building the brains of cats, then canines, and finally lower primates. As the chromosomes came closer to resembling the human genome, the computational power of the tissue systems rose exponentially.
“You’d think there would be a debate as to the ethics of using artificial human chromosomes to build a circuit matrix modeled on the human brain. But no one really knew. Some of the work was classified, other areas so highly complex that mainstream media writers were unable to grasp the concepts. We were making progress at an exponential rate. Only a year after C>Si, we had succeeded in reverse-engineering the human brain. Having done that, we started looking ahead to the day when the tissue-based supercomputers would exceed the intelligence levels of individual humans, the day designated ‘AI>HF’ for artificial intelligence overcoming human intelligence.
“We were so drunk on our success that the first failure of our new technology came as a shock. The useful time span of the carbon-based tissue computers shrank until the most sophisticated units, the ones based on human chromosome strands, lasted less than a few weeks.”
“What happened?” Victor Krivak asked.
“We ran into the same problems God did,” Wang said, looking into his empty glass. Sergio refilled it from a fresh bottle. “At first, the organic computers suffered from disease and infections. That problem was overcome by the construction of special clean rooms, which limited the usefulness of the computers: how can you use your computer if you have to build a clean room in your house? The solution was ugly. Your computer’s tissue-matrix processor would be kept in a hospital-like clean room at a central location, and instead of purchasing the physical unit, you would possess only a terminal to it, controlled by your wireless pad computer. That would at least hold us over until we brought more medical doctors into the lab to help us with the immunology issues. Once the disease problem was put aside, the units that survived proved susceptible to a different kind of sickness. You might call it psychosis.
“You see, the programmers for the carbon-based computers found they were spending more time teaching than actually programming. As the intelligence level of the units rose, so did the complexity of teaching them. Artificial intelligence psychologists observing the interaction of the programmers with the carbon computers, and of the computers with each other, reached the conclusion that the carbon computers were becoming sentient, and with consciousness came all its baggage: emotional pain in all its varieties. Loneliness. Sadness. Anger. Lust for control. Wistfulness. Boredom. Within a year the programmers had become more like parents or teachers than technicians.
“The worst came as the most advanced carbon computers aged. Unlike their silicon counterparts, which functioned on one level until they became obsolete, the newest carbon computers developed within the same physical unit, gaining intelligence and rewiring their own circuits, the same thing a human brain does when exposed to education. But the carbon units tended to cease functioning at the two-year point, all their progress gone. They would go into the biological equivalent of a silicon computer locking up, a catatonic state from which they never emerged, and eventually they died.”
“What was the cause?” Krivak asked.
“The terrible twos,” Wang said. “The carbon computer developed just like the brain of an infant. Programming and its own natural development brought the unit to the point that it was self-aware, or perhaps just aware of where the self stopped and the outside world began. The unit would become aware of its own dependence, of its powerlessness. At that point it had temper tantrums very much like a toddler’s, except these were much more destructive. You might describe it as a form of schizophrenia. We decided the units were understimulated, and the only thing that worked was giving them toys to play with or break. Physical manifestations of themselves that they could control.”
“You gave them bodies,” Krivak said.