This is exactly what’s going on in the bouba-kiki effect: Your brain is performing an impressive feat of abstraction in linking your visual and auditory maps. The two inputs are dissimilar in every way but one—the abstract property of jaggedness or curviness—and your brain homes in on this common denominator very swiftly when asked to pair them up. I call this process “cross-modal abstraction.” This ability to compute similarities despite surface differences may have paved the way for more complex types of abstraction that our species takes great delight in. Mirror neurons may be the evolutionary conduit that allowed this to happen.
Why did a seemingly esoteric ability like cross-modal abstraction evolve in the first place? As I suggested in a previous chapter, it may have emerged in ancestral arboreal primates to allow them to negotiate and grasp tree branches. The vertical visual inputs of tree limbs and branches reaching the eye had to be matched with totally dissimilar inputs from the joints and muscles and from the body’s felt sense of where it is in space—an ability that would have favored the development of both canonical neurons and mirror neurons. The readjustments required to establish a congruence between sensory and motor maps may initially have been based on feedback, both at the genetic level of the species and at the experiential level of the individual. But once the rules of congruence were in place, cross-modal abstraction could occur even for novel inputs. For instance, picking up an object that is visually perceived to be tiny would result in a spontaneous movement of almost-opposed thumb and forefinger, and if this were mimicked by the lips to produce a correspondingly diminutive orifice (through which you blow air), you would produce sounds (words) that sound small (such as “teeny weeny,” “diminutive,” or in French “
If this formulation is correct, some aspects of mirror-neuron function may indeed be acquired through learning, building on a genetically specified scaffolding unique to humans. Of course, many monkeys and even lower vertebrates may have mirror neurons, but the neurons may need to develop a certain minimum sophistication and number of connections with other brain areas before they can engage in the kinds of abstractions that humans are good at.
What parts of the brain are involved in such abstractions? In discussing language, I already hinted that the inferior parietal lobule (IPL) may have played a pivotal role, but let’s take a closer look. In lower mammals the IPL isn’t very large, but it becomes more conspicuous in primates. Even among primates it is disproportionately large in the great apes, reaching a climax in humans. Finally, only in humans do we see a major portion of this lobule splitting further into two, the angular gyrus and the supramarginal gyrus, suggesting that something important was going on in this region of the brain during human evolution. Lying at the crossroads between vision (occipital lobes), touch (parietal lobes), and hearing (temporal lobes), the IPL is strategically located to receive information from all sensory modalities. At a fundamental level, cross-modal abstraction involves the dissolution of barriers between the senses to create modality-free representations (as exemplified by the bouba-kiki effect). The evidence for this is that when we tested three patients who had damage to the left angular gyrus, they performed poorly on the bouba-kiki task. As I have already noted, this ability to map one dimension onto another is one of the things mirror neurons are thought to do, and not coincidentally such neurons are plentiful in the general vicinity of the IPL. The fact that this region of the human brain is disproportionately large and differentiated suggests an evolutionary leap.