Researchers at MIT have revealed that their machine learning system has learned to recognize faces regardless of their orientation, an ability that could offer clues about how the human brain carries out facial recognition.
The MIT machine learning system is based on a computational model of the human brain’s face-recognition mechanism. The model appears to capture aspects of human neurology that previous models have missed, and this is reflected in the machine learning system itself: it spontaneously developed an intermediate processing step that represents a face’s degree of rotation.
Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), says the researchers do not claim to fully understand what goes on inside the brain during facial recognition, but the evidence from their machine learning system suggests they are on the right track.
Poggio has long believed that the brain must produce “invariant” representations of faces and other objects, meaning representations that are indifferent to an object’s orientation in space, its distance from the viewer, or its location in the visual field. In 2010, Winrich Freiwald, an associate professor at the Rockefeller University, published a study describing the neuroanatomy of macaque monkeys’ face-recognition mechanism in much greater detail.
The study revealed that information from the monkeys’ optic nerves passes through a series of brain regions, each of which is less sensitive to face orientation than the last. Neurons in the first region fire only in response to particular face orientations; neurons in the final region fire regardless of the face’s orientation, an invariant representation.
Neurons in an intermediate region, however, appear to be “mirror symmetric”: they are sensitive to the angle of face rotation but not to its direction. In the first region, one cluster of neurons will fire if a face is rotated 45 degrees to the left, and a different cluster will fire if it’s rotated 45 degrees to the right. In the final region, the same cluster of neurons will fire whether the face is rotated 30 degrees, 45 degrees, 90 degrees, or anywhere in between. But in the intermediate region, a particular cluster of neurons will fire if the face is rotated 45 degrees in either direction, another if it’s rotated 30 degrees, and so on.
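To make that progression concrete, here is a minimal toy sketch in Python. It is not the CBMM model itself; the Gaussian tuning curves, angles, and pooling scheme are illustrative assumptions, chosen only to show how pooling view-specific responses can produce first a mirror-symmetric response and then a fully view-invariant one.

```python
import numpy as np

# Toy illustration (not the CBMM model): three stages of tuning to head rotation.

def view_specific(angle, preferred, width=15.0):
    """Early-stage unit: fires only near one signed rotation angle (e.g. +45 degrees)."""
    return np.exp(-((angle - preferred) ** 2) / (2 * width ** 2))

def mirror_symmetric(angle, preferred, width=15.0):
    """Intermediate-stage unit: pools a view-specific unit with its mirrored twin,
    so it responds to +45 and -45 degrees alike, but not to other angles."""
    return max(view_specific(angle, preferred, width),
               view_specific(angle, -preferred, width))

def view_invariant(angle, preferred_angles=range(-90, 91, 15), width=15.0):
    """Late-stage unit: pools over units tuned to every rotation angle,
    so it responds to the face at any orientation."""
    return max(view_specific(angle, p, width) for p in preferred_angles)

for test_angle in (-45, 45, 90):
    print(test_angle,
          round(view_specific(test_angle, 45), 2),    # fires only at +45
          round(mirror_symmetric(test_angle, 45), 2), # fires at +45 and -45
          round(view_invariant(test_angle), 2))       # fires at any angle
```

Running the loop shows the pattern described above: the view-specific unit responds only at +45 degrees, the mirror-symmetric unit responds equally at +45 and -45 degrees, and the invariant unit responds at every test angle.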
This is the behavior that the researchers’ machine-learning system reproduced. “It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”