A research team at Stanford's Wu Tsai Neurosciences Institute has made a major stride in using AI to replicate how the brain organizes sensory information to make sense of the world, opening up new frontiers for virtual neuroscience.

Watch the seconds tick by on a clock and, in visual regions of your brain, neighboring groups of angle-selective neurons will fire in sequence as the second hand sweeps around the clock face. These cells form beautiful "pinwheel" maps, with each segment representing the perception of a different angle.

Other visual areas of the brain contain maps of more complex and abstract visual features, such as the distinction between images of familiar faces vs. places, which activate distinct neural "neighborhoods." Such functional maps can be found across the brain, both delighting and confounding neuroscientists, who have long wondered why the brain should have evolved a map-like layout that only modern science can observe.

To address this question, the Stanford team developed a new kind of AI algorithm, a topographic deep artificial neural network (TDANN), that uses just two rules: naturalistic sensory inputs and spatial constraints on connections. They found that it successfully predicts both the sensory responses and the spatial organization of multiple parts of the human brain's visual system. After seven years of extensive research, the findings were published in a new paper, "A unifying framework for functional organization in early and higher ventral visual cortex."
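To give a flavor of how a spatial constraint can shape a network trained on natural images, the sketch below adds an auxiliary loss term that pushes units assigned nearby positions on a simulated cortical sheet to respond similarly. This is a minimal illustration of the general idea, not the authors' published implementation; the function names, the 2-D grid of unit positions, and the loss weighting are all assumptions.

```python
# Minimal sketch of a topographic ("spatial") loss in PyTorch.
# Names, the 2-D position layout, and the weighting are illustrative only,
# not taken from the published TDANN code.
import torch


def spatial_loss(activations: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Encourage units that sit close together on a simulated cortical sheet
    to have correlated responses.

    activations: (batch, n_units) responses of one layer to a batch of natural images.
    positions:   (n_units, 2) assumed 2-D coordinates of each unit on the sheet.
    """
    # Pairwise response correlations between units, computed across the batch.
    corr = torch.corrcoef(activations.T)          # (n_units, n_units)

    # Pairwise distances on the sheet, mapped to a target similarity
    # that decays with distance (nearby units should be more similar).
    dist = torch.cdist(positions, positions)      # (n_units, n_units)
    target = 1.0 / (1.0 + dist)

    # Penalize mismatch between response similarity and the distance-based target.
    return ((corr - target) ** 2).mean()


# Hypothetical usage during training on naturalistic images:
#   task_loss = criterion(model(images), labels)
#   loss = task_loss + alpha * spatial_loss(layer_activations, unit_positions)
```

Under an objective like this, units that end up near each other on the simulated sheet are nudged toward similar tuning, which is one simple way map-like organization can emerge from naturalistic input plus a spatial cost.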