Meta Tribe v2: Foundation Model of Human Brain Responses to Sound, Sight, and Language
Meta AI has released Tribe v2, a foundation model that captures human brain responses across audio, visual, and language modalities in a single unified representation. The work enables in-silico neuroscience: simulating how a human brain responds to a stimulus without running a new physical experiment. Meta has publicly released the model weights, the full research paper, and the source code, alongside an interactive mobile demo at aidemos.atmeta.com/tribev2.
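To make the idea of in-silico experimentation concrete, the sketch below shows the general shape of such a trimodal encoder: per-modality stimulus features projected into a shared space, fused, and read out as predicted voxel responses. Everything here is an illustrative assumption, not the released Tribe v2 API: the class name, the feature dimensions, the averaging fusion, and the linear readout are all placeholders; the actual architecture and interface are defined by the public code.

```python
# Hypothetical sketch of an in-silico experiment: a tiny trimodal encoder that
# maps audio, visual, and language features to predicted fMRI voxel responses.
# All names, shapes, and the fusion scheme are illustrative assumptions, not
# the released Tribe v2 API.
import torch
import torch.nn as nn


class TinyTrimodalEncoder(nn.Module):
    """Fuses per-modality features into one representation, then predicts voxels."""

    def __init__(self, d_audio=128, d_visual=256, d_text=64, d_shared=128, n_voxels=1000):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.audio_proj = nn.Linear(d_audio, d_shared)
        self.visual_proj = nn.Linear(d_visual, d_shared)
        self.text_proj = nn.Linear(d_text, d_shared)
        # Linear readout from the fused representation to per-voxel responses.
        self.readout = nn.Linear(d_shared, n_voxels)

    def forward(self, audio, visual, text):
        # Simple fusion: average the projected modality embeddings.
        fused = (self.audio_proj(audio) + self.visual_proj(visual) + self.text_proj(text)) / 3
        return self.readout(torch.relu(fused))


if __name__ == "__main__":
    model = TinyTrimodalEncoder()
    # One fake stimulus: precomputed features for each modality (batch of 1).
    audio = torch.randn(1, 128)
    visual = torch.randn(1, 256)
    text = torch.randn(1, 64)
    predicted_bold = model(audio, visual, text)
    print(predicted_bold.shape)  # torch.Size([1, 1000])
```

In a real experiment, the feature vectors would come from audio, video, and text encoders applied to an actual stimulus, and the readout would be fit to recorded fMRI data; the released weights are what let researchers skip that data-collection step.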
Why It Matters
A publicly released foundation model of brain responses across three modalities opens this line of neuroscience research to teams without access to an fMRI scanner, and it stands as Meta's most significant open neuroscience contribution to date. Downstream applications include brain-computer interface design, cognitive load modeling, and multimodal AI evaluation.