Human brain waves and AI waves

How AI Processes Signals in a Way That Parallels the Human Brain

Recent findings from the University of California, Berkeley, reveal that artificial intelligence (AI) systems can process signals in a manner strikingly akin to how the human brain interprets speech. This discovery could shed light on the enigmatic inner workings of AI systems.

The researchers, affiliated with the Berkeley Speech and Computation Lab, placed electrodes on participants’ heads to measure brain waves as they listened to a single syllable, “bah.” The team then compared this brain activity to the signals generated by an AI system trained on English speech. According to Gasper Begus, assistant professor of linguistics at UC Berkeley and the study’s lead author, the two sets of signals are remarkably similar in shape, indicating parallel processing and encoding.
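To make “similarity in shape” concrete, the sketch below shows one generic way such a comparison could be quantified: resampling two 1-D signals to the same length and taking their peak normalized cross-correlation. This is an illustrative Python example only, not the authors’ analysis; both waveforms here are synthetic placeholders standing in for an averaged brain response and a model-derived signal.

```python
# Minimal sketch (not the study's code): quantifying how similar two
# 1-D signal shapes are via peak normalized cross-correlation.
import numpy as np

def shape_similarity(brain_wave: np.ndarray, model_wave: np.ndarray) -> float:
    """Return the peak normalized cross-correlation between two signals."""
    # Resample the model signal to the brain signal's length
    # (simple linear interpolation) so the comparison is point-by-point.
    resampled = np.interp(
        np.linspace(0, 1, len(brain_wave)),
        np.linspace(0, 1, len(model_wave)),
        model_wave,
    )
    # Zero-mean, unit-variance normalization so only the shape matters.
    a = (brain_wave - brain_wave.mean()) / brain_wave.std()
    b = (resampled - resampled.mean()) / resampled.std()
    # Full cross-correlation; the peak value reflects the best alignment.
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(corr.max())

# Toy usage with synthetic stand-ins for the two recordings.
t = np.linspace(0, 0.05, 500)                        # 50 ms window
brain = np.sin(2 * np.pi * 100 * t) * np.exp(-t * 60)
model = np.sin(2 * np.pi * 100 * t + 0.1) * np.exp(-t * 55)
print(f"peak normalized correlation: {shape_similarity(brain, model):.3f}")
```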

The study, published in the journal Scientific Reports, highlights this remarkable similarity with a side-by-side graph comparison. Begus emphasizes that no data manipulation was involved: “This is raw.”

Despite the tremendous advancements in AI systems, exemplified by the groundbreaking success of ChatGPT, researchers have had limited comprehension of how these tools function between input and output. Unraveling the black box of AI systems’ learning processes is crucial as they become increasingly integrated into various aspects of daily life, ranging from health care to education.

Working alongside co-authors Alan Zhou from Johns Hopkins University and T. Christina Zhao from the University of Washington, Begus leveraged his background in linguistics to investigate the AI learning process. The team found that when participants listened to spoken words, their brain waves closely mirrored the actual sounds of the language.

The researchers used an unsupervised neural network (an AI system) to interpret the “bah” sound and, utilizing a technique developed in the Berkeley Speech and Computation Lab, measured and documented the corresponding waves. Unlike previous studies, this research examined raw waves, which will help scientists better understand and enhance AI systems’ learning and cognition.
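Since the paper’s title refers to encoding in convolutional layers, one rough way to picture “measuring the waves” inside a network is to capture the raw activations of its convolutional layers as an audio waveform passes through. The sketch below does this with PyTorch forward hooks on a small placeholder 1-D CNN; the architecture and the random input standing in for the “bah” recording are assumptions for illustration, not the Berkeley lab’s actual model or measurement technique.

```python
# Minimal sketch (assumed placeholder architecture, not the lab's model):
# capture the raw activation "waves" of convolutional layers while an
# audio waveform passes through a 1-D CNN, using PyTorch forward hooks.
import torch
import torch.nn as nn

# Placeholder encoder: a small stack of 1-D convolutions.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=25, stride=4, padding=12),
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=25, stride=4, padding=12),
    nn.ReLU(),
)

captured = {}

def save_activation(name):
    # Forward hook: store the layer's raw output for later inspection.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Register hooks on the convolutional layers only.
model[0].register_forward_hook(save_activation("conv1"))
model[2].register_forward_hook(save_activation("conv2"))

# Stand-in for a recorded "bah" syllable: 0.5 s of audio at 16 kHz.
waveform = torch.randn(1, 1, 8000)
with torch.no_grad():
    model(waveform)

for name, activation in captured.items():
    # Each channel of an activation is itself a 1-D signal ("wave") that
    # could, in principle, be averaged and compared against a brain response.
    print(name, tuple(activation.shape))
```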

Begus is particularly interested in AI model interpretability. He believes that the black box between input and output can be deciphered, and understanding how AI signals relate to human brain activity is a crucial benchmark. This knowledge could help establish guidelines for increasingly powerful AI models, as well as provide insights into the origins of errors and biases in learning processes.

In collaboration with other researchers, Begus and his colleagues are using brain imaging techniques to compare these signals and are exploring how other languages, such as Mandarin, are decoded in the brain. Focusing on language, which involves fewer sources of variation than visual cues, offers a more concrete route to understanding AI systems.

Begus maintains that speech is the key to understanding AI model learning: “I am very hopeful that speech is the thing that will help us understand how these models are learning.” By examining the similarities and differences between AI architectures and human cognition, researchers can progress toward building mathematical models that closely resemble human thought processes, a primary goal in cognitive science.

The study: Gašper Beguš et al., Encoding of speech in convolutional layers and the brain stem based on language experience, Scientific Reports (2023). DOI: 10.1038/s41598-023-33384-9
