Exploring connections between auditory hallucinations and language model structures and functions
6 June 2024
Abstract
Auditory hallucinations are a hallmark symptom of mental disorders such as schizophrenia, psychosis, and bipolar disorder. The biological basis of auditory perception and hallucination, however, is not well understood. Understanding hallucinations may broadly illuminate how our brains work, namely by making predictions about stimuli and the environments we navigate. In this work, we use recently developed language models to advance the understanding of auditory hallucinations. Bio-inspired large language models (LLMs) such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT) generate the next word from previously generated words in the embedding space and their pre-trained library, with or without external input. Generative mechanisms in GPT, such as self-attention, can be analogously associated with the neurophysiological sources of hallucinations. Functional imaging studies have revealed that hyperactivity of the auditory cortex and disrupted coupling between auditory and verbal network activity may underlie the etiology of auditory hallucinations. Key areas involved in auditory processing suggest that regions supporting verbal working memory and language processing are also associated with hallucinations; auditory hallucinations are accompanied by decreased activity in these regions, including the superior temporal and inferior parietal cortices. Parallels between auditory processing and LLM transformer architecture may help decode brain functions underlying meaning assignment, contextual embedding, and hallucination mechanisms. Furthermore, an improved understanding of neurophysiological function and brain architecture would bring us one step closer to creating human-like intelligence.
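The self-attention mechanism the abstract refers to can be illustrated with a minimal sketch of scaled dot-product self-attention in NumPy. This is a generic illustration of the transformer building block, not code from the paper; the token count, embedding dimension, and weight matrices below are arbitrary toy values. Each token's output is a context-weighted mixture of every token's value vector, which is the "contextual embedding" behavior drawn on in the analogy to auditory processing.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings.

    X: (tokens, d_model) input embeddings.
    Wq, Wk, Wv: projection matrices to queries, keys, and values.
    Returns the attended outputs and the attention weight matrix.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (tokens, tokens) query-key similarity
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ V, weights

# Toy example: 4 "tokens" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

In a full GPT-style decoder, a causal mask restricts each row of `weights` to earlier tokens, so the next word is predicted only from previously generated context; that restriction is omitted here for brevity.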
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Janerra D. Allen, Luke Xia, L. Elliot Hong, and Fow-Sen Choa "Exploring connections between auditory hallucinations and language model structures and functions", Proc. SPIE 13059, Smart Biomedical and Physiological Sensor Technology XXI, 130590A (6 June 2024); https://doi.org/10.1117/12.3013964
KEYWORDS: Brain, Transformers, Machine learning, Auditory cortex, Neural networks, Data modeling, Artificial intelligence