Objective: This study aims to investigate the brain activity patterns of deaf and hearing children during the processing of three Mandarin tones (first, second, and third tone) using resting-state functional magnetic resonance imaging (fMRI), and to identify differences in brain activation regions between deaf and hearing children during the tone processing task. Methods: Five deaf children and two hearing children were recruited as participants, and resting-state fMRI scans were acquired from each subject. The fMRI data were preprocessed and analyzed to characterize patterns of brain activity. Results: During the tone recognition task, deaf and hearing children showed differences in brain activation regions. These differences spanned multiple areas, including the precentral gyrus, superior temporal gyrus, middle occipital gyrus, supplementary motor area, superior parietal lobule, and inferior frontal gyrus, among others. The comparative analysis suggests that the brains of deaf children may exhibit heightened plasticity and compensatory mechanisms. These findings contribute to understanding the neural underpinnings of tone processing, may inform intervention strategies, and provide a theoretical foundation for the language development and rehabilitation of deaf children.
KEYWORDS: Brain, Functional magnetic resonance imaging, Image segmentation, Speech recognition, Visualization, Data processing, Head, Data acquisition, Neural networks, Information visualization
Vocal tone is an important component of language and plays a key role in language comprehension and communication. However, children with hearing loss face challenges in recognizing vocal tones. In this study, five deaf children and two children with normal hearing were recruited to compare their performance on third- and fourth-tone recognition tasks. The results revealed that (1) some brain regions that process vocal tones in deaf children did not function properly due to hearing loss; (2) deaf children may rely on different neural networks when processing vocal tone information; and (3) deaf children process vocal tone information with hemispheric lateralization.