Cerebral processing of affective nonverbal vocalizations: a combined fMRI and MEG study.
Basic information
- Grant number: BB/J003654/1
- Principal investigator:
- Amount: $335,200
- Host institution:
- Host institution country: United Kingdom
- Project type: Research Grant
- Fiscal year: 2012
- Funding country: United Kingdom
- Start and end dates: 2012 to (no data)
- Project status: Completed
- Source:
- Keywords:
Project summary
Recognizing and interpreting emotions in other persons is crucial for social interactions. In particular, people of all cultures are able to recognize emotions in vocalizations without speech, such as laughs, cries or screams of fear. Yet how our brain analyses emotion in voices remains poorly understood compared to, for example, how we perceive emotion in faces. In this project we combine a range of advanced techniques to precisely map the brain network involved in recognizing emotions from the voice and to determine its exact time-course. We will first use recent morphing technology to manipulate a database of affective voices in order to generate new vocalizations with more or less intense, possibly ambiguous expressions (e.g., pleasure mixed with fear). We will then measure a number of parameters in the vocalizations thus generated. A large group of listeners will be asked to rate each vocalization on what emotion they think it expresses, how intense it is, and how positive or negative it is. We will also precisely measure important physical properties of the sounds, such as their intensity and pitch. In parallel we will use state-of-the-art, complementary brain imaging techniques (functional magnetic resonance imaging and magneto-encephalography) to measure brain activity in a smaller number of participants while they listen to the affective vocalizations and perform simple tasks: a Male/Female gender categorisation task and a Fear/Anger/Pleasure emotion categorisation task. The combination of these two brain imaging techniques will allow measurements of cerebral activity with high temporal (millisecond) and spatial (millimetre) accuracy. Analysis of this high-resolution, high-density dataset will use the most recent algorithms to address three important, unresolved questions. First, we want to differentiate the parts of the brain that react to the acoustics of the sounds (a vocalization of pleasure sounds different from an angry shout) from the parts that reflect genuine affective value (these two vocalizations express different affective states in the speaker). This important distinction has generally not been adequately addressed in past studies. Second, we want to better understand, in those parts of the brain genuinely related to emotional processing, exactly which parameter they react to: the emotional category (e.g., a response to fear but not to pleasure), the negative/positive dimension (e.g., a response to all threatening sounds but not to happy or joyful sounds), or the task being performed by the subject (e.g., a response during an emotional task but not during a gender task)? Third, we want to better understand the time-course of processing of affective information at the different nodes of the network of brain areas involved in processing these vocalizations. If what has been observed with facial expressions of emotion also applies to voices, then we should observe a very fast, probably unconscious reaction to affective vocalizations, in a 'fast route' that bypasses detailed analysis for the sake of speed. Advanced algorithms will allow us to determine the precise time-course of neuronal activity in different parts of the brain network and to understand how different emotional parameters affect the speed of the brain's reaction. Overall, the results of this project will allow us to understand how the brain processes a socially central dimension of voices, the emotion they carry, and what the key parameters are.
They will contribute to the advancement of knowledge, but also, in the longer term, to a better understanding of impairments of emotion processing in pathologies such as autism or schizophrenia. They are also of high potential importance for the growing industry of automated voice processing, as engineers need to know not only how best to automatically recognize emotions in people but also how best to generate realistic emotions in artificial voices.
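As an illustration of the kind of acoustic measurement mentioned above (the intensity and pitch of each vocalization), here is a minimal sketch in Python, assuming the stimuli are available as WAV files and that the librosa library is used; the file name and parameter choices are illustrative placeholders, not the project's actual analysis pipeline.

# Minimal sketch: per-file intensity (RMS) and pitch (mean F0) descriptors.
# Assumptions: vocalizations stored as WAV files; librosa and numpy installed.
import numpy as np
import librosa

def describe_vocalization(path):
    # Load the sound at its native sampling rate.
    y, sr = librosa.load(path, sr=None)
    # Frame-wise RMS energy as a simple proxy for intensity.
    rms = librosa.feature.rms(y=y)[0]
    # Fundamental-frequency (pitch) track estimated with the pYIN algorithm;
    # unvoiced frames are returned as NaN and excluded from the mean.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    return {
        "duration_s": len(y) / sr,
        "mean_rms": float(np.mean(rms)),
        "mean_f0_hz": float(np.nanmean(f0)),
    }

# Hypothetical usage:
# print(describe_vocalization("pleasure_01.wav"))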
Project outcomes
Journal articles (5)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Single-subject analyses of magnetoencephalographic evoked responses to the acoustic properties of affective non-verbal vocalizations.
- DOI: 10.3389/fnins.2014.00422
- Publication date: 2014
- Journal:
- Impact factor: 4.3
- Authors: Salvia E; Bestelmeyer PE; Kotz SA; Rousselet GA; Pernet CR; Gross J; Belin P
- Corresponding author: Belin P
Voice selectivity in the temporal voice area despite matched low-level acoustic cues.
- DOI: 10.1038/s41598-017-11684-1
- Publication date: 2017-09-14
- Journal:
- Impact factor: 4.6
- Authors: Agus TR; Paquette S; Suied C; Pressnitzer D; Belin P
- Corresponding author: Belin P
The human voice areas: Spatial organization and inter-individual variability in temporal and extra-temporal cortices.
- DOI: 10.1016/j.neuroimage.2015.06.050
- Publication date: 2015-10-01
- Journal:
- Impact factor: 5.7
- Authors: Pernet CR; McAleer P; Latinus M; Gorgolewski KJ; Charest I; Bestelmeyer PE; Watson RH; Fleming D; Crabbe F; Valdes-Sosa M; Belin P
- Corresponding author: Belin P
Other publications by Pascal Belin
Voice perception in blind persons: A functional magnetic resonance imaging study
- DOI:
- Publication date: 2009
- Journal:
- Impact factor: 0
- Authors: F. Gougoux; Pascal Belin; P. Voss; Franco Lepore; M. Lassonde; R. Zatorre
- Corresponding author: R. Zatorre
Defective export in Escherichia coli caused by DsbA'-PhoA hybrid proteins whose DsbA' domain cannot fold into a conformation resistant to periplasmic proteases
- DOI:
- Publication date: 1997
- Journal:
- Impact factor: 3.2
- Authors: Agnès Guigueno; Pascal Belin; Paul L. Boquet
- Corresponding author: Paul L. Boquet
From static to dynamic: A validated video database of facial expressions
- DOI: 10.1016/j.bandc.2008.02.076
- Publication date: 2008-06-01
- Journal:
- Impact factor:
- Authors: Cynthia Roy; Isabelle Fortin; Catherine Ethier-Majcher; Sylvain Roy; Frédéric Gosselin; Pascal Belin
- Corresponding author: Pascal Belin
Neuropsychology: Pitch discrimination in the early blind
- DOI:
- Publication date: 2004
- Journal:
- Impact factor: 64.8
- Authors: F. Gougoux; Franco Lepore; M. Lassonde; P. Voss; R. Zatorre; Pascal Belin
- Corresponding author: Pascal Belin
Pitch discrimination in the early blind
- DOI: 10.1038/430309a
- Publication date: 2004-07-15
- Journal:
- Impact factor: 48.500
- Authors: Frédéric Gougoux; Franco Lepore; Maryse Lassonde; Patrice Voss; Robert J. Zatorre; Pascal Belin
- Corresponding author: Pascal Belin
Other grants held by Pascal Belin
Lifelong changes in the cerebral processing of social signals
- Grant number: G1001841/1
- Fiscal year: 2012
- Funding amount: $335,200
- Project type: Research Grant
Audiovisual integration of identity information from the face and voice: behavioural fMRI and MEG studies.
- Grant number: BB/I022287/1
- Fiscal year: 2012
- Funding amount: $335,200
- Project type: Research Grant
The perception of voice gender and identity: a combined behavioural electrophysiological and neuroimaging approach.
- Grant number: BB/E003958/1
- Fiscal year: 2007
- Funding amount: $335,200
- Project type: Research Grant
Similar NSFC (National Natural Science Foundation of China) grants
Sirt1 sustains SHH signalling by regulating Gli3 processing to promote medulloblastoma development, and the underlying mechanisms
- Grant number: 82373900
- Approval year: 2023
- Funding amount: CNY 480,000
- Project type: General Program
Novel inhibitors targeting Gli3 processing to regulate the Shh signalling pathway in the treatment of paediatric medulloblastoma, and their mechanisms of action
- Grant number:
- Approval year: 2021
- Funding amount: CNY 300,000
- Project type: Young Scientists Fund
Theory and techniques of RF and baseband compensation for ultra-high-frequency ultra-wideband systems
- Grant number: 61001097
- Approval year: 2010
- Funding amount: CNY 220,000
- Project type: Young Scientists Fund
Research on stack-type all-optical buffering
- Grant number: 60977003
- Approval year: 2009
- Funding amount: CNY 350,000
- Project type: General Program
Identification of cell-cycle kinases involved in transcriptional regulation and study of their mechanisms of action
- Grant number: 30970625
- Approval year: 2009
- Funding amount: CNY 320,000
- Project type: General Program
Non-negative matrix factorization and its applications in blind signal processing
- Grant number: 60874061
- Approval year: 2008
- Funding amount: CNY 320,000
- Project type: General Program
Interaction of 1A6/DRIM with NIR and its functional regulation of NIR
- Grant number: 30771224
- Approval year: 2007
- Funding amount: CNY 340,000
- Project type: General Program
Similar overseas grants
Computational and neural signatures of interoceptive learning in anorexia nervosa
- Grant number: 10824044
- Fiscal year: 2024
- Funding amount: $335,200
- Project type:
Individual differences in affective processing and implications for animal welfare: a reaction norm approach
- Grant number: BB/X014673/1
- Fiscal year: 2024
- Funding amount: $335,200
- Project type: Research Grant
Use of sentiment analysis in SMS and social media to understand HIV prevention needs among young women in Kenya
- Grant number: 10761910
- Fiscal year: 2023
- Funding amount: $335,200
- Project type:
Active Social Vision: How the Brain Processes Visual Information During Natural Social Perception
- Grant number: 10608251
- Fiscal year: 2023
- Funding amount: $335,200
- Project type:
Pain sensitivity and endogenous pain modulation in autistic adults
- Grant number: 10574757
- Fiscal year: 2023
- Funding amount: $335,200
- Project type:
Cross-border emotional expression analysis using image information processing
- Grant number: 23K16925
- Fiscal year: 2023
- Funding amount: $335,200
- Project type: Grant-in-Aid for Early-Career Scientists
Validation of Neuropilin-1 receptor signaling in nociceptive processing
- Grant number: 10774563
- Fiscal year: 2023
- Funding amount: $335,200
- Project type:
Transdiagnostic Reward System Dynamics and Social Disconnection in Suicide
- Grant number: 10655760
- Fiscal year: 2023
- Funding amount: $335,200
- Project type:
Optimizing Patient-Centered Opioid Tapering with Mindfulness-Oriented Recovery Enhancement
- Grant number: 10715903
- Fiscal year: 2023
- Funding amount: $335,200
- Project type: