The perception of voice gender and identity: a combined behavioural electrophysiological and neuroimaging approach.
Basic Information
- Grant number: BB/E003958/1
- Principal investigator:
- Amount: $422,000
- Host institution:
- Host institution country: United Kingdom
- Project type: Research Grant
- Fiscal year: 2007
- Funding country: United Kingdom
- Duration: 2007 to (no data)
- Project status: Completed
- Source:
- Keywords:
Project Summary
How do we know whether a voice belongs to a male or a female individual? How can we recognize a person's voice? Despite the apparent ease with which we solve these problems, and the importance of these abilities in our everyday communication, little is known about the cerebral mechanisms involved. The proposed research aims to investigate our perception of voice gender and voice identity by combining state-of-the-art techniques in sound synthesis and brain imaging. Recent 'auditory morphing' techniques will be used to generate series of natural-sounding synthetic voices that change progressively from one voice to another. Two types of morphs will be generated, corresponding to the two parts of the proposed research. One type of morph will be generated between male and female voices ('gender morph') and will be used in the first part of the research to investigate how voice gender is perceived. The other type of morph will be generated between a voice A and a voice B with which participants will be familiarized ('identity morph') and will be used in the second part of the research to investigate how we recognize a person's identity from the voice. The two parts of the research - Part one on gender and Part two on identity - will follow exactly the same plan. In each part, a group of normal adult volunteers - half male and half female - will be recruited. Each participant will first be played synthetic voices from the morphs and asked to decide, for each voice, whether it sounds more like a male or a female voice (Part one) or more like voice A or voice B (Part two). This will allow us to determine, for each participant, the voice on the morph that is the most 'ambiguous', i.e. the one classified as male on half the trials and as female on the other half (Part one), or as voice A on half the trials and as voice B on the other half (Part two).
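Locating each participant's 'ambiguous' voice amounts to estimating the 50% point of a psychometric curve from the classification responses. A minimal sketch of that estimation step, assuming a logistic psychometric function and simulated (not real) response proportions; the brute-force grid fit and all parameter values are illustrative, not the project's actual analysis:

```python
import numpy as np

def fit_psychometric(morph_steps, p_female):
    """Estimate the 50% ('ambiguous') point x0 of a logistic psychometric
    curve p = 1 / (1 + exp(-k * (x - x0))) by brute-force grid search
    (avoids any dependency beyond NumPy)."""
    best_params, best_sse = (None, None), np.inf
    for x0 in np.linspace(morph_steps.min(), morph_steps.max(), 201):
        for k in np.linspace(0.1, 20.0, 100):
            pred = 1.0 / (1.0 + np.exp(-k * (morph_steps - x0)))
            sse = np.sum((pred - p_female) ** 2)
            if sse < best_sse:
                best_params, best_sse = (x0, k), sse
    return best_params

# Simulated classification data: proportion of 'female' responses at
# each of 11 morph steps (0.0 = fully male voice, 1.0 = fully female).
steps = np.linspace(0.0, 1.0, 11)
true_x0, true_k = 0.55, 10.0          # invented 'true' observer parameters
p_female = 1.0 / (1.0 + np.exp(-true_k * (steps - true_x0)))

x0, k = fit_psychometric(steps, p_female)
print(x0)  # recovered ambiguous point, close to 0.55
```

In a real experiment `p_female` would come from the participant's binary responses, and a maximum-likelihood fit (e.g. via SciPy) would replace the grid search.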
Then participants will be played pairs of voices drawn from the morph, either from the same side of the ambiguous voice or from opposite sides of it, and will be asked to decide whether the two voices are the same or different. We expect that, on average, participants will perform better when the two voices are taken from opposite sides of the ambiguous voice. This would be evidence that they perceive the morphs in a 'categorical' manner, i.e. as two distinct categories rather than as a continuum. Such 'categorical perception' has already been demonstrated for morphs between different syllables - or, in vision, between different faces - but never for the gender or identity of voices. Two complementary techniques will then be used to measure participants' brain response to the synthetic voices: electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). These two techniques will determine which parts of the participants' brain are sensitive to changes in voice gender (Part one) or voice identity (Part two). The combination of EEG and fMRI will allow a very precise characterization of these sensitive brain regions both in space (where exactly in the brain they are) and in time (when exactly they respond after presentation of the voice). Importantly, using synthetic voices from the morphs will ensure that the brain regions identified are truly sensitive to the perceived gender (Part one) or identity (Part two) of the voices, not merely to physical characteristics of the voices. Overall, this research will increase our understanding of the way the auditory part of the brain analyses voice sounds to allow us to rapidly and accurately extract information about a person's gender and identity simply by hearing his/her voice.
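The predicted discrimination advantage for pairs straddling the ambiguous voice can be illustrated with a toy model in which perception blends the raw acoustic position on the morph with a logistic category response; the boundary, slope, and blending weight below are invented for illustration:

```python
import math

def perceived(x, boundary=0.5, slope=12.0, weight=0.6):
    """Toy perceptual mapping: blend the raw morph position x (0-1)
    with a logistic category response centred on the gender boundary.
    All parameter values here are invented for illustration."""
    category = 1.0 / (1.0 + math.exp(-slope * (x - boundary)))
    return (1.0 - weight) * x + weight * category

# Two voice pairs with the SAME physical separation (0.2 morph units):
within  = abs(perceived(0.6) - perceived(0.8))  # both on the same side
between = abs(perceived(0.4) - perceived(0.6))  # straddling the boundary
print(between > within)  # the straddling pair is perceptually farther apart
```

Under this model, equal physical steps on the morph are stretched near the category boundary and compressed away from it, which is exactly the same/different performance pattern the experiment tests for.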
This is important because very little is known about this aspect of brain function; knowing more in this area will have important implications for our understanding of how speech and language evolved in the human brain, and of how brain diseases with an adverse impact on audio-visual communication, such as aphasia or autism, could be better diagnosed and treated.
Project Outcomes
Journal articles (10)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Similarities in face and voice cerebral processing
- DOI: 10.1080/13506285.2017.1339156
- Published: 2017-01-01
- Journal:
- Impact factor: 2
- Authors: Belin, Pascal
- Corresponding author: Belin, Pascal
"Hearing faces and seeing voices": Amodal coding of person identity in the human brain.
- DOI: 10.1038/srep37494
- Published: 2016-11-24
- Journal:
- Impact factor: 4.6
- Authors: Awwad Shiekh Hasan B;Valdes-Sosa M;Gross J;Belin P
- Corresponding author: Belin P
Electrophysiological evidence for an early processing of human voices.
- DOI: 10.1186/1471-2202-10-127
- Published: 2009-10-20
- Journal:
- Impact factor: 2.4
- Authors: Charest I;Pernet CR;Rousselet GA;Quiñones I;Latinus M;Fillion-Bilodeau S;Chartrand JP;Belin P
- Corresponding author: Belin P
Other publications by Pascal Belin
Voice perception in blind persons: A functional magnetic resonance imaging study
- DOI:
- Published: 2009
- Journal:
- Impact factor: 0
- Authors: F. Gougoux;Pascal Belin;P. Voss;Franco Lepore;M. Lassonde;R. Zatorre
- Corresponding author: R. Zatorre
Defective export in Escherichia coli caused by DsbA'-PhoA hybrid proteins whose DsbA' domain cannot fold into a conformation resistant to periplasmic proteases
- DOI:
- Published: 1997
- Journal:
- Impact factor: 3.2
- Authors: Agnès Guigueno;Pascal Belin;Paul L. Boquet
- Corresponding author: Paul L. Boquet
From static to dynamic: A validated video database of facial expressions
- DOI: 10.1016/j.bandc.2008.02.076
- Published: 2008-06-01
- Journal:
- Impact factor:
- Authors: Cynthia Roy;Isabelle Fortin;Catherine Ethier-Majcher;Sylvain Roy;Frédéric Gosselin;Pascal Belin
- Corresponding author: Pascal Belin
Neuropsychology: Pitch discrimination in the early blind
- DOI:
- Published: 2004
- Journal:
- Impact factor: 64.8
- Authors: F. Gougoux;Franco Lepore;M. Lassonde;P. Voss;R. Zatorre;Pascal Belin
- Corresponding author: Pascal Belin
Pitch discrimination in the early blind
- DOI: 10.1038/430309a
- Published: 2004-07-15
- Journal:
- Impact factor: 48.500
- Authors: Frédéric Gougoux;Franco Lepore;Maryse Lassonde;Patrice Voss;Robert J. Zatorre;Pascal Belin
- Corresponding author: Pascal Belin
Other grants by Pascal Belin
Lifelong changes in the cerebral processing of social signals
- Grant number: G1001841/1
- Fiscal year: 2012
- Amount: $422,000
- Project type: Research Grant
Audiovisual integration of identity information from the face and voice: behavioural, fMRI and MEG studies.
- Grant number: BB/I022287/1
- Fiscal year: 2012
- Amount: $422,000
- Project type: Research Grant
Cerebral processing of affective nonverbal vocalizations: a combined fMRI and MEG study.
- Grant number: BB/J003654/1
- Fiscal year: 2012
- Amount: $422,000
- Project type: Research Grant
Similar NSFC grants
Weak gravitational lensing cosmology studies based on the CFHTLenS and VOICE surveys
- Grant number: 11333001
- Approval year: 2013
- Amount: ¥3.2 million
- Project type: Key Project
Similar overseas grants
Understanding developmental trajectories among early adolescents to improve reproductive health
- Grant number: 10573888
- Fiscal year: 2023
- Amount: $422,000
- Project type:
Intracranial Electrophysiology & Anatomical Connectivity of Voice-Selective Auditory Cortex
- Grant number: 10747659
- Fiscal year: 2023
- Amount: $422,000
- Project type:
Effect of producing a desired fundamental frequency on measures of vocal hyperfunction in transgender speakers
- Grant number: 10761717
- Fiscal year: 2022
- Amount: $422,000
- Project type:
Effect of producing a desired fundamental frequency on measures of vocal hyperfunction in transgender speakers
- Grant number: 10602986
- Fiscal year: 2022
- Amount: $422,000
- Project type:
Clinical Trial of the Fit Families Multicomponent Obesity Intervention for African American Adolescents and Their Caregivers: Next Step from the ORBIT Initiative
- Grant number: 10666990
- Fiscal year: 2022
- Amount: $422,000
- Project type:
Effects of exogenous testosterone therapy on communication in gender diverse speakers
- Grant number: 10518411
- Fiscal year: 2021
- Amount: $422,000
- Project type:
Modeling perceptions of social location and decision-making to develop targeted messaging promoting HIV care engagement and ART adherence among women living with HIV in the South
- Grant number: 10325299
- Fiscal year: 2021
- Amount: $422,000
- Project type: