Allocentric and egocentric representations of sound space in auditory cortex


Basic information

  • Grant number:
    BB/R004420/1
  • Principal investigator:
  • Amount:
    $665,600
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project type:
    Research Grant
  • Fiscal year:
    2017
  • Funding country:
    United Kingdom
  • Duration:
    2017 to (no data)
  • Project status:
    Completed

Project abstract

We can recognise and localise the many sounds that make up our everyday environment. For example, when sitting outside a café, we can hear the rumble of a plane overhead, the pop music coming from the shop across the street, the chink of cutlery from the table behind us and the voice of the friend to our right with whom we're having coffee. However, the cochlea, which encodes the frequency composition of sounds, does not provide information about where a sound comes from. Instead, sound location must be computed in the brain by comparing the signals arriving at the two ears. Auditory space must therefore be constructed from measurements relative to the head - a sound from our left will reach our left ear sooner than our right ear, and will be louder in the left ear than the right because of the shadow cast by our head. Nonetheless, we can intuitively describe sound location in several ways: relative to ourselves (the toddler is giggling on my left) or relative to the world (the clock is chiming on the fireplace). Indeed, as we move around the world, our perception of sound sources in the world remains stable - if we turn our heads towards a ringing phone, we don't perceive the phone rotating around us; rather, we are aware that we are turning towards it. This stability is remarkable because the sounds arriving at the ears are very different before and after we move - if I turn my head 180 degrees, a sound that originally arrived first at my left ear may arrive at my right ear first after turning. To achieve this, the brain must distinguish changes in sounds caused by our own movement and compensate for them to keep the world stable. Perceptual stability, together with our intuitive experience of sound location in the world, suggests that the brain can represent both head-centered and world-centered space. We recently developed new methods to determine whether spatial sensitivity in the brain is head-centered (egocentric) or world-centered (allocentric).
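The binaural comparison described above can be made concrete with a simple sketch. The code below is purely illustrative and not part of the project: it uses the classic Woodworth rigid-sphere approximation to estimate the interaural time difference for a source at a given azimuth, with a typical human head radius and speed of sound as assumed values.

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875,
                               speed_of_sound_m_s=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (ITD), in seconds, for a source at azimuth_deg degrees
    from straight ahead (0 = front, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    # Extra path length around a rigid sphere: r * (sin(theta) + theta)
    return (head_radius_m / speed_of_sound_m_s) * (math.sin(theta) + theta)
```

With these values a source straight ahead gives an ITD of zero, while a source directly to one side yields roughly 650 microseconds - the order of magnitude of the timing cue the brain actually works with.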
In previous studies of spatial processing, subjects were held static at the centre of a ring of speakers while neural activity was recorded. Because the subject never moves, sound location relative to the head and relative to the world are always the same, so it is impossible to tell which space a neuron represents. In our study, however, subjects freely explored an environment while turning and moving their heads, so that sound location relative to the head and the world could differ and we could determine whether neural coding was head- or world-centered. For the first time, we confirmed what most scientists had assumed: many neurons in auditory cortex represent sound location relative to the head and are thus egocentric. But surprisingly, we also discovered a smaller population of allocentric neurons that represent sound location in the world regardless of head position or direction. In this project we seek to better understand how egocentric and allocentric neurons are organised in the brain and how they contribute to sound perception. We will use a grid of speakers to map in unprecedented detail how space is represented by egocentric and allocentric neurons. We will also ask how animals experience sound location by training them to perform either a head-centered or a world-centered sound localisation task. We will then compare neurons while animals are actively performing these tasks or passively listening, to understand how attention changes spatial processing. Finally, we will investigate how experience influences how the brain represents sound source location by comparing neurons in animals trained on each task or given no training. The ability to localise a sound in space is crucial for survival and communication: when listening to one voice against a noisy background, our use of spatial cues significantly improves our ability to pick the target voice out from the background.
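The distinction between head-centered and world-centered coding in the freely moving paradigm comes down to a simple change of reference frame. As an illustrative sketch (the function name and angle conventions are our own, not the study's), the egocentric azimuth of a source is its world azimuth minus the animal's current head direction:

```python
def head_centered_azimuth(world_azimuth_deg, head_direction_deg):
    """Convert a world-centered (allocentric) source angle into a
    head-centered (egocentric) angle, wrapped into (-180, 180] degrees."""
    diff = (world_azimuth_deg - head_direction_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff
```

When a subject is held facing the speaker ring, head direction never changes, so the two frames are indistinguishable; once the head is free to turn, the same world position maps to different head-centered angles, which is what allows the two codes to be separated.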
Understanding how the brain represents auditory space may influence the design of hearing aids and inspire improvements in speech recognition.

Project outcomes

Journal articles (7)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Sound localization of world and head-centered space in ferrets
  • DOI:
    10.1101/2021.09.15.460425
  • Publication date:
    2021
  • Journal:
  • Impact factor:
    0
  • Authors:
    Town S
  • Corresponding author:
    Town S
Behaviourally modulated hippocampal theta oscillations in the ferret persist during both locomotion and immobility.
  • DOI:
    10.1038/s41467-022-33507-2
  • Publication date:
    2022-10-07
  • Journal:
  • Impact factor:
    16.6
  • Authors:
  • Corresponding author:
Computing Sound Space: World-centered Sound Localization in Ferrets
  • DOI:
  • Publication date:
    2019
  • Journal:
  • Impact factor:
    0
  • Authors:
    Town SM
  • Corresponding author:
    Town SM

Other publications by Jennifer Bizley

Other grants held by Jennifer Bizley

Selective Attention: How does Neural Response Modulation in Auditory Cortex Enable Auditory Scene Analysis?
  • Grant number:
    BB/N001818/1
  • Fiscal year:
    2016
  • Funding amount:
    $665,600
  • Project type:
    Research Grant
Identifying the signal in the noise: a systems approach for examining invariance in auditory cortex
  • Grant number:
    BB/H016813/1
  • Fiscal year:
    2011
  • Funding amount:
    $665,600
  • Project type:
    Research Grant
Identifying the signal in the noise: a systems approach for examining invariance in auditory cortex
  • Grant number:
    BB/H016813/2
  • Fiscal year:
    2011
  • Funding amount:
    $665,600
  • Project type:
    Research Grant

相似国自然基金

听力正常与听觉障碍人群脑中自我参照系统与环境参照系统之间的交互作用
  • 批准号:
    31070994
  • 批准年份:
    2010
  • 资助金额:
    32.0 万元
  • 项目类别:
    面上项目

Similar overseas grants

Multimodal analysis of human work for the introduction of autonomous robots
  • Grant number:
    23K03751
  • Fiscal year:
    2023
  • Funding amount:
    $665,600
  • Project type:
    Grant-in-Aid for Scientific Research (C)
PhD in Compressed Sensing for Egocentric Video
  • Grant number:
    2894973
  • Fiscal year:
    2023
  • Funding amount:
    $665,600
  • Project type:
    Studentship
Establishing novel measures of hand function at home after cervical SCI
  • Grant number:
    478940
  • Fiscal year:
    2023
  • Funding amount:
    $665,600
  • Project type:
    Operating Grants
Age-specific representations of social contact networks using egocentric survey data
  • Grant number:
    2737744
  • Fiscal year:
    2022
  • Funding amount:
    $665,600
  • Project type:
    Studentship
Implementing clinical decision support for improved outpatient neurorehabilitation post-stroke and spinal cord injury through the integration of wearable technology and deep learning algorithms.
  • Grant number:
    475572
  • Fiscal year:
    2022
  • Funding amount:
    $665,600
  • Project type:
    Studentship Programs
The pull of reality: Egocentric bias in adult theory of mind
  • Grant number:
    ES/T012528/1
  • Fiscal year:
    2021
  • Funding amount:
    $665,600
  • Project type:
    Research Grant
PhD in Computer Science - Video Object Segmentation (VOS) In Egocentric Video
  • Grant number:
    2615063
  • Fiscal year:
    2021
  • Funding amount:
    $665,600
  • Project type:
    Studentship
Audio-Visual Egocentric Video Understanding
  • Grant number:
    2615061
  • Fiscal year:
    2021
  • Funding amount:
    $665,600
  • Project type:
    Studentship
CRCNS: US-Israel - The egocentric-allocentric transformation of the cognitive map
  • Grant number:
    10227807
  • Fiscal year:
    2020
  • Funding amount:
    $665,600
  • Project type:
CRCNS: US-Israel - The egocentric-allocentric transformation of the cognitive map
  • Grant number:
    10657540
  • Fiscal year:
    2020
  • Funding amount:
    $665,600
  • Project type: