Neural Mechanisms of Social Communication in Parrots
Basic Information
- Award number: 10207958
- Principal investigator: Jesse Heymann Goldberg
- Amount: $689,200
- Host institution:
- Host institution country: United States
- Project type:
- Fiscal year: 2021
- Funding country: United States
- Project period: 2021-04-15 to 2024-03-31
- Project status: Completed
- Source:
- Keywords: Accelerometer, Acoustics, Active Learning, Animal Model, Animals, Area, Assimilations, Back, Beak, Behavior, Behavioral, Bilateral, Biological Models, Birds, Brain, Broca's area, Cannulas, Cell Nucleus, Code, Communication, Companions, Consent, Corpus striatum structure, Courtship, Data, Dopamine, Event, Exhibits, Feedback, Female, Fiber, Food, Friends, Friendships, Generations, Gestures, Grooming, Head, Human, Image, Implant, Individual, Joints, Learning, Machine Learning, Melopsittacus, Motivation, Motor, Names, Neurons, Neurosciences, Outcome, Output, Pair Bond, Parakeets, Partner in relationship, Pathway interactions, Pharmacology, Phase, Photometry, Primates, Psittacidae, Psychological reinforcement, Publications, Recurrence, Research, Rewards, Signal Transduction, Social Behavior, Social Interaction, Social Network, Social outcome, Socialization, Songbirds, Speech, Structure, System, Teenagers, Testing, Time, Visual, awake, base, classical conditioning, court, dopamine system, dynamic system, experimental study, fighting, fitness, follow-up, innovation, loved ones, machine learning algorithm, male, microphone, motor control, neural circuit, neuromechanism, neuroregulation, neurotransmission, nonhuman primate, outcome prediction, relating to nervous system, sensor, social, social communication, social learning, social neuroscience, sound, stereotypy, vocal control, vocal learning, vocalization
PROJECT SUMMARY
When Confucius said, “Tell me who are your friends, and I’ll tell you who you are,” he was noticing that how we
behave and communicate is shaped by who we choose to hang out with every day. We constantly mimic the
mannerisms and behaviors of friends and loved ones. Yet the neural basis of how we imitate, and more
importantly who we choose to emulate and why, is largely unknown. Parrots provide a powerful yet untapped
model system for social learning. Parrots, like humans and non-human primates, live in a specific type of ‘fission-
fusion’ social network in which making and maintaining friendships is the key to fitness. Like humans, they
selectively imitate and learn the names of their carefully selected companions. Here we aim to launch parrots as
a new animal model in systems neuroscience. In Aim 1, we will record neural activity in the vocal motor cortical
output of the song system (nucleus AAC) in pairs of budgerigars engaged in courtship interactions. In these first-
ever neural recordings from awake, behaving parrots, we are finding that AAC neurons exhibit premotor signals
for vocalizations (as expected) and for expressive gestures such as silent kissing, head-bobbing and
allogrooming. This joint vocal and gestural neural control, observed in human Broca’s area but not in songbirds,
means that what was thought to be a songbird-like ‘song system’ is actually a more general system for social
interaction. We next test the causal relationship between song system activity and social behavior. Inactivating
AAC during courtship interactions will test if/how vocalizations and gestures degrade or lose their coordination
(Aim 2.1). Inactivating frontal or posterior cortical inputs to AAC in bonded pairs will test the songbird-inspired
idea that variability and order depend on distinct cortical pathways (Aim 2.2). For each inactivation experiment,
a pair of interacting birds is conceptualized as a single dynamical system, and we will use machine-learning-
guided behavioral analysis to quantify how vocalizations and gestures change (or do not) in both the inactivated
and non-inactivated partner. Finally, in Aim 3 we will image dopamine (DA) release using fiber photometry and
genetically encoded DA sensors. Pilot data demonstrate feasibility of DA imaging in singing birds. These
experiments will test for the first time if DA signals, known to evaluate the quality of reward outcomes, similarly
evaluate social outcomes. Courtship dynamics are well suited to this question because gestural ‘requests’ to allogroom or ‘kiss’ are
rejected or accepted with visually and acoustically obvious ‘consent’ or ‘deny’ signals. Males make hundreds of
advances per day and use female feedback to learn, providing natural trial structure, within-session learning,
and ‘events’ to which we can align simultaneously recorded male and female DA signals, which may or may
not come into alignment as a pair ‘decides’ whether or not to bond. Budgerigar interactions resemble human
conversations: a back-and-forth of vocalizations and gestures that both communicate agonistic or affiliative
signals and control vocal learning and partner selection. Together, these experiments will help establish parrots
as a new model system in social neuroscience and will ready us for a follow-up R01 submission in two years.
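To make the Aim 2 analysis concrete, the following is a minimal sketch (in Python; the function names, bin sizes, and synthetic event times are hypothetical placeholders, not the project's actual pipeline) of one pair-level metric that could come out of the machine-learning-guided behavioral analysis: bin each partner's vocal and gestural event times into rate vectors and cross-correlate them, so that a flattened or shifted peak after AAC inactivation would indicate degraded coordination between the partners.

```python
# Sketch of a pair-level coordination metric for the Aim 2 inactivation experiments.
# Assumes vocal/gestural event times (in seconds) have already been extracted for
# each bird; all names, bin sizes, and the synthetic session below are hypothetical.
import numpy as np

def binned_rate(event_times_s, duration_s, bin_s=0.5):
    """Convert a list of event times into a binned event-rate vector (events/s)."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    counts, _ = np.histogram(event_times_s, bins=edges)
    return counts / bin_s

def pair_coordination(male_events_s, female_events_s, duration_s,
                      bin_s=0.5, max_lag_bins=20):
    """Normalized cross-correlation between the two partners' event rates.

    A sharp peak means the birds' vocalizations/gestures are tightly interleaved;
    comparing the peak before vs. after inactivation (in either partner) is one way
    to ask whether coordination degrades.
    """
    m = binned_rate(male_events_s, duration_s, bin_s)
    f = binned_rate(female_events_s, duration_s, bin_s)
    m = (m - m.mean()) / (m.std() + 1e-12)   # z-score so sessions are comparable
    f = (f - f.mean()) / (f.std() + 1e-12)
    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    # circular-shift approximation of the cross-correlation at each lag
    corr = np.array([np.mean(m * np.roll(f, k)) for k in lags])
    return lags * bin_s, corr

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    duration = 300.0                                  # one synthetic 5-minute session
    male = np.sort(rng.uniform(0, duration, 200))     # hypothetical male events
    # hypothetical female events that tend to answer ~0.4 s after the male's
    female = np.clip(np.sort(male[:150] + rng.normal(0.4, 0.2, 150)), 0, duration)
    lag_s, corr = pair_coordination(male, female, duration)
    print("peak coordination %.2f at lag %.1f s" % (corr.max(), lag_s[np.argmax(corr)]))
```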
项目总结
当孔子说:“告诉我你的朋友是谁,我就告诉你你是谁。”他注意到我们是如何
举止和交流取决于我们每天选择和谁在一起。我们不断地模仿
朋友和亲人的举止和行为。然而,我们如何模仿的神经基础,以及更多
重要的是,我们选择效仿谁以及为什么效仿,在很大程度上是未知的。鹦鹉提供了一种强大但尚未开发的
社会学习模式体系。鹦鹉,就像人类和非人类灵长类动物一样,生活在一种特定类型的分裂中--
Fusion的社交网络中,建立和保持友谊是健身的关键。像人类一样,它们
有选择地模仿和学习他们精心挑选的同伴的名字。在这里,我们的目标是推出鹦鹉作为
一种新的系统神经科学动物模型。在AIMS 1中,我们将记录发声运动皮质的神经活动
参与求偶互动的虎皮鹦鹉对的歌唱系统(AAC核)的输出。在这些第一批中-
从鹦鹉清醒时的神经记录来看,我们发现AAC神经元表现出运动前信号
发声(不出所料)和富有表现力的手势,如无声亲吻,摇头和
异形美容。这种发声和手势的联合神经控制,在人类Broca区观察到,但在鸣禽中观察不到
-意味着被认为是类似于鸣禽的‘歌唱系统’实际上是一个更一般的社会系统
互动。接下来,我们测试歌曲系统活动和社会行为之间的因果关系。失活
在求爱互动过程中,AAC将测试发声和手势是否/如何降低或失去协调性
(目标2.1)。在粘合对中停用AAC的额叶或后部皮质输入将测试由鸣鸟启发的
变异性和有序性取决于不同的大脑皮层通路的想法(目标2.2)。对于每一次灭活实验,
一对相互作用的鸟被概念化为一个单一的动力系统--我们将使用机器学习
引导的行为分析,以量化发声和手势如何变化(或不变化)在两个非激活
和未停用的合伙人。最后,在目标3中,我们将使用光纤光度法和
基因编码的DA传感器。先导数据验证了DA成像在鸣鸟成像中的可行性。这些
实验将首次测试用于评估奖励结果质量的DA信号是否类似
评估社会结果。求爱的动力是完美的,因为手势上的“请求”或“接吻”是
用视觉和听觉上明显的“同意”或“拒绝”信号拒绝或接受。雄性产下数百只
每天进步,并使用女性反馈来学习-提供自然的试验结构,在会话中学习,
和‘事件’,我们可以同时对齐记录的男性和女性DA信号-这可能或可能
而不是在一对人“决定”要不要结合的时候保持一致。虎皮鹦鹉的互动与人类相似
对话--一种声音和手势的来回交流,既有争议性的,也有关联性的
信号和控制声乐学习和伙伴选择。总之,这些实验将有助于建立鹦鹉
作为社会神经科学的新模型系统,并将在两年内为我们的后续R01提交做好准备。
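The Aim 3 design depends on aligning simultaneously recorded dopamine photometry signals to behaviorally defined ‘consent’ and ‘deny’ events. Below is a minimal peri-event averaging sketch in Python (again hypothetical: the function names, sampling rate, and event times are placeholders, not the project's actual code) showing how trial-averaged dF/F responses to accepted versus rejected advances could be compared.

```python
# Sketch of peri-event averaging for the Aim 3 dopamine-photometry analysis.
# Assumes a preprocessed dF/F trace and event times (seconds) for 'consent' and
# 'deny' outcomes; the sampling rate, window sizes, and data are hypothetical.
import numpy as np

def peri_event_average(dff, event_times_s, fs_hz, pre_s=2.0, post_s=4.0):
    """Cut a fixed window around each event, baseline-subtract, and average.

    dff           : 1-D dF/F photometry trace sampled at fs_hz
    event_times_s : behavioral event times in seconds
    pre_s, post_s : window before/after each event, in seconds
    Returns the trial-averaged trace and the per-trial snippet matrix.
    """
    pre, post = int(pre_s * fs_hz), int(post_s * fs_hz)
    snippets = []
    for t in event_times_s:
        i = int(round(t * fs_hz))
        if i - pre < 0 or i + post > len(dff):
            continue                      # skip events too close to the recording edges
        snip = dff[i - pre:i + post]
        snippets.append(snip - snip[:pre].mean())   # subtract the pre-event baseline
    snippets = np.asarray(snippets)
    return snippets.mean(axis=0), snippets

if __name__ == "__main__":
    fs = 30.0                                        # Hz, hypothetical photometry rate
    rng = np.random.default_rng(0)
    dff = rng.normal(0.0, 0.01, int(600 * fs))       # 10 minutes of synthetic dF/F
    accepted = [30.0, 95.5, 210.2]                   # hypothetical 'consent' events (s)
    rejected = [60.1, 150.7, 400.0]                  # hypothetical 'deny' events (s)
    mean_acc, _ = peri_event_average(dff, accepted, fs)
    mean_rej, _ = peri_event_average(dff, rejected, fs)
    # comparing mean_acc vs. mean_rej is the basic accepted-vs-rejected contrast
    print(mean_acc.shape, mean_rej.shape)            # each: 2 s pre + 4 s post at 30 Hz
```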
Project Outcomes
Journal articles: 2
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0
Anterior forebrain pathway in parrots is necessary for producing learned vocalizations with individual signatures.
- DOI: 10.1016/j.cub.2023.11.014
- Publication year: 2023
- Journal: Current Biology
- Impact factor: 0
- Authors: Zhao, Zhilei; Teoh, Han Kheng; Carpenter, Julie; Nemon, Frieda; Kardon, Brian; Cohen, Itai; Goldberg, Jesse H
- Corresponding author: Goldberg, Jesse H
Other grants by Jesse Heymann Goldberg
MOTES: Micro-scale Opto-electronically Transduced Electrode Sites
- Award number: 9244414
- Fiscal year: 2016
- Amount: $689,200
- Project type:
MOTES: Micro-scale Opto-electronically Transduced Electrode Sites
- Award number: 9360613
- Fiscal year: 2016
- Amount: $689,200
- Project type:
Neural Mechanisms of Performance Evaluation During Motor Sequence Learning
- Award number: 10183339
- Fiscal year: 2015
- Amount: $689,200
- Project type:
Neural mechanisms of performance evaluation during motor sequence learning
- Award number: 9306224
- Fiscal year: 2015
- Amount: $689,200
- Project type:
Neural Mechanisms of Performance Evaluation During Motor Sequence Learning
- Award number: 10658875
- Fiscal year: 2015
- Amount: $689,200
- Project type:
Neural mechanisms of performance evaluation during motor sequence learning
- Award number: 9136884
- Fiscal year: 2015
- Amount: $689,200
- Project type:
Neural mechanisms of performance evaluation during motor sequence learning
- Award number: 9753376
- Fiscal year: 2015
- Amount: $689,200
- Project type:
Identifying pathways for motor variability in the mammalian brain
- Award number: 8955334
- Fiscal year: 2015
- Amount: $689,200
- Project type:
Neural Mechanisms of Performance Evaluation During Motor Sequence Learning
- Award number: 10437774
- Fiscal year: 2015
- Amount: $689,200
- Project type:
Basal Ganglia-Thalamic Interactions in Behaving Songbirds During Learning
- Award number: 8711569
- Fiscal year: 2010
- Amount: $689,200
- Project type:
Similar Overseas Grants
Nonlinear Acoustics for the conditioning monitoring of Aerospace structures (NACMAS)
- Award number: 10078324
- Fiscal year: 2023
- Amount: $689,200
- Project type: BEIS-Funded Programmes
ORCC: Marine predator and prey response to climate change: Synthesis of Acoustics, Physiology, Prey, and Habitat In a Rapidly changing Environment (SAPPHIRE)
- Award number: 2308300
- Fiscal year: 2023
- Amount: $689,200
- Project type: Continuing Grant
University of Salford (The) and KP Acoustics Group Limited KTP 22_23 R1
- Award number: 10033989
- Fiscal year: 2023
- Amount: $689,200
- Project type: Knowledge Transfer Partnership
User-controllable and Physics-informed Neural Acoustics Fields for Multichannel Audio Rendering and Analysis in Mixed Reality Application
- Award number: 23K16913
- Fiscal year: 2023
- Amount: $689,200
- Project type: Grant-in-Aid for Early-Career Scientists
Combined radiation acoustics and ultrasound imaging for real-time guidance in radiotherapy
- Award number: 10582051
- Fiscal year: 2023
- Amount: $689,200
- Project type:
Comprehensive assessment of speech physiology and acoustics in Parkinson's disease progression
- Award number: 10602958
- Fiscal year: 2023
- Amount: $689,200
- Project type:
The acoustics of climate change - long-term observations in the arctic oceans
- Award number: 2889921
- Fiscal year: 2023
- Amount: $689,200
- Project type: Studentship
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Award number: 2343847
- Fiscal year: 2023
- Amount: $689,200
- Project type: Standard Grant
Flow Physics and Vortex-Induced Acoustics in Bio-Inspired Collective Locomotion
- Award number: DGECR-2022-00019
- Fiscal year: 2022
- Amount: $689,200
- Project type: Discovery Launch Supplement
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Award number: 2141275
- Fiscal year: 2022
- Amount: $689,200
- Project type: Standard Grant