Cortical circuit dynamics underlying multisensory decision making
Basic Information
- Grant No.: 10721255
- Principal Investigator:
- Amount: $1,490,200
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2023
- Funding Country: United States
- Project Period: 2023-08-01 to 2026-07-31
- Project Status: Active (not yet concluded)
- Source:
- Keywords: 3-Dimensional; Acceleration; Address; Affect; Animal Model; Animals; Area; Associate Degree; Behavior; Belief; Binding; Brain; Brain region; Cells; Cognitive; Communication; Complex; Computer Models; Conflict (Psychology); Coupling; Cues; Decentralization; Decision Making; Discrimination; Disease; Environment; Esthesia; Feedback; Foundations; Future; Goals; Human; Individual; Investigation; Joints; Judgment; Kinesthesis; Link; Locomotion; Measures; Mediating; Methods; Modality; Modeling; Monkeys; Motion; Motor; Neurons; Parietal; Pattern; Perception; Population; Population Dynamics; Process; Property; Psychologist; Reaction Time; Recurrence; Reporting; Saccades; Self Direction; Self Perception; Sensory; Sensory Process; Shapes; Signal Transduction; Source; Stimulus; Structure; System; Task Performances; Testing; Time; Training; Uncertainty; Visual; Visuospatial; analytical tool; area MST; behavior prediction; cognitive ability; cognitive function; density; experimental study; improved; insight; lateral intraparietal area; multisensory; neural; neurophysiology; novel; sensory cortex; sensory integration; spatiotemporal; success; visual-vestibular
Project Summary
To navigate and guide locomotion in a complex 3D environment, humans and animals must make countless
judgments of their direction of self-motion, or heading. Each of these is a multisensory perceptual decision,
one that achieves greater accuracy and precision by combining signals from the visual, vestibular, and
kinesthetic senses. At the same time, the brain must decide when to commit to a course of action (e.g., to
quickly change direction to avoid an obstacle), and make predictions of the likelihood of success in that
action. These features of a decision—choice accuracy, response time (RT), and confidence—have been
studied by psychologists for over a century, but primarily for only a single modality, instead of the more
natural case of integrating multiple sources of sensory evidence. Moreover, the neural basis of multisensory
integration is largely studied at the level of individual cells or brain regions, whereas nearly all perceptual and
cognitive functions depend on population-level computations and communication between areas. To address
these gaps, we trained monkeys to perform a visual-vestibular heading discrimination task which measures
choice, RT, and confidence via a post-decision wager (PDW). During performance of the task, we will record
ensemble activity simultaneously from two key nodes in the sensory cortical network representing visual and
vestibular self-motion cues (MST and PIVC, respectively), as well as one node (lateral intraparietal area, LIP) in
the downstream decision network that converts sensory evidence into a motor plan. These regions have
individually been linked to heading perception, but little is known about how their coordinated activity
patterns, observable only through population recordings, support multisensory decision making. In Aim 1 of
the proposal, we will quantify the coordinated activity across sensory neural populations and test whether the
perceptual improvement from multisensory integration depends on the strength of coupling between them,
beyond what can be explained by their activity considered independently. When visual and vestibular cues
are artificially placed in conflict, we will ask whether and how the relative precision of heading estimates
decoded from these sensory populations predicts choice and confidence, guided by predictions of a
multisensory evidence accumulation model. In Aim 2, we will extend our investigation of inter-areal
interactions to the decision stage, quantifying the strength and timing of functional coupling between LIP and
each of the two sensory areas. The relative timing of this coordinated activity can indicate feedforward versus
feedback processes, revealing how perceptual decisions evolve via recurrent loops between sensation and
degree of belief in a proposition (or commitment to a plan of action). The results will yield new insights into the
representation and readout of sensory evidence and its associated degree of (un)certainty, and will advance a
population- and circuit-level understanding of decision computations in a multisensory task.
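The two computational ideas the summary invokes, the precision benefit of optimal cue combination and a bounded evidence-accumulation account of choice, reaction time, and the post-decision wager, can be illustrated with a toy simulation. This is a minimal sketch, not the proposal's actual model: the drift scaling `k`, the noise levels, and the RT-based wager rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def combined_sigma(sigma_vis, sigma_vest):
    """Inverse-variance (statistically optimal) cue combination: the combined
    heading estimate is more precise than either single-cue estimate."""
    return np.sqrt(1.0 / (1.0 / sigma_vis**2 + 1.0 / sigma_vest**2))

def simulate_trial(heading_deg, sigma_vis=2.0, sigma_vest=3.0, k=0.4,
                   bound=1.0, dt=0.001, max_t=2.0, wager_rt_cutoff=0.7):
    """One trial of bounded evidence accumulation for a left-vs-right heading
    discrimination. Returns (choice, rt, high_bet): choice (1 = rightward),
    reaction time in seconds, and a post-decision wager (True = high bet)."""
    # Drift rate grows with heading and with the combined cue reliability.
    drift = k * heading_deg / combined_sigma(sigma_vis, sigma_vest)
    n_steps = int(max_t / dt)
    # Cumulative sum of drift plus Gaussian increments = discretized diffusion.
    dv = np.cumsum(drift * dt + rng.normal(0.0, np.sqrt(dt), n_steps))
    crossed = np.nonzero(np.abs(dv) >= bound)[0]
    i = crossed[0] if crossed.size else n_steps - 1   # bound hit, or deadline
    choice = 1 if dv[i] > 0 else 0
    rt = (i + 1) * dt
    # Crude confidence proxy: only fast bound crossings earn a high bet.
    high_bet = bool(crossed.size) and rt <= wager_rt_cutoff
    return choice, rt, high_bet
```

With these assumed parameters, `combined_sigma(2.0, 3.0)` is about 1.66, below either single-cue sigma, so the bimodal drift rate is higher and simulated choices are both more accurate and faster, mirroring the multisensory benefit the model is meant to predict.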
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other works by CHRISTOPHER R FETSCH
Similar international grants

EXCESS: The role of excess topography and peak ground acceleration on earthquake-preconditioning of landslides
- Grant No.: NE/Y000080/1
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Research Grant

Collaborative Research: FuSe: R3AP: Retunable, Reconfigurable, Racetrack-Memory Acceleration Platform
- Grant No.: 2328975
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Continuing Grant

SHINE: Origin and Evolution of Compressible Fluctuations in the Solar Wind and Their Role in Solar Wind Heating and Acceleration
- Grant No.: 2400967
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Standard Grant

Collaborative Research: FuSe: R3AP: Retunable, Reconfigurable, Racetrack-Memory Acceleration Platform
- Grant No.: 2328973
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Continuing Grant

Market Entry Acceleration of the Murb Wind Turbine into Remote Telecoms Power
- Grant No.: 10112700
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Collaborative R&D

Collaborative Research: FuSe: R3AP: Retunable, Reconfigurable, Racetrack-Memory Acceleration Platform
- Grant No.: 2328972
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Continuing Grant

Collaborative Research: A new understanding of droplet breakup: hydrodynamic instability under complex acceleration
- Grant No.: 2332916
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Standard Grant

Collaborative Research: A new understanding of droplet breakup: hydrodynamic instability under complex acceleration
- Grant No.: 2332917
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Standard Grant

Collaborative Research: FuSe: R3AP: Retunable, Reconfigurable, Racetrack-Memory Acceleration Platform
- Grant No.: 2328974
- Fiscal Year: 2024
- Amount: $1,490,200
- Category: Continuing Grant

Study of the Particle Acceleration and Transport in PWN through X-ray Spectro-polarimetry and GeV Gamma-ray Observations
- Grant No.: 23H01186
- Fiscal Year: 2023
- Amount: $1,490,200
- Category: Grant-in-Aid for Scientific Research (B)