CRCNS: Neural Basis of Planning
Basic Information
- Grant Number: 9762221
- Principal Investigator: DAEYEOL LEE
- Amount: $94,300
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2018
- Funding Country: United States
- Project Period: 2018-08-10 to 2019-07-01
- Project Status: Completed
- Source:
- Keywords: Algorithms, Animal Structures, Animals, Arbitration, Behavior, Behavioral, Brain, Caliber, Competence, Complex, Computers, Decision Making, Decision Trees, Economics, Environment, Evaluation, Exhibits, Functional Magnetic Resonance Imaging, Future, Goals, Human, Impairment, Individual, Knowledge, Lateral, Learning, Macaca mulatta, Machine Learning, Medial, Mental Depression, Mental disorders, Modeling, Monitor, Monkeys, Neurons, Neurosciences, Outcome, Participant, Pattern, Physiological, Play, Positioning Attribute, Prefrontal Cortex, Primates, Probability, Psyche structure, Psychological reinforcement, Pupil, Research, Rewards, Role, Schizophrenia, Series, Signal Transduction, Speed, Strategic Planning, Testing, Training, Trees, Update, base, expectation, experimental study, fictional works, improved, learning algorithm, neuromechanism, neurophysiology, nonhuman primate, predictive modeling, relating to nervous system, response, simulation, source guides
Project Summary
Humans and other animals can choose their actions using multiple learning algorithms and decision-making strategies. For example, habitual behaviors adapted to a stable environment can be selected using so-called model-free reinforcement learning algorithms, in which the value of each action is incrementally updated according to the amount of unexpected reward. The neural mechanisms underlying this type of reinforcement learning have been studied intensively. By contrast, how the brain uses the animal's knowledge of its environment to plan sequential actions with a model-based reinforcement learning algorithm remains unexplored. In this application, PIs with complementary expertise will investigate how different subdivisions of the primate prefrontal cortex contribute to the evaluation and arbitration of different learning algorithms during strategic planning, using a sequential game referred to as "4-in-a-row". Previous studies have shown that, with training, humans improve their competence in this game by gradually switching from model-free reinforcement learning toward model-based reinforcement learning in the form of tree search. In the first set of experiments, we will train non-human primates to play the 4-in-a-row game against a computer opponent. We predict that the complexity of the strategic planning, and opponent moves that violate the animal's expectations, will be reflected in the speed of the animal's actions and in its pupil diameter. Next, we will test how the medial and lateral prefrontal cortex contribute to the evaluation and selection of different learning algorithms during strategic interaction between the animal and the computer opponent. We hypothesize that the lateral prefrontal cortex computes integrated values of alternative actions derived from multiple sources and guides the animal's choice, whereas the medial prefrontal cortex may be more involved in monitoring and resolving discrepancies among the actions favored by different learning algorithms. The results of these experiments will expand our knowledge of the neural mechanisms of complex strategic planning and unify various approaches to studying naturalistic behaviors. By taking advantage of recent advances in machine learning and decision neuroscience, the proposed studies will elucidate how multiple learning algorithms are simultaneously implemented and coordinated via specific patterns of activity in the prefrontal cortex. The results of these studies will transform the behavioral and analytical paradigms used to study high-order planning and its neural underpinnings in humans and animals.
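To make the contrast between the two algorithm families concrete, the following is a minimal Python sketch, not the proposal's actual models: a generic delta-rule update for model-free learning and a depth-limited negamax search for model-based evaluation. All names here (`model_free_update`, `tree_search_value`, `moves`, `step`, `heuristic`) are illustrative assumptions, and the toy countdown game stands in for a sequential game like 4-in-a-row.

```python
def model_free_update(q_values, action, reward, learning_rate=0.1):
    """Delta-rule update: nudge the cached value of `action` toward the
    obtained reward in proportion to the reward prediction error
    (the "unexpected reward" described in the summary)."""
    prediction_error = reward - q_values[action]
    q_values[action] += learning_rate * prediction_error
    return q_values


def tree_search_value(state, moves, step, heuristic, depth):
    """Model-based evaluation via depth-limited negamax tree search.
    `moves(state)` lists legal actions, `step(state, move)` returns the
    successor state, and `heuristic(state)` scores a position from the
    perspective of the player to move (hypothetical interfaces)."""
    legal = moves(state)
    if depth == 0 or not legal:
        return heuristic(state)
    # A state is worth its best move; the opponent's gain is our loss.
    return max(-tree_search_value(step(state, m), moves, step, heuristic, depth - 1)
               for m in legal)


if __name__ == "__main__":
    # Toy stand-in for a sequential game: subtract 1 or 2 from a counter;
    # whoever takes the last item wins. (4-in-a-row itself would need a
    # board representation, but the search logic is identical.)
    moves = lambda n: [m for m in (1, 2) if m <= n]
    step = lambda n, m: n - m
    heuristic = lambda n: -1.0 if n == 0 else 0.0  # empty pile: previous player won

    q = {1: 0.0, 2: 0.0}                     # model-free cache over first moves
    q = model_free_update(q, action=1, reward=1.0)
    print("model-free values:", q)           # {1: 0.1, 2: 0.0}

    for m in moves(6):
        v = -tree_search_value(step(6, m), moves, step, heuristic, depth=6)
        print(f"model-based value of taking {m} from 6: {v}")
```

An arbitration mechanism of the kind hypothesized here for the medial prefrontal cortex might, for example, compare the actions favored by these two systems and weight each by its reliability; the sketch deliberately leaves that choice open.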
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)