Integration of Facial Form and Facial Motion During Face Recognition
Basic Information
- Grant number: 280741132
- Principal investigator: Dr. Katharina Dobs
- Amount: --
- Host institution:
- Host institution country: Germany
- Project type: Research Fellowships
- Fiscal year: 2015
- Funding country: Germany
- Start and end dates: 2014-12-31 to 2016-12-31
- Project status: Completed
- Source:
- Keywords:
Project Abstract
The ability to integrate information provided by the form and motion of faces is crucial for social animals like humans. Yet most of what is known about face perception comes from studies relying on static images of faces, and surprisingly little is known about how and when humans integrate form and motion to recognize a face. This gap is partly due to the previous lack of well-controlled dynamic face stimuli. Moreover, quantitative models linking brain activity to behavior are needed to study the integration of facial form and motion in the human brain. Recently, cue integration research has benefited from the application of so-called optimal cue integration models to investigate multisensory integration. Here, we plan to apply optimal cue integration models to investigate the mechanisms and the neural basis of facial form and motion integration during face recognition.

In particular, we aim to address the following research questions. First, what are the specific weights and functions that humans apply when integrating facial motion and form during face recognition? We plan to answer this question by combining well-controlled dynamic face stimuli, psychophysics, and quantitative predictions of optimal cue integration models. Second, which cortical areas are involved in the integration of facial form and motion during face recognition? To this end, in a neuroimaging study, we want to compare behavioral performance to decoded neural activity based on optimal integration theory.

The results of this project will have important implications for cognitive, computational, and neural models of face perception, which currently propose anatomically and functionally distinct neural pathways for the processing of facial form and motion. Moreover, the results will contribute to the understanding, diagnosis, and therapy of disorders involving dysfunctions of face perception, found for example in prosopagnosia or autism spectrum disorders.
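As background on the model class named in the abstract: under the standard optimal (maximum-likelihood) cue-integration account, two independent cues are combined with weights inversely proportional to their variances, and the combined estimate is predicted to be at least as reliable as either cue alone. The sketch below uses generic notation for a form cue F and a motion cue M; it illustrates the general framework and is not a formula taken from this project.

```latex
% Reliability-weighted (maximum-likelihood) combination of two independent cues,
% labelled F (facial form) and M (facial motion); \sigma^2 denotes cue variance.
\hat{s}_{FM} = w_F \hat{s}_F + w_M \hat{s}_M, \qquad
w_F = \frac{1/\sigma_F^2}{1/\sigma_F^2 + 1/\sigma_M^2}, \qquad
w_M = \frac{1/\sigma_M^2}{1/\sigma_F^2 + 1/\sigma_M^2}

% Predicted variance of the combined estimate (never worse than the better single cue):
\sigma_{FM}^2 = \frac{\sigma_F^2 \, \sigma_M^2}{\sigma_F^2 + \sigma_M^2}
  \le \min\left(\sigma_F^2, \sigma_M^2\right)
```

In psychophysical practice, the single-cue variances are estimated from form-only and motion-only discrimination thresholds, and the combined-cue threshold predicted by the second equation is compared with the observed one to test whether integration is statistically optimal.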
Project Outcomes
Journal articles (1)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other publications by Dr. Katharina Dobs
Similar Overseas Grants
FlexNIR-PD: A resource efficient UK-based production process for patented flexible Near Infrared Sensors for LIDAR, Facial recognition and high-speed data retrieval
- Grant number: 10098113
- Fiscal year: 2024
- Funding amount: --
- Project type: Collaborative R&D

Affective Computing Models: from Facial Expression to Mind-Reading
- Grant number: EP/Y03726X/1
- Fiscal year: 2024
- Funding amount: --
- Project type: Research Grant

3DFace@Home: A pilot study for robust and highly accurate facial 3D reconstruction from mobile devices for facial growth monitoring at home
- Grant number: EP/X036642/1
- Fiscal year: 2024
- Funding amount: --
- Project type: Research Grant

Affective Computing Models: from Facial Expression to Mind-Reading ("ACMod")
- Grant number: EP/Z000025/1
- Fiscal year: 2024
- Funding amount: --
- Project type: Research Grant

Reduction of dyspnea in patients with chronic respiratory diseases through facial cooling of the trigeminal nerve region (original title: 三叉神経領域のFacial Coolingによる慢性呼吸器疾患患者の呼吸困難感の軽減)
- Grant number: 24K13599
- Fiscal year: 2024
- Funding amount: --
- Project type: Grant-in-Aid for Scientific Research (C)

Implicit Neural Representations for Facial Animation
- Grant number: 2889954
- Fiscal year: 2023
- Funding amount: --
- Project type: Studentship

Collaborative Research: CCSS: Continuous Facial Sensing and 3D Reconstruction via Single-ear Wearable Biosensors
- Grant number: 2401415
- Fiscal year: 2023
- Funding amount: --
- Project type: Standard Grant

Examination of the psychophysiological mechanism of facial skin blood flow in emotion processing and its clinical application
- Grant number: 22KJ2717
- Fiscal year: 2023
- Funding amount: --
- Project type: Grant-in-Aid for JSPS Fellows

Interdisciplinary perspectives on oral and facial pain and headache: unravelling the complexities for improved understanding, prevention, and management
- Grant number: 487930
- Fiscal year: 2023
- Funding amount: --
- Project type: Miscellaneous Programs

Digital humanities research on facial expression and emotion recognition in the illustrated books in the German Enlightenment period
- Grant number: 23K00093
- Fiscal year: 2023
- Funding amount: --
- Project type: Grant-in-Aid for Scientific Research (C)