CAREER: Multimodal and Multialgorithm Facial Activity Understanding by Audiovisual Information Fusion
Basic Information
- Award Number: 1149787
- Principal Investigator:
- Amount: $443,800
- Host Institution:
- Host Institution Country: United States
- Award Type: Standard Grant
- Fiscal Year: 2012
- Funding Country: United States
- Project Period: 2012-03-01 to 2018-09-30
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
This project develops a unified multimodal and multialgorithm fusion framework to recognize facial action units, which describe complex and rich facial behaviors. Information from the voice is combined with visual observations to effectively improve facial activity understanding, since voice and facial activity are intrinsically correlated. The developed framework systematically captures the inherent interactions between the visual and audio channels in a global context of human perception of facial behavior; it is these coordinated and consistent interactions that produce a meaningful facial display. Advanced machine learning techniques are developed to integrate these relationships, together with the uncertainties associated with the various visual and audio measurements, into the fusion framework to achieve a robust and accurate understanding of facial activity. The research work from this project fosters computer vision and machine learning technologies with applications across a wide range of fields, from psychiatry to human-computer interaction. The new audiovisual emotional database constructed in this research facilitates benchmark evaluations and promotes new research directions, especially in human behavior analysis. An integration of research and education promotes cutting-edge training on human-computer interaction for K-12, undergraduate, and graduate students, and especially encourages the participation of women in engineering and computing.
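The abstract does not specify how the fusion itself is carried out. As a rough, hedged illustration of the kind of uncertainty-aware audiovisual fusion it describes, the sketch below combines per-action-unit probability estimates from a hypothetical visual classifier and a hypothetical audio classifier, weighting each modality by the inverse of its measurement variance. The function name, inputs, and weighting scheme are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def fuse_action_unit_scores(visual_probs, visual_vars, audio_probs, audio_vars):
    """Illustrative inverse-variance fusion of per-action-unit probabilities.

    visual_probs, audio_probs: arrays of shape (n_aus,) with per-AU activation
        probabilities from hypothetical visual and audio classifiers.
    visual_vars, audio_vars: per-AU measurement variances, a simple stand-in
        for the "uncertainties" mentioned in the project abstract.
    """
    visual_probs = np.asarray(visual_probs, dtype=float)
    audio_probs = np.asarray(audio_probs, dtype=float)
    # Inverse-variance weights: a noisier modality contributes less to the fused score.
    w_visual = 1.0 / np.asarray(visual_vars, dtype=float)
    w_audio = 1.0 / np.asarray(audio_vars, dtype=float)
    return (w_visual * visual_probs + w_audio * audio_probs) / (w_visual + w_audio)

# Example with 3 action units; the audio channel is assumed more reliable for AU 3.
visual = [0.80, 0.30, 0.55]
audio = [0.70, 0.35, 0.90]
fused = fuse_action_unit_scores(visual, [0.05, 0.05, 0.20], audio, [0.10, 0.10, 0.02])
print(fused.round(3))
```

A learned fusion model could replace the fixed inverse-variance weights, but the basic idea of down-weighting the less certain modality per action unit stays the same.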
Project Outcomes
Journal Articles (4)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Improving Facial Expression Analysis using Histograms of Log-Transformed Nonnegative Sparse Representation with a Spatial Pyramid Structure
- DOI: 10.1109/fg.2013.6553774
- Publication Date: 2013
- Journal:
- Impact Factor: 0
- Authors: Ping Liu, Shizhong Han
- Corresponding Authors: Ping Liu, Shizhong Han
Facial Grid Transformation: A Novel Face Registration Approach for Improving Facial Action Unit Recognition
- DOI: 10.1109/icip.2014.7025283
- Publication Date: 2014
- Journal:
- Impact Factor: 0
- Authors: Han, Shizhong; Meng, Zibo; Liu, Ping; Tong, Yan
- Corresponding Author: Tong, Yan
Feature Disentangling Machine - A Novel Approach of Feature Selection and Disentangling in Facial Expression Analysis
- DOI: 10.1007/978-3-319-10593-2_11
- Publication Date: 2014-09
- Journal:
- Impact Factor: 0
- Authors: Ping Liu; Joey Tianyi Zhou; I. Tsang; Zibo Meng; Shizhong Han; Yan Tong
- Corresponding Authors: Ping Liu; Joey Tianyi Zhou; I. Tsang; Zibo Meng; Shizhong Han; Yan Tong
Other Publications by Yan Tong
Functions, motives and barriers of homestead vegetable production in rural areas in ageing China
- DOI: 10.1016/j.jrurstud.2019.02.007
- Publication Date: 2019-04
- Journal:
- Impact Factor: 5.1
- Authors: Fan Liangxin; Dang Xiaohu; Yan Tong; Ruihua Lia
- Corresponding Author: Ruihua Lia
Thermodynamic analysis of subunit interactions in Escherichia coli molybdopterin synthase.
- DOI:
- Publication Date: 2005
- Journal:
- Impact Factor: 2.9
- Authors: Yan Tong; M. Wuebbens; K. Rajagopalan; M. Fitzgerald
- Corresponding Author: M. Fitzgerald
Facial Contour Labeling via Congealing
- DOI: 10.1007/978-3-642-15549-9_26
- Publication Date: 2010
- Journal:
- Impact Factor: 1
- Authors: Xiaoming Liu; Yan Tong; F. Wheeler; P. Tu
- Corresponding Author: P. Tu
An arbitrary Lagrangian–Eulerian formulation for the Reynolds equation considering the JFO boundary condition
- DOI: 10.1002/pamm.202300216
- Publication Date: 2023
- Journal:
- Impact Factor: 0
- Authors: Yan Tong; Michael Müller; G. Ostermeyer
- Corresponding Author: G. Ostermeyer
Bubble dynamics and their effects on interfacial heat transfer in one single microchannel
- DOI: 10.1016/j.ijheatmasstransfer.2023.125060
- Publication Date: 2024
- Journal:
- Impact Factor: 5.2
- Authors: Qun Han; Jiaxuan Ma; Ahmed; Wei Chang; Chen Li; Yan Tong; Wenming Li
- Corresponding Author: Wenming Li
Other Grants by Yan Tong
WORKSHOP: Doctoral Consortium at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2018)
- Award Number: 1829167
- Fiscal Year: 2018
- Amount: $443,800
- Award Type: Standard Grant
WORKSHOP: Doctoral Consortium at the IEEE FG 2017 Conference
- Award Number: 1733800
- Fiscal Year: 2017
- Amount: $443,800
- Award Type: Standard Grant
FG 2013 Doctoral Consortium Proposal for Travel Support for Graduate Students
- Award Number: 1331619
- Fiscal Year: 2013
- Amount: $443,800
- Award Type: Standard Grant
Similar International Grants
Where Gesture Meets Grammar: Crosslinguistic Multimodal Communication
- Award Number: DP240102369
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Discovery Projects
Exploring the Mechanisms of Multimodal Metaphor Creation in Japanese Children
- Award Number: 24K16041
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Grant-in-Aid for Early-Career Scientists
HoloSurge: Multimodal 3D Holographic tool and real-time Guidance System with point-of-care diagnostics for surgical planning and interventions on liver and pancreatic cancers
- Award Number: 10103131
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: EU-Funded
ZooCELL: Tracing the evolution of sensory cell types in animal diversity: multidisciplinary training in 3D cellular reconstruction, multimodal data ..
- Award Number: EP/Y037049/1
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Research Grant
Tracing the evolution of sensory cell types in animal diversity: multidisciplinary training in 3D cellular reconstruction, multimodal data analysis
- Award Number: EP/Y037081/1
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Research Grant
mLMT: Multimodal Large Machine Translation Model
- Award Number: 24K20841
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Grant-in-Aid for Early-Career Scientists
Next Generation Tools For Genome-Centric Multimodal Data Integration In Personalised Cardiovascular Medicine
- Award Number: 10104323
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: EU-Funded
Integrated multimodal microscopy facility for single molecule analysis
- Award Number: LE240100086
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Linkage Infrastructure, Equipment and Facilities
Towards Evolvable and Sustainable Multimodal Machine Learning
- Award Number: DE240100105
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Discovery Early Career Researcher Award
Class-Balanced Contrastive Learning for Multimodal Recognition
- Award Number: 24K20831
- Fiscal Year: 2024
- Amount: $443,800
- Award Type: Grant-in-Aid for Early-Career Scientists