CI-ADDO-EN: Collaborative Research: 3D Dynamic Multimodal Spontaneous Emotion Corpus for Automated Facial Behavior and Emotion Analysis
Basic Information
- Award Number: 1205664
- Principal Investigator:
- Amount: $306,800
- Host Institution:
- Host Institution Country: United States
- Project Category: Standard Grant
- Fiscal Year: 2012
- Funding Country: United States
- Project Period: 2012-09-01 to 2016-08-31
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
Emotion is the complex psycho-physiological experience of an individual's state of mind. It affects every aspect of rational thinking, learning, decision making, and psychomotor ability. Emotion modeling and recognition plays an increasingly important role in many research areas, including human-computer interaction, robotics, artificial intelligence, and advanced technologies for education and learning. Current emotion-related research, however, is impeded by the lack of a large spontaneous emotion data corpus. With few exceptions, existing emotion databases are limited in size, sensor modalities, labeling, and elicitation methods. Most rely on posed emotions, which may bear little resemblance to what occurs in the contexts where emotions are actually triggered. In this project the PIs will address these limitations by developing a multimodal and multidimensional corpus of dynamic spontaneous emotion and facial expression data, with labels and feature derivatives, from approximately 200 subjects of different ethnicities and ages, using sensors of different modalities. To these ends, they will acquire a 6-camera wide-range 3D dynamic imaging system to capture ultra-high-resolution facial geometric data and video texture data, which will allow them to examine fine structural changes as well as the precise time course of spontaneous expressions. Video data will be accompanied by other sensor modalities, including thermal, audio, and physiological sensors. An IR thermal camera will allow real-time recording of facial temperature, while an audio sensor will record the voices of both subject and experimenter. The physiological sensor will measure skin conductivity and related physiological signals. Tools and methods to facilitate and simplify use of the dataset will be provided. The entire dataset, including metadata and associated software, will be stored in a public repository and made available for research in computer vision, affective computing, human-computer interaction, and related fields.
Intellectual Merit
This research will involve construction of a corpus of spontaneous, multidimensional, and multimodal emotion and facial expression data that is significantly larger than any currently in existence. To elicit natural and spontaneous emotions from subjects, the PIs will employ five approaches: physical experience, film clips, a cold pressor task, relived-memories tasks, and interviews. The database will employ sensors of different modalities, including high-resolution 2D/3D video cameras, infrared thermal cameras, audio sensors, and physiological sensors. The video data will be labeled according to a number of categories, including AU labeling and emotion labeling from self-report and the perceptual judgments of naïve observers. Comprehensive emotion labeling will include dimensional approaches (e.g., valence, arousal), discrete emotions (e.g., joy, anger, smile controls), anatomic methods (e.g., FACS), and paralinguistic signaling (e.g., back-channeling). Additional features will be derived from the raw data, including 2D/3D facial feature points, head pose, and audio parameters.
Broader Impact
Project outcomes will immediately benefit researchers in computer vision and in emotion modeling and recognition, because the database will allow them to train and validate their facial expression and emotion recognition algorithms. The new corpus will facilitate the study of multimodal fusion of audio, video, geometric, thermal, and physiological responses. It will contribute to a comprehensive understanding of the mechanisms underlying human behavior, and will allow enhancements to human-computer interaction (e.g., through emotion-sensitive and socially intelligent interfaces), robotics, artificial intelligence, and cognitive science. The work will likely also significantly impact research in diverse other fields such as psychology, biometrics, medicine/life science, law enforcement, education, entertainment, and social science.
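The abstract implies that each recording session bundles several synchronized raw modalities (2D/3D video, thermal, audio, physiology) with multiple layers of labels: frame-level AU annotations, dimensional self-reports (valence, arousal), and observer-judged discrete emotions, plus derived features. As a rough illustration of how such a session record might be organized for downstream analysis tools, here is a minimal Python sketch; all class names, file paths, label scales, and the `frames_with_au` helper are hypothetical assumptions for illustration, not the corpus's actual format or released software.

```python
# Hypothetical sketch of one multimodal recording session in a spontaneous-emotion
# corpus of the kind described above. Field names and file layouts are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple


@dataclass
class SessionRecord:
    """One elicitation task performed by one subject, with all sensor modalities."""
    subject_id: str                  # anonymized subject identifier
    task: str                        # elicitation method, e.g. "cold_pressor"
    video_2d_path: str               # 2D texture video
    mesh_3d_path: str                # dynamic 3D facial geometry sequence
    thermal_path: str                # IR thermal recording
    audio_path: str                  # subject/experimenter audio
    physiology_path: str             # skin conductance and related signals
    # Frame-level FACS annotation: AU number -> list of (onset, offset) frame spans.
    au_events: Dict[int, List[Tuple[int, int]]] = field(default_factory=dict)
    # Session-level dimensional self-report labels (assumed continuous scale).
    valence: Optional[float] = None
    arousal: Optional[float] = None
    # Discrete emotion judged by naïve observers, e.g. "joy", "anger".
    observer_emotion: Optional[str] = None


def frames_with_au(record: SessionRecord, au: int) -> List[int]:
    """Expand (onset, offset) AU spans into the individual frames where the AU is active."""
    frames: List[int] = []
    for onset, offset in record.au_events.get(au, []):
        frames.extend(range(onset, offset + 1))
    return frames


if __name__ == "__main__":
    # Example record with made-up paths and labels.
    rec = SessionRecord(
        subject_id="F001",
        task="cold_pressor",
        video_2d_path="F001/T8/video.avi",
        mesh_3d_path="F001/T8/mesh/",
        thermal_path="F001/T8/thermal.seq",
        audio_path="F001/T8/audio.wav",
        physiology_path="F001/T8/physio.csv",
        au_events={4: [(120, 180)], 12: [(300, 420)]},  # AU4 brow lowerer, AU12 lip corner puller
        valence=-0.4,
        arousal=0.7,
        observer_emotion="pain",
    )
    print(frames_with_au(rec, 12)[:5])  # first frames where AU12 is annotated
```

Keeping frame-level AU events separate from session-level dimensional and discrete labels mirrors the mixed labeling scheme described in the abstract, and makes it straightforward to align AU-active frames with the synchronized thermal, audio, and physiological streams when studying multimodal fusion.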
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Publications by Lijun Yin
Diverse mechanical properties and microstructures of sorghum bran arabinoxylans/soy protein isolate mixed gels by duo-induction of peroxidase and calcium ions
- DOI: 10.1016/j.foodhyd.2020.105946
- Publication Date: 2020-10
- Journal:
- Impact Factor: 10.7
- Authors: Jinxin Yan; Boya Zhang; Feifei Wu; Wenjia Yan; Peng Lv; Madhav Yadav; Xin Jia; Lijun Yin
- Corresponding Author: Lijun Yin
Research on the Relationship between Land Finance and Housing Price in Urbanization Process: An Empirical Analysis of 182 Cities in China based on Threshold Panel Models,
- DOI:
- Publication Date: 2021
- Journal:
- Impact Factor: 0
- Authors: Meiting Hu; Jichang Dong; Lijun Yin; Chun Meng; Xiuting Li
- Corresponding Author: Xiuting Li
The use of W/O/W controlled-release coagulants to improve the quality of bittern-solidified tofu
- DOI: 10.1016/j.foodhyd.2013.08.002
- Publication Date: 2014-03
- Journal:
- Impact Factor: 10.7
- Authors: Yongqiang Cheng; Eizo Tatsumi; Masayoshi Saito; Lijun Yin
- Corresponding Author: Lijun Yin
Synthesis, characterization and application of sugar beet pectin-ferulic acid conjugates in the study of lipid, DNA and protein oxidation
- DOI: 10.1016/j.ijbiomac.2025.141358
- Publication Date: 2025-05-01
- Journal:
- Impact Factor: 8.500
- Authors: Xudong Yang; Kun Wang; Yuang Zhong; Weining Cui; Xin Jia; Lijun Yin
- Corresponding Author: Lijun Yin
Effects of Fermentation Temperature on the Content and Composition of Isoflavones and β-Glucosidase Activity in Sufu
- DOI: 10.1271/bbb.69.267
- Publication Date: 2005
- Journal:
- Impact Factor: 0
- Authors: Lijun Yin; Li; Huan Liu; M. Saito; E. Tatsumi
- Corresponding Author: E. Tatsumi
Other Grants by Lijun Yin
CI-SUSTAIN: Collaborative Research: Extending a Large Multimodal Corpus of Spontaneous Behavior for Automated Emotion Analysis
- Award Number: 1629898
- Fiscal Year: 2016
- Funding Amount: $306,800
- Project Category: Standard Grant
EAGER: Spontaneous 4D-Facial Expression Corpus for Automated Facial Image Analysis
- Award Number: 1051103
- Fiscal Year: 2010
- Funding Amount: $306,800
- Project Category: Standard Grant
SGER: Analyzing Facial Expression in Three Dimensional Space
- Award Number: 0541044
- Fiscal Year: 2005
- Funding Amount: $306,800
- Project Category: Standard Grant
SGER: Developing a high-definition face modeling system for recognition and generation of face and face expressions
- Award Number: 0414029
- Fiscal Year: 2004
- Funding Amount: $306,800
- Project Category: Standard Grant
Similar Overseas Grants
Collaborative Research: CI-ADDO-EN: Research Repository for Model-Driven Software Development (REMODD)
- Award Number: 1305381
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
Collaborative Research: CI-ADDO-EN: Making Internet Routing Data Accessible To All
- Award Number: 1305404
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
CI-ADDO-EN: Collaborative Research: Enhancing the srcML Infrastructure: A Mixed-Language Exploration, Analysis, and Manipulation Framework to Support Software Evolution
- Award Number: 1305292
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
Collaborative Research: CI-ADDO-EN: Making Internet Routing Data Accessible To All
- Award Number: 1305218
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
CI-ADDO-EN: Smart Home in a Box: Creating a Large Scale, Long Term Repository for Smart Environment Technologies
- Award Number: 1262814
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
CI-ADDO-EN: Infrastructure for the RF-Powered Computing Community
- Award Number: 1305072
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
CRI-CI-ADDO-EN: National File System Trace Repository
- Award Number: 1305360
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
Collaborative Research: CI-ADDO-EN: Research Repository for Model-Driven Software Development (REMODD)
- Award Number: 1305358
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
CI-ADDO-EN: Collaborative Research: Enhancing the srcML Infrastructure: A Mixed-Language Exploration, Analysis, and Manipulation Framework to Support Software Evolution
- Award Number: 1305217
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant
Collaborative Research: CI-ADDO-EN: Making Internet Routing Data Accessible To All
- Award Number: 1305346
- Fiscal Year: 2013
- Funding Amount: $306,800
- Project Category: Standard Grant