CI-SUSTAIN: Collaborative Research: Extending a Large Multimodal Corpus of Spontaneous Behavior for Automated Emotion Analysis


Basic Information

  • Award Number:
    1629856
  • Principal Investigator:
  • Amount:
    $215,400
  • Host Institution:
  • Host Institution Country:
    United States
  • Grant Type:
    Standard Grant
  • Fiscal Year:
    2016
  • Funding Country:
    United States
  • Project Period:
    2016-09-01 to 2022-08-31
  • Project Status:
    Completed

Project Abstract

This project will extend and sustain a widely used data infrastructure for studying human emotion, hosted at the lead investigator's university and available to the research community. The first two versions of the dataset (BP4D and BP4D+) contain videos of people reacting to varied emotion-eliciting situations, their self-reported emotion, and expert annotations of their facial expression. Version 1, BP4D (n=41), has been used by over 100 research groups and supported a successful community competition around recognizing emotion. The second version (BP4D+) adds participants (n=140), thermal imaging, and measures of peripheral physiology. The current project greatly broadens and extends this corpus to produce a new dataset (BP4D++) that enables deep-learning approaches, increases generalizability, and builds research infrastructure and community in computer and behavioral science. The collaborators will (1) increase participant diversity; (2) add videos of pairs of people interacting to the current mix of individual and interviewer-mediated video; (3) increase the number of participants to meet the demands of recent advances in "big data" approaches to machine learning; and (4) expand the size and scope of annotations in the videos. They will also involve the community through an oversight and coordinating consortium that includes researchers in computer vision, biometrics, robotics, and cognitive and behavioral science. The consortium will be composed of special interest groups that focus on various aspects of the corpus, including groups responsible for completing the needed annotations, generating meta-data, and expanding the database application scope. Having an infrastructure to support emotion recognition research matters because computer systems that interact with people (such as phone assistants or characters in virtual reality environments) will be more useful if they react appropriately to what people are doing, thinking, and feeling.
The team will triple the number of participants in the combined corpora to 540. They will develop a dyadic interaction task and capture data from 100 interacting dyads to support dynamic modeling of interpersonal influence across expressive behavior and physiology, as well as analysis of emotional synchrony. They will increase the density of facial annotations to about 15 million frames in total, making the database large enough to support deep-learning approaches to multimodal emotion detection. These annotations will be accomplished through a hybrid approach that combines expert coding using the Facial Action Coding System, automated face analysis, and crowdsourcing with expert input from the research community. Finally, the recorded data will be augmented with a wide range of meta-data derived from 2D videos, 3D videos, thermal videos, and physiological signals. To ensure the community is involved in sustaining the infrastructure, in addition to the governance consortium described above, the investigators will involve the community in jointly building APIs for adding meta-data and annotations, as well as tools for submitting and evaluating new recognition algorithms, and will then organize community-wide competitions using those tools. The research team will also reach out to new research communities around health computing, biometrics, and affective computing to widen the utility of the enhanced infrastructure, grow the community of expert annotators through training workshops, and build an educational community around the infrastructure that facilitates the development and sharing of course materials that use it. In the long term, the infrastructure will be funded through a combination of commercial licensing and support from the lead university's system administration group.

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)


Other Publications by Qiang Ji

Oil financialisation and volatility forecast: Evidence from multidimensional predictors
  • DOI:
    10.1002/for.2577
  • Published:
    2019-09
  • Journal:
  • Impact Factor:
    3.4
  • Authors:
    Yan-ran Ma; Qiang Ji; Jiaofeng Pan
  • Corresponding Author:
    Jiaofeng Pan
An ecological network analysis of the structure, development and sustainability of China’s natural gas supply system security
  • DOI:
    10.1016/j.ecolind.2016.09.051
  • Published:
    2017-02
  • Journal:
  • Impact Factor:
    6.9
  • Authors:
    Faheemullah Shaikh; Qiang Ji; Ying Fan
  • Corresponding Author:
    Ying Fan
Improving Face Recognition by Online Image Alignment
Exploring Domain Knowledge for Facial Expression-Assisted Action Unit Activation Recognition
Forecasting portfolio variance: a new decomposition approach
  • DOI:
    10.1007/s10479-023-05546-5
  • Published:
    2023
  • Journal:
  • Impact Factor:
    4.8
  • Authors:
    Bo Yu; Dayong Zhang; Qiang Ji
  • Corresponding Author:
    Qiang Ji


Other Grants by Qiang Ji

EAGER: Deep Causal Representation Learning for Generalizable Visual Understanding
  • Award Number:
    2236026
  • Fiscal Year:
    2022
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
Affect-Based Video Retrieval
  • Award Number:
    1539012
  • Fiscal Year:
    2015
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
WORKSHOP: Doctoral Consortium at the IEEE ACII 2015 Conference
  • Award Number:
    1544421
  • Fiscal Year:
    2015
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
EAGER: Combining Knowledge with Data for Generalizable and Robust Visual Learning
  • Award Number:
    1145152
  • Fiscal Year:
    2011
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
Automated Alignment and Segmentation for Electron Tomography
  • Award Number:
    0241182
  • Fiscal Year:
    2003
  • Amount:
    $215,400
  • Grant Type:
    Continuing Grant

Similar International Grants

CRI: CI-SUSTAIN: Collaborative Research: Sustaining Lemur Project Resources for the Long-Term
  • Award Number:
    1822986
  • Fiscal Year:
    2018
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
CRI: CI-SUSTAIN: Collaborative Research: CiteSeerX: Toward Sustainable Support of Scholarly Big Data
  • Award Number:
    1823288
  • Fiscal Year:
    2018
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
CRI: CI-SUSTAIN: Collaborative Research: CiteSeerX: Toward Sustainable Support of Scholarly Big Data
  • Award Number:
    1853919
  • Fiscal Year:
    2018
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
CRI: CI-SUSTAIN: Collaborative Research: CiteSeerX: Toward Sustainable Support of Scholarly Big Data
  • Award Number:
    1823292
  • Fiscal Year:
    2018
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
CRI: CI-SUSTAIN: Collaborative Research: Sustaining Lemur Project Resources for the Long-Term
  • Award Number:
    1822975
  • Fiscal Year:
    2018
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
Collaborative Research: CI-SUSTAIN: StarExec: Cross-Community Infrastructure for Logic Solving
  • Award Number:
    1730419
  • Fiscal Year:
    2017
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
Collaborative Research: CI-SUSTAIN: National File System Trace Repository
  • Award Number:
    1730726
  • Fiscal Year:
    2017
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
Collaborative Research: CI-SUSTAIN: National File System Trace Repository
  • Award Number:
    1729939
  • Fiscal Year:
    2017
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
Collaborative Research: CI-SUSTAIN: StarExec: Cross-Community Infrastructure for Logic Solving
  • Award Number:
    1729603
  • Fiscal Year:
    2017
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant
CI-SUSTAIN: Collaborative Research: Extending a Large Multimodal Corpus of Spontaneous Behavior for Automated Emotion Analysis
  • Award Number:
    1629716
  • Fiscal Year:
    2016
  • Amount:
    $215,400
  • Grant Type:
    Standard Grant