Adaptive Facial Deformable Models for Tracking (ADAManT)


Basic Information

  • Grant Number:
    EP/L026813/1
  • Principal Investigator:
  • Amount:
    $124,900
  • Host Institution:
  • Host Institution Country:
    United Kingdom
  • Project Category:
    Research Grant
  • Fiscal Year:
    2014
  • Funding Country:
    United Kingdom
  • Duration:
    2014 to (no data)
  • Project Status:
    Completed

Project Abstract

We propose to develop methodologies for the automatic construction of person-specific facial deformable models for robust tracking of facial motion in unconstrained videos (recorded 'in-the-wild'). The tools are expected to work well for data recorded by a device as cheap as a webcam and under almost arbitrary recording conditions. The technology developed in the project is expected to have a substantial impact on many different applications, including, but not limited to, biometrics (face recognition), Human Computer Interaction (HCI) systems, analysis and indexing of videos using facial information (e.g., on YouTube), facial motion capture in the games and film industries, and the creation of virtual avatars, to name a few.

The novelty of the ADAManT technology is multi-faceted. We propose the first robust, discriminative deformable facial models that can be customised incrementally, automatically tailoring themselves to a person's face from image sequences captured under uncontrolled recording conditions (both indoors and outdoors). We also propose to build and publicly release the first database of 'in-the-wild' facial videos annotated with facial landmarks. Finally, we aim to use this database as the basis of the first competition on facial landmark tracking 'in-the-wild', which will run as a satellite workshop of a top vision venue (such as ICCV 2015). As a proof of concept, and with a focus on novel applications, the ADAManT technology will be applied to (1) facial landmark tracking for machine analysis of behaviour in response to product adverts watched by people in the comfort of their home (indoors), and (2) facial landmark tracking for automatic face verification using videos recorded by mobile devices (outdoors).

In an increasingly global economy and an ever-ubiquitous digital age, the market can change rapidly. As stipulated by the UK Research Councils' Digital Economy Theme, realising substantial transformational impact on how new business models are created, and taking advantage of the digital world, is one of the main challenges. As the human face is at the heart of many scientific disciplines and business models, the ADAManT project provides technology that can reshape established business models to become more efficient, as well as create new ones. Within EPSRC's ICT priorities, our research is highly relevant to autonomous systems and robotics, since it enables the development of robots capable of understanding human behaviour in unconstrained environments (e.g., the design of robot companions, robots as tourist guides, etc.).
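The abstract describes deformable facial models that adapt incrementally to a specific person from tracked video frames. The sketch below is not the project's algorithm; it is a minimal, self-contained illustration of the general idea of incremental person-specific adaptation, in which a generic mean landmark shape is blended, frame by frame, towards the landmark configurations observed for one individual. All names and parameters (IncrementalPersonSpecificShapeModel, learning_rate, the 68-point shape, the placeholder tracker output) are illustrative assumptions, not part of ADAManT.

```python
import numpy as np


class IncrementalPersonSpecificShapeModel:
    """Toy sketch of incremental person-specific adaptation (not the
    ADAManT algorithm): a generic mean landmark shape is blended, frame
    by frame, towards the shapes observed for one person."""

    def __init__(self, generic_mean_shape, learning_rate=0.05):
        # generic_mean_shape: (num_landmarks, 2) array from a generic model.
        self.mean_shape = np.asarray(generic_mean_shape, dtype=float).copy()
        self.learning_rate = learning_rate

    @staticmethod
    def _normalise(shape):
        # Remove translation and scale so shapes from different frames are
        # comparable (rotation is ignored in this toy example).
        centred = shape - shape.mean(axis=0)
        scale = np.linalg.norm(centred)
        return centred / scale if scale > 0 else centred

    def update(self, tracked_landmarks):
        # Exponential moving average: the model drifts towards the landmark
        # configurations actually observed for this person, one frame at a time.
        observed = self._normalise(np.asarray(tracked_landmarks, dtype=float))
        current = self._normalise(self.mean_shape)
        self.mean_shape = (1.0 - self.learning_rate) * current + self.learning_rate * observed
        return self.mean_shape


if __name__ == "__main__":
    # Usage sketch: start from a generic 68-point mean shape and feed in
    # per-frame landmark estimates from any off-the-shelf face tracker.
    rng = np.random.default_rng(0)
    generic_shape = rng.random((68, 2))        # placeholder generic model
    model = IncrementalPersonSpecificShapeModel(generic_shape)
    for _ in range(10):
        frame_landmarks = rng.random((68, 2))  # placeholder tracker output
        model.update(frame_landmarks)
```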

Project Outcomes

Journal Articles (10)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Offline Deformable Face Tracking in Arbitrary Videos
Unifying holistic and Parts-Based Deformable Model fitting
  • DOI:
    10.1109/cvpr.2015.7298991
  • Publication Date:
    2015
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Alabort-I-Medina J
  • Corresponding Author:
    Alabort-I-Medina J
Active Pictorial Structures
A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild"
  • DOI:
    10.48550/arxiv.1603.06015
  • Publication Date:
    2016
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Chrysos G
  • Corresponding Author:
    Chrysos G
Statistical non-rigid ICP algorithm and its application to 3D face alignment
  • DOI:
    10.1016/j.imavis.2016.10.007
  • Publication Date:
    2017-02-01
  • Journal:
  • Impact Factor:
    4.7
  • Authors:
    Cheng, Shiyang; Marras, Ioannis; Pantic, Maja
  • Corresponding Author:
    Pantic, Maja

Other Grants by Stefanos Zafeiriou

GNOMON: Deep Generative Models in non-Euclidean Spaces for Computer Vision & Graphics
  • Grant Number:
    EP/X011364/1
  • Fiscal Year:
    2023
  • Funding Amount:
    $124,900
  • Project Category:
    Research Grant
DEFORM: Large Scale Shape Analysis of Deformable Models of Humans
  • Grant Number:
    EP/S010203/1
  • Fiscal Year:
    2019
  • Funding Amount:
    $124,900
  • Project Category:
    Fellowship

Similar Overseas Grants

FlexNIR-PD: A resource efficient UK-based production process for patented flexible Near Infrared Sensors for LIDAR, Facial recognition and high-speed data retrieval
  • Grant Number:
    10098113
  • Fiscal Year:
    2024
  • Funding Amount:
    $124,900
  • Project Category:
    Collaborative R&D
Affective Computing Models: from Facial Expression to Mind-Reading
  • Grant Number:
    EP/Y03726X/1
  • Fiscal Year:
    2024
  • Funding Amount:
    $124,900
  • Project Category:
    Research Grant
3DFace@Home: A pilot study for robust and highly accurate facial 3D reconstruction from mobile devices for facial growth monitoring at home
  • Grant Number:
    EP/X036642/1
  • Fiscal Year:
    2024
  • Funding Amount:
    $124,900
  • Project Category:
    Research Grant
Affective Computing Models: from Facial Expression to Mind-Reading ("ACMod")
  • Grant Number:
    EP/Z000025/1
  • Fiscal Year:
    2024
  • Funding Amount:
    $124,900
  • Project Category:
    Research Grant
Alleviating dyspnoea in patients with chronic respiratory disease through facial cooling of the trigeminal nerve region
  • Grant Number:
    24K13599
  • Fiscal Year:
    2024
  • Funding Amount:
    $124,900
  • Project Category:
    Grant-in-Aid for Scientific Research (C)
Implicit Neural Representations for Facial Animation
  • Grant Number:
    2889954
  • Fiscal Year:
    2023
  • Funding Amount:
    $124,900
  • Project Category:
    Studentship
Collaborative Research: CCSS: Continuous Facial Sensing and 3D Reconstruction via Single-ear Wearable Biosensors
  • Grant Number:
    2401415
  • Fiscal Year:
    2023
  • Funding Amount:
    $124,900
  • Project Category:
    Standard Grant
Examination of the psychophysiological mechanism of facial skin blood flow in emotion processing and its clinical application
  • Grant Number:
    22KJ2717
  • Fiscal Year:
    2023
  • Funding Amount:
    $124,900
  • Project Category:
    Grant-in-Aid for JSPS Fellows
Interdisciplinary perspectives on oral and facial pain and headache: unravelling the complexities for improved understanding, prevention, and management
  • Grant Number:
    487930
  • Fiscal Year:
    2023
  • Funding Amount:
    $124,900
  • Project Category:
    Miscellaneous Programs
Digital humanities research on facial expression and emotion recognition in the illustrated books in the German Enlightenment period.
  • Grant Number:
    23K00093
  • Fiscal Year:
    2023
  • Funding Amount:
    $124,900
  • Project Category:
    Grant-in-Aid for Scientific Research (C)