Creating Expressive Three-Dimensional Talking Faces


Basic Information

  • Grant Number:
    EP/D049075/1
  • Principal Investigator:
  • Amount:
    $146,300
  • Host Institution:
  • Host Institution Country:
    United Kingdom
  • Funding Type:
    Research Grant
  • Fiscal Year:
    2006
  • Funding Country:
    United Kingdom
  • Duration:
    2006 to (no data)
  • Project Status:
    Completed

Project Abstract

The broad aim of this work is to develop a life-like expressive talking head. This is difficult to achieve because we are extremely sensitive to subtle changes in the features of the face. Flaws in animated sequences are easy to detect and severely degrade the perceived quality of the output. This is especially true for systems that strive for videorealism (indistinguishable from real video). All previous approaches that achieve close to videorealism are image-based and two-dimensional; the pose of the character is always face-on, emotion is usually ignored, and the vocabulary is often limited. This work will overcome, for the first time, all of these limitations. To generate realistic animated sequences, a user need only supply the text (or voice) of the sentence they wish to animate. Contrast this with animation studios, such as Pixar, that require months (or years) of manual tuning of animation parameters to create realistic animated sequences. Of course, these sequences are limited to the script of the movie - to generate further sequences would require further manual specification of the parameters. This system will generate expressive visual speech for any arbitrary utterance from the limited training data available, without the need for user intervention. Also, a user can specify a desired expression, e.g. a happy expression for good news, and the output will automatically be adapted to that expression.
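The project's outputs mention active appearance models, in which a face is encoded as a mean plus a linear combination of basis modes. As a purely illustrative sketch (not the project's actual method), the expression-adaptation idea described above can be imagined as shifting a neutral speech trajectory in model-parameter space by an expression-specific offset; all names, dimensions, and values here are hypothetical:

```python
import numpy as np

# Illustrative only: a face model parameterised by n_modes linear modes,
# with one parameter vector per animation frame. The "happy" offset would,
# in a real system, be learned from happy vs. neutral training data.
rng = np.random.default_rng(0)

n_modes = 8    # hypothetical number of appearance-model modes
n_frames = 5   # hypothetical frames of neutral visual speech

# Neutral speech trajectory in model-parameter space (one row per frame)
neutral_params = rng.standard_normal((n_frames, n_modes))

# Expression-specific parameter offset (stand-in for a learned vector)
happy_offset = rng.standard_normal(n_modes)

def apply_expression(params, offset, strength=1.0):
    """Blend an expression offset into a neutral parameter trajectory.

    Broadcasting adds the same offset to every frame, so the mouth
    movements of the utterance are preserved while the overall
    expression shifts toward the target."""
    return params + strength * offset

happy_params = apply_expression(neutral_params, happy_offset, strength=0.7)
print(happy_params.shape)  # (5, 8): same trajectory, shifted toward 'happy'
```

The `strength` parameter hints at how an output could be "automatically adapted" by degree; a real system would also have to keep the shifted parameters within the model's valid range.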

Project Outcomes

Journal Articles (5)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
A real-time speech-driven talking head using active appearance models
  • DOI:
  • Publication Date:
    2007
  • Journal:
  • Impact Factor:
    0
  • Authors:
    B. Theobald;N. Wilkinson
  • Corresponding Authors:
    B. Theobald;N. Wilkinson
On Evaluating Synthesised Visual Speech
LIPS2008: Visual speech synthesis challenge
Mapping and manipulating facial expression.
  • DOI:
    10.1177/0023830909103181
  • Publication Date:
    2009
  • Journal:
  • Impact Factor:
    1.8
  • Authors:
    Theobald BJ;Matthews I;Mangini M;Spies JR;Brick TR;Cohn JF;Boker SM
  • Corresponding Author:
    Boker SM

Other Publications by Barry-John Theobald

In pursuit of visemes
  • DOI:
  • Publication Date:
    2010
  • Journal:
  • Impact Factor:
    0
  • Author:
    Barry-John Theobald
  • Corresponding Author:
    Barry-John Theobald


Similar International Grants

CRII: SaTC: Enforcing Expressive Security Policies using Trusted Execution Environments
  • Grant Number:
    2348304
  • Fiscal Year:
    2024
  • Funding Amount:
    $146,300
  • Funding Type:
    Standard Grant
Formalising Expressive Morphology
  • Grant Number:
    EP/X024105/1
  • Fiscal Year:
    2023
  • Funding Amount:
    $146,300
  • Funding Type:
    Fellowship
Designing an Expressive Relational Robotic Memory System with Long-Term Capabilities
  • Grant Number:
    23K19984
  • Fiscal Year:
    2023
  • Funding Amount:
    $146,300
  • Funding Type:
    Grant-in-Aid for Research Activity Start-up
Development of skill evaluation models for systematic learning in Expressive Activity and Dance classes
  • Grant Number:
    23K02358
  • Fiscal Year:
    2023
  • Funding Amount:
    $146,300
  • Funding Type:
    Grant-in-Aid for Scientific Research (C)
EMERGE: Early Markers of Expressive and Receptive (language) Growth in Ethnically diverse autistic toddlers
  • Grant Number:
    10862026
  • Fiscal Year:
    2023
  • Funding Amount:
    $146,300
  • Funding Type:
On the reliability of computational algorithms in optimal control methods using highly expressive non-differentiable functions
  • Grant Number:
    23K13359
  • Fiscal Year:
    2023
  • Funding Amount:
    $146,300
  • Funding Type:
    Grant-in-Aid for Early-Career Scientists
The effects of expressive writing following traumatic childbirth
  • Grant Number:
    10592883
  • Fiscal Year:
    2023
  • Funding Amount:
    $146,300
  • Funding Type:
Expressive data augmentation in deep learning
  • Grant Number:
    RGPIN-2022-04651
  • Fiscal Year:
    2022
  • Funding Amount:
    $146,300
  • Funding Type:
    Discovery Grants Program - Individual
Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception
  • Grant Number:
    DGECR-2022-00415
  • Fiscal Year:
    2022
  • Funding Amount:
    $146,300
  • Funding Type:
    Discovery Launch Supplement
Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception
  • Grant Number:
    RGPIN-2022-04920
  • Fiscal Year:
    2022
  • Funding Amount:
    $146,300
  • Funding Type:
    Discovery Grants Program - Individual