Learning Unconstrained Human Pose Estimation from Low-cost Approximate Annotation

Basic Information

  • Grant number:
    EP/H035885/1
  • Principal investigator:
    Mark Everingham
  • Amount:
    $128,100
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project category:
    Research Grant
  • Fiscal year:
    2010
  • Funding country:
    United Kingdom
  • Start/end dates:
    2010 to (no data)
  • Project status:
    Completed

Project Summary

This research is in the area of computer vision: making computers which can understand what is happening in photographs and video. As humans we are fascinated by other humans, and we capture endless images of their activities, for example photographs of our family on holiday, video of sports events, or CCTV footage of people in a town center. A computer capable of understanding what people are doing in such images would be able to do many jobs for us, for example finding photos of our child waving, fast-forwarding to a goal in a football game, or spotting when someone starts a fight in the street. A fundamental task in achieving such aims is to get the computer to understand a person's pose: how are they standing, is their arm raised, where are they pointing? This pose estimation problem is easy for humans but very difficult for computers, because people vary so much in their pose, their body shape and the clothing they wear.

Much work has tried to solve this problem. It works well in particular settings, for example where people wear a special suit with markers to help find the limbs, but it does not work for real-world pictures because it relies on simple stick-man models of humans. We will investigate better models of how humans look by teaching the computer with many example pictures. This approach of learning from pictures, instead of building models by hand, is showing great progress, but it needs example pictures where the pose has been marked, or annotated, by a human annotator. Because annotating pictures is slow and tiresome, current methods make do with a few hundred pictures, and this is not enough to learn all the ways a human can appear.

We will overcome this problem by annotating pictures only roughly, in a way which is very fast, so that we can annotate many pictures at low cost. We will then develop methods by which the computer can learn from this rough annotation, working out what the corresponding exact annotation would be by combining many pictures with information we already know, such as how the human body is put together. With many images to learn from, and methods for making use of rough annotation, we will be able to build stronger models of how humans look as they change their pose. This will lead to pose estimation methods which work better in the real world and contribute to longer-term aims in understanding human activity from photographs and video.
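The core technical idea in the summary, recovering a precise annotation from a rough one by exploiting known body structure, can be sketched in a few lines of code. The sketch below is purely illustrative and not the project's method: the five-joint skeleton, the fixed bone-length prior, and the least-squares gradient refinement are all assumptions made up for this example.

import numpy as np

# Toy skeleton: (parent, child) joint-index pairs and the expected length of
# each bone in pixels. Both are illustrative assumptions, not learned values.
BONES = [(0, 1), (1, 2), (2, 3), (3, 4)]
PRIOR_LENGTHS = np.array([30.0, 25.0, 25.0, 20.0])

def refine(rough, n_steps=500, lr=0.05, w_prior=0.1):
    """Refine rough joint clicks toward a pose whose bone lengths match the prior.

    Minimises  sum ||x - rough||^2 + w_prior * sum (|x_c - x_p| - L)^2
    by plain gradient descent. rough: (J, 2) array of approximate positions.
    """
    x = rough.copy()
    for _ in range(n_steps):
        grad = 2.0 * (x - rough)  # stay close to the annotator's clicks
        for (p, c), l0 in zip(BONES, PRIOR_LENGTHS):
            d = x[c] - x[p]
            l = np.linalg.norm(d) + 1e-8
            g = 2.0 * w_prior * (l - l0) * (d / l)  # push bone toward length l0
            grad[c] += g
            grad[p] -= g
        x = x - lr * grad
    return x

def bone_error(pose):
    lengths = np.array([np.linalg.norm(pose[c] - pose[p]) for p, c in BONES])
    return np.abs(lengths - PRIOR_LENGTHS).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_pose = np.array([[0.0, 0.0], [0.0, 30.0], [0.0, 55.0],
                          [0.0, 80.0], [0.0, 100.0]])
    rough = true_pose + rng.normal(scale=5.0, size=true_pose.shape)  # sloppy clicks
    refined = refine(rough)
    print("mean bone-length error (px): rough %.2f -> refined %.2f"
          % (bone_error(rough), bone_error(refined)))

The refinement trades off fidelity to the rough clicks against consistency with the skeleton; the summary also proposes combining evidence across many pictures, which this single-image sketch does not attempt.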

Project Outcomes

Journal articles (2)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)

Similar Overseas Grants

TRTech-PGR: Development of highly efficient and unconstrained CRISPR systems for plant functional genomics
  • Grant number:
    2132693
  • Fiscal year:
    2022
  • Funding amount:
    $128,100
  • Project category:
    Standard Grant
NRI: FND: Natural Power Transmission through Unconstrained Fluids for Robotic Manipulation
  • Grant number:
    2024409
  • Fiscal year:
    2020
  • Funding amount:
    $128,100
  • Project category:
    Standard Grant
Developing a network-based encoding model of motor cortex during natural behavior of the unconstrained marmoset
  • Grant number:
    10263941
  • Fiscal year:
    2020
  • Funding amount:
    $128,100
  • Project category:
Unconstrained Synthetic Aperture Sonar
  • Grant number:
    418971043
  • Fiscal year:
    2019
  • Funding amount:
    $128,100
  • Project category:
    Research Grants
Development of unconstrained learning support system using wearable sensors and virtual reality for a nursing motion
  • Grant number:
    19K20749
  • Fiscal year:
    2019
  • Funding amount:
    $128,100
  • Project category:
    Grant-in-Aid for Early-Career Scientists
Telemetric mouthguard sensor system with biocompatible materials and MEMS techniques for unconstrained human assessment
  • Grant number:
    19KK0259
  • Fiscal year:
    2019
  • Funding amount:
    $128,100
  • Project category:
    Fund for the Promotion of Joint International Research (Fostering Joint International Research (B))
Continual Online Learning For Unconstrained Facial Landmark Detection And Tracking
  • Grant number:
    2159382
  • Fiscal year:
    2018
  • Funding amount:
    $128,100
  • Project category:
    Studentship
Development of non-contact, unconstrained, motion-free PET system using inexpensive measuring equipment
  • Grant number:
    17K18376
  • Fiscal year:
    2017
  • Funding amount:
    $128,100
  • Project category:
    Grant-in-Aid for Young Scientists (B)
Aerial encounter-type haptic display using wind to achieve unconstrained interaction
  • Grant number:
    17H01780
  • Fiscal year:
    2017
  • Funding amount:
    $128,100
  • Project category:
    Grant-in-Aid for Scientific Research (B)
CHS: Medium: Data Driven Biomechanically Accurate Modeling of Human Gait on Unconstrained Terrain
  • Grant number:
    1703883
  • Fiscal year:
    2017
  • Funding amount:
    $128,100
  • Project category:
    Standard Grant