Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection
Basic Information
- Grant number: RGPIN-2020-05456
- Principal investigator: Rhodin, Helge
- Amount: $21,100
- Host institution:
- Host institution country: Canada
- Program: Discovery Grants Program - Individual
- Fiscal year: 2022
- Funding country: Canada
- Duration: 2022-01-01 to 2023-12-31
- Status: Completed
- Source:
- Keywords:
Project Summary
The objective of this research program is to enhance everyday life by creating intuitive interfaces between people, mobile computers, and their physical environment. At present, smart devices lack a detailed picture of the user and the context of their surroundings. For instance, not knowing the user's situation, ill-timed push notifications disrupt our attention. Moreover, even though display capabilities for augmented reality have matured, life-like holographic telepresence is still science fiction. Realistic avatars (graphical representations of a user) can only be created in dedicated studios equipped with dozens of high-end cameras. Some machine learning (ML) methods succeed in less-controlled conditions, but they fail for moderately complex apparel such as a skirt, for high resolutions, or for extreme motions, because supervised ML demands huge amounts of labeled examples.

I propose to overcome these limitations with the concept of world representation learning (WRL), which captures the geometric structure of our physical world hierarchically, using strong forms of self-supervision. Instead of scanning in proprietary studios and spending person-months on annotation, WRL learns from mere consumer-level videos by utilizing geometric, temporal, and physical constraints as supervision signals. One of the biggest challenges will be to make WRL parametric and accessible to artists and casual users, a goal closely related to the interpretability of ML models. If successful, WRL will supersede the widely used hand-crafted mesh representations, which struggle to represent irregular shapes and topological changes. Furthermore, learning from unlabeled consumer videos, which are plentiful and diverse, will alleviate biases such as those towards Caucasian skin tone, body characteristics, and clothing style; WRL will enable personalized avatars.

The objective is to develop mobile motion capture for consumer-level devices that recovers the fine-grained and subtle motions revealing emotion and intent, enabling ubiquitous human-computer interaction (HCI) and the aforementioned push-notification timing. It links to my complementary research threads in neuroscience, studying how neural circuits orchestrate limbed behaviors, and to other life sciences where automated capture helps to unearth new discoveries.

Capturing people with smart devices (cf. Google Glass) scrutinizes bystanders without their consent. I advocate a privacy-preserving camera: an acoustic camera that senses ultrasonic echoes instead of visible light and uses ML algorithms to reconstruct human motion. Sound echoes carry far less information, which masks subject identity in favor of privacy, yet could suffice to localize persons, perhaps even beyond the line of sight. I see a great fit between WRL, mobile capture, and anonymous tracking solutions for smart environments and wearables, and predict a large positive impact on our everyday utility while resolving ethical concerns.
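To make the self-supervision idea concrete, the following minimal sketch (in Python/PyTorch, not taken from the proposal) shows how temporal and photometric consistency over unlabeled video frames can stand in for manual labels; the `encoder` and `decoder` networks and the loss weighting are hypothetical placeholders, not the proposal's actual architecture.

```python
# Minimal sketch of self-supervision from unlabeled video, in the spirit of
# WRL's geometric/temporal/physical constraints. The encoder, decoder, and
# weighting below are hypothetical placeholders.
import torch
import torch.nn.functional as F

def self_supervised_loss(encoder, decoder, frame_t, frame_t1):
    """Two consecutive frames supervise each other; no labels required."""
    z_t = encoder(frame_t)    # latent pose/shape code at time t
    z_t1 = encoder(frame_t1)  # latent code at time t + 1

    # Temporal constraint: motion is smooth, so consecutive latent codes
    # should stay close (a crude stand-in for physical plausibility).
    temporal = F.mse_loss(z_t1, z_t)

    # Photometric/geometric constraint: decoding the code must reproduce
    # the observed frame, so the video itself acts as the label.
    reconstruction = F.l1_loss(decoder(z_t), frame_t)

    return reconstruction + 0.1 * temporal  # weighting chosen arbitrarily
```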
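Similarly, the ranging principle behind the proposed acoustic camera can be illustrated with a short sketch: the distance to a reflector follows from the round-trip time of an ultrasonic chirp. This is purely illustrative; the ML-based reconstruction of full human motion is the research question, and the function below is a hypothetical helper.

```python
# Illustrative time-of-flight ranging for one emitter/microphone pair; the
# proposal's ML reconstruction of human motion goes far beyond this.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def echo_distance(emitted: np.ndarray, recorded: np.ndarray, fs: float) -> float:
    """Distance (m) to the dominant reflector, from a sampled echo recording."""
    # Cross-correlate the recording against the emitted chirp; the lag with
    # the strongest response approximates the echo's round-trip delay.
    corr = np.correlate(recorded, emitted, mode="valid")
    delay_samples = int(np.argmax(np.abs(corr)))
    round_trip = delay_samples / fs           # seconds
    return SPEED_OF_SOUND * round_trip / 2.0  # halve: out and back
```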
Project Outcomes
Journal articles: 0
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0
Other Publications by Rhodin, Helge
Towards a Visualizable, De-identified Synthetic Biomarker of Human Movement Disorders.
- DOI: 10.3233/jpd-223351
- Published: 2022-08-27
- Journal:
- Impact factor: 5.2
- Authors: Hu, Hao; Xiao, Dongsheng; Rhodin, Helge; Murphy, Timothy H.
- Corresponding author: Murphy, Timothy H.

DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila
- DOI: 10.7554/elife.48571
- Published: 2019-10-04
- Journal:
- Impact factor: 7.7
- Authors: Gunel, Semih; Rhodin, Helge; Fua, Pascal
- Corresponding author: Fua, Pascal

Standardized 3D test object for multi-camera calibration during animal pose capture.
- DOI: 10.1117/1.nph.10.4.046602
- Published: 2023-10
- Journal:
- Impact factor: 5.3
- Authors: Hu, Hao; Zhang, Roark; Fong, Tony; Rhodin, Helge; Murphy, Timothy H.
- Corresponding author: Murphy, Timothy H.

Are Existing Monocular Computer Vision-Based 3D Motion Capture Approaches Ready for Deployment? A Methodological Study on the Example of Alpine Skiing
- DOI: 10.3390/s19194323
- Published: 2019-10-01
- Journal:
- Impact factor: 3.9
- Authors: Ostrek, Mirela; Rhodin, Helge; Spoerri, Joerg
- Corresponding author: Spoerri, Joerg
Other Grants by Rhodin, Helge
Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection
- Grant number: RGPIN-2020-05456
- Fiscal year: 2021
- Amount: $21,100
- Program: Discovery Grants Program - Individual

Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection
- Grant number: RGPIN-2020-05456
- Fiscal year: 2020
- Amount: $21,100
- Program: Discovery Grants Program - Individual

Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection
- Grant number: DGECR-2020-00287
- Fiscal year: 2020
- Amount: $21,100
- Program: Discovery Launch Supplement

AuMoCap: Augmented, Portable, Real-Time, Markerless Motion Capture for Digital Humans
- Grant number: RTI-2020-00655
- Fiscal year: 2019
- Amount: $21,100
- Program: Research Tools and Instruments
Similar International Grants
Physically and biomechanically plausible human motion capture from video
- Grant number: 23KJ1915
- Fiscal year: 2023
- Amount: $21,100
- Program: Grant-in-Aid for JSPS Fellows

Study of Clinical Dental Education Using Motion Capture System
- Grant number: 23K16219
- Fiscal year: 2023
- Amount: $21,100
- Program: Grant-in-Aid for Early-Career Scientists

Norfolk's first dedicated Motion Capture Studio.
- Grant number: 10063567
- Fiscal year: 2023
- Amount: $21,100
- Program: Collaborative R&D

Development of safe personal protective equipment removal procedures using a motion capture system
- Grant number: 23K16411
- Fiscal year: 2023
- Amount: $21,100
- Program: Grant-in-Aid for Early-Career Scientists

cloudSLEAP: Maximizing accessibility to deep learning-based motion capture
- Grant number: 10643661
- Fiscal year: 2023
- Amount: $21,100
- Program:

Smartphone Application-Based Markerless Motion Capture and Biomechanical Analysis of Athletes using a Deep Learning Model
- Grant number: 486618
- Fiscal year: 2022
- Amount: $21,100
- Program: Studentship Programs

Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception
- Grant number: DGECR-2022-00415
- Fiscal year: 2022
- Amount: $21,100
- Program: Discovery Launch Supplement

SBIR Phase II: 3D Markerless Motion Capture Technology For Gait Analysis
- Grant number: 2153138
- Fiscal year: 2022
- Amount: $21,100
- Program: Cooperative Agreement

Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception
- Grant number: RGPIN-2022-04920
- Fiscal year: 2022
- Amount: $21,100
- Program: Discovery Grants Program - Individual

Video and inertial (VIMU) motion capture systems for human movement assessment
- Grant number: RGPIN-2021-04059
- Fiscal year: 2022
- Amount: $21,100
- Program: Discovery Grants Program - Individual