Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection


Basic Information

  • Grant Number:
    RGPIN-2020-05456
  • Principal Investigator:
  • Amount:
    $21,100
  • Host Institution:
  • Host Institution Country:
    Canada
  • Funding Category:
    Discovery Grants Program - Individual
  • Fiscal Year:
    2020
  • Funding Country:
    Canada
  • Duration:
    2020-01-01 to 2021-12-31
  • Project Status:
    Completed

Project Abstract

The objective of this research program is to enhance everyday life by creating intuitive interfaces between people, mobile computers, and their physical environment. Today's smart devices lack a detailed picture of the user and the surrounding context: not knowing the user's situation, for instance, they deliver ill-timed push notifications that disrupt our attention. Moreover, even though display technology for augmented reality has matured, life-like holographic telepresence remains science fiction. Realistic avatars (graphical representations of a user) are realized only in dedicated studios equipped with dozens of high-end cameras. Some machine learning (ML) methods succeed under less-controlled conditions, but they fail for moderately complex apparel (such as a skirt), high resolutions, or extreme motions, because supervised ML demands huge amounts of labeled examples. I propose to overcome these limitations with the concept of world representation learning (WRL), which captures the geometric structure of our physical world hierarchically, using strong forms of self-supervision. Instead of scanning in proprietary studios and spending person-months on annotation, WRL learns from mere consumer-level videos by utilizing geometric, temporal, and physical constraints as supervision signals. One of the biggest challenges will be to make WRL parametric and accessible to artists and casual users, a goal closely related to the interpretability of ML models. If successful, WRL will supersede the widely used hand-crafted mesh representations, which have problems representing irregular shapes and topological changes. Furthermore, learning from unlabeled consumer videos that are plentiful and diverse will alleviate biases, such as those toward Caucasian skin tones, particular body characteristics, and clothing styles; WRL will enable personalized avatars.
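The abstract names geometric, temporal, and physical constraints as supervision signals that replace manual labels. As one illustrative instance (a hypothetical sketch, not code from the proposal), a temporal-smoothness prior on predicted 3D joint trajectories can be computed from the video's frame ordering alone, with no annotation:

```python
import numpy as np

def temporal_consistency_loss(joints):
    """Penalize frame-to-frame acceleration of predicted 3D joints.

    joints: array of shape (T, J, 3) -- T frames, J joints, xyz.
    Returns the mean squared second finite difference, a smoothness
    prior that needs no labels, only the video's temporal ordering.
    """
    # Second finite difference approximates acceleration.
    accel = joints[2:] - 2 * joints[1:-1] + joints[:-2]
    return float(np.mean(accel ** 2))

# A constant-velocity trajectory incurs zero loss...
t = np.arange(5, dtype=float)
linear = np.stack([np.stack([t, t, t], axis=-1)] * 2, axis=1)  # (5, 2, 3)
print(temporal_consistency_loss(linear))  # → 0.0

# ...while a jittery prediction is penalized.
jitter = linear + np.random.default_rng(0).normal(0, 0.1, linear.shape)
print(temporal_consistency_loss(jitter) > 0)  # → True
```

In a training loop, such a term would be added to other self-supervised losses (e.g., multi-view geometric consistency) and minimized jointly with the pose predictor's parameters.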
A further objective is to develop mobile motion capture for consumer-level devices that recovers the fine-grained, subtle motions revealing emotion and intent, enabling ubiquitous human-computer interaction (HCI) and the push-notification timing mentioned above. This work links to my complementary research threads in neuroscience, on studying how neural circuits orchestrate limbed behaviors, and to other life sciences where automated capture helps unearth new discoveries. Capturing people with smart devices (cf. Google Glass) scrutinizes bystanders without consent. I therefore advocate a privacy-preserving camera: an acoustic camera that senses ultrasonic echoes instead of visible light and uses ML algorithms to reconstruct human motion. Sound echoes carry far less information, which masks subject identity in favor of privacy, yet could suffice to localize people, perhaps even beyond the line of sight. I see a strong fit between the WRL, mobile capture, and anonymous tracking solutions for smart environments and wearables, and predict a substantial positive impact on everyday utility while resolving ethical concerns.
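The acoustic camera rests on a standard physical principle not spelled out in the abstract: time-of-flight ranging, where the delay between an emitted ultrasonic pulse and its echo yields distance. A minimal sketch (all parameters and signal shapes are assumptions for illustration, not details of the proposed hardware):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C
FS = 96_000             # assumed sample rate (Hz) of ultrasonic hardware

def echo_distance(emitted, received, fs=FS):
    """Estimate distance to a reflector from one echo.

    Cross-correlate the emitted pulse with the received signal,
    take the lag of the correlation peak as the round-trip delay
    in samples, and halve the resulting path length.
    """
    corr = np.correlate(received, emitted, mode="full")
    lag = np.argmax(corr) - (len(emitted) - 1)   # delay in samples
    return SPEED_OF_SOUND * lag / fs / 2.0       # metres

# Synthetic demo: a 1 ms tone burst echoed from 1.715 m away
# (round trip 3.43 m -> 10 ms -> 960 samples at 96 kHz).
t = np.arange(int(0.001 * FS)) / FS
pulse = np.sin(2 * np.pi * 25_000 * t)           # 25 kHz burst
received = np.zeros(2048)
received[960:960 + len(pulse)] += 0.3 * pulse    # attenuated echo
print(round(echo_distance(pulse, received), 3))  # → 1.715
```

Localizing and reconstructing full-body motion from such low-information echoes, rather than a single range, is precisely where the proposal's ML reconstruction comes in.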

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)


Other Publications by Rhodin, Helge

Towards a Visualizable, De-identified Synthetic Biomarker of Human Movement Disorders.
  • DOI:
    10.3233/jpd-223351
  • Publication Date:
    2022-08-27
  • Journal:
  • Impact Factor:
    5.2
  • Authors:
    Hu, Hao;Xiao, Dongsheng;Rhodin, Helge;Murphy, Timothy H.
  • Corresponding Author:
    Murphy, Timothy H.
DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila
  • DOI:
    10.7554/elife.48571
  • Publication Date:
    2019-10-04
  • Journal:
  • Impact Factor:
    7.7
  • Authors:
    Gunel, Semih;Rhodin, Helge;Fua, Pascal
  • Corresponding Author:
    Fua, Pascal
Standardized 3D test object for multi-camera calibration during animal pose capture.
  • DOI:
    10.1117/1.nph.10.4.046602
  • Publication Date:
    2023-10
  • Journal:
  • Impact Factor:
    5.3
  • Authors:
    Hu, Hao;Zhang, Roark;Fong, Tony;Rhodin, Helge;Murphy, Timothy H.
  • Corresponding Author:
    Murphy, Timothy H.
Are Existing Monocular Computer Vision-Based 3D Motion Capture Approaches Ready for Deployment? A Methodological Study on the Example of Alpine Skiing
  • DOI:
    10.3390/s19194323
  • Publication Date:
    2019-10-01
  • Journal:
  • Impact Factor:
    3.9
  • Authors:
    Ostrek, Mirela;Rhodin, Helge;Spoerri, Joerg
  • Corresponding Author:
    Spoerri, Joerg


Other Grants by Rhodin, Helge

Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection
  • Grant Number:
    RGPIN-2020-05456
  • Fiscal Year:
    2022
  • Funding Amount:
    $21,100
  • Funding Category:
    Discovery Grants Program - Individual
Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection
  • Grant Number:
    RGPIN-2020-05456
  • Fiscal Year:
    2021
  • Funding Amount:
    $21,100
  • Funding Category:
    Discovery Grants Program - Individual
Mobile Motion Capture: From Photorealistic Avatars to Privacy Protection
  • Grant Number:
    DGECR-2020-00287
  • Fiscal Year:
    2020
  • Funding Amount:
    $21,100
  • Funding Category:
    Discovery Launch Supplement
AuMoCap: Augmented, Portable, Real-Time, Markerless Motion Capture for Digital Humans
  • Grant Number:
    RTI-2020-00655
  • Fiscal Year:
    2019
  • Funding Amount:
    $21,100
  • Funding Category:
    Research Tools and Instruments

Similar International Grants

Physically and biomechanically plausible human motion capture from video
  • Grant Number:
    23KJ1915
  • Fiscal Year:
    2023
  • Funding Amount:
    $21,100
  • Funding Category:
    Grant-in-Aid for JSPS Fellows
Study of Clinical Dental Education Using Motion Capture System
  • Grant Number:
    23K16219
  • Fiscal Year:
    2023
  • Funding Amount:
    $21,100
  • Funding Category:
    Grant-in-Aid for Early-Career Scientists
Norfolk's first dedicated Motion Capture Studio.
  • Grant Number:
    10063567
  • Fiscal Year:
    2023
  • Funding Amount:
    $21,100
  • Funding Category:
    Collaborative R&D
Development of safe personal protective equipment removal procedures using a motion capture system
  • Grant Number:
    23K16411
  • Fiscal Year:
    2023
  • Funding Amount:
    $21,100
  • Funding Category:
    Grant-in-Aid for Early-Career Scientists
cloudSLEAP: Maximizing accessibility to deep learning-based motion capture
  • Grant Number:
    10643661
  • Fiscal Year:
    2023
  • Funding Amount:
    $21,100
  • Funding Category:
Smartphone Application-Based Markerless Motion Capture and Biomechanical Analysis of Athletes using a Deep Learning Model
  • Grant Number:
    486618
  • Fiscal Year:
    2022
  • Funding Amount:
    $21,100
  • Funding Category:
    Studentship Programs
Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception
  • Grant Number:
    DGECR-2022-00415
  • Fiscal Year:
    2022
  • Funding Amount:
    $21,100
  • Funding Category:
    Discovery Launch Supplement
SBIR Phase II: 3D Markerless Motion Capture Technology For Gait Analysis
  • Grant Number:
    2153138
  • Fiscal Year:
    2022
  • Funding Amount:
    $21,100
  • Funding Category:
    Cooperative Agreement
Modeling Diverse, Personalized and Expressive Animations for Virtual Characters through Motion Capture, Synthesis and Perception
  • Grant Number:
    RGPIN-2022-04920
  • Fiscal Year:
    2022
  • Funding Amount:
    $21,100
  • Funding Category:
    Discovery Grants Program - Individual
Video and inertial (VIMU) motion capture systems for human movement assessment
  • Grant Number:
    RGPIN-2021-04059
  • Fiscal Year:
    2022
  • Funding Amount:
    $21,100
  • Funding Category:
    Discovery Grants Program - Individual