Design the Future 2: CrowdDesignVR

Basic Information

  • Grant Number:
    EP/R004471/1
  • Principal Investigator:
  • Amount:
    $714,200
  • Host Institution:
  • Host Institution Country:
    United Kingdom
  • Project Type:
    Research Grant
  • Fiscal Year:
    2018
  • Funding Country:
    United Kingdom
  • Duration:
    2018 to (no data)
  • Status:
    Completed

Project Summary

Our initial proposal, CrowdDesign, set out to explore how we can aid rapid prototyping of mobile sensor-based user interfaces by exploiting the versatile sensor capabilities of mobile phones. The primary objective was to investigate whether we can crowdsource such sensor-dependent tasks to mobile devices in order to help designers rapidly evaluate new interaction techniques in situ. We have identified a very strong research trajectory that motivates continuing the CrowdDesign project beyond this year: CrowdDesignVR. In this follow-up project we propose to substantially extend the scope of the CrowdDesign project and elevate it from the smartphone platform to a virtual reality (VR) platform. To explore this very promising research trajectory for crowdsourced human-computer interaction, we need to invest time and effort into realising a high-quality crowdsourcing platform for VR. CrowdDesignVR will be the first crowdsourcing system for virtual reality. It will distribute tasks across the Steam VR distribution network, which allows it to reach a large sample of VR users. Prior research cannot reach this scale, as it has been limited to opportunity-sampling local participants and then training them to use a specific VR system. In contrast, by enabling access to thousands or even tens of thousands of Steam users, CrowdDesignVR facilitates user interaction data collection at a scale that is several orders of magnitude larger.
This provides a number of wider benefits: 1) over the course of the project we can create more accurate models of human actions; and 2) we can collect sufficient training data to train machine learning models, such as deep neural networks, to accurately decode common user interface interaction patterns, such as typing, gesturing, and determining whether an action was intended by the user. Since crowdsourcing tasks in a high-fidelity VR environment is a new avenue of research, there are many fundamental questions that need to be answered. We believe this project could result in seminal work on understanding the design space for crowdsourcing in VR. Another potential impact is the data itself. Our internal work on building deep neural networks for decoding typing tasks on touchscreen and physical keyboards has revealed that deep neural networks (specifically, recurrent neural networks) can outperform traditional hidden Markov model decoding. However, we have also found that the amount of data that needs to be collected is very large; in fact, we used the CrowdDesign task architecture mentioned earlier in this report to collect touchscreen data from hundreds of users. CrowdDesignVR can substantially widen the scope and let us tackle some deep, previously unsolved questions in user interface design, such as how to build a gesture recogniser capable of learning to recognise both open-loop (direct recall from motor memory) and closed-loop (visually guided motion) gestures, both on the 2D plane and in 3D space. A large amount of data would allow us to train a recurrent neural network to learn this separation. The potential is large, as users always operate on a continuum between open-loop and closed-loop interaction. However, due to the fundamental differences in the underlying generative models that produce the observed behaviour, it is very difficult to collect sufficient training data in the lab.
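The open-loop versus closed-loop separation described above could, in principle, be learned by a sequence classifier that consumes a gesture trajectory one sample at a time. As a minimal sketch (not the project's actual model; the network dimensions, random weights, and function names here are purely illustrative), a simple Elman-style recurrent network classifying a 3D hand trajectory into two classes might look like:

```python
import math
import random

def rnn_classify(trajectory, Wxh, Whh, Who, bh, bo):
    """Run a simple Elman RNN over a gesture trajectory and return
    softmax probabilities for two classes (open-loop vs closed-loop)."""
    h = [0.0] * len(bh)                       # hidden state
    for x in trajectory:                      # one (x, y, z) sample per step
        h = [math.tanh(sum(w * xi for w, xi in zip(Wxh[i], x)) +
                       sum(w * hk for w, hk in zip(Whh[i], h)) + bh[i])
             for i in range(len(bh))]
    logits = [sum(w * hk for w, hk in zip(Who[c], h)) + bo[c]
              for c in range(len(bo))]
    m = max(logits)                           # numerically stable softmax
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy dimensions: 3-D input (hand position), 8 hidden units, 2 classes.
rng = random.Random(0)
Wxh = [[rng.gauss(0, 0.1) for _ in range(3)] for _ in range(8)]
Whh = [[rng.gauss(0, 0.1) for _ in range(8)] for _ in range(8)]
Who = [[rng.gauss(0, 0.1) for _ in range(8)] for _ in range(2)]
bh, bo = [0.0] * 8, [0.0] * 2

gesture = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(50)]
probs = rnn_classify(gesture, Wxh, Whh, Who, bh, bo)
```

In practice the weights would be trained (for example, by backpropagation through time) on the large-scale trajectory data the platform collects; the untrained forward pass above only illustrates the data flow from a sampled trajectory to a class decision.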

Project Outcomes

Journal Articles (10)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Crowdsourcing Interface Feature Design with Bayesian Optimization
  • DOI:
    10.1145/3290605.3300482
  • Publication Date:
    2019
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Dudley J
  • Corresponding Author:
    Dudley J
Bare-Handed 3D Drawing in Augmented Reality
  • DOI:
    10.1145/3196709.3196737
  • Publication Date:
    2018
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Dudley J
  • Corresponding Author:
    Dudley J
Change Blindness in Proximity-Aware Mobile Interfaces
  • DOI:
    10.1145/3173574.3173617
  • Publication Date:
    2018
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Brock M
  • Corresponding Author:
    Brock M
Fast and Precise Touch-Based Text Entry for Head-Mounted Augmented Reality with Variable Occlusion
Crowdsourcing Design Guidance for Contextual Adaptation of Text Content in Augmented Reality
  • DOI:
    10.1145/3411764.3445493
  • Publication Date:
    2021
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Dudley J
  • Corresponding Author:
    Dudley J

Other Publications by Per Ola Kristensson

From wax tablets to touchscreens: an introduction to text-entry research
  • DOI:
  • Publication Date:
    2014
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Per Ola Kristensson
  • Corresponding Author:
    Per Ola Kristensson
Estimating and using absolute and relative viewing distance in interactive systems
  • DOI:
    10.1016/j.pmcj.2012.06.009
  • Publication Date:
    2014-02-01
  • Journal:
  • Impact Factor:
  • Authors:
    Jakub Dostal;Per Ola Kristensson;Aaron Quigley
  • Corresponding Author:
    Aaron Quigley
Swarm manipulation: An efficient and accurate technique for multi-object manipulation in virtual reality
  • DOI:
    10.1016/j.cag.2024.104113
  • Publication Date:
    2024-12-01
  • Journal:
  • Impact Factor:
  • Authors:
    Xiang Li;Jin-Du Wang;John J. Dudley;Per Ola Kristensson
  • Corresponding Author:
    Per Ola Kristensson

Other Grants by Per Ola Kristensson

Towards an Equitable Social VR
  • Grant Number:
    EP/W02456X/1
  • Fiscal Year:
    2023
  • Amount:
    $714,200
  • Project Type:
    Research Grant
Inclusive Design of Immersive Content
  • Grant Number:
    EP/S027432/1
  • Fiscal Year:
    2019
  • Amount:
    $714,200
  • Project Type:
    Research Grant
Intelligent Mobile Crowd Design Platform
  • Grant Number:
    EP/N010558/1
  • Fiscal Year:
    2016
  • Amount:
    $714,200
  • Project Type:
    Research Grant
Text Entry by Inference: Eye Typing, Stenography, and Understanding Context of Use
  • Grant Number:
    EP/H027408/2
  • Fiscal Year:
    2011
  • Amount:
    $714,200
  • Project Type:
    Fellowship
Text Entry by Inference: Eye Typing, Stenography, and Understanding Context of Use
  • Grant Number:
    EP/H027408/1
  • Fiscal Year:
    2010
  • Amount:
    $714,200
  • Project Type:
    Fellowship

Similar Overseas Grants

Home helper robots: Understanding our future lives with human-like AI
  • Grant Number:
    FT230100021
  • Fiscal Year:
    2025
  • Amount:
    $714,200
  • Project Type:
    ARC Future Fellowships
FABB-HVDC (Future Aerospace power conversion Building Blocks for High Voltage DC electrical power systems)
  • Grant Number:
    10079892
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Legacy Department of Trade & Industry
Human-Robot Co-Evolution: Achieving the full potential of future workplaces
  • Grant Number:
    DP240100938
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Discovery Projects
Cities as transformative agents for a climate-safe future
  • Grant Number:
    FL230100021
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Australian Laureate Fellowships
Cloud immersion and the future of tropical montane forests
  • Grant Number:
    EP/Y027736/1
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Fellowship
Mem-Fast Membranes as Enablers for Future Biorefineries: from Fabrication to Advanced Separation Technologies
  • Grant Number:
    EP/Y032004/1
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Research Grant
International Centre-to-Centre Collaboration: New catalysts for acetylene processes enabling a sustainable future
  • Grant Number:
    EP/Z531285/1
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Research Grant
Addressing the complexity of future power system dynamic behaviour
  • Grant Number:
    MR/S034420/2
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Fellowship
CAREER: Advances to the EMT Modeling and Simulation of Restoration Processes for Future Grids
  • Grant Number:
    2338621
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Continuing Grant
Securing the Future: Inclusive Cybersecurity Education for All
  • Grant Number:
    2350448
  • Fiscal Year:
    2024
  • Amount:
    $714,200
  • Project Type:
    Standard Grant