GLANCE: GLAnceable Nuances for Contextual Events


Basic Information

  • Grant Number:
    EP/N013964/1
  • Principal Investigator:
  • Amount:
    $1.0283 million
  • Host Institution:
  • Host Institution Country:
    United Kingdom
  • Project Type:
    Research Grant
  • Fiscal Year:
    2016
  • Funding Country:
    United Kingdom
  • Start and End Dates:
    2016 to (no data)
  • Project Status:
    Completed

Project Abstract

This project will develop and validate exciting novel ways in which people can interact with the world via cognitive wearables: intelligent on-body computing systems that aim to understand the user and the context and, importantly, are prompt-less and useful. Specifically, we will focus on the automatic production and display of what we call glanceable guidance. Eschewing traditional, intricate 3D Augmented Reality approaches, which have struggled to demonstrate significant usefulness, glanceable guidance aims to synthesize the nuances of complex tasks into short snippets that are ideal for wearable computing systems, interfere less with the user, and are easier to learn and use.

There are two key research challenges. The first is to mine information from long, raw, and unscripted wearable video of real user-object interactions in order to generate the glanceable supports. The second is to automatically detect the user's moments of uncertainty, during which support should be provided without an explicit prompt from the user.

The project aims to address the following fundamental problems:

1. Improve the detection of the user's attention by robustly determining the periods of time that correspond to task-relevant object interactions from a continuous stream of wearable visual and inertial sensors.
2. Provide assistance only when it is needed by building models of the user, context, and task from autonomously identified micro-interactions by multiple users, focusing on models that can facilitate guidance.
3. Identify and predict action uncertainty from wearable sensing, in particular gaze patterns and head motions.
4. Detect and weigh user expertise for the identification of task nuances towards the optimal creation of real-time tailored guidance.
5. Design and deliver glanceable guidance that acts in a seamless and prompt-less manner during task performance, with minimal interruptions, based on autonomously built models.

GLANCE is underpinned by a rich program of experimental work and rigorous validation across a variety of interaction tasks and user groups. The populations to be tested include skilled users and the general population, on tasks that include assembly, using novel equipment (e.g. an unfamiliar coffee maker), and repair (e.g. replacing a bicycle gear cable). The project also tightly incorporates the development of working demonstrations. In collaboration with our partners, the project will explore high-value impact cases related to health care, towards assisted living, and in industrial settings, focusing on assembly and maintenance tasks. Our team is a collaboration between Computer Science, to develop the novel data mining and computer vision algorithms, and Behavioral Science, to understand when and how users need support.

Project Outcomes

Journal articles (10)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Integration of Experts' and Beginners' Machine Operation Experiences to Obtain a Detailed Task Model
Predicting Eye and Head Coordination While Looking and Pointing
  • DOI:
  • Publication date:
    2017
  • Journal:
  • Impact factor:
    0
  • Author:
    Brian Sullivan
  • Corresponding author:
    Brian Sullivan
The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines
Hotspot Modeling of Hand-Machine Interaction Experiences from a Head-Mounted RGB-D Camera
Hotspots Integrating of Expert and Beginner Experiences of Machine Operations through Egocentric Vision
  • DOI:
    10.23919/mva.2019.8758043
  • Publication date:
    2019
  • Journal:
  • Impact factor:
    0
  • Author:
    Chen L
  • Corresponding author:
    Chen L

Other Publications by Walterio Mayol-Cuevas

Correspondence, Matching and Recognition
  • DOI:
    10.1007/s11263-015-0827-8
  • Publication date:
    2015-05-14
  • Journal:
  • Impact factor:
    9.300
  • Authors:
    Tilo Burghardt; Dima Damen; Walterio Mayol-Cuevas; Majid Mirmehdi
  • Corresponding author:
    Majid Mirmehdi


Other Grants Held by Walterio Mayol-Cuevas

On-Sensor Computer Vision
  • Grant Number:
    EP/Y022629/1
  • Fiscal Year:
    2024
  • Funding Amount:
    $1.0283 million
  • Project Type:
    Research Grant
An Integrated Vision and Control Architecture for Agile Robotic Exploration
  • Grant Number:
    EP/M019454/1
  • Fiscal Year:
    2015
  • Funding Amount:
    $1.0283 million
  • Project Type:
    Research Grant