HRI: Perceptually Situated Human-Robot Dialog Models


Basic Information

  • Award Number:
    0819984
  • Principal Investigator:
  • Amount:
    $815,000
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Category:
    Standard Grant
  • Fiscal Year:
    2008
  • Funding Country:
    United States
  • Project Period:
    2008-01-01 to 2012-07-31
  • Project Status:
    Completed

Project Abstract

Humans naturally use dialog and gestures to discuss complex phenomena and plans, especially when they refer to physical aspects of the environment while they communicate with each other. Existing robot vision systems can sense people and the environment, but are limited in their ability to detect the detailed conversational cues people often rely upon (such as head pose, eye gaze, and body gestures), and to exploit those cues in multimodal conversational dialog. Recent advances in computer vision have made it possible to track such detailed cues. Robots can use passive measures to sense the presence of people, estimate their focus of attention and body pose, and to recognize human gestures and identify physical references. But they have had limited means of integrating such information into models of natural language; heretofore, they have used dialog models for specific domains and/or were limited to one-on-one interaction. Separately, recent advances in natural language processing have led to dialog models that can track relatively free-form conversation among multiple participants, and extract meaningful semantics about people's intentions and actions. These multi-party dialog models have been used in meeting environments and other domains. In this project, the PI and his team will fuse these two lines of research to achieve a perceptually situated, natural conversation model that robots can use to interact multimodally with people. They will develop a reasonably generic dialog model that allows a situated agent to track the dialog around it, know when it is being addressed, and take direction from a human operator regarding where it should find or place various objects, what it should look for in the environment, and which individuals it should attend to, follow, or obey. 
Project outcomes will extend existing dialog management techniques to a more general theory of interaction management, and will also extend current state-of-the-art vision research to recognize the subtleties of nonverbal conversational cues, along with methods for integrating those cues with ongoing dialog interpretation and interaction with the world.

Broader Impacts: This research promises many positive societal impacts. Ultimately, the development of effective human-robot interfaces will allow greater deployment of robots to perform dangerous tasks that humans would otherwise have to perform, and will enable greater use of robots for service tasks in domestic environments. As part of the project, the PI will conduct outreach efforts to engage secondary-school students, in the hope that exposure to HRI research may increase their interest in science and engineering studies.
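The abstract describes a situated agent that fuses nonverbal perceptual cues (head pose, eye gaze, gesture) with the spoken utterance to decide, among other things, when it is being addressed. The toy sketch below illustrates that kind of cue fusion with a simple weighted score; the field names, weights, and threshold are illustrative assumptions, not the project's actual model.

```python
from dataclasses import dataclass


@dataclass
class PerceptualCues:
    """Per-utterance cues a vision front end might supply (hypothetical fields)."""
    head_pose_toward_robot: float  # 0..1 confidence that the speaker faces the robot
    gaze_toward_robot: float       # 0..1 confidence of eye contact with the robot
    pointing_at_robot: bool        # deictic gesture detected toward the robot


def addressed_to_robot(utterance: str, cues: PerceptualCues,
                       threshold: float = 0.5) -> bool:
    """Fuse lexical and nonverbal evidence that the robot is being addressed.

    The weights and threshold here are placeholders for what a trained
    multimodal dialog model would learn from data.
    """
    score = 0.0
    if "robot" in utterance.lower():            # explicit vocative / lexical cue
        score += 0.6
    score += 0.3 * cues.head_pose_toward_robot  # nonverbal cues from vision
    score += 0.3 * cues.gaze_toward_robot
    if cues.pointing_at_robot:
        score += 0.2
    return score >= threshold


# A speaker who faces the robot and names it is treated as addressing it.
cues = PerceptualCues(head_pose_toward_robot=0.9,
                      gaze_toward_robot=0.8,
                      pointing_at_robot=False)
print(addressed_to_robot("Robot, put the box on the table", cues))  # True
```

In a real system the lexical test would be replaced by the dialog model's addressee estimate, and the fusion would be probabilistic rather than a hand-set linear score.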

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)


Other Publications by Trevor Darrell

Towards Context-Based Visual Feedback Recognition for Embodied Agents
  • DOI:
  • Publication Year:
    2005
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Louis;C. Sidner;Trevor Darrell
  • Corresponding Author:
    Trevor Darrell
Fast stereo-based head tracking for interactive environments
Recovering Articulated Model Topology from Observed Motion
  • DOI:
  • Publication Year:
    2002
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Leonid Taycher;John W. Fisher III;Trevor Darrell
  • Corresponding Author:
    Trevor Darrell
From conversational tooltips to grounded discourse: head pose tracking in interactive dialog systems
Modeling and Interactive Animation of Facial Expression using Vision
  • DOI:
  • Publication Year:
    1994
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Irfan Essa;Trevor Darrell;A. Pentland
  • Corresponding Author:
    A. Pentland


Other Grants by Trevor Darrell

Collaborative Research: CCRI: New: An Open Source Simulation Platform for AI Research on Autonomous Driving
  • Award Number:
    2235013
  • Fiscal Year:
    2023
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
AitF: FULL: Collaborative Research: PEARL: Perceptual Adaptive Representation Learning in the Wild
  • Award Number:
    1536003
  • Fiscal Year:
    2015
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
NRI: Collaborative Research: Shall I Touch This?: Navigating the Look and Feel of Complex Surfaces
  • Award Number:
    1427425
  • Fiscal Year:
    2014
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
RI: Large: Collaborative Research: Reconstructive recognition: Uniting statistical scene understanding and physics-based visual reasoning
  • Award Number:
    1212798
  • Fiscal Year:
    2012
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
RI: Small: Hierarchical Probabilistic Layers for Visual Recognition of Complex Objects
  • Award Number:
    1116411
  • Fiscal Year:
    2011
  • Funding Amount:
    $815,000
  • Project Category:
    Continuing Grant
Support for Workshop on Advances in Language and Vision
  • Award Number:
    1134072
  • Fiscal Year:
    2011
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
HCC: Medium: Collaborative Research: Computer Vision and Online Communities: A Symbiosis
  • Award Number:
    0905647
  • Fiscal Year:
    2009
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
HRI: Perceptually Situated Human-Robot Dialog Models
  • Award Number:
    0704479
  • Fiscal Year:
    2007
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
Student Participant Support for International Conference on Multimodal Interfaces 2007; November 12-15, 2007 in Nagoya, Japan
  • Award Number:
    0735077
  • Fiscal Year:
    2007
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant
Student participant support for ICMI 2006
  • Award Number:
    0631995
  • Fiscal Year:
    2006
  • Funding Amount:
    $815,000
  • Project Category:
    Standard Grant

Similar International Grants

Perceptually Optimized Video and Graphics on Mobile Devices
  • Award Number:
    545170-2020
  • Fiscal Year:
    2022
  • Funding Amount:
    $815,000
  • Project Category:
    Alliance Grants
CAREER: HCC: Developing Perceptually-Driven Tools for Estimating Visualization Effectiveness
  • Award Number:
    2320920
  • Fiscal Year:
    2022
  • Funding Amount:
    $815,000
  • Project Category:
    Continuing Grant
Perceptually Optimized Video and Graphics on Mobile Devices
  • Award Number:
    545170-2020
  • Fiscal Year:
    2021
  • Funding Amount:
    $815,000
  • Project Category:
    Alliance Grants
CAREER: HCC: Developing Perceptually-Driven Tools for Estimating Visualization Effectiveness
  • Award Number:
    2046725
  • Fiscal Year:
    2021
  • Funding Amount:
    $815,000
  • Project Category:
    Continuing Grant
Capturing child attention: Are perceptually rich stimuli the best way to aid number learning?
  • Award Number:
    2411605
  • Fiscal Year:
    2020
  • Funding Amount:
    $815,000
  • Project Category:
    Studentship
Perceptually Optimized Video and Graphics on Mobile Devices
  • Award Number:
    545170-2020
  • Fiscal Year:
    2020
  • Funding Amount:
    $815,000
  • Project Category:
    Alliance Grants
Psychoacoustic evaluations of timbral boundaries and a computational model of perceptually distinct sound entities.
  • Award Number:
    489788-2016
  • Fiscal Year:
    2018
  • Funding Amount:
    $815,000
  • Project Category:
    Alexander Graham Bell Canada Graduate Scholarships - Doctoral
Perceptually Motivated Advanced Bandwidth Extension Method in Multichannel Blind Source Separation
  • Award Number:
    489818-2016
  • Fiscal Year:
    2018
  • Funding Amount:
    $815,000
  • Project Category:
    Postgraduate Scholarships - Doctoral
CAREER: Perceptually Guided Hand Motion Synthesis
  • Award Number:
    1652210
  • Fiscal Year:
    2017
  • Funding Amount:
    $815,000
  • Project Category:
    Continuing Grant
Psychoacoustic evaluations of timbral boundaries and a computational model of perceptually distinct sound entities.
  • Award Number:
    489788-2016
  • Fiscal Year:
    2017
  • Funding Amount:
    $815,000
  • Project Category:
    Alexander Graham Bell Canada Graduate Scholarships - Doctoral