CAREER: Anywhere Augmentation: Practical Mobile Augmented Reality in Unprepared Physical Environments
Basic Information
- Award Number: 0747520
- Principal Investigator:
- Amount: $0.5 million
- Host Institution:
- Host Institution Country: United States
- Project Type: Continuing Grant
- Fiscal Year: 2008
- Funding Country: United States
- Project Period: 2008-04-01 to 2015-03-31
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
The PI introduces the term Anywhere Augmentation to refer to the idea of linking location-specific computing services with the physical world, making them readily and directly available in any situation and location. This project embodies a novel approach to Anywhere Augmentation based on efficient human input for wearable computing and augmented reality (AR), through both sketch-based interfaces on hand-held devices and direct-overlay 3D user interfaces. Mobile augmented reality is a powerful interface for wearable computing. If computer users are enabled to place arbitrary annotations in 3D space wherever they go, the physical world becomes the user interface. Instead of embedding computing and display equipment in the environment, as in the case of ubiquitous computing, graphical annotations are overlaid on top of the environment by means of optical see-through glasses or video overlay. Robust registration between the physical world and the augmentations is necessary. Current approaches rely on the availability of a 3D model of the environment or on its instrumentation with active or passive markers. The PI's approach is novel in that he proposes to emphasize, support, and utilize the expertise of the human in the loop to make Anywhere Augmentation feasible. Human users generally have a clear grasp of the layout of the scene in front of them, and they can easily identify the physical objects with which information should be linked. The PI plans to enable users to transfer their intuitive scene understanding to the computer through intelligently constrained and assisted user interfaces. Current real-time computer vision techniques and algorithms, while far from being able to facilitate automatic scene understanding, are very well suited to constrain and guide a user's informed input for scene analysis and augmentation, delivered in the form of a few simple point selections, stroke gestures, and common classifications. As a main source of input, the PI will employ video feeds from small head-worn or palm-top-device cameras. The devices, algorithms, and interaction techniques will be applicable to novel settings and application scenarios (e.g., visualization of occluded infrastructure, navigational guidance, and social and educational applications for high-school students).
Broader Impact: The PI will use this research as a case study and platform for projects supporting the teaching of human-computer interaction fundamentals. As a first step towards actively furthering the inclusion of minority students in the benefits of the research, the PI has partnered with Jackson State University, MS, and will also collaborate with outreach programs supporting local underrepresented K-12 students, where he will enhance after-school programs and field trips with carefully planned AR experimentation and mentoring. The goal of Anywhere Augmentation will be tested by making research products (new devices, tools, and interfaces) available to students and field scientists in a wide variety of environments (e.g., as a campus navigation and inventory tool, for an emergency response scenario, at UCSB lab open houses, and at international conferences).
The PI will also introduce innovations in three courses in the curriculum of the UCSB Computer Science Department (an undergraduate elective on HCI which he established, the undergraduate senior CS design project course cycle, and a new graduate course on 3D user interfaces), so that they include hands-on experience with effective UI design and programming for mobile devices, including mobile AR interfaces. To this end, the PI will also involve the UCSB Allosphere, a three-story spherical surround-view 3D immersive space, as a mobile AR simulator.
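To make the abstract's idea of "intelligently constrained and assisted" user input more concrete, the following is a minimal sketch (not drawn from the proposal itself) of one such interaction: a single point selection in the video feed is back-projected onto an assumed ground plane to obtain a 3D anchor for an annotation. The function name, pinhole camera model, known camera pose, and known plane are all illustrative assumptions; in practice such input would be combined with the tracking and computer-vision support described above.

```python
import numpy as np

def tap_to_world_anchor(tap_px, K, R, t, plane_n, plane_d):
    """Back-project a single user tap (pixel coordinates) onto a known plane.

    Hypothetical helper for illustration only (names and camera model are
    assumptions, not the project's actual interface): it shows how one simple
    point selection, combined with an estimated camera pose, can anchor an
    AR annotation in 3D.

    tap_px  : (u, v) pixel position of the point selection
    K       : 3x3 pinhole intrinsic matrix
    R, t    : world-to-camera rotation (3x3) and translation (3,)
    plane_n : unit normal of the anchoring plane in world coordinates
    plane_d : plane offset, so points X on the plane satisfy plane_n @ X + plane_d == 0
    Returns the 3D world point hit by the viewing ray, or None if there is no valid hit.
    """
    # Viewing ray through the tapped pixel, first in camera, then in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([tap_px[0], tap_px[1], 1.0])
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t  # camera position in the world frame

    denom = plane_n @ ray_world
    if abs(denom) < 1e-9:          # ray (nearly) parallel to the plane
        return None
    s = -(plane_n @ cam_center + plane_d) / denom
    if s <= 0:                     # intersection lies behind the camera
        return None
    return cam_center + s * ray_world


if __name__ == "__main__":
    # Toy setup: world frame equals camera frame (identity pose), OpenCV-style axes
    # (x right, y down, z forward), ground plane 1.6 m below the camera (y = 1.6).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    anchor = tap_to_world_anchor(
        tap_px=(400, 420), K=K, R=np.eye(3), t=np.zeros(3),
        plane_n=np.array([0.0, 1.0, 0.0]), plane_d=-1.6)
    print("annotation anchor (m):", anchor)   # roughly [0.71, 1.6, 7.1]
```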
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Publications by Tobias Hollerer
Privately Evaluating Contingency Tables with Suppression
- DOI:
- Publication Year: 2016
- Journal:
- Impact Factor: 0
- Authors: Tomohiro Mashita; Alexander Plopski; Akira Kudo; Tobias Hollerer; Kiyoshi Kiyokawa; and Haruo Takemura; 陸文杰, 佐久間淳
- Corresponding Author: 陸文杰, 佐久間淳
高野山周辺の御田―真国を中心として
Onda Ritual Rice Fields around Mount Koya: Focusing on Makuni
- DOI:
- Publication Year: 2017
- Journal:
- Impact Factor: 0
- Authors: John O'Donovan; Shinsuke Nakajima; Tobias Hollerer; Mayumi Ueda; Yuuki Matsunami; Byungkyu Kang; 森本一彦; 武田昌一; 森本一彦
- Corresponding Author: 森本一彦
異なる光源環境における画像特徴の頑健性の調査
Investigation of the Robustness of Image Features under Different Illumination Conditions
- DOI:
- Publication Year: 2015
- Journal:
- Impact Factor: 0
- Authors: 工藤 彰; Alexander Plopski; Tobias Hollerer; 間下以大; 竹村 治雄; 清川 清
- Corresponding Author: 清川 清
モバイル端末のコントラスト比と水晶体白濁度による可読性への影響
Effects of Mobile Device Contrast Ratio and Crystalline Lens Opacity on Readability
- DOI:
- Publication Year: 2015
- Journal:
- Impact Factor: 0
- Authors: 工藤 彰; Alexander Plopski; Tobias Hollerer; 間下以大; 竹村 治雄; 清川 清; 岩田光平, 石井佑樹, 小飯塚達也, 松波紫草, 石尾暢宏, R. Paul Lege, 小嶌健仁, 宮尾克
- Corresponding Author: 岩田光平, 石井佑樹, 小飯塚達也, 松波紫草, 石尾暢宏, R. Paul Lege, 小嶌健仁, 宮尾克
A Cross-Cultural Analysis of Explanations for Product Reviews
- DOI:
- Publication Year: 2016
- Journal:
- Impact Factor: 0
- Authors: John O'Donovan; Shinsuke Nakajima; Tobias Hollerer; Mayumi Ueda; Yuuki Matsunami; Byungkyu Kang
- Corresponding Author: Byungkyu Kang
Other Grants by Tobias Hollerer
Collaborative Research: HCC: Medium: HCI in Motion -- Using EEG, Eye Tracking, and Body Sensing for Attention-Aware Mobile Mixed Reality
- Award Number: 2211784
- Fiscal Year: 2022
- Funding Amount: $0.5 million
- Project Type: Standard Grant
CHS: Small: Integrative Wide-Area Augmented Reality Scene Modeling
- Award Number: 1911230
- Fiscal Year: 2019
- Funding Amount: $0.5 million
- Project Type: Continuing Grant
EAGER: Attention-Aware Mixed Reality Interfaces
- Award Number: 1845587
- Fiscal Year: 2018
- Funding Amount: $0.5 million
- Project Type: Standard Grant
EAGER: Large-Scale Real-Time Information Visualization on Immersive Platforms
- Award Number: 1748392
- Fiscal Year: 2017
- Funding Amount: $0.5 million
- Project Type: Standard Grant
EAGER: Collaborative Visualization for Knowledge Computing
- Award Number: 1058132
- Fiscal Year: 2010
- Funding Amount: $0.5 million
- Project Type: Standard Grant
Scalable Visualization and Constrained Interaction for Large Graphs -- Supporting the Collaborative Analysis of High-dimensional Data Sets
- Award Number: 0635492
- Fiscal Year: 2006
- Funding Amount: $0.5 million
- Project Type: Standard Grant
Similar International Grants
Collaborative Research: CISE: Large: Systems Support for Run-Anywhere Serverless
- Award Number: 2321725
- Fiscal Year: 2023
- Funding Amount: $0.5 million
- Project Type: Continuing Grant
Collaborative Research: CISE: Large: Systems Support for Run-Anywhere Serverless
- Award Number: 2321723
- Fiscal Year: 2023
- Funding Amount: $0.5 million
- Project Type: Continuing Grant
Collaborative Research: CISE: Large: Systems Support for Run-Anywhere Serverless
- Award Number: 2321724
- Fiscal Year: 2023
- Funding Amount: $0.5 million
- Project Type: Continuing Grant
Collaborative Research: CISE: Large: Systems Support for Run-Anywhere Serverless
- Award Number: 2321726
- Fiscal Year: 2023
- Funding Amount: $0.5 million
- Project Type: Continuing Grant
SCH: Bringing Intelligence to Pulmonology: New AI-Enabled Systems for Pulmonary Function Tests Anytime and Anywhere
- Award Number: 2205360
- Fiscal Year: 2022
- Funding Amount: $0.5 million
- Project Type: Standard Grant
IP Creation-Collaboration from Anywhere
- Award Number: 10012137
- Fiscal Year: 2021
- Funding Amount: $0.5 million
- Project Type: Responsive Strategy and Planning
Research on the real-time fall prevention system using IMU sensors based on machine learning methods which can be used anywhere on daily basis
- Award Number: 21K12798
- Fiscal Year: 2021
- Funding Amount: $0.5 million
- Project Type: Grant-in-Aid for Scientific Research (C)
BBC Prosperity Partnership: Future Personalised Object-Based Media Experiences Delivered at Scale Anywhere
- Award Number: EP/V038087/1
- Fiscal Year: 2021
- Funding Amount: $0.5 million
- Project Type: Research Grant
Matching people virtually with their right mental health professional for better care. Anytime and anywhere
- Award Number: 69256
- Fiscal Year: 2020
- Funding Amount: $0.5 million
- Project Type: Feasibility Studies