Merging Real and Virtual Worlds Considering Depth Information
Basic Information
- Grant Number: 09480068
- Principal Investigator:
- Amount: $35,200
- Host Institution:
- Host Institution Country: Japan
- Category: Grant-in-Aid for Scientific Research (B)
- Fiscal Year: 1997
- Funding Country: Japan
- Duration: 1997 to 1999
- Status: Completed
- Source:
- Keywords:
Project Summary
We obtained the following research results in this project.
1. Acquiring 3D information of a real scene by binocular stereo: One of the most important problems in augmented reality is resolving occlusions between real and virtual objects, which requires depth information of the real-world scene. We studied improving the efficiency of stereo matching by limiting the matching areas to those onto which virtual objects are merged. This made it possible to estimate the depth of the real world in real time.
2. Acquiring 3D information of a real scene by motion stereo: A new factorization method was developed for estimating the 3D structure of a real scene from a sequence of images obtained by a moving camera. Another approach predicts apparent motion in dynamic images, which improves the efficiency of stereo matching.
3. Generation of virtual objects with an immersive modeler: We extended the immersive solid modeler VLEGO to generate virtual objects in a natural and simple way. We also developed a technique for imposing geometric constraints on virtual objects, which made it possible to create articulated objects and to generate animations.
4. Merging virtual objects into a real-world scene image with correct occlusion: Geometric registration between the real and virtual worlds was achieved by tracking markers in captured images, which may be called vision-based registration. Integrating this result with those in the items above, we developed an efficient method of stereo image composition.
5. Development of a prototype system for merging real and virtual worlds: We developed a prototype stereoscopic video see-through HMD that can merge real and virtual objects with correct occlusion in real time. Experiments have shown the feasibility of the prototype system. The displayed merged image of real and virtual is updated every 100 ms.
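The core of items 1 and 4, depth estimation restricted to the regions where virtual objects will appear, followed by a per-pixel occlusion test, can be sketched as follows. This is a minimal illustration in Python/NumPy, not the project's actual implementation: the function names, the SAD block-matching cost, and the nearer-surface-wins composition rule are assumptions standing in for the methods described in the summary.

```python
import numpy as np

def masked_disparity(left, right, mask, max_disp=32, block=5):
    """Block-matching stereo (sum of absolute differences), restricted to a mask.

    Disparity is computed only where `mask` is True -- the regions onto which
    virtual objects will be merged -- which is what keeps the depth estimation
    cheap enough for real-time use.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            if not mask[y, x]:
                continue  # skip pixels outside the merge region
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float32)
                cost = np.abs(ref - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def compose(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion test: keep whichever surface is nearer the camera."""
    virt_wins = virt_depth < real_depth
    out = real_rgb.copy()
    out[virt_wins] = virt_rgb[virt_wins]
    return out
```

With depth recovered only inside the merge mask, `compose` then resolves occlusion pixel by pixel, so a real object closer than the virtual one correctly hides it.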
Project Outcomes
Journal Articles (54)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Takashi Okuma: "Real-time estimation of camera parameters from real images for video see-through augmented reality" (in Japanese), IEICE Technical Report, 97(324), 13-20 (1997)
Masayuki Kanbara: "A stereoscopic video see-through augmented reality system based on vision-based registration", Proc. IEEE Virtual Reality 2000 (2000)
Yuichi Ohta: "Mixed Reality - Merging Real and Virtual Worlds", Ohmsha/Springer-Verlag, 418 pp. (1999)
Takashi Okuma: "An augmented reality system using real-time vision-based registration", Proc. 14th IAPR Int. Conf. on Pattern Recognition, Vol. II, 1226-1229 (1998)
Masayuki Kanbara: "Real-time composition of stereo images for video see-through augmented reality", Proc. 6th IEEE Int. Conf. on Multimedia Computing and Systems, Vol. 1, 213-219 (1999)
Other Grants by YOKOYA Naokazu
Mixed Reality over Space and Time Based on Sensing Real World Environments
- Grant Number: 23240024
- Fiscal Year: 2011
- Amount: $35,200
- Category: Grant-in-Aid for Scientific Research (A)
Studies on Free-viewpoint Image Generation of Large-scale Outdoor Environments Based on Omni-directional Dynamic Imaging
- Grant Number: 19200016
- Fiscal Year: 2007
- Amount: $35,200
- Category: Grant-in-Aid for Scientific Research (A)
Wearable Augmented Reality System for Sending and Sharing Location-dependent Information
- Grant Number: 13558035
- Fiscal Year: 2001
- Amount: $35,200
- Category: Grant-in-Aid for Scientific Research (B)
Object Shape and Surface Reflectance Modeling Based on 3D Measurement
- Grant Number: 13480098
- Fiscal Year: 2001
- Amount: $35,200
- Category: Grant-in-Aid for Scientific Research (B)
Development of Real-time Omnidirectional Stereo Imaging Systems
- Grant Number: 09555127
- Fiscal Year: 1997
- Amount: $35,200
- Category: Grant-in-Aid for Scientific Research (B)
Three-dimensional Analysis of Faces and Application to Facial Animation with Expressions
- Grant Number: 07680399
- Fiscal Year: 1995
- Amount: $35,200
- Category: Grant-in-Aid for Scientific Research (C)