Collaborative Research: HCC: Medium: Deep Learning-Based Tracking of Eyes and Lens Shape from Purkinje Images for Holographic Augmented Reality Glasses

Basic Information

  • Award Number:
    2107049
  • Principal Investigator:
  • Amount:
    $225,000
  • Host Institution:
  • Institution Country:
    United States
  • Project Type:
    Standard Grant
  • Fiscal Year:
    2021
  • Funding Country:
    United States
  • Project Period:
    2021-10-01 to 2024-09-30
  • Project Status:
    Concluded

Project Abstract

This project seeks to develop head-worn Augmented Reality (AR) systems that look and feel like ordinary prescription eyeglasses, can be worn comfortably all day, and offer a field of view that matches the wide field of view of today's eyewear. Such future AR glasses will enable vast new capabilities for individuals and groups, integrating computer assistance as 3D enhancements within the user's surroundings. For example, wearing such AR glasses, an individual will see remote participants around them as naturally as they now see and interact with nearby real people. Virtual personal assistants such as Alexa and Siri may become 3D-embodied within these AR glasses and situationally aware, guiding the wearer around a new airport or coaching the user in customized physical exercise. This project aims to advance two crucial, synergistic parts of such AR glasses: 1) the see-through display itself and 2) the 3D eye-tracking subsystem. The see-through display must be very compact and still have a wide field of view. To achieve these display requirements, the project uses true holographic image generation, and improves the algorithms that generate these holograms by a) concentrating higher image quality in the direction and distance of the user's current gaze, and b) algorithmically steering the "eye box" (the precise location where the eye needs to be to observe the image) to the current location of the eye's pupil opening. In current holographic displays, this viewing eye box is typically less than 1 cubic millimeter, far too small for a practical head-worn system. Therefore, a practical system may need both a precise eye-tracking system that locates the pupil opening and a display system that algorithmically steers the holographic image to be viewable at that precise location.
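The two display ideas above, foveated hologram quality and algorithmic eye-box steering, can be sketched in a few lines. The sketch below is purely illustrative (the project's actual hologram-generation pipeline is not described in this abstract): foveation is modeled as a Gaussian importance mask over the image, and eye-box steering as a linear phase ramp added to a phase-only hologram, which optically shifts the viewing zone. All function names, the SLM pixel pitch, and the wavelength are assumptions.

```python
import numpy as np

def foveal_weight(h, w, gaze_xy, sigma=0.15):
    """Gaussian importance mask peaked at the gaze point (normalized 0..1
    coordinates). Hypothetical helper: during hologram optimization, pixel
    errors would be weighted by this mask to concentrate quality at the fovea."""
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    d2 = (xs / w - gx) ** 2 + (ys / h - gy) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def steer_eyebox(phase, steer_angles, wavelength=520e-9, pitch=8e-6):
    """Shift the eye box toward the tracked pupil by adding a linear phase
    ramp to a phase-only hologram: a tilt of angle theta requires a ramp of
    2*pi*sin(theta)/wavelength per meter of SLM aperture (assumed values)."""
    h, w = phase.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx, sy = steer_angles  # small horizontal/vertical steering angles, radians
    ramp = 2 * np.pi * pitch * (xs * np.sin(sx) + ys * np.sin(sy)) / wavelength
    return np.mod(phase + ramp, 2 * np.pi)
```

In an actual optimization loop one would multiply the per-pixel reconstruction loss by `foveal_weight` and apply `steer_eyebox` to the final phase pattern each time the tracker reports a new pupil position.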
The 3D eye-tracking system also seeks to determine the direction of the user's gaze and the distance of the point of gaze from the eye (whether near or far), so that the display system can optimize the generated holographic image for the precise focus of attention. The proposed AR display can render images at variable focal lengths, so it could be used by people with visual accommodation issues, thereby allowing them to participate in AR-supported education and training programs. The device could also have other possible uses in medical fields (such as better understanding of the human visual system) and in training. The two branches of this project, the holographic display and the 3D eye tracker, are closely linked, and each is improved by the other. The 3D eye tracker utilizes an enriched set of signals and sensors (multiple cameras for each eye, and a multiplicity of infrared (IR) LEDs), from which the system extracts multiple tracking parameters in real time: the horizontal and vertical gaze angles, the distance accommodation, and the 3D position and size of the pupil's opening. The distance accommodation is extracted by analyzing Purkinje reflections of the IR LEDs from the multiple layers of the eye's cornea and lens. A neural network extracts the aforementioned 3D tracking results from the multiple sensors after being trained on a large body of ground-truth data. The training data is generated from multiple human subjects who are exposed, instantaneously, to known patterns on external displays at a range of distances and angles from the eye. Simultaneously with these instantaneous patterns, the subject is also shown images from the near-eye holographic image generator, whose eye-box location and size have been previously optically calibrated. One part of each pattern is shown, instantaneously, on an external display, and the other part, at the same instant, on the holographic display.
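As a minimal illustration of the regression step described above, a neural network mapping multi-sensor features to the tracking parameters, here is a tiny numpy MLP. Everything about it (the feature encoding, layer sizes, and the 7-parameter output layout) is a hypothetical stand-in; the project's actual trained network is not specified in this abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    """Tiny MLP: Purkinje-glint and pupil features from multiple IR cameras in;
    (gaze_h, gaze_v, accommodation, pupil_x, pupil_y, pupil_z, pupil_diam) out.
    Purely illustrative architecture and initialization."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_hidden, n_out)), "b2": np.zeros(n_out),
    }

def predict(params, x):
    """Forward pass: one tanh hidden layer, linear output head."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# Assumed feature layout: 2 cameras x 4 Purkinje glints x (x, y) = 16 numbers.
params = init_mlp(n_in=16, n_hidden=32, n_out=7)
x = rng.normal(size=(5, 16))   # batch of 5 synthetic feature vectors
y = predict(params, x)         # 7 tracking parameters per sample
```

Training such a network would minimize the error between `predict` outputs and the ground-truth gaze/accommodation/pupil labels collected with the dual-display protocol described below.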
The subject can correctly answer a challenge question only if they have observed both displays simultaneously, which can occur only if the eye is at a precise 3D location and at a precise, known gaze angle. The eye tracker will be further improved by integrating its training and calibration with the high-precision (but very bulky) BinoScopic tracker at UC Berkeley, which tracks using precise maps of the user's retina. The holographic image generator uses the real-time data from the 3D eye tracker to generate holograms whose highest image quality is at the part of the image that is currently on the viewer's fovea, and at the distance to which the user is currently accommodated. The image quality is improved by a trained neural network whose inputs are images from a camera placed, during training, at the position of the viewer's eye. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
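The calibration logic above amounts to a geometric acceptance test: a trial yields a valid ground-truth label only when the pupil sits inside the calibrated eye box and the gaze ray points at the external target. A hedged numpy sketch (all names, the box geometry, and the angular tolerance are illustrative, not the project's API):

```python
import numpy as np

def sees_both_displays(eye_pos, eyebox_center, eyebox_half,
                       gaze_dir, target_dir, tol_deg=1.0):
    """Return True only if the pupil lies inside the axis-aligned eye box
    AND the gaze direction is within tol_deg of the external target direction.
    Vectors are 3D arrays in a shared head-frame coordinate system (assumed)."""
    inside = np.all(np.abs(np.asarray(eye_pos) - eyebox_center) <= eyebox_half)
    cosang = np.dot(gaze_dir, target_dir) / (
        np.linalg.norm(gaze_dir) * np.linalg.norm(target_dir))
    aligned = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) <= tol_deg
    return bool(inside and aligned)
```

With a sub-cubic-millimeter eye box, `eyebox_half` would be on the order of half a millimeter per axis, so a correct challenge answer pins the eye position and gaze angle very tightly.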

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)

Other Publications by Jorge Otero-Millan

V1 neurons can distinguish between motion in the world and visual displacements due to eye movements: a microsaccade study
  • DOI:
    10.1186/1471-2202-14-s1-p262
  • Publication Date:
    2013-07-08
  • Journal:
  • Impact Factor:
    2.300
  • Authors:
    Xoana G Troncoso;Ali Najafian Jazi;Jorge Otero-Millan;Stephen L Macknik;Susana Martinez-Conde
  • Corresponding Author:
    Susana Martinez-Conde
Modeling of magnetic vestibular stimulation experienced during high-field clinical MRI
  • DOI:
    10.1038/s43856-024-00667-9
  • Publication Date:
    2025-01-21
  • Journal:
  • Impact Factor:
    6.300
  • Authors:
    Ismael Arán-Tapia;Vicente Pérez-Muñuzuri;Alberto P. Muñuzuri;Andrés Soto-Varela;Jorge Otero-Millan;Dale C. Roberts;Bryan K. Ward
  • Corresponding Author:
    Bryan K. Ward
Visual exploration in amblyopic patients
  • DOI:
    10.1016/j.jaapos.2017.07.102
  • Publication Date:
    2017-08-01
  • Journal:
  • Impact Factor:
  • Authors:
    Fatema Ghasia;Dinah Chen;Jorge Otero-Millan;Priyanka Kumar;Aasef Shaikh
  • Corresponding Author:
    Aasef Shaikh

Similar Domestic (China) Grants

Mechanistic and translational study of B3GAT3-mediated glycosylation of CDK4 promoting senescence resistance in liver cancer cells and the development of HCC
  • Award Number:
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Mechanistic study of IGF2BP2/hnRNPU-regulated alternative splicing of SREBP-1 promoting MASH-HCC development
  • Award Number:
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Mechanistic study of deuterated TKIs enhancing immunogenicity and sensitizing anti-PD-1 therapy by promoting ferroptosis in HCC
  • Award Number:
    JCZRQN202500319
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
m6A-mediated drug-resistance mechanisms in HCC and targeted-intervention applications
  • Award Number:
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Mechanistic study of NEURL1 promoting HCC radiosensitivity by regulating ferroportin1-mediated suppression of ferroptosis through ubiquitination-dependent degradation
  • Award Number:
    2025JJ80744
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Mechanistic study of targeting CTSB to enhance the efficacy of anti-PD-1 therapy in HCC by regulating metabolic reprogramming of tumor-associated macrophages
  • Award Number:
  • Approval Year:
    2025
  • Funding Amount:
    ¥100,000
  • Project Type:
    Provincial/Municipal Project
Mechanistic study of radiation-stimulated hepatic stellate cells overexpressing CXCL12 to induce radioresistance in HCC
  • Award Number:
    2024Y9606
  • Approval Year:
    2024
  • Funding Amount:
    ¥150,000
  • Project Type:
    Provincial/Municipal Project
Biological function and molecular mechanism of sgp130Fc combined with anti-PD-L1 antibody in synergistic suppression of HCC
  • Award Number:
    2024Y9629
  • Approval Year:
    2024
  • Funding Amount:
    ¥150,000
  • Project Type:
    Provincial/Municipal Project
Mechanistic study, based on the phlegm-resolving method, of Gleditsia extract regulating the miR-21 exosome/MDSC pathway to improve the HCC tumor microenvironment
  • Award Number:
  • Approval Year:
    2024
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Molecular mechanism of PINK1 regulating the response of HCC to anti-PD-L1 antibody by altering the tumor immune microenvironment
  • Award Number:
  • Approval Year:
    2024
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project

Similar Overseas Grants

Collaborative Research: HCC: Medium: Aligning Robot Representations with Humans
  • Award Number:
    2310757
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Small: End-User Guided Search and Optimization for Accessible Product Customization and Design
  • Award Number:
    2327136
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Small: Bridging Research and Visualization Design Practice via a Sustainable Knowledge Platform
  • Award Number:
    2147044
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Small: Computational Design and Application of Wearable Haptic Knits
  • Award Number:
    2301355
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: NSF-CSIRO: HCC: Small: Understanding Bias in AI Models for the Prediction of Infectious Disease Spread
  • Award Number:
    2302969
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Small: Understanding Online-to-Offline Sexual Violence through Data Donation from Users
  • Award Number:
    2401775
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Small: Supporting Flexible and Safe Disability Representation in Social Virtual Reality
  • Award Number:
    2328183
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Small: RUI: Drawing from Life in Extended Reality: Advancing and Teaching Cross-Reality User Interfaces for Observational 3D Sketching
  • Award Number:
    2326998
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Medium: "Unboxing" Haptic Texture Perception: Closing the Loop from Skin Contact Mechanics to Novel Haptic Device
  • Award Number:
    2312153
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant
Collaborative Research: HCC: Small: Computational Design and Application of Wearable Haptic Knits
  • Award Number:
    2301357
  • Fiscal Year:
    2023
  • Funding Amount:
    $225,000
  • Project Type:
    Standard Grant