Next Generation Screen Magnification Technology for People with Low Vision
Basic Information
- Award Number: 1805076
- Principal Investigator:
- Amount: $300,000
- Institution:
- Institution Country: United States
- Project Category: Standard Grant
- Fiscal Year: 2018
- Funding Country: United States
- Project Period: 2018-07-15 to 2022-06-30
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
People with low vision find it challenging to use currently available screen magnifiers, their "go-to" assistive technology for interacting with computing devices. First, these magnifiers indiscriminately magnify screen content, including white space, as a blanket operation, often pushing important contextual information out of the user's viewport (the area of the screen visible at a given time), such as visual cues (e.g., borders) and semantic relationships between user interface elements (e.g., a checkbox and its label). Mentally reconstructing this context from exclusively narrow views dramatically increases the burden on the user. Second, low vision users have widely varying needs requiring a range of dynamic customizations for interface elements, which in currently available magnifiers is disruptive because it forces users to shift their focus. Third, navigation aids that help users explore an application, obtain quick overviews, and easily locate elements of interest are lacking. The proposed project will research the design and development of SteeringWheel, a transformative next generation screen magnification technology that rectifies these limitations. SteeringWheel is based on several novel ideas. First, it will magnify white-space and non-white-space user interface (UI) elements differently in order to keep the local context in the viewport post-magnification. Second, it will confine cursor movement to the local context, thereby restricting panning. Third, it will interface with a physical dial supporting simple rotate-and-press gestures with audio-haptic (sense of touch) feedback, enabling users to quickly navigate different content sections, easily locate desired content, get a quick overview, and seamlessly customize the interface.
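To make the differential-magnification idea concrete, here is a minimal, hypothetical sketch (not the project's actual algorithm): element boxes are scaled by the full zoom factor while the white space between them is scaled separately, so a related pair such as a checkbox and its label can remain together in the viewport. All names (`Box`, `magnify_row`, `gap_zoom`) are illustrative assumptions.

```python
# Illustrative sketch only: names and logic are assumptions, not SteeringWheel's
# actual implementation. It shows the core idea of magnifying content and
# white space differently so a logical segment still fits the viewport.
from dataclasses import dataclass

@dataclass
class Box:
    x: float      # left edge of a UI element, in pixels
    width: float  # rendered width, in pixels

def magnify_row(boxes, zoom, gap_zoom=1.0):
    """Scale each element by `zoom`, but scale the white space *between*
    elements by the smaller `gap_zoom`, preserving left-to-right order."""
    boxes = sorted(boxes, key=lambda b: b.x)
    out, cursor, prev_right = [], 0.0, None
    for b in boxes:
        if prev_right is not None:
            cursor += (b.x - prev_right) * gap_zoom  # compress the gap
        out.append(Box(cursor, b.width * zoom))      # magnify the element
        cursor += b.width * zoom
        prev_right = b.x + b.width
    return out

# A checkbox at x=0 (20px wide) and its label at x=120 (60px wide):
row = [Box(0, 20), Box(120, 60)]
mag = magnify_row(row, zoom=3.0, gap_zoom=1.0)
# Elements triple in size while the 100px gap stays 100px, for a total width
# of 20*3 + 100 + 60*3 = 340px instead of 540px under uniform 3x zoom.
```

With `gap_zoom` below 1.0 the gap would shrink further, trading white space for even more retained context.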
A byproduct of the project will be the creation of standardized benchmark data sets to gauge the performance of current and future screen magnification technologies. It is anticipated that SteeringWheel will make it far easier for low vision users to perceive and consume digital information, leading to improved productivity. The project will also serve as a launching board for creating project-driven graduate and undergraduate courses in Accessible Computing.

The project will research, design, and engineer SteeringWheel, a transformative next generation screen magnification technology predicated on the fundamental idea of Semantics-based Locality-Preserving Magnification (SLM). To retain contextual information, SLM incorporates knowledge about the semantics of different UI elements and inter-element relationships, rooted in the concept of a logical segment: a collection of related UI elements exhibiting consistency in presentation style and spatial locality (e.g., a form-field textbox and its associated border and label). SteeringWheel, which overcomes the limitations of extant screen magnification technologies, rests on two scientific ideas: incorporating semantics into the magnification process, complemented by an interaction paradigm based on simple rotate-and-press gestures with haptic feedback that serves as an "all-in-one" interface for all magnification-related operations. The Research Plan is organized under six objectives.

OBJ 1: Algorithms for locality preservation targeting different screen sizes for desktops and mobiles. Extraction algorithms will be designed to analyze the application layout, identify the semantically meaningful logical segments, and generate a semantic hierarchy by organizing these segments in an object-oriented fashion. Extraction will be followed by the design of locality-preserving algorithms that differentially magnify the different types of content in these segments of the semantic hierarchy.
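The segment-extraction step of OBJ 1 might be sketched roughly as follows. The clustering rule and all names are illustrative assumptions; a real implementation would also use presentation style and two-dimensional layout, not just vertical position.

```python
# Hypothetical sketch of the semantic-hierarchy idea: group UI elements into
# logical segments by spatial locality, then nest the segments under a root
# so navigation can walk the tree segment by segment. Names are illustrative,
# not the project's actual API.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    x: float
    y: float

@dataclass
class Segment:
    label: str
    elements: list = field(default_factory=list)
    children: list = field(default_factory=list)

def cluster_into_segments(elements, max_gap=30.0):
    """Greedy 1-D clustering by vertical position: an element whose vertical
    gap to the previous one exceeds `max_gap` starts a new logical segment."""
    segs, current = [], None
    for el in sorted(elements, key=lambda e: e.y):
        if current is None or el.y - current.elements[-1].y > max_gap:
            current = Segment(label=f"segment-{len(segs) + 1}")
            segs.append(current)
        current.elements.append(el)
    return segs

# A toolbar row near the top and a form row further down the page:
page = [Element("menu", 0, 0), Element("search", 80, 5),
        Element("name-label", 0, 120), Element("name-box", 90, 122)]
root = Segment("page", children=cluster_into_segments(page))
# root.children now holds two segments: {menu, search} and {name-label, name-box}.
```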
Locality-preserving algorithms will keep most (if not all) of the contextual information within the user's viewport after magnification, minimizing the panning effort and hence the associated cognitive burden.

OBJ 2: Mapping and integration of SteeringWheel gestures onto input devices for desktops. A set of input gestures based on simple actions such as rotation and press will be designed for the SteeringWheel interface targeted at the desktop platform. The gestures will be implemented on Microsoft's Surface Dial and a gaming mouse. They will enable users to quickly navigate and easily explore different segments of an application, as well as seamlessly make magnification adjustments as needed.

OBJ 3: Mapping and integration of SteeringWheel gestures onto input devices for mobiles. A set of input gestures that are variations of simple rotation and press will be designed for SteeringWheel targeted at the mobile platform. The gestures will be implemented on Apple and Android watches. They will enable users to adjust magnification settings on the fly and conveniently explore mobile application content without the frustrating and time-consuming interaction required by the default touch-based gestures (e.g., triple press) available on smartphones.

OBJ 4: Infrastructure for semi-automated personalization of magnification settings. Techniques will be designed to support semi-automated personalization that lets users make customizations only once; SteeringWheel will remember these preferences and automatically apply them each time the user revisits the application. Customization at the granularity of individual segments will be supported, further reducing the user's tedious effort by automatically applying the customizations made for one segment to all other similar segments.

OBJ 5: Spoken dialog assistant for SteeringWheel.
The Assistant will be designed to help users easily locate and shift the navigation focus to segments or UI elements of interest using speech commands such as "take me to the menu bar" or "move to the first form field". The Assistant will also allow users to issue commands such as "increase brightness" or "invert colors" to customize the interface. Speech commands have the potential to transfer the cognitive burden of navigation from users to the magnification interface, thereby letting users focus on their tasks instead of expending needless time and effort locating and manually configuring individual segments and UI elements.

OBJ 6: Infrastructure for porting users' magnification profiles across devices. Methods will be designed to port a user's profile, containing their preferred magnification settings for different applications, to different devices, so that low vision users need not make the same magnification adjustments for the same applications on each device. Mechanisms will also be designed to let users securely share their settings with one another, further reducing interaction overload, as different users with similar eye conditions may need similar magnification settings for the same application.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
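As an illustration of the rotate-and-press paradigm of OBJ 2 and OBJ 3, the sketch below maps a handful of assumed gesture events to navigation and zoom actions over a segment hierarchy. The event names and the action table are hypothetical, not the Surface Dial or smartwatch APIs.

```python
# Minimal sketch: one gesture vocabulary drives both navigation and
# magnification adjustment, as in the "all-in-one" interface idea.
# Event names and actions are illustrative assumptions.
ACTIONS = {
    "rotate_cw":    "next_segment",      # move focus to the next segment
    "rotate_ccw":   "previous_segment",  # move focus back
    "press":        "enter_segment",     # descend into the focused segment
    "long_press":   "exit_segment",      # ascend to the parent segment
    "press_rotate": "adjust_zoom",       # press-and-rotate changes zoom level
}

def dispatch(event, state):
    """Apply one gesture to a simple navigation state (index, depth, zoom)."""
    action = ACTIONS.get(event)
    if action == "next_segment":
        state["index"] += 1
    elif action == "previous_segment":
        state["index"] = max(0, state["index"] - 1)
    elif action == "enter_segment":
        state["depth"] += 1
    elif action == "exit_segment":
        state["depth"] = max(0, state["depth"] - 1)
    elif action == "adjust_zoom":
        state["zoom"] += 0.5
    return state

state = {"index": 0, "depth": 0, "zoom": 1.0}
for ev in ["rotate_cw", "rotate_cw", "press", "press_rotate"]:
    state = dispatch(ev, state)
# state is now {"index": 2, "depth": 1, "zoom": 1.5}
```

Because every operation reduces to rotate or press, the same table could back a dial on the desktop and a watch crown on mobile.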
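The per-segment profiles of OBJ 4 and the cross-device porting of OBJ 6 could, for example, be realized with a plain serializable structure. The JSON layout, key names, and application names below are assumptions for illustration only, not the project's actual format.

```python
# Sketch: per-application, per-segment magnification settings stored as JSON
# so they can be reapplied on revisit and transferred to another device.
import json

profile = {
    "user": "anonymized-id",
    "applications": {
        "example-mail-app": {
            "segments": {
                "menu-bar":     {"zoom": 2.0, "invert_colors": False},
                "message-list": {"zoom": 3.5, "invert_colors": True},
            }
        }
    }
}

def settings_for(profile, app, segment):
    """Look up saved settings for one segment, falling back to defaults."""
    return (profile["applications"]
            .get(app, {})
            .get("segments", {})
            .get(segment, {"zoom": 1.0, "invert_colors": False}))

# Serialize for transfer to another device, then restore and look up.
payload = json.dumps(profile)
restored = json.loads(payload)
s = settings_for(restored, "example-mail-app", "message-list")
# s == {"zoom": 3.5, "invert_colors": True}
```

Sharing with another user would amount to sending `payload` (suitably secured), which is why the abstract notes that users with similar eye conditions could reuse each other's settings.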
Project Outcomes
Journal Articles (20)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Breaking the Accessibility Barrier in Non-Visual Interaction with PDF Forms
- DOI: 10.1145/3397868
- Publication Date: 2020
- Journal:
- Impact Factor: 0
- Authors: Uckun, Utku; Aydin, Ali Selman; Ashok, Vikas; Ramakrishnan, IV
- Corresponding Author: Ramakrishnan, IV
Taming User-Interface Heterogeneity with Uniform Overlays for Blind Users
- DOI: 10.1145/3503252.3531317
- Publication Date: 2022
- Journal:
- Impact Factor: 0
- Authors: Uckun, Utku; Tumkur Suresh, Rohan; Ferdous, Md Javedul; Bi, Xiaojun; Ramakrishnan, I.V.; Ashok, Vikas
- Corresponding Author: Ashok, Vikas
Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant.
- DOI: 10.20380/gi2021.18
- Publication Date: 2021-05
- Journal:
- Impact Factor: 0
- Authors: Feiz S; Borodin A; Bi X; Ramakrishnan IV
- Corresponding Author: Ramakrishnan IV
Enabling Convenient Online Collaborative Writing for Low Vision Screen Magnifier Users
- DOI: 10.1145/3511095.3531274
- Publication Date: 2022-06
- Journal:
- Impact Factor: 0
- Authors: H. Lee; Y. Prakash; Mohan Sunkara; I. Ramakrishnan; V. Ashok
- Corresponding Author: H. Lee; Y. Prakash; Mohan Sunkara; I. Ramakrishnan; V. Ashok
TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users
- DOI: 10.1145/3373625.3417030
- Publication Date: 2020-10
- Journal:
- Impact Factor: 0
- Authors: H. Lee; S. Uddin; V. Ashok
- Corresponding Author: H. Lee; S. Uddin; V. Ashok
Other Publications by IV Ramakrishnan
Other Grants by IV Ramakrishnan

HCC-Large: Using the Internet without using the Eyes: Models of Online Transactions for Non-Visual Interaction
- Award Number: 0808678
- Fiscal Year: 2008
- Amount: $300,000
- Project Category: Standard Grant

CRI: IAD - Web Accessibility Laboratory
- Award Number: 0751083
- Fiscal Year: 2008
- Amount: $300,000
- Project Category: Continuing Grant

Content-Driven Techniques for Non-Visual Web Access
- Award Number: 0534419
- Fiscal Year: 2005
- Amount: $300,000
- Project Category: Continuing Grant

U.S.-France Cooperative Research: Deduction with Constraints.
- Award Number: 9314412
- Fiscal Year: 1994
- Amount: $300,000
- Project Category: Standard Grant

Computational Aspects of Rewrite Operations
- Award Number: 8805734
- Fiscal Year: 1988
- Amount: $300,000
- Project Category: Continuing Grant

Research Initiation: Design and Analysis of VLSI Array Algorithms
- Award Number: 8404399
- Fiscal Year: 1984
- Amount: $300,000
- Project Category: Standard Grant
Similar NSFC Grants

Next Generation Majorana Nanowire Hybrids
- Award Number:
- Approval Year: 2020
- Amount: CNY 200,000
- Project Category:
Similar International Grants

Next Generation Glioma Treatments using Direct Light Therapy
- Award Number: 10092859
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: EU-Funded

Next-generation KYC banking verification via embedded smart keyboard
- Award Number: 10100109
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: Collaborative R&D

Multi-component interventions to reducing unhealthy diets and physical inactivity among adolescents and youth in sub-Saharan Africa (Generation H)
- Award Number: 10106976
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: EU-Funded

Safe and Sustainable by Design framework for the next generation of Chemicals and Materials
- Award Number: 10110559
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: EU-Funded

Next-Generation Distributed Graph Engine for Big Graphs
- Award Number: DP240101322
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: Discovery Projects

Next Generation Fluorescent Tools for Measuring Autophagy Dynamics in Cells
- Award Number: DP240100465
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: Discovery Projects

PhD in the Next Generation of Organic LEDs
- Award Number: 2904651
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: Studentship

van der Waals Heterostructures for Next-generation Hot Carrier Photovoltaics
- Award Number: EP/Y028287/1
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: Fellowship

MagTEM2 - the next generation microscope for imaging functional materials
- Award Number: EP/Z531078/1
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: Research Grant

FLF Next generation atomistic modelling for medicinal chemistry and biology
- Award Number: MR/Y019601/1
- Fiscal Year: 2024
- Amount: $300,000
- Project Category: Fellowship