CAREER: Modeling User Touch and Motion Behaviors for Adaptive Interfaces in Mobile Devices


Basic Information

  • Award Number:
    1552663
  • Principal Investigator:
    Jeff Huang
  • Amount:
    $501,200
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Continuing Grant
  • Fiscal Year:
    2016
  • Funding Country:
    United States
  • Project Period:
    2016-02-01 to 2023-01-31
  • Project Status:
    Completed

Project Abstract

In this project the PI will investigate how to apply mobile interaction data to automatically improve the usability and accessibility of mobile user interfaces. The research will identify user abilities based on their behaviors, leading to mobile user interfaces that are more accessible to diverse user communities (e.g., veterans), in a variety of environments (including cases of situation-induced impairments). In particular, the PI will explore the challenges of data-driven adaptive interface layouts based on user behavior and visual attention in mobile computing when the user is actually mobile. The work will involve student researchers from under-represented groups currently advised by the PI, and will be evaluated by different populations engaged in realistic but varied activities. This will allow the PI to release software and research findings that enable people to design interfaces that properly adapt to the abilities of members of the target communities. The research ties directly into an educational plan to develop a student response tool for lecture-style user interface courses that allows students to create wireframe interfaces, to design typefaces, and to draw visualizations during class on touchscreen and mobile devices, which can be displayed on the room screen for discussion and peer feedback. The tool will be iteratively refined in a user interface course with a diverse student population, and deployed in a course at the PI's institution outside of computer science as well as in user interface courses at other universities.

User interfaces on mobile devices are not one-size-fits-all, nor should they be. Users' abilities may differ, or the situational context of the environment can introduce new challenges. By their very nature, mobile devices are used in many different environments and with different postures. For example, users may hold their tablet in both hands with the screen in landscape orientation to read in bed, swiping to different pages occasionally; at other times, they may be pushing a stroller while gripping their phone with one hand to navigate a map application. Because manufacturers know this, smartphones and tablets, unlike desktop computers, can accept touch input and sense both motion and orientation, and data from these interactions can be captured by websites and apps to identify specific user abilities and context. Over time, user interaction data collected at scale will enable personalization of the interface, say by reshaping touch targets to compensate for a user's habit of typically tapping to the right of a target, by relocating important buttons to more accessible locations on the screen, or by determining ideal text size by noting the zoom level a user often applies. Thus, the work will comprise three research objectives. The first objective is to investigate how to passively capture touch and motion data from mobile devices, to compute metrics representing user habits and mistakes as they perform touch interactions, and to determine the environmental context of the user from motion and touch behaviors. The second objective is to incorporate orientation and touch sensors to train an eye tracking model using the front-facing camera to detect the user's attention. The third objective, informed by findings from the first two, is to improve the usability and accessibility of existing interfaces, e.g., by adjusting the hittable area of targets, the text size, and interface layout.
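
To make the third objective concrete, the minimal Python sketch below shows one way a user's habitual tap offset could be turned into an adjusted hittable area for a button. It is written for this summary only and is not the project's actual system; the TouchSample, Rect, and adjust_hit_rect names, the 30-sample threshold, and the 12-pixel clamp are all illustrative assumptions.

    from dataclasses import dataclass
    from statistics import mean
    from typing import List

    @dataclass
    class TouchSample:
        """One passively logged tap, measured relative to the intended target's center."""
        dx: float  # horizontal offset in pixels (+ means the tap landed to the right)
        dy: float  # vertical offset in pixels (+ means the tap landed below)

    @dataclass
    class Rect:
        x: float
        y: float
        w: float
        h: float

    def adjust_hit_rect(target: Rect, samples: List[TouchSample],
                        min_samples: int = 30, max_shift: float = 12.0) -> Rect:
        """Shift a target's hittable area toward the user's habitual tap offset.

        With too few samples the visible bounds are returned unchanged, and the
        shift is clamped so the hit area never drifts far from the visible target.
        """
        if len(samples) < min_samples:
            return target
        shift_x = max(-max_shift, min(max_shift, mean(s.dx for s in samples)))
        shift_y = max(-max_shift, min(max_shift, mean(s.dy for s in samples)))
        return Rect(target.x + shift_x, target.y + shift_y, target.w, target.h)

    # Example: a user who habitually taps about 9 px to the right of targets.
    button = Rect(x=100, y=400, w=120, h=48)
    log = [TouchSample(dx=8 + (i % 3), dy=-1) for i in range(40)]
    print(adjust_hit_rect(button, log))  # hit area shifted right by ~9 px and up by 1 px

In practice such an adjustment would also need to account for target size, screen region, and the environmental context inferred from motion data, which the first research objective addresses.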

Project Outcomes

Journal Articles (3)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Remotion: A Motion-Based Capture and Replay Platform of Mobile Device Interaction for Remote Usability Testing
  • DOI:
    10.1145/3214280
  • Publication Year:
    2018
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Qian, Jing;Chapin, Arielle;Papoutsaki, Alexandra;Yang, Fumeng;Nelissen, Klaas;Huang, Jeff
  • Corresponding Author:
    Huang, Jeff

Other Publications by Jeff Huang

“Together but not together”: Evaluating Typing Indicators for Interaction-Rich Communication
An Analysis of Automated Visual Analysis Classification: Interactive Visualization Task Inference of Cancer Genomics Domain Experts
Comparing Pupil Light Response Modulation between Saccade Planning and Working Memory
  • DOI:
    10.5334/joc.33
  • Publication Year:
    2018
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Chin;Jeff Huang;Rachel Yep;D. Munoz
  • Corresponding Author:
    D. Munoz
FastCFI: Real-Time Control Flow Integrity Using FPGA Without Code Instrumentation
A KEY INFLUENCER IN SOCIAL MEDIA UTILIZING TOPIC MODELING AND SOCIAL DIFFUSION ANALYSIS
  • DOI:
  • Publication Year:
    2017
  • Journal:
  • Impact Factor:
    0
  • Authors:
    S. Moon;Jagan Sankaranarayanan;H. Samet;Benjamin E. Teitler;Michael D. Lieberman;J. Sperling;Jeff Huang;E. Efthimiadis
  • Corresponding Author:
    E. Efthimiadis


Other Grants by Jeff Huang

I-Corps: Smart Programming Tools for Improving Software Debugging
  • Award Number:
    1952383
  • Fiscal Year:
    2020
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
SHF: Small: Pa3S: Towards Pointer Analysis as a Service
  • Award Number:
    2006450
  • Fiscal Year:
    2020
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
SaTC: CORE: Small: New Defenses for Data-Only Attacks
  • Award Number:
    1901482
  • Fiscal Year:
    2019
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
EAGER: Computationally and Socially Guided Self-Experiments
  • Award Number:
    1656763
  • Fiscal Year:
    2017
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
CAREER: Scalable and Maximal Concurrency Debugging
  • Award Number:
    1552935
  • Fiscal Year:
    2016
  • Funding Amount:
    $501,200
  • Project Type:
    Continuing Grant
CRII: CHS: Scalable Webcam Eyetracking by Learning from User Interactions
  • Award Number:
    1464061
  • Fiscal Year:
    2015
  • Funding Amount:
    $501,200
  • Project Type:
    Continuing Grant

Similar NSFC Grants

Research on Constructing and Applying a Traditional Chinese Medicine Service Model for Social Media Users Based on Large Language Models
  • Award Number:
    JCZRLH202500036
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Research on Digital Innovative Design and Dissemination of Zhejiang Cultural Heritage
  • Award Number:
    2025C25012
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Research on Preference Matching and Optimization Methods for Information Push in Uncertain Environments
  • Award Number:
  • Approval Year:
    2025
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Research on User-Demand-Driven Value Mining of Digital Historic Buildings
  • Award Number:
    2024JJ6150
  • Approval Year:
    2024
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Research on Key Problems in Recommender Systems Based on Multimodal Knowledge Graphs
  • Award Number:
  • Approval Year:
    2024
  • Funding Amount:
    ¥300,000
  • Project Type:
    Provincial/Municipal Project
Research on User Dynamic Behavior and Optimal Merchant Strategies under the Freemium Model: A Structural Modeling Approach
  • Award Number:
  • Approval Year:
    2024
  • Funding Amount:
  • Project Type:
    Young Scientists Fund Project
Research on Fine-Grained User Preference Modeling Methods Integrating Multi-Source Multimodal Data
  • Award Number:
  • Approval Year:
    2024
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Modeling Users' Implicit Visit Interests from Virtual Trajectories on Map Service Platforms and Map Data Update Strategies
  • Award Number:
    n/a
  • Approval Year:
    2023
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Research on Conversational Information Retrieval for User Need Modeling
  • Award Number:
    LQ23F020014
  • Approval Year:
    2023
  • Funding Amount:
    ¥0
  • Project Type:
    Provincial/Municipal Project
Research on Spatiotemporal Modeling of Users' Implicit Visit Interests on Map Service Platforms Based on Visit Trajectories
  • Award Number:
    42301485
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund Project

Similar Overseas Grants

Developing an Innovative Platform for Modeling Active Road User Interactions and Safety: Integration of Computer Vision, Agent-based, and Machine Learning Models
  • Award Number:
    RGPIN-2019-06688
  • Fiscal Year:
    2022
  • Funding Amount:
    $501,200
  • Project Type:
    Discovery Grants Program - Individual
Collaborative Research: SaTC: CORE: Medium: Toward safe, private, and secure home automation: from formal modeling to user evaluation
  • Award Number:
    2320903
  • Fiscal Year:
    2022
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
Accurate User Modeling of Search and Decision Making Tasks for Improved Offline Evaluation of Information Retrieval
  • Award Number:
    RGPAS-2020-00080
  • Fiscal Year:
    2022
  • Funding Amount:
    $501,200
  • Project Type:
    Discovery Grants Program - Accelerator Supplements
Accurate User Modeling of Search and Decision Making Tasks for Improved Offline Evaluation of Information Retrieval
  • Award Number:
    RGPIN-2020-04665
  • Fiscal Year:
    2022
  • Funding Amount:
    $501,200
  • Project Type:
    Discovery Grants Program - Individual
SAI-P: Overcoming Barriers to User-Centered Infrastructure Planning with System Modeling and Natural Language Processing
  • Award Number:
    2228783
  • Fiscal Year:
    2022
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
Fundamental Challenges in User Experience Modeling for Immersive Technologies
  • Award Number:
    RGPIN-2017-06090
  • Fiscal Year:
    2022
  • Funding Amount:
    $501,200
  • Project Type:
    Discovery Grants Program - Individual
Massive, Heterogeneous, and User Centric Wireless Networks: Modeling and Optimization
  • Award Number:
    RGPIN-2019-06357
  • Fiscal Year:
    2022
  • Funding Amount:
    $501,200
  • Project Type:
    Discovery Grants Program - Individual
Collaborative Research: SaTC: CORE: Medium: Toward safe, private, and secure home automation: from formal modeling to user evaluation
  • Award Number:
    2114074
  • Fiscal Year:
    2021
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
Collaborative Research: SaTC: CORE: Medium: Toward safe, private, and secure home automation: from formal modeling to user evaluation
  • Award Number:
    2114148
  • Fiscal Year:
    2021
  • Funding Amount:
    $501,200
  • Project Type:
    Standard Grant
Accurate User Modeling of Search and Decision Making Tasks for Improved Offline Evaluation of Information Retrieval
  • Award Number:
    RGPIN-2020-04665
  • Fiscal Year:
    2021
  • Funding Amount:
    $501,200
  • Project Type:
    Discovery Grants Program - Individual