Text Entry by Inference: Eye Typing, Stenography, and Understanding Context of Use


Basic information

  • Grant number:
    EP/H027408/2
  • Principal investigator:
  • Amount:
    $233,400
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project type:
    Fellowship
  • Fiscal year:
    2011
  • Funding country:
    United Kingdom
  • Duration:
    2011 to (no data)
  • Project status:
    Completed

Project summary

My research is based on the observation that our daily interaction with computers is highly redundant. Some of these redundancies can be modelled and exploited by intelligent user interfaces. Intelligent text entry methods use AI techniques such as machine learning to exploit redundancies in our languages. They enable users to write quickly and accurately, without the need for a key press for every single intended letter.

In this programme I propose to develop two new intelligent text entry methods. The first is a system that enables disabled users to communicate efficiently using an eye-tracker. The second is a novel intelligent text entry method inspired by stenography.

In addition, I propose to explore the broader context of text entry methods. The research literature has concentrated on inventing text entry methods that promise high entry rates and low error rates. Now that we have text entry methods with reasonably high entry rates, it is time to complement this objective function by investigating other aspects of text entry. I propose to use social-science techniques, such as diary and field studies, to understand how users would prefer to use text entry methods in the wild.

System 1: Eye-typing by inference

This system will potentially increase the entry rate of eye-typing systems. Current eye-typing systems are inherently slow (due to dwell timeouts), and users perceive them as frustrating. I propose to build a system that enables users to eye-type without the need for a dwell timeout at all. Potentially, my method will be faster than any other eye-tracker-based method in the world.

With my proposed system, users write words by directing their gaze at the intended letter keys in sequence. Users' intended words are transcribed when they look at a result area positioned above the keyboard. Users can write more than one word. They can also write sequences of words, or even stop short within a word.
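The proposal does not detail the inference algorithm. As one illustration only, recovering an intended word from a noisy sequence of fixated keys (without a dwell confirmation per letter) can be sketched as a subsequence match against a lexicon, with a word-frequency prior breaking ties. The lexicon, probabilities, and gaze trace below are all hypothetical:

```python
# Illustrative sketch only -- not the proposed system's actual algorithm.
# A "gaze trace" is the sequence of letter keys the user fixated, which
# may contain stray fixations between the intended letters.

def is_subsequence(word, trace):
    """Return True if the letters of `word` occur, in order, within `trace`."""
    it = iter(trace)
    # `letter in it` consumes the iterator, enforcing left-to-right order.
    return all(letter in it for letter in word)

def infer_word(trace, lexicon):
    """Pick the most plausible lexicon word consistent with the gaze trace.

    `lexicon` maps word -> unigram probability (a hypothetical prior).
    """
    candidates = [w for w in lexicon if is_subsequence(w, trace)]
    if not candidates:
        return None
    # Prefer longer matches (they explain more of the trace),
    # then more frequent words.
    return max(candidates, key=lambda w: (len(w), lexicon[w]))

lexicon = {"hello": 0.4, "help": 0.3, "hell": 0.2, "he": 0.1}
# A noisy trace: stray fixations on 'g' and 'k' between intended keys.
trace = "hgelklo"
print(infer_word(trace, lexicon))  # -> hello
```

A real system would additionally weight fixation durations and key proximity, and use a stronger language model than a unigram prior.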
They may gaze at the spacebar key between words, but this is not strictly necessary for the system to correctly infer users' intended words.

System 2: Stenography by inference

This system will be a stenography system for pen or single-finger input. The primary application is mobile text entry. However, I strive to create a system that can, to some extent, replace the desktop keyboard, should users so desire. Potentially it will be faster than any other pen-based text entry method.

The idea behind this method is to enable users to write words quickly by gesturing patterns they have previously learned. Such open-loop recall from muscle memory is much faster than the closed-loop, visually guided motions users are required to perform when they tap on, for example, an on-screen keyboard. My proposed system will enable users to quickly and accurately articulate gestures for individual words. These gestures will be fixed for a particular word. That is, each word is associated with a single (prototypical) unique gestural pattern. A user's input gesture is recognised by a pattern recognizer. The word whose prototype pattern best matches the user's input gesture will be output by the system as the user's intended word.

Understanding the broader context of text entry

The last component of my proposed programme serves to contribute new perspectives to the text entry research field. As previously discussed, context of use is largely unexplored in text entry. I intend to explore this topic using a range of qualitative methods. I intend to conduct interviews, field studies (e.g. studying participants trying a prototype mobile speech recognizer at a café), and diary studies. The latter will be conducted with a system that provides users with a choice of a few text entry methods that I hypothesize will be useful in different situations. I also intend to read literature on design and architecture to further my understanding of the complete design space of text entry.
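The pattern recognizer for System 2 is left unspecified in the proposal. One minimal way to realise "closest prototype wins", sketched here purely for illustration, is nearest-neighbour matching over resampled strokes, in the spirit of classic template-based gesture recognizers. The prototype strokes and coordinates below are invented, and scale/rotation normalisation is omitted for brevity:

```python
# Illustrative nearest-neighbour template matcher (assumptions noted above).
# Each word has one prototype stroke; the input stroke is resampled to a
# fixed number of points and compared point-by-point to each prototype.
import math

def path_length(pts):
    """Total length of a polyline given as a list of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=16):
    """Resample a polyline to n evenly spaced points along its length."""
    interval = path_length(pts) / (n - 1)
    pts = list(pts)
    new = [pts[0]]
    accum = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and accum + d >= interval:
            t = (interval - accum) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new.append(q)
            pts.insert(i, q)  # continue resampling from the new point
            accum = 0.0
        else:
            accum += d
        i += 1
    while len(new) < n:  # guard against floating-point shortfall
        new.append(pts[-1])
    return new

def stroke_distance(a, b):
    """Mean point-to-point distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognise(stroke, prototypes):
    """Return the word whose prototype stroke is closest to the input."""
    s = resample(stroke)
    return min(prototypes,
               key=lambda w: stroke_distance(s, resample(prototypes[w])))

# Hypothetical prototypes: "hi" is a horizontal stroke, "no" a vertical one.
prototypes = {"hi": [(0, 0), (1, 0)], "no": [(0, 0), (0, 1)]}
print(recognise([(0, 0.05), (0.5, 0.0), (1.0, 0.05)], prototypes))  # -> hi
```

A deployed recognizer would also normalise for position, scale, and speed, and could fold in a language-model prior as in System 1.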

Project outcomes

Journal articles (1)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Complementing text entry evaluations with a composition task

Other publications by Per Ola Kristensson

From wax tablets to touchscreens: an introduction to text-entry research
  • DOI:
  • Publication date:
    2014
  • Journal:
  • Impact factor:
    0
  • Authors:
    Per Ola Kristensson
  • Corresponding author:
    Per Ola Kristensson
Estimating and using absolute and relative viewing distance in interactive systems
  • DOI:
    10.1016/j.pmcj.2012.06.009
  • Publication date:
    2014-02-01
  • Journal:
  • Impact factor:
  • Authors:
    Jakub Dostal;Per Ola Kristensson;Aaron Quigley
  • Corresponding author:
    Aaron Quigley
Swarm manipulation: An efficient and accurate technique for multi-object manipulation in virtual reality
  • DOI:
    10.1016/j.cag.2024.104113
  • Publication date:
    2024-12-01
  • Journal:
  • Impact factor:
  • Authors:
    Xiang Li;Jin-Du Wang;John J. Dudley;Per Ola Kristensson
  • Corresponding author:
    Per Ola Kristensson


Other grants held by Per Ola Kristensson

Towards an Equitable Social VR
  • Grant number:
    EP/W02456X/1
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type:
    Research Grant
Inclusive Design of Immersive Content
  • Grant number:
    EP/S027432/1
  • Fiscal year:
    2019
  • Funding amount:
    $233,400
  • Project type:
    Research Grant
Design the Future 2: CrowdDesignVR
  • Grant number:
    EP/R004471/1
  • Fiscal year:
    2018
  • Funding amount:
    $233,400
  • Project type:
    Research Grant
Intelligent Mobile Crowd Design Platform
  • Grant number:
    EP/N010558/1
  • Fiscal year:
    2016
  • Funding amount:
    $233,400
  • Project type:
    Research Grant
Text Entry by Inference: Eye Typing, Stenography, and Understanding Context of Use
  • Grant number:
    EP/H027408/1
  • Fiscal year:
    2010
  • Funding amount:
    $233,400
  • Project type:
    Fellowship

Similar international grants

Market Entry Acceleration of the Murb Wind Turbine into Remote Telecoms Power
  • Grant number:
    10112700
  • Fiscal year:
    2024
  • Funding amount:
    $233,400
  • Project type:
    Collaborative R&D
RII Track-4:@NSF: Surrogate-based Optimal Atmospheric Entry Guidance using High-fidelity Simulation Data
  • Grant number:
    2327379
  • Fiscal year:
    2024
  • Funding amount:
    $233,400
  • Project type:
    Standard Grant
How Does Pre-Entry Communication Impact Competition and Welfare
  • Grant number:
    24K00249
  • Fiscal year:
    2024
  • Funding amount:
    $233,400
  • Project type:
    Grant-in-Aid for Scientific Research (B)
Structural biology of the hepatitis B virus entry and its inhibition
  • Grant number:
    23H02724
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type:
    Grant-in-Aid for Scientific Research (B)
Effect of continued entry of new investors on price formation and dynamics
  • Grant number:
    23K04284
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type:
    Grant-in-Aid for Scientific Research (C)
Elucidating the mechanism of hydrogen entry into metals under corrosive environment using an ultrasensitive hydrogen visualization system
  • Grant number:
    23K13570
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type:
    Grant-in-Aid for Early-Career Scientists
ReREE: Establishing feasibility of a novel process to recover rare earth elements from mining tailings for re-entry into a UK supply chain.
  • Grant number:
    10082225
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type:
    Feasibility Studies
Identification and Estimation of the entry model of firms in the differentiated products oligopoly market.
  • Grant number:
    23K01393
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type:
    Grant-in-Aid for Scientific Research (C)
Implementing a patient navigation intervention across a health system to address treatment entry inequities
  • Grant number:
    10812628
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type:
Novel Epigenetic Marks for HIV Latency Entry and Reversal
  • Grant number:
    10617943
  • Fiscal year:
    2023
  • Funding amount:
    $233,400
  • Project type: