CHS: Medium: Collaborative Research: Immediate Feedback to Support Learning American Sign Language through Multisensory Recognition
Basic Information
- Award number: 1400802
- Principal investigator: YingLi Tian
- Amount: $557,900
- Host institution:
- Host institution country: United States
- Project type: Standard Grant
- Fiscal year: 2014
- Funding country: United States
- Project period: 2014-09-01 to 2020-08-31
- Project status: Completed
- Source:
- Keywords:
Project Abstract
American Sign Language (ASL) is a primary means of communication for 500,000 people in the United States and a distinct language from English, conveyed through hands, facial expressions, and body movements. Studies indicate that deaf children of deaf parents read better than deaf children of hearing parents, mainly due to better communication when both children and parents are deaf. However, more than 80% of children who are deaf or hard of hearing are born to hearing parents. It is challenging for parents, teachers, and other people in the life of a deaf child to learn ASL rapidly enough to support the child's visual language acquisition. Technology that can automatically recognize aspects of ASL signing and provide instant feedback to students of ASL would give them a time-flexible way to practice and improve their signing skills. The goal of this project, which involves an interdisciplinary team of researchers at three colleges within the City University of New York (CUNY) with expertise in computer vision, human-computer interaction, and Deaf and Hard of Hearing education, is to discover the most effective underlying technologies, user-interface design, and pedagogical use for an interactive tool that provides such immediate, automatic feedback to students of ASL.

Most prior work on ASL recognition has focused on identifying a small set of simple signs performed in isolation, but current technology is not sufficiently accurate on continuous signing of sentences with an unrestricted vocabulary. The PIs will develop technologies to fundamentally advance ASL partial recognition, that is, to identify linguistic/performance attributes of ASL without necessarily identifying the entire sequence of signs, and to automatically determine whether a performance is fluent or contains errors. The research will include five thrusts: (1) based on ASL linguistics and pedagogy, to identify a set of observable attributes indicating ASL fluency; (2) to discover new technologies for automatic detection of the ASL fluency attributes through fusion of multimodal (facial expression, hand gesture, and body pose) and multisensory (RGB and depth video) information; (3) to collect and annotate a dataset of RGBD videos of ASL, performed at varied levels of fluency by students and native signers; (4) to develop an interactive ASL learning tool that provides ASL students with immediate feedback on whether their signing is fluent; and (5) to evaluate the robustness of the new algorithms and the effectiveness of the ASL learning tool, including its educational benefits. The work will lead to advances in computer vision technologies for human behavior perception, to new understanding of user-interface design with ASL video, and to a revolutionary, cost-effective educational tool that helps ASL learners achieve fluency, using recognition technologies that are robust and accurate in the near term. Project outcomes will include a dataset of videos at varied fluency levels, which will be valuable for ASL linguists and instructors, students learning ASL, and computer vision researchers.
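The abstract describes the recognition approach only at a high level. As a purely illustrative aid, the sketch below shows what a minimal late-fusion classifier for the fluent / not-fluent decision could look like in PyTorch. The class name, feature dimensions, and the use of plain linear encoders over pre-extracted per-modality features are assumptions made for illustration; the project's actual models, features, and training pipeline are not specified in this summary.

```python
# Minimal illustrative sketch (not the project's actual method):
# late fusion of face, hand, and body-pose features for a binary
# fluent / not-fluent prediction. Feature dimensions are hypothetical.
import torch
import torch.nn as nn


class LateFusionFluencyClassifier(nn.Module):
    """Fuses per-modality feature vectors into one fluency logit."""

    def __init__(self, face_dim=128, hand_dim=128, pose_dim=64, hidden_dim=256):
        super().__init__()
        # One small encoder per modality; a real system would likely use
        # CNN/RNN backbones over RGB and depth video frames instead.
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        self.hand_enc = nn.Sequential(nn.Linear(hand_dim, hidden_dim), nn.ReLU())
        self.pose_enc = nn.Sequential(nn.Linear(pose_dim, hidden_dim), nn.ReLU())
        # Fusion head: concatenate modality embeddings, output one logit.
        self.head = nn.Linear(3 * hidden_dim, 1)

    def forward(self, face_feat, hand_feat, pose_feat):
        fused = torch.cat(
            [self.face_enc(face_feat), self.hand_enc(hand_feat), self.pose_enc(pose_feat)],
            dim=-1,
        )
        return self.head(fused)  # raw logit; apply sigmoid for P(fluent)


if __name__ == "__main__":
    model = LateFusionFluencyClassifier()
    # Fake batch of 4 clips with hypothetical pre-extracted features.
    face = torch.randn(4, 128)
    hand = torch.randn(4, 128)
    pose = torch.randn(4, 64)
    prob_fluent = torch.sigmoid(model(face, hand, pose))
    print(prob_fluent.shape)  # torch.Size([4, 1])
```

Late fusion is only one of several plausible designs; the abstract's emphasis on combining RGB and depth sensing with facial, hand, and body cues could equally be realized with earlier feature-level or attention-based fusion.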
Project Outcomes
- Journal articles (0)
- Monographs (0)
- Research awards (0)
- Conference papers (0)
- Patents (0)
Other Publications by YingLi Tian
Other Grants Held by YingLi Tian
EAGER: Learning Transferable Visual Features
- Award number: 2041307
- Fiscal year: 2020
- Amount: $557,900
- Project type: Standard Grant

IEEE International Conference on Multimedia and Expo (ICME) 2014: Doctoral Consortium
- Award number: 1419299
- Fiscal year: 2014
- Amount: $557,900
- Project type: Standard Grant

AIR Option 1: Technology Translation: Automated Targeted Destination Recognition for the Blind with Motion Deblurring
- Award number: 1343402
- Fiscal year: 2013
- Amount: $557,900
- Project type: Standard Grant

Context-based Indoor Object Detection
- Award number: 0957016
- Fiscal year: 2009
- Amount: $557,900
- Project type: Standard Grant
Similar Overseas Grants
CHS: Medium: Collaborative Research: Augmenting Human Cognition with Collaborative Robots
- Award number: 2343187
- Fiscal year: 2023
- Amount: $557,900
- Project type: Continuing Grant

CHS: Medium: Collaborative Research: Empirically Validated Perceptual Tasks for Data Visualization
- Award number: 2236644
- Fiscal year: 2022
- Amount: $557,900
- Project type: Standard Grant

CHS: Medium: Collaborative Research: Regional Experiments for the Future of Work in America
- Award number: 2243330
- Fiscal year: 2021
- Amount: $557,900
- Project type: Continuing Grant

CHS: Medium: Collaborative Research: From Hobby to Socioeconomic Driver: Innovation Pathways to Professional Making in Asia and the American Midwest
- Award number: 2224258
- Fiscal year: 2021
- Amount: $557,900
- Project type: Continuing Grant

CHS: Medium: Collaborative Research: Discovery and Exploration of Design Trade-Offs
- Award number: 1954028
- Fiscal year: 2020
- Amount: $557,900
- Project type: Continuing Grant

CHS: Medium: Collaborative Research: Computer-Aided Design and Fabrication for General-Purpose Knit Manufacturing
- Award number: 1955444
- Fiscal year: 2020
- Amount: $557,900
- Project type: Standard Grant

CHS: Medium: Collaborative Research: Teachable Activity Trackers for Older Adults
- Award number: 1955590
- Fiscal year: 2020
- Amount: $557,900
- Project type: Standard Grant

CHS: Medium: Collaborative Research: Code demography: Addressing information needs at scale for programming interface users and designers
- Award number: 1955699
- Fiscal year: 2020
- Amount: $557,900
- Project type: Standard Grant

CHS: Medium: Collaborative Research: Bio-behavioral data analytics to enable personalized training of veterans for the future workforce
- Award number: 1955721
- Fiscal year: 2020
- Amount: $557,900
- Project type: Standard Grant

CHS: Medium: Collaborative Research: Fabric-Embedded Dynamic Sensing for Adaptive Exoskeleton Assistance
- Award number: 1955979
- Fiscal year: 2020
- Amount: $557,900
- Project type: Standard Grant