CRII: CHS: Enabling Behavior Sensing via the Cloud and its Application to Public Speaking
Basic Information
- Award Number: 1464162
- Principal Investigator:
- Amount: $173,500
- Host Institution:
- Host Institution Country: United States
- Project Category: Continuing Grant
- Fiscal Year: 2015
- Funding Country: United States
- Period: 2015-04-01 to 2018-03-31
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
Public speaking is a task that people often rank as their top fear; one consequence is that even after repeatedly practicing a presentation many find they end up speaking too hastily when standing before the audience. People often desire to improve their public speaking skills, but lack of resources and social stigma may impede their ability to obtain the personalized training they seek. The PI's objective in this project is to build on his prior work and establish a research program to develop a ubiquitously available (Cloud based) automated social sensing framework that can recognize and interpret human nonverbal data (including facial expressions, tone of voice, body language, etc.), and then present constructive feedback to its users where they want and when they want. Modeling of the full range of human nonverbal behavior remains a challenging endeavor. Using the 43 muscles in our face, we can produce 10,000 unique combinations of facial expressions; modalities such as vocal tone, body language, and elements of physiology add to the complexity. While computers can now recognize basic expressions such as smiling and frowning, the automated interpretation of an individual's intent remains an active area of exploration (e.g., a smiling customer does not necessarily indicate that s/he is satisfied). This research represents a step towards developing algorithms and implementing a practical framework that can capture and interpret nonverbal data while providing meaningful feedback in the context of public speaking. Project outcomes ultimately will transform the way social skills are adapted and learned, which will have a broad impact on people with social difficulties (e.g., those with Asperger's syndrome). Human nonverbal behaviors can be subtle, are often confusing, and may even appear contradictory. 
While computer algorithms are more reliable than people at sensing subtle human behavior objectively and consistently, human intelligence remains far superior at interpreting behavior in context. This research adopts an approach that couples computer algorithms with human intelligence to automatically sense and interpret nonverbal behavior in near real time. The PI's approach is to develop a robust and scalable Web-based sensing framework that will automatically capture and analyze an individual's behavior by exploiting the Cloud infrastructure, without requiring any major computational resources from the end-user. The work will include three phases: development of a Cloud-enabled sensing platform for automated recognition of nonverbal behavior; development of algorithms for combining the behavioral data with human judgment, using the so-called wisdom of the crowd, to generate meaningful insights, interpretations, and social recommendations; and user-centric iterative studies to validate the framework for the general public as well as practitioners. The work will also lead to core contributions in the design of computer interfaces. Moreover, since behavioral modeling methods typically require large amounts of naturalistic data, preferably collected in the wild, it is noteworthy that the PI's sensing framework has the potential to collect one of the largest naturalistic nonverbal datasets.
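As a rough illustration of the second phase (combining automated behavioral scores with crowd judgment), the sketch below blends a recognizer's confidence with crowd ratings. The function name, weighting scheme, and values are hypothetical illustrations, not the project's actual algorithm:

```python
from statistics import mean

def aggregate_feedback(machine_score, crowd_ratings, machine_weight=0.4):
    """Blend an automated recognizer's confidence with crowd ratings.

    machine_score: confidence in [0, 1] from an automated classifier
    crowd_ratings: list of ratings in [0, 1] from human annotators
    machine_weight: relative trust placed in the automated signal
    """
    if not crowd_ratings:
        # With no human judgments available, fall back to the automated signal.
        return machine_score
    crowd_score = mean(crowd_ratings)  # simple "wisdom of the crowd" average
    return machine_weight * machine_score + (1 - machine_weight) * crowd_score

# Example: the recognizer detects a smile with high confidence, but three
# annotators judge the speaker's perceived engagement as only moderate.
blended = aggregate_feedback(0.9, [0.5, 0.6, 0.55])  # 0.4*0.9 + 0.6*0.55 ≈ 0.69
```

A deployed system would likely weight annotators by reliability rather than averaging them uniformly; the sketch only shows the basic blending idea.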
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Publications by Ehsan Hoque
Multimodal Communication in Face-to-Face Computer-Mediated Conversations
- Year: 2007
- Authors: M. Louwerse; Nick Benesh; Ehsan Hoque; Patrick Jeuniaux; Gwyneth A. Lewis; Jie Wu; Megan Zirnstein
- Corresponding Author: Megan Zirnstein

The interaction between information and intonation structure: Prosodic marking of theme and rheme
- Year: 2008
- Authors: M. Louwerse; Patrick Jeuniaux; Bin Zhang; Jie Wu; Ehsan Hoque
- Corresponding Author: Ehsan Hoque

Awe the Audience: How the Narrative Trajectories Affect Audience Perception in Public Speaking
- DOI: 10.1145/3173574.3173598
- Year: 2018
- Authors: Md. Iftekhar Tanveer; Samiha Samrose; Raiyan Abdul Baten; Ehsan Hoque
- Corresponding Author: Ehsan Hoque

Visual Cues for Disrespectful Conversation Analysis
- DOI: 10.1109/acii.2019.8925440
- Year: 2019
- Authors: Samiha Samrose; Wenyi Chu; C. He; Yuebai Gao; Syeda Sarah Shahrin; Zhen Bai; Ehsan Hoque
- Corresponding Author: Ehsan Hoque

Social skills training with virtual assistant and real-time feedback
- DOI: 10.1145/3123024.3123196
- Year: 2017
- Authors: M. R. Ali; Ehsan Hoque
- Corresponding Author: Ehsan Hoque
Other Grants by Ehsan Hoque
CAREER: A collaboration coach with effective intervention strategies to optimize group performance.
- Award Number: 1750380
- Fiscal Year: 2018
- Amount: $173,500
- Project Category: Continuing Grant

NRT-DESE: Graduate Training in Data-Enabled Research into Human Behavior and its Cognitive and Neural Mechanisms
- Award Number: 1449828
- Fiscal Year: 2015
- Amount: $173,500
- Project Category: Standard Grant

EAGER: An Animated Agent for Developing the Conversational Skills of Individuals with Social Interaction Difficulties
- Award Number: 1543758
- Fiscal Year: 2015
- Amount: $173,500
- Project Category: Standard Grant
Similar NSFC Grants
A composite "main path + branch path" clinical pathway model for uterine fibroid surgery based on CHS-DRGs and big-data mining of the full diagnosis-and-treatment process
- Award Number:
- Award Year: 2025
- Amount: ¥0
- Project Category: Provincial/Municipal Project

Prevention and control strategies for CRE hospital-acquired infections in elderly ICU patients under the CHS-DRG model
- Award Number:
- Award Year: 2024
- Amount: ¥0
- Project Category: Provincial/Municipal Project

Preparation of 3,5-bis(2-hydroxy-4-fluorophenyl)-1,2,4-oxadiazole-cerium complex@CD-MFO-CHS brain-targeted drug-loaded nanoparticles and their neuroprotective effect against AIS
- Award Number:
- Award Year: 2024
- Amount: ¥150,000
- Project Category: Provincial/Municipal Project

Mechanism by which the key chitin synthesis gene Chs of Fusarium venenatum regulates mycelial structure and protein digestibility
- Award Number:
- Award Year: 2024
- Amount: ¥0
- Project Category: Provincial/Municipal Project

A PLA/GO/CHS conductive, layered, sustained-release drug delivery system for treating long-segment peripheral nerve injury
- Award Number:
- Award Year: 2024
- Amount: ¥0
- Project Category: Provincial/Municipal Project

Molecular mechanism of differential regulation by paralogous CHS genes in citrus flavonoid and anthocyanin biosynthesis pathways
- Award Number: 32302507
- Award Year: 2023
- Amount: ¥300,000
- Project Category: Young Scientists Fund

Regulatory role of the Chs gene in the biosynthesis of Monascus pigments and citrinin
- Award Number: 2021JJ31146
- Award Year: 2021
- Amount: ¥0
- Project Category: Provincial/Municipal Project

Mechanism by which the key chs gene of Monascus regulates the synthesis of Monascus pigments and citrinin
- Award Number:
- Award Year: 2021
- Amount: ¥300,000
- Project Category: Young Scientists Fund

Catalytic mechanism by which pyrethrum CHS synthase and its interacting proteins co-regulate pyrethrin biosynthesis
- Award Number: 31902051
- Award Year: 2019
- Amount: ¥230,000
- Project Category: Young Scientists Fund

Controllable preparation of advanced CHS-structured flexible composite anode materials and their structure-performance relationships for energy storage
- Award Number: 61574122
- Award Year: 2015
- Amount: ¥640,000
- Project Category: General Program
Similar Overseas Grants
CARDIOVASCULAR HEALTH STUDY (CHS) - TASK AREA C, STUDY CLOSEOUT
- Award Number: 10974001
- Fiscal Year: 2023
- Amount: $173,500
- Project Category:

CHS: Medium: Collaborative Research: Augmenting Human Cognition with Collaborative Robots
- Award Number: 2343187
- Fiscal Year: 2023
- Amount: $173,500
- Project Category: Continuing Grant

CRII: CHS: RUI: Computational models of humans for studying and improving Human-AI interaction
- Award Number: 2218226
- Fiscal Year: 2022
- Amount: $173,500
- Project Category: Standard Grant

CHS: Small: AI-Human Collaboration in Autonomous Vehicles for Safety and Security
- Award Number: 2245055
- Fiscal Year: 2022
- Amount: $173,500
- Project Category: Standard Grant

CHS: Small: Towards Next-Generation Large-Scale Nonlinear Deformable Simulation
- Award Number: 2244651
- Fiscal Year: 2022
- Amount: $173,500
- Project Category: Standard Grant

CRII: CHS: Developing Youth Data Literacies through a Visual Programming Environment
- Award Number: 2230291
- Fiscal Year: 2022
- Amount: $173,500
- Project Category: Standard Grant

CHS: Medium: Collaborative Research: Empirically Validated Perceptual Tasks for Data Visualization
- Award Number: 2236644
- Fiscal Year: 2022
- Amount: $173,500
- Project Category: Standard Grant

CRII: CHS: Harnessing Machine Learning to Improve Human Decision Making: A Case Study on Deceptive Detection
- Award Number: 2125113
- Fiscal Year: 2021
- Amount: $173,500
- Project Category: Standard Grant

CHS: Small: Guiding future design of affect-aware cyber-human systems through the investigation of human reactions to machine errors
- Award Number: 2151464
- Fiscal Year: 2021
- Amount: $173,500
- Project Category: Standard Grant

CHS: Small: Developing and Validating a Physically Accurate Light-Scattering Model for the Rendering of Bird and Other Dinosaur Feathers
- Award Number: 2007974
- Fiscal Year: 2021
- Amount: $173,500
- Project Category: Continuing Grant