Understanding feature-based auditory-visual interactions.
Basic Information
- Approval number: 8313865
- Principal investigator:
- Amount: $374.9K
- Host institution:
- Host institution country: United States
- Category:
- Fiscal year: 2011
- Funding country: United States
- Project period: 2011-09-01 to 2014-08-31
- Project status: Concluded
- Source:
- Keywords: Acoustic Stimulation; Acoustics; Affect; Anger; Anxiety; Attention; Auditory; Awareness; Behavior; Behavioral; Characteristics; Code; Complex; Computer Simulation; Crowding; Depressed mood; Detection; Development; Diagnostic; Disease; Environment; Experimental Designs; Face; Facial Expression; Facial Expression Perception; Felis catus; Frequencies; Gender; Goals; Grouping; Image; Individual; Knowledge; Label; Life; Measures; Mental Health; Methods; Midbrain structure; Modality; Modeling; Motion; Nature; Neurons; Noise; Pattern; Perception; Play; Population; Positioning Attribute; Process; Psychophysics; Relative (related person); Research; Sampling; Shapes; Signal Transduction; Social Interaction; Space Perception; Speech; Stroke; Structure; Techniques; Texture; Time; Vision; Visual; Visual Pathways; Visual Perception; Visual impairment; Visual system structure; Work; age related; area striata; auditory stimulus; base; density; directed attention; experience; insight; luminance; object perception; object recognition; object shape; receptive field; relating to nervous system; research study; social; sound; theories; visual coding; visual information; visual process; visual processing; visual search
Project Abstract
DESCRIPTION (provided by applicant): Research has revealed much about the mechanisms of the visual system. However, perceptual experience is usually multimodal, with close relationships between visual and auditory modalities. Auditory signals influence neural activation throughout the visual pathways, including in the midbrain and primary visual cortex. It is therefore important to extend the rigorous theories of vision to incorporate multimodal contexts. Prior research on auditory-visual interactions has primarily focused on perception of space, timing, duration, motion, and speech, whereas recent research has demonstrated auditory-visual interactions in the perception of objects and faces. The goal of the proposed research is to fill the gap in our understanding of auditory-visual interactions at the level of visual feature processing. We will characterize which acoustic patterns uniquely interact with processing of low-level (e.g., spatial frequency), intermediate-level (e.g., material texture and 2D shape), and high-level (e.g., common objects, words, face identity, and facial expressions) visual features. To understand these interactions, we will combine psychophysics and computational modeling (AIM 1) to determine how associated sounds influence basic mechanisms of visual feature processing, including those that control image visibility (front-end signal-to-noise ratio and sampling efficiency), those that control signal competition for visual awareness, and those that control the strength and reliability of neural population coding of visual features in the presence of between- and within-receptive-field signal interactions. The results will provide an integrative understanding of how sounds influence visual signals, sampling, competition, and coding for the processing of low-, intermediate-, and high-level visual features.
The proposed research will also allow development of cross-modal methods for assisting visual perception by enhancing specific spatial scales, materials, shapes, objects, and facial expressions. For example, our preliminary results suggest that sounds can be used to boost and tune the perception of facial expressions, and to direct attention to specific spatial frequencies. In the translational aim (AIM 2), we will systematically investigate how sounds can be used to aid visual perception: for example, to direct attention to an object, material, word, or facial expression in search, to facilitate object recognition by directing attention to diagnostic spatial-frequency components, and to enrich scene understanding by directing attention to multiple spatial scales. Because feature-specific auditory signals are readily presented over headphones, the proposed research may provide a means to, for example, counter biased perception (e.g., perceiving facial expressions as negative due to social anxiety), and to direct attention to specific objects and spatial scales (e.g., details versus gist) for individuals with visual challenges such as low vision, strokes affecting vision, or attention disorders. Thus, the proposed research will not only systematically integrate auditory influences into current models of visual feature processing, but may also provide a means to aid visual processing by using auditory signals.
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other publications by SATORU SUZUKI
Other grants by SATORU SUZUKI
Understanding feature-based auditory-visual interactions.
- Approval number: 8187726
- Fiscal year: 2011
- Funding amount: $374.9K
- Category:

Understanding feature-based auditory-visual interactions.
- Approval number: 8526466
- Fiscal year: 2011
- Funding amount: $374.9K
- Category:

Understanding the mechanisms that control the dynamics of perceptual switches
- Approval number: 7880336
- Fiscal year: 2009
- Funding amount: $374.9K
- Category:

Understanding the mechanisms that control the dynamics of perceptual switches
- Approval number: 7577408
- Fiscal year: 2008
- Funding amount: $374.9K
- Category:

Understanding the mechanisms that control the dynamics of perceptual switches
- Approval number: 7467158
- Fiscal year: 2008
- Funding amount: $374.9K
- Category:

Understanding the mechanisms that control the dynamics of perceptual switches
- Approval number: 7777269
- Fiscal year: 2008
- Funding amount: $374.9K
- Category:

Visual Adaptation, Selective Attention, and Shape Coding
- Approval number: 6946785
- Fiscal year: 2003
- Funding amount: $374.9K
- Category:

Visual Adaptation, Selective Attention, and Shape Coding
- Approval number: 6774100
- Fiscal year: 2003
- Funding amount: $374.9K
- Category:

Visual Adaptation, Selective Attention, and Shape Coding
- Approval number: 6681200
- Fiscal year: 2003
- Funding amount: $374.9K
- Category:
Similar Overseas Grants
Nonlinear Acoustics for the conditioning monitoring of Aerospace structures (NACMAS)
- Approval number: 10078324
- Fiscal year: 2023
- Funding amount: $374.9K
- Category: BEIS-Funded Programmes

ORCC: Marine predator and prey response to climate change: Synthesis of Acoustics, Physiology, Prey, and Habitat In a Rapidly changing Environment (SAPPHIRE)
- Approval number: 2308300
- Fiscal year: 2023
- Funding amount: $374.9K
- Category: Continuing Grant

University of Salford (The) and KP Acoustics Group Limited KTP 22_23 R1
- Approval number: 10033989
- Fiscal year: 2023
- Funding amount: $374.9K
- Category: Knowledge Transfer Partnership

User-controllable and Physics-informed Neural Acoustics Fields for Multichannel Audio Rendering and Analysis in Mixed Reality Application
- Approval number: 23K16913
- Fiscal year: 2023
- Funding amount: $374.9K
- Category: Grant-in-Aid for Early-Career Scientists

Combined radiation acoustics and ultrasound imaging for real-time guidance in radiotherapy
- Approval number: 10582051
- Fiscal year: 2023
- Funding amount: $374.9K
- Category:

Comprehensive assessment of speech physiology and acoustics in Parkinson's disease progression
- Approval number: 10602958
- Fiscal year: 2023
- Funding amount: $374.9K
- Category:

The acoustics of climate change - long-term observations in the arctic oceans
- Approval number: 2889921
- Fiscal year: 2023
- Funding amount: $374.9K
- Category: Studentship

Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Approval number: 2343847
- Fiscal year: 2023
- Funding amount: $374.9K
- Category: Standard Grant

Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Approval number: 2141275
- Fiscal year: 2022
- Funding amount: $374.9K
- Category: Standard Grant

Flow Physics and Vortex-Induced Acoustics in Bio-Inspired Collective Locomotion
- Approval number: DGECR-2022-00019
- Fiscal year: 2022
- Funding amount: $374.9K
- Category: Discovery Launch Supplement