Collaborative Research: CompCog: Achieving Analogical Reasoning via Human and Machine Learning


Basic Information

  • Award Number:
    1827427
  • Principal Investigator:
  • Amount:
    $269,900
  • Institution:
  • Institution Country:
    United States
  • Award Type:
    Standard Grant
  • Fiscal Year:
    2018
  • Funding Country:
    United States
  • Project Period:
    2018-08-15 to 2022-07-31
  • Status:
    Completed

Project Abstract

Despite recent advances in artificial intelligence, humans remain unmatched in their ability to think creatively. Intelligent machines can use massive data to learn to identify patterns that are similar to learned examples, but people can use very small amounts of data to discover deep similarities between situations that are superficially very different (e.g., engineers have devised a cooling system for buildings using principles adapted from termite mounds). This type of creative thinking depends on analogy: the ability to find and exploit resemblances based on relations among entities, rather than solely on superficial appearances. The present investigation aims to show how relations can be learned from examples (in the form of either texts or pictures) and then used to reason by analogy. The work integrates recent advances in machine learning with more human-like learning mechanisms. Improved analogy models will increase the power of computer-based information retrieval, allowing both text and pictures to serve as retrieval cues to search large databases for items that are analogous in relational structure. The large analogy datasets generated for the project will be made publicly available. More flexible search engines will help to automate creative tasks such as engineering design. Identifying the computational basis for relation learning and analogical reasoning will guide development of artificial intelligence systems by providing more efficient learning mechanisms. The research team is integrating research and education activities by using this project as a training opportunity in interdisciplinary research, encompassing psychology, statistics, computer science, and mathematics.

The research will integrate advanced computational approaches with behavioral experiments on human relation learning and analogical reasoning, using both texts and pictures as inputs. The work is guided by cognitive theory on learning and reasoning, and exploits recent advances in the field of machine vision. The project includes the creation and validation of multiple databases of analogy problems. Experiments will be performed to establish human performance levels in a variety of tasks. Computational models will be developed by synergizing big-data learning through deep networks with small-data learning through Bayesian modeling. Models will be evaluated by comparison with human benchmarks. By addressing issues that arise in reasoning from natural inputs such as texts and pictures, the models to be developed will generalize to situations that people encounter in their daily life.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
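As an illustrative aside (not part of the award abstract), the sketch below shows one common baseline for the retrieval idea described above: representing the relation between two concepts as an embedding offset and ranking candidate pairs by cosine similarity of their relation vectors. All names and toy vectors here are hypothetical stand-ins for learned text or image features; this is not the project's model.

```python
# Minimal sketch of relational analogy retrieval using the classic
# vector-offset representation: the relation holding from a to b is
# approximated as the embedding difference b - a, and candidate pairs
# are ranked by how closely their offset matches the query's.
import numpy as np

# Hypothetical toy embeddings; a real system would use learned features.
emb = {
    "termite_mound": np.array([0.9, 0.1, 0.2]),
    "passive_cooling": np.array([0.2, 0.8, 0.3]),
    "building": np.array([0.8, 0.2, 0.1]),
    "hvac_design": np.array([0.1, 0.9, 0.2]),
    "bird_wing": np.array([0.7, 0.1, 0.8]),
    "feather": np.array([0.6, 0.2, 0.9]),
}

def relation(a, b):
    """Represent the relation from a to b as an offset vector."""
    return emb[b] - emb[a]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def retrieve_analogs(query_pair, candidate_pairs):
    """Rank candidate pairs by relational similarity to the query pair."""
    q = relation(*query_pair)
    scored = [(cosine(q, relation(*c)), c) for c in candidate_pairs]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    query = ("termite_mound", "passive_cooling")  # source analog
    candidates = [("building", "hvac_design"), ("bird_wing", "feather")]
    for score, pair in retrieve_analogs(query, candidates):
        print(f"{pair}: relational similarity = {score:.3f}")
```

The offset representation is only a crude stand-in for the richer relational representations the project targets; it serves here to make concrete what "analogous in relational structure, not surface appearance" means computationally.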

Project Outcomes

Journal Articles (7)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
PartImageNet: A Large, High-Quality Dataset of Parts
  • DOI:
    10.1007/978-3-031-20074-8_8
  • Published:
    2021-12
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Ju He;Shuo Yang;Shaokang Yang;Adam Kortylewski;Xiaoding Yuan;Jieneng Chen;Shuai Liu;Cheng Yang;A. Yuille
  • Corresponding Author:
    Ju He;Shuo Yang;Shaokang Yang;Adam Kortylewski;Xiaoding Yuan;Jieneng Chen;Shuai Liu;Cheng Yang;A. Yuille
Amodal Segmentation through Out-of-Task and Out-of-Distribution Generalization with a Bayesian Model. CVPR. 2022.
Robust Category-Level 6D Pose Estimation with Coarse-to-Fine Rendering of Neural Features
  • DOI:
    10.48550/arxiv.2209.05624
  • Published:
    2022-09
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Wufei Ma;Angtian Wang;A. Yuille;Adam Kortylewski
  • Corresponding Author:
    Wufei Ma;Angtian Wang;A. Yuille;Adam Kortylewski
Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation
  • DOI:
    10.1007/978-3-030-58452-8_9
  • Published:
    2020-03
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Yingda Xia;Yi Zhang;Fengze Liu;Wei Shen;A. Yuille
  • Corresponding Author:
    Yingda Xia;Yi Zhang;Fengze Liu;Wei Shen;A. Yuille
Alan Yuille. Learning Part Segmentation through Unsupervised Domain Adaptation from Synthetic Vehicles.

Other Publications by Alan Yuille

Stereo and controlled movement
Max Margin Learning of Hierarchical Configural Deformable Templates (HCDTs) for Efficient Object Parsing and Pose Estimation
  • DOI:
    10.1007/s11263-010-0375-1
  • Published:
    2010-08-31
  • Journal:
  • Impact Factor:
    9.300
  • Authors:
    Long (Leo) Zhu;Yuanhao Chen;Chenxi Lin;Alan Yuille
  • Corresponding Author:
    Alan Yuille
Belief Propagation, Mean-field, and Bethe Approximations
  • DOI:
  • Published:
    2010
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Alan Yuille
  • Corresponding Author:
    Alan Yuille
Deep networks under scene-level supervision for multi-class geospatial object detection from remote sensing images
STFlow: Self-Taught Optical Flow Estimation Using Pseudo Labels
  • DOI:
    10.1109/tip.2020.3024015
  • Published:
    2020-09
  • Journal:
  • Impact Factor:
    10.6
  • Authors:
    Zhe Ren;Wenhan Luo;Junchi Yan;Wenlong Liao;Xiaokang Yang;Alan Yuille;Hongyuan Zha
  • Corresponding Author:
    Hongyuan Zha


Other Grants by Alan Yuille

Collaborative Research: Visual Cortex on Silicon
  • Award Number:
    1762521
  • Fiscal Year:
    2017
  • Amount:
    $269,900
  • Award Type:
    Continuing Grant
Collaborative Research: Visual Cortex on Silicon
  • Award Number:
    1317376
  • Fiscal Year:
    2013
  • Amount:
    $269,900
  • Award Type:
    Continuing Grant
RI: Small: Recursive Compositional Models for Vision
  • Award Number:
    0917141
  • Fiscal Year:
    2009
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
A Computational Theory of Motion Perception Modeling the Statistics of the Environment
  • Award Number:
    0736015
  • Fiscal Year:
    2007
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
IPAM/Statistics Graduate Workshop
  • Award Number:
    0743835
  • Fiscal Year:
    2007
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Computational Theory of Motion Perception
  • Award Number:
    0613563
  • Fiscal Year:
    2006
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Image Parsing: Integrating Generative and Discriminative Methods
  • Award Number:
    0413214
  • Fiscal Year:
    2005
  • Amount:
    $269,900
  • Award Type:
    Continuing Grant
SGER: Stochastic Algorithms for Visual Search and Recognition
  • Award Number:
    0240148
  • Fiscal Year:
    2003
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Automated Detection of Informational Signs and Hazardous Objects: Visual Aids for the Blind
  • Award Number:
    9800670
  • Fiscal Year:
    1998
  • Amount:
    $269,900
  • Award Type:
    Continuing Grant
Deformable Templates for Face Description, Recognition, Interpretation, and Learning
  • Award Number:
    9696107
  • Fiscal Year:
    1996
  • Amount:
    $269,900
  • Award Type:
    Continuing Grant

Similar Domestic (NSFC) Grants

Research on Quantum Field Theory without a Lagrangian Description
  • Award Number:
    24ZR1403900
  • Approval Year:
    2024
  • Amount:
    CNY 0
  • Award Type:
    Provincial/Municipal Project
Cell Research
  • Award Number:
    31224802
  • Approval Year:
    2012
  • Amount:
    CNY 240,000
  • Award Type:
    Special Fund Project
Cell Research
  • Award Number:
    31024804
  • Approval Year:
    2010
  • Amount:
    CNY 240,000
  • Award Type:
    Special Fund Project
Cell Research
  • Award Number:
    30824808
  • Approval Year:
    2008
  • Amount:
    CNY 240,000
  • Award Type:
    Special Fund Project
Research on the Rapid Growth Mechanism of KDP Crystal
  • Award Number:
    10774081
  • Approval Year:
    2007
  • Amount:
    CNY 450,000
  • Award Type:
    General Program

Similar Overseas Grants

Collaborative Research: CompCog: RI: Medium: Understanding human planning through AI-assisted analysis of a massive chess dataset
  • Award Number:
    2312374
  • Fiscal Year:
    2023
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Collaborative Research: CompCog: RI: Medium: Understanding human planning through AI-assisted analysis of a massive chess dataset
  • Award Number:
    2312373
  • Fiscal Year:
    2023
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Collaborative Research: CompCog: Modeling Search within the Mental Lexicon
  • Award Number:
    2235362
  • Fiscal Year:
    2023
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Collaborative Research: CompCog: Modeling Search within the Mental Lexicon
  • Award Number:
    2235363
  • Fiscal Year:
    2023
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Collaborative Research: CompCog: Adversarial Collaborative Research on Intuitive Physical Reasoning
  • Award Number:
    2121009
  • Fiscal Year:
    2021
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Collaborative Research: CompCog: Psychological, Computational, and Neural Adequacy in a Deep Learning Model of Human Speech Recognition
  • Award Number:
    2043903
  • Fiscal Year:
    2021
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Collaborative Research: CompCog: Psychological, Computational, and Neural Adequacy in a Deep Learning Model of Human Speech Recognition
  • Award Number:
    2043950
  • Fiscal Year:
    2021
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
Collaborative Research: CompCog: Adversarial Collaborative Research on Intuitive Physical Reasoning
  • Award Number:
    2121102
  • Fiscal Year:
    2021
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
CompCog: Collaborative Research: Testing quantitative predictions of sentence processing theories with a large-scale eye-tracking database
  • Award Number:
    2020914
  • Fiscal Year:
    2020
  • Amount:
    $269,900
  • Award Type:
    Standard Grant
CompCog: Collaborative Research: Testing quantitative predictions of sentence processing theories with a large-scale eye-tracking database
  • Award Number:
    2020945
  • Fiscal Year:
    2020
  • Amount:
    $269,900
  • Award Type:
    Standard Grant