Causality in Machine Learning

Basic Information

  • Grant number:
    RGPIN-2022-03667
  • Principal investigator:
  • Amount:
    $21,100
  • Host institution:
  • Host institution country:
    Canada
  • Program:
    Discovery Grants Program - Individual
  • Fiscal year:
    2022
  • Funding country:
    Canada
  • Duration:
    2022-01-01 to 2023-12-31
  • Status:
    Completed

Project Summary

Causality in Machine Learning is often understood as the ability to understand decisions produced by a machine-learning model in terms of knowledge of the domain in which the model operates, and the ability to reason about such decisions. A causal model has an "introspective" ability to reason about itself. Learning a causal model is a much more difficult task than the one performed by current Machine Learning methods, including Deep Learning, which determine a "correlational" or "pattern-matching" relationship between the inputs of the model and its decision. I propose here a research program on causality in Machine Learning. Causality is one of the main challenges facing the field of Machine Learning. Moreover, a causal representation of a model will allow Machine Learning to progress towards abilities of human intelligence, such as learning from a few examples. I propose to connect with the rich body of existing Artificial Intelligence work that explores the use of logic to reason about the causes of changing states of the world and of the variables describing it. The proposed research program is founded on my previous work, in particular my active participation in a sub-area of Machine Learning known as Inductive Logic Programming (ILP). I propose to interpret models obtained with Deep Learning using logic. ILP will enable us to build models that behave similarly to models obtained by Deep Learning. These ILP models will be "distilled" from the Deep Learning models and expressed as rules in first-order logic, which will make them interpretable by humans. It will also facilitate integrating prior knowledge expressed in logic with the learned models. Even partial success of research on causality is likely to have significant impact. Causality is necessary for broader social acceptance of models developed using Machine Learning for decision-making concerning humans.
For instance, the European Union GDPR directive stipulates that any such model should be explainable, i.e. a person about whom the model has made a decision should be able to obtain an explanation of that decision that is understandable to them. Understanding models will eventually allow us to avoid models that make decisions about humans based on gender, ethnicity, etc. For example, the group in Pisa with which I collaborate has access to claim processing data of one of the leading Italian insurance companies. We will look at the explainability of decisions taken by their automated insurance claim processing systems. Addressing causality is a huge challenge. In this program I propose to make inroads into distilling Deep Learning models into understandable models that also make causality explicit, and into assigning multiple factors as combined causes of a given effect predicted by a model. I will also train young researchers who will continue the important work on causality for Machine Learning.
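To illustrate the distillation idea in the summary, a minimal sketch: a black-box "teacher" (a stand-in for a trained deep model) is queried to label inputs, and a simple rule learner is fit to those labels, yielding a human-readable rule with a measured fidelity to the teacher. The `teacher` function, the `age`/`income` features, and all thresholds below are hypothetical, chosen only for illustration; they are not part of the proposal or of any real insurance system.

```python
# Hedged sketch of rule distillation: the "teacher" stands in for an opaque
# deep model; features and coefficients are invented for this example.

def teacher(age, income):
    """Opaque model: fires when a hidden weighted score exceeds a cutoff."""
    return 1 if (0.03 * age + 0.00004 * income) > 2.0 else 0

# Query the teacher over a grid of inputs to build a distillation dataset.
data = [(age, income, teacher(age, income))
        for age in range(18, 80, 2)
        for income in range(10_000, 100_000, 5_000)]

def best_stump(rows):
    """Greedy search for the single-feature threshold rule (a one-rule
    'student') that best reproduces the teacher's labels."""
    best = None
    for feat in (0, 1):  # 0 = age, 1 = income
        for thresh in sorted({r[feat] for r in rows}):
            acc = sum((r[feat] > thresh) == (r[2] == 1)
                      for r in rows) / len(rows)
            if best is None or acc > best[0]:
                best = (acc, feat, thresh)
    return best

acc, feat, thresh = best_stump(data)
name = ["age", "income"][feat]
print(f"IF {name} > {thresh} THEN positive  (fidelity to teacher: {acc:.2f})")
```

A first-order-logic version of the same idea would return a clause such as `positive(X) :- income(X, I), I > T` instead of a threshold pair; the fidelity score reported here is the standard way to measure how faithfully a distilled student mimics its teacher.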

Project Outputs

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)


Other Publications by Matwin, Stan

deepBioWSD: effective deep neural word sense disambiguation of biomedical text data
Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity
Learning and evaluation in the presence of class hierarchies: Application to text categorization
A new algorithm for reducing the workload of experts in performing systematic reviews
A novel machine learning approach to analyzing geospatial vessel patterns using AIS data
  • DOI:
    10.1080/15481603.2022.2118437
  • Publication date:
    2022-12-31
  • Journal:
  • Impact factor:
    6.7
  • Authors:
    Ferreira, Martha Dais;Campbell, Jessica N. A.;Matwin, Stan
  • Corresponding author:
    Matwin, Stan


Other Grants by Matwin, Stan

Interpretability for Machine Learning
  • Grant number:
    CRC-2019-00383
  • Fiscal year:
    2022
  • Funding amount:
    $21,100
  • Program:
    Canada Research Chairs
Automated Monitoring of the Naval Information Space (AMNIS)
  • Grant number:
    550722-2020
  • Fiscal year:
    2021
  • Funding amount:
    $21,100
  • Program:
    Alliance Grants
Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
  • Grant number:
    RGPIN-2016-03913
  • Fiscal year:
    2021
  • Funding amount:
    $21,100
  • Program:
    Discovery Grants Program - Individual
Interpretability For Machine Learning
  • Grant number:
    CRC-2019-00383
  • Fiscal year:
    2021
  • Funding amount:
    $21,100
  • Program:
    Canada Research Chairs
Interpretability for Machine Learning
  • Grant number:
    CRC-2019-00383
  • Fiscal year:
    2020
  • Funding amount:
    $21,100
  • Program:
    Canada Research Chairs
Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
  • Grant number:
    RGPIN-2016-03913
  • Fiscal year:
    2020
  • Funding amount:
    $21,100
  • Program:
    Discovery Grants Program - Individual
Automated Monitoring of the Naval Information Space (AMNIS)
  • Grant number:
    550722-2020
  • Fiscal year:
    2020
  • Funding amount:
    $21,100
  • Program:
    Alliance Grants
Visual Text Analytics
  • Grant number:
    1000228345-2012
  • Fiscal year:
    2019
  • Funding amount:
    $21,100
  • Program:
    Canada Research Chairs
Research Challenges in Privacy-Aware Mobility Data Analysis and in Text Mining with Enriched Data
  • Grant number:
    RGPIN-2016-03913
  • Fiscal year:
    2019
  • Funding amount:
    $21,100
  • Program:
    Discovery Grants Program - Individual
Interpretability for Machine Learning
  • Grant number:
    CRC-2019-00383
  • Fiscal year:
    2019
  • Funding amount:
    $21,100
  • Program:
    Canada Research Chairs

Similar NSFC Grants

Understanding structural evolution of galaxies with machine learning
  • Grant number:
    n/a
  • Approval year:
    2022
  • Funding amount:
    ¥100,000
  • Program:
    Provincial/municipal project

Similar Overseas Grants

CAREER: Blessing of Nonconvexity in Machine Learning - Landscape Analysis and Efficient Algorithms
  • Grant number:
    2337776
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Continuing Grant
RII Track-4:NSF: Physics-Informed Machine Learning with Organ-on-a-Chip Data for an In-Depth Understanding of Disease Progression and Drug Delivery Dynamics
  • Grant number:
    2327473
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Standard Grant
CC* Campus Compute: UTEP Cyberinfrastructure for Scientific and Machine Learning Applications
  • Grant number:
    2346717
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Standard Grant
Learning to create Intelligent Solutions with Machine Learning and Computer Vision: A Pathway to AI Careers for Diverse High School Students
  • Grant number:
    2342574
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Standard Grant
Collaborative Research: Conference: DESC: Type III: Eco Edge - Advancing Sustainable Machine Learning at the Edge
  • Grant number:
    2342498
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Standard Grant
Excellence in Research:Towards Data and Machine Learning Fairness in Smart Mobility
  • Grant number:
    2401655
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Standard Grant
I-Corps: Translation potential of using machine learning to predict oxaliplatin chemotherapy benefit in early colon cancer
  • Grant number:
    2425300
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Standard Grant
CAREER: Mitigating the Lack of Labeled Training Data in Machine Learning Based on Multi-level Optimization
  • Grant number:
    2339216
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Continuing Grant
Postdoctoral Fellowship: OPP-PRF: Leveraging Community Structure Data and Machine Learning Techniques to Improve Microbial Functional Diversity in an Arctic Ocean Ecosystem Model
  • Grant number:
    2317681
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Standard Grant
Accelerated discovery of ultra-fast ionic conductors with machine learning
  • Grant number:
    24K08582
  • Fiscal year:
    2024
  • Funding amount:
    $21,100
  • Program:
    Grant-in-Aid for Scientific Research (C)