CAREER: Generative Models for Targeted Domain Interpretability with Applications to Healthcare


Basic Information

  • Award Number:
    1750358
  • Principal Investigator:
  • Amount:
    $548,000
  • Host Institution:
  • Host Institution Country:
    United States
  • Grant Type:
    Continuing Grant
  • Fiscal Year:
    2018
  • Funding Country:
    United States
  • Project Period:
    2018-02-15 to 2025-01-31
  • Project Status:
    Ongoing

Project Abstract

The imminent deployment of AI and machine learning in poorly characterized settings such as autonomous driving, personalized news feeds, and treatment recommendation systems has created an urgent need for machine learning systems that explain their decisions. Interpretability helps human experts ascertain whether machine learning systems, trained on technical objective functions, have sensible outputs despite unmodeled unknowns. For example, a clinical decision support system will never know all of a patient's history, nor may it know which of many side effects a specific patient is willing to tolerate. An important challenge, then, is how to design machine learning systems that both predict well and provide explanation. Within this broad challenge, this work develops techniques for domain-targeted interpretability: finding summaries of high-dimensional data that are relevant for making decisions. The proposed work focuses on healthcare applications, where interpretable models are essential to safety. However, the project aims to produce foundational learning algorithms applicable to a range of scientific and social domains. The developed methods will be tested on real problems in personalizing treatment recommendations and prognoses for sepsis, depression, and autism spectrum disorder. Thus, the successful completion of the work will impact both interpretable machine learning and clinical science. All software developed in the course of the project will be freely shared. The educational component of the proposed work will educate early elementary students about the impact of statistics in medicine and educate policy-makers and legal scholars on how a right to explanation might be regulated in the context of machine learning, such as clinical decision support systems. PI Doshi-Velez also engages high school students, undergraduates, women, and researchers from underserved areas in her lab.

The proposed work addresses a specific challenge common in scientific settings: domain-targeted interpretability. In many scientific domains, unsupervised generative models are used by domain experts to understand patterns in the data, but as the dimensionality of the data grows, the most salient patterns in the data may not be relevant for the specific investigation. For example, a psychiatrist may find that the strongest signals in the data from his patient cohort come from diabetes and heart disease, which may not be relevant for choosing therapies for depression. The proposed work leverages synergies between explaining domain-relevant patterns in the data and performing well on domain-relevant tasks to achieve domain-targeted interpretability. It defines a task-constrained approach to domain-targeted interpretability and develops the essential inference techniques, develops extensions to sequential decision making, and defines extensions that improve downstream task performance while retaining interpretability. While there is a large body of work on making unsupervised learning models also useful for downstream tasks, none of these approaches truly manages the trade-off between providing an interpretation of the data and task performance. The proposed work addresses these shortcomings to make domain-targeted interpretability and task performance synergistic goals, and proposes a number of innovations toward this objective. Innovations include combining the rich existing literature on inference for traditional unsupervised models with modern inference techniques, and directly searching for dimensions or patterns relevant to the downstream task.
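To make the task-constrained idea concrete, below is a minimal sketch of one way such an objective can be written: a latent-variable generative model whose low-dimensional code must both reconstruct the data (the interpretable summary) and predict a domain-relevant label (the downstream task), with a single weight trading off the two goals. The PyTorch class, layer sizes, and weighting scheme are illustrative assumptions for exposition, not the project's actual models or inference methods.

```python
# Illustrative sketch only: a "task-constrained" latent-variable model in which the
# latent code z must reconstruct the observations x AND predict a task label y.
# All names, dimensions, and the loss weighting are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskConstrainedVAE(nn.Module):
    def __init__(self, x_dim: int, z_dim: int, n_classes: int):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)        # encoder: mean and log-variance of q(z|x)
        self.dec = nn.Linear(z_dim, x_dim)            # decoder: reconstructs x from z
        self.task_head = nn.Linear(z_dim, n_classes)  # predicts the domain-relevant label from z

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
        return self.dec(z), self.task_head(z), mu, logvar

def task_constrained_loss(model, x, y, task_weight=1.0):
    x_hat, y_logits, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x)                                   # explain the data
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # keep q(z|x) close to the prior
    task = F.cross_entropy(y_logits, y)                            # perform the task
    return recon + kl + task_weight * task

# Hypothetical usage: x holds patient features, y a treatment-relevant label.
# model = TaskConstrainedVAE(x_dim=100, z_dim=10, n_classes=2)
# loss = task_constrained_loss(model, x, y, task_weight=5.0)
```

Setting task_weight to zero recovers a purely unsupervised model; increasing it pushes the latent dimensions toward patterns that matter for the downstream task, which is the trade-off the abstract describes.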

Project Outputs

Journal articles (7)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Signature Activation: A Sparse Signal View for Holistic Saliency
  • DOI:
    10.48550/arxiv.2309.11443
  • Publication date:
    2023-09
  • Journal:
  • Impact factor:
    0
  • Authors:
    José Roberto Tello-Ayala;A. Fahed;Weiwei Pan;E. Pomerantsev;P. Ellinor;A. Philippakis;F. Doshi-Velez
  • Corresponding author:
    José Roberto Tello-Ayala;A. Fahed;Weiwei Pan;E. Pomerantsev;P. Ellinor;A. Philippakis;F. Doshi-Velez
Soft prompting might be a bug, not a feature
  • DOI:
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Bailey, Luke;Ahdritz, Gustaf;Kleiman, Anat;Swaroop, Siddharth;Doshi-Velez, Finale;Pan, Weiwei
  • Corresponding author:
    Pan, Weiwei
A Joint Learning Approach for Semi-supervised Neural Topic Modeling
Online model selection by learning how compositional kernels evolve
Implications of Gaussian process kernel mismatch for out-of-distribution data

Other Publications by Finale Doshi-Velez

How machine-learning recommendations influence clinician treatment selections: the example of antidepressant selection
  • DOI:
    10.1038/s41398-021-01224-x
  • Publication date:
    2021-02-04
  • Journal:
  • Impact factor:
    6.200
  • Authors:
    Maia Jacobs;Melanie F. Pradier;Thomas H. McCoy;Roy H. Perlis;Finale Doshi-Velez;Krzysztof Z. Gajos
  • Corresponding author:
    Krzysztof Z. Gajos
Ethical and regulatory challenges of large language models in medicine
  • DOI:
    10.1016/s2589-7500(24)00061-x
  • Publication date:
    2024-06-01
  • Journal:
  • Impact factor:
    24.100
  • Authors:
    Jasmine Chiat Ling Ong;Shelley Yin-Hsi Chang;Wasswa William;Atul J Butte;Nigam H Shah;Lita Sui Tjien Chew;Nan Liu;Finale Doshi-Velez;Wei Lu;Julian Savulescu;Daniel Shu Wei Ting
  • Corresponding author:
    Daniel Shu Wei Ting
Association between prescriber practices and major depression treatment outcomes
  • DOI:
    10.1016/j.xjmad.2024.100080
  • Publication date:
    2024-12-01
  • Journal:
  • Impact factor:
  • Authors:
    Sarah Rathnam;Abhishek Sharma;Kamber L. Hart;Pilar F. Verhaak;Thomas H. McCoy;Roy H. Perlis;Finale Doshi-Velez
  • Corresponding author:
    Finale Doshi-Velez

Other Grants by Finale Doshi-Velez

RI: Small: Human Validation in Batch Reinforcement Learning
  • Award Number:
    2007076
  • Fiscal Year:
    2020
  • Funding Amount:
    $548,000
  • Grant Type:
    Continuing Grant
RI: Small: Collaborative Research: Hidden Parameter Markov Decision Processes: Exploiting Structure in Families of Tasks
  • Award Number:
    1718306
  • Fiscal Year:
    2017
  • Funding Amount:
    $548,000
  • Grant Type:
    Standard Grant
RI: Small: Workshop for Women in Machine Learning
  • Award Number:
    1649706
  • Fiscal Year:
    2016
  • Funding Amount:
    $548,000
  • Grant Type:
    Standard Grant
Scalable Bayesian Inference for Interpretable Time-Series Models
  • Award Number:
    1544628
  • Fiscal Year:
    2015
  • Funding Amount:
    $548,000
  • Grant Type:
    Standard Grant
Scalable Bayesian Inference in Large Medical Databases
  • Award Number:
    1225204
  • Fiscal Year:
    2012
  • Funding Amount:
    $548,000
  • Grant Type:
    Fellowship Award

Similar Overseas Grants

AI HUB IN GENERATIVE MODELS
  • Award Number:
    EP/Y028805/1
  • Fiscal Year:
    2024
  • Funding Amount:
    $548,000
  • Grant Type:
    Research Grant
SBIR Phase I: Methods for Embedding User Data into 3D Generative AI Computer-aided-Design Models
  • Award Number:
    2335491
  • Fiscal Year:
    2024
  • Funding Amount:
    $548,000
  • Grant Type:
    Standard Grant
SG: Species Distribution Modeling on the A.I. frontier: Deep generative models for powerful, general and accessible SDM
  • Award Number:
    2329701
  • Fiscal Year:
    2024
  • Funding Amount:
    $548,000
  • Grant Type:
    Standard Grant
AI innovation in the supply chain of consumer packaged-goods for recognising objects in retail execution, supply chain management and smart factories: using novel diffusion-based optimisation algorithms and diffusion-based generative models
  • Award Number:
    10081810
  • Fiscal Year:
    2023
  • Funding Amount:
    $548,000
  • Grant Type:
    Collaborative R&D
Development of data-driven multiple sound spot synthesis technology based on deep generative neural network models
  • Award Number:
    23K11177
  • Fiscal Year:
    2023
  • Funding Amount:
    $548,000
  • Grant Type:
    Grant-in-Aid for Scientific Research (C)
I-Corps: A Framework for Streamlining the Development and Deployment of Generative Artificial Intelligence (AI) Models on Enterprise Data
  • Award Number:
    2335828
  • Fiscal Year:
    2023
  • Funding Amount:
    $548,000
  • Grant Type:
    Standard Grant
Proteasomal recruiters of PAX3-FOXO1 Designed via Sequence-Based Generative Models
  • Award Number:
    10826068
  • Fiscal Year:
    2023
  • Funding Amount:
    $548,000
  • Grant Type:
Development of a Realistic LiDAR Simulator based on Deep Generative Models
  • Award Number:
    23K16974
  • Fiscal Year:
    2023
  • Funding Amount:
    $548,000
  • Grant Type:
    Grant-in-Aid for Early-Career Scientists
CAREER: Exploiting Deep Generative Models for Visual Recognition
  • Award Number:
    2239076
  • Fiscal Year:
    2023
  • Funding Amount:
    $548,000
  • Grant Type:
    Continuing Grant
RI: Small: Integrating physics, data, and art-based insights for controllable generative models
  • Award Number:
    2323086
  • Fiscal Year:
    2023
  • Funding Amount:
    $548,000
  • Grant Type:
    Standard Grant