Interpretable Machine Learning for life science data
Basic Information
- Grant number: RGPIN-2020-05860
- Principal investigator:
- Amount: $40,100
- Host institution:
- Host institution country: Canada
- Category: Discovery Grants Program - Individual
- Fiscal year: 2021
- Funding country: Canada
- Duration: 2021-01-01 to 2022-12-31
- Status: Completed
- Source:
- Keywords:
Project Abstract
The recent surge of interest in artificial intelligence (AI) applications has brought new opportunities to Canadian industry. Although several successful applications have recently emerged in many key sectors, important hindrances remain to be addressed in others, such as life science and bioinformatics, where innovations may have significant social and ethical impacts. One such issue is interpretability. Most AI applications are developed within the Machine Learning (ML) paradigm, which consists of designing algorithms that learn a task by themselves rather than explicitly programming the solution. For harder tasks, we often grant the learning algorithm greater capacity to transform the data into a suitable representation. Although this permits the task to be performed, it comes at a cost: there is an inverse relationship between a model's capacity and its effective interpretability, potentially making it an opaque model that is difficult for a human to understand. Motivations for interpretable ML methods are numerous: building trust by making the model challengeable, contributing to acceptance by explaining decisions, serving as a diagnostic tool to drive future data collection, helping certification processes uncover corner cases, assessing the presence of undesired biases in the model to ensure fairness, and revealing obfuscated decision mechanisms for knowledge discovery. All these aspects are crucial and are part of the research program's long-term goal of reaching a unified methodology and general understanding of ML interpretability. The field of bioinformatics, and life science in general, offers great potential for research on ML interpretability. In fact, the field faces specific realities that make its problems more difficult than those in areas where ML is already fruitful.
For instance, the amount of available data can be limited and costly to gather; the data might originate from different labs, making the fusion of multiple datasets a nontrivial task; over- and under-representation of some populations can lead to unacceptable biases that must be identified and compensated for; and potential discoveries made by ML algorithms must be humanly intelligible to be usable. The present project proposes to address these issues by developing specialized ML methods covering the bias, diagnostic, and knowledge-discovery aspects of interpretability. Advances in ML interpretability are expected to contribute to the general acceptance of, and trust in, the use of AI in life science. The initial impact will be to promote the use of ML to accelerate life science research by making it more cost-effective and scalable. The improved practices will eventually foster the development of AI applications that will benefit Canadians once introduced into healthcare systems.
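The bias and knowledge-discovery aspects mentioned above are often probed with model-agnostic interpretability techniques. One standard example is permutation feature importance: shuffle one feature and measure how much the model's accuracy drops. A minimal, self-contained sketch on synthetic data (illustrative only; the abstract does not name specific methods or datasets):

```python
# Permutation feature importance: a common model-agnostic
# interpretability technique (hypothetical illustration; not a method
# claimed by the project).
import random

random.seed(0)

# Synthetic data: feature 0 drives the label, feature 1 is pure noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(500)]
y = [1 if x[0] > 0 else 0 for x in X]

def model(x):
    # Stand-in "trained" classifier that thresholds feature 0.
    return 1 if x[0] > 0 else 0

def accuracy(data, labels):
    return sum(model(x) == t for x, t in zip(data, labels)) / len(labels)

base = accuracy(X, y)  # exactly 1.0 on this noiseless toy data

def permutation_importance(feature):
    # Shuffle one feature column and measure the accuracy drop:
    # a large drop means the model relies on that feature.
    col = [x[feature] for x in X]
    random.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return base - accuracy(X_perm, y)

imp0 = permutation_importance(0)  # large drop: the model uses feature 0
imp1 = permutation_importance(1)  # zero drop: the model ignores feature 1
print(f"importance(feature 0) = {imp0:.2f}")
print(f"importance(feature 1) = {imp1:.2f}")
```

A near-zero importance for the noise feature and a large one for the informative feature is exactly the kind of humanly intelligible readout the abstract argues for.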
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other publications by Laviolette, François
Other grants by Laviolette, François
Big data analytics in insurance
- Grant number: 515901-2017
- Fiscal year: 2021
- Amount: $40,100
- Category: Collaborative Research and Development Grants

Interpretable Machine Learning for life science data
- Grant number: RGPAS-2020-00082
- Fiscal year: 2021
- Amount: $40,100
- Category: Discovery Grants Program - Accelerator Supplements

NSERC/Intact Financial Industrial Research Chair in Machine Learning for Insurances
- Grant number: 529529-2017
- Fiscal year: 2021
- Amount: $40,100
- Category: Industrial Research Chairs

Big data analytics in insurance
- Grant number: 515901-2017
- Fiscal year: 2020
- Amount: $40,100
- Category: Collaborative Research and Development Grants

Interpretable Machine Learning for life science data
- Grant number: RGPAS-2020-00082
- Fiscal year: 2020
- Amount: $40,100
- Category: Discovery Grants Program - Accelerator Supplements

DEEL DEpendable & Explainable Learning
- Grant number: 537462-2018
- Fiscal year: 2020
- Amount: $40,100
- Category: Collaborative Research and Development Grants

Interpretable Machine Learning for life science data
- Grant number: RGPIN-2020-05860
- Fiscal year: 2020
- Amount: $40,100
- Category: Discovery Grants Program - Individual

NSERC/Intact Financial Industrial Research Chair in Machine Learning for Insurances
- Grant number: 529529-2017
- Fiscal year: 2020
- Amount: $40,100
- Category: Industrial Research Chairs

DEEL DEpendable & Explainable Learning
- Grant number: 537462-2018
- Fiscal year: 2019
- Amount: $40,100
- Category: Collaborative Research and Development Grants

Big data analytics in insurance
- Grant number: 515901-2017
- Fiscal year: 2019
- Amount: $40,100
- Category: Collaborative Research and Development Grants
Similar National Natural Science Foundation of China Grants
Understanding structural evolution of galaxies with machine learning
- Grant number: n/a
- Year approved: 2022
- Amount: CNY 100,000
- Category: Provincial/municipal project
Similar International Grants
22-BBSRC/NSF-BIO - Interpretable & Noise-robust Machine Learning for Neurophysiology
- Grant number: BB/Y008758/1
- Fiscal year: 2024
- Amount: $40,100
- Category: Research Grant

Interpretable Machine Learning Modelling of Future Extreme Floods under Climate Change
- Grant number: 2889015
- Fiscal year: 2023
- Amount: $40,100
- Category: Studentship

UKRI/BBSRC-NSF/BIO: Interpretable and Noise-Robust Machine Learning for Neurophysiology
- Grant number: 2321840
- Fiscal year: 2023
- Amount: $40,100
- Category: Continuing Grant

CAREER: Interpretable and Robust Machine Learning Models: Analysis and Algorithms
- Grant number: 2239787
- Fiscal year: 2023
- Amount: $40,100
- Category: Continuing Grant

Macroeconomic structural changes and their characteristics: Applications of interpretable machine learning
- Grant number: 23K01319
- Fiscal year: 2023
- Amount: $40,100
- Category: Grant-in-Aid for Scientific Research (C)

Optimization and Validation of a Cost-effective Image-Guided Automated Extracapsular Extension Detection Framework through Interpretable Machine Learning in Head and Neck Cancer
- Grant number: 10648372
- Fiscal year: 2023
- Amount: $40,100
- Category:

Accurate, reliable, and interpretable machine learning for assessment of neonatal and pediatric brain micro-structure
- Grant number: 10566299
- Fiscal year: 2023
- Amount: $40,100
- Category:

Improving Interpretable Machine Learning for Plasmas: Towards Physical Insight, Data-Driven Models, and Optimal Sensing
- Grant number: 2329765
- Fiscal year: 2023
- Amount: $40,100
- Category: Continuing Grant

Interpretable machine learning to synergize brain age estimation and neuroimaging genetics
- Grant number: 10568234
- Fiscal year: 2023
- Amount: $40,100
- Category:

Collaborative Research: CIF: Small: Interpretable Fair Machine Learning: Frameworks, Robustness, and Scalable Algorithms
- Grant number: 2343869
- Fiscal year: 2023
- Amount: $40,100
- Category: Standard Grant