Resilience, Interpretability, and Scale in Large Complex Systems
Basic Information
- Grant number: RGPIN-2020-04490
- Principal investigator: Fieguth, Paul
- Amount: $24,000
- Host institution country: Canada
- Program: Discovery Grants Program - Individual
- Fiscal year: 2020
- Funding country: Canada
- Duration: 2020-01-01 to 2021-12-31
- Status: Completed
Project Summary
Throughout my research career I have been interested in image processing problems with an explicit notion of hierarchy or scale. In the last five to ten years, multi-scale algorithms have evolved into what are now known as convolutional neural networks (CNNs) or deep networks, a family of approaches which now dominates nearly every aspect of data analysis and computer vision. Although these deep networks have remarkable performance, to some extent this appearance of invincibility is misplaced. In particular, their nonlinearity and size mean that these networks are essentially uninterpretable: there is no way to explain how the 100 million parameters in some 100-layer black box reached a given conclusion. Furthermore, there is the unsettling observation that networks reporting 99.9% accuracy can in fact fail on very elementary problems. If such networks are to be used in healthcare, autonomous vehicles, or countless other life-critical applications, then a degree of interpretability and network resilience is essential, and indeed is slowly being mandated by some governments.
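The gap between reported accuracy and actual resilience can be made concrete with a toy example. The sketch below is illustrative only, not part of the proposal; every quantity in it is invented. It uses the gradient-sign construction behind adversarial examples: a linear classifier that is perfectly consistent on its own data is flipped on most inputs by a perturbation of just 0.4 per coordinate, well below the per-coordinate noise level of 1.

```python
import numpy as np

# Toy illustration of classifier fragility: a small, coordinated per-pixel
# perturbation (the gradient-sign construction behind adversarial examples)
# flips most predictions of an otherwise perfectly consistent classifier.
rng = np.random.default_rng(0)

d = 100                                # "image" dimension
w = np.ones(d) / np.sqrt(d)            # unit-norm linear classifier
s = rng.choice([-1.0, 1.0], size=200)  # which side each sample is drawn from
X = rng.normal(size=(200, d)) + 3.0 * np.outer(s, w)
y = np.sign(X @ w)                     # the classifier's own labels

# Shift every coordinate by only eps, in the worst-case direction per sample.
eps = 0.4
X_adv = X - eps * np.outer(y, np.sign(w))

clean_acc = np.mean(np.sign(X @ w) == y)    # 1.0 by construction
adv_acc = np.mean(np.sign(X_adv @ w) == y)  # collapses far below clean accuracy
print(clean_acc, adv_acc)
```

Although each coordinate moves by less than half the noise standard deviation, the perturbation is aligned with the classifier's weights, so the shifts accumulate across all 100 dimensions and reverse most decisions.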
The following inter-related summaries outline the exploratory research objectives proposed under my Discovery Grant:
1. Resilience and Interpretability of Deep Networks:
The unusual aspects of deep learning - fantastically robust performance on certain test sets, then bafflingly catastrophic errors on others - hint at unusual patterns of learning. Since the network essentially lives in a 100-million-dimensional space, visualizing the learned classification boundaries directly is out of the question, yet some insight into the learning process is essential to producing more resilient networks with more predictable learning outcomes and some degree of interpretability.
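While the full boundary cannot be visualized, one standard partial workaround is to evaluate the classifier on a two-dimensional slice through the input space and inspect the sign of the output. The sketch below probes an invented toy decision function rather than any real network; the names and dimensions are assumptions for illustration only.

```python
import numpy as np

# Probe a high-dimensional decision function on a random 2-D slice.
rng = np.random.default_rng(1)
d = 1000                               # stand-in for a network's input dimension

w1 = rng.normal(size=d) / np.sqrt(d)   # toy "network": a smooth nonlinear score
def f(x):
    return np.tanh(x @ w1)

# Two random orthonormal directions span the slice; x0 anchors it
# (in practice x0 would be a test input of interest).
u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v -= (v @ u) * u; v /= np.linalg.norm(v)
x0 = rng.normal(size=d)

a = np.linspace(-3.0, 3.0, 50)
grid = x0 + a[:, None, None] * u + a[None, :, None] * v   # 50 x 50 x d points
values = f(grid.reshape(-1, d)).reshape(50, 50)

# `values > 0` is a 2-D image of the decision boundary restricted to this slice.
print(values.shape)
```

Such slices show only a vanishing fraction of a 100-million-dimensional boundary, which is precisely why deeper insight into the learning process, beyond visualization, is needed.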
2. The Role of Scale in Large Networks:
Deep networks and related strategies suffer from high computational complexity and from limited predictability as to when the approach will or will not work. It is unclear how or when scale may implicitly be introduced by machine learning, although anecdotally it is known that learned filters frequently show scale-related patterns. The goal of this work is to introduce scale dependence explicitly, whether at the inputs, transferred from other networks, or directly into the network architecture.
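One generic way to make scale explicit at the inputs - a sketch of the general idea only, not the method to be developed under this grant - is to present the network with a Gaussian pyramid of each image rather than the raw image alone:

```python
import numpy as np

def blur_and_downsample(img, sigma=1.0):
    """Separable Gaussian blur followed by 2x decimation (toy implementation)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    # Convolve each row, then each column, with the 1-D Gaussian kernel.
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img[::2, ::2]

def gaussian_pyramid(img, levels=3):
    """Successively blurred and decimated copies: one image per scale."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(blur_and_downsample(pyr[-1]))
    return pyr

img = np.random.default_rng(2).normal(size=(64, 64))
for level, p in enumerate(gaussian_pyramid(img)):
    print(level, p.shape)      # 64x64, 32x32, 16x16
```

Each pyramid level isolates structure at one scale, so a network consuming the stack receives scale as an explicit input coordinate instead of having to discover it implicitly in its learned filters.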
3. Large Complex Systems and Nonlinear Networks:
Deep networks are, in a sense, just unusually large complex nonlinear systems. However, complex-systems research has so far had relatively little connection to research into deep networks. I would like to consider whether aspects of resilience in complex systems might lead to insights on related questions of resilience in large networks: not that the algorithms would be the same, but that an understanding of resilience might translate.
All of these topics and skills are in demand across a wide range of industries, leading to outstanding training and future employment opportunities for highly qualified personnel (HQP).
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other Publications by Fieguth, Paul
Constrained Watershed Method to Infer Morphology of Mammalian Cells in Microscopic Images
- DOI: 10.1002/cyto.a.20951
- Published: 2010-12-01
- Impact factor: 3.7
- Authors: Kachouie, Nezamoddin N.; Fieguth, Paul; Khademhosseini, Ali
- Corresponding author: Khademhosseini, Ali

Process performance evaluation and classification via in-situ melt pool monitoring in directed energy deposition
- DOI: 10.1016/j.cirpj.2021.06.015
- Published: 2021-07-14
- Impact factor: 4.8
- Authors: Ertay, Deniz Sera; Naiel, Mohamed A.; Fieguth, Paul
- Corresponding author: Fieguth, Paul

Extended local binary patterns for texture classification
- DOI: 10.1016/j.imavis.2012.01.001
- Published: 2012-02-01
- Impact factor: 4.7
- Authors: Liu, Li; Zhao, Lingjun; Fieguth, Paul
- Corresponding author: Fieguth, Paul

Virtual histological staining of label-free total absorption photoacoustic remote sensing (TA-PARS)
- DOI: 10.1038/s41598-022-14042-y
- Published: 2022-06-18
- Impact factor: 4.6
- Authors: Boktor, Marian; Ecclestone, Benjamin R.; Pekar, Vlad; Dinakaran, Deepak; Mackey, John R.; Fieguth, Paul; Haji Reza, Parsin
- Corresponding author: Haji Reza, Parsin

Deep learning methods for inverse problems
- DOI: 10.7717/peerj-cs.951
- Published: 2022
- Impact factor: 3.8
- Authors: Kamyab, Shima; Azimifar, Zohreh; Sabzi, Rasool; Fieguth, Paul
- Corresponding author: Fieguth, Paul
Other Grants Held by Fieguth, Paul
Resilience, Interpretability, and Scale in Large Complex Systems
- Grant number: RGPIN-2020-04490
- Fiscal year: 2022
- Amount: $24,000
- Program: Discovery Grants Program - Individual

Resilience, Interpretability, and Scale in Large Complex Systems
- Grant number: RGPIN-2020-04490
- Fiscal year: 2021
- Amount: $24,000
- Program: Discovery Grants Program - Individual

Unsupervised Machine Learning for Visual Relation Detection
- Grant number: 549003-2019
- Fiscal year: 2021
- Amount: $24,000
- Program: Alliance Grants

Advanced Calibration for Multiple Projector Systems
- Grant number: 531853-2018
- Fiscal year: 2020
- Amount: $24,000
- Program: Collaborative Research and Development Grants

Unsupervised Machine Learning for Visual Relation Detection
- Grant number: 549003-2019
- Fiscal year: 2020
- Amount: $24,000
- Program: Alliance Grants

Advanced Calibration for Multiple Projector Systems
- Grant number: 531853-2018
- Fiscal year: 2019
- Amount: $24,000
- Program: Collaborative Research and Development Grants

Scale-Coupling and Non-Locality in Large Random Fields
- Grant number: RGPIN-2015-05866
- Fiscal year: 2019
- Amount: $24,000
- Program: Discovery Grants Program - Individual

Scale-Coupling and Non-Locality in Large Random Fields
- Grant number: RGPIN-2015-05866
- Fiscal year: 2018
- Amount: $24,000
- Program: Discovery Grants Program - Individual

Advanced correction of projected imagery
- Grant number: 499828-2016
- Fiscal year: 2017
- Amount: $24,000
- Program: Collaborative Research and Development Grants

Scale-Coupling and Non-Locality in Large Random Fields
- Grant number: RGPIN-2015-05866
- Fiscal year: 2017
- Amount: $24,000
- Program: Discovery Grants Program - Individual
Similar Overseas Grants
Collaborative Research: SHF: Medium: Toward Understandability and Interpretability for Neural Language Models of Source Code
- Grant number: 2423813
- Fiscal year: 2024
- Amount: $24,000
- Program: Standard Grant

Enhancing the Accuracy and Interpretability of Global Flood Models with AI: Development of a Physics-Guided Deep Learning Model Considering River Network Topology
- Grant number: 24K17353
- Fiscal year: 2024
- Amount: $24,000
- Program: Grant-in-Aid for Early-Career Scientists

Collaborative Research: SHF: Medium: Toward Understandability and Interpretability for Neural Language Models of Source Code
- Grant number: 2311468
- Fiscal year: 2023
- Amount: $24,000
- Program: Standard Grant

Automatic Controller Design with Performance and Interpretability in Reliable Industrial Applications
- Grant number: 23K19116
- Fiscal year: 2023
- Amount: $24,000
- Program: Grant-in-Aid for Research Activity Start-up

Collaborative Research: SHF: Medium: Toward Understandability and Interpretability for Neural Language Models of Source Code
- Grant number: 2311469
- Fiscal year: 2023
- Amount: $24,000
- Program: Standard Grant

Development of Vacancy Rate Prediction Methods for Apartments with Prediction Accuracy and Interpretability
- Grant number: 23K01333
- Fiscal year: 2023
- Amount: $24,000
- Program: Grant-in-Aid for Scientific Research (C)

Non-Contact Sleep Stage Estimation: Machine Learning in Multi-Imbalance Data for Improvements in Accuracy and Interpretability
- Grant number: 22KJ1367
- Fiscal year: 2023
- Amount: $24,000
- Program: Grant-in-Aid for JSPS Fellows

CAREER: Small Data in a Big World: Balancing Interpretability and Generalizability for Data Integration in Clinical Neuroscience
- Grant number: 2322823
- Fiscal year: 2023
- Amount: $24,000
- Program: Continuing Grant

CRII: III: Pursuing Interpretability in Utilitarian Online Learning Models
- Grant number: 2245946
- Fiscal year: 2023
- Amount: $24,000
- Program: Standard Grant

Improving the interpretability of genetic studies of major depressive disorder to identify risk genes
- Grant number: 10504696
- Fiscal year: 2022
- Amount: $24,000