Mathematical Principles for Neural Network Design
Basic Information
- Approval number: RGPIN-2021-03864
- Principal investigator:
- Amount: $21,100
- Host institution:
- Host institution country: Canada
- Project category: Discovery Grants Program - Individual
- Fiscal year: 2021
- Funding country: Canada
- Project period: 2021-01-01 to 2022-12-31
- Project status: Completed
- Source:
- Keywords:
Project Abstract
Even as neural networks have shown their power in solving complex problems, their behavior remains poorly understood. Neural networks are now used for many purposes, from autonomous driving to medical image analysis. In each of these situations, a different set of properties is needed, such as good generalization to novel data or robustness to noise. Currently, there is no principled way to design neural networks to possess the particular set of characteristics needed for the task at hand. This lack of theoretical grounding means that instead of optimal algorithms designed from rigorous understanding, successes are incremental and arise from trial-and-error experimentation, while failures can be catastrophic and come as a surprise.

The long-term goal of this research program is to gain a formal mathematical understanding of how design choices in neural networks affect their performance, and to use these theoretical results to derive actionable insights for practitioners. Our research has the following short-term objectives.

Objective 1: Quantifying inductive biases. The structure of a neural network and its optimization process determine the sets of functions that the network can express and learn, essentially encoding an inductive bias towards certain functions and away from others. In this objective, we will derive theoretical results on how the design of a neural network influences this inductive bias.

Objective 2: Matching algorithms to data. This objective will consider how the inductive biases of different neural networks can be leveraged to improve deep learning algorithms. We will derive methods for identifying the inductive biases that are needed to solve a particular task, and will design learning methods that give a high degree of control over which functions a neural network will learn.

Objective 3: Improving security. We will use our mathematical understanding of inductive biases to improve the security of deep learning algorithms. We will show when it is possible to extract information about the parameters of a neural network from the function it computes, as well as how to guard against this. Our work will protect the privacy of neural networks and the data used to train them, and will prevent adversarial attacks.

Overall, this research will provide much-needed tools for the principled design of neural networks, allowing for significant increases in performance and reliability from algorithms that are increasingly essential across society. This will have an impact on fields from robotics to energy. Understanding the mathematical principles behind deep learning innovation will also help maintain Canada's preeminent position in AI. Our work will train HQP to be leading innovators in the intersection of deep learning theory and engineering, a synergistic combination of skills that is much in demand within both academia and industry.
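The notion of an inductive bias in Objective 1 can be illustrated empirically. The sketch below (a minimal NumPy example; the layer sizes, weight distribution, and sampling range are illustrative assumptions, not from the grant) counts the distinct ReLU activation patterns a small random network realizes on sampled inputs, in the spirit of "Deep ReLU Networks Have Surprisingly Few Activation Patterns" (Hanin & Rolnick, 2019): the realized count is typically far below the combinatorial ceiling, one concrete sense in which architecture biases the functions a network expresses.

```python
import numpy as np

def activation_patterns(weights, biases, inputs):
    """Return the set of ReLU activation patterns realized on `inputs`.

    Each pattern records, for every hidden unit, whether its
    pre-activation is positive (1) or not (0). Counting distinct
    patterns gauges how much of the network's nominal expressivity
    is actually used.
    """
    patterns = set()
    for x in inputs:
        h, bits = x, []
        for W, b in zip(weights, biases):
            z = W @ h + b
            bits.extend((z > 0).astype(int).tolist())
            h = np.maximum(z, 0)  # ReLU
        patterns.add(tuple(bits))
    return patterns

rng = np.random.default_rng(0)
# A small 2 -> 8 -> 8 network with random Gaussian weights.
sizes = [2, 8, 8]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

# Sample many inputs; with 16 hidden units the combinatorial ceiling
# is 2**16 patterns, but the realized count is far smaller.
xs = rng.uniform(-3, 3, size=(20000, 2))
pats = activation_patterns(weights, biases, xs)
print(len(pats), "distinct activation patterns out of", 2 ** 16, "possible")
```

Varying the depth, width, or weight scale and re-running gives a rough empirical handle on how those design choices shift the bias.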
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other publications by Rolnick, David
Aligning artificial intelligence with climate change mitigation
- DOI: 10.1038/s41558-022-01377-7
- Publication date: 2022-06-01
- Journal:
- Impact factor: 30.7
- Authors: Kaack, Lynn H.; Donti, Priya L.; Rolnick, David
- Corresponding author: Rolnick, David

Why Does Deep and Cheap Learning Work So Well?
- DOI: 10.1007/s10955-017-1836-5
- Publication date: 2017-09-01
- Journal:
- Impact factor: 1.6
- Authors: Lin, Henry W.; Tegmark, Max; Rolnick, David
- Corresponding author: Rolnick, David

Hidden Symmetries of ReLU Networks
- DOI:
- Publication date: 2023
- Journal:
- Impact factor: 0
- Authors: Grigsby, Elisenda; Lindsey, Kathryn; Rolnick, David
- Corresponding author: Rolnick, David

Randomized Experimental Design via Geographic Clustering
- DOI: 10.1145/3292500.3330778
- Publication date: 2019
- Journal:
- Impact factor: 0
- Authors: Rolnick, David; Aydin, Kevin; Pouget-Abadie, Jean; Kamali, Shahab; Mirrokni, Vahab; Najmi, Amir
- Corresponding author: Najmi, Amir

Deep ReLU Networks Have Surprisingly Few Activation Patterns
- DOI:
- Publication date: 2019
- Journal:
- Impact factor: 0
- Authors: Hanin, Boris; Rolnick, David
- Corresponding author: Rolnick, David
Other grants by Rolnick, David
Mathematical Principles for Neural Network Design
- Approval number: RGPIN-2021-03864
- Fiscal year: 2022
- Funding amount: $21,100
- Project category: Discovery Grants Program - Individual

Mathematical Principles for Neural Network Design
- Approval number: DGECR-2021-00469
- Fiscal year: 2021
- Funding amount: $21,100
- Project category: Discovery Launch Supplement
Similar NSFC grants
First Principles-based construction of a photocatalytic system for PPCPs degradation with simultaneous nitrogen removal, and its electron distribution mechanism
- Approval number: 51778175
- Approval year: 2017
- Funding amount: ¥590,000
- Project category: General Program
Similar overseas grants
Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
- Approval number: 2312841
- Fiscal year: 2023
- Funding amount: $21,100
- Project category: Standard Grant

Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
- Approval number: 2312842
- Fiscal year: 2023
- Funding amount: $21,100
- Project category: Standard Grant

Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
- Approval number: 2312840
- Fiscal year: 2023
- Funding amount: $21,100
- Project category: Standard Grant

Mathematical Principles for Neural Network Design
- Approval number: RGPIN-2021-03864
- Fiscal year: 2022
- Funding amount: $21,100
- Project category: Discovery Grants Program - Individual

New Deep Neural Network Architectures for Blind Image Watermarking Based on the Information-Theoretic Principles
- Approval number: 547243-2020
- Fiscal year: 2022
- Funding amount: $21,100
- Project category: Alexander Graham Bell Canada Graduate Scholarships - Doctoral

Auditory Neuroscience - The architecture and principles governing neural responses to natural sounds
- Approval number: 2748747
- Fiscal year: 2022
- Funding amount: $21,100
- Project category: Studentship

New Deep Neural Network Architectures for Blind Image Watermarking Based on the Information-Theoretic Principles
- Approval number: 547243-2020
- Fiscal year: 2021
- Funding amount: $21,100
- Project category: Alexander Graham Bell Canada Graduate Scholarships - Doctoral

Elucidating the principles behind neural processing with application to neuromodulation and implant design
- Approval number: RGPIN-2017-06668
- Fiscal year: 2021
- Funding amount: $21,100
- Project category: Discovery Grants Program - Individual

Mathematical Principles for Neural Network Design
- Approval number: DGECR-2021-00469
- Fiscal year: 2021
- Funding amount: $21,100
- Project category: Discovery Launch Supplement