Boosting Robustness of Deep Neural Networks against Sparsity-aware Adversarial Attacks
Basic Information
- Grant number: 580570-2022
- Principal investigator: Atoofian, Ehsan
- Amount: $21,900
- Host institution:
- Host institution country: Canada
- Program category: Alliance Grants
- Fiscal year: 2022
- Funding country: Canada
- Duration: 2022-01-01 to 2023-12-31
- Project status: Completed
- Source:
- Keywords:
Project Abstract
Deep neural networks (DNNs) are one of the most prominent technologies of our time, achieving state-of-the-art performance in a wide range of real-world applications. Despite this success, DNNs are vulnerable to adversarial attacks: carefully crafted input perturbations have been shown to fool well-trained networks. This raises serious concerns in security-sensitive applications such as online banking and autonomous driving, where failure of a DNN to function correctly under malicious attack can lead to severe consequences, including identity theft, financial loss, and even endangered human lives. In this research project, we focus on a new type of adversarial attack that targets the energy consumption and latency of DNNs rather than their accuracy. Over the last few years, DNNs have been deployed on mobile devices, which have a limited power budget because they mostly operate on battery. Defending DNNs against energy-aware adversarial attacks is therefore crucial for the success of mobile computing. Exploiting sparse values in DNNs has emerged as an effective technique for improving the energy efficiency of machine learning algorithms in resource-constrained applications, so a perturbation that reduces sparsity increases both the energy and the execution time of a DNN. Despite significant advances in defending DNNs against adversarial attacks over the last few years, there is no existing defence against attacks that target energy through sparsity. To protect DNNs against these attacks, we propose to use the correlation between firing neurons and the predictions made by a DNN. Each set of neurons in a DNN is responsible for detecting a specific feature in an input. By monitoring the firing neurons during the inference phase, we can detect neurons that are activated maliciously for a given class. The outcome of this project will help the Canadian high-tech industry to detect and disable malicious attacks that inflate the energy consumption of computing systems.
In addition, the highly qualified personnel trained through this program will gain valuable experience in the area of robust DNNs, which in turn will help Canadian high-tech companies and give them an edge in robust computing.
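The detection idea described above can be illustrated with a minimal sketch. This is not the project's actual method or code; the layer structure, the firing-rate baseline, and the `margin` threshold are all hypothetical. The sketch flags an input whose post-ReLU firing rate (fraction of non-zero activations) exceeds the rate normally observed for the predicted class, which is exactly the signature of a perturbation that destroys sparsity to drive up energy and latency.

```python
# Illustrative sketch of sparsity monitoring during inference.
# Hypothetical names and thresholds; not the project's implementation.

def relu(pre_activations):
    """Standard ReLU; zeros are the source of activation sparsity."""
    return [max(0.0, v) for v in pre_activations]

def firing_rate(activations):
    """Fraction of neurons that fire (non-zero). Lower = sparser = cheaper."""
    return sum(1 for v in activations if v > 0.0) / len(activations)

def is_suspicious(per_layer_activations, baseline_rates, margin=0.15):
    """Flag an input whose firing rate exceeds the class baseline by
    `margin` in any layer -- the signature of a sparsity-aware attack."""
    return any(
        firing_rate(acts) > base + margin
        for acts, base in zip(per_layer_activations, baseline_rates)
    )

# Benign input: about half the neurons fire (typical post-ReLU sparsity).
benign = [relu([-1.0, 0.5, -0.3, 2.0, -0.7, 0.1, -2.0, 0.9])]
# Attacked input: a perturbation nudges nearly every pre-activation positive.
attacked = [relu([0.2, 0.5, 0.1, 2.0, 0.3, 0.1, 0.4, 0.9])]

baseline = [0.5]  # hypothetical firing rate measured offline for this class
print(is_suspicious(benign, baseline))    # False
print(is_suspicious(attacked, baseline))  # True
```

In a real deployment the baseline rates would be profiled per class on clean validation data, and the margin tuned to trade false alarms against missed attacks.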
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other publications by Atoofian, Ehsan
Other grants by Atoofian, Ehsan
Approximate Quantum Arithmetic Units
- Grant number: 580808-2022
- Fiscal year: 2022
- Amount: $21,900
- Program category: Alliance Grants
Similar Overseas Grants
Property-Driven Quality Assurance of Adversarial Robustness of Deep Neural Networks
- Grant number: 23K11049
- Fiscal year: 2023
- Amount: $21,900
- Program category: Grant-in-Aid for Scientific Research (C)
Learn, transfer, generate: Developing novel deep learning models for enhancing robustness and accuracy of small-scale single-cell RNA sequencing studies
- Grant number: 10535708
- Fiscal year: 2023
- Amount: $21,900
- Program category:
CAREER: IIS: RI: Foundations of Deep Neural Network Robustness and Efficiency
- Grant number: 2144960
- Fiscal year: 2022
- Amount: $21,900
- Program category: Continuing Grant
Collaborative Research: Foundations of Deep Learning: Theory, Robustness, and the Brain
- Grant number: 2134040
- Fiscal year: 2021
- Amount: $21,900
- Program category: Standard Grant
Collaborative Research: Foundations of Deep Learning: Theory, Robustness, and the Brain
- Grant number: 2134105
- Fiscal year: 2021
- Amount: $21,900
- Program category: Standard Grant
Collaborative Research: Foundations of Deep Learning: Theory, Robustness, and the Brain
- Grant number: 2134108
- Fiscal year: 2021
- Amount: $21,900
- Program category: Standard Grant
Robustness of Deep Learning Perception Models
- Grant number: 2579432
- Fiscal year: 2021
- Amount: $21,900
- Program category: Studentship
Collaborative Research: Foundations of Deep Learning: Theory, Robustness, and the Brain
- Grant number: 2134059
- Fiscal year: 2021
- Amount: $21,900
- Program category: Standard Grant
CAREER: Towards Better Understanding, Robustness, and Efficiency of Deep Learning
- Grant number: 2046710
- Fiscal year: 2021
- Amount: $21,900
- Program category: Continuing Grant
Robustness and uncertainty for deep neural network classification
- Grant number: 552283-2020
- Fiscal year: 2020
- Amount: $21,900
- Program category: University Undergraduate Student Research Awards