Mathematics of Adversarial Attacks
Basic Information
- Grant number: EP/V046527/1
- Principal investigator:
- Amount: $257,500
- Host institution:
- Host institution country: United Kingdom
- Grant type: Research Grant
- Fiscal year: 2021
- Funding country: United Kingdom
- Duration: 2021 to (no data)
- Status: Completed
- Source:
- Keywords:
Project Abstract
This proposal is built on two observations:

1. Empirical experiments have shown that even the most sophisticated and highly regarded artificial intelligence (AI) tools can be fooled by carefully constructed examples. For example, given a picture of a dog, we can change the picture in a way that is imperceptible to the human eye but makes the AI system change its mind and categorize the picture as a chicken. Such *adversarial attacks* can be shockingly successful, and they clearly have implications for safety, security and ethics.

2. Although many mathematical scientists are contributing to the exciting and fast-moving body of research in AI and deep learning, the main theoretical focus so far has been on approximation power (can we build systems that satisfy a desired list of properties?) and optimization (what is the best way to fine-tune the network details?).

There is an urgent, unmet need for actionable understanding around adversarial attacks: are they inevitable, are they identifiable, and do they generalize to other forms of attack? This motivates the themes of the proposal: Inevitability, Identifiability, and Escalation.

Here are three examples of the types of questions that we will address:

A) Is it inevitable that any AI system will be susceptible to adversarial attack (in which case we should assign resources to identifying attacks rather than attempting to eliminate them)?

B) Typical modern AI hardware is fast but has low accuracy (e.g., each computation may carry only 3 digits); can such imprecision be exploited by new forms of adversarial attack?

C) How secure are AI systems against malicious interventions that, rather than attacking the input data, make covert alterations to the parameters in the system?

We will, for the first time, develop and extend highly relevant ideas from the field of mathematics (numerical analysis and approximation theory) to produce concepts and tools that allow us to appreciate fundamental limitations of AI technology, and to identify when these limitations are being exposed, thereby contributing to issues of security, interpretability and accountability.

The proposal will involve a post-doctoral research assistant, who will gain valuable skills in a high-demand area. Also, because issues of trust, privacy and security are central to this project, public engagement activities are built into the plans. A key route to creating lasting impact is the development of practical case studies that highlight the theory that we develop. This will involve the creation of computer code that uses industry-standard AI platforms and data sets; it is an activity that requires specialist skills in coding and data science, and a qualified software engineer will be employed for this task.

Overall, the ideas emerging from this project will transform our understanding of AI systems by using currently overlooked techniques from computational mathematics. Furthermore, by showing that there are challenges at the heart of AI that can be tackled by computational and applied mathematicians, we plan to transform the scale and quality of research interaction at this important mathematics-computer science interface.
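As a concrete illustration of the first observation, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way such imperceptible perturbations are constructed. It is not taken from the proposal itself: the tiny stand-in network, the random input, and the step size eps are placeholder assumptions chosen only to keep the example self-contained and runnable.

```python
# Minimal FGSM sketch in PyTorch. The model, input and eps below are
# placeholder assumptions, not artifacts of the proposal.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier over flattened 8x8 single-channel "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

x = torch.rand(1, 1, 8, 8)   # stand-in for the "dog" picture
label = torch.tensor([3])    # its (assumed) correct class
loss_fn = nn.CrossEntropyLoss()

# Gradient of the loss with respect to the input pixels.
x.requires_grad_(True)
loss = loss_fn(model(x), label)
loss.backward()

# Step each pixel by +/- eps in the direction that increases the loss.
eps = 0.05  # small enough to be visually negligible
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("clean prediction:    ", model(x).argmax(dim=1).item())
    print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
    print("max pixel change:    ", (x_adv - x).abs().max().item())
```

Against a trained image classifier, the same few lines routinely flip the predicted label while changing each pixel by at most eps; this is exactly the kind of vulnerability the proposal's Inevitability and Identifiability themes investigate.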
Project Outcomes
- Journal articles: 9
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
The Feasibility and Inevitability of Stealth Attacks
- DOI: 10.1093/imamat/hxad027
- Published: 2021-06
- Journal:
- Impact factor: 0
- Authors: I. Tyukin; D. Higham; Eliyas Woldegeorgis; Alexander N Gorban
- Corresponding authors: I. Tyukin; D. Higham; Eliyas Woldegeorgis; Alexander N Gorban

Dynamic Katz and related network measures
- DOI: 10.1016/j.laa.2022.08.022
- Published: 2022-09-26
- Journal:
- Impact factor: 1.1
- Authors: Arrigo, Francesca; Higham, Desmond J.; Wood, Ryan
- Corresponding author: Wood, Ryan

Can We Rely on AI?
- DOI: 10.48550/arxiv.2308.15092
- Published: 2023-08
- Journal:
- Impact factor: 0
- Author: D. Higham
- Corresponding author: D. Higham

Adversarial ink: componentwise backward error attacks on deep learning
- DOI: 10.1093/imamat/hxad017
- Published: 2023
- Journal:
- Impact factor: 1.2
- Author: Beerens L
- Corresponding author: Beerens L

The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning
- DOI: 10.48550/arxiv.2309.07072
- Published: 2023
- Journal:
- Impact factor: 0
- Author: Bastounis A
- Corresponding author: Bastounis A
Other Publications by Desmond Higham
Other Grants by Desmond Higham
MOLTEN: Mathematics Of Large Technological Evolving Networks
- Grant number: EP/I016058/1
- Fiscal year: 2011
- Amount: $257,500
- Grant type: Research Grant

Complex Brain Networks in Health, Development and Disease
- Grant number: G0601353/1
- Fiscal year: 2007
- Amount: $257,500
- Grant type: Research Grant

Theory and Tools for Complex Biological Systems
- Grant number: EP/E049370/1
- Fiscal year: 2007
- Amount: $257,500
- Grant type: Research Grant
Similar International Grants

Collaborative Research: CIF: Small: Robust Machine Learning under Sparse Adversarial Attacks
- Grant number: 2236484
- Fiscal year: 2023
- Amount: $257,500
- Grant type: Standard Grant

Collaborative Research: CIF: Small: Robust Machine Learning under Sparse Adversarial Attacks
- Grant number: 2236483
- Fiscal year: 2023
- Amount: $257,500
- Grant type: Standard Grant

EXCELLENCE in RESEARCH: SECURING MACHINE LEARNING AGAINST ADVERSARIAL ATTACKS FOR CONNECTED AND AUTONOMOUS VEHICLES
- Grant number: 2200457
- Fiscal year: 2022
- Amount: $257,500
- Grant type: Standard Grant

Boosting Robustness of Deep Neural Networks against Sparsity-aware Adversarial Attacks
- Grant number: 580570-2022
- Fiscal year: 2022
- Amount: $257,500
- Grant type: Alliance Grants

Investigating adversarial attacks and defences in federated learning.
- Grant number: 2554063
- Fiscal year: 2021
- Amount: $257,500
- Grant type: Studentship

CICI: SIVD: Discover and defend cyber vulnerabilities of deep learning medical diagnosis models to adversarial attacks
- Grant number: 2115082
- Fiscal year: 2021
- Amount: $257,500
- Grant type: Standard Grant

Collaborative Research: SaTC: CORE: Small: Securing IoT and Edge Devices under Audio Adversarial Attacks
- Grant number: 2114161
- Fiscal year: 2021
- Amount: $257,500
- Grant type: Standard Grant

Collaborative Research: SaTC: CORE: Small: Securing IoT and Edge Devices under Audio Adversarial Attacks
- Grant number: 2114220
- Fiscal year: 2021
- Amount: $257,500
- Grant type: Standard Grant

Learning Internal Representations Robust against Adversarial Attacks
- Grant number: 20K19824
- Fiscal year: 2020
- Amount: $257,500
- Grant type: Grant-in-Aid for Early-Career Scientists

SaTC: CORE: Medium: Hidden Rules in Neural Networks as Attacks and Adversarial Defenses
- Grant number: 1949650
- Fiscal year: 2020
- Amount: $257,500
- Grant type: Standard Grant