Attacks against Machine Learning in Structured Domains
Basic Information
- Grant number: 492020528
- Principal investigator:
- Amount: --
- Host institution:
- Host institution country: Germany
- Project category: Research Grants
- Fiscal year:
- Funding country: Germany
- Start/end dates:
- Project status: Ongoing
- Source:
- Keywords:
Project Abstract
Machine learning techniques are increasingly used in security-critical applications, such as the detection of malicious code and attacks. However, current learning algorithms are often vulnerable themselves and can be deceived by manipulated inputs. In recent years, a large number of new attack techniques against machine learning have thus been developed. With few exceptions, this research has focused on a simplified scenario: the attacks are conducted in the feature space of the learning algorithms only. By making small changes to vectors in this space, it becomes possible to influence the algorithms' decisions and provoke incorrect predictions. In practice, however, these attacks are only applicable if the manipulated vectors can be mapped back to real objects. For structured data, such as program code, file formats, and natural language, this inverse mapping from vectors to structures is almost never defined. Thus, the robustness of many security-critical applications cannot be investigated and tested with the majority of existing attacks.

The goal of this project is to explore the security of learning algorithms in structured domains and close a gap in current research. In contrast to previous work, a systematic understanding of the relationship between the problem space of the original data and the feature space will be developed. Two strategies will be pursued for this purpose: First, new inverse mappings for structured data will be explored and developed that replicate missing semantics and syntax in the problem space. Second, new attacks will be devised that operate directly on structured data and thus are not affected by feature mappings. Based on both strategies, new defenses can emerge that build on the interleaving of the problem space and feature space to realize more robust learning systems for computer security.
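To make the feature-space scenario described above concrete, here is a minimal sketch, not taken from the project itself. It assumes a toy linear detector over bag-of-tokens features (the vocabulary, weights, and sample below are hypothetical) and shows how a small feature-space perturbation flips the decision while producing a vector that no longer maps back to any real program.

```python
# Minimal illustration (hypothetical, not from the project): a feature-space
# attack on a toy linear detector over bag-of-tokens features.
import numpy as np

# Hypothetical vocabulary of code tokens and a bag-of-tokens feature map.
vocab = ["eval", "exec", "open", "print", "import"]

def to_features(tokens):
    """Map a token sequence (problem space) to a count vector (feature space)."""
    x = np.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            x[vocab.index(t)] += 1
    return x

# Toy linear detector: score = w . x + b, flagged as malicious if score > 0.
w = np.array([2.0, 2.0, 0.5, -1.0, 0.3])
b = -1.0

sample = ["import", "eval", "eval", "print"]   # problem-space object
x = to_features(sample)
print("original score:", w @ x + b)            # > 0: detected as malicious

# Feature-space attack: a small FGSM-like step against the linear score.
# The gradient of the score with respect to x is simply w.
eps = 1.5
x_adv = x - eps * np.sign(w)
print("perturbed score:", w @ x_adv + b)       # pushed below the threshold

# The catch: x_adv contains fractional and negative token counts, so no
# inverse mapping to a real program exists -- the gap in structured domains
# that this project addresses.
print("perturbed features:", x_adv)
```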
Project Results
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other publications by Professor Dr. Konrad Rieck
Other grants by Professor Dr. Konrad Rieck
Machine Learning and Digital Watermarking in Adversarial Environments
- Grant number: 393063728
- Fiscal year: 2017
- Funding amount: --
- Project category: Research Grants

Detection of Software Vulnerabilities using Machine Learning
- Grant number: 242913835
- Fiscal year: 2014
- Funding amount: --
- Project category: Research Grants
Similar overseas grants
Robust Defences against Adversarial Machine Learning for UAV Systems
- Grant number: LP230100083
- Fiscal year: 2024
- Funding amount: --
- Project category: Linkage Projects

Generative machine learning for narrow spectrum antibiotic discovery against Acinetobacter baumannii
- Grant number: 477936
- Fiscal year: 2023
- Funding amount: --
- Project category: Operating Grants

CSAMGuard: Leveraging Advanced Machine Learning to Protect Against CSAM Link Obfuscation
- Grant number: 10073540
- Fiscal year: 2023
- Funding amount: --
- Project category: Collaborative R&D

Excellence in Research: A Hierarchical Machine Learning Approach for Securing of NoC-Based MPSoCs Against Thermal Attacks
- Grant number: 2302537
- Fiscal year: 2023
- Funding amount: --
- Project category: Standard Grant

Applying advanced molecular biology, metabolomics and image analysis using machine-learning technology to improve wheat resistance against Fusarium head blight
- Grant number: 570375-2021
- Fiscal year: 2022
- Funding amount: --
- Project category: Alliance Grants

Realization of chip authentication circuit using a leak monitor and elucidation of resistance mechanism against machine learning attacks
- Grant number: 22K11959
- Fiscal year: 2022
- Funding amount: --
- Project category: Grant-in-Aid for Scientific Research (C)

Collaborative Research: SaTC: CORE: Small: Machine Learning for Cybersecurity: Robustness Against Concept Drift
- Grant number: 2154873
- Fiscal year: 2022
- Funding amount: --
- Project category: Continuing Grant

Collaborative Research: SaTC: CORE: Small: Machine Learning for Cybersecurity: Robustness Against Concept Drift
- Grant number: 2154874
- Fiscal year: 2022
- Funding amount: --
- Project category: Continuing Grant

EXCELLENCE in RESEARCH: SECURING MACHINE LEARNING AGAINST ADVERSARIAL ATTACKS FOR CONNECTED AND AUTONOMOUS VEHICLES
- Grant number: 2200457
- Fiscal year: 2022
- Funding amount: --
- Project category: Standard Grant

Machine against machine: towards a better understanding of future cyber physical system security
- Grant number: RGPIN-2019-07292
- Fiscal year: 2021
- Funding amount: --
- Project category: Discovery Grants Program - Individual