Algorithmic Watchdog for Differential Privacy: From Theory to Practice


Basic Information

  • Grant number:
    RGPIN-2022-05283
  • Principal investigator:
  • Amount:
    $18,200
  • Host institution:
  • Host institution country:
    Canada
  • Program:
    Discovery Grants Program - Individual
  • Fiscal year:
    2022
  • Funding country:
    Canada
  • Duration:
    2022-01-01 to 2023-12-31
  • Status:
    Completed

Project Abstract

Armed with powerful advances in machine learning (ML), the ability of an adversary to gather personal information from an individual's expanding digital footprint is outstripping anyone's capability to keep their information private. While the data collected can have tremendous benefit for consumers via technologies built on ML, this benefit must be tempered with meaningful assurances of privacy. The de facto standard for reasoning about such assurances in ML is differential privacy (DP). However, despite its widespread adoption in governments and corporations, there is no standardized approach for evaluating and monitoring DP technologies. Incorrectly specified DP parameters may incur a prohibitively high utility loss and, at worst, provide no privacy against an adversary. Today, these potential harms are silent: there are no automated methods for tuning and monitoring algorithms for DP misuse. This research fills this gap by creating mathematical methods that precisely characterize the risks of private information leakage and reduced utility in existing DP algorithms. These methods will constitute a rigorous blueprint for scalable "algorithmic watchdogs" that monitor DP technologies for misuse and unintended harm. Algorithmic watchdogs will reduce the potential harm of deploying DP in applications that use individual-level sensitive data, such as Canada's census records. Moreover, they will fundamentally impact how DP is implemented by governments (e.g., Statistics Canada) and companies, and will help developers optimally tune the parameters of private learning algorithms, monitor DP technologies for performance loss, and alert for potential misuse. The proposed work has two key novelties. First, we propose to jointly characterize the "operational privacy" and utility guarantees of existing DP algorithms by applying powerful tools from information theory; these two aspects must be tracked simultaneously to avoid misuse in DP deployments. The key advantage of the information-theoretic approach is that it mathematically delineates the fundamental limits of private learning in an algorithm-independent manner. These limits, in turn, serve as a rigorous blueprint for designing and benchmarking practical algorithms. Second, the research cuts across the DP development stack: it prevents misuse all the way from the (many) mathematical definitions of DP to the actual deployment of the technology in open-source DP software. The overall long-term goal of this initiative is to provide data scientists in governments and corporations with a set of theoretically grounded algorithmic tools that guarantee meaningful, operational privacy with provably minimal unintended harm. This research will help government data scientists understand how operational privacy affects the accuracy of ML tasks, and how to optimally select parameters of learning algorithms in two popular DP applications: queries to statistical databases and training ML models.
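The privacy/utility tension the abstract describes can be seen in the simplest DP primitive for statistical-database queries, the Laplace mechanism. The sketch below is a textbook illustration, not the project's proposed method: the parameter epsilon directly trades privacy for accuracy, which is exactly what makes mis-tuned parameters silently harmful.

```python
import random

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Answer a counting query with epsilon-differential privacy.

    Adds Laplace noise of scale sensitivity/epsilon: halving epsilon
    (stronger privacy) doubles the expected error -- the utility loss
    that incorrectly specified DP parameters can inflate unnoticed.
    """
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the difference of two independent
    # exponential samples with mean `scale`.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Smaller epsilon -> noisier answers to the same query.
random.seed(0)
print(laplace_count(100, epsilon=1.0))
print(laplace_count(100, epsilon=0.1))
```

An "algorithmic watchdog" in the abstract's sense would monitor exactly this knob: whether the deployed epsilon yields acceptable error for the analyst while still bounding what an adversary can infer.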

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)


Other publications by Asoodeh, Shahab

Local Differential Privacy Is Equivalent to Contraction of an $f$-Divergence
Privacy Amplification of Iterative Algorithms via Contraction Coefficients
Model Projection: Theory and Applications to Fair Machine Learning
Information Extraction Under Privacy Constraints
  • DOI:
    10.3390/info7010015
  • Publication date:
    2016-03-01
  • Journal:
  • Impact factor:
    3.1
  • Authors:
    Asoodeh, Shahab; Diaz, Mario; Linder, Tamas
  • Corresponding author:
    Linder, Tamas
Estimation Efficiency Under Privacy Constraints
  • DOI:
    10.1109/tit.2018.2865558
  • Publication date:
    2019-03-01
  • Journal:
  • Impact factor:
    2.5
  • Authors:
    Asoodeh, Shahab; Diaz, Mario; Linder, Tamas
  • Corresponding author:
    Linder, Tamas


Other grants held by Asoodeh, Shahab

Algorithmic Watchdog for Differential Privacy: From Theory to Practice
  • Grant number:
    DGECR-2022-00429
  • Fiscal year:
    2022
  • Funding amount:
    $18,200
  • Program:
    Discovery Launch Supplement

Similar International Grants

Algorithmic Watchdog for Differential Privacy: From Theory to Practice
  • Grant number:
    DGECR-2022-00429
  • Fiscal year:
    2022
  • Funding amount:
    $18,200
  • Program:
    Discovery Launch Supplement
Improving patient safety in radiation therapy with the Watchdog real-time treatment delivery verification system
  • Grant number:
    nhmrc : GNT1130469
  • Fiscal year:
    2017
  • Funding amount:
    $18,200
  • Program:
    Project Grants
Improving patient safety in radiation therapy with the Watchdog real-time treatment delivery verification system
  • Grant number:
    nhmrc : 1130469
  • Fiscal year:
    2017
  • Funding amount:
    $18,200
  • Program:
    Project Grants
Acquisition of Watchdog Monitoring System
  • Grant number:
    8947227
  • Fiscal year:
    2015
  • Funding amount:
    $18,200
  • Program:
TC: Small: WATCHDOG: Hardware-Assisted Prevention of All Use-After-Free Security Vulnerabilities
  • Grant number:
    1116682
  • Fiscal year:
    2011
  • Funding amount:
    $18,200
  • Program:
    Standard Grant
Embedded Watchdog Agent/Life Cycle Unit for Performance Assessment and Prediction in Mobile Components
  • Grant number:
    5416947
  • Fiscal year:
    2003
  • Funding amount:
    $18,200
  • Program:
    Research Grants
INSTALLATION OF A CONTROLLED ACCESS WATCHDOG SYSTEM
  • Grant number:
    3648528
  • Fiscal year:
    1990
  • Funding amount:
    $18,200
  • Program: