RI: Medium: Recognizing, Mitigating and Governing Bias in AI
Basic Information
- Award Number: 1763642
- Principal Investigator:
- Amount: $800,000
- Host Institution:
- Host Institution Country: United States
- Award Type: Standard Grant
- Fiscal Year: 2018
- Funding Country: United States
- Duration: 2018-09-01 to 2023-08-31
- Status: Completed
- Source:
- Keywords:
Project Abstract
Artificial Intelligence (AI) technologies mediate our interactions with the world and our daily decision making, ranging from shopping to hiring to surveillance. The development of rich AI algorithms able to process and learn from unparalleled amounts of data holds promise for making impartial, well-informed decisions. However, such systems also absorb human biases, such as gender stereotyping of activities and occupations. Left unchecked, they will perpetuate these biases on an unparalleled scale. A steady stream of press confirms that this is a widespread problem in real-world applications. This research brings together an interdisciplinary team to develop the science of AI bias. The findings will impact AI researchers and developers (through novel methodologies), computational social scientists (through a deeper study of human biases at web scale), educators and policy makers (through the comprehensive analysis of bias), and downstream users of AI technology. Compared to applications such as criminal risk scoring where fairness has traditionally been studied, modern AI systems are characterized by massive datasets, complex deep models and an unprecedented breadth of applications. This results in a wider spectrum of biases with complex propagation pathways, requiring an in-depth scientific investigation. The project develops the tools and techniques for recognizing, mitigating and governing bias in AI by combining expertise in deep learning, crowdsourcing and dataset curation, AI ethics, analyzing inference risk, web measurement, and science and technology studies. The component on recognizing bias includes an application of the Implicit Association Test combined with zero-shot learning to understand the societal bias of web corpora. Mitigating bias includes bridging active learning with research on adversarial examples for AI models. Governing bias includes a qualitative and quantitative study of downstream bias effects. 
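The bias-recognition component adapts the Implicit Association Test to text corpora; in the PI team's prior work this idea became the Word Embedding Association Test (WEAT), which scores how strongly two target word sets associate with two attribute sets in embedding space. A minimal illustrative sketch of a WEAT-style effect size follows; the toy 2-d vectors and set choices are invented for demonstration and are not real embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean(xs):
    return sum(xs) / len(xs)

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: association of target sets X, Y with
    attribute sets A, B (each a list of embedding vectors).
    Positive values mean X associates with A and Y with B."""
    def s(w):
        # differential association of vector w with A versus B
        return mean([cosine(w, a) for a in A]) - mean([cosine(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    pooled = sx + sy
    m = mean(pooled)
    sd = math.sqrt(sum((v - m) ** 2 for v in pooled) / (len(pooled) - 1))
    return (mean(sx) - mean(sy)) / sd

# Toy vectors: X leans toward A's direction and Y toward B's,
# mimicking a stereotypical association, so the score is positive.
A = [[1.0, 0.0], [0.9, 0.1]]
B = [[0.0, 1.0], [0.1, 0.9]]
X = [[0.8, 0.2], [0.7, 0.3]]
Y = [[0.2, 0.8], [0.3, 0.7]]
print(weat_effect_size(X, Y, A, B) > 0)  # True
```

Swapping the target sets flips the sign of the score, which is a useful sanity check when applying the metric to real corpora.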
The research threads are designed to be tightly connected: for example, recognizing curation bias in datasets leads to techniques for mitigating bias by enforcing group fairness in deep learning, and then to governing bias in deployed systems through bias observatories. The study will include advancements in machine learning (decomposing deep architectures, adapting reinforcement learning, exploring domain adaptation), human-computer interaction (developing novel active learning techniques, studying model interpretability), and digital ethnography (studying the effect of AI bias on culture, establishing an AI bias taxonomy). It will serve as a bridge between these fields, establishing tighter connections among them. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
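Enforcing group fairness, as in the mitigation thread, presupposes a measurable criterion. One common choice (illustrative here, not necessarily the project's own metric) is the demographic-parity gap: the largest difference in positive-prediction rate across demographic groups. A minimal sketch with made-up classifier outputs:

```python
def demographic_parity_gap(y_pred, groups):
    """Largest pairwise difference in positive-prediction rate
    across groups; 0.0 means exact demographic parity."""
    by_group = {}
    for pred, g in zip(y_pred, groups):
        by_group.setdefault(g, []).append(pred)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Toy binary predictions: group "a" receives positives 3/4 of the
# time, group "b" only 1/4, so the parity gap is 0.5.
y_pred = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(y_pred, groups))  # 0.5
```

In a training pipeline such a gap can be tracked per epoch or added as a regularization penalty, which is one standard way to operationalize "enforcing group fairness in deep learning".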
Project Outcomes
- Journal articles: 11
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation
- DOI: 10.1145/3531146.3533101
- Publication date: 2022
- Journal:
- Impact factor: 0
- Authors: Wang, Angelina;Ramaswamy, Vikram V;Russakovsky, Olga
- Corresponding author: Russakovsky, Olga

Mitigating dataset harms requires stewardship: Lessons from 1000 papers
- DOI:
- Publication date: 2021-08
- Journal:
- Impact factor: 0
- Authors: Kenny Peng;Arunesh Mathur;Arvind Narayanan
- Corresponding author: Kenny Peng;Arunesh Mathur;Arvind Narayanan

Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation
- DOI: 10.1109/cvpr42600.2020.00894
- Publication date: 2019-11
- Journal:
- Impact factor: 0
- Authors: Zeyu Wang;Klint Qinami;Yannis Karakozis;Kyle Genova;P. Nair;K. Hata;Olga Russakovsky
- Corresponding author: Zeyu Wang;Klint Qinami;Yannis Karakozis;Kyle Genova;P. Nair;K. Hata;Olga Russakovsky

Fair Attribute Classification through Latent Space De-biasing
- DOI: 10.1109/cvpr46437.2021.00918
- Publication date: 2020-12
- Journal:
- Impact factor: 0
- Authors: Vikram V. Ramaswamy;Sunnie S. Y. Kim;Olga Russakovsky
- Corresponding author: Vikram V. Ramaswamy;Sunnie S. Y. Kim;Olga Russakovsky

Understanding and Evaluating Racial Biases in Image Captioning
- DOI: 10.1109/iccv48922.2021.01456
- Publication date: 2021-06
- Journal:
- Impact factor: 0
- Authors: Dora Zhao;Angelina Wang;Olga Russakovsky
- Corresponding author: Dora Zhao;Angelina Wang;Olga Russakovsky
Other Publications by Arvind Narayanan
Obfuscated databases and group privacy
- DOI: 10.1145/1102120.1102135
- Publication date: 2005
- Journal:
- Impact factor: 0
- Authors: Arvind Narayanan;Vitaly Shmatikov
- Corresponding author: Vitaly Shmatikov

Dark Patterns at Scale
- DOI:
- Publication date: 2019
- Journal:
- Impact factor: 0
- Authors: Arunesh Mathur;Gunes Acar;Michael Friedman;Elena Lucherini;Jonathan R. Mayer;M. Chetty;Arvind Narayanan
- Corresponding author: Arvind Narayanan

BIG Cache Abstraction for Cache Networks
- DOI:
- Publication date: 2017
- Journal:
- Impact factor: 0
- Authors: Eman Ramadan;Arvind Narayanan;Zhi;Runhui Li;Gong Zhang
- Corresponding author: Gong Zhang

Characterizing the Use of Browser-Based Blocking Extensions To Prevent Online Tracking
- DOI:
- Publication date: 2018
- Journal:
- Impact factor: 0
- Authors: Arunesh Mathur;Jessica Vitak;Arvind Narayanan;M. Chetty
- Corresponding author: M. Chetty

No boundaries: data exfiltration by third parties embedded on web pages
- DOI: 10.2478/popets-2020-0070
- Publication date: 2020
- Journal:
- Impact factor: 0
- Authors: Gunes Acar;Steven Englehardt;Arvind Narayanan
- Corresponding author: Arvind Narayanan
Other Grants by Arvind Narayanan
CHS: Large: Collaborative Research: Pervasive Data Ethics for Computational Research
- Award Number: 1704444
- Fiscal Year: 2017
- Amount: $800,000
- Award Type: Standard Grant

CAREER: Measurement, Analysis, and Novel Applications of Blockchains
- Award Number: 1651938
- Fiscal Year: 2017
- Amount: $800,000
- Award Type: Continuing Grant

TWC: Small: Online tracking: Threat Detection, Measurement and Response
- Award Number: 1526353
- Fiscal Year: 2015
- Amount: $800,000
- Award Type: Standard Grant

TWC: Small: Addressing the challenges of cryptocurrencies: Security, anonymity, stability
- Award Number: 1421689
- Fiscal Year: 2014
- Amount: $800,000
- Award Type: Standard Grant
Similar Grants
Collaborative Research: CyberTraining: Implementation: Medium: Training Users, Developers, and Instructors at the Chemistry/Physics/Materials Science Interface
- Award Number: 2321102
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant

RII Track-4:@NASA: Bluer and Hotter: From Ultraviolet to X-ray Diagnostics of the Circumgalactic Medium
- Award Number: 2327438
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant

Collaborative Research: Topological Defects and Dynamic Motion of Symmetry-breaking Tadpole Particles in Liquid Crystal Medium
- Award Number: 2344489
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant

Collaborative Research: AF: Medium: The Communication Cost of Distributed Computation
- Award Number: 2402836
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Continuing Grant

Collaborative Research: AF: Medium: Foundations of Oblivious Reconfigurable Networks
- Award Number: 2402851
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Continuing Grant

Collaborative Research: CIF: Medium: Snapshot Computational Imaging with Metaoptics
- Award Number: 2403122
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant

Collaborative Research: SHF: Medium: Differentiable Hardware Synthesis
- Award Number: 2403134
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant

Collaborative Research: SHF: Medium: Enabling Graphics Processing Unit Performance Simulation for Large-Scale Workloads with Lightweight Simulation Methods
- Award Number: 2402804
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant

Collaborative Research: CIF-Medium: Privacy-preserving Machine Learning on Graphs
- Award Number: 2402815
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant

Collaborative Research: SHF: Medium: Tiny Chiplets for Big AI: A Reconfigurable-On-Package System
- Award Number: 2403408
- Fiscal Year: 2024
- Amount: $800,000
- Award Type: Standard Grant