Algorithmic Fairness in Black-box Machine Learning Models
Basic Information
- Grant Number: RGPIN-2021-04378
- Principal Investigator:
- Amount: $17,500
- Host Institution:
- Host Institution Country: Canada
- Program: Discovery Grants Program - Individual
- Fiscal Year: 2022
- Funding Country: Canada
- Duration: 2022-01-01 to 2023-12-31
- Status: Completed
- Source:
- Keywords:
Project Abstract
Deep learning models are successful machine learning models that have seen incredible growth in recent years, with impressive results across many applications. They are used with increasing frequency for decision making in domains that affect people's lives, such as employment, education, policing, and loan approval. These uses raise concerns about algorithmic discrimination and have motivated the development of fairness-aware machine learning. Initial efforts in this fast-growing field focused on formalizing statistical measures of fairness that could be used to train new models. While these efforts were important first steps toward addressing fairness concerns in machine learning, applying them to deep learning models immediately raised challenges. The success of deep learning comes from a combination of efficient learning algorithms and a huge parametric space with hundreds of layers and millions of parameters. This huge, complex parametric space is why deep learning models are known as black-box models: we cannot see inside the algorithm or understand how it arrives at a decision. The main objective of this research program is to develop mathematical tools and algorithms for effective and efficient fairness-aware deep learning. The goal is to bring fairness-aware approaches to a level of maturity and reliability at which anyone can trust these automated systems without fear of discrimination. This research program will address the following: (a) Describe how data is biased and how such biases affect deep learning models. (b) Verify discrimination in trained deep learning models and specify how a trained model is unfair. (c) Deploy fairness guarantees in learning, either by imposing fairness constraints or by removing bias during learning. (d) Enforce fairness jointly in learning and decision making.
(e) Explain the final decisions to end users. Explaining a decision is a necessary step in validating the fairness of the process, and it can be done through a range of measures, from describing the decision-making process to end users to providing solutions for changing undesirable decisions. The development and analysis of methods against algorithmic discrimination in deep learning can be a key enabler of future scientific and technological progress. This research will contribute to Canada's position as a leader in artificial intelligence and fuel downstream applications, such as fair health care, that can help Canadians in their daily lives.
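To make one of the statistical fairness measures mentioned in the abstract concrete, here is a minimal sketch of demographic parity, one common such measure. It assumes a binary classifier and a binary protected attribute; the function name `demographic_parity_difference` and the toy loan-approval data are illustrative, not taken from the project.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    A value of 0 means the model predicts the positive outcome at the
    same rate for both groups (statistical parity); larger values
    indicate a larger disparity.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate, group 0
    rate_b = y_pred[group == 1].mean()  # positive rate, group 1
    return abs(rate_a - rate_b)

# Hypothetical loan-approval predictions for two demographic groups.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = approve, 0 = deny
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # binary protected attribute
print(demographic_parity_difference(y_pred, group))  # prints 0.5
```

Measures of this kind can be audited on a trained model's outputs alone, which is what makes them applicable even to black-box models; imposing them as constraints during training, as in objective (c), is the harder problem this program targets.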
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other Publications by Farnadi, Golnoosh
Lifted Hinge-Loss Markov Random Fields
- DOI: 10.1609/aaai.v33i01.33017975
- Published: 2019
- Journal:
- Impact Factor: 0
- Authors: Srinivasan, Sriram; Babaki, Behrouz; Farnadi, Golnoosh; Getoor, Lise
- Corresponding Author: Getoor, Lise
A taxonomy of weight learning methods for statistical relational learning
- DOI: 10.1007/s10994-021-06069-5
- Published: 2021
- Journal:
- Impact Factor: 7.5
- Authors: Srinivasan, Sriram; Dickens, Charles; Augustine, Eriq; Farnadi, Golnoosh; Getoor, Lise
- Corresponding Author: Getoor, Lise
Computational personality recognition in social media
- DOI: 10.1007/s11257-016-9171-0
- Published: 2016-06-01
- Journal:
- Impact Factor: 3.6
- Authors: Farnadi, Golnoosh; Sitaraman, Geetha; De Cock, Martine
- Corresponding Author: De Cock, Martine
Other Grants by Farnadi, Golnoosh
Algorithmic Fairness in Black-box Machine Learning Models
- Grant Number: DGECR-2021-00457
- Fiscal Year: 2021
- Amount: $17,500
- Program: Discovery Launch Supplement

Algorithmic Fairness in Black-box Machine Learning Models
- Grant Number: RGPIN-2021-04378
- Fiscal Year: 2021
- Amount: $17,500
- Program: Discovery Grants Program - Individual
Similar International Grants
Excellence in Research: Towards Data and Machine Learning Fairness in Smart Mobility
- Grant Number: 2401655
- Fiscal Year: 2024
- Amount: $17,500
- Program: Standard Grant

NSF-NSERC: Fairness Fundamentals: Geometry-inspired Algorithms and Long-term Implications
- Grant Number: 2342253
- Fiscal Year: 2024
- Amount: $17,500
- Program: Standard Grant

Sample Size calculations for UPDATing clinical prediction models to Ensure their accuracy and fairness in practice (SS-UPDATE)
- Grant Number: MR/Z503873/1
- Fiscal Year: 2024
- Amount: $17,500
- Program: Research Grant

CAREER: Information-Theoretic Measures for Fairness and Explainability in High-Stakes Applications
- Grant Number: 2340006
- Fiscal Year: 2024
- Amount: $17,500
- Program: Continuing Grant

Proactive Ex Ante Digital Platform Regulations and the Concept of "Fairness"
- Grant Number: 24K16261
- Fiscal Year: 2024
- Amount: $17,500
- Program: Grant-in-Aid for Early-Career Scientists

CAREER: Towards Fairness in the Real World under Generalization, Privacy and Robustness Challenges
- Grant Number: 2339198
- Fiscal Year: 2024
- Amount: $17,500
- Program: Continuing Grant

AF:RI:Small: Fairness in allocation and machine learning problems: algorithms and solution concepts
- Grant Number: 2334461
- Fiscal Year: 2024
- Amount: $17,500
- Program: Standard Grant

Financial Inclusion, Fairness and Stability in the AI Era (FinAI)
- Grant Number: EP/Z000378/1
- Fiscal Year: 2024
- Amount: $17,500
- Program: Research Grant

CRII: AF: RUI: Algorithmic Fairness for Computational Social Choice Models
- Grant Number: 2348275
- Fiscal Year: 2024
- Amount: $17,500
- Program: Standard Grant

CAREER: New Frameworks for Ethical Statistical Learning: Algorithmic Fairness and Privacy
- Grant Number: 2340241
- Fiscal Year: 2024
- Amount: $17,500
- Program: Continuing Grant