BIAS: Responsible AI for Labour Market Equality

Basic Information

  • Award number:
    ES/T012382/1
  • Principal investigator:
  • Amount:
    $647,400
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project type:
    Research Grant
  • Fiscal year:
    2020
  • Funding country:
    United Kingdom
  • Duration:
    2020 to (no data)
  • Status:
    Completed

Project Abstract

What do we study? BIAS is an interdisciplinary project to understand and tackle the role of AI algorithms in shaping ethnic and gender inequalities in the increasingly digitised labour market. The project seeks to understand and minimise gender and ethnic biases in the AI-driven labour market processes of job advertising, hiring and professional networking. We further aim to develop 'responsible' AI that mitigates biases and the attendant inequalities by designing AI algorithms and development protocols that are sensitive to such biases. The empirical context of our investigation includes these labour market processes in organisations and on digital job platforms.

Why is it important? Labour market inequalities need to be tackled because they thwart equitable and sustainable socio-economic development. In both the UK and Canada, access to work and its rewards remain patterned around social distinctions such as gender, race and ethnicity. The deployment of AI in labour market processes is known to exacerbate such inequalities by perpetuating existing gender and ethnic biases in hiring and career progression. From a policy point of view, our proposal speaks directly to the following priorities in both countries: the UK's Industrial Strategy, which has 'putting the UK at the forefront of the AI and data revolution' as one of its grand challenges; the UK's AI Sector Deal, which aims to 'boost the UK's global position as a leader in developing AI technologies'; and the Canadian SSHRC's goal of tackling persistent demographic (ethnic and gender) disparities in workforce selection and development.

Why is it unique? Although we know that AI can exacerbate biases in the labour market, we do not know how AI can mitigate them. The project develops responsible and trustworthy AI that reduces labour market inequalities by tackling gender and ethnic/racial biases in job advertising, hiring and professional networking processes. It enhances capacity development for responsible AI applications through the training of early career researchers, builds on existing UK-Canada research partnerships and develops new ones, and produces outputs for multiple stakeholders (researchers, companies and policy units). It speaks to multiple objectives of the funding call.

Why is it intellectually original and challenging? The project is interdisciplinary: it integrates and cuts across three distinct streams of research (the first two from the social sciences, the third from the computational and mathematical sciences). The first comprises studies on the socio-economic antecedents of labour market inequality, which underscore the persistence and prominence of gender and ethnic/racial inequalities in the UK and Canada. The second comprises studies from business management (digitalisation, technology/AI adoption and human resource management); it emphasises that while we know the use of AI in job advertising, hiring and professional networking can strengthen ethnic and gender bias in these processes, we do not know what those biases are or how AI algorithms can mitigate, as opposed to merely (re)produce, them. The third draws on computational statistics (Bayesian statistics/machine learning) to design new AI algorithms and development protocols that integrate human and machine inputs/outputs.

What is the work plan? Our project comprises two interlinked work packages that (1) investigate the different dimensions of bias from a multi-stakeholder perspective (e.g. employer, employee, digital platform developer), through in-depth data mining and qualitative investigations, when AI algorithms are used in the labour market processes of job advertising, hiring and professional networking; and (2) test and design new AI algorithms to mitigate those biases, and create protocols for their development and implementation.
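The project's published work on job-ad language (e.g. "Balancing Gender Bias in Job Advertisements With Text-Level Bias Mitigation") targets this kind of text-level bias. As a toy illustration only — the word lists and scoring below are hypothetical simplifications for exposition, not the project's actual Bayesian/machine-learning methods — a gendered-wording skew score for an advertisement might be sketched as:

```python
import re

# Hypothetical mini-lexicons of stereotypically gender-coded ad language.
# Real studies use much larger, empirically validated word lists.
MASCULINE = {"competitive", "dominant", "leader", "ambitious", "assertive"}
FEMININE = {"supportive", "collaborative", "nurturing", "interpersonal", "loyal"}

def gender_skew(ad_text: str) -> float:
    """Score in [-1, 1]: positive = masculine-coded wording dominates,
    negative = feminine-coded wording dominates, 0 = balanced or none."""
    tokens = re.findall(r"[a-z]+", ad_text.lower())
    m = sum(t in MASCULINE for t in tokens)
    f = sum(t in FEMININE for t in tokens)
    total = m + f
    return 0.0 if total == 0 else (m - f) / total

ad = "We seek an ambitious, competitive leader with collaborative spirit."
print(gender_skew(ad))  # 3 masculine- vs 1 feminine-coded word -> 0.5
```

A mitigation step in the same spirit would then flag or suggest rewording for ads whose skew exceeds a chosen threshold.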

Project Outcomes

Journal articles (10)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
The Sanction of Authority: Promoting Public Trust in AI
  • DOI:
  • Published:
    2021
  • Journal:
  • Impact factor:
    0
  • Authors:
    Bran Knowles
  • Corresponding author:
    Bran Knowles
Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving
  • DOI:
    10.1609/aaai.v36i11.21443
  • Published:
    2021-12
  • Journal:
  • Impact factor:
    0
  • Authors:
    Lei Ding;Dengdeng Yu;Jinhan Xie;Wenxing Guo;Shenggang Hu;Meichen Liu;Linglong Kong;Hongsheng Dai;Yanchun Bao;Bei Jiang
  • Corresponding author:
    Lei Ding;Dengdeng Yu;Jinhan Xie;Wenxing Guo;Shenggang Hu;Meichen Liu;Linglong Kong;Hongsheng Dai;Yanchun Bao;Bei Jiang
Balancing Gender Bias in Job Advertisements With Text-Level Bias Mitigation
  • DOI:
    10.3389/fdata.2022.805713
  • Published:
    2022
  • Journal:
  • Impact factor:
    3.1
  • Authors:
    Hu S;Al-Ani JA;Hughes KD;Denier N;Konnikov A;Ding L;Xie J;Hu Y;Tarafdar M;Jiang B;Kong L;Dai H
  • Corresponding author:
    Dai H
Humble AI
  • DOI:
    10.1145/3587035
  • Published:
    2023-08
  • Journal:
  • Impact factor:
    22.7
  • Authors:
    Bran Knowles;J. D’cruz;John T. Richards;Kush R. Varshney
  • Corresponding author:
    Bran Knowles;J. D’cruz;John T. Richards;Kush R. Varshney
The Many Facets of Trust in AI: Formalizing the Relation Between Trust and Fairness, Accountability, and Transparency
  • DOI:
    10.48550/arxiv.2208.00681
  • Published:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Knowles B
  • Corresponding author:
    Knowles B

Other publications by Monideepa Tarafdar

IN ORGANIZATIONAL ADOPTION OF ELECTRONIC COMMERCE: THE NEED FOR AN INTERDISCIPLINARY PERSPECTIVE
  • DOI:
  • Published:
    2017
  • Journal:
  • Impact factor:
    0
  • Authors:
    Monideepa Tarafdar
  • Corresponding author:
    Monideepa Tarafdar
Role of Social Media in Social Protest Cycles: A Sociomaterial Examination
  • DOI:
    10.1287/isre.2021.1013
  • Published:
    2021
  • Journal:
  • Impact factor:
    0
  • Authors:
    Monideepa Tarafdar;Deepa Ray
  • Corresponding author:
    Deepa Ray
Examining alignment between supplier management practices and information systems strategy
  • DOI:
    10.1108/14635771211258034
  • Published:
    2012
  • Journal:
  • Impact factor:
    0
  • Authors:
    Sufian Qrunfleh;Monideepa Tarafdar;T. S. Ragu
  • Corresponding author:
    T. S. Ragu
ICT-Based Communication Events as Triggers of Stress: A Mixed Methods Study
Proximal and distal antecedents of problematic information technology use in organizations
  • DOI:
  • Published:
    2021
  • Journal:
  • Impact factor:
    5.9
  • Authors:
    H. Pirkkalainen;Monideepa Tarafdar;Markus Salo;Markus Makkonen
  • Corresponding author:
    Markus Makkonen


Similar Overseas Grants

Museum Visitor Experience and the Responsible Use of AI to Communicate Colonial Collections
  • Award number:
    AH/Z505547/1
  • Fiscal year:
    2024
  • Funding amount:
    $647,400
  • Project type:
    Research Grant
FRAIM: Framing Responsible AI Implementation and Management
  • Award number:
    AH/Z505596/1
  • Fiscal year:
    2024
  • Funding amount:
    $647,400
  • Project type:
    Research Grant
"Ethical Review to Support Responsible AI in Policing - A Preliminary Study of West Midlands Police's Specialist Data Ethics Review Committee"
  • Award number:
    AH/Z505626/1
  • Fiscal year:
    2024
  • Funding amount:
    $647,400
  • Project type:
    Research Grant
Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI
  • Award number:
    AH/Z505572/1
  • Fiscal year:
    2024
  • Funding amount:
    $647,400
  • Project type:
    Research Grant
Towards Embedding Responsible AI in the School System: Co-Creation with Young People
  • Award number:
    AH/Z505560/1
  • Fiscal year:
    2024
  • Funding amount:
    $647,400
  • Project type:
    Research Grant
Building AI-Powered Responsible Workforce by Integrating Large Language Models into Computer Science Curriculum
  • Award number:
    2336061
  • Fiscal year:
    2024
  • Funding amount:
    $647,400
  • Project type:
    Standard Grant
Collaborative Research: FW-HTF-RL: Trapeze: Responsible AI-assisted Talent Acquisition for HR Specialists
  • Award number:
    2326193
  • Fiscal year:
    2023
  • Funding amount:
    $647,400
  • Project type:
    Standard Grant
Collaborative Research: FW-HTF-RL: Trapeze: Responsible AI-assisted Talent Acquisition for HR Specialists
  • Award number:
    2326194
  • Fiscal year:
    2023
  • Funding amount:
    $647,400
  • Project type:
    Standard Grant
Responsible AI for responsible lenders
  • Award number:
    10076260
  • Fiscal year:
    2023
  • Funding amount:
    $647,400
  • Project type:
    Grant for R&D
Improved biomedical data harmonisation, the cornerstone of trustworthy and responsible AI in Healthcare
  • Award number:
    10076467
  • Fiscal year:
    2023
  • Funding amount:
    $647,400
  • Project type:
    Grant for R&D