Enabling a Responsible AI Ecosystem
Basic Information
- Grant No.: AH/X007146/1
- Principal Investigator:
- Amount: $7.9191M
- Host Institution:
- Host Institution Country: United Kingdom
- Project Type: Research Grant
- Fiscal Year: 2022
- Funding Country: United Kingdom
- Duration: 2022 to (no data)
- Status: Ongoing
- Source:
- Keywords:
Project Abstract
Problem Space: There is now a broad base of research in AI ethics, policy and law that can inform and guide efforts to construct a Responsible AI (R-AI) ecosystem, but three gaps must be bridged before this is achieved:
(a) Traditional disciplinary silos, boundaries and incentives that limit the successful generation and translation of R-AI knowledge must be addressed through new incentives and supportive infrastructure.
(b) The high barriers to adoption of existing research and creative work in R-AI must be removed so that public institutions, non-profits and industry (especially SMEs) can convert and embed R-AI into reliable, accessible and scalable practices and methods that can be broadly adopted.
(c) The primary actors empowered to research, articulate and adopt responsible AI standards must be more broadly representative of, and answerable to, the publics and communities most impacted by AI developments.
Approach: We will develop and deliver a UK-wide infrastructure that lays secure foundations for the bridges across these three gaps, so that AI in the UK is responsible, ethical and accountable by default.
The programme will be structured around three core pillars of work (Translation and Co-Construction; Embedding and Adoption; Answerability and Accountability) and four strategic delivery themes that establish programme coherence:
(1) AI for Humane Innovation: integrating within AI research the humanistic perspectives that enable the personal, cultural and political flourishing of human beings, by weaving historical, philosophical, literary and other humane arts into dialogue with AI communities of research, policy and practice.
(2) AI for Inspired Innovation: activities to infuse the AI ecosystem with more vibrant, imaginative and creative visions of R-AI futures.
(3) AI for Equitable Innovation: activities directing research and policy attention to ensuring that broader UK publics, particularly those marginalised within the digital economy, can expect more sustainable and equitable futures from AI development.
(4) AI for Resilient Innovation: uplifting research, policy and practice that ensures AI ameliorates growing threats to global and national security, the rule of law, liberty and social cohesion.
Team: Co-Directors Vallor and Luger will lead and deliver the programme alongside the Ada Lovelace Institute, with a cross-disciplinary team of Co-Is who will leverage their networks to broaden and diversify the R-AI community and enhance disciplinary engagement with R-AI, acting as translational interfaces to ensure AHRC programme communications speak effectively to underrepresented cohorts within their communities and disciplines. As partner, the BBC will support public engagement activities to ensure trust, breadth of reach and public legitimacy.
Activities: Through a comprehensive programme of translation, research and engagement activities we will:
(1) Support existing and foster new R-AI partnerships, connecting AI researchers, industry, policymakers and publics around the cross-cutting themes.
(2) Build broader responsible AI visions by developing infrastructure for translation of R-AI research, inviting ECRs and new voices from the arts, humanities and civil society to co-shape, interrogate and enrich visions of flourishing with AI.
(3) Learn from early R-AI work to surface and map the barriers, incentives and opportunities for making R-AI research responsive to the needs and challenges faced by policymakers, regulators, technologists and wider publics.
(4) Embed R-AI in policy and practice by conducting research and building capacity for translation of R-AI research into accessible and usable guidance for policymakers, industry leaders, SMEs and publics.
(5) Build trust in AI by rethinking accountability through three applied lenses: accountability to wider publics, answerability of current systems, and public mechanisms for recourse, via consultation, creative mechanisms and synthesis activities.
Project Outcomes
Journal Articles (4)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Reactive agency and technology
- DOI: 10.1007/s43681-023-00366-6
- Published: 2023
- Journal:
- Impact Factor: 0
- Author: Tollon F
- Corresponding Author: Tollon F
Typology of Risks of Generative Text-to-Image Models
- DOI: 10.1145/3600211.3604722
- Published: 2023-07
- Journal:
- Impact Factor: 0
- Authors: Charlotte M. Bird; Eddie L. Ungless; Atoosa Kasirzadeh
- Corresponding Authors: Charlotte M. Bird; Eddie L. Ungless; Atoosa Kasirzadeh
The Routledge Handbook of Philosophy of Responsibility
- DOI: 10.4324/9781003282242-43
- Published: 2023
- Journal:
- Impact Factor: 0
- Author: Vallor S
- Corresponding Author: Vallor S
Responsible Agency Through Answerability
- DOI: 10.1145/3597512.3597529
- Published: 2023
- Journal:
- Impact Factor: 0
- Author: Hatherall L
- Corresponding Author: Hatherall L
Other Publications by Shannon Vallor
Why Reliabilism Is not Enough: Epistemic and Moral Justification in Machine Learning
- DOI:
- Published: 2020
- Journal:
- Impact Factor: 0
- Authors: A. Smart; Larry James; B. Hutchinson; Simone Wu; Shannon Vallor
- Corresponding Author: Shannon Vallor
An Introduction to Software Engineering Ethics
- DOI:
- Published: 2013
- Journal:
- Impact Factor: 0
- Authors: Shannon Vallor; Arvind Narayanan
- Corresponding Author: Arvind Narayanan
Artificial Intelligence and Public Trust
- DOI:
- Published: 2017
- Journal:
- Impact Factor: 0
- Author: Shannon Vallor
- Corresponding Author: Shannon Vallor
Social networking technology and the virtues
- DOI: 10.1007/s10676-009-9202-1
- Published: 2010-06
- Journal:
- Impact Factor: 3.6
- Author: Shannon Vallor
- Corresponding Author: Shannon Vallor
Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century
- DOI:
- Published: 2011
- Journal:
- Impact Factor: 0
- Author: Shannon Vallor
- Corresponding Author: Shannon Vallor
Other Grants by Shannon Vallor
Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems
- Grant No.: EP/W011654/1
- Fiscal Year: 2022
- Funding Amount: $7.9191M
- Project Type: Research Grant
Similar International Grants
Museum Visitor Experience and the Responsible Use of AI to Communicate Colonial Collections
- Grant No.: AH/Z505547/1
- Fiscal Year: 2024
- Funding Amount: $7.9191M
- Project Type: Research Grant
FRAIM: Framing Responsible AI Implementation and Management
- Grant No.: AH/Z505596/1
- Fiscal Year: 2024
- Funding Amount: $7.9191M
- Project Type: Research Grant
"Ethical Review to Support Responsible AI in Policing - A Preliminary Study of West Midlands Police's Specialist Data Ethics Review Committee"
- Grant No.: AH/Z505626/1
- Fiscal Year: 2024
- Funding Amount: $7.9191M
- Project Type: Research Grant
Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI
- Grant No.: AH/Z505572/1
- Fiscal Year: 2024
- Funding Amount: $7.9191M
- Project Type: Research Grant
Building AI-Powered Responsible Workforce by Integrating Large Language Models into Computer Science Curriculum
- Grant No.: 2336061
- Fiscal Year: 2024
- Funding Amount: $7.9191M
- Project Type: Standard Grant
Towards Embedding Responsible AI in the School System: Co-Creation with Young People
- Grant No.: AH/Z505560/1
- Fiscal Year: 2024
- Funding Amount: $7.9191M
- Project Type: Research Grant
Collaborative Research: FW-HTF-RL: Trapeze: Responsible AI-assisted Talent Acquisition for HR Specialists
- Grant No.: 2326193
- Fiscal Year: 2023
- Funding Amount: $7.9191M
- Project Type: Standard Grant
Collaborative Research: FW-HTF-RL: Trapeze: Responsible AI-assisted Talent Acquisition for HR Specialists
- Grant No.: 2326194
- Fiscal Year: 2023
- Funding Amount: $7.9191M
- Project Type: Standard Grant
Responsible AI for responsible lenders
- Grant No.: 10076260
- Fiscal Year: 2023
- Funding Amount: $7.9191M
- Project Type: Grant for R&D
Improved biomedical data harmonisation, the cornerstone of trustworthy and responsible AI in Healthcare
- Grant No.: 10076467
- Fiscal Year: 2023
- Funding Amount: $7.9191M
- Project Type: Grant for R&D