Realising Accountable Intelligent Systems (RAInS)
Basic Information
- Grant number: EP/R033846/1
- Principal investigator:
- Amount: $1.0053 million
- Host institution:
- Host institution country: United Kingdom
- Project category: Research Grant
- Fiscal year: 2019
- Funding country: United Kingdom
- Duration: 2019 to (no data)
- Project status: Completed
- Source:
- Keywords:
Project Abstract
Intelligent systems technologies are being utilised in more and more scenarios, including autonomous vehicles, smart home appliances, public services, retail and manufacturing. But what happens when such systems fail, as in the case of recent high-profile accidents involving autonomous vehicles? How are such systems (and their developers) held to account if they are found to be making biased or unfair decisions? Can we interrogate intelligent systems to ensure they are fit for purpose before they are deployed? These are all real and timely challenges, given that intelligent systems will increasingly affect many aspects of everyday life. While all new technologies have the capacity to do harm, with intelligent systems it may be difficult or even impossible to know what went wrong or who should be held responsible. There is a very real concern that the complexity of many AI technologies, and the data and interactions between the surrounding systems and workflows, will reduce the justification for consequential decisions to "the algorithm made me do it", or indeed "we don't know what happened". And yet the potential for such systems to outperform humans in accuracy of decision-making, and even in safety, suggests that the desire to use them will be difficult to resist. The question then is how we might endeavour to have the best of both worlds. How can we benefit from the superhuman capacity and efficiency that such systems offer without giving up our desire for accountability, transparency and responsibility? How can we avoid a stalemate choice between forgoing the benefits of automated systems altogether or accepting a degree of arbitrariness that would be unthinkable in society's usual human relationships? Working closely with a range of stakeholders, including members of the public, the legal profession and technology companies, we will explore what it means to realise future intelligent systems that are transparent and accountable.
The Accountability Fabric is our vision of a future computational infrastructure supporting audit of such systems, somewhat analogous to (but more sophisticated than) the 'black box' flight recorders associated with passenger aircraft. Our work will increase transparency not only after the fact, but also in a manner which allows for early interrogation and audit, which in turn may help to prevent or to mitigate harm ex ante. Before we can realise the Accountability Fabric, several key issues need to be investigated:
- What are the important factors that influence citizens' perceptions of trust in and accountability of intelligent systems?
- What form ought legal liability to take for intelligent systems? How can the law operate fairly and incentivise optimal behaviour from those developing/using such systems?
- How do we formulate an appropriate vocabulary with which to describe and characterise intelligent systems, their context, behaviours and biases?
- What are the technical means for recording the behaviour of intelligent systems, spanning the data used, the algorithms deployed, and the flow-on effects of the decisions being made?
- Can we realise an accountability solution for intelligent systems, operating across a range of technologies and organisational boundaries, that is able to support third-party audit and assessment?
Answers to these (and the many other questions that will certainly emerge) will lead us to develop prototype solutions that will be evaluated with project partners. Our ambition is to create a means by which the developer of an intelligent system can provide a secure, tamper-proof record of the system's characteristics and behaviours that can be shared (under controlled circumstances) with relevant authorities in the event of an incident or complaint.
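The abstract envisages a "secure, tamper-proof record" of a system's characteristics and behaviours but does not prescribe an implementation. As an illustrative sketch only (the class and field names below are hypothetical, not part of the project), one common way to make a record tamper-evident is a hash chain, where each entry commits to the hash of its predecessor, so any retrospective edit invalidates every hash that follows:

```python
import hashlib
import json

class AuditLog:
    """Append-only, tamper-evident record of system events.

    Each entry stores the hash of its predecessor, so altering any
    past entry breaks verification of the whole chain after it.
    """

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        """Record an event and return its chained hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev_hash = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True
```

A deployed system along the lines the project describes would additionally need digital signatures, trusted timestamps, and access control to support the "controlled circumstances" sharing with authorities that the abstract mentions; this sketch shows only the tamper-evidence property.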
Project Outcomes
Journal articles (10)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Workshop on Reviewable and Auditable Pervasive Systems (WRAPS)
- DOI: 10.1145/3460418.3479265
- Published: 2021
- Journal:
- Impact factor: 0
- Author: Norval C
- Corresponding author: Norval C
The Accountability Fabric: A Suite of Semantic Tools For Managing AI System Accountability and Audit
- DOI:
- Published: 2021
- Journal:
- Impact factor: 0
- Author: Milan Markovic
- Corresponding author: Milan Markovic
Towards Accountability Driven Development for Machine Learning Systems
- DOI:
- Published: 2021
- Journal:
- Impact factor: 0
- Author: Chiu Pang Fung
- Corresponding author: Chiu Pang Fung
Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information
- DOI: 10.1109/access.2022.3188967
- Published: 2022
- Journal:
- Impact factor: 3.9
- Author: Naja I
- Corresponding author: Naja I
On Evidence Capture for Accountable AI Systems
- DOI:
- Published: 2021
- Journal:
- Impact factor: 0
- Author: Wei Pang
- Corresponding author: Wei Pang
Other publications by Peter Edwards
Human impacts on the wellbeing of urban trees in Wellington, New Zealand
- DOI:
- Published: 2024
- Journal:
- Impact factor: 0
- Authors: Peter Edwards;Robyn Simcock;Eleanor Absalom;G. Diprose
- Corresponding author: G. Diprose
Aucs/tr9509 Learning Mechanisms for Information Filtering Agents
- DOI:
- Published: 1995
- Journal:
- Impact factor: 0
- Authors: T. Payne;Peter Edwards
- Corresponding author: Peter Edwards
Development of a Digital Tool to Overcome the Challenges of Rural Food SMEs
- DOI: 10.1080/14702541.2014.994673
- Published: 2015
- Journal:
- Impact factor: 1
- Authors: S. V. D. Loo;Liang Chen;Peter Edwards;Jennifer A. Holden;S. Karamperidis;Martin J. Kollingbaum;Angela C Marqui;John D. Nelson;Timothy J. Norman;Maja Piecyk;E. Pignotti
- Corresponding author: E. Pignotti
Modern developments in the tuberculosis scheme
- DOI: 10.1016/s0033-3506(33)80173-9
- Published: 1933-10-01
- Journal:
- Impact factor:
- Authors: George Jessel;G.T. Hebert;Peter Edwards;R.C. Wingfield;F.T.H. Wood
- Corresponding author: F.T.H. Wood
Revisiting the sustainability science research agenda
- DOI: 10.1007/s11625-024-01586-3
- Published: 2024-10-24
- Journal:
- Impact factor: 5.300
- Authors: Mesfin Sahle;Shruti Ashish Lahoti;So-Young Lee;Katja Brundiers;Carena J. van Riper;Christian Pohl;Herlin Chien;Iris C. Bohnet;Noé Aguilar-Rivera;Peter Edwards;Prajal Pradhan;Tobias Plieninger;Wiebren Johannes Boonstra;Alexander G. Flor;Annamaria Di Fabio;Arnim Scheidel;Chris Gordon;David J. Abson;Erik Andersson;Federico Demaria;Jasper O. Kenter;Jeremy Brooks;Joanne Kauffman;Maike Hamann;Martin Graziano;Nidhi Nagabhatla;Nobuo Mimura;Nora Fagerholm;Patrick O’Farrell;Osamu Saito;Kazuhiko Takeuchi
- Corresponding author: Kazuhiko Takeuchi
Other grants by Peter Edwards
ConstrAining The RolE Of Sulfur In The Earth System (CARES)
- Grant number: NE/W009315/1
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Research Grant
Understanding the sources of atmospheric chlorine in a mid-continental megacity
- Grant number: NE/V010042/1
- Fiscal year: 2020
- Amount: $1.0053 million
- Project category: Research Grant
Quantification of Utility of Atmospheric Network Technologies (QUANT)
- Grant number: NE/T00195X/1
- Fiscal year: 2019
- Amount: $1.0053 million
- Project category: Research Grant
Laser induced fluorescence instrument for the detection of trace levels of atmospheric sulfur dioxide (SO2)
- Grant number: NE/T008555/1
- Fiscal year: 2019
- Amount: $1.0053 million
- Project category: Research Grant
The Food Sentiment Observatory: Exploiting New Forms of Data to Help Inform Policy on Food Safety & Food Crime Risks
- Grant number: ES/P011004/1
- Fiscal year: 2017
- Amount: $1.0053 million
- Project category: Research Grant
Trusted Things & Communities: Understanding & Enabling A Trusted IoT Ecosystem
- Grant number: EP/N028074/1
- Fiscal year: 2016
- Amount: $1.0053 million
- Project category: Research Grant
Social Media - Developing Understanding, Infrastructure & Engagement (Social Media Enhancement)
- Grant number: ES/M001628/1
- Fiscal year: 2014
- Amount: $1.0053 million
- Project category: Research Grant
TRUMP: A Trusted Mobile Platform for the Self-Management of Chronic Illness in Rural Areas
- Grant number: EP/J00068X/1
- Fiscal year: 2012
- Amount: $1.0053 million
- Project category: Research Grant
Neutron Compton Scattering For Functional Energy Materials
- Grant number: EP/K002546/1
- Fiscal year: 2012
- Amount: $1.0053 million
- Project category: Research Grant
PolicyGrid II - Supporting Interdisciplinary Evidence Bases for Scientific Collaboration & Policy Making
- Grant number: ES/F029713/1
- Fiscal year: 2009
- Amount: $1.0053 million
- Project category: Research Grant
Similar Overseas Grants
CAREER: Privacy-Accountable Mobile Software Supply Chain
- Grant number: 2339537
- Fiscal year: 2024
- Amount: $1.0053 million
- Project category: Continuing Grant
Design: Microbiology Leaders Evolving & Accountable to Progress
- Grant number: 2233509
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Standard Grant
Collaborative Research: DASS: Accountable Open Source Infrastructure
- Grant number: 2317169
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Standard Grant
Collaborative Research: DASS: Accountable Open Source Infrastructure
- Grant number: 2317168
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Standard Grant
Collaborative Research: HNDS-I: Digitally Accountable Public Representation
- Grant number: 2318461
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Standard Grant
Collaborative Research: HNDS-I: Digitally Accountable Public Representation
- Grant number: 2318460
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Standard Grant
Scalable & Accountable Privacy-Preserving Blockchain with Enhanced Security
- Grant number: DP220101234
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Discovery Projects
CAREER: Structural and Accountable Behavior Understandings and Human-centered AI Designs with Naturalistic Micromobility Riding Data
- Grant number: 2239897
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category: Continuing Grant
Effect of Medicaid Accountable Care Organizations on Behavioral Health Care Quality and Outcomes for Children
- Grant number: 10729117
- Fiscal year: 2023
- Amount: $1.0053 million
- Project category:
Trustworthy and Accountable Decision-Support Frameworks for Biodiversity - A Virtual Labs based Approach
- Grant number: NE/X002233/1
- Fiscal year: 2022
- Amount: $1.0053 million
- Project category: Research Grant