Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems


Basic Information

  • Grant number:
    EP/W011654/1
  • Principal investigator:
    Shannon Vallor
  • Amount:
    $713,100
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project type:
    Research Grant
  • Financial year:
    2022
  • Funding country:
    United Kingdom
  • Duration:
    2022 to (no data)
  • Project status:
    Ongoing

Project Summary

As computing systems become increasingly autonomous (able to independently pilot vehicles, detect fraudulent banking transactions, or read and diagnose our medical scans), it is vital that humans can confidently assess and ensure their trustworthiness. Our project develops a novel, people-centred approach to overcoming a major obstacle to this, known as responsibility gaps. Responsibility gaps occur when we cannot identify a person who is morally responsible for an action with high moral stakes, either because it is unclear who was behind the act, or because the agent does not meet the conditions for moral responsibility; for example, if the act was not voluntary, or if the agent was not aware of it. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust. Autonomous systems create new responsibility gaps: they operate in high-stakes areas such as health and finance, but their actions may not be under the control of a morally responsible person, or may not be fully understandable or predictable by humans, because of the complex 'black-box' algorithms driving those actions. To make such systems trustworthy, we need a way of bridging these gaps.

Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps, by boosting the ability of systems to deliver a vital and understudied component of responsibility, namely answerability. When we say someone is 'answerable' for an act, it is a way of talking about their responsibility. But answerability is not about having someone to blame; it is about supplying the people who are affected by our actions with the answers they need or expect. Responsible humans answer for actions in many different ways: they can explain, justify, reconsider, apologise, offer amends, make changes or take future precautions. Answerability therefore encompasses a richer set of responsibility practices than explainability in computing or accountability in law. Often, the very act of answering for our actions improves us, helping us to be more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps. It is not about who we name as the 'responsible person' (which is more difficult to identify in autonomous systems), but about what we owe to the people holding the system responsible. If the system as a whole (machines plus people) can get better at giving the answers that are owed, it can still meet present and future responsibilities to others. Hence, answerability is a system capability for executing responsibilities that can bridge responsibility gaps.

Our ambition is to provide the theoretical and empirical evidence and the computational techniques that demonstrate how to enable autonomous systems (including wider "systems" of developers, owners, users, etc.) to supply the kinds of answers that people seek from trustworthy agents. Our first workstream establishes the theoretical and conceptual framework that allows answerability to be better understood and executed by system developers, users and regulators. The second workstream grounds this in a people-centred, evidence-driven approach by engaging various publics, users, beneficiaries and regulators of autonomous systems in the research: focus groups, workshops and interviews will be used to discuss cases and scenarios in health, finance and government that reveal what kinds of answers people expect from trustworthy systems operating in these areas. Finally, our third workstream develops novel computational AI techniques for boosting the answerability of autonomous systems through more dialogical and responsive interfaces with users and regulators. Our research outputs and activities will produce a mix of academic, industry and public-facing resources for designing, deploying and governing more answerable autonomous systems.
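As a purely illustrative sketch of the kind of dialogical, responsive interface the third workstream points towards (not the project's actual design), the following Python example shows an autonomous decision-maker that logs the grounds for each decision and can later answer different kinds of queries about it: an explanation, a justification, or a route to remedy. All class, method and field names here (DecisionRecord, AnswerableSystem, answer, and so on) are hypothetical.

```python
# Illustrative sketch only: all names below are hypothetical and are not part
# of the project's published design. The idea shown is a system that records
# the grounds for its decisions and can answer for them dialogically.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DecisionRecord:
    """One logged decision, with the material needed to answer for it later."""
    decision_id: str
    outcome: str                  # e.g. "transaction flagged as fraudulent"
    reasons: List[str]            # factors the system relied on
    policy: str                   # rule set or model version applied
    remedies: List[str] = field(default_factory=list)  # routes to contest or amend


class AnswerableSystem:
    """Keeps a decision log and answers queries from affected people or regulators."""

    def __init__(self) -> None:
        self._log: Dict[str, DecisionRecord] = {}

    def record(self, rec: DecisionRecord) -> None:
        self._log[rec.decision_id] = rec

    def answer(self, decision_id: str, query: str) -> str:
        """Return an answer whose kind depends on what the questioner is owed."""
        rec = self._log.get(decision_id)
        if rec is None:
            return "No record of that decision; escalating to a human reviewer."
        if query == "explain":
            return f"Outcome '{rec.outcome}' was based on: " + "; ".join(rec.reasons)
        if query == "justify":
            return f"The decision applied policy '{rec.policy}', which covers this case."
        if query == "remedy":
            if rec.remedies:
                return "You can: " + "; ".join(rec.remedies)
            return "A human reviewer will reconsider this decision."
        return "Query not recognised; please rephrase or ask for a human contact."


if __name__ == "__main__":
    system = AnswerableSystem()
    system.record(DecisionRecord(
        decision_id="TX-1042",
        outcome="transaction flagged as potentially fraudulent",
        reasons=["unusual location", "amount far above typical spend"],
        policy="fraud-rules-v3",
        remedies=["confirm the transaction in the app", "request a manual review"],
    ))
    for q in ("explain", "justify", "remedy"):
        print(q, "->", system.answer("TX-1042", q))
```

The design choice the sketch embodies mirrors the project's framing: answerability is treated as a capability of the whole system (a log of reasons plus a dialogue about them), rather than as the identification of a single blameworthy individual.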

Project Outputs

Journal articles (7)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Free Will - Philosophers and Neuroscientists in Conversation
  • DOI:
    10.1093/oso/9780197572153.003.0011
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Author:
    Hall J
  • Corresponding author:
    Hall J
The Routledge Handbook of Philosophy of Responsibility
  • DOI:
    10.4324/9781003282242-43
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Author:
    Vallor S
  • Corresponding author:
    Vallor S
Technical Perspective: The Impact of Auditing for Algorithmic Bias
  • DOI:
    10.1145/3571152
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    22.7
  • Author:
    Conitzer V
  • Corresponding author:
    Conitzer V
Artificial Moral Advisors: A New Perspective from Moral Psychology
  • DOI:
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Author:
    Yuxin Liu
  • Corresponding author:
    Yuxin Liu
Responsible Agency Through Answerability
  • DOI:
    10.1145/3597512.3597529
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Author:
    Hatherall L
  • Corresponding author:
    Hatherall L

Other Publications by Shannon Vallor

Why Reliabilism Is not Enough: Epistemic and Moral Justification in Machine Learning
An Introduction to Software Engineering Ethics
  • DOI:
  • Publication date:
    2013
  • Journal:
  • Impact factor:
    0
  • Authors:
    Shannon Vallor;Arvind Narayanan
  • Corresponding author:
    Arvind Narayanan
Artificial Intelligence and Public Trust
  • DOI:
  • Publication date:
    2017
  • Journal:
  • Impact factor:
    0
  • Author:
    Shannon Vallor
  • Corresponding author:
    Shannon Vallor
Social networking technology and the virtues
  • DOI:
    10.1007/s10676-009-9202-1
  • Publication date:
    2010-06
  • Journal:
  • Impact factor:
    3.6
  • Author:
    Shannon Vallor
  • Corresponding author:
    Shannon Vallor
Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century

Other Grants by Shannon Vallor

Enabling a Responsible AI Ecosystem
  • Grant number:
    AH/X007146/1
  • Financial year:
    2022
  • Funding amount:
    $713,100
  • Project type:
    Research Grant

Similar NSFC Grants

Graphon mean field games with partial observation and application to failure detection in distributed systems
  • Grant number:
  • Approval year:
    2025
  • Funding amount:
    CNY 0
  • Project type:
    Provincial or municipal project
Estimating Large Demand Systems with Machine Learning Techniques
  • Grant number:
  • Approval year:
    2024
  • Funding amount:
  • Project type:
    Research Fund for International Scientists
Understanding complicated gravitational physics by simple two-shell systems
  • Grant number:
    12005059
  • Approval year:
    2020
  • Funding amount:
    CNY 240,000
  • Project type:
    Young Scientists Fund
Simulation and certification of the ground state of many-body systems on quantum simulators
  • Grant number:
  • Approval year:
    2020
  • Funding amount:
    CNY 400,000
  • Project type:
Genome-wide systems mapping of the genetic mechanisms of interspecific interactions among three bacterial species
  • Grant number:
    31971398
  • Approval year:
    2019
  • Funding amount:
    CNY 580,000
  • Project type:
    General Program
The formation and evolution of planetary systems in dense star clusters
  • Grant number:
    11043007
  • Approval year:
    2010
  • Funding amount:
    CNY 100,000
  • Project type:
    Special Fund Project

Similar Overseas Grants

Comprehensive Model for the Formation-Evolution of Massive Cosmic Systems: The Answer is Blowing in the Wind
  • Grant number:
    RGPIN-2018-05106
  • Financial year:
    2022
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual
Comprehensive Model for the Formation-Evolution of Massive Cosmic Systems: The Answer is Blowing in the Wind
  • Grant number:
    RGPIN-2018-05106
  • Financial year:
    2021
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual
Comprehensive Model for the Formation-Evolution of Massive Cosmic Systems: The Answer is Blowing in the Wind
  • Grant number:
    RGPIN-2018-05106
  • Financial year:
    2020
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual
Comprehensive Model for the Formation-Evolution of Massive Cosmic Systems: The Answer is Blowing in the Wind
  • Grant number:
    RGPIN-2018-05106
  • Financial year:
    2019
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual
Comprehensive Model for the Formation-Evolution of Massive Cosmic Systems: The Answer is Blowing in the Wind
  • Grant number:
    RGPIN-2018-05106
  • Financial year:
    2018
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual
Answer set programming and systems
  • Grant number:
    9225-2005
  • Financial year:
    2009
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual
Answer set programming and systems
  • Grant number:
    9225-2005
  • Financial year:
    2008
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual
Development of High-Performance Systems for Model-based Problem Solving via Answer Set Programming
  • Grant number:
    67090203
  • Financial year:
    2008
  • Funding amount:
    $713,100
  • Project type:
    Research Grants
Collaborative Research-NeTS-NOSS: AutoNomouS netWorked sEnsoR systems (ANSWER)
  • Grant number:
    0721523
  • Financial year:
    2007
  • Funding amount:
    $713,100
  • Project type:
    Continuing Grant
Answer set programming and systems
  • Grant number:
    9225-2005
  • Financial year:
    2007
  • Funding amount:
    $713,100
  • Project type:
    Discovery Grants Program - Individual