Antecedents and Consequences of Trust in Artificial Agents
Basic Information
- Grant Number: ES/V015176/1
- Principal Investigator:
- Amount: $306,800
- Host Institution:
- Host Institution Country: United Kingdom
- Project Type: Research Grant
- Fiscal Year: 2022
- Funding Country: United Kingdom
- Duration: 2022 to (no data)
- Project Status: Ongoing (not concluded)
- Source:
- Keywords:
Project Abstract
Machines powered by artificial intelligence (AI) are revolutionising the social world. We rely on AI when we check the traffic on Google Maps, when we connect with a driver on Uber, or when we apply for a credit check. But as the technological sophistication of AI increases, so too do the number and types of tasks we rely on AI agents for - for example, to allocate scarce medical resources and assist with decisions about turning off life support, to recommend criminal sentences, and even to identify and kill enemy soldiers. AI agents are approaching a level of complexity that progressively requires them to embody not just artificial intelligence but also artificial morality, making decisions that would be described as moral or immoral if made by humans. The increased use of AI agents has the potential for tremendous economic and social benefits, but for society to reap these benefits, people need to be able to trust these AI agents.
While we know that trust is critical, we know very little about the specific antecedents and consequences of such trust in AI, especially when it comes to the increasing use of AI in morally relevant contexts. This is important because morality is far from simple: we live in a world replete with moral dilemmas, with different ethical theories favouring different, mutually exclusive actions. Previous work in humans shows that we use moral judgments as a cue for trustworthiness, so it is not enough to ask whether we trust someone to make moral decisions: we have to consider the type of moral decision they are making, how they are making it, and in what context. If we want to understand trust in AI, we need to ask the same questions - but there is no guarantee that the answers will be the same. We need to understand how trust in AI depends on what kind of moral decision the agent is making (e.g. consequentialist or deontological judgments; Research Question #1), how it is making that decision (e.g. based on a coarse and interpretable set of decision rules or "black box" machine learning; Research Question #2), and in what relational and operational context (e.g. whether the machine performs close, personal tasks or abstract, impersonal ones; Research Question #3).
In this project I will conduct 11 experiments to investigate how trust in AI is sensitive to what moral decisions are made, how they are made, and in what relational contexts. I will use a number of different experimental approaches tapping both implicit and explicit trust, and recruit a range of populations (British laypeople; trained philosophers and AI industry experts; a convenience sample of participants from around the world; and an international experiment with participants representative for age and gender recruited simultaneously in 7 countries). At the end of the grant period, I will host a full-day interdisciplinary conference/workshop with both academic and non-academic attendees, bringing together experts working in AI to consider the psychological challenges of programming trustworthy AI and the philosophical issues of using public preferences as a basis for policy relating to ethical AI.
This work will have important theoretical and methodological implications for research on the antecedents and consequences of trust in AI, highlighting the necessity of moving beyond simply asking whether we could trust AI to instead asking what types of decisions we will trust AI to make, what kinds of AI systems we want making moral decisions, and in what contexts. These findings will have significant societal impact in helping experts working on AI understand how, when, and why people trust AI agents, allowing us to reap the economic and social benefits of AI, which are fundamentally predicated on these agents being trusted by the public.
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Grants by Jim Everett
A Person-Centred Approach to Understanding Trust in Moral Machines
- Grant Number: EP/Y00440X/1
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Research Grant
Similar NSFC Grants
Exposing Verifiable Consequences of the Emergence of Mass
- Grant Number: 12135007
- Approval Year: 2021
- Amount: ¥3,130,000
- Project Type: Key Program
Accretion variability and its consequences: from protostars to planet-forming disks
- Grant Number: 12173003
- Approval Year: 2021
- Amount: ¥600,000
- Project Type: General Program
Consequences of MALT1 mutation for B cell tolerance
- Grant Number:
- Approval Year: 2021
- Amount: ¥300,000
- Project Type: Young Scientists Fund
Similar Overseas Grants
Fitness and evolutionary consequences of developmental plasticity
- Grant Number: DP240102830
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Discovery Projects
The demographic consequences of extreme weather events in Australia
- Grant Number: DP240102733
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Discovery Projects
Intended and unintended consequences of the ZnO ban from pig diets on antimicrobial resistance, post-weaning diarrhoea and the microbiome
- Grant Number: BB/Y003861/1
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Research Grant
Collaborative Research: REU Site Mystic Aquarium: Plankton to Whales: Consequences of Global Change within Marine Ecosystems
- Grant Number: 2349354
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Continuing Grant
Conference: 2024 Thiol-Based Redox Regulation and Signaling GRC and GRS: Mechanisms and Consequences of Redox Signaling
- Grant Number: 2418618
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Standard Grant
Doctoral Dissertation Research: Assessing the physiological consequences of diet and environment for gorillas in zoological settings
- Grant Number: 2341433
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Standard Grant
NGO-Prosecutorial Complex in Universal Jurisdiction Cases: Structure and Consequences for Justice and Public Knowledge about Human Rights Violations
- Grant Number: 2314061
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Standard Grant
Understanding the motives and consequences of parents' educational investment: Competition, Parental Aversion, and Intergenerational Mobility
- Grant Number: 24K16383
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Grant-in-Aid for Early-Career Scientists
Intended and unintended consequences of the ZnO ban from pig diets on antimicrobial resistance, post-weaning diarrhoea and the microbiome.
- Grant Number: BB/Y004108/1
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Research Grant
Phenotypic consequences of a modern human-specific amino acid substitution in ADSL
- Grant Number: 24K18167
- Fiscal Year: 2024
- Amount: $306,800
- Project Type: Grant-in-Aid for Early-Career Scientists