Rule of Law in the Age of AI: Principles of Distributive Liability for Multi-Agent Societies


Basic Information

  • Grant Number:
    ES/T007079/1
  • Principal Investigator:
  • Amount:
    $516,700
  • Host Institution:
  • Host Institution Country:
    United Kingdom
  • Project Type:
    Research Grant
  • Fiscal Year:
    2020
  • Funding Country:
    United Kingdom
  • Duration:
    2020 to (no data)
  • Project Status:
    Completed

Project Summary

The UK and Japan appeal to similar models of subjectivity when categorizing legal liability. Rooted historically and philosophically in the figure of a human actor capable of exercising free will within a given environment, this model of subjectivity ascribes legal liability to human agents imagined as autonomous and independent. However, recent advances in artificial intelligence (AI) that augment the autonomy of artificial agents, such as autonomous driving systems, social robots equipped with artificial emotional intelligence, and intelligent surgical or diagnostic assistant systems, challenge this traditional notion of agency while presenting serious practical problems for determining legal liability within networks of distributed human-machine agency. For example, if an accident arises from cooperation between a human and an intelligent machine, current legal theory offers no clear way to distribute legal liability. Legal theory assumes that the autonomous human agent should bear responsibility for the accident, yet in human-machine interaction human subjectivity is itself influenced by the behaviour of intelligent machines, according to findings from cognitive psychology, the critical theory of subjectivity, and the anthropology of science and technology. The lack of transparent and clear distributive principles of legal liability may hamper the healthy development of a society in which human dignity and technological innovation can advance together, because without a workable legal liability regime no one can trust the behaviour and quality of machines that may cause bodily or even lethal injury.

Faced with this challenge, which is caused and will be aggravated by the proliferation of AI in the UK and Japan, the objective of our study is to clarify the distributive principle of legal liability in the multi-agent society and to propose the legal policy needed to establish the rule of law in the age of AI. This will enable us to construct a "Najimi society" in which humans and intelligent machines can cohabit, with sensitivity to the cultural diversity of the formation of subjectivity.

To achieve this objective, we create three interrelated and collaborative research groups. Group 1, a Law-Economics-Philosophy group, proposes a stylized model for analyzing and evaluating multi-agent situations, based on dynamic game theory connected to the philosophy of the relativity of human subjectivity, in order to work out the distributive principle of legal liability and the legal policy for the rule of law in the age of AI, drawing on the quantitative and qualitative data from the other groups and on the support of experienced legal practitioners and policy makers. Group 2, a Cognitive Robotics, Human Factors, and Cognitive Psychology group, implements computer simulations and psychological experiments to capture data on human interaction and performance with, as well as attitudes towards and experience of, intelligent machines - in this case (simulated) autonomous vehicles. The outputs of this group will test the validity of the first group's model and provide mainly quantitative data relating to subjectivity, helping to construct a more reliable model and workable legal principles and policies. Group 3, a Cultural Anthropology group, engages in comparative ethnographic fieldwork on human-robot relations in Japan and the UK to better account for the cultural variability of distributed agency within differing social, legal, and scientific contexts. The output of this group will aid the interpretation of the quantitative data and allow the first group to remain sensitive to this diversity. Through this inherently transdisciplinary and international cooperation, our project will help make UK and Japanese society more adaptive to emerging technology by clarifying the legal regime.
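The summary describes Group 1's stylized, game-theoretic approach only in outline. The following toy model is a minimal illustrative sketch, not the project's actual framework: it assumes hypothetical action sets and failure probabilities for a human supervisor and an autonomous driving system, and applies a comparative-negligence-style rule (also an assumption) that splits liability by each party's counterfactual contribution to the accident probability.

```python
# Hypothetical sketch (not the project's actual model): a toy "dynamic game"
# between a human supervisor and an autonomous driving system, illustrating
# how a stylized model might convert counterfactual contributions to an
# accident into a liability share. All actions, probabilities, and the
# splitting rule are illustrative assumptions.

from itertools import product

# Hypothetical per-action error probabilities.
HUMAN_ACTIONS = {"attentive": 0.02, "distracted": 0.10}   # P(human misses a failure)
AV_ACTIONS = {"cautious": 0.01, "aggressive": 0.05}       # P(system failure)


def accident_probability(human_action: str, av_action: str) -> float:
    """Accident occurs only if the system fails AND the human fails to intervene."""
    return AV_ACTIONS[av_action] * HUMAN_ACTIONS[human_action]


def liability_split(human_action: str, av_action: str) -> tuple[float, float]:
    """Split liability by counterfactual contribution: how much the accident
    probability would drop had that party chosen its most careful action
    (a comparative-negligence-style rule)."""
    p = accident_probability(human_action, av_action)
    human_delta = p - accident_probability("attentive", av_action)
    av_delta = p - accident_probability(human_action, "cautious")
    total = human_delta + av_delta
    if total == 0:                       # neither party could have been more careful
        return 0.5, 0.5                  # share the residual risk equally
    return human_delta / total, av_delta / total


if __name__ == "__main__":
    for h, a in product(HUMAN_ACTIONS, AV_ACTIONS):
        share_h, share_av = liability_split(h, a)
        print(f"human={h:10s} av={a:10s} "
              f"P(accident)={accident_probability(h, a):.4f} "
              f"liability human/AV = {share_h:.2f}/{share_av:.2f}")
```

In this sketch, a distracted human paired with a cautious system bears the whole share, while two maximally careful parties split the residual risk evenly; the project's actual model would instead be calibrated with the quantitative and qualitative data supplied by Groups 2 and 3.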

Project Outcomes

Journal articles (7)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
The Effects of Cyber Readiness and Response on Human Trust in Self Driving Cars
  • DOI:
    10.54941/ahfe1003719
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Marcinkiewicz V
  • Corresponding author:
    Marcinkiewicz V
Towards anthropomorphising autonomous vehicles: speech and embodiment on trust and blame after an accident
  • DOI:
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Wallbridge CD
  • Corresponding author:
    Wallbridge CD
Public perception of autonomous vehicle capability determines judgment of blame and trust in road traffic accidents
  • DOI:
    10.1016/j.tra.2023.103887
  • Publication date:
    2024-01
  • Journal:
  • Impact factor:
    0
  • Authors:
    Qiyuan Zhang;Christopher D. Wallbridge;Dylan M. Jones;Phillip L. Morgan
  • Corresponding author:
    Qiyuan Zhang;Christopher D. Wallbridge;Dylan M. Jones;Phillip L. Morgan
Judgements of Autonomous Vehicle Capability Determine Attribution of Blame in Road Traffic Accidents
  • DOI:
    10.2139/ssrn.4093012
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Zhang Q
  • Corresponding author:
    Zhang Q
Using Simulation-software-generated Animations to Investigate Attitudes Towards Autonomous Vehicles Accidents
  • DOI:
    10.1016/j.procs.2022.09.410
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Zhang Q
  • Corresponding author:
    Zhang Q

Other Publications by Phillip Morgan

Maladaptive Behaviour in Phishing Susceptibility: How Email Context Influences the Impact of Persuasion Techniques
  • DOI:
    10.54941/ahfe1003718
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    George Raywood;Dylan Jones;Phillip Morgan
  • Corresponding author:
    Phillip Morgan
The impact on retention figures of the introduction of a comfort call during a contact lens trial
  • DOI:
    10.1016/j.clae.2018.03.078
  • Publication date:
    2018-06-01
  • Journal:
  • Impact factor:
  • Authors:
    Emma Cooney;Phillip Morgan
  • Corresponding author:
    Phillip Morgan
The uncertainty of students from a widening access context undertaking an integrated master's degree in social studies
  • DOI:
  • Publication date:
    2019
  • Journal:
  • Impact factor:
    0
  • Authors:
    Caroline Lohmann;Phillip Morgan
  • Corresponding author:
    Phillip Morgan
Cyclist and pedestrian trust in automated vehicles: An on-road and simulator trial
Clinicians risk becoming 'liability sinks' for artificial intelligence
  • DOI:
  • Publication date:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    Tom Lawton;Phillip Morgan;Zoe Porter;Shireen Hickey;Alice Cunningham;Nathan Hughes;Ioanna Iacovides;Yan Jia;Vishal Sharma;I. Habli
  • Corresponding author:
    I. Habli


Similar NSFC Grants

Correlations between star formation indicators in low surface brightness galaxies and the Kennicutt-Schmidt Law
  • Grant Number:
    12003043
  • Year Approved:
    2020
  • Funding Amount:
    CNY 240,000
  • Project Type:
    Young Scientists Fund
Branching laws of unitary representations of reductive groups and their applications
  • Grant Number:
    10971103
  • Year Approved:
    2009
  • Funding Amount:
    CNY 240,000
  • Project Type:
    General Program

Similar Overseas Grants

Equality in the Algorithmic Age: A New Frontier for European Union Law?
  • Grant Number:
    1071088
  • Fiscal Year:
    2023
  • Funding Amount:
    $516,700
  • Project Type:
    Studentship
Law, Literature and Naturalization in an Age of Empire
  • Grant Number:
    DE230100098
  • Fiscal Year:
    2023
  • Funding Amount:
    $516,700
  • Project Type:
    Discovery Early Career Researcher Award
For Liberty and Co-creation in the Age of Digital Transformation and AI: A Comparative Study of Values, Issues, and Designs relating to Information Law in Japan, the United States, and Europe
  • Grant Number:
    22K01274
  • Fiscal Year:
    2022
  • Funding Amount:
    $516,700
  • Project Type:
    Grant-in-Aid for Scientific Research (C)
The role of law in dilemma between child protection and child's autonomy: case of age of consent
  • Grant Number:
    20J00100
  • Fiscal Year:
    2020
  • Funding Amount:
    $516,700
  • Project Type:
    Grant-in-Aid for JSPS Fellows
Transformation and challenges of the space law in the age of NewSpace
  • Grant Number:
    20H01438
  • Fiscal Year:
    2020
  • Funding Amount:
    $516,700
  • Project Type:
    Grant-in-Aid for Scientific Research (B)
Reconstruction of the theory of trade mark law in the digital platform age
  • Grant Number:
    20K20741
  • Fiscal Year:
    2020
  • Funding Amount:
    $516,700
  • Project Type:
    Grant-in-Aid for Challenging Research (Exploratory)
In Search of 'Climate Change Law': Public Goods and Private Actors in the Age of Regulatory Governance
  • Grant Number:
    2088199
  • Fiscal Year:
    2018
  • Funding Amount:
    $516,700
  • Project Type:
    Studentship
German Philosophy of Criminal Law from the Age of Late Absolutism to the End of the Napoleonic Era: A Systematic and Reception-Historic Analysis
  • Grant Number:
    380283557
  • Fiscal Year:
    2017
  • Funding Amount:
    $516,700
  • Project Type:
    Research Grants
A Study on the Treatment of Young People in Juvenile Law and Criminal Law: From the perspective of juvenile age consistency and recidivism prevention
  • Grant Number:
    17K03431
  • Fiscal Year:
    2017
  • Funding Amount:
    $516,700
  • Project Type:
    Grant-in-Aid for Scientific Research (C)
A Comparative Study of Core Principles and Re-balance relating to Information Law in the Age of Next Generation AI and the Internet of Things in Japan, the United States, and Europe
  • Grant Number:
    17K03501
  • Fiscal Year:
    2017
  • Funding Amount:
    $516,700
  • Project Type:
    Grant-in-Aid for Scientific Research (C)