Trustworthy and Ethical Assurance of Digital Twins (TEA-DT)


Basic Information

  • Grant number:
    AH/Z505663/1
  • Principal investigator:
  • Amount:
    $323,900
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project type:
    Research Grant
  • Fiscal year:
    2024
  • Funding country:
    United Kingdom
  • Duration:
    2024 to (no data)
  • Project status:
    Ongoing

Project Summary

In recent years, considerable effort has gone into defining "responsible" AI research and innovation. Though progress is tangible, many sectors still lack the tools and capabilities for operationalising and implementing ethical principles. Furthermore, many project teams also find it challenging to know how to achieve goals such as fairness or explainability, and to communicate to other stakeholders or affected users that these goals have been realised. If ignored, these gaps could hamper efforts to build public trust in AI technologies or amplify existing societal harms and inequalities caused by biased and non-transparent sociotechnical systems.

The Trustworthy and Ethical Assurance for Digital Twins (TEA-DT) project will develop an existing open-source platform, known as the Trustworthy and Ethical Assurance (TEA) Platform, which has been designed to help users navigate the process of addressing the aforementioned challenges. The TEA platform helps users and project teams define, operationalise, and implement ethical principles as goals to be assured, and also provides means for communicating how these goals have been realised. It achieves this by guiding individuals and project teams to identify the relevant set of claims and evidence that justify their chosen ethical principles, using a participatory approach that can be embedded throughout a project's lifecycle. The output of the platform—a user-generated assurance case—can be co-designed and vetted by various stakeholders, fostering trust through open, clear, and accessible communication. The TEA platform consists of three main elements: 1) an online tool for crafting well-reasoned arguments about ethical goals, 2) user-friendly guidance to foster critical thinking among teams and organisations, and 3) a supportive community infrastructure for sharing and discussing best practices.

Although the platform is designed for a wide range of applications, the TEA-DT project will specifically focus on digital twins—virtual duplicates that are closely coupled to their physical counterparts to enable access to data and insights that can improve and optimise the way their real-world versions operate. More specifically, the project team will carry out scoping research on the assurance of digital twins within three different contexts: health, natural environment, and infrastructure. Although digital twins promise vast societal benefit in these areas, the fact that they increasingly rely on various forms of AI and often operate in safety-critical settings means that several challenges must be addressed to ensure their ethical and trustworthy development. For instance, in health, questions about data privacy and ownership arise; environmental applications must tackle bias and fairness issues, complicated by global scales and differing laws; and in infrastructure, technical challenges concerning uncertainty communication give rise to additional needs for transparency and explainability.

In collaboration with key partners and stakeholders, the TEA-DT project will carry out scoping research to co-develop exemplary assurance cases and enhance the platform's features to make it more user-friendly and integrated into workflows. By committing to open research and community-building principles, the project aims to a) systematically share best practices and standards, b) make the operationalisation of ethical principles more accessible and inclusive, and c) integrate the project sustainably with existing networks and communities.
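To make the idea of a user-generated assurance case more concrete, the sketch below models the goal, claim, and evidence structure described above as a small Python data structure. This is a minimal, hypothetical illustration only: the class names, fields, and the is_supported check are assumptions introduced for this example and do not reflect the TEA platform's actual data model or API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an assurance case's core shape: a top-level ethical
# goal, supported by property claims, each backed by evidence. Names and
# fields are illustrative assumptions, not the TEA platform's schema.

@dataclass
class Evidence:
    description: str  # e.g. "Disaggregated error rates across patient groups"
    reference: str    # link or identifier for the underlying artefact


@dataclass
class PropertyClaim:
    statement: str    # a specific, checkable claim about the system
    evidence: List[Evidence] = field(default_factory=list)


@dataclass
class GoalClaim:
    goal: str         # the ethical principle being assured, e.g. "Fairness"
    context: str      # the system and setting the goal applies to
    claims: List[PropertyClaim] = field(default_factory=list)

    def is_supported(self) -> bool:
        """Minimally supported if every claim cites at least one piece of evidence."""
        return bool(self.claims) and all(c.evidence for c in self.claims)


# Example: a fragment of a fairness assurance case for a health digital twin.
case = GoalClaim(
    goal="Fairness",
    context="Digital twin supporting clinical decision-making",
    claims=[
        PropertyClaim(
            statement="Model performance has been evaluated across demographic groups",
            evidence=[Evidence("Disaggregated evaluation report", "reports/fairness-eval.md")],
        )
    ],
)
print(case.is_supported())  # True
```

In practice, the platform guides teams through building and reviewing such goal-claim-evidence structures collaboratively and does not require writing any code; the sketch is only meant to show how an ethical goal is decomposed into claims that evidence can support.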

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)

Other publications by Christopher Burr

Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle
  • DOI:
  • Publication year:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    Marten H. L. Kaas;Christopher Burr;Zoe Porter;Berk Ozturk;Philippa Ryan;Michael Katell;Nuala Polo;Kalle Westerling;Ibrahim Habli
  • Corresponding author:
    Ibrahim Habli
DIRAC current, upcoming and planned capabilities and technologies
  • DOI:
  • Publication year:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    F. Stagni;Alexandre F. Boyer;A. Tsaregorodtsev;Andrii Lytovchenko;André Sailer;C. Haen;Christopher Burr;D. Bauer;Simon Fayer;Janusz Martyniak;Cédric Serfon
  • Corresponding author:
    Cédric Serfon
Lithium an emerging contaminant: bioavailability, effects on protein expression, and homeostasis disruption in short-term exposure of rainbow trout.
  • DOI:
  • Publication year:
    2015
  • Journal:
  • Impact factor:
    4.5
  • Authors:
    Victoria Tkatcheva;D. Poirier;R. Chong;Vasile I. Furdui;Christopher Burr;R. Leger;Jaspal Parmar;Teresa A. Switzer;Stefanie Maedler;E. Reiner;J. Sherry;D. Simmons
  • Corresponding author:
    D. Simmons
Artificial intelligence, human rights, democracy, and the rule of law: a primer
  • DOI:
    10.5281/zenodo.4639743
  • Publication year:
    2021
  • Journal:
  • Impact factor:
    0
  • Authors:
    David Leslie;Christopher Burr;M. Aitken;Josh Cowls;Michael Katell;Morgan Briggs
  • Corresponding author:
    Morgan Briggs
AI Sustainability in Practice Part One: Foundations for Sustainable AI Projects
  • DOI:
  • Publication year:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    David Leslie;Cami Rincón;Morgan Briggs;A. Perini;Smera Jayadeva;Ann Borda;SJ Bennett;Christopher Burr;Mhairi Aitken;Michael Katell;Claudia Fischer;Janis Wong;Ismael Kherroubi Garcia
  • Corresponding author:
    Ismael Kherroubi Garcia

Similar Overseas Grants

"Ethical Review to Support Responsible AI in Policing - A Preliminary Study of West Midlands Police's Specialist Data Ethics Review Committee "
“支持警务中负责任的人工智能的道德审查——西米德兰兹郡警察专家数据道德审查委员会的初步研究”
  • 批准号:
    AH/Z505626/1
  • 财政年份:
    2024
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Research Grant
Integrating Spiritual, Moral and Ethical Considerations into Science Communication for Improved Decision Making and Public Action on Climate Science
将精神、道德和伦理考虑纳入科学传播,以改进气候科学的决策和公共行动
  • 批准号:
    2318681
  • 财政年份:
    2024
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Standard Grant
CAREER: Ethical Machine Learning in Health: Robustness in Data, Learning and Deployment
职业:健康领域的道德机器学习:数据、学习和部署的稳健性
  • 批准号:
    2339381
  • 财政年份:
    2024
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Continuing Grant
Education DCL: EAGER: An Embedded Case Study Approach for Broadening Students' Mindset for Ethical and Responsible Cybersecurity
教育 DCL:EAGER:一种嵌入式案例研究方法,用于拓宽学生道德和负责任的网络安全思维
  • 批准号:
    2335636
  • 财政年份:
    2024
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Standard Grant
CAREER: New Frameworks for Ethical Statistical Learning: Algorithmic Fairness and Privacy
职业:道德统计学习的新框架:算法公平性和隐私
  • 批准号:
    2340241
  • 财政年份:
    2024
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Continuing Grant
Evaluating scientific and ethical approaches to newborn screening with whole genome sequencing using large-scale population cohorts
使用大规模人群队列评估通过全基因组测序进行新生儿筛查的科学和伦理方法
  • 批准号:
    MR/X021351/1
  • 财政年份:
    2024
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Research Grant
The AI Advantage: Developing Trusted, Ethical & Accessible AI Augmented Human Decision Making & Automation for SMBs
人工智能的优势:发展值得信赖、道德的
  • 批准号:
    10076405
  • 财政年份:
    2023
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Grant for R&D
The influence of ethical and social values on decision-making about participation in personalized breast cancer screening: A constructivist grounded theory study of South Asian people in Canada
伦理和社会价值观对参与个性化乳腺癌筛查决策的影响:针对加拿大南亚人的建构主义扎根理论研究
  • 批准号:
    491227
  • 财政年份:
    2023
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Fellowship Programs
Indigenizing Health Research Ethics in British Columbia with Indigenous Communities, Collectives and Organizations: Co-Create Wise Practices & Distinctions-Based Ethical Protocols in Indigenous Health Research
不列颠哥伦比亚省与土著社区、集体和组织的本土化健康研究伦理:共同创造明智的实践
  • 批准号:
    479951
  • 财政年份:
    2023
  • 资助金额:
    $ 32.39万
  • 项目类别:
    Operating Grants
The ethical considerations of age-related technologies in promoting health and well-being of older adults
与年龄相关的技术在促进老年人健康和福祉方面的伦理考虑
  • 批准号:
    497988
  • 财政年份:
    2023
  • 资助金额:
    $ 32.39万
  • 项目类别: