Public trust of artificial intelligence in the precision CDS health ecosystem


Basic Information

Project Abstract

Artificial intelligence-enhanced Clinical Decision Support (AI-CDS) is a growing multibillion-dollar industry that leverages a wide range of clinical, genomic, social, geographical, web-based, and wearable-device data to improve health outcomes broadly circumscribed under the term “precision health.” Powered by Big Data, characterized by volume, velocity, veracity, variety, and value, “big knowledge” in the form of AI-CDS is becoming increasingly ubiquitous (volume), rapidly developing (velocity), available to a wide range of medical fields (variety), based on data from a wide range of sources that reflect the health of individuals and populations (veracity), and focused on lowering costs and promoting better health outcomes (value). Current policy paradigms for CDS, including whether to classify it as a medical device, are not designed for adaptive artificial intelligence technologies. Patients and providers have no reasonable way to discern how these “black box” technologies operate or how accurate they are. Innovative policies (e.g., standards in product labeling) that address these concerns are likely to require direct consumer outreach and communication to ensure public trust in the growing AI-CDS field. Indeed, public trust in AI-CDS has been identified as a top priority for the AI-CDS big-knowledge ecosystem by the National Academy of Medicine, NIH, FDA, and OMB, among others. Trust is particularly salient given the range of critical ethical and policy considerations related to transparency, privacy, non-maleficence, equity, accountability, and utility of AI-CDS. In Aim 1 of our proposed study, we will measure the public's current trust in AI-CDS for precision health and assess (a) its relationship to the public's expectations and concerns about privacy, equity, non-maleficence, responsibility, and utility and (b) how it may be affected by policies and practices, such as labeling or certification. In Aim 2, we will use deliberative democracy methods and expert interviews designed to directly inform policy and standards that address perceived risks of AI-CDS. In Aim 3, we propose to develop a product information label that would increase both the transparency and the accessibility of information about AI-CDS for patients and providers. The continued acceptance and adoption of AI-CDS is predicated on public trust, and our proposal provides a research-focused and evidence-based approach to incorporating public participation into emerging national standards.

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)


Other Grants by Jodyn Elizabeth Platt

Public trust of artificial intelligence in the precision CDS health ecosystem
  • Grant No.: 10092723
  • Fiscal Year: 2021
  • Funding Amount: $743,700
  • Project Category:

Public trust of artificial intelligence in the precision CDS health ecosystem - Administrative Supplement
  • Grant No.: 10598371
  • Fiscal Year: 2021
  • Funding Amount: $743,700
  • Project Category:

Public trust of artificial intelligence in the precision CDS health ecosystem
  • Grant No.: 10632123
  • Fiscal Year: 2021
  • Funding Amount: $743,700
  • Project Category:

Mapping the sociotechnical ecosystem of precision medicine
  • Grant No.: 9892643
  • Fiscal Year: 2020
  • Funding Amount: $743,700
  • Project Category:

Similar Overseas Grants

CRII: SaTC: Privacy vs. Accountability--Usable Deniability and Non-Repudiation for Encrypted Messaging Systems
  • Grant No.: 2348181
  • Fiscal Year: 2024
  • Funding Amount: $743,700
  • Project Category: Standard Grant

Attribution of Machine-generated Code for Accountability
  • Grant No.: DP240102164
  • Fiscal Year: 2024
  • Funding Amount: $743,700
  • Project Category: Discovery Projects

Global Governing Gaps and Accountability Traps for Solar Energy and Storage
  • Grant No.: DP230103043
  • Fiscal Year: 2024
  • Funding Amount: $743,700
  • Project Category: Discovery Projects

Collaborative Research: U.S. institutions after COVID-19: Trust, accountability, and public perceptions
  • Grant No.: 2422394
  • Fiscal Year: 2024
  • Funding Amount: $743,700
  • Project Category: Standard Grant

Collaborative Research: The Architecture of Accountability in 21st Century Latin America
  • Grant No.: 2314749
  • Fiscal Year: 2023
  • Funding Amount: $743,700
  • Project Category: Standard Grant

Ethical Industry 4.0: Embedding Legality, Integrity and Accountability in Digital Manufacturing Ecosystems
  • Grant No.: 2412678
  • Fiscal Year: 2023
  • Funding Amount: $743,700
  • Project Category: Standard Grant

Conference: Understanding Democracy, Elections, and Political Accountability
  • Grant No.: 2321010
  • Fiscal Year: 2023
  • Funding Amount: $743,700
  • Project Category: Standard Grant

The Tipuna Project: Intergenerational Healing, Settler Accountability and Decolonising Participatory Action Research in Aotearoa
  • Grant No.: AH/X008223/1
  • Fiscal Year: 2023
  • Funding Amount: $743,700
  • Project Category: Research Grant

CAREER: Integrating Trust and Accountability into Compliance Enforcement for a Secure Internet of Things
  • Grant No.: 2237012
  • Fiscal Year: 2023
  • Funding Amount: $743,700
  • Project Category: Continuing Grant

Collaborative Research: SaTC: CORE: Small: Accountability for Central Bank Digital Currency
  • Grant No.: 2325477
  • Fiscal Year: 2023
  • Funding Amount: $743,700
  • Project Category: Continuing Grant