AI-informed decision making based on decision field theory
Basic Information
- Grant number: EP/X020207/1
- Principal investigator: Massimiliano Tamborrino
- Amount: $99,000
- Host institution:
- Host institution country: United Kingdom
- Funding type: Research Grant
- Fiscal year: 2022
- Funding country: United Kingdom
- Duration: 2022 to (no data)
- Status: not yet concluded
- Source:
- Keywords:
Project Abstract
Machine learning (ML), deep learning (DL) and artificial intelligence (AI) have contributed notably to the development of recommender systems (playlist generators for video and music content, content recommenders for social media and web service platforms, etc.), several types of recognition (e.g. face, image, speech) and self-driving cars, among many others. Using deep neural networks (DNNs), researchers have achieved higher accuracy than human participants in image recognition, and have predicted the biomolecular targets of drugs and which environmental chemicals are of serious concern to human health, winning the Merck Molecular Activity Challenge and the 2014 Tox21 Data Challenge, respectively.

Despite their success across several fields, there have been recent cases where these approaches failed drastically. Take, for example, Uber's self-driving car that killed a pedestrian, or IBM's Watson for Oncology, which gave potentially fatal cancer-treatment recommendations. Understanding what went wrong is not an easy task, as explainability remains a core challenge in AI. The lack of explainability becomes especially critical whenever AI is used, e.g. by governments or the public and private sectors, to make decisions that affect people and, more generally, the human and behavioural sciences, since wrong or misleading decisions, or the inability to understand their mechanisms, may lead to dramatic consequences in many areas (medical treatment, retail and product supply, etc.). To make the results produced by powerful AI tools more interpretable, reliable and accountable, these tools should explain how and why a particular decision was made, e.g. which attributes were important in the decision making and with what confidence. There have been several efforts to improve the explainability of AI, most of them focusing on enhancing the explainability and transparency of DNNs; see, e.g., the Royal Society policy briefing "Explainable AI: the basics" (https://royalsociety.org/ai-interpretability).

This project contributes to this effort from a different perspective. Our goal is to perform AI-informed decision making driven by Decision Field Theory (DFT), proposing a new set of what we call AI-informed, DFT-driven decision-making models. Such models integrate human behaviour with AI by combining stochastic processes from DFT with ML tools, and have the unique feature of interpretable parameters. On the one hand, we will generalise the class of DFT models to reproduce characteristics and behaviour of interest, and run ML and inferential approaches (mainly likelihood-free) to estimate the underlying, interpretable DFT model parameters. On the other hand, we will use black-box DNN models as proxy (i.e. approximating) models of the interpretable DFT models (with a reversed role with respect to Table 1 of the above-mentioned policy briefing) and use them to learn the processes of interest and make informed predictions (i.e. decisions) driven by DFT. Hence, by using AI to learn these processes, estimate their parameters and make predictions, we will shed light on why and how a particular decision was made, a crucial feature of interpretable, AI-informed decision-making models.
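The abstract names two technical ingredients without giving equations: a DFT-style stochastic preference-accumulation process with interpretable parameters, and likelihood-free inference for estimating those parameters. The sketch below illustrates the general idea in Python under simplifying assumptions: a two-option, two-attribute linear DFT update P(t) = S P(t-1) + C M W(t) with diagonal feedback and one-hot stochastic attention, paired with basic rejection ABC as a stand-in for the project's (unspecified) likelihood-free methods. The attribute matrix, priors, summary statistics and tolerance are all illustrative choices, not the project's actual models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical attribute-value matrix M: rows = options, columns = attributes.
M = np.array([[1.0, 3.0],
              [3.0, 1.0]])
# Contrast matrix C compares each option against the other.
C = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

def simulate_dft(s, theta, w=0.5, n_trials=200, max_steps=2_000):
    """Two-option DFT preference accumulation: P(t) = S P(t-1) + C M W(t).

    s     : self-feedback (memory) parameter on the diagonal of S
    theta : preference threshold that triggers a choice
    w     : probability that attention falls on attribute 0 at each step
    Returns (choices, rts): chosen option and steps-to-threshold per trial.
    """
    S = s * np.eye(2)  # diagonal feedback; lateral inhibition omitted here
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        P = np.zeros(2)
        for t in range(1, max_steps + 1):
            attr = 0 if rng.random() < w else 1  # stochastic attention W(t)
            P = S @ P + C @ M[:, attr]           # valence of attended attribute
            if np.max(P) >= theta:
                break
        choices[i] = int(np.argmax(P))
        rts[i] = t
    return choices, rts

def summaries(choices, rts):
    """Summary statistics used to compare simulated and observed data."""
    return np.array([choices.mean(), rts.mean(), rts.std()])

# "Observed" data simulated from known parameters that we pretend are unknown.
obs = summaries(*simulate_dft(s=0.95, theta=12.0))

# Rejection ABC, a basic likelihood-free scheme: draw candidate parameters
# from priors, simulate, and keep draws whose summaries land close to obs.
accepted = []
for _ in range(1_000):
    s_cand = rng.uniform(0.85, 1.0)   # prior over the memory parameter
    th_cand = rng.uniform(5.0, 15.0)  # prior over the threshold
    sim = summaries(*simulate_dft(s_cand, th_cand, n_trials=40))
    if np.linalg.norm((sim - obs) / (np.abs(obs) + 1e-9)) < 0.3:
        accepted.append((s_cand, th_cand))

post = np.array(accepted)
if post.size:
    print(f"accepted {len(post)}/1000 draws; posterior means: "
          f"s = {post[:, 0].mean():.3f}, theta = {post[:, 1].mean():.2f}")
else:
    print("no draws accepted; widen the tolerance")
```

In the project's framing, a DNN surrogate could stand in for the expensive simulator inside such a loop (the reversed proxy-model role the abstract describes), while the quantities being estimated remain the interpretable DFT parameters, here s and theta.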
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other Publications by Massimiliano Tamborrino

Parameter inference from hitting times for perturbed Brownian motion
- DOI: 10.1007/s10985-014-9307-7
- Published: 2014-09-04
- Journal: Lifetime Data Analysis
- Impact factor: 1.000
- Authors: Massimiliano Tamborrino; Susanne Ditlevsen; Peter Lansky
- Corresponding author: Peter Lansky
Similar Overseas Grants

DECIDE: Decoding cities for informed decision-making: a digital twin approach for minoring territorial disparities and enhancing urban liveability
- Grant number: EP/Y028716/1
- Fiscal year: 2023
- Funding amount: $99,000
- Category: Fellowship

Improving Recruitment, Engagement, and Access for Community Health Equity for BRAIN Next-Generation Human Neuroimaging Research and Beyond (REACH for BRAIN)
- Grant number: 10730955
- Fiscal year: 2023
- Funding amount: $99,000
- Category:

Monitoring Community Efforts to Increase Colorectal Cancer Screening in African Americans
- Grant number: 10627341
- Fiscal year: 2023
- Funding amount: $99,000
- Category:

Data-driven and science-informed methods for the discovery of biomedical mechanisms and processes
- Grant number: 10624014
- Fiscal year: 2023
- Funding amount: $99,000
- Category:

CAREER: Informed Decision Making for Software Change
- Grant number: 2239107
- Fiscal year: 2023
- Funding amount: $99,000
- Category: Continuing Grant

Neonatal and Obstetric Health Outcomes among Women Diagnosed with Vasa Previa in Canada (NOHOW-VP)
- Grant number: 491331
- Fiscal year: 2023
- Funding amount: $99,000
- Category: Operating Grants

Developing a Scalable FASD-Informed Person-Centered Planning Intervention
- Grant number: 10644186
- Fiscal year: 2023
- Funding amount: $99,000
- Category:

University of Minnesota Clinical and Translational Science Institute (UMN CTSI)
- Grant number: 10763967
- Fiscal year: 2023
- Funding amount: $99,000
- Category:

Scalable and Interoperable framework for a clinically diverse and generalizable sepsis Biorepository using Electronic alerts for Recruitment driven by Artificial Intelligence (short title: SIBER-AI)
- Grant number: 10576015
- Fiscal year: 2023
- Funding amount: $99,000
- Category:

Collaborative Research: RI: Medium: Informed, Fair, Efficient, and Incentive-Aware Group Decision Making
- Grant number: 2313137
- Fiscal year: 2023
- Funding amount: $99,000
- Category: Standard Grant