FAI: Using Explainable AI to Increase Equity and Transparency in the Juvenile Justice System’s Use of Risk Scores
Basic Information
- Award Number: 2147256
- Principal Investigator: Trent Buskirk
- Amount: $393K
- Host Institution:
- Host Institution Country: United States
- Project Type: Standard Grant
- Fiscal Year: 2022
- Funding Country: United States
- Project Period: 2022-05-01 to 2025-04-30
- Status: Ongoing (not yet concluded)
- Source:
- Keywords:
Project Abstract
Throughout the United States, juvenile justice systems use juvenile risk and need-assessment (JRNA) scores to identify the likelihood that a youth will commit another offense in the future. Juvenile justice practitioners then use this risk assessment score to inform how to intervene with a youth to prevent reoffending (e.g., referring the youth to a community-based program vs. placing the youth in a juvenile correctional center). Unfortunately, most risk assessment systems lack transparency, and the reasons why a youth received a particular score are often unclear. Moreover, how these scores are used in the decision-making process is sometimes not well understood by the families and youth affected by such decisions. This opacity is problematic because it can hinder individuals' buy-in to the intervention recommended by the risk assessment and can mask potential bias in those scores (e.g., if youth of a particular race or gender have risk scores driven by a particular item on the assessment). To address this issue, project researchers will develop automated, computer-generated explanations of how these risk scores were produced. Investigators will then test whether these better-explained risk scores help youth and juvenile justice decision makers understand the score a youth is given. In addition, the team of researchers will investigate whether these risk scores work equally well for different groups of youth (for example, equally well for boys and for girls) and will identify potential biases in how they are used, in an effort to understand how equitable the decision-making process is across demographic groups defined by race and gender.

The project is embedded within the juvenile justice system and aims to evaluate, using actual juvenile justice system data, how real stakeholders understand the way risk scores are generated and used within that system. More specifically, this project aims to understand how risk assessment scores are currently used in the juvenile justice system and how interpretable machine learning methods can make black-box risk assessment algorithms more transparent (without reverse engineering them, given that most assessments are proprietary). The team of researchers endeavors to understand how juvenile justice risk scores are used through analysis of quantitative data from the juvenile justice system (which details the risk scores and justice system decisions) and through qualitative data collected via key informant interviews. In the second phase of the work, the team will train various interpretable machine learning algorithms to predict youths' risk scores (which are currently generated by a proprietary, black-box algorithm). The team will also predict sentencing dispositions for youth based on these risk scores and other pertinent data collected by the juvenile justice system. The project team will then test and measure how understandable the automated explanations derived from these machine learning methods are to youth, families, judges, and probation officers. The goal of this step is to identify algorithms that are highly predictive of the risk score and dispositions, respectively, and then to identify methods that provide clear, human-interpretable explanations of the risk and dispositions to key stakeholders throughout the process.
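To make the surrogate-modeling step concrete, here is a minimal sketch of the general technique, assuming the system's records were available as a flat table: a shallow decision tree is fit to reproduce the black-box score and then printed as human-readable rules. The file name, feature columns, and choice of learner are illustrative assumptions, not the project's actual data or methods.

```python
# A minimal sketch, not the project's pipeline: fit a readable surrogate to
# approximate a proprietary, black-box JRNA risk score. The file name and
# feature columns below are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

df = pd.read_csv("jrna_records.csv")  # hypothetical extract of system data
features = ["age", "prior_offenses", "school_status", "substance_use"]
X = pd.get_dummies(df[features])      # one-hot encode categorical items
y = df["risk_score"]                  # target: the black-box score itself

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A shallow tree trades some fidelity for rules a stakeholder can read.
surrogate = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)

print(f"Fidelity to black-box scores (held-out R^2): {surrogate.score(X_te, y_te):.2f}")
print(export_text(surrogate, feature_names=list(X.columns)))  # human-readable rules
```

The fidelity check matters here: an explanation generated from a low-fidelity surrogate would describe the surrogate rather than the score stakeholders actually receive.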
Testing explanations with each audience will also allow the researchers to optimize methods for explaining outcomes, for example by identifying one method that best explains risk scores to youth and a different method that works better for their families or probation officers. Finally, the project team will explore the potential for bias throughout the process (from risk scoring to the use of the scores) and ways in which these interpretable algorithms can help identify, quantify, and mitigate biases. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
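As one illustration of the kind of bias audit described above, the sketch below compares score distributions across demographic groups and then compares placement rates within the same score band, a simple conditional-parity style check. All column names and score bands are hypothetical assumptions, not the project's actual variables.

```python
# A minimal sketch of a bias audit across demographic groups. Column names
# ("race", "gender", "risk_score", "placed") and the score bands are
# hypothetical placeholders.
import pandas as pd

df = pd.read_csv("jrna_records.csv")  # hypothetical extract of system data

# Score-level check: do score distributions differ systematically by group?
print(df.groupby(["race", "gender"])["risk_score"].describe())

# Decision-level check: among youth in the same score band, is the rate of
# correctional placement ("placed" assumed 0/1) the same across groups?
df["score_band"] = pd.cut(df["risk_score"], bins=[0, 10, 20, 30],
                          labels=["low", "medium", "high"])
placement_rates = (df.groupby(["score_band", "race"], observed=True)["placed"]
                     .mean()
                     .unstack())
print(placement_rates)  # rows: score band; columns: race; cells: placement rate
```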
Project Outcomes
Journal Articles: 0
Monographs: 0
Research Awards: 0
Conference Papers: 0
Patents: 0
Similar NSFC Grants
Molecular Interaction Reconstruction of Rheumatoid Arthritis Therapies Using Clinical Data
- Award Number: 31070748
- Year Approved: 2010
- Funding Amount: ¥340K
- Project Type: General Program
Similar Overseas Grants
ICF: Using Explainable Artificial Intelligence to predict future stroke using routine historical investigations
- Award Number: MR/Y503472/1
- Fiscal Year: 2024
- Funding Amount: $393K
- Project Type: Research Grant
Real-time inversion using self-explainable deep learning driven by expert knowledge
- Award Number: EP/Z000653/1
- Fiscal Year: 2024
- Funding Amount: $393K
- Project Type: Research Grant
A Study on Explainable Recommender Systems using Personal Network
- Award Number: 23H03504
- Fiscal Year: 2023
- Funding Amount: $393K
- Project Type: Grant-in-Aid for Scientific Research (B)
Explainable Population Estimation Using Deep Learning from Satellite Imagery
- Award Number: 2890100
- Fiscal Year: 2023
- Funding Amount: $393K
- Project Type: Studentship
PFI–TT: Development of an Explainable and Robust Detector of Forged Multimedia and Cyber Threats using Artificial intelligence
- Award Number: 2329858
- Fiscal Year: 2023
- Funding Amount: $393K
- Project Type: Continuing Grant
Personalized Risk Stratification in Atrial Fibrillation using Portable, Explainable Artificial Intelligence
- Award Number: 10905154
- Fiscal Year: 2023
- Funding Amount: $393K
- Project Type:
PFI–TT: Development of an Explainable and Robust Detector of Forged Multimedia and Cyber Threats using Artificial intelligence
- Award Number: 2409577
- Fiscal Year: 2023
- Funding Amount: $393K
- Project Type: Continuing Grant
Automated detection and tracking of space debris using Explainable AI
- Award Number: 2878198
- Fiscal Year: 2023
- Funding Amount: $393K
- Project Type: Studentship
Developing explainable decision support systems for inventory management using deep reinforcement learning
- Award Number: 23K13514
- Fiscal Year: 2023
- Funding Amount: $393K
- Project Type: Grant-in-Aid for Early-Career Scientists
Japan-U.S. collaboration to find novel biomarkers of CDK4/6 inhibitor using explainable deep learning and spatial genetic analysis.
- Award Number: 22KK0118
- Fiscal Year: 2022
- Funding Amount: $393K
- Project Type: Fund for the Promotion of Joint International Research (Fostering Joint International Research (B))