Institute for Trustworthy AI in Law and Society (TRAILS)
Basic Information
- Award Number: 2229885
- Principal Investigator:
- Amount: $20 million
- Host Institution:
- Host Institution Country: United States
- Award Type: Cooperative Agreement
- Fiscal Year: 2023
- Funding Country: United States
- Project Period: 2023-06-01 to 2028-05-31
- Project Status: Active
- Source:
- Keywords:
Project Abstract
Artificial Intelligence (AI) systems have the potential to enhance human capacity and increase productivity. They can also catalyze innovation and mitigate complex problems. However, current AI systems are not created transparently: the opaque processes used to build them produce results that are not well understood, which poses a challenge to public trust. Trust is further undermined by the harms AI systems can cause, and those most affected are the communities excluded from participating in AI system development. This lack of trustworthiness will slow the adoption of AI technologies, so including the groups affected by the benefits and harms of these systems is critical to AI innovation. The TRAILS (Trustworthy AI in Law and Society) Institute is a partnership of the University of Maryland, The George Washington University, Morgan State University, and Cornell University. It encourages community participation in the development of AI techniques, tools, and scientific theories, and the design and policy recommendations it produces will promote the trustworthiness of AI systems. The first goal of the TRAILS Institute is to discover ways to change the design and development of AI systems so that communities can make informed choices about AI technology adoption. The second goal is to develop best practices for industry and government that foster AI innovation while keeping communities safe, engaged, and informed. The TRAILS Institute has explicit plans for increasing the participation of affected communities, ranging from K-12 education through Congressional staff. These plans will elicit the concerns and expectations of affected communities and provide an improved understanding of the risks and benefits of AI-enabled systems.
The TRAILS Institute's research program identifies four key thrusts, each targeting a key aspect of the AI system development lifecycle. The first is Social Values: increasing participation throughout all aspects of AI development so that the values embodied by AI systems reflect those of communities and interested parties. This includes participatory design with diverse communities, resulting in community-based interventions and adaptations of the AI development lifecycle. The second thrust is Technical Design: developing algorithms that promote transparency and trust in AI, including tools that increase the robustness of AI systems and that promote user and developer understanding of how AI systems operate. The third thrust is Socio-Technical Perceptions: developing novel measures, including psychometric techniques and experimental paradigms, to assess the interpretability and explainability of AI systems. These measures will enable a deeper understanding of existing metrics and algorithms, and of the values perceived and held by participating community members. The fourth thrust is Governance: documenting and analyzing governance regimes for both data and technologies, providing the underpinnings for the development of platform and technology regulation. Ethnographers will analyze the institute itself and its partner organizations, documenting the ways in which technical choices translate into governance impacts. The research focuses on two use-inspired areas: information dissemination systems (e.g., social media platforms) and energy-intensive systems (e.g., autonomous systems).
The institute's education and workforce development efforts in AI include new educational offerings catering to many markets, ranging from secondary through executive education. The TRAILS Institute is especially focused on expanding access to foundational AI education for historically marginalized and minoritized groups of learners and users. The institute will work with these communities to learn from, educate, and recruit participants, and to retain, support, and empower those marginalized in mainstream AI. Integrating these communities into this AI research program broadens participation in AI development and governance. The National Institute of Standards and Technology (NIST) is partnering with NSF to provide funding for this Institute. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Publications by Hal Daume
Seamful XAI: Operationalizing Seamful Design in Explainable AI
- DOI:
- Publication Year: 2024
- Journal:
- Impact Factor: 0
- Authors: Upol Ehsan; Qingzi Vera Liao; Samir Passi; Mark O. Riedl; Hal Daume
- Corresponding Author: Hal Daume
Other Grants by Hal Daume
RI: EAGER: Collaborative Research: Adaptive Heads-up Displays for Simultaneous Interpretation
- Award Number: 1748663 - Fiscal Year: 2017
- Amount: $20 million - Award Type: Standard Grant
RI: Small: Linguistic Semantics and Discourse from Leaky Distant Supervision
- Award Number: 1618193 - Fiscal Year: 2016
- Amount: $20 million - Award Type: Continuing Grant
EAGER: Discrete Algorithms in NLP
- Award Number: 1451430 - Fiscal Year: 2014
- Amount: $20 million - Award Type: Standard Grant
RI: SMALL: Statistical Linguistic Typology
- Award Number: 1153487 - Fiscal Year: 2011
- Amount: $20 million - Award Type: Continuing Grant
ICML 2011 Proposal for Student Poster Program and Travel Scholarships
- Award Number: 1130109 - Fiscal Year: 2011
- Amount: $20 million - Award Type: Standard Grant
Collaborative Research: EAGER: Computational Thinking Olympiad
- Award Number: 1048401 - Fiscal Year: 2010
- Amount: $20 million - Award Type: Standard Grant
RI: SMALL: Statistical Linguistic Typology
- Award Number: 0916372 - Fiscal Year: 2009
- Amount: $20 million - Award Type: Continuing Grant
Computational Thinking Olympiad: Brainstorming Workshop
- Award Number: 0848473 - Fiscal Year: 2008
- Amount: $20 million - Award Type: Standard Grant
Cross-Task Learning for Natural Language Processing
- Award Number: 0712764 - Fiscal Year: 2007
- Amount: $20 million - Award Type: Continuing Grant
Similar International Grants
Toward Trustworthy Generative AI by Integrating Large Language Model with Knowledge Graph
- Award Number: 24K20834 - Fiscal Year: 2024
- Amount: $20 million - Award Type: Grant-in-Aid for Early-Career Scientists
Human-centric Digital Twin Approaches to Trustworthy AI and Robotics for Improved Working Conditions
- Award Number: 10109582 - Fiscal Year: 2024
- Amount: $20 million - Award Type: EU-Funded
Accelerating Trustworthy AI: developing a first-to-market AI System Risk Management Platform for Insurance Product creation
- Award Number: 10093285 - Fiscal Year: 2024
- Amount: $20 million - Award Type: Collaborative R&D
CAREER: An Integrated Trustworthy AI Research and Education Framework for Modeling Human Behavior in Climate Disasters
- Award Number: 2338959 - Fiscal Year: 2024
- Amount: $20 million - Award Type: Standard Grant
CAP: Capacity Building for Trustworthy AI in Medical Systems (TAIMS)
- Award Number: 2334391 - Fiscal Year: 2023
- Amount: $20 million - Award Type: Standard Grant
Trustworthy decentralized AI for large-scale IoT representation learning
- Award Number: 22KJ0878 - Fiscal Year: 2023
- Amount: $20 million - Award Type: Grant-in-Aid for JSPS Fellows
EAGER: SaTC: Sweaty Digits: Bridging Chemistry and AI-Empowered Imaging for Secure and Trustworthy Human Identity Verification
- Award Number: 2330240 - Fiscal Year: 2023
- Amount: $20 million - Award Type: Standard Grant
Accelerating adoption of trustworthy AI in radiology: scalable software for non-technical clinical users to independently validate commercial products at local sites
- Award Number: 10064189 - Fiscal Year: 2023
- Amount: $20 million - Award Type: Collaborative R&D
Improved biomedical data harmonisation, the cornerstone of trustworthy and responsible AI in Healthcare
- Award Number: 10076467 - Fiscal Year: 2023
- Amount: $20 million - Award Type: Grant for R&D
DEVELOPING TRUSTWORTHY ARTIFICIAL INTELLIGENCE (AI)-DRIVEN TOOLS TO PREDICT VASCULAR DISEASE RISK AND PROGRESSION - VASCULAID
- Award Number: 10079480 - Fiscal Year: 2023
- Amount: $20 million - Award Type: EU-Funded