Collaborative Research: RI: Small: Modeling and Learning Ethical Principles for Embedding into Group Decision Support Systems
Basic Information
- Award Number: 2007955
- Principal Investigator:
- Amount: $167,400
- Host Institution:
- Host Institution Country: United States
- Project Type: Standard Grant
- Fiscal Year: 2021
- Funding Country: United States
- Project Period: 2021-01-01 to 2024-12-31
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
Many settings in everyday life require making decisions by combining the subjective preferences of individuals in a group, such as where to go to eat, where to go on vacation, whom to hire, which ideas to fund, or what route to take. In many domains, these subjective preferences are combined with moral values, ethical principles, or business constraints that are applicable to the decision scenario and are often prioritized over the preferences. The potential conflict of moral values with subjective preferences is keenly felt both when AI systems recommend products to us and when we use AI-enabled systems to make group decisions. This research seeks to make AI more accountable by providing mechanisms to bound the decisions that AI systems can make, ensuring that the outcomes of the group decision-making process align with human values. To achieve the goal of building ethically bounded, AI-enabled group decision-making systems, this project takes inspiration from humans, who often constrain their decisions and actions according to a number of exogenous priorities coming from moral, ethical, or business values. This research project will address the current lack of principled, formal approaches for embedding ethics into AI agents and AI-enabled group decision support systems by advancing the state of the art in the safety and robustness of AI agents, which, given how broadly AI touches our daily lives, will have broad impact and benefit to society.

Specifically, the long-term goal of this project is to establish mathematical and machine-learning foundations for embedding ethical guidelines into AI for group decision-making systems. Within the machine ethics field there are two main approaches: the bottom-up approach, focused on data-driven machine learning techniques, and the top-down approach, following symbolic and logic-based formalisms. This project brings these two methodologies closer together through three specific aims. (1) Modeling and Evaluating Ethical Principles: the project will extend principles in social choice theory and fair division using preference models from the literature on knowledge representation and preference reasoning. (2) Learning Ethical Principles from Data: the project will develop novel machine-learning frameworks to learn individual ethical principles and then aggregate them for use in group decision-making systems. Finally, (3) Embedding Ethical Principles into Group Decision Support Systems: the project will develop novel frameworks for designing AI-based mechanisms for ethical group decision-making. This research will establish novel methods for the formal and experimental unification of aspects of the top-down, rule-based approach with the bottom-up, data-based approach for embedding ethics into group decision-making systems. The project will also formalize a framework for ethical and constrained reasoning across teams of computational agents. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
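To make the idea of ethically bounded group decision-making concrete, the sketch below illustrates one way exogenous constraints could be prioritized over subjective preferences: alternatives ruled out by an ethical or business constraint are filtered out first, and only then are the remaining preferences aggregated, here with a simple Borda count. This is a minimal, hypothetical illustration, not the project's actual framework; the function name ethically_bounded_winner and the restaurant scenario are invented for this example.

```python
# Minimal illustrative sketch (hypothetical, not the project's framework):
# ethical constraints act as hard filters that take priority over preferences,
# and the remaining alternatives are aggregated with a simple Borda count.
from collections import defaultdict

def ethically_bounded_winner(rankings, forbidden):
    """Pick a group winner from individual rankings, excluding forbidden options.

    rankings: list of lists, each a voter's ranking from most to least preferred.
    forbidden: set of alternatives ruled out by an ethical/business constraint.
    """
    # Constraints are prioritized over preferences: drop forbidden alternatives first.
    candidates = {alt for r in rankings for alt in r} - forbidden
    if not candidates:
        raise ValueError("The constraints rule out every alternative.")

    # Borda-style scoring over the remaining (permitted) alternatives only.
    scores = defaultdict(int)
    for ranking in rankings:
        permitted = [alt for alt in ranking if alt in candidates]
        top = len(permitted) - 1
        for position, alt in enumerate(permitted):
            scores[alt] += top - position

    return max(candidates, key=lambda alt: scores[alt])

if __name__ == "__main__":
    # Three group members ranking restaurants; one option violates a shared constraint.
    prefs = [
        ["steakhouse", "sushi", "vegan_cafe"],
        ["sushi", "steakhouse", "vegan_cafe"],
        ["vegan_cafe", "sushi", "steakhouse"],
    ]
    print(ethically_bounded_winner(prefs, forbidden={"steakhouse"}))  # -> "sushi"
```

Richer versions of this sketch might replace the hard filter with soft constraints learned from data (aim 2) or swap the Borda step for other social choice and fair-division mechanisms (aim 1).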
Project Outcomes
Journal Articles (14)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Behavioral Stable Marriage Problems
- DOI:
- Publication Date: 2021
- Journal:
- Impact Factor: 0
- Authors: Martin, A.; Venable, K.B.; Mattei, N.
- Corresponding Author: Mattei, N.
Computing welfare-maximizing fair allocations of indivisible goods
- DOI:
- Publication Date: 2022
- Journal:
- Impact Factor: 6.4
- Authors: Aziz, Haris; Huang, Xin; Mattei, Nicholas; Segal-Halevi, Erel
- Corresponding Author: Segal-Halevi, Erel
Learning Behavioral Soft Constraints from Demonstrations
- DOI:
- Publication Date: 2021
- Journal:
- Impact Factor: 0
- Authors: Glazier, A.; Loreggia, A.; Mattei, N.; Rahgooy, T.; Rossi, F.; Venable, K.B.
- Corresponding Author: Venable, K.B.
Modeling Voters in Multi-Winner Approval Voting
- DOI: 10.1609/aaai.v35i6.16716
- Publication Date: 2020-12
- Journal:
- Impact Factor: 0
- Authors: J. Scheuerman; J. Harman; Nicholas Mattei; K. Venable
- Corresponding Author: J. Scheuerman; J. Harman; Nicholas Mattei; K. Venable
Does Delegating Votes Protect Against Pandering Candidates?
- DOI:
- Publication Date: 2023
- Journal:
- Impact Factor: 0
- Authors: Sun, Xiaolin; Masur, Jacob; Abramowitz, Ben; Mattei, Nicholas; Zizhan
- Corresponding Author: Zizhan
Other Publications by Nicholas Mattei
PeerNomination: Relaxing Exactness for Increased Accuracy in Peer Selection
- DOI: 10.24963/ijcai.2020/55
- Publication Date: 2020
- Journal:
- Impact Factor: 0
- Authors: Nicholas Mattei; P. Turrini; Stanislav Zhydkov
- Corresponding Author: Stanislav Zhydkov
Decision making under uncertainty: theoretical and empirical results on social choice, manipulation, and bribery
- DOI:
- Publication Date: 2012
- Journal:
- Impact Factor: 0
- Authors: J. Goldsmith; Nicholas Mattei
- Corresponding Author: Nicholas Mattei
Exploring Social Choice Mechanisms for Recommendation Fairness in SCRUF
- DOI: 10.48550/arxiv.2309.08621
- Publication Date: 2023
- Journal:
- Impact Factor: 0
- Authors: Amanda A. Aird; Cassidy All; Paresha Farastu; Elena Stefancova; Joshua Sun; Nicholas Mattei; Robin Burke
- Corresponding Author: Robin Burke
Fiction as an Introduction to Computer Science Research
- DOI:
- Publication Date: 2014
- Journal:
- Impact Factor: 2.4
- Authors: J. Goldsmith; Nicholas Mattei
- Corresponding Author: Nicholas Mattei
PeerNomination: A novel peer selection algorithm to handle strategic and noisy assessments
- DOI: 10.1016/j.artint.2022.103843
- Publication Date: 2023-03-01
- Journal:
- Impact Factor: 4.600
- Authors: Omer Lev; Nicholas Mattei; Paolo Turrini; Stanislav Zhydkov
- Corresponding Author: Stanislav Zhydkov
Other Grants by Nicholas Mattei
NSF-BSF: RI: Small: Mechanisms and Algorithms for Improving Peer Selection
- Award Number: 2134857
- Fiscal Year: 2022
- Funding Amount: $167,400
- Project Type: Standard Grant
III: Medium: Collaborative Research: Fair Recommendation Through Social Choice
- Award Number: 2107505
- Fiscal Year: 2021
- Funding Amount: $167,400
- Project Type: Standard Grant
Similar NSFC Grants
Research on Quantum Field Theory without a Lagrangian Description
- Award Number: 24ZR1403900
- Award Year: 2024
- Funding Amount: ¥0
- Project Type: Provincial/Municipal Project
Cell Research
- Award Number: 31224802
- Award Year: 2012
- Funding Amount: ¥240,000
- Project Type: Special Fund Project
Cell Research
- Award Number: 31024804
- Award Year: 2010
- Funding Amount: ¥240,000
- Project Type: Special Fund Project
Cell Research
- Award Number: 30824808
- Award Year: 2008
- Funding Amount: ¥240,000
- Project Type: Special Fund Project
Research on the Rapid Growth Mechanism of KDP Crystal
- Award Number: 10774081
- Award Year: 2007
- Funding Amount: ¥450,000
- Project Type: General Program
Similar Overseas Grants
Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
- Award Number: 2312841
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant
Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
- Award Number: 2312842
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant
Collaborative Research: RI: Small: Foundations of Few-Round Active Learning
- Award Number: 2313131
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant
Collaborative Research: RI: Medium: Lie group representation learning for vision
- Award Number: 2313151
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Continuing Grant
Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
- Award Number: 2312840
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant
Collaborative Research: RI: Small: Deep Constrained Learning for Power Systems
- Award Number: 2345528
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant
Collaborative Research: RI: Small: Motion Fields Understanding for Enhanced Long-Range Imaging
- Award Number: 2232298
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant
Collaborative Research: RI: Small: End-to-end Learning of Fair and Explainable Schedules for Court Systems
- Award Number: 2232055
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant
Collaborative Research: RI: Medium: Lie group representation learning for vision
- Award Number: 2313149
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Continuing Grant
Collaborative Research: CompCog: RI: Medium: Understanding human planning through AI-assisted analysis of a massive chess dataset
- Award Number: 2312374
- Fiscal Year: 2023
- Funding Amount: $167,400
- Project Type: Standard Grant