Satisficing Trust in Human Robot Teams


Basic Information

  • Grant number:
    EP/X028569/1
  • Principal investigator:
  • Funding amount:
    $1.5531M
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project category:
    Research Grant
  • Fiscal year:
    2023
  • Funding country:
    United Kingdom
  • Duration:
    2023 to (no data)
  • Project status:
    Ongoing

Project Summary

In this project, we design and develop Human-Robot Teams (using experiments with real robots and modelling with Reinforcement Learning) to conduct urban search and related activities. A team will consist of 1-3 human operators and 2-6 robots. We extend the definition of a 'team' beyond the robots and humans on the ground. Drawing an analogy with the management of major incidents in UK Emergency Services, operational activity is performed at the 'bronze' level, i.e., by the local human-robot team, which is overseen by tactical coordinators at the 'silver' level (e.g., providing guidance on legal or other constraints) and which answers to high-level strategic command at the 'gold' level (e.g., redefining goals for the mission). In this way the 'team' is more than local coordination, and trust applies through the command hierarchy as well as horizontally across each level. Communication may be intermittent, and the mission's goals and constraints might change during the mission; this is a further driver of variation in trust, alongside mission, activity, and situation. Each team member, human or robot, will be allocated tasks within the team and will perform these autonomously. Key to team performance will be the ability to acquire and maintain Distributed Situation Awareness: team members will have their own interpretation of the situation as they see it, and their own interpretation of the behaviour of their teammates. Teammate behaviour can be inferred from observation of what teammates are doing in a given situation and whether this is to be expected, which creates behavioural markers of trust. We also consider the confidence with which teammates express their Situation Awareness, e.g., in terms of their interpretation of the data they perceive in the situation.
From the interpretation of teammate behaviour, we explore appropriately scaled trust, using the concept of a 'ladder of trust' on which trust moves up and down depending on the quality of situation awareness, the behaviour of teammates, and the threat posed by the situation. From the Distributed Situation Awareness, we also explore counterfactual ('what-if') reasoning to cope with uncertain and ambiguous situations, where ambiguity might relate to permissions and rights to perform tasks, to the consequences of an action, or to the Situation Awareness itself.
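The 'ladder of trust' described above can be pictured as a bounded, discrete scale that an observer moves along one rung at a time. The sketch below is purely illustrative: the rung names, weights, and update rule are assumptions for exposition, not the project's published design.

```python
# Illustrative sketch of a 'ladder of trust': trust moves up or down a
# discrete ladder depending on situation-awareness quality, teammate
# behaviour, and situational threat. All rung names and weights are
# assumed values, not part of the project's specification.

RUNGS = ["no trust", "low", "guarded", "moderate", "high"]

def update_trust(rung, sa_quality, behaviour_as_expected, threat):
    """Return the new rung index after one observation.

    rung: current index into RUNGS
    sa_quality: 0.0-1.0, quality of the teammate's situation awareness
    behaviour_as_expected: True if observed behaviour matched expectation
    threat: 0.0-1.0, threat posed by the current situation
    """
    score = 0.0
    score += 1.0 if sa_quality > 0.5 else -1.0        # good SA pushes trust up
    score += 1.0 if behaviour_as_expected else -2.0   # surprises cost more than they earn
    score -= 1.0 if threat > 0.7 else 0.0             # high threat makes trust conservative
    step = 1 if score > 0 else (-1 if score < 0 else 0)
    return max(0, min(len(RUNGS) - 1, rung + step))   # stay on the ladder

rung = 2  # start at 'guarded'
for obs in [(0.9, True, 0.1), (0.8, True, 0.2), (0.3, False, 0.9)]:
    rung = update_trust(rung, *obs)
print(RUNGS[rung])  # → moderate
```

The asymmetric weights encode the common finding that trust is lost faster than it is gained; the clamp keeps trust "appropriately scaled" rather than unbounded.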

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)


Other publications by Chris Baber

Experimental studies in a reconfigurable C4 test-bed for network enabled capability
  • DOI:
  • Publication year:
    2006
  • Journal:
  • Impact factor:
    0
  • Authors:
    N. Stanton;G. Walker;P. Salmon;S. Gulliver;D. Jenkins;Darshna Ladva;Laura Rafferty;M. Young;S. Watts;Chris Baber
  • Corresponding author:
    Chris Baber
Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems
  • DOI:
  • Publication year:
    2020
  • Journal:
  • Impact factor:
    0
  • Authors:
    Sagir Muhammad Yusuf;Chris Baber
  • Corresponding author:
    Chris Baber
What You See Is What You Do: Applying Ecological Interface Design to Visual Analytics
  • DOI:
  • Publication year:
    2015
  • Journal:
  • Impact factor:
    0
  • Authors:
    Natan Morar;Chris Baber;Peter Bak;Adam Duncan
  • Corresponding author:
    Adam Duncan
Cooking guide: Direct and indirect form of interaction in the digital kitchen environment
  • DOI:
  • Publication year:
    2015
  • Journal:
  • Impact factor:
    0
  • Authors:
    K. Azir;Chris Baber;M. Jusoh
  • Corresponding author:
    M. Jusoh
A Digital Alternative to the TNO Stereo Test to Qualify Military Aircrew.
  • DOI:
    10.3357/amhp.6111.2022
  • Publication year:
    2022
  • Journal:
  • Impact factor:
    0.9
  • Authors:
    B. Posselt;Eric S. Seemiller;M. Winterbottom;Chris Baber;S. Hadley
  • Corresponding author:
    S. Hadley


Similar Overseas Grants

ERI: Developing a Trust-supporting Design Framework with Affect for Human-AI Collaboration
  • Grant number:
    2301846
  • Fiscal year:
    2023
  • Funding amount:
    $1.5531M
  • Project category:
    Standard Grant
CAREER: Physiological Modeling of Longitudinal Human Trust in Autonomy for Operational Environments
  • Grant number:
    2238977
  • Fiscal year:
    2023
  • Funding amount:
    $1.5531M
  • Project category:
    Continuing Grant
Trust in User-generated Evidence: Analysing the Impact of Deepfakes on Accountability Processes for Human Rights Violations (TRUE)
  • Grant number:
    EP/X016021/1
  • Fiscal year:
    2022
  • Funding amount:
    $1.5531M
  • Project category:
    Research Grant
CAREER: Enhancing Trust-Driven Human-Autonomy Interaction: Modeling Trust Dynamics and Supporting Trust Calibration
  • Grant number:
    2045009
  • Fiscal year:
    2021
  • Funding amount:
    $1.5531M
  • Project category:
    Standard Grant
CRII: CPS: A Bi-Trust Framework for Collaboration-Quality Improvement in Human-Robot Collaborative Contexts
  • Grant number:
    2104742
  • Fiscal year:
    2021
  • Funding amount:
    $1.5531M
  • Project category:
    Standard Grant
Trust and Explainable AI in Human-Machine Interaction
  • Grant number:
    2859094
  • Fiscal year:
    2021
  • Funding amount:
    $1.5531M
  • Project category:
    Studentship
A Novel AI-Human Teaming Approach to Trust and Cooperation in AI-Cybersecurity Education
  • Grant number:
    2121559
  • Fiscal year:
    2021
  • Funding amount:
    $1.5531M
  • Project category:
    Standard Grant
Minimising Human Efforts to Fight Fake News and Restore the Public Trust
  • Grant number:
    DE200101465
  • Fiscal year:
    2020
  • Funding amount:
    $1.5531M
  • Project category:
    Discovery Early Career Researcher Award
Trust and Safety in Autonomous Mobility Systems: A Human-centred Approach
  • Grant number:
    DP200102604
  • Fiscal year:
    2020
  • Funding amount:
    $1.5531M
  • Project category:
    Discovery Projects
PATH-AI: Mapping an Intercultural Path to Privacy, Agency, and Trust in Human-AI Ecosystems
  • Grant number:
    ES/T007354/1
  • Fiscal year:
    2020
  • Funding amount:
    $1.5531M
  • Project category:
    Research Grant