Verifiable Autonomy
Basic Information
- Grant number: EP/L024845/1
- Principal investigator:
- Amount: $816,500
- Host institution:
- Host institution country: United Kingdom
- Project type: Research Grant
- Fiscal year: 2014
- Funding country: United Kingdom
- Duration: 2014 to (no data)
- Status: Completed
- Source:
- Keywords:
Project Abstract
Autonomy is surely a core theme of technology in the 21st century. Within 20 years, we expect to see fully autonomous vehicles, aircraft, robots, devices, swarms, and software, all of which will (and must) be able to make their own decisions without direct human intervention. The economic implications are enormous: for example, the global civil unmanned air-vehicle (UAV) market has been estimated at £6B over the next 10 years, while the worldwide market for robotic systems is expected to exceed $50B by 2025.

This potential is both exciting and frightening. Exciting, in that this technology allows us to develop systems and tackle tasks well beyond current possibilities. Frightening, in that the control of these systems is now taken away from us. How do we know that they will work? How do we know that they are safe? And how can we trust them? All of these questions are impossible to answer with current technology. We cannot say that such systems are safe, will not deliberately try to injure humans, and will always try their best to keep humans safe. Without such guarantees, these new technologies will neither be allowed by regulators nor accepted by the public.

Imagine that we had a generic architecture for autonomous systems in which the choices the system makes could be guaranteed, and those guarantees were backed by strong mathematical proof. With such an architecture, upon which our autonomous systems (be they robots, vehicles, or software) can be based, we could indeed guarantee that our systems never intentionally act dangerously, will endeavour to be safe, and will, as far as possible, act in an ethical and trustworthy way. It is important to note that this is separate from the problem of how accurately the system understands its environment.

Due to inaccuracy in modelling the real world, we cannot say that a system will be absolutely safe or will definitely achieve something; instead we can say that it tries to be safe and decides to carry out a task to the best of its ability. This distinction is crucial: we can only prove that the system never decides to do the wrong thing; we cannot guarantee that accidents will never happen. Consequently, we also need the autonomous system to judge the quality of its own understanding and to act taking that judgement into account. We should also verify, by our methods, that the system's choices do not exacerbate any potential safety problems.

Our hypothesis is that by identifying and separating out the high-level decision-making component within autonomous systems, and providing comprehensive formal verification techniques for it, we can directly tackle questions of safety, ethics, legality and reliability. In this project, we build on internationally leading work on agent verification (Fisher), control and learning (Veres), safety and ethics (Winfield), and practical autonomous systems (Veres, Winfield) to advance the underlying verification techniques and so develop a framework that allows us to tackle questions such as those above. In developing autonomous systems for complex and unknown environments, being able to answer such questions is crucial.
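The core idea of separating out the decision-making component and verifying its choices exhaustively can be sketched in miniature. The following is a hypothetical toy, not the project's actual toolchain or any real agent verifier; all function and action names are illustrative assumptions. The guarantee it checks is about the agent's *decisions* over its belief states, not about whether those beliefs match the real world:

```python
# Toy sketch (illustrative only): isolate a high-level decision rule and
# exhaustively check that, for every belief state it can hold, it never
# *chooses* an action that violates a stated safety property.
from itertools import product

def decide(obstacle_believed: bool, sensor_degraded: bool) -> str:
    """Hypothetical decision rule: act conservatively when understanding is poor."""
    if obstacle_believed:
        return "stop"
    if sensor_degraded:       # low confidence in perception -> cautious choice
        return "slow_down"
    return "proceed"

def unsafe_choice(action: str, obstacle_believed: bool) -> bool:
    """Safety property over choices: never proceed while believing in an obstacle."""
    return action == "proceed" and obstacle_believed

# Exhaustive sweep of the finite belief space - a crude stand-in for model
# checking the decision component in isolation from perception accuracy.
violations = [
    (obs, deg)
    for obs, deg in product([True, False], repeat=2)
    if unsafe_choice(decide(obs, deg), obs)
]
assert not violations, f"unsafe choices found: {violations}"
print("verified: the agent never decides to act unsafely")
```

Note that the check says nothing about whether `obstacle_believed` is ever wrong about the world; it only proves the system never *decides* to do the wrong thing given what it believes, which is exactly the distinction drawn above.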
Project Outcomes
Journal articles (10)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Formal Methods. FM 2019 International Workshops - Porto, Portugal, October 7-11, 2019, Revised Selected Papers, Part I
- DOI: 10.1007/978-3-030-54994-7_16
- Published: 2020
- Journal:
- Impact factor: 0
- Authors: Alves G
- Corresponding author: Alves G
Reflections on Artificial Intelligence for Humanity
- DOI: 10.1007/978-3-030-69128-8_2
- Published: 2021
- Journal:
- Impact factor: 0
- Authors: Chatila R
- Corresponding author: Chatila R
On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots
- DOI: 10.1109/jproc.2019.2898267
- Published: 2019-03-01
- Journal:
- Impact factor: 20.6
- Authors: Bremner, Paul; Dennis, Louise A.; Winfield, Alan F.
- Corresponding author: Winfield, Alan F.
Autonomous Nuclear Waste Management
- DOI: 10.1109/mis.2018.111144814
- Published: 2018-11-01
- Journal:
- Impact factor: 6.4
- Authors: Aitken, Jonathan M.; Veres, Sandor M.; Mort, Paul E.
- Corresponding author: Mort, Paul E.
Cake, Death, and Trolleys
- DOI: 10.1145/3278721.3278767
- Published: 2018
- Journal:
- Impact factor: 0
- Authors: Bjørgen E
- Corresponding author: Bjørgen E
Other Publications by Michael Fisher
Making Sense of the World: Models for Reliable Sensor-Driven Systems
- DOI:
- Published: 2018
- Journal:
- Impact factor: 0
- Authors: Muffy Calder; S. Dobson; Michael Fisher; J. Mccann
- Corresponding author: J. Mccann
The Concept of Self-Identity
- DOI:
- Published: 2014
- Journal:
- Impact factor: 0
- Authors: Michael Fisher; Martin Abbott; K. Lyytinen
- Corresponding author: K. Lyytinen
Optimizing revenue: Service Provisioning Systems with QoS Contracts
- DOI:
- Published: 2007
- Journal:
- Impact factor: 0
- Authors: J. Palmer; I. Mitrani; M. Mazzucco; P. McKee; Michael Fisher
- Corresponding author: J Palmer
Clausal Resolution for CTL*
- DOI:
- Published: 1999
- Journal:
- Impact factor: 0
- Authors: A. Bolotov; C. Dixon; Michael Fisher
- Corresponding author: Michael Fisher
Other Grants by Michael Fisher
Rapid: Impact of Hurricane Florence on Drinking Water Safety in Eastern and Central North Carolina: Rapid Assessment and Recommendations for Recovery and Resilience
- Grant number: 1903010
- Fiscal year: 2018
- Amount: $816,500
- Project type: Standard Grant
Network on the Verification and Validation of Autonomous Systems
- Grant number: EP/M027309/1
- Fiscal year: 2015
- Amount: $816,500
- Project type: Research Grant
NSF/CBMS Regional Conference in the Mathematical Sciences - The Mathematics of the Social and Behavioral Sciences
- Grant number: 1137949
- Fiscal year: 2012
- Amount: $816,500
- Project type: Standard Grant
Engineering Autonomous Space Software
- Grant number: EP/F037201/1
- Fiscal year: 2008
- Amount: $816,500
- Project type: Research Grant
Verifying Interoperability Requirements in Pervasive Systems
- Grant number: EP/F033567/1
- Fiscal year: 2008
- Amount: $816,500
- Project type: Research Grant
Model Checking Agent Programming Languages
- Grant number: EP/D052548/1
- Fiscal year: 2006
- Amount: $816,500
- Project type: Research Grant
Statistical Mechanics and Phase Transitions
- Grant number: 0301101
- Fiscal year: 2003
- Amount: $816,500
- Project type: Continuing Grant
Similar Overseas Grants
CAREER: Facilitating Autonomy of Robots Through Learning-Based Control
- Grant number: 2422698
- Fiscal year: 2024
- Amount: $816,500
- Project type: Continuing Grant
Collaborative Research: SLES: Guaranteed Tubes for Safe Learning across Autonomy Architectures
- Grant number: 2331878
- Fiscal year: 2024
- Amount: $816,500
- Project type: Standard Grant
DISTOPIA - Distorting the Aerospace Manufacturing Boundaries: Operational Integration of Autonomy on Titanium
- Grant number: 10086469
- Fiscal year: 2024
- Amount: $816,500
- Project type: Collaborative R&D
Perceptions, practices, autonomy levels and other variations among teachers who use AI teaching tools
- Grant number: 24K16628
- Fiscal year: 2024
- Amount: $816,500
- Project type: Grant-in-Aid for Early-Career Scientists
CAREER: Towards Safe and Interpretable Autonomy in Healthcare
- Grant number: 2340139
- Fiscal year: 2024
- Amount: $816,500
- Project type: Standard Grant
GAIA: Ground-Aerial maps Integration for increased Autonomy outdoors
- Grant number: EP/Y003438/1
- Fiscal year: 2024
- Amount: $816,500
- Project type: Research Grant
SaTC: CORE: Medium: Increasing user autonomy and advertiser and platform responsibility in online advertising
- Grant number: 2318290
- Fiscal year: 2024
- Amount: $816,500
- Project type: Continuing Grant
CPS: Medium: GOALI: Enabling Safe Innovation for Autonomy: Making Publish/Subscribe Really Real-Time
- Grant number: 2333120
- Fiscal year: 2024
- Amount: $816,500
- Project type: Standard Grant
Collaborative Research: SLES: Guaranteed Tubes for Safe Learning across Autonomy Architectures
- Grant number: 2331879
- Fiscal year: 2024
- Amount: $816,500
- Project type: Standard Grant
CAREER: Safe Autonomy for Soft Robots
- Grant number: 2340111
- Fiscal year: 2024
- Amount: $816,500
- Project type: Standard Grant