Multi-Object Video Behaviour Modelling for Abnormality Detection and Differentiation
Basic Information
- Grant number: EP/G063974/1
- Principal investigator:
- Amount: $446,600
- Host institution:
- Host institution country: United Kingdom
- Project type: Research Grant
- Fiscal year: 2009
- Funding country: United Kingdom
- Duration: 2009 to (no data)
- Status: Completed
- Source:
- Keywords:
Project Abstract
There are over 4.2 million closed-circuit television (CCTV) surveillance cameras operational in the UK, and many more worldwide, collecting a colossal amount of video data for security, safety, and infrastructure and facility management purposes. A typical existing CCTV system relies on a handful of human operators in a centralised control room to monitor video inputs from hundreds of cameras. With too many cameras and too few operators, such a system is ill-equipped to detect events and anomalies that require an immediate and appropriate response. Consequently, the use of existing CCTV surveillance systems is limited predominantly to post-mortem analysis. There is thus an increasing demand for automated intelligent systems that analyse the content of vast quantities of surveillance video and trigger alarms in a timely and robust fashion. One of the most critical components of such a system monitors object behaviour captured in the videos and detects/predicts any suspicious or abnormal behaviour that could pose a threat to public safety and security. This project aims to develop the underpinning capabilities for an innovative intelligent video analytics system for detecting abnormal video behaviour in public spaces. More specifically, the project will address three open problems:
1. To develop a new model of spatio-temporal visual context for abnormal behaviour detection. Behaviours are inherently context-aware, shaped by constraints imposed by scene layout and by the temporal nature of activities in a given scene. Consequently, the same behaviour can be deemed either normal or abnormal depending on where and when it occurs. We aim to go beyond state-of-the-art semantic scene modelling approaches, most of which focus solely on modelling scene layout such as entry and exit points, by developing a more comprehensive spatio-temporal model of dynamic visual context.
2. To develop a novel multi-object behaviour model for real-time detection and differentiation of abnormalities in complex video behaviours that involve multiple objects interacting with each other (e.g. a group of people meet in front of a ticket office at a train station and then go to different platforms).
3. To develop a novel online adaptive learning algorithm for estimating the parameters of the behaviour model. Although video abnormality detection tools are already available in many existing CCTV control systems, human operators are often reluctant to use them because too many parameters must be tuned and re-tuned for different scenarios as the visual context changes. With the incremental and adaptive learning algorithm, our behaviour model can be used across different surveillance scenarios over a long period of time with minimal human intervention. More importantly, using this algorithm, our behaviour model will adapt both to changes of visual context (and therefore to the definition of normality/abnormality) and to valuable feedback from human operators on the model's abnormality detection output.
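The third aim, online adaptive parameter learning, can be illustrated with a minimal sketch. The class below is purely hypothetical (the abstract does not specify the project's actual algorithm): it maintains an exponentially forgotten Gaussian estimate of a single scalar behaviour feature, such as object speed, so that the notion of "normal" adapts as the visual context drifts and an abnormality score can be read off at any time without retraining.

```python
import math

class OnlineBehaviourModel:
    """Hypothetical sketch of incremental behaviour-model learning.

    Maintains an exponentially forgotten estimate of the mean and
    variance of one scalar behaviour feature (e.g. object speed).
    This only illustrates the general idea of online adaptation,
    not the algorithm proposed in the project.
    """

    def __init__(self, forgetting=0.01):
        self.forgetting = forgetting  # higher = adapt faster to context drift
        self.mean = 0.0
        self.var = 1.0
        self.initialised = False

    def update(self, x):
        """Adapt the parameters to one new observation (no retraining)."""
        if not self.initialised:
            self.mean, self.initialised = x, True
            return
        a = self.forgetting
        delta = x - self.mean
        self.mean += a * delta
        # Exponentially weighted variance update.
        self.var = (1 - a) * (self.var + a * delta * delta)

    def abnormality(self, x):
        """Normalised deviation from learnt normality; large = abnormal."""
        return abs(x - self.mean) / math.sqrt(self.var)


# Feed a stream of "normal" speed observations, then score new ones.
model = OnlineBehaviourModel(forgetting=0.05)
for speed in [1.0, 1.1, 0.9, 1.05, 0.95] * 20:
    model.update(speed)
print(model.abnormality(1.0))  # small: consistent with learnt normality
print(model.abnormality(5.0))  # large: flagged as abnormal
```

Because each update costs constant time and discards old statistics geometrically, the same model can run unattended on a long video stream, which is the operational property the abstract emphasises.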
Project Outputs
- Journal articles: 8
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other Publications by Tao Xiang
Practical cloud storage auditing using serverless computing
- DOI: 10.1007/s11432-022-3597-3
- Published: 2023-10
- Journal:
- Impact factor: 0
- Authors: Fei Chen; Jianquan Cai; Tao Xiang; Xiaofeng Liao
- Corresponding author: Xiaofeng Liao

TrustBuilder: A non-repudiation scheme for IoT cloud applications
- DOI: 10.1016/j.cose.2022.102664
- Published: 2022-02
- Journal:
- Impact factor: 5.6
- Authors: Fei Chen; Jiahao Wang; Jianqiang Li; Yang Xu; Cheng Zhang; Tao Xiang
- Corresponding author: Tao Xiang

The Terahertz Metamaterial Sensor for Imidacloprid Detection
- DOI: 10.1002/mmce.22840
- Published: 2020-12
- Journal:
- Impact factor: 1.7
- Authors: Yang Jun; Qi Limei; Uqaili Junaid Ahmed; Shi Dan; Yin Lu; Liu Ziyu; Tao Xiang; Dai Linlin; Lan Chuwen
- Corresponding author: Lan Chuwen

A training-integrity privacy-preserving federated learning scheme with trusted execution environment
- DOI: 10.1016/j.ins.2020.02.037
- Published: 2020-06
- Journal:
- Impact factor: 8.1
- Authors: Yu Chen; Fang Luo; Tong Li; Tao Xiang; Zheli Liu; Jin Li
- Corresponding author: Jin Li

Core-shell Mg66Zn30Ca4 bulk metallic glasses composites reinforced by Fe with high strength and controllable degradation
- DOI: 10.1016/j.intermet.2021.107334
- Published: 2021-11
- Journal:
- Impact factor: 4.4
- Authors: Kun Li; Zeyun Cai; Peng Du; Tao Xiang; Xinxin Yang; Guoqiang Xie
- Corresponding author: Guoqiang Xie
Similar Overseas Grants
NeTS: Medium: Object-Centric, View-Adaptive and Progressive Coding and Streaming of Point Cloud Video
- Grant number: 2312839
- Fiscal year: 2023
- Amount: $446,600
- Project type: Continuing Grant

RINGS: Object-Oriented Video Analytics for Next-Generation Mobile Environments
- Grant number: 2147909
- Fiscal year: 2022
- Amount: $446,600
- Project type: Continuing Grant

Audio-visual object-based dynamic scene representation from monocular video
- Grant number: 2701695
- Fiscal year: 2022
- Amount: $446,600
- Project type: Studentship

Object State Recognition via Multi-Modal Analysis of Videos and Video Caption Sequences
- Grant number: 22K21296
- Fiscal year: 2022
- Amount: $446,600
- Project type: Grant-in-Aid for Research Activity Start-up

PhD in Computer Science - Video Object Segmentation (VOS) In Egocentric Video
- Grant number: 2615063
- Fiscal year: 2021
- Amount: $446,600
- Project type: Studentship

Object Detection and Recognition in Active Video
- Grant number: RGPIN-2016-03939
- Fiscal year: 2021
- Amount: $446,600
- Project type: Discovery Grants Program - Individual

Object Detection and Recognition in Active Video
- Grant number: RGPIN-2016-03939
- Fiscal year: 2020
- Amount: $446,600
- Project type: Discovery Grants Program - Individual

Object-based motion estimation for highly efficient streaming video
- Grant number: DP190103717
- Fiscal year: 2019
- Amount: $446,600
- Project type: Discovery Projects

Object Detection and Recognition in Active Video
- Grant number: RGPIN-2016-03939
- Fiscal year: 2019
- Amount: $446,600
- Project type: Discovery Grants Program - Individual

Object Detection and Recognition in Active Video
- Grant number: RGPIN-2016-03939
- Fiscal year: 2018
- Amount: $446,600
- Project type: Discovery Grants Program - Individual