LOCATE: LOcation adaptive Constrained Activity recognition using Transfer learning
Basic Information
- Grant number: EP/N033779/1
- Principal investigator:
- Amount: $125,000
- Host institution:
- Host institution country: United Kingdom
- Project category: Research Grant
- Fiscal year: 2016
- Funding country: United Kingdom
- Duration: 2016 to (no data)
- Project status: Completed
- Source:
- Keywords:
Project Summary
It is estimated that there are six million surveillance cameras in the UK, with only 17% of them publicly operated. Increasingly, people are installing CCTV cameras in their homes for security or for remote monitoring of the elderly, infants or pets. Despite this increase, the overwhelming majority of these cameras are used only for evidence gathering or live viewing. These sensors are currently incapable of providing smart monitoring, such as identifying an infant in danger or a dehydrated elderly person. Similarly, CCTV in public places is mostly used for evidence gathering. Following years of research, methods capable of automatically recognising activities of interest, such as a person departing a service station without paying for refuelling the car, or one tampering with a fuel dispenser, are now available, achieving acceptable levels of success with low false-alarm rates. Though automatic after installation, the installation process not only requires putting the hardware in place but also involves an expert studying the footage and designing a model suited to the monitored location. At each new location, e.g. each new service station, a new model is needed, requiring the effort and time of an expert. This is expensive, difficult to scale and at times implausible, as in the case of home monitoring. This requirement to build location-specific models is currently limiting the adoption of automatic activity recognition, despite its potential benefits.
This project, LOCATE, proposes an algorithmic solution capable of taking a model pre-built for one location, using it in a different location, and adapting it by simply observing the new scene for a few days. The solution is inspired by the human ability to intelligently apply previously acquired knowledge to solve new challenges. The researchers will work with senior scientists from two leading UK video analytics industrial partners, QinetiQ and Thales. Using these partners' expertise, the project will provide practical and valuable insight that can further boost the strong UK video analytics industry. The United Kingdom is currently a global player in the video analytics market, and the leading country in the Europe, Middle East and Africa (EMEA) region.
The method will be applicable to various domains, including home monitoring and CCTV in public places. To evaluate the proposed approach for home monitoring, LOCATE will work alongside the EPSRC-funded project SPHERE, which aims to develop and deploy a sensor-based platform for residential healthcare in and around Bristol. The findings of LOCATE will be integrated within the SPHERE platform, towards automatic monitoring of activities of daily living in a new home, such as preparing a meal, eating or taking medication. The targeted plug-and-play approach will enable a non-expert user to set up a camera and automatically detect, for example, whether an elderly person in the home has had their meal and medication. A shop owner can similarly detect pickpocketing attempts in their store. The community can thus make better use of the network of visual sensors already in place.
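The summary describes the approach only at a high level: reuse an activity-recognition model built for one location and adapt it by observing the new scene for a short period. As a purely illustrative sketch of that general transfer-learning pattern, and not the project's actual algorithm, the Python snippet below freezes a hypothetical pre-trained backbone and fine-tunes only a small classification head on data assumed to come from the new location; `PretrainedActivityNet`, `adapt_to_new_location` and the toy tensors are placeholders invented for this example.

```python
# Illustrative only: the "reuse a pre-built model, adapt it to a new scene" pattern
# from the project summary, sketched as supervised fine-tuning in PyTorch.
# PretrainedActivityNet and adapt_to_new_location are hypothetical placeholders,
# not part of the LOCATE project or its publications.
import torch
import torch.nn as nn

class PretrainedActivityNet(nn.Module):
    """Stand-in for an activity classifier: a feature backbone plus a class head."""
    def __init__(self, feat_dim: int = 512, num_classes: int = 10):
        super().__init__()
        # In practice the backbone would be a video network; a linear layer stands in here.
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.head = nn.Linear(256, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(feats))

def adapt_to_new_location(model: PretrainedActivityNet,
                          new_location_feats: torch.Tensor,
                          labels: torch.Tensor,
                          epochs: int = 5) -> PretrainedActivityNet:
    """Freeze the source-location backbone and fine-tune only the head on the new scene."""
    for p in model.backbone.parameters():
        p.requires_grad = False
    optimiser = torch.optim.Adam(model.head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(new_location_feats), labels)
        loss.backward()
        optimiser.step()
    return model

if __name__ == "__main__":
    # Toy usage: 20 feature vectors "observed" at the new location, labelled over 10 classes.
    model = PretrainedActivityNet()
    feats = torch.randn(20, 512)
    labels = torch.randint(0, 10, (20,))
    adapt_to_new_location(model, feats, labels)
```

The publications listed under Project Outcomes, such as Multi-Modal Domain Adaptation for Fine-Grained Action Recognition, pursue more sophisticated domain-adaptation strategies than this simple supervised fine-tuning sketch.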
Project Outcomes
Journal articles: 10
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0
Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
- DOI: 10.1109/iccv.2019.00054
- Publication date: 2019-08
- Journal:
- Impact factor: 0
- Authors: Michael Wray;Diane Larlus;G. Csurka;D. Damen
- Corresponding author: Michael Wray;Diane Larlus;G. Csurka;D. Damen
Rescaling Egocentric Vision: Collection, Pipeline and Challenges for EPIC-KITCHENS-100
- DOI: 10.1007/s11263-021-01531-2
- Publication date: 2021-10-20
- Journal:
- Impact factor: 19.5
- Authors: Damen, Dima;Doughty, Hazel;Wray, Michael
- Corresponding author: Wray, Michael
The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines
- DOI: 10.1109/tpami.2020.2991965
- Publication date: 2021-11-01
- Journal:
- Impact factor: 23.6
- Authors: Damen, Dima;Doughty, Hazel;Wray, Michael
- Corresponding author: Wray, Michael
Multi-Modal Domain Adaptation for Fine-Grained Action Recognition
- DOI: 10.1109/cvpr42600.2020.00020
- Publication date: 2020-01
- Journal:
- Impact factor: 0
- Authors: Jonathan Munro;D. Damen
- Corresponding author: Jonathan Munro;D. Damen
Learning Visual Actions Using Multiple Verb-Only Labels
- DOI:
- Publication date: 2019-07
- Journal:
- Impact factor: 0
- Authors: Michael Wray;D. Damen
- Corresponding author: Michael Wray;D. Damen
Other Publications by Dima Damen
Correspondence, Matching and Recognition
- DOI: 10.1007/s11263-015-0827-8
- Publication date: 2015-05-14
- Journal:
- Impact factor: 9.300
- Authors: Tilo Burghardt;Dima Damen;Walterio Mayol-Cuevas;Majid Mirmehdi
- Corresponding author: Majid Mirmehdi
Cognitive Robotics Systems
- DOI: 10.1007/s10846-015-0244-9
- Publication date: 2015-06-03
- Journal:
- Impact factor: 2.800
- Authors: Lazaros Nalpantidis;Renaud Detry;Dima Damen;Gabriele Bleser;Maya Cakmak;Mustafa Suphi Erden
- Corresponding author: Mustafa Suphi Erden
Explaining Activities as Consistent Groups of Events
- DOI: 10.1007/s11263-011-0497-0
- Publication date: 2011-10-05
- Journal:
- Impact factor: 9.300
- Authors: Dima Damen;David Hogg
- Corresponding author: David Hogg
Other Grants Held by Dima Damen
UMPIRE: United Model for the Perception of Interactions in visuoauditory REcognition
- Grant number: EP/T004991/1
- Fiscal year: 2020
- Funding amount: $125,000
- Project category: Fellowship
Similar NSFC (National Natural Science Foundation of China) Grants
Research on fuzzy techniques for spatial co-location pattern mining
- Grant number: 61966036
- Approval year: 2019
- Funding amount: CNY 400,000
- Project category: Regional Science Fund Project
Research on domain-driven spatial co-location pattern mining techniques
- Grant number: 61472346
- Approval year: 2014
- Funding amount: CNY 800,000
- Project category: General Program
Research on co-location mining with imprecise probabilities and constraints, and its visualisation
- Grant number: 61272126
- Approval year: 2012
- Funding amount: CNY 200,000
- Project category: General Program
Research on spatial co-location pattern mining techniques for uncertain data
- Grant number: 61063008
- Approval year: 2010
- Funding amount: CNY 230,000
- Project category: Regional Science Fund Project
Similar Overseas Grants
Unlicensed Low-Power Wide Area Networks for Location-based Services
- Grant number: 24K20765
- Fiscal year: 2024
- Funding amount: $125,000
- Project category: Grant-in-Aid for Early-Career Scientists
CAREER: AF: Algorithms for Facility Location Problems with Uncertainty
- Grant number: 2339371
- Fiscal year: 2024
- Funding amount: $125,000
- Project category: Continuing Grant
Doctoral Dissertation Research: Predicting the location of hominin cave fossil sites with a machine learning approach
- Grant number: 2341328
- Fiscal year: 2024
- Funding amount: $125,000
- Project category: Standard Grant
IENgine: developing a subscription-based membership application to connect live/location-based immersive experience (LIE) creatives with employers
- Grant number: ES/Y011104/1
- Fiscal year: 2024
- Funding amount: $125,000
- Project category: Research Grant
P.E.A.R.L. (Project - Enterprise Asset Retrieval & Location)
- Grant number: 83001498
- Fiscal year: 2023
- Funding amount: $125,000
- Project category: Innovation Loans
Robust robot location and behaviors for on-farm navigation
- Grant number: 10073593
- Fiscal year: 2023
- Funding amount: $125,000
- Project category: Collaborative R&D
ChapARone: No-code Augmented Reality CMS with location-based WebAR/AR for Stations
- Grant number: 10089927
- Fiscal year: 2023
- Funding amount: $125,000
- Project category: Collaborative R&D
Elucidation of the selection process in the location of nuisance facilities and the proposal for the future location decision process by political geography
- Grant number: 23K00989
- Fiscal year: 2023
- Funding amount: $125,000
- Project category: Grant-in-Aid for Scientific Research (C)
Illumination of TAAR2 Location, Function and Regulators
- Grant number: 10666759
- Fiscal year: 2023
- Funding amount: $125,000
- Project category:
Dynamic location equilibrium problem with the recursive structure of multi-entity in disaster prone areas
- Grant number: 23KJ0771
- Fiscal year: 2023
- Funding amount: $125,000
- Project category: Grant-in-Aid for JSPS Fellows