III: Medium: Collaborative Research: Exploiting Context in Cartographic Evolutionary Documents to Extract and Build Linked Spatial-Temporal Datasets


Basic Information

  • Award Number:
    1563933
  • Principal Investigator:
  • Amount:
    $321,200
  • Host Institution:
  • Host Institution Country:
    United States
  • Award Type:
    Continuing Grant
  • Fiscal Year:
    2016
  • Funding Country:
    United States
  • Project Period:
    2016-09-01 to 2020-08-31
  • Project Status:
    Completed

Project Abstract

Millions of historical maps are in digital archives today. For example, the U.S. Geological Survey has created and scanned over 200,000 topographic maps covering a 125-year period. Maps are a form of "evolutionary visual documents" because they display landscape changes over long periods of time and across large areas. Such documents are of tremendous value because they provide a high-resolution window into the past at a continental scale. Unfortunately, without time-intensive manual digitization, scanned maps are unusable for research purposes. Map features, such as wetlands and roads, while readable by humans, are only available as images. This interdisciplinary collaborative project, involving researchers and their students at the University of Southern California and the University of Colorado, Boulder, will develop a set of open-source technologies and tools that allow users to extract map features from a large number of map sheets and track changes of features between map editions in a Geographical Information System. The resulting open-source tools will enable exciting new forms of research and learning in history, demography, economics, sociology, ecology, and other disciplines. The data produced by this project will be made publicly available and, through case studies, integrated with other historical archives. Spatially and temporally linked knowledge covering man-made and natural features over more than 125 years holds enormous potential for the physical and social sciences. The wealth of information contained in these maps is unique, especially for the time before the widespread use of aerial photography. The ability to automatically transform the scanned paper maps stored in large archives into spatio-temporally linked knowledge will create an important resource for social and natural scientists studying global change and other socio-geographic processes that play out over large areas and long periods of time.
The research goal of this project is to develop a recognition and data integration framework that extracts, organizes, and links the knowledge found in visual documents that evolve over time, such as a map series. While past work has focused on feature extraction from single, well-conditioned map images, this framework will handle large-volume historical map archives for efficient, robust extraction of man-made and natural features and will link the features across time (map editions), space (map sheets), and scale. The framework will perform recognition in maps with poor graphical quality by exploiting contextual information in the form of linked knowledge. This contextual information either comes from existing spatial data sources or is extracted from more recent, high-quality map editions, and it can be used to improve and refine the training steps for automatically processing the maps in an archive. The framework also exploits knowledge of the semantic relationships between features to increase the robustness, efficiency, and degree of automation of the methods developed, and to characterize uncertainty in the extracted data as well as in the links between extracted data across space, time, and scale. The project will validate the methods through case studies that evaluate the extracted, fully linked data collections for major feature types (built-up areas, infrastructure, hydrography, and vegetation) from both USGS and Ordnance Survey maps. The researchers will use multiple study regions that represent different histories of landscape evolution and transitions driven by processes such as urbanization and its effects on rural and wild landscapes (e.g., the I-95 megapolitan urban corridor). Publications, software, and datasets for this project will be made available on the project website (http://spatial-computing.github.io/unlocking-spatiotemporal-map-data).
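
The framework described above rests on one key technical idea: contextual information from existing spatial data sources (or from newer, higher-quality map editions) can stand in for manual labels when training extractors for poorly conditioned historical scans. The short Python sketch below illustrates that general idea under stated assumptions; it is not the project's implementation. It rasterizes present-day vector features onto the pixel grid of a georeferenced scanned map to obtain weak training labels and then fits a simple per-pixel classifier. The file names (usgs_quad_1912_georef.tif, nhd_wetlands.gpkg), the choice of wetlands as the target feature class, and the use of a random forest are all hypothetical.

```python
import geopandas as gpd
import rasterio
from rasterio.features import rasterize
from sklearn.ensemble import RandomForestClassifier

# 1. Open a georeferenced scan of an older map edition (hypothetical file name).
with rasterio.open("usgs_quad_1912_georef.tif") as src:
    img = src.read()                           # pixel data, shape (bands, rows, cols)
    transform, crs = src.transform, src.crs    # grid geometry of the scan
    shape = (src.height, src.width)

# 2. Rasterize present-day vector features onto the scan's pixel grid to get
#    approximate ("weak") labels -- the contextual information drawn from an
#    existing spatial data source (hypothetical file name).
wetlands = gpd.read_file("nhd_wetlands.gpkg").to_crs(crs)
labels = rasterize(
    ((geom, 1) for geom in wetlands.geometry),
    out_shape=shape, transform=transform, fill=0, dtype="uint8",
)

# 3. Fit a simple per-pixel classifier on the weak labels, then predict a
#    feature mask for the historical scan itself.
X = img.reshape(img.shape[0], -1).T            # (n_pixels, n_bands)
y = labels.ravel()
clf = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(X, y)
pred_mask = clf.predict(X).reshape(shape)      # 1 = predicted wetland pixel
```

In the project's framing, labels derived this way from modern data or recent map editions would bootstrap recognition of the same feature type in older, noisier editions; linking the extracted features across editions, sheets, and scales is a separate step not shown here.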

Project Outcomes

Journal Articles (1)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Combining Remote-Sensing-Derived Data and Historical Maps for Long-Term Back-Casting of Urban Extents
  • DOI:
    10.3390/rs13183672
  • Publication Date:
    2021-07
  • Journal:
    Remote Sensing
  • Impact Factor:
    5
  • Authors:
    Johannes H. Uhl;S. Leyk;Zekun Li;Weiwei Duan;Basel Shbita;Yao-Yi Chiang;Craig A. Knoblock
  • Corresponding Author:
    Johannes H. Uhl;S. Leyk;Zekun Li;Weiwei Duan;Basel Shbita;Yao-Yi Chiang;Craig A. Knoblock

Other Publications by Stefan Leyk

Effects of varying temporal scale on spatial models of mortality patterns attributed to pediatric diarrhea.
Flood risk and the built environment: big property data for environmental justice and social vulnerability analysis
  • DOI:
    10.1007/s11111-025-00485-8
  • Publication Date:
    2025-02-26
  • Journal:
    Population and Environment
  • Impact Factor:
    2.500
  • Authors:
    Yilei Yu;Aaron Flores;Dylan Connor;Sara Meerow;Anna E. Braswell;Stefan Leyk
  • Corresponding Author:
    Stefan Leyk


Other Grants by Stefan Leyk

Collaborative Research: HNDS-I: Data Infrastructure for Research on Historical Settlement and Population Growth in the United States
  • Award Number:
    2121976
  • Fiscal Year:
    2021
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
The Creeping Disaster along the Coast: Built Environment, Coastal Communities and Population Vulnerability to Sea Level Rise
  • Award Number:
    1924670
  • Fiscal Year:
    2019
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant

Similar International Grants

III: Medium: Collaborative Research: From Open Data to Open Data Curation
  • Award Number:
    2420691
  • Fiscal Year:
    2024
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
Collaborative Research: III: Medium: Designing AI Systems with Steerable Long-Term Dynamics
  • Award Number:
    2312865
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
Collaborative Research: III: MEDIUM: Responsible Design and Validation of Algorithmic Rankers
  • Award Number:
    2312932
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
Collaborative Research: III: Medium: Algorithms for scalable inference and phylodynamic analysis of tumor haplotypes using low-coverage single cell sequencing data
  • Award Number:
    2415562
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
III: Medium: Collaborative Research: Integrating Large-Scale Machine Learning and Edge Computing for Collaborative Autonomous Vehicles
  • Award Number:
    2348169
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Continuing Grant
Collaborative Research: III: Medium: VirtualLab: Integrating Deep Graph Learning and Causal Inference for Multi-Agent Dynamical Systems
  • Award Number:
    2312501
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
Collaborative Research: III: Medium: Knowledge discovery from highly heterogeneous, sparse and private data in biomedical informatics
  • Award Number:
    2312862
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
Collaborative Research: III: MEDIUM: Responsible Design and Validation of Algorithmic Rankers
  • Award Number:
    2312930
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
Collaborative Research: III: Medium: New Machine Learning Empowered Nanoinformatics System for Advancing Nanomaterial Design
  • Award Number:
    2347592
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant
Collaborative Research: IIS: III: MEDIUM: Learning Protein-ish: Foundational Insight on Protein Language Models for Better Understanding, Democratized Access, and Discovery
  • Award Number:
    2310113
  • Fiscal Year:
    2023
  • Funding Amount:
    $321,200
  • Award Type:
    Standard Grant