SPX: Collaborative Research: FASTLEAP: FPGA based compact Deep Learning Platform
Basic Information
- Award Number: 2333009
- Principal Investigator:
- Amount: $848,700
- Host Institution:
- Host Institution Country: United States
- Project Type: Standard Grant
- Fiscal Year: 2022
- Funding Country: United States
- Project Period: 2022-10-01 to 2024-11-30
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely adopted because of their high accuracy, excellent scalability, and self-adaptiveness. Many applications employ DNNs as their core technology, such as face detection, speech recognition, and scene parsing. To meet the accuracy requirements of these applications, DNN models are becoming deeper and larger, and are evolving at a fast pace. They are computation and memory intensive and pose severe challenges to the conventional Von Neumann architecture used in computing. The key problem addressed by the project is how to accelerate deep learning, not only inference but also training and model compression, which have not received enough attention in prior research. This endeavor has the potential to enable the design of fast and energy-efficient deep learning systems whose applications are found in our daily lives, ranging from autonomous driving through mobile devices to IoT systems, thus benefiting society at large.
The outcome of this project is FASTLEAP, a Field Programmable Gate Array (FPGA) based platform for accelerating deep learning. The platform takes a dataset as input and outputs a model that is trained, pruned, and mapped onto the FPGA, optimized for fast inference. The project will utilize emerging FPGA technologies that have access to High Bandwidth Memory (HBM) and provide floating-point DSP units. From a vertical perspective, FASTLEAP integrates innovations across the whole system stack, from the algorithm and architecture levels down to an efficient FPGA hardware implementation. From a horizontal perspective, it embraces systematic DNN model compression and the associated FPGA-based training, as well as FPGA-based inference acceleration of the compressed DNN models. The platform will be delivered as a complete solution, with both a software tool chain and a hardware implementation, to ensure ease of use. At the algorithm level of FASTLEAP, the proposed Alternating Direction Method of Multipliers for Neural Networks (ADMM-NN) framework will perform unified weight pruning and quantization, given training data, target accuracy, and target FPGA platform characteristics (performance models, inter-accelerator communication). The training procedure in ADMM-NN is performed on a platform with multiple FPGA accelerators, dictated by the architecture-level optimizations on communication and parallelism. Finally, the optimized FPGA inference design is generated from the trained and compressed DNN model, accounting for FPGA performance modeling.
The project will address the following SPX research areas: 1) Algorithms: bridging the gap between deep learning developments in theory and their system implementations, cognizant of the performance model of the platform. 2) Applications: scaling of deep learning for domains such as image processing. 3) Architecture and Systems: automatic generation of deep learning designs on FPGAs, optimizing area, energy efficiency, latency, and throughput. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
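At the algorithm level, the ADMM-NN step described above alternates between gradient-based training of the network loss augmented with a quadratic penalty term and a closed-form projection of the weights onto the compression constraint set (a sparsity pattern for pruning, or a set of allowed levels for quantization). Below is a minimal PyTorch sketch of the pruning case only; the function names, sparsity target, penalty coefficient rho, and training loop are illustrative assumptions, not the project's actual tool chain.

```python
# Illustrative sketch of ADMM-based weight pruning (assumed names and
# hyperparameters; not the FASTLEAP/ADMM-NN implementation).
import torch

def project_to_sparsity(w, sparsity):
    """Euclidean projection onto {W : at most (1 - sparsity) fraction nonzero}:
    keep the largest-magnitude entries, zero out the rest."""
    k = int(w.numel() * (1.0 - sparsity))  # number of weights to keep
    if k <= 0:
        return torch.zeros_like(w)
    thresh = w.abs().flatten().kthvalue(w.numel() - k + 1).values
    return torch.where(w.abs() >= thresh, w, torch.zeros_like(w))

def admm_prune(model, loss_fn, data_loader, sparsity=0.9, rho=1e-3,
               admm_iters=10, epochs_per_iter=1, lr=1e-3):
    """Alternate: (1) SGD on loss + rho/2 * ||W - Z + U||^2,
    (2) Z <- project(W + U), (3) U <- U + W - Z."""
    params = [p for p in model.parameters() if p.dim() > 1]  # weight tensors only
    Z = [project_to_sparsity(p.detach().clone(), sparsity) for p in params]
    U = [torch.zeros_like(p) for p in params]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(admm_iters):
        for _ in range(epochs_per_iter):
            for x, y in data_loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                # augmented-Lagrangian penalty pulls W toward the sparse target Z
                for p, z, u in zip(params, Z, U):
                    loss = loss + (rho / 2) * torch.norm(p - z + u) ** 2
                loss.backward()
                opt.step()
        # closed-form Z- and U-updates
        with torch.no_grad():
            for i, p in enumerate(params):
                Z[i] = project_to_sparsity(p + U[i], sparsity)
                U[i] = U[i] + p - Z[i]
    # commit the sparsity pattern found by ADMM as a final hard prune
    with torch.no_grad():
        for p in params:
            p.copy_(project_to_sparsity(p, sparsity))
    return model
```

Hypothetical usage: `admm_prune(model, torch.nn.functional.cross_entropy, train_loader, sparsity=0.9)`. Quantization can be handled with the same alternating structure by swapping the sparsity projection for a projection onto the allowed quantization levels; the multi-FPGA training and inference-design generation stages described in the abstract are outside the scope of this sketch.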
Project Outcomes
Journal Articles (2)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Dynasparse: Accelerating GNN Inference through Dynamic Sparsity Exploitation
- DOI: 10.1109/ipdps54959.2023.00032
- Publication Date: 2023-03
- Journal:
- Impact Factor: 0
- Authors: Bingyi Zhang;V. Prasanna
- Corresponding Author: Bingyi Zhang;V. Prasanna
A Framework for Monte-Carlo Tree Search on CPU-FPGA Heterogeneous Platform via on-chip Dynamic Tree Management
- DOI: 10.1145/3543622.3573177
- Publication Date: 2023
- Journal:
- Impact Factor: 0
- Authors: Meng, Yuan;Kannan, Rajgopal;Prasanna, Viktor
- Corresponding Author: Prasanna, Viktor
Other Publications by Xuehai Qian
Response characterization on the microstructure, and mechanical and corrosion behavior of clad rebars of different weld materials
- DOI: 10.1016/j.cscm.2025.e04316
- Publication Date: 2025-07-01
- Journal:
- Impact Factor: 6.600
- Authors: Zecheng Zhuang;Xuehai Qian;Lei Zeng;Weiping Lu;Zhen Li;Yong Xiang
- Corresponding Author: Yong Xiang
Effects of varying weld speeds on the microstructure, mechanical properties, and corrosion behavior of clad rebars in a marine environment
- DOI: 10.1038/s41598-025-08448-7
- Publication Date: 2025-07-02
- Journal:
- Impact Factor: 3.900
- Authors: Zecheng Zhuang;Weiping Lu;Zhe Gou;Lei Zeng;Xuehai Qian;Rifeng Wang;Erte Lin;Zhen Li;Yong Xiang;Jianping Tan
- Corresponding Author: Jianping Tan
Graph Transformer for Quantum Circuit Reliability Prediction
- DOI:
- Publication Date: 2022
- Journal:
- Impact Factor: 0
- Authors: Hanrui Wang;Pengyu Liu;Jinglei Cheng;Zhiding Liang;Jiaqi Gu;Zi;Yongshan Ding;Weiwen Jiang;Yiyu Shi;Xuehai Qian;D. Pan;F. Chong;Song Han
- Corresponding Author: Song Han
RobustState: Boosting Fidelity of Quantum State Preparation via Noise-Aware Variational Training
- DOI:
- Publication Date: 2023
- Journal:
- Impact Factor: 0
- Authors: Hanrui Wang;Yilian Liu;Pengyu Liu;Jiaqi Gu;Zi;Zhiding Liang;Jinglei Cheng;Yongshan Ding;Xuehai Qian;Yiyu Shi;David Z. Pan;Frederic T. Chong;Song Han
- Corresponding Author: Song Han
Efficient Performance Estimation and Work-Group Size Pruning for OpenCL Kernels on GPUs
- DOI: 10.1109/tpds.2019.2958343
- Publication Date: 2020-05
- Journal:
- Impact Factor: 0
- Authors: Xiebing Wang;Xuehai Qian;Alois Knoll;Kai Huang
- Corresponding Author: Kai Huang
Other Grants by Xuehai Qian
CAREER: Algorithm-Centric High Performance Graph Processing
- Award Number: 2331038
- Fiscal Year: 2022
- Funding Amount: $848,700
- Project Type: Continuing Grant
SHF: Small: High Performance Graph Pattern Mining System and Architecture
- Award Number: 2333645
- Fiscal Year: 2022
- Funding Amount: $848,700
- Project Type: Standard Grant
SHF: Small: High Performance Graph Pattern Mining System and Architecture
- Award Number: 2127543
- Fiscal Year: 2021
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: FASTLEAP: FPGA based compact Deep Learning Platform
- Award Number: 1919289
- Fiscal Year: 2019
- Funding Amount: $848,700
- Project Type: Standard Grant
CAREER: Algorithm-Centric High Performance Graph Processing
- Award Number: 1750656
- Fiscal Year: 2018
- Funding Amount: $848,700
- Project Type: Continuing Grant
SHF: Small: Accelerating Graph Processing with Vertically Integrated Programming Model, Runtime and Architecture
- Award Number: 1717754
- Fiscal Year: 2017
- Funding Amount: $848,700
- Project Type: Standard Grant
CSR: Small: Collaborative Research: GAMBIT: Efficient Graph Processing on a Memristor-based Embedded Computing Platform
- Award Number: 1717984
- Fiscal Year: 2017
- Funding Amount: $848,700
- Project Type: Standard Grant
CRII: SHF: Improving Programmability of GPGPU/NVRAM Integrated Systems with Holistic Architectural Support
- Award Number: 1657333
- Fiscal Year: 2017
- Funding Amount: $848,700
- Project Type: Standard Grant
Student Travel Support for the 2017 International Conference on Architecture Support for Programming Languages and Operating Systems (ASPLOS)
- Award Number: 1720467
- Fiscal Year: 2017
- Funding Amount: $848,700
- Project Type: Standard Grant
Similar Overseas Grants
SPX: Collaborative Research: Automated Synthesis of Extreme-Scale Computing Systems Using Non-Volatile Memory
- Award Number: 2408925
- Fiscal Year: 2023
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: Scalable Neural Network Paradigms to Address Variability in Emerging Device based Platforms for Large Scale Neuromorphic Computing
- Award Number: 2401544
- Fiscal Year: 2023
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: Intelligent Communication Fabrics to Facilitate Extreme Scale Computing
- Award Number: 2412182
- Fiscal Year: 2023
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: Cross-stack Memory Optimizations for Boosting I/O Performance of Deep Learning HPC Applications
- Award Number: 2318628
- Fiscal Year: 2022
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: NG4S: A Next-generation Geo-distributed Scalable Stateful Stream Processing System
- Award Number: 2202859
- Fiscal Year: 2022
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: Memory Fabric: Data Management for Large-scale Hybrid Memory Systems
- Award Number: 2132049
- Fiscal Year: 2021
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: Automated Synthesis of Extreme-Scale Computing Systems Using Non-Volatile Memory
- Award Number: 2113307
- Fiscal Year: 2020
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: FASTLEAP: FPGA based compact Deep Learning Platform
- Award Number: 1919117
- Fiscal Year: 2019
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: Intelligent Communication Fabrics to Facilitate Extreme Scale Computing
- Award Number: 1918987
- Fiscal Year: 2019
- Funding Amount: $848,700
- Project Type: Standard Grant
SPX: Collaborative Research: Parallel Algorithm by Blocks - A Data-centric Compiler/runtime System for Productive Programming of Scalable Parallel Systems
- Award Number: 1919021
- Fiscal Year: 2019
- Funding Amount: $848,700
- Project Type: Standard Grant