AitF: Collaborative Research: A Framework of Simultaneous Acceleration and Storage Reduction on Deep Neural Networks Using Structured Matrices
Basic Information
- Award Number: 1733701
- Principal Investigator:
- Amount: $348,000
- Host Institution:
- Host Institution Country: United States
- Project Category: Standard Grant
- Fiscal Year: 2017
- Funding Country: United States
- Project Period: 2017-09-15 to 2021-08-31
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
Deep neural networks (DNNs) have emerged as a class of powerful techniques for learning solutions in a number of challenging problem domains, including computer vision, natural language processing and bioinformatics. These solutions have been enabled mainly because we now have computational accelerators able to sift through the myriad of data required to train a neural network. As the size of DNN models continues to grow, computational and memory resource requirements for training will also grow, limiting deployment of deep learning in many practical applications. Leveraging the theory of structured matrices, this project will develop a general framework for efficient DNN training and inference, providing a significant reduction in algorithmic complexity measures in terms of both computation and storage. The project, if successful, should fundamentally impact a broad class of deep learning applications. It will explore accelerating this new structure for deep learning algorithms targeting emerging accelerator architectures, and will evaluate the benefits of these advances across a number of application domains, including big data analytics, cognitive systems, unmanned vehicles and aerial systems, and wearable devices. The interdisciplinary nature of this project bridges the areas of matrix theory, machine learning, and computer architecture, and will affect education at both Northeastern and CCNY, including the involvement of underrepresented and undergraduate students in the rich array of research tasks. The project will: (1) for the first time, develop a general theoretical framework for structured matrix-based DNN models and perform detailed analysis and investigation of error bounds, convergence, fast training algorithms, etc.; (2) develop low-space-cost and high-speed inference and training schemes for the fully connected layers of DNNs; (3) impose a weight tensor with structure and enable low computational and space cost convolutional layers; (4) develop high-performance and energy-efficient implementations of deep learning systems on high-performance parallel platforms, low-power embedded platforms, as well as emerging computing paradigms and devices; (5) perform a comprehensive evaluation of the proposed approaches on different performance metrics in a variety of platforms. The project will deliver tuned implementations targeting a range of computational platforms, including ASICs, FPGAs, GPUs and cloud servers. The hardware optimizations will focus on producing high-speed and low-cost implementations of deep learning systems.
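To make the structured-matrix idea concrete, the sketch below shows a fully connected layer whose weight matrix is block-circulant, one common structured-matrix choice for simultaneous acceleration and storage reduction: each b-by-b block is defined by a single length-b vector and applied with FFTs, so per-block storage drops from O(b^2) to O(b) and the matrix-vector product costs O(b log b). This is an illustrative sketch only, not the project's released code; the block-circulant choice, the layer sizes, the block size, and the NumPy implementation are all assumptions made for illustration.

```python
# Minimal illustrative sketch (not the project's code) of a block-circulant
# fully connected layer. Each b x b weight block is circulant, stored as its
# first column only, and applied via FFT-based circular convolution.
import numpy as np


def block_circulant_fc(x, w_blocks):
    """Compute y = W x where W is block-circulant.

    x        : input vector of length n = q * b
    w_blocks : array of shape (p, q, b); w_blocks[i, j] is the first
               column of the circulant block W_ij
    returns  : output vector of length m = p * b
    """
    p, q, b = w_blocks.shape
    x_blocks = x.reshape(q, b)

    # A circulant block times a vector block is a circular convolution:
    # W_ij @ x_j = IFFT(FFT(w_ij) * FFT(x_j)).
    x_f = np.fft.fft(x_blocks, axis=1)                # (q, b)
    w_f = np.fft.fft(w_blocks, axis=2)                # (p, q, b)
    y_f = (w_f * x_f[np.newaxis, :, :]).sum(axis=1)   # (p, b)
    return np.real(np.fft.ifft(y_f, axis=1)).reshape(p * b)


# Toy usage: a 512 -> 256 layer with 64 x 64 circulant blocks stores
# (256/64) * (512/64) * 64 = 2,048 parameters instead of 256 * 512 = 131,072.
rng = np.random.default_rng(0)
b = 64
w_blocks = rng.standard_normal((256 // b, 512 // b, b))
x = rng.standard_normal(512)
y = block_circulant_fc(x, w_blocks)
print(y.shape)  # (256,)
```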
Project Outcomes
Journal Articles (17)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs
- DOI: 10.1109/hpca.2019.00028
- Publication Date: 2018-12
- Journal:
- Impact Factor: 0
- Authors: Zhe Li;Caiwen Ding;Siyue Wang;Wujie Wen;Youwei Zhuo;Chang Liu;Qinru Qiu;Wenyao Xu;X. Lin;Xuehai Qian;Yanzhi Wang
- Corresponding Author: Zhe Li;Caiwen Ding;Siyue Wang;Wujie Wen;Youwei Zhuo;Chang Liu;Qinru Qiu;Wenyao Xu;X. Lin;Xuehai Qian;Yanzhi Wang
Reinforced Adversarial Attacks on Deep Neural Networks Using ADMM
- DOI: 10.1109/globalsip.2018.8646651
- Publication Date: 2018-11
- Journal:
- Impact Factor: 0
- Authors: Pu Zhao;Kaidi Xu;Tianyun Zhang;M. Fardad;Yanzhi Wang;X. Lin
- Corresponding Author: Pu Zhao;Kaidi Xu;Tianyun Zhang;M. Fardad;Yanzhi Wang;X. Lin
ADMM attack: an enhanced adversarial attack for deep neural networks with undetectable distortions
- DOI: 10.1145/3287624.3288750
- Publication Date: 2019-01
- Journal:
- Impact Factor: 0
- Authors: Pu Zhao;Kaidi Xu;Sijia Liu;Yanzhi Wang;X. Lin
- Corresponding Author: Pu Zhao;Kaidi Xu;Sijia Liu;Yanzhi Wang;X. Lin
Defending DNN Adversarial Attacks with Pruning and Logits Augmentation
- DOI: 10.1109/globalsip.2018.8646578
- Publication Date: 2018-11
- Journal:
- Impact Factor: 0
- Authors: Siyue Wang;Xiao Wang;Shaokai Ye;Pu Zhao;X. Lin
- Corresponding Author: Siyue Wang;Xiao Wang;Shaokai Ye;Pu Zhao;X. Lin
HSIM-DNN: Hardware Simulator for Computation-, Storage- and Power-Efficient Deep Neural Networks
- DOI: 10.1145/3299874.3317996
- Publication Date: 2019-05
- Journal:
- Impact Factor: 0
- Authors: Mengshu Sun;Pu Zhao;Yanzhi Wang;N. Chang;X. Lin
- Corresponding Author: Mengshu Sun;Pu Zhao;Yanzhi Wang;N. Chang;X. Lin
Other Publications by Xue Lin
Beta-adrenergic stimulation does not activate Na+/Ca2+ exchange current in guinea pig, mouse, and rat ventricular myocytes.
- DOI:
- Publication Date: 2006
- Journal:
- Impact Factor: 0
- Authors: Xue Lin;Hikari Jo;Y. Sakakibara;K. Tambara;Bongju Kim;M. Komeda;S. Matsuoka
- Corresponding Author: S. Matsuoka
First-principles prediction of two atomic-thin phosphorene allotropes with potentials for sun-light-driven water splitting
- DOI: 10.1088/1361-648x/aaf74f
- Publication Date: 2019-01
- Journal:
- Impact Factor: 0
- Authors: Jiao Na;Zhou Pan;Xue Lin;He Chaoyu;Sun Lizhong
- Corresponding Author: Sun Lizhong
B-adrenergic stimulation does not activate Na+-Ca2+ exchange current in guinea-pig, mouse and rat ventricular myocytes
- DOI:
- Publication Date: 2006
- Journal:
- Impact Factor: 0
- Authors: Xue Lin;Hikari Jo;Yutaka Sakakibara;Keiichi Tambara;Bongju Kim;Masashi Komeda;Satoshi Matsuoka
- Corresponding Author: Satoshi Matsuoka
Nutrient consumption patterns of Lactobacillus plantarum and their application in suancai
- DOI:
- Publication Date: 2021
- Journal:
- Impact Factor: 5.4
- Authors: Ao Zhang;Zongcai Zhang;Kenan Zhang;Xin Liu;Xue Lin;Zhen Zhang;Tianyu Bao;Zhen Feng
- Corresponding Author: Zhen Feng
Comprehensive evaluation of aroma and taste properties of different parts from the wampee fruit.
- DOI: 10.1016/j.fochx.2023.100835
- Publication Date: 2023-10-30
- Journal:
- Impact Factor: 6.1
- Authors: Zhiheng Zhao;Yaofei Hao;Yijun Liu;Yousheng Shi;Xue Lin;Lu Wang;Pan Wen;Xiaoping Hu;Jianxun Li
- Corresponding Author: Jianxun Li
Other Grants of Xue Lin
SHF: Medium: Collaborative Research: ADMM-NN: A Unified Software/Hardware Framework of DNN Computation and Storage Reduction Using ADMM
- Award Number: 1901378
- Fiscal Year: 2019
- Funding Amount: $348,000
- Project Category: Continuing Grant
CPS: Small: Collaborative Research: SecureNN: Design of Secured Autonomous Cyber-Physical Systems Against Adversarial Machine Learning Attacks
- Award Number: 1932351
- Fiscal Year: 2019
- Funding Amount: $348,000
- Project Category: Standard Grant
Similar Overseas Grants
AitF: Collaborative Research: Topological Algorithms for 3D/4D Cardiac Images: Understanding Complex and Dynamic Structures
- Award Number: 2051197
- Fiscal Year: 2020
- Funding Amount: $348,000
- Project Category: Standard Grant
AitF: Collaborative Research: Fast, Accurate, and Practical: Adaptive Sublinear Algorithms for Scalable Visualization
- Award Number: 1940759
- Fiscal Year: 2019
- Funding Amount: $348,000
- Project Category: Standard Grant
AitF: Collaborative Research: Fast, Accurate, and Practical: Adaptive Sublinear Algorithms for Scalable Visualization
- Award Number: 2006206
- Fiscal Year: 2019
- Funding Amount: $348,000
- Project Category: Standard Grant
AiTF: Collaborative Research: Distributed and Stochastic Algorithms for Active Matter: Theory and Practice
- Award Number: 1733812
- Fiscal Year: 2018
- Funding Amount: $348,000
- Project Category: Standard Grant
AitF: Collaborative Research: A Framework of Simultaneous Acceleration and Storage Reduction on Deep Neural Networks Using Structured Matrices
- Award Number: 1854742
- Fiscal Year: 2018
- Funding Amount: $348,000
- Project Category: Standard Grant
AitF: Collaborative Research: Topological Algorithms for 3D/4D Cardiac Images: Understanding Complex and Dynamic Structures
- Award Number: 1855760
- Fiscal Year: 2018
- Funding Amount: $348,000
- Project Category: Standard Grant
AiTF: Collaborative Research: Distributed and Stochastic Algorithms for Active Matter: Theory and Practice
- Award Number: 1733680
- Fiscal Year: 2018
- Funding Amount: $348,000
- Project Category: Standard Grant
AitF: Collaborative Research: Automated Medical Image Segmentation via Object Decomposition
- Award Number: 1733742
- Fiscal Year: 2017
- Funding Amount: $348,000
- Project Category: Standard Grant
AitF: Collaborative Research: Fast, Accurate, and Practical: Adaptive Sublinear Algorithms for Scalable Visualization
- Award Number: 1733796
- Fiscal Year: 2017
- Funding Amount: $348,000
- Project Category: Standard Grant
AitF: Collaborative Research: Algorithms and Mechanisms for the Distribution Grid
- Award Number: 1733832
- Fiscal Year: 2017
- Funding Amount: $348,000
- Project Category: Standard Grant