Machine Learning-assisted Modeling and Design of Approximate Computing with Generalizability and Interpretability

Basic Information

  • Award Number:
    2202329
  • Principal Investigator:
  • Amount:
    $194,100
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Standard Grant
  • Fiscal Year:
    2022
  • Funding Country:
    United States
  • Project Period:
    2022-08-01 to 2025-07-31
  • Project Status:
    Active (not yet closed)

Project Summary

By 2040, the projected energy consumed by computers will exceed the electricity the world can generate, unless radical changes are made in the way we design computers. This project aims to develop approximate computing techniques to drastically reduce the energy consumption of modern computation-intensive workloads, for example, video/image processing and machine learning applications. Approximate computing is a promising technique that trades off a small amount of accuracy for energy savings and performance improvement. This project addresses one of the fundamental obstacles that has been impeding the practical use of approximate computing: how to accurately and quickly design approximate computing systems that maximize the benefits of approximation without introducing too much accuracy loss or error. The success of this project can greatly improve the practicality of approximate computing and enable its wide use in real-world applications such as video/image processing and machine learning. By paving the way for future approximate computing, this project can lead to considerable energy savings for future computing paradigms and a reduction in carbon footprint. It also advances the application of machine learning algorithms in the circuit design area and identifies several fundamental machine learning questions motivated by special features of the circuit design problem, so that machine learning algorithms can better benefit hardware design. This can lead to new design and testing approaches for a broad range of computing systems, from low-power embedded systems to high-performance data centers. The educational plan will integrate research activities into curriculum development and will provide students with early research training. The team is committed to broadening the participation of undergraduates and underrepresented groups in engineering research and in STEM outreach activities.

Given the huge amount of energy consumed by modern computation-intensive workloads such as machine learning and video/image processing applications, energy-efficient computing is an urgent need. Approximate computing, which slightly trades off accuracy for better performance and/or efficiency (e.g., computation latency, area, energy, and power), is a promising new computing paradigm. Many approximate computing approaches, such as low-precision computing, voltage scaling, and inexact circuits and memories, have achieved orders-of-magnitude speedup or energy savings. However, to safely deploy approximate computing in practice, two major challenges need to be addressed: (1) how to accurately and quickly estimate the impact of approximation on application output quality; and (2) how to accurately and quickly find the best approximation configuration that maximizes the benefits of approximate computing. This proposal presents three closely interrelated research tasks to address these two challenges and to realize the wide-reaching benefits of approximate computing: (1) develop input-aware error models of approximate circuits and an input-aware simulation platform for approximate computing; (2) develop a graph neural network (GNN)-based framework to quickly estimate application output quality; and (3) develop a resource-aware approximation configuration framework that optimizes performance/energy while satisfying user-defined quality constraints.
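Task (1) hinges on the observation that the error of an approximate circuit depends heavily on the distribution of its inputs. The Python sketch below illustrates this idea on a lower-part-OR adder, one common approximate adder from the literature; the adder choice, bit widths, input distributions, and helper names (loa_add, error_stats) are illustrative assumptions for exposition only and are not the error models developed in this project.

    import random
    import statistics

    def loa_add(a: int, b: int, k: int = 4) -> int:
        """Approximate add: exact upper part, bitwise OR on the k low bits (low-part carry dropped)."""
        mask = (1 << k) - 1
        upper = ((a >> k) + (b >> k)) << k   # exact addition of the high parts
        lower = (a & mask) | (b & mask)      # approximate low part: OR instead of add
        return upper | lower

    def error_stats(sampler, n: int = 100_000, k: int = 4):
        """Empirical error rate and mean absolute error under a given input distribution."""
        wrong, dists = 0, []
        for _ in range(n):
            a, b = sampler(), sampler()
            exact, approx = a + b, loa_add(a, b, k)
            wrong += (approx != exact)
            dists.append(abs(approx - exact))
        return wrong / n, statistics.mean(dists)

    # The same circuit behaves very differently under different input distributions,
    # which is why input-agnostic error models can be misleading.
    uniform16 = lambda: random.randrange(1 << 16)        # uniform 16-bit operands
    quantized = lambda: random.randrange(1 << 12) << 4   # operands whose low 4 bits are already zero
    print("uniform 16-bit inputs:", error_stats(uniform16))
    print("pre-quantized inputs :", error_stats(quantized))

Because the low-order bits of the pre-quantized operands are zero, the OR-based low part is exact for that distribution, while uniform operands trigger frequent low-part errors; capturing exactly this kind of input dependence, at circuit and application scale, is what the proposed input-aware error models and simulation platform are meant to provide.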
The goal of this project is to uncover the underlying knowledge of the intrinsic relations among output quality, input data, approximate circuits, and approximate program structures. The project will provide a practical, generalizable, and interpretable toolset that can learn to configure approximate computing once and for all. The intellectual merits of this project include both approximate computing (AxC) design innovations and machine learning innovations. (1) This project will develop input-aware error models for approximate circuits that account for the impact of input data, which is usually overlooked; it will then expose circuit-level errors to behavioral-level approximate programs through an input-aware simulation platform. This will form the foundation for a much-needed holistic evaluation of approximate computing. (2) This project will develop inductive GNN models and other machine learning models to predict the output quality of unseen approximate programs and approximation configurations. This will provide key generalizability and interpretability for approximate computing. In addition, the investigation of GNNs in this project will open two new fundamental studies for the GNN community: increasing GNN expressive power by amending graph connectivity, and utilizing graph regularity. (3) This project will design a resource-aware reinforcement learning (RL)-based approach to automatically configure approximation settings for the optimal performance/energy-quality trade-off. In addition, the joint investigation of GNNs and RL in this project will raise a new research question: the joint optimization of the RL agent and the surrogate model. This project is a pioneering effort at the intersection of reinforcement learning, graph neural networks, and approximate computing, and it aims to establish the technological foundation for practical approximate computing. This project will bring an unprecedented transformation in our ability to understand and design approximate computing for practical use by enabling more disciplined, generalizable, and interpretable approximation. The research team will release models, tools, and infrastructures to the research and industry communities. This can lead to new design and testing approaches for a broad range of computing systems, from low-power embedded systems to high-performance data centers. The educational plan will integrate research activities into curriculum development and will provide students with early exposure to research. The PIs are committed to broadening the participation of undergraduates and underrepresented groups in engineering research and in STEM outreach activities. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
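To make the configuration task concrete, the sketch below searches for an approximation setting that minimizes a predicted energy cost while never violating a user-defined quality bound. The kernel names, the toy quality and energy models, and the greedy heuristic are hypothetical stand-ins, not the proposal's method; they only illustrate the optimization loop that the GNN surrogate and the resource-aware RL agent are intended to drive.

    from typing import Dict

    KNOBS = ["conv1", "conv2", "fc"]   # hypothetical approximable kernels in an application
    MAX_LEVEL = 4                      # 0 = exact, 4 = most aggressive approximation

    def predicted_quality(cfg: Dict[str, int]) -> float:
        """Stand-in for a learned quality surrogate (the project proposes a GNN for this role)."""
        return 1.0 - sum(0.01 * (2 ** cfg[k] - 1) for k in KNOBS)

    def predicted_energy(cfg: Dict[str, int]) -> float:
        """Stand-in energy model: more aggressive approximation consumes less energy."""
        return sum(1.0 / (1 + cfg[k]) for k in KNOBS)

    def greedy_configure(quality_bound: float) -> Dict[str, int]:
        """Raise one knob at a time, keeping the user-defined quality bound satisfied."""
        cfg = {k: 0 for k in KNOBS}    # start fully exact
        while True:
            best, best_energy = None, predicted_energy(cfg)
            for k in KNOBS:
                if cfg[k] == MAX_LEVEL:
                    continue
                trial = dict(cfg, **{k: cfg[k] + 1})
                if predicted_quality(trial) >= quality_bound and predicted_energy(trial) < best_energy:
                    best, best_energy = trial, predicted_energy(trial)
            if best is None:           # no further approximation is safe
                return cfg
            cfg = best

    cfg = greedy_configure(quality_bound=0.90)
    print(cfg, round(predicted_quality(cfg), 3), round(predicted_energy(cfg), 3))

In the proposed framework, the role of predicted_quality would roughly be played by the learned GNN surrogate and the greedy loop by the resource-aware RL agent; the interaction between the two is exactly the joint agent/surrogate optimization question raised above.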

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)

Other Publications by Cong Hao

Interconnection Allocation Between Functional Units and Registers in High-Level Synthesis
3D-IC signal TSV assignment for thermal and wirelength optimization
TSV Assignment of Thermal and Wirelength Optimization for 3D-IC Routing
An Efficient Algorithm for 3D-IC TSV Assignment
  • DOI:
  • Publication Year:
    2016
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Cong Hao;Nan Ding;Takeshi Yoshimura
  • Corresponding Author:
    Takeshi Yoshimura
Thermal-Aware Floorplanning for NoC-Sprinting
  • DOI:
  • Publication Year:
    2016
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Hui Zhu;Cong Hao;Takeshi Yoshimura
  • Corresponding Author:
    Takeshi Yoshimura

Other Grants by Cong Hao

CSR: Small: Multi-FPGA System for Real-time Fraud Detection with Large-scale Dynamic Graphs
  • Award Number:
    2317251
  • Fiscal Year:
    2024
  • Funding Amount:
    $194,100
  • Project Type:
    Standard Grant
CAREER: Next Generation of High-Level Synthesis for Agile Architectural Design (ArchHLS)
  • Award Number:
    2338365
  • Fiscal Year:
    2024
  • Funding Amount:
    $194,100
  • Project Type:
    Continuing Grant

Similar NSFC Grants

Robust Feature Learning and Semantic Alignment for Cross-Modal Text-Pedestrian Image Matching
  • Award Number:
    62362045
  • Approval Year:
    2023
  • Funding Amount:
    ¥320,000
  • Project Type:
    Regional Science Fund Project
Deep Learning-Based Intelligent Extended-Range Forecasting of Air-Sea Coupling in the South China Sea
  • Award Number:
    42375143
  • Approval Year:
    2023
  • Funding Amount:
    ¥500,000
  • Project Type:
    General Program
Co-Adaptive Learning of Contact Surfaces and Grasping Strategies for Complex Robotic Manipulation
  • Award Number:
    52305030
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund Project
Rumor Identification, Consequences, and Governance for Listed Companies on Social Media: A Multimodal Deep Learning Perspective
  • Award Number:
    72302018
  • Approval Year:
    2023
  • Funding Amount:
    ¥300,000
  • Project Type:
    Young Scientists Fund Project
Design and Hardware Implementation of Ensemble Learning Algorithms under Resource Constraints
  • Award Number:
    62372198
  • Approval Year:
    2023
  • Funding Amount:
    ¥500,000
  • Project Type:
    General Program

Similar Overseas Grants

Small Molecule Degraders of Tryptophan 2,3-Dioxygenase Enzyme (TDO) as Novel Treatments for Neurodegenerative Disease
  • Award Number:
    10752555
  • Fiscal Year:
    2024
  • Funding Amount:
    $194,100
  • Project Type:
A framework for machine learning assisted directed evolution of plastic-degrading enzymes
  • Award Number:
    10059716
  • Fiscal Year:
    2023
  • Funding Amount:
    $194,100
  • Project Type:
    Launchpad
Collaborative Research: Machine Learning-assisted Ultrafast Physical Vapor Deposition of High Quality, Large-area Functional Thin Films
  • Award Number:
    2226918
  • Fiscal Year:
    2023
  • Funding Amount:
    $194,100
  • Project Type:
    Standard Grant
Development of multimode vacuum ionization for use in medical diagnostics
  • Award Number:
    10697560
  • Fiscal Year:
    2023
  • Funding Amount:
    $194,100
  • Project Type:
Technology Assisted Treatment for Binge Eating Behavior
  • Award Number:
    10603975
  • Fiscal Year:
    2023
  • Funding Amount:
    $194,100
  • Project Type: