2016: DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning

Tags: pr

  • This was published in
  • https://cacm.acm.org/
  • Communications of the ACM

  • The reference is:
Chen Y, Chen T, Xu Z, et al.
DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning[J].
Communications of the ACM, 2016, 59(11): 105-112.
  • https://cacm.acm.org/magazines/2016/11/209123-diannao-family/fulltext
  • Found it, sure enough.

  • And yes, I downloaded it. How cool is that?
The original version of this paper is entitled “DianNao: A Small-Footprint, High-Throughput Accelerator for Ubiquitous Machine Learning” and was published in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) 49, 4 (March 2014), ACM, New York, NY, 269-284.

Abstract

  • ML is pervasive
    • broad range of applications
      • broad range of systems (embedded to data centers)

  • computers
    • moving toward heterogeneous multi-cores
    • a mix of cores and hardware accelerators
  • designing hardware accelerators for ML
    • can achieve high efficiency and broad application scope

Second paragraph

  • efficient computational primitives
    • important for a hardware accelerator,
  • inefficient memory transfers can
    • potentially void the throughput, energy, or cost advantages of accelerators,
  • an Amdahl’s law effect
  • memory should thus become a first-order concern,

  • just like in processors,
    • rather than an element factored into accelerator design as a second step

  • a series of hardware accelerators
    • designed for ML (especially neural networks),
    • studying the impact of memory on accelerator design, performance, and energy.

  • on representative neural network layers:
  • speedup of 450.65x over a GPU
  • energy reduced by 150.31x on average
    • for the 64-chip DaDianNao (a member of the DianNao family)

1 INTRODUCTION

  • designing hardware accelerators which realize the best possible tradeoff between flexibility and efficiency is becoming a prominent
    issue.

  • The first question is for which category of applications one should primarily design accelerators?
  • Together with the architecture trend towards accelerators, a second simultaneous and significant trend in high-performance and embedded applications is developing: many of the emerging high-performance and embedded applications, from image/video/audio recognition to automatic translation, business analytics, and robotics rely on machine learning
    techniques.
  • This trend in applications comes together with a third trend in machine learning (ML) where a small number of techniques, based on neural networks (especially deep learning techniques [16, 26]), have been proved in the past few years to be state-of-the-art across a broad range of applications.
  • As a result, there is a unique opportunity to design accelerators having significant application scope as well as high performance and efficiency. [4]

Second paragraph

  • Currently, ML workloads
  • mostly executed on
    • multicores using SIMD[44]
    • on GPUs[7]
    • or on FPGAs[2]

  • the aforementioned trends
    • have already been identified
    • by researchers who have proposed accelerators implementing,
  • CNNs[2]
  • Multi-Layer Perceptrons [43] ;

  • accelerators focusing on other domains,
    • image processing,
    • propose efficient implementations of some of the computational primitives used
    • by machine-learning techniques, such as convolutions[37]

  • There are also ASIC implementations of ML
    • such as Support Vector Machine and CNNs.

  • these works focused on
    • efficiently implementing the computational primitives, and either
      • ignore memory transfers for the sake of simplicity [37, 43], or
      • plug their computational accelerator into memory via a more or less sophisticated DMA [2, 12, 19]

Third paragraph

  • While efficient implementation of computational primitives is a first and important step with promising results, inefficient memory transfers can potentially void the throughput, energy, or cost advantages of accelerators, that is, an Amdahl’s law effect, and thus, they should become a first-order concern, just like in processors, rather than an element factored into accelerator design as a second step.

  • Unlike in processors though, one can factor in the specific nature of
    memory transfers in target algorithms, just like it is done for accelerating computations.

  • This is especially important in the domain of ML where there is a clear trend towards scaling up the size of learning models in order to achieve better accuracy and more functionality. [16, 24]

Fourth paragraph

  • In this article, we introduce a series of hardware accelerators designed for ML (especially neural networks), including
    DianNao, DaDianNao, ShiDianNao, and PuDianNao as listed in Table 1.
  • We focus our study on memory usage, and we investigate the accelerator architecture to minimize memory
    transfers and to perform them as efficiently as possible.

2 DIANNAO: AN NN ACCELERATOR

  • DianNao
    • the first of the DianNao accelerator family,
  • accommodates state-of-the-art NN techniques (deep learning),
  • and inherits the broad application scope of NNs.

2.1 Architecture

  • DianNao
    • an input buffer for input neurons (NBin)
    • an output buffer for output neurons (NBout)
    • a buffer for synaptic weights (SB)
    • connected to a computational block (performing both synapse and neuron computations)
    • the Neural Functional Unit (NFU), and the Control Processor (CP); see Figure 1

NBin holds the input neurons,
SB holds the synaptic weights,
and NBout holds the output neurons.

My reading of the figure: with 2 input neurons and 2 synapses, multiplying them pairwise gives 1 output neuron. But this NFU is more capable than that: it can produce two output neurons in one go.

NFU

  • a functional block of $T_i$ inputs/synapses
    • and $T_n$ output neurons,
  • time-shared by different algorithmic blocks of neurons.

So the NFU takes $T_i$ inputs and synapses and produces $T_n$ output neurons. Shouldn't the number of synapses be $T_i \times T_n$, though?? (See the sketch right below.)
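To make the per-step data widths concrete, here is a minimal C sketch of one NFU step for a classifier layer, under my reading of the text above: NBin supplies $T_i$ inputs that are shared by all outputs, SB has to supply $T_n \times T_i$ synapses, and the NFU accumulates $T_n$ partial sums. The tile size of 16 and the helper name `nfu_step` are illustrative choices of mine, not taken from the paper.

#include <stdio.h>

#define TI 16   /* inputs/synapses consumed per step (illustrative tile size) */
#define TN 16   /* output neurons worked on per step (illustrative tile size) */

/* One NFU step for a classifier/convolutional layer:
   NFU-1 multiplies the TI shared inputs by TN x TI synapses,
   NFU-2 reduces the TI products per output neuron (the adder tree),
   and the TN partial sums are accumulated across steps.
   Note that SB supplies TI*TN values per step, while NBin supplies only TI. */
void nfu_step(const float input[TI], float synapse[TN][TI], float sum[TN]) {
    for (int n = 0; n < TN; n++) {
        float partial = 0.0f;
        for (int i = 0; i < TI; i++)
            partial += synapse[n][i] * input[i];   /* NFU-1 products, NFU-2 reduction */
        sum[n] += partial;                          /* accumulated partial output */
    }
}

int main(void) {
    float in[TI], syn[TN][TI], sum[TN] = {0};
    for (int i = 0; i < TI; i++) in[i] = 1.0f;
    for (int n = 0; n < TN; n++)
        for (int i = 0; i < TI; i++) syn[n][i] = 0.5f;
    nfu_step(in, syn, sum);                 /* each sum[n] becomes 16 * 0.5 = 8.0 */
    printf("sum[0] = %.1f\n", sum[0]);
    return 0;
}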

  • Depending on the layer type,
    • computations at the NFU can be decomposed into either two or three stages

  • For classifier and convolutional layers:
    • multiplication of synapses $\times$ inputs: NFU-1
    • addition of all the multiplications: NFU-2
    • sigmoid: NFU-3

For a classifier or convolutional layer, it is simply synapses $\times$ inputs, summed up, then a sigmoid. That I can follow; this case is just a convolution.

If it is a classifier layer, then the input is ...

  • last stage (sigmoid or another nonlinear function) can vary.

  • For pooling, there is no multiplication (no synapses),
    • pooling can be average or max.

  • the adders have multiple inputs,
    • they are in fact adder trees,

  • the second stage also contains
    • shifters and max operators for pooling.

Why would we need shifters??
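On the shifter question, my guess (an assumption, not stated in the text above) is that average pooling needs a division by the window size, which reduces to a right shift when that size is a power of two. A minimal sketch of the two pooling modes, with made-up helper names:

#include <stdio.h>
#include <limits.h>

/* Pooling at NFU-2: no synapses, no NFU-1 multiplications.
   Max pooling uses the max operators; average pooling accumulates and then
   divides by the window size -- here a right shift, assuming a power-of-two
   window (my assumed reason for the shifters). */
int pool_max(const int *in, int count) {
    int m = INT_MIN;
    for (int i = 0; i < count; i++)
        if (in[i] > m) m = in[i];
    return m;
}

int pool_avg_pow2(const int *in, int log2_count) {
    int sum = 0;
    for (int i = 0; i < (1 << log2_count); i++)
        sum += in[i];
    return sum >> log2_count;   /* divide by 2^log2_count via shift */
}

int main(void) {
    int window[4] = {3, 7, 2, 5};
    printf("max = %d, avg = %d\n", pool_max(window, 4), pool_avg_pow2(window, 2));
    return 0;
}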

  • the sigmoid function (for classifier and convolutional layers) can be efficiently implemented with piecewise linear interpolation, $f(x) = a_i x + b_i$ for $x \in [x_i, x_{i+1}]$ (16 segments are sufficient); a sketch follows below.
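A small C sketch of that piecewise linear interpolation, just to show the mechanism. The segment boundaries (16 uniform segments over [-8, 8]) and the way the coefficients are derived are my own illustrative choices; the text only says that 16 segments are sufficient.

#include <math.h>
#include <stdio.h>

#define NSEG 16

/* Piecewise-linear sigmoid, f(x) = a_i*x + b_i on [x_i, x_{i+1}].
   Uniform segments over [-8, 8] are an illustrative choice. */
static double seg_x[NSEG + 1];
static double seg_a[NSEG], seg_b[NSEG];

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

static void build_segments(void) {
    for (int i = 0; i <= NSEG; i++)
        seg_x[i] = -8.0 + 16.0 * i / NSEG;
    for (int i = 0; i < NSEG; i++) {
        double x0 = seg_x[i], x1 = seg_x[i + 1];
        seg_a[i] = (sigmoid(x1) - sigmoid(x0)) / (x1 - x0);  /* slope  a_i  */
        seg_b[i] = sigmoid(x0) - seg_a[i] * x0;              /* offset b_i  */
    }
}

static double sigmoid_pwl(double x) {
    if (x <= seg_x[0])    return sigmoid(seg_x[0]);          /* clamp the tails */
    if (x >= seg_x[NSEG]) return sigmoid(seg_x[NSEG]);
    int i = (int)((x - seg_x[0]) / (seg_x[1] - seg_x[0]));   /* uniform segments */
    return seg_a[i] * x + seg_b[i];
}

int main(void) {
    build_segments();
    for (double x = -6; x <= 6; x += 1.5)
        printf("x=%5.2f  exact=%.4f  pwl=%.4f\n", x, sigmoid(x), sigmoid_pwl(x));
    return 0;
}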

On-chip Storage

  • the on-chip storage structures of DianNao
    • can be construed as modified buffers or scratchpads.

  • While a cache is an excellent storage structure for a general-purpose processor, it is a sub-optimal way to exploit reuse because of the cache access overhead (tag check, associativity, line size, speculative read, etc.) and cache conflicts.
  • The efficient alternative, scratchpad, is used in VLIW processors but it is known to be very difficult to compile for.
  • However a scratchpad in a dedicated accelerator realizes the best of both worlds: efficient
    storage, and both efficient and easy exploitation of locality because only a few algorithms have to be manually adapted.
Second paragraph
  • the on-chip storage is split into three structures (NBin, NBout, and SB), because there are three types of data (input neurons, output neurons, and synapses) with different characteristics (read width and reuse distance).

  • The first benefit of splitting storage structures is to tailor the SRAMs to the appropriate read/write width,
  • and the second benefit is to avoid conflicts, as would occur in a cache.
  • Moreover, we implement three DMAs to exploit spatial locality of data, one for each buffer (two load DMAs for inputs, one store DMA for outputs); a toy model of these three structures follows below.
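A toy C model of the split storage, only to make the width asymmetry concrete: an NBin/NBout row holds $T_i$/$T_n$ neurons while an SB row holds $T_n \times T_i$ synapses, so SB needs a much wider port. The 16-bit data type, tile sizes, and entry counts below are illustrative placeholders, not DianNao's actual parameters.

#include <stdio.h>
#include <stdint.h>

#define TI 16             /* illustrative tile sizes */
#define TN 16
#define NBIN_ENTRIES  64  /* placeholder entry counts, not the real capacities */
#define NBOUT_ENTRIES 64
#define SB_ENTRIES    64

typedef struct { int16_t row[TI]; }     nbin_entry_t;   /* input neurons, filled by a load DMA      */
typedef struct { int16_t row[TN]; }     nbout_entry_t;  /* output neurons, drained by the store DMA */
typedef struct { int16_t row[TN][TI]; } sb_entry_t;     /* synapses, filled by the second load DMA  */

typedef struct {
    nbin_entry_t  nbin[NBIN_ENTRIES];
    sb_entry_t    sb[SB_ENTRIES];
    nbout_entry_t nbout[NBOUT_ENTRIES];
} diannao_buffers_t;

int main(void) {
    /* The row widths show why one SRAM geometry cannot fit all three data types. */
    printf("NBin row: %zu B, NBout row: %zu B, SB row: %zu B\n",
           sizeof(nbin_entry_t), sizeof(nbout_entry_t), sizeof(sb_entry_t));
    return 0;
}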

2.2 Loop tiling

  • DianNao uses loop tiling to reduce memory accesses
    • so it can accommodate large neural networks
  • Example:
    • a classifier layer
      • $N_n$ output neurons
      • fully connected to $N_i$ inputs
      • as illustrated below

With $N_n$ outputs and $N_i$ inputs, the synapse matrix is $N_n \times N_i$; multiplying this matrix by the $N_i$-element input vector gives the result.

  • First fetch a block
    • I was a bit puzzled here:
    • what if the first output element depends on all of the input elements,
    • how would you compute it then?
    • Well, actually, to compute the first output element
    • I only need a single row of synapse!
    • But then what about that huge synapse matrix?
  • Below are the original code and
    • the tiled code,
    • which map the classifier layer onto DianNao.


for (int n = 0; n < Nn; n++)
    sum[n] = 0;
for (int n = 0; n < Nn; n++)           // output neurons
    for (int i = 0; i < Ni; i++)       // input neurons
        sum[n] += synapse[n][i] * neuron[i];
for (int n = 0; n < Nn; n++)
    neuron[n] = Sigmoid(sum[n]);
  • My take:
    • bring in Tnn outputs at a time
    • and Tii inputs at a time,
    • but that is still too large for the hardware,
    • so split again:
    • Tn outputs and Ti inputs at a time.
    • That's it.
for (int nnn = 0; nnn < Nn; nnn += Tnn) {
    // tiling for the output neurons:
    // the first loop stages Tnn outputs at a time
    for (int iii = 0; iii < Ni; iii += Tii) {
        // tiling for the input neurons:
        // the second loop brings in Tii inputs at a time;
        // everything below works on these two blocks

        for (int nn = nnn; nn < nnn + Tnn; nn += Tn) {
            // the third loop finds Tnn still too large and splits it again
            // into chunks of size Tn;
            // for each such chunk (starting at nn!) we proceed as follows

            for (int n = nn; n < nn + Tn; n++)
                // step 1: zero the partial sums
                sum[n] = 0;
            // sum[n] is row n of synapse times the whole neuron vector
            for (int ii = iii; ii < iii + Tii; ii += Ti)
                // this loop splits Tii further into chunks of Ti
                for (int n = nn; n < nn + Tn; n++)
                    for (int i = ii; i < ii + Ti; i++)
                        sum[n] += synapse[n][i] * neuron[i];

            for (int nn = nnn; nn < nnn + Tnn; nn += Tn)
                neuron[n] = sigmoid(sum[n]);
        }
    }
}
  • In the tiled code, ii and nn
    • reflect the fact that the NFU has $T_i$ inputs/synapses
      • and $T_n$ output neurons
  • the input neurons are reused by every output neuron
    • but that input vector is far too large
    • to fit into NBin,
    • so loop ii is also tiled, with tiling factor $T_{ii}$
The code above definitely has problems; the correct version is as follows:

for (int nnn = 0; nnn < Nn; nnn += Tnn) {
    for (int nn = nnn; nn < nnn + Tnn; nn += Tn) {
        for (int n = nn; n < nn + Tn; n++)
            sum[n] = 0;
        for (int iii = 0; iii < Ni; iii += Tii) {
            for (int ii = iii; ii < iii + Tii; ii += Ti)
                for (int n = nn; n < nn + Tn; n++)
                    for (int i = ii; i < ii + Ti; i++)
                        sum[n] += synapse[n][i] * neuron[i];
        }
        for (int n = nn; n < nn + Tn; n++)
            printf("s%ds ", sum[n]);      // debug print of this block's sums
    }
}
for (int index = 0; index < Nn; index++)
    printf("%d ", sum[index]);            // final sums for the whole layer
Copyright notice: this is the blogger's original article, licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/zhoutianzi12/article/details/110244427
