`In-memory computing' is being widely explored as a novel computing paradigm to mitigate the well-known memory bottleneck. This emerging paradigm aims to embed some aspects of computation inside the memory array, thereby avoiding the frequent and expensive movement of data between the compute unit and the storage memory. In-memory computing with silicon memories has been explored for a variety of memory bit-cells. Embedding computation inside the six-transistor (6T) SRAM array is of special interest, since it is the most widely used on-chip memory. In this paper, we present a novel in-memory multiply-and-accumulate operation capable of performing parallel dot products within the 6T SRAM array without any change to the standard bit-cell. We further study the effect of circuit non-idealities and process variations on the accuracy of the LeNet-5 and VGG neural network architectures on the MNIST and CIFAR-10 datasets, respectively. The proposed in-memory dot-product mechanism achieves 88.8% and 99% accuracy on CIFAR-10 and MNIST, respectively. Compared to a standard von Neumann system, the proposed system achieves a 6.24x improvement in energy consumption and a 9.42x improvement in delay.
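To make the accuracy study concrete, the following is a minimal sketch (not the authors' implementation) of how an in-SRAM dot product with circuit non-idealities might be modeled in software: each element-wise product stands in for a bit-cell's contribution to a bit-line discharge, perturbed by Gaussian noise representing process variation. The variation level `sigma` and the bit widths are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Uniformly quantize values in [0, 1] to the given bit width."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def in_memory_dot(inputs, weights, sigma=0.02, in_bits=4, w_bits=4):
    """Model one column-parallel dot product.

    Each product models a bit-cell's contribution to the bit-line
    discharge; per-cell Gaussian noise (sigma, assumed here) models
    process variation and other circuit non-idealities.
    """
    x = quantize(inputs, in_bits)
    w = quantize(weights, w_bits)
    contributions = x * w
    noisy = contributions * (1.0 + sigma * rng.standard_normal(contributions.shape))
    return noisy.sum()

# Compare the noisy analog result against the ideal digital dot product.
x = rng.random(64)   # 64 activations driven onto the word lines
w = rng.random(64)   # 64 weights stored along one SRAM column
ideal = float(np.dot(quantize(x, 4), quantize(w, 4)))
analog = in_memory_dot(x, w)
print(f"ideal = {ideal:.3f}, analog = {analog:.3f}, "
      f"relative error = {abs(analog - ideal) / ideal:.2%}")
```

Sweeping `sigma` in such a model and re-evaluating network accuracy is one plausible way to obtain accuracy-versus-variation trends like those reported for LeNet-5 and VGG.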