Abstract

Convolutional Neural Networks (CNNs) have become the state of the art in machine learning due to their high accuracy. However, implementing CNN algorithms on hardware platforms is challenging because of their high computational complexity, memory bandwidth requirements, and power consumption. Hardware accelerators such as Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), and Application Specific Integrated Circuits (ASICs) are suitable platforms for implementing CNN algorithms. Recently, FPGAs have attracted particular attention for CNN implementation. Modern FPGAs provide various embedded hardware and software blocks, such as soft processors, DSP slices, and memory blocks. These embedded resources, together with customizable logic blocks, make FPGAs strong candidates for CNN models. A further advantage of FPGAs for CNNs is their support for parallel and pipelined architectures, which helps accelerate CNN operations. The primary goal of this bibliometric review is to determine the scope of the current literature on implementing CNN algorithms on various hardware platforms, with a particular emphasis on the FPGA platform for CNN-based applications. The analysis relies primarily on data from Scopus. It reveals that researchers from China, India, and the United Kingdom make the most significant contributions in the form of conference papers, journal articles, and book proceedings. The documents fall within the subject areas of Engineering, Computer Science, Mathematics, Physics and Astronomy, Decision Sciences, and Materials Science.
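To illustrate the parallelism and pipelining advantage mentioned above, the following is a minimal sketch of a 2D convolution kernel written in HLS-style C++. It assumes a Xilinx Vitis/Vivado HLS toolchain and a small 3x3 kernel with illustrative dimensions; these choices are not taken from any specific paper in the review. The pragmas ask the synthesizer to pipeline the output loops and unroll the kernel loops, which is how an FPGA design exposes concurrent multiply-accumulate operations.

    // Hypothetical HLS-style sketch of a single CNN convolution layer.
    // Dimensions and pragmas are illustrative assumptions, not a specific design.
    const int H = 32;   // input feature-map height
    const int W = 32;   // input feature-map width
    const int K = 3;    // kernel size

    void conv2d(const float in[H][W], const float kernel[K][K],
                float out[H - K + 1][W - K + 1]) {
        for (int r = 0; r < H - K + 1; ++r) {
            for (int c = 0; c < W - K + 1; ++c) {
    #pragma HLS PIPELINE II=1      // start one output pixel per clock cycle
                float acc = 0.0f;
                for (int i = 0; i < K; ++i) {
    #pragma HLS UNROLL             // replicate multipliers across kernel rows
                    for (int j = 0; j < K; ++j) {
    #pragma HLS UNROLL             // ...and columns: 9 MACs run in parallel
                        acc += in[r + i][c + j] * kernel[i][j];
                    }
                }
                out[r][c] = acc;
            }
        }
    }

With the unroll pragmas, the nine multiply-accumulates of each 3x3 window map onto parallel DSP slices, while the pipeline pragma overlaps successive output pixels; the same code compiles as plain C++ (the pragmas are simply ignored), which is a common way to validate an FPGA kernel in software first.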
