Towards Robust Approximate Computing
Abstract
Many classes of applications exhibit significant tolerance to inaccuracies in their computations; examples include image processing, multimedia applications, and machine learning. These inaccuracies can be exploited to build circuits with smaller area, lower power, and higher performance. The main problem with approximate computing is that the resulting circuit depends on the training data used to design it, which can lead to intolerable errors if the final workload differs substantially from that data. As a consequence, approximate circuits are either absent from commercial chips or limited to very minor approximations, typically the well-known bit-width reductions and floating-point to fixed-point conversions. This dissertation addresses this problem and presents methods to keep the error within a given bound when the workload characteristics differ from the training data. Two main approaches are presented. The first exploits the run-time reconfiguration capabilities of Field Programmable Gate Arrays (FPGAs): a library of pre-characterized approximate hardware accelerators is built offline, and the FPGA is reconfigured with the variant best suited to the current workload. A different approach is presented for Application-Specific Integrated Circuits (ASICs), where the hardware cannot be reconfigured. In this case, the accelerators are partitioned into tiles that are turned on and off according to the workload and the error it introduces. Finally, a self-tunable hardware architecture is presented, which enables or disables these tiles autonomously based on the internal state of its signals. These methods have been extensively verified and prototyped on FPGAs to measure their trade-offs.
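As an illustration of the FPGA approach, the following C++ sketch shows how a run-time manager might pick a variant from the pre-characterized library. The types, the selection policy (lowest power within the error bound), and the load_bitstream hook are hypothetical stand-ins for this summary, not the dissertation's actual interface.

```cpp
#include <string>
#include <vector>

struct AcceleratorVariant {
    std::string bitstream;  // partial-reconfiguration bitstream for this variant
    double max_error;       // worst-case error measured during characterization
    double power_mw;        // estimated power draw of this variant
};

// Hypothetical hook into the vendor's partial-reconfiguration flow.
void load_bitstream(const std::string& /*bitstream*/) { /* platform specific */ }

// Pick the lowest-power variant whose characterized error stays within the
// bound estimated for the current workload; lib[0] is assumed to be the
// exact (zero-error) design, so a safe fallback always exists.
const AcceleratorVariant& select_variant(const std::vector<AcceleratorVariant>& lib,
                                         double error_bound) {
    const AcceleratorVariant* best = &lib.front();
    for (const AcceleratorVariant& v : lib) {
        if (v.max_error <= error_bound && v.power_mw < best->power_mw)
            best = &v;
    }
    return *best;
}

int main() {
    std::vector<AcceleratorVariant> lib = {
        {"exact.bit",   0.00, 120.0},
        {"approx1.bit", 0.02,  90.0},
        {"approx2.bit", 0.08,  60.0},
    };
    // Error bound derived from monitoring the current workload (assumed given).
    load_bitstream(select_variant(lib, 0.05).bitstream);
}
```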
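The tile-based ASIC approach can be sketched in the same spirit: given per-tile error and power figures obtained during characterization, a controller might greedily enable approximation on the tiles that save the most power per unit of error, stopping before the error budget is exceeded. The names and the greedy policy below are again illustrative assumptions rather than the dissertation's algorithm.

```cpp
#include <algorithm>
#include <vector>

struct Tile {
    double error_contrib;  // error added when this tile computes approximately
    double power_saving;   // power saved while its approximation is enabled
    bool approx_on = false;
};

// Greedily enable approximation on the tiles with the best power-saving per
// unit of error, keeping the accumulated error estimate within the bound.
void tune_tiles(std::vector<Tile>& tiles, double error_bound) {
    std::sort(tiles.begin(), tiles.end(), [](const Tile& a, const Tile& b) {
        // Compare power_saving/error_contrib ratios without dividing,
        // so zero-error tiles are handled safely.
        return a.power_saving * b.error_contrib > b.power_saving * a.error_contrib;
    });
    double error = 0.0;
    for (Tile& t : tiles) {
        if (error + t.error_contrib <= error_bound) {
            t.approx_on = true;   // run this tile approximately
            error += t.error_contrib;
        } else {
            t.approx_on = false;  // keep this tile exact to respect the bound
        }
    }
}
```

In the self-tunable architecture described in the abstract, a decision of this kind would be taken by the hardware itself, driven by the internal state of its signals rather than by a software controller.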