Approximate Computing

Approximate Computing is an emerging paradigm for energy-efficient design. It focuses on three strategies: (1) approximate arithmetic circuits, such as adders, multipliers or logic circuits, that either compute with reduced accuracy by design or tolerate bit errors resulting from "undervolting"; (2) approximate storage and memory, such as reduced-precision floating-point or integer data, or accepting less reliable memory due to refresh-rate reductions; and (3) software-level approximation, such as memoization, "loop skipping" or randomized algorithms.
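To make the third strategy concrete, here is a minimal sketch of loop perforation (a form of "loop skipping"): the loop body is executed only on every k-th element, trading a small accuracy loss for proportionally fewer iterations. The function names and the choice of computing a mean are illustrative, not taken from any particular paper.

```python
def mean_exact(xs):
    """Baseline: exact arithmetic mean over all elements."""
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=2):
    """Loop perforation: process only every `skip`-th element,
    trading accuracy for fewer loop iterations (and thus energy)."""
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

# On a regular input, the perforated loop does half the work
# yet stays close to the exact result.
xs = [float(i % 100) for i in range(10_000)]
exact = mean_exact(xs)
approx = mean_perforated(xs, skip=2)
```

The acceptable `skip` factor is application-dependent: the larger it is, the fewer iterations are executed, but the larger the potential deviation from the exact result.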

Approximate Arithmetic Logic Units

Over the past 80 years, most algorithmic development has relied on the concept of correct computing, in particular at the level of the basic compute unit: the arithmetic logic unit (ALU). All operations that change the state of variables – whether stored in registers or in random-access memory (RAM) – rely on this special-purpose circuit. This digital circuit performs arithmetic and bitwise operations on binary numbers; it is the fundamental building block of the central processing unit (CPU), of floating-point units (FPUs), and even of graphics processing units (GPUs). However, with the rapid rise of machine learning (ML) and artificial intelligence (AI) systems over the past 20 years, correctness in computing is no longer as essential as it is in operating systems or databases: almost all ML and AI algorithms are based on numerical approximations of (cost) minimisation problems. In this work, rather than making the representation of parameters approximate, we explore reducing the supply voltage of a processing unit below the critical threshold above which computational errors can be excluded. The switching power consumption of a CMOS transistor is directly proportional to the frequency f and to the square of the supply voltage; when the voltage is lowered below a critical threshold, bit-flip errors occur. We analytically derive the distribution over sums of two integers computed by an under-powered ALU, using message passing to compute these distributions efficiently. One application of this characterisation is to use under-powered ALUs to directly implement probabilistic programs in which the distribution parameters are controlled by the CMOS supply voltage.
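The following sketch illustrates the kind of object being derived: a distribution over the outputs of a noisy integer addition. It uses a deliberately simplified error model – each output bit of the (modular) sum flips independently with probability p – rather than the paper's analytical message-passing derivation over the adder circuit; the function name and the independence assumption are our own.

```python
def noisy_add_distribution(a, b, p, n_bits=8):
    """Distribution over the result of adding a and b on an
    under-powered n_bits-wide ALU, under the simplifying assumption
    that each output bit of the modular sum flips independently
    with probability p (a toy model, not the paper's derivation)."""
    correct = (a + b) % (1 << n_bits)
    dist = {}
    for y in range(1 << n_bits):
        d = bin(y ^ correct).count("1")  # number of flipped bits
        dist[y] = (p ** d) * ((1 - p) ** (n_bits - d))
    return dist

# Small flip probability: mass concentrates on the correct sum,
# and the spread widens as p (i.e., the undervolting) grows.
dist = noisy_add_distribution(3, 5, p=0.05)
```

Note how p plays the role of a distribution parameter: in this toy model it is the single knob, directly analogous to the supply voltage controlling the distribution parameters of a probabilistic program.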

  • R. Herbrich, On Noisy Additions, 2022.
    [BibTeX] [Abstract] [Download PDF]

    So far, most of algorithmic development has relied on the concept of correct computing, in particular at the level of the basic compute unit: the arithmetic logical unit (ALU). With the rapid rise of machine learning (ML) and artificial intelligence (AI) systems, correct computing is no longer strictly required as almost all ML and AI algorithm are based on numerical approximations of (cost) minimization problems. On the other hand, energy-efficiency is becoming increasingly important. While current research into energy-efficient ML/AI focuses on {\em approximate representations} of numbers (e.g., k-bit deep neural networks), we will explore the idea of approximate computing resulting out of under-powering the ALU of a processor. In this note, we will analytically derive the distribution over integers resulting from under-powering an ALU using message-passing to efficiently compute these distributions. One application of knowing this characterization is to use under-powered ALUs to directly implement probabilistic programs where the distribution parameters are directly controlled by the CMOS supply voltage.

    @unpublished{herbrich2022noisyadditions,
    author = {Herbrich, Ralf},
    abstract = {So far, most of algorithmic development has relied on the concept of correct computing, in particular at the level of the basic compute unit: the arithmetic logical unit (ALU). With the rapid rise of machine learning (ML) and artificial intelligence (AI) systems, correct computing is no longer strictly required as almost all ML and AI algorithm are based on numerical approximations of (cost) minimization problems. On the other hand, energy-efficiency is becoming increasingly important. While current research into energy-efficient ML/AI focuses on {\em approximate representations} of numbers (e.g., k-bit deep neural networks), we will explore the idea of approximate computing resulting out of under-powering the ALU of a processor. In this note, we will analytically derive the distribution over integers resulting from under-powering an ALU using message-passing to efficiently compute these distributions. One application of knowing this characterization is to use under-powered ALUs to directly implement probabilistic programs where the distribution parameters are directly controlled by the CMOS supply voltage.},
    month = {July},
    note = {Draft Technical Report},
    title = {On Noisy Additions},
    year = {2022},
    url = {https://herbrich.me/papers/noisy_additions.pdf}}