M07 Design Solutions | Stochastic Computation: the Hypes and the Hopes

Date: 2017-03-27
Time: 14:30-18:00
Location / Room: 2A

Organisers

Jie Han, University of Alberta, CA
Marc Riedel, University of Minnesota, US

An early paper by Gaines in 1969 established the concept of stochastic computing. Initially, much of the interest in the topic was in the field of neural networks, where the concept is known as "pulsed" or "pulse-coded" computation. In fact, the general concept of stochastic computing dates back even earlier, to work by J. von Neumann in the 1950s. He applied probabilistic logic to the study of thresholding and multiplexing operations on bundles of wires with stochastic signals. As he eloquently states in the introduction to his seminal paper, "Our present treatment of error is unsatisfactory and ad hoc. … Error is viewed (in this work), therefore, not as an extraneous and misdirected or misdirecting accident, but as an essential part of the process under consideration …" We find this view, that randomness and noise are integral to computation, to be compelling in the modern era of nanoscale electronics.


Interest in stochastic computing has resurged in recent years. Although intriguing and promising, the paradigm is a proverbial hammer in want of a nail. The hammer: a method for synthesizing circuits that compute complex functions with remarkably simple logic. The nail: applications where simple logic matters. The paradigm suffers from high latency, so it was never compelling for high-performance, high-accuracy computation. In the context of novel electronic substrates, however, it is a potentially transformative approach, offering high tolerance to clock skew and significant reductions in dynamic power.


Different researchers have developed the idea from different angles, some focusing on reliability, others on area and power advantages. A particularly promising application area is novel forms of electronics. The design paradigm of computation on stochastic bit streams is potentially transformative, allowing complex functions to be implemented with very low transistor counts. For instance, multiplication can be performed with a single AND gate, and complex functions such as exponentials and trigonometric functions can be computed with a handful of gates. Because the bit-stream representation is uniform, with all bits weighted equally, circuits designed this way are highly tolerant of soft errors (i.e., bit flips). This tutorial will provide an overview of the current state of the art, a discussion of the advantages and disadvantages of the various design methodologies, and a summary of the applications. It will also discuss open problems and future research directions.
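
To make the AND-gate multiplier and the soft-error tolerance concrete, here is a minimal Python sketch (not part of the tutorial material) that simulates unipolar stochastic bit streams, where a value p in [0, 1] is encoded as the probability that any given bit is 1. The helper names, stream length, and error count are illustrative assumptions, not anything prescribed by the organisers.

```python
import random

def bernoulli_stream(p, length, rng):
    """Unipolar stochastic stream: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def estimate(stream):
    """Decode a unipolar stream back to a value: the fraction of 1s."""
    return sum(stream) / len(stream)

def flip_bits(stream, n_flips, rng):
    """Inject soft errors by flipping n_flips randomly chosen bits."""
    noisy = list(stream)
    for i in rng.sample(range(len(noisy)), n_flips):
        noisy[i] ^= 1
    return noisy

rng = random.Random(0)
N = 10_000                      # stream length; accuracy improves roughly as 1/sqrt(N)
p_a, p_b = 0.6, 0.5

a = bernoulli_stream(p_a, N, rng)
b = bernoulli_stream(p_b, N, rng)

# "Multiplication with a single AND gate": the bitwise AND of two
# independent unipolar streams encodes the product p_a * p_b.
product = [x & y for x, y in zip(a, b)]
print(f"exact product  : {p_a * p_b:.4f}")
print(f"stochastic est.: {estimate(product):.4f}")

# Soft-error tolerance: every bit carries equal weight, so flipping a
# handful of bits perturbs the decoded value only slightly.
noisy = flip_bits(product, 20, rng)
print(f"after 20 flips : {estimate(noisy):.4f}")
```

In this sketch, flipping 20 bits of a 10,000-bit stream can move the decoded value by at most 0.002, whereas a single bit flip in a conventional weighted binary word can change the value by half its range; this is the sense in which the uniform representation is tolerant of soft errors. The price, as noted above, is latency: long bit streams are needed for high precision.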

Agenda

Time   Label    Session
14:30  M07.1    Stochastic Computation: the Hypes and the Hopes

This tutorial reviews the current status of stochastic computation, discusses the advantages and disadvantages of the paradigm, and identifies important questions that remain to be addressed.

14:30  M07.1.1  Introduction to Stochastic Computation: Background, History, and Underlying Theory
15:15  M07.1.2  Stochastic Computational Models for Reliability Evaluation, Biological and Neural Networks
16:30  M07.1.3  A General Synthesis Methodology and a Deterministic Approach to "Stochastic Computing"
17:15  M07.1.4  Applications: Skew Tolerance, Low Power and Novel Substrates