The objectives of this axis were:

  • Characterization and optimization of new stochastic nano-devices;
  • Implementation of single Bayesian gates, and of assemblies of them, combining stochastic nano-devices with conventional logic gates;
  • An efficient emulation, on reconfigurable logic (FPGA), of tree computations over Bayesian gates, together with hardware prototypes implementing simple combinations of Bayesian gates;
  • A proposal for a non-von Neumann architecture for probabilistic computing, a prototype FPGA emulation of parts of this architecture, and a study of the electronic challenges that will be encountered in a future implementation of this architecture.
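The Bayesian-gate objective can be illustrated in software. The following is a minimal sketch, not the project's actual implementation, assuming the standard stochastic-computing convention in which a probability is encoded as the density of 1s in a random bitstream. Under that convention, a Muller C-element (output switches to 1 only when both inputs are 1, switches to 0 only when both are 0, and otherwise holds its state) fuses two streams into the normalized product p1·p2 / (p1·p2 + (1−p1)(1−p2)), i.e. the Bayesian combination of two independent binary opinions.

```python
import random

def c_element(stream_a, stream_b):
    """Muller C-element: the output follows the inputs when they agree
    and holds its previous value when they disagree."""
    out, state = [], 0
    for a, b in zip(stream_a, stream_b):
        if a == b:
            state = a
        out.append(state)
    return out

def bernoulli_stream(p, n, rng):
    """Encode probability p as a random bitstream whose 1-density is p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

rng = random.Random(42)
p1, p2, n = 0.8, 0.7, 200_000
fused = c_element(bernoulli_stream(p1, n, rng), bernoulli_stream(p2, n, rng))
estimate = sum(fused) / n
# Closed-form Bayesian fusion of the two streams:
exact = p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))
```

The attraction of such a gate is that the fusion happens bit by bit, with no arithmetic unit: the probability semantics live entirely in the statistics of the streams.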

It is widely recognized that the CMOS industry faces a number of hurdles in continuing to improve processors in terms of size, performance and dissipation. While the number of transistors keeps increasing, clock frequencies have stalled since around 2005 because of extremely large thermal dissipation. To keep increasing performance, computer architects have introduced parallelism into their systems via multi- and many-core architectures. Nevertheless, other technical complications remain to be solved, such as dark silicon and the growing number of defective components caused by shrinking dimensions.

In this context, implementing Bayesian inference on classical sequential computers is not only inefficient in terms of performance, but also a terrible waste of electrical energy. It is indeed ironic that computations involving intensive probabilistic inference (such as Monte-Carlo simulations), which require huge numbers of random-number generations, run on totally deterministic and error-free machines.
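The random-number consumption of sampling-based inference can be made concrete with a toy example. The two-cause network and its probabilities below are invented purely for illustration: estimating P(rain | grass wet) by rejection sampling already burns two random draws per simulated world, most of which are then discarded.

```python
import random

def infer_rain_given_wet(n_samples, rng):
    """Rejection sampling: keep only the simulated worlds where the grass
    is wet, then return the fraction of those worlds in which it rained,
    together with the total number of random draws consumed."""
    draws = accepted = rainy = 0
    for _ in range(n_samples):
        rain = rng.random() < 0.2       # prior P(rain) = 0.2  (invented)
        sprinkler = rng.random() < 0.1  # prior P(sprinkler) = 0.1  (invented)
        draws += 2
        if rain or sprinkler:           # grass is wet iff either cause occurred
            accepted += 1
            rainy += rain
    return rainy / accepted, draws

rng = random.Random(0)
posterior, n_draws = infer_rain_given_wet(100_000, rng)
# Exact posterior: 0.2 / (1 - 0.8 * 0.9) = 0.2 / 0.28
```

Every one of those draws is produced by a deterministic pseudo-random generator running on deterministic hardware, which is precisely the mismatch the paragraph above points out.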

On the contrary, by adapting hardware to the specific constraints of probabilistic computation, we have succeeded in improving energy consumption and performance by two orders of magnitude. Along the same lines, there have been recent attempts to develop non-binary low-level representations of probability, such as the technology used for error correction in memory chips, as well as previous work on probabilistic gates; however, these follow different approaches (Lyrics; Lingamneni et al., 2012).
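One standard example of such a non-binary representation (a textbook stochastic-computing construction, not necessarily the one used by the cited work): when each probability is carried as the 1-density of a random bitstream, a single AND gate multiplies them, which is exactly the operation needed to combine independent probabilities.

```python
import random

rng = random.Random(7)
n = 100_000
p1, p2 = 0.6, 0.5

# Encode each probability as the density of 1s in a random bitstream.
s1 = [rng.random() < p1 for _ in range(n)]
s2 = [rng.random() < p2 for _ in range(n)]

# A bitwise AND of two independent streams carries the product p1 * p2.
product = sum(a and b for a, b in zip(s1, s2)) / n
```

A multiplication that would occupy a full floating-point unit in a conventional processor reduces here to one logic gate, at the cost of precision that grows only with the stream length.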

In the BAMBI project, we will develop new approaches that will give rise to completely new, massively parallel computers dedicated to probabilistic inference.