Vision

An allosteric macromolecule moves from state to state according to its interactions with primary messengers, and in some active states it produces a secondary messenger. This mechanism can be seen as performing elementary probabilistic inferences.
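
A minimal way to make this picture concrete is to treat the macromolecule as a two-state stochastic switch. The Python sketch below is only an illustration under our own assumptions (the state names, rates and parameter values are invented for the example, not taken from the project): it simulates random transitions between an inactive and an active conformation and shows that the time-averaged activity, i.e. the rate at which a secondary messenger would be produced, converges to the normalization p_act / (p_act + p_deact), the same elementary operation that appears in Bayes' rule.

```python
import random

def simulate_allosteric_switch(p_act, p_deact, steps=100_000, seed=0):
    """Two-state Markov model of an allosteric macromolecule.

    p_act   : per-step probability of switching INACTIVE -> ACTIVE
              (e.g. driven by binding of a primary messenger)
    p_deact : per-step probability of switching ACTIVE -> INACTIVE
    Returns the fraction of steps spent ACTIVE, a proxy for the rate
    of secondary-messenger production.
    """
    rng = random.Random(seed)
    active = False
    active_steps = 0
    for _ in range(steps):
        if active and rng.random() < p_deact:
            active = False
        elif not active and rng.random() < p_act:
            active = True
        active_steps += active
    return active_steps / steps

if __name__ == "__main__":
    p_act, p_deact = 0.02, 0.06
    simulated = simulate_allosteric_switch(p_act, p_deact)
    # Stationary probability of the ACTIVE state: the normalization
    # p_act / (p_act + p_deact), structurally the same operation as the
    # normalization step of Bayes' rule.
    analytic = p_act / (p_act + p_deact)
    print(f"simulated activity {simulated:.3f}  vs  analytic {analytic:.3f}")
```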

Living organisms have developed highly sophisticated information-processing networks, operating at various time and space scales and achieving a variety of specific and adaptive functions. In contrast with computers, biological systems exploit completely different substrates: membranes, macromolecules and messengers diffusing in an aqueous environment.

Unlike computers, biological systems are massively parallel, highly tolerant to component and signal variability; they do not require a precise centralized clock; they are not supervised by an external user; they continuously work and adapt themselves through interaction with the external world. Most of these important features have already been emphasized, and some of them (e.g. parallelism or asynchrony) have been successfully implemented in modern computers and networks. However, the development of modern computers is principally devoted to increasing performance and decreasing size and energy consumption, without any modification of the principles of computation. In particular, all the components must always perform deterministic and exact operations on sets of binary signals. This is very constraining, and strongly impedes further progress in terms of speed, miniaturization and power consumption.

In our view, a key point is that biological information-processing systems are based on a completely different organization, adapted to living in the real world rather than in a symbolic one. As a consequence, sensory data interpretation, knowledge, anticipation, decision-making and rule learning must deal with incomplete and uncertain information. Probability theory is therefore the necessary mathematical framework to understand, for example, how sensory-motor and cognitive systems with limited computational, memory and energy resources can efficiently interact with their highly complex and changing environment. It can be shown, both formally and in practice, that probability theory is an extension of the Boolean logic on which all conventional computers are based. Therefore, building artificial computing systems founded on probabilistic inferences, and not merely on logical inferences, will dramatically enhance their robustness, autonomy and flexibility.
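
The sense in which probability theory extends Boolean logic can be illustrated in a few lines. In the sketch below (a minimal illustration; the independence assumption in p_and is ours, made only to keep the functions to two arguments), restricting probabilities to the values 0 and 1 turns the product and sum rules into the truth tables of AND and OR, and the law of total probability into modus ponens.

```python
from itertools import product

# Product rule for independent events and sum rule for a disjunction.
def p_and(pa, pb):
    return pa * pb

def p_or(pa, pb):
    return pa + pb - pa * pb

# Restricted to degenerate (0/1) probabilities, they reproduce Boolean logic.
for a, b in product([0, 1], repeat=2):
    assert p_and(a, b) == (a and b)
    assert p_or(a, b) == (a or b)

# Modus ponens as a limiting case of the law of total probability:
# if P(B|A) = 1 and P(A) = 1 then P(B) = P(B|A)P(A) + P(B|~A)P(~A) = 1,
# whatever the value of P(B|~A).
p_A, p_B_given_A, p_B_given_notA = 1.0, 1.0, 0.5   # 0.5 is arbitrary and irrelevant
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
assert p_B == 1.0
print("Boolean logic recovered as the 0/1 limit of probability theory")
```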

During the project we proposed new computational models and artificial components that relax the hard constraint that every basic component must perform deterministic and exact operations.

The digital age has moved computers out of their initial scientific and technical enclaves. In contrast with the beginning of this era, the vast majority of users are nowadays entirely unfamiliar with the principles of digital processing. This was made possible by the development of very intuitive and friendly interfaces, and by the discovery of new usages, such as social networks. However, modern processors are still built from the same fundamental components: logic gates, reliable binary circuits, and so on. As a consequence, computers are generally unable to interact directly with the real world; a human user must do this job. Industrial robots are a notable exception, but they require highly constrained and controlled working environments.

Our long-term vision is that completely new applications and usages of artificial systems will emerge, permitting these systems to cope with incomplete information and to make decisions under uncertainty. These application domains include autonomous robotics but go far beyond it. Artifacts capable of reasoning and adapting in novel and uncertain situations, as animals or humans do, could become highly interactive and intelligent, and could efficiently help people to learn and communicate, or to prevent risky situations. This implies a radical change to current computation models, at the very deepest design level of these artificial systems.

Algebra

During the project we pursued the following objectives: develop a theory of probabilistic computation based on elementary operators and gates; propose a way to encode probability distributions with stochastic binary signals and pulses; develop mathematical tools for analyzing the dynamic properties of networks of Bayesian gates; and extend some basic notions of language and programming …
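
As an illustration of the second objective (the encoding below is a standard stochastic-computing sketch under our own parameter choices, not necessarily the encoding developed in the project), a probability can be represented by the fraction of 1s in a random binary stream; a plain AND gate applied to two independent streams then multiplies the encoded probabilities.

```python
import random

def bitstream(p, n, rng):
    """Encode probability p as a stochastic binary signal of length n:
    each bit is 1 with probability p, so the stream's mean is p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(bits):
    """Recover the encoded probability as the empirical fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000
a = bitstream(0.7, n, rng)   # encodes P(A) = 0.7
b = bitstream(0.4, n, rng)   # encodes P(B) = 0.4, independent of A

# A bitwise AND gate multiplies the encoded probabilities of independent
# streams, i.e. it implements the product rule P(A)P(B).
anded = [x & y for x, y in zip(a, b)]
print(f"decoded product = {decode(anded):.3f}  (exact value 0.28)")
```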

Biology

In biology, we advanced a completely new hypothesis stating that biochemical networks are probabilistic computers capable of performing Bayesian inferences. This contrasts with traditional descriptive models, which assume that variability and multiplicity in cell signaling pathways are either not significant or simply a spurious result of specific biological constraints. In our view, these are key features …
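
To illustrate why such variability can carry information rather than being noise, consider a toy readout (all numbers below are illustrative assumptions of ours, not measured values): each of n receptors is independently active with a probability that depends on whether an extracellular signal is present. The distribution of the number of active receptors, precisely the variability usually dismissed as spurious, is then exactly the statistic a downstream network would need in order to compute a Bayesian posterior on the presence of the signal.

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of observing k active receptors out of n."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Toy model: activation probability of a single receptor with and
# without the extracellular signal, and a prior on the signal.
n, p_without, p_with = 50, 0.1, 0.4
prior_signal = 0.5

def posterior_signal(k_active):
    """P(signal present | k_active receptors observed active), by Bayes' rule."""
    w_with = binomial_pmf(k_active, n, p_with) * prior_signal
    w_without = binomial_pmf(k_active, n, p_without) * (1 - prior_signal)
    return w_with / (w_with + w_without)

for k in (2, 10, 20):
    print(f"{k:2d} active receptors -> P(signal) = {posterior_signal(k):.3f}")
```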

Hardware

The objectives of this axis were: Characterization and optimization of new stochastic nano-devices; Implementation of single Bayesian gates and of assemblies of Bayesian gates combining stochastic nano-devices and conventional logic gates; An efficient emulation, on a reconfigurable-logic (FPGA) architecture, of tree calculi of Bayesian gates, together with hardware prototypes implementing simple combinations of Bayesian gates; A proposition of …
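
As an illustration of how a Bayesian gate can be obtained from a single conventional logic element fed with stochastic signals, the sketch below models a Muller C-element driven by two independent Bernoulli bitstreams. Its time-averaged output equals the normalized product p1*p2 / (p1*p2 + (1-p1)*(1-p2)), i.e. the Bayesian fusion of two independent pieces of binary evidence under a uniform prior. This is a known construction from the stochastic-computing literature and is given here only as an illustrative sketch, not as the specific gate design developed in the project.

```python
import random

def bernoulli_stream(p, n, rng):
    """Stochastic binary signal whose fraction of 1s encodes probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def muller_c(a_bits, b_bits):
    """Muller C-element: the output switches to 1 when both inputs are 1,
    to 0 when both are 0, and otherwise holds its previous value."""
    out, state = [], 0
    for a, b in zip(a_bits, b_bits):
        if a == 1 and b == 1:
            state = 1
        elif a == 0 and b == 0:
            state = 0
        out.append(state)
    return out

rng = random.Random(1)
n = 200_000
p1, p2 = 0.8, 0.6                      # the two probabilities to fuse
out = muller_c(bernoulli_stream(p1, n, rng), bernoulli_stream(p2, n, rng))

fused = sum(out) / n
# Stationary output probability for independent inputs: the normalized
# product, i.e. Bayesian fusion of two binary evidences with a uniform prior.
expected = p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))
print(f"C-element output = {fused:.3f}   analytic fusion = {expected:.3f}")
```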