AMYBIA
Aggregating MYriads of Biologically-Inspired Agents
Hardware-Theory codesign
Our objective is to develop innovative tools for the simulation of our models on massively parallel devices such as FPGAs. To this aim, we propose to adopt a hardware-theory codesign process, i.e., to establish a coherent path of research linking the design of a model to its actual implementation, in a massively distributed way, on digital circuits. The major interest of hardware-theory codesign is that it helps us both obtain efficient simulation tools and question the way we design our models.
This methodology was partially explored to define another kind of hardware-compatible and massively distributed models, based on neural networks, as illustrated in [Gir00], [Gir06a], and [Gir06b].
As far as our models are concerned, we focus on massively parallel systems such as cellular automata or multi-agent systems. Informally, we identify these systems as those in which the number of active computing units is of the same order as the number of components. This is clearly the case for cellular automata, where each cell is active and updates its state by reading the states of its neighbours.
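As a concrete illustration, the following minimal sketch (our own example, not code from the works cited above) shows one synchronous step of a one-dimensional cellular automaton in which every cell is an active unit; the lattice size N and the majority-vote rule are purely illustrative choices.

```c
/* Illustrative sketch: one synchronous step of a 1-D cellular automaton.
 * Every cell reads its two neighbours and updates its own state;
 * conceptually, all N applications of the rule happen in parallel. */
#include <stddef.h>

#define N 64  /* illustrative lattice size */

/* Example local rule: majority vote over the cell and its neighbours. */
static unsigned char rule(unsigned char l, unsigned char c, unsigned char r)
{
    return (unsigned char)((l + c + r) >= 2);
}

/* Each cell depends only on the previous configuration `src`,
 * so the result is independent of the order of the loop. */
void ca_step(const unsigned char *src, unsigned char *dst)
{
    for (size_t i = 0; i < N; i++) {
        size_t l = (i + N - 1) % N;   /* periodic boundary */
        size_t r = (i + 1) % N;
        dst[i] = rule(src[l], src[i], src[r]);
    }
}
```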
Paradoxically, despite the use of decentralised, local rules, actual simulations of these models are carried out on classical sequential processors. This slows the simulations down considerably, inasmuch as sequential machines generally require large buffers to emulate the parallelism of updates. Moreover, our experience with parallel hardware implementations of massively distributed models shows that any sequential simulation of such models implies a partial distortion or simplification of the distributed computing model. Actually implementing the model on a parallel machine is therefore mandatory to take all constraints into account (see for example [GTH07]). Furthermore, a particularly suitable feature of this approach is that it is expected to provide, as a bonus, feedback to improve the distributed essence of the models we define.
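The buffering overhead mentioned above can be made concrete. Continuing the hypothetical sketch, the fragment below contrasts an in-place sequential update, which silently distorts the synchronous model, with the double-buffered emulation that a sequential machine must pay for and that a truly parallel circuit avoids.

```c
/* BUGGY with respect to the synchronous model: updating in place means
 * cell i reads the *new* state of an already-updated neighbour. */
void ca_step_inplace_wrong(unsigned char *cells)
{
    for (size_t i = 0; i < N; i++) {
        size_t l = (i + N - 1) % N, r = (i + 1) % N;
        cells[i] = rule(cells[l], cells[i], cells[r]);  /* cells[l] may be stale-free */
    }
}

/* Faithful sequential emulation: keep two full copies of the
 * configuration and swap them after each step (the "large buffer"). */
void ca_run(unsigned char *a, unsigned char *b, int steps)
{
    for (int t = 0; t < steps; t++) {
        ca_step(a, b);
        unsigned char *tmp = a; a = b; b = tmp;  /* swap buffers */
    }
}
```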
Hence, the actual implementation of the models on parallel machines serves here both as a sanity check for the realism of our models and as a direct way to improve their performance. For example, if a model uses stochastic laws, as is generally the case, then the code of the laws that govern each component must be written with great care. Indeed, random generation is a costly process, sometimes costlier than the computation of the transitions themselves, and it is important not to introduce biases into the simulation. Distributed implementations also require distributed random generators, for which a balance between the independence of the random variables and the hardware area must be carefully struck.
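As a hedged illustration of this trade-off (our own sketch, not the generator actually used in the project), each component can own a tiny xorshift state: a handful of XORs and shifts is cheap to replicate across an FPGA, but the streams of neighbouring cells are only weakly independent, which is precisely the balance evoked above.

```c
/* Illustrative per-cell random generator (assumed design, not the
 * project's actual one): a 32-bit xorshift state owned by each cell. */
#include <stdint.h>

typedef struct { uint32_t s; } cell_rng;

/* Seeds must differ per cell, e.g. derived from the cell index. */
static void cell_rng_init(cell_rng *r, uint32_t cell_index)
{
    r->s = 0x9E3779B9u ^ (cell_index * 2654435761u);
    if (r->s == 0) r->s = 1;  /* xorshift state must be non-zero */
}

/* One xorshift32 step: maps to a few XOR gates and wire shifts in hardware. */
static uint32_t cell_rng_next(cell_rng *r)
{
    uint32_t x = r->s;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return r->s = x;
}
```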