The amount of time by which an entity is allowed to lag behind the "current simulation time" is determined by its minimum output delay, which must be greater than zero. A practical lower bound is the integration timestep (e.g. 20 µs), but in practice it is expected to be of the order of 1-10 ms, to account for axonal delay.
The figure below illustrates 4 entities at the start of a simulation.
Each entity has a separate output delay (either 2 or 3 ms in this
example), and at the start all entities are at time 0.
When the simulation is set running, the current simulation time can
advance to the minimum of all the entities' output delays, and any
entity can then be advanced up to the current simulation time by
calling its handleEvents() method. If all 4 entities
were on different processors, they could all be advanced in parallel.
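This advancement scheme can be sketched as follows. The class and function names below (Entity, handle_events, advance) are hypothetical stand-ins for the document's entities and their handleEvents() method, not the actual API; the sketch only shows how the advance horizon is computed from the minimum output delays.

```python
class Entity:
    def __init__(self, name, min_output_delay):
        self.name = name
        self.min_output_delay = min_output_delay  # e.g. 2 or 3 ms
        self.local_time = 0.0                     # all entities start at time 0

    def handle_events(self, horizon):
        # Advance this entity's local clock up to (at most) the horizon.
        # A real entity would perform many integration timesteps here.
        self.local_time = horizon

def advance(entities):
    # Current simulation time may advance to the minimum over all entities
    # of (local time + min output delay): no entity can emit an event
    # earlier than that, so causality is preserved.
    horizon = min(e.local_time + e.min_output_delay for e in entities)
    for e in entities:  # these calls are independent: could run in parallel
        e.handle_events(horizon)
    return horizon

entities = [Entity("a", 2.0), Entity("b", 2.0),
            Entity("c", 3.0), Entity("d", 3.0)]
t = advance(entities)  # at the start, the horizon is min(0+2, 0+3) = 2 ms
```

In this simplified sketch every entity advances fully to the horizon on each call; in the scheme the document describes, entities may stop anywhere short of it, which is what produces the staggered, leapfrogging local times discussed next.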
Figures 17 and 18 show simulation time
advancing.
Note that at any point in time the entities' local times are staggered, and that each entity lags behind current simulation time by anywhere from zero up to its minimum output delay. The local times of entities will "leapfrog" each other. This has performance advantages: each entity can perform a large number of timestep updates in a single function call, with its state variables likely to remain in cache. The other advantage is that entities can be physically distributed across different processors, with a degree of slackness in the synchronisation requirements between them. The catch, from an ease-of-programming point of view, is that, for causality reasons, entities cannot manipulate other entities directly and instantaneously. An entity wishing to create more entities must do so by posting an event at least its minimum output delay into the future.
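The entity-creation restriction can be illustrated with a minimal event queue. The names here (post_event, event_queue, the action callback) are hypothetical, invented for illustration: the point is only that the creation takes effect no earlier than the posting entity's minimum output delay.

```python
import heapq

event_queue = []  # (time, action) pairs, ordered by time

def post_event(now, min_output_delay, action):
    # An entity may not act on the world instantaneously; its effect is
    # timestamped at least min_output_delay into the future.
    heapq.heappush(event_queue, (now + min_output_delay, action))

created = []
# An entity at local time 5.0 with a 2.0 ms minimum output delay
# requests creation of a new entity:
post_event(now=5.0, min_output_delay=2.0,
           action=lambda: created.append("new entity"))

# Later, once current simulation time reaches the event's timestamp,
# the event is popped and executed:
time, action = heapq.heappop(event_queue)
action()  # the new entity comes into existence at time 7.0
```

Because the event carries a future timestamp, no other entity can observe the new entity before current simulation time has caught up, which is exactly the causality guarantee the minimum output delay exists to provide.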