
V.8 Tutorial 8: Parallel simulations

This tutorial explains how to build and run a parallel simulation on mars, with the results displayed on a Linux workstation.

A broad section of Part III of this manual deals with setting up the files required for parallel simulations. Make sure you have gone through it and set everything up accordingly. There are two ways of starting a parallel simulation.

If you intend to run the simulation on your own parallel environment, you will need to ensure that the number of nodes (including the server) and the IP addresses are set up correctly as explained in Part I, then start the server first (Script or ServerClientMulti.bat), and finally start all your clients (Client or Client.bat).
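For example, assuming a server machine and two hypothetical client workstations called nodeA and nodeB: launch the server on the server machine with Script (or ServerClientMulti.bat), wait until it is up, and then launch a client on nodeA and another on nodeB with Client (or Client.bat). The host names here are only placeholders; the important point is that the server must be running before any client is started.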

If you intend to run the simulation on mars, the HPC used during the development of NMLPlay and owned by the School of Informatics, University of Edinburgh, you can use an automated script (which requires a key to have been generated as explained earlier). We shall run this particular tutorial on mars. Type launch_12_procs on the command line, as explained in Part II of the manual.

If the connections are established correctly, you should see nb_nodes * nb_nodes messages of the form "Test othercqs: ID: ID_connecting othercqs new size : new_size class of the RmtApp: neosim.kernel.BasicKernel_Stub" (for instance, 144 messages for 12 nodes). You should also get some other messages such as "there alpha beta delta gama". These messages indicate that the application is proceeding normally. Other messages dealing with the command queues might also appear.

Create the same network of neurons as explained in Tutorial 2. You might just load FirstModel.xml if you have already carried out Tutorial 2 under Linux.

Click on "Environment" on the top panel. Then select ImplementationNeuron, and press the "Edit" button below. R is the resistance and should be 20.0, C the capacity should be 150.0. Other parameters should be defined as described in Figure 37.

Figure 37 : Definition of the neural parameters.

Right-click on the population NewPop0. A small menu will appear (Figure 38).

Figure 38 : The editing menu.

This menu allows editing particular properties without having to scroll to find them. Left-click on structure. Enlarge the new window a little and select a 3D 8x8x8 structure, i.e. 512 neurons (Figure 39).

Figure 39 : Defining the 512-neuron 3D grid.

Obviously, the number of neurons, and especially the number of connections (synapses), that we shall be able to simulate depends on the particular architecture of your network of workstations. The current distribution algorithm ensures that the neurons are evenly distributed among the nodes. On mars, 10,000 neurons and 100,000,000 connections have been simulated simultaneously (using the most basic neurons and the most basic neural connections); it took 2 days to build the network. More realistic models, however, will encounter more severe limitations (the compression algorithms do not handle customised synapses and neurons). In this tutorial we use a small number of neurons (512), and a sketch of what an even distribution of them looks like is shown below.
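To give an idea of what an even distribution means for the 512 neurons of this tutorial, here is a minimal sketch. It is not the simulator's actual distribution code, and the number of client nodes (11, i.e. 12 processes minus one server) is only an assumption for illustration:

// Illustrative sketch only (not NMLPlay/NeoSim code): an even split of 512
// neurons over a hypothetical set of 11 client nodes.
public class EvenSplitExample {
    public static void main(String[] args) {
        int neurons = 512;      // the 8 x 8 x 8 grid of this tutorial
        int clientNodes = 11;   // assumption: 12 processes minus one server
        for (int node = 0; node < clientNodes; node++) {
            // base share, plus one extra neuron for the first (512 mod 11) nodes
            int share = neurons / clientNodes + (node < neurons % clientNodes ? 1 : 0);
            System.out.println("node " + node + ": " + share + " neurons");
        }
    }
}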

In order to run the simulation on a parallel environment we need to define the kernel parameters more precisely. Edit the settings as explained in Tutorial 3. The "Kernel parameters" field must be set to "rmi -c IP_Address username nb_nodes nbthread_per_node".
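For example (all values hypothetical): for a server reachable at 192.168.0.10, a user account called student, 12 nodes and 2 threads per node, the field would read "rmi -c 192.168.0.10 student 12 2". Use the IP address, user name and node count that match your own setup, as configured in Part I.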

I prefer to untick the visualisation checkbox in order to disable it, although you may keep it enabled if you have a good graphics card. Your parameters should correspond to Figure 40.



Figure 40 : Parameters of the parallel simulation.

Save the simulation under the name "ParallelSim.xml". Now you can run it as explained in Tutorial 3.

Now you know how to run a parallel simulation. In the future you might be interested in running more complicated models, and in running them fast. Do not forget that the minimal time step is the main factor when you want to speed up the computation. This minimal time step is whichever is smallest of the "Stepping" setting, the "Visualisation delay" (see Figure 40), and the smallest delay among all the projections and attachments. Beware that a small delay will severely slow down the application. Therefore, if the application is running slowly, check that the "Stepping", the "Visualisation delay" and the attachment delays are long enough. The minimal time step should normally correspond to the smallest delay among all the projections (i.e. the minimal synaptic delay). However, if for some reason the application runs too fast and you want to slow down the visualisation rendering of the network spiking activity, for instance to monitor it more closely, you can use a visualisation delay smaller than the minimal synaptic delay.
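The following minimal sketch (not part of NMLPlay; all delay values are hypothetical and the unit is arbitrary) simply illustrates how the effective minimal time step is obtained as the smallest of the settings mentioned above:

// Illustrative sketch only: the effective minimal time step is the smallest
// of the "Stepping" value, the "Visualisation delay" and the smallest
// projection/attachment delay. All numbers below are hypothetical.
public class MinimalTimeStepExample {
    public static void main(String[] args) {
        double stepping = 1.0;                        // "Stepping" setting
        double visualisationDelay = 5.0;              // "Visualisation delay" setting
        double[] projectionDelays = {2.0, 3.5, 2.5};  // synaptic delays of the projections
        double attachmentDelay = 4.0;                 // delay of the attachments

        double minDelay = attachmentDelay;
        for (double d : projectionDelays) {
            minDelay = Math.min(minDelay, d);
        }
        double minimalTimeStep = Math.min(stepping, Math.min(visualisationDelay, minDelay));
        // The smaller this value, the more synchronisation steps the parallel
        // kernel has to perform, and the slower the simulation runs.
        System.out.println("effective minimal time step: " + minimalTimeStep);
    }
}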