SimGrid
3.9
Versatile Simulation of Distributed Systems
A number of options can be given at runtime to change the default SimGrid behavior. For a complete list of all configuration options accepted by the SimGrid version used in your simulator, simply pass the --help configuration flag to your program. If some of the options are not documented on this page, this is a bug that you should please report so that we can fix it. Note that some of the options presented here may not be available in your simulators, depending on the compile-time options that you used.
There are several ways to pass configuration options to the simulators. The most common way is to use the --cfg command line argument. For example, to set the item Item to the value Value, simply type the following:
my_simulator --cfg=Item:Value (other arguments)
Several --cfg command line arguments can naturally be used. If you need to include spaces in the argument, don't forget to quote the argument. You can even escape the included quotes (write \' for ' if your argument is between ').
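For instance, combining several options, one of them with a quoted value (Item2 being just another placeholder item name for illustration), could look like this:
my_simulator --cfg=Item:Value --cfg='Item2:Value with spaces' (other arguments)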
Another solution is to use the <config> tag in the platform file. The only restriction is that this tag must occur before the first platform element (be it <AS>, <cluster>, <peer> or whatever). The <config> tag takes an id attribute, but it is currently ignored, so you don't really need to pass it. The important part is that, within that tag, you can pass one or several <prop> tags to specify the configuration to use. For example, setting Item to Value can be done by adding the following to the beginning of your platform file:
<config>
  <prop id="Item" value="Value"/>
</config>
A last solution is to pass your configuration directly using the C interface. If you happen to use the MSG interface, this is very easy with the MSG_config() function. If you do not use MSG, that's a bit more complex, as you have to mess with the internal configuration set directly, as follows. Check the relevant page for details on all the functions you can use in this context, _sg_cfg_set being the only configuration set currently used in SimGrid.
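As a minimal sketch of the MSG way (using the same Item and Value placeholders as above), the call simply takes the item name and its value as strings:
#include "msg/msg.h"

int main(int argc, char *argv[])
{
  MSG_init(&argc, argv);        /* initializes MSG and parses any --cfg command line arguments */
  MSG_config("Item", "Value");  /* sets one configuration item programmatically */
  /* ... create the environment, deploy the application, run the simulation ... */
  return 0;
}
Note that MSG_config() must be called after MSG_init(), and that command line --cfg arguments are parsed by MSG_init() itself.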
SimGrid comes with several network and CPU models built in, and you can change the model used at runtime by changing the passed configuration. The three main configuration items are given below. For each of these items, passing the special help value gives you a short description of all possible values. Also, --help-models should provide information about all models for all existing resources.
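For example, to list the network models supported by your simulator:
my_simulator --cfg=network/model:help (other arguments)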
As of this writing, the accepted network models are the following. Over time, new models can be added and some experimental models can be removed; check the values on your simulator for up-to-date information. Note that the CM02 model is described in the research report A Network Model for Simulation of Grid Application, while LV08 is described in Accuracy Study and Improvement of Network Simulation in the SimGrid Framework.
If you compiled SimGrid accordingly, you can use packet-level network simulators as network models (see Packet level simulation). In that case, you have two extra models, described below, and some specific additional configuration flags.
Concerning the CPU, we have only one model for now:
The workstation concept is the aggregation of a CPU with a network card. Three models exist, but only two of them are actually interesting. The "compound" one is simply due to the way our internal code is organized, and can easily be ignored. So in the end, you have two workstation models: the default one lets you aggregate an existing CPU model with an existing network model, but does not allow parallel tasks, because these beasts need some collaboration between the network and CPU models. That is why ptask_07 is used by default when using SimDag.
The network and CPU models that are based on lmm_solve (that is, all our analytical models) accept specific optimization configurations.
It is still possible to disable the maxmin_selective_update feature because it can prove counter-productive in very specific scenarios where the interaction level is high. In particular, if all your communications share a given backbone link, you should disable it: without maxmin_selective_update, every communication is updated at each step through a simple loop over them. With that feature enabled, every communication still gets updated in this case (because of the dependency induced by the backbone), but through a complicated pattern aiming at following the actual dependencies.
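For example, to disable the feature for the network model (assuming the usual 0/1 boolean convention; the cpu/maxmin_selective_update item works the same way):
my_simulator --cfg=network/maxmin_selective_update:0 (other arguments)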
The analytical models handle a lot of floating point values. It is possible to change the epsilon used to update and compare them through the maxmin/precision item (default value: 0.00001). Changing it may speed up the simulation by discarding very small actions, at the price of a reduced numerical precision.
By default, Surf computes the analytical models sequentially to share their resources and update their actions. It is possible to run them in parallel, using the surf/nthreads item (default value: 1). If you use a negative value, the number of available cores is automatically detected and used instead.
Depending on the workload of the models and their complexity, you may get a speedup or a slowdown because of the synchronization costs of threads.
The analytical models need to know the maximal TCP window size to take the TCP congestion mechanism into account. This is set to 20000 by default, but can be changed using the network/TCP_gamma item.
On Linux, this value can be retrieved using the following commands. Both give a set of values, and you should use the last one, which is the maximal size.
cat /proc/sys/net/ipv4/tcp_wmem # gives the sender window
cat /proc/sys/net/ipv4/tcp_rmem # gives the receiver window
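For example, if the last value reported is 4194304 (an illustrative number, not a recommendation), you would pass:
my_simulator --cfg=network/TCP_gamma:4194304 (other arguments)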
These factors make it possible to better take the TCP slow start into account. The corresponding values were computed through data fitting on the timings of packet-level simulators. You should not change these values unless you are really certain of what you are doing. See Accuracy Study and Improvement of Network Simulation in the SimGrid Framework for more information about these coefficients.
If you are using the SMPI model, these correction coefficients are themselves corrected by constant values depending on the size of the exchange. Again, only hardcore experts should bother about this fact.
As of SimGrid v3.7, cross-traffic effects can be taken into account in analytical simulations. It means that outgoing and incoming communication flows are treated independently. In addition, the LV08 model adds 0.05 of usage on the opposite direction for each new flow created. This can be useful to simulate some important TCP phenomena such as ACK compression.
For that to work, your platform must have two links for each pair of interconnected hosts. An example of usable platform is available in examples/msg/gtnets/crosstraffic-p.xml.
This is activated through the network/crosstraffic item, which can be set to 0 (disable this feature) or 1 (enable it).
Note that with the default workstation model this option is activated by default.
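To explicitly turn it off, for example:
my_simulator --cfg=network/crosstraffic:0 (other arguments)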
When you want to use network coordinates, as when you use an <AS> in your platform file with Vivaldi as the routing model, you must set network/coordinates to yes so that all mandatory initializations are done in the simulator.
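For example:
my_simulator --cfg=network/coordinates:yes (other arguments)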
(this configuration item is experimental and may change or disappear)
It is possible to specify a timing gap between consecutive emissions on the same network card through the network/sender_gap item. This is still under investigation as of this writing, and the default value is to wait 0 seconds between emissions (no gap applied).
(this configuration item is experimental and may change or disappear)
It is possible to specify that messages below a certain size will be sent as soon as the call to MPI_Send is issued, without waiting for the corresponding receive. This threshold can be configured through the smpi/async_small_thres item. The default value is 0. This behavior can also be manually set for MSG mailboxes, by setting the receiving mode of the mailbox with a call to MSG_mailbox_set_async. For MSG, all messages sent to this mailbox will have this behavior, so consider using two mailboxes if needed.
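As a minimal MSG sketch, assuming a mailbox named "my_mailbox" (a hypothetical name chosen for illustration):
MSG_mailbox_set_async("my_mailbox");  /* messages sent to this mailbox no longer wait for the matching receive */
From then on, every task sent to "my_mailbox" (e.g. through MSG_task_send()) is detached from its corresponding receive.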
When using the packet-level pseudo-models, several specific configuration flags are provided to configure the associated tools. These SimGrid flags by far do not cover every aspect of the associated tools, since we only added the items that we needed ourselves. Feel free to request more items (or even better: provide patches adding more items).
When using NS3, the only existing item is ns3/TcpModel, corresponding to the ns3::TcpL4Protocol::SocketType configuration item in NS3. The only valid values (enforced on the SimGrid side) are 'NewReno', 'Reno' or 'Tahoe'.
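For example:
my_simulator --cfg=ns3/TcpModel:NewReno (other arguments)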
When using GTNetS, two items exist: gtnets/jitter and gtnets/jitter_seed.
To enable the experimental SimGrid model-checking support, the program should be executed with the command line argument
--cfg=model-check:1
Safety properties are expressed as assertions using the function
void MC_assert(int prop);
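For instance, a safety property stating that some counter never exceeds its bound (counter and MAX_COUNT being hypothetical names from your own code) could be written as:
MC_assert(counter <= MAX_COUNT);  /* explored executions violating this assertion are reported */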
If you want to specify liveness properties (beware, that's experimental), you have to pass them on the command line, specifying the name of the file containing the property, as formatted by the ltl2ba program.
--cfg=model-check/property:<filename>
Of course, specifying a liveness property enables the model-checking, so that you don't have to give --cfg=model-check:1 in addition.
By default, the system is backtracked to its initial state to explore another path, instead of backtracking to the exact step before the fork that we want to explore (this is called stateless verification). This is done this way because saving intermediate states can rapidly exhaust the available memory. If you want, you can change the value of the model-check/checkpoint variable. For example, the following configuration will ask to take a checkpoint every step. Beware, this will certainly explode your memory. Larger values are probably better; make sure to experiment a bit to find the right setting for your specific system.
--cfg=model-check/checkpoint:1
Of course, specifying this option enables the model-checking, so that you don't have to give --cfg=model-check:1 in addition.
The main issue when using the model-checking is the state space explosion. To counter that problem, several exploration reduction techniques can be used. There is unfortunately no silver bullet here, and the most efficient reduction techniques cannot be applied to every property. In particular, the DPOR method cannot be applied on liveness properties since it may break some cycles in the exploration that are important to the property validity.
--cfg=model-check/reduction:<technique>
For now, this configuration variable can take 2 values:
none: Do not apply any kind of reduction (mandatory for now for liveness properties)
dpor: Apply Dynamic Partial Ordering Reduction. Only valid if you verify local safety properties.
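For example, to verify a safety property with DPOR enabled:
--cfg=model-check/reduction:dpor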
Of course, specifying a reduction technique enables the model-checking, so that you don't have to give --cfg=model-check:1 in addition.
In SimGrid, the user code is virtualized in a specific mechanism allowing the simulation kernel to control its execution: when a user process requires a blocking action (such as sending a message), it is interrupted, and only gets released when the simulated clock reaches the point where the blocking operation is done.
In SimGrid, the containers in which user processes are virtualized are called contexts. Several context factories are provided, and you can select the one you want to use with the contexts/factory configuration item. Some of the following may not exist on your machine because of portability issues. In any case, the default one should be the most efficient one (please report bugs if the auto-detection fails for you). They are sorted here from the slowest to the most efficient:
The only reason to change this setting is when the debugging tools get fooled by the optimized context factories. Threads are the most debugging-friendly contexts.
Each virtualized user process is executed using a specific system stack. The size of this stack has a huge impact on the simulation scalability, but its default value is rather large. This is because the error messages that you get when the stack size is too small are rather disturbing: a stack overflow silently overwrites other stacks, leading to segfaults with corrupted stack traces.
If you want to push the scalability limits of your code, you really want to reduce the contexts/stack_size item. Its default value is 128 KiB, while our Chord simulation works with stacks as small as 16 KiB, for example. For the thread factory, the default value is the one of the system; if it is too large or too small, it has to be set with this parameter.
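For example, to use 16 KiB stacks as in the Chord experiment mentioned above:
my_simulator --cfg=contexts/stack_size:16 (other arguments)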
Parallel execution of the user code is only considered stable in SimGrid v3.7 and higher. It is described in INRIA RR-7653.
If you are using the ucontext or raw context factories, you can request to execute the user code in parallel. Several threads are launched, each of them handling a share of the user contexts at each run. To activate this, set the contexts/nthreads item to the number of cores that you have in your computer (or -1 to have that number auto-detected).
Even if you asked for several worker threads using the previous option, you can request to start the parallel execution (and pay the associated synchronization costs) only if the potential parallelism is large enough. For that, set the contexts/parallel_threshold item to the minimal number of user contexts needed to start the parallel execution. In any given simulation round, if that number is not reached, the contexts will be run sequentially directly by the main thread (thus saving the synchronization costs). Note that this option is mainly useful when the grain of the user code is very fine, because our synchronization is now very efficient.
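For example, to auto-detect the number of cores and only go parallel when at least 100 contexts are runnable (100 being an arbitrary illustrative threshold):
my_simulator --cfg=contexts/nthreads:-1 --cfg=contexts/parallel_threshold:100 (other arguments)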
When parallel execution is activated, you can choose the synchronization schema used with the contexts/synchro item, whose value is either:
The tracing subsystem can be configured in several different ways depending on the nature of the simulator (MSG, SimDag, SMPI) and the kind of traces that need to be obtained. See the Tracing Configuration Options subsection to get a detailed description of each configuration option.
We detail here a simple way to get the traces working for you, even if you never used the tracing API.
--cfg=tracing:1 --cfg=tracing/uncategorized:1 --cfg=triva/uncategorized:uncat.plist
The first parameter activates the tracing subsystem, the second tells it to trace host and link utilization (without any categorization) and the third creates a graph configuration file to configure Triva when analysing the resulting trace file.
--cfg=tracing:1 --cfg=tracing/categorized:1 --cfg=triva/categorized:cat.plist
The first parameter activates the tracing subsystem, the second tells it to trace host and link categorized utilization and the third creates a graph configuration file to configure Triva when analysing the resulting trace file.
smpirun -trace ...
The -trace parameter for the smpirun script runs the simulation with --cfg=tracing:1 and --cfg=tracing/smpi:1. Check smpirun's -help parameter for additional tracing options.
Sometimes you might want to put additional information in the trace to correctly identify it later, or to provide data that can be used to reproduce an experiment. You have two ways to do that:
--cfg=tracing/comment:my_simulation_identifier
--cfg=tracing/comment_file:my_file_with_additional_information.txt
Please use these two parameters (for comments) to make reproducible simulations. For additional details about this and all tracing options, see the Tracing Configuration Options subsection.
The SMPI interface provides several specific configuration items. These are easy to miss, since the code is usually launched through the smpirun script directly.
In SMPI, the sequential code is automatically benchmarked, and these computations are automatically reported to the simulator. That is to say that if you have a large computation between an MPI_Recv() and an MPI_Send(), SMPI will automatically benchmark the duration of this code, and create an execution task within the simulator to take this into account. For that, the actual duration is measured on the host machine and then scaled to the power of the corresponding simulated machine. The variable smpi/running_power lets you specify the computational power of the host machine (in flop/s) to use when scaling the execution times. It defaults to 20000, but you really want to update it to get accurate simulation results.
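For example, if your host machine sustains about 2e9 flop/s on the benchmarked kernels (an illustrative figure; measure your own machine):
smpirun --cfg=smpi/running_power:2e9 ... (your usual smpirun arguments)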
When the code consists of numerous consecutive MPI calls, the previous mechanism feeds the simulation kernel with numerous tiny computations. The smpi/cpu_threshold item becomes handy when this badly impacts the simulation performance. It specifies a threshold (in seconds) under which the execution chunks are not reported to the simulation kernel (default value: 1e-6). Please note that in some circumstances, this optimization can hinder the simulation accuracy.
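Similarly, to skip every computation chunk shorter than 1e-5 second (again an illustrative value):
smpirun --cfg=smpi/cpu_threshold:1e-5 ... (your usual smpirun arguments)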
Most of the time, you run MPI code through SMPI to compute the time it would take to run it on a platform that you don't have. But since the code is run through the smpirun script, you don't have any control on the launcher code, making it difficult to report the simulated time when the simulation ends. If you set the smpi/display_timing item to 1, smpirun will display this information when the simulation ends.
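For example:
smpirun --cfg=smpi/display_timing:1 ... (your usual smpirun arguments)
prints a line like the following once the simulation completes: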
Simulation time: 1e3 seconds.
It is possible to specify a list of directories to search for the <include> tag in XML files by using the path configuration item. To add several directories to the path, set the configuration item several times, as in
--cfg=path:toto --cfg=path:tutu
By default, when Ctrl-C is pressed, the status of all existing simulated processes is displayed. This is very useful to debug your code, but it can prove troublesome in some cases (such as when the number of processes becomes really big). This behavior is disabled when verbose-exit is set to 0 (it is set to 1 by default).
Logging can be configured by using XBT. Go to Logging support for more details.
contexts/factory: Selecting the virtualization factory
contexts/nthreads: Running user code in parallel
contexts/parallel_threshold: Running user code in parallel
contexts/stack_size: Adapting the used stack size
contexts/synchro: Running user code in parallel
cpu/maxmin_selective_update: Optimization level of the platform models
cpu/model: Selecting the platform models
cpu/optim: Optimization level of the platform models
gtnets/jitter: Configuring packet-level pseudo-models
gtnets/jitter_seed: Configuring packet-level pseudo-models
maxmin/precision: Numerical precision of the platform models
model-check: Configuring the Model-Checking
model-check/property: Specifying a liveness property
model-check/checkpoint: Going for stateful verification
model-check/reduction: Specifying the kind of reduction
network/bandwidth_factor: Corrective simulation factors
network/coordinates: Coordinate-based network models
network/crosstraffic: Simulating cross-traffic
network/latency_factor: Corrective simulation factors
network/maxmin_selective_update: Optimization level of the platform models
network/model: Selecting the platform models
network/optim: Optimization level of the platform models
network/sender_gap: Simulating sender gap
network/TCP_gamma: Maximal TCP window size
network/weight_S: Corrective simulation factors
ns3/TcpModel: Configuring packet-level pseudo-models
surf/nthreads: Parallel threads for model updates
smpi/running_power: Automatic benchmarking of SMPI code
smpi/display_timing: Reporting simulation time
smpi/cpu_threshold: Automatic benchmarking of SMPI code
smpi/async_small_thres: Simulating asynchronous send
path: XML file inclusion path
verbose-exit: Behavior on Ctrl-C
workstation/model: Selecting the platform models