Output from the Parallel MIP Optimizer

The parallel MIP optimizer generates slightly different output in the node log (see Progress Reports: Interpreting the Node Log) than the sequential MIP optimizer does. The following paragraphs explain those differences.

At the end of optimization, ILOG CPLEX prints a summary of timing statistics specific to the parallel MIP optimizer. The following sample run shows typical timing statistics.

Problem 'fixnet6.mps.gz' read.
Read time =    0.02 sec.
CPLEX> o
Tried aggregator 1 time.
MIP Presolve modified 308 coefficients.
Aggregator did 1 substitutions.
Reduced MIP has 477 rows, 877 columns, and 1754 nonzeros.
Presolve time =    0.00 sec.
Clique table members: 2.
MIP emphasis: balance optimality and feasibility.
Root relaxation solution time =    0.00 sec.
      Nodes                               Cuts/
   Node  Left     Objective  IInf  Best Integer     Best Node    ItCnt     Gap
*     0+    0                        97863.0000     3192.0420       60   96.74%
*     0+    0                         4505.0000     3192.0420       60   29.14%
      0     0     3383.7784    16     4505.0000      Cuts: 35      114   24.89%
      0     0     3489.7887    13     4505.0000      Cuts: 17      134   22.54%
      0     0     3531.4512    16     4505.0000      Cuts: 11      149   21.61%
*     0+    0                         4501.0000     3531.4512      149   21.54%
      0     0     3552.4474    19     4501.0000       Cuts: 9      159   21.07%
      0     0     3588.7513    19     4501.0000       Cuts: 8      178   20.27%
      0     0     3597.3630    20     4501.0000       Cuts: 5      185   20.08%
      0     0     3598.6200    21     4501.0000       Cuts: 6      190   20.05%
      0     0     3601.2921    22     4501.0000       Cuts: 3      194   19.99%
*     0+    0                         4457.0000     3601.2921      194   19.20%
      0     0     3602.5189    23     4457.0000       Cuts: 3      197   19.17%
      0     0     3633.5846    20     4457.0000       Cuts: 2      204   18.47%
      0     0     3638.8299    22     4457.0000     Covers: 1      206   18.36%
      0     0     3649.9331    22     4457.0000       Cuts: 3      212   18.11%
      0     0     3652.9758    28     4457.0000       Cuts: 5      218   18.04%
      0     0     3656.7563    24     4457.0000       Cuts: 3      227   17.95%
      0     0     3659.1865    29     4457.0000       Cuts: 4      232   17.90%
      0     0     3664.7373    26     4457.0000     Covers: 1      236   17.78%
      0     0     3676.2923    33     4457.0000     Covers: 5      244   17.52%
      0     0     3676.7448    31     4457.0000   Flowcuts: 2      250   17.51%
      0     0     3676.9154    33     4457.0000   Flowcuts: 2      254   17.50%
*     0+    0                         3994.0000     3676.9154      254    7.94%
*     0+    0                         3990.0000     3676.9154      254    7.85%
*     0+    0                         3985.0000     3676.9154      254    7.73%
      0     2     3680.6918    22     3985.0000     3676.9154      254    7.73%
*    22     5      integral     0     3983.0000     3770.3216      358    5.34%
 
Root node processing (before b&c):
  Real time             =    1.39
Parallel b&c, 2 threads:
  Real time             =    0.24
  Sync time (average)   =    0.08
  Wait time (average)   =    0.03
                          -------
Total (root+branch&cut) =    1.63 sec.
 
Cover cuts applied:  13
Flow cuts applied:  40
Gomory fractional cuts applied:  6
Solution pool: 8 solutions saved
 
MIP - Integer optimal solution:  Objective =  3.9830000000e+03
 
Solution time =    1.63 sec.  Iterations = 426  Nodes = 40
 

The sample reports that the processors spent an average of 0.03 seconds of real time waiting for other processors to provide nodes to be processed. The Sync time is the average time a processor spends trying to synchronize by accessing globally shared data. Because only one processor at a time can access such data, the time spent in synchronization is critical: any other processor that tries to access this region must wait and thus sits idle. This idle time is counted separately from the Wait time.
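
For reference, a run like the sample above can be reproduced through the Callable Library. The following is a minimal sketch, not taken from the manual, that reads the same model, requests two threads, and calls the parallel MIP optimizer; the timing summary is printed when CPXmipopt returns. Error handling is abbreviated for clarity, and the sketch assumes fixnet6.mps.gz is in the working directory.

/* Minimal sketch (assumption: fixnet6.mps.gz is in the working
 * directory): run the parallel MIP optimizer with two threads so
 * that a timing summary like the one above is printed. */
#include <stdio.h>
#include <ilcplex/cplex.h>

int main(void)
{
   int       status = 0;
   CPXENVptr env    = CPXopenCPLEX(&status);
   CPXLPptr  lp     = CPXcreateprob(env, &status, "fixnet6");

   CPXreadcopyprob(env, lp, "fixnet6.mps.gz", NULL);

   /* Send the node log to the screen and use two threads. */
   CPXsetintparam(env, CPX_PARAM_SCRIND, CPX_ON);
   CPXsetintparam(env, CPX_PARAM_THREADS, 2);

   status = CPXmipopt(env, lp);  /* timing summary printed on return */

   if (status == 0) {
      double objval;
      CPXgetobjval(env, lp, &objval);
      printf("Objective = %.10e\n", objval);
   }

   CPXfreeprob(env, &lp);
   CPXcloseCPLEX(&env);
   return status;
}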

There is another difference in the way logging occurs in the parallel MIP optimizer. When this optimizer is called, it makes a number of copies of the problem. These copies are known as clones. The parallel MIP optimizer creates as many clones as there are threads available to it. When the optimizer exits, these clones and the memory they used are discarded.

If a log file is active when the clones are created, then ILOG CPLEX creates a clone log file for each clone. The clone log files are named cloneK.log, where K is the index of the clone, ranging from 0 (zero) to the number of threads minus one. Since the clones are created at each call to the parallel MIP optimizer and discarded when it exits, the clone logs are opened at each call and closed at each exit. (The clone log files are not removed when the clones themselves are discarded.)
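
To make that relationship concrete, the sketch below, again an illustration rather than code from the manual, activates a log file before calling the parallel MIP optimizer. With two threads, the call also creates clone0.log and clone1.log in the working directory; those files remain after CPXmipopt returns, even though the clones themselves are discarded.

/* Sketch: activate a log file so that clone logs are produced.
 * With CPX_PARAM_THREADS set to 2, CPXmipopt creates two clones
 * and therefore writes clone0.log and clone1.log. */
#include <ilcplex/cplex.h>

void solve_with_clone_logs(CPXENVptr env, CPXLPptr lp)
{
   CPXFILEptr logfile = CPXfopen("cplex.log", "a");

   CPXsetlogfile(env, logfile);                /* log file now active */
   CPXsetintparam(env, CPX_PARAM_THREADS, 2);  /* two clones, two clone logs */

   CPXmipopt(env, lp);  /* clone logs opened here, closed when it exits */

   CPXsetlogfile(env, NULL);
   CPXfclose(logfile);
}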

The clone logs contain information normally recorded in the ordinary log file (by default, cplex.log) but inconvenient to send through the normal log channel. The information likely to be of most interest to you is special messages, such as error messages, that result from the LP optimizers called to solve the subproblems.
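
Because the clone logs persist after the optimizer exits, you can inspect them programmatically. The helper below is purely illustrative and not part of the CPLEX API: it scans each cloneK.log for lines containing an error message after a run with nthreads threads.

/* Illustrative helper, not part of CPLEX: scan each clone log
 * for error messages after a parallel MIP run with nthreads threads. */
#include <stdio.h>
#include <string.h>

void scan_clone_logs(int nthreads)
{
   char name[64];
   char line[1024];
   int  k;

   for (k = 0; k < nthreads; k++) {
      FILE *fp;
      snprintf(name, sizeof(name), "clone%d.log", k);
      fp = fopen(name, "r");
      if (fp == NULL)
         continue;                      /* no log for this clone */
      while (fgets(line, sizeof(line), fp) != NULL) {
         if (strstr(line, "Error") != NULL)
            printf("%s: %s", name, line);
      }
      fclose(fp);
   }
}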