Function: benchmark()

benchmark(problemGenerator: ProblemGenerator, inputs: any[], params: BenchmarkParameters): Promise<BenchmarkResult[]>

Benchmark a given model or a set of models.

Parameters

  • problemGenerator: ProblemGenerator
    A function that takes an input and returns a ProblemDefinition or a Model. The function will be called for each input in the array inputs.
  • inputs: any[]
    An array of inputs to the problem generator.
  • params: BenchmarkParameters
    Benchmark parameters to use (they include the Parameters of the solver).

Returns

Promise<BenchmarkResult[]>

An array of results, one for each run.

Remarks

This function is used to run benchmarks of the solver. It can be used to solve a single model or multiple models. The models can be generated from data files, or they can be generated on the fly. The function can also export the models into JSON, JavaScript or text formats.

The function calls the given problemGenerator for each input in the array inputs. The input can be anything; it is up to the problem generator to interpret it (it could be, e.g., the name of a data file). The problem generator should return either a Model or a ProblemDefinition (a model together with parameters and a warm start). The function then solves each generated model and returns an array of results.
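As an illustration, a problem generator could look like the following sketch, where the input is interpreted as the name of a data file. The field names of ProblemDefinition used here (model, parameters) are assumptions; consult the ProblemDefinition documentation for the authoritative shape.

import * as CP from '@scheduleopt/optalcp';

// Hedged sketch of a problem generator. The input is a data file name;
// the returned ProblemDefinition carries the model and per-problem parameters.
function generateProblem(filename: string): CP.ProblemDefinition {
  let model = new CP.Model();
  model.setName(filename);   // the model name is used e.g. for log file names
  // ... read `filename` and create variables and constraints ...
  return { model, parameters: { timeLimit: 30 } };  // field names are assumptions
}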

Using BenchmarkParameters it is possible to specify additional parameters of the benchmark. For example:

  • Each problem can be solved multiple times with different random seeds (using parameter BenchmarkParameters.nbSeeds). This is useful for obtaining more reliable statistics about the problem's performance.
  • Multiple models can be solved in parallel to speed up the computation (using parameter BenchmarkParameters.nbParallelRuns). In this case it is useful to limit the number of threads for each solve using parameter Parameters.nbWorkers.
  • The function can also output the results in CSV or JSON formats or export the models into JSON, JavaScript or text formats.

See BenchmarkParameters for more details.
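For illustration, a parameter object combining several of these options could look like the following sketch (all field names used here appear in the text above; treat the specific values as examples only):

import * as CP from '@scheduleopt/optalcp';

// A hedged sketch of BenchmarkParameters: solve each model 5 times with
// different random seeds, run 4 solves in parallel, 2 workers (threads) each,
// and limit every solve to 60 seconds.
let params: CP.BenchmarkParameters = {
  nbSeeds: 5,
  nbParallelRuns: 4,
  nbWorkers: 2,    // inherited from the solver's Parameters
  timeLimit: 60    // also a solver Parameter
};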

If multiple models are solved (or one model is solved with multiple seeds), this function suppresses the normal solver output and instead prints a table with statistics of the runs on the standard output.

If the array inputs is empty then the function writes an error message to the standard output and terminates the program with exit code 1.

In case of an error during the solve, the function does not throw an exception; instead it returns an ErrorBenchmarkResult for the given run.
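For example, a caller could detect failed runs by scanning the returned array, along these lines (this relies on the generateProblem and params sketches above; treating the presence of an error field as the marker of an ErrorBenchmarkResult is an assumption):

// Hedged sketch: run the benchmark and report failed runs.
let inputs = ['data/problem1.txt', 'data/problem2.txt'];
let results = await CP.benchmark(generateProblem, inputs, params);
for (let result of results) {
  if ('error' in result)
    console.error("Run failed: " + result.error);
}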

As BenchmarkParameters are an extension of Parameters, the parameter params can be used to overwrite parameters for all solves. If problemGenerator returns a ProblemDefinition, then the parameters from the problem definition are used as a base and the parameters from params are used to overwrite them (using combineParameters).
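For illustration, assuming combineParameters takes the base parameters first and lets the second argument win (as the paragraph above suggests), the combination behaves roughly as follows; the specific values are made up:

// Hedged sketch of how parameters are combined for a single run:
let fromProblem: CP.Parameters = { timeLimit: 30, nbWorkers: 4 };  // from the ProblemDefinition
let fromBenchmark: CP.BenchmarkParameters = { timeLimit: 60 };     // the params argument
let combined = CP.combineParameters(fromProblem, fromBenchmark);
// Assumed result: timeLimit 60 (overwritten by params), nbWorkers 4 (kept from the base).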

Example

Let's suppose that we have a function createModel that takes a filename as a parameter and returns a Model. For example, the function could model a jobshop problem and read the data from a file.

We are going to create a command-line application around createModel that can solve multiple models, use multiple random seeds, run benchmarks in parallel, store results in files, etc.

import * as CP from '@scheduleopt/optalcp';

function createModel(filename: string): CP.Model {
...
}

// What to print when --help is specified (assuming that the program name is mybenchmark.js):
let usage = "Usage: node mybenchmark.js [options] <datafile1> [<datafile2> ...]";

// Unless specified differently on the command line, time limit will be 60s:
let params: CP.BenchmarkParameters = { timeLimit: 60 };
// Parse command line arguments, unrecognized arguments are assumed to be file names:
let filenames = CP.parseSomeBenchmarkParameters(params, usage);

// And run the benchmark. The benchmark will call createModel for each file in filenames.
CP.benchmark(createModel, filenames, params);

The resulting program can be used for example as follows:

node mybenchmark.js --nbParallelRuns 2 --nbWorkers 2 --worker0.noOverlapPropagationLevel 4 \
--output results.json --summary summary.csv --log 'logs/{name}.txt' \
data/*.txt

In this case the program will solve all benchmarks from the directory data, running two solves in parallel, each with two workers (threads). The first worker will use propagation level 4 for Model.noOverlap constraints. The results will be stored in the JSON file results.json (as an array of BenchmarkResult objects), a summary will be stored in the CSV file summary.csv, and the log files for the individual runs will be stored in the directory logs (one file for each run, named after the model; see Model.setName).
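Afterwards, results.json can be post-processed with a short script, for example (only the fact that the file contains an array of BenchmarkResult objects comes from above; treating an error field as the marker of a failed run is an assumption):

import * as fs from 'node:fs';
import * as CP from '@scheduleopt/optalcp';

// Hedged sketch: load the stored results and count failed runs.
let results: CP.BenchmarkResult[] = JSON.parse(fs.readFileSync('results.json', 'utf8'));
let failed = results.filter(r => 'error' in r).length;
console.log(`${results.length} runs, ${failed} failed`);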