Parallel Model Calibration using MPI and Boost.MPI

The message-passing standard MPI is a language-independent communication protocol. MPI supports the parallelization of numerical algorithms on both massively parallel computers and symmetric multiprocessor systems. It is standardized, highly portable and the de facto standard on massively parallel supercomputers. Even though MPI can be used in a multi-threaded environment, it is normally used in a multi-process setting, which makes it tailor-made for parallelizing algorithms based on the non-thread-safe QuantLib.

The roots of the MPI specification go back to the early 1990s, and you will feel its age if you use the C API, which is designed to achieve maximum performance. The Boost.MPI library – quoting from the web page – “is a C++ friendly interface to the standard Message Passing Interface… Boost.MPI can build MPI data types for user-defined types using the Boost.Serialization library”.
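As a minimal sketch of what this means in practice (the CalibrationResult struct and the send/receive pattern below are purely illustrative, not part of any library), a user-defined type becomes transferable over MPI simply by providing a Boost.Serialization serialize method:

#include <boost/mpi.hpp>
#include <boost/serialization/string.hpp>
#include <iostream>
#include <string>

namespace mpi = boost::mpi;

// Illustrative user-defined type, made MPI-transferable via Boost.Serialization.
struct CalibrationResult {
    std::string helperName;
    double modelValue;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & helperName;
        ar & modelValue;
    }
};

int main(int argc, char* argv[]) {
    mpi::environment env(argc, argv);   // initializes and finalizes MPI
    mpi::communicator world;            // wraps MPI_COMM_WORLD

    if (world.rank() != 0) {
        // worker ranks send one result back to the master rank
        CalibrationResult r;
        r.helperName = "helper_" + std::to_string(world.rank());
        r.modelValue = 0.1 * world.rank();
        world.send(0, /*tag=*/0, r);
    } else {
        // the master rank collects one result per worker
        for (int source = 1; source < world.size(); ++source) {
            CalibrationResult r;
            world.recv(source, /*tag=*/0, r);
            std::cout << r.helperName << ": " << r.modelValue << std::endl;
        }
    }
    return 0;
}

Assuming a standard Open MPI or MPICH installation, this can be built with something like mpic++ example.cpp -lboost_mpi -lboost_serialization and launched via mpirun -np 8 ./a.out.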

Model calibration can be a very time-consuming task, e.g. the calibration of a Heston or a Heston-Hull-White model using American puts with discrete dividends. The class MPICalibrationHelper acts as an MPI wrapper for a given CalibrationHelper and makes it possible to parallelize an existing model calibration routine (hopefully with minimal impact/effort). The source code is available here. It contains the MPICalibrationHelper class and, as an example, a parallel version of the DAXCalibration test case (part of test-suite/hestonmodel.cpp). The code depends on QuantLib 1.0 or higher, Boost.Thread and Boost.MPI.
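The exact interface of MPICalibrationHelper is best taken from the source code itself. To illustrate the underlying idea only, the following sketch (the function parallelModelValues and the round-robin assignment are assumptions for illustration, not the class's actual API) distributes the expensive modelValue() evaluations of a set of CalibrationHelper objects across the MPI ranks and recombines the results on every process:

#include <ql/quantlib.hpp>
#include <boost/mpi.hpp>
#include <functional>
#include <vector>

using namespace QuantLib;
namespace mpi = boost::mpi;

// Conceptual sketch: price each calibration helper on exactly one MPI rank
// (round-robin) and recombine the model values on all ranks, so that an
// optimizer running identically on every process sees the complete set.
std::vector<Real> parallelModelValues(
    const std::vector<boost::shared_ptr<CalibrationHelper> >& helpers,
    const mpi::communicator& world) {

    std::vector<Real> local(helpers.size(), 0.0);
    for (Size i = world.rank(); i < helpers.size(); i += world.size())
        local[i] = helpers[i]->modelValue();   // the expensive pricing step

    // Every rank filled a disjoint subset of entries, hence an element-wise
    // sum over all ranks reproduces the full vector on every process.
    std::vector<Real> all(helpers.size(), 0.0);
    mpi::all_reduce(world, &local[0], static_cast<int>(local.size()),
                    &all[0], std::plus<Real>());
    return all;
}

The point of such a scheme is that the optimizer itself stays serial and unchanged; only the pricing of the calibration instruments is farmed out to the available processes.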

The diagram above shows the speed-up of a Heston-Hull-White calibration with discrete dividends on an eight-core machine using a finite difference pricing engine. The main reason for the sub-linear scaling is the limited memory bandwidth between the CPUs and main memory, not the MPI communication overhead.
