LSMC: Need for fast and accurate Generalized Least Squares

Update 21.02.2016: Added values for QR decomposition with pivoting and QuantLib performance improvements.

Least Squares Monte Carlo (LSMC) simulations spend a significant share of the total computation time on generalized least squares regressions, especially if the problem itself has a high-dimensional state. Preferred techniques for solving the least-squares problem are the QR decomposition

A = Q\cdot R

where Q is orthogonal and R is upper triangular, or the singular value decomposition (SVD)

A = U\cdot w \cdot V^T

where U is column-orthogonal, V is an orthogonal matrix and w is a positive semi-definite diagonal matrix. The Cholesky factorization of the normal equations

A^TA = L \cdot L^T

where L is lower triangular, is the fastest method but often numerically unstable. The author in [1] summarizes all methods and outlines the computational effort involved for m observations and n parameters:

  • Cholesky Factorization: costs \sim mn^2 + n^3/3 flops
  • QR decomposition: costs \sim 2mn^2 - 2n^3/3 flops
  • Singular value decomposition: costs \sim 2mn^2 + 11n^3 flops

For LSMC simulations we have m\gg n and therefore the QR decomposition has no significant computational advantage over the SVD. Since a QR decomposition without column pivoting runs into numerical problems if A is rank-deficient, the singular value decomposition is often the method of choice for LSMC simulations.

All three decomposition methods are available in QuantLib, in LAPACK (with or without the optimized OpenBLAS library) and in Intel’s MKL library. A standard swing option valuation via LSMC serves as a test bed to measure the performance of the QR decomposition with and without column pivoting and of the SVD algorithm. For LAPACK and MKL the routines dgels and dgesvd have been used to implement the QR decomposition without pivoting and the SVD, respectively, whereas QR with pivoting is based on dgeqp3, dormqr and dtrsm. In order to keep the results comparable, single-threaded performance was measured in all cases. The reference prices are calculated via finite difference methods, and all LSMC implementations led to the same price, in line with the reference price. The QR implementation in QuantLib 1.7 has a performance issue if the number of rows is much larger than the number of columns, so an improved version of QuantLib’s QR decomposition has been used for these tests.
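To make the setup concrete, here is a minimal, self-contained sketch of how the LAPACKE driver routine dgels solves an overdetermined least-squares problem via QR without pivoting (the data below are made up for illustration; the actual test bed regresses swing option continuation values):

```cpp
#include <lapacke.h>
#include <iostream>
#include <vector>

int main() {
    // overdetermined system: m observations, n parameters (m >> n in LSMC)
    const lapack_int m = 6, n = 2, nrhs = 1;

    // column-major storage, as expected by LAPACK
    std::vector<double> A = { 1, 1, 1, 1, 1, 1,     // first column: constant
                              1, 2, 3, 4, 5, 6 };   // second column: x
    std::vector<double> b = { 1.1, 1.9, 3.2, 3.9, 5.1, 5.9 };

    // dgels solves min ||Ax - b||_2 via QR without pivoting;
    // on exit the first n entries of b hold the solution x
    const lapack_int info = LAPACKE_dgels(LAPACK_COL_MAJOR, 'N', m, n, nrhs,
                                          A.data(), m, b.data(), m);

    std::cout << "info: " << info << ", intercept: " << b[0]
              << ", slope: " << b[1] << std::endl;
    return 0;
}
```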

[Figure: swing option LSMC runtimes for the QR decomposition with and without column pivoting and for the SVD, comparing QuantLib, LAPACK with OpenBLAS and Intel MKL]

As expected, MKL is often the fastest library, but the difference between MKL and LAPACK plus OpenBLAS is small.

[1] Do Q Lee, 2012, Numerically Efficient Methods for Solving Least Squares Problems

 

Probability Distribution of the Heston Model, Part I

The Heston model is defined by the following stochastic differential equation of the log spot x_t = \ln S_t

\begin{array}{rcl} dx_t &=& \left(r_t - q_t - \frac{\nu_t}{2}\right)dt + \sqrt{\nu_t}\, dW^{x}_t \\ d\nu_t &=& \kappa\left(\theta-\nu_t \right ) dt + \sigma\sqrt{\nu_t}\, dW^{\nu}_t \\ \rho dt &=& dW^{x}_t dW^{\nu}_t \end{array}

To a significant extent the popularity of the Heston model rests on the fact that semi-closed formulas for vanilla European options exist based on the characteristic function of the model. The time evolution of the probability density function p(x_t, \nu_t, t) is given by the corresponding Fokker-Planck equation [1]

\frac{\partial p}{\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}(\nu p) + \frac{\partial^2}{\partial x \partial \nu} (\rho\sigma\nu p) + \left(\frac{\nu}{2} -r_t+q_t\right)\frac{\partial}{\partial x} p + \frac{\sigma^2}{2}\frac{\partial^2}{\partial \nu^2}(\nu p) - \frac{\partial}{\partial \nu}\left(\kappa\left(\theta-\nu \right ) p\right)

with the initial condition

p(x,\nu,t=0) = \delta(x-x_0)\delta(\nu-\nu_0)

The reduced probability density function

p(x_t, t \mid x_0,\nu_0) = \int_0^\infty p(x_t, \nu_t, t) d\nu_t

for this initial value problem can be calculated using a semi-closed integral formula [2]

\begin{array}{rcl} \Gamma &=& \kappa+i\rho\sigma p_x \\ \Omega &=& \sqrt{\Gamma^2 + \sigma^2\left(p_x^2-ip_x \right )} \\ p(x_t, t \mid \nu_0) &=& \int_{-\infty}^\infty \frac{dp_x}{2\pi}\exp\left( ip_x (x_t-x_0 -(r-q)t) -\nu_0 \frac{p_x^2-ip_x}{\Gamma + \Omega \coth\left(\Omega t/2 \right )} \right) \\ && \ \ \ \ \ \ \ \ \times \exp\left(-\frac{2\theta\kappa}{\sigma^2}\ln\left(\cosh\frac{\Omega t}{2} + \frac{\Gamma}{\Omega} \sinh \frac{\Omega t}{2}\right )+\frac{\kappa\Gamma\theta t}{\sigma^2} \right) \\ &=& \int_{-\infty}^\infty \frac{dp_x}{2\pi} \tilde{p}(p_x,t \mid \nu_0) \end{array}

This gives the opportunity to write a pricing engine for arbitrary European payoffs. The value of a European option with payoff function P(S_t) \in \mathbb{L}^2 at maturity t is given by

\begin{array}{rcl} \text{npv} &=& \int_{-\infty}^\infty dx_t\int_{0}^\infty d\nu_t P(S_0 e^{x_t+(r_t-q_t)t})e^{-r_t t} p(x_t,\nu_t,t) \\ &=& e^{-r_t t}\int_{-\infty}^\infty dx_t P(S_0 e^{x_t+(r_t-q_t)t})p(x_t,t \mid x_0,\nu_0) \end{array}

The calculation needs two nested integrations, which can be carried out efficiently using e.g. the Gauss-Lobatto algorithm. The solution of the equation

| \tilde{p}(p_x, t \mid \nu_0)| = \text{QL\_EPSILON}

determines the upper boundary for the integration over p_x. The boundaries \left[ x_{min}, x_{max}\right] for the integration over x_t are chosen such that the interval covers ten times the square root of the expected integrated variance

-x_{min} = x_{max}=10\sqrt{\int_0^{t}E\left[ \nu_s \right ] ds} = 10\sqrt{\theta t + \frac{1}{\kappa}\left(\nu_0-\theta\right)\left(1-e^{-\kappa t}\right)}

Obviously the nested integration makes this algorithm trickier than the standard ways to price plain vanilla European options, but it is not limited to vanilla payoffs. The implementation of this algorithm can be found here within the QuantLib trunk on GitHub.
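For illustration, a minimal sketch (not the QuantLib engine itself) of the inner integrand: it evaluates the reduced density by trapezoidal integration of the formula above, with the truncation bound pMax and the number of abscissas supplied by the caller; a production implementation would use the adaptive Gauss-Lobatto scheme and the boundaries just described:

```cpp
#include <cmath>
#include <complex>
#include <cstddef>

// reduced density p(x_t, t | x_0, nu_0) via trapezoidal integration
double reducedDensity(double x, double x0, double t, double r, double q,
                      double kappa, double theta, double sigma, double rho,
                      double nu0, double pMax, std::size_t n) {
    const std::complex<double> i(0.0, 1.0);

    // the integrand \tilde{p}(p_x, t | nu_0) from the formula above
    auto tildeP = [&](double px) {
        const std::complex<double> gamma = kappa + i*rho*sigma*px;
        const std::complex<double> omega
            = std::sqrt(gamma*gamma + sigma*sigma*(px*px - i*px));
        const std::complex<double> coth
            = std::cosh(0.5*omega*t)/std::sinh(0.5*omega*t);
        return std::exp(i*px*(x - x0 - (r-q)*t)
                        - nu0*(px*px - i*px)/(gamma + omega*coth)
                        - 2.0*theta*kappa/(sigma*sigma)
                          *std::log(std::cosh(0.5*omega*t)
                                    + gamma/omega*std::sinh(0.5*omega*t))
                        + kappa*gamma*theta*t/(sigma*sigma));
    };

    const double h = 2.0*pMax/n;
    double sum = 0.5*(tildeP(-pMax).real() + tildeP(pMax).real());
    for (std::size_t j = 1; j < n; ++j)
        sum += tildeP(-pMax + j*h).real();

    const double pi = std::acos(-1.0);
    return sum*h/(2.0*pi);
}
```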

Broadie and Kaya [3] have outlined an algorithm to sample from the full probability density function p(x_t, \nu_t, t \mid x_0, \nu_0) instead of the reduced density function p(x_t, t \mid x_0, \nu_0). The starting point for this algorithm is the exact solution of the Heston stochastic differential equation

\begin{array}{rcl} x_t &=& x_0 + (r_t-q_t)t - \frac{1}{2}\int_0^t \nu_s ds + \rho\int_0^t \sqrt{\nu_s} dW_s^{(1)} + \sqrt{1-\rho^2}\int_0^t \sqrt{\nu_s} dW_s^{(2)} \\ \nu_t &=& \nu_0 + \kappa\theta t - \kappa \int_0^t \nu_s ds + \sigma \int_0^t\sqrt{\nu_s}dW_s^{(1)} \end{array}

The variance process \nu_t follows a scaled noncentral chi-squared distribution

\nu_t = \frac{\sigma^2\left( 1-e^{-\kappa t}\right)}{4\kappa}\chi_d^{'2}\left(\frac{4\kappa e^{-\kappa t}}{\sigma^2\left(1-e^{-\kappa t}\right)}\nu_0\right), d=\frac{4\theta\kappa}{\sigma^2}
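This variance sampling step can be sketched in a few lines; here the inversion uses the noncentral chi-squared quantile from Boost.Math, and the uniform variate u is assumed to be supplied by the caller:

```cpp
#include <boost/math/distributions/non_central_chi_squared.hpp>
#include <cmath>

// exact sample of nu_t given nu_0 by inverting the noncentral
// chi-squared distribution of the variance process
double sampleNuT(double nu0, double kappa, double theta, double sigma,
                 double t, double u /* uniform (0,1) variate */) {
    const double e = std::exp(-kappa*t);
    const double c = sigma*sigma*(1.0 - e)/(4.0*kappa);  // scale factor
    const double d = 4.0*theta*kappa/(sigma*sigma);      // degrees of freedom
    const double lambda = 4.0*kappa*e/(sigma*sigma*(1.0 - e))*nu0;

    const boost::math::non_central_chi_squared dist(d, lambda);
    return c*boost::math::quantile(dist, u);
}
```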

The distribution of the integral \Psi(t) = \int_0^t \nu_s ds conditional on \nu_t and \nu_0 can be calculated via its characteristic function

\begin{array}{rcl} \text{Pr}(\Psi(t) \le x)&=& \frac{2}{\pi}\int_0^\infty \frac{\sin ux}{u}\text{Re}(\Phi(u)) du \\ \\ \Phi(a)&=& \frac{\gamma(a)e^{-\frac{1}{2}(\gamma(a)-\kappa)t} \left(1-e^{-\kappa t}\right)} {\kappa\left(1-e^{\gamma(a)t}\right)} \exp\left( \frac{\nu_t+\nu_0}{\sigma^2} \left[ \frac{\kappa\left(1+e^{-\kappa t}\right)}{1-e^{-\kappa t}} - \frac{\gamma(a)\left(1+e^{-\gamma(a)t}\right)}{1-e^{-\gamma(a)t}} \right] \right) \\ && \times \frac{I_{0.5d-1} \left( \sqrt{\nu_0\nu_t} \frac{4\gamma (a) e^{-0.5\gamma(a)t}}{\sigma^2\left(1-e^{-\gamma(a)t}\right)}\right)}{ I_{0.5d-1} \left( \sqrt{\nu_0\nu_t} \frac{4\kappa e^{-0.5\kappa t}}{\sigma^2\left(1-e^{-\kappa t}\right)}\right)} \\ && \times \frac{\exp\left((0.5d-1) \left[-\frac{1}{2}\gamma(a)t + \ln \frac{\gamma(a)}{1-e^{-\gamma(a)t}} \right]\right)}{\left(\frac{\gamma(a) e^{-0.5\gamma(a)t} }{ 1-e^{-\gamma(a)t}}\right)^{0.5d-1}} \\ \\ \gamma(a)&=&\sqrt{\kappa^2-2 i \sigma^2 a} \end{array}
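Given an implementation of \Phi, the outer integration for the cumulative distribution function can be sketched with a simple trapezoidal rule (phi and the truncation bound uMax are assumed to be supplied; the choice of the truncation is discussed below):

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <functional>

// Pr(Psi(t) <= x) = (2/pi) Int_0^inf sin(ux)/u * Re(Phi(u)) du,
// truncated at uMax and approximated with the trapezoidal rule
double psiCdf(double x, double uMax, std::size_t n,
              const std::function<std::complex<double>(double)>& phi) {
    const double h = uMax/n;

    double sum = 0.5*x;  // u -> 0 limit of sin(ux)/u * Re(Phi(u)), as Phi(0) = 1
    for (std::size_t j = 1; j < n; ++j) {
        const double u = j*h;
        sum += std::sin(u*x)/u*phi(u).real();
    }
    sum += 0.5*std::sin(uMax*x)/uMax*phi(uMax).real();

    const double pi = std::acos(-1.0);
    return 2.0/pi*sum*h;
}
```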

The modified Bessel function of the first kind I_{0.5d-1}(z) can be evaluated using a series expansion for small and medium |z| or an asymptotic approximation for large |z| [5]. Unfortunately Boost provides only real-valued versions of the Bessel functions, and the copyright status of the older complex-valued Fortran77 routines is unclear. Therefore QuantLib comes with its own implementation.

Please note that \Phi(a) is already a continuous version of the characteristic function, and therefore the integration does not need to track the branches of \arg(z) when calculating the complex-valued Bessel function [4].

The integration over the characteristic function is best done using either Gauss-Laguerre, Gauss-Lobatto or the trapezoidal rule. The latter two algorithms need to truncate the integration at some upper bound. A first guess for the truncation limit can be taken from the Cornish-Fisher expansion for some very small \epsilon. The moments needed by this expansion can be obtained from \Phi(a) via finite difference quotients.

m_n = E(X^n) = \frac{d^n}{dy^n}\Phi(x+iy)\Big|_{x=y=0}
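A minimal sketch of these quotients for the first two moments, taking \Phi as a supplied function of a complex argument and following the formula above literally:

```cpp
#include <complex>
#include <functional>

// first and second moment via central finite-difference quotients of
// Phi along the imaginary axis; the moments are real, so the discarded
// imaginary parts can serve as a consistency check
void firstTwoMoments(
    const std::function<std::complex<double>(std::complex<double>)>& phi,
    double& m1, double& m2, double h = 1e-4) {
    const std::complex<double> i(0.0, 1.0);
    m1 = ((phi(i*h) - phi(-i*h))/(2.0*h)).real();
    m2 = ((phi(i*h) - 2.0*phi(0.0) + phi(-i*h))/(h*h)).real();
}
```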

The next term is now fairly easy to calculate

\int_0^t \sqrt{\nu_s} dW_s^{(1)} = \frac{1}{\sigma}\left( \nu_t - \nu_0 - \kappa\theta t+\kappa \int_0^t \nu_s ds \right)

The log spot process can now be sampled using a standard normal random variable Z and

\begin{array}{rcl} x_t &=& x_0 + m(t) + \sigma(t)Z \\ \sigma^2(t) &=& (1-\rho^2)\int_0^t \nu_s ds \\ m(t) &=& (r_t-q_t)t - \frac{1}{2}\int_0^t \nu_s ds + \rho\int_0^t \sqrt{\nu_s}dW_s^{(1)} \end{array}
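Putting the pieces together, a single exact draw of the log spot can be sketched as follows; nuT, psi and Z denote the sampled terminal variance, the sampled integral \int_0^t \nu_s ds and an independent standard normal variate:

```cpp
#include <cmath>

// assemble one exact Broadie-Kaya sample of x_t from its sampled parts
double sampleXt(double x0, double r, double q, double t,
                double kappa, double theta, double sigma, double rho,
                double nu0, double nuT, double psi, double Z) {
    // int_0^t sqrt(nu_s) dW_s^(1), recovered from the SDE of nu_t
    const double stochInt = (nuT - nu0 - kappa*theta*t + kappa*psi)/sigma;

    const double m = (r - q)*t - 0.5*psi + rho*stochInt;
    const double s = std::sqrt((1.0 - rho*rho)*psi);
    return x0 + m + s*Z;
}
```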

This sampling algorithm is exact even for very large time steps and therefore offers advantages for quasi-random Monte-Carlo methods, but inverting the cumulative distribution function obtained from the characteristic function integration is very slow. The algorithm is implemented within the HestonProcess class.

[1] I. Clark, Foreign Exchange Option Pricing: A Practitioner’s Guide, p. 113

[2] A. Dragulescu, V. Yakovenko, Probability distribution of returns in the Heston model with stochastic volatility

[3] M. Broadie, Ö. Kaya, Exact Simulation of Stochastic Volatility and other Affine Jump Diffusion Processes

[4] R. Lord, Efficient pricing algorithms for exotic derivatives, p. 40

[5] J.R. Culham, Bessel Functions of the First and Second Kind

Inter-Thread Communication using Lock-Free Algorithms

In his blog Martin Fowler discusses the LMAX architecture, a high-throughput retail financial trading platform. A remarkable detail of the architecture is the central business logic processor, which is implemented as a single-threaded Java program. The supporting pre- and post-processing runs as a multi-threaded application using lock-free ring buffers.

In general lock-free algorithms are implemented using atomic read-modify-write primitives, which are provided by modern CPUs. Probably the most heavily used lock-free algorithm in the Boost library is the reference counting in boost::shared_ptr. On popular hardware platforms these reference count updates are based on atomic increments/decrements using compare-and-swap (CAS) operations. The performance improvement over normal mutual exclusion (mutex) locks is significant, as the following little experiment shows. The basis for this experiment is a function which increments an integer counter 500 million times in a loop [1]. This loop is carried out in the following variants:

  • a simple single-threaded loop
  • a single-threaded loop using atomic increments
  • a single-threaded loop acquiring a mutex lock for every pass
  • two threads using atomic increments
  • two threads using mutex locks

A Java and a C++ implementation are available here. The Java code uses the package java.util.concurrent.atomic and the C++ code uses boost::detail::atomic_count to implement the atomic increments. The run times were measured on a Core i3@3GHz.
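For reference, a minimal C++11 sketch of the two-thread atomic variant, using std::atomic instead of boost::detail::atomic_count:

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    const long n = 500000000L;
    std::atomic<long> counter(0);

    // two threads, each carrying out half of the 500 million increments
    auto work = [&counter, n]() { for (long i = 0; i < n/2; ++i) ++counter; };

    const auto start = std::chrono::steady_clock::now();
    std::thread t1(work), t2(work);
    t1.join(); t2.join();
    const auto stop = std::chrono::steady_clock::now();

    std::cout << "two threads, atomic increments: "
              << std::chrono::duration<double>(stop - start).count()
              << "s, counter = " << counter << std::endl;
    return 0;
}
```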

Lock-free algorithms are difficult to implement and to debug. Tim Blechmann has written a little gem of a library, boost::lockfree (parts of the library are now in the Boost 1.53.0 release). Among others this library contains a wait-free single-producer/single-consumer ring buffer. This ring buffer is e.g. tailor-made to separate the Monte-Carlo path generation from the pricing of a derivative. With a few lines of code the path generation can then run in a different thread.

The code available here contains a BufferedDataFactory based on the boost::lockfree::ringbuffer class and a slightly modified version of the QuantLib test case HybridHestonHullWhiteProcessTest::testZeroBondPricing. In this version of the test case the path generation runs in a separate thread and uses the BufferedDataFactory to hand over the data to the pricing thread.
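A minimal sketch of the underlying producer/consumer pattern (in the released Boost versions the ring buffer class is called boost::lockfree::spsc_queue; the path generation and pricing stand in for the two threads of the test case):

```cpp
#include <boost/lockfree/spsc_queue.hpp>
#include <iostream>
#include <thread>

int main() {
    // wait-free single-producer/single-consumer ring buffer
    boost::lockfree::spsc_queue<double,
        boost::lockfree::capacity<1024> > buffer;

    const int nSamples = 100000;

    std::thread producer([&]() {         // e.g. Monte-Carlo path generation
        for (int i = 0; i < nSamples; ++i)
            while (!buffer.push(0.01*i)) ; // buffer full: spin
    });

    double sum = 0.0, x;                 // e.g. payoff aggregation/pricing
    for (int i = 0; i < nSamples; ++i) {
        while (!buffer.pop(x)) ;         // buffer empty: spin
        sum += x;
    }

    producer.join();
    std::cout << "consumed sum: " << sum << std::endl;
    return 0;
}
```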

[1] Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads.

VPP Pricing V: Least-Squares Monte-Carlo

For the sake of completeness, please find here the code for the valuation of the virtual power plant (VPP) using a least-squares Monte-Carlo algorithm. The code depends on the latest QuantLib version from the SVN trunk or the upcoming QuantLib 1.2 release. The model and power plant specifications can be found in the previous blog entries. A more general description of the problem and the algorithms can be found e.g. here [1]. Test forward curves can be taken e.g. from the Kyos example download page.

The regression polynomials are of third order in the spark spread x and the stochastic component of the gas price y.

p_a(x,y)=a_0 + a_1 x + a_2 x^2 +a_3x^3 + a_4y + a_5y^2 + a_6y^3 + a_7xy+a_8x^2y+a_9xy^2
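A sketch of how these ten monomials might be represented as callables handed to the regression (illustrative only):

```cpp
#include <functional>
#include <vector>

// the ten basis functions of p_a(x, y): third order in the spark
// spread x and in the stochastic component of the gas price y
std::vector<std::function<double(double, double)> > basisFunctions() {
    std::vector<std::function<double(double, double)> > b;
    b.push_back([](double, double)     { return 1.0;   });
    b.push_back([](double x, double)   { return x;     });
    b.push_back([](double x, double)   { return x*x;   });
    b.push_back([](double x, double)   { return x*x*x; });
    b.push_back([](double, double y)   { return y;     });
    b.push_back([](double, double y)   { return y*y;   });
    b.push_back([](double, double y)   { return y*y*y; });
    b.push_back([](double x, double y) { return x*y;   });
    b.push_back([](double x, double y) { return x*x*y; });
    b.push_back([](double x, double y) { return x*y*y; });
    return b;
}
```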

The regression is carried out for every exercise right (every hour) and every possible VPP state separately. The calibration phase is based on ordinary Monte-Carlo scenarios, whereas the pricing is done using Quasi Monte-Carlo scenarios (Sobol sequence) and a Brownian Bridge (BB).

The following table summarizes the performance of the different pricing algorithms for the example contract with a maturity of six months. The target accuracy is around 1% relative error in the NPV. The timings are given for a Core i5@3GHz CPU using four threads or a GTX560@0.8/1.6GHz GPU with 336 cores.

\footnotesize{\begin{tabular}{|c|c|c|c|c|c|}
\hline
Algorithm & Optimisation & Approximation & Hardware & Runtime & Comment \\ \hline \hline
Quasi-MC + BB & dyn. prog. & perfect foresight & GPU & 0.19s & single precision \\
Quasi-MC + BB & dyn. prog. & perfect foresight & CPU & 20.3s & \\
Monte-Carlo & dyn. prog. & perfect foresight & CPU & 286.3s & \\
Finite Difference & dyn. prog. & no & CPU & 487.7s & \\
Least-Squares MC & dyn. prog. & no & CPU & 645.6s & \\
Quasi-MC + BB & linear prog. & perfect foresight & CPU & 4198s & using GLPK \\ \hline
\end{tabular}}

I don’t know the reason for the bad performance of the GNU Linear Programming Kit for this kind of problem. Some commercial linear optimizers are much faster, but they cannot compete with dynamic programming for a simple VPP. As soon as e.g. time-integral constraints are involved, linear programming might become the method of choice.

[1] H. van Dijken, The value of starting up the power plant.

Pricing of a Virtual Power Plant on a GPU

Even the pricing of a simple virtual power plant (VPP) is challenging. The main reasons are the high number of possible states of the VPP and the large number of possible exercise dates, because a VPP is often priced as a Bermudan-style option with hourly exercise rights. The implementation effort for an exact pricing engine based on finite difference methods (see e.g. [1]) or on least-squares Monte-Carlo is comparably large. As shown in [1], Monte-Carlo combined with perfect foresight optimization can result in a very good approximation. The algorithm consists of a Monte-Carlo path generator and a dynamic programming optimization part, which calculates the optimal load schedule plan for each path separately. The stochastic processes involved are outlined in [1].
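To illustrate the dynamic programming part, here is a toy sketch of the perfect foresight optimization for a plant with only on/off states, a start-up cost and hourly spark spread payoffs (the real engine additionally tracks minimum up and down times, which is what drives the number of states N_{states}):

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// toy perfect-foresight dynamic program: for one Monte-Carlo path of
// hourly spark spreads, find the maximal cash flow of an on/off plant
// that earns the spark spread while running and pays a start-up cost
double optimalScheduleValue(const std::vector<double>& sparkSpread,
                            double startUpCost) {
    double off = 0.0;  // best value over all schedules ending "off"
    double on = -std::numeric_limits<double>::infinity();  // plant starts off

    for (std::size_t h = 0; h < sparkSpread.size(); ++h) {
        const double s = sparkSpread[h];
        const double newOn  = std::max(on, off - startUpCost) + s; // run/start
        const double newOff = std::max(off, on);                   // idle/stop
        on = newOn;
        off = newOff;
    }
    return std::max(on, off);
}
```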

The CUDA-based GPU implementation is available here. It depends on the latest QuantLib version from the SVN trunk or the upcoming QuantLib 1.2 release and on CUDA 4.0. The corresponding C++ implementation is a speed-optimized version of the test case VPPTest::testVPPPricing. This version also supports multi-threading. The following hardware was used to compare both implementations:

  • CPU: Core i5@3.0 GHz, quad-core
  • GPU: GTX560@810/1620MHz, 336 cores

As can be seen in the diagram below, the GPU outperforms the CPU roughly by a factor of 100 for single precision and by a factor of 50 if the GPU uses double precision.

The CUDA implementation consists of the following files:

gpuvpppricingengine.hpp / gpuvpppricingengine.cpp
A QuantLib pricing engine for a simple VPP based on a Monte-Carlo simulation and perfect foresight optimization via dynamic programming. The physical size of the Monte-Carlo simulation is controlled by the following parameters of the constructor:

  1. Size nSimulations: number of Monte-Carlo simulations carried out.
  2. bool antithetic: enables/disables antithetic sampling.
  3. Size blockSize: number of threads in a CUDA block.
  4. Size gridSize: number of CUDA blocks that are grouped together in a simulation kernel.

gpuvpppricingengine_kernel.hpp
gpuvpppricingengine_kernel.cu / gpuvpppricingengine_kernel.def
The CUDA implementation consists of two kernels. The first kernel is the Monte-Carlo path generator, which calculates the paths on hourly granularity and stores them in the global memory of the graphics card. The techniques used are outlined e.g. in [2], [3] and [4]. The second kernel performs the optimization of the load schedule based on dynamic programming. The memory layout of this step depends on the number of possible states of the VPP, because every possible state is stored in the shared memory of the GPU. The number of states is given by N_{states} = 2t_{up}+t_{down}. CUDA does not support efficient dynamic shared memory allocation. Therefore the sizes of all shared memory arrays must be given at compile time. To allow an optimal use of the limited shared memory capacity, different kernels with different N_{states} values are generated using X-macros and the appropriate kernel is chosen at runtime.
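The X-macro technique can be sketched as follows (kernel names, the state list and the block size are illustrative, not the actual implementation):

```cpp
typedef float real;   // single precision; see cudatype.hpp below
#define BLOCK_SIZE 64 // illustrative number of threads per block

// one X(...) entry per supported number of VPP states
#define NSTATES_LIST X(3) X(5) X(7) X(9)

// generate one kernel per compile-time shared-memory size
#define X(NSTATES)                                                         \
__global__ void optimizeKernel_##NSTATES(const real* paths, real* npvs) {  \
    __shared__ real stateValues[NSTATES][BLOCK_SIZE];                      \
    /* dynamic programming over the load schedule goes here */             \
}
NSTATES_LIST
#undef X

// pick the matching kernel instance at runtime
void runOptimization(int nStates, const real* paths, real* npvs,
                     dim3 grid, dim3 block) {
    switch (nStates) {
#define X(NSTATES)                                                         \
        case NSTATES:                                                      \
            optimizeKernel_##NSTATES<<<grid, block>>>(paths, npvs);        \
            break;
        NSTATES_LIST
#undef X
    }
}
```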

cudatype.hpp
defines basic CUDA types; in particular the typedef for the type “real” can be used to compile the code either for single or for double precision.
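A plausible sketch of this switch (the actual macro name in cudatype.hpp may differ):

```cpp
// compile-time selection of the floating point precision on the GPU
#ifdef USE_DOUBLE_PRECISION
typedef double real;
#else
typedef float real;
#endif
```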

gpurand.hpp
C++ interface for a GPU random number generator

gpucurand.hpp / gpucurand.cpp / gpucurand_kernel.hpp / gpucurand_kernel.cu
implementation of the GPURand interface based on the CURAND library, which is part of CUDA 4.0.

[1] this blog, VPP Pricing III: Exact Pricing based on Finite Difference Methods.

[2] L. Howes, D. Thomas, Efficient Random Number Generation and Application Using CUDA.

[3] A. Bernemann, R. Schreyer, K. Spanderen, Accelerating Exotic Option Pricing and Model Calibration Using GPUs

[4] M. Joshi, Graphical Asian Options