Optimized Heston Model Integration: Exponentially-Fitted Gauss-Laguerre Quadrature Rule.

Aim: Develop an exponentially-fitted Gauss-Laguerre quadrature rule to price European options under the Heston model which outperforms the Gauss-Lobatto, Gauss-Laguerre and other pricing method implementations in QuantLib.

Status quo: Efficient pricing routines for the Heston model

\begin{array}{rcl} d \ln S_t&=& \left(r_t - q_t - \frac{\nu_t}{2}\right)dt + \sqrt{\nu_t}\, dW^{S}_t \nonumber \\ d\nu_t&=& \kappa\left(\theta-\nu_t \right) dt + \sigma\sqrt{\nu_t}\, dW^{\nu}_t \nonumber \\ \rho\, dt &=& dW^{S}_t dW^{\nu}_t \end{array}

are based on the integration of the normalized characteristic function in the Gatheral formulation

\phi_t(z) = \exp\left\{\frac{v_0}{\sigma^2}\frac{1-e^{-Dt}}{1-Ge^{-Dt}}\left(\kappa-i\rho\sigma z-D\right) + \frac{\kappa\theta}{\sigma^2}\left(\left(\kappa-i\rho\sigma z-D\right)t-2\ln\frac{1-Ge^{-Dt}}{1-G}\right) \right\}

\begin{array}{rcl} D&=&\sqrt{\left(\kappa - i\rho\sigma z\right)^2+\left(z^2+iz\right)\sigma^2} \nonumber \\ G&=&\displaystyle\frac{\kappa -i\rho\sigma z-D}{\kappa -i\rho\sigma z + D}\end{array}

in combination with a Black-Scholes control variate to improve the numerical stability of Lewis’s formula [1][2][3]. The normalized characteristic function of the Black-Scholes model is given by

\phi^{BS}_T(z)=e^{-\frac{1}{2}\sigma_{BS}^2 T (z^2 + iz)}

and the price of a vanilla call option can then be calculated based on

\begin{array}{rcl} C(S_0, K, T)&=&BS\left(S_0, K, T, \sigma_{BS}\right) \nonumber \\ &+& \frac{\sqrt{Se^{(r-q)T}K}e^{-rT}}{\pi}\displaystyle\int\limits_{0}^{\infty}{\Re\left( \frac{\phi^{BS}_T\left(u-\frac{i}{2}\right) - \phi_T\left(u-\frac{i}{2}\right)}{u^2+\frac{1}{4}} e^{i u \left(\ln\frac{S}{K}+(r-q)T\right) } \right)  \mathrm{d}u}\end{array}

where BS\left(S_0, K, T, \sigma_{BS}\right) is the corresponding Black-Scholes price. Different choices for the volatility of the control variate are discussed in the literature. For the following examples the volatility will be defined by either

\sigma_{BS}^2 = \frac{1}{\kappa T}\left(1-e^{-\kappa T}\right)\left(v_0-\theta\right) + \theta

or

\displaystyle \sigma_{BS}^2 = -\frac{8}{T} \ln{\Re\left(\phi_T\left(-i/2\right)\right)}.

The first one matches the overall variance for \sigma=0, whereas the latter matches the values of the characteristic functions at the starting point z=-i/2 of the integration. Usually the latter choice gives better results. Looking at the integrand above, one can directly spot a weak point of the algorithm for deep in-the-money or deep out-of-the-money options: in this case the integrand becomes a highly oscillatory function due to the term

e^{i u \left(\ln\frac{S}{K}+(r-q)T\right) }.

Second, when using Gauss-Laguerre quadrature, the overlap of the weight function e^{-x} with the characteristic function becomes sub-optimal for very short maturities or small effective volatilities, as can already be seen with the Black-Scholes characteristic function.

Third, for very small \sigma the integrand becomes prone to subtractive cancellation errors.

The last issue is easy to overcome by using the second-order Taylor expansion of the normalized characteristic function for small \sigma. The corresponding Mathematica script can be seen here.

Rescaling the integrand mitigates the second issue: rewrite the integral in terms of x = \eta u with

\eta = \max\left(0.01, \min\left(10, \frac{1}{\sqrt{8(1-e^{-\kappa t})(v_0-\theta)/\kappa + \theta t}}\right)\right)

The first problem, the highly oscillating integrand, can be tackled with the exponentially fitted Gauss-Laguerre quadrature rule [4]. The numerical integration of two smooth functions f_1(x) and f_2(x) with

I = \displaystyle \int_0^\infty e^{-x} \left(f_1(x) \sin(\mu x) + f_2(x)\cos(\mu x)\right)\mathrm{d}x = \int_0^\infty f(x)\,\mathrm{d}x

is approximated by a Gauss-Laguerre quadrature rule of the form

I \approx \sum_{i=1}^N w_i(\mu) f(x_i(\mu)).

In this use case the frequency \mu is

\mu =\ln\frac{S}{K}+(r-q)T,

and the weights w_i and nodes x_i of the Gauss-Laguerre quadrature rule become functions of the frequency \mu. The algorithm outlined in [4] is too slow to be used at runtime and in its original form is only usable up to N=8. In order to use it for the Heston model, we pre-calculate the weights and nodes for a defined set of frequencies \{0 \leq \mu_j \leq 50\}. The pre-calculated value \mu_j closest to a given frequency \mu is then used to evaluate the integral. Of course it would be optimal to use the weights and nodes for \mu itself instead of \mu_j, but this is still far better than assuming \mu=0, as the conventional Gauss-Laguerre quadrature rule does.

Two techniques are key to get to larger N for the pre-calculated weights and nodes:

  • Start the algorithm with \mu=0 to get the standard Gauss-Laguerre weights/nodes and gently increase \mu afterwards. Use the weights and nodes from up to 20 previous steps to generate the starting vector for the Newton solver via Lagrange interpolating polynomials.
  • Increase the “numerical headroom” by using the Boost multi-precision package instead of double precision. This is only relevant for the pre-calculation; the resulting weights and nodes are stored as ordinary double-precision values.

The source code to generate the weights and nodes for the exponentially fitted Gauss-Laguerre quadrature rule is available here.

The reference results for the comparison with the other Heston pricing methods are generated using a Boost multi-precision version of the Gauss-Laguerre pricing algorithm of order N=2000. Comparison with the results for N=1500 and N=2500 ensures the correctness of the reference values. The exponentially fitted Gauss-Laguerre quadrature rule is always carried out using N=48 nodes, corresponding to only 48 valuations of the characteristic function. The first example model is

\displaystyle s=1.0, t=1,v_0=0.04, \kappa=2.5,\theta=0.06, \sigma=0.75,\rho=-0.1

[Figure expfit1: absolute pricing error vs. option strike for the different integration methods]

Exponential fitting outperforms the other methods, especially when taking the total number of characteristic function valuations into consideration. As expected, the method works remarkably well for deep OTM/ITM options and ensures that the absolute pricing error stays below 10^{-15} for an extremely large range of option strikes. Next round: same model but t=10, \rho=-0.75.

[Figure expfit2: absolute pricing error vs. moneyness for t=10, \rho=-0.75]

Again the pricing error for exponential fitting stays below 10^{-14} for all moneynesses between -15 and 15, well below the other methods. Next test: same model but very short maturity, t=\frac{1}{365}. The COS method with 250 calls of the characteristic function and exponential fitting with N=48 are close together and outperform all other methods.

[Figure expfit3: absolute pricing error vs. moneyness for the short maturity t=1/365]

The same graphs are shown for a variety of different Heston parameters used in the literature in the following document. The exponentially fitted Gauss-Laguerre quadrature rule with only 64 nodes solves the problem almost always with \epsilon < 10^{-13} for moneynesses between -20 and 20 and maturities between one day and ten years, and outperforms the other methods, especially when taking the number of characteristic function calls into consideration. The method can also be extended to a larger number of nodes.

The QuantLib implementation of the algorithm is part of the PR#812.

[1] A. Lewis, A Simple Option Formula for General Jump-Diffusion and Other Exponential Lévy Processes.

[2] F. Le Floc’h, Fourier Integration and Stochastic Volatility Calibration.

[3] L. Andersen and V. Piterbarg, Interest Rate Modeling, Volume I: Foundations and Vanilla Models, Atlantic Financial Press, 2010.

[4] D. Conte, L. Ixaru, B. Paternoster, G. Santomauro, Exponentially-fitted Gauss-Laguerre quadrature rule for integrals over an unbounded interval.


3 thoughts on “Optimized Heston Model Integration: Exponentially-Fitted Gauss-Laguerre Quadrature Rule”

  1. Hello Mr. Spanderen,
    first of all, a big thank you for your article! I am currently working on a fast GPU-based Heston calibration as my bachelor thesis. I first implemented a GPU version of Carr/Madan and ran into many numerical issues before discovering your article, which apparently solved almost all of my issues! Therefore I have two questions:

    1) The scaling factor eta, clamped into [0.1,10], seems kinda like black magic to me. Can you give me a reference where this scaling factor and its boundaries are coming from?

    2) I could use the scaling factor such that it matches exactly one (arbitrary) frequency, e.g.
    const auto& target_freq = flt_coefficients[target_index][0];
    scalingFactor = (target_freq/freq); //ignoring/overwriting eta

    Tests show that for some target_index it does not change the final option value, or only slightly, but for some it breaks down. I assume that’s expected, and it would be really great if someone could give me a hint here.

    Kind regards from Germany (Aachen),
    MK

    • Scaling to extreme values does not play nicely with the limited order of the Laguerre integration, as one might miss other important features of the characteristic function. The interval was the optimum for a very large set of benchmark Heston parameter sets I’m using for testing. Unfortunately I have no further references.
