Finite-Difference Solver for the SABR Model

Despite being based on a fairly simple stochastic differential equation

\begin{array}{rcl} \displaystyle dF_t &=& \alpha_t F_t^\beta dW_t \\ d\alpha_t &=& \nu \alpha_t dZ_t \\ \rho dt &=& \langle dW_t, dZ_t\rangle \end{array}

the corresponding partial differential equation

\displaystyle \frac{\partial u}{\partial t} - \frac{\nu^2}{2}\frac{\partial u}{\partial x} + \frac{1}{2}e^{2x}F^{2\beta}\frac{\partial^2 u}{\partial F^2}+\frac{\nu^2}{2}\frac{\partial^2 u}{\partial x^2}+\rho\nu e^{x}F^\beta\frac{\partial^2 u}{\partial F \partial x} -ru = 0

for the SABR model – derived using the variable transformation x_t = \ln \alpha_t together with Ito’s lemma and the Feynman-Kac formula – is quite difficult to solve numerically. Part of the problem is the process of the underlying, which corresponds to a constant elasticity of variance (CEV) model if \nu is zero. This model exhibits a variety of different behaviours depending on the value of \beta and on the boundary conditions. The authors in [1] give a comprehensive overview of this topic. To narrow down the model zoo let’s define

\displaystyle X\left(F_t\right) = X_t=\frac{F_t^{2(1-\beta)}}{\alpha^2(1-\beta)^2} \ \wedge \  \delta=\frac{1-2\beta}{1-\beta}

and assume absorbing boundary conditions at X=0 if \delta < 2. The first step in implementing a finite difference scheme is to find efficient limits for the discretization grid. These limits can e.g. be derived from the cumulative distribution function of the underlying process.

\textrm{Pr}(F \le F_t | F_0) = \begin{cases} 1-{\chi '}^{2}\left( \frac{X_0}{t}; 2-\delta, \frac{X_t}{t}\right), & \textrm{if} \ \delta < 2 \\ 1 - {\chi '}^{2}\left( \frac{X_t}{t}; \delta, \frac{X_0}{t}\right), & \textrm{if}  \ \delta > 2 \end{cases}.

Please note that in the first case \delta < 2 the underlying X(F_t) appears in the non-centrality parameter of the non-central chi-squared distribution {\chi '}^{2}, hence the inverse can only be calculated by a numerical root-finding algorithm. Sankaran’s approximation of the non-central chi-squared distribution {\chi '}^{2} can be used to speed up this method [2]. In the second case \delta > 2 the inverse of the equation \displaystyle \textrm{Pr}(F \le F_t | F_0) = q is

F_t = \left[ t \left({\chi'}^2\right)^{-1} \left(1-q;\ \delta, \frac{X_0}{t}\right) \alpha^2(1-\beta)^2\right]^{\frac{1}{2(1-\beta)}}
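
As a cross-check for an implementation, both branches of the quantile can also be evaluated with SciPy’s non-central chi-squared distribution. The sketch below is illustrative only – the helper name cev_quantile and the bracketing interval for the root search are assumptions, not the QuantLib code:

import numpy as np
from scipy.stats import ncx2
from scipy.optimize import brentq

def cev_quantile(q, f0, alpha, beta, t):
    # F_t with Pr(F <= F_t | F_0 = f0) = q for the CEV part of the SABR model
    x = lambda f: f**(2.0*(1.0 - beta)) / (alpha**2 * (1.0 - beta)**2)
    delta = (1.0 - 2.0*beta) / (1.0 - beta)
    x0 = x(f0)
    if delta < 2.0:
        # F_t enters the non-centrality parameter -> numerical root search
        cdf = lambda f: 1.0 - ncx2.cdf(x0/t, 2.0 - delta, x(f)/t)
        return brentq(lambda f: cdf(f) - q, 1e-8*f0, 50.0*f0)
    else:
        # direct inversion via the quantile function, as in the formula above
        xt = t * ncx2.ppf(1.0 - q, delta, x0/t)
        return (xt * alpha**2 * (1.0 - beta)**2)**(1.0/(2.0*(1.0 - beta)))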

This inverse can be computed using the boost library. In addition, adaptive grid refinement around important points and cell averaging around special points of the payoff at maturity [3] improve the accuracy of the CEV finite difference scheme. F_T is only a local martingale when \delta > 2 or, equivalently, \beta > 1. In this case the put-call parity reads [1]

P_{\delta > 2} = C_{\delta>2} + K - F_0\Gamma\left(-\nu; \frac{X_0}{2t}\right)

and acts as a litmus test for the implementation of the boundary conditions and the finite difference scheme.

The variance direction x_t = \ln \alpha_t corresponds to a Brownian motion with drift, hence the discretization is straightforward. Operator splitting schemes like Craig-Sneyd or Hundsdorfer-Verwer [4] are tailor-made to solve the resulting two-dimensional partial differential equation.
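
For reference, the Hundsdorfer-Verwer scheme as stated in [4] reads – up to notation – as follows, with F = F_0 + F_1 + F_2, where F_0 contains the mixed derivative term, F_1 and F_2 the one-dimensional operators in F and x, and \theta being the scheme’s relaxation parameter:

\begin{array}{rcl} Y_0 &=& U_{n-1} + \Delta t\, F(t_{n-1}, U_{n-1}) \\ Y_j &=& Y_{j-1} + \theta \Delta t \left( F_j(t_n, Y_j) - F_j(t_{n-1}, U_{n-1}) \right), \quad j=1,2 \\ \tilde{Y}_0 &=& Y_0 + \frac{1}{2}\Delta t \left( F(t_n, Y_2) - F(t_{n-1}, U_{n-1}) \right) \\ \tilde{Y}_j &=& \tilde{Y}_{j-1} + \theta \Delta t \left( F_j(t_n, \tilde{Y}_j) - F_j(t_n, Y_2) \right), \quad j=1,2 \\ U_n &=& \tilde{Y}_2 \end{array}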

The finite difference solver can now be used to check different approximations against the correct model behaviour and also to compare it with highly accurate Monte Carlo simulations [2]. The model configuration [5]

F_0=1.0, \alpha=0.35, \beta=0.25, \nu=1.0, \rho=0.25, T=2.0

should act as a test bed. As can be seen in the diagram below, the Monte Carlo results are in line with the finite difference results within very small error bars. The standard Hagan et al. formula as well as the Le Floc’h-Kennedy [6] log-normal formula deviate significantly from the correct implied volatilities.

[Diagram: implied volatilities – finite difference, Monte Carlo, Hagan et al. and Le Floc’h-Kennedy]

The approximations are not arbitrage-free: the probability densities turn negative for small strikes. Hagan’s formula already exhibits this behaviour at larger strikes than the Le Floc’h-Kennedy formula, as can be seen in the following diagram. As expected, the finite difference solution does not produce negative probabilities.

[Diagram: probability densities for small strikes]

A note on the Le Floc’h-Kennedy approximation: the formula becomes numerically unstable around ATM strike levels, hence a second-order Taylor expansion is used for moneyness

m=\frac{F}{K}-1.0 \in [-0.0025, 0.0025].

The following diagram shows the difference between the second-order Taylor expansion and the formula evaluated using IEEE-754 double precision.

[Diagram: Taylor expansion vs. direct evaluation around ATM]

The source code is part of the PR #589 and available here.

[1] D.R. Brecher, A.E. Lindsay: Results on the CEV Process, Past and Present.
[2] B. Chen, C.W. Oosterlee, H. Weide, Efficient unbiased simulation scheme for the SABR stochastic volatility model.
[3] K. in’t Hout: Numerical Partial Differential Equations in Finance explained.
[4] K. in’t Hout, S. Foulon: ADI Finite Difference Schemes for Option Pricing in the Heston Model with Correlation.
[5] P. Hagan, D. Kumar, A. Lesniewski, D. Woodward: Arbitrage-Free SABR.
[6] F. Le Floc’h, G. Kennedy: Explicit SABR Calibration through Simple Expansions.

 


Compile QuantLib on an Android Mobile

Termux is an Android terminal emulator and Linux environment app which can be installed directly without rooting the device. It comes with a minimal Linux base system, and additional packages can be installed afterwards. The following script compiles and runs QuantLib in a Termux session:

# install build tools and dependencies
pkg install clang autoconf automake libtool boost-dev nano git
# fetch the QuantLib sources
git clone https://github.com/lballabio/quantlib.git
cd quantlib
# build QuantLib and run the test suite
./autogen.sh
./configure --disable-static CXXFLAGS='-O2' CXX='clang++'
make
./test-suite/quantlib-test-suite

Arbitrary Number of Stencil Points

The standard finite difference implementations of derivative pricing algorithms based on partial differential equations have a spatial order of convergence of two. The reason is that these implementations use a three-point central stencil for the first and second order derivatives. The three-point stencil leads to a tridiagonal matrix, and such linear systems can be solved efficiently with the Thomas algorithm.

A higher order of spatial accuracy relies on stencils with more points. The corresponding linear systems can no longer be solved by the Thomas algorithm, but they can be solved by the BiCGStab iterative solver with the classical three-point stencil as a very efficient preconditioner. The algorithm for calculating the coefficients of larger stencils on arbitrary grids is outlined in [1]; a small sketch is given below.
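
A compact transcription of Fornberg’s algorithm [1] for the stencil weights on an arbitrarily spaced grid might look as follows (illustrative Python, not the QuantLib implementation; the helper name fd_weights is made up):

import numpy as np

def fd_weights(x0, x, m):
    # weights c[j, k] such that d^k f/dz^k (x0) ~ sum_j c[j, k]*f(x[j]),
    # following Fornberg (1988) for an arbitrarily spaced grid x
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = np.zeros((n, m + 1))
    c1, c4 = 1.0, x[0] - x0
    c[0, 0] = 1.0
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - x0
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for k in range(mn, 0, -1):
                    c[i, k] = c1*(k*c[i-1, k-1] - c5*c[i-1, k])/c2
                c[i, 0] = -c1*c5*c[i-1, 0]/c2
            for k in range(mn, 0, -1):
                c[j, k] = (c4*c[j, k] - k*c[j, k-1])/c3
            c[j, 0] = c4*c[j, 0]/c3
        c1 = c2
    return c

# five point stencil on an equidistant grid: the second derivative weights
# should reproduce (-1, 16, -30, 16, -1)/12
print(fd_weights(0.0, [-2.0, -1.0, 0.0, 1.0, 2.0], 2)[:, 2])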

Before using higher order stencils one should first make sure that the two standard error reduction techniques are in place, namely adaptive grid refinement around important points [2] and cell averaging around special points of the payoff at maturity – or, equally effective for vanilla options, placing the strike in the middle between two grid points. Let’s focus on the Black-Scholes-Merton PDE

\displaystyle \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2\frac{\partial^2 V}{\partial x^2} + \left(r-q-\frac{\sigma^2}{2}\right)\frac{\partial V}{\partial x} -rV = 0 \ \wedge \ x= \ln S

after transforming it to the heat equation

\displaystyle \frac{\partial u}{\partial \tau}=\frac{1}{2}\sigma^2\frac{\partial^2 u}{\partial y^2} \ \wedge \ y = x + \left(r-q-\frac{1}{2}\sigma^2\right)\tau \ \wedge \ t=T-\tau \ \wedge \ u=e^{r\tau}V

to demonstrate the convergence improvements. The spatial pricing error is defined as the RMSE for a set of benchmark options. In this example the benchmark portfolio consists of OTM call or put options with strikes

k\in \{50, 75, 90, 100, 110, 125, 150, 200\}.

The other parameters in this example are

S_0=100, r=5\%, q=2.5\%, \sigma=20\%, T=1.
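
As a concrete illustration of the error measure, a minimal sketch might look as follows – bs_price is the usual Black-Scholes formula and fd_price stands in for whichever PDE pricer is being tested (a hypothetical callable, not part of QuantLib):

import numpy as np
from scipy.stats import norm

def bs_price(s0, k, t, r, q, vol, call):
    fwd, sd = s0*np.exp((r - q)*t), vol*np.sqrt(t)
    d1 = (np.log(fwd/k) + 0.5*sd*sd)/sd
    phi = 1.0 if call else -1.0
    return np.exp(-r*t)*phi*(fwd*norm.cdf(phi*d1) - k*norm.cdf(phi*(d1 - sd)))

def spatial_rmse(fd_price, s0=100.0, r=0.05, q=0.025, vol=0.2, t=1.0):
    strikes = [50, 75, 90, 100, 110, 125, 150, 200]
    errs = []
    for k in strikes:
        call = k >= s0                      # OTM: calls above spot, puts below
        ref = bs_price(s0, k, t, r, q, vol, call)
        errs.append(fd_price(k, call) - ref)
    return np.sqrt(np.mean(np.square(errs)))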

The Crank-Nicolson scheme with “more than enough” time steps was used to integrate the PDE in the time direction such that only the spatial error remains.

[Diagram: spatial order of convergence]

As can be seen in the diagram above, the spatial order of convergence increases from second order to fourth order when moving from the standard three-point to the five-point stencil. On the other hand, solving the resulting linear systems takes longer. All in all, the five-point stencil is only faster than the standard three-point discretization if the target RMSE is below 10^{-5}.

The diagram below shows the effective combination of a five-point stencil and Richardson extrapolation.

[Diagram: five-point stencil combined with Richardson extrapolation]

Diagrams are based on the test cases testHigerOrderBSOptionPricing and testHigerOrderAndRichardsonExtrapolationg in the PR #483.

 

[1] B. Fornberg (1988), Generation of Finite Difference Formulas on Arbitrarily Spaced Grids.
[2] D. Tavella, C. Randall (2000), Pricing Financial Instruments: The Finite Difference Method, Wiley Series in Financial Engineering, John Wiley & Sons, New York.

Andreasen-Huge Volatility Interpolation

A few years ago Andreasen and Huge introduced an efficient and arbitrage-free volatility interpolation method [1] based on a one-step finite difference implicit Euler scheme applied to a local volatility parametrization. Probably the most notable use case is the generation of a local volatility surface from a set of option quotes.

The starting point is Dupire’s forward equation for the prices C_t(T, K) of European call options at time t with strike K and maturity T:

\displaystyle \frac{\partial C_t(T, K)}{\partial T} = \frac{1}{2}\sigma_{LV}^2(T,K)K^2\frac{\partial^2 C_t(T, K)}{\partial K^2} - \left(r(T) -q(T)\right) K\frac{\partial C_t(T, K)}{\partial K}-q(T)C_t(T, K)

Define the normalized call price \hat{C}_t(T, \kappa) in terms of the discount factor P(t,T), the forward price F(t,T) and the moneyness \kappa as

\begin{array}{rcl} \displaystyle P(t,T) &=& \displaystyle e^{-\int_t^T r(\tau)d\tau} \\ \displaystyle F(t,T) &=& S_t \displaystyle e^{\int_t^T \left( r(\tau)-q(\tau)\right)d\tau} \\ \displaystyle \kappa &=& \displaystyle \frac{K}{F(t,T)} \\ \displaystyle \hat{C}_t(T, \kappa) &=& \displaystyle \frac{C_t \left( T, \kappa F(t,T) \right) }{P(t,T)F(t,T)} \end{array}.

The Dupire forward equation for the normalized prices \hat{C}_t(T,\kappa) is then given by

\displaystyle \frac{\partial \hat{C}_t(T, \kappa)}{\partial T} = \frac{1}{2} \sigma_{LV}^2(T, \kappa F(t,T))\,\kappa^2 \frac{\partial^2 \hat{C}_t(T, \kappa)}{\partial \kappa^2} .

Rewriting this equation in terms of u=\ln \kappa and \tilde{C}_t(T, u) = \hat{C}_t(T, \kappa) yields

\displaystyle  \frac{\partial \tilde{C}_t(T,u)}{\partial T} = \frac{1}{2} \sigma_{LV}^2 \left( T, e^u F(t,T)\right) \left( \frac{\partial^2 \tilde{C}_t(T, u)}{\partial u^2} - \frac{\partial \tilde{C}_t(T, u)}{\partial u}\right)

The normalized put prices \tilde{P}_t(T, u) fulfil the same equation, which can easily be shown by inserting the put-call parity into the equation above:

\displaystyle \tilde{C}_t(T, u) = \tilde{P}_t(T,u) + 1 - e^u.

The numerical stability of the original algorithm [1] can be enhanced for deep ITM options by calibrating to calls and puts simultaneously. The interpolation scheme also has a significant impact on the stability; this topic has been discussed in [2][3]. Using meshes concentrated around the current spot level for the finite difference scheme benefits the stability and accuracy of the algorithm.

In order to stabilize the calculation of the local volatility function

\displaystyle \sigma_{LV}\left( T, e^uF(t,T)\right) = \sqrt{\frac{2 \frac{\partial \tilde{C}_t(T,u)}{\partial T}}{  \frac{\partial^2 \tilde{C}_t(T, u)}{\partial u^2} - \frac{\partial \tilde{C}_t(T, u)}{\partial u} }}

one should evaluate the first-order derivative of \tilde{C}_t(T,u) w.r.t. the time T using the fact that the derivative of the inverse of a matrix \bf{A}(t) is given by

\displaystyle \frac{\partial \bf{A}(t)^{-1}}{\partial t} = -\bf{A}(t)^{-1} \frac{\partial \bf{A}(t)}{\partial t} \bf{A}(t)^{-1}
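­
In the one-step implicit Euler setting this boils down to one extra linear solve with the same matrix: assuming \bf{A}(T) = I - (T-T_i)L and \tilde{C}(T) = \bf{A}(T)^{-1}\tilde{C}(T_i), the identity above gives \partial \tilde{C}/\partial T = \bf{A}^{-1}L\tilde{C}(T). A small NumPy sketch under these assumptions (L is a placeholder for the discretized operator, not the QuantLib code):

import numpy as np

def step_and_time_derivative(L, c_prev, dT):
    # one implicit Euler step: (I - dT*L) c = c_prev
    A = np.eye(len(c_prev)) - dT*L
    c = np.linalg.solve(A, c_prev)        # normalized prices at T
    # dC/dT = -A^{-1} (dA/dT) A^{-1} c_prev = A^{-1} (L c)
    dc_dT = np.linalg.solve(A, L @ c)
    return c, dc_dT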

As an example, the diagram below shows different calibrations of the Andreasen-Huge volatility interpolation to a SABR volatility skew at discrete strike sets

x \in \{ 0.02, 0.025, 0.03, 0.035, 0.04, 0.05, 0.06\} \vee x\in \{0.03, 0.035, 0.04\}

for the SABR parameters

\alpha = 0.15, \beta = 0.8, \nu = 0.5, \rho = -0.48, \text{fwd} = 0.03, T=20

[Diagram: Andreasen-Huge interpolation calibrated to the SABR skew]

The QuantLib implementation is part of release 1.12.

[1] J. Andreasen, B. Huge, Volatility Interpolation
[2] F. Le Floc’h, Andreasen-Huge interpolation – Don’t stay flat
[3] J. Healy, A spline to fill the gaps with Andreasen-Huge one-step method

Adaptive SOR Method for Implied Volatility Calculation

In a recent blog contribution Fabien Le Floc’h [1] suggests combining the adaptive successive over-relaxation method [2] with an improved explicit approximate implied volatility formula [3] to calculate the initial guess. The implementation of both algorithms is straightforward.

A large set of OTM and ITM options together with different displacement factors has been set up to serve as a benchmark portfolio for comparing the performance with the original QuantLib implementation. As can be seen in the diagram below, the new method is – depending on the accuracy – two to three times faster than the current QuantLib implementation.

[Diagram: speed-up of the adaptive SOR method vs. the current implementation]

The implementation is available here; the diagram above is derived from the test case testImpliedVolAdaptiveSuccessiveOverRelaxation in the class BlackFormulaTest.

[1] Le Floc’h, F. (2017) Implied Volatility from Black Scholes Price

[2] Li, M. (2008) An Adaptive Successive Over-relaxation Method for Computing the Black-Scholes Implied Volatility

[3] J. Gatheral, I. Matic, R. Radoicic, D. Stefanica (2017), Tighter Bounds for Implied Volatility

Andersen-Piterbarg Integration Limits for the Time Dependent Heston Model

The normalized characteristic function \phi_t(z) of a piecewise constant, time-dependent Heston model

\begin{array}{rcl} d \ln S_t&=& \left(r_t - q_t - \frac{\nu_t}{2}\right)dt + \sqrt{\nu_t} dW^{S}_t \\ d\nu_t&=& \kappa_t\left(\theta_t-\nu_t \right ) dt + \sigma_t\sqrt{\nu_t} dW^{\nu}_t \\ \rho_t dt &=& dW^{S}_t dW^{\nu}_t \end{array}

with n time intervals [t_0=0, t_1], ... ,[t_{n-1}, t_n] and constant parameters within these intervals

\kappa_t = \kappa_j \wedge \theta_t = \theta_j \wedge \sigma_t=\sigma_j \wedge \rho_t=\rho_j \forall j\in [1, n] \wedge t\in [t_{j-1}, t_j]

is given by the recurrence relation

\begin{array}{rcl} k_j &=& \kappa_j - iz\rho_j\sigma_j \\ d_j &=& \sqrt{k_j^2 +\sigma_j^2(z^2+iz)} \\ g_j &=&\displaystyle \frac{k_j- d_j}{k_j + d_j} \\ \tilde{g}_j &=& \displaystyle \frac{k_j- d_j - D_{j+1}\sigma_j^2}{k_j + d_j - D_{j+1}\sigma_j^2} \\ D_j &=& \displaystyle \frac{k_j + d_j}{\sigma_j^2}\frac{g_j-\tilde{g}_j e^{-d_j \tau_j}}{1-\tilde{g}_j e^{-d_j\tau_j}} \\ C_j &=& \displaystyle \frac{\kappa_j\theta_j}{\sigma_j^2} \left( (k_j- d_j )\tau_j - 2\ln\left(\frac{1-\tilde{g}_je^{-d_j\tau_j}}{1-\tilde{g}_j}\right)\right) + C_{j+1} \\ \tau_j &=& t_j - t_{j-1} \\ \phi_{t_n}(z) &=& \exp{\left(C_1(z)+D_1(z)\nu_0\right)}\end{array}

and the initial condition

C_{n+1} = D_{n+1} = 0
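
A plain NumPy transcription of this backward recursion might look as follows (a sketch with made-up function and argument names, not the QuantLib engine):

import numpy as np

def piecewise_heston_cf(z, v0, times, kappa, theta, sigma, rho):
    # phi_{t_n}(z) for intervals [0, t_1], ..., [t_{n-1}, t_n]; kappa[j],
    # theta[j], sigma[j], rho[j] are the constant parameters on interval j
    C, D = 0.0j, 0.0j                      # C_{n+1} = D_{n+1} = 0
    t = [0.0] + list(times)
    for j in reversed(range(len(times))):  # backward recursion
        tau = t[j + 1] - t[j]
        k = kappa[j] - 1j*z*rho[j]*sigma[j]
        d = np.sqrt(k*k + sigma[j]**2*(z*z + 1j*z))
        g = (k - d)/(k + d)
        gt = (k - d - D*sigma[j]**2)/(k + d - D*sigma[j]**2)
        e = np.exp(-d*tau)
        C = kappa[j]*theta[j]/sigma[j]**2*((k - d)*tau
            - 2.0*np.log((1.0 - gt*e)/(1.0 - gt))) + C
        D = (k + d)/sigma[j]**2*(g - gt*e)/(1.0 - gt*e)
    return np.exp(C + D*v0)

For a single interval this collapses to the standard Gatheral formulation of the Heston characteristic function.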

It should be noted that the complex logarithm in this formulation can be restricted to the principal branch without introducing any discontinuities [1]. It is important to know the asymptotic behaviour of C_1(z)+D_1(z)\nu_0 in order to calculate the truncation point for the integral over the characteristic function when using the Andersen-Piterbarg approach with control variate [2]. A Mathematica script gives

\begin{array}{rcl} \displaystyle \lim_{u\to\infty}\frac{D_1(u)}{u} &=& \displaystyle-\frac{i\rho_1 + \sqrt{1-\rho_1^2} }{\sigma_1} \\ \displaystyle\lim_{u\to\infty}\frac{C_1(u)}{u} &=& \displaystyle -\sum_{j=1}^n \frac{\kappa_j\theta_j}{\sigma_j}\left(\sqrt{1-\rho_j^2} + i\rho_j\right)\tau_j\end{array}
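
Taking the real part of these limits gives the decay rate of the integrand, and one possible way to turn it into a truncation point – following the tail bound used in the Andersen-Piterbarg approach – is sketched below (illustrative names, not the QuantLib code):

import numpy as np
from scipy.optimize import brentq

def truncation_limit(eps, v0, tau, kappa, theta, sigma, rho):
    # decay rate from Re(C_1(u) + D_1(u)*v0) ~ -c_inf * u
    c_inf = np.sqrt(1.0 - rho[0]**2)*v0/sigma[0] \
        + sum(np.sqrt(1.0 - r*r)*k*th*dt/s
              for k, th, s, r, dt in zip(kappa, theta, sigma, rho, tau))
    # smallest u_max with exp(-c_inf*u_max)/u_max <= eps
    return brentq(lambda u: np.exp(-c_inf*u)/u - eps, 1e-4, 1e8)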

The implementation of the Andersen-Piterbarg method for the piecewise constant time dependent Heston model is part of the pull request #251.

[1] Afshani, S. (2010) Complex logarithms and the piecewise constant extension of the Heston model.

[2] Andersen, L. and Piterbarg, V. (2010) Interest Rate Modeling, Volume I: Foundations and Vanilla Models, (Atlantic Financial Press London).

New Semi-Analytic Heston Pricing Algorithms

The pricing engines for the Heston model in QuantLib have aged a bit. In the meantime newer and better algorithms have been developed and discussed in the literature. For a comprehensive review please see [1][2]. Time to refurbish the existing engines.

The Heston model is defined by the following stochastic differential equations for the log spot and the variance

\begin{array}{rcl} d \ln S_t&=& \left(r_t - q_t - \frac{\nu_t}{2}\right)dt + \sqrt{\nu_t} dW^{S}_t \\ d\nu_t&=& \kappa\left(\theta-\nu_t \right ) dt + \sigma\sqrt{\nu_t} dW^{\nu}_t \\ \rho dt &=& dW^{S}_t dW^{\nu}_t \end{array}

The normalized characteristic function in the Gatheral formulation is given by

\phi_t(z) = \exp\left\{\frac{v_0}{\sigma^2}\frac{1-e^{-Dt}}{1-Ge^{-Dt}}\left(\kappa-i\rho\sigma z-D\right) + \frac{\kappa\theta}{\sigma^2}\left(\left(\kappa-i\rho\sigma z-D\right)t-2\ln\frac{1-Ge^{-Dt}}{1-G}\right) \right\}

\begin{array}{rcl} D&=&\sqrt{\left(\kappa - i\rho\sigma z\right)^2+\left(z^2+iz\right)\sigma^2} \\ G&=&\displaystyle\frac{\kappa -i\rho\sigma z-D}{\kappa -i\rho\sigma z + D}\end{array}

Andersen and Piterbarg [3] introduced a Black-Scholes control variate to improve the numerical stability of Lewis’s formula (2001) for the price of a vanilla call option

\begin{array}{rcl} C(S_0, K, T)&=&BS\left(S_0, K, T, \sqrt{\nu_0}\right) \\ &+& \frac{\sqrt{S_0e^{(r-q)T}K}e^{-rT}}{\pi}\displaystyle\int\limits_{0}^{\infty}{\Re\left( \frac{\phi^{BS}_T\left(u-\frac{i}{2}\right) - \phi_T\left(u-\frac{i}{2}\right)}{u^2+\frac{1}{4}} e^{i u \left(\ln\frac{S_0}{K}+(r-q)T\right) } \right)  \mathrm{d}u}\end{array}

with the Black-Scholes price BS\left(S_0, K, T, \sqrt{\nu_0}\right) of a vanilla call option with volatility \sqrt{\nu_0} and the characteristic function

\phi^{BS}_T(z)=e^{-\frac{1}{2}\sigma_{BS}^2 T (z^2 + iz)}

Different proposals for the volatility of the vanilla option have been brought up in order to achieve an optimal control variate:

  • \sigma_{BS}^2 = \nu_0
  • \sigma_{BS}^2 = \frac{1}{\kappa t}\left(1-e^{-\kappa t}\right)\left(\nu_0-\theta\right) + \theta
  • \sigma_{BS}^2 = c_2 with c_2 the second cumulant of \ln \frac{F}{K} [1].

The diagram below shows the resulting integrand for the different control variate volatilities, the Heston model parameters

r=7.5\%, q=5\%, S_0=100, \nu_0=0.08, \rho=-80\%,\sigma=0.5, \kappa=4, \theta=0.05

and for a vanilla call option with strike K=200 and maturity t=2.0.

[Diagram: integrands for the different control variate volatilities]

The best choice depends on the Heston and option parameters, but it seems that

\sigma_{BS}^2 = \frac{1}{\kappa t}\left(1-e^{-\kappa t}\right)\left(\nu_0-\theta\right) + \theta

is a good choice for a large variety of parameters. A direct comparison of the integrand with and without control variate shows how effective the control variate is.

[Diagram: integrand with and without control variate]

The central trick of Andersen-Piterbarg is now to truncate Lewis’s infinite integral at a finite u_{max} such that the remaining part is smaller than a given threshold, based on the following inequality

\displaystyle\int_{u_{max}}^\infty \left\vert \frac{\phi_B(u-\frac{i}{2}) - \phi(u-\frac{i}{2})}{u^2+\frac{1}{4}} \right\vert du \le e^{-C_\infty u_{max}}\displaystyle\int_{u_{max}}^\infty \frac{1}{u^2} du

Please see the original paper or [2] for further details on how to get from here to an algorithm for u_{max}, especially for short-dated options.

As Andersen and Piterbarg have pointed out, the simple trapezoidal rule works surprisingly well for the resulting integral if a control variate is used. Alan Lewis’s test cases with high precision Heston reference prices serve as a test bed here, in particular the call with strike 100. The Gauss-Laguerre method performs very well for this particular test case, but the point here is that the trapezoidal rule converges much faster than the higher order Simpson rule or the even more complex adaptive Gauss-Lobatto method.
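
Putting the pieces together, a compact sketch of the approach – the Gatheral formulation of the characteristic function from above, the Lewis integral with the second control variate volatility from the list, the truncation bound for u_{max} and a plain trapezoidal rule – might look as follows (illustrative Python with made-up names and default settings, not the QuantLib engine):

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def heston_cf(z, t, v0, kappa, theta, sigma, rho):
    # normalized characteristic function in the Gatheral formulation above
    k = kappa - 1j*rho*sigma*z
    D = np.sqrt(k*k + (z*z + 1j*z)*sigma**2)
    G = (k - D)/(k + D)
    e = np.exp(-D*t)
    return np.exp(v0/sigma**2*(1 - e)/(1 - G*e)*(k - D)
                  + kappa*theta/sigma**2*((k - D)*t - 2*np.log((1 - G*e)/(1 - G))))

def bs_call(s0, K, t, r, q, vol):
    fwd, sd = s0*np.exp((r - q)*t), vol*np.sqrt(t)
    d1 = (np.log(fwd/K) + 0.5*sd*sd)/sd
    return np.exp(-r*t)*(fwd*norm.cdf(d1) - K*norm.cdf(d1 - sd))

def ap_call(s0, K, t, r, q, v0, kappa, theta, sigma, rho, eps=1e-10, n=200):
    # control variate volatility: second proposal from the list above
    sigma_bs = np.sqrt((1 - np.exp(-kappa*t))/(kappa*t)*(v0 - theta) + theta)
    # truncation u_max from exp(-c_inf*u)/u <= eps
    c_inf = np.sqrt(1 - rho*rho)/sigma*(v0 + kappa*theta*t)
    u_max = brentq(lambda u: np.exp(-c_inf*u)/u - eps, 1e-4, 1e6)
    # equidistant trapezoidal rule for the Lewis integral
    u = np.linspace(0.0, u_max, n)
    z = u - 0.5j
    phi_bs = np.exp(-0.5*sigma_bs**2*t*(z*z + 1j*z))
    phi = heston_cf(z, t, v0, kappa, theta, sigma, rho)
    x = np.log(s0/K) + (r - q)*t
    integrand = np.real((phi_bs - phi)/(u*u + 0.25)*np.exp(1j*u*x))
    cv = np.sqrt(s0*np.exp((r - q)*t)*K)*np.exp(-r*t)/np.pi
    return bs_call(s0, K, t, r, q, sigma_bs) + cv*np.trapz(integrand, u)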

[Diagram: convergence speed of the different quadrature rules]

The diagram below compares the recently added COS engine with the Andersen-Piterbarg method using the trapezoid rule for this test case. The Andersen-Piterbarg method is more accurate for a similar number of points.

[Diagram: COS engine vs. Andersen-Piterbarg convergence]

The implementation is part of the pull request #251.

[1] M. Schmelzle, Option Pricing Formulae using Fourier Transform: Theory and Application.

[2] F. Le Floc’h, Fourier Integration and Stochastic Volatility Calibration.

[3] L. Andersen and V. Piterbarg (2010), Interest Rate Modeling, Volume I: Foundations and Vanilla Models, Atlantic Financial Press London.