Heston PDE Grid: How small can we make it?

The aim is to apply the techniques used in Arbitrary Number of Stencil Points to the partial differential equation (PDE) of the Heston model with x_t = \ln S_t

\displaystyle  0 = \frac{\partial u}{\partial t} +\frac{1}{2}\nu\frac{\partial^2 u}{\partial x^2}  + \left(r_t-q_t -\frac{\nu}{2}\right)\frac{\partial u}{\partial x} + \rho\sigma\nu \frac{\partial^2 u}{\partial \nu \partial x} +\frac{1}{2}\sigma^2\nu\frac{\partial^2 u}{\partial \nu^2} + \kappa(\theta - \nu) \frac{\partial u}{\partial \nu}-r_tu

to reach – for a given accuracy – the smallest possible lattice grids. The Heston model parameters at hand are

\displaystyle S_0 = 100, v_0 = 0.04, \kappa = 1, \theta = v_0, \sigma=0.2, \rho=-0.75, r=5\%, q=0\%,

the strikes of the benchmark options are given by

\displaystyle k\in \{50, 75, 90, 100, 110, 125, 150, 200\}

and the maturity of the options should be one year. First let’s use a five point stencil in the x direction. As expected, the order of convergence increases from second to fourth order.

The same experiment, but now with a five point stencil also in the \nu direction, is shown below. Finite lattice effects already show up at relatively small lattice sizes due to the high convergence speed.

In the time direction, Richardson extrapolation can be used to increase the order of convergence from two to three.
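As a minimal sketch of this extrapolation step (a hypothetical helper, not the QuantLib implementation): if solve(n) integrates the PDE with n time steps using a scheme of order p, the leading error term cancels in a weighted combination of a coarse and a fine solution.

def richardson_extrapolation(solve, n, p=2.0):
    # error model: u(dt) = u + c*dt^p + O(dt^(p+1))
    coarse = solve(n)        # step size dt
    fine = solve(2 * n)      # step size dt/2
    return (2.0**p * fine - coarse) / (2.0**p - 1.0)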

So finally, for this parameter set, if the aim is to get the average pricing error below \displaystyle 10^{-3}, then a lattice size of

\displaystyle x\times\nu\times T = 50\times 10 \times 20

seems to be sufficient when using five point stencil operators in the x and \nu directions and Richardson extrapolation in the time direction. Unfortunately, for a given accuracy this technique is slower than the usual suspects, e.g. operator splitting, because the matrix inversion for the Crank-Nicolson scheme has to be carried out by an iterative solver, namely BiCGStab. The code for the numerical experiments can be found in the test case

NthOrderDerivativeOpTest::testHigerOrderHestonOptionPricing().
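The preconditioned solve itself can be sketched with scipy (assuming A is the wide-stencil system matrix in sparse format; the QuantLib implementation is C++):

import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_wide_stencil_system(A, b):
    # precondition BiCGStab with the classical three point (tridiagonal)
    # part of the wide-stencil matrix A
    tri = sp.diags([A.diagonal(-1), A.diagonal(0), A.diagonal(1)],
                   [-1, 0, 1], format='csc')
    lu = spla.splu(tri)
    M = spla.LinearOperator(A.shape, matvec=lu.solve)
    x, info = spla.bicgstab(A, b, M=M)
    assert info == 0  # zero signals successful convergence
    return x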


Almost exact SABR Interpolation using Neural Networks and Gradient Boosted Trees

Update 03-11-2019: Added arbitrage free SABR calibration based on neural networks.

Very efficient approximations exist for the SABR model

\begin{array}{rcl} \displaystyle dF_t &=& \alpha_t F_t^\beta dW_t \\ \nonumber d\alpha_t &=& \nu \alpha_t dZ_t \\ \nonumber \rho dt &=& <dW_t, dZ_t> \end{array}

like the original Hagan et al. formula [1] or variants of it [2], but these analytic formulas are in general not arbitrage free. Solving the corresponding partial differential equation leads to an arbitrage free solution

\displaystyle \frac{\partial u}{\partial t} - \frac{\nu^2}{2}\frac{\partial u}{\partial x} + \frac{1}{2}e^{2x}F_t^{2\beta}\frac{\partial^2 u}{\partial F^2}+\frac{\nu^2}{2}\frac{\partial^2 u}{\partial x^2}+\rho\nu e^{x_t}F_t^\beta\frac{\partial^2 u}{\partial F \partial x} -ru = 0

but is computationally demanding. The basic idea here is to use a neural network or gradient boosted trees to interpolate (predict) the difference between the analytic approximation and the exact result from the partial differential equation for a large variety of model parameters.

The first step is to reduce the number of dimensions of the parameter space \{F_0,  \alpha, \beta, \nu, \rho \} by utilizing the scaling symmetry of the SABR model [3]

\begin{array}{rcl} \displaystyle F_t &\rightarrow& \lambda F_t \\ \nonumber \alpha_t &\rightarrow& \lambda^{1-\beta}\alpha_t. \end{array}

so that we can focus on the case F_0=1.0 without loss of generality

\displaystyle  \sigma_{BS}\left( K, F_0, T, \alpha, \beta, \nu, \rho \right) = \sigma_{BS}\left(\lambda K, \lambda F_0, T, \lambda^{1-\beta}\alpha, \beta, \nu, \rho \right).
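In code the normalization is a two-liner (a hypothetical helper using \lambda = 1/F_0):

def normalize_to_unit_forward(K, F0, alpha, beta):
    # scaling symmetry with lambda = 1/F0 leaves sigma_BS unchanged
    lam = 1.0 / F0
    return lam * K, lam**(1.0 - beta) * alpha   # K', alpha'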

This in turn also limits the “natural” parameter space for \{ \alpha, \beta, \nu \}, which will be set to

\displaystyle \alpha \in [0, 1], \ \beta \in [0, 1], \ \nu \in [0, 1].

Next on the list is to set up an efficient PDE solver to prepare the training data. The QuantLib solver already supports the two standard error reduction techniques, namely adaptive grid refinement around important points and cell averaging around special points of the payoff. The latter ensures smooth second order convergence in the spatial direction [4]. The Hundsdorfer-Verwer ADI scheme is also of second order in the time direction, and additional Rannacher smoothing steps at the beginning ensure smooth convergence in the time direction as well [5]. Hence Richardson extrapolation can be used to improve the convergence order of the overall algorithm. An example pricing for

F_0=1.0, K=0.466, T=0.6, \alpha = 0.825, \beta=0.299, \nu=0.410, \rho=-0.166

is shown in the diagram below to demonstrate the efficiency of the Richardson extrapolation. The original grid size for scaling factor 1.0 is \{ T, S, v \} = \{20, 200, 25\}.
sabr_pde_richardson.png

The training data was generated by a five dimensional quasi Monte-Carlo Sobol sequence for the parameter ranges

\displaystyle \alpha \in [0, 1], \ \beta \in [0, 1], \ \rho \in [-1, 1], \ \nu \in [0, 1], T\in [\frac{1}{12}, 1].
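Such a sequence can e.g. be generated and scaled with scipy’s qmc module (an illustrative sketch; any Sobol implementation will do):

import numpy as np
from scipy.stats import qmc

# columns: alpha, beta, rho, nu, T
sobol = qmc.Sobol(d=5, scramble=False)
u = sobol.random(2**16)
lower = np.array([0.0, 0.0, -1.0, 0.0, 1.0 / 12.0])
upper = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
samples = qmc.scale(u, lower, upper)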

The strikes are equally distributed between the 1\% and 99\% quantiles of the risk neutral density w.r.t. the ATM volatility of the SABR model. The PDE solver does not only calculate the fair value for F_0=1.0 but for a range of spot values around F_0. Thanks to the scaling symmetry of the SABR model this can be utilized to calculate more prices with new K'=\lambda K and \alpha'=\lambda^{1-\beta}\alpha values for F_0=1.0.

The training set includes 617K sample values. The network is trained to fit the difference between the correct SABR volatility from the solution of the partial differential equation and the Le Floc’h-Kennedy approximation. A large neural network is not needed to interpolate the parameter space; e.g. the following Tensorflow/Keras model definition with 46K parameters has been used in the examples below

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(20, activation='linear', input_shape=(7, )))
model.add(Dense(100, activation='linear'))
model.add(Dense(400, activation='sigmoid'))
model.add(Dense(10, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mae', optimizer='adam')

As always it is important for the predictive power of the neural network to normalize the input data, e.g. by using sklearn.preprocessing.MinMaxScaler. The out-of-sample mean absolute error of the neural network is around 0.00025 in annualized volatility, far better than the Le Floc’h-Kennedy or Hagan et al. approximation.

sabr_vol_dist
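A sketch of this preprocessing and training step (the epoch and batch settings are illustrative assumptions):

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X = scaler.fit_transform(train_X)   # scale the seven input columns to [0, 1]
model.fit(X, train_Y, epochs=200, batch_size=256, validation_split=0.1)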

The diagram below shows the difference between the correct volatility and the different approximations using the parameter example from the previous post.

sabr_vols

One could also use gradient tree boosting algorithms like XGBoost or LightGBM. For example the models

import xgboost as xgb

xgb_model = xgb.XGBRegressor(nthread=-1,
                             max_depth=50,
                             n_estimators=100,
                             eval_metric="mae")

import lightgbm as lgb

# lgb_eval is assumed to be a lgb.Dataset built from the validation sample
gbm_model = lgb.train({'objective': 'mae',
                       'num_leaves': 500},
                      lgb.Dataset(train_X, train_Y),
                      num_boost_round=2000,
                      valid_sets=lgb_eval,
                      early_stopping_rounds=20)

result in similar out-of-sample mean absolute errors of 0.00030 for XGBoost and 0.00035 for LightGBM. At first glance the interpolation looks smooth, as can be seen in the diagram below using the same SABR model parameters, but zooming in exposes non-differentiable points, which defeats the goal of stable Greeks.

xgboost.png

The average run time for the different approximations is shown in the table below.

\begin{tabular}{|l|c|} \hline \textbf{Algorithm} & \textbf{Run Time}  \\ \hline Hagan et al & 0.16us  \\ \hline Le Floc’h-Kennedy & 1.86us  \\ \hline Tensorflow DNN & 12.39us  \\ \hline XGBoost & 4.61us \\ \hline LightGBM & 21.84us \\ \hline PDE \& Richardson & 1.23s \\ \hline\end{tabular}

With these highly efficient pricing routines the calibration of the full SABR model can be done in a fraction of a second. To test this approach several Heston parameter configurations have been used to calculate the implied volatility of 15 benchmark options for a single expiry. The full SABR model has been calibrated against these volatility sets with help of a standard Levenberg-Marquardt optimizer using either the PDE pricer or the neural network pricer. As expected, the neural network calibration routine took only 0.2 seconds, whereas the PDE calibration took over half an hour on average.

[1] P. Hagan, D. Kumar, A. Lesniewski, D. Woodward: Managing Smile Risk.
[2] F. Le Floc’h, G. Kennedy: Explicit SABR Calibration through Simple Expansions.
[3] H. Park: Efficient valuation method for the SABR model.
[4] K. in’t Hout: Numerical Partial Differential Equations in Finance explained.
[5] K. in’t Hout, M. Wyns: Convergence of the Hundsdorfer–Verwer scheme for two-dimensional convection-diffusion equations with mixed derivative term.

 

Finite-Difference Solver for the SABR Model

Despite being based on a fairly simple stochastic differential equation

\begin{array}{rcl} \displaystyle dF_t &=& \alpha_t F_t^\beta dW_t \\ \nonumber d\alpha_t &=& \nu \alpha_t dZ_t \\ \nonumber \rho dt &=& <dW_t, dZ_t> \end{array}

the corresponding partial differential equation

\displaystyle \frac{\partial u}{\partial t} - \frac{\nu^2}{2}\frac{\partial u}{\partial x} + \frac{1}{2}e^{2x}F_t^{2\beta}\frac{\partial^2 u}{\partial F^2}+\frac{\nu^2}{2}\frac{\partial^2 u}{\partial x^2}+\rho\nu e^{x_t}F_t^\beta\frac{\partial^2 u}{\partial F \partial x} -ru = 0

for the SABR model – derived using the variable transformation x_t = \ln \alpha_t together with Ito’s lemma and the Feynman-Kac formula – is quite difficult to solve numerically. Part of the problem is the process of the underlying, which corresponds to a constant elasticity of variance (CEV) model if \nu is zero. This model exhibits a variety of different behaviours depending on the value of \beta and on the boundary conditions. The authors in [1] give a comprehensive overview of this topic. To limit the possible model zoo let’s define

\displaystyle X\left(F_t\right) = X_t=\frac{F_t^{2(1-\beta)}}{\alpha^2(1-\beta)^2} \ \wedge \  \delta=\frac{1-2\beta}{1-\beta}

and assume absorbing boundary conditions at X=0 if \delta < 2. The first step for an implementation of a finite difference scheme is to find efficient limits for the discretization grid. These limits can e.g. be derived from the cumulative distribution function of the underlying process:

\textrm{Pr}(F \le F_t | F_0) = \begin{cases} 1-{\chi '}^{2}\left( \frac{X_0}{t}; 2-\delta, \frac{X_t}{t}\right), & \textrm{if} \ \delta < 2 \\ 1 - {\chi '}^{2}\left( \frac{X_t}{t}; \delta, \frac{X_0}{t}\right), & \textrm{if} \ \delta > 2 \end{cases}.

Please notice that the underlying X(F_t) in the first case \delta < 2 appears in the non-centrality parameter of the non-central chi-squared distribution {\chi '}^{2}, hence the calculation of the inverse can only be carried out by a numerical root-finding algorithm. Sankaran’s approximation of the non-central chi-squared distribution {\chi '}^{2} can be used to speed up this method [2]. In the second case \delta > 2 the inverse of the equation \displaystyle \textrm{Pr}(F \le F_t | F_0) = q is

F_t= \left[{t \left({\chi'}^2\right)}^{-1} \left(1-q; \delta, \frac{X_0}{t}\right)  \alpha^2(1-\beta)^2\right]^{\frac{1}{2(1-\beta)}}

which can be computed using the Boost library. In addition, adaptive grid refinement around important points and cell averaging around special points of the payoff at maturity [3] improve the accuracy of the CEV finite difference scheme. F_T is only a local martingale when \delta > 2 or, equivalently, \beta > 1. In this case the call-put parity reads [1]

P_{\delta > 2} = C_{\delta>2} + K - F_0\Gamma\left(-\nu; \frac{X_0}{2t}\right)

and acts as a litmus test for the implementation of the boundary conditions and the finite difference scheme.
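For illustration, the grid-limit computation from above can be sketched with scipy (the function name and the root-search bracket are assumptions, not the QuantLib code):

from scipy.optimize import brentq
from scipy.stats import ncx2

def cev_quantile(q, t, alpha, beta, F0):
    # mapping between F and X as defined above
    X = lambda f: f**(2 * (1 - beta)) / (alpha**2 * (1 - beta)**2)
    F = lambda x: (x * alpha**2 * (1 - beta)**2)**(1 / (2 * (1 - beta)))
    X0, delta = X(F0), (1 - 2 * beta) / (1 - beta)
    if delta < 2:
        # X_t sits in the non-centrality parameter: numerical root search
        obj = lambda Xt: 1 - ncx2.cdf(X0 / t, 2 - delta, Xt / t) - q
        return F(brentq(obj, 1e-12, 1e4))   # bracket chosen generously
    return F(t * ncx2.ppf(1 - q, delta, X0 / t))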

The variance direction x_t = \ln \alpha_t matches a Brownian motion, hence the discretization is straightforward. Operator splitting schemes like Craig-Sneyd or Hundsdorfer-Verwer [4] are tailor-made to solve the two dimensional partial differential equation.

The finite difference solver can now be used to compare different approximations with the correct model behaviour and also against highly accurate Monte-Carlo simulations [2]. The model configuration [5]

F_0=1.0, \alpha=0.35, \beta=0.25, \nu=1.0, \rho=0.25, T=2.0

should act as a test bed. As can be seen in the diagram below, the Monte-Carlo results are in line with the results from the finite difference methods within very small error bars. The standard Hagan et al. formula as well as the Le Floc’h-Kennedy [6] log-normal formula deviate significantly from the correct implied volatilities.

implvol

The approximations are not arbitrage free: the probability densities turn negative for small strikes. Hagan’s formula exhibits this behaviour already at larger strikes than the Le Floc’h-Kennedy formula, as can be seen in the following diagram. As expected, the finite difference solution does not produce negative probabilities.

propdensity2.png

A note on the Le Floc’h-Kennedy approximation: the formula becomes numerically unstable around ATM strike levels, hence a second order Taylor expansion is used for moneyness

m=\frac{F}{K}-1.0 \in [-0.0025, 0.0025].

The following diagram shows the difference between second order Taylor expansion and the formula evaluated using IEEE-754 double precision.

taylor

The source code is part of the PR #589 and available here.

[1] D.R. Brecher, A.E. Lindsay: Results on the CEV Process, Past and Present.
[2] B. Chen, C.W. Oosterlee, H. van der Weide: Efficient unbiased simulation scheme for the SABR stochastic volatility model.
[3] K. in’t Hout: Numerical Partial Differential Equations in Finance explained.
[4] K. in’t Hout, S. Foulon: ADI Finite Difference Schemes for Option Pricing in the Heston Model with Correlation.
[5] P. Hagan, D. Kumar, A. Lesniewski, D. Woodward: Arbitrage free SABR.
[6] F. Le Floc’h, G. Kennedy: Explicit SABR Calibration through Simple Expansions.

 

Arbitrary Number of Stencil Points

The standard finite difference implementations of derivative pricing algorithms based on partial differential equations have a spatial order of convergence of two. The reason is that these implementations use a three point central stencil for the first and second order derivatives. The three point stencil leads to a tridiagonal matrix. Such linear systems can be solved efficiently with help of the Thomas algorithm.

Higher order spatial accuracy relies on stencils with more points. The corresponding linear systems can no longer be solved by the Thomas algorithm, but they can be solved by the BiCGStab iterative solver with the classical three point stencil as a very efficient preconditioner. The calculation algorithm for the coefficients of larger stencils on arbitrary grids is outlined in [1].
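A direct transcription of Fornberg’s recursion [1] illustrates the weight calculation (a sketch, not the QuantLib code): it returns the stencil weights of all derivatives up to order m at the point z for arbitrarily spaced grid points x.

import numpy as np

def fornberg_weights(z, x, m):
    # c[:, k] holds the stencil weights of the k-th derivative at z
    n = len(x)
    c = np.zeros((n, m + 1))
    c[0, 0], c1, c4 = 1.0, 1.0, x[0] - z
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            k = np.arange(1, mn + 1)
            if j == i - 1:
                c[i, 1:mn + 1] = c1 * (k * c[i - 1, :mn]
                                       - c5 * c[i - 1, 1:mn + 1]) / c2
                c[i, 0] = -c1 * c5 * c[i - 1, 0] / c2
            c[j, 1:mn + 1] = (c4 * c[j, 1:mn + 1] - k * c[j, :mn]) / c3
            c[j, 0] = c4 * c[j, 0] / c3
        c1 = c2
    return c

On the grid x = (-1, 0, 1) with z = 0 this reproduces the classical three point weights (-1/2, 0, 1/2) for the first and (1, -2, 1) for the second derivative.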

Before using higher order stencils one should first make sure that the two standard error reduction techniques are in place, namely adaptive grid refinement around important points [2] and cell averaging around special points of the payoff at maturity or – equally effective for vanilla options – placing the strike in the middle between two grid points. Let’s focus on the Black-Scholes-Merton PDE

\displaystyle \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2\frac{\partial^2 V}{\partial x^2} + \left(r-q-\frac{\sigma^2}{2}\right)\frac{\partial V}{\partial x} -rV = 0 \ \wedge \ x= \ln S

after transforming it to the heat equation

\displaystyle \frac{\partial u}{\partial \tau}=\frac{1}{2}\sigma^2\frac{\partial^2 u}{\partial y^2} \ \wedge \ y = x + \left(r-q-\frac{1}{2}\sigma^2\right)\tau \ \wedge \ t=T-\tau \ \wedge \ u=e^{r\tau}V

to demonstrate the convergence improvements. The spatial pricing error is defined as the RMSE for a set of benchmark options. In this example the benchmark portfolio consists of OTM call or put options with strikes

k\in \{50, 75, 90, 100, 110, 125, 150, 200\}.

The other parameters in this example are

S_0=100, r=5\%, q=2.5\%, \sigma=20\%, T=1.

The Crank-Nicolson scheme with “more than enough” time steps was used to integrate the PDE in the time direction such that only the spatial error remains.

convergence

As can be seen in the diagram above, the spatial order of convergence increases from second to fourth order when moving from the standard three point stencil to the five point stencil. On the other hand, solving the resulting linear systems now takes longer. All in all, the five point stencil is only faster than the standard three point discretization if the target RMSE is below 10^{-5}.

The diagram below shows the effective combination of a five point stencil and the Richardson extrapolation.

richardson

Diagrams are based on the test cases testHigerOrderBSOptionPricing and testHigerOrderAndRichardsonExtrapolationg in the PR #483.

 

[1] B. Fornberg [1988]: Generation of Finite Difference Formulas on Arbitrarily Spaced Grids.
[2] D. Tavella, C. Randall [2000]: Pricing Financial Instruments: The Finite Difference Method, Wiley Series in Financial Engineering, John Wiley & Sons, New York.

Andreasen-Huge Volatility Interpolation

A few years ago Andreasen and Huge introduced an efficient and arbitrage free volatility interpolation method [1] based on a one step finite difference implicit Euler scheme applied to a local volatility parametrization. Probably the most notable use case is the generation of a local volatility surface from a set of option quotes.

The starting point is Dupire’s forward equation for the prices C_t(T, K) of European call options at time t with strike K and maturity T:

\displaystyle \frac{\partial C_t(T, K)}{\partial T} = \frac{1}{2}\sigma_{LV}^2(T,K)K^2\frac{\partial^2 C_t(T, K)}{\partial K^2} - \left(r(T) -q(T)\right) K\frac{\partial C_t(T, K)}{\partial K}-q(T)C_t(T, K)

Define the normalized call price \hat{C}_t(T, \kappa) in terms of the discount factor P(t,T), the forward price F(t,T) and the moneyness \kappa as

\begin{array}{rcl} \displaystyle P(t,T) &=& \displaystyle  e^{-\int_t^T r(\tau)d\tau} \nonumber \\ \displaystyle  F(t,T) &=&  S_t  \displaystyle  e^{\int_t^T \left( r(\tau)-q(\tau)\right)d\tau} \nonumber \\ \displaystyle \kappa &=& \displaystyle \frac{K}{F(t,T)} \nonumber \\ \displaystyle \hat{C}_t(T, \kappa) &=& \displaystyle \frac{C_t \left( T, \kappa F(t,T) \right) }{P(t,T)F(t,T)} \nonumber \end{array}.

The Dupire forward equation for the normalized prices \hat{C}_t(T,\kappa) is then given by

\displaystyle \frac{\partial \hat{C}_t(T, \kappa)}{\partial T} = \frac{1}{2} \sigma_{LV}^2(T, \kappa F(t,T))\kappa^2 \frac{\partial^2 \hat{C}_t(T, \kappa)}{\partial \kappa^2}.

Rewriting this equation in terms of u=\ln \kappa and \tilde{C}_t(T, u) = \hat{C}_t(T, \kappa) yields

\displaystyle  \frac{\partial \tilde{C}_t(T,u)}{\partial T} = \frac{1}{2} \sigma_{LV}^2 \left( T, e^u F(t,T)\right) \left( \frac{\partial^2 \tilde{C}_t(T, u)}{\partial u^2} - \frac{\partial \tilde{C}_t(T, u)}{\partial u}\right)

The normalized put prices \tilde{P}_t(T, u) fulfil the same equation, which can easily be shown by inserting the call-put parity into the equation above

\displaystyle \tilde{C}_t(T, u) = \tilde{P}_t(T,u) + 1 - e^u.
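As a stripped-down sketch of the one step idea (a uniform grid in u, Dirichlet boundaries and a frozen per-node local volatility vector are simplifying assumptions, not the QuantLib implementation):

import numpy as np
from scipy.linalg import solve_banded

def implicit_euler_step(u, c_old, sigma_lv, dT):
    # solve (I - dT*L) c_new = c_old with the operator
    # L = 0.5*sigma_lv^2 * (d^2/du^2 - d/du) discretized centrally
    n, h = len(u), u[1] - u[0]
    s = 0.5 * sigma_lv**2 * dT
    ab = np.zeros((3, n))
    ab[1, :] = 1.0                                   # identity part
    ab[0, 2:] = -s[1:-1] * (1.0 / h**2 - 0.5 / h)    # super-diagonal
    ab[1, 1:-1] += 2.0 * s[1:-1] / h**2              # main diagonal
    ab[2, :-2] = -s[1:-1] * (1.0 / h**2 + 0.5 / h)   # sub-diagonal
    return solve_banded((1, 1), ab, c_old)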

The numerical stability of the original algorithm [1] can be enhanced for deep ITM options by calibrating to calls and puts simultaneously. The interpolation scheme also has a significant impact on the stability. This topic has been discussed in [2][3]. Using meshes concentrated around the current spot level for the finite difference scheme benefits the stability and accuracy of the algorithm.

In order to stabilize the calculation of the local volatility function

\displaystyle \sigma_{LV}\left( T, e^uF(t,T)\right) = \sqrt{\frac{2 \frac{\partial \tilde{C}_t(T,u)}{\partial T}}{  \frac{\partial^2 \tilde{C}_t(T, u)}{\partial u^2} - \frac{\partial \tilde{C}_t(T, u)}{\partial u} }}

one should evaluate the first order derivative of \tilde{C}_t(T,u) w.r.t. time T using the fact that the derivative of the inverse of the matrix \bf{A}(t) is given by

\displaystyle \frac{\partial \bf{A}(t)^{-1}}{\partial t} = -\bf{A}(t)^{-1} \frac{\partial \bf{A}(t)}{\partial t} \bf{A}(t)^{-1}
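A quick numerical check of this identity (illustration only):

import numpy as np

rng = np.random.default_rng(0)
A0, A1 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
A = lambda t: A0 + t * A1                  # hence dA/dt = A1
t, eps = 0.3, 1e-6
lhs = (np.linalg.inv(A(t + eps)) - np.linalg.inv(A(t - eps))) / (2 * eps)
rhs = -np.linalg.inv(A(t)) @ A1 @ np.linalg.inv(A(t))
print(np.max(np.abs(lhs - rhs)))           # zero up to finite difference error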

As an example, the diagram below shows different calibrations of the Andreasen-Huge volatility interpolation to a SABR volatility skew at discrete strike sets

x \in \{ 0.02, 0.025, 0.03, 0.035, 0.04, 0.05, 0.06\} \vee x\in \{0.03, 0.035, 0.04\}

for the SABR parameters

\alpha = 0.15, \beta = 0.8, \nu = 0.5, \rho = -0.48, \text{fwd} = 0.03, T=20
sabr

The QuantLib implementation is part of the release 1.12.

[1] J. Andreasen, B. Huge, Volatility Interpolation
[2] F. Le Floc’h, Andreasen-Huge interpolation – Don’t stay flat
[3] J. Healy, A spline to fill the gaps with Andreasen-Huge one-step method

The Collocating Local Volatility Model

In a recent paper [1] Lech Grzelak has introduced his Collocating Local Volatility Model (CLV). This model utilises the so-called collocation method [2] to map the cumulative distribution function of an arbitrary kernel process onto the true cumulative distribution function (CDF) extracted from option market quotes.

Starting point for the collocating local volatility model is the market implied CDF of an underlying S_t at time t_i:

F_{S(t_i)}(x) = 1 + e^{r t_i}\frac{\partial V_{call}(t_0, t_i, K)}{\partial K}\mid_{K=x} = e^{r t_i}\frac{\partial V_{put}(t_0, t_i, K)}{\partial K}\mid_{K=x}

The prices can also be given by another calibrated pricing model, e.g. the Heston model or the SABR model. To increase numerical stability it is best to use OTM calls and puts.

The dynamics of the spot process S_t should be given by some stochastic process X_t and a deterministic mapping function g(t, x) such that

S_t=g(t, X_t)

The mapping function g(t, x) ensures that the terminal distribution of S_t given by the CLV model matches the market implied CDF. The model then reads

\begin{array}{rcl} S_t &=& g(t,X_t)  \nonumber \\ dX_t &=& \mu(X_t)dt + \sigma(X_t)dW_t\nonumber \end{array}

The choice of the stochastic process X_t does not influence the model prices of vanilla European options – they are given by the market implied terminal CDF – but it influences the dynamics of the forward volatility skew generated by the model and therefore the prices of more exotic options. It is also preferable to choose an analytically tractable process X_t to reduce the computational effort.

The collocation method outlined in [2] defines an efficient algorithm to approximate the mapping function g(t, x) based on a set of collocation points x_{i,j}=x_j(T_i) for a given set of maturities T_i, i=1,...,m, with j=1,...,n interpolation points per maturity T_i. Let F_{S_{T_i}}(s) be the market implied CDF for a given expiry T_i. Then we get

\begin{array}{rcl}  F_{X_{T_i}}\left(x_{i,j}\right) &=& F_{S_{T_i}}\left(g(T_i, x_{i,j})\right) = F_{S_{T_i}}  \left(s_{i,j}\right) \nonumber \\ \Rightarrow s_{i,j}&=&F^{-1}_{S_{T_i}}\left(F_{X_{T_i}}(x_{i,j})\right) \nonumber \end{array}

for the collocation points with s_{i,j}=g(T_i, x_{i,j}).

The optimal collocation points are given by the abscissas of the Gaussian quadrature for the terminal distribution of X_{T_i}. The simplest choice is a normally distributed kernel process X_t like the Ornstein-Uhlenbeck process

dX_t = \kappa(\theta-X_t)dt + \sigma dW_t.

The corresponding collocation points of the Normal-CLV model are then given by

\begin{array}{rcl} x_j(t) &=& \mathbb{E}\left[X_t\right] + \sqrt{\mathbb{V}ar\left[X_t\right]} x_j^{\mathcal{N}(0,1)}  \nonumber \\ &=& \theta + \left(X_0 - \theta\right)e^{-\kappa t} + \sqrt{\frac{\sigma^2}{2\kappa}\left(1-e^{-2\kappa t}\right)} x_j^{\mathcal{N}(0,1)}, \ j=1,...,n\end{array}

in which the collocation points x_j^{\mathcal{N}(0,1)} of the standard normal distribution can be calculated by QuantLib’s Gauss-Hermite quadrature implementation

Array abscissas = std::sqrt(2.0)*GaussHermiteIntegration(n).x();
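A Python analogue of this snippet (an illustration; numpy’s Gauss-Hermite nodes use the weight e^{-x^2}, hence the factor \sqrt{2}):

import numpy as np

def collocation_points(n, t, kappa, theta, sigma, x0):
    x_norm = np.sqrt(2.0) * np.polynomial.hermite.hermgauss(n)[0]
    mean = theta + (x0 - theta) * np.exp(-kappa * t)
    stdev = np.sqrt(0.5 * sigma**2 / kappa * (1.0 - np.exp(-2.0 * kappa * t)))
    return mean + stdev * x_norm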

Lagrange polynomials [3] are an efficient interpolation scheme to interpolate the mapping function g(t, x) between the collocation points

g(t, X_t) = \sum_{j=1}^N s_j(t)\prod_{k=1, k\neq j}^N \frac{X_t-x_k(t)}{x_j(t)-x_k(t)}

Strictly speaking Lagrange polynomials do not preserve monotonicity, and one could also use the monotonic interpolation schemes supported by QuantLib’s spline interpolation routines. As outlined in [2] this method can also be used to approximate the inverse CDF of an “expensive” distribution.
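For instance, scipy’s barycentric implementation [3] can serve as such an interpolation backend (the collocation points x_t, the values s_t and the simulated kernel values X_t are assumed to be given):

from scipy.interpolate import BarycentricInterpolator

g = BarycentricInterpolator(x_t, s_t)   # x_j(t) and s_j(t) per maturity
S_t = g(X_t)                            # spot values along the simulated paths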

Calibration of the Normal-CLV model to market prices is therefore pretty fast and straightforward, as it amounts to the calibration of the mapping function g(t, x_t).

Monte-Carlo pricing means simulating the tractable process X_t and evaluating the Lagrange polynomial whenever the value of the spot process S_t is needed. Pricing via partial differential equation involves the one dimensional PDE

\frac{\partial V}{\partial t} + \mu(x)\frac{\partial V}{\partial x} + \frac{1}{2}\sigma^2(x)\frac{\partial^2 V}{\partial x^2}-rV = 0

with the terminal condition at maturity time T

V(T, x_T) = \text{Payoff}\left(S_T=g(T,x_T)\right)

For plain vanilla options the upper and lower boundary condition is

\frac{\partial^2 V}{\partial x^2} = 0 \ \ \forall x\in\left\{x_{min},x_{max}\right\}

Example 1: Pricing error for plain vanilla options

  • Market prices are given by the Black-Scholes-Merton model with

S_0=100, r=0.1, q=0.04, \sigma=0.25.

  • Normal-CLV process parameters are given by

\kappa=1.0, \theta=0.1,\sigma=0.5,x_0=0.1

Ten collocation points are used to define the mapping function g(t, x), and the time to maturity is one year. The diagram below shows the deviation of the implied volatility of the Normal-CLV price, based on the PDE solution, from the true value of 25%

clvpriceerror

Even ten collocation points are already enough to obtain a very small pricing error. The advice in [2] to stretch the collocation grid has turned out to be very efficient if the number of collocation points gets larger.

Example 2: Forward volatility skew

  • Market prices are given by the Heston model with

S_0=100, r=0.1, q=0.05, \nu_0=0.09, \kappa=1.0, \theta=0.06, \sigma=0.4, \rho=-0.75.

  • Normal-CLV process parameters are given by

\kappa=-0.075, \theta=0.05,\sigma=0.25,x_0=0.05

The diagram below shows the implied volatility of a forward starting European option with moneyness varying from 0.5 to 2 and maturity date six months after the reset date.

hestonforward

The shape of the forward volatility surface of the Normal-CLV model shares important similarities with the surfaces of the more complex Heston or Heston Stochastic Local Volatility (Heston-SLV) model with large mixing angles \eta. But by the very nature of the Normal-CLV model, the forward volatility does not depend on the values of \theta, \sigma or x_0, which limits the variety of different forward skew dynamics this model can create. CLV models with non-normal kernel processes will support a greater variety.

Example 3: Pricing of Double-no-Touch options

  • Market prices are given by the Heston model with

S_0=100, r=0.02, q=0.01, \nu_0=0.09, \kappa=1.0, \theta=0.06, \sigma=0.8, \rho=-0.8.

  • Normal-CLV process parameters are given by different \kappa values and

\theta=100,\sigma=0.15,x_0=100.0

Unsurprisingly the prices of 1Y Double-no-Touch options exhibit similar patterns under the Normal-CLV model and the Heston-SLV model, as shown below in the “Moustache” graph. But the computational effort for the Normal-CLV model is much smaller than the effort for calibrating and solving the Heston-SLV model.

moustache.png

The QuantLib implementation of the Normal-CLV model is available as pull request #117; the Rcpp based package Rclv contains the R interface to the QuantLib implementation and the demo code for all three examples.

[1] L.A. Grzelak, 2016, The CLV Framework – A Fresh Look at Efficient Pricing with Smile

[2] L.A. Grzelak, J.A.S. Witteveen, M. Suárez-Taboada, C.W. Oosterlee, The Stochastic Collocation Monte Carlo Sampler: Highly efficient sampling from “expensive” distributions

[3] J-P. Berrut, L.N. Trefethen, Barycentric Lagrange interpolation, SIAM Review, 46(3):501–517, 2004.

Probability Distribution of the Heston Model, Part I

The Heston model is defined by the following stochastic differential equation of the log spot x_t = \ln S_t

\begin{array}{rcl} dx_t &=& \left(r_t - q_t - \frac{\nu_t}{2}\right)dt + \sqrt\nu_t dW^{x}_t \nonumber \\ d\nu_t&=& \kappa\left(\theta-\nu_t \right ) dt + \sigma\sqrt\nu_t dW^{\nu}_t \nonumber \\ \rho dt &=& dW^{x}_tdW^{\nu}_t \end{array}

To a significant extent the popularity of the Heston model is based on the fact that semi-closed formulas for vanilla European options exist using the characteristic function of the model. The time evolution of the probability density function p(x_t, v_t, t) is given by the corresponding Fokker-Planck equation [1]

\frac{\partial p}{\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}(\nu p) + \frac{\partial^2}{\partial x \partial \nu} (\rho\sigma\nu p) + \left(\frac{\nu}{2} -r_t+q_t\right)\frac{\partial}{\partial x} p + \frac{\sigma^2}{2}\frac{\partial^2}{\partial \nu^2}(\nu p) - \frac{\partial}{\partial \nu}\left(\kappa\left(\theta-\nu \right ) p\right)

with the initial condition

p(x,\nu,t=0) = \delta(x-x_0)\delta(\nu-\nu_0)

The reduced probability density function

p(x_t, t \mid x_0,\nu_0) = \int_0^\infty p(x_t, \nu_t, t) d\nu

for this initial value problem can be calculated using a semi-closed integral formula [2]

\begin{array}{rcl} \Gamma &=& \kappa+i\rho\sigma p_x \\ \Omega &=& \sqrt{\Gamma^2 + \sigma^2\left(p_x^2-ip_x \right )} \\ p(x_t, t \mid \nu_0) &=& \int_{-\infty}^\infty \frac{dp_x}{2\pi}\exp\left( ip_x (x_t-x_0 -(r-q)t) -\nu_0 \frac{p_x^2-ip_x}{\Gamma + \Omega \coth\left(\Omega t/2 \right )} \right) \\ && \ \ \ \ \ \ \ \ \times \exp\left(-\frac{2\theta\kappa}{\sigma^2}\ln\left(\cosh\frac{\Omega t}{2} + \frac{\Gamma}{\Omega} \sinh \frac{\Omega t}{2}\right )+\frac{\kappa\Gamma\theta t}{\sigma^2} \right) \\ &=& \int_{-\infty}^\infty \frac{dp_x}{2\pi} \tilde{p}(p_x,t \mid \nu_0) \end{array}

This gives the opportunity to write a pricing engine for arbitrary European payoffs. The value of a European option with payoff function P(S_t) \in \mathbb{L}^2 at maturity t is given by

\begin{array}{rcl} \text{npv} &=& \int_{-\infty}^\infty dx_t\int_{0}^\infty d\nu_t P(S_0 e^{x_t+(r_t-q_t)t})e^{-r_t t} p(x_t,\nu_t,t) \\ &=& e^{-r_t t}\int_{-\infty}^\infty dx_t P(S_0 e^{x_t+(r_t-q_t)t})p(x_t,t \mid x_0,\nu_0) \end{array}

The calculation needs two nested integrations which can be carried out efficiently using e.g. the Gauss-Lobatto algorithm. The solution of the equation

| \tilde{p}(p_x, t \mid \nu_0)| = \text{QL\_EPSILON}

determines the upper boundary for the integration over p_x. The boundaries \left[ x_{min}, x_{max}\right] for the integration over x_t are chosen such that the interval covers ten times the square root of the expected integrated variance

-x_{min} = x_{max}=10\sqrt{\int_0^{t}E\left[ \nu_t \right ] dt} = 10\sqrt{\theta t + \frac{1}{\kappa}\left(\nu_0-\theta\right)\left(1-e^{-\kappa t}\right)}
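In code the boundary is a one-liner (sketch):

import numpy as np

def x_boundary(t, kappa, theta, nu0):
    # ten times the square root of the expected integrated variance
    return 10.0 * np.sqrt(theta * t
                          + (nu0 - theta) / kappa * (1.0 - np.exp(-kappa * t)))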

Obviously the nested integration makes this algorithm trickier than the standard ways to price plain vanilla European options, but it is not limited to vanilla payoffs. The implementation of this algorithm can be found here within the QuantLib trunk on Github.

Broadie and Kaya [3] have outlined an algorithm to sample from the full probability density function p(x_t, \nu_t, t \mid x_0, \nu_0) instead of the reduced density function p(x_t, t \mid x_0, \nu_0). The starting point for this algorithm is the exact solution of the Heston stochastic differential equation

\begin{array}{rcl} x_t &=& x_0 + (r_t-q_t)t - \frac{1}{2}\int_0^t \nu_s ds + \rho\int_0^t \sqrt{\nu_s} dW_s^{(1)} + \sqrt{1-\rho^2}\int_0^t \sqrt{\nu_s} dW_s^{(2)} \nonumber \\ \nu_t &=& \nu_0 + \kappa\theta t - \kappa \int_0^t \nu_s ds + \sigma \int_0^t\sqrt{\nu_s}dW_s^{(1)} \nonumber \end{array}

The probability density function of the variance process \nu_t is given by a noncentral chi-squared distribution

\nu_t = \frac{\sigma^2\left( 1-e^{-\kappa t}\right)}{4\kappa}\chi_d^{'2}\left(\frac{4\kappa e^{-\kappa t}}{\sigma^2\left(1-e^{-\kappa t}\right)}\nu_0\right), d=\frac{4\theta\kappa}{\sigma^2}
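Sampling \nu_t exactly therefore only requires a noncentral chi-squared draw, e.g. with numpy (sketch):

import numpy as np

def sample_variance(nu0, t, kappa, theta, sigma, rng=None):
    rng = rng or np.random.default_rng()
    k = sigma**2 * (1.0 - np.exp(-kappa * t)) / (4.0 * kappa)
    d = 4.0 * theta * kappa / sigma**2
    lam = (4.0 * kappa * np.exp(-kappa * t) * nu0
           / (sigma**2 * (1.0 - np.exp(-kappa * t))))
    return k * rng.noncentral_chisquare(d, lam)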

The distribution \Psi(0,t) of the integral \int_0^t \nu_s ds conditional on \nu_t and \nu_0 can be calculated via the characteristic function

\begin{array}{rcl} \text{Pr}(\Psi(t) \le x)&=& \frac{2}{\pi}\int_0^\infty \frac{\sin ux}{u}\text{Re}(\Phi(u)) du \\ \\ \Phi(a)&=& \frac{\gamma(a)e^{-\frac{1}{2}(\gamma(a)-\kappa)t} \left(1-e^{-\kappa t}\right)} {\kappa\left(1-e^{\gamma(a)t}\right)} \exp\left( \frac{\nu_t+\nu_0}{\sigma^2} \left[ \frac{\kappa\left(1+e^{-\kappa t}\right)}{1-e^{-\kappa t}} - \frac{\gamma(a)\left(1+e^{-\gamma(a)t}\right)}{1-e^{-\gamma(a)t}} \right] \right) \\ && \times \frac{I_{0.5d-1} \left( \sqrt{\nu_0\nu_t} \frac{4\gamma (a) e^{-0.5\gamma(a)t}}{\sigma^2\left(1-e^{-\gamma(a)t}\right)}\right)}{ I_{0.5d-1} \left( \sqrt{\nu_0\nu_t} \frac{4\kappa e^{-0.5\kappa t}}{\sigma^2\left(1-e^{-\kappa t}\right)}\right)} \\ && \times \frac{\exp\left((0.5d-1) \left[-\frac{1}{2}\gamma(a)t + \ln \frac{\gamma(a)}{1-e^{-\gamma(a)t}} \right]\right)}{\left(\frac{\gamma(a) e^{-0.5\gamma(a)t} }{ 1-e^{-\gamma(a)t}}\right)^{0.5d-1}} \\ \\ \gamma(a)&=&\sqrt{\kappa^2-2 i \sigma^2 a} \end{array}

The modified Bessel function of the first kind I_{0.5d-1}(z) can be evaluated using a series expansion for small and medium |z| or an asymptotic approximation for large |z| [5]. Unfortunately Boost provides only real versions of the Bessel functions, and the copyright status of the older complex valued Fortran77 routines is vague. Therefore QuantLib comes with its own implementation.

Please notice that \Phi(a) is already a continuous version of the characteristic function and therefore the integration does not need to track the branches of arg(z) when calculating the complex valued Bessel function [4].

The integration over the characteristic function is best done using either Gauss-Laguerre, Gauss-Lobatto or the trapezoidal rule. The two latter algorithms need to truncate the integration at some upper bound. A first guess for the truncation limit can be taken from the Cornish-Fisher expansion for some very small \epsilon. The moment-generating function \Phi(a) can be used to get the first, second and third moments of the distribution via finite difference quotients.

m_n = E(X^n) = \frac{d^n}{dy^n}\Phi(x+iy)\Big|_{x=y=0}

The next term is now fairly easy to calculate

\int_0^t \sqrt{\nu_s} dW_s^{(1)} = \frac{1}{\sigma}\left( \nu_t - \nu_0 - \kappa\theta t+\kappa \int_0^t \nu_s ds \right)

The log spot process can now be sampled using a standard normal random variable Z and

\begin{array}{rcl} x_t &=& x_0 + m(t) + \sigma(t)Z \nonumber \\ \sigma^2(t) &=& (1-\rho^2)\int_0^t \nu_s ds \nonumber \\ m(t) &=& (r_t-q_t)t - \frac{1}{2}\int_0^t \nu_s ds + \rho\int_0^t \sqrt{\nu_s}dW_s^{(1)} \nonumber \end{array}

This sampling algorithm is exact even for very large time steps and therefore offers advantages for quasi random Monte-Carlo methods, but the inversion of the CDF obtained by integrating the characteristic function is also very slow. The algorithm is implemented within the HestonProcess class.

[1] I. Clark, Foreign Exchange Option Pricing: A Practitioner’s Guide, p. 113

[2] A. Dragulescu, V. Yakovenko, Probability distribution of returns in the Heston model with stochastic volatility

[3] M. Broadie, Ö. Kaya, Exact Simulation of Stochastic Volatility and other Affine Jump Diffusion Processes

[4] R. Lord, Efficient pricing algorithms for exotic derivatives, p. 40

[5] J.R. Culham, Bessel Functions of the First and Second Kind