In the previous article we looked at a way of computing expectations using partial differential equations (PDEs). In this one we define a hybrid approach that uses the Feynman-Kac formula together with a more classical computation of expectations, done either analytically or using Monte Carlo methods.

We will work through an example and consider the stochastic differential equation (SDE)

\[\begin{align} dX(t) & = \sigma dW(t) \\ X(0) & = X_0, \end{align}\]

for $t \in (0, T)$, $\sigma \in \mathbb{R}^+$, and $W(t)$ a Wiener process. We want to compute the discounted expectation at $X_0$

\[\mathbb{E}\left[ e^{-r T} \Phi(X(T)) | X(0) = X_0 \right],\]

where $r \in \mathbb{R}$ and the function $\Phi$ is the terminal payoff. In our example we take $\Phi(x) = x^2$.
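
Before deriving anything analytically, a quick Monte Carlo sketch gives us a reference value for this expectation. The parameter values below (`sigma`, `r`, `T`, `x0`, `n_paths`) are arbitrary illustrative choices, not fixed by the discussion above.

```python
import numpy as np

# Illustrative parameters (arbitrary choices)
sigma, r, T, x0 = 0.3, 0.05, 1.0, 1.2
n_paths = 1_000_000

rng = np.random.default_rng(seed=42)

# For dX = sigma dW the terminal value is known exactly:
# X(T) = X0 + sigma * W(T) with W(T) ~ N(0, T), so no time stepping is needed.
x_T = x0 + sigma * np.sqrt(T) * rng.standard_normal(n_paths)

# Discounted expectation of the payoff Phi(x) = x^2
mc_estimate = np.exp(-r * T) * np.mean(x_T**2)
print(f"Monte Carlo estimate: {mc_estimate:.6f}")
```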

As done in the previous article, we introduce a function $u(x, t)$ defined as

\[u(x, t) = \mathbb{E}\left[ e^{-r(T - t)} \Phi(X(T)) | X(t) = x \right].\]

From the Feynman-Kac formula, this is equivalent to solving the backward PDE

\[\frac{\partial u}{\partial t} + \frac{1}{2} \sigma^2 \frac{\partial^2 u}{\partial x^2} - r u = 0\]

with terminal condition $u(x, T) = \Phi(x) = x^2$.
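
As a point of comparison with the previous article, here is a minimal explicit finite-difference sketch for this backward problem. The domain, grid sizes, and the crude Dirichlet boundaries (terminal payoff frozen at the edges of the grid) are illustrative assumptions, adequate only well inside the domain.

```python
import numpy as np

# Illustrative parameters (arbitrary choices, same as the Monte Carlo sketch)
sigma, r, T = 0.3, 0.05, 1.0

# Wide spatial grid so that the crude boundary treatment does not
# pollute the region of interest around x ~ 1
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]

# Explicit scheme: stable as long as dt <= dx^2 / sigma^2
n_steps = 2000
dt = T / n_steps
assert dt <= dx**2 / sigma**2

u = x**2                     # terminal condition u(x, T) = x^2
for _ in range(n_steps):     # march backward from t = T to t = 0
    u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    # u(t - dt) = u(t) + dt * (1/2 sigma^2 u_xx - r u); boundaries kept fixed
    u[1:-1] += dt * (0.5 * sigma**2 * u_xx - r * u[1:-1])

i = np.searchsorted(x, 1.2)  # read off u(x0, 0) near x0 = 1.2
print(f"Finite-difference estimate at x = {x[i]:.3f}: {u[i]:.6f}")
```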

It is easy to find an analytical solution for this problem. Since $r$ is constant, what we need to compute reduces to

\[\mathbb{E} \left[ X(T)^2 | X(t) = x \right],\]

which, from the definition of variance, can be written as

\[\mathbb{V}[X(T)| X(t) = x] + \left(\mathbb{E}[X(T) | X(t) = x]\right)^2,\]

where $\mathbb{V}[X(T) \vert X(t) = x]$ is the conditional variance of $X(T)$.

For our example both terms are easy to compute,

\[\begin{align} \mathbb{E}[X(T) | X(t) = x] & = x \\ % \mathbb{V}[X(T)| X(t) = x] & = \sigma^2 (T - t), \end{align}\]

and therefore our solution reads

\[u(x, t) = e^{-r(T - t)} \left( \sigma^2 (T - t) + x^2 \right).\]
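
As a quick sanity check, we can evaluate this closed form at the same illustrative point used in the sketches above and compare it with the Monte Carlo and finite-difference estimates; the helper `u_exact` below is just this formula.

```python
import numpy as np

def u_exact(x, t, sigma=0.3, r=0.05, T=1.0):
    """Closed-form solution u(x, t) = exp(-r (T - t)) (sigma^2 (T - t) + x^2)."""
    return np.exp(-r * (T - t)) * (sigma**2 * (T - t) + x**2)

# With the illustrative values used earlier (x0 = 1.2, t = 0) this should
# agree with the Monte Carlo and finite-difference estimates printed above.
print(f"Analytical u(1.2, 0) = {u_exact(1.2, 0.0):.6f}")
```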

To show how we can couple the backward PDE solution just obtained with a forward solution of the SDE, suppose that there is a time $t^\star \in (0, T)$ at which we have calculated the value of $u(x, t^\star)$ using the Feynman-Kac formula as written above. To derive the solution at time $t=0$, thanks to the Markov property and the tower property of conditional expectations we need to compute

\[\begin{align} u(x, 0) & = \mathbb{E} \left[ e^{-r t^\star} \mathbb{E} \left[ e^{-r(T - t^\star)} X(T)^2 | X(t^\star) \right] | X(0) = x\right] \\ % & = e^{-r T} \mathbb{E}\left[ \sigma^2 (T - t^\star) + X(t^\star)^2 | X(0) = x \right] \\ % & = e^{-r T} \left( \sigma^2 (T - t^\star) + \mathbb{E} \left[ X(t^\star)^2 | X(0) = x \right] \right) \\ % & = e^{-r T}\left( \sigma^2(T - t^\star) + \sigma^2 t^\star + x^2 \right) \\ % & = e^{-r T}\left( \sigma^2 T + x^2 \right), \end{align}\]

which is the same result we found above. This example illustrates how we can use the Feynman-Kac formula, stop the backward solution at an intermediate time $t^\star$, and still derive the solution at the initial time using the probability distribution of $X(t^\star)$.
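
To make the hybrid idea concrete, here is a small sketch that samples $X(t^\star)$ forward by Monte Carlo, plugs the samples into the backward Feynman-Kac solution $u(\cdot, t^\star)$ obtained above, and discounts back to $t = 0$. The parameter values and the choice $t^\star = 0.4$ are arbitrary illustrations; in a real application the backward stage would typically come from a numerical PDE solve rather than a closed form.

```python
import numpy as np

# Illustrative parameters (arbitrary choices, as above)
sigma, r, T, x0 = 0.3, 0.05, 1.0, 1.2
t_star = 0.4
n_paths = 1_000_000

def u_exact(x, t):
    # Backward (Feynman-Kac) solution derived above, valid for 0 <= t <= T
    return np.exp(-r * (T - t)) * (sigma**2 * (T - t) + x**2)

rng = np.random.default_rng(seed=0)

# Forward stage: sample X(t*) given X(0) = x0 (exact for dX = sigma dW)
x_tstar = x0 + sigma * np.sqrt(t_star) * rng.standard_normal(n_paths)

# Hybrid estimate: average the backward solution over the samples at t*
# and discount from t* back to 0
hybrid = np.exp(-r * t_star) * np.mean(u_exact(x_tstar, t_star))

print(f"Hybrid estimate:     {hybrid:.6f}")
print(f"Analytical u(x0, 0): {u_exact(x0, 0.0):.6f}")
```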