Simply Extracting Information out of Your Stochastic Series
The Unreasonable Usefulness of Stochastic DEs
Certain areas of science are unfortunately mired in formalism despite being invaluable in practice. Stochastic calculus is, without a doubt, one of these fields. A typical text on stochastic calculus covers a series of formal theorems proving that its integrals exist, but leaves the reader without a clear picture of just how insanely practical these equations can be. If you’re modeling any kind of noisy timeseries, it would be a huge mistake not to spend at least some time considering a stochastic DE as your zeroth-order model. This post covers the elementary math you need to fit a stochastic equation to observed data by maximum likelihood estimation (MLE), and shows how useful these equations are for extrapolation, normalization, featurization, etc. As far as 1-d data goes, pretty much all of it applies to real-world noisy, incomplete data.
Brownian Example
Suppose you have observed the coordinates of a particle drifting to the right, but also subject to random forces, giving you a data series \(\{t, X_t\}\). You would like to answer elementary questions such as: “What drift parameter, in sensible time units, do these samples imply?” “How strongly is the particle affected by noise?” “How can I extrapolate trajectories of the particle into the future?”
One path to answering those questions is to model the particle as a Brownian diffusion, by first assuming a stochastic differential equation of the form:
\[dX_t = \mu\,dt + \sigma\,dW_t\]This statement is a little loaded, especially the \(dW_t\) bit, but those unconcerned with formalism can focus on the finite-difference approximation of \(X_t\). In this case \(dW_t\) is just a normal random variable with mean zero and variance \(dt\):
\[X_{t+dt} \approx X_t + \mu\,dt + \sigma\,N(0,dt)\]Rearranging to isolate the normal distribution, this expression yields a negative log-likelihood objective for fitting the parameters of the model:
\((X_{t+dt}-X_t-\mu\,dt)/\sigma \sim N(0,dt)\), which means \(\mu\) and \(\sigma\) can be estimated by maximizing the likelihood of the left-hand values under the normal density \(f(x)=\frac{1}{\sqrt{2\pi\,dt}}\,e^{-x^2/(2\,dt)}\). The code below implements integration of the stochastic equation and maximum-likelihood estimation of these parameters.
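A minimal sketch of what that code might look like, assuming NumPy and SciPy (the function names `simulate` and `mle_fit` are illustrative, not a fixed API):

```python
import numpy as np
from scipy.optimize import minimize

def simulate(mu, sigma, x0, dt, n_steps, seed=None):
    """Euler-Maruyama integration of dX = mu*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)   # N(0, dt) increments
    return np.concatenate([[x0], x0 + np.cumsum(mu * dt + sigma * dW)])

def neg_log_likelihood(params, x, dt):
    """Negative log-likelihood of the increments under N(mu*dt, sigma^2*dt)."""
    mu, sigma = params
    dx = np.diff(x)
    var = sigma**2 * dt
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (dx - mu * dt)**2 / var)

def mle_fit(x, dt):
    """Maximum-likelihood estimates of (mu, sigma) from one observed path."""
    res = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(x, dt),
                   bounds=[(None, None), (1e-8, None)])
    return res.x
```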
Here’s an example of what this looks like if you fit SPX by MLE and then draw a large number of trajectories.
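For instance, with the sketch above, fitting a series and fanning out trajectories from its last value might look like this (synthetic data standing in for SPX):

```python
dt = 1 / 252                     # daily steps in annual units
x_obs = simulate(mu=5.0, sigma=2.0, x0=100.0, dt=dt, n_steps=1000, seed=0)
mu_hat, sigma_hat = mle_fit(x_obs, dt)

# Draw a fan of 500 forward trajectories, each one trading year long.
paths = [simulate(mu_hat, sigma_hat, x_obs[-1], dt, 252, seed=k)
         for k in range(500)]
```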
Information Is Noise
Once you’ve optimized the parameters of a stochastic equation, you can propagate it forwards and backwards in time, which is useful, and you can draw trajectories from it. But perhaps the most useful thing you can do is replace \(X\) with its noise process, by simply solving for the \(dW\) that produced a particular trajectory. The result is a series with the same information as \(X\), but quasi-normally distributed: all the information in the series, recast into the form most amenable to statistics.
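Under the Brownian model above, solving the Euler step for the noise increments is one line; a minimal sketch (the name `extract_noise` is mine, not the post’s):

```python
def extract_noise(x, mu, sigma, dt):
    """Invert the Euler step: dW_t = (X_{t+dt} - X_t - mu*dt) / sigma.
    If the model fits, the result looks like i.i.d. N(0, dt) draws."""
    return (np.diff(x) - mu * dt) / sigma
```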
\[\text{SPX}_t \rightarrow dW_t \rightarrow \text{IWM}_t\]To close with one powerful example: IWM (a Russell 2000 ETF) is strongly correlated with SPX. In the following plot I have MLE-fit a particular (non-Brownian) four-parameter stochastic equation on both SPX and IWM, extracted SPX’s \(dW\) noise process, and then integrated that noise using the parameters of IWM:
The crazy thing about this plot is that the fake blue series contains only four pieces of information about IWM. Beyond that, all it uses is the fact that SPX’s and IWM’s noise processes are the same. How many of the assets you hold share the same noise process?
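In code, the whole pipeline is a sketch like the following, reusing the Brownian helpers from above in place of the post’s four-parameter equation (whose exact form isn’t given here); `spx` and `iwm` are assumed to be aligned NumPy arrays:

```python
mu_spx, sigma_spx = mle_fit(spx, dt)   # fit each series separately
mu_iwm, sigma_iwm = mle_fit(iwm, dt)

# SPX's noise process: the dW increments that reproduce its trajectory.
dW = extract_noise(spx, mu_spx, sigma_spx, dt)

# Re-integrate SPX's noise under IWM's parameters to get a "fake" IWM.
fake_iwm = np.concatenate([[iwm[0]],
                           iwm[0] + np.cumsum(mu_iwm * dt + sigma_iwm * dW)])
```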