How to Predict Market Crashes

May 23, 2020

Transcript

Earlier I made a video on whether AI systems can predict the stock market. In this video, we will talk about whether financial crises can be modeled mathematically and predicted before the markets crash. Unlike my earlier video, this time the answer is yes, at least partially. Some bubbles can be predicted before they burst, and we'll see how.

First of all, I'm not a financial advisor. I don't have a formal financial background. I'm a computer scientist. So, anything I say here is not investment advice. Alright, we got the disclaimer out of the way, so let's get started.

Prof. Didier Sornette and his colleagues from the Swiss Federal Institute of Technology in Zurich proposed a mathematical model to predict market crashes. Their method is based on the assumption that the market has log-periodic oscillations and that super-exponential growth is not sustainable. And I’ll explain what that means. Let's say you invested in some security that yielded 8% returns in the first year, 16% in the second, 32% in the third, and 64% in the fourth year. Can we expect 128% returns next year? Sounds too good to be true, right? Because it most likely is. Chances are it is either a pyramid scheme that is bound to collapse or a bubble that will burst at some point. Riding bubbles can be profitable if we manage to get out in time. But can we predict when a bubble is most likely to burst? Will it be tomorrow, next week, or next month? Can we at least guess whether it is very soon or not?

Prof. Sornette postulates that bubbles in financial markets show some similarities in the way they grow. His team models the behavior of a bubble using an equation called the Log-Periodic Power Law, or LPPL for short. By fitting a log-periodic power law function to historical prices, one can predict the most probable time of a crash.

Let’s take a look at their equation. It may look a bit math-scary, but bear with me. I’ll try to explain it as simply as possible.

This function has 7 parameters. Let’s start with the one that we care about the most: the critical time, t_c. This indicates when the market is likely to crash or change behavior. On the left-hand side of the equation, we have p(t), which is the price at a given time; the function gives us the expected value of the logarithm of the price. The parameter A is a bias term that we can omit if the prices are normalized; otherwise, it indicates the expected log-price at the peak of the bubble. B indicates the height of the bubble, from top to bottom. C is the relative magnitude of the fluctuations. m is the exponent of the power-law growth. Omega is the frequency of the log-periodic oscillations. And phi is a phase-shift parameter.
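To make the shape of the function concrete, here is a minimal sketch of the LPPL expected log-price in Python. It is written directly from the parameter descriptions above; the function and variable names are my own, not code from Sornette's team.

```python
import numpy as np

def lppl_log_price(t, tc, m, omega, phi, A, B, C):
    """Expected log-price under the Log-Periodic Power Law (LPPL) model.

    t     : array of times (e.g., trading days), all earlier than tc
    tc    : critical time -- the most probable time of the crash
    m     : exponent of the power-law growth (typically between 0 and 1)
    omega : frequency of the log-periodic oscillations
    phi   : phase shift of the oscillations
    A     : expected log-price at the critical time (peak of the bubble)
    B     : amplitude of the power-law term (negative for a positive bubble)
    C     : relative magnitude of the oscillations
    """
    dt = tc - t  # time remaining until the critical time
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)
```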

So, how do we solve for those parameters to get a close fit to our data? It's not trivial. The log-periodic power law function is highly non-linear. It has many local minima, has high variance, and is hard to optimize. Although it's a differentiable function, it's very fragile and not very backpropagation-friendly.

This paper proposes a relatively robust calibration scheme that reduces the number of non-linear parameters to three by rewriting the equation as this one.

Here, the only non-linear parameters are m, omega, and the critical time, which can be solved for using local search methods such as the Nelder-Mead simplex method.
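Here is a rough sketch of what that calibration could look like in Python. In the rewritten equation, the oscillation term is split into cosine and sine components with two linear coefficients (often called C1 and C2), so A, B, C1, and C2 can be solved by ordinary least squares inside the objective while Nelder-Mead searches only over t_c, m, and omega. The initial guess and the bounds on m below are my own placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def lppl_design_matrix(t, tc, m, omega):
    """Regressors of the reformulated LPPL: log p(t) ~ A + B*f + C1*g + C2*h."""
    dt = tc - t
    f = dt**m
    g = f * np.cos(omega * np.log(dt))
    h = f * np.sin(omega * np.log(dt))
    return np.column_stack([np.ones_like(t), f, g, h])

def fit_lppl(t, log_price):
    """Fit the LPPL model to one window of log-prices.

    Only t_c, m, and omega are searched with Nelder-Mead; A, B, C1, C2
    are obtained by ordinary least squares for every candidate triple.
    """
    def sse(params):
        tc, m, omega = params
        if tc <= t[-1] or not (0.0 < m < 1.0):  # keep the search in a sane region
            return np.inf
        X = lppl_design_matrix(t, tc, m, omega)
        coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
        resid = log_price - X @ coef
        return float(resid @ resid)

    # Crude initial guess: the crash happens ~30 days after the window ends.
    x0 = np.array([t[-1] + 30.0, 0.5, 10.0])
    res = minimize(sse, x0, method="Nelder-Mead")
    tc, m, omega = res.x
    A, B, C1, C2 = np.linalg.lstsq(
        lppl_design_matrix(t, tc, m, omega), log_price, rcond=None)[0]
    return {"tc": tc, "m": m, "omega": omega,
            "A": A, "B": B, "C1": C1, "C2": C2, "sse": res.fun}
```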

Even after this reformulation, the function is still highly sensitive to its input. For example, if we fit it to the prices from the last 256 days rather than the last 512 days, we can get vastly different estimates of the critical time.

Ideally, we would want our bubble indicator to be consistent over different time scales so that we can have some confidence in our predictions. We can do so by fitting LPPL curves over a range of window sizes and seeing how many of them warn us about an upcoming market crash. That's what the LPPL Singularity Confidence Indicator aims to do: it gives the fraction of fits that satisfy a set of filtering conditions.

If a fit has parameters that fall within those ranges, it's considered a valid fit for a bubble. Those values were derived from historical data. To account for negative bubbles, where prices accelerate downward instead of upward, fits that indicate a negative bubble are counted as -1. Therefore, the confidence score takes values between -1 and 1, where -1 means all of the fits were negative bubbles and 1 means all of the fits were positive bubbles. The higher the value, the more likely it is that a crash is about to occur.
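Putting the pieces together, a confidence indicator along these lines could be sketched as follows, reusing the hypothetical fit_lppl from the sketch above. The filter ranges here are illustrative placeholders only, not the calibrated values derived from historical data.

```python
import numpy as np

def lppl_confidence(t, log_price, window_sizes, filters=None):
    """Fraction of LPPL fits over many window sizes that qualify as a bubble.

    Qualifying positive bubbles count as +1, qualifying negative bubbles
    as -1, so the score lies in [-1, 1].
    """
    if filters is None:
        # Placeholder ranges, NOT the Financial Crisis Observatory values.
        filters = {"m": (0.01, 0.99), "omega": (2.0, 25.0),
                   "tc_horizon": 0.2}  # t_c at most 20% of the window past its end
    votes = []
    for w in window_sizes:
        fit = fit_lppl(t[-w:], log_price[-w:])  # fit_lppl from the sketch above
        m_ok = filters["m"][0] < fit["m"] < filters["m"][1]
        w_ok = filters["omega"][0] < fit["omega"] < filters["omega"][1]
        tc_ok = t[-1] < fit["tc"] < t[-1] + filters["tc_horizon"] * w
        if m_ok and w_ok and tc_ok:
            votes.append(1 if fit["B"] < 0 else -1)  # B < 0 => positive bubble
        else:
            votes.append(0)
    return np.mean(votes)
```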

The authors who proposed this method use window sizes evenly spaced between 125 and 750 trading days, where the size difference between consecutive windows is 5 days. As you may guess, computing that many fits for every single time step is computationally very expensive. They also have a short-time-scale version that spans windows between 20 and 125 days, but I find it highly unstable, and it still requires computing a lot of fits. So this is definitely not a suitable indicator for frequent trading.

Prof. Sornette's team has created a website called the Financial Crisis Observatory. They calculate this indicator for a variety of financial instruments and publish the results on the website every day. Their indicator was able to detect several crashes, although it had some false positives as well.

So, what's the catch? Can we use this to devise a trading strategy that consistently beats the market? We can try. Let's see what happens if we simply set a threshold for the LPPL confidence indicator: sell if the bubble risk goes above it, and buy again when it falls below. As you can see, apart from avoiding one of the largest drawdowns, it doesn't really do any better than the buy-and-hold strategy.

In this example, I simply used a threshold of 0.4. We can tune our trading strategy further, but tuning it too much can lead to implicit overfitting. This thesis, for example, proposes and backtests more sophisticated trading strategies, named the Dragon Hunting and Bubble Overlay strategies. Both rely on LPPL-based indicators.
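For illustration, here is roughly what the simple threshold rule above looks like in code. This is a toy backtest with no transaction costs, slippage, or taxes, and it is not the backtesting setup used in that thesis.

```python
import numpy as np

def threshold_strategy(prices, confidence, threshold=0.4):
    """Toy backtest of the simple threshold rule described above.

    prices     : daily closing prices (1-D array)
    confidence : LPPL confidence score for each day (same length)
    Exit the market once the confidence rises above the threshold and
    re-enter once it falls back below; yesterday's signal decides
    today's position.
    """
    daily_ret = np.diff(prices) / prices[:-1]
    in_market = confidence[:-1] <= threshold
    strategy_ret = np.where(in_market, daily_ret, 0.0)
    strategy = np.cumprod(1 + strategy_ret)      # threshold strategy equity curve
    buy_and_hold = np.cumprod(1 + daily_ret)     # benchmark equity curve
    return strategy, buy_and_hold
```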

My personal opinion is that none of these strategies are foolproof. LPPL-based indicators can be good for risk management, but I don't think they can beat the market in terms of year-over-year gains. LPPL aims to model herd behavior in financial markets, but prices are driven by many factors, and herd behavior, where people make decisions based on what everybody else is doing, is only one of them. When there is uncertainty in the market, people tend to follow the crowd. This behavior is more prevalent in the cryptocurrency markets. That's probably why some of the earlier Bitcoin bubbles are modeled almost perfectly by an LPPL function. But obviously, this doesn't guarantee that we can accurately predict the next Bitcoin crash.

Markets may not be 100% efficient, but it's very hard to find consistent inefficiencies, especially when the number of market participants is sufficiently large. So, I would personally keep an eye on these indicators, but I wouldn't rely on them too much.

Some of you who watched my earlier videos may ask: where's the deep learning in all of this? How can we utilize log-periodic power law functions in neural networks?

A very basic way would be to use the LPPL indicators as inputs to a neural network to make trade decisions. A more sophisticated way would be to use the parameters of the LPPL fits as features, instead of the confidence indicator. This would eliminate the need for the hand-crafted filtering conditions.

I haven't tried those two approaches because it would take forever to compute those indices over enough data points to train a robust neural network. For example, computing the confidence score for 500 assets over the span of 20 years would require computing over 15 million function fits.

So, instead, I came up with a more computationally feasible approach: I used the log-periodic power law in the basis functions of a neural network. Just like RBF networks use radial basis functions, I used LPPL basis functions. I defined a layer with several LPPL functions with randomly initialized parameters, and stacked a few convolutional and fully connected layers on top to process the residuals.
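For the curious, here is a rough sketch of what such an LPPL basis layer could look like in PyTorch. It is only an illustration of the idea, not the exact architecture from my experiments; the initialization ranges, the clamping, and the residual formulation are placeholder choices.

```python
import math
import torch
import torch.nn as nn

class LPPLBasisLayer(nn.Module):
    """A layer whose units are LPPL curves with learnable parameters.

    Given a window of n_in log-prices, each of the n_basis units evaluates
    an LPPL curve over that window, and the layer returns the residuals
    between the input and each basis curve for downstream layers to process.
    """
    def __init__(self, n_in, n_basis):
        super().__init__()
        self.register_buffer("t", torch.arange(n_in, dtype=torch.float32))
        # Randomly initialized LPPL parameters, one set per basis function.
        self.tc    = nn.Parameter(n_in + 10 + 20 * torch.rand(n_basis))
        self.m     = nn.Parameter(torch.rand(n_basis))
        self.omega = nn.Parameter(5 + 10 * torch.rand(n_basis))
        self.phi   = nn.Parameter(2 * math.pi * torch.rand(n_basis))
        self.A     = nn.Parameter(torch.zeros(n_basis))
        self.B     = nn.Parameter(-torch.rand(n_basis))
        self.C     = nn.Parameter(0.1 * torch.randn(n_basis))

    def forward(self, x):
        # x: (batch, n_in) window of log-prices.
        dt = (self.tc.unsqueeze(1) - self.t).clamp(min=1e-3)   # (n_basis, n_in)
        dtm = dt ** self.m.unsqueeze(1).clamp(0.01, 0.99)
        curve = (self.A.unsqueeze(1)
                 + self.B.unsqueeze(1) * dtm
                 + self.C.unsqueeze(1) * dtm
                   * torch.cos(self.omega.unsqueeze(1) * torch.log(dt)
                               - self.phi.unsqueeze(1)))
        # Residuals between the input window and every basis curve.
        return x.unsqueeze(1) - curve.unsqueeze(0)             # (batch, n_basis, n_in)
```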

Unfortunately, I didn't have much luck with this approach. If you manage to come up with a strategy that can actually beat the market consistently, let me know in the comments section. Or don't let me know and keep your profits to yourself, as any rational investor would do.

Alright, that's all for today. I hope you liked it. Subscribe for more videos. Thanks for watching, stay tuned, and see you next time.

References in the Video