Backtesting is the process of running an algorithmic trading strategy on historical data. The ability to backtest a strategy is what distinguishes algorithmic trading from discretionary trading, and what makes it more scientific. If we invented a strategy ourselves, we obviously would want to see how it would have performed in the past. But even if we read about a strategy in a book, with all the performance details in plain sight, we would still want to backtest it ourselves: first, to make sure that the original author did not commit one or more of the common backtesting pitfalls, and second, to understand every nuance of the strategy’s implementation so that we can improve on it.
Let’s first check out those common backtesting pitfalls:
1. Look-ahead bias (or using tomorrow’s price as today’s trading signal)
It is remarkably easy to mistakenly use unknowable future information in a backtest, since we typically have access to the entire dataset all at once. What’s to stop the program from using the next bar’s closing price to trigger a hypothetical trade at the current bar? More subtly, many of us have used one set of data to optimize some trading parameters, and then run a backtest on the same set of data again.
The best cure for look-ahead bias is to pick a backtesting platform that can be turned into a live trading platform at the push of a button. If backtesting and live trading use the same software code, it is impossible for that code to look ahead to the next bar, since doing so would crash the program during live trading. I will discuss in another article which backtesting platforms can also serve as live trading platforms. (To give a preview, many traders are already familiar with TradeStation, MetaTrader or NinjaTrader.)
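To make the bug concrete, here is a minimal sketch of the same naive backtest run with and without look-ahead. All prices are invented, and the three-bar moving-average rule is made up purely for illustration; the point is only that signaling off the next bar’s close inflates the historical P&L in a way live trading could never reproduce:

```python
# Invented daily closes; the rule and the numbers are illustrative only.
closes = [100.0, 101.0, 99.0, 101.0, 95.0, 96.0, 104.0, 105.0]

def backtest(prices, lookahead):
    """Naive rule: buy at today's close if the signal price exceeds the
    mean of the previous three closes; exit at the next day's close."""
    pnl = 0.0
    for t in range(3, len(prices) - 1):
        # The look-ahead bug: the signal peeks at tomorrow's close,
        # which a live system could never do.
        signal = prices[t + 1] if lookahead else prices[t]
        if signal > sum(prices[t - 3:t]) / 3:
            pnl += prices[t + 1] - prices[t]  # enter at t, exit at t + 1
    return pnl

print(backtest(closes, lookahead=True))   # inflated, unattainable P&L
print(backtest(closes, lookahead=False))  # what live trading would see
```

On this toy series the look-ahead version skips the losing day and catches the winning one, turning a losing rule into an apparently profitable one.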
2. Survivorship bias
Survivorship bias arises when we use a database of stocks that exist today to backtest a strategy, ignoring those stocks that were in existence during the time period for the backtest but have since been delisted. For example, a database of S&P 500 stocks that is survivorship-bias-free will contain stocks such as Enron or Worldcom, while one that has survivorship bias will not.
Why is survivorship bias a problem? Consider this simple “toy” strategy: pick the 10 stocks with the lowest prices from a 1000-stock universe at the beginning of a year, and sell them at the beginning of the next year. If we use a database of stocks that has survivorship bias, we will find that this portfolio returned 388% in 2001. But what if we use a good, survivorship-bias-free database? The portfolio will return -42%, because it will contain picks such as ETYS, INTW, and FDHG that had all gone out of business by the end of 2001. This -42% return is realistic, while the 388% is highly unrealistic, because it comes from excluding all those bankrupt companies from our testing.
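The mechanics of this toy strategy can be sketched with a tiny invented universe. The tickers, prices, and returns below are all made up (they do not reproduce the 2001 figures); a delisted stock’s position is treated as a total loss:

```python
# Invented start/end-of-year prices; "delisted" stocks go to zero.
start_prices = {"AAA": 2.0, "BBB": 3.5, "CCC": 50.0, "DDD": 1.0, "EEE": 80.0}
end_prices   = {"AAA": 4.0, "BBB": 0.0, "CCC": 55.0, "DDD": 0.0, "EEE": 90.0}
delisted = {"BBB", "DDD"}

def low_price_return(universe, n=2):
    """Average one-year return of the n lowest-priced stocks in the universe."""
    picks = sorted(universe, key=lambda s: start_prices[s])[:n]
    return sum((end_prices[s] - start_prices[s]) / start_prices[s]
               for s in picks) / n

full_universe  = list(start_prices)                               # bias-free
survivors_only = [s for s in start_prices if s not in delisted]   # biased
```

Running `low_price_return(survivors_only)` overstates the strategy’s return relative to `low_price_return(full_universe)`, because the cheap stocks the strategy is most likely to pick are exactly the ones most likely to delist.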
Buying a survivorship-bias-free database does not have to be an expensive proposition. For example, csidata.com will sell you a database of delisted stocks, to complement their current active universe, for several hundred dollars.
3. Data-snooping bias
When a trading strategy has many rules or many parameters, it will usually perform very well “in-sample” but often perform very poorly “out-of-sample”. In-sample data are the data you used to find the trading rules or parameters that work well. Suppose our data contain only five days’ worth of AAPL prices: $446, $450, $461, $459, $464. It is easy to make a trading rule that says “Buy AAPL unless the last price is $461”. It would be profitable on every day of this in-sample data set, but it would be worthless for predicting the prices in almost any other data set!
To overcome data-snooping bias, we must keep our models simple and ground them in economic or financial principles, many of which are described in academic papers available for free download. Also, after we have backtested a model, we must run it on unseen, out-of-sample data to confirm that it is still profitable. Ultimately, we want to test it using paper trading or walk-forward testing for a certain period of time before risking real money on it.
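The five-day AAPL example above can be turned into a small sketch of the out-of-sample check. The in-sample prices are from the text; the out-of-sample prices are invented, and the rule is the deliberately overfit one that memorizes the single losing day:

```python
# In-sample prices from the text; out-of-sample days are invented.
in_sample  = [446, 450, 461, 459, 464]
out_sample = [462, 458, 466, 463, 460]

def overfit_rule(price):
    # "Buy AAPL unless the last price is $461" -- memorizes the one
    # in-sample day that was followed by a down move.
    return price != 461

def hit_rate(prices, rule):
    """Fraction of days where the rule's buy/skip call matched the next move."""
    hits = 0
    for today, tomorrow in zip(prices, prices[1:]):
        went_up = tomorrow > today
        if rule(today) == went_up:
            hits += 1
    return hits / (len(prices) - 1)

print(hit_rate(in_sample, overfit_rule))   # perfect in-sample
print(hit_rate(out_sample, overfit_rule))  # falls apart out-of-sample
```

The rule scores 100% on the data used to construct it and roughly coin-flip accuracy or worse on the unseen continuation, which is exactly the in-sample/out-of-sample gap that data snooping produces.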
Assuming that we are successful in backtesting a strategy without committing any of these pitfalls, in the next article we will discuss how to tweak an existing strategy to make it more profitable.
Ernest Chan is a hedge fund manager and the author of “Quantitative Trading: How to Build Your Own Algorithmic Trading Business” and “Algorithmic Trading: Winning Strategies and Their Rationale”. Find out more about him at www.epchan.com.
The following article is from one of our external contributors. It does not represent the opinion of Benzinga and has not been edited.