
Machine learning for financial prediction

Update 1: In response to a suggestion from a reader, I’ve added a section on feature selection using the Boruta package.

Update 2: Responding to another suggestion, I’ve added some equity curves of a simple trading system using the knowledge gained from this analysis.

One of the first books I read when I began studying the markets a few years ago was David Aronson’s Evidence Based Technical Analysis . The engineer in me was attracted to the ‘Evidence Based’ part of the title. This was soon after I had digested a trading book that claimed a basis in chaos theory, the link to which actually turned out to be non-existent. Apparently using complex-sounding terms in the title of a trading book lends some measure of credibility. Anyway,  Evidence Based Technical Analysis  is largely a justification of a scientific approach to trading, including a method for rigorous assessment of the presence of data mining bias in backtest results. There is also a compelling discussion based in cognitive psychology of the reasons that some traders turn away from objective methods and embrace subjective beliefs. I find this area fascinating.

Readers of this blog will know that I am very interested in using machine learning to profit from the markets. Imagine my delight when I discovered that David Aronson had co-authored a new book with Timothy Masters titled Statistically Sound Machine Learning for Algorithmic Trading of Financial Instruments – which I will herein refer to as SSML. I quickly devoured the book and have used it as a handy reference ever since. While it is intended as a companion to Aronson’s (free) software platform for strategy development, it contains numerous practical tips for any machine learning practitioner and I’ve implemented most of his ideas in R.

I used SSML to guide my early forays into machine learning for trading, and this series describes some of those early experiments. A detailed review of everything I learned from SSML and the research it inspired would be too voluminous to relate here; what follows is an account of the more significant and practical lessons I picked up along the way.

This post will focus on feature engineering and also introduce the data mining approach. The next post will focus on algorithm selection and ensemble methods for combining the predictions of numerous learners.

The data mining approach

Data mining is just one approach to extracting profits from the markets and is different to a model-based approach. Rather than constructing a mathematical representation of price, returns or volatility from first principles, data mining involves searching for patterns first and then fitting a model to those patterns after the fact. Both model-based and data mining approaches have pros and cons, and I contend that using both approaches can lead to a valuable source of portfolio diversity.

The Financial Hacker summed up the advantages and disadvantages of the data mining approach nicely:

The advantage of data mining is that you do not need to care about market hypotheses. The disadvantage: those methods usually find a vast amount of random patterns and thus generate a vast amount of worthless strategies. Since mere data mining is a blind approach, distinguishing real patterns – caused by real market inefficiencies – from random patterns is a challenging task. Even sophisticated reality checks can normally not eliminate all data mining bias. Not many successful trading systems generated by data mining methods are known today.

David Aronson himself cautions against putting blind faith in data mining methods:

Though data mining is a promising approach for finding predictive patterns in data produced by largely random complex processes such as financial markets, its findings are upwardly biased. This is the data mining bias. Thus, the profitability of methods discovered by data mining must be evaluated with specialized statistical tests designed to cope with the data mining bias. 

I would add that the implicit assumption behind the data mining approach is that the patterns identified will continue to repeat in the future. Of course, the validity of this assumption is unlikely to ever be certain.

Data mining is a term that can mean different things to different people depending on the context. When I refer to a data mining approach to trading systems development, I am referring to the use of statistical learning algorithms to uncover relationships between feature variables and a target variable (in the regression context, these would be referred to as the independent and dependent variables, respectively). The feature variables are observations that are assumed to have some relationship to the target variable and could include, for example, historical returns, historical volatility, various transformations or derivatives of a price series, economic indicators, and sentiment barometers. The target variable is the object to be predicted from the feature variables and could be the future return (next day return, next month return etc), the sign of the next day’s return, or the actual price level (although the latter is not really recommended, for reasons that will be explained below).

Although I differentiate between the data mining approach and the model-based approach, the data mining approach can also be considered an exercise in predictive modelling. Interestingly, the model-based approaches that I have written about previously (for example ARIMA, GARCH, Random Walk etc) assume linear relationships between variables. Modelling non-linear relationships using these approaches is (apparently) complex and time consuming. On the other hand, some statistical learning algorithms can be considered ‘universal approximators’ in that they have the ability to model any linear or non-linear relationship. It was not my intention to get into a philosophical discussion about the differences between a model-based approach and a data mining approach, but clearly there is some overlap between the two. In the near future I am going to write about my efforts to create a hybrid approach that attempts a synergistic combination of classical linear time series modelling and non-linear statistical learning – trust me, it is actually much more interesting than it sounds. Watch this space.

Variables and feature engineering

The prediction target

The first and most obvious decision to be made is the choice of target variable. In other words, what are we trying to predict? For one-day ahead forecasting systems, profit is the usual target. I used the next day’s return normalized to the recent average true range, the implication being that in live trading, position sizes would be inversely proportionate to the recent volatility. In addition, by normalizing the target variable in this way, we may be able to train the model on multiple markets, as the target will be on the same scale for each.
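
For illustration, a target like this could be computed along the following lines in R with the TTR package. The exact scaling used in this post is not shown, so treat this as a sketch under my own assumptions about the data layout:

library(TTR)

# 'prices' is assumed to be a data frame of daily bars with High, Low and Close columns
atr100 <- ATR(prices[, c("High", "Low", "Close")], n = 100)[, "atr"]  # 100-period average true range
chg    <- c(diff(prices$Close), NA)    # next day's close-to-close change, aligned to today's bar
target <- chg / atr100                 # next-day move expressed as a multiple of the recent ATR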

Choosing predictive variables

In SSML, Aronson states that the golden rule of feature selection is that the predictive power should come primarily from the features and not from the model itself. My research corroborated this statement, with many (but not all) algorithm types returning correlated predictions for the same feature set. I found that the choice of features had a far greater impact on performance than choice of model. The implication is that spending considerable effort on feature selection and feature engineering is well and truly justified. I believe it is critical to achieving decent model performance.

Many variables will have little or no relationship with the target variable, and including them will lead to overfitting or other forms of poor performance. Aronson recommends using chi-square tests and Cramer’s V to quantify the relationship between variables and the target. I actually didn’t use this approach, so I can’t comment on it. I used a number of other approaches, including ranking a list of candidate features according to their Maximal Information Coefficient (MIC) and selecting the highest ranked features, Recursive Feature Elimination (RFE) via the caret package in R, an exhaustive search of all linear models, and Principal Components Analysis (PCA). Each of these is discussed below.

Some candidate features

Following is the list of features that I investigated as part of this research. Most were derived from SSML. The list is by no means exhaustive and consists only of derivatives and transformations of the price series; I haven’t yet tested exogenous variables, such as economic indicators or the price histories of other instruments, but I think these are deserving of attention too. It nevertheless provides a decent starting point (a rough sketch of how a few of these features might be computed in R follows the list):

  • 1-day log return
  • Trend deviation: the logarithm of the closing price divided by the lowpass filtered price
  • Momentum: the price today relative to the price x days ago, normalized by the standard deviation of daily price changes
  • ATR: the average true range of the price series
  • Velocity: a one-step ahead linear regression forecast on closing prices
  • Linear forecast deviation: the difference between the most recent closing price and the closing price predicted by a linear regression line
  • Price variance ratio: the ratio of the variance of the log of closing prices over a short time period to that over a long time period
  • Delta price variance ratio: the difference between the current value of the price variance ratio and its value x periods ago
  • The Market Meanness Index: a measure of the likelihood of the market being in a state of mean reversion, created by the Financial Hacker
  • MMI deviation: the difference between the current value of the Market Meanness Index and its value x periods ago
  • The Hurst exponent
  • ATR ratio: the ratio of an ATR calculated over a short (recent) price history to an ATR calculated over a longer period
  • Delta ATR ratio: the difference between the current value of the ATR ratio and its value x bars ago
  • Bollinger width: the log ratio of a moving standard deviation of closing prices to a moving average of closing prices
  • Delta bollinger width: the difference between the current value of the bollinger width and its value x bars ago
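
As a rough illustration of the kind of calculations involved, here is a minimal sketch of a few of these features in R using the TTR package and the same assumed prices data frame as above. The variable names and lookback parameters are my own choices for illustration; the actual implementations behind the results below may differ.

library(TTR)

# 'prices' is assumed to be a data frame of daily bars with High, Low and Close columns
cl  <- prices$Close
hlc <- prices[, c("High", "Low", "Close")]

# 1-day log return
logret1 <- c(NA, diff(log(cl)))

# Momentum: price today relative to the price x days ago,
# normalized by a rolling standard deviation of daily price changes
x <- 5
mom5 <- (cl - c(rep(NA, x), head(cl, -x))) / runSD(c(NA, diff(cl)), n = 20)

# ATR ratio: a short-period ATR relative to a longer-period ATR
atrRat10_100 <- ATR(hlc, n = 10)[, "atr"] / ATR(hlc, n = 100)[, "atr"]

# Bollinger width: log ratio of a moving standard deviation of closes to their moving average
bWidth20 <- log(runSD(cl, n = 20) / runMean(cl, n = 20))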

Thus far I have only considered the most recent value of each variable. I suspect that the recent history of each variable would provide another useful dimension of data to mine. I left this out of the feature selection stage since it makes more sense to firstly identify features whose current values contain predictive information about the target variable before considering their recent histories. Incorporating this from the beginning of the feature selection stage would increase the complexity of the process by several orders of magnitude and would be unlikely to provide much additional value. I base that statement on a number of my own assumptions, not to mention the practicalities of the data mining approach, rather than any hard evidence.

Transforming the candidate features

In my experiments, the variables listed above were used with various cutoff periods (that is, the number of periods used in their calculation). Typically, I used values between 3 and 20 since Aronson states in SSML that lookback periods greater than about 20 will generally not contain information useful to the one period ahead forecast. Some variables (like the Market Meanness Index) benefit from a longer lookback. For these, I experimented with 50, 100, and 150 bars.

Additionally, it is important to enforce a degree of stationarity on the variables. David Aronson again:

Using stationary variables can have an enormous positive impact on a machine learning model. There are numerous adjustments that can be made in order to enforce stationarity such as centering, scaling, and normalization. So long as the historical lookback period of the adjustment is long relative to the frequency of trade signals, important information is almost never lost and the improvements to model performance are vast.

Aronson suggests the following approaches to enforcing stationarity:

  • Scaling: divide the indicator by the interquartile range (note, not by the standard deviation, since the interquartile range is not as sensitive to extremely large or small values).
  • Centering: subtract the historical median from the current value.
  • Normalization: both of the above. Roughly equivalent to traditional z-score standardization, but uses the median and interquartile range rather than the mean and standard deviation in order to reduce the impact of outliers.
  • Regular normalization: scales the data over the lookback period via (x - min)/(max - min), then re-centers it to the range -1 to +1.

In my experiments, I generally adopted regular normalization using the most recent 50 values of the features.
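
As an illustration, regular normalization over a rolling 50-value window might look something like the following sketch (the function name and implementation are my own; this code is not shown in the original):

# rescale each value to the range -1 to +1 using the most recent n values of the series
regular.normalize <- function(x, n = 50) {
  out <- rep(NA_real_, length(x))
  for (i in n:length(x)) {
    window <- x[(i - n + 1):i]
    rng <- max(window) - min(window)
    out[i] <- if (rng == 0) 0 else 2 * (x[i] - min(window)) / rng - 1
  }
  out
}

# apply to every feature column, leaving the target (column 1) untouched
gu_data_norm <- gu_data
gu_data_norm[, -1] <- lapply(gu_data[, -1], regular.normalize)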

Removing highly correlated variables

It makes sense to remove variables that are highly correlated with other variables since they are unlikely to provide additional information that isn’t already contained in the other variables. Keeping these variables will also add unnecessary computation time, increase the risk of overfitting and bias the final model towards the correlated variables.

library(corrplot)
library(caret)    # findCorrelation() comes from caret

cor.mat <- cor(gu_data[, -1])                 # compute the correlation matrix; features in data frame 'gu_data' (target in column 1)
highCor <- findCorrelation(cor.mat, 0.50)     # apply correlation filter: indices of features to drop
gu_data_filt <- gu_data[, -(highCor + 1)]     # drop those features (this step is implied by the text but not shown in the original)
cor.mat.filt <- cor(gu_data_filt[, -1])       # correlation matrix of the remaining features
corrplot(cor.mat.filt, order = "hclust", type = 'lower')   # plot correlation matrix

These are the remaining variables and their pairwise correlations:

[Figure: pairwise correlation matrix of the remaining (filtered) variables]

Feature selection via Maximal Information

The maximal information coefficient (MIC) is a non-parametric measure of two-variable dependence designed specifically for rapid exploration of many-dimensional data sets. Read more about MIC here. I used the minerva package in R to rank my variables according to their MIC with the target variable (next day’s return normalized to the 100-period ATR). Results shown below and throughout this post are for the GBP/USD exchange rate from 19 March 2009 to 31 December 2014 (daily data):

## MIC
library(minerva)

# variables and target in data frame gu_data_filt (target in column 1)
mine.filt <- mine(gu_data_filt[, -1], gu_data_filt[, 1])
mic <- as.data.frame(mine.filt$MIC)
mic.ordered <- mic[order(mic[, 1]), , drop = FALSE]

### RESULTS
MMI           0.1210261
atrRat10_20   0.1287799
bWidth10      0.1315506
apc3          0.1353889
ATR7          0.1356449
mom5          0.1404709
deltaPVR5     0.1447020
bWidth100     0.1454078
deltaMMI      0.1467144
deltaATRRat5  0.1503392
bWidth20      0.1517109
atrRat10_100  0.1559397
velocity10    0.1601864
deltabWidth3  0.1607715
mom3          0.1653665

These results show that none of the features have a particularly high MIC with respect to the target variable, which is what I would expect from noisy data such as daily exchange rates. However, certain variables have a higher MIC than others. In particular, the 3-period momentum, the 3-period delta bollinger width and the 10-period velocity outperform the rest of the variables by a decent margin.

Notice that the top 2 MICs come from variables that have a short lookback period (3 days). The top variables are also fairly simple measures of trend, momentum and price volatility, or the recent changes in those characteristics.

Recursive feature elimination

I also used recursive feature elimination (RFE) via the caret package in R to isolate the most predictive features from my list of candidates. RFE is an iterative process: a model is constructed from the entire set of features, the features are ranked by their importance to the model, the weakest are discarded, and the process is repeated on the reduced set until all features have been eliminated. The subset size with the best performance is identified and the corresponding feature set is declared the most useful.

I performed RFE using a random forest model:

# define the control function and CV method
cntrl <- rfeControl(functions = rfFuncs, method = "repeatedcv", repeats = 50, number = 5)
rfe.results <- rfe(gu_data_filt[, -1], gu_data_filt[, 1], sizes = c(2:15), rfeControl = cntrl)
print(rfe.results)
# list final feature set
predictors(rfe.results)

#### Results
The top 5 variables (out of 15):
   atrRat10_100, atrRat10_20, velocity10, ATR7, mom5

In this case, the RFE process has emphasized variables that describe volatility and trend, but has decided that the best performance is obtained by incorporating all 15 variables into the model. Here’s a plot of the cross validated performance of the best feature set for various numbers of features:

[Figure: cross-validated RMSE versus number of features retained by RFE]

I am tempted to take the results of the RFE with a grain of salt. My reasons are:

  1. The RFE algorithm does not consider interactions between variables. For example, assume that two variables individually have no effect on model performance, but due to some relationship between them they improve performance when both are included in the feature set. RFE is likely to miss this predictive relationship.
  2. The performance of RFE is directly related to the ability of the specific algorithm (in this case random forest) to uncover relationships between the variables and the target. At this stage of the process, we have absolutely no evidence that the random forest model is applicable in this sense to our particular data set.
  3. Finally, the implementation of RFE that I used was the ‘out of the box’ caret version, which uses root mean squared error (RMSE) as the objective function. I don’t believe that RMSE is the best objective function for this data, due to the significant influence of extreme values on model performance. It is possible to have a low RMSE but poor overall trading performance if the model is accurate across the middle regions of the target space (corresponding to small wins and losses) but inaccurate in the tails (corresponding to big wins and losses).

In order to address (3) above, I implemented a custom summary function so that the RFE was performed such that the cross-validated absolute return was maximized. I also applied the additional criterion that only predictions with an absolute value of greater than 5 would be considered under the assumption that in live trading we wouldn’t enter positions unless the prediction exceeded this value. The custom summary function that I used and the results are as follows:

absretSummary <- function(data, lev = NULL, model = NULL) {
  # take a position only when the magnitude of the prediction exceeds 5
  positions <- ifelse(abs(data[, "pred"]) > 5, sign(data[, "pred"]), 0)
  trades <- abs(c(1, diff(positions)))   # marks position changes (not used in the profit calculation)
  profits <- positions * data[, "obs"]
  profit <- sum(profits)
  names(profit) <- 'profit'
  return(profit)
}

### Results
The top 2 variables (out of 2):
   atrRat10_100, atrRat10_20

The results are a little different to those obtained using RMSE as the objective function. The focus is still well and truly on the volatility indicators, but in this case the best cross validated performance occurred when selecting only 2 out of the 15 candidate variables. Here’s a plot of the cross validated performance of the best feature set for various numbers of features:

[Figure: cross-validated absolute return versus number of features retained by RFE]

The model clearly performs better in terms of absolute return for a smaller number of predictors. Performance bottoms out at 8 predictors and then improves, but never again reaches the performance obtained with 2-4 predictors. This is consistent with Aronson’s assertion that we should stick with at most 3-4 variables, otherwise overfitting is almost unavoidable.

The performance profile of the model tuned on absolute return is very different to that of the model tuned on RMSE, which displays a consistent improvement as the number of predictors is increased. Using RMSE as the objective function (which seems to be the default in many applications I’ve come across) would result in a very sub-optimal final model in this case. This highlights the importance of ensuring that the objective function is a good proxy for the performance being sought in practice.

In the RFE example above, I used 50 iterations of 5-fold cross validation, but I haven’t held out a test set of data or estimated performance with an inner cross validation loop.

Models with in-built feature selection

A number of machine learning algorithms have feature selection in-built. Max Kuhn’s website for the caret package contains a list of such models that are accessible through caret. I’ll apply several and compare the features selected to those selected with other methods. For this experiment, I used a diverse range of algorithms, including various ensemble methods and models that capture both linear and non-linear relationships:

  • Bagged multivariate adaptive regression splines (MARS)
  • Boosted generalized additive model (bGAM)
  • Lasso
  • Spike and slab regression (SSR)
  • Model tree
  • Stochastic gradient boosting (SGB)
  • Bayesian additive regression trees (BART)

For each model, I did only very basic hyperparameter tuning within caret using 5-fold cross validation repeated 10 times. This returns the model with the best cross-validated performance (that is, the best average absolute return over each cross validation run). In practice, I would actually repeat cross validation at least 100 times and average the performance across the resulting 500 models. Maximization of absolute return was used as the objective function.
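
For reference, the tuning setup for each of these models looked roughly like the following sketch, shown here for stochastic gradient boosting via caret’s gbm method. The control settings and the reuse of the cntrl name are my assumptions; this code is not shown in the original post.

library(caret)

# 5-fold CV repeated 10 times, scored with the custom absolute-return summary defined earlier
cntrl <- trainControl(method = "repeatedcv", number = 5, repeats = 10,
                      summaryFunction = absretSummary)

set.seed(101)
sgb.model <- train(gu_data_filt[, -1], gu_data_filt[, 1],
                   method = "gbm", metric = "profit", maximize = TRUE,
                   trControl = cntrl, verbose = FALSE)

print(sgb.model)    # cross-validated profit for each candidate hyperparameter combination
varImp(sgb.model)   # variable importance for the chosen model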

The table below shows the top 5 variables and the performance of the final cross validated model for each algorithm:

Top Variables By Algorithm

Algorithm    Abs.Ret  Var.1         Var.2         Var.3      Var.4        Var.5
MARS         316      mom5          bWidth10      deltaMMI   velocity10   apc3
bGAM         494      atrRat10_100  mom5          bWidth100  deltaPVR5    ATR7
Lasso        251      velocity10    mom3          mom5       apc3         deltaPVR5
SSR          117      atrRat10_100  mom5          bWidth100  deltaPVR5    ATR7
Model tree   121      atrRat10_100  mom5          bWidth100  ATR7         deltaATRRat5
SGB           63      atrRat10_100  deltaATRRat5  ATR7       mom5         deltaPVR5
BART         198      atrRat10_100  bWidth20      mom5       atrRat10_20  deltaMMI

We can see that 5-day momentum is included in the top 5 predictors for every algorithm I investigated. The ratio of the 10- to 100-day ATR featured 5 out of 7 times, and was the top feature every time it was selected. The bollinger width variable was selected 5 times, in one form or another (10- and 20-day variables once each and 100-day variable thrice). Other notable mentions include the 7-day ATR and the 5-day change in the price variance ratio which were both included in the top variables 4 times. The table below summarizes the frequency with which each variable was selected:

[Table: frequency with which each variable was selected in the top 5 across algorithms]

13 of the 15 variables were selected in the top 5 by at least one algorithm. Only MMI and deltabWidth3 never made the top 5.

Model selection using glmulti

The glmulti package fits all possible unique generalized linear models from the variables and returns the ‘best’ models as determined by an information criterion (Akaike’s, in this case). The package is essentially a wrapper for the glm (generalized linear model) function that allows selection of the ‘best’ model or models, providing insight into the most predictive variables. By default, glmulti builds models from the main effects only, but there is an option to also include pairwise interactions between variables. This increases the computation time considerably, and I found that the resulting ‘best’ models were orders of magnitude more complex than those obtained using main effects only, while their results were merely on par.

library(glmulti)
# the fitting call is not shown in the original; this is an assumed reconstruction based on the printed summary
x <- glmulti(y = "target", xr = names(gu_data_filt)[-1], data = gu_data_filt,
             level = 1, method = "h", crit = "aic", confsetsize = 1000)

plot(x, type = 's', col = 'blue')
print(x)

glmulti.analysis
Method: h / Fitting: glm / IC used: aic
Level: 1 / Marginality: FALSE
From 1000 models:
Best IC: 8443.65959954518
Best model:
[1] "target ~ 1 + atrRat10_100 + atrRat10_20 + bWidth100 + ATR7"
Evidence weight: 0.00982721902929488
Worst IC: 8449.63630478929
17 models within 2 IC units.
901 models to reach 95% of evidence weight.

We retain the models whose AICs are within 2 units of the ‘best’ model; 2 units is a rule-of-thumb threshold within which models are, for all intents and purposes, likely to be on par in terms of their performance:

weights <- weightable(x)
bst <- weights[weights$aic <= min(weights$aic) + 2, ]
print(bst)

   model
1  target ~ 1 + atrRat10_100 + atrRat10_20 + bWidth100 + ATR7
2  target ~ 1 + atrRat10_100 + bWidth100 + ATR7
3  target ~ 1 + bWidth100 + ATR7
4  target ~ 1 + atrRat10_100 + deltaATRRat5 + atrRat10_20 + bWidth100 + ATR7
5  target ~ 1 + atrRat10_100 + deltaMMI + atrRat10_20 + bWidth100 + ATR7
6  target ~ 1 + atrRat10_100 + atrRat10_20 + bWidth100 + bWidth20 + ATR7
7  target ~ 1 + atrRat10_100 + deltaMMI + bWidth100 + ATR7
8  target ~ 1 + velocity10 + bWidth100 + ATR7
9  target ~ 1 + atrRat10_100 + deltabWidth3 + atrRat10_20 + bWidth100 + ATR7
10 target ~ 1 + mom5 + atrRat10_100 + atrRat10_20 + bWidth100 + ATR7
11 target ~ 1 + bWidth100 + bWidth20 + ATR7
12 target ~ 1 + mom3 + atrRat10_100 + atrRat10_20 + bWidth100 + ATR7
13 target ~ 1 + velocity10 + atrRat10_100 + atrRat10_20 + bWidth100 + ATR7
14 target ~ 1 + atrRat10_100 + MMI + atrRat10_20 + bWidth100 + ATR7
15 target ~ 1 + apc3 + atrRat10_100 + atrRat10_20 + bWidth100 + ATR7
16 target ~ 1 + deltaPVR5 + atrRat10_100 + atrRat10_20 + bWidth100 + ATR7
17 target ~ 1 + atrRat10_100 + bWidth10 + atrRat10_20 + bWidth100 + ATR7

Notice any patterns here? The top models all selected the 7-day ATR and a bollinger width variable. The ATR ratios also feature heavily. Noticeably sparse are the momentum variables. This is confirmed with this plot of the model averaged variable importance (averaged over the best 1,000 models):

[Figure: model-averaged variable importance across the best 1,000 glmulti models]

Note that these models only considered the linear main effects of each variable on the target. Of course, there is no guarantee that any relationship is linear, if it exists at all. Still, this method provides some useful insight.

One of the great things about glmulti is that it facilitates model-averaged predictions – more on this when I delve into ensembles in part 2 of this series.

Generalized linear model with stepwise feature selection

Finally, I used a generalized linear model with stepwise feature selection:

# generalized linear model with stepwise selection
# 'cntrl' is a caret trainControl object using the absretSummary objective (its definition is not shown in the original)
glmStepAICModel <- train(gu_data_filt[, -1], gu_data_filt[, 1], method = "glmStepAIC",
                         trControl = cntrl, metric = "profit", maximize = TRUE)

print(glmStepAICModel$finalModel)

Coefficients:
 (Intercept)  atrRat10_100  atrRat10_20  bWidth100     ATR7
     -11.166       -11.319        8.911     -5.287  913.226

Degrees of Freedom: 766 Total (i.e. Null);  762 Residual
Null Deviance:      2709000
Residual Deviance:  2670000    AIC: 8444

The final model selected 4 of the 15 variables: the ratio of the 10- to 100-day ATR, the ratio of the 10- to 20-day ATR, the 100-day bollinger width and the 7-day ATR.

Boruta: all relevant feature selection

Boruta finds relevant features by comparing the importance of the original features with the importance of random variables. Random variables are obtained by permuting the order of values of the original features. Boruta finds a minimum, mean and maximum value of the importance of these permuted variables, and then compares these to the original features. Any original feature that is found to be more relevant than the maximum random permutation is retained.

Boruta does not measure the absolute importance of individual features; rather, it compares each feature to random permutations of the original variables and determines its relative importance. This approach very much resonates with me, and I suspect it will prove useful for weeding out uninformative features from noisy financial data. The idea of adding randomness to the sample and then comparing performance is analogous to the approach I use to benchmark my systems against a random trader with a similar trade distribution.
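
Here is a minimal sketch of how this might be run with the Boruta package, assuming the same gu_data_filt layout used above (target in column 1); the settings are my own and the original code is not shown:

library(Boruta)

set.seed(101)
# compare each of the 15 filtered features against shadow (permuted) copies
boruta.results <- Boruta(gu_data_filt[, -1], gu_data_filt[, 1], maxRuns = 1000)

print(boruta.results)
plot(boruta.results, las = 2)            # importance box plots: shadow features in blue, confirmed in green, rejected in red
getSelectedAttributes(boruta.results)    # features confirmed as relevant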

The box plots in the figure below show the results obtained when I ran the Boruta algorithm for the 15 filtered variables for 1,000 iterations. The blue box plots show the permuted variables of minimum, mean and maximum importance, the green box plots indicate the original features that ranked higher than the maximum importance of the random permuted variables, and the variables represented by the red box plots are discarded.

[Figure: Boruta importance box plots for the 15 filtered variables]

The variables retained by the Boruta algorithm were: MMI, 5-period momentum, 5-period delta ATR ratio, delta MMI, 7-day ATR, 10-day velocity, the 10-to-20 period ATR ratio and the 10-to-100 period ATR ratio. The latter 4 were ranked significantly higher than the former 4, with the 10-to-100 day ATR ratio being the clear winner. These results are largely consistent with those obtained through the other methods, perhaps with the exception of the inclusion of the MMI and the delta MMI; however, these variables were ranked only marginally better than the best randomly permuted variable.

Side note: The developers state that “Boruta” means “Slavic spirit of the forest.” As something of a slavophile myself, I did some googling and discovered that this description is something of a euphemism. Check out some of the items that pop up in a Google image search!

Discussion of feature selection methods

It is important to note that any feature selection process naturally invites a degree of selection bias. For example, from a large set of uninformative variables, a small number may randomly correlate with the target variable. The selection algorithm would then rank these variables highly. The error would only be (potentially) uncovered through cross validation of the selection algorithm or by using an unseen test or validation set. Feature selection is difficult and can often make predictive performance worse, since it is easy to over-fit the feature selection criterion. It is all too easy to end up with a subset of attributes that works really well on one particular sample of data, but not necessarily on any other. There is a fantastic discussion of this at the Statistics Stack Exchange community that I have linked here because it is just so useful.

It is critical to take steps to minimize selection bias at every opportunity. The results of any feature selection process should be cross validated or tested on an unseen hold out set. If the hold out set selects a vastly different set of predictors, something has obviously gone wrong – or the features are worthless. The approach I took in this post was to cross validate the results of each test that I performed, with the exception of the Maximal Information Criterion and glmulti approaches. I’ve also selected features based on data for one market only. If the selected features are not robust, this will show up with poor performance when I attempt to build predictive models for other markets using these features.
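
One simple sanity check along these lines is to run the same selection procedure on two disjoint portions of the data and compare the selected sets, as in this sketch (my own construction, reusing the Boruta example from above):

# split the data chronologically into two halves
n <- nrow(gu_data_filt)
first.half  <- gu_data_filt[1:floor(n / 2), ]
second.half <- gu_data_filt[(floor(n / 2) + 1):n, ]

library(Boruta)
set.seed(101)
sel.first  <- getSelectedAttributes(Boruta(first.half[, -1],  first.half[, 1],  maxRuns = 500))
sel.second <- getSelectedAttributes(Boruta(second.half[, -1], second.half[, 1], maxRuns = 500))

intersect(sel.first, sel.second)   # features selected on both halves are more likely to be robust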

I think that it is useful to apply a wide range of methods for feature selection, and then look for patterns and consistencies across these methods. This approach seems intuitively far more likely to yield useful information than drawing absolute conclusions from a single feature selection process. Applying this logic to the approach described above, we can conclude that the 5-day momentum, the ratio of the 10- to 100-day ATR, the ratio of the 10- to 20-day ATR, the 100-day bollinger width and the 7-day ATR are probably the most likely to yield useful information, since they show up in most of the feature selection methods that I investigated. Other variables that may be worth considering include the 5-day delta price variance ratio, the 3-day delta MMI and the 10-day velocity.

In part 2 of this article, I’ll describe how I built and combined various models based on these variables.

Principal Components Analysis

An alternative to feature selection is Principal Components Analysis (PCA), which attempts to reduce the dimensionality of the data while retaining the majority of the information. PCA is a linear technique: it transforms the data by linearly projecting it onto a lower dimension space while preserving as much of its variation as possible. Another way of saying this is that PCA attempts to transform the data so as to express it as a sum of uncorrelated components.

Again, note that PCA is limited to a linear transformation of the data; there is no guarantee that a non-linear transformation wouldn’t be better suited. Another significant assumption when using PCA is that the principal components of future data will look like those of the training data. It is also possible that the smallest component, describing the least variance, is the only one carrying information about the target variable, and it would likely be lost when only the major variance contributors are retained.
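
Before the model comparison, it can be informative to see how much variance the components actually capture. Here is a quick sketch with base R’s prcomp (caret’s preProcess = 'pca' applies its own centering, scaling and component threshold, so the numbers may differ):

# PCA on the 15 filtered features (centered and scaled)
pca.fit <- prcomp(gu_data_filt[, -1], center = TRUE, scale. = TRUE)
summary(pca.fit)    # proportion of variance explained by each component

# cumulative variance explained
plot(cumsum(pca.fit$sdev^2) / sum(pca.fit$sdev^2), type = 'b',
     xlab = 'Number of components', ylab = 'Cumulative proportion of variance')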

To investigate the effects of PCA on model performance, I cross validated 2 random forest models, the first using the principal components of the 15 variables, and the other using all 15 variables in their raw form. I chose the random forest model since it includes feature selection and thus may reveal some insights about how PCA stacks up in relation to other feature selection methods. For both models, I performed 5-fold cross validation repeated 200 times for a total of 1,000 surrogate models. I also used the same random seed so that the cross validation folds would be the same across both models, allowing direct comparison.

set.seed(101)
pca.model <- train(gu_data_filt[, -1], gu_data_filt[, 1], method = 'rf', preProcess = c('pca'),
                   trControl = cntrl)

set.seed(101)
raw.model <- train(gu_data_filt[, -1], gu_data_filt[, 1], method = 'rf',
                   trControl = cntrl)

In order to infer the difference in model performance, I collected the results from each resampling iteration of both final models and compared their distributions via a pair of box and whisker plots:

resamp.results <- resamples(list(PCA = pca.model,
                                 RAW = raw.model))

trellis.par.set(theme = col.whitebg())
bwplot(resamp.results, layout = c(1, 1))

[Figure: box and whisker plots of resampled profit, PCA-transformed versus raw variables]

The model built on the raw data outperforms the model built on the data’s principal components in this case. The resampled mean profit is higher and the distribution is shifted in the positive direction. Sadly, however, both distributions look only slightly better than random, and both are wide.

A simple trading system

I will go into more detail about building a practical trading system using machine learning in the next post, but the following demonstrates a simple trading system based on some of the information gained from the analysis presented above. The system is based on three of the indicators that the feature selection analysis identified as being predictive of the target variable: 10-day velocity, 3-day momentum and 7-day ATR. The data used was the GBP/USD exchange rate’s daily closing price (at midnight GMT) from 2009-2015. I trained a generalized boosted regression model (via the gbm package in R) using these indicators as the independent variables to predict the next day’s return normalized to the recent ATR.
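
A rough sketch of how such a model might be fit with the gbm package follows. The hyperparameters and data frame layout are my assumptions, and for simplicity the sketch uses the standard squared-error loss rather than the profit-based objective described above:

library(gbm)

set.seed(101)
# 'train_data' is assumed to hold the ATR-normalized next-day return ('target')
# plus the velocity10, mom3 and ATR7 features
gbm.model <- gbm(target ~ velocity10 + mom3 + ATR7,
                 data = train_data,
                 distribution = "gaussian",
                 n.trees = 2000,
                 interaction.depth = 3,
                 shrinkage = 0.01,
                 cv.folds = 5)

best.iter <- gbm.perf(gbm.model, method = "cv")                        # number of trees chosen by cross validation
preds <- predict(gbm.model, newdata = test_data, n.trees = best.iter)  # predicted normalized next-day returns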

Firstly, I investigated the cross-validated performance on the entire data set and compared this performance with the distribution of performances from a system based on random trading. I split the data set into 5 random segments, trained a model (with an objective function that maximized total profit) on 4 of the 5 segments and then tested it on the 5th. The result of the test on the 5th, out-of-sample segment was saved as the performance statistic for that model run. I repeated this process 5 times such that each time a different segment was held out as the test set, and then repeated the whole procedure a total of 1,000 times so that I ended up with 5,000 instances of the total profit performance statistic. I compared this distribution to the distribution obtained by taking N/5 random trades a total of 5,000 times, where N is the number of next-day return observations in the entire data set. The distributions of the results are shown below:

[Figure: distributions of total profit, cross-validated model versus random trading]

There is almost no difference in the distributions of the random trading strategy and the strategy created with the carefully selected features! This doesn’t look good, and suggests that for all my effort, the features identified as being predictive turned out to be little better than random. Or did they?

The returns series of most financial instruments consists of a relatively large number of small positive and small negative values and a smaller number of large positive and large negative values. I assert that the values whose magnitude is smaller are more random in nature than the values whose magnitude is large. On any given day, all things being equal, a small negative return could turn out to be a small positive return by the time the close rolls around, or vice versa, as a result of any number of random occurrences related to the fundamentals of the exchange rate. These same random occurrences are less likely to push a large positive return into negative territory and vice versa, purely on account of the size of the price swings involved. Following this logic, I hypothesize that my model is likely to be more accurate in its extreme predictions than in its ‘normal’ range.

We can test this hypothesis on the simple trading strategy described above by entering positions only when the model predicts a return that is large in magnitude. I divided the data into training and testing sets using a 75:25 split. I trained the model on the 75% training data set and tested it on the 25% testing data set using increasing prediction thresholds as my entry criteria. Here are the results, along with the buy and hold return from the testing data set:

[Figure: out-of-sample equity curves for increasing prediction thresholds, with the buy and hold return]

Now the strategy is looking a lot more enticing. We can see that increasing the prediction threshold for entering a trade significantly improves the out of sample performance compared with the buy and hold strategy and the strategy based on raw model predictions (corresponding to a prediction threshold of zero). The prediction threshold can be adjusted depending on the trader’s appetite for high returns (threshold = 3) as opposed to minimal drawdown (threshold = 20).
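
For reference, the threshold filter itself amounts to something like the following, reusing the predictions from the gbm sketch above (the threshold values are taken from the discussion; everything else is my assumption):

thresholds <- c(0, 3, 5, 10, 20)
for (th in thresholds) {
  positions  <- ifelse(abs(preds) > th, sign(preds), 0)   # trade only when the prediction magnitude exceeds the threshold
  trade.rets <- positions * test_data$target              # normalized next-day returns captured by those positions
  cat(sprintf("threshold %5.1f: total profit %8.2f from %d trades\n",
              th, sum(trade.rets, na.rm = TRUE), sum(positions != 0)))
}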

Conclusions

Following are the generalizations that will inform the next stage of model development:

  • The MIC analysis implied that variables based on only 3-5 days of price history are the most useful in predicting the next day’s normalized return. 3-day momentum was a clear winner, and delta bollinger width, 10-day velocity and the 10-to-100-day ATR ratio also scored relatively well.
  • The RFE analysis indicated that it may be prudent to focus on variables that measure volatility or recent changes in volatility, for lookback periods of up to 10 days.
  • An exhaustive search of all possible generalized linear models considering main effects only, using glmulti, implied that the 7-day ATR and bollinger width variables are most predictive. The ATR ratios are also regularly selected in the top models.
  • Stepwise feature selection using a generalized linear model selected the 10-to-100- and 10-to-20- day ATRs, the 100-day bollinger width and the 7-day ATR.
  • Transforming the variables using PCA reduced the performance of a random forest model relative to using the raw variables.

The same features seem to be selected over and over again using the different methods. Is this just a fluke, or has this long and exhaustive data mining exercise revealed something of practical use to a trader? In part 2 of this series, I will investigate the performance of various machine learning algorithms based on this feature selection exercise. I’ll compare different algorithms and then investigate combining their predictions using ensemble methods with the objective of creating a useful trading system.

References

Aronson, D. 2006, Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals

Aronson, D. and Masters, T. 2014, Statistically Sound Machine Learning for Algorithmic Trading of Financial Instruments: Developing Predictive-Model-Based Trading Systems Using TSSB
