3.6 Model Optimization and Tuning
The search for the best tuning parameter values can be done in many ways but most fall into two main categories: those that predefine which values to evaluate and those that incrementally determine the values. In the first case, the most well-known procedure is grid search. Here, a set of candidate tuning parameter values are specified and then evaluated. In some cases, the model will have more than one tuning parameter and, in this case, a candidate parameter combination is multidimensional. We recommend using resampling to evaluate each distinct parameter value combination to get good estimates of how well each candidate performs. Once the results are calculated, the “best” tuning parameter combination is chosen and the final model is fit to the entire training set with this value. The best combination can be determined in various ways but the most common approach is to pick the candidate with the empirically best results.
As an example, a simple \(K\)-nearest neighbors model requires a value for the number of neighbors. For the OkCupid data, this model will be used to predict whether the author of a profile works in a STEM field. In this context, the predictors contain many different types of profile characteristics, so a “nearest neighbor” is really a similar profile across many characteristics.
We will predefine the candidate set to be \(K = 1, 3, \ldots, 201\)28. When combined with the same 10-fold cross-validation process, a total of 1,010 temporary models will be used to determine a good value of \(K\). Once the best value of \(K\) is chosen, one final model is created using the optimal number of neighbors. The resampling profile is shown in Figure 3.11. Each black point in this graph is the average performance of ten different models, each estimated using a distinct 90% of the training set. The configuration with the largest area under the ROC curve used 201 neighbors, with a corresponding AUC of 0.806. Figure 3.11 also shows the individual resampling profiles for each of the ten resamples. The small amount of resample-to-resample variation in these data is mostly due to the size of the training set29.
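To make this procedure concrete, the sketch below tunes a \(K\)-nearest neighbors classifier over the candidate values above using 10-fold cross-validation and the area under the ROC curve. It uses scikit-learn and synthetic stand-in data; the library choice, the data, and the variable names are illustrative assumptions rather than the code used for the analyses in this book.

```python
# A minimal sketch of grid search with resampling (assumed scikit-learn API and
# synthetic stand-in data, not the actual OkCupid analysis).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data standing in for the OkCupid training set.
X_train, y_train = make_classification(n_samples=2000, n_features=20, random_state=1)

# Candidate values K = 1, 3, ..., 201 (101 settings in total).
param_grid = {"n_neighbors": list(range(1, 202, 2))}

search = GridSearchCV(
    estimator=KNeighborsClassifier(),
    param_grid=param_grid,
    scoring="roc_auc",                # area under the ROC curve
    cv=StratifiedKFold(n_splits=10),  # the same style of 10-fold resampling
    n_jobs=-1,
)
search.fit(X_train, y_train)

print(search.best_params_)            # empirically best number of neighbors
print(search.best_score_)             # its resampled mean ROC AUC
final_model = search.best_estimator_  # refit on the entire training set
```

With 101 candidate values and ten resamples, this evaluates 1,010 temporary models before the final refit on the full training set.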
When there are many tuning parameters associated with a model, there are several ways to proceed. First, a multidimensional grid search can be conducted in which candidate values are defined for each parameter and the grid of all combinations is evaluated. In some cases, this can be very inefficient. Another approach is to define a range of possible values for each parameter and to randomly sample the multidimensional space enough times to cover a reasonable portion of it (Bergstra and Bengio 2012). This random search grid can then be resampled in the same way as a more traditional grid. The procedure can be very beneficial when there are a large number of tuning parameters and no a priori notion of which values should be used. A large regular grid may be inefficient to search, especially if the performance profile is fairly stable, with little change over some range of a parameter. Neural networks, gradient boosting machines, and other models can effectively be tuned using this approach30.
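A minimal version of random search, again using scikit-learn and synthetic data purely as assumed stand-ins, might look like the following: a range or distribution is declared for each tuning parameter, a fixed number of candidate combinations is drawn at random, and each combination is resampled just as a regular grid would be.

```python
# A sketch of random search for a gradient boosting model (assumed scikit-learn
# API; the parameter ranges here are illustrative, not taken from the text).
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X_train, y_train = make_classification(n_samples=500, n_features=20, random_state=1)

param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": randint(1, 8),
    "learning_rate": uniform(0.001, 0.3),  # uniform on [0.001, 0.301)
    "subsample": uniform(0.5, 0.5),        # uniform on [0.5, 1.0)
}

random_search = RandomizedSearchCV(
    estimator=GradientBoostingClassifier(),
    param_distributions=param_distributions,
    n_iter=20,          # number of randomly sampled candidate combinations
    scoring="roc_auc",
    cv=10,
    random_state=42,
)
random_search.fit(X_train, y_train)
print(random_search.best_params_, random_search.best_score_)
```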
To illustrate this procedure, the OkCupid data was used once again. A single-layer, feed-forward neural network was used to model the probability of being in the STEM field using the same predictors as the previous two models. This model is an extremely flexible nonlinear classification system with many tuning parameters. See Goodfellow, Bengio, and Courville (2016) for an excellent primer on neural networks and deep learning models.
The main tuning parameters for the model are:
- The number of hidden units. This parameter controls the complexity of the neural network. Larger values enable higher performance but also increase the risk of overfitting. For these data, the number of units in the hidden layer was randomly selected to be between 2 and 20.
- The activation function. The nonlinear function set by this parameter links different parts of the network. Three different functions were used: traditional sigmoidal curves, tanh, or rectified linear unit (ReLU) functions.
- The dropout rate. This is the rate at which coefficients are randomly set to zero during training iterations and can attenuate overfitting (Nitish et al. 2014). Rates between 0 and 80% were considered.
The fitting procedure for neural network coefficients can be numerically challenging. There are usually a large number of coefficients to estimate, and there is a significant risk of converging to a local optimum. Here, we use a gradient-based optimization method called RMSProp31 to fit the model. This is a modern algorithm for finding coefficient values, and it has several tuning parameters of its own32 (a code sketch showing how these parameters enter the model definition follows the list below):
- The batch size controls how many of the training set data points are randomly exposed to the optimization process at each iteration. This has the effect of reducing potential overfitting by providing some randomness to the optimization process. Batch sizes between 10 and 40K were considered.
- The learning rate parameter controls the rate of descent during the parameter estimation iterations; these values were constrained to be between zero and one.
- A decay rate that decreases the learning rate over time (ranging between zero and one).
- The root mean square gradient scaling factor (\(\rho\)) controls how much the gradient is normalized by recent values of the squared gradients. Smaller values of this parameter give more emphasis to recent gradients. The range of this parameter was set to be [0.0, 1.0].
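The sketch below shows one way these parameters could drive the definition of such a network. It uses the Keras API and expresses the learning rate decay as an exponential schedule; both of these choices, along with all variable names, are assumptions made for illustration rather than a description of how the models in this chapter were actually fit.

```python
# Hypothetical sketch: a single-hidden-layer network whose structure and
# RMSProp optimizer are controlled by the tuning parameters described above.
from tensorflow import keras

def build_network(n_predictors, units, dropout, activation, learn_rate, decay, rho):
    model = keras.Sequential([
        keras.Input(shape=(n_predictors,)),
        keras.layers.Dense(units, activation=activation),  # hidden units / activation
        keras.layers.Dropout(dropout),                      # dropout rate
        keras.layers.Dense(1, activation="sigmoid"),        # STEM class probability
    ])
    # One possible (assumed) encoding of the decay parameter: an exponential
    # learning-rate schedule applied to the initial learning rate.
    schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=learn_rate, decay_steps=1000, decay_rate=1.0 - decay
    )
    model.compile(
        optimizer=keras.optimizers.RMSprop(learning_rate=schedule, rho=rho),
        loss="binary_crossentropy",
        metrics=[keras.metrics.AUC(name="roc_auc")],
    )
    return model

# The batch size parameter would enter when training, e.g.:
# model.fit(X_train, y_train, batch_size=batch_size, epochs=..., validation_data=...)
```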
For this model, 20 different seven-dimensional tuning parameter combinations were created by randomly sampling uniform distributions within the ranges above. Each of these settings was evaluated using the same 10-fold cross-validation splits used previously.
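Drawing such a design is straightforward; a small sketch of how the 20 candidate settings could be generated (using Python's standard random module, purely as an illustration) is shown below.

```python
# Randomly sample 20 seven-dimensional tuning parameter combinations from
# uniform distributions over the ranges described in the text.
import random

random.seed(1801)  # arbitrary seed for reproducibility
activations = ["sigmoid", "tanh", "relu"]

candidates = [
    {
        "units": random.randint(2, 20),
        "dropout": random.uniform(0.0, 0.8),
        "batch_size": random.randint(10, 40000),
        "learn_rate": random.uniform(0.0, 1.0),
        "rho": random.uniform(0.0, 1.0),
        "decay": random.uniform(0.0, 1.0),
        "activation": random.choice(activations),
    }
    for _ in range(20)
]
# Each candidate would then be evaluated with the same 10-fold cross-validation
# splits used for the previous models.
```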
The resampled ROC values varied substantially across the candidate parameter combinations. The best setting, shown in the first row of Table 3.3, had a corresponding area under the ROC curve of 0.837.
Units | Dropout | Batch Size | Learn Rate | Grad. Scaling | Decay | Act. Fun. | ROC |
---|---|---|---|---|---|---|---|
9 | 0.592 | 26231 | 0.00398 | 0.45653 | 4.46e-02 | relu | 0.837 |
16 | 0.572 | 25342 | 0.63142 | 0.29769 | 5.10e-01 | relu | 0.836 |
13 | 0.600 | 10196 | 0.33286 | 0.00675 | 1.76e-01 | relu | 0.835 |
10 | 0.286 | 19893 | 0.94679 | 0.49468 | 4.61e-01 | relu | 0.832 |
8 | 0.534 | 36267 | 0.38651 | 0.50524 | 1.49e-01 | relu | 0.832 |
14 | 0.617 | 21154 | 0.25216 | 0.29679 | 6.54e-03 | sigmoid | 0.829 |
17 | 0.250 | 11638 | 0.22683 | 0.72089 | 1.38e-02 | tanh | 0.825 |
20 | 0.392 | 17235 | 0.01681 | 0.36270 | 2.90e-04 | relu | 0.821 |
16 | 0.194 | 8477 | 0.20389 | 0.15507 | 1.82e-02 | tanh | 0.820 |
10 | 0.282 | 18989 | 0.87800 | 0.25988 | 6.39e-02 | tanh | 0.820 |
2 | 0.586 | 19404 | 0.72107 | 0.15870 | 1.85e-01 | sigmoid | 0.818 |
18 | 0.305 | 5563 | 0.83560 | 0.45655 | 1.05e-02 | sigmoid | 0.805 |
10 | 0.513 | 17877 | 0.49598 | 0.76990 | 3.61e-05 | tanh | 0.796 |
4 | 0.536 | 32689 | 0.71944 | 0.48349 | 1.11e-04 | sigmoid | 0.755 |
12 | 0.150 | 4015 | 0.55811 | 0.21277 | 1.37e-05 | relu | 0.732 |
14 | 0.368 | 7582 | 0.93841 | 0.12600 | 1.04e-05 | relu | 0.718 |
3 | 0.688 | 12905 | 0.63325 | 0.69805 | 1.36e-05 | relu | 0.598 |
3 | 0.645 | 5327 | 0.95232 | 0.71696 | 1.47e-05 | sigmoid | 0.560 |
2 | 0.688 | 14660 | 0.68575 | 0.44476 | 3.26e-05 | relu | 0.546 |
15 | 0.697 | 4800 | 0.80618 | 0.13876 | 3.23e-05 | sigmoid | 0.538 |
As previously mentioned, grid search and random search methods have the tuning parameters specified in advance, and the search does not adapt to look for novel values. There are other approaches that do. For example, nonlinear search methods such as the Nelder-Mead simplex procedure, simulated annealing, and genetic algorithms can be employed (Chong and Zak 2008). These methods conduct very thorough searches of the parameter space but tend to be computationally expensive. One reason is that each evaluation of a new candidate requires a good estimate of performance to guide the search, and resampling is one of the best methods for obtaining such estimates. Another approach to searching the space is called Bayesian optimization (Mockus 1994). Here, an initial pool of candidates is evaluated using grid or random search. The optimization procedure then builds a separate model to predict performance as a function of the tuning parameters and uses it to recommend the next candidate settings to evaluate. Once this new point is assessed, the model is updated and the process continues for a set number of iterations (Jones, Schonlau, and Welch 1998)33.
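As a rough illustration of the Bayesian optimization loop, the sketch below uses the scikit-optimize package to tune two gradient boosting parameters, with resampled ROC AUC supplying the performance estimates that steer the search. The package choice, the model, the data, and the parameter ranges are all assumptions made for the sake of a compact example.

```python
# Hypothetical sketch: Bayesian optimization of two tuning parameters, guided by
# resampled ROC AUC (assumed scikit-optimize API and synthetic stand-in data).
from skopt import gp_minimize
from skopt.space import Integer, Real
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X_train, y_train = make_classification(n_samples=500, n_features=20, random_state=1)

search_space = [
    Integer(1, 8, name="max_depth"),
    Real(0.001, 0.3, prior="log-uniform", name="learning_rate"),
]

def objective(params):
    max_depth, learning_rate = params
    model = GradientBoostingClassifier(max_depth=max_depth, learning_rate=learning_rate)
    # Resampling provides the performance estimate for each proposed candidate.
    auc = cross_val_score(model, X_train, y_train, cv=10, scoring="roc_auc").mean()
    return 1.0 - auc  # gp_minimize minimizes, so use one minus the AUC

result = gp_minimize(
    objective,
    search_space,
    n_initial_points=10,  # initial pool assessed before the surrogate model is used
    n_calls=25,           # total number of candidate settings evaluated
    random_state=42,
)
print(result.x, 1.0 - result.fun)  # best parameters and their resampled AUC
```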
One final point about the interpretation of resampling results: by choosing the best settings based on the resampling results and then reporting the model’s performance using those same results, there is a risk of optimization bias. Depending on the problem, this bias might overestimate the model’s true performance. Nested resampling procedures can be used to mitigate these biases. See Boulesteix and Strobl (2009) for more information.
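One way to carry out nested resampling is to wrap the entire tuning procedure inside an outer resampling loop, so that the data used to choose the best settings are never the same data used to estimate performance. A hedged sketch, reusing the assumed scikit-learn setup from the earlier examples:

```python
# Hypothetical sketch of nested resampling: an inner 10-fold CV chooses K, while
# an outer 5-fold CV estimates the performance of the whole tuning procedure.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X_train, y_train = make_classification(n_samples=2000, n_features=20, random_state=1)

inner_search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": list(range(1, 202, 2))},
    scoring="roc_auc",
    cv=10,
)

# The tuning process is refit from scratch within each outer fold, so the outer
# estimate is less prone to optimization bias than the inner resampling results.
outer_auc = cross_val_score(inner_search, X_train, y_train, cv=5, scoring="roc_auc")
print(outer_auc.mean())
```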
28. This number of neighbors is abnormally large compared to other data sets. This model tends to prefer values of \(K\) in, at most, the dozens.
29. This is generally not the case in other data sets; visualizing the individual profiles can often show an excessive amount of noise from resample to resample that is averaged out in the end.
30. However, a regular grid may be more efficient. For some models, there are computational shortcuts that allow the results for an entire set of candidate parameter values to be determined without refitting the model for each one. The nature of random search negates this benefit.
31. RMSProp is a general optimization method that uses gradients. The details are beyond the scope of this book, but more information can be found in Goodfellow, Bengio, and Courville (2016) and at https://en.wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp.
32. Many of these parameters are fairly arcane for those not well acquainted with modern derivative-based optimization. Chapter 8 of Goodfellow, Bengio, and Courville (2016) has a substantial discussion of these methods.
33. The GitHub repository http://bit.ly/2yjzB5V has example code for nonlinear search methods and Bayesian optimization of tuning parameters.