Modeling emerging high-dimensional clinical data with survival outcomes is a challenging task. Thanks to their simplicity and efficiency, penalized Cox models are particularly useful for such tasks.

`hdnom` streamlines the workflow of high-dimensional Cox model building, nomogram plotting, model validation, calibration, and comparison.

To build a penalized Cox model with good predictive performance, some parameter tuning is usually needed. For example, the elastic-net model requires tuning the \(\ell_1\)-\(\ell_2\) penalty trade-off parameter \(\alpha\) and the regularization parameter \(\lambda\).

To free users from this tedious and error-prone parameter tuning process, `hdnom` provides functions for automatic parameter tuning and model selection, covering the following model types:

| Function name | Model type | Auto-tuned hyperparameters |
|---|---|---|
| `fit_lasso()` | Lasso | \(\lambda\) |
| `fit_alasso()` | Adaptive lasso | \(\lambda\) |
| `fit_enet()` | Elastic-net | \(\lambda\), \(\alpha\) |
| `fit_aenet()` | Adaptive elastic-net | \(\lambda\), \(\alpha\) |
| `fit_mcp()` | MCP | \(\gamma\), \(\lambda\) |
| `fit_mnet()` | Mnet (MCP + \(\ell_2\)) | \(\gamma\), \(\lambda\), \(\alpha\) |
| `fit_scad()` | SCAD | \(\gamma\), \(\lambda\) |
| `fit_snet()` | Snet (SCAD + \(\ell_2\)) | \(\gamma\), \(\lambda\), \(\alpha\) |
| `fit_flasso()` | Fused lasso | \(\lambda_1\), \(\lambda_2\) |

Next, we will use the imputed SMART study data to demonstrate a complete process of model building, nomogram plotting, model validation, calibration, and comparison with `hdnom`.

Load the packages and the `smart` dataset:

```
library("hdnom")

data("smart")
x <- as.matrix(smart[, -c(1, 2)])
time <- smart$TEVENT
event <- smart$EVENT
y <- survival::Surv(time, event)
```

The dataset contains 3873 observations with the corresponding survival outcome (`time`, `event`). 27 clinical variables (`x`) are available as predictors. See `?smart` for a detailed explanation of the variables.

Fit a penalized Cox model by adaptive elastic-net regularization with `fit_aenet()`, enabling parallel parameter tuning:

```
suppressMessages(library("doParallel"))
registerDoParallel(detectCores())
fit <- fit_aenet(
  x, y,
  nfolds = 10, rule = "lambda.1se",
  seed = c(5, 7), parallel = TRUE
)
names(fit)
```

```
## [1] "model" "alpha" "lambda" "model_init" "alpha_init"
## [6] "lambda_init" "pen_factor" "type" "seed" "call"
```

Adaptive elastic-net estimation involves two steps. The random seed used for parameter tuning, the selected best \(\alpha\) and \(\lambda\), the model fitted in each estimation step, and the penalty factor for the model coefficients in the second estimation step are all stored in the model object `fit`.

Before plotting the nomogram, we extract the necessary information from the model: the fitted model object and the selected hyperparameters:

```
model <- fit$model
alpha <- fit$alpha
lambda <- fit$lambda
adapen <- fit$pen_factor
```

Let's generate a nomogram object with `as_nomogram()` and plot it:

```
nom <- as_nomogram(
  fit, x, time, event,
  pred.at = 365 * 2,
  funlabel = "2-Year Overall Survival Probability"
)
plot(nom)
```

According to the nomogram, the adaptive elastic-net model selected 6 variables from the original set of 27, effectively reducing the model complexity.
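To list the selected variables directly, one can inspect the nonzero coefficients of the final fit. This is a sketch that assumes `fit$model` is the underlying `glmnet` object (as suggested by `names(fit)` above) and that `coef()` accepts the stored `fit$lambda` via its `s` argument:

```
# Sketch: extract the names of variables with nonzero coefficients,
# assuming fit$model is a glmnet fit and fit$lambda the tuned lambda.
coefs <- coef(fit$model, s = fit$lambda)
selected <- rownames(coefs)[as.numeric(coefs) != 0]
selected
```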

Information about the nomogram itself, such as the points-to-linear-predictor mapping and the total points-to-survival-probability mapping, can be viewed by printing the `nom` object directly.

It is common practice to use resampling-based methods to validate the predictive performance of a penalized Cox model. Bootstrap, \(k\)-fold cross-validation, and repeated \(k\)-fold cross-validation are the most commonly employed methods for this purpose.

`hdnom` supports both internal and external model validation. Internal validation takes the dataset used to build the model and evaluates the predictive performance on that data with the resampling-based methods above, while external validation evaluates the model's predictive performance on a dataset independent of the one used in model building.

`validate()` allows us to assess model performance internally by time-dependent AUC (Area Under the ROC Curve) with the three resampling methods above.

Here, we validate the performance of the adaptive elastic-net model with bootstrap resampling, at every half year from the first year to the fifth year:

```
val_int <- validate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "bootstrap", boot.times = 10,
  tauc.type = "UNO", tauc.time = seq(1, 5, 0.5) * 365,
  seed = 42, trace = FALSE
)
print(val_int)
#> High-Dimensional Cox Model Validation Object
#> Random seed: 42
#> Validation method: bootstrap
#> Bootstrap samples: 10
#> Model type: aenet
#> glmnet model alpha: 0.15
#> glmnet model lambda: 0.4322461
#> glmnet model penalty factor: specified
#> Time-dependent AUC type: UNO
#> Evaluation time points for tAUC: 365 547.5 730 912.5 1095 1277.5 1460 1642.5 1825
summary(val_int)
#> Time-Dependent AUC Summary at Evaluation Time Points
#> 365 547.5 730 912.5 1095 1277.5 1460
#> Mean 0.6736908 0.6980663 0.6883938 0.6877201 0.7171392 0.7329408 0.6801665
#> Min 0.6702972 0.6943275 0.6852397 0.6838341 0.7104875 0.7275005 0.6713858
#> 0.25 Qt. 0.6726018 0.6964805 0.6873354 0.6851884 0.7140499 0.7293569 0.6776290
#> Median 0.6740445 0.6983919 0.6879848 0.6884829 0.7163701 0.7321725 0.6799320
#> 0.75 Qt. 0.6749241 0.6996926 0.6896133 0.6896262 0.7210469 0.7371302 0.6825069
#> Max 0.6760826 0.7029078 0.6916850 0.6916987 0.7236970 0.7389818 0.6919914
#> 1642.5 1825
#> Mean 0.6841580 0.6935303
#> Min 0.6770945 0.6800316
#> 0.25 Qt. 0.6821133 0.6924729
#> Median 0.6831368 0.6956285
#> 0.75 Qt. 0.6864527 0.6966638
#> Max 0.6939574 0.6997908
```

The mean, median, and the 25% and 75% quantiles of the time-dependent AUC at each time point across all bootstrap predictions are listed above. The median and the mean can be considered bias-corrected estimates of the model performance.

It is also possible to plot the model validation result:

```
plot(val_int)
```

The solid line represents the mean of the AUC and the dashed line the median. The darker interval in the plot shows the 25% and 75% quantiles of the AUC; the lighter interval shows its minimum and maximum.

It seems that the bootstrap-based validation result is stable: the median and the mean value at each evaluation time point are close; the 25% and 75% quantiles are also close to the median at each time point.

Bootstrap-based validation often gives relatively stable results, and many of the established nomograms in clinical oncology research are validated by bootstrap methods. \(K\)-fold cross-validation provides a stricter evaluation scheme than the bootstrap. Repeated cross-validation gives results similar to \(k\)-fold cross-validation, and is usually more robust. These two methods are used more often by the machine learning community. Check `?hdnom::validate` for more examples of internal model validation.
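For reference, a repeated cross-validation run could look like the following sketch. The `method`, `nfolds`, and `rep.times` arguments are assumptions patterned after the bootstrap call above; consult `?hdnom::validate` for the authoritative argument names.

```
# Sketch: internal validation via repeated 5-fold cross-validation.
# Argument names are assumed by analogy with the bootstrap example above.
val_cv <- validate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "repeated.cv", nfolds = 5, rep.times = 3,
  tauc.type = "UNO", tauc.time = seq(1, 5, 0.5) * 365,
  seed = 42, trace = FALSE
)
summary(val_cv)
```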

Now we have the internally validated model. To perform external validation, we usually need an independent dataset (preferably, collected in other studies), which has the same variables as the dataset used to build the model. For penalized Cox models, the external dataset should have at least the same variables that have been selected in the model.

For demonstration purposes, here we draw 1000 samples from the `smart` data and *assume* that they form an external validation dataset, then use `validate_external()` to perform external validation:

```
x_new <- as.matrix(smart[, -c(1, 2)])[1001:2000, ]
time_new <- smart$TEVENT[1001:2000]
event_new <- smart$EVENT[1001:2000]
val_ext <- validate_external(
  fit, x, time, event,
  x_new, time_new, event_new,
  tauc.type = "UNO",
  tauc.time = seq(0.25, 2, 0.25) * 365
)
print(val_ext)
#> High-Dimensional Cox Model External Validation Object
#> Model type: aenet
#> Time-dependent AUC type: UNO
#> Evaluation time points for tAUC: 91.25 182.5 273.75 365 456.25 547.5 638.75 730
summary(val_ext)
#> Time-Dependent AUC Summary at Evaluation Time Points
#> 91.25 182.5 273.75 365 456.25 547.5 638.75
#> AUC 0.4328909 0.5713055 0.6371661 0.6351403 0.6575692 0.6768453 0.683239
#> 730
#> AUC 0.6956754
plot(val_ext)
```

The time-dependent AUC on the external dataset is shown above.

Measuring how far the model predictions are from the actual survival outcomes is known as *calibration*. Calibration can be assessed by plotting the predicted survival probabilities from the model against the observed survival probabilities. As with model validation, both internal and external model calibration are supported in `hdnom`.

`calibrate()` provides non-resampling and resampling methods for internal model calibration, including direct fitting, bootstrap resampling, \(k\)-fold cross-validation, and repeated cross-validation.

For example, to calibrate the model internally with the bootstrap method:

```
cal_int <- calibrate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "bootstrap", boot.times = 10,
  pred.at = 365 * 5, ngroup = 3,
  seed = 42, trace = FALSE
)
print(cal_int)
#> High-Dimensional Cox Model Calibration Object
#> Random seed: 42
#> Calibration method: bootstrap
#> Bootstrap samples: 10
#> Model type: aenet
#> glmnet model alpha: 0.15
#> glmnet model lambda: 0.4322461
#> glmnet model penalty factor: specified
#> Calibration time point: 1825
#> Number of groups formed for calibration: 3
summary(cal_int)
#> Calibration Summary Table
#> Predicted Observed Lower 95% Upper 95%
#> 1 0.7937694 0.7556172 0.7275288 0.7847901
#> 2 0.8940039 0.8956157 0.8744876 0.9172544
#> 3 0.9376169 0.9424155 0.9256913 0.9594419
```

Here we split the samples into three risk groups. In practice, the number of risk groups is chosen by the user according to their needs.

The model calibration results are summarized above: the median of the predicted survival probabilities and the median of the observed survival probabilities estimated by the Kaplan-Meier method, with 95% confidence intervals.

Plot the calibration result:

`plot(cal_int, xlim = c(0.5, 1), ylim = c(0.5, 1))`

In practice, you may want to perform calibration at multiple time points separately and combine the plots in one figure. See `?hdnom::calibrate` for more examples of internal model calibration.
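One way to sketch the multiple-time-point workflow is to loop over the prediction times and collect one calibration object per point. The loop structure and list handling below are illustrative; the `calibrate()` arguments simply mirror the bootstrap example above.

```
# Sketch: calibrate at years 1 through 5 separately and collect the
# results in a list (loop structure is illustrative, not from hdnom).
cal_list <- lapply(1:5 * 365, function(t) {
  calibrate(
    x, time, event,
    model.type = "aenet",
    alpha = alpha, lambda = lambda, pen.factor = adapen,
    method = "bootstrap", boot.times = 10,
    pred.at = t, ngroup = 3,
    seed = 42, trace = FALSE
  )
})
# Each element can then be plotted and the panels assembled into one figure:
# plot(cal_list[[1]], xlim = c(0.5, 1), ylim = c(0.5, 1))
```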

To perform external calibration with an external dataset, use `calibrate_external()`:

```
cal_ext <- calibrate_external(
  fit, x, time, event,
  x_new, time_new, event_new,
  pred.at = 365 * 5, ngroup = 3
)
print(cal_ext)
#> High-Dimensional Cox Model External Calibration Object
#> Model type: aenet
#> Calibration time point: 1825
#> Number of groups formed for calibration: 3
summary(cal_ext)
#> External Calibration Summary Table
#> Predicted Observed Lower 95% Upper 95%
#> 1 0.7937879 0.7471369 0.6991861 0.7983762
#> 2 0.8917521 0.8727998 0.8363680 0.9108185
#> 3 0.9214463 0.9387588 0.9122184 0.9660715
plot(cal_ext, xlim = c(0.5, 1), ylim = c(0.5, 1))
```

The external calibration results have interpretations similar to the internal calibration results, except that external calibration is performed on the external dataset.

Both internal and external calibration classify a testing set into risk groups. For internal calibration, the testing set is the full dataset used to build the model; for external calibration, it is the samples from the external dataset.

We can further analyze the differences in survival time between risk groups with Kaplan-Meier survival curves and a number-at-risk table. For example, here we plot the Kaplan-Meier survival curves and evaluate the number at risk from one year to six years for the three risk groups with `kmplot()`:

```
kmplot(
  cal_int,
  group.name = c("High risk", "Medium risk", "Low risk"),
  time.at = 1:6 * 365
)
```

```
kmplot(
  cal_ext,
  group.name = c("High risk", "Medium risk", "Low risk"),
  time.at = 1:6 * 365
)
```