Interpretability (and, relatedly, communicability), flexibility, and scalability are desirable properties for statistical techniques. Many newer techniques are quite flexible and work well on large data sets but are “black boxes” when it comes to interpretation, making it hard to understand the conditions under which their predictions are accurate (or how long they will remain so). Kernel Regularized Least Squares (KRLS) is a kernel-based, complexity-penalized method developed by Hainmueller and Hazlett (2013) and designed to minimize parametric assumptions while maintaining interpretive clarity. However, the interpretability and flexibility come at the cost of scalability: most of the calculations require comparing each observation with every other observation, and therefore entail many computationally costly **N** x **N** operations. We introduce *bigKRLS*, an updated version of the original KRLS R package with algorithmic and implementation improvements designed to optimize speed and memory usage. These improvements allow users to straightforwardly fit KRLS models to medium and large data sets (N > ~2,500).
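To see where the **N** x **N** cost comes from, here is a minimal base-R sketch of the Gaussian kernel matrix at the heart of KRLS. The function name and the default bandwidth sigma = *P* follow Hainmueller and Hazlett (2013), but this is our illustration, not the package's internals:

```r
# Each entry compares one observation to another, hence the N x N cost:
# K[i, j] = exp(-||x_i - x_j||^2 / sigma), with sigma = P by default.
gauss_kernel <- function(X, sigma = ncol(X)) {
  Xs <- scale(X)                       # standardize each column first
  exp(-as.matrix(dist(Xs))^2 / sigma)  # squared Euclidean distances
}
set.seed(42)
X <- matrix(rnorm(100 * 3), nrow = 100, ncol = 3)  # N = 100, P = 3
K <- gauss_kernel(X)
dim(K)  # 100 x 100: memory and compute grow with the square of N
```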

bigKRLS is the workhorse of this package; there are only two basic inputs: a vector of *N* observations on the dependent variable, **y**, and an *N* x *P* matrix **X**, where *P* is the number of independent variables and also ncol(**X**).^{1}

`mtcars[1:5,]`

```
## mpg cyl disp hp drat wt qsec vs am gear carb
## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
```

Suppose we want to regress fuel efficiency on the other observables.

```r
reg.out <- bigKRLS(y = as.matrix(mtcars$mpg),
                   X = as.matrix(mtcars[, -1]), Ncores = 1)
```

```
## .........................
## All done. You may wish to use summary() for more detail, predict() for out-of-sample forecasts, or shiny.bigKRLS() to interact with results. For an alternative approach, see help(crossvalidate.bigKRLS). Type vignette("bigKRLS_basics") for sample syntax. Use save.bigKRLS() to store results and load.bigKRLS() to re-open them.
```

Unlike classical regression, the algorithm does not directly obtain a single slope estimate, the “average” marginal effect. Instead, KRLS first estimates the marginal effect d*y*/d*x*_{p} at each observation (given that observation’s pairwise distances to each of the other observations). For example, we may want to know the effect the number of gears has on a particular vehicle’s fuel efficiency.
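For intuition, the pointwise derivative has a closed form under the Gaussian kernel: if the fitted function is yhat(x) = sum_j c_j exp(-||x - x_j||^2 / sigma), then d yhat / d x_p = -(2/sigma) sum_j c_j K(x, x_j) (x_p - x_jp). A base-R sketch with stand-in coefficients (in an actual KRLS fit the c's come from the regularized solution), checked against a finite difference:

```r
# Closed-form pointwise marginal effect under a Gaussian kernel fit,
# using made-up coefficients cc (in KRLS these come from the regularized fit).
X <- scale(as.matrix(mtcars[, -1]))
sigma <- ncol(X)
set.seed(1)
cc <- rnorm(nrow(X))                                  # stand-in coefficients
k <- function(x) exp(-colSums((t(X) - x)^2) / sigma)  # kernel of x vs. all rows
yhat    <- function(x)    sum(cc * k(x))
deriv_p <- function(x, p) -(2 / sigma) * sum(cc * k(x) * (x[p] - X[, p]))

# Sanity check: analytic derivative vs. a central finite difference
x0 <- X[1, ]; p <- 3; h <- 1e-5
fd <- (yhat(replace(x0, p, x0[p] + h)) -
       yhat(replace(x0, p, x0[p] - h))) / (2 * h)
c(analytic = deriv_p(x0, p), finite_diff = fd)  # the two should agree closely
```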

`summary(reg.out)`

```
##
##
## MODEL SUMMARY:
##
## lambda: 0.3416
## N: 32
## N Effective: 19.08477
## R2: 0.9026
## R2AME**: 0.0445
##
## Average Marginal Effects:
##
## Estimate Std. Error t value Pr(>|t|)
## cyl -0.1021 0.0628 -1.6264 0.1380
## disp -0.0038 0.0018 -2.0519 0.0701
## hp -0.0118 0.0030 -3.8867 0.0036
## drat 0.8694 0.5074 1.7135 0.1205
## wt -1.3088 0.2415 -5.4206 0.0004
## qsec 0.2052 0.1363 1.5053 0.1662
## vs* 1.1703 0.5921 1.9764 0.0792
## am* 1.2026 0.5748 2.0923 0.0656
## gear 0.6606 0.2604 2.5370 0.0316
## carb -0.4921 0.1351 -3.6417 0.0053
##
##
## Percentiles of Marginal Effects:
##
## 5% 25% 50% 75% 95%
## cyl -0.4490 -0.1275 -0.0527 0.0078 0.0352
## disp -0.0080 -0.0061 -0.0046 -0.0012 0.0007
## hp -0.0236 -0.0185 -0.0098 -0.0072 0.0017
## drat -0.3506 0.1677 0.8387 1.3532 2.7333
## wt -2.8160 -1.6379 -1.2147 -0.9285 -0.0753
## qsec -0.2054 0.0183 0.1660 0.3608 0.6492
## vs* 0.0603 0.7058 0.9894 1.6145 2.3792
## am* -0.2348 0.6957 1.0089 1.9073 2.8283
## gear -0.4540 -0.2339 0.5379 1.3585 2.1821
## carb -1.0276 -0.6244 -0.5027 -0.3113 0.0502
##
## (*) Reported average and percentiles of dy/dx is for discrete change of the dummy variable from min to max (usually 0 to 1)).
##
##
## (**) Pseudo-R^2 computed using only the Average Marginal Effects.
##
##
## You may also wish to use predict() for out-of-sample forecasts or shiny.bigKRLS() to interact with results. Type vignette("bigKRLS_basics") for sample syntax. Use save.bigKRLS() to store results and load.bigKRLS() to re-open them.
```

The “Percentiles of the Marginal Effects” can be interpreted as evidence about whether *y* is a monotonic function of *x*_{p} and about the extent to which the effect of *x*_{p} on *y* is homogeneous. In this toy data set, the number of cylinders is not a statistically significant predictor of fuel efficiency; perhaps unsurprisingly, the marginal effect of cylinders is negative for only about half of the cars investigated. By contrast, horsepower has a more uniformly negative effect on fuel efficiency.

Suppose you wanted to plot how similar a Toyota Corolla is to the other four-cylinder cars:

```r
s <- reg.out$K[which(mtcars$cyl == 4), grep("Corolla", rownames(mtcars))]
barplot(s, main = "Similarity to a Toyota Corolla",
        ylab = "Kernel", sub = "Toy Data from mtcars", cex.names = .7,
        col = colorRampPalette(c("red", "blue"))(length(s))[rank(s)],
        names.arg = lapply(strsplit(rownames(mtcars), split = " "),
                           function(x) x[2])[which(mtcars$cyl == 4)])
```

Apparently my Corolla is more similar to a Civic than a Porsche 914, but more tests are needed… Note that on exceedingly large data sets, you may wish to grab the relevant subset first, standardize that **X** data, and then call *bGaussKernel* on the smaller set.

It appears that fuel efficiency decreases as horsepower increases but that the effect isn’t quite monotonic. Further, we might ask whether the outcome is an additive function of horsepower…

```r
scatter.smooth(mtcars$hp, reg.out$derivatives[, 3],
               ylab = "HP's Effect", xlab = "Horsepower", pch = 19, bty = "n",
               main = "Horsepower's Marginal Effect on Fuel Efficiency",
               sub = "Toy Data from mtcars",
               col = colorRampPalette(c("blue", "red"))(nrow(mtcars))[rank(reg.out$coeffs^2)],
               ylim = c(-0.042, 0.015), xlim = c(50, 400))
```

The above graph suggests that though in general lower horsepower helps explain which cars have better fuel efficiency, beyond a certain threshold, that’s no longer the case (or perhaps log horsepower is more relevant).
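A quick, rough check of the log conjecture in base R (a linear-correlation diagnostic only, not a KRLS refit):

```r
# If mpg responds to horsepower on a log scale, log(hp) should track mpg
# more linearly than raw hp does.
r_raw <- cor(mtcars$mpg, mtcars$hp)
r_log <- cor(mtcars$mpg, log(mtcars$hp))
round(c(raw = r_raw, log = r_log), 3)
```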

Suppose you want to cross-validate your data…

```r
CV.out <- crossvalidate.bigKRLS(y = as.matrix(mtcars$mpg), seed = 123,
                                Kfolds = 4, X = as.matrix(mtcars[, -1]),
                                Ncores = 1)
```

```
## ..........................
## ..................................
## ..........................
## ........................
```

`cor(CV.out$fold_3$tested$predicted, CV.out$fold_3$tested$ytest)`

`## [1] 0.9654151`

The predictions and the holdout data correlate at about 0.97. To look at particular models:

`summary(CV.out$fold_1$trained) # not run`

`CV.out` contains several test statistics, including:

`CV.out$MSE_is`

```
## fold1 fold2 fold3 fold4
## 3.162456 1.229304 1.193794 3.412066
```

`CV.out$MSE_oos`

```
## fold1 fold2 fold3 fold4
## 3.380496 4.873787 21.978100 13.410338
```

`CV.out$R2_oos`

```
## fold1 fold2 fold3 fold4
## 0.9073130 0.6892571 0.9320263 0.5515949
```

`CV.out$R2AME_oos`

```
## fold1 fold2 fold3 fold4
## 0.9291616 0.6173352 0.9041560 0.7124559
```

The first two test statistics are the mean squared error, in and out of sample, respectively. The second two show the out-of-sample performance of the full model (the *N* coefficients) vs. the portion of the model that is linear and additive in **X**. To do a simple train-and-test split instead, leave `Kfolds` blank and set `ptesting`.
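Assuming `ptesting` gives the percentage of observations reserved for testing, `ptesting = 20` implies an 80/20 split. The bookkeeping looks roughly like this in base R (our illustration, not the package's internals):

```r
# An 80/20 train/test split of mtcars, as ptesting = 20 would imply.
set.seed(123)
test_idx <- sample(nrow(mtcars), size = round(0.2 * nrow(mtcars)))
X_train <- as.matrix(mtcars[-test_idx, -1]); y_train <- mtcars$mpg[-test_idx]
X_test  <- as.matrix(mtcars[ test_idx, -1]); y_test  <- mtcars$mpg[ test_idx]
c(train = nrow(X_train), test = nrow(X_test))  # 26 and 6
```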

To interact with your results in a pop up window or your browser, simply call:

`shiny.bigKRLS(reg.out) # not run`

To remove the big square matrices so that you can easily put your results up on a server, use export:

`shiny.bigKRLS(reg.out, export = TRUE) # not run`

The output will describe the new, more compact object that has been created.

Let’s say we wanted to know what percentage of cars would have lower gas mileage if they had 200 horsepower.

```r
Xnew <- mtcars[, -1]
Xnew$hp <- 200
forecast <- predict(reg.out, as.matrix(Xnew))
```

`## .`

`mean(forecast$predicted < mtcars$mpg)`

`## [1] 0.625`

So approximately 62.5% of cars would be less efficient.

If *N* > 2,500 or if you supply big matrices, using save() and load() will crash your R session. Instead you may do one of two things to save:

```r
out <- bigKRLS(y, X, model_subfolder_name = "my_results") # not run
save.bigKRLS(out, "my_results") # not run
```

Either will save the model estimates to a new subfolder called “my_results” in your current working directory. To re-load,

`load.bigKRLS("my_results") # not run`

When *N* > 2,500 or the user provides big matrices, big matrices will be returned; these are really just pointers to memory addresses.

```r
library(bigmemory) # big.matrix() comes from the bigmemory package
Z <- big.matrix(nrow = 5, ncol = 5, init = 1)
Z
```

```
## An object of class "big.matrix"
## Slot "address":
## <pointer: 0x7fd1b1783f30>
```

You do not necessarily need to work with the big square matrices in the output. But if you do, and you’re comfortable that they fit in memory, just use the square brackets:

`Z[]`

```
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 1 1 1 1
## [2,] 1 1 1 1 1
## [3,] 1 1 1 1 1
## [4,] 1 1 1 1 1
## [5,] 1 1 1 1 1
```
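One practical consequence of the pointer semantics: ordinary assignment copies the address, not the data, so two names can silently share the same memory. When you want an independent copy, use `deepcopy()` from *bigmemory* (a small sketch, assuming *bigmemory* is installed):

```r
library(bigmemory)

Z <- big.matrix(nrow = 2, ncol = 2, init = 1)
Y <- Z            # Y points at the same memory as Z
Y[1, 1] <- 99     # so writing through Y also changes Z
W <- deepcopy(Z)  # an independent copy of the data
W[1, 1] <- 0      # writing through W leaves Z alone
c(Z = Z[1, 1], W = W[1, 1])
```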

**X** and **y** should only contain numeric data (no missing data, factors, or vectors of constants) and may be base *R* matrices or “big” matrices (from *bigmemory*).↩