Multinomial logistic regression using brglm2

Ioannis Kosmidis

01 July 2017

brmultinom

The brglm2 R package provides brmultinom, a wrapper of brglmFit for fitting multinomial logistic regression models (also known as baseline-category logit models) using either maximum likelihood or any of the bias-reduction methods described in brglmFit. brmultinom works through the equivalent Poisson log-linear model, appropriately re-scaling the Poisson means to match the multinomial totals (the “Poisson trick”). The mathematical details of, and an algorithm for, using the Poisson trick for mean-bias reduction are given in Kosmidis and Firth (2011).
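As a toy illustration of the Poisson trick, the following is a minimal sketch using base R only; the data and variable names here are invented for illustration and are not part of the alligators example.

```r
# Two multinomial observations over three food categories, with one covariate.
counts <- c(10, 3, 7,  4, 12, 6)
obs <- factor(rep(1:2, each = 3))            # one nuisance intercept per observation
food <- factor(rep(c("fish", "inv", "other"), times = 2))
x <- rep(c(0.5, 1.2), each = 3)              # covariate, constant within an observation
# Poisson log-linear fit: the food and food:x coefficients coincide with the
# baseline-category logit coefficients (baseline "fish"), while the obs
# effects merely re-scale the Poisson means to match the multinomial totals.
pois_fit <- glm(counts ~ obs + food * x, family = poisson)
coef(pois_fit)  # the main effect of x is aliased with the obs effects (NA)
```

brmultinom automates this construction, including the handling of the nuisance parameters.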

This vignette illustrates the use of brmultinom and of the associated methods, using the alligator food-choice example in Agresti (2002, Section 7.1).

Alligator data

The alligators data set ships with brglm2. Agresti (2002, Section 7.1) provides a detailed description of the variables recorded in the data set.

library("brglm2")
data("alligators", package = "brglm2")

Maximum likelihood estimation

The following chunk of code reproduces Agresti (2002, Table 7.4). Note that in order to get the estimates and standard errors reported in the latter table, we have to explicitly specify the contrasts that Agresti (2002) uses.
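A sketch of such a fit is below, consistent with the model call printed in the summaries later in this vignette. The exact construction of agresti_contrasts is an assumption, based on the reference levels implied by the coefficient names (lake “George” and size “>2.3”).

```r
# Treatment contrasts with the reference levels Agresti (2002) uses
# (assumed here from the coefficient names in the output below).
agresti_contrasts <- list(
  lake = contr.treatment(levels(alligators$lake),
                         base = which(levels(alligators$lake) == "George")),
  size = contr.treatment(levels(alligators$size),
                         base = which(levels(alligators$size) == ">2.3"))
)
# Maximum likelihood fit of the baseline-category logit model,
# with the first food category as the reference.
all_ml <- brmultinom(foodchoice ~ size + lake, weights = freq,
                     data = alligators, contrasts = agresti_contrasts,
                     ref = 1, type = "ML")
summary(all_ml)
```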

Mean and median bias reduction

Fitting the model using mean-bias reducing adjusted score equations gives
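A sketch of that fit, assuming the maximum likelihood fit is stored as all_ml (the object name used later in this vignette):

```r
# Refit using the mean-bias reducing adjusted score equations.
all_mean <- update(all_ml, type = "AS_mean")
summary(all_mean)
```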

The corresponding fit using median-bias reducing adjusted score equations is
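Again assuming the maximum likelihood fit is stored as all_ml:

```r
# Refit using the median-bias reducing adjusted score equations.
all_median <- update(all_ml, type = "AS_median")
summary(all_median)
```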

The estimates and the estimated standard errors from bias reduction are close to those for maximum likelihood. As a result, it is unlikely that either mean or median bias is of any real consequence for this particular model and data combination.

Infinite estimates and multinomial logistic regression

Let’s now divide the frequencies in alligators by 3 (rounding to integers) in order to get a sparser data set, on which the differences between maximum likelihood and mean and median bias reduction should be more apparent. Here we have to slow down the Fisher scoring iteration (by scaling the step size), because otherwise the Fisher information matrix quickly becomes numerically rank-deficient. The reason is data separation (Albert and Anderson 1984).

all_ml_sparse <- update(all_ml, weights = round(freq/3), slowit = 0.2)
#> Warning: brglmFit: algorithm did not converge
summary(all_ml_sparse)
#> Call:
#> brmultinom(formula = foodchoice ~ size + lake, data = alligators, 
#>     weights = round(freq/3), contrasts = agresti_contrasts, ref = 1, 
#>     type = "ML", slowit = 0.2)
#> 
#> Coefficients:
#>              (Intercept)   size<=2.3 lakeHancock lakeOklawaha lakeTrafford
#> Invertebrate   -1.892117  1.74080751  -1.7936496   0.96047783    1.0747116
#> Reptile       -22.030425 -1.05673342  20.4794861  21.17196942   21.3070760
#> Bird           -2.225759 -0.33405988   0.9538709 -19.71099992    0.6957762
#> Other          -1.730365  0.04554622   1.1091441  -0.07652818    1.2070845
#> 
#> Std. Errors:
#>               (Intercept) size<=2.3  lakeHancock lakeOklawaha lakeTrafford
#> Invertebrate 8.042604e-01 0.7409450 1.188368e+00 8.431835e-01 8.917234e-01
#> Reptile      2.297088e+04 1.2808647 2.297088e+04 2.297088e+04 2.297088e+04
#> Bird         1.184975e+00 1.1598538 1.324354e+00 2.486631e+04 1.545522e+00
#> Other        8.869382e-01 0.7815002 9.589614e-01 1.337964e+00 1.084437e+00
#> 
#> Residual Deviance: 161.913 
#> Log-likelihood: -80.95651 
#> AIC: 161.913

Specifically, judging from the estimated standard errors, the estimates for (Intercept), lakeHancock, lakeOklawaha and lakeTrafford for Reptile and lakeHancock for Bird seem to be infinite.

To quickly check whether that is indeed the case, we can use the check_infinite_estimates method (see also the separation vignette).

se_ratios <- check_infinite_estimates(all_ml_sparse)
matplot(se_ratios, type = "l", lty = 1, ylim = c(0.5, 1.5), xlab = "Iteration")

Some of the estimated standard errors diverge as the number of Fisher scoring iterations increases, which is evidence of complete or quasi-complete separation (Lesaffre and Albert 1989).

In contrast, both mean and median bias reduction result in finite estimates:

all_mean_sparse <- update(all_ml_sparse, type = "AS_mean")
summary(all_mean_sparse)
#> Call:
#> brmultinom(formula = foodchoice ~ size + lake, data = alligators, 
#>     weights = round(freq/3), contrasts = agresti_contrasts, ref = 1, 
#>     type = "AS_mean", slowit = 0.2)
#> 
#> Coefficients:
#>              (Intercept)   size<=2.3 lakeHancock lakeOklawaha lakeTrafford
#> Invertebrate   -1.679168  1.53865702  -1.4524526   0.85785527    0.9583239
#> Reptile        -2.690142 -0.76143882   1.4034489   1.97464011    2.0975458
#> Bird           -1.820474 -0.25362492   0.7182343  -0.59602306    0.6373375
#> Other          -1.517519  0.04996481   0.9501195   0.06500802    1.0685650
#> 
#> Std. Errors:
#>              (Intercept) size<=2.3 lakeHancock lakeOklawaha lakeTrafford
#> Invertebrate   0.7986896 0.7384928   1.0933484    0.8526692    0.9002978
#> Reptile        1.5495133 1.0465737   1.7628485    1.7032986    1.7165517
#> Bird           1.0387358 1.0051365   1.1808982    1.8023206    1.3550407
#> Other          0.8606607 0.7662529   0.9397838    1.2305934    1.0633023
#> 
#> Residual Deviance: 164.878 
#> Log-likelihood: -82.43901 
#> AIC: 164.878

all_median_sparse <- update(all_ml_sparse, type = "AS_median")
#> Warning: brglmFit: algorithm did not converge
summary(all_median_sparse)
#> Call:
#> brmultinom(formula = foodchoice ~ size + lake, data = alligators, 
#>     weights = round(freq/3), contrasts = agresti_contrasts, ref = 1, 
#>     type = "AS_median", slowit = 0.2)
#> 
#> Coefficients:
#>              (Intercept)   size<=2.3 lakeHancock lakeOklawaha lakeTrafford
#> Invertebrate   -1.738085  1.57656366  -1.6099398   0.87918613    0.9772025
#> Reptile        -3.665433 -0.89728841   2.1898217   2.82443525    2.9355875
#> Bird           -1.991952 -0.28581026   0.8094639  -1.37237204    0.6272294
#> Other          -1.613288  0.04720733   1.0087178  -0.01733333    1.1038823
#> 
#> Std. Errors:
#>              (Intercept) size<=2.3 lakeHancock lakeOklawaha lakeTrafford
#> Invertebrate   0.7922381 0.7310924   1.1339038    0.8411221    0.8888212
#> Reptile        2.4236130 1.1696370   2.5995650    2.5308831    2.5413662
#> Bird           1.0941192 1.0683125   1.2317627    2.5549132    1.4277852
#> Other          0.8695131 0.7700378   0.9449311    1.2711051    1.0676074
#> 
#> Residual Deviance: 162.9522 
#> Log-likelihood: -81.47608 
#> AIC: 162.9522

Relevant resources

?brglmFit and ?brglm_control contain quick descriptions of the various bias-reduction methods supported in brglm2. The iteration vignette describes the fitting iteration and gives the mathematical details of the bias-reducing adjustments to the score functions for generalized linear models.

Citation

If you found this vignette or brglm2 useful, please consider citing brglm2 and the associated paper. You can find out how to do this by typing citation("brglm2").

References

Agresti, A. 2002. Categorical Data Analysis. Wiley.

Albert, A., and J. A. Anderson. 1984. “On the Existence of Maximum Likelihood Estimates in Logistic Regression Models.” Biometrika 71 (1): 1–10.

Kosmidis, I., and D. Firth. 2011. “Multinomial Logit Bias Reduction via the Poisson Log-Linear Model.” Biometrika 98 (3): 755–59.

Lesaffre, E., and A. Albert. 1989. “Partial Separation in Logistic Discrimination.” Journal of the Royal Statistical Society, Series B (Methodological) 51 (1): 109–16. http://www.jstor.org/stable/2345845.