NEWS
- new function bootstrapCI() to compute bootstrapped coefficients
- add the dataset 'emotion' containing EEG and EMG measures under different experimental conditions
- with scalar response, FDboost() works with the response as a vector and not as a matrix with one row; thus, fitted() and predict() return a vector
- update.FDboost() now works with scalar response
- FDboost() works with family Binomial(type = "glm"), see issue #1
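A hedged sketch of such a fit; the data set `dat` and the variables `y`, `X`, `s` are assumed names, not objects shipped with FDboost:

```r
## Sketch only: scalar binary response y, functional covariate X observed on grid s.
## 'dat', 'y', 'X', and 's' are hypothetical names.
library(FDboost)
mod <- FDboost(y ~ bsignal(X, s = s),
               timeformula = ~ bols(1),          # scalar response
               family = Binomial(type = "glm"),
               data = dat)
```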
- applyFolds() works for factor response, see issue #7
- cvLong and cvMA return a matrix for only one resampling fold with B = 1 (proposed by Almond Stoecker)
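A hedged sketch of the changed behaviour; `mod` stands for an assumed FDboost fit with response in long format:

```r
## Sketch only: with B = 1, cvLong() now returns a one-column matrix of weights
## rather than dropping the matrix dimension ('mod' is a hypothetical model fit)
folds <- cvLong(id = mod$id, weights = model.weights(mod), B = 1)
stopifnot(is.matrix(folds))
```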
- adapt FDboost to mboost 2.8-0, which allows for mstop = 0
- restructure FDboostLSS() such that it calls mboostLSS_fit() from gamboostLSS 2.0-0
- in FDboost, set options("mboost_indexmin" = +Inf) to disable the internal use of ties in model fitting, as this breaks some methods for models with response in long format and for models containing bhistx(), see issue #10
- deprecate validateFDboost(); use applyFolds() and bootstrapCI() instead
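A hedged sketch of the replacement workflow; `mod` is an assumed FDboost fit, and the number of folds and the grid are placeholders:

```r
## Sketch only: resample on the level of curves to pick mstop, then bootstrap CIs
set.seed(123)
folds <- cv(rep(1, length(unique(mod$id))), type = "bootstrap", B = 10)
cvres <- applyFolds(mod, folds = folds, grid = 1:500)
mod   <- mod[mstop(cvres)]   # set the optimal stopping iteration
ci    <- bootstrapCI(mod)    # bootstrapped coefficient functions
```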
- add function applyFolds() to compute the optimal stopping iteration
- allow for extrapolation in predict() with bbsc()
- bugfix in bolsc(): correctly use the index in bolsc() / bbsc(); before, for the computation of Z each observation was used only once
- add function %Xa0% that computes a row tensor product of two base-learners where the penalty in one direction is zero
- add function reweightData() that computes the data for bootstrap or cross-validation folds
- add function stabsel.FDboost() that refits the smooth offset in each fold
- add argument 'fun' to validateFDboost()
- add update.FDboost() that overwrites update.mboost()
- FDboost() works with family = Binomial()
- fix oobpred in validateFDboost() for irregular response and resampling on the level of curves; thus plot.validateFDboost() works for that case
- fix scope of formula in FDboost(): now the formula given to mboost() within FDboost() uses the variables in the environment of the formula specified in FDboost()
- plot.FDboost() works for more effects, especially for effects like bolsc() %X% bhistx()
- new operator %A0% for the Kronecker product of two base-learners with anisotropic penalty for the special case where lambda1 or lambda2 is zero
- the base-learner bbsc() can be used with center = TRUE, derived by Almond Stoecker
- in FDboostLSS() a list of one-sided formulas can be specified for timeformula
- FDboostLSS() works with families = GammaLSS()
- operator %A% uses weights in the model call; this only works correctly for weights on the level of blg1 and blg2 (which is the same as weights on rows and columns of the response matrix)
- calls to internal functions of mboost are done using mboost_intern()
- hyper_olsc() is based on hyper_ols() of mboost
- changed the operator %Xc% for the row tensor product of two scalar covariates: the design matrix of the interaction effects is constrained such that the interaction is centred around the intercept and around the two main effects of the scalar covariates (experimental!); use e.g. as bols(x1) %Xc% bols(x2)
- changed the operator %Xc% for the row tensor product where the sum-to-zero constraint is applied to the design matrix resulting from the row tensor product (experimental!), such that first an intercept column is added to the design matrix and then the sum-to-zero constraint is applied; use e.g. as bolsc(x1) %Xc% bolsc(x2)
- use the functional index s as argvals in the FPCA conducted within bfpc()
- new operator %A% that implies anisotropic penalties for differently specified df in the two base-learners
- do not penalize in the direction of ONEx in the smooth intercept specified implicitly by ~1, as bols(ONEx, intercept = FALSE, df = 1) %A% bbs(time)
- do not expand an effect that contains %A% or %O% with the timeformula, allowing for different effects over time for the effects in the model
- add the function FDboostLSS() to fit GAMLSS models with functional data using the R-package gamboostLSS
- new operator %Xc% for the row tensor product where the sum-to-zero constraint is applied to the design matrix resulting from the row tensor product (experimental!)
- allow newdata to be a list in predict.FDboost() in combination with signal base-learners
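A hedged sketch of such a GAMLSS fit; the data and variable names are assumptions:

```r
## Sketch only: functional response Y(t), functional covariate X(s);
## the default families = GaussianLSS() models mean and standard deviation.
## 'dat', 'Y', 'X', 's', 'time' are hypothetical names.
library(FDboost)
library(gamboostLSS)
mod <- FDboostLSS(formula = list(mu    = Y ~ bsignal(X, s = s),
                                 sigma = Y ~ 1),
                  timeformula = ~ bbs(time),
                  data = dat)
```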
- expand coef.FDboost() such that it works for 3-dimensional tensor products with bhistx() of the form bhistx() %X% bolsc() %X% bolsc() (with David Ruegamer)
- add a new possibility for scalar-on-function regression: for timeformula = NULL, no Kronecker product with 1 is used, which changes the penalty, as otherwise the direction of 1 is penalized as well
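A hedged sketch of this scalar-on-function option; variable names are assumptions:

```r
## Sketch only: with timeformula = NULL the effect is not expanded by a
## Kronecker product with 1 ('dat', 'y', 'X', 's' are hypothetical names)
mod <- FDboost(y ~ bsignal(X, s = s),
               timeformula = NULL,
               data = dat)
```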
- new dependency on the R-package gamboostLSS
- remove dependency on the R-package MASS
- use the argument 'prediction' in the internal computation of the base-learners (work in progress)
- throw an error if 'timeLab' of the hmatrix object in bhistx() is not equal to the time variable in 'timeformula'
- in FDboost() the offset is supplied differently: for a scalar offset, use offset = "scalar"; the default is still offset = NULL
- predict.FDboost() has a new argument toFDboost (logical)
- fitted.FDboost() has the argument toFDboost explicitly and not only in '...'
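A hedged sketch of the new offset argument; data and variable names are assumptions:

```r
## Sketch only: request a constant offset instead of the default smooth offset
## ('dat', 'Y', 'X', 's', 'time' are hypothetical names)
mod <- FDboost(Y ~ bsignal(X, s = s),
               timeformula = ~ bbs(time),
               offset = "scalar",   # default offset = NULL gives a smooth offset
               data = dat)
```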
- new base-learner bhistx(), especially suited for effects with %X%, like bhistx() %X% bolsc()
- coef.FDboost() and plot.FDboost() are suited for effects like bhistx() %X% bolsc()
- for predict.FDboost() with bhistx() effects and newdata, the latest mboostPatch is necessary
- the check for the necessity of a smooth offset works for missing values in a regular response (spotted by Tore Erdmann)
- internal experimental version
- integrationWeights() gives equal weights for regular grids
- new base-learner bfpc() for a functional covariate where the functional covariate and the coefficient are both expanded using FPCA (experimental feature!); only works for a regularly observed functional covariate
- the function coef.FDboost() only works for bhist() if the time variable is the same in the timeformula and in bhist()
- predict.FDboost() checks that for newdata only type = "link" can be predicted
- change the default in difference penalties to a first-order difference penalty, differences = 1, as then the effects are better identifiable
- new method cvrisk.FDboost() that by default uses sampling on the level of curves, which is important for functional response
- reorganize documentation of cvrisk() and validateFDboost()
- in bhist() the effect can be standardized
- add a CITATION file
- use mboost 2.4-2, as it exports all important functions
- the 'main' argument is always passed in plot.FDboost()
- bhist() and bconcurrent() work for equal time and s
- predict.FDboost() works with tensor-product base-learners bl1 %X% bl2